Commit d99391ec authored by Linus Torvalds

Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
 "The RCU changes in this cycle were:
   - Expedited grace-period updates
   - kfree_rcu() updates
   - RCU list updates
   - Preemptible RCU updates
   - Torture-test updates
   - Miscellaneous fixes
   - Documentation updates"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (69 commits)
  rcu: Remove unused stop-machine #include
  powerpc: Remove comment about read_barrier_depends()
  .mailmap: Add entries for old paulmck@kernel.org addresses
  srcu: Apply *_ONCE() to ->srcu_last_gp_end
  rcu: Switch force_qs_rnp() to for_each_leaf_node_cpu_mask()
  rcu: Move rcu_{expedited,normal} definitions into rcupdate.h
  rcu: Move gp_state_names[] and gp_state_getname() to tree_stall.h
  rcu: Remove the declaration of call_rcu() in tree.h
  rcu: Fix tracepoint tracking RCU CPU kthread utilization
  rcu: Fix harmless omission of "CONFIG_" from #if condition
  rcu: Avoid tick_dep_set_cpu() misordering
  rcu: Provide wrappers for uses of ->rcu_read_lock_nesting
  rcu: Use READ_ONCE() for ->expmask in rcu_read_unlock_special()
  rcu: Clear ->rcu_read_unlock_special only once
  rcu: Clear .exp_hint only when deferred quiescent state has been reported
  rcu: Rename some instance of CONFIG_PREEMPTION to CONFIG_PREEMPT_RCU
  rcu: Remove kfree_call_rcu_nobatch()
  rcu: Remove kfree_rcu() special casing and lazy-callback handling
  rcu: Add support for debug_objects debugging for kfree_rcu()
  rcu: Add multiple in-flight batches of kfree_rcu() work
  ...
parents 8b561778 f8a4bb6b
@@ -210,6 +210,10 @@ Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Patrick Mochel <mochel@digitalimplant.org>
Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.ibm.com>
Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.vnet.ibm.com>
Paul E. McKenney <paulmck@kernel.org> <paul.mckenney@linaro.org>
Paul E. McKenney <paulmck@kernel.org> <paulmck@us.ibm.com>
Peter A Jonsson <pj@ludd.ltu.se>
Peter Oruba <peter@oruba.de>
Peter Oruba <peter.oruba@amd.com>
...
.. _NMI_rcu_doc:
Using RCU to Protect Dynamic NMI Handlers
=========================================
Although RCU is usually used to protect read-mostly data structures,
@@ -9,7 +12,7 @@ work in "arch/x86/oprofile/nmi_timer_int.c" and in
"arch/x86/kernel/traps.c".
The relevant pieces of code are listed below, each followed by a
brief explanation::
static int dummy_nmi_callback(struct pt_regs *regs, int cpu)
{
@@ -18,12 +21,12 @@ brief explanation.
The dummy_nmi_callback() function is a "dummy" NMI handler that does
nothing, but returns zero, thus saying that it did nothing, allowing
the NMI handler to take the default machine-specific action::
static nmi_callback_t nmi_callback = dummy_nmi_callback;
This nmi_callback variable is a global function pointer to the current
NMI handler::
void do_nmi(struct pt_regs * regs, long error_code)
{
@@ -53,11 +56,12 @@ anyway. However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.
Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?
:ref:`Answer to Quick Quiz <answer_quick_quiz_NMI>`
Back to the discussion of NMI and RCU::
void set_nmi_callback(nmi_callback_t callback)
{
@@ -68,7 +72,7 @@ The set_nmi_callback() function registers an NMI handler. Note that any
data that is to be used by the callback must be initialized up -before-
the call to set_nmi_callback(). On architectures that do not order
writes, the rcu_assign_pointer() ensures that the NMI handler sees the
initialized values::
void unset_nmi_callback(void)
{
@@ -82,7 +86,7 @@ up any data structures used by the old NMI handler until execution
of it completes on all other CPUs.
One way to accomplish this is via synchronize_rcu(), perhaps as
follows::
unset_nmi_callback();
synchronize_rcu();
@@ -98,24 +102,23 @@ to free up the handler's data as soon as synchronize_rcu() returns.
Important note: for this to work, the architecture in question must
invoke nmi_enter() and nmi_exit() on NMI entry and exit, respectively.
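
For example, a minimal teardown sketch along the lines described above (the structure and variable names here are invented for illustration, not taken from the kernel sources) might be::

    static struct my_nmi_data *my_data;   /* data used by the NMI handler */

    static void teardown_my_nmi_handler(void)
    {
            unset_nmi_callback();   /* later NMIs fall back to the dummy handler */
            synchronize_rcu();      /* wait for in-flight NMI handlers to finish */
            kfree(my_data);         /* now safe: no NMI can still reference it */
            my_data = NULL;
    }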
.. _answer_quick_quiz_NMI:
Answer to Quick Quiz:
Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only?
The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.
This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.
More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.
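
In other words, the publish/subscribe pairing sketched below (a simplified illustration of the document's example, with default_do_nmi() standing in for the machine-specific default action) is what orders the handler-data initialization against the pointer update::

    void set_nmi_callback(nmi_callback_t callback)
    {
            rcu_assign_pointer(nmi_callback, callback);  /* publish after data init */
    }

    void do_nmi(struct pt_regs *regs, long error_code)
    {
            nmi_enter();
            if (!rcu_dereference_sched(nmi_callback)(regs, smp_processor_id()))
                    default_do_nmi(regs);   /* hypothetical default action */
            nmi_exit();
    }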
.. _array_rcu_doc:
Using RCU to Protect Read-Mostly Arrays
=======================================
Although RCU is more commonly used to protect linked lists, it can
also be used to protect arrays. Three situations are as follows:
1. :ref:`Hash Tables <hash_tables>`
2. :ref:`Static Arrays <static_arrays>`
3. :ref:`Resizable Arrays <resizable_arrays>`
Each of these three situations involves an RCU-protected pointer to an
array that is separately indexed. It might be tempting to consider use
of RCU to instead protect the index into an array, however, this use
case is **not** supported. The problem with RCU-protected indexes into
arrays is that compilers can play way too many optimization games with
integers, which means that the rules governing handling of these indexes
are far more trouble than they are worth. If RCU-protected indexes into
@@ -24,16 +26,20 @@ to be safely used.
That aside, each of the three RCU-protected pointer situations are
described in the following sections.
.. _hash_tables:
Situation 1: Hash Tables
------------------------
Hash tables are often implemented as an array, where each array entry
has a linked-list hash chain. Each hash chain can be protected by RCU
as described in the listRCU.txt document. This approach also applies
to other array-of-list situations, such as radix trees.
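
As a rough sketch (the struct foo table below is a made-up example, not code from listRCU.txt), a reader can walk one RCU-protected hash chain as follows::

    struct foo {
            int key;
            struct hlist_node node;
            struct rcu_head rcu;
    };

    #define FOO_HASH_BITS 7
    static struct hlist_head foo_table[1 << FOO_HASH_BITS];

    static struct foo *foo_lookup(int key)   /* caller holds rcu_read_lock() */
    {
            struct foo *p;

            hlist_for_each_entry_rcu(p, &foo_table[hash_32(key, FOO_HASH_BITS)], node)
                    if (p->key == key)
                            return p;   /* valid only within the read-side critical section */
            return NULL;
    }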
.. _static_arrays:
Situation 2: Static Arrays
--------------------------
Static arrays, where the data (rather than a pointer to the data) is
located in each array element, and where the array is never resized,
@@ -41,13 +47,17 @@ have not been used with RCU. Rik van Riel recommends using seqlock in
this situation, which would also have minimal read-side overhead as long
as updates are rare.
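
A minimal read-side sketch of that approach (my_seqlock and my_array are assumed names) would be::

    static DEFINE_SEQLOCK(my_seqlock);
    static int my_array[16];

    static int my_read(int i)
    {
            unsigned int seq;
            int val;

            do {                                    /* retry if an update raced */
                    seq = read_seqbegin(&my_seqlock);
                    val = my_array[i];
            } while (read_seqretry(&my_seqlock, seq));
            return val;
    }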
Quick Quiz:
Why is it so important that updates be rare when using seqlock?
:ref:`Answer to Quick Quiz <answer_quick_quiz_seqlock>`
.. _resizable_arrays:
Situation 3: Resizable Arrays
------------------------------
Use of RCU for resizable arrays is demonstrated by the grow_ary()
function formerly used by the System V IPC code. The array is used
to map from semaphore, message-queue, and shared-memory IDs to the data
structure that represents the corresponding IPC construct. The grow_ary()
@@ -60,7 +70,7 @@ the remainder of the new, updates the ids->entries pointer to point to
the new array, and invokes ipc_rcu_putref() to free up the old array.
Note that rcu_assign_pointer() is used to update the ids->entries pointer,
which includes any memory barriers required on whatever architecture
you are running on::
static int grow_ary(struct ipc_ids* ids, int newsize)
{
@@ -112,7 +122,7 @@ a simple check suffices. The pointer to the structure corresponding
to the desired IPC object is placed in "out", with NULL indicating
a non-existent entry. After acquiring "out->lock", the "out->deleted"
flag indicates whether the IPC object is in the process of being
deleted, and, if not, the pointer is returned::
struct kern_ipc_perm* ipc_lock(struct ipc_ids* ids, int id)
{
@@ -144,8 +154,10 @@ deleted, and, if not, the pointer is returned.
return out;
}
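
Stripped of the IPC specifics (the names below are invented, and the real grow_ary()/ipc_lock() bodies are elided above), the pattern is an RCU-protected pointer to the current array on the read side and a copy-then-publish sequence on the update side::

    struct my_entries {
            int nr;
            void *slots[];
    };

    static struct my_entries __rcu *my_entries;
    static DEFINE_SPINLOCK(my_lock);

    static void *my_lookup(int id)
    {
            struct my_entries *e;
            void *p = NULL;

            rcu_read_lock();
            e = rcu_dereference(my_entries);
            if (e && id < e->nr)
                    p = e->slots[id];
            /* real code takes a reference or lock before dropping rcu_read_lock() */
            rcu_read_unlock();
            return p;
    }

    static int my_grow(int newsize)
    {
            struct my_entries *new, *old;

            new = kzalloc(struct_size(new, slots, newsize), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;
            spin_lock(&my_lock);
            old = rcu_dereference_protected(my_entries, lockdep_is_held(&my_lock));
            if (old)
                    memcpy(new->slots, old->slots, old->nr * sizeof(old->slots[0]));
            new->nr = newsize;
            rcu_assign_pointer(my_entries, new);    /* publish the new array */
            spin_unlock(&my_lock);
            if (old) {
                    synchronize_rcu();              /* wait for readers of the old array */
                    kfree(old);
            }
            return 0;
    }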
.. _answer_quick_quiz_seqlock:
Answer to Quick Quiz:
Why is it so important that updates be rare when using seqlock?
The reason that it is important that updates be rare when
using seqlock is that frequent updates can livelock readers.
...
@@ -7,8 +7,13 @@ RCU concepts
.. toctree::
:maxdepth: 3
arrayRCU
rcubarrier
rcu_dereference
whatisRCU
rcu
listRCU
NMI-RCU
UP
Design/Memory-Ordering/Tree-RCU-Memory-Ordering
...
@@ -99,7 +99,7 @@ With this change, the rcu_dereference() is always within an RCU
read-side critical section, which again would have suppressed the
above lockdep-RCU splat.
But in this particular case, we don't actually dereference the pointer
returned from rcu_dereference(). Instead, that pointer is just compared
to the cic pointer, which means that the rcu_dereference() can be replaced
by rcu_access_pointer() as follows:
...
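
The replacement hunk itself is elided above; the general shape of such a comparison-only use (the field and variable names here are assumed for illustration) is:

    if (rcu_access_pointer(ioc->ioc_data) == cic)
            /* Only the pointer value is compared, never dereferenced, so no
               RCU read-side critical section is required for the comparison;
               the invalidation still runs under the appropriate update-side lock. */
            rcu_assign_pointer(ioc->ioc_data, NULL);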
@@ -225,18 +225,13 @@ an estimate of the total number of RCU callbacks queued across all CPUs
In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
for each CPU:
0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1
The "last_accelerate:" prints the low-order 16 bits (in hex) of the
jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
rcu_prepare_for_idle(). "dyntick_enabled: 1" indicates that dyntick-idle
processing is enabled.
If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
...
@@ -4001,6 +4001,19 @@
test until boot completes in order to avoid
interference.
rcuperf.kfree_rcu_test= [KNL]
Set to measure performance of kfree_rcu() flooding.
rcuperf.kfree_nthreads= [KNL]
The number of threads running loops of kfree_rcu().
rcuperf.kfree_alloc_num= [KNL]
Number of allocations and frees done in an iteration.
rcuperf.kfree_loops= [KNL]
Number of loops doing rcuperf.kfree_alloc_num number
of allocations and frees.
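For instance (illustrative values only, not defaults), the kfree_rcu()
flood test could be selected with a boot line such as:
rcuperf.kfree_rcu_test=1 rcuperf.kfree_nthreads=16 rcuperf.kfree_alloc_num=8000 rcuperf.kfree_loops=10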
rcuperf.nreaders= [KNL]
Set number of RCU readers. The value -1 selects
N, where N is the number of CPUs. A value
...
@@ -18,8 +18,6 @@
* mb() prevents loads and stores being reordered across this point.
* rmb() prevents loads being reordered across this point.
* wmb() prevents stores being reordered across this point.
* read_barrier_depends() prevents data-dependent loads being reordered
* across this point (nop on PPC).
*
* *mb() variants without smp_ prefix must order all types of memory
* operations with one another. sync is the only instruction sufficient
...
@@ -281,8 +281,8 @@ void mt76_rx_aggr_stop(struct mt76_dev *dev, struct mt76_wcid *wcid, u8 tidno)
{
struct mt76_rx_tid *tid = NULL;
tid = rcu_replace_pointer(wcid->aggr[tidno], tid,
lockdep_is_held(&dev->mutex));
if (tid) {
mt76_rx_aggr_shutdown(dev, tid);
kfree_rcu(tid, rcu_head);
...
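
The mt76 hunk above is one of the mechanical conversions from the removed rcu_swap_protected() to the new rcu_replace_pointer(); in general terms (head->ptr and head->lock are assumed names) the change looks like this:

    /* Before: swap an RCU-protected pointer and a local pointer in place. */
    rcu_swap_protected(head->ptr, local, lockdep_is_held(&head->lock));

    /* After: rcu_replace_pointer() publishes the new value and returns the old one. */
    local = rcu_replace_pointer(head->ptr, local, lockdep_is_held(&head->lock));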
@@ -23,6 +23,13 @@
#define LIST_HEAD(name) \
struct list_head name = LIST_HEAD_INIT(name)
/**
* INIT_LIST_HEAD - Initialize a list_head structure
* @list: list_head structure to be initialized.
*
* Initializes the list_head to point to itself. If it is a list header,
* the result is an empty list.
*/
static inline void INIT_LIST_HEAD(struct list_head *list)
{
WRITE_ONCE(list->next, list);
@@ -120,12 +127,6 @@ static inline void __list_del_clearprev(struct list_head *entry)
entry->prev = NULL;
}
/**
* list_del - deletes entry from list.
* @entry: the element to delete from the list.
* Note: list_empty() on entry does not return true after this, the entry is
* in an undefined state.
*/
static inline void __list_del_entry(struct list_head *entry)
{
if (!__list_del_entry_valid(entry))
@@ -134,6 +135,12 @@ static inline void __list_del_entry(struct list_head *entry)
__list_del(entry->prev, entry->next);
}
/**
* list_del - deletes entry from list.
* @entry: the element to delete from the list.
* Note: list_empty() on entry does not return true after this, the entry is
* in an undefined state.
*/
static inline void list_del(struct list_head *entry)
{
__list_del_entry(entry);
@@ -157,8 +164,15 @@ static inline void list_replace(struct list_head *old,
new->prev->next = new;
}
/**
* list_replace_init - replace old entry by new one and initialize the old one
* @old : the element to be replaced
* @new : the new element to insert
*
* If @old was empty, it will be overwritten.
*/
static inline void list_replace_init(struct list_head *old,
struct list_head *new)
{
list_replace(old, new);
INIT_LIST_HEAD(old);
@@ -754,11 +768,36 @@ static inline void INIT_HLIST_NODE(struct hlist_node *h)
h->pprev = NULL;
}
/**
* hlist_unhashed - Has node been removed from list and reinitialized?
* @h: Node to be checked
*
* Note that not all removal functions will leave a node in unhashed
* state. For example, hlist_nulls_del_init_rcu() does leave the
* node in unhashed state, but hlist_nulls_del() does not.
*/
static inline int hlist_unhashed(const struct hlist_node *h)
{
return !h->pprev;
}
/**
* hlist_unhashed_lockless - Version of hlist_unhashed for lockless use
* @h: Node to be checked
*
* This variant of hlist_unhashed() must be used in lockless contexts
* to avoid potential load-tearing. The READ_ONCE() is paired with the
* various WRITE_ONCE() in hlist helpers that are defined below.
*/
static inline int hlist_unhashed_lockless(const struct hlist_node *h)
{
return !READ_ONCE(h->pprev);
}
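
A minimal usage sketch for the new lockless variant (the surrounding caller is assumed, not taken from this patch) might be:

	/* May be called without holding the list lock; the READ_ONCE() in
	 * hlist_unhashed_lockless() pairs with the WRITE_ONCE() calls in the
	 * hlist add/del helpers below. */
	static bool my_node_is_idle(const struct hlist_node *n)
	{
		return hlist_unhashed_lockless(n);
	}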
/**
* hlist_empty - Is the specified hlist_head structure an empty hlist?
* @h: Structure to check.
*/
static inline int hlist_empty(const struct hlist_head *h)
{
return !READ_ONCE(h->first);
@@ -771,9 +810,16 @@ static inline void __hlist_del(struct hlist_node *n)
WRITE_ONCE(*pprev, next);
if (next)
WRITE_ONCE(next->pprev, pprev);
}
/**
* hlist_del - Delete the specified hlist_node from its list
* @n: Node to delete.
*
* Note that this function leaves the node in hashed state. Use
* hlist_del_init() or similar instead to unhash @n.
*/
static inline void hlist_del(struct hlist_node *n)
{
__hlist_del(n);
@@ -781,6 +827,12 @@ static inline void hlist_del(struct hlist_node *n)
n->pprev = LIST_POISON2;
}
/**
* hlist_del_init - Delete the specified hlist_node from its list and initialize
* @n: Node to delete.
*
* Note that this function leaves the node in unhashed state.
*/
static inline void hlist_del_init(struct hlist_node *n)
{
if (!hlist_unhashed(n)) {
@@ -789,51 +841,83 @@ static inline void hlist_del_init(struct hlist_node *n)
}
}
/**
* hlist_add_head - add a new entry at the beginning of the hlist
* @n: new entry to be added
* @h: hlist head to add it after
*
* Insert a new entry after the specified head.
* This is good for implementing stacks.
*/
static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
struct hlist_node *first = h->first;
WRITE_ONCE(n->next, first);
if (first)
WRITE_ONCE(first->pprev, &n->next);
WRITE_ONCE(h->first, n);
WRITE_ONCE(n->pprev, &h->first);
}
/**
* hlist_add_before - add a new entry before the one specified
* @n: new entry to be added
* @next: hlist node to add it before, which must be non-NULL
*/
static inline void hlist_add_before(struct hlist_node *n,
struct hlist_node *next)
{
WRITE_ONCE(n->pprev, next->pprev);
WRITE_ONCE(n->next, next);
WRITE_ONCE(next->pprev, &n->next);
WRITE_ONCE(*(n->pprev), n);
}
/**
* hlist_add_behind - add a new entry after the one specified
* @n: new entry to be added
* @prev: hlist node to add it after, which must be non-NULL
*/
static inline void hlist_add_behind(struct hlist_node *n,
struct hlist_node *prev)
{
WRITE_ONCE(n->next, prev->next);
WRITE_ONCE(prev->next, n);
WRITE_ONCE(n->pprev, &prev->next);
if (n->next)
WRITE_ONCE(n->next->pprev, &n->next);
}
/**
* hlist_add_fake - create a fake hlist consisting of a single headless node
* @n: Node to make a fake list out of
*
* This makes @n appear to be its own predecessor on a headless hlist.
* The point of this is to allow things like hlist_del() to work correctly
* in cases where there is no list.
*/
static inline void hlist_add_fake(struct hlist_node *n)
{
n->pprev = &n->next;
}
/**
* hlist_fake: Is this node a fake hlist?
* @h: Node to check for being a self-referential fake hlist.
*/
static inline bool hlist_fake(struct hlist_node *h)
{
return h->pprev == &h->next;
}
/**
* hlist_is_singular_node - is node the only element of the specified hlist?
* @n: Node to check for singularity.
* @h: Header for potentially singular list.
*
* Check whether the node is the only node of the head without
* accessing head, thus avoiding unnecessary cache misses.
*/
static inline bool
hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
@@ -841,7 +925,11 @@ hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
return !n->next && n->pprev == &h->first;
}
/**
* hlist_move_list - Move an hlist
* @old: hlist_head for old list.
* @new: hlist_head for new list.
*
* Move a list from one list head to another. Fixup the pprev
* reference of the first entry if it exists.
*/
...
@@ -56,11 +56,33 @@ static inline unsigned long get_nulls_value(const struct hlist_nulls_node *ptr)
return ((unsigned long)ptr) >> 1;
}
/**
* hlist_nulls_unhashed - Has node been removed and reinitialized?
* @h: Node to be checked
*
* Note that not all removal functions will leave a node in unhashed state.
* For example, hlist_del_init_rcu() leaves the node in unhashed state,
* but hlist_nulls_del() does not.
*/
static inline int hlist_nulls_unhashed(const struct hlist_nulls_node *h)
{
return !h->pprev;
}
/**
* hlist_nulls_unhashed_lockless - Has node been removed and reinitialized?
* @h: Node to be checked
*
* Note that not all removal functions will leave a node in unhashed state.
* For example, hlist_del_init_rcu() leaves the node in unhashed state,
* but hlist_nulls_del() does not. Unlike hlist_nulls_unhashed(), this
* function may be used locklessly.
*/
static inline int hlist_nulls_unhashed_lockless(const struct hlist_nulls_node *h)
{
return !READ_ONCE(h->pprev);
}
static inline int hlist_nulls_empty(const struct hlist_nulls_head *h)
{
return is_a_nulls(READ_ONCE(h->first));
@@ -72,10 +94,10 @@ static inline void hlist_nulls_add_head(struct hlist_nulls_node *n,
struct hlist_nulls_node *first = h->first;
n->next = first;
WRITE_ONCE(n->pprev, &h->first);
h->first = n;
if (!is_a_nulls(first))
WRITE_ONCE(first->pprev, &n->next);
}
static inline void __hlist_nulls_del(struct hlist_nulls_node *n)
@@ -85,13 +107,13 @@ static inline void __hlist_nulls_del(struct hlist_nulls_node *n)
WRITE_ONCE(*pprev, next);
if (!is_a_nulls(next))
WRITE_ONCE(next->pprev, pprev);
}
static inline void hlist_nulls_del(struct hlist_nulls_node *n)
{
__hlist_nulls_del(n);
WRITE_ONCE(n->pprev, LIST_POISON2);
}
/**
...
@@ -22,7 +22,6 @@ struct rcu_cblist {
struct rcu_head *head;
struct rcu_head **tail;
long len;
long len_lazy;
};
#define RCU_CBLIST_INITIALIZER(n) { .head = NULL, .tail = &n.head }
@@ -73,7 +72,6 @@ struct rcu_segcblist {
#else
long len;
#endif
long len_lazy;
u8 enabled;
u8 offloaded;
};
...
@@ -40,6 +40,16 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
*/
#define list_next_rcu(list) (*((struct list_head __rcu **)(&(list)->next)))
/**
* list_tail_rcu - returns the prev pointer of the head of the list
* @head: the head of the list
*
* Note: This should only be used with the list header, and even then
* only if list_del() and similar primitives are not also used on the
* list header.
*/
#define list_tail_rcu(head) (*((struct list_head __rcu **)(&(head)->prev)))
/*
* Check during list traversal that we are within an RCU reader
*/
@@ -173,7 +183,7 @@ static inline void hlist_del_init_rcu(struct hlist_node *n)
{
if (!hlist_unhashed(n)) {
__hlist_del(n);
WRITE_ONCE(n->pprev, NULL);
}
}
@@ -361,7 +371,7 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the list_head within the struct.
* @cond...: optional lockdep expression if called from non-RCU protection.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as list_add_rcu()
@@ -473,7 +483,7 @@ static inline void list_splice_tail_init_rcu(struct list_head *list,
static inline void hlist_del_rcu(struct hlist_node *n)
{
__hlist_del(n);
WRITE_ONCE(n->pprev, LIST_POISON2);
}
/**
@@ -489,11 +499,11 @@ static inline void hlist_replace_rcu(struct hlist_node *old,
struct hlist_node *next = old->next;
new->next = next;
WRITE_ONCE(new->pprev, old->pprev);
rcu_assign_pointer(*(struct hlist_node __rcu **)new->pprev, new);
if (next)
WRITE_ONCE(new->next->pprev, &new->next);
WRITE_ONCE(old->pprev, LIST_POISON2);
}
/*
@@ -528,10 +538,10 @@ static inline void hlist_add_head_rcu(struct hlist_node *n,
struct hlist_node *first = h->first;
n->next = first;
WRITE_ONCE(n->pprev, &h->first);
rcu_assign_pointer(hlist_first_rcu(h), n);
if (first)
WRITE_ONCE(first->pprev, &n->next);
}
/**
@@ -564,7 +574,7 @@ static inline void hlist_add_tail_rcu(struct hlist_node *n,
if (last) {
n->next = last->next;
WRITE_ONCE(n->pprev, &last->next);
rcu_assign_pointer(hlist_next_rcu(last), n);
} else {
hlist_add_head_rcu(n, h);
@@ -592,10 +602,10 @@ static inline void hlist_add_tail_rcu(struct hlist_node *n,
static inline void hlist_add_before_rcu(struct hlist_node *n,
struct hlist_node *next)
{
WRITE_ONCE(n->pprev, next->pprev);
n->next = next;
rcu_assign_pointer(hlist_pprev_rcu(n), n);
WRITE_ONCE(next->pprev, &n->next);
}
/**
@@ -620,10 +630,10 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
struct hlist_node *prev)
{
n->next = prev->next;
WRITE_ONCE(n->pprev, &prev->next);
rcu_assign_pointer(hlist_next_rcu(prev), n);
if (n->next)
WRITE_ONCE(n->next->pprev, &n->next);
}
#define __hlist_for_each_rcu(pos, head) \
@@ -636,7 +646,7 @@ static inline void hlist_add_behind_rcu(struct hlist_node *n,
* @pos: the type * to use as a loop cursor.
* @head: the head for your list.
* @member: the name of the hlist_node within the struct.
* @cond...: optional lockdep expression if called from non-RCU protection.
*
* This list-traversal primitive may safely run concurrently with
* the _rcu list-mutation primitives such as hlist_add_head_rcu()
...
@@ -34,13 +34,21 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
{
if (!hlist_nulls_unhashed(n)) {
__hlist_nulls_del(n);
WRITE_ONCE(n->pprev, NULL);
}
}
/**
* hlist_nulls_first_rcu - returns the first element of the hash list.
* @head: the head of the list.
*/
#define hlist_nulls_first_rcu(head) \
(*((struct hlist_nulls_node __rcu __force **)&(head)->first))
/**
* hlist_nulls_next_rcu - returns the element of the list after @node.
* @node: element of the list.
*/
#define hlist_nulls_next_rcu(node) \
(*((struct hlist_nulls_node __rcu __force **)&(node)->next))
@@ -66,7 +74,7 @@ static inline void hlist_nulls_del_init_rcu(struct hlist_nulls_node *n)
static inline void hlist_nulls_del_rcu(struct hlist_nulls_node *n)
{
__hlist_nulls_del(n);
WRITE_ONCE(n->pprev, LIST_POISON2);
}
/**
@@ -94,10 +102,10 @@ static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
struct hlist_nulls_node *first = h->first;
n->next = first;
WRITE_ONCE(n->pprev, &h->first);
rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
if (!is_a_nulls(first))
WRITE_ONCE(first->pprev, &n->next);
}
/**
@@ -141,7 +149,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
* hlist_nulls_for_each_entry_rcu - iterate over rcu list of given type
* @tpos: the type * to use as a loop cursor.
* @pos: the &struct hlist_nulls_node to use as a loop cursor.
* @head: the head of the list.
* @member: the name of the hlist_nulls_node within the struct.
*
* The barrier() is needed to make sure compiler doesn't cache first element [1],
@@ -161,7 +169,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
* iterate over list of given type safe against removal of list entry
* @tpos: the type * to use as a loop cursor.
* @pos: the &struct hlist_nulls_node to use as a loop cursor.
* @head: the head of the list.
* @member: the name of the hlist_nulls_node within the struct.
*/
#define hlist_nulls_for_each_entry_safe(tpos, pos, head, member) \
...
@@ -154,7 +154,7 @@ static inline void exit_tasks_rcu_finish(void) { }
*
* This macro resembles cond_resched(), except that it is defined to
* report potential quiescent states to RCU-tasks even if the cond_resched()
* machinery were to be shut off, as some advocate for PREEMPTION kernels.
*/
#define cond_resched_tasks_rcu_qs() \
do { \
@@ -167,7 +167,7 @@ do { \
* TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
*/
#if defined(CONFIG_TREE_RCU)
#include <linux/rcutree.h>
#elif defined(CONFIG_TINY_RCU)
#include <linux/rcutiny.h>
@@ -400,22 +400,6 @@ do { \
__tmp; \
})
/**
* rcu_swap_protected() - swap an RCU and a regular pointer
* @rcu_ptr: RCU pointer
* @ptr: regular pointer
* @c: the conditions under which the dereference will take place
*
* Perform swap(@rcu_ptr, @ptr) where @rcu_ptr is an RCU-annotated pointer and
* @c is the argument that is passed to the rcu_dereference_protected() call
* used to read that pointer.
*/
#define rcu_swap_protected(rcu_ptr, ptr, c) do { \
typeof(ptr) __tmp = rcu_dereference_protected((rcu_ptr), (c)); \
rcu_assign_pointer((rcu_ptr), (ptr)); \
(ptr) = __tmp; \
} while (0)
/**
* rcu_access_pointer() - fetch RCU pointer with no dereferencing
* @p: The pointer to read
@@ -598,10 +582,10 @@ do { \
*
* You can avoid reading and understanding the next paragraph by
* following this rule: don't put anything in an rcu_read_lock() RCU
* read-side critical section that would block in a !PREEMPTION kernel.
* But if you want the full story, read on!
*
* In non-preemptible RCU implementations (pure TREE_RCU and TINY_RCU),
* it is illegal to block while in an RCU read-side critical section.
* In preemptible RCU implementations (PREEMPT_RCU) in CONFIG_PREEMPTION
* kernel builds, RCU read-side critical sections may be preempted,
@@ -912,4 +896,8 @@ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
return false;
}
/* kernel/ksysfs.c definitions */
extern int rcu_expedited;
extern int rcu_normal;
#endif /* __LINUX_RCUPDATE_H */
@@ -85,6 +85,7 @@ static inline void rcu_scheduler_starting(void) { }
static inline void rcu_end_inkernel_boot(void) { }
static inline bool rcu_is_watching(void) { return true; }
static inline void rcu_momentary_dyntick_idle(void) { }
static inline void kfree_rcu_scheduler_running(void) { }
/* Avoid RCU read-side critical sections leaking across. */
static inline void rcu_all_qs(void) { barrier(); }
...
@@ -38,6 +38,7 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func);
void rcu_barrier(void);
bool rcu_eqs_special_set(int cpu);
void rcu_momentary_dyntick_idle(void);
void kfree_rcu_scheduler_running(void);
unsigned long get_state_synchronize_rcu(void);
void cond_synchronize_rcu(unsigned long oldstate);
...
@@ -109,8 +109,10 @@ enum tick_dep_bits {
TICK_DEP_BIT_PERF_EVENTS = 1,
TICK_DEP_BIT_SCHED = 2,
TICK_DEP_BIT_CLOCK_UNSTABLE = 3,
TICK_DEP_BIT_RCU = 4,
TICK_DEP_BIT_RCU_EXP = 5
};
#define TICK_DEP_BIT_MAX TICK_DEP_BIT_RCU_EXP
#define TICK_DEP_MASK_NONE 0
#define TICK_DEP_MASK_POSIX_TIMER (1 << TICK_DEP_BIT_POSIX_TIMER)
@@ -118,6 +120,7 @@ enum tick_dep_bits {
#define TICK_DEP_MASK_SCHED (1 << TICK_DEP_BIT_SCHED)
#define TICK_DEP_MASK_CLOCK_UNSTABLE (1 << TICK_DEP_BIT_CLOCK_UNSTABLE)
#define TICK_DEP_MASK_RCU (1 << TICK_DEP_BIT_RCU)
#define TICK_DEP_MASK_RCU_EXP (1 << TICK_DEP_BIT_RCU_EXP)
#ifdef CONFIG_NO_HZ_COMMON
extern bool tick_nohz_enabled;
...
@@ -41,7 +41,7 @@ TRACE_EVENT(rcu_utilization,
TP_printk("%s", __entry->s)
);
#if defined(CONFIG_TREE_RCU)
/*
* Tracepoint for grace-period events. Takes a string identifying the
@@ -432,7 +432,7 @@ TRACE_EVENT_RCU(rcu_fqs,
__entry->cpu, __entry->qsevent)
);
#endif /* #if defined(CONFIG_TREE_RCU) */
/*
* Tracepoint for dyntick-idle entry/exit events. These take a string
@@ -449,7 +449,7 @@ TRACE_EVENT_RCU(rcu_fqs,
*/
TRACE_EVENT_RCU(rcu_dyntick,
TP_PROTO(const char *polarity, long oldnesting, long newnesting, int dynticks),
TP_ARGS(polarity, oldnesting, newnesting, dynticks),
@@ -464,7 +464,7 @@ TRACE_EVENT_RCU(rcu_dyntick,
__entry->polarity = polarity;
__entry->oldnesting = oldnesting;
__entry->newnesting = newnesting;
__entry->dynticks = dynticks;
),
TP_printk("%s %lx %lx %#3x", __entry->polarity,
@@ -481,16 +481,14 @@ TRACE_EVENT_RCU(rcu_dyntick,
*/
TRACE_EVENT_RCU(rcu_callback,
TP_PROTO(const char *rcuname, struct rcu_head *rhp, long qlen),
TP_ARGS(rcuname, rhp, qlen),
TP_STRUCT__entry(
__field(const char *, rcuname)
__field(void *, rhp)
__field(void *, func)
__field(long, qlen)
),
@@ -498,13 +496,12 @@ TRACE_EVENT_RCU(rcu_callback,
__entry->rcuname = rcuname;
__entry->rhp = rhp;
__entry->func = rhp->func;
__entry->qlen = qlen;
),
TP_printk("%s rhp=%p func=%ps %ld",
__entry->rcuname, __entry->rhp, __entry->func,
__entry->qlen)
);
/*
@@ -518,15 +515,14 @@ TRACE_EVENT_RCU(rcu_callback,
TRACE_EVENT_RCU(rcu_kfree_callback,
TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset,
long qlen),
TP_ARGS(rcuname, rhp, offset, qlen),
TP_STRUCT__entry(
__field(const char *, rcuname)
__field(void *, rhp)
__field(unsigned long, offset)
__field(long, qlen)
),
@@ -534,13 +530,12 @@ TRACE_EVENT_RCU(rcu_kfree_callback,
__entry->rcuname = rcuname;
__entry->rhp = rhp;
__entry->offset = offset;
__entry->qlen = qlen;
),
TP_printk("%s rhp=%p func=%ld %ld",
__entry->rcuname, __entry->rhp, __entry->offset,
__entry->qlen)
);
/*
@@ -552,27 +547,24 @@ TRACE_EVENT_RCU(rcu_kfree_callback,
*/
TRACE_EVENT_RCU(rcu_batch_start,
TP_PROTO(const char *rcuname, long qlen, long blimit),
TP_ARGS(rcuname, qlen, blimit),
TP_STRUCT__entry(
__field(const char *, rcuname)
__field(long, qlen)
__field(long, blimit)
),
TP_fast_assign(
__entry->rcuname = rcuname;
__entry->qlen = qlen;
__entry->blimit = blimit;
),
TP_printk("%s CBs=%ld bl=%ld",
__entry->rcuname, __entry->qlen, __entry->blimit)
);
/*
...
@@ -7,7 +7,7 @@ menu "RCU Subsystem"
config TREE_RCU
bool
default y if SMP
help
This option selects the RCU implementation that is
designed for very large SMP system with hundreds or
@@ -17,6 +17,7 @@ config TREE_RCU
config PREEMPT_RCU
bool
default y if PREEMPTION
select TREE_RCU
help
This option selects the RCU implementation that is
designed for very large SMP systems with hundreds or
@@ -78,7 +79,7 @@ config TASKS_RCU
user-mode execution as quiescent states.
config RCU_STALL_COMMON
def_bool TREE_RCU
help
This option enables RCU CPU stall code that is common between
the TINY and TREE variants of RCU. The purpose is to allow
@@ -86,13 +87,13 @@ config RCU_STALL_COMMON
making these warnings mandatory for the tree variants.
config RCU_NEED_SEGCBLIST
def_bool ( TREE_RCU || TREE_SRCU )
config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value"
range 2 64 if 64BIT
range 2 32 if !64BIT
depends on TREE_RCU && RCU_EXPERT
default 64 if 64BIT
default 32 if !64BIT
help
@@ -112,7 +113,7 @@ config RCU_FANOUT_LEAF
int "Tree-based hierarchical RCU leaf-level fanout value"
range 2 64 if 64BIT
range 2 32 if !64BIT
depends on TREE_RCU && RCU_EXPERT
default 16
help
This option controls the leaf-level fanout of hierarchical
@@ -187,7 +188,7 @@ config RCU_BOOST_DELAY
config RCU_NOCB_CPU
bool "Offload RCU callback processing from boot-selected CPUs"
depends on TREE_RCU
depends on RCU_EXPERT || NO_HZ_FULL
default n
help
...@@ -200,8 +201,8 @@ config RCU_NOCB_CPU ...@@ -200,8 +201,8 @@ config RCU_NOCB_CPU
specified at boot time by the rcu_nocbs parameter. For each specified at boot time by the rcu_nocbs parameter. For each
such CPU, a kthread ("rcuox/N") will be created to invoke such CPU, a kthread ("rcuox/N") will be created to invoke
callbacks, where the "N" is the CPU being offloaded, and where callbacks, where the "N" is the CPU being offloaded, and where
the "p" for RCU-preempt (PREEMPT kernels) and "s" for RCU-sched the "p" for RCU-preempt (PREEMPTION kernels) and "s" for RCU-sched
(!PREEMPT kernels). Nothing prevents this kthread from running (!PREEMPTION kernels). Nothing prevents this kthread from running
on the specified CPUs, but (1) the kthreads may be preempted on the specified CPUs, but (1) the kthreads may be preempted
between each callback, and (2) affinity or cgroups can be used between each callback, and (2) affinity or cgroups can be used
to force the kthreads to run on whatever set of CPUs is desired. to force the kthreads to run on whatever set of CPUs is desired.
......
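As the RCU_NOCB_CPU help text above notes, offloading is requested with the rcu_nocbs= boot parameter. As a hedged example (not part of this diff), booting a preemptible kernel with "rcu_nocbs=1-7" would offload callback invocation for CPUs 1-7 to kthreads named rcuop/1 through rcuop/7, which the administrator can then pin or cgroup-confine exactly as the help text describes.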
...@@ -9,6 +9,5 @@ obj-$(CONFIG_TINY_SRCU) += srcutiny.o ...@@ -9,6 +9,5 @@ obj-$(CONFIG_TINY_SRCU) += srcutiny.o
obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
obj-$(CONFIG_RCU_PERF_TEST) += rcuperf.o obj-$(CONFIG_RCU_PERF_TEST) += rcuperf.o
obj-$(CONFIG_TREE_RCU) += tree.o obj-$(CONFIG_TREE_RCU) += tree.o
obj-$(CONFIG_PREEMPT_RCU) += tree.o
obj-$(CONFIG_TINY_RCU) += tiny.o obj-$(CONFIG_TINY_RCU) += tiny.o
obj-$(CONFIG_RCU_NEED_SEGCBLIST) += rcu_segcblist.o obj-$(CONFIG_RCU_NEED_SEGCBLIST) += rcu_segcblist.o
...@@ -198,33 +198,6 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head) ...@@ -198,33 +198,6 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
} }
#endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */ #endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
void kfree(const void *);
/*
* Reclaim the specified callback, either by invoking it (non-lazy case)
* or freeing it directly (lazy case). Return true if lazy, false otherwise.
*/
static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
{
rcu_callback_t f;
unsigned long offset = (unsigned long)head->func;
rcu_lock_acquire(&rcu_callback_map);
if (__is_kfree_rcu_offset(offset)) {
trace_rcu_invoke_kfree_callback(rn, head, offset);
kfree((void *)head - offset);
rcu_lock_release(&rcu_callback_map);
return true;
} else {
trace_rcu_invoke_callback(rn, head);
f = head->func;
WRITE_ONCE(head->func, (rcu_callback_t)0L);
f(head);
rcu_lock_release(&rcu_callback_map);
return false;
}
}
#ifdef CONFIG_RCU_STALL_COMMON #ifdef CONFIG_RCU_STALL_COMMON
extern int rcu_cpu_stall_ftrace_dump; extern int rcu_cpu_stall_ftrace_dump;
...@@ -281,7 +254,7 @@ void rcu_test_sync_prims(void); ...@@ -281,7 +254,7 @@ void rcu_test_sync_prims(void);
*/ */
extern void resched_cpu(int cpu); extern void resched_cpu(int cpu);
#if defined(SRCU) || !defined(TINY_RCU) #if defined(CONFIG_SRCU) || !defined(CONFIG_TINY_RCU)
#include <linux/rcu_node_tree.h> #include <linux/rcu_node_tree.h>
...@@ -418,7 +391,7 @@ do { \ ...@@ -418,7 +391,7 @@ do { \
#define raw_lockdep_assert_held_rcu_node(p) \ #define raw_lockdep_assert_held_rcu_node(p) \
lockdep_assert_held(&ACCESS_PRIVATE(p, lock)) lockdep_assert_held(&ACCESS_PRIVATE(p, lock))
#endif /* #if defined(SRCU) || !defined(TINY_RCU) */ #endif /* #if defined(CONFIG_SRCU) || !defined(CONFIG_TINY_RCU) */
#ifdef CONFIG_SRCU #ifdef CONFIG_SRCU
void srcu_init(void); void srcu_init(void);
...@@ -454,7 +427,7 @@ enum rcutorture_type { ...@@ -454,7 +427,7 @@ enum rcutorture_type {
INVALID_RCU_FLAVOR INVALID_RCU_FLAVOR
}; };
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) #if defined(CONFIG_TREE_RCU)
void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags, void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
unsigned long *gp_seq); unsigned long *gp_seq);
void do_trace_rcu_torture_read(const char *rcutorturename, void do_trace_rcu_torture_read(const char *rcutorturename,
......
...@@ -20,14 +20,10 @@ void rcu_cblist_init(struct rcu_cblist *rclp) ...@@ -20,14 +20,10 @@ void rcu_cblist_init(struct rcu_cblist *rclp)
rclp->head = NULL; rclp->head = NULL;
rclp->tail = &rclp->head; rclp->tail = &rclp->head;
rclp->len = 0; rclp->len = 0;
rclp->len_lazy = 0;
} }
/* /*
* Enqueue an rcu_head structure onto the specified callback list. * Enqueue an rcu_head structure onto the specified callback list.
* This function assumes that the callback is non-lazy because it
* is intended for use by no-CBs CPUs, which do not distinguish
* between lazy and non-lazy RCU callbacks.
*/ */
void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp) void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp)
{ {
...@@ -54,7 +50,6 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, ...@@ -54,7 +50,6 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
else else
drclp->tail = &drclp->head; drclp->tail = &drclp->head;
drclp->len = srclp->len; drclp->len = srclp->len;
drclp->len_lazy = srclp->len_lazy;
if (!rhp) { if (!rhp) {
rcu_cblist_init(srclp); rcu_cblist_init(srclp);
} else { } else {
...@@ -62,16 +57,12 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, ...@@ -62,16 +57,12 @@ void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
srclp->head = rhp; srclp->head = rhp;
srclp->tail = &rhp->next; srclp->tail = &rhp->next;
WRITE_ONCE(srclp->len, 1); WRITE_ONCE(srclp->len, 1);
srclp->len_lazy = 0;
} }
} }
/* /*
* Dequeue the oldest rcu_head structure from the specified callback * Dequeue the oldest rcu_head structure from the specified callback
* list. This function assumes that the callback is non-lazy, but * list.
* the caller can later invoke rcu_cblist_dequeued_lazy() if it
* finds otherwise (and if it cares about laziness). This allows
* different users to have different ways of determining laziness.
*/ */
struct rcu_head *rcu_cblist_dequeue(struct rcu_cblist *rclp) struct rcu_head *rcu_cblist_dequeue(struct rcu_cblist *rclp)
{ {
...@@ -161,7 +152,6 @@ void rcu_segcblist_init(struct rcu_segcblist *rsclp) ...@@ -161,7 +152,6 @@ void rcu_segcblist_init(struct rcu_segcblist *rsclp)
for (i = 0; i < RCU_CBLIST_NSEGS; i++) for (i = 0; i < RCU_CBLIST_NSEGS; i++)
rsclp->tails[i] = &rsclp->head; rsclp->tails[i] = &rsclp->head;
rcu_segcblist_set_len(rsclp, 0); rcu_segcblist_set_len(rsclp, 0);
rsclp->len_lazy = 0;
rsclp->enabled = 1; rsclp->enabled = 1;
} }
...@@ -173,7 +163,6 @@ void rcu_segcblist_disable(struct rcu_segcblist *rsclp) ...@@ -173,7 +163,6 @@ void rcu_segcblist_disable(struct rcu_segcblist *rsclp)
{ {
WARN_ON_ONCE(!rcu_segcblist_empty(rsclp)); WARN_ON_ONCE(!rcu_segcblist_empty(rsclp));
WARN_ON_ONCE(rcu_segcblist_n_cbs(rsclp)); WARN_ON_ONCE(rcu_segcblist_n_cbs(rsclp));
WARN_ON_ONCE(rcu_segcblist_n_lazy_cbs(rsclp));
rsclp->enabled = 0; rsclp->enabled = 0;
} }
...@@ -253,11 +242,9 @@ bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp) ...@@ -253,11 +242,9 @@ bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp)
* absolutely not OK for it to ever miss posting a callback. * absolutely not OK for it to ever miss posting a callback.
*/ */
void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp, void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
struct rcu_head *rhp, bool lazy) struct rcu_head *rhp)
{ {
rcu_segcblist_inc_len(rsclp); rcu_segcblist_inc_len(rsclp);
if (lazy)
rsclp->len_lazy++;
smp_mb(); /* Ensure counts are updated before callback is enqueued. */ smp_mb(); /* Ensure counts are updated before callback is enqueued. */
rhp->next = NULL; rhp->next = NULL;
WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp); WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rhp);
...@@ -275,15 +262,13 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp, ...@@ -275,15 +262,13 @@ void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
* period. You have been warned. * period. You have been warned.
*/ */
bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp, bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
struct rcu_head *rhp, bool lazy) struct rcu_head *rhp)
{ {
int i; int i;
if (rcu_segcblist_n_cbs(rsclp) == 0) if (rcu_segcblist_n_cbs(rsclp) == 0)
return false; return false;
rcu_segcblist_inc_len(rsclp); rcu_segcblist_inc_len(rsclp);
if (lazy)
rsclp->len_lazy++;
smp_mb(); /* Ensure counts are updated before callback is entrained. */ smp_mb(); /* Ensure counts are updated before callback is entrained. */
rhp->next = NULL; rhp->next = NULL;
for (i = RCU_NEXT_TAIL; i > RCU_DONE_TAIL; i--) for (i = RCU_NEXT_TAIL; i > RCU_DONE_TAIL; i--)
...@@ -307,8 +292,6 @@ bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp, ...@@ -307,8 +292,6 @@ bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
void rcu_segcblist_extract_count(struct rcu_segcblist *rsclp, void rcu_segcblist_extract_count(struct rcu_segcblist *rsclp,
struct rcu_cblist *rclp) struct rcu_cblist *rclp)
{ {
rclp->len_lazy += rsclp->len_lazy;
rsclp->len_lazy = 0;
rclp->len = rcu_segcblist_xchg_len(rsclp, 0); rclp->len = rcu_segcblist_xchg_len(rsclp, 0);
} }
...@@ -361,9 +344,7 @@ void rcu_segcblist_extract_pend_cbs(struct rcu_segcblist *rsclp, ...@@ -361,9 +344,7 @@ void rcu_segcblist_extract_pend_cbs(struct rcu_segcblist *rsclp,
void rcu_segcblist_insert_count(struct rcu_segcblist *rsclp, void rcu_segcblist_insert_count(struct rcu_segcblist *rsclp,
struct rcu_cblist *rclp) struct rcu_cblist *rclp)
{ {
rsclp->len_lazy += rclp->len_lazy;
rcu_segcblist_add_len(rsclp, rclp->len); rcu_segcblist_add_len(rsclp, rclp->len);
rclp->len_lazy = 0;
rclp->len = 0; rclp->len = 0;
} }
......
...@@ -15,15 +15,6 @@ static inline long rcu_cblist_n_cbs(struct rcu_cblist *rclp) ...@@ -15,15 +15,6 @@ static inline long rcu_cblist_n_cbs(struct rcu_cblist *rclp)
return READ_ONCE(rclp->len); return READ_ONCE(rclp->len);
} }
/*
* Account for the fact that a previously dequeued callback turned out
* to be marked as lazy.
*/
static inline void rcu_cblist_dequeued_lazy(struct rcu_cblist *rclp)
{
rclp->len_lazy--;
}
void rcu_cblist_init(struct rcu_cblist *rclp); void rcu_cblist_init(struct rcu_cblist *rclp);
void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp); void rcu_cblist_enqueue(struct rcu_cblist *rclp, struct rcu_head *rhp);
void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp, void rcu_cblist_flush_enqueue(struct rcu_cblist *drclp,
...@@ -59,18 +50,6 @@ static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp) ...@@ -59,18 +50,6 @@ static inline long rcu_segcblist_n_cbs(struct rcu_segcblist *rsclp)
#endif #endif
} }
/* Return number of lazy callbacks in segmented callback list. */
static inline long rcu_segcblist_n_lazy_cbs(struct rcu_segcblist *rsclp)
{
return rsclp->len_lazy;
}
/* Return number of lazy callbacks in segmented callback list. */
static inline long rcu_segcblist_n_nonlazy_cbs(struct rcu_segcblist *rsclp)
{
return rcu_segcblist_n_cbs(rsclp) - rsclp->len_lazy;
}
/* /*
* Is the specified rcu_segcblist enabled, for example, not corresponding * Is the specified rcu_segcblist enabled, for example, not corresponding
* to an offline CPU? * to an offline CPU?
...@@ -106,9 +85,9 @@ struct rcu_head *rcu_segcblist_first_cb(struct rcu_segcblist *rsclp); ...@@ -106,9 +85,9 @@ struct rcu_head *rcu_segcblist_first_cb(struct rcu_segcblist *rsclp);
struct rcu_head *rcu_segcblist_first_pend_cb(struct rcu_segcblist *rsclp); struct rcu_head *rcu_segcblist_first_pend_cb(struct rcu_segcblist *rsclp);
bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp); bool rcu_segcblist_nextgp(struct rcu_segcblist *rsclp, unsigned long *lp);
void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp, void rcu_segcblist_enqueue(struct rcu_segcblist *rsclp,
struct rcu_head *rhp, bool lazy); struct rcu_head *rhp);
bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp, bool rcu_segcblist_entrain(struct rcu_segcblist *rsclp,
struct rcu_head *rhp, bool lazy); struct rcu_head *rhp);
void rcu_segcblist_extract_count(struct rcu_segcblist *rsclp, void rcu_segcblist_extract_count(struct rcu_segcblist *rsclp,
struct rcu_cblist *rclp); struct rcu_cblist *rclp);
void rcu_segcblist_extract_done_cbs(struct rcu_segcblist *rsclp, void rcu_segcblist_extract_done_cbs(struct rcu_segcblist *rsclp,
......
...@@ -86,6 +86,7 @@ torture_param(bool, shutdown, RCUPERF_SHUTDOWN, ...@@ -86,6 +86,7 @@ torture_param(bool, shutdown, RCUPERF_SHUTDOWN,
"Shutdown at end of performance tests."); "Shutdown at end of performance tests.");
torture_param(int, verbose, 1, "Enable verbose debugging printk()s"); torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable"); torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() perf test?");
static char *perf_type = "rcu"; static char *perf_type = "rcu";
module_param(perf_type, charp, 0444); module_param(perf_type, charp, 0444);
...@@ -105,8 +106,8 @@ static atomic_t n_rcu_perf_writer_finished; ...@@ -105,8 +106,8 @@ static atomic_t n_rcu_perf_writer_finished;
static wait_queue_head_t shutdown_wq; static wait_queue_head_t shutdown_wq;
static u64 t_rcu_perf_writer_started; static u64 t_rcu_perf_writer_started;
static u64 t_rcu_perf_writer_finished; static u64 t_rcu_perf_writer_finished;
static unsigned long b_rcu_perf_writer_started; static unsigned long b_rcu_gp_test_started;
static unsigned long b_rcu_perf_writer_finished; static unsigned long b_rcu_gp_test_finished;
static DEFINE_PER_CPU(atomic_t, n_async_inflight); static DEFINE_PER_CPU(atomic_t, n_async_inflight);
#define MAX_MEAS 10000 #define MAX_MEAS 10000
...@@ -378,10 +379,10 @@ rcu_perf_writer(void *arg) ...@@ -378,10 +379,10 @@ rcu_perf_writer(void *arg)
if (atomic_inc_return(&n_rcu_perf_writer_started) >= nrealwriters) { if (atomic_inc_return(&n_rcu_perf_writer_started) >= nrealwriters) {
t_rcu_perf_writer_started = t; t_rcu_perf_writer_started = t;
if (gp_exp) { if (gp_exp) {
b_rcu_perf_writer_started = b_rcu_gp_test_started =
cur_ops->exp_completed() / 2; cur_ops->exp_completed() / 2;
} else { } else {
b_rcu_perf_writer_started = cur_ops->get_gp_seq(); b_rcu_gp_test_started = cur_ops->get_gp_seq();
} }
} }
...@@ -429,10 +430,10 @@ rcu_perf_writer(void *arg) ...@@ -429,10 +430,10 @@ rcu_perf_writer(void *arg)
PERFOUT_STRING("Test complete"); PERFOUT_STRING("Test complete");
t_rcu_perf_writer_finished = t; t_rcu_perf_writer_finished = t;
if (gp_exp) { if (gp_exp) {
b_rcu_perf_writer_finished = b_rcu_gp_test_finished =
cur_ops->exp_completed() / 2; cur_ops->exp_completed() / 2;
} else { } else {
b_rcu_perf_writer_finished = b_rcu_gp_test_finished =
cur_ops->get_gp_seq(); cur_ops->get_gp_seq();
} }
if (shutdown) { if (shutdown) {
...@@ -515,8 +516,8 @@ rcu_perf_cleanup(void) ...@@ -515,8 +516,8 @@ rcu_perf_cleanup(void)
t_rcu_perf_writer_finished - t_rcu_perf_writer_finished -
t_rcu_perf_writer_started, t_rcu_perf_writer_started,
ngps, ngps,
rcuperf_seq_diff(b_rcu_perf_writer_finished, rcuperf_seq_diff(b_rcu_gp_test_finished,
b_rcu_perf_writer_started)); b_rcu_gp_test_started));
for (i = 0; i < nrealwriters; i++) { for (i = 0; i < nrealwriters; i++) {
if (!writer_durations) if (!writer_durations)
break; break;
...@@ -584,6 +585,159 @@ rcu_perf_shutdown(void *arg) ...@@ -584,6 +585,159 @@ rcu_perf_shutdown(void *arg)
return -EINVAL; return -EINVAL;
} }
/*
* kfree_rcu() performance tests: Start a kfree_rcu() loop on all CPUs for number
* of iterations and measure total time and number of GP for all iterations to complete.
*/
torture_param(int, kfree_nthreads, -1, "Number of threads running loops of kfree_rcu().");
torture_param(int, kfree_alloc_num, 8000, "Number of allocations and frees done in an iteration.");
torture_param(int, kfree_loops, 10, "Number of loops doing kfree_alloc_num allocations and frees.");
static struct task_struct **kfree_reader_tasks;
static int kfree_nrealthreads;
static atomic_t n_kfree_perf_thread_started;
static atomic_t n_kfree_perf_thread_ended;
struct kfree_obj {
char kfree_obj[8];
struct rcu_head rh;
};
static int
kfree_perf_thread(void *arg)
{
int i, loop = 0;
long me = (long)arg;
struct kfree_obj *alloc_ptr;
u64 start_time, end_time;
VERBOSE_PERFOUT_STRING("kfree_perf_thread task started");
set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids));
set_user_nice(current, MAX_NICE);
start_time = ktime_get_mono_fast_ns();
if (atomic_inc_return(&n_kfree_perf_thread_started) >= kfree_nrealthreads) {
if (gp_exp)
b_rcu_gp_test_started = cur_ops->exp_completed() / 2;
else
b_rcu_gp_test_started = cur_ops->get_gp_seq();
}
do {
for (i = 0; i < kfree_alloc_num; i++) {
alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
if (!alloc_ptr)
return -ENOMEM;
kfree_rcu(alloc_ptr, rh);
}
cond_resched();
} while (!torture_must_stop() && ++loop < kfree_loops);
if (atomic_inc_return(&n_kfree_perf_thread_ended) >= kfree_nrealthreads) {
end_time = ktime_get_mono_fast_ns();
if (gp_exp)
b_rcu_gp_test_finished = cur_ops->exp_completed() / 2;
else
b_rcu_gp_test_finished = cur_ops->get_gp_seq();
pr_alert("Total time taken by all kfree'ers: %llu ns, loops: %d, batches: %ld\n",
(unsigned long long)(end_time - start_time), kfree_loops,
rcuperf_seq_diff(b_rcu_gp_test_finished, b_rcu_gp_test_started));
if (shutdown) {
smp_mb(); /* Assign before wake. */
wake_up(&shutdown_wq);
}
}
torture_kthread_stopping("kfree_perf_thread");
return 0;
}
static void
kfree_perf_cleanup(void)
{
int i;
if (torture_cleanup_begin())
return;
if (kfree_reader_tasks) {
for (i = 0; i < kfree_nrealthreads; i++)
torture_stop_kthread(kfree_perf_thread,
kfree_reader_tasks[i]);
kfree(kfree_reader_tasks);
}
torture_cleanup_end();
}
/*
* shutdown kthread. Just waits to be awakened, then shuts down system.
*/
static int
kfree_perf_shutdown(void *arg)
{
do {
wait_event(shutdown_wq,
atomic_read(&n_kfree_perf_thread_ended) >=
kfree_nrealthreads);
} while (atomic_read(&n_kfree_perf_thread_ended) < kfree_nrealthreads);
smp_mb(); /* Wake before output. */
kfree_perf_cleanup();
kernel_power_off();
return -EINVAL;
}
static int __init
kfree_perf_init(void)
{
long i;
int firsterr = 0;
kfree_nrealthreads = compute_real(kfree_nthreads);
/* Start up the kthreads. */
if (shutdown) {
init_waitqueue_head(&shutdown_wq);
firsterr = torture_create_kthread(kfree_perf_shutdown, NULL,
shutdown_task);
if (firsterr)
goto unwind;
schedule_timeout_uninterruptible(1);
}
kfree_reader_tasks = kcalloc(kfree_nrealthreads, sizeof(kfree_reader_tasks[0]),
GFP_KERNEL);
if (kfree_reader_tasks == NULL) {
firsterr = -ENOMEM;
goto unwind;
}
for (i = 0; i < kfree_nrealthreads; i++) {
firsterr = torture_create_kthread(kfree_perf_thread, (void *)i,
kfree_reader_tasks[i]);
if (firsterr)
goto unwind;
}
while (atomic_read(&n_kfree_perf_thread_started) < kfree_nrealthreads)
schedule_timeout_uninterruptible(1);
torture_init_end();
return 0;
unwind:
torture_init_end();
kfree_perf_cleanup();
return firsterr;
}
static int __init static int __init
rcu_perf_init(void) rcu_perf_init(void)
{ {
...@@ -616,6 +770,9 @@ rcu_perf_init(void) ...@@ -616,6 +770,9 @@ rcu_perf_init(void)
if (cur_ops->init) if (cur_ops->init)
cur_ops->init(); cur_ops->init();
if (kfree_rcu_test)
return kfree_perf_init();
nrealwriters = compute_real(nwriters); nrealwriters = compute_real(nwriters);
nrealreaders = compute_real(nreaders); nrealreaders = compute_real(nreaders);
atomic_set(&n_rcu_perf_reader_started, 0); atomic_set(&n_rcu_perf_reader_started, 0);
......
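The new kfree_rcu() performance test added above is gated by the kfree_rcu_test module parameter: when rcu_perf_init() sees it set, it hands control to kfree_perf_init() instead of the usual reader/writer test. As a hedged example (the "rcuperf." prefix is assumed from the module name), booting with rcuperf.kfree_rcu_test=1 rcuperf.kfree_nthreads=8 would start eight kfree_perf_thread() loops, each doing kfree_alloc_num allocations and kfree_rcu() calls per loop for kfree_loops iterations, then print the total time and the grace-period ("batches") count when the last thread finishes.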
This diff is collapsed.
...@@ -103,7 +103,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock); ...@@ -103,7 +103,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_unlock);
/* /*
* Workqueue handler to drive one grace period and invoke any callbacks * Workqueue handler to drive one grace period and invoke any callbacks
* that become ready as a result. Single-CPU and !PREEMPT operation * that become ready as a result. Single-CPU and !PREEMPTION operation
* means that we get away with murder on synchronization. ;-) * means that we get away with murder on synchronization. ;-)
*/ */
void srcu_drive_gp(struct work_struct *wp) void srcu_drive_gp(struct work_struct *wp)
......
...@@ -530,7 +530,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) ...@@ -530,7 +530,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
idx = rcu_seq_state(ssp->srcu_gp_seq); idx = rcu_seq_state(ssp->srcu_gp_seq);
WARN_ON_ONCE(idx != SRCU_STATE_SCAN2); WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
cbdelay = srcu_get_delay(ssp); cbdelay = srcu_get_delay(ssp);
ssp->srcu_last_gp_end = ktime_get_mono_fast_ns(); WRITE_ONCE(ssp->srcu_last_gp_end, ktime_get_mono_fast_ns());
rcu_seq_end(&ssp->srcu_gp_seq); rcu_seq_end(&ssp->srcu_gp_seq);
gpseq = rcu_seq_current(&ssp->srcu_gp_seq); gpseq = rcu_seq_current(&ssp->srcu_gp_seq);
if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq)) if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq))
...@@ -762,6 +762,7 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp) ...@@ -762,6 +762,7 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
unsigned long flags; unsigned long flags;
struct srcu_data *sdp; struct srcu_data *sdp;
unsigned long t; unsigned long t;
unsigned long tlast;
/* If the local srcu_data structure has callbacks, not idle. */ /* If the local srcu_data structure has callbacks, not idle. */
local_irq_save(flags); local_irq_save(flags);
...@@ -780,9 +781,9 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp) ...@@ -780,9 +781,9 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp)
/* First, see if enough time has passed since the last GP. */ /* First, see if enough time has passed since the last GP. */
t = ktime_get_mono_fast_ns(); t = ktime_get_mono_fast_ns();
tlast = READ_ONCE(ssp->srcu_last_gp_end);
 	if (exp_holdoff == 0 ||
-	    time_in_range_open(t, ssp->srcu_last_gp_end,
-			       ssp->srcu_last_gp_end + exp_holdoff))
+	    time_in_range_open(t, tlast, tlast + exp_holdoff))
return false; /* Too soon after last GP. */ return false; /* Too soon after last GP. */
/* Next, check for probable idleness. */ /* Next, check for probable idleness. */
...@@ -853,7 +854,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp, ...@@ -853,7 +854,7 @@ static void __call_srcu(struct srcu_struct *ssp, struct rcu_head *rhp,
local_irq_save(flags); local_irq_save(flags);
sdp = this_cpu_ptr(ssp->sda); sdp = this_cpu_ptr(ssp->sda);
spin_lock_rcu_node(sdp); spin_lock_rcu_node(sdp);
rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp, false); rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp);
rcu_segcblist_advance(&sdp->srcu_cblist, rcu_segcblist_advance(&sdp->srcu_cblist,
rcu_seq_current(&ssp->srcu_gp_seq)); rcu_seq_current(&ssp->srcu_gp_seq));
s = rcu_seq_snap(&ssp->srcu_gp_seq); s = rcu_seq_snap(&ssp->srcu_gp_seq);
...@@ -1052,7 +1053,7 @@ void srcu_barrier(struct srcu_struct *ssp) ...@@ -1052,7 +1053,7 @@ void srcu_barrier(struct srcu_struct *ssp)
sdp->srcu_barrier_head.func = srcu_barrier_cb; sdp->srcu_barrier_head.func = srcu_barrier_cb;
debug_rcu_head_queue(&sdp->srcu_barrier_head); debug_rcu_head_queue(&sdp->srcu_barrier_head);
if (!rcu_segcblist_entrain(&sdp->srcu_cblist, if (!rcu_segcblist_entrain(&sdp->srcu_cblist,
&sdp->srcu_barrier_head, 0)) { &sdp->srcu_barrier_head)) {
debug_rcu_head_unqueue(&sdp->srcu_barrier_head); debug_rcu_head_unqueue(&sdp->srcu_barrier_head);
atomic_dec(&ssp->srcu_barrier_cpu_cnt); atomic_dec(&ssp->srcu_barrier_cpu_cnt);
} }
......
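The srcu_might_be_idle() change above replaces two plain reads of ->srcu_last_gp_end with a single READ_ONCE() into the new tlast local. The sketch below is illustrative only, with made-up names (in_holdoff_window, last_gp_end); READ_ONCE() and time_in_range_open() are the real kernel interfaces. The point is to snapshot a concurrently-updated value once and use that snapshot for every comparison, so the range check cannot see two different values if srcu_gp_end() updates the field (now via WRITE_ONCE()) in between.

#include <linux/compiler.h>	/* READ_ONCE() */
#include <linux/jiffies.h>	/* time_in_range_open() */
#include <linux/types.h>

/* Illustrative only: decide whether @now falls within the holdoff window. */
static bool in_holdoff_window(unsigned long *last_gp_end,
			      unsigned long now, unsigned long holdoff)
{
	unsigned long tlast = READ_ONCE(*last_gp_end);	/* single snapshot */

	/* Both uses of tlast see the same value, even if *last_gp_end moves. */
	return holdoff && time_in_range_open(now, tlast, tlast + holdoff);
}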
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <linux/time.h> #include <linux/time.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/prefetch.h> #include <linux/prefetch.h>
#include <linux/slab.h>
#include "rcu.h" #include "rcu.h"
...@@ -73,6 +74,31 @@ void rcu_sched_clock_irq(int user) ...@@ -73,6 +74,31 @@ void rcu_sched_clock_irq(int user)
} }
} }
/*
* Reclaim the specified callback, either by invoking it for non-kfree cases or
* freeing it directly (for kfree). Return true if kfreeing, false otherwise.
*/
static inline bool rcu_reclaim_tiny(struct rcu_head *head)
{
rcu_callback_t f;
unsigned long offset = (unsigned long)head->func;
rcu_lock_acquire(&rcu_callback_map);
if (__is_kfree_rcu_offset(offset)) {
trace_rcu_invoke_kfree_callback("", head, offset);
kfree((void *)head - offset);
rcu_lock_release(&rcu_callback_map);
return true;
}
trace_rcu_invoke_callback("", head);
f = head->func;
WRITE_ONCE(head->func, (rcu_callback_t)0L);
f(head);
rcu_lock_release(&rcu_callback_map);
return false;
}
/* Invoke the RCU callbacks whose grace period has elapsed. */ /* Invoke the RCU callbacks whose grace period has elapsed. */
static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused) static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
{ {
...@@ -100,7 +126,7 @@ static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused ...@@ -100,7 +126,7 @@ static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused
prefetch(next); prefetch(next);
debug_rcu_head_unqueue(list); debug_rcu_head_unqueue(list);
local_bh_disable(); local_bh_disable();
__rcu_reclaim("", list); rcu_reclaim_tiny(list);
local_bh_enable(); local_bh_enable();
list = next; list = next;
} }
......
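rcu_reclaim_tiny() above keys off __is_kfree_rcu_offset() to decide whether ->func holds a real callback or a byte offset. A hedged sketch of the underlying encoding follows; struct example and example_kfree_rcu() are made up for illustration, while kfree_rcu(), call_rcu(), and offsetof() are the real interfaces. kfree_rcu() records the offset of the rcu_head within the enclosing object in head->func, and because such offsets are tiny compared with any kernel function address, the reclaim path can tell the two cases apart and recover the original allocation with (void *)head - offset.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example {
	int payload;
	struct rcu_head rh;
};

/*
 * Conceptual-only expansion of kfree_rcu(p, rh): the "callback" is
 * really the offset of rh inside struct example, cast to a pointer,
 * which the reclaim path later detects via __is_kfree_rcu_offset().
 */
static void example_kfree_rcu(struct example *p)
{
	call_rcu(&p->rh, (rcu_callback_t)(unsigned long)
		 offsetof(struct example, rh));
}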
This diff is collapsed.
...@@ -16,7 +16,6 @@ ...@@ -16,7 +16,6 @@
#include <linux/cpumask.h> #include <linux/cpumask.h>
#include <linux/seqlock.h> #include <linux/seqlock.h>
#include <linux/swait.h> #include <linux/swait.h>
#include <linux/stop_machine.h>
#include <linux/rcu_node_tree.h> #include <linux/rcu_node_tree.h>
#include "rcu_segcblist.h" #include "rcu_segcblist.h"
...@@ -182,8 +181,8 @@ struct rcu_data { ...@@ -182,8 +181,8 @@ struct rcu_data {
bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */ bool rcu_need_heavy_qs; /* GP old, so heavy quiescent state! */
bool rcu_urgent_qs; /* GP old need light quiescent state. */ bool rcu_urgent_qs; /* GP old need light quiescent state. */
bool rcu_forced_tick; /* Forced tick to provide QS. */ bool rcu_forced_tick; /* Forced tick to provide QS. */
bool rcu_forced_tick_exp; /* ... provide QS to expedited GP. */
#ifdef CONFIG_RCU_FAST_NO_HZ #ifdef CONFIG_RCU_FAST_NO_HZ
bool all_lazy; /* All CPU's CBs lazy at idle start? */
unsigned long last_accelerate; /* Last jiffy CBs were accelerated. */ unsigned long last_accelerate; /* Last jiffy CBs were accelerated. */
unsigned long last_advance_all; /* Last jiffy CBs were all advanced. */ unsigned long last_advance_all; /* Last jiffy CBs were all advanced. */
int tick_nohz_enabled_snap; /* Previously seen value from sysfs. */ int tick_nohz_enabled_snap; /* Previously seen value from sysfs. */
...@@ -368,18 +367,6 @@ struct rcu_state { ...@@ -368,18 +367,6 @@ struct rcu_state {
#define RCU_GP_CLEANUP 7 /* Grace-period cleanup started. */ #define RCU_GP_CLEANUP 7 /* Grace-period cleanup started. */
#define RCU_GP_CLEANED 8 /* Grace-period cleanup complete. */ #define RCU_GP_CLEANED 8 /* Grace-period cleanup complete. */
static const char * const gp_state_names[] = {
"RCU_GP_IDLE",
"RCU_GP_WAIT_GPS",
"RCU_GP_DONE_GPS",
"RCU_GP_ONOFF",
"RCU_GP_INIT",
"RCU_GP_WAIT_FQS",
"RCU_GP_DOING_FQS",
"RCU_GP_CLEANUP",
"RCU_GP_CLEANED",
};
/* /*
* In order to export the rcu_state name to the tracing tools, it * In order to export the rcu_state name to the tracing tools, it
* needs to be added in the __tracepoint_string section. * needs to be added in the __tracepoint_string section.
...@@ -403,8 +390,6 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name; ...@@ -403,8 +390,6 @@ static const char *tp_rcu_varname __used __tracepoint_string = rcu_name;
#define RCU_NAME rcu_name #define RCU_NAME rcu_name
#endif /* #else #ifdef CONFIG_TRACING */ #endif /* #else #ifdef CONFIG_TRACING */
int rcu_dynticks_snap(struct rcu_data *rdp);
/* Forward declarations for tree_plugin.h */ /* Forward declarations for tree_plugin.h */
static void rcu_bootup_announce(void); static void rcu_bootup_announce(void);
static void rcu_qs(void); static void rcu_qs(void);
...@@ -415,7 +400,6 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp); ...@@ -415,7 +400,6 @@ static bool rcu_preempt_has_tasks(struct rcu_node *rnp);
static int rcu_print_task_exp_stall(struct rcu_node *rnp); static int rcu_print_task_exp_stall(struct rcu_node *rnp);
static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp); static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp);
static void rcu_flavor_sched_clock_irq(int user); static void rcu_flavor_sched_clock_irq(int user);
void call_rcu(struct rcu_head *head, rcu_callback_t func);
static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck); static void dump_blkd_tasks(struct rcu_node *rnp, int ncheck);
static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags); static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
static void rcu_preempt_boost_start_gp(struct rcu_node *rnp); static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
......
This diff is collapsed.
This diff is collapsed.
...@@ -163,7 +163,7 @@ static void rcu_iw_handler(struct irq_work *iwp) ...@@ -163,7 +163,7 @@ static void rcu_iw_handler(struct irq_work *iwp)
// //
// Printing RCU CPU stall warnings // Printing RCU CPU stall warnings
#ifdef CONFIG_PREEMPTION #ifdef CONFIG_PREEMPT_RCU
/* /*
* Dump detailed information for all tasks blocking the current RCU * Dump detailed information for all tasks blocking the current RCU
...@@ -215,7 +215,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp) ...@@ -215,7 +215,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp)
return ndetected; return ndetected;
} }
#else /* #ifdef CONFIG_PREEMPTION */ #else /* #ifdef CONFIG_PREEMPT_RCU */
/* /*
* Because preemptible RCU does not exist, we never have to check for * Because preemptible RCU does not exist, we never have to check for
...@@ -233,7 +233,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp) ...@@ -233,7 +233,7 @@ static int rcu_print_task_stall(struct rcu_node *rnp)
{ {
return 0; return 0;
} }
#endif /* #else #ifdef CONFIG_PREEMPTION */ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
/* /*
* Dump stacks of all tasks running on stalled CPUs. First try using * Dump stacks of all tasks running on stalled CPUs. First try using
...@@ -263,11 +263,9 @@ static void print_cpu_stall_fast_no_hz(char *cp, int cpu) ...@@ -263,11 +263,9 @@ static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
{ {
struct rcu_data *rdp = &per_cpu(rcu_data, cpu); struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
sprintf(cp, "last_accelerate: %04lx/%04lx, Nonlazy posted: %c%c%c", sprintf(cp, "last_accelerate: %04lx/%04lx dyntick_enabled: %d",
rdp->last_accelerate & 0xffff, jiffies & 0xffff, rdp->last_accelerate & 0xffff, jiffies & 0xffff,
".l"[rdp->all_lazy], !!rdp->tick_nohz_enabled_snap);
".L"[!rcu_segcblist_n_nonlazy_cbs(&rdp->cblist)],
".D"[!!rdp->tick_nohz_enabled_snap]);
} }
#else /* #ifdef CONFIG_RCU_FAST_NO_HZ */ #else /* #ifdef CONFIG_RCU_FAST_NO_HZ */
...@@ -279,6 +277,28 @@ static void print_cpu_stall_fast_no_hz(char *cp, int cpu) ...@@ -279,6 +277,28 @@ static void print_cpu_stall_fast_no_hz(char *cp, int cpu)
#endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */ #endif /* #else #ifdef CONFIG_RCU_FAST_NO_HZ */
static const char * const gp_state_names[] = {
[RCU_GP_IDLE] = "RCU_GP_IDLE",
[RCU_GP_WAIT_GPS] = "RCU_GP_WAIT_GPS",
[RCU_GP_DONE_GPS] = "RCU_GP_DONE_GPS",
[RCU_GP_ONOFF] = "RCU_GP_ONOFF",
[RCU_GP_INIT] = "RCU_GP_INIT",
[RCU_GP_WAIT_FQS] = "RCU_GP_WAIT_FQS",
[RCU_GP_DOING_FQS] = "RCU_GP_DOING_FQS",
[RCU_GP_CLEANUP] = "RCU_GP_CLEANUP",
[RCU_GP_CLEANED] = "RCU_GP_CLEANED",
};
/*
* Convert a ->gp_state value to a character string.
*/
static const char *gp_state_getname(short gs)
{
if (gs < 0 || gs >= ARRAY_SIZE(gp_state_names))
return "???";
return gp_state_names[gs];
}
/* /*
* Print out diagnostic information for the specified stalled CPU. * Print out diagnostic information for the specified stalled CPU.
* *
......
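Relative to the positional array removed from tree.h earlier in this series, the designated initializers above keep each string tied to its RCU_GP_* enumerator even if the enumeration is reordered or grows, and the bounds check in gp_state_getname() turns an out-of-range ->gp_state value into "???" rather than an out-of-bounds array read.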
...@@ -40,6 +40,7 @@ ...@@ -40,6 +40,7 @@
#include <linux/rcupdate_wait.h> #include <linux/rcupdate_wait.h>
#include <linux/sched/isolation.h> #include <linux/sched/isolation.h>
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <linux/slab.h>
#define CREATE_TRACE_POINTS #define CREATE_TRACE_POINTS
...@@ -51,9 +52,7 @@ ...@@ -51,9 +52,7 @@
#define MODULE_PARAM_PREFIX "rcupdate." #define MODULE_PARAM_PREFIX "rcupdate."
#ifndef CONFIG_TINY_RCU #ifndef CONFIG_TINY_RCU
extern int rcu_expedited; /* from sysctl */
module_param(rcu_expedited, int, 0); module_param(rcu_expedited, int, 0);
extern int rcu_normal; /* from sysctl */
module_param(rcu_normal, int, 0); module_param(rcu_normal, int, 0);
static int rcu_normal_after_boot; static int rcu_normal_after_boot;
module_param(rcu_normal_after_boot, int, 0); module_param(rcu_normal_after_boot, int, 0);
...@@ -218,6 +217,7 @@ static int __init rcu_set_runtime_mode(void) ...@@ -218,6 +217,7 @@ static int __init rcu_set_runtime_mode(void)
{ {
rcu_test_sync_prims(); rcu_test_sync_prims();
rcu_scheduler_active = RCU_SCHEDULER_RUNNING; rcu_scheduler_active = RCU_SCHEDULER_RUNNING;
kfree_rcu_scheduler_running();
rcu_test_sync_prims(); rcu_test_sync_prims();
return 0; return 0;
} }
...@@ -435,7 +435,7 @@ struct debug_obj_descr rcuhead_debug_descr = { ...@@ -435,7 +435,7 @@ struct debug_obj_descr rcuhead_debug_descr = {
EXPORT_SYMBOL_GPL(rcuhead_debug_descr); EXPORT_SYMBOL_GPL(rcuhead_debug_descr);
#endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */ #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE) #if defined(CONFIG_TREE_RCU) || defined(CONFIG_RCU_TRACE)
void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp, void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp,
unsigned long secs, unsigned long secs,
unsigned long c_old, unsigned long c) unsigned long c_old, unsigned long c)
...@@ -853,14 +853,22 @@ static void test_callback(struct rcu_head *r) ...@@ -853,14 +853,22 @@ static void test_callback(struct rcu_head *r)
DEFINE_STATIC_SRCU(early_srcu); DEFINE_STATIC_SRCU(early_srcu);
struct early_boot_kfree_rcu {
struct rcu_head rh;
};
static void early_boot_test_call_rcu(void) static void early_boot_test_call_rcu(void)
{ {
static struct rcu_head head; static struct rcu_head head;
static struct rcu_head shead; static struct rcu_head shead;
struct early_boot_kfree_rcu *rhp;
call_rcu(&head, test_callback); call_rcu(&head, test_callback);
if (IS_ENABLED(CONFIG_SRCU)) if (IS_ENABLED(CONFIG_SRCU))
call_srcu(&early_srcu, &shead, test_callback); call_srcu(&early_srcu, &shead, test_callback);
rhp = kmalloc(sizeof(*rhp), GFP_KERNEL);
if (!WARN_ON_ONCE(!rhp))
kfree_rcu(rhp, rh);
} }
void rcu_early_boot_tests(void) void rcu_early_boot_tests(void)
......
...@@ -1268,7 +1268,7 @@ static struct ctl_table kern_table[] = { ...@@ -1268,7 +1268,7 @@ static struct ctl_table kern_table[] = {
.proc_handler = proc_do_static_key, .proc_handler = proc_do_static_key,
}, },
#endif #endif
#if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) #if defined(CONFIG_TREE_RCU)
{ {
.procname = "panic_on_rcu_stall", .procname = "panic_on_rcu_stall",
.data = &sysctl_panic_on_rcu_stall, .data = &sysctl_panic_on_rcu_stall,
......
...@@ -257,9 +257,6 @@ static char *tipc_key_change_dump(struct tipc_key old, struct tipc_key new, ...@@ -257,9 +257,6 @@ static char *tipc_key_change_dump(struct tipc_key old, struct tipc_key new,
#define tipc_aead_rcu_ptr(rcu_ptr, lock) \ #define tipc_aead_rcu_ptr(rcu_ptr, lock) \
rcu_dereference_protected((rcu_ptr), lockdep_is_held(lock)) rcu_dereference_protected((rcu_ptr), lockdep_is_held(lock))
#define tipc_aead_rcu_swap(rcu_ptr, ptr, lock) \
rcu_swap_protected((rcu_ptr), (ptr), lockdep_is_held(lock))
#define tipc_aead_rcu_replace(rcu_ptr, ptr, lock) \ #define tipc_aead_rcu_replace(rcu_ptr, ptr, lock) \
do { \ do { \
typeof(rcu_ptr) __tmp = rcu_dereference_protected((rcu_ptr), \ typeof(rcu_ptr) __tmp = rcu_dereference_protected((rcu_ptr), \
...@@ -1189,7 +1186,7 @@ static bool tipc_crypto_key_try_align(struct tipc_crypto *rx, u8 new_pending) ...@@ -1189,7 +1186,7 @@ static bool tipc_crypto_key_try_align(struct tipc_crypto *rx, u8 new_pending)
/* Move passive key if any */ /* Move passive key if any */
if (key.passive) { if (key.passive) {
tipc_aead_rcu_swap(rx->aead[key.passive], tmp2, &rx->lock); tmp2 = rcu_replace_pointer(rx->aead[key.passive], tmp2, lockdep_is_held(&rx->lock));
x = (key.passive - key.pending + new_pending) % KEY_MAX; x = (key.passive - key.pending + new_pending) % KEY_MAX;
new_passive = (x <= 0) ? x + KEY_MAX : x; new_passive = (x <= 0) ? x + KEY_MAX : x;
} }
......
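The tipc change above is part of the conversion away from rcu_swap_protected(): rather than swapping in place, the new pointer is published with rcu_replace_pointer() and the old pointer comes back as the return value, ready to be freed after a grace period. A minimal sketch of the pattern follows, with made-up names (struct item, item_lock, item_ptr, item_update are assumptions); rcu_replace_pointer(), lockdep_is_held(), and kfree_rcu() are the real interfaces.

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	int value;
	struct rcu_head rcu;
};

static DEFINE_SPINLOCK(item_lock);
static struct item __rcu *item_ptr;

/* Publish @newp under item_lock and reclaim the old element after a GP. */
static void item_update(struct item *newp)
{
	struct item *oldp;

	spin_lock(&item_lock);
	oldp = rcu_replace_pointer(item_ptr, newp, lockdep_is_held(&item_lock));
	spin_unlock(&item_lock);
	if (oldp)
		kfree_rcu(oldp, rcu);
}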
...@@ -15,8 +15,15 @@ then ...@@ -15,8 +15,15 @@ then
exit 0 exit 0
fi fi
ncpus=`grep '^processor' /proc/cpuinfo | wc -l` ncpus=`grep '^processor' /proc/cpuinfo | wc -l`
-idlecpus=`mpstat | tail -1 | \
-	awk -v ncpus=$ncpus '{ print ncpus * ($7 + $NF) / 100 }'`
+if mpstat -V > /dev/null 2>&1
+then
+	idlecpus=`mpstat | tail -1 | \
+		  awk -v ncpus=$ncpus '{ print ncpus * ($7 + $NF) / 100 }'`
+else
+	# No mpstat command, so use all available CPUs.
+	echo The mpstat command is not available, so greedily using all CPUs.
+	idlecpus=$ncpus
+fi
awk -v ncpus=$ncpus -v idlecpus=$idlecpus < /dev/null ' awk -v ncpus=$ncpus -v idlecpus=$idlecpus < /dev/null '
BEGIN { BEGIN {
cpus2use = idlecpus; cpus2use = idlecpus;
......
...@@ -23,25 +23,39 @@ spinmax=${4-1000} ...@@ -23,25 +23,39 @@ spinmax=${4-1000}
n=1 n=1
starttime=`awk 'BEGIN { print systime(); }' < /dev/null` starttime=`gawk 'BEGIN { print systime(); }' < /dev/null`
nohotplugcpus=
for i in /sys/devices/system/cpu/cpu[0-9]*
do
if test -f $i/online
then
:
else
curcpu=`echo $i | sed -e 's/^[^0-9]*//'`
nohotplugcpus="$nohotplugcpus $curcpu"
fi
done
while : while :
do do
# Check for done. # Check for done.
t=`awk -v s=$starttime 'BEGIN { print systime() - s; }' < /dev/null` t=`gawk -v s=$starttime 'BEGIN { print systime() - s; }' < /dev/null`
if test "$t" -gt "$duration" if test "$t" -gt "$duration"
then then
exit 0; exit 0;
fi fi
# Set affinity to randomly selected online CPU # Set affinity to randomly selected online CPU
-	cpus=`grep 1 /sys/devices/system/cpu/*/online |
-	      sed -e 's,/[^/]*$,,' -e 's/^[^0-9]*//'`
-	# Do not leave out poor old cpu0 which may not be hot-pluggable
-	if [ ! -f "/sys/devices/system/cpu/cpu0/online" ]; then
-		cpus="0 $cpus"
-	fi
+	if cpus=`grep 1 /sys/devices/system/cpu/*/online 2>&1 |
+		 sed -e 's,/[^/]*$,,' -e 's/^[^0-9]*//'`
+	then
+		:
+	else
+		cpus=
+	fi
+	# Do not leave out non-hot-pluggable CPUs
+	cpus="$cpus $nohotplugcpus"
cpumask=`awk -v cpus="$cpus" -v me=$me -v n=$n 'BEGIN { cpumask=`awk -v cpus="$cpus" -v me=$me -v n=$n 'BEGIN {
srand(n + me + systime()); srand(n + me + systime());
......
...@@ -25,6 +25,7 @@ stopstate="`grep 'End-test grace-period state: g' $i/console.log 2> /dev/null | ...@@ -25,6 +25,7 @@ stopstate="`grep 'End-test grace-period state: g' $i/console.log 2> /dev/null |
tail -1 | sed -e 's/^\[[ 0-9.]*] //' | tail -1 | sed -e 's/^\[[ 0-9.]*] //' |
awk '{ print \"[\" $1 \" \" $5 \" \" $6 \" \" $7 \"]\"; }' | awk '{ print \"[\" $1 \" \" $5 \" \" $6 \" \" $7 \"]\"; }' |
tr -d '\012\015'`" tr -d '\012\015'`"
fwdprog="`grep 'rcu_torture_fwd_prog_cr Duration' $i/console.log 2> /dev/null | sed -e 's/^\[[^]]*] //' | sort -k15nr | head -1 | awk '{ print $14 " " $15 }'`"
if test -z "$ngps" if test -z "$ngps"
then then
echo "$configfile ------- " $stopstate echo "$configfile ------- " $stopstate
...@@ -39,7 +40,7 @@ else ...@@ -39,7 +40,7 @@ else
BEGIN { print ngps / dur }' < /dev/null` BEGIN { print ngps / dur }' < /dev/null`
title="$title ($ngpsps/s)" title="$title ($ngpsps/s)"
fi fi
echo $title $stopstate echo $title $stopstate $fwdprog
nclosecalls=`grep --binary-files=text 'torture: Reader Batch' $i/console.log | tail -1 | awk '{for (i=NF-8;i<=NF;i++) sum+=$i; } END {print sum}'` nclosecalls=`grep --binary-files=text 'torture: Reader Batch' $i/console.log | tail -1 | awk '{for (i=NF-8;i<=NF;i++) sum+=$i; } END {print sum}'`
if test -z "$nclosecalls" if test -z "$nclosecalls"
then then
......
...@@ -123,7 +123,7 @@ qemu_args=$5 ...@@ -123,7 +123,7 @@ qemu_args=$5
boot_args=$6 boot_args=$6
cd $KVM cd $KVM
kstarttime=`awk 'BEGIN { print systime() }' < /dev/null` kstarttime=`gawk 'BEGIN { print systime() }' < /dev/null`
if test -z "$TORTURE_BUILDONLY" if test -z "$TORTURE_BUILDONLY"
then then
echo ' ---' `date`: Starting kernel echo ' ---' `date`: Starting kernel
...@@ -133,11 +133,10 @@ fi ...@@ -133,11 +133,10 @@ fi
qemu_args="-enable-kvm -nographic $qemu_args" qemu_args="-enable-kvm -nographic $qemu_args"
cpu_count=`configNR_CPUS.sh $resdir/ConfigFragment` cpu_count=`configNR_CPUS.sh $resdir/ConfigFragment`
cpu_count=`configfrag_boot_cpus "$boot_args" "$config_template" "$cpu_count"` cpu_count=`configfrag_boot_cpus "$boot_args" "$config_template" "$cpu_count"`
-vcpus=`identify_qemu_vcpus`
-if test $cpu_count -gt $vcpus
+if test "$cpu_count" -gt "$TORTURE_ALLOTED_CPUS"
 then
-	echo CPU count limited from $cpu_count to $vcpus | tee -a $resdir/Warnings
-	cpu_count=$vcpus
+	echo CPU count limited from $cpu_count to $TORTURE_ALLOTED_CPUS | tee -a $resdir/Warnings
+	cpu_count=$TORTURE_ALLOTED_CPUS
 fi
qemu_args="`specify_qemu_cpus "$QEMU" "$qemu_args" "$cpu_count"`" qemu_args="`specify_qemu_cpus "$QEMU" "$qemu_args" "$cpu_count"`"
...@@ -177,7 +176,7 @@ do ...@@ -177,7 +176,7 @@ do
then then
qemu_pid=`cat "$resdir/qemu_pid"` qemu_pid=`cat "$resdir/qemu_pid"`
fi fi
kruntime=`awk 'BEGIN { print systime() - '"$kstarttime"' }' < /dev/null` kruntime=`gawk 'BEGIN { print systime() - '"$kstarttime"' }' < /dev/null`
if test -z "$qemu_pid" || kill -0 "$qemu_pid" > /dev/null 2>&1 if test -z "$qemu_pid" || kill -0 "$qemu_pid" > /dev/null 2>&1
then then
if test $kruntime -ge $seconds if test $kruntime -ge $seconds
...@@ -213,7 +212,7 @@ then ...@@ -213,7 +212,7 @@ then
oldline="`tail $resdir/console.log`" oldline="`tail $resdir/console.log`"
while : while :
do do
kruntime=`awk 'BEGIN { print systime() - '"$kstarttime"' }' < /dev/null` kruntime=`gawk 'BEGIN { print systime() - '"$kstarttime"' }' < /dev/null`
if kill -0 $qemu_pid > /dev/null 2>&1 if kill -0 $qemu_pid > /dev/null 2>&1
then then
: :
......
...@@ -24,7 +24,9 @@ dur=$((30*60)) ...@@ -24,7 +24,9 @@ dur=$((30*60))
dryrun="" dryrun=""
KVM="`pwd`/tools/testing/selftests/rcutorture"; export KVM KVM="`pwd`/tools/testing/selftests/rcutorture"; export KVM
PATH=${KVM}/bin:$PATH; export PATH PATH=${KVM}/bin:$PATH; export PATH
TORTURE_ALLOTED_CPUS="" . functions.sh
TORTURE_ALLOTED_CPUS="`identify_qemu_vcpus`"
TORTURE_DEFCONFIG=defconfig TORTURE_DEFCONFIG=defconfig
TORTURE_BOOT_IMAGE="" TORTURE_BOOT_IMAGE=""
TORTURE_INITRD="$KVM/initrd"; export TORTURE_INITRD TORTURE_INITRD="$KVM/initrd"; export TORTURE_INITRD
...@@ -40,8 +42,6 @@ cpus=0 ...@@ -40,8 +42,6 @@ cpus=0
ds=`date +%Y.%m.%d-%H:%M:%S` ds=`date +%Y.%m.%d-%H:%M:%S`
jitter="-1" jitter="-1"
. functions.sh
usage () { usage () {
echo "Usage: $scriptname optional arguments:" echo "Usage: $scriptname optional arguments:"
echo " --bootargs kernel-boot-arguments" echo " --bootargs kernel-boot-arguments"
...@@ -93,6 +93,11 @@ do ...@@ -93,6 +93,11 @@ do
checkarg --cpus "(number)" "$#" "$2" '^[0-9]*$' '^--' checkarg --cpus "(number)" "$#" "$2" '^[0-9]*$' '^--'
cpus=$2 cpus=$2
TORTURE_ALLOTED_CPUS="$2" TORTURE_ALLOTED_CPUS="$2"
max_cpus="`identify_qemu_vcpus`"
if test "$TORTURE_ALLOTED_CPUS" -gt "$max_cpus"
then
TORTURE_ALLOTED_CPUS=$max_cpus
fi
shift shift
;; ;;
--datestamp) --datestamp)
...@@ -198,9 +203,10 @@ fi ...@@ -198,9 +203,10 @@ fi
CONFIGFRAG=${KVM}/configs/${TORTURE_SUITE}; export CONFIGFRAG CONFIGFRAG=${KVM}/configs/${TORTURE_SUITE}; export CONFIGFRAG
defaultconfigs="`tr '\012' ' ' < $CONFIGFRAG/CFLIST`"
if test -z "$configs" if test -z "$configs"
then then
configs="`cat $CONFIGFRAG/CFLIST`" configs=$defaultconfigs
fi fi
if test -z "$resdir" if test -z "$resdir"
...@@ -209,7 +215,7 @@ then ...@@ -209,7 +215,7 @@ then
fi fi
# Create a file of test-name/#cpus pairs, sorted by decreasing #cpus. # Create a file of test-name/#cpus pairs, sorted by decreasing #cpus.
touch $T/cfgcpu configs_derep=
for CF in $configs for CF in $configs
do do
case $CF in case $CF in
...@@ -222,15 +228,21 @@ do ...@@ -222,15 +228,21 @@ do
CF1=$CF CF1=$CF
;; ;;
esac esac
for ((cur_rep=0;cur_rep<$config_reps;cur_rep++))
do
configs_derep="$configs_derep $CF1"
done
done
touch $T/cfgcpu
configs_derep="`echo $configs_derep | sed -e "s/\<CFLIST\>/$defaultconfigs/g"`"
for CF1 in $configs_derep
do
if test -f "$CONFIGFRAG/$CF1" if test -f "$CONFIGFRAG/$CF1"
then then
cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF1` cpu_count=`configNR_CPUS.sh $CONFIGFRAG/$CF1`
cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"` cpu_count=`configfrag_boot_cpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"`
cpu_count=`configfrag_boot_maxcpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"` cpu_count=`configfrag_boot_maxcpus "$TORTURE_BOOTARGS" "$CONFIGFRAG/$CF1" "$cpu_count"`
for ((cur_rep=0;cur_rep<$config_reps;cur_rep++)) echo $CF1 $cpu_count >> $T/cfgcpu
do
echo $CF1 $cpu_count >> $T/cfgcpu
done
else else
echo "The --configs file $CF1 does not exist, terminating." echo "The --configs file $CF1 does not exist, terminating."
exit 1 exit 1
......
...@@ -20,58 +20,9 @@ if [ -s "$D/initrd/init" ]; then ...@@ -20,58 +20,9 @@ if [ -s "$D/initrd/init" ]; then
exit 0 exit 0
fi fi
-T=${TMPDIR-/tmp}/mkinitrd.sh.$$
-trap 'rm -rf $T' 0 2
-mkdir $T
+# Create a C-language initrd/init infinite-loop program and statically
+# link it. This results in a very small initrd.
+echo "Creating a statically linked C-language initrd"
cat > $T/init << '__EOF___'
#!/bin/sh
# Run in userspace a few milliseconds every second. This helps to
# exercise the NO_HZ_FULL portions of RCU. The 192 instances of "a" was
# empirically shown to give a nice multi-millisecond burst of user-mode
# execution on a 2GHz CPU, as desired. Modern CPUs will vary from a
# couple of milliseconds up to perhaps 100 milliseconds, which is an
# acceptable range.
#
# Why not calibrate an exact delay? Because within this initrd, we
# are restricted to Bourne-shell builtins, which as far as I know do not
# provide any means of obtaining a fine-grained timestamp.
a4="a a a a"
a16="$a4 $a4 $a4 $a4"
a64="$a16 $a16 $a16 $a16"
a192="$a64 $a64 $a64"
while :
do
q=
for i in $a192
do
q="$q $i"
done
sleep 1
done
__EOF___
# Try using dracut to create initrd
if command -v dracut >/dev/null 2>&1
then
echo Creating $D/initrd using dracut.
# Filesystem creation
dracut --force --no-hostonly --no-hostonly-cmdline --module "base" $T/initramfs.img
cd $D
mkdir -p initrd
cd initrd
zcat $T/initramfs.img | cpio -id
cp $T/init init
chmod +x init
echo Done creating $D/initrd using dracut
exit 0
fi
# No dracut, so create a C-language initrd/init program and statically
# link it. This results in a very small initrd, but might be a bit less
# future-proof than dracut.
echo "Could not find dracut, attempting C initrd"
cd $D cd $D
mkdir -p initrd mkdir -p initrd
cd initrd cd initrd
......
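mkinitrd.sh now always generates and statically links a tiny C init program rather than first trying dracut. The exact program it writes lies below the hunk shown here, but a hedged sketch of such an init looks roughly like the following: never exit (it runs as PID 1), sleep most of each second, and spin briefly in user mode so that NO_HZ_FULL/RCU code paths see some user-mode execution, which is what the removed shell loop was approximating.

/* Hedged sketch of a minimal statically linked initrd /init; the real
 * program emitted by mkinitrd.sh may differ. */
#include <unistd.h>

volatile unsigned long delaycount;

int main(void)
{
	unsigned long i;

	for (;;) {
		sleep(1);			/* Mostly idle. */
		for (i = 0; i < 100000; i++)	/* A few ms of user-mode work. */
			delaycount = i * i;
	}
	return 0;				/* Not reached. */
}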