Commit 7c4fa150 authored by Linus Torvalds

Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
 "The main changes in this cycle were:

   - Make kfree_rcu() use kfree_bulk() for added performance

   - RCU updates

   - Callback-overload handling updates

   - Tasks-RCU KCSAN and sparse updates

   - Locking torture test and RCU torture test updates

   - Documentation updates

   - Miscellaneous fixes"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (74 commits)
  rcu: Make rcu_barrier() account for offline no-CBs CPUs
  rcu: Mark rcu_state.gp_seq to detect concurrent writes
  Documentation/memory-barriers: Fix typos
  doc: Add rcutorture scripting to torture.txt
  doc/RCU/rcu: Use https instead of http if possible
  doc/RCU/rcu: Use absolute paths for non-rst files
  doc/RCU/rcu: Use ':ref:' for links to other docs
  doc/RCU/listRCU: Update example function name
  doc/RCU/listRCU: Fix typos in a example code snippets
  doc/RCU/Design: Remove remaining HTML tags in ReST files
  doc: Add some more RCU list patterns in the kernel
  rcutorture: Set KCSAN Kconfig options to detect more data races
  rcutorture: Manually clean up after rcu_barrier() failure
  rcutorture: Make rcu_torture_barrier_cbs() post from corresponding CPU
  rcuperf: Measure memory footprint during kfree_rcu() test
  rcutorture: Annotation lockless accesses to rcu_torture_current
  rcutorture: Add READ_ONCE() to rcu_torture_count and rcu_torture_batch
  rcutorture: Fix stray access to rcu_fwd_cb_nodelay
  rcutorture: Fix rcu_torture_one_read()/rcu_torture_writer() data race
  rcutorture: Make kvm-find-errors.sh abort on bad directory
  ...
parents d937a6df baf5fe76
@@ -4,7 +4,7 @@ A Tour Through TREE_RCU's Grace-Period Memory Ordering

August 8, 2017
This article was contributed by Paul E. McKenney

Introduction
============

@@ -48,7 +48,7 @@ Tree RCU Grace Period Memory Ordering Building Blocks

The workhorse for RCU's grace-period memory ordering is the
critical section for the ``rcu_node`` structure's
``->lock``. These critical sections use helper functions for lock
acquisition, including ``raw_spin_lock_rcu_node()``,
``raw_spin_lock_irq_rcu_node()``, and ``raw_spin_lock_irqsave_rcu_node()``.
Their lock-release counterparts are ``raw_spin_unlock_rcu_node()``,

@@ -102,9 +102,9 @@ lock-acquisition and lock-release functions::

	23   r3 = READ_ONCE(x);
	24 }
	25
	26 WARN_ON(r1 == 0 && r2 == 0 && r3 == 0);

-The ``WARN_ON()`` is evaluated at “the end of time”,
The ``WARN_ON()`` is evaluated at "the end of time",
after all changes have propagated throughout the system.
Without the ``smp_mb__after_unlock_lock()`` provided by the
acquisition functions, this ``WARN_ON()`` could trigger, for example
...
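As a reminder of what these helpers do, the acquisition side is roughly
the following (a sketch based on the helpers in kernel/rcu/rcu.h; the
exact form may differ between kernel versions):

	/* Acquire an rcu_node's ->lock and upgrade the prior unlock+lock
	 * pair to a full memory barrier on architectures that need it. */
	#define raw_spin_lock_rcu_node(p)				\
	do {								\
		raw_spin_lock(&ACCESS_PRIVATE(p, lock));		\
		smp_mb__after_unlock_lock();				\
	} while (0)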
This diff is collapsed.
@@ -11,8 +11,8 @@ must be long enough that any readers accessing the item being deleted have

since dropped their references.  For example, an RCU-protected deletion
from a linked list would first remove the item from the list, wait for
a grace period to elapse, then free the element.  See the
-Documentation/RCU/listRCU.rst file for more information on using RCU with
-linked lists.
:ref:`Documentation/RCU/listRCU.rst <list_rcu_doc>` for more information on
using RCU with linked lists.
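For illustration, the deletion pattern just described looks roughly like
this in C (a minimal sketch, not part of this patch; the structure and
function names are hypothetical):

	struct foo {
		struct list_head list;
		int data;
	};

	/* Unlink @fp from an RCU-protected list and free it.  The caller
	 * holds the update-side lock protecting the list. */
	void foo_del(struct foo *fp)
	{
		list_del_rcu(&fp->list); /* readers may still see the item */
		synchronize_rcu();	 /* wait for pre-existing readers  */
		kfree(fp);		 /* no reader can still hold a ref */
	}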
Frequently Asked Questions
--------------------------

@@ -50,7 +50,7 @@ Frequently Asked Questions

- If I am running on a uniprocessor kernel, which can only do one
  thing at a time, why should I wait for a grace period?

-  See the Documentation/RCU/UP.rst file for more information.
  See :ref:`Documentation/RCU/UP.rst <up_doc>` for more information.

- How can I see where RCU is currently used in the Linux kernel?

@@ -68,18 +68,18 @@ Frequently Asked Questions

- Why the name "RCU"?

-  "RCU" stands for "read-copy update".  The file Documentation/RCU/listRCU.rst
-  has more information on where this name came from, search for
-  "read-copy update" to find it.
  "RCU" stands for "read-copy update".
  :ref:`Documentation/RCU/listRCU.rst <list_rcu_doc>` has more information on where
  this name came from, search for "read-copy update" to find it.

- I hear that RCU is patented?  What is with that?

  Yes, it is.  There are several known patents related to RCU,
-  search for the string "Patent" in RTFP.txt to find them.
  search for the string "Patent" in Documentation/RCU/RTFP.txt to find them.
  Of these, one was allowed to lapse by the assignee, and the
  others have been contributed to the Linux kernel under GPL.
  There are now also LGPL implementations of user-level RCU
-  available (http://liburcu.org/).
  available (https://liburcu.org/).

- I hear that RCU needs work in order to support realtime kernels?

@@ -88,5 +88,5 @@ Frequently Asked Questions

- Where can I find more information on RCU?

-  See the RTFP.txt file in this directory.
  See the Documentation/RCU/RTFP.txt file.

  Or point your browser at (http://www.rdrop.com/users/paulmck/RCU/).
@@ -124,9 +124,14 @@ using a dynamically allocated srcu_struct (hence "srcud-" rather than

debugging.  The final "T" entry contains the totals of the counters.

-USAGE
-The following script may be used to torture RCU:
USAGE ON SPECIFIC KERNEL BUILDS

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using rmmod.
For example, the following script may be used to torture RCU:

	#!/bin/sh

@@ -142,8 +147,136 @@ checked for such errors.  The "rmmod" command forces a "SUCCESS",

two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.

-However, the tools/testing/selftests/rcutorture/bin/kvm.sh script
-provides better automation, including automatic failure analysis.
-It assumes a qemu/kvm-enabled platform, and runs guest OSes out of initrd.
-See tools/testing/selftests/rcutorture/doc/initrd.txt for instructions
-on setting up such an initrd.

USAGE ON MAINLINE KERNELS

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters. In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.
Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc. By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd. After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.
On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh. For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels). The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches. This
can be useful when working out how many CPUs to specify in the --cpus
argument.
Not all changes require that all scenarios be run. For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows: "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently. To make this happen:
kvm.sh --cpus 448 --configs '5*CFLIST'
Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario:
kvm.sh --cpus 448 --configs '56*TREE04'
Or 28 concurrent instances of each of two eight-CPU scenarios:
kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'
Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M. Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.
Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, "--kconfig 'CONFIG_KASAN=y'".
Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters. For example, to test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning.  As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests:
kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
--bootargs 'rcutorture.fwd_progress=0'
Sometimes all that is needed is a full set of kernel builds. This is
what the --buildonly argument does.
Finally, the --trust-make argument allows each kernel build to reuse what
it can from the previous kernel build.
There are additional, more arcane arguments that are documented in the
source code of the kvm.sh script.
If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file.  The build products and console output of each run are kept in
tools/testing/selftests/rcutorture/res in timestamped directories. A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs. For example:
tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04"). If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.
The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.
The most frequently used files in each per-scenario-run directory are:
.config: This file contains the Kconfig options.
Make.out: This contains build output for a specific scenario.
console.log: This contains the console output for a specific scenario.
This file may be examined once the kernel has booted, but
it might not exist if the build failed.
vmlinux: This contains the kernel, which can be useful with tools like
objdump and gdb.
A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.
As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system:
SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
CPU count limited from 16 to 12
TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
CPU count limited from 16 to 12
TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
@@ -4005,6 +4005,15 @@

			Set threshold of queued RCU callbacks below which
			batch limiting is re-enabled.
rcutree.qovld= [KNL]
Set threshold of queued RCU callbacks beyond which
RCU's force-quiescent-state scan will aggressively
enlist help from cond_resched() and sched IPIs to
help CPUs more quickly reach quiescent states.
Set to less than zero to make this be set based
on rcutree.qhimark at boot time and to zero to
disable more aggressive help enlistment.
rcutree.rcu_idle_gp_delay= [KNL]
			Set wakeup interval for idle CPUs that have
			RCU callbacks (RCU_FAST_NO_HZ=y).

@@ -4220,6 +4229,12 @@

rcupdate.rcu_cpu_stall_suppress= [KNL]
			Suppress RCU CPU stall warning messages.
rcupdate.rcu_cpu_stall_suppress_at_boot= [KNL]
Suppress RCU CPU stall warning messages and
rcutorture writer stall warnings that occur
during early boot, that is, during the time
before the init task is spawned.
rcupdate.rcu_cpu_stall_timeout= [KNL]
			Set timeout for RCU CPU stall warning messages.

@@ -4892,6 +4907,10 @@

			topology updates sent by the hypervisor to this
			LPAR.
torture.disable_onoff_at_boot= [KNL]
Prevent the CPU-hotplug component of torturing
until after init has spawned.
tp720= [HW,PS2]

tpm_suspend_pcr=[HW,TPM]
...
@@ -185,7 +185,7 @@ As a further example, consider this sequence of events:

	=============== ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
-	P = &B		D = *Q;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the

@@ -569,7 +569,7 @@ following sequence of events:

	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
-	WRITE_ONCE(P, &B)
	WRITE_ONCE(P, &B);
			Q = READ_ONCE(P);
			D = *Q;

@@ -1721,7 +1721,7 @@ of optimizations:

     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
-    discard the value of all memory locations that it has currented
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.
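A hedged sketch of the difference (the variable and function names are
hypothetical, and the comments describe the compiler's obligations only):

	int x, y;	/* shared with other CPUs */

	void selective_vs_global(void)
	{
		int a, b;

		a = READ_ONCE(x); /* only x must be freshly loaded here  */
		b = y;		  /* y may still be cached in a register */
		barrier();	  /* compiler must forget ALL cached
				   * register copies, y included        */
		b = y;		  /* so this is reloaded from memory    */
	}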
@@ -1833,7 +1833,7 @@ Aside: In the case of data dependencies, the compiler would be expected

to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
-(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
...
@@ -2489,7 +2489,7 @@ static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cre

	rcu_read_lock();
	if (nfsi->cache_validity & NFS_INO_INVALID_ACCESS)
		goto out;
-	lh = rcu_dereference(nfsi->access_cache_entry_lru.prev);
	lh = rcu_dereference(list_tail_rcu(&nfsi->access_cache_entry_lru));
	cache = list_entry(lh, struct nfs_access_entry, lru);
	if (lh == &nfsi->access_cache_entry_lru ||
	    cred_fscmp(cred, cache->cred) != 0)
...
@@ -60,7 +60,7 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)

#define __list_check_rcu(dummy, cond, extra...)				\
	({								\
	check_arg_count_one(extra);					\
-	RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),		\
	RCU_LOCKDEP_WARN(!(cond) && !rcu_read_lock_any_held(),		\
			 "RCU-list traversed in non-reader section!");	\
	})
#else
...
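The added parentheses matter because cond is substituted textually into
the macro body.  A sketch of the hazard, using a hypothetical compound
condition passed by a caller:

	/* Hypothetical caller passing a compound lockdep condition: */
	list_for_each_entry_rcu(p, &head, node,
				lockdep_is_held(&a) || lockdep_is_held(&b));

	/* Without the parentheses, "!cond" expands to
	 *	!lockdep_is_held(&a) || lockdep_is_held(&b)
	 * so the || escapes the negation; "!(cond)" negates the whole
	 * caller-supplied condition, as intended. */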
@@ -83,6 +83,7 @@ void rcu_scheduler_starting(void);

static inline void rcu_scheduler_starting(void) { }
#endif /* #else #ifndef CONFIG_SRCU */
static inline void rcu_end_inkernel_boot(void) { }
static inline bool rcu_inkernel_boot_has_ended(void) { return true; }
static inline bool rcu_is_watching(void) { return true; }
static inline void rcu_momentary_dyntick_idle(void) { }
static inline void kfree_rcu_scheduler_running(void) { }
...
@@ -54,6 +54,7 @@ void exit_rcu(void);

void rcu_scheduler_starting(void);
extern int rcu_scheduler_active __read_mostly;
void rcu_end_inkernel_boot(void);
bool rcu_inkernel_boot_has_ended(void);
bool rcu_is_watching(void);
#ifndef CONFIG_PREEMPTION
void rcu_all_qs(void);
...
@@ -164,7 +164,7 @@ static inline void destroy_timer_on_stack(struct timer_list *timer) { }

 */
static inline int timer_pending(const struct timer_list * timer)
{
-	return timer->entry.pprev != NULL;
	return !hlist_unhashed_lockless(&timer->entry);
}

extern void add_timer_on(struct timer_list *timer, int cpu);
...
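The lockless helper that timer_pending() now uses is roughly the
following (a sketch of the include/linux/list.h helper; it differs from
hlist_unhashed() only in the READ_ONCE() annotation, which marks the
pprev access as intentionally racy for KCSAN):

	static inline int hlist_unhashed_lockless(const struct hlist_node *h)
	{
		return !READ_ONCE(h->pprev);
	}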
@@ -623,6 +623,34 @@ TRACE_EVENT_RCU(rcu_invoke_kfree_callback,

		  __entry->rcuname, __entry->rhp, __entry->offset)
);
/*
* Tracepoint for the invocation of a single RCU callback of the special
* kfree_bulk() form. The first argument is the RCU flavor, the second
* argument is a number of elements in array to free, the third is an
* address of the array holding nr_records entries.
*/
TRACE_EVENT_RCU(rcu_invoke_kfree_bulk_callback,
TP_PROTO(const char *rcuname, unsigned long nr_records, void **p),
TP_ARGS(rcuname, nr_records, p),
TP_STRUCT__entry(
__field(const char *, rcuname)
__field(unsigned long, nr_records)
__field(void **, p)
),
TP_fast_assign(
__entry->rcuname = rcuname;
__entry->nr_records = nr_records;
__entry->p = p;
),
TP_printk("%s bulk=0x%p nr_records=%lu",
__entry->rcuname, __entry->p, __entry->nr_records)
);
/*
 * Tracepoint for exiting rcu_do_batch after RCU callbacks have been
 * invoked.  The first argument is the name of the RCU flavor,

@@ -712,6 +740,7 @@ TRACE_EVENT_RCU(rcu_torture_read,

 *	"Begin": rcu_barrier() started.
 *	"EarlyExit": rcu_barrier() piggybacked, thus early exit.
 *	"Inc1": rcu_barrier() piggyback check counter incremented.
* "OfflineNoCBQ": rcu_barrier() found offline no-CBs CPU with callbacks.
* "OnlineQ": rcu_barrier() found online CPU with callbacks. * "OnlineQ": rcu_barrier() found online CPU with callbacks.
* "OnlineNQ": rcu_barrier() found online CPU, no callbacks. * "OnlineNQ": rcu_barrier() found online CPU, no callbacks.
* "IRQ": An rcu_barrier_callback() callback posted on remote CPU. * "IRQ": An rcu_barrier_callback() callback posted on remote CPU.
......
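As with any TRACE_EVENT, the rcu_invoke_kfree_bulk_callback definition
above generates a trace_rcu_invoke_kfree_bulk_callback() helper for the
call site, which would be invoked roughly as follows (a sketch; the
local variable names are hypothetical):

	/* Fire the tracepoint, then bulk-free the batched pointers. */
	trace_rcu_invoke_kfree_bulk_callback(rcu_state.name,
					     bhead->nr_records,
					     bhead->records);
	kfree_bulk(bhead->nr_records, bhead->records);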
@@ -618,7 +618,7 @@ static struct lock_torture_ops percpu_rwsem_lock_ops = {

static int lock_torture_writer(void *arg)
{
	struct lock_stress_stats *lwsp = arg;
-	static DEFINE_TORTURE_RANDOM(rand);
	DEFINE_TORTURE_RANDOM(rand);

	VERBOSE_TOROUT_STRING("lock_torture_writer task started");
	set_user_nice(current, MAX_NICE);

@@ -655,7 +655,7 @@ static int lock_torture_writer(void *arg)

static int lock_torture_reader(void *arg)
{
	struct lock_stress_stats *lrsp = arg;
-	static DEFINE_TORTURE_RANDOM(rand);
	DEFINE_TORTURE_RANDOM(rand);

	VERBOSE_TOROUT_STRING("lock_torture_reader task started");
	set_user_nice(current, MAX_NICE);

@@ -696,15 +696,16 @@ static void __torture_print_stats(char *page,
		if (statp[i].n_lock_fail)
			fail = true;
		sum += statp[i].n_lock_acquired;
-		if (max < statp[i].n_lock_fail)
-			max = statp[i].n_lock_fail;
-		if (min > statp[i].n_lock_fail)
-			min = statp[i].n_lock_fail;
		if (max < statp[i].n_lock_acquired)
			max = statp[i].n_lock_acquired;
		if (min > statp[i].n_lock_acquired)
			min = statp[i].n_lock_acquired;
	}
	page += sprintf(page,
			"%s:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
			write ? "Writes" : "Reads ",
-			sum, max, min, max / 2 > min ? "???" : "",
			sum, max, min,
			!onoff_interval && max / 2 > min ? "???" : "",
			fail, fail ? "!!!" : "");
	if (fail)
		atomic_inc(&cxt.n_lock_torture_errors);
...
@@ -57,7 +57,7 @@ rt_mutex_set_owner(struct rt_mutex *lock, struct task_struct *owner)

	if (rt_mutex_has_waiters(lock))
		val |= RT_MUTEX_HAS_WAITERS;
-	lock->owner = (struct task_struct *)val;
	WRITE_ONCE(lock->owner, (struct task_struct *)val);
}

static inline void clear_rt_mutex_waiters(struct rt_mutex *lock)
...
@@ -3,6 +3,10 @@

# and is generally not a function of system call inputs.
KCOV_INSTRUMENT := n
ifeq ($(CONFIG_KCSAN),y)
KBUILD_CFLAGS += -g -fno-omit-frame-pointer
endif
obj-y += update.o sync.o
obj-$(CONFIG_TREE_SRCU) += srcutree.o
obj-$(CONFIG_TINY_SRCU) += srcutiny.o
...
@@ -198,6 +198,13 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)

}
#endif	/* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
extern int rcu_cpu_stall_suppress_at_boot;
static inline bool rcu_stall_is_suppressed_at_boot(void)
{
return rcu_cpu_stall_suppress_at_boot && !rcu_inkernel_boot_has_ended();
}
#ifdef CONFIG_RCU_STALL_COMMON
extern int rcu_cpu_stall_ftrace_dump;

@@ -205,6 +212,11 @@ extern int rcu_cpu_stall_suppress;

extern int rcu_cpu_stall_timeout;
int rcu_jiffies_till_stall_check(void);
static inline bool rcu_stall_is_suppressed(void)
{
return rcu_stall_is_suppressed_at_boot() || rcu_cpu_stall_suppress;
}
#define rcu_ftrace_dump_stall_suppress() \
do { \
	if (!rcu_cpu_stall_suppress) \

@@ -218,6 +230,11 @@ do { \

} while (0)

#else /* #endif #ifdef CONFIG_RCU_STALL_COMMON */
static inline bool rcu_stall_is_suppressed(void)
{
return rcu_stall_is_suppressed_at_boot();
}
#define rcu_ftrace_dump_stall_suppress()
#define rcu_ftrace_dump_stall_unsuppress()
#endif /* #ifdef CONFIG_RCU_STALL_COMMON */

@@ -325,7 +342,8 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)

 * Iterate over all possible CPUs in a leaf RCU node.
 */
#define for_each_leaf_node_possible_cpu(rnp, cpu) \
-	for ((cpu) = cpumask_next((rnp)->grplo - 1, cpu_possible_mask); \
	for (WARN_ON_ONCE(!rcu_is_leaf_node(rnp)), \
	     (cpu) = cpumask_next((rnp)->grplo - 1, cpu_possible_mask); \
	     (cpu) <= rnp->grphi; \
	     (cpu) = cpumask_next((cpu), cpu_possible_mask))
@@ -335,7 +353,8 @@ static inline void rcu_init_levelspread(int *levelspread, const int *levelcnt)

#define rcu_find_next_bit(rnp, cpu, mask) \
	((rnp)->grplo + find_next_bit(&(mask), BITS_PER_LONG, (cpu)))
#define for_each_leaf_node_cpu_mask(rnp, cpu, mask) \
-	for ((cpu) = rcu_find_next_bit((rnp), 0, (mask)); \
	for (WARN_ON_ONCE(!rcu_is_leaf_node(rnp)), \
	     (cpu) = rcu_find_next_bit((rnp), 0, (mask)); \
	     (cpu) <= rnp->grphi; \
	     (cpu) = rcu_find_next_bit((rnp), (cpu) + 1 - (rnp->grplo), (mask)))
...
@@ -182,7 +182,7 @@ void rcu_segcblist_offload(struct rcu_segcblist *rsclp)

bool rcu_segcblist_ready_cbs(struct rcu_segcblist *rsclp)
{
	return rcu_segcblist_is_enabled(rsclp) &&
-	       &rsclp->head != rsclp->tails[RCU_DONE_TAIL];
	       &rsclp->head != READ_ONCE(rsclp->tails[RCU_DONE_TAIL]);
}

/*

@@ -381,8 +381,6 @@ void rcu_segcblist_insert_pend_cbs(struct rcu_segcblist *rsclp,

		return; /* Nothing to do. */
	WRITE_ONCE(*rsclp->tails[RCU_NEXT_TAIL], rclp->head);
	WRITE_ONCE(rsclp->tails[RCU_NEXT_TAIL], rclp->tail);
-	rclp->head = NULL;
-	rclp->tail = &rclp->head;
}

/*
...
@@ -12,6 +12,7 @@

#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/err.h>

@@ -611,6 +612,7 @@ kfree_perf_thread(void *arg)

	long me = (long)arg;
	struct kfree_obj *alloc_ptr;
	u64 start_time, end_time;
long long mem_begin, mem_during = 0;
	VERBOSE_PERFOUT_STRING("kfree_perf_thread task started");
	set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids));

@@ -626,6 +628,12 @@ kfree_perf_thread(void *arg)

	}

	do {
if (!mem_during) {
mem_during = mem_begin = si_mem_available();
} else if (loop % (kfree_loops / 4) == 0) {
mem_during = (mem_during + si_mem_available()) / 2;
}
		for (i = 0; i < kfree_alloc_num; i++) {
			alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
			if (!alloc_ptr)

@@ -645,9 +653,11 @@ kfree_perf_thread(void *arg)

	else
		b_rcu_gp_test_finished = cur_ops->get_gp_seq();

-	pr_alert("Total time taken by all kfree'ers: %llu ns, loops: %d, batches: %ld\n",
-		 (unsigned long long)(end_time - start_time), kfree_loops,
-		 rcuperf_seq_diff(b_rcu_gp_test_finished, b_rcu_gp_test_started));
	pr_alert("Total time taken by all kfree'ers: %llu ns, loops: %d, batches: %ld, memory footprint: %lldMB\n",
		 (unsigned long long)(end_time - start_time), kfree_loops,
		 rcuperf_seq_diff(b_rcu_gp_test_finished, b_rcu_gp_test_started),
		 (mem_begin - mem_during) >> (20 - PAGE_SHIFT));

	if (shutdown) {
		smp_mb(); /* Assign before wake. */
		wake_up(&shutdown_wq);
...
@@ -339,7 +339,7 @@ rcu_read_delay(struct torture_random_state *rrsp, struct rt_read_seg *rtrsp)

	 * period, and we want a long delay occasionally to trigger
	 * force_quiescent_state. */

-	if (!rcu_fwd_cb_nodelay &&
	if (!READ_ONCE(rcu_fwd_cb_nodelay) &&
	    !(torture_random(rrsp) % (nrealreaders * 2000 * longdelay_ms))) {
		started = cur_ops->get_gp_seq();
		ts = rcu_trace_clock_local();

@@ -375,11 +375,12 @@ rcu_torture_pipe_update_one(struct rcu_torture *rp)

{
	int i;

-	i = rp->rtort_pipe_count;
	i = READ_ONCE(rp->rtort_pipe_count);
	if (i > RCU_TORTURE_PIPE_LEN)
		i = RCU_TORTURE_PIPE_LEN;
	atomic_inc(&rcu_torture_wcount[i]);
-	if (++rp->rtort_pipe_count >= RCU_TORTURE_PIPE_LEN) {
	WRITE_ONCE(rp->rtort_pipe_count, i + 1);
	if (rp->rtort_pipe_count >= RCU_TORTURE_PIPE_LEN) {
		rp->rtort_mbtest = 0;
		return true;
	}

@@ -1015,7 +1016,8 @@ rcu_torture_writer(void *arg)

				if (i > RCU_TORTURE_PIPE_LEN)
					i = RCU_TORTURE_PIPE_LEN;
				atomic_inc(&rcu_torture_wcount[i]);
-				old_rp->rtort_pipe_count++;
				WRITE_ONCE(old_rp->rtort_pipe_count,
					   old_rp->rtort_pipe_count + 1);
				switch (synctype[torture_random(&rand) % nsynctypes]) {
				case RTWS_DEF_FREE:
					rcu_torture_writer_state = RTWS_DEF_FREE;

@@ -1067,7 +1069,8 @@ rcu_torture_writer(void *arg)

		if (stutter_wait("rcu_torture_writer") &&
		    !READ_ONCE(rcu_fwd_cb_nodelay) &&
		    !cur_ops->slow_gps &&
-		    !torture_must_stop())
		    !torture_must_stop() &&
		    rcu_inkernel_boot_has_ended())
			for (i = 0; i < ARRAY_SIZE(rcu_tortures); i++)
				if (list_empty(&rcu_tortures[i].rtort_free) &&
				    rcu_access_pointer(rcu_torture_current) !=

@@ -1290,7 +1293,7 @@ static bool rcu_torture_one_read(struct torture_random_state *trsp)

		atomic_inc(&n_rcu_torture_mberror);
	rtrsp = rcutorture_loop_extend(&readstate, trsp, rtrsp);
	preempt_disable();
-	pipe_count = p->rtort_pipe_count;
	pipe_count = READ_ONCE(p->rtort_pipe_count);
	if (pipe_count > RCU_TORTURE_PIPE_LEN) {
		/* Should not happen, but... */
		pipe_count = RCU_TORTURE_PIPE_LEN;

@@ -1404,14 +1407,15 @@ rcu_torture_stats_print(void)

	int i;
	long pipesummary[RCU_TORTURE_PIPE_LEN + 1] = { 0 };
	long batchsummary[RCU_TORTURE_PIPE_LEN + 1] = { 0 };
struct rcu_torture *rtcp;
	static unsigned long rtcv_snap = ULONG_MAX;
	static bool splatted;
	struct task_struct *wtp;

	for_each_possible_cpu(cpu) {
		for (i = 0; i < RCU_TORTURE_PIPE_LEN + 1; i++) {
-			pipesummary[i] += per_cpu(rcu_torture_count, cpu)[i];
-			batchsummary[i] += per_cpu(rcu_torture_batch, cpu)[i];
			pipesummary[i] += READ_ONCE(per_cpu(rcu_torture_count, cpu)[i]);
			batchsummary[i] += READ_ONCE(per_cpu(rcu_torture_batch, cpu)[i]);
		}
	}
	for (i = RCU_TORTURE_PIPE_LEN - 1; i >= 0; i--) {

@@ -1420,9 +1424,10 @@ rcu_torture_stats_print(void)

	}
	pr_alert("%s%s ", torture_type, TORTURE_FLAG);
rtcp = rcu_access_pointer(rcu_torture_current);
pr_cont("rtc: %p %s: %lu tfle: %d rta: %d rtaf: %d rtf: %d ", pr_cont("rtc: %p %s: %lu tfle: %d rta: %d rtaf: %d rtf: %d ",
rcu_torture_current, rtcp,
rcu_torture_current ? "ver" : "VER", rtcp && !rcu_stall_is_suppressed_at_boot() ? "ver" : "VER",
rcu_torture_current_version, rcu_torture_current_version,
list_empty(&rcu_torture_freelist), list_empty(&rcu_torture_freelist),
atomic_read(&n_rcu_torture_alloc), atomic_read(&n_rcu_torture_alloc),
...@@ -1478,7 +1483,8 @@ rcu_torture_stats_print(void) ...@@ -1478,7 +1483,8 @@ rcu_torture_stats_print(void)
if (cur_ops->stats) if (cur_ops->stats)
cur_ops->stats(); cur_ops->stats();
if (rtcv_snap == rcu_torture_current_version && if (rtcv_snap == rcu_torture_current_version &&
rcu_torture_current != NULL) { rcu_access_pointer(rcu_torture_current) &&
!rcu_stall_is_suppressed()) {
int __maybe_unused flags = 0; int __maybe_unused flags = 0;
unsigned long __maybe_unused gp_seq = 0; unsigned long __maybe_unused gp_seq = 0;
...@@ -1993,7 +1999,10 @@ static int rcu_torture_fwd_prog(void *args) ...@@ -1993,7 +1999,10 @@ static int rcu_torture_fwd_prog(void *args)
		schedule_timeout_interruptible(fwd_progress_holdoff * HZ);
		WRITE_ONCE(rcu_fwd_emergency_stop, false);
		register_oom_notifier(&rcutorture_oom_nb);
if (!IS_ENABLED(CONFIG_TINY_RCU) ||
rcu_inkernel_boot_has_ended())
			rcu_torture_fwd_prog_nr(rfp, &tested, &tested_tries);
if (rcu_inkernel_boot_has_ended())
			rcu_torture_fwd_prog_cr(rfp);
		unregister_oom_notifier(&rcutorture_oom_nb);

@@ -2044,6 +2053,14 @@ static void rcu_torture_barrier_cbf(struct rcu_head *rcu)

	atomic_inc(&barrier_cbs_invoked);
}
/* IPI handler to get callback posted on desired CPU, if online. */
static void rcu_torture_barrier1cb(void *rcu_void)
{
struct rcu_head *rhp = rcu_void;
cur_ops->call(rhp, rcu_torture_barrier_cbf);
}
/* kthread function to register callbacks used to test RCU barriers. */
static int rcu_torture_barrier_cbs(void *arg)
{

@@ -2067,9 +2084,11 @@ static int rcu_torture_barrier_cbs(void *arg)

		 * The above smp_load_acquire() ensures barrier_phase load
		 * is ordered before the following ->call().
		 */
-		local_irq_disable(); /* Just to test no-irq call_rcu(). */
		if (smp_call_function_single(myid, rcu_torture_barrier1cb,
					     &rcu, 1)) {
			// IPI failed, so use direct call from current CPU.
			cur_ops->call(&rcu, rcu_torture_barrier_cbf);
-		local_irq_enable();
		}
		if (atomic_dec_and_test(&barrier_cbs_count))
			wake_up(&barrier_wq);
	} while (!torture_must_stop());

@@ -2105,7 +2124,21 @@ static int rcu_torture_barrier(void *arg)

			pr_err("barrier_cbs_invoked = %d, n_barrier_cbs = %d\n",
			       atomic_read(&barrier_cbs_invoked),
			       n_barrier_cbs);
-			WARN_ON_ONCE(1);
			WARN_ON(1);
// Wait manually for the remaining callbacks
i = 0;
do {
if (WARN_ON(i++ > HZ))
i = INT_MIN;
schedule_timeout_interruptible(1);
cur_ops->cb_barrier();
} while (atomic_read(&barrier_cbs_invoked) !=
n_barrier_cbs &&
!torture_must_stop());
smp_mb(); // Can't trust ordering if broken.
if (!torture_must_stop())
pr_err("Recovered: barrier_cbs_invoked = %d\n",
atomic_read(&barrier_cbs_invoked));
		} else {
			n_barrier_successes++;
		}
...
@@ -5,7 +5,7 @@

 * Copyright (C) IBM Corporation, 2006
 * Copyright (C) Fujitsu, 2012
 *
- * Author: Paul McKenney <paulmck@linux.ibm.com>
 * Authors: Paul McKenney <paulmck@linux.ibm.com>
 *	    Lai Jiangshan <laijs@cn.fujitsu.com>
 *
 * For detailed explanation of Read-Copy Update mechanism see -

@@ -450,7 +450,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
	spin_unlock_rcu_node(sdp);  /* Interrupts remain disabled. */
	smp_mb(); /* Order prior store to ->srcu_gp_seq_needed vs. GP start. */
	rcu_seq_start(&ssp->srcu_gp_seq);
-	state = rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq));
	state = rcu_seq_state(ssp->srcu_gp_seq);
	WARN_ON_ONCE(state != SRCU_STATE_SCAN1);
}

@@ -534,7 +534,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
	rcu_seq_end(&ssp->srcu_gp_seq);
	gpseq = rcu_seq_current(&ssp->srcu_gp_seq);
	if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq))
-		ssp->srcu_gp_seq_needed_exp = gpseq;
		WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, gpseq);
	spin_unlock_irq_rcu_node(ssp);
	mutex_unlock(&ssp->srcu_gp_mutex);
	/* A new grace period can start at this point.  But only one. */

@@ -550,7 +550,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
			snp->srcu_have_cbs[idx] = gpseq;
			rcu_seq_set_state(&snp->srcu_have_cbs[idx], 1);
			if (ULONG_CMP_LT(snp->srcu_gp_seq_needed_exp, gpseq))
-				snp->srcu_gp_seq_needed_exp = gpseq;
				WRITE_ONCE(snp->srcu_gp_seq_needed_exp, gpseq);
			mask = snp->srcu_data_have_cbs[idx];
			snp->srcu_data_have_cbs[idx] = 0;
			spin_unlock_irq_rcu_node(snp);

@@ -614,7 +614,7 @@ static void srcu_funnel_exp_start(struct srcu_struct *ssp, struct srcu_node *snp
	}
	spin_lock_irqsave_rcu_node(ssp, flags);
	if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, s))
-		ssp->srcu_gp_seq_needed_exp = s;
		WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, s);
	spin_unlock_irqrestore_rcu_node(ssp, flags);
}

@@ -660,7 +660,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
		if (snp == sdp->mynode)
			snp->srcu_data_have_cbs[idx] |= sdp->grpmask;
		if (!do_norm && ULONG_CMP_LT(snp->srcu_gp_seq_needed_exp, s))
-			snp->srcu_gp_seq_needed_exp = s;
			WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s);
		spin_unlock_irqrestore_rcu_node(snp, flags);
	}

@@ -674,7 +674,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
		smp_store_release(&ssp->srcu_gp_seq_needed, s); /*^^^*/
	}
	if (!do_norm && ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, s))
-		ssp->srcu_gp_seq_needed_exp = s;
		WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, s);

	/* If grace period not already done and none in progress, start it. */
	if (!rcu_seq_done(&ssp->srcu_gp_seq, s) &&

@@ -1079,7 +1079,7 @@ EXPORT_SYMBOL_GPL(srcu_barrier);
 */
unsigned long srcu_batches_completed(struct srcu_struct *ssp)
{
-	return ssp->srcu_idx;
	return READ_ONCE(ssp->srcu_idx);
}
EXPORT_SYMBOL_GPL(srcu_batches_completed);

@@ -1130,7 +1130,9 @@ static void srcu_advance_state(struct srcu_struct *ssp)
			return; /* readers present, retry later. */
		}
		srcu_flip(ssp);
spin_lock_irq_rcu_node(ssp);
		rcu_seq_set_state(&ssp->srcu_gp_seq, SRCU_STATE_SCAN2);
spin_unlock_irq_rcu_node(ssp);
	}

	if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN2) {
...
This diff is collapsed.
@@ -68,6 +68,8 @@ struct rcu_node {

				/*  Online CPUs for next expedited GP. */
				/*  Any CPU that has ever been online will */
				/*  have its bit set. */
unsigned long cbovldmask;
/* CPUs experiencing callback overload. */
	unsigned long ffmask;		/* Fully functional CPUs. */
	unsigned long grpmask;		/* Mask to apply to parent qsmask. */
				/*  Only one bit will be set in this mask. */

@@ -321,6 +323,8 @@ struct rcu_state {

	atomic_t expedited_need_qs;	/* # CPUs left to check in. */
	struct swait_queue_head expedited_wq;	/* Wait for check-ins. */
	int ncpus_snap;			/* # CPUs seen last time. */
u8 cbovld; /* Callback overload now? */
u8 cbovldnext; /* ^ ^ next time? */
	unsigned long jiffies_force_qs;	/* Time at which to invoke */
					/*  force_quiescent_state(). */
...
@@ -314,7 +314,7 @@ static bool exp_funnel_lock(unsigned long s)

			   sync_exp_work_done(s));
		return true;
	}
-	rnp->exp_seq_rq = s; /* Followers can wait on us. */
	WRITE_ONCE(rnp->exp_seq_rq, s); /* Followers can wait on us. */
	spin_unlock(&rnp->exp_lock);
	trace_rcu_exp_funnel_lock(rcu_state.name, rnp->level,
				  rnp->grplo, rnp->grphi, TPS("nxtlvl"));

@@ -485,6 +485,7 @@ static bool synchronize_rcu_expedited_wait_once(long tlimit)
static void synchronize_rcu_expedited_wait(void)
{
	int cpu;
unsigned long j;
	unsigned long jiffies_stall;
	unsigned long jiffies_start;
	unsigned long mask;

@@ -496,7 +497,7 @@ static void synchronize_rcu_expedited_wait(void)

	trace_rcu_exp_grace_period(rcu_state.name, rcu_exp_gp_seq_endval(), TPS("startwait"));
	jiffies_stall = rcu_jiffies_till_stall_check();
	jiffies_start = jiffies;
-	if (IS_ENABLED(CONFIG_NO_HZ_FULL)) {
	if (tick_nohz_full_enabled() && rcu_inkernel_boot_has_ended()) {
		if (synchronize_rcu_expedited_wait_once(1))
			return;
		rcu_for_each_leaf_node(rnp) {

@@ -508,12 +509,16 @@ static void synchronize_rcu_expedited_wait(void)

				tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
			}
		}
j = READ_ONCE(jiffies_till_first_fqs);
if (synchronize_rcu_expedited_wait_once(j + HZ))
return;
WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_RT));
	}
	for (;;) {
		if (synchronize_rcu_expedited_wait_once(jiffies_stall))
			return;
-		if (rcu_cpu_stall_suppress)
		if (rcu_stall_is_suppressed())
			continue;
		panic_on_rcu_stall();
		pr_err("INFO: %s detected expedited stalls on CPUs/tasks: {",

@@ -589,7 +594,7 @@ static void rcu_exp_wait_wake(unsigned long s)
			spin_lock(&rnp->exp_lock);
			/* Recheck, avoid hang in case someone just arrived. */
			if (ULONG_CMP_LT(rnp->exp_seq_rq, s))
-				rnp->exp_seq_rq = s;
				WRITE_ONCE(rnp->exp_seq_rq, s);
			spin_unlock(&rnp->exp_lock);
		}
		smp_mb(); /* All above changes before wakeup. */
...
@@ -56,6 +56,8 @@ static void __init rcu_bootup_announce_oddness(void)

		pr_info("\tBoot-time adjustment of callback high-water mark to %ld.\n", qhimark);
	if (qlowmark != DEFAULT_RCU_QLOMARK)
		pr_info("\tBoot-time adjustment of callback low-water mark to %ld.\n", qlowmark);
if (qovld != DEFAULT_RCU_QOVLD)
pr_info("\tBoot-time adjustment of callback overload level to %ld.\n", qovld);
	if (jiffies_till_first_fqs != ULONG_MAX)
		pr_info("\tBoot-time adjustment of first FQS scan delay to %ld jiffies.\n", jiffies_till_first_fqs);
	if (jiffies_till_next_fqs != ULONG_MAX)

@@ -753,7 +755,7 @@ dump_blkd_tasks(struct rcu_node *rnp, int ncheck)

	raw_lockdep_assert_held_rcu_node(rnp);
	pr_info("%s: grp: %d-%d level: %d ->gp_seq %ld ->completedqs %ld\n",
		__func__, rnp->grplo, rnp->grphi, rnp->level,
-		(long)rnp->gp_seq, (long)rnp->completedqs);
		(long)READ_ONCE(rnp->gp_seq), (long)rnp->completedqs);
	for (rnp1 = rnp; rnp1; rnp1 = rnp1->parent)
		pr_info("%s: %d:%d ->qsmask %#lx ->qsmaskinit %#lx ->qsmaskinitnext %#lx\n",
			__func__, rnp1->grplo, rnp1->grphi, rnp1->qsmask, rnp1->qsmaskinit, rnp1->qsmaskinitnext);

@@ -1032,18 +1034,18 @@ static int rcu_boost_kthread(void *arg)
trace_rcu_utilization(TPS("Start boost kthread@init")); trace_rcu_utilization(TPS("Start boost kthread@init"));
for (;;) { for (;;) {
rnp->boost_kthread_status = RCU_KTHREAD_WAITING; WRITE_ONCE(rnp->boost_kthread_status, RCU_KTHREAD_WAITING);
trace_rcu_utilization(TPS("End boost kthread@rcu_wait")); trace_rcu_utilization(TPS("End boost kthread@rcu_wait"));
rcu_wait(rnp->boost_tasks || rnp->exp_tasks); rcu_wait(rnp->boost_tasks || rnp->exp_tasks);
trace_rcu_utilization(TPS("Start boost kthread@rcu_wait")); trace_rcu_utilization(TPS("Start boost kthread@rcu_wait"));
rnp->boost_kthread_status = RCU_KTHREAD_RUNNING; WRITE_ONCE(rnp->boost_kthread_status, RCU_KTHREAD_RUNNING);
more2boost = rcu_boost(rnp); more2boost = rcu_boost(rnp);
if (more2boost) if (more2boost)
spincnt++; spincnt++;
else else
spincnt = 0; spincnt = 0;
if (spincnt > 10) { if (spincnt > 10) {
rnp->boost_kthread_status = RCU_KTHREAD_YIELDING; WRITE_ONCE(rnp->boost_kthread_status, RCU_KTHREAD_YIELDING);
trace_rcu_utilization(TPS("End boost kthread@rcu_yield")); trace_rcu_utilization(TPS("End boost kthread@rcu_yield"));
schedule_timeout_interruptible(2); schedule_timeout_interruptible(2);
trace_rcu_utilization(TPS("Start boost kthread@rcu_yield")); trace_rcu_utilization(TPS("Start boost kthread@rcu_yield"));
...@@ -1077,12 +1079,12 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags) ...@@ -1077,12 +1079,12 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
(rnp->gp_tasks != NULL && (rnp->gp_tasks != NULL &&
rnp->boost_tasks == NULL && rnp->boost_tasks == NULL &&
rnp->qsmask == 0 && rnp->qsmask == 0 &&
ULONG_CMP_GE(jiffies, rnp->boost_time))) { (ULONG_CMP_GE(jiffies, rnp->boost_time) || rcu_state.cbovld))) {
if (rnp->exp_tasks == NULL) if (rnp->exp_tasks == NULL)
rnp->boost_tasks = rnp->gp_tasks; rnp->boost_tasks = rnp->gp_tasks;
raw_spin_unlock_irqrestore_rcu_node(rnp, flags); raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
rcu_wake_cond(rnp->boost_kthread_task, rcu_wake_cond(rnp->boost_kthread_task,
rnp->boost_kthread_status); READ_ONCE(rnp->boost_kthread_status));
} else { } else {
raw_spin_unlock_irqrestore_rcu_node(rnp, flags); raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
} }
...@@ -1486,6 +1488,7 @@ module_param(nocb_nobypass_lim_per_jiffy, int, 0); ...@@ -1486,6 +1488,7 @@ module_param(nocb_nobypass_lim_per_jiffy, int, 0);
* flag the contention. * flag the contention.
*/ */
static void rcu_nocb_bypass_lock(struct rcu_data *rdp) static void rcu_nocb_bypass_lock(struct rcu_data *rdp)
__acquires(&rdp->nocb_bypass_lock)
{ {
lockdep_assert_irqs_disabled(); lockdep_assert_irqs_disabled();
if (raw_spin_trylock(&rdp->nocb_bypass_lock)) if (raw_spin_trylock(&rdp->nocb_bypass_lock))
...@@ -1529,6 +1532,7 @@ static bool rcu_nocb_bypass_trylock(struct rcu_data *rdp) ...@@ -1529,6 +1532,7 @@ static bool rcu_nocb_bypass_trylock(struct rcu_data *rdp)
* Release the specified rcu_data structure's ->nocb_bypass_lock. * Release the specified rcu_data structure's ->nocb_bypass_lock.
*/ */
static void rcu_nocb_bypass_unlock(struct rcu_data *rdp) static void rcu_nocb_bypass_unlock(struct rcu_data *rdp)
__releases(&rdp->nocb_bypass_lock)
{ {
lockdep_assert_irqs_disabled(); lockdep_assert_irqs_disabled();
raw_spin_unlock(&rdp->nocb_bypass_lock); raw_spin_unlock(&rdp->nocb_bypass_lock);
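
The __acquires()/__releases() lines added to the two functions above are sparse lock-context annotations: they document that a function acquires or releases the named lock without balancing it locally, so sparse (run via make C=1) does not flag the apparent imbalance. A sketch of the idiom using pthreads, assuming the usual pattern of defining the markers as no-ops for ordinary compilers (the fallback definitions and function names here are illustrative)::

  #include <pthread.h>

  /* Under sparse (which defines __CHECKER__) these expand to
   * context-tracking attributes; otherwise they are no-ops.
   * Illustrative fallbacks, not a quote of the kernel headers. */
  #ifdef __CHECKER__
  #define __acquires(x)   __attribute__((context(x, 0, 1)))
  #define __releases(x)   __attribute__((context(x, 1, 0)))
  #else
  #define __acquires(x)
  #define __releases(x)
  #endif

  static pthread_mutex_t bypass_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Acquires the lock but does not release it: the annotation tells
   * the checker that the imbalance is intentional. */
  static void bypass_lock_acquire(void) __acquires(&bypass_lock)
  {
          pthread_mutex_lock(&bypass_lock);
  }

  /* Matching release half of the pair. */
  static void bypass_lock_release(void) __releases(&bypass_lock)
  {
          pthread_mutex_unlock(&bypass_lock);
  }

  int main(void)
  {
          bypass_lock_acquire();
          bypass_lock_release();
          return 0;
  }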
@@ -1577,8 +1581,7 @@ static void rcu_nocb_unlock_irqrestore(struct rcu_data *rdp,
 static void rcu_lockdep_assert_cblist_protected(struct rcu_data *rdp)
 {
 	lockdep_assert_irqs_disabled();
-	if (rcu_segcblist_is_offloaded(&rdp->cblist) &&
-	    cpu_online(rdp->cpu))
+	if (rcu_segcblist_is_offloaded(&rdp->cblist))
 		lockdep_assert_held(&rdp->nocb_lock);
 }
@@ -1930,6 +1933,7 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
 	unsigned long wait_gp_seq = 0; // Suppress "use uninitialized" warning.
+	bool wasempty = false;

 	/*
 	 * Each pass through the following loop checks for CBs and for the
@@ -1969,10 +1973,13 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		    rcu_seq_done(&rnp->gp_seq, cur_gp_seq))) {
 			raw_spin_lock_rcu_node(rnp); /* irqs disabled. */
 			needwake_gp = rcu_advance_cbs(rnp, rdp);
+			wasempty = rcu_segcblist_restempty(&rdp->cblist,
+							   RCU_NEXT_READY_TAIL);
 			raw_spin_unlock_rcu_node(rnp); /* irqs disabled. */
 		}
 		// Need to wait on some grace period?
-		WARN_ON_ONCE(!rcu_segcblist_restempty(&rdp->cblist,
-						      RCU_NEXT_READY_TAIL));
+		WARN_ON_ONCE(wasempty &&
+			     !rcu_segcblist_restempty(&rdp->cblist,
+						      RCU_NEXT_READY_TAIL));
 		if (rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq)) {
 			if (!needwait_gp ||
...
@@ -102,7 +102,7 @@ static void record_gp_stall_check_time(void)
 	unsigned long j = jiffies;
 	unsigned long j1;

-	rcu_state.gp_start = j;
+	WRITE_ONCE(rcu_state.gp_start, j);
 	j1 = rcu_jiffies_till_stall_check();
 	/* Record ->gp_start before ->jiffies_stall. */
 	smp_store_release(&rcu_state.jiffies_stall, j + j1); /* ^^^ */
@@ -383,7 +383,7 @@ static void print_other_cpu_stall(unsigned long gp_seq)
 	/* Kick and suppress, if so configured. */
 	rcu_stall_kick_kthreads();
-	if (rcu_cpu_stall_suppress)
+	if (rcu_stall_is_suppressed())
 		return;

 	/*
@@ -452,7 +452,7 @@ static void print_cpu_stall(void)
 	/* Kick and suppress, if so configured. */
 	rcu_stall_kick_kthreads();
-	if (rcu_cpu_stall_suppress)
+	if (rcu_stall_is_suppressed())
 		return;

 	/*
@@ -504,7 +504,7 @@ static void check_cpu_stall(struct rcu_data *rdp)
 	unsigned long js;
 	struct rcu_node *rnp;

-	if ((rcu_cpu_stall_suppress && !rcu_kick_kthreads) ||
+	if ((rcu_stall_is_suppressed() && !rcu_kick_kthreads) ||
 	    !rcu_gp_in_progress())
 		return;
 	rcu_stall_kick_kthreads();
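
All three stall-warning paths above now go through rcu_stall_is_suppressed() instead of testing rcu_cpu_stall_suppress directly; the helper additionally honors the rcu_cpu_stall_suppress_at_boot module parameter introduced later in this diff. A hedged sketch of the helper's likely shape, with extern declarations added so it stands alone (this is a plausible reconstruction, not a quote of the kernel source)::

  #include <stdbool.h>

  /* Knobs and helpers that appear elsewhere in this series, declared
   * here so the sketch compiles on its own. */
  extern int rcu_cpu_stall_suppress;
  extern int rcu_cpu_stall_suppress_at_boot;
  extern bool rcu_inkernel_boot_has_ended(void);

  /* Suppress if the run-time knob is set, or if the boot-time knob is
   * set and the in-kernel boot sequence has not yet ended. */
  static inline bool rcu_stall_is_suppressed(void)
  {
          return rcu_cpu_stall_suppress ||
                 (rcu_cpu_stall_suppress_at_boot &&
                  !rcu_inkernel_boot_has_ended());
  }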
@@ -578,6 +578,7 @@ void show_rcu_gp_kthreads(void)
 	unsigned long jw;
 	struct rcu_data *rdp;
 	struct rcu_node *rnp;
+	struct task_struct *t = READ_ONCE(rcu_state.gp_kthread);

 	j = jiffies;
 	ja = j - READ_ONCE(rcu_state.gp_activity);
@@ -585,28 +586,28 @@ void show_rcu_gp_kthreads(void)
 	jw = j - READ_ONCE(rcu_state.gp_wake_time);
 	pr_info("%s: wait state: %s(%d) ->state: %#lx delta ->gp_activity %lu ->gp_req_activity %lu ->gp_wake_time %lu ->gp_wake_seq %ld ->gp_seq %ld ->gp_seq_needed %ld ->gp_flags %#x\n",
 		rcu_state.name, gp_state_getname(rcu_state.gp_state),
-		rcu_state.gp_state,
-		rcu_state.gp_kthread ? rcu_state.gp_kthread->state : 0x1ffffL,
+		rcu_state.gp_state, t ? t->state : 0x1ffffL,
 		ja, jr, jw, (long)READ_ONCE(rcu_state.gp_wake_seq),
 		(long)READ_ONCE(rcu_state.gp_seq),
 		(long)READ_ONCE(rcu_get_root()->gp_seq_needed),
 		READ_ONCE(rcu_state.gp_flags));
 	rcu_for_each_node_breadth_first(rnp) {
-		if (ULONG_CMP_GE(rcu_state.gp_seq, rnp->gp_seq_needed))
+		if (ULONG_CMP_GE(READ_ONCE(rcu_state.gp_seq),
+				 READ_ONCE(rnp->gp_seq_needed)))
 			continue;
 		pr_info("\trcu_node %d:%d ->gp_seq %ld ->gp_seq_needed %ld\n",
-			rnp->grplo, rnp->grphi, (long)rnp->gp_seq,
-			(long)rnp->gp_seq_needed);
+			rnp->grplo, rnp->grphi, (long)READ_ONCE(rnp->gp_seq),
+			(long)READ_ONCE(rnp->gp_seq_needed));
 		if (!rcu_is_leaf_node(rnp))
 			continue;
 		for_each_leaf_node_possible_cpu(rnp, cpu) {
 			rdp = per_cpu_ptr(&rcu_data, cpu);
-			if (rdp->gpwrap ||
-			    ULONG_CMP_GE(rcu_state.gp_seq,
-					 rdp->gp_seq_needed))
+			if (READ_ONCE(rdp->gpwrap) ||
+			    ULONG_CMP_GE(READ_ONCE(rcu_state.gp_seq),
+					 READ_ONCE(rdp->gp_seq_needed)))
 				continue;
 			pr_info("\tcpu %d ->gp_seq_needed %ld\n",
-				cpu, (long)rdp->gp_seq_needed);
+				cpu, (long)READ_ONCE(rdp->gp_seq_needed));
 		}
 	}
 	for_each_possible_cpu(cpu) {
@@ -631,7 +632,9 @@ static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
 	static atomic_t warned = ATOMIC_INIT(0);

 	if (!IS_ENABLED(CONFIG_PROVE_RCU) || rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed))
+	    ULONG_CMP_GE(READ_ONCE(rnp_root->gp_seq),
+			 READ_ONCE(rnp_root->gp_seq_needed)) ||
+	    !smp_load_acquire(&rcu_state.gp_kthread)) // Get stable kthread.
 		return;
 	j = jiffies; /* Expensive access, and in common case don't get here. */
 	if (time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
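
The new !smp_load_acquire(&rcu_state.gp_kthread) test makes rcu_check_gp_start_stall() bail out until the grace-period kthread pointer has been published; the acquire load presumably pairs with a release store at the point where the kthread is created, so everything initialized before publication is guaranteed visible here (as the "Get stable kthread" comment hints). The publish-with-release / consume-with-acquire idiom in a self-contained user-space sketch, with C11 atomics standing in for smp_store_release()/smp_load_acquire()::

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  struct worker {
          int ready;      /* initialized before the pointer is published */
  };

  static struct worker w;
  static _Atomic(struct worker *) gp_worker;      /* NULL until published */

  static void *publisher(void *arg)
  {
          w.ready = 1;    /* plain initialization ... */
          /* ... made visible by the release store that publishes the pointer. */
          atomic_store_explicit(&gp_worker, &w, memory_order_release);
          return NULL;
  }

  int main(void)
  {
          pthread_t t;
          struct worker *p;

          pthread_create(&t, NULL, publisher, NULL);
          /* Acquire load: once we see the pointer, we also see ->ready == 1.
           * A real caller such as rcu_check_gp_start_stall() would simply
           * return instead of spinning. */
          while (!(p = atomic_load_explicit(&gp_worker, memory_order_acquire)))
                  ;
          printf("ready = %d\n", p->ready);
          pthread_join(t, NULL);
          return 0;
  }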
@@ -642,7 +645,8 @@ static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
 	raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	j = jiffies;
 	if (rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
+	    ULONG_CMP_GE(READ_ONCE(rnp_root->gp_seq),
+			 READ_ONCE(rnp_root->gp_seq_needed)) ||
 	    time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
 	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
 	    atomic_read(&warned)) {
@@ -655,9 +659,10 @@ static void rcu_check_gp_start_stall(struct rcu_node *rnp, struct rcu_data *rdp,
 	raw_spin_lock_rcu_node(rnp_root); /* irqs already disabled. */
 	j = jiffies;
 	if (rcu_gp_in_progress() ||
-	    ULONG_CMP_GE(rnp_root->gp_seq, rnp_root->gp_seq_needed) ||
-	    time_before(j, rcu_state.gp_req_activity + gpssdelay) ||
-	    time_before(j, rcu_state.gp_activity + gpssdelay) ||
+	    ULONG_CMP_GE(READ_ONCE(rnp_root->gp_seq),
+			 READ_ONCE(rnp_root->gp_seq_needed)) ||
+	    time_before(j, READ_ONCE(rcu_state.gp_req_activity) + gpssdelay) ||
+	    time_before(j, READ_ONCE(rcu_state.gp_activity) + gpssdelay) ||
 	    atomic_xchg(&warned, 1)) {
 		if (rnp_root != rnp)
 			/* irqs remain disabled. */
...
@@ -183,6 +183,8 @@ void rcu_unexpedite_gp(void)
 }
 EXPORT_SYMBOL_GPL(rcu_unexpedite_gp);

+static bool rcu_boot_ended __read_mostly;
+
 /*
  * Inform RCU of the end of the in-kernel boot sequence.
  */
@@ -191,8 +193,18 @@ void rcu_end_inkernel_boot(void)
 		rcu_unexpedite_gp();
 	if (rcu_normal_after_boot)
 		WRITE_ONCE(rcu_normal, 1);
+	rcu_boot_ended = 1;
 }

+/*
+ * Let rcutorture know when it is OK to turn it up to eleven.
+ */
+bool rcu_inkernel_boot_has_ended(void)
+{
+	return rcu_boot_ended;
+}
+EXPORT_SYMBOL_GPL(rcu_inkernel_boot_has_ended);
+
 #endif /* #ifndef CONFIG_TINY_RCU */

 /*
@@ -464,13 +476,19 @@ EXPORT_SYMBOL_GPL(rcutorture_sched_setaffinity);
 #ifdef CONFIG_RCU_STALL_COMMON
 int rcu_cpu_stall_ftrace_dump __read_mostly;
 module_param(rcu_cpu_stall_ftrace_dump, int, 0644);
-int rcu_cpu_stall_suppress __read_mostly; /* 1 = suppress stall warnings. */
+int rcu_cpu_stall_suppress __read_mostly; // !0 = suppress stall warnings.
 EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress);
 module_param(rcu_cpu_stall_suppress, int, 0644);
 int rcu_cpu_stall_timeout __read_mostly = CONFIG_RCU_CPU_STALL_TIMEOUT;
 module_param(rcu_cpu_stall_timeout, int, 0644);
 #endif /* #ifdef CONFIG_RCU_STALL_COMMON */

+// Suppress boot-time RCU CPU stall warnings and rcutorture writer stall
+// warnings.  Also used by rcutorture even if stall warnings are excluded.
+int rcu_cpu_stall_suppress_at_boot __read_mostly; // !0 = suppress boot stalls.
+EXPORT_SYMBOL_GPL(rcu_cpu_stall_suppress_at_boot);
+module_param(rcu_cpu_stall_suppress_at_boot, int, 0444);
+
 #ifdef CONFIG_TASKS_RCU

 /*
@@ -528,7 +546,7 @@ void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
 	rhp->func = func;
 	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
 	needwake = !rcu_tasks_cbs_head;
-	*rcu_tasks_cbs_tail = rhp;
+	WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
 	rcu_tasks_cbs_tail = &rhp->next;
 	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
 	/* We can't create the thread unless interrupts are enabled. */
@@ -658,7 +676,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
 		/* If there were none, wait a bit and start over. */
 		if (!list) {
 			wait_event_interruptible(rcu_tasks_cbs_wq,
-						 rcu_tasks_cbs_head);
+						 READ_ONCE(rcu_tasks_cbs_head));
 			if (!rcu_tasks_cbs_head) {
 				WARN_ON(signal_pending(current));
 				schedule_timeout_interruptible(HZ/10);
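
wait_event_interruptible() re-evaluates its condition expression in a loop, and rcu_tasks_cbs_head is updated by other CPUs, so the condition's read is now marked with READ_ONCE(): without it, the compiler could legitimately fuse the repeated plain loads into one and spin on a stale value. A user-space illustration of the load-fusing hazard (the flag name and thread structure are illustrative)::

  #include <pthread.h>
  #include <unistd.h>

  #define READ_ONCE(x) (*(volatile typeof(x) *)&(x))

  static int cbs_pending;         /* set by another thread */

  static void *producer(void *arg)
  {
          sleep(1);
          __atomic_store_n(&cbs_pending, 1, __ATOMIC_RELAXED);
          return NULL;
  }

  int main(void)
  {
          pthread_t t;

          pthread_create(&t, NULL, producer, NULL);
          /* With a plain "while (!cbs_pending)" the compiler may load the
           * flag once and loop forever; the marked load forces a fresh
           * load on every iteration, as in the wait_event() condition. */
          while (!READ_ONCE(cbs_pending))
                  ;
          pthread_join(t, NULL);
          return 0;
  }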
@@ -801,7 +819,7 @@ static int __init rcu_spawn_tasks_kthread(void)
 core_initcall(rcu_spawn_tasks_kthread);

 /* Do the srcu_read_lock() for the above synchronize_srcu(). */
-void exit_tasks_rcu_start(void)
+void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
 {
 	preempt_disable();
 	current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
@@ -809,7 +827,7 @@ void exit_tasks_rcu_start(void)
 }

 /* Do the srcu_read_unlock() for the above synchronize_srcu(). */
-void exit_tasks_rcu_finish(void)
+void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
 {
 	preempt_disable();
 	__srcu_read_unlock(&tasks_rcu_exit_srcu, current->rcu_tasks_idx);
...
@@ -944,6 +944,7 @@ static struct timer_base *lock_timer_base(struct timer_list *timer,
 #define MOD_TIMER_PENDING_ONLY	0x01
 #define MOD_TIMER_REDUCE	0x02
+#define MOD_TIMER_NOTPENDING	0x04

 static inline int
 __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int options)
@@ -960,7 +961,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int option
 	 * the timer is re-modified to have the same timeout or ends up in the
 	 * same array bucket then just return:
 	 */
-	if (timer_pending(timer)) {
+	if (!(options & MOD_TIMER_NOTPENDING) && timer_pending(timer)) {
 		/*
 		 * The downside of this optimization is that it can result in
 		 * larger granularity than you would get from adding a new
@@ -1133,7 +1134,7 @@ EXPORT_SYMBOL(timer_reduce);
 void add_timer(struct timer_list *timer)
 {
 	BUG_ON(timer_pending(timer));
-	mod_timer(timer, timer->expires);
+	__mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING);
 }
 EXPORT_SYMBOL(add_timer);
@@ -1891,7 +1892,7 @@ signed long __sched schedule_timeout(signed long timeout)
 	timer.task = current;
 	timer_setup_on_stack(&timer.timer, process_timeout, 0);
-	__mod_timer(&timer.timer, expire, 0);
+	__mod_timer(&timer.timer, expire, MOD_TIMER_NOTPENDING);
 	schedule();
 	del_singleshot_timer_sync(&timer.timer);
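
MOD_TIMER_NOTPENDING lets callers that know their timer cannot be queued, such as add_timer() (which asserts !timer_pending()) and schedule_timeout() (whose timer lives on the stack), skip __mod_timer()'s pending-timer fast path and its reads of possibly-racing timer state. A compact sketch of the dispatch logic under that flag (everything except the MOD_TIMER_* names is illustrative, and real requeueing is elided)::

  #include <stdio.h>

  #define MOD_TIMER_PENDING_ONLY  0x01
  #define MOD_TIMER_REDUCE        0x02
  #define MOD_TIMER_NOTPENDING    0x04

  struct timer {
          int pending;
          unsigned long expires;
  };

  /* Skeleton of the dispatch: only consult ->pending when the caller
   * has not promised that the timer is unqueued. */
  static int mod_timer_sketch(struct timer *t, unsigned long expires,
                              unsigned int options)
  {
          if (!(options & MOD_TIMER_NOTPENDING) && t->pending &&
              t->expires == expires)
                  return 1;       /* same-expiry fast path: nothing to do */
          t->expires = expires;
          t->pending = 1;         /* (re)queueing elided in this sketch */
          return 0;
  }

  int main(void)
  {
          struct timer t = { .pending = 0, .expires = 0 };

          /* add_timer()-style caller: timer known not to be pending. */
          mod_timer_sketch(&t, 100, MOD_TIMER_NOTPENDING);
          printf("expires = %lu\n", t.expires);
          return 0;
  }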
...
@@ -42,6 +42,9 @@
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");

+static bool disable_onoff_at_boot;
+module_param(disable_onoff_at_boot, bool, 0444);
+
 static char *torture_type;
 static int verbose;
@@ -84,6 +87,7 @@ bool torture_offline(int cpu, long *n_offl_attempts, long *n_offl_successes,
 {
 	unsigned long delta;
 	int ret;
+	char *s;
 	unsigned long starttime;

 	if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu))
@@ -99,10 +103,16 @@ bool torture_offline(int cpu, long *n_offl_attempts, long *n_offl_successes,
 	(*n_offl_attempts)++;
 	ret = cpu_down(cpu);
 	if (ret) {
+		s = "";
+		if (!rcu_inkernel_boot_has_ended() && ret == -EBUSY) {
+			// PCI probe frequently disables hotplug during boot.
+			(*n_offl_attempts)--;
+			s = " (-EBUSY forgiven during boot)";
+		}
 		if (verbose)
 			pr_alert("%s" TORTURE_FLAG
-				 "torture_onoff task: offline %d failed: errno %d\n",
-				 torture_type, cpu, ret);
+				 "torture_onoff task: offline %d failed%s: errno %d\n",
+				 torture_type, cpu, s, ret);
 	} else {
 		if (verbose > 1)
 			pr_alert("%s" TORTURE_FLAG
@@ -137,6 +147,7 @@ bool torture_online(int cpu, long *n_onl_attempts, long *n_onl_successes,
 {
 	unsigned long delta;
 	int ret;
+	char *s;
 	unsigned long starttime;

 	if (cpu_online(cpu) || !cpu_is_hotpluggable(cpu))
@@ -150,10 +161,16 @@ bool torture_online(int cpu, long *n_onl_attempts, long *n_onl_successes,
 	(*n_onl_attempts)++;
 	ret = cpu_up(cpu);
 	if (ret) {
+		s = "";
+		if (!rcu_inkernel_boot_has_ended() && ret == -EBUSY) {
+			// PCI probe frequently disables hotplug during boot.
+			(*n_onl_attempts)--;
+			s = " (-EBUSY forgiven during boot)";
+		}
 		if (verbose)
 			pr_alert("%s" TORTURE_FLAG
-				 "torture_onoff task: online %d failed: errno %d\n",
-				 torture_type, cpu, ret);
+				 "torture_onoff task: online %d failed%s: errno %d\n",
+				 torture_type, cpu, s, ret);
 	} else {
 		if (verbose > 1)
 			pr_alert("%s" TORTURE_FLAG
@@ -215,6 +232,10 @@ torture_onoff(void *arg)
 		VERBOSE_TOROUT_STRING("torture_onoff end holdoff");
 	}
 	while (!torture_must_stop()) {
+		if (disable_onoff_at_boot && !rcu_inkernel_boot_has_ended()) {
+			schedule_timeout_interruptible(HZ / 10);
+			continue;
+		}
 		cpu = (torture_random(&rand) >> 4) % (maxcpu + 1);
 		if (!torture_offline(cpu,
 				     &n_offline_attempts, &n_offline_successes,
...
@@ -12,7 +12,7 @@
 # Returns 1 if the specified boot-parameter string tells rcutorture to
 # test CPU-hotplug operations.
 bootparam_hotplug_cpu () {
-	echo "$1" | grep -q "rcutorture\.onoff_"
+	echo "$1" | grep -q "torture\.onoff_"
 }

 # checkarg --argname argtype $# arg mustmatch cannotmatch
...
@@ -20,7 +20,9 @@
 rundir="${1}"
 if test -z "$rundir" -o ! -d "$rundir"
 then
+	echo Directory "$rundir" not found.
 	echo Usage: $0 directory
+	exit 1
 fi
 editor=${EDITOR-vi}
...
@@ -13,6 +13,9 @@
 #
 # Authors: Paul E. McKenney <paulmck@linux.ibm.com>

+T=/tmp/kvm-recheck.sh.$$
+trap 'rm -f $T' 0 2
+
 PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
 . functions.sh
 for rd in "$@"
@@ -68,4 +71,16 @@ do
 		fi
 	done
 done
-EDITOR=echo kvm-find-errors.sh "${@: -1}" > /dev/null 2>&1
+EDITOR=echo kvm-find-errors.sh "${@: -1}" > $T 2>&1
+ret=$?
+builderrors="`tr ' ' '\012' < $T | grep -c '/Make.out.diags'`"
+if test "$builderrors" -gt 0
+then
+	echo $builderrors runs with build errors.
+fi
+runerrors="`tr ' ' '\012' < $T | grep -c '/console.log.diags'`"
+if test "$runerrors" -gt 0
+then
+	echo $runerrors runs with runtime errors.
+fi
+exit $ret
@@ -39,7 +39,7 @@ TORTURE_TRUST_MAKE=""
 resdir=""
 configs=""
 cpus=0
-ds=`date +%Y.%m.%d-%H:%M:%S`
+ds=`date +%Y.%m.%d-%H.%M.%S`
 jitter="-1"

 usage () {
...
@@ -3,3 +3,5 @@ CONFIG_PRINTK_TIME=y
 CONFIG_HYPERVISOR_GUEST=y
 CONFIG_PARAVIRT=y
 CONFIG_KVM_GUEST=y
+CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n
+CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n
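
For context on the two KCSAN lines just added to CFcommon: CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n stops KCSAN from excusing plain aligned word-sized writes, and CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n makes it report races even when the racing write stores the same value, so rcutorture runs surface as many data-race candidates as possible. The sort of plain-access race these settings are meant to catch, in a deliberately buggy user-space miniature::

  #include <pthread.h>
  #include <stdio.h>

  static long counter;    /* accessed by both threads with plain loads/stores */

  /* Both threads perform plain, unmarked read-modify-writes: under the
   * stricter KCSAN settings every such racing pair is reportable, even
   * when the two writers happen to store the same value. */
  static void *bump(void *arg)
  {
          for (int i = 0; i < 100000; i++)
                  counter = counter + 1;  /* plain write: a data race */
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, bump, NULL);
          pthread_create(&b, NULL, bump, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          printf("counter = %ld (likely < 200000 due to lost updates)\n",
                 counter);
          return 0;
  }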
CONFIG_SMP=y
CONFIG_NR_CPUS=100
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n
CONFIG_RCU_FAST_NO_HZ=n
CONFIG_RCU_TRACE=n
CONFIG_RCU_NOCB_CPU=n
CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_PROVE_LOCKING=n
#CHECK#CONFIG_PROVE_RCU=n
CONFIG_DEBUG_OBJECTS=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=n