Commit 68cadad1 authored by Linus Torvalds

Merge tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull RCU updates from Paul McKenney:

 - Documentation updates

 - Miscellaneous fixes, perhaps most notably simplifying
   SRCU_NOTIFIER_INIT() as suggested

 - RCU Tasks updates, most notably treating Tasks RCU callbacks as lazy
   while still treating synchronous grace periods as urgent. Also fixes
   one bug that restores the ability to apply debug-objects to RCU Tasks
   and another that fixes a race condition that could result in
   false-positive failures of the boot-time self-test code

 - RCU-scalability performance-test updates, most notably adding the
   ability to measure the RCU-Tasks grace-period kthread's CPU
   consumption. This proved quite useful for the RCU Tasks work

 - Reference-acquisition/release performance-test updates, including a
   fix for an uninitialized wait_queue_head_t

 - Miscellaneous torture-test updates

 - Torture-test scripting updates, including removal of the
   no-longer-functional formal-verification scripts, test builds of
   individual RCU Tasks flavors, better diagnostics for loss of
   connectivity for distributed rcutorture tests, disabling of reboot
   loops in qemu/KVM-based rcutorture testing, and passing of init
   parameters to rcutorture's init program

* tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (64 commits)
  rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls
  rcu: Make the rcu_nocb_poll boot parameter usable via boot config
  rcu: Mark __rcu_irq_enter_check_tick() ->rcu_urgent_qs load
  srcu,notifier: Remove #ifdefs in favor of SRCU Tiny srcu_usage
  rcutorture: Stop right-shifting torture_random() return values
  torture: Stop right-shifting torture_random() return values
  torture: Move stutter_wait() timeouts to hrtimers
  torture: Move torture_shuffle() timeouts to hrtimers
  torture: Move torture_onoff() timeouts to hrtimers
  torture: Make torture_hrtimeout_*() use TASK_IDLE
  torture: Add lock_torture writer_fifo module parameter
  torture: Add a kthread-creation callback to _torture_create_kthread()
  rcu-tasks: Fix boot-time RCU tasks debug-only deadlock
  rcu-tasks: Permit use of debug-objects with RCU Tasks flavors
  checkpatch: Complain about unexpected uses of RCU Tasks Trace
  torture: Cause mkinitrd.sh to indicate failure on compile errors
  torture: Make init program dump command-line arguments
  torture: Switch qemu from -nographic to -display none
  torture: Add init-program support for loongarch
  torture: Avoid torture-test reboot loops
  ...
parents 727dbda1 fe24a0b6
@@ -10,7 +10,7 @@ misuses of the RCU API, most notably using one of the rcu_dereference()
 family to access an RCU-protected pointer without the proper protection.
 When such misuse is detected, an lockdep-RCU splat is emitted.
 
-The usual cause of a lockdep-RCU slat is someone accessing an
+The usual cause of a lockdep-RCU splat is someone accessing an
 RCU-protected data structure without either (1) being in the right kind of
 RCU read-side critical section or (2) holding the right update-side lock.
 This problem can therefore be serious: it might result in random memory
...
@@ -18,7 +18,16 @@ to solve following problem.
 Without 'nulls', a typical RCU linked list managing objects which are
 allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can use the following
-algorithms:
+algorithms. Following examples assume 'obj' is a pointer to such
+objects, which is having below type.
+
+::
+
+  struct object {
+    struct hlist_node obj_node;
+    atomic_t refcnt;
+    unsigned int key;
+  };
 
 1) Lookup algorithm
 -------------------
@@ -26,11 +35,13 @@ algorithms:
 ::
 
   begin:
-    rcu_read_lock()
+    rcu_read_lock();
     obj = lockless_lookup(key);
     if (obj) {
-      if (!try_get_ref(obj)) // might fail for free objects
+      if (!try_get_ref(obj)) { // might fail for free objects
+        rcu_read_unlock();
         goto begin;
+      }
       /*
        * Because a writer could delete object, and a writer could
        * reuse these object before the RCU grace period, we
@@ -54,7 +65,7 @@ but a version with an additional memory barrier (smp_rmb())
     struct hlist_node *node, *next;
     for (pos = rcu_dereference((head)->first);
          pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
-         ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+         ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
          pos = rcu_dereference(next))
       if (obj->key == key)
         return obj;
@@ -66,10 +77,10 @@ And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::
     struct hlist_node *node;
     for (pos = rcu_dereference((head)->first);
          pos && ({ prefetch(pos->next); 1; }) &&
-         ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+         ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
          pos = rcu_dereference(pos->next))
       if (obj->key == key)
         return obj;
     return NULL;
 
 Quoting Corey Minyard::
@@ -86,7 +97,7 @@ Quoting Corey Minyard::
 2) Insertion algorithm
 ----------------------
 
-We need to make sure a reader cannot read the new 'obj->obj_next' value
+We need to make sure a reader cannot read the new 'obj->obj_node.next' value
 and previous value of 'obj->key'. Otherwise, an item could be deleted
 from a chain, and inserted into another chain. If new chain was empty
 before the move, 'next' pointer is NULL, and lockless reader can not
@@ -129,8 +140,7 @@ very very fast (before the end of RCU grace period)
 Avoiding extra smp_rmb()
 ========================
 
-With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
-and extra _release() in insert function.
+With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().
 
 For example, if we choose to store the slot number as the 'nulls'
 end-of-list marker for each slot of the hash table, we can detect
@@ -142,6 +152,9 @@ the beginning. If the object was moved to the same chain,
 then the reader doesn't care: It might occasionally
 scan the list again without harm.
 
+Note that using hlist_nulls means the type of 'obj_node' field of
+'struct object' becomes 'struct hlist_nulls_node'.
+
 1) lookup algorithm
 -------------------
@@ -151,7 +164,7 @@ scan the list again without harm.
   head = &table[slot];
   begin:
     rcu_read_lock();
-    hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
+    hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
       if (obj->key == key) {
         if (!try_get_ref(obj)) { // might fail for free objects
           rcu_read_unlock();
@@ -182,6 +195,9 @@ scan the list again without harm.
 2) Insert algorithm
 -------------------
 
+Same to the above one, but uses hlist_nulls_add_head_rcu() instead of
+hlist_add_head_rcu().
+
 ::
 
   /*
...
@@ -2938,6 +2938,10 @@
 	locktorture.torture_type= [KNL]
 			Specify the locking implementation to test.
 
+	locktorture.writer_fifo= [KNL]
+			Run the write-side locktorture kthreads at
+			sched_set_fifo() real-time priority.
+
 	locktorture.verbose= [KNL]
 			Enable additional printk() statements.
 
@@ -4949,6 +4953,15 @@
 			test until boot completes in order to avoid
 			interference.
 
+	rcuscale.kfree_by_call_rcu= [KNL]
+			In kernels built with CONFIG_RCU_LAZY=y, test
+			call_rcu() instead of kfree_rcu().
+
+	rcuscale.kfree_mult= [KNL]
+			Instead of allocating an object of size kfree_obj,
+			allocate one of kfree_mult * sizeof(kfree_obj).
+			Defaults to 1.
+
 	rcuscale.kfree_rcu_test= [KNL]
 			Set to measure performance of kfree_rcu() flooding.
 
@@ -4974,6 +4987,12 @@
 			Number of loops doing rcuscale.kfree_alloc_num number
 			of allocations and frees.
 
+	rcuscale.minruntime= [KNL]
+			Set the minimum test run time in seconds. This
+			does not affect the data-collection interval,
+			but instead allows better measurement of things
+			like CPU consumption.
+
 	rcuscale.nreaders= [KNL]
 			Set number of RCU readers. The value -1 selects
 			N, where N is the number of CPUs. A value
@@ -4988,7 +5007,7 @@
 			the same as for rcuscale.nreaders.
 			N, where N is the number of CPUs
 
-	rcuscale.perf_type= [KNL]
+	rcuscale.scale_type= [KNL]
 			Specify the RCU implementation to test.
 
 	rcuscale.shutdown= [KNL]
@@ -5004,6 +5023,11 @@
 			in microseconds. The default of zero says
 			no holdoff.
 
+	rcuscale.writer_holdoff_jiffies= [KNL]
+			Additional write-side holdoff between grace
+			periods, but in jiffies. The default of zero
+			says no holdoff.
+
 	rcutorture.fqs_duration= [KNL]
 			Set duration of force_quiescent_state bursts
 			in microseconds.
@@ -5285,6 +5309,13 @@
 			number avoids disturbing real-time workloads,
 			but lengthens grace periods.
 
+	rcupdate.rcu_task_lazy_lim= [KNL]
+			Number of callbacks on a given CPU that will
+			cancel laziness on that CPU. Use -1 to disable
+			cancellation of laziness, but be advised that
+			doing so increases the danger of OOM due to
+			callback flooding.
+
 	rcupdate.rcu_task_stall_info= [KNL]
 			Set initial timeout in jiffies for RCU task stall
 			informational messages, which give some indication
@@ -5314,6 +5345,29 @@
 			A change in value does not take effect until
 			the beginning of the next grace period.
 
+	rcupdate.rcu_tasks_lazy_ms= [KNL]
+			Set timeout in milliseconds RCU Tasks asynchronous
+			callback batching for call_rcu_tasks().
+			A negative value will take the default. A value
+			of zero will disable batching. Batching is
+			always disabled for synchronize_rcu_tasks().
+
+	rcupdate.rcu_tasks_rude_lazy_ms= [KNL]
+			Set timeout in milliseconds RCU Tasks
+			Rude asynchronous callback batching for
+			call_rcu_tasks_rude(). A negative value
+			will take the default. A value of zero will
+			disable batching. Batching is always disabled
+			for synchronize_rcu_tasks_rude().
+
+	rcupdate.rcu_tasks_trace_lazy_ms= [KNL]
+			Set timeout in milliseconds RCU Tasks
+			Trace asynchronous callback batching for
+			call_rcu_tasks_trace(). A negative value
+			will take the default. A value of zero will
+			disable batching. Batching is always disabled
+			for synchronize_rcu_tasks_trace().
+
 	rcupdate.rcu_self_test= [KNL]
 			Run the RCU early boot self tests
...
@@ -73,9 +73,7 @@ struct raw_notifier_head {
 struct srcu_notifier_head {
 	struct mutex mutex;
-#ifdef CONFIG_TREE_SRCU
 	struct srcu_usage srcuu;
-#endif
 	struct srcu_struct srcu;
 	struct notifier_block __rcu *head;
 };
@@ -106,7 +104,6 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
 #define RAW_NOTIFIER_INIT(name)	{				\
 		.head = NULL }
 
-#ifdef CONFIG_TREE_SRCU
 #define SRCU_NOTIFIER_INIT(name, pcpu)				\
 	{							\
 		.mutex = __MUTEX_INITIALIZER(name.mutex),	\
@@ -114,14 +111,6 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
 		.srcuu = __SRCU_USAGE_INIT(name.srcuu),		\
 		.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
 	}
-#else
-#define SRCU_NOTIFIER_INIT(name, pcpu)				\
-	{							\
-		.mutex = __MUTEX_INITIALIZER(name.mutex),	\
-		.head = NULL,					\
-		.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
-	}
-#endif
 
 #define ATOMIC_NOTIFIER_HEAD(name)				\
 	struct atomic_notifier_head name =			\
...
@@ -101,7 +101,7 @@ static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
 {
 	struct hlist_nulls_node *first = h->first;
 
-	n->next = first;
+	WRITE_ONCE(n->next, first);
 	WRITE_ONCE(n->pprev, &h->first);
 	rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
 	if (!is_a_nulls(first))
@@ -137,7 +137,7 @@ static inline void hlist_nulls_add_tail_rcu(struct hlist_nulls_node *n,
 		last = i;
 
 	if (last) {
-		n->next = last->next;
+		WRITE_ONCE(n->next, last->next);
 		n->pprev = &last->next;
 		rcu_assign_pointer(hlist_nulls_next_rcu(last), n);
 	} else {
...
@@ -87,6 +87,7 @@ static inline void rcu_read_unlock_trace(void)
 void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func);
 void synchronize_rcu_tasks_trace(void);
 void rcu_barrier_tasks_trace(void);
+struct task_struct *get_rcu_tasks_trace_gp_kthread(void);
 #else
 /*
  * The BPF JIT forms these addresses even when it doesn't call these
...
@@ -42,6 +42,11 @@ do {									\
  * call_srcu() function, with this wrapper supplying the pointer to the
  * corresponding srcu_struct.
  *
+ * Note that call_rcu_hurry() should be used instead of call_rcu()
+ * because in kernels built with CONFIG_RCU_LAZY=y the delay between the
+ * invocation of call_rcu() and that of the corresponding RCU callback
+ * can be multiple seconds.
+ *
  * The first argument tells Tiny RCU's _wait_rcu_gp() not to
  * bother waiting for RCU.  The reason for this is because anywhere
  * synchronize_rcu_mult() can be called is automatically already a full
...
@@ -48,6 +48,10 @@ void srcu_drive_gp(struct work_struct *wp);
 #define DEFINE_STATIC_SRCU(name) \
 	static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
 
+// Dummy structure for srcu_notifier_head.
+struct srcu_usage { };
+#define __SRCU_USAGE_INIT(name) { }
+
 void synchronize_srcu(struct srcu_struct *ssp);
 
 /*
...
@@ -108,12 +108,15 @@ bool torture_must_stop(void);
 bool torture_must_stop_irq(void);
 void torture_kthread_stopping(char *title);
 int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
-			     char *f, struct task_struct **tp);
+			     char *f, struct task_struct **tp, void (*cbf)(struct task_struct *tp));
 void _torture_stop_kthread(char *m, struct task_struct **tp);
 
 #define torture_create_kthread(n, arg, tp) \
 	_torture_create_kthread(n, (arg), #n, "Creating " #n " task", \
-				"Failed to create " #n, &(tp))
+				"Failed to create " #n, &(tp), NULL)
+#define torture_create_kthread_cb(n, arg, tp, cbf) \
+	_torture_create_kthread(n, (arg), #n, "Creating " #n " task", \
+				"Failed to create " #n, &(tp), cbf)
 #define torture_stop_kthread(n, tp) \
 	_torture_stop_kthread("Stopping " #n " task", &(tp))
...
@@ -45,6 +45,7 @@ torture_param(int, stutter, 5, "Number of jiffies to run/halt test, 0=disable");
 torture_param(int, rt_boost, 2,
 		   "Do periodic rt-boost. 0=Disable, 1=Only for rt_mutex, 2=For all lock types.");
 torture_param(int, rt_boost_factor, 50, "A factor determining how often rt-boost happens.");
+torture_param(int, writer_fifo, 0, "Run writers at sched_set_fifo() priority");
 torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
 torture_param(int, nested_locks, 0, "Number of nested locks (max = 8)");
 /* Going much higher trips "BUG: MAX_LOCKDEP_CHAIN_HLOCKS too low!" errors */
@@ -809,7 +810,8 @@ static int lock_torture_writer(void *arg)
 	bool skip_main_lock;
 
 	VERBOSE_TOROUT_STRING("lock_torture_writer task started");
-	set_user_nice(current, MAX_NICE);
+	if (!rt_task(current))
+		set_user_nice(current, MAX_NICE);
 
 	do {
 		if ((torture_random(&rand) & 0xfffff) == 0)
@@ -1015,8 +1017,7 @@ static void lock_torture_cleanup(void)
 	if (writer_tasks) {
 		for (i = 0; i < cxt.nrealwriters_stress; i++)
-			torture_stop_kthread(lock_torture_writer,
-					     writer_tasks[i]);
+			torture_stop_kthread(lock_torture_writer, writer_tasks[i]);
 		kfree(writer_tasks);
 		writer_tasks = NULL;
 	}
@@ -1244,8 +1245,9 @@ static int __init lock_torture_init(void)
 			goto create_reader;
 
 		/* Create writer. */
-		firsterr = torture_create_kthread(lock_torture_writer, &cxt.lwsa[i],
-						  writer_tasks[i]);
+		firsterr = torture_create_kthread_cb(lock_torture_writer, &cxt.lwsa[i],
						     writer_tasks[i],
						     writer_fifo ? sched_set_fifo : NULL);
 		if (torture_init_error(firsterr))
 			goto unwind;
...
@@ -511,6 +511,14 @@ static inline void show_rcu_tasks_gp_kthreads(void) {}
 void rcu_request_urgent_qs_task(struct task_struct *t);
 #endif /* #else #ifdef CONFIG_TINY_RCU */
 
+#ifdef CONFIG_TASKS_RCU
+struct task_struct *get_rcu_tasks_gp_kthread(void);
+#endif // # ifdef CONFIG_TASKS_RCU
+
+#ifdef CONFIG_TASKS_RUDE_RCU
+struct task_struct *get_rcu_tasks_rude_gp_kthread(void);
+#endif // # ifdef CONFIG_TASKS_RUDE_RCU
+
 #define RCU_SCHEDULER_INACTIVE	0
 #define RCU_SCHEDULER_INIT	1
 #define RCU_SCHEDULER_RUNNING	2
...
@@ -84,15 +84,17 @@ MODULE_AUTHOR("Paul E. McKenney <paulmck@linux.ibm.com>");
 #endif
 
 torture_param(bool, gp_async, false, "Use asynchronous GP wait primitives");
-torture_param(int, gp_async_max, 1000, "Max # outstanding waits per reader");
+torture_param(int, gp_async_max, 1000, "Max # outstanding waits per writer");
 torture_param(bool, gp_exp, false, "Use expedited GP wait primitives");
 torture_param(int, holdoff, 10, "Holdoff time before test start (s)");
+torture_param(int, minruntime, 0, "Minimum run time (s)");
 torture_param(int, nreaders, -1, "Number of RCU reader threads");
 torture_param(int, nwriters, -1, "Number of RCU updater threads");
 torture_param(bool, shutdown, RCUSCALE_SHUTDOWN,
 	      "Shutdown at end of scalability tests.");
 torture_param(int, verbose, 1, "Enable verbose debugging printk()s");
 torture_param(int, writer_holdoff, 0, "Holdoff (us) between GPs, zero to disable");
+torture_param(int, writer_holdoff_jiffies, 0, "Holdoff (jiffies) between GPs, zero to disable");
 torture_param(int, kfree_rcu_test, 0, "Do we run a kfree_rcu() scale test?");
 torture_param(int, kfree_mult, 1, "Multiple of kfree_obj size to allocate.");
 torture_param(int, kfree_by_call_rcu, 0, "Use call_rcu() to emulate kfree_rcu()?");
@@ -139,6 +141,7 @@ struct rcu_scale_ops {
 	void (*gp_barrier)(void);
 	void (*sync)(void);
 	void (*exp_sync)(void);
+	struct task_struct *(*rso_gp_kthread)(void);
 	const char *name;
 };
@@ -295,6 +298,7 @@ static struct rcu_scale_ops tasks_ops = {
 	.gp_barrier	= rcu_barrier_tasks,
 	.sync		= synchronize_rcu_tasks,
 	.exp_sync	= synchronize_rcu_tasks,
+	.rso_gp_kthread	= get_rcu_tasks_gp_kthread,
 	.name		= "tasks"
 };
@@ -306,6 +310,44 @@ static struct rcu_scale_ops tasks_ops = {
 #endif // #else // #ifdef CONFIG_TASKS_RCU
 
+#ifdef CONFIG_TASKS_RUDE_RCU
+
+/*
+ * Definitions for RCU-tasks-rude scalability testing.
+ */
+
+static int tasks_rude_scale_read_lock(void)
+{
+	return 0;
+}
+
+static void tasks_rude_scale_read_unlock(int idx)
+{
+}
+
+static struct rcu_scale_ops tasks_rude_ops = {
+	.ptype		= RCU_TASKS_RUDE_FLAVOR,
+	.init		= rcu_sync_scale_init,
+	.readlock	= tasks_rude_scale_read_lock,
+	.readunlock	= tasks_rude_scale_read_unlock,
+	.get_gp_seq	= rcu_no_completed,
+	.gp_diff	= rcu_seq_diff,
+	.async		= call_rcu_tasks_rude,
+	.gp_barrier	= rcu_barrier_tasks_rude,
+	.sync		= synchronize_rcu_tasks_rude,
+	.exp_sync	= synchronize_rcu_tasks_rude,
+	.rso_gp_kthread	= get_rcu_tasks_rude_gp_kthread,
+	.name		= "tasks-rude"
+};
+
+#define TASKS_RUDE_OPS &tasks_rude_ops,
+
+#else // #ifdef CONFIG_TASKS_RUDE_RCU
+
+#define TASKS_RUDE_OPS
+
+#endif // #else // #ifdef CONFIG_TASKS_RUDE_RCU
+
 #ifdef CONFIG_TASKS_TRACE_RCU
 
 /*
@@ -334,6 +376,7 @@ static struct rcu_scale_ops tasks_tracing_ops = {
 	.gp_barrier	= rcu_barrier_tasks_trace,
 	.sync		= synchronize_rcu_tasks_trace,
 	.exp_sync	= synchronize_rcu_tasks_trace,
+	.rso_gp_kthread	= get_rcu_tasks_trace_gp_kthread,
 	.name		= "tasks-tracing"
 };
@@ -410,10 +453,12 @@ rcu_scale_writer(void *arg)
 {
 	int i = 0;
 	int i_max;
+	unsigned long jdone;
 	long me = (long)arg;
 	struct rcu_head *rhp = NULL;
 	bool started = false, done = false, alldone = false;
 	u64 t;
+	DEFINE_TORTURE_RANDOM(tr);
 	u64 *wdp;
 	u64 *wdpp = writer_durations[me];
@@ -424,7 +469,7 @@ rcu_scale_writer(void *arg)
 	sched_set_fifo_low(current);
 
 	if (holdoff)
-		schedule_timeout_uninterruptible(holdoff * HZ);
+		schedule_timeout_idle(holdoff * HZ);
 
 	/*
	 * Wait until rcu_end_inkernel_boot() is called for normal GP tests
@@ -445,9 +490,12 @@ rcu_scale_writer(void *arg)
 		}
 	}
 
+	jdone = jiffies + minruntime * HZ;
 	do {
 		if (writer_holdoff)
 			udelay(writer_holdoff);
+		if (writer_holdoff_jiffies)
+			schedule_timeout_idle(torture_random(&tr) % writer_holdoff_jiffies + 1);
 		wdp = &wdpp[i];
 		*wdp = ktime_get_mono_fast_ns();
 		if (gp_async) {
@@ -475,7 +523,7 @@ rcu_scale_writer(void *arg)
 		if (!started &&
 		    atomic_read(&n_rcu_scale_writer_started) >= nrealwriters)
 			started = true;
-		if (!done && i >= MIN_MEAS) {
+		if (!done && i >= MIN_MEAS && time_after(jiffies, jdone)) {
 			done = true;
 			sched_set_normal(current, 0);
 			pr_alert("%s%s rcu_scale_writer %ld has %d measurements\n",
@@ -518,8 +566,8 @@ static void
 rcu_scale_print_module_parms(struct rcu_scale_ops *cur_ops, const char *tag)
 {
 	pr_alert("%s" SCALE_FLAG
-		 "--- %s: nreaders=%d nwriters=%d verbose=%d shutdown=%d\n",
-		 scale_type, tag, nrealreaders, nrealwriters, verbose, shutdown);
+		 "--- %s: gp_async=%d gp_async_max=%d gp_exp=%d holdoff=%d minruntime=%d nreaders=%d nwriters=%d writer_holdoff=%d writer_holdoff_jiffies=%d verbose=%d shutdown=%d\n",
+		 scale_type, tag, gp_async, gp_async_max, gp_exp, holdoff, minruntime, nrealreaders, nrealwriters, writer_holdoff, writer_holdoff_jiffies, verbose, shutdown);
 }
 
 /*
@@ -556,6 +604,8 @@ static struct task_struct **kfree_reader_tasks;
 static int kfree_nrealthreads;
 static atomic_t n_kfree_scale_thread_started;
 static atomic_t n_kfree_scale_thread_ended;
+static struct task_struct *kthread_tp;
+static u64 kthread_stime;
 
 struct kfree_obj {
 	char kfree_obj[8];
@@ -701,6 +751,10 @@ kfree_scale_init(void)
 	unsigned long jif_start;
 	unsigned long orig_jif;
 
+	pr_alert("%s" SCALE_FLAG
+		 "--- kfree_rcu_test: kfree_mult=%d kfree_by_call_rcu=%d kfree_nthreads=%d kfree_alloc_num=%d kfree_loops=%d kfree_rcu_test_double=%d kfree_rcu_test_single=%d\n",
+		 scale_type, kfree_mult, kfree_by_call_rcu, kfree_nthreads, kfree_alloc_num, kfree_loops, kfree_rcu_test_double, kfree_rcu_test_single);
+
 	// Also, do a quick self-test to ensure laziness is as much as
 	// expected.
 	if (kfree_by_call_rcu && !IS_ENABLED(CONFIG_RCU_LAZY)) {
@@ -797,6 +851,18 @@ rcu_scale_cleanup(void)
 	if (gp_exp && gp_async)
 		SCALEOUT_ERRSTRING("No expedited async GPs, so went with async!");
 
+	// If built-in, just report all of the GP kthread's CPU time.
+	if (IS_BUILTIN(CONFIG_RCU_SCALE_TEST) && !kthread_tp && cur_ops->rso_gp_kthread)
+		kthread_tp = cur_ops->rso_gp_kthread();
+	if (kthread_tp) {
+		u32 ns;
+		u64 us;
+
+		kthread_stime = kthread_tp->stime - kthread_stime;
+		us = div_u64_rem(kthread_stime, 1000, &ns);
+		pr_info("rcu_scale: Grace-period kthread CPU time: %llu.%03u us\n", us, ns);
+		show_rcu_gp_kthreads();
+	}
+
 	if (kfree_rcu_test) {
 		kfree_scale_cleanup();
 		return;
...@@ -885,7 +951,7 @@ rcu_scale_init(void) ...@@ -885,7 +951,7 @@ rcu_scale_init(void)
long i; long i;
int firsterr = 0; int firsterr = 0;
static struct rcu_scale_ops *scale_ops[] = { static struct rcu_scale_ops *scale_ops[] = {
&rcu_ops, &srcu_ops, &srcud_ops, TASKS_OPS TASKS_TRACING_OPS &rcu_ops, &srcu_ops, &srcud_ops, TASKS_OPS TASKS_RUDE_OPS TASKS_TRACING_OPS
}; };
if (!torture_init_begin(scale_type, verbose)) if (!torture_init_begin(scale_type, verbose))
...@@ -910,6 +976,11 @@ rcu_scale_init(void) ...@@ -910,6 +976,11 @@ rcu_scale_init(void)
if (cur_ops->init) if (cur_ops->init)
cur_ops->init(); cur_ops->init();
if (cur_ops->rso_gp_kthread) {
kthread_tp = cur_ops->rso_gp_kthread();
if (kthread_tp)
kthread_stime = kthread_tp->stime;
}
if (kfree_rcu_test) if (kfree_rcu_test)
return kfree_scale_init(); return kfree_scale_init();
......
@@ -1581,6 +1581,7 @@ rcu_torture_writer(void *arg)
 				    rcu_access_pointer(rcu_torture_current) !=
 				    &rcu_tortures[i]) {
 					tracing_off();
+					show_rcu_gp_kthreads();
 					WARN(1, "%s: rtort_pipe_count: %d\n", __func__, rcu_tortures[i].rtort_pipe_count);
 					rcu_ftrace_dump(DUMP_ALL);
 				}
@@ -1876,7 +1877,7 @@ static int
 rcutorture_extend_mask(int oldmask, struct torture_random_state *trsp)
 {
 	int mask = rcutorture_extend_mask_max();
-	unsigned long randmask1 = torture_random(trsp) >> 8;
+	unsigned long randmask1 = torture_random(trsp);
 	unsigned long randmask2 = randmask1 >> 3;
 	unsigned long preempts = RCUTORTURE_RDR_PREEMPT | RCUTORTURE_RDR_SCHED;
 	unsigned long preempts_irq = preempts | RCUTORTURE_RDR_IRQ;
@@ -1935,7 +1936,7 @@ rcutorture_loop_extend(int *readstate, struct torture_random_state *trsp,
 	if (!((mask - 1) & mask))
 		return rtrsp;  /* Current RCU reader not extendable. */
 	/* Bias towards larger numbers of loops. */
-	i = (torture_random(trsp) >> 3);
+	i = torture_random(trsp);
 	i = ((i | (i >> 3)) & RCUTORTURE_RDR_MAX_LOOPS) + 1;
 	for (j = 0; j < i; j++) {
 		mask = rcutorture_extend_mask(*readstate, trsp);
@@ -2136,7 +2137,7 @@ static int rcu_nocb_toggle(void *arg)
 		toggle_fuzz = NSEC_PER_USEC;
 	do {
 		r = torture_random(&rand);
-		cpu = (r >> 4) % (maxcpu + 1);
+		cpu = (r >> 1) % (maxcpu + 1);
 		if (r & 0x1) {
 			rcu_nocb_cpu_offload(cpu);
 			atomic_long_inc(&n_nocb_offload);
......
@@ -528,6 +528,38 @@ static struct ref_scale_ops clock_ops = {
 	.name		= "clock"
 };

+static void ref_jiffies_section(const int nloops)
+{
+	u64 x = 0;
+	int i;
+
+	preempt_disable();
+	for (i = nloops; i >= 0; i--)
+		x += jiffies;
+	preempt_enable();
+	stopopts = x;
+}
+
+static void ref_jiffies_delay_section(const int nloops, const int udl, const int ndl)
+{
+	u64 x = 0;
+	int i;
+
+	preempt_disable();
+	for (i = nloops; i >= 0; i--) {
+		x += jiffies;
+		un_delay(udl, ndl);
+	}
+	preempt_enable();
+	stopopts = x;
+}
+
+static struct ref_scale_ops jiffies_ops = {
+	.readsection	= ref_jiffies_section,
+	.delaysection	= ref_jiffies_delay_section,
+	.name		= "jiffies"
+};
+
 ////////////////////////////////////////////////////////////////////////
 //
 // Methods leveraging SLAB_TYPESAFE_BY_RCU.
@@ -1047,7 +1079,7 @@ ref_scale_init(void)
 	int firsterr = 0;
 	static struct ref_scale_ops *scale_ops[] = {
 		&rcu_ops, &srcu_ops, RCU_TRACE_OPS RCU_TASKS_OPS &refcnt_ops, &rwlock_ops,
-		&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops,
+		&rwsem_ops, &lock_ops, &lock_irq_ops, &acqrel_ops, &clock_ops, &jiffies_ops,
 		&typesafe_ref_ops, &typesafe_lock_ops, &typesafe_seqlock_ops,
 	};
@@ -1107,12 +1139,11 @@ ref_scale_init(void)
 	VERBOSE_SCALEOUT("Starting %d reader threads", nreaders);

 	for (i = 0; i < nreaders; i++) {
+		init_waitqueue_head(&reader_tasks[i].wq);
 		firsterr = torture_create_kthread(ref_scale_reader, (void *)i,
 						  reader_tasks[i].task);
 		if (torture_init_error(firsterr))
 			goto unwind;
-
-		init_waitqueue_head(&(reader_tasks[i].wq));
 	}

 	// Main Task
......
This diff is collapsed.
@@ -632,7 +632,7 @@ void __rcu_irq_enter_check_tick(void)
 	// prevents self-deadlock.  So we can safely recheck under the lock.
 	// Note that the nohz_full state currently cannot change.
 	raw_spin_lock_rcu_node(rdp->mynode);
-	if (rdp->rcu_urgent_qs && !rdp->rcu_forced_tick) {
+	if (READ_ONCE(rdp->rcu_urgent_qs) && !rdp->rcu_forced_tick) {
 		// A nohz_full CPU is in the kernel and RCU needs a
 		// quiescent state.  Turn on the tick!
 		WRITE_ONCE(rdp->rcu_forced_tick, true);
@@ -677,12 +677,16 @@ static void rcu_disable_urgency_upon_qs(struct rcu_data *rdp)
 }

 /**
- * rcu_is_watching - see if RCU thinks that the current CPU is not idle
+ * rcu_is_watching - RCU read-side critical sections permitted on current CPU?
  *
- * Return true if RCU is watching the running CPU, which means that this
- * CPU can safely enter RCU read-side critical sections.  In other words,
- * if the current CPU is not in its idle loop or is in an interrupt or
- * NMI handler, return true.
+ * Return @true if RCU is watching the running CPU and @false otherwise.
+ * An @true return means that this CPU can safely enter RCU read-side
+ * critical sections.
+ *
+ * Although calls to rcu_is_watching() from most parts of the kernel
+ * will return @true, there are important exceptions.  For example, if the
+ * current CPU is deep within its idle loop, in kernel entry/exit code,
+ * or offline, rcu_is_watching() will return @false.
  *
  * Make notrace because it can be called by the internal functions of
  * ftrace, and making this notrace removes unnecessary recursion calls.
......
@@ -77,9 +77,9 @@ __setup("rcu_nocbs", rcu_nocb_setup);
 static int __init parse_rcu_nocb_poll(char *arg)
 {
 	rcu_nocb_poll = true;
-	return 0;
+	return 1;
 }
-early_param("rcu_nocb_poll", parse_rcu_nocb_poll);
+__setup("rcu_nocb_poll", parse_rcu_nocb_poll);

 /*
  * Don't bother bypassing ->cblist if the call_rcu() rate is low.
......
@@ -37,6 +37,7 @@
 #include <linux/ktime.h>
 #include <asm/byteorder.h>
 #include <linux/torture.h>
+#include <linux/sched/rt.h>
 #include "rcu/rcu.h"

 MODULE_LICENSE("GPL");
@@ -54,6 +55,9 @@ module_param(verbose_sleep_frequency, int, 0444);
 static int verbose_sleep_duration = 1;
 module_param(verbose_sleep_duration, int, 0444);

+static int random_shuffle;
+module_param(random_shuffle, int, 0444);
+
 static char *torture_type;
 static int verbose;
@@ -88,8 +92,8 @@ int torture_hrtimeout_ns(ktime_t baset_ns, u32 fuzzt_ns, struct torture_random_s
 	ktime_t hto = baset_ns;

 	if (trsp)
-		hto += (torture_random(trsp) >> 3) % fuzzt_ns;
-	set_current_state(TASK_UNINTERRUPTIBLE);
+		hto += torture_random(trsp) % fuzzt_ns;
+	set_current_state(TASK_IDLE);
 	return schedule_hrtimeout(&hto, HRTIMER_MODE_REL);
 }
 EXPORT_SYMBOL_GPL(torture_hrtimeout_ns);
@@ -350,22 +354,22 @@ torture_onoff(void *arg)
 	if (onoff_holdoff > 0) {
 		VERBOSE_TOROUT_STRING("torture_onoff begin holdoff");
-		schedule_timeout_interruptible(onoff_holdoff);
+		torture_hrtimeout_jiffies(onoff_holdoff, &rand);
 		VERBOSE_TOROUT_STRING("torture_onoff end holdoff");
 	}
 	while (!torture_must_stop()) {
 		if (disable_onoff_at_boot && !rcu_inkernel_boot_has_ended()) {
-			schedule_timeout_interruptible(HZ / 10);
+			torture_hrtimeout_jiffies(HZ / 10, &rand);
 			continue;
 		}
-		cpu = (torture_random(&rand) >> 4) % (maxcpu + 1);
+		cpu = torture_random(&rand) % (maxcpu + 1);
 		if (!torture_offline(cpu,
 				     &n_offline_attempts, &n_offline_successes,
 				     &sum_offline, &min_offline, &max_offline))
 			torture_online(cpu,
 				       &n_online_attempts, &n_online_successes,
 				       &sum_online, &min_online, &max_online);
-		schedule_timeout_interruptible(onoff_interval);
+		torture_hrtimeout_jiffies(onoff_interval, &rand);
 	}

 stop:
@@ -518,6 +522,7 @@ static void torture_shuffle_task_unregister_all(void)
  */
 static void torture_shuffle_tasks(void)
 {
+	DEFINE_TORTURE_RANDOM(rand);
 	struct shuffle_task *stp;

 	cpumask_setall(shuffle_tmp_mask);
@@ -537,8 +542,10 @@ static void torture_shuffle_tasks(void)
 		cpumask_clear_cpu(shuffle_idle_cpu, shuffle_tmp_mask);

 	mutex_lock(&shuffle_task_mutex);
-	list_for_each_entry(stp, &shuffle_task_list, st_l)
-		set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask);
+	list_for_each_entry(stp, &shuffle_task_list, st_l) {
+		if (!random_shuffle || torture_random(&rand) & 0x1)
+			set_cpus_allowed_ptr(stp->st_t, shuffle_tmp_mask);
+	}
 	mutex_unlock(&shuffle_task_mutex);

 	cpus_read_unlock();
@@ -550,9 +557,11 @@ static void torture_shuffle_tasks(void)
  */
 static int torture_shuffle(void *arg)
 {
+	DEFINE_TORTURE_RANDOM(rand);
+
 	VERBOSE_TOROUT_STRING("torture_shuffle task started");
 	do {
-		schedule_timeout_interruptible(shuffle_interval);
+		torture_hrtimeout_jiffies(shuffle_interval, &rand);
 		torture_shuffle_tasks();
 		torture_shutdown_absorb("torture_shuffle");
 	} while (!torture_must_stop());
@@ -728,12 +737,12 @@ bool stutter_wait(const char *title)
 	cond_resched_tasks_rcu_qs();
 	spt = READ_ONCE(stutter_pause_test);
 	for (; spt; spt = READ_ONCE(stutter_pause_test)) {
-		if (!ret) {
+		if (!ret && !rt_task(current)) {
 			sched_set_normal(current, MAX_NICE);
 			ret = true;
 		}
 		if (spt == 1) {
-			schedule_timeout_interruptible(1);
+			torture_hrtimeout_jiffies(1, NULL);
 		} else if (spt == 2) {
 			while (READ_ONCE(stutter_pause_test)) {
 				if (!(i++ & 0xffff))
@@ -741,7 +750,7 @@ bool stutter_wait(const char *title)
 					cond_resched();
 			}
 		} else {
-			schedule_timeout_interruptible(round_jiffies_relative(HZ));
+			torture_hrtimeout_jiffies(round_jiffies_relative(HZ), NULL);
 		}
 		torture_shutdown_absorb(title);
 	}
@@ -926,7 +935,7 @@ EXPORT_SYMBOL_GPL(torture_kthread_stopping);
  * it starts, you will need to open-code your own.
  */
 int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
-			    char *f, struct task_struct **tp)
+			    char *f, struct task_struct **tp, void (*cbf)(struct task_struct *tp))
 {
 	int ret = 0;
@@ -938,6 +947,10 @@ int _torture_create_kthread(int (*fn)(void *arg), void *arg, char *s, char *m,
 		*tp = NULL;
 		return ret;
 	}
+
+	if (cbf)
+		cbf(*tp);
+
 	wake_up_process(*tp);  // Process is sleeping, so ordering provided.
 	torture_shuffle_task_register(*tp);
 	return ret;
......
@@ -7457,6 +7457,30 @@ sub process {
 		}
 	}

+# Complain about RCU Tasks Trace used outside of BPF (and of course, RCU).
+		our $rcu_trace_funcs = qr{(?x:
+			rcu_read_lock_trace |
+			rcu_read_lock_trace_held |
+			rcu_read_unlock_trace |
+			call_rcu_tasks_trace |
+			synchronize_rcu_tasks_trace |
+			rcu_barrier_tasks_trace |
+			rcu_request_urgent_qs_task
+		)};
+		our $rcu_trace_paths = qr{(?x:
+			kernel/bpf/ |
+			include/linux/bpf |
+			net/bpf/ |
+			kernel/rcu/ |
+			include/linux/rcu
+		)};
+		if ($line =~ /\b($rcu_trace_funcs)\s*\(/) {
+			if ($realfile !~ m{^$rcu_trace_paths}) {
+				WARN("RCU_TASKS_TRACE",
+				     "use of RCU tasks trace is incorrect outside BPF or core RCU code\n" . $herecurr);
+			}
+		}
+
 # check for lockdep_set_novalidate_class
 		if ($line =~ /^.\s*lockdep_set_novalidate_class\s*\(/ ||
 		    $line =~ /__lockdep_no_validate__\s*\)/ ) {
......
@@ -3,6 +3,8 @@
 #
 # Usage: configcheck.sh .config .config-template
 #
+# Non-empty output if errors detected.
+#
 # Copyright (C) IBM Corporation, 2011
 #
 # Authors: Paul E. McKenney <paulmck@linux.ibm.com>
@@ -10,32 +12,35 @@
 T="`mktemp -d ${TMPDIR-/tmp}/configcheck.sh.XXXXXX`"
 trap 'rm -rf $T' 0

-sed -e 's/"//g' < $1 > $T/.config
-
-sed -e 's/"//g' -e 's/\(.*\)=n/# \1 is not set/' -e 's/^#CHECK#//' < $2 |
-awk	'
-{
-		print "if grep -q \"" $0 "\" < '"$T/.config"'";
-		print "then";
-		print "\t:";
-		print "else";
-		if ($1 == "#") {
-			print "\tif grep -q \"" $2 "\" < '"$T/.config"'";
-			print "\tthen";
-			print "\t\tif test \"$firsttime\" = \"\""
-			print "\t\tthen"
-			print "\t\t\tfirsttime=1"
-			print "\t\tfi"
-			print "\t\techo \":" $2 ": improperly set\"";
-			print "\telse";
-			print "\t\t:";
-			print "\tfi";
-		} else {
-			print "\tif test \"$firsttime\" = \"\""
-			print "\tthen"
-			print "\t\tfirsttime=1"
-			print "\tfi"
-			print "\techo \":" $0 ": improperly set\"";
-		}
-		print "fi";
-	}' | sh
+# function test_kconfig_enabled ( Kconfig-var=val )
+function test_kconfig_enabled () {
+	if ! grep -q "^$1$" $T/.config
+	then
+		echo :$1: improperly set
+		return 1
+	fi
+	return 0
+}
+
+# function test_kconfig_disabled ( Kconfig-var )
+function test_kconfig_disabled () {
+	if grep -q "^$1=n$" $T/.config
+	then
+		return 0
+	fi
+	if grep -q "^$1=" $T/.config
+	then
+		echo :$1=n: improperly set
+		return 1
+	fi
+	return 0
+}
+
+sed -e 's/"//g' < $1 > $T/.config
+sed -e 's/^#CHECK#//' < $2 > $T/ConfigFragment
+grep '^CONFIG_.*=n$' $T/ConfigFragment |
+	sed -e 's/^/test_kconfig_disabled /' -e 's/=n$//' > $T/kconfig-n.sh
+. $T/kconfig-n.sh
+grep -v '^CONFIG_.*=n$' $T/ConfigFragment | grep '^CONFIG_' |
+	sed -e 's/^/test_kconfig_enabled /' > $T/kconfig-not-n.sh
+. $T/kconfig-not-n.sh
@@ -45,7 +45,7 @@ checkarg () {
 configfrag_boot_params () {
 	if test -r "$2.boot"
 	then
-		echo $1 `grep -v '^#' "$2.boot" | tr '\012' ' '`
+		echo `grep -v '^#' "$2.boot" | tr '\012' ' '` $1
 	else
 		echo $1
 	fi
......
@@ -40,6 +40,10 @@ awk '
 	sum += $5 / 1000.;
 }

+/rcu_scale: Grace-period kthread CPU time/ {
+	cputime = $6;
+}
+
 END {
 	newNR = asort(gptimes);
 	if (newNR <= 0) {
@@ -78,6 +82,8 @@ END {
 	print "90th percentile grace-period duration: " gptimes[pct90];
 	print "99th percentile grace-period duration: " gptimes[pct99];
 	print "Maximum grace-period duration: " gptimes[newNR];
-	print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches;
+	if (cputime != "")
+		cpustr = " CPU: " cputime;
+	print "Grace periods: " ngps + 0 " Batches: " nbatches + 0 " Ratio: " ngps / nbatches cpustr;
 	print "Computed from rcuscale printk output.";
 }'
@@ -16,6 +16,8 @@
 T=/tmp/kvm-recheck.sh.$$
 trap 'rm -f $T' 0 2

+configerrors=0
+
 PATH=`pwd`/tools/testing/selftests/rcutorture/bin:$PATH; export PATH
 . functions.sh
 for rd in "$@"
@@ -32,7 +34,7 @@ do
 		fi
 		TORTURE_SUITE="`cat $i/../torture_suite`" ; export TORTURE_SUITE
 		configfile=`echo $i | sed -e 's,^.*/,,'`
-		rm -f $i/console.log.*.diags
+		rm -f $i/console.log.*.diags $i/ConfigFragment.diags
 		case "${TORTURE_SUITE}" in
 		X*)
 			;;
@@ -49,8 +51,21 @@ do
 			then
 				echo QEMU killed
 			fi
-			configcheck.sh $i/.config $i/ConfigFragment > $T 2>&1
-			cat $T
+			configcheck.sh $i/.config $i/ConfigFragment > $i/ConfigFragment.diags 2>&1
+			if grep -q '^CONFIG_KCSAN=y$' $i/ConfigFragment.input
+			then
+				# KCSAN forces a number of Kconfig options, so remove
+				# complaints about those Kconfig options in KCSAN runs.
+				mv $i/ConfigFragment.diags $i/ConfigFragment.diags.kcsan
+				grep -v -E 'CONFIG_PROVE_RCU|CONFIG_PREEMPT_COUNT' $i/ConfigFragment.diags.kcsan > $i/ConfigFragment.diags
+			fi
+			if test -s $i/ConfigFragment.diags
+			then
+				cat $i/ConfigFragment.diags
+				configerrors=$((configerrors+1))
+			else
+				rm $i/ConfigFragment.diags
+			fi
 			if test -r $i/Make.oldconfig.err
 			then
 				cat $i/Make.oldconfig.err
@@ -65,7 +80,14 @@ do
 		if test -f "$i/buildonly"
 		then
 			echo Build-only run, no boot/test
-			configcheck.sh $i/.config $i/ConfigFragment
+			configcheck.sh $i/.config $i/ConfigFragment > $i/ConfigFragment.diags 2>&1
+			if test -s $i/ConfigFragment.diags
+			then
+				cat $i/ConfigFragment.diags
+				configerrors=$((configerrors+1))
+			else
+				rm $i/ConfigFragment.diags
+			fi
 			parse-build.sh $i/Make.out $configfile
 		elif test -f "$i/qemu-cmd"
 		then
@@ -79,10 +101,10 @@ do
 	done
 	if test -f "$rd/kcsan.sum"
 	then
-		if ! test -f $T
+		if ! test -f $i/ConfigFragment.diags
 		then
 			:
-		elif grep -q CONFIG_KCSAN=y $T
+		elif grep -q CONFIG_KCSAN=y $i/ConfigFragment.diags
 		then
 			echo "Compiler or architecture does not support KCSAN!"
 			echo Did you forget to switch your compiler with '--kmake-arg CC=<cc-that-supports-kcsan>'?
@@ -94,17 +116,23 @@ do
 		fi
 	fi
 done
+
+if test "$configerrors" -gt 0
+then
+	echo $configerrors runs with .config errors.
+	ret=1
+fi
 EDITOR=echo kvm-find-errors.sh "${@: -1}" > $T 2>&1
 builderrors="`tr ' ' '\012' < $T | grep -c '/Make.out.diags'`"
 if test "$builderrors" -gt 0
 then
 	echo $builderrors runs with build errors.
-	ret=1
+	ret=2
 fi
 runerrors="`tr ' ' '\012' < $T | grep -c '/console.log.diags'`"
 if test "$runerrors" -gt 0
 then
 	echo $runerrors runs with runtime errors.
-	ret=2
+	ret=3
 fi
 exit $ret
@@ -137,14 +137,20 @@ chmod +x $T/bin/kvm-remote-*.sh
 # Check first to avoid the need for cleanup for system-name typos
 for i in $systems
 do
-	ncpus="`ssh -o BatchMode=yes $i getconf _NPROCESSORS_ONLN 2> /dev/null`"
+	ssh -o BatchMode=yes $i getconf _NPROCESSORS_ONLN > $T/ssh.stdout 2> $T/ssh.stderr
 	ret=$?
 	if test "$ret" -ne 0
 	then
-		echo System $i unreachable, giving up. | tee -a "$oldrun/remote-log"
+		echo "System $i unreachable ($ret), giving up." | tee -a "$oldrun/remote-log"
+		echo ' --- ssh stdout: vvv' | tee -a "$oldrun/remote-log"
+		cat $T/ssh.stdout | tee -a "$oldrun/remote-log"
+		echo ' --- ssh stdout: ^^^' | tee -a "$oldrun/remote-log"
+		echo ' --- ssh stderr: vvv' | tee -a "$oldrun/remote-log"
+		cat $T/ssh.stderr | tee -a "$oldrun/remote-log"
+		echo ' --- ssh stderr: ^^^' | tee -a "$oldrun/remote-log"
 		exit 4
 	fi
-	echo $i: $ncpus CPUs " " `date` | tee -a "$oldrun/remote-log"
+	echo $i: `cat $T/ssh.stdout` CPUs " " `date` | tee -a "$oldrun/remote-log"
 done

 # Download and expand the tarball on all systems.
......
@@ -9,9 +9,10 @@
 #
 # Usage: kvm-test-1-run.sh config resdir seconds qemu-args boot_args_in
 #
-# qemu-args defaults to "-enable-kvm -nographic", along with arguments
-#			specifying the number of CPUs and other options
-#			generated from the underlying CPU architecture.
+# qemu-args defaults to "-enable-kvm -display none -no-reboot", along
+#			with arguments specifying the number of CPUs
+#			and other options generated from the underlying
+#			CPU architecture.
 # boot_args_in defaults to value returned by the per_version_boot_params
 #			shell function.
 #
@@ -57,7 +58,6 @@ config_override_param () {
 		cat $T/Kconfig_args >> $resdir/ConfigFragment.input
 		config_override.sh $T/$2 $T/Kconfig_args > $T/$2.tmp
 		mv $T/$2.tmp $T/$2
-		# Note that "#CHECK#" is not permitted on commandline.
 	fi
 }
@@ -140,7 +140,7 @@ then
 fi

 # Generate -smp qemu argument.
-qemu_args="-enable-kvm -nographic $qemu_args"
+qemu_args="-enable-kvm -display none -no-reboot $qemu_args"
 cpu_count=`configNR_CPUS.sh $resdir/ConfigFragment`
 cpu_count=`configfrag_boot_cpus "$boot_args_in" "$config_template" "$cpu_count"`
 if test "$cpu_count" -gt "$TORTURE_ALLOTED_CPUS"
@@ -163,7 +163,7 @@ boot_args="`configfrag_boot_params "$boot_args_in" "$config_template"`"
 boot_args="`per_version_boot_params "$boot_args" $resdir/.config $seconds`"
 if test -n "$TORTURE_BOOT_GDB_ARG"
 then
-	boot_args="$boot_args $TORTURE_BOOT_GDB_ARG"
+	boot_args="$TORTURE_BOOT_GDB_ARG $boot_args"
 fi

 # Give bare-metal advice
......
@@ -186,7 +186,7 @@ do
 		fi
 		;;
 	--kconfig|--kconfigs)
-		checkarg --kconfig "(Kconfig options)" $# "$2" '^CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\( CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\)*$' '^error$'
+		checkarg --kconfig "(Kconfig options)" $# "$2" '^\(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\( \(#CHECK#\)\?CONFIG_[A-Z0-9_]\+=\([ynm]\|[0-9]\+\|"[^"]*"\)\)*$' '^error$'
 		TORTURE_KCONFIG_ARG="`echo "$TORTURE_KCONFIG_ARG $2" | sed -e 's/^ *//' -e 's/ *$//'`"
 		shift
 		;;
......
@@ -10,7 +10,6 @@
 D=tools/testing/selftests/rcutorture

 # Prerequisite checks
-[ -z "$D" ] && echo >&2 "No argument supplied" && exit 1
 if [ ! -d "$D" ]; then
 	echo >&2 "$D does not exist: Malformed kernel source tree?"
 	exit 1
@@ -34,12 +33,16 @@ cat > init.c << '___EOF___'

 volatile unsigned long delaycount;

-int main(int argc, int argv[])
+int main(int argc, char *argv[])
 {
 	int i;
 	struct timeval tv;
 	struct timeval tvb;

+	printf("Torture-test rudimentary init program started, command line:\n");
+	for (i = 0; i < argc; i++)
+		printf(" %s", argv[i]);
+	printf("\n");
 	for (;;) {
 		sleep(1);
 		/* Need some userspace time. */
@@ -64,15 +67,23 @@ ___EOF___
 # build using nolibc on supported archs (smaller executable) and fall
 # back to regular glibc on other ones.
 if echo -e "#if __x86_64__||__i386__||__i486__||__i586__||__i686__" \
-	  "||__ARM_EABI__||__aarch64__||__s390x__\nyes\n#endif" \
+	  "||__ARM_EABI__||__aarch64__||__s390x__||__loongarch__\nyes\n#endif" \
    | ${CROSS_COMPILE}gcc -E -nostdlib -xc - \
    | grep -q '^yes'; then
 	# architecture supported by nolibc
 	${CROSS_COMPILE}gcc -fno-asynchronous-unwind-tables -fno-ident \
 		-nostdlib -include ../../../../include/nolibc/nolibc.h \
 		-s -static -Os -o init init.c -lgcc
+	ret=$?
 else
 	${CROSS_COMPILE}gcc -s -static -Os -o init init.c
+	ret=$?
+fi
+if [ "$ret" -ne 0 ]
+then
+	echo "Failed to create a statically linked C-language initrd"
+	exit "$ret"
 fi
 rm init.c
......
...@@ -55,6 +55,8 @@ do_kasan=yes ...@@ -55,6 +55,8 @@ do_kasan=yes
do_kcsan=no do_kcsan=no
do_clocksourcewd=yes do_clocksourcewd=yes
do_rt=yes do_rt=yes
do_rcutasksflavors=yes
do_srcu_lockdep=yes
# doyesno - Helper function for yes/no arguments # doyesno - Helper function for yes/no arguments
function doyesno () { function doyesno () {
...@@ -73,18 +75,20 @@ usage () { ...@@ -73,18 +75,20 @@ usage () {
echo " --configs-locktorture \"config-file list w/ repeat factor (10*LOCK01)\"" echo " --configs-locktorture \"config-file list w/ repeat factor (10*LOCK01)\""
echo " --configs-scftorture \"config-file list w/ repeat factor (2*CFLIST)\"" echo " --configs-scftorture \"config-file list w/ repeat factor (2*CFLIST)\""
echo " --do-all" echo " --do-all"
echo " --do-allmodconfig / --do-no-allmodconfig" echo " --do-allmodconfig / --do-no-allmodconfig / --no-allmodconfig"
echo " --do-clocksourcewd / --do-no-clocksourcewd" echo " --do-clocksourcewd / --do-no-clocksourcewd / --no-clocksourcewd"
echo " --do-kasan / --do-no-kasan" echo " --do-kasan / --do-no-kasan / --no-kasan"
echo " --do-kcsan / --do-no-kcsan" echo " --do-kcsan / --do-no-kcsan / --no-kcsan"
echo " --do-kvfree / --do-no-kvfree" echo " --do-kvfree / --do-no-kvfree / --no-kvfree"
echo " --do-locktorture / --do-no-locktorture" echo " --do-locktorture / --do-no-locktorture / --no-locktorture"
echo " --do-none" echo " --do-none"
echo " --do-rcuscale / --do-no-rcuscale" echo " --do-rcuscale / --do-no-rcuscale / --no-rcuscale"
echo " --do-rcutorture / --do-no-rcutorture" echo " --do-rcutasksflavors / --do-no-rcutasksflavors / --no-rcutasksflavors"
echo " --do-refscale / --do-no-refscale" echo " --do-rcutorture / --do-no-rcutorture / --no-rcutorture"
echo " --do-rt / --do-no-rt" echo " --do-refscale / --do-no-refscale / --no-refscale"
echo " --do-scftorture / --do-no-scftorture" echo " --do-rt / --do-no-rt / --no-rt"
echo " --do-scftorture / --do-no-scftorture / --no-scftorture"
echo " --do-srcu-lockdep / --do-no-srcu-lockdep / --no-srcu-lockdep"
echo " --duration [ <minutes> | <hours>h | <days>d ]" echo " --duration [ <minutes> | <hours>h | <days>d ]"
echo " --kcsan-kmake-arg kernel-make-arguments" echo " --kcsan-kmake-arg kernel-make-arguments"
exit 1 exit 1
@@ -115,6 +119,7 @@ do
;;
--do-all|--doall)
do_allmodconfig=yes
do_rcutasksflavors=yes
do_rcutorture=yes
do_locktorture=yes
do_scftorture=yes
@@ -125,27 +130,29 @@ do
do_kasan=yes
do_kcsan=yes
do_clocksourcewd=yes
do_srcu_lockdep=yes
;;
--do-allmodconfig|--do-no-allmodconfig)
--do-allmodconfig|--do-no-allmodconfig|--no-allmodconfig)
do_allmodconfig=`doyesno "$1" --do-allmodconfig`
;;
--do-clocksourcewd|--do-no-clocksourcewd)
--do-clocksourcewd|--do-no-clocksourcewd|--no-clocksourcewd)
do_clocksourcewd=`doyesno "$1" --do-clocksourcewd`
;;
--do-kasan|--do-no-kasan)
--do-kasan|--do-no-kasan|--no-kasan)
do_kasan=`doyesno "$1" --do-kasan`
;;
--do-kcsan|--do-no-kcsan)
--do-kcsan|--do-no-kcsan|--no-kcsan)
do_kcsan=`doyesno "$1" --do-kcsan`
;;
--do-kvfree|--do-no-kvfree)
--do-kvfree|--do-no-kvfree|--no-kvfree)
do_kvfree=`doyesno "$1" --do-kvfree`
;;
--do-locktorture|--do-no-locktorture)
--do-locktorture|--do-no-locktorture|--no-locktorture)
do_locktorture=`doyesno "$1" --do-locktorture`
;;
--do-none|--donone)
do_allmodconfig=no
do_rcutasksflavors=no
do_rcutorture=no
do_locktorture=no
do_scftorture=no
@@ -156,22 +163,29 @@ do
do_kasan=no
do_kcsan=no
do_clocksourcewd=no
do_srcu_lockdep=no
;;
--do-rcuscale|--do-no-rcuscale)
--do-rcuscale|--do-no-rcuscale|--no-rcuscale)
do_rcuscale=`doyesno "$1" --do-rcuscale`
;;
--do-rcutasksflavors|--do-no-rcutasksflavors|--no-rcutasksflavors)
do_rcutasksflavors=`doyesno "$1" --do-rcutasksflavors`
;;
--do-rcutorture|--do-no-rcutorture)
--do-rcutorture|--do-no-rcutorture|--no-rcutorture)
do_rcutorture=`doyesno "$1" --do-rcutorture`
;;
--do-refscale|--do-no-refscale)
--do-refscale|--do-no-refscale|--no-refscale)
do_refscale=`doyesno "$1" --do-refscale`
;;
--do-rt|--do-no-rt)
--do-rt|--do-no-rt|--no-rt)
do_rt=`doyesno "$1" --do-rt`
;;
--do-scftorture|--do-no-scftorture)
--do-scftorture|--do-no-scftorture|--no-scftorture)
do_scftorture=`doyesno "$1" --do-scftorture`
;;
--do-srcu-lockdep|--do-no-srcu-lockdep|--no-srcu-lockdep)
do_srcu_lockdep=`doyesno "$1" --do-srcu-lockdep`
;;
--duration)
checkarg --duration "(minutes)" $# "$2" '^[0-9][0-9]*\(m\|h\|d\|\)$' '^error'
mult=1
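The checkarg pattern above accepts a bare minute count or an m/h/d suffix; the code that scales `mult` for the hour and day cases lies outside this hunk. A hedged sketch of the conversion it presumably performs (the helper name is made up; the real script does this inline):

```shell
# Illustrative converter: minutes by default, hours and days
# scaled into minutes, mirroring torture.sh's --duration handling.
duration_to_minutes () {
	arg="$1"
	mult=1
	case "$arg" in
	*h) mult=60; arg="${arg%h}" ;;
	*d) mult=$((60 * 24)); arg="${arg%d}" ;;
	*m) arg="${arg%m}" ;;
	esac
	echo $((arg * mult))
}

duration_to_minutes 2h   # -> 120
```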
@@ -361,6 +375,40 @@ then
fi
fi
# Test building RCU Tasks flavors in isolation, both SMP and !SMP
if test "$do_rcutasksflavors" = "yes"
then
echo " --- rcutasksflavors:" Start `date` | tee -a $T/log
rtfdir="tools/testing/selftests/rcutorture/res/$ds/results-rcutasksflavors"
mkdir -p "$rtfdir"
cat > $T/rcutasksflavors << __EOF__
#CHECK#CONFIG_TASKS_RCU=n
#CHECK#CONFIG_TASKS_RUDE_RCU=n
#CHECK#CONFIG_TASKS_TRACE_RCU=n
__EOF__
for flavor in CONFIG_TASKS_RCU CONFIG_TASKS_RUDE_RCU CONFIG_TASKS_TRACE_RCU
do
forceflavor="`echo $flavor | sed -e 's/^CONFIG/CONFIG_FORCE/'`"
deselectedflavors="`grep -v $flavor $T/rcutasksflavors | tr '\012' ' ' | tr -s ' ' | sed -e 's/ *$//'`"
echo " --- Running RCU Tasks flavor $flavor `date`" >> $rtfdir/log
tools/testing/selftests/rcutorture/bin/kvm.sh --datestamp "$ds/results-rcutasksflavors/$flavor" --buildonly --configs "TINY01 TREE04" --kconfig "CONFIG_RCU_EXPERT=y CONFIG_RCU_SCALE_TEST=y $forceflavor=y $deselectedflavors" --trust-make > $T/$flavor.out 2>&1
retcode=$?
if test "$retcode" -ne 0
then
break
fi
done
if test "$retcode" -eq 0
then
echo "rcutasksflavors($retcode)" $rtfdir >> $T/successes
echo Success >> $rtfdir/log
else
echo "rcutasksflavors($retcode)" $rtfdir >> $T/failures
echo " --- rcutasksflavors Test summary:" >> $rtfdir/log
echo " --- Summary: Exit code $retcode from $flavor, see Make.out" >> $rtfdir/log
fi
fi
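The forceflavor assignment in the loop above rewrites each Kconfig option into its CONFIG_FORCE_ counterpart via sed; the transform can be checked in isolation:

```shell
# Reproduce the sed expression used by the rcutasksflavors loop:
# a prefix substitution CONFIG_ -> CONFIG_FORCE_.
for flavor in CONFIG_TASKS_RCU CONFIG_TASKS_RUDE_RCU CONFIG_TASKS_TRACE_RCU
do
	forceflavor="`echo $flavor | sed -e 's/^CONFIG/CONFIG_FORCE/'`"
	echo "$flavor -> $forceflavor"
done
# -> CONFIG_TASKS_RCU -> CONFIG_FORCE_TASKS_RCU  (and so on)
```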
# --torture rcu
if test "$do_rcutorture" = "yes"
then
@@ -391,6 +439,23 @@ then
torture_set "rcurttorture-exp" tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration "$duration_rcutorture" --configs "TREE03" --trust-make
fi
if test "$do_srcu_lockdep" = "yes"
then
echo " --- do-srcu-lockdep:" Start `date` | tee -a $T/log
tools/testing/selftests/rcutorture/bin/srcu_lockdep.sh --datestamp "$ds/results-srcu-lockdep" > $T/srcu_lockdep.sh.out 2>&1
retcode=$?
cp $T/srcu_lockdep.sh.out "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
if test "$retcode" -eq 0
then
echo "srcu_lockdep($retcode)" "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep" >> $T/successes
echo Success >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
else
echo "srcu_lockdep($retcode)" "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep" >> $T/failures
echo " --- srcu_lockdep Test Summary:" >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
echo " --- Summary: Exit code $retcode from srcu_lockdep.sh, see $ds/results-srcu-lockdep" >> "tools/testing/selftests/rcutorture/res/$ds/results-srcu-lockdep/log"
fi
fi
if test "$do_refscale" = yes
then
primlist="`grep '\.name[ ]*=' kernel/rcu/refscale.c | sed -e 's/^[^"]*"//' -e 's/".*$//'`"
@@ -541,11 +606,23 @@ then
fi
echo Started at $startdate, ended at `date`, duration `get_starttime_duration $starttime`. | tee -a $T/log
echo Summary: Successes: $nsuccesses Failures: $nfailures. | tee -a $T/log
tdir="`cat $T/successes $T/failures | head -1 | awk '{ print $NF }' | sed -e 's,/[^/]\+/*$,,'`"
find "$tdir" -name 'ConfigFragment.diags' -print > $T/configerrors
find "$tdir" -name 'Make.out.diags' -print > $T/builderrors
if test -s "$T/configerrors"
then
echo " Scenarios with .config errors: `wc -l "$T/configerrors" | awk '{ print $1 }'`"
nonkcsanbug="yes"
fi
if test -s "$T/builderrors"
then
echo " Scenarios with build errors: `wc -l "$T/builderrors" | awk '{ print $1 }'`"
nonkcsanbug="yes"
fi
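The tdir computation above takes the final whitespace-separated field of the first line in the successes/failures files and strips its last path component with sed, recovering the per-run results directory. With a made-up results line:

```shell
# Lines in $T/successes end in a per-scenario results directory;
# strip the scenario name to get the per-run directory.  The path
# here is illustrative only.
line='rcutorture(0) tools/testing/selftests/rcutorture/res/2023.08.21a/TREE03'
tdir="`echo "$line" | awk '{ print $NF }' | sed -e 's,/[^/]\+/*$,,'`"
echo "$tdir"   # -> tools/testing/selftests/rcutorture/res/2023.08.21a
```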
if test -z "$nonkcsanbug" && test -s "$T/failuresum"
then
echo " All bugs were KCSAN failures."
fi
tdir="`cat $T/successes $T/failures | head -1 | awk '{ print $NF }' | sed -e 's,/[^/]\+/*$,,'`"
if test -n "$tdir" && test $compress_concurrency -gt 0
then
# KASAN vmlinux files can approach 1GB in size, so compress them.
...
@@ -22,8 +22,9 @@ locktorture_param_onoff () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 `locktorture_param_onoff "$1" "$2"` \
echo `locktorture_param_onoff "$1" "$2"` \
locktorture.stat_interval=15 \
locktorture.shutdown_secs=$3 \
locktorture.verbose=1
locktorture.verbose=1 \
$1
}
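Moving `$1` (the boot parameters handed in by the caller) from the front of the echo to the end changes their position relative to the script-supplied defaults. Since a later occurrence of a module parameter on the kernel boot command line generally overrides an earlier one, this presumably lets caller-specified values such as locktorture.verbose=0 win over the defaults. A simplified before/after comparison (the function names and parameter mix are illustrative):

```shell
# Old ordering: caller's parameters first, defaults last.
old_params () {
	echo $1 locktorture.stat_interval=15 locktorture.verbose=1
}
# New ordering: defaults first, caller's parameters last.
new_params () {
	echo locktorture.stat_interval=15 locktorture.verbose=1 $1
}

old_params locktorture.verbose=0   # verbose=1 ends up last
new_params locktorture.verbose=0   # verbose=0 ends up last
```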
@@ -6,6 +6,5 @@ CONFIG_PREEMPT=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=n
CONFIG_NO_HZ_FULL=y
#CHECK#CONFIG_RCU_EXPERT=n
CONFIG_TASKS_RCU=y
CONFIG_RCU_EXPERT=y
@@ -15,4 +15,3 @@ CONFIG_DEBUG_LOCK_ALLOC=n
CONFIG_RCU_BOOST=n
CONFIG_DEBUG_OBJECTS_RCU_HEAD=n
CONFIG_RCU_EXPERT=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
@@ -46,10 +46,11 @@ rcutorture_param_stat_interval () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 `rcutorture_param_onoff "$1" "$2"` \
echo `rcutorture_param_onoff "$1" "$2"` \
`rcutorture_param_n_barrier_cbs "$1"` \
`rcutorture_param_stat_interval "$1"` \
rcutorture.shutdown_secs=$3 \
rcutorture.test_no_idle_hz=1 \
rcutorture.verbose=1
rcutorture.verbose=1 \
$1
}
@@ -2,5 +2,7 @@ CONFIG_RCU_SCALE_TEST=y
CONFIG_PRINTK_TIME=y
CONFIG_FORCE_TASKS_RCU=y
#CHECK#CONFIG_TASKS_RCU=y
CONFIG_FORCE_TASKS_RUDE_RCU=y
#CHECK#CONFIG_TASKS_RUDE_RCU=y
CONFIG_FORCE_TASKS_TRACE_RCU=y
#CHECK#CONFIG_TASKS_TRACE_RCU=y
@@ -2,6 +2,8 @@ CONFIG_SMP=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PREEMPT_DYNAMIC=n
#CHECK#CONFIG_TREE_RCU=y
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
CONFIG_NO_HZ_FULL=n
...
@@ -11,6 +11,7 @@
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 rcuscale.shutdown=1 \
echo rcuscale.shutdown=1 \
rcuscale.verbose=0
rcuscale.verbose=0 \
$1
}
@@ -2,6 +2,7 @@ CONFIG_SMP=y
CONFIG_PREEMPT_NONE=y
CONFIG_PREEMPT_VOLUNTARY=n
CONFIG_PREEMPT=n
CONFIG_PREEMPT_DYNAMIC=n
#CHECK#CONFIG_PREEMPT_RCU=n
CONFIG_HZ_PERIODIC=n
CONFIG_NO_HZ_IDLE=y
...
@@ -11,6 +11,7 @@
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 refscale.shutdown=1 \
echo refscale.shutdown=1 \
refscale.verbose=0
refscale.verbose=0 \
$1
}
@@ -22,8 +22,9 @@ scftorture_param_onoff () {
#
# Adds per-version torture-module parameters to kernels supporting them.
per_version_boot_params () {
echo $1 `scftorture_param_onoff "$1" "$2"` \
echo `scftorture_param_onoff "$1" "$2"` \
scftorture.stat_interval=15 \
scftorture.shutdown_secs=$3 \
scftorture.verbose=1
scftorture.verbose=1 \
$1
}
# SPDX-License-Identifier: GPL-2.0
all: srcu.c store_buffering
LINUX_SOURCE = ../../../../../..
modified_srcu_input = $(LINUX_SOURCE)/include/linux/srcu.h \
$(LINUX_SOURCE)/kernel/rcu/srcu.c
modified_srcu_output = include/linux/srcu.h srcu.c
include/linux/srcu.h: srcu.c
srcu.c: modify_srcu.awk Makefile $(modified_srcu_input)
awk -f modify_srcu.awk $(modified_srcu_input) $(modified_srcu_output)
store_buffering:
@cd tests/store_buffering; make
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This header has been modified to remove definitions of types that
* are defined in standard userspace headers or are problematic for some
* other reason.
*/
#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H
#define __EXPORTED_HEADERS__
#include <uapi/linux/types.h>
#ifndef __ASSEMBLY__
#define DECLARE_BITMAP(name, bits) \
unsigned long name[BITS_TO_LONGS(bits)]
typedef __u32 __kernel_dev_t;
/* bsd */
typedef unsigned char u_char;
typedef unsigned short u_short;
typedef unsigned int u_int;
typedef unsigned long u_long;
/* sysv */
typedef unsigned char unchar;
typedef unsigned short ushort;
typedef unsigned int uint;
typedef unsigned long ulong;
#ifndef __BIT_TYPES_DEFINED__
#define __BIT_TYPES_DEFINED__
typedef __u8 u_int8_t;
typedef __s8 int8_t;
typedef __u16 u_int16_t;
typedef __s16 int16_t;
typedef __u32 u_int32_t;
typedef __s32 int32_t;
#endif /* !(__BIT_TYPES_DEFINED__) */
typedef __u8 uint8_t;
typedef __u16 uint16_t;
typedef __u32 uint32_t;
/* this is a special 64bit data type that is 8-byte aligned */
#define aligned_u64 __u64 __attribute__((aligned(8)))
#define aligned_be64 __be64 __attribute__((aligned(8)))
#define aligned_le64 __le64 __attribute__((aligned(8)))
/**
* The type used for indexing onto a disc or disc partition.
*
* Linux always considers sectors to be 512 bytes long independently
* of the device's real block size.
*
* blkcnt_t is the type of the inode's block count.
*/
typedef u64 sector_t;
/*
* The type of an index into the pagecache.
*/
#define pgoff_t unsigned long
/*
* A dma_addr_t can hold any valid DMA address, i.e., any address returned
* by the DMA API.
*
* If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
* bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
* but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
* so they don't care about the size of the actual bus addresses.
*/
#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
typedef u64 dma_addr_t;
#else
typedef u32 dma_addr_t;
#endif
#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif
typedef phys_addr_t resource_size_t;
/*
* This type is the placeholder for a hardware interrupt number. It has to be
* big enough to enclose whatever representation is used by a given platform.
*/
typedef unsigned long irq_hw_number_t;
typedef struct {
int counter;
} atomic_t;
#ifdef CONFIG_64BIT
typedef struct {
long counter;
} atomic64_t;
#endif
struct list_head {
struct list_head *next, *prev;
};
struct hlist_head {
struct hlist_node *first;
};
struct hlist_node {
struct hlist_node *next, **pprev;
};
/**
* struct callback_head - callback structure for use with RCU and task_work
* @next: next update requests in a list
* @func: actual update function to call after the grace period.
*
* The struct is aligned to size of pointer. On most architectures it happens
* naturally due to ABI requirements, but some architectures (like CRIS) have
* a weird ABI and we need to ask for it explicitly.
*
* The alignment is required to guarantee that bits 0 and 1 of @next will be
* clear under normal conditions -- as long as we use call_rcu() or
* call_srcu() to queue callback.
*
* This guarantee is important for a few reasons:
* - future call_rcu_lazy() will make use of lower bits in the pointer;
* - the structure shares storage space in struct page with @compound_head,
* which encodes PageTail() in bit 0. The guarantee is needed to avoid
* false-positive PageTail().
*/
struct callback_head {
struct callback_head *next;
void (*func)(struct callback_head *head);
} __attribute__((aligned(sizeof(void *))));
#define rcu_head callback_head
typedef void (*rcu_callback_t)(struct rcu_head *head);
typedef void (*call_rcu_func_t)(struct rcu_head *head, rcu_callback_t func);
/* clocksource cycle base type */
typedef u64 cycle_t;
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_TYPES_H */
#!/usr/bin/awk -f
# SPDX-License-Identifier: GPL-2.0
# Modify SRCU for formal verification. The first argument should be srcu.h and
# the second should be srcu.c. Outputs modified srcu.h and srcu.c into the
# current directory.
BEGIN {
if (ARGC != 5) {
print "Usage: input.h input.c output.h output.c" > "/dev/stderr";
exit 1;
}
h_output = ARGV[3];
c_output = ARGV[4];
ARGC = 3;
# Tokenize using FS and not RS as FS supports regular expressions. Each
# record is one line of source, except that backslashed lines are
# combined. Comments are treated as field separators, as are quotes.
quote_regexp="\"([^\\\\\"]|\\\\.)*\"";
comment_regexp="\\/\\*([^*]|\\*+[^*/])*\\*\\/|\\/\\/.*(\n|$)";
FS="([ \\\\\t\n\v\f;,.=(){}+*/<>&|^-]|\\[|\\]|" comment_regexp "|" quote_regexp ")+";
inside_srcu_struct = 0;
inside_srcu_init_def = 0;
srcu_init_param_name = "";
in_macro = 0;
brace_nesting = 0;
paren_nesting = 0;
# Allow the manipulation of the last field separator after it has
# been seen.
last_fs = "";
# Whether the last field separator was intended to be output.
last_fs_print = 0;
# rcu_batches stores the initialization for each instance of struct
# rcu_batch
in_comment = 0;
outputfile = "";
}
{
prev_outputfile = outputfile;
if (FILENAME ~ /\.h$/) {
outputfile = h_output;
if (FNR != NR) {
print "Incorrect file order" > "/dev/stderr";
exit 1;
}
}
else
outputfile = c_output;
if (prev_outputfile && outputfile != prev_outputfile) {
new_outputfile = outputfile;
outputfile = prev_outputfile;
update_fieldsep("", 0);
outputfile = new_outputfile;
}
}
# Combine the next line into $0.
function combine_line() {
ret = getline next_line;
if (ret == 0) {
# Don't allow two consecutive getlines at the end of the file
if (eof_found) {
print "Error: expected more input." > "/dev/stderr";
exit 1;
} else {
eof_found = 1;
}
} else if (ret == -1) {
print "Error reading next line of file" FILENAME > "/dev/stderr";
exit 1;
}
$0 = $0 "\n" next_line;
}
# Combine backslashed lines and multiline comments.
function combine_backslashes() {
while (/\\$|\/\*([^*]|\*+[^*\/])*\**$/) {
combine_line();
}
}
function read_line() {
combine_line();
combine_backslashes();
}
# Print out field separators and update variables that depend on them. Only
# print if p is true. Call with sep="" and p=0 to print out the last field
# separator.
function update_fieldsep(sep, p) {
# Count braces
sep_tmp = sep;
gsub(quote_regexp "|" comment_regexp, "", sep_tmp);
while (1)
{
if (sub("[^{}()]*\\{", "", sep_tmp)) {
brace_nesting++;
continue;
}
if (sub("[^{}()]*\\}", "", sep_tmp)) {
brace_nesting--;
if (brace_nesting < 0) {
print "Unbalanced braces!" > "/dev/stderr";
exit 1;
}
continue;
}
if (sub("[^{}()]*\\(", "", sep_tmp)) {
paren_nesting++;
continue;
}
if (sub("[^{}()]*\\)", "", sep_tmp)) {
paren_nesting--;
if (paren_nesting < 0) {
print "Unbalanced parenthesis!" > "/dev/stderr";
exit 1;
}
continue;
}
break;
}
if (last_fs_print)
printf("%s", last_fs) > outputfile;
last_fs = sep;
last_fs_print = p;
}
# Shifts the fields down by n positions. Calls next if there are no more. If p
# is true then print out field separators.
function shift_fields(n, p) {
do {
if (match($0, FS) > 0) {
update_fieldsep(substr($0, RSTART, RLENGTH), p);
if (RSTART + RLENGTH <= length())
$0 = substr($0, RSTART + RLENGTH);
else
$0 = "";
} else {
update_fieldsep("", 0);
print "" > outputfile;
next;
}
} while (--n > 0);
}
# Shifts and prints the first n fields.
function print_fields(n) {
do {
update_fieldsep("", 0);
printf("%s", $1) > outputfile;
shift_fields(1, 1);
} while (--n > 0);
}
{
combine_backslashes();
}
# Print leading FS
{
if (match($0, "^(" FS ")+") > 0) {
update_fieldsep(substr($0, RSTART, RLENGTH), 1);
if (RSTART + RLENGTH <= length())
$0 = substr($0, RSTART + RLENGTH);
else
$0 = "";
}
}
# Parse the line.
{
while (NF > 0) {
if ($1 == "struct" && NF < 3) {
read_line();
continue;
}
if (FILENAME ~ /\.h$/ && !inside_srcu_struct &&
brace_nesting == 0 && paren_nesting == 0 &&
$1 == "struct" && $2 == "srcu_struct" &&
$0 ~ "^struct(" FS ")+srcu_struct(" FS ")+\\{") {
inside_srcu_struct = 1;
print_fields(2);
continue;
}
if (inside_srcu_struct && brace_nesting == 0 &&
paren_nesting == 0) {
inside_srcu_struct = 0;
update_fieldsep("", 0);
for (name in rcu_batches)
print "extern struct rcu_batch " name ";" > outputfile;
}
if (inside_srcu_struct && $1 == "struct" && $2 == "rcu_batch") {
# Move rcu_batches outside of the struct.
rcu_batches[$3] = "";
shift_fields(3, 1);
sub(/;[[:space:]]*$/, "", last_fs);
continue;
}
if (FILENAME ~ /\.h$/ && !inside_srcu_init_def &&
$1 == "#define" && $2 == "__SRCU_STRUCT_INIT") {
inside_srcu_init_def = 1;
srcu_init_param_name = $3;
in_macro = 1;
print_fields(3);
continue;
}
if (inside_srcu_init_def && brace_nesting == 0 &&
paren_nesting == 0) {
inside_srcu_init_def = 0;
in_macro = 0;
continue;
}
if (inside_srcu_init_def && brace_nesting == 1 &&
paren_nesting == 0 && last_fs ~ /\.[[:space:]]*$/ &&
$1 ~ /^[[:alnum:]_]+$/) {
name = $1;
if (name in rcu_batches) {
# Remove the dot.
sub(/\.[[:space:]]*$/, "", last_fs);
old_record = $0;
do
shift_fields(1, 0);
while (last_fs !~ /,/ || paren_nesting > 0);
end_loc = length(old_record) - length($0);
end_loc += index(last_fs, ",") - length(last_fs);
last_fs = substr(last_fs, index(last_fs, ",") + 1);
last_fs_print = 1;
match(old_record, "^"name"("FS")+=");
start_loc = RSTART + RLENGTH;
len = end_loc - start_loc;
initializer = substr(old_record, start_loc, len);
gsub(srcu_init_param_name "\\.", "", initializer);
rcu_batches[name] = initializer;
continue;
}
}
# Don't include a nonexistent file
if (!in_macro && $1 == "#include" && /^#include[[:space:]]+"rcu\.h"/) {
update_fieldsep("", 0);
next;
}
# Ignore most preprocessor stuff.
if (!in_macro && $1 ~ /#/) {
break;
}
if (brace_nesting > 0 && $1 ~ "^[[:alnum:]_]+$" && NF < 2) {
read_line();
continue;
}
if (brace_nesting > 0 &&
$0 ~ "^[[:alnum:]_]+[[:space:]]*(\\.|->)[[:space:]]*[[:alnum:]_]+" &&
$2 in rcu_batches) {
# Make uses of rcu_batches global. Somewhat unreliable.
shift_fields(1, 0);
print_fields(1);
continue;
}
if ($1 == "static" && NF < 3) {
read_line();
continue;
}
if ($1 == "static" && ($2 == "bool" && $3 == "try_check_zero" ||
$2 == "void" && $3 == "srcu_flip")) {
shift_fields(1, 1);
print_fields(2);
continue;
}
# Distinguish between read-side and write-side memory barriers.
if ($1 == "smp_mb" && NF < 2) {
read_line();
continue;
}
if (match($0, /^smp_mb[[:space:]();\/*]*[[:alnum:]]/)) {
barrier_letter = substr($0, RLENGTH, 1);
if (barrier_letter ~ /A|D/)
new_barrier_name = "sync_smp_mb";
else if (barrier_letter ~ /B|C/)
new_barrier_name = "rs_smp_mb";
else {
print "Unrecognized memory barrier." > "/dev/stderr";
exit 1;
}
shift_fields(1, 1);
printf("%s", new_barrier_name) > outputfile;
continue;
}
# Skip definition of rcu_synchronize, since it is already
# defined in misc.h. Only present in old versions of srcu.
if (brace_nesting == 0 && paren_nesting == 0 &&
$1 == "struct" && $2 == "rcu_synchronize" &&
$0 ~ "^struct(" FS ")+rcu_synchronize(" FS ")+\\{") {
shift_fields(2, 0);
while (brace_nesting) {
if (NF < 2)
read_line();
shift_fields(1, 0);
}
}
# Skip definition of wakeme_after_rcu for the same reason
if (brace_nesting == 0 && $1 == "static" && $2 == "void" &&
$3 == "wakeme_after_rcu") {
while (NF < 5)
read_line();
shift_fields(3, 0);
do {
while (NF < 3)
read_line();
shift_fields(1, 0);
} while (paren_nesting || brace_nesting);
}
if ($1 ~ /^(unsigned|long)$/ && NF < 3) {
read_line();
continue;
}
# Give srcu_batches_completed the correct type for old SRCU.
if (brace_nesting == 0 && $1 == "long" &&
$2 == "srcu_batches_completed") {
update_fieldsep("", 0);
printf("unsigned ") > outputfile;
print_fields(2);
continue;
}
if (brace_nesting == 0 && $1 == "unsigned" && $2 == "long" &&
$3 == "srcu_batches_completed") {
print_fields(3);
continue;
}
# Just print out the input code by default.
print_fields(1);
}
update_fieldsep("", 0);
print > outputfile;
next;
}
END {
update_fieldsep("", 0);
if (brace_nesting != 0) {
print "Unbalanced braces!" > "/dev/stderr";
exit 1;
}
# Define the rcu_batches
for (name in rcu_batches)
print "struct rcu_batch " name " = " rcu_batches[name] ";" > c_output;
}
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASSUME_H
#define ASSUME_H
/* Provide an assumption macro that can be disabled for gcc. */
#ifdef RUN
#define assume(x) \
do { \
/* Evaluate x to suppress warnings. */ \
(void) (x); \
} while (0)
#else
#define assume(x) __CPROVER_assume(x)
#endif
#endif
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef BARRIERS_H
#define BARRIERS_H
#define barrier() __asm__ __volatile__("" : : : "memory")
#ifdef RUN
#define smp_mb() __sync_synchronize()
#define smp_mb__after_unlock_lock() __sync_synchronize()
#else
/*
* Copied from CBMC's implementation of __sync_synchronize(), which
* seems to be disabled by default.
*/
#define smp_mb() __CPROVER_fence("WWfence", "RRfence", "RWfence", "WRfence", \
"WWcumul", "RRcumul", "RWcumul", "WRcumul")
#define smp_mb__after_unlock_lock() __CPROVER_fence("WWfence", "RRfence", "RWfence", "WRfence", \
"WWcumul", "RRcumul", "RWcumul", "WRcumul")
#endif
/*
* Allow memory barriers to be disabled in either the read or write side
* of SRCU individually.
*/
#ifndef NO_SYNC_SMP_MB
#define sync_smp_mb() smp_mb()
#else
#define sync_smp_mb() do {} while (0)
#endif
#ifndef NO_READ_SIDE_SMP_MB
#define rs_smp_mb() smp_mb()
#else
#define rs_smp_mb() do {} while (0)
#endif
#define READ_ONCE(x) (*(volatile typeof(x) *) &(x))
#define WRITE_ONCE(x, val) ((*(volatile typeof(x) *) &(x)) = (val))
#endif
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef BUG_ON_H
#define BUG_ON_H
#include <assert.h>
#define BUG() assert(0)
#define BUG_ON(x) assert(!(x))
/* Does it make sense to treat warnings as errors? */
#define WARN() BUG()
#define WARN_ON(x) (BUG_ON(x), false)
#endif
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
/* Include all source files. */
#include "include_srcu.c"
#include "preempt.c"
#include "misc.c"
/* Used by test.c files */
#include <pthread.h>
#include <stdlib.h>
#include <linux/srcu.h>
/* SPDX-License-Identifier: GPL-2.0 */
/* "Cheater" definitions based on restricted Kconfig choices. */
#undef CONFIG_TINY_RCU
#undef __CHECKER__
#undef CONFIG_DEBUG_LOCK_ALLOC
#undef CONFIG_DEBUG_OBJECTS_RCU_HEAD
#undef CONFIG_HOTPLUG_CPU
#undef CONFIG_MODULES
#undef CONFIG_NO_HZ_FULL_SYSIDLE
#undef CONFIG_PREEMPT_COUNT
#undef CONFIG_PREEMPT_RCU
#undef CONFIG_PROVE_RCU
#undef CONFIG_RCU_NOCB_CPU
#undef CONFIG_RCU_NOCB_CPU_ALL
#undef CONFIG_RCU_STALL_COMMON
#undef CONFIG_RCU_TRACE
#undef CONFIG_RCU_USER_QS
#undef CONFIG_TASKS_RCU
#define CONFIG_TREE_RCU
#define CONFIG_GENERIC_ATOMIC64
#if NR_CPUS > 1
#define CONFIG_SMP
#else
#undef CONFIG_SMP
#endif
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include <assert.h>
#include <errno.h>
#include <inttypes.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include "int_typedefs.h"
#include "barriers.h"
#include "bug_on.h"
#include "locks.h"
#include "misc.h"
#include "preempt.h"
#include "percpu.h"
#include "workqueues.h"
#ifdef USE_SIMPLE_SYNC_SRCU
#define synchronize_srcu(sp) synchronize_srcu_original(sp)
#endif
#include <srcu.c>
#ifdef USE_SIMPLE_SYNC_SRCU
#undef synchronize_srcu
#include "simple_sync_srcu.c"
#endif
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef INT_TYPEDEFS_H
#define INT_TYPEDEFS_H
#include <inttypes.h>
typedef int8_t s8;
typedef uint8_t u8;
typedef int16_t s16;
typedef uint16_t u16;
typedef int32_t s32;
typedef uint32_t u32;
typedef int64_t s64;
typedef uint64_t u64;
typedef int8_t __s8;
typedef uint8_t __u8;
typedef int16_t __s16;
typedef uint16_t __u16;
typedef int32_t __s32;
typedef uint32_t __u32;
typedef int64_t __s64;
typedef uint64_t __u64;
#define S8_C(x) INT8_C(x)
#define U8_C(x) UINT8_C(x)
#define S16_C(x) INT16_C(x)
#define U16_C(x) UINT16_C(x)
#define S32_C(x) INT32_C(x)
#define U32_C(x) UINT32_C(x)
#define S64_C(x) INT64_C(x)
#define U64_C(x) UINT64_C(x)
#endif
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef LOCKS_H
#define LOCKS_H
#include <limits.h>
#include <pthread.h>
#include <stdbool.h>
#include "assume.h"
#include "bug_on.h"
#include "preempt.h"
int nondet_int(void);
#define __acquire(x)
#define __acquires(x)
#define __release(x)
#define __releases(x)
/* Only use one lock mechanism. Select which one. */
#ifdef PTHREAD_LOCK
struct lock_impl {
pthread_mutex_t mutex;
};
static inline void lock_impl_lock(struct lock_impl *lock)
{
BUG_ON(pthread_mutex_lock(&lock->mutex));
}
static inline void lock_impl_unlock(struct lock_impl *lock)
{
BUG_ON(pthread_mutex_unlock(&lock->mutex));
}
static inline bool lock_impl_trylock(struct lock_impl *lock)
{
int err = pthread_mutex_trylock(&lock->mutex);
if (!err)
return true;
else if (err == EBUSY)
return false;
BUG();
}
static inline void lock_impl_init(struct lock_impl *lock)
{
pthread_mutex_init(&lock->mutex, NULL);
}
#define LOCK_IMPL_INITIALIZER {.mutex = PTHREAD_MUTEX_INITIALIZER}
#else /* !defined(PTHREAD_LOCK) */
/* Spinlock that assumes that it always gets the lock immediately. */
struct lock_impl {
bool locked;
};
static inline bool lock_impl_trylock(struct lock_impl *lock)
{
#ifdef RUN
/* TODO: Should this be a test and set? */
return __sync_bool_compare_and_swap(&lock->locked, false, true);
#else
__CPROVER_atomic_begin();
bool old_locked = lock->locked;
lock->locked = true;
__CPROVER_atomic_end();
/* Minimal barrier to prevent accesses leaking out of lock. */
__CPROVER_fence("RRfence", "RWfence");
return !old_locked;
#endif
}
static inline void lock_impl_lock(struct lock_impl *lock)
{
/*
* CBMC doesn't support busy waiting, so just assume that the
* lock is available.
*/
assume(lock_impl_trylock(lock));
/*
* If the lock was already held by this thread then the assumption
* is unsatisfiable (deadlock).
*/
}
static inline void lock_impl_unlock(struct lock_impl *lock)
{
#ifdef RUN
BUG_ON(!__sync_bool_compare_and_swap(&lock->locked, true, false));
#else
/* Minimal barrier to prevent accesses leaking out of lock. */
__CPROVER_fence("RWfence", "WWfence");
__CPROVER_atomic_begin();
bool old_locked = lock->locked;
lock->locked = false;
__CPROVER_atomic_end();
BUG_ON(!old_locked);
#endif
}
static inline void lock_impl_init(struct lock_impl *lock)
{
lock->locked = false;
}
#define LOCK_IMPL_INITIALIZER {.locked = false}
#endif /* !defined(PTHREAD_LOCK) */
/*
* Implement spinlocks using the lock mechanism. Wrap the lock to prevent mixing
* locks of different types.
*/
typedef struct {
struct lock_impl internal_lock;
} spinlock_t;
#define SPIN_LOCK_UNLOCKED {.internal_lock = LOCK_IMPL_INITIALIZER}
#define __SPIN_LOCK_UNLOCKED(x) SPIN_LOCK_UNLOCKED
#define DEFINE_SPINLOCK(x) spinlock_t x = SPIN_LOCK_UNLOCKED
static inline void spin_lock_init(spinlock_t *lock)
{
lock_impl_init(&lock->internal_lock);
}
static inline void spin_lock(spinlock_t *lock)
{
/*
* Spin locks also need to be removed in order to eliminate all
* memory barriers. They are only used by the write side anyway.
*/
#ifndef NO_SYNC_SMP_MB
preempt_disable();
lock_impl_lock(&lock->internal_lock);
#endif
}
static inline void spin_unlock(spinlock_t *lock)
{
#ifndef NO_SYNC_SMP_MB
lock_impl_unlock(&lock->internal_lock);
preempt_enable();
#endif
}
/* Don't bother with interrupts */
#define spin_lock_irq(lock) spin_lock(lock)
#define spin_unlock_irq(lock) spin_unlock(lock)
#define spin_lock_irqsave(lock, flags) spin_lock(lock)
#define spin_unlock_irqrestore(lock, flags) spin_unlock(lock)
/*
* This is supposed to return an int, but I think that a bool should work as
* well.
*/
static inline bool spin_trylock(spinlock_t *lock)
{
#ifndef NO_SYNC_SMP_MB
preempt_disable();
return lock_impl_trylock(&lock->internal_lock);
#else
return true;
#endif
}
struct completion {
/* Hopefully this won't overflow. */
unsigned int count;
};
#define COMPLETION_INITIALIZER(x) {.count = 0}
#define DECLARE_COMPLETION(x) struct completion x = COMPLETION_INITIALIZER(x)
#define DECLARE_COMPLETION_ONSTACK(x) DECLARE_COMPLETION(x)
static inline void init_completion(struct completion *c)
{
c->count = 0;
}
static inline void wait_for_completion(struct completion *c)
{
unsigned int prev_count = __sync_fetch_and_sub(&c->count, 1);
assume(prev_count);
}
static inline void complete(struct completion *c)
{
unsigned int prev_count = __sync_fetch_and_add(&c->count, 1);
BUG_ON(prev_count == UINT_MAX);
}
/* This function probably isn't very useful for CBMC. */
static inline bool try_wait_for_completion(struct completion *c)
{
BUG();
}
static inline bool completion_done(struct completion *c)
{
return c->count;
}
/* TODO: Implement complete_all */
static inline void complete_all(struct completion *c)
{
BUG();
}
#endif
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include "misc.h"
#include "bug_on.h"
struct rcu_head;
void wakeme_after_rcu(struct rcu_head *head)
{
BUG();
}
#ifndef MISC_H
#define MISC_H
#include "assume.h"
#include "int_typedefs.h"
#include "locks.h"
#include <linux/types.h>
/* Probably won't need to deal with bottom halves. */
static inline void local_bh_disable(void) {}
static inline void local_bh_enable(void) {}
#define MODULE_ALIAS(X)
#define module_param(...)
#define EXPORT_SYMBOL_GPL(x)
#define container_of(ptr, type, member) ({ \
const typeof(((type *)0)->member) *__mptr = (ptr); \
(type *)((char *)__mptr - offsetof(type, member)); \
})
#ifndef USE_SIMPLE_SYNC_SRCU
/* Abuse udelay to make sure that busy loops terminate. */
#define udelay(x) assume(0)
#else
/* The simple custom synchronize_srcu is ok with try_check_zero failing. */
#define udelay(x) do { } while (0)
#endif
#define trace_rcu_torture_read(rcutorturename, rhp, secs, c_old, c) \
do { } while (0)
#define notrace
/* Avoid including rcupdate.h */
struct rcu_synchronize {
struct rcu_head head;
struct completion completion;
};
void wakeme_after_rcu(struct rcu_head *head);
#define rcu_lock_acquire(a) do { } while (0)
#define rcu_lock_release(a) do { } while (0)
#define rcu_lockdep_assert(c, s) do { } while (0)
#define RCU_LOCKDEP_WARN(c, s) do { } while (0)
/* Let CBMC non-deterministically switch between normal and expedited. */
bool rcu_gp_is_normal(void);
bool rcu_gp_is_expedited(void);
/* Do the same for old versions of rcu. */
#define rcu_expedited (rcu_gp_is_expedited())
#endif
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PERCPU_H
#define PERCPU_H
#include <stddef.h>
#include "bug_on.h"
#include "preempt.h"
#define __percpu
/* Maximum size of any percpu data. */
#define PERCPU_OFFSET (4 * sizeof(long))
/* Ignore alignment, as CBMC doesn't care about false sharing. */
#define alloc_percpu(type) __alloc_percpu(sizeof(type), 1)
static inline void *__alloc_percpu(size_t size, size_t align)
{
BUG();
return NULL;
}
static inline void free_percpu(void *ptr)
{
BUG();
}
#define per_cpu_ptr(ptr, cpu) \
((typeof(ptr)) ((char *) (ptr) + PERCPU_OFFSET * cpu))
#define __this_cpu_inc(pcp) __this_cpu_add(pcp, 1)
#define __this_cpu_dec(pcp) __this_cpu_sub(pcp, 1)
#define __this_cpu_sub(pcp, n) __this_cpu_add(pcp, -(typeof(pcp)) (n))
#define this_cpu_inc(pcp) this_cpu_add(pcp, 1)
#define this_cpu_dec(pcp) this_cpu_sub(pcp, 1)
#define this_cpu_sub(pcp, n) this_cpu_add(pcp, -(typeof(pcp)) (n))
/* Make CBMC use atomics to work around a bug. */
#ifdef RUN
#define THIS_CPU_ADD_HELPER(ptr, x) (*(ptr) += (x))
#else
/*
* Split the atomic into a read and a write so that it has the least
* possible ordering.
*/
#define THIS_CPU_ADD_HELPER(ptr, x) \
do { \
typeof(ptr) this_cpu_add_helper_ptr = (ptr); \
typeof(ptr) this_cpu_add_helper_x = (x); \
typeof(*ptr) this_cpu_add_helper_temp; \
__CPROVER_atomic_begin(); \
this_cpu_add_helper_temp = *(this_cpu_add_helper_ptr); \
__CPROVER_atomic_end(); \
this_cpu_add_helper_temp += this_cpu_add_helper_x; \
__CPROVER_atomic_begin(); \
*(this_cpu_add_helper_ptr) = this_cpu_add_helper_temp; \
__CPROVER_atomic_end(); \
} while (0)
#endif
/*
* For some reason CBMC needs an atomic operation even though this is percpu
* data.
*/
#define __this_cpu_add(pcp, n) \
do { \
BUG_ON(preemptible()); \
THIS_CPU_ADD_HELPER(per_cpu_ptr(&(pcp), thread_cpu_id), \
(typeof(pcp)) (n)); \
} while (0)
#define this_cpu_add(pcp, n) \
do { \
int this_cpu_add_impl_cpu = get_cpu(); \
THIS_CPU_ADD_HELPER(per_cpu_ptr(&(pcp), this_cpu_add_impl_cpu), \
(typeof(pcp)) (n)); \
put_cpu(); \
} while (0)
/*
 * This will cause a compiler warning because of the cast from char[][] to
 * type *, and a compile-time error if type is too big.
*/
#define DEFINE_PER_CPU(type, name) \
char name[NR_CPUS][PERCPU_OFFSET]; \
typedef char percpu_too_big_##name \
[sizeof(type) > PERCPU_OFFSET ? -1 : 1]
#define for_each_possible_cpu(cpu) \
for ((cpu) = 0; (cpu) < NR_CPUS; ++(cpu))
#endif
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include "preempt.h"
#include "assume.h"
#include "locks.h"
/* Support NR_CPUS of at most 64 */
#define CPU_PREEMPTION_LOCKS_INIT0 LOCK_IMPL_INITIALIZER
#define CPU_PREEMPTION_LOCKS_INIT1 \
CPU_PREEMPTION_LOCKS_INIT0, CPU_PREEMPTION_LOCKS_INIT0
#define CPU_PREEMPTION_LOCKS_INIT2 \
CPU_PREEMPTION_LOCKS_INIT1, CPU_PREEMPTION_LOCKS_INIT1
#define CPU_PREEMPTION_LOCKS_INIT3 \
CPU_PREEMPTION_LOCKS_INIT2, CPU_PREEMPTION_LOCKS_INIT2
#define CPU_PREEMPTION_LOCKS_INIT4 \
CPU_PREEMPTION_LOCKS_INIT3, CPU_PREEMPTION_LOCKS_INIT3
#define CPU_PREEMPTION_LOCKS_INIT5 \
CPU_PREEMPTION_LOCKS_INIT4, CPU_PREEMPTION_LOCKS_INIT4
/*
* Simulate disabling preemption by locking a particular cpu. NR_CPUS
* should be the actual number of cpus, not just the maximum.
*/
struct lock_impl cpu_preemption_locks[NR_CPUS] = {
CPU_PREEMPTION_LOCKS_INIT0
#if (NR_CPUS - 1) & 1
, CPU_PREEMPTION_LOCKS_INIT0
#endif
#if (NR_CPUS - 1) & 2
, CPU_PREEMPTION_LOCKS_INIT1
#endif
#if (NR_CPUS - 1) & 4
, CPU_PREEMPTION_LOCKS_INIT2
#endif
#if (NR_CPUS - 1) & 8
, CPU_PREEMPTION_LOCKS_INIT3
#endif
#if (NR_CPUS - 1) & 16
, CPU_PREEMPTION_LOCKS_INIT4
#endif
#if (NR_CPUS - 1) & 32
, CPU_PREEMPTION_LOCKS_INIT5
#endif
};
#undef CPU_PREEMPTION_LOCKS_INIT0
#undef CPU_PREEMPTION_LOCKS_INIT1
#undef CPU_PREEMPTION_LOCKS_INIT2
#undef CPU_PREEMPTION_LOCKS_INIT3
#undef CPU_PREEMPTION_LOCKS_INIT4
#undef CPU_PREEMPTION_LOCKS_INIT5
__thread int thread_cpu_id;
__thread int preempt_disable_count;
void preempt_disable(void)
{
BUG_ON(preempt_disable_count < 0 || preempt_disable_count == INT_MAX);
if (preempt_disable_count++)
return;
thread_cpu_id = nondet_int();
assume(thread_cpu_id >= 0);
assume(thread_cpu_id < NR_CPUS);
lock_impl_lock(&cpu_preemption_locks[thread_cpu_id]);
}
void preempt_enable(void)
{
BUG_ON(preempt_disable_count < 1);
if (--preempt_disable_count)
return;
lock_impl_unlock(&cpu_preemption_locks[thread_cpu_id]);
}
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef PREEMPT_H
#define PREEMPT_H
#include <stdbool.h>
#include "bug_on.h"
/* This flag contains garbage if preempt_disable_count is 0. */
extern __thread int thread_cpu_id;
/* Support recursive preemption disabling. */
extern __thread int preempt_disable_count;
void preempt_disable(void);
void preempt_enable(void);
static inline void preempt_disable_notrace(void)
{
preempt_disable();
}
static inline void preempt_enable_no_resched(void)
{
preempt_enable();
}
static inline void preempt_enable_notrace(void)
{
preempt_enable();
}
static inline int preempt_count(void)
{
return preempt_disable_count;
}
static inline bool preemptible(void)
{
return !preempt_count();
}
static inline int get_cpu(void)
{
preempt_disable();
return thread_cpu_id;
}
static inline void put_cpu(void)
{
preempt_enable();
}
static inline void might_sleep(void)
{
BUG_ON(preempt_disable_count);
}
#endif
// SPDX-License-Identifier: GPL-2.0
#include <config.h>
#include <assert.h>
#include <errno.h>
#include <inttypes.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include "int_typedefs.h"
#include "barriers.h"
#include "bug_on.h"
#include "locks.h"
#include "misc.h"
#include "preempt.h"
#include "percpu.h"
#include "workqueues.h"
#include <linux/srcu.h>
/* Functions needed from modify_srcu.c */
bool try_check_zero(struct srcu_struct *sp, int idx, int trycount);
void srcu_flip(struct srcu_struct *sp);
/* Simpler implementation of synchronize_srcu that ignores batching. */
void synchronize_srcu(struct srcu_struct *sp)
{
int idx;
/*
* This code assumes that try_check_zero will succeed anyway,
* so there is no point in multiple tries.
*/
const int trycount = 1;
might_sleep();
/* Ignore the lock, as multiple writers aren't working yet anyway. */
idx = 1 ^ (sp->completed & 1);
/* For comments see srcu_advance_batches. */
assume(try_check_zero(sp, idx, trycount));
srcu_flip(sp);
assume(try_check_zero(sp, idx^1, trycount));
}
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef WORKQUEUES_H
#define WORKQUEUES_H
#include <stdbool.h>
#include "barriers.h"
#include "bug_on.h"
#include "int_typedefs.h"
#include <linux/types.h>
/* Stub workqueue implementation. */
struct work_struct;
typedef void (*work_func_t)(struct work_struct *work);
void delayed_work_timer_fn(unsigned long __data);
struct work_struct {
/* atomic_long_t data; */
unsigned long data;
struct list_head entry;
work_func_t func;
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
};
struct timer_list {
struct hlist_node entry;
unsigned long expires;
void (*function)(unsigned long);
unsigned long data;
u32 flags;
int slack;
};
struct delayed_work {
struct work_struct work;
struct timer_list timer;
/* target workqueue and CPU ->timer uses to queue ->work */
struct workqueue_struct *wq;
int cpu;
};
static inline bool schedule_work(struct work_struct *work)
{
BUG();
return true;
}
static inline bool schedule_work_on(int cpu, struct work_struct *work)
{
BUG();
return true;
}
static inline bool queue_work(struct workqueue_struct *wq,
struct work_struct *work)
{
BUG();
return true;
}
static inline bool queue_delayed_work(struct workqueue_struct *wq,
struct delayed_work *dwork,
unsigned long delay)
{
BUG();
return true;
}
#define INIT_WORK(w, f) \
do { \
(w)->data = 0; \
(w)->func = (f); \
} while (0)
#define INIT_DELAYED_WORK(w, f) INIT_WORK(&(w)->work, (f))
#define __WORK_INITIALIZER(n, f) { \
.data = 0, \
.entry = { &(n).entry, &(n).entry }, \
.func = f \
}
/* Don't bother initializing timer. */
#define __DELAYED_WORK_INITIALIZER(n, f, tflags) { \
.work = __WORK_INITIALIZER((n).work, (f)), \
}
#define DECLARE_WORK(n, f) \
	struct work_struct n = __WORK_INITIALIZER(n, f)
#define DECLARE_DELAYED_WORK(n, f) \
struct delayed_work n = __DELAYED_WORK_INITIALIZER(n, f, 0)
#define system_power_efficient_wq ((struct workqueue_struct *) NULL)
#endif
# SPDX-License-Identifier: GPL-2.0
CBMC_FLAGS = -I../.. -I../../src -I../../include -I../../empty_includes -32 -pointer-check -mm pso
all:
for i in ./*.pass; do \
echo $$i ; \
CBMC_FLAGS="$(CBMC_FLAGS)" sh ../test_script.sh --should-pass $$i > $$i.out 2>&1 ; \
done
for i in ./*.fail; do \
echo $$i ; \
CBMC_FLAGS="$(CBMC_FLAGS)" sh ../test_script.sh --should-fail $$i > $$i.out 2>&1 ; \
done
// SPDX-License-Identifier: GPL-2.0
#include <src/combined_source.c>
int x;
int y;
int __unbuffered_tpr_x;
int __unbuffered_tpr_y;
DEFINE_SRCU(ss);
void rcu_reader(void)
{
int idx;
#ifndef FORCE_FAILURE_3
idx = srcu_read_lock(&ss);
#endif
might_sleep();
__unbuffered_tpr_y = READ_ONCE(y);
#ifdef FORCE_FAILURE
srcu_read_unlock(&ss, idx);
idx = srcu_read_lock(&ss);
#endif
WRITE_ONCE(x, 1);
#ifndef FORCE_FAILURE_3
srcu_read_unlock(&ss, idx);
#endif
might_sleep();
}
void *thread_update(void *arg)
{
WRITE_ONCE(y, 1);
#ifndef FORCE_FAILURE_2
synchronize_srcu(&ss);
#endif
might_sleep();
__unbuffered_tpr_x = READ_ONCE(x);
return NULL;
}
void *thread_process_reader(void *arg)
{
rcu_reader();
return NULL;
}
int main(int argc, char *argv[])
{
pthread_t tu;
pthread_t tpr;
if (pthread_create(&tu, NULL, thread_update, NULL))
abort();
if (pthread_create(&tpr, NULL, thread_process_reader, NULL))
abort();
if (pthread_join(tu, NULL))
abort();
if (pthread_join(tpr, NULL))
abort();
assert(__unbuffered_tpr_y != 0 || __unbuffered_tpr_x != 0);
#ifdef ASSERT_END
assert(0);
#endif
return 0;
}
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# This script expects a mode (either --should-pass or --should-fail) followed by
# an input file. The script uses the following environment variables. The test C
# source file is expected to be named test.c in the directory containing the
# input file.
#
# CBMC: The command to run CBMC. Default: cbmc
# CBMC_FLAGS: Additional flags to pass to CBMC
# NR_CPUS: Number of cpus to run tests with. Default specified by the test
# SYNC_SRCU_MODE: Choose implementation of synchronize_srcu. Defaults to simple.
# kernel: Version included in the linux kernel source.
# simple: Use try_check_zero directly.
#
# The input file is a script that is sourced by this file. It can define any of
# the following variables to configure the test.
#
# test_cbmc_options: Extra options to pass to CBMC.
# min_cpus_fail: Minimum number of CPUs (NR_CPUS) for verification to fail.
# The test is expected to pass if it is run with fewer. (Only
# useful for .fail files)
# default_cpus: Quantity of CPUs to use for the test, if not specified on the
# command line. Default: the larger of 2 and min_cpus_fail.
set -e
if test "$#" -ne 2; then
echo "Expected one option followed by an input file" 1>&2
exit 99
fi
if test "x$1" = "x--should-pass"; then
should_pass="yes"
elif test "x$1" = "x--should-fail"; then
should_pass="no"
else
echo "Unrecognized argument '$1'" 1>&2
# Exit code 99 indicates a hard error.
exit 99
fi
CBMC=${CBMC:-cbmc}
SYNC_SRCU_MODE=${SYNC_SRCU_MODE:-simple}
case ${SYNC_SRCU_MODE} in
kernel) sync_srcu_mode_flags="" ;;
simple) sync_srcu_mode_flags="-DUSE_SIMPLE_SYNC_SRCU" ;;
*)
echo "Unrecognized argument '${SYNC_SRCU_MODE}'" 1>&2
exit 99
;;
esac
min_cpus_fail=1
c_file=`dirname "$2"`/test.c
# Source the input file.
. $2
if test ${min_cpus_fail} -gt 2; then
default_default_cpus=${min_cpus_fail}
else
default_default_cpus=2
fi
default_cpus=${default_cpus:-${default_default_cpus}}
cpus=${NR_CPUS:-${default_cpus}}
# Check if there are too few cpus to make the test fail.
if test $cpus -lt ${min_cpus_fail:-0}; then
should_pass="yes"
fi
cbmc_opts="-DNR_CPUS=${cpus} ${sync_srcu_mode_flags} ${test_cbmc_options} ${CBMC_FLAGS}"
echo "Running CBMC: ${CBMC} ${cbmc_opts} ${c_file}"
if ${CBMC} ${cbmc_opts} "${c_file}"; then
# Verification successful. Make sure that it was supposed to verify.
test "x${should_pass}" = xyes
else
cbmc_exit_status=$?
# An exit status of 10 indicates a failed verification.
# (see cbmc_parse_optionst::do_bmc in the CBMC source code)
if test ${cbmc_exit_status} -eq 10 && test "x${should_pass}" = xno; then
:
else
echo "CBMC returned ${cbmc_exit_status} exit status" 1>&2
# Parse errors have exit status 6. Any other type of error
# should be considered a hard error.
if test ${cbmc_exit_status} -ne 6 && \
test ${cbmc_exit_status} -ne 10; then
exit 99
else
exit 1
fi
fi
fi
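The CPU-count defaulting in the script above can be exercised on its own: default_cpus falls back to the larger of 2 and min_cpus_fail, and NR_CPUS from the environment overrides everything. A runnable sketch (NR_CPUS is cleared here to simulate it being unset; min_cpus_fail=3 is an arbitrary example value):

```shell
#!/bin/sh
# Sketch of the script's CPU-count defaulting logic, standalone.
NR_CPUS=			# simulate NR_CPUS being unset
min_cpus_fail=3			# example value a .fail input file might set
if test ${min_cpus_fail} -gt 2; then
	default_default_cpus=${min_cpus_fail}
else
	default_default_cpus=2
fi
default_cpus=${default_cpus:-${default_default_cpus}}
cpus=${NR_CPUS:-${default_cpus}}
echo "cpus=${cpus}"		# min_cpus_fail wins over the floor of 2
```

With min_cpus_fail at 3 and NR_CPUS unset, this prints `cpus=3`; with min_cpus_fail at 1 it would print `cpus=2`, so a .fail test always gets enough CPUs to actually fail by default.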