Commit b854e4de authored by Linus Torvalds

Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RCU updates from Ingo Molnar:
 "Main RCU changes this cycle were:

   - Full-system idle detection.  This is for use by Frederic
     Weisbecker's adaptive-ticks mechanism.  Its purpose is to allow the
     timekeeping CPU to shut off its tick when all other CPUs are idle.

   - Miscellaneous fixes.

   - Improved rcutorture test coverage.

   - Updated RCU documentation"

* 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits)
  nohz_full: Force RCU's grace-period kthreads onto timekeeping CPU
  nohz_full: Add full-system-idle state machine
  jiffies: Avoid undefined behavior from signed overflow
  rcu: Simplify _rcu_barrier() processing
  rcu: Make rcutorture emit online failures if verbose
  rcu: Remove unused variable from rcu_torture_writer()
  rcu: Sort rcutorture module parameters
  rcu: Increase rcutorture test coverage
  rcu: Add duplicate-callback tests to rcutorture
  doc: Fix memory-barrier control-dependency example
  rcu: Update RTFP documentation
  nohz_full: Add full-system-idle arguments to API
  nohz_full: Add full-system idle states and variables
  nohz_full: Add per-CPU idle-state tracking
  nohz_full: Add rcu_dyntick data for scalable detection of all-idle state
  nohz_full: Add Kconfig parameter for scalable detection of all-idle state
  nohz_full: Add testing information to documentation
  rcu: Eliminate unused APIs intended for adaptive ticks
  rcu: Select IRQ_WORK from TREE_PREEMPT_RCU
  rculist: list_first_or_null_rcu() should use list_entry_rcu()
  ...
parents 458c3f60 7d992feb
@@ -70,10 +70,14 @@ in realtime kernels in order to avoid excessive scheduling latencies.
 
 rcu_barrier()
 
-We instead need the rcu_barrier() primitive.  This primitive is similar
-to synchronize_rcu(), but instead of waiting solely for a grace
-period to elapse, it also waits for all outstanding RCU callbacks to
-complete.  Pseudo-code using rcu_barrier() is as follows:
+We instead need the rcu_barrier() primitive.  Rather than waiting for
+a grace period to elapse, rcu_barrier() waits for all outstanding RCU
+callbacks to complete.  Please note that rcu_barrier() does -not- imply
+synchronize_rcu(), in particular, if there are no RCU callbacks queued
+anywhere, rcu_barrier() is within its rights to return immediately,
+without waiting for a grace period to elapse.
+
+Pseudo-code using rcu_barrier() is as follows:
 
 1. Prevent any new RCU callbacks from being posted.
 2. Execute rcu_barrier().
...
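
To make the unload-time pattern above concrete, here is a minimal sketch of a module exit path using rcu_barrier(). The struct foo layout, foo_reclaim(), and the foo_stop_posting_callbacks() helper are hypothetical illustrations, not part of this commit:

    #include <linux/module.h>
    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo {
            struct list_head list;
            struct rcu_head rcu;
    };

    /* Callback previously handed to call_rcu() for each removed element. */
    static void foo_reclaim(struct rcu_head *rhp)
    {
            kfree(container_of(rhp, struct foo, rcu));
    }

    static void __exit foo_exit(void)
    {
            foo_stop_posting_callbacks();   /* 1. No new call_rcu() invocations (hypothetical helper). */
            rcu_barrier();                  /* 2. Wait for all already-queued callbacks to finish. */
            /* 3. Only now is it safe to free shared data and unload the module. */
    }
    module_exit(foo_exit);
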
@@ -42,6 +42,16 @@ fqs_holdoff	Holdoff time (in microseconds) between consecutive calls
 fqs_stutter	Wait time (in seconds) between consecutive bursts
 		of calls to force_quiescent_state().
 
+gp_normal	Make the fake writers use normal synchronous grace-period
+		primitives.
+
+gp_exp		Make the fake writers use expedited synchronous grace-period
+		primitives.  If both gp_normal and gp_exp are set, or
+		if neither gp_normal nor gp_exp are set, then randomly
+		choose the primitive so that about 50% are normal and
+		50% expedited.  By default, neither are set, which
+		gives best overall test coverage.
+
 irqreader	Says to invoke RCU readers from irq level.  This is currently
 		done via timers.  Defaults to "1" for variants of RCU that
 		permit this.  (Or, more accurately, variants of RCU that do
...
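
For reference, these knobs are rcutorture module parameters, so a test run forcing expedited grace periods might look like the following; the exact invocation is illustrative, though gp_exp and verbose are parameter names from this series:

    modprobe rcutorture gp_exp=1 verbose=1
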
@@ -531,9 +531,10 @@ dependency barrier to make it work correctly.  Consider the following bit of
 code:
 
 	q = &a;
-	if (p)
+	if (p) {
+		<data dependency barrier>
 		q = &b;
-	<data dependency barrier>
+	}
 	x = *q;
 
 This will not have the desired effect because there is no actual data
@@ -542,9 +543,10 @@ attempting to predict the outcome in advance.  In such a case what's actually
 required is:
 
 	q = &a;
-	if (p)
+	if (p) {
+		<read barrier>
 		q = &b;
-	<read barrier>
+	}
 	x = *q;
...
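
As a hedged illustration of the corrected pattern in kernel C: the barrier must sit on the taken path, between the load of the condition and the dependent accesses. The declarations below are invented for the example; ACCESS_ONCE() and smp_rmb() are the era-appropriate primitives:

    #include <linux/compiler.h>
    #include <asm/barrier.h>

    static int a, b, p, x;

    static void reader(void)
    {
            int *q = &a;

            if (ACCESS_ONCE(p)) {
                    smp_rmb();      /* Order the load of p before loads through q. */
                    q = &b;
            }
            x = *q;
    }
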
@@ -24,8 +24,8 @@ There are three main ways of managing scheduling-clock interrupts
 workloads, you will normally -not- want this option.
 
 These three cases are described in the following three sections, followed
-by a third section on RCU-specific considerations and a fourth and final
-section listing known issues.
+by a third section on RCU-specific considerations, a fourth section
+discussing testing, and a fifth and final section listing known issues.
 
 
 NEVER OMIT SCHEDULING-CLOCK TICKS
@@ -121,14 +121,15 @@ boot parameter specifies the adaptive-ticks CPUs.  For example,
 "nohz_full=1,6-8" says that CPUs 1, 6, 7, and 8 are to be adaptive-ticks
 CPUs.  Note that you are prohibited from marking all of the CPUs as
 adaptive-tick CPUs:  At least one non-adaptive-tick CPU must remain
-online to handle timekeeping tasks in order to ensure that system calls
-like gettimeofday() returns accurate values on adaptive-tick CPUs.
-(This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no
-running user processes to observe slight drifts in clock rate.)
-Therefore, the boot CPU is prohibited from entering adaptive-ticks
-mode.  Specifying a "nohz_full=" mask that includes the boot CPU will
-result in a boot-time error message, and the boot CPU will be removed
-from the mask.
+online to handle timekeeping tasks in order to ensure that system
+calls like gettimeofday() returns accurate values on adaptive-tick CPUs.
+(This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no running
+user processes to observe slight drifts in clock rate.)  Therefore, the
+boot CPU is prohibited from entering adaptive-ticks mode.  Specifying a
+"nohz_full=" mask that includes the boot CPU will result in a boot-time
+error message, and the boot CPU will be removed from the mask.  Note that
+this means that your system must have at least two CPUs in order for
+CONFIG_NO_HZ_FULL=y to do anything for you.
 
 Alternatively, the CONFIG_NO_HZ_FULL_ALL=y Kconfig parameter specifies
 that all CPUs other than the boot CPU are adaptive-ticks CPUs.  This
@@ -232,6 +233,29 @@ scheduler will decide where to run them, which might or might not be
 where you want them to run.
 
 
+TESTING
+
+So you enable all the OS-jitter features described in this document,
+but do not see any change in your workload's behavior.  Is this because
+your workload isn't affected that much by OS jitter, or is it because
+something else is in the way?  This section helps answer this question
+by providing a simple OS-jitter test suite, which is available on branch
+master of the following git archive:
+
+git://git.kernel.org/pub/scm/linux/kernel/git/frederic/dynticks-testing.git
+
+Clone this archive and follow the instructions in the README file.
+This test procedure will produce a trace that will allow you to evaluate
+whether or not you have succeeded in removing OS jitter from your system.
+If this trace shows that you have removed OS jitter as much as is
+possible, then you can conclude that your workload is not all that
+sensitive to OS jitter.
+
+Note: this test requires that your system have at least two CPUs.
+We do not currently have a good way to remove OS jitter from single-CPU
+systems.
+
 KNOWN ISSUES
 
 o	Dyntick-idle slows transitions to and from idle slightly.
...
@@ -122,8 +122,12 @@
 #define TRACE_PRINTKS()	 VMLINUX_SYMBOL(__start___trace_bprintk_fmt) = .; \
 			 *(__trace_printk_fmt) /* Trace_printk fmt' pointer */ \
 			 VMLINUX_SYMBOL(__stop___trace_bprintk_fmt) = .;
+#define TRACEPOINT_STR() VMLINUX_SYMBOL(__start___tracepoint_str) = .; \
+			 *(__tracepoint_str) /* Trace_printk fmt' pointer */ \
+			 VMLINUX_SYMBOL(__stop___tracepoint_str) = .;
 #else
 #define TRACE_PRINTKS()
+#define TRACEPOINT_STR()
 #endif
 
 #ifdef CONFIG_FTRACE_SYSCALLS
@@ -190,7 +194,8 @@
 	VMLINUX_SYMBOL(__stop___verbose) = .;	\
 	LIKELY_PROFILE()			\
 	BRANCH_PROFILE()			\
-	TRACE_PRINTKS()
+	TRACE_PRINTKS()				\
+	TRACEPOINT_STR()
 
 /*
  * Data section helpers
...
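
The TRACEPOINT_STR() machinery works because the linker brackets the __tracepoint_str section with start/stop symbols the kernel can iterate. A minimal userspace sketch of the same trick, relying on GNU ld auto-generating __start_/__stop_ symbols for identifier-named sections on ELF targets; the section and macro names here are invented for the demo:

    #include <stdio.h>

    /* Stash a string pointer in a named section, return the pointer. */
    #define my_str(s)                                                  \
            ({                                                         \
                    static const char *___s                            \
                            __attribute__((section("my_strs"), used)) = s; \
                    ___s;                                              \
            })

    /* GNU ld defines these automatically for section "my_strs". */
    extern const char *__start_my_strs[], *__stop_my_strs[];

    int main(void)
    {
            const char **p;
            const char *h = my_str("hello");
            const char *w = my_str("world");

            (void)h; (void)w;
            /* Walk every pointer the macro dropped into the section. */
            for (p = __start_my_strs; p < __stop_my_strs; p++)
                    printf("%p -> %s\n", (void *)p, *p);
            return 0;
    }
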
@@ -63,7 +63,7 @@ struct debug_obj_descr {
 extern void debug_object_init      (void *addr, struct debug_obj_descr *descr);
 extern void
 debug_object_init_on_stack(void *addr, struct debug_obj_descr *descr);
-extern void debug_object_activate  (void *addr, struct debug_obj_descr *descr);
+extern int debug_object_activate  (void *addr, struct debug_obj_descr *descr);
 extern void debug_object_deactivate(void *addr, struct debug_obj_descr *descr);
 extern void debug_object_destroy   (void *addr, struct debug_obj_descr *descr);
 extern void debug_object_free      (void *addr, struct debug_obj_descr *descr);
@@ -85,8 +85,8 @@ static inline void
 debug_object_init      (void *addr, struct debug_obj_descr *descr) { }
 static inline void
 debug_object_init_on_stack(void *addr, struct debug_obj_descr *descr) { }
-static inline void
-debug_object_activate  (void *addr, struct debug_obj_descr *descr) { }
+static inline int
+debug_object_activate  (void *addr, struct debug_obj_descr *descr) { return 0; }
 static inline void
 debug_object_deactivate(void *addr, struct debug_obj_descr *descr) { }
 static inline void
...
@@ -359,6 +359,40 @@ do { \
 		__trace_printk(ip, fmt, ##args);	\
 } while (0)
 
+/**
+ * tracepoint_string - register constant persistent string to trace system
+ * @str - a constant persistent string that will be referenced in tracepoints
+ *
+ * If constant strings are being used in tracepoints, it is faster and
+ * more efficient to just save the pointer to the string and reference
+ * that with a printf "%s" instead of saving the string in the ring buffer
+ * and wasting space and time.
+ *
+ * The problem with the above approach is that userspace tools that read
+ * the binary output of the trace buffers do not have access to the string.
+ * Instead they just show the address of the string which is not very
+ * useful to users.
+ *
+ * With tracepoint_string(), the string will be registered to the tracing
+ * system and exported to userspace via the debugfs/tracing/printk_formats
+ * file that maps the string address to the string text. This way userspace
+ * tools that read the binary buffers have a way to map the pointers to
+ * the ASCII strings they represent.
+ *
+ * The @str used must be a constant string and persistent as it would not
+ * make sense to show a string that no longer exists. But it is still fine
+ * to be used with modules, because when modules are unloaded, if they
+ * had tracepoints, the ring buffers are cleared too. As long as the string
+ * does not change during the life of the module, it is fine to use
+ * tracepoint_string() within a module.
+ */
+#define tracepoint_string(str)						\
+	({								\
+		static const char *___tp_str __tracepoint_string = str; \
+		___tp_str;						\
+	})
+#define __tracepoint_string	__attribute__((section("__tracepoint_str")))
+
 #ifdef CONFIG_PERF_EVENTS
 struct perf_event;
...
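
A hedged usage sketch: a caller registers the literal once and then passes only the pointer to a trace event that takes const char * arguments. The TPS() shorthand mirrors the wrapper the RCU code uses elsewhere in this series, but is an assumption here, as is the exact header placement:

    #include <linux/ftrace_event.h>    /* header placement assumed */
    #include <trace/events/rcu.h>

    #define TPS(x) tracepoint_string(x)

    static void example(void)
    {
            /* Only the pointer travels through the ring buffer; the text
             * is recoverable via debugfs/tracing/printk_formats. */
            trace_rcu_utilization(TPS("Start context switch"));
    }
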
@@ -101,13 +101,13 @@ static inline u64 get_jiffies_64(void)
 #define time_after(a,b)		\
 	(typecheck(unsigned long, a) && \
 	 typecheck(unsigned long, b) && \
-	 ((long)(b) - (long)(a) < 0))
+	 ((long)((b) - (a)) < 0))
 #define time_before(a,b)	time_after(b,a)
 
 #define time_after_eq(a,b)	\
 	(typecheck(unsigned long, a) && \
 	 typecheck(unsigned long, b) && \
-	 ((long)(a) - (long)(b) >= 0))
+	 ((long)((a) - (b)) >= 0))
 #define time_before_eq(a,b)	time_after_eq(b,a)
 
 /*
@@ -130,13 +130,13 @@ static inline u64 get_jiffies_64(void)
 #define time_after64(a,b)	\
 	(typecheck(__u64, a) && \
 	 typecheck(__u64, b) && \
-	 ((__s64)(b) - (__s64)(a) < 0))
+	 ((__s64)((b) - (a)) < 0))
 #define time_before64(a,b)	time_after64(b,a)
 
 #define time_after_eq64(a,b)	\
 	(typecheck(__u64, a) && \
 	 typecheck(__u64, b) && \
-	 ((__s64)(a) - (__s64)(b) >= 0))
+	 ((__s64)((a) - (b)) >= 0))
 #define time_before_eq64(a,b)	time_after_eq64(b,a)
 
 #define time_in_range64(a, b, c) \
...
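
Why this matters: in the old form, (long)(b) - (long)(a) is a signed subtraction that can overflow, which is undefined behavior in C. In the new form the subtraction happens on unsigned operands, wraps modulo 2^BITS, and only the result is converted to signed (implementation-defined, but two's complement on every kernel target). A small runnable userspace sketch of the well-defined variant:

    #include <stdio.h>

    /* Mirrors the new time_after(a,b): true when a is "after" b,
     * even across an unsigned counter wraparound. */
    static int after(unsigned long a, unsigned long b)
    {
            return (long)(b - a) < 0;   /* wraps as unsigned, then converts */
    }

    int main(void)
    {
            unsigned long old = (unsigned long)-10; /* just before wraparound */
            unsigned long new = 10;                 /* just after wraparound */

            /* Numerically new < old, but in counter time new is later. */
            printf("%d\n", after(new, old));        /* prints 1 */
            return 0;
    }
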
@@ -267,8 +267,9 @@ static inline void list_splice_init_rcu(struct list_head *list,
  */
 #define list_first_or_null_rcu(ptr, type, member) \
 	({struct list_head *__ptr = (ptr); \
-	  struct list_head __rcu *__next = list_next_rcu(__ptr); \
-	  likely(__ptr != __next) ? container_of(__next, type, member) : NULL; \
+	  struct list_head *__next = ACCESS_ONCE(__ptr->next); \
+	  likely(__ptr != __next) ? \
+		list_entry_rcu(__next, type, member) : NULL; \
 	})
 
 /**
...
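
A brief usage sketch of the macro under an RCU read-side critical section; the struct foo type, list head, and helper are invented for the example:

    #include <linux/rculist.h>

    struct foo {
            struct list_head list;
            int data;
    };

    static LIST_HEAD(foo_head);

    static int first_foo_data(void)
    {
            struct foo *f;
            int ret = -1;

            rcu_read_lock();
            /* Returns the first element, or NULL if the list is empty. */
            f = list_first_or_null_rcu(&foo_head, struct foo, list);
            if (f)
                    ret = f->data;
            rcu_read_unlock();
            return ret;
    }
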
@@ -52,7 +52,7 @@ extern int rcutorture_runnable; /* for sysctl */
 #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU)
 extern void rcutorture_record_test_transition(void);
 extern void rcutorture_record_progress(unsigned long vernum);
-extern void do_trace_rcu_torture_read(char *rcutorturename,
+extern void do_trace_rcu_torture_read(const char *rcutorturename,
 				      struct rcu_head *rhp,
 				      unsigned long secs,
 				      unsigned long c_old,
@@ -65,7 +65,7 @@ static inline void rcutorture_record_progress(unsigned long vernum)
 {
 }
 #ifdef CONFIG_RCU_TRACE
-extern void do_trace_rcu_torture_read(char *rcutorturename,
+extern void do_trace_rcu_torture_read(const char *rcutorturename,
 				      struct rcu_head *rhp,
 				      unsigned long secs,
 				      unsigned long c_old,
@@ -229,13 +229,9 @@ extern void rcu_irq_exit(void);
 #ifdef CONFIG_RCU_USER_QS
 extern void rcu_user_enter(void);
 extern void rcu_user_exit(void);
-extern void rcu_user_enter_after_irq(void);
-extern void rcu_user_exit_after_irq(void);
 #else
 static inline void rcu_user_enter(void) { }
 static inline void rcu_user_exit(void) { }
-static inline void rcu_user_enter_after_irq(void) { }
-static inline void rcu_user_exit_after_irq(void) { }
 static inline void rcu_user_hooks_switch(struct task_struct *prev,
 					 struct task_struct *next) { }
 #endif /* CONFIG_RCU_USER_QS */
@@ -1015,4 +1011,22 @@ static inline bool rcu_is_nocb_cpu(int cpu) { return false; }
 #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */
 
+/* Only for use by adaptive-ticks code. */
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+extern bool rcu_sys_is_idle(void);
+extern void rcu_sysidle_force_exit(void);
+#else /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
+
+static inline bool rcu_sys_is_idle(void)
+{
+	return false;
+}
+
+static inline void rcu_sysidle_force_exit(void)
+{
+}
+
+#endif /* #else #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
+
 #endif /* __LINUX_RCUPDATE_H */
...
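
A hedged sketch of the intended consumer, the adaptive-ticks timekeeping code: the timekeeping CPU polls rcu_sys_is_idle() and may shut off its own tick when it returns true. The surrounding helper is hypothetical:

    #include <linux/rcupdate.h>

    /* Hypothetical helper running on the timekeeping CPU. */
    static bool timekeeping_tick_can_stop(void)
    {
            /*
             * True only once RCU's scalable state machine has observed
             * every other CPU idle.  A caller that needs the tick back
             * can force a transition out of the full-system-idle state
             * with rcu_sysidle_force_exit().
             */
            return rcu_sys_is_idle();
    }
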
@@ -19,12 +19,12 @@
  */
 TRACE_EVENT(rcu_utilization,
 
-	TP_PROTO(char *s),
+	TP_PROTO(const char *s),
 
 	TP_ARGS(s),
 
 	TP_STRUCT__entry(
-		__field(char *, s)
+		__field(const char *, s)
 	),
 
 	TP_fast_assign(
@@ -51,14 +51,14 @@ TRACE_EVENT(rcu_utilization,
  */
 TRACE_EVENT(rcu_grace_period,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum, char *gpevent),
+	TP_PROTO(const char *rcuname, unsigned long gpnum, const char *gpevent),
 
 	TP_ARGS(rcuname, gpnum, gpevent),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
-		__field(char *, gpevent)
+		__field(const char *, gpevent)
 	),
 
 	TP_fast_assign(
@@ -89,21 +89,21 @@ TRACE_EVENT(rcu_grace_period,
  */
 TRACE_EVENT(rcu_future_grace_period,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum, unsigned long completed,
+	TP_PROTO(const char *rcuname, unsigned long gpnum, unsigned long completed,
 		 unsigned long c, u8 level, int grplo, int grphi,
-		 char *gpevent),
+		 const char *gpevent),
 
 	TP_ARGS(rcuname, gpnum, completed, c, level, grplo, grphi, gpevent),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(unsigned long, completed)
 		__field(unsigned long, c)
 		__field(u8, level)
 		__field(int, grplo)
 		__field(int, grphi)
-		__field(char *, gpevent)
+		__field(const char *, gpevent)
 	),
 
 	TP_fast_assign(
@@ -132,13 +132,13 @@ TRACE_EVENT(rcu_future_grace_period,
  */
 TRACE_EVENT(rcu_grace_period_init,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum, u8 level,
+	TP_PROTO(const char *rcuname, unsigned long gpnum, u8 level,
 		 int grplo, int grphi, unsigned long qsmask),
 
 	TP_ARGS(rcuname, gpnum, level, grplo, grphi, qsmask),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(u8, level)
 		__field(int, grplo)
@@ -168,12 +168,12 @@ TRACE_EVENT(rcu_grace_period_init,
  */
 TRACE_EVENT(rcu_preempt_task,
 
-	TP_PROTO(char *rcuname, int pid, unsigned long gpnum),
+	TP_PROTO(const char *rcuname, int pid, unsigned long gpnum),
 
 	TP_ARGS(rcuname, pid, gpnum),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(int, pid)
 	),
@@ -195,12 +195,12 @@ TRACE_EVENT(rcu_preempt_task,
  */
 TRACE_EVENT(rcu_unlock_preempted_task,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum, int pid),
+	TP_PROTO(const char *rcuname, unsigned long gpnum, int pid),
 
 	TP_ARGS(rcuname, gpnum, pid),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(int, pid)
 	),
@@ -224,14 +224,14 @@ TRACE_EVENT(rcu_unlock_preempted_task,
  */
 TRACE_EVENT(rcu_quiescent_state_report,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum,
+	TP_PROTO(const char *rcuname, unsigned long gpnum,
 		 unsigned long mask, unsigned long qsmask,
 		 u8 level, int grplo, int grphi, int gp_tasks),
 
 	TP_ARGS(rcuname, gpnum, mask, qsmask, level, grplo, grphi, gp_tasks),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(unsigned long, mask)
 		__field(unsigned long, qsmask)
@@ -268,15 +268,15 @@ TRACE_EVENT(rcu_quiescent_state_report,
  */
 TRACE_EVENT(rcu_fqs,
 
-	TP_PROTO(char *rcuname, unsigned long gpnum, int cpu, char *qsevent),
+	TP_PROTO(const char *rcuname, unsigned long gpnum, int cpu, const char *qsevent),
 
 	TP_ARGS(rcuname, gpnum, cpu, qsevent),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(unsigned long, gpnum)
 		__field(int, cpu)
-		__field(char *, qsevent)
+		__field(const char *, qsevent)
 	),
 
 	TP_fast_assign(
@@ -308,12 +308,12 @@ TRACE_EVENT(rcu_fqs,
  */
 TRACE_EVENT(rcu_dyntick,
 
-	TP_PROTO(char *polarity, long long oldnesting, long long newnesting),
+	TP_PROTO(const char *polarity, long long oldnesting, long long newnesting),
 
 	TP_ARGS(polarity, oldnesting, newnesting),
 
 	TP_STRUCT__entry(
-		__field(char *, polarity)
+		__field(const char *, polarity)
 		__field(long long, oldnesting)
 		__field(long long, newnesting)
 	),
@@ -352,12 +352,12 @@ TRACE_EVENT(rcu_dyntick,
  */
 TRACE_EVENT(rcu_prep_idle,
 
-	TP_PROTO(char *reason),
+	TP_PROTO(const char *reason),
 
 	TP_ARGS(reason),
 
 	TP_STRUCT__entry(
-		__field(char *, reason)
+		__field(const char *, reason)
 	),
 
 	TP_fast_assign(
@@ -376,13 +376,13 @@ TRACE_EVENT(rcu_prep_idle,
  */
 TRACE_EVENT(rcu_callback,
 
-	TP_PROTO(char *rcuname, struct rcu_head *rhp, long qlen_lazy,
+	TP_PROTO(const char *rcuname, struct rcu_head *rhp, long qlen_lazy,
 		 long qlen),
 
 	TP_ARGS(rcuname, rhp, qlen_lazy, qlen),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(void *, rhp)
 		__field(void *, func)
 		__field(long, qlen_lazy)
@@ -412,13 +412,13 @@ TRACE_EVENT(rcu_callback,
  */
 TRACE_EVENT(rcu_kfree_callback,
 
-	TP_PROTO(char *rcuname, struct rcu_head *rhp, unsigned long offset,
+	TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset,
 		 long qlen_lazy, long qlen),
 
 	TP_ARGS(rcuname, rhp, offset, qlen_lazy, qlen),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(void *, rhp)
 		__field(unsigned long, offset)
 		__field(long, qlen_lazy)
@@ -447,12 +447,12 @@ TRACE_EVENT(rcu_kfree_callback,
  */
 TRACE_EVENT(rcu_batch_start,
 
-	TP_PROTO(char *rcuname, long qlen_lazy, long qlen, long blimit),
+	TP_PROTO(const char *rcuname, long qlen_lazy, long qlen, long blimit),
 
 	TP_ARGS(rcuname, qlen_lazy, qlen, blimit),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(long, qlen_lazy)
 		__field(long, qlen)
 		__field(long, blimit)
@@ -477,12 +477,12 @@ TRACE_EVENT(rcu_batch_start,
  */
 TRACE_EVENT(rcu_invoke_callback,
 
-	TP_PROTO(char *rcuname, struct rcu_head *rhp),
+	TP_PROTO(const char *rcuname, struct rcu_head *rhp),
 
 	TP_ARGS(rcuname, rhp),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(void *, rhp)
 		__field(void *, func)
 	),
@@ -506,12 +506,12 @@ TRACE_EVENT(rcu_invoke_callback,
  */
 TRACE_EVENT(rcu_invoke_kfree_callback,
 
-	TP_PROTO(char *rcuname, struct rcu_head *rhp, unsigned long offset),
+	TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset),
 
 	TP_ARGS(rcuname, rhp, offset),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(void *, rhp)
 		__field(unsigned long, offset)
 	),
@@ -539,13 +539,13 @@ TRACE_EVENT(rcu_invoke_kfree_callback,
  */
 TRACE_EVENT(rcu_batch_end,
 
-	TP_PROTO(char *rcuname, int callbacks_invoked,
+	TP_PROTO(const char *rcuname, int callbacks_invoked,
 		 bool cb, bool nr, bool iit, bool risk),
 
 	TP_ARGS(rcuname, callbacks_invoked, cb, nr, iit, risk),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
+		__field(const char *, rcuname)
 		__field(int, callbacks_invoked)
 		__field(bool, cb)
 		__field(bool, nr)
@@ -577,13 +577,13 @@ TRACE_EVENT(rcu_batch_end,
  */
 TRACE_EVENT(rcu_torture_read,
 
-	TP_PROTO(char *rcutorturename, struct rcu_head *rhp,
+	TP_PROTO(const char *rcutorturename, struct rcu_head *rhp,
 		 unsigned long secs, unsigned long c_old, unsigned long c),
 
 	TP_ARGS(rcutorturename, rhp, secs, c_old, c),
 
 	TP_STRUCT__entry(
-		__field(char *, rcutorturename)
+		__field(const char *, rcutorturename)
 		__field(struct rcu_head *, rhp)
 		__field(unsigned long, secs)
 		__field(unsigned long, c_old)
@@ -623,13 +623,13 @@ TRACE_EVENT(rcu_torture_read,
  */
 TRACE_EVENT(rcu_barrier,
 
-	TP_PROTO(char *rcuname, char *s, int cpu, int cnt, unsigned long done),
+	TP_PROTO(const char *rcuname, const char *s, int cpu, int cnt, unsigned long done),
 
 	TP_ARGS(rcuname, s, cpu, cnt, done),
 
 	TP_STRUCT__entry(
-		__field(char *, rcuname)
-		__field(char *, s)
+		__field(const char *, rcuname)
+		__field(const char *, s)
 		__field(int, cpu)
 		__field(int, cnt)
 		__field(unsigned long, done)
...
@@ -470,6 +470,7 @@ config TREE_RCU
 config TREE_PREEMPT_RCU
 	bool "Preemptible tree-based hierarchical RCU"
 	depends on PREEMPT
+	select IRQ_WORK
 	help
 	  This option selects the RCU implementation that is
 	  designed for very large SMP systems with hundreds or
...
@@ -67,12 +67,15 @@
 
 extern struct debug_obj_descr rcuhead_debug_descr;
 
-static inline void debug_rcu_head_queue(struct rcu_head *head)
+static inline int debug_rcu_head_queue(struct rcu_head *head)
 {
-	debug_object_activate(head, &rcuhead_debug_descr);
+	int r1;
+
+	r1 = debug_object_activate(head, &rcuhead_debug_descr);
 	debug_object_active_state(head, &rcuhead_debug_descr,
 				  STATE_RCU_HEAD_READY,
 				  STATE_RCU_HEAD_QUEUED);
+	return r1;
 }
 
 static inline void debug_rcu_head_unqueue(struct rcu_head *head)
@@ -83,8 +86,9 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
 	debug_object_deactivate(head, &rcuhead_debug_descr);
 }
 #else	/* !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
-static inline void debug_rcu_head_queue(struct rcu_head *head)
+static inline int debug_rcu_head_queue(struct rcu_head *head)
 {
+	return 0;
 }
 
 static inline void debug_rcu_head_unqueue(struct rcu_head *head)
@@ -94,7 +98,7 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
 
 extern void kfree(const void *);
 
-static inline bool __rcu_reclaim(char *rn, struct rcu_head *head)
+static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 {
 	unsigned long offset = (unsigned long)head->func;
...
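
The new return value exists so that callback-queueing code can detect a duplicate call_rcu(). Roughly the intended shape, as a simplified sketch rather than the exact __call_rcu() body:

    static void example_queue_callback(struct rcu_head *head,
                                       void (*func)(struct rcu_head *))
    {
            if (debug_rcu_head_queue(head)) {
                    /* Probable double call_rcu(): leak the callback
                     * rather than corrupt the callback lists. */
                    WARN_ONCE(1, "duplicate call_rcu() suspected\n");
                    return;
            }
            head->func = func;
            /* ... enqueue head on this CPU's callback list ... */
    }
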
@@ -211,43 +211,6 @@ static inline void debug_rcu_head_free(struct rcu_head *head)
 	debug_object_free(head, &rcuhead_debug_descr);
 }
 
-/*
- * fixup_init is called when:
- * - an active object is initialized
- */
-static int rcuhead_fixup_init(void *addr, enum debug_obj_state state)
-{
-	struct rcu_head *head = addr;
-
-	switch (state) {
-	case ODEBUG_STATE_ACTIVE:
-		/*
-		 * Ensure that queued callbacks are all executed.
-		 * If we detect that we are nested in a RCU read-side critical
-		 * section, we should simply fail, otherwise we would deadlock.
-		 * In !PREEMPT configurations, there is no way to tell if we are
-		 * in a RCU read-side critical section or not, so we never
-		 * attempt any fixup and just print a warning.
-		 */
-#ifndef CONFIG_PREEMPT
-		WARN_ON_ONCE(1);
-		return 0;
-#endif
-		if (rcu_preempt_depth() != 0 || preempt_count() != 0 ||
-		    irqs_disabled()) {
-			WARN_ON_ONCE(1);
-			return 0;
-		}
-		rcu_barrier();
-		rcu_barrier_sched();
-		rcu_barrier_bh();
-		debug_object_init(head, &rcuhead_debug_descr);
-		return 1;
-	default:
-		return 0;
-	}
-}
-
 /*
  * fixup_activate is called when:
  * - an active object is activated
@@ -268,69 +231,8 @@ static int rcuhead_fixup_activate(void *addr, enum debug_obj_state state)
 		debug_object_init(head, &rcuhead_debug_descr);
 		debug_object_activate(head, &rcuhead_debug_descr);
 		return 0;
-
-	case ODEBUG_STATE_ACTIVE:
-		/*
-		 * Ensure that queued callbacks are all executed.
-		 * If we detect that we are nested in a RCU read-side critical
-		 * section, we should simply fail, otherwise we would deadlock.
-		 * In !PREEMPT configurations, there is no way to tell if we are
-		 * in a RCU read-side critical section or not, so we never
-		 * attempt any fixup and just print a warning.
-		 */
-#ifndef CONFIG_PREEMPT
-		WARN_ON_ONCE(1);
-		return 0;
-#endif
-		if (rcu_preempt_depth() != 0 || preempt_count() != 0 ||
-		    irqs_disabled()) {
-			WARN_ON_ONCE(1);
-			return 0;
-		}
-		rcu_barrier();
-		rcu_barrier_sched();
-		rcu_barrier_bh();
-		debug_object_activate(head, &rcuhead_debug_descr);
-		return 1;
 	default:
-		return 0;
-	}
-}
-
-/*
- * fixup_free is called when:
- * - an active object is freed
- */
-static int rcuhead_fixup_free(void *addr, enum debug_obj_state state)
-{
-	struct rcu_head *head = addr;
-
-	switch (state) {
-	case ODEBUG_STATE_ACTIVE:
-		/*
-		 * Ensure that queued callbacks are all executed.
-		 * If we detect that we are nested in a RCU read-side critical
-		 * section, we should simply fail, otherwise we would deadlock.
-		 * In !PREEMPT configurations, there is no way to tell if we are
-		 * in a RCU read-side critical section or not, so we never
-		 * attempt any fixup and just print a warning.
-		 */
-#ifndef CONFIG_PREEMPT
-		WARN_ON_ONCE(1);
-		return 0;
-#endif
-		if (rcu_preempt_depth() != 0 || preempt_count() != 0 ||
-		    irqs_disabled()) {
-			WARN_ON_ONCE(1);
-			return 0;
-		}
-		rcu_barrier();
-		rcu_barrier_sched();
-		rcu_barrier_bh();
-		debug_object_free(head, &rcuhead_debug_descr);
-		return 1;
-	default:
-		return 0;
+		return 1;
 	}
 }
@@ -369,15 +271,13 @@ EXPORT_SYMBOL_GPL(destroy_rcu_head_on_stack);
 
 struct debug_obj_descr rcuhead_debug_descr = {
 	.name = "rcu_head",
-	.fixup_init = rcuhead_fixup_init,
 	.fixup_activate = rcuhead_fixup_activate,
-	.fixup_free = rcuhead_fixup_free,
 };
 EXPORT_SYMBOL_GPL(rcuhead_debug_descr);
 
 #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
 
 #if defined(CONFIG_TREE_RCU) || defined(CONFIG_TREE_PREEMPT_RCU) || defined(CONFIG_RCU_TRACE)
-void do_trace_rcu_torture_read(char *rcutorturename, struct rcu_head *rhp,
+void do_trace_rcu_torture_read(const char *rcutorturename, struct rcu_head *rhp,
 			       unsigned long secs,
 			       unsigned long c_old, unsigned long c)
 {
...
@@ -264,7 +264,7 @@ void rcu_check_callbacks(int cpu, int user)
  */
 static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
 {
-	char *rn = NULL;
+	const char *rn = NULL;
 	struct rcu_head *next, *list;
 	unsigned long flags;
 	RCU_TRACE(int cb_count = 0);
...
@@ -36,7 +36,7 @@ struct rcu_ctrlblk {
 	RCU_TRACE(unsigned long gp_start);	/* Start time for stalls. */
 	RCU_TRACE(unsigned long ticks_this_gp);	/* Statistic for stalls. */
 	RCU_TRACE(unsigned long jiffies_stall);	/* Jiffies at next stall. */
-	RCU_TRACE(char *name);			/* Name of RCU type. */
+	RCU_TRACE(const char *name);		/* Name of RCU type. */
 };
 
 /* Definition for rcupdate control block. */
...
@@ -88,6 +88,14 @@ struct rcu_dynticks {
 				    /* Process level is worth LLONG_MAX/2. */
 	int dynticks_nmi_nesting;   /* Track NMI nesting level. */
 	atomic_t dynticks;	    /* Even value for idle, else odd. */
+#ifdef CONFIG_NO_HZ_FULL_SYSIDLE
+	long long dynticks_idle_nesting;
+				    /* irq/process nesting level from idle. */
+	atomic_t dynticks_idle;	    /* Even value for idle, else odd. */
+				    /*  "Idle" excludes userspace execution. */
+	unsigned long dynticks_idle_jiffies;
+				    /* End of last non-NMI non-idle period. */
+#endif /* #ifdef CONFIG_NO_HZ_FULL_SYSIDLE */
 #ifdef CONFIG_RCU_FAST_NO_HZ
 	bool all_lazy;		    /* Are all CPU's CBs lazy? */
 	unsigned long nonlazy_posted;
@@ -445,7 +453,7 @@ struct rcu_state {
 						/*  for CPU stalls. */
 	unsigned long gp_max;			/* Maximum GP duration in */
 						/*  jiffies. */
-	char *name;				/* Name of structure. */
+	const char *name;			/* Name of structure. */
 	char abbr;				/* Abbreviated name. */
 	struct list_head flavors;		/* List of RCU flavors. */
 	struct irq_work wakeup_work;		/* Postponed wakeups */
@@ -545,6 +553,15 @@ static void rcu_boot_init_nocb_percpu_data(struct rcu_data *rdp);
 static void rcu_spawn_nocb_kthreads(struct rcu_state *rsp);
 static void rcu_kick_nohz_cpu(int cpu);
 static bool init_nocb_callback_list(struct rcu_data *rdp);
+static void rcu_sysidle_enter(struct rcu_dynticks *rdtp, int irq);
+static void rcu_sysidle_exit(struct rcu_dynticks *rdtp, int irq);
+static void rcu_sysidle_check_cpu(struct rcu_data *rdp, bool *isidle,
+				  unsigned long *maxj);
+static bool is_sysidle_rcu_state(struct rcu_state *rsp);
+static void rcu_sysidle_report_gp(struct rcu_state *rsp, int isidle,
+				  unsigned long maxj);
+static void rcu_bind_gp_kthread(void);
+static void rcu_sysidle_init_percpu_data(struct rcu_dynticks *rdtp);
 
 #endif /* #ifndef RCU_TREE_NONCORE */
...
@@ -134,6 +134,56 @@ config NO_HZ_FULL_ALL
 	 Note the boot CPU will still be kept outside the range to
 	 handle the timekeeping duty.
 
+config NO_HZ_FULL_SYSIDLE
+	bool "Detect full-system idle state for full dynticks system"
+	depends on NO_HZ_FULL
+	default n
+	help
+	 At least one CPU must keep the scheduling-clock tick running for
+	 timekeeping purposes whenever there is a non-idle CPU, where
+	 "non-idle" also includes dynticks CPUs as long as they are
+	 running non-idle tasks.  Because the underlying adaptive-tick
+	 support cannot distinguish between all CPUs being idle and
+	 all CPUs each running a single task in dynticks mode, the
+	 underlying support simply ensures that there is always a CPU
+	 handling the scheduling-clock tick, whether or not all CPUs
+	 are idle.  This Kconfig option enables scalable detection of
+	 the all-CPUs-idle state, thus allowing the scheduling-clock
+	 tick to be disabled when all CPUs are idle.  Note that scalable
+	 detection of the all-CPUs-idle state means that larger systems
+	 will be slower to declare the all-CPUs-idle state.
+
+	 Say Y if you would like to help debug all-CPUs-idle detection.
+
+	 Say N if you are unsure.
+
+config NO_HZ_FULL_SYSIDLE_SMALL
+	int "Number of CPUs above which large-system approach is used"
+	depends on NO_HZ_FULL_SYSIDLE
+	range 1 NR_CPUS
+	default 8
+	help
+	 The full-system idle detection mechanism takes a lazy approach
+	 on large systems, as is required to attain decent scalability.
+	 However, on smaller systems, scalability is not anywhere near as
+	 large a concern as is energy efficiency.  The sysidle subsystem
+	 therefore uses a fast but non-scalable algorithm for small
+	 systems and a lazier but scalable algorithm for large systems.
+	 This Kconfig parameter defines the number of CPUs in the largest
+	 system that will be considered to be "small".
+
+	 The default value will be fine in most cases.  Battery-powered
+	 systems that (1) enable NO_HZ_FULL_SYSIDLE, (2) have larger
+	 numbers of CPUs, and (3) are suffering from battery-lifetime
+	 problems due to long sysidle latencies might wish to experiment
+	 with larger values for this Kconfig parameter.  On the other
+	 hand, they might be even better served by disabling NO_HZ_FULL
+	 entirely, given that NO_HZ_FULL is intended for HPC and
+	 real-time workloads that at present do not tend to be run on
+	 battery-powered systems.
+
+	 Take the default if you are unsure.
+
 config NO_HZ
 	bool "Old Idle dynticks config"
 	depends on !ARCH_USES_GETTIMEOFFSET && GENERIC_CLOCKEVENTS
...
@@ -1022,6 +1022,9 @@ extern struct list_head ftrace_events;
 extern const char *__start___trace_bprintk_fmt[];
 extern const char *__stop___trace_bprintk_fmt[];
 
+extern const char *__start___tracepoint_str[];
+extern const char *__stop___tracepoint_str[];
+
 void trace_printk_init_buffers(void);
 void trace_printk_start_comm(void);
 int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
...
@@ -244,12 +244,31 @@ static const char **find_next(void *v, loff_t *pos)
 {
 	const char **fmt = v;
 	int start_index;
+	int last_index;
 
 	start_index = __stop___trace_bprintk_fmt - __start___trace_bprintk_fmt;
 
 	if (*pos < start_index)
 		return __start___trace_bprintk_fmt + *pos;
 
+	/*
+	 * The __tracepoint_str section is treated the same as the
+	 * __trace_printk_fmt section. The difference is that the
+	 * __trace_printk_fmt section should only be used by trace_printk()
+	 * in a debugging environment, as if anything exists in that section
+	 * the trace_prink() helper buffers are allocated, which would just
+	 * waste space in a production environment.
+	 *
+	 * The __tracepoint_str sections on the other hand are used by
+	 * tracepoints which need to map pointers to their strings to
+	 * the ASCII text for userspace.
+	 */
+	last_index = start_index;
+	start_index = __stop___tracepoint_str - __start___tracepoint_str;
+
+	if (*pos < last_index + start_index)
+		return __start___tracepoint_str + (*pos - last_index);
+
 	return find_next_mod_format(start_index, v, fmt, pos);
 }
...
@@ -381,19 +381,21 @@ void debug_object_init_on_stack(void *addr, struct debug_obj_descr *descr)
  * debug_object_activate - debug checks when an object is activated
  * @addr:	address of the object
  * @descr:	pointer to an object specific debug description structure
+ * Returns 0 for success, -EINVAL for check failed.
  */
-void debug_object_activate(void *addr, struct debug_obj_descr *descr)
+int debug_object_activate(void *addr, struct debug_obj_descr *descr)
 {
 	enum debug_obj_state state;
 	struct debug_bucket *db;
 	struct debug_obj *obj;
 	unsigned long flags;
+	int ret;
 	struct debug_obj o = { .object = addr,
 			       .state = ODEBUG_STATE_NOTAVAILABLE,
 			       .descr = descr };
 
 	if (!debug_objects_enabled)
-		return;
+		return 0;
 
 	db = get_bucket((unsigned long) addr);
 
@@ -405,23 +407,26 @@ void debug_object_activate(void *addr, struct debug_obj_descr *descr)
 		case ODEBUG_STATE_INIT:
 		case ODEBUG_STATE_INACTIVE:
 			obj->state = ODEBUG_STATE_ACTIVE;
+			ret = 0;
 			break;
 
 		case ODEBUG_STATE_ACTIVE:
 			debug_print_object(obj, "activate");
 			state = obj->state;
 			raw_spin_unlock_irqrestore(&db->lock, flags);
-			debug_object_fixup(descr->fixup_activate, addr, state);
-			return;
+			ret = debug_object_fixup(descr->fixup_activate, addr, state);
+			return ret ? -EINVAL : 0;
 
 		case ODEBUG_STATE_DESTROYED:
 			debug_print_object(obj, "activate");
+			ret = -EINVAL;
 			break;
 		default:
+			ret = 0;
 			break;
 		}
 		raw_spin_unlock_irqrestore(&db->lock, flags);
-		return;
+		return ret;
 	}
 
 	raw_spin_unlock_irqrestore(&db->lock, flags);
@@ -431,8 +436,11 @@ void debug_object_activate(void *addr, struct debug_obj_descr *descr)
 	 * true or not.
 	 */
 	if (debug_object_fixup(descr->fixup_activate, addr,
-			       ODEBUG_STATE_NOTAVAILABLE))
+			       ODEBUG_STATE_NOTAVAILABLE)) {
 		debug_print_object(&o, "activate");
+		return -EINVAL;
+	}
+	return 0;
 }
 
 /**
...
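
With the int return, callers can finally observe activation failures; a minimal sketch, where obj and my_descr are placeholders for a caller's tracked object and its descriptor:

    if (debug_object_activate(obj, &my_descr))
            pr_warn("debugobjects: activation of %p failed, possible reuse while active\n", obj);
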