1. 25 Aug, 2020 2 commits
    • rcu: Execute RCU reader shortly after rcu_core for strict GPs · a657f261
      Paul E. McKenney authored
      
      A kernel built with CONFIG_RCU_STRICT_GRACE_PERIOD=y needs a quiescent
      state to appear very shortly after a CPU has noticed a new grace period.
      Placing an RCU reader immediately after this point is ineffective because
      this normally happens in softirq context, which acts as a big RCU reader.
      This commit therefore introduces a new per-CPU work_struct, which is
      used at the end of rcu_core() processing to schedule an RCU read-side
      critical section from within a clean environment.
      
      Reported-by: Jann Horn <jannh@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
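      
      A minimal sketch of the idea (illustrative only; the per-CPU variable,
      its initialization, and the queueing call here are assumptions rather
      than the actual tree.c code):
      
        static DEFINE_PER_CPU(struct work_struct, strict_work);
      
        /* Workqueue context is a clean environment for an RCU reader. */
        static void strict_work_handler(struct work_struct *work)
        {
                rcu_read_lock();
                rcu_read_unlock();
        }
      
        /* At the end of rcu_core(), after INIT_WORK() at boot time: */
        if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
                queue_work_on(smp_processor_id(), system_wq,
                              this_cpu_ptr(&strict_work));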
    • rcu: Move rcu_cpu_started per-CPU variable to rcu_data · c0f97f20
      Paul E. McKenney authored
      When the rcu_cpu_started per-CPU variable was added by commit f64c6013
      ("rcu/x86: Provide early rcu_cpu_starting() callback"), there were
      multiple sets of per-CPU rcu_data structures.  Therefore, the
      rcu_cpu_started flag was added as a separate per-CPU variable.  But now
      there is only one set of per-CPU rcu_data structures, so this commit
      moves rcu_cpu_started to a new ->cpu_started field in that structure.
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
  2. 29 Jun, 2020 5 commits
  3. 27 Apr, 2020 2 commits
    • rcu-tasks: Avoid IPIing userspace/idle tasks if kernel is so built · 7d0c9c50
      Paul E. McKenney authored
      
      Systems running CPU-bound real-time tasks do not want IPIs sent to CPUs
      executing nohz_full userspace tasks.  Battery-powered systems don't
      want IPIs sent to idle CPUs in low-power mode.  Unfortunately, RCU tasks
      trace can and will send such IPIs in some cases.
      
      Both of these situations occur only when the target CPU is in RCU
      dyntick-idle mode, in other words, when RCU is not watching the
      target CPU.  This suggests that CPUs in dyntick-idle mode should use
      memory barriers in outermost invocations of rcu_read_lock_trace()
      and rcu_read_unlock_trace(), which would allow the RCU tasks trace
      grace period to directly read out the target CPU's read-side state.
      One challenge is that RCU tasks trace is not targeting a specific
      CPU, but rather a task.  And that task could switch from one CPU to
      another at any time.
      
      This commit therefore uses try_invoke_on_locked_down_task()
      and checks for task_curr() in trc_inspect_reader_notrunning().
      When this condition holds, the target task is running and cannot move.
      If CONFIG_TASKS_TRACE_RCU_READ_MB=y, the new rcu_dynticks_zero_in_eqs()
      function can be used to check if the specified integer (in this case,
      t->trc_reader_nesting) is zero while the target CPU remains in that same
      dyntick-idle sojourn.  If so, the target task is in a quiescent state.
      If not, trc_read_check_handler() must indicate failure so that the
      grace-period kthread can take appropriate action or retry after an
      appropriate delay, as the case may be.
      
      With this change, given CONFIG_TASKS_TRACE_RCU_READ_MB=y, if a given
      CPU remains idle or a given task continues executing in nohz_full mode,
      the RCU tasks trace grace-period kthread will detect this without the
      need to send an IPI.
      Suggested-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
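      
      A rough sketch of the check described above (hypothetical wrapper;
      rcu_dynticks_zero_in_eqs() and t->trc_reader_nesting are named in the
      commit text, everything else is an assumption):
      
        static bool trc_inspect_running_reader_sketch(struct task_struct *t, int cpu)
        {
                if (!task_curr(t))
                        return false;   /* Not running; handled by other code paths. */
                /* Quiescent only if nesting stays zero across one dyntick-idle sojourn. */
                return rcu_dynticks_zero_in_eqs(cpu, &t->trc_reader_nesting);
        }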
    • rcu: Expedite first two FQS scans under callback-overload conditions · 1fca4d12
      Paul E. McKenney authored
      
      Even if some CPUs have excessive numbers of callbacks, RCU's grace-period
      kthread will still wait normally between successive force-quiescent-state
      scans.  The first two are the most important, as they are the ones that
      enlist aid from the scheduler when overloaded.  This commit therefore
      omits the wait before the first and the second force-quiescent-state
      scan under callback-overload conditions.
      
      This approach was inspired by a discussion with Jeff Roberson.
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
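      
      A sketch of the resulting wait logic, using purely illustrative helper
      names (not the actual grace-period kthread code):
      
        static void fqs_loop_sketch(void)
        {
                int nscans = 0;
      
                for (;;) {
                        /* Under callback overload, skip the wait before scans 1 and 2. */
                        if (nscans >= 2 || !some_cpu_is_callback_overloaded())
                                wait_up_to_jiffies_till_next_fqs();
                        nscans++;
                        force_quiescent_states_once();
                        if (grace_period_has_ended())
                                return;
                }
        }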
  4. 21 Feb, 2020 1 commit
    • rcu: React to callback overload by aggressively seeking quiescent states · b2b00ddf
      Paul E. McKenney authored
      
      In default configurations, RCU currently waits at least 100 milliseconds
      before asking cond_resched() and/or resched_cpu() for help seeking
      quiescent states to end a grace period.  But 100 milliseconds can be
      one good long time during an RCU callback flood, for example, as can
      happen when user processes repeatedly open and close files in a tight
      loop.  These 100-millisecond gaps in successive grace periods during a
      callback flood can result in excessive numbers of callbacks piling up,
      unnecessarily increasing memory footprint.
      
      This commit therefore asks cond_resched() and/or resched_cpu() for help
      as early as the first FQS scan when at least one of the CPUs has more
      than 20,000 callbacks queued, a number that can be changed using the new
      rcutree.qovld kernel boot parameter.  An auxiliary qovld_calc variable
      is used to avoid acquisition of locks that have not yet been initialized.
      Early tests indicate that this reduces the RCU-callback memory footprint
      during rcutorture floods by from 50% to 4x, depending on configuration.
      Reported-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Reported-by: Tejun Heo <tj@kernel.org>
      [ paulmck: Fix bug located by Qian Cai. ]
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
      Tested-by: Dexuan Cui <decui@microsoft.com>
      Tested-by: Qian Cai <cai@lca.pw>
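      
      An illustrative sketch of the overload check (the helper name and exact
      placement are assumptions; qovld_calc and the 20,000-callback default
      come from the commit text):
      
        static long qovld_calc = 20000;         /* rcutree.qovld, resolved after boot. */
      
        static void fqs_check_overload_sketch(struct rcu_data *rdp)
        {
                if (rcu_segcblist_n_cbs(&rdp->cblist) <= qovld_calc)
                        return;                 /* Not overloaded, wait as usual. */
                /* Overloaded: enlist the scheduler's help without further delay. */
                resched_cpu(rdp->cpu);
        }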
  5. 24 Jan, 2020 4 commits
  6. 12 Dec, 2019 1 commit
  7. 09 Dec, 2019 1 commit
    • rcu: Enable tick for nohz_full CPUs slow to provide expedited QS · df1e849a
      Paul E. McKenney authored
      
      An expedited grace period can be stalled by a nohz_full CPU looping
      in kernel context.  This possibility is currently handled by some
      carefully crafted checks in rcu_read_unlock_special() that enlist help
      from ksoftirqd when permitted by the scheduler.  However, it is exactly
      these checks that require the scheduler avoid holding any of its rq or
      pi locks across rcu_read_unlock() without also having held them across
      the entire RCU read-side critical section.
      
      It would therefore be very nice if expedited grace periods could
      handle nohz_full CPUs looping in kernel context without such checks.
      This commit therefore adds code to the expedited grace period's wait
      and cleanup code that forces the scheduler-clock interrupt on for CPUs
      that fail to quickly supply a quiescent state.  "Quickly" is currently
      a hard-coded single-jiffy delay.
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
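      
      A sketch of the tick-forcing step (hypothetical helpers; the use of
      TICK_DEP_BIT_RCU here is an assumption for illustration):
      
        static void exp_wait_nudge_sketch(int cpu)
        {
                /* Force the scheduler-clock tick on for a slow nohz_full CPU... */
                if (tick_nohz_full_cpu(cpu))
                        tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU);
        }
      
        static void exp_cleanup_sketch(int cpu)
        {
                /* ...and turn it back off once the expedited grace period completes. */
                tick_dep_clear_cpu(cpu, TICK_DEP_BIT_RCU);
        }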
  8. 28 Oct, 2019 1 commit
    • rcu: Force tick on for nohz_full CPUs not reaching quiescent states · 66e4c33b
      Paul E. McKenney authored
      
      CPUs running for long time periods in the kernel in nohz_full mode
      might leave the scheduling-clock interrupt disabled for the full
      duration of their in-kernel execution.  This can (among other things)
      delay grace periods.  This commit therefore forces the tick back on
      for any nohz_full CPU that is failing to pass through a quiescent state
      upon return from interrupt, which the resched_cpu() will induce.
      Reported-by: Joel Fernandes <joel@joelfernandes.org>
      [ paulmck: Clear ->rcu_forced_tick as reported by Joel Fernandes testing. ]
      [ paulmck: Apply Joel Fernandes TICK_DEP_MASK_RCU->TICK_DEP_BIT_RCU fix. ]
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
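      
      A sketch of the quiescent-state check (the surrounding helper is
      hypothetical; ->rcu_forced_tick and TICK_DEP_BIT_RCU are named in the
      notes above):
      
        static void force_tick_if_stalled_sketch(struct rcu_data *rdp)
        {
                if (tick_nohz_full_cpu(rdp->cpu) && !READ_ONCE(rdp->rcu_forced_tick)) {
                        WRITE_ONCE(rdp->rcu_forced_tick, true);
                        tick_dep_set_cpu(rdp->cpu, TICK_DEP_BIT_RCU);
                        resched_cpu(rdp->cpu);  /* Induce a QS on return from interrupt. */
                }
        }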
  9. 13 Aug, 2019 13 commits
    • rcu/nocb: Print no-CBs diagnostics when rcutorture writer unduly delayed · f7a81b12
      Paul E. McKenney authored
      
      This commit causes locking, sleeping, and callback state to be printed
      for no-CBs CPUs when the rcutorture writer is delayed sufficiently for
      rcutorture to complain.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Add bypass callback queueing · d1b222c6
      Paul E. McKenney authored
      
      Use of the rcu_data structure's segmented ->cblist for no-CBs CPUs
      takes advantage of unrelated grace periods, thus reducing the memory
      footprint in the face of floods of call_rcu() invocations.  However,
      the ->cblist field is a more-complex rcu_segcblist structure which must
      be protected via locking.  Even though there are only three entities
      which can acquire this lock (the CPU invoking call_rcu(), the no-CBs
      grace-period kthread, and the no-CBs callbacks kthread), the contention
      on this lock is excessive under heavy stress.
      
      This commit therefore greatly reduces contention by provisioning
      an rcu_cblist structure field named ->nocb_bypass within the
      rcu_data structure.  Each no-CBs CPU is permitted only a limited
      number of enqueues onto the ->cblist per jiffy, controlled by a new
      nocb_nobypass_lim_per_jiffy kernel boot parameter that defaults to
      about 16 enqueues per millisecond (16 * 1000 / HZ).  When that limit is
      exceeded, the CPU instead enqueues onto the new ->nocb_bypass.
      
      The ->nocb_bypass is flushed into the ->cblist every jiffy or when
      the number of callbacks on ->nocb_bypass exceeds qhimark, whichever
      happens first.  During call_rcu() floods, this flushing is carried out
      by the CPU during the course of its call_rcu() invocations.  However,
      a CPU could simply stop invoking call_rcu() at any time.  The no-CBs
      grace-period kthread therefore carries out less-aggressive flushing
      (every few jiffies or when the number of callbacks on ->nocb_bypass
      exceeds (2 * qhimark), whichever comes first).  This means that the
      no-CBs grace-period kthread cannot be permitted to do unbounded waits
      while there are callbacks on ->nocb_bypass.  A ->nocb_bypass_timer is
      used to provide the needed wakeups.
      
      [ paulmck: Apply Coverity feedback reported by Colin Ian King. ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
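      
      A sketch of the enqueue-time decision (field names other than
      ->nocb_bypass and ->cblist are assumptions, and the flush helper is
      hypothetical; nocb_nobypass_lim_per_jiffy and qhimark come from the
      commit text):
      
        static bool nocb_try_bypass_sketch(struct rcu_data *rdp, struct rcu_head *rhp)
        {
                if (rdp->nocb_nobypass_count < nocb_nobypass_lim_per_jiffy) {
                        rdp->nocb_nobypass_count++;
                        return false;   /* Under the per-jiffy limit: use ->cblist. */
                }
                /* Over the limit: divert this callback to ->nocb_bypass. */
                rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
                /* Flush into ->cblist every jiffy or once the bypass exceeds qhimark. */
                if (rcu_cblist_n_cbs(&rdp->nocb_bypass) > qhimark ||
                    time_after(jiffies, READ_ONCE(rdp->nocb_bypass_first) + 1))
                        nocb_flush_bypass_sketch(rdp);
                return true;
        }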
    • rcu/nocb: Reduce ->nocb_lock contention with separate ->nocb_gp_lock · 4fd8c5f1
      Paul E. McKenney authored
      
      The sleep/wakeup of the no-CBs grace-period kthreads is synchronized
      using the ->nocb_lock of the first CPU corresponding to that kthread.
      This commit provides a separate ->nocb_gp_lock for this purpose, thus
      reducing contention on ->nocb_lock.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Avoid ->nocb_lock capture by corresponding CPU · 81c0b3d7
      Paul E. McKenney authored
      
      A given rcu_data structure's ->nocb_lock can be acquired very frequently
      by the corresponding CPU and occasionally by the corresponding no-CBs
      grace-period and callbacks kthreads.  In particular, these two kthreads
      will have frequent gaps between ->nocb_lock acquisitions that are roughly
      a grace period in duration.  This means that any excessive ->nocb_lock
      contention will be due to the CPU's acquisitions, and this in turn
      enables a very naive contention-avoidance strategy to be quite effective.
      
      This commit therefore modifies rcu_nocb_lock() to first
      attempt a raw_spin_trylock(), and to atomically increment a
      separate ->nocb_lock_contended across a raw_spin_lock().  This new
      ->nocb_lock_contended field is checked in __call_rcu_nocb_wake() when
      interrupts are enabled, with a spin-wait for contending acquisitions
      to complete, thus allowing the kthreads a chance to acquire the lock.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
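      
      A sketch of the contention-avoiding acquisition (not the actual
      rcu_nocb_lock() code; the field names come from the commit text):
      
        static void rcu_nocb_lock_sketch(struct rcu_data *rdp)
        {
                if (raw_spin_trylock(&rdp->nocb_lock))
                        return;                         /* Uncontended fast path. */
                atomic_inc(&rdp->nocb_lock_contended);  /* Tell __call_rcu_nocb_wake(). */
                raw_spin_lock(&rdp->nocb_lock);
                atomic_dec(&rdp->nocb_lock_contended);
        }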
    • Paul E. McKenney
    • Paul E. McKenney
    • rcu/nocb: Remove obsolete nocb_q_count and nocb_q_count_lazy fields · c035280f
      Paul E. McKenney authored
      
      This commit removes the obsolete nocb_q_count and nocb_q_count_lazy
      fields, also removing rcu_get_n_cbs_nocb_cpu(), adjusting
      rcu_get_n_cbs_cpu(), and making rcutree_migrate_callbacks() once again
      disable the ->cblist fields of offline CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • Paul E. McKenney
    • rcu/nocb: Use rcu_segcblist for no-CBs CPUs · 5d6742b3
      Paul E. McKenney authored
      
      Currently the RCU callbacks for no-CBs CPUs are queued on a series of
      ad-hoc linked lists, which means that these callbacks cannot benefit
      from "drive-by" grace periods, thus suffering needless delays prior
      to invocation.  In addition, the no-CBs grace-period kthreads first
      wait for callbacks to appear and later wait for a new grace period,
      which means that callbacks appearing during a grace-period wait can
      be delayed.  These delays increase memory footprint, and could even
      result in an out-of-memory condition.
      
      This commit therefore enqueues RCU callbacks from no-CBs CPUs on the
      rcu_segcblist structure that is already used by non-no-CBs CPUs.  It also
      restructures the no-CBs grace-period kthread to be checking for incoming
      callbacks while waiting for grace periods.  Also, instead of waiting
      for a new grace period, it waits for the closest grace period that will
      cause some of the callbacks to be safe to invoke.  All of these changes
      reduce callback latency and thus the number of outstanding callbacks,
      in turn reducing the probability of an out-of-memory condition.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Leave ->cblist enabled for no-CBs CPUs · e83e73f5
      Paul E. McKenney authored
      
      As a first step towards making no-CBs CPUs use the ->cblist, this commit
      leaves the ->cblist enabled for these CPUs.  The main reason to make
      no-CBs CPUs use ->cblist is to take advantage of callback numbering,
      which will reduce the effects of missed grace periods which in turn will
      reduce forward-progress problems for no-CBs CPUs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Provide separate no-CBs grace-period kthreads · 12f54c3a
      Paul E. McKenney authored
      
      Currently, there is one no-CBs rcuo kthread per CPU, and these kthreads
      are divided into groups.  The first rcuo kthread to come online in a
      given group is that group's leader, and the leader both waits for grace
      periods and invokes its CPU's callbacks.  The non-leader rcuo kthreads
      only invoke callbacks.
      
      This works well in the real-time/embedded environments for which it was
      intended because such environments tend not to generate all that many
      callbacks.  However, given huge floods of callbacks, it is possible for
      the leader kthread to be stuck invoking callbacks while its followers
      wait helplessly while their callbacks pile up.  This is a good recipe
      for an OOM, and rcutorture's new callback-flood capability does generate
      such OOMs.
      
      One strategy would be to wait until such OOMs start happening in
      production, but similar OOMs have in fact happened starting in 2018.
      It would therefore be wise to take a more proactive approach.
      
      This commit therefore features per-CPU rcuo kthreads that do nothing
      but invoke callbacks.  Instead of having one of these kthreads act as
      leader, each group has a separate rcuog kthread that handles grace periods
      for its group.  Because these rcuog kthreads do not invoke callbacks,
      callback floods on one CPU no longer block callbacks from reaching the
      rcuc callback-invocation kthreads on other CPUs.
      
      This change does introduce additional kthreads, however:
      
      1.	The number of additional kthreads is about the square root of
      	the number of CPUs, so that a 4096-CPU system would have only
      	about 64 additional kthreads.  Note that recent changes
      	decreased the number of rcuo kthreads by a factor of two
      	(CONFIG_PREEMPT=n) or even three (CONFIG_PREEMPT=y), so
      	this still represents a significant improvement on most systems.
      
      2.	The leading "rcuo" of the rcuog kthreads should allow existing
      	scripting to affinity these additional kthreads as needed, the
      	same as for the rcuop and rcuos kthreads.  (There are no longer
      	any rcuob kthreads.)
      
      3.	A state-machine approach was considered and rejected.  Although
      	this would allow the rcuo kthreads to continue their dual
      	leader/follower roles, it complicates callback invocation
      	and makes it more difficult to consolidate rcuo callback
      	invocation with existing softirq callback invocation.
      
      The introduction of rcuog kthreads should thus be acceptable.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Update comments to prepare for forward-progress work · 6484fe54
      Paul E. McKenney authored
      
      This commit simply rewords comments to prepare for leader nocb kthreads
      doing only grace-period work and callback shuffling.  This will mean
      the addition of replacement kthreads to invoke callbacks.  The "leader"
      and "follower" thus become less meaningful, so the commit changes no-CB
      comments with these strings to "GP" and "CB", respectively.  (Give or
      take the usual grammatical transformations.)
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu/nocb: Rename rcu_data fields to prepare for forward-progress work · 58bf6f77
      Paul E. McKenney authored
      
      This commit simply renames rcu_data fields to prepare for leader
      nocb kthreads doing only grace-period work and callback shuffling.
      This will mean the addition of replacement kthreads to invoke callbacks.
      The "leader" and "follower" thus become less meaningful, so the commit
      changes no-CB fields with these strings to "gp" and "cb", respectively.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
  10. 28 May, 2019 1 commit
  11. 25 May, 2019 2 commits
    • rcu: Use irq_work to get scheduler's attention in clean context · 0864f057
      Paul E. McKenney authored
      
      When rcu_read_unlock_special() is invoked with interrupts disabled, is
      either not in an interrupt handler or is not using RCU_SOFTIRQ, is not
      the first RCU read-side critical section in the chain, and either there
      is an expedited grace period in flight or this is a NO_HZ_FULL kernel,
      the end of the grace period can be unduly delayed.  The reason for this
      is that it is not safe to do wakeups in this situation.
      
      This commit fixes this problem by using the irq_work subsystem to
      force a later interrupt handler in a clean environment.  Because
      set_tsk_need_resched(current) and set_preempt_need_resched() are
      invoked prior to this, the scheduler will force a context switch
      upon return from this interrupt (though perhaps at the end of any
      interrupted preempt-disable or BH-disable region of code), which will
      invoke rcu_note_context_switch() (again in a clean environment), which
      will in turn give RCU the chance to report the deferred quiescent state.
      
      Of course, by then this task might be within another RCU read-side
      critical section.  But that will be detected at that time and reporting
      will be further deferred to the outermost rcu_read_unlock().  See
      rcu_preempt_need_deferred_qs() and rcu_preempt_deferred_qs() for more
      details on the checking.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
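      
      A sketch of the deferred-QS path (a hypothetical wrapper around the
      irq_work API; the handler intentionally does nothing because the
      interrupt itself provides the clean environment):
      
        static void rcu_defer_qs_handler_sketch(struct irq_work *iwp)
        {
                /* Empty: returning from this interrupt triggers the reschedule. */
        }
      
        static DEFINE_PER_CPU(struct irq_work, defer_qs_iw);
      
        static void defer_qs_via_irq_work_sketch(void)
        {
                /* The init would normally be done once at boot time. */
                init_irq_work(this_cpu_ptr(&defer_qs_iw), rcu_defer_qs_handler_sketch);
                set_tsk_need_resched(current);
                set_preempt_need_resched();
                irq_work_queue(this_cpu_ptr(&defer_qs_iw));
        }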
    • rcu: Enable elimination of Tree-RCU softirq processing · 48d07c04
      Sebastian Andrzej Siewior authored
      
      Some workloads need to change kthread priority for RCU core processing
      without affecting other softirq work.  This commit therefore introduces
      the rcutree.use_softirq kernel boot parameter, which moves the RCU core
      work from softirq to a per-CPU SCHED_OTHER kthread named rcuc.  Use of
      the SCHED_OTHER approach avoids the scalability problems that appeared
      with the earlier attempt to move RCU core processing from softirq
      to kthreads.  That said, kernels built with RCU_BOOST=y will run the
      rcuc kthreads at the RCU-boosting priority.
      
      Note that rcutree.use_softirq=0 must be specified to move RCU core
      processing to the rcuc kthreads: rcutree.use_softirq=1 is the default.
      Reported-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      [ paulmck: Adjust for invoke_rcu_callbacks() only ever being invoked
        from RCU core processing, in contrast to softirq->rcuc transition
        in old mainline RCU priority boosting. ]
      [ paulmck: Avoid wakeups when scheduler might have invoked rcu_read_unlock()
        while holding rq or pi locks, also possibly fixing a pre-existing latent
        bug involving raise_softirq()-induced wakeups. ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
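      
      For example, booting with "rcutree.use_softirq=0" on the kernel command
      line moves RCU core processing to the per-CPU rcuc kthreads, while
      omitting the parameter (or specifying "rcutree.use_softirq=1") retains
      the default softirq processing.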
  12. 26 Mar, 2019 7 commits
    • rcu: Move forward-progress checkers into tree_stall.h · b51bcbbf
      Paul E. McKenney authored
      
      This commit further consolidates stall-warning functionality by moving
      forward-progress checkers into kernel/rcu/tree_stall.h, updating a
      comment or two while in the area.  More specifically, this commit moves
      show_rcu_gp_kthreads(), rcu_check_gp_start_stall(), rcu_fwd_progress_check(),
      sysrq_rcu, sysrq_show_rcu(), sysrq_rcudump_op, and rcu_sysrq_init() from
      kernel/rcu/tree.c to kernel/rcu/tree_stall.h.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Move irq-disabled stall-warning checking to tree_stall.h · 7ac1907c
      Paul E. McKenney authored
      
      The rcu_iw_handler() function's sole purpose in life is to indicate
      whether a stalled CPU had interrupts disabled, so it belongs in
      kernel/rcu/tree_stall.h.  This commit therefore makes that move,
      clarifying its header comment while in the area.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Organize functions in tree_stall.h · e23344c2
      Paul E. McKenney authored
      
      This commit does only code movement, removal of now-unneeded forward
      declarations, and addition of comments.  It organizes the functions
      that implement RCU CPU stall warnings for normal grace periods into
      three categories:
      
      1.	Control of RCU CPU stall warnings, including computing timeouts.
      
      2.	Interaction of stall warnings with grace periods.
      
      3.	Actual printing of the RCU CPU stall-warning messages.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Move FAST_NO_HZ stall-warning code to tree_stall.h · 59b73a27
      Paul E. McKenney authored
      
      This commit further consolidates the stall-warning code by moving
      print_cpu_stall_info() and its helper functions along with
      zero_cpu_stall_ticks() to kernel/rcu/tree_stall.h.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Inline RCU stall-warning info helper functions · 40e69ac7
      Paul E. McKenney authored
      
      The print_cpu_stall_info_begin() and print_cpu_stall_info_end() functions print a
      single character each onto the console, and are a holdover from a time
      when RCU CPU stall warning messages could be abbreviated using a long-gone
      Kconfig option.  This commit therefore adds these single characters to
      already-printed strings in the calling functions, and then eliminates
      both print_cpu_stall_info_begin() and print_cpu_stall_info_end().
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Inline RCU task stall-warning helper functions · 21d0d79a
      Paul E. McKenney authored
      
      The rcu_print_detail_task_stall(), rcu_print_task_stall_begin(), and
      rcu_print_task_stall_end() functions were defined to allow long-gone
      Kconfig options to provide an abbreviated RCU CPU stall warning printout.
      This commit saves a few lines of code by inlining them into their sole
      callers.
      
      While in the area, a useless call of rcu_print_detail_task_stall_rnp()
      on the root rcu_node structure was eliminated.  If there is only one
      rcu_node structure, its tasks get printed twice, but if there are more,
      the root rcu_node structure is guaranteed to have an empty list of blocked
      tasks, hence the uselessness.  (Long ago, root rcu_node structures with
      non-empty ->blkd_tasks lists could happen, but no longer.)
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>
    • rcu: Move RCU CPU stall-warning code out of tree.c · 32255d51
      Paul E. McKenney authored
      
      This commit completes the process of consolidating the code for RCU CPU
      stall warnings for normal grace periods by moving the remaining such
      code from kernel/rcu/tree.c to kernel/rcu/tree_stall.h.
      Signed-off-by: Paul E. McKenney <paulmck@linux.ibm.com>