Commit b9023b91 authored by Balasubramani Vivekanandan, committed by Thomas Gleixner

tick: broadcast-hrtimer: Fix a race in bc_set_next

When a CPU requests broadcasting, before starting the tick broadcast
hrtimer, bc_set_next() checks whether the timer callback (bc_handler) is
active using hrtimer_try_to_cancel(). But hrtimer_try_to_cancel() does not
provide the required synchronization when the callback is active on another
core.

The callback could have already executed tick_handle_oneshot_broadcast()
and could also have returned, but there is still a small time window in
which hrtimer_try_to_cancel() returns -1. In that case bc_set_next() returns
without doing anything, although the next_event of the tick broadcast clock
device has already been set to a timeout value.
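
For reference, hrtimer_try_to_cancel() returns 1 when the timer was queued
and got cancelled, 0 when it was not queued, and -1 when its callback is
currently running and cannot be stopped. Below is a minimal sketch of the
fragile pattern the old code relied on; fragile_rearm() is a made-up name
used only for illustration, not a kernel function.

#include <linux/hrtimer.h>

/* Illustrative only; not the code being patched. */
static void fragile_rearm(struct hrtimer *timer, ktime_t expires)
{
        /* 1: queued timer cancelled, 0: timer was not queued, -1: callback running */
        int ret = hrtimer_try_to_cancel(timer);

        if (ret >= 0)
                hrtimer_start(timer, expires, HRTIMER_MODE_ABS_PINNED_HARD);

        /*
         * ret == -1: nothing is armed here. If the running callback then
         * returns HRTIMER_NORESTART, the requested expiry is silently lost.
         */
}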

In the race condition diagram below, CPU #1 is running the timer callback
and CPU #2 is entering the idle state and therefore calls bc_set_next().

In the worst case, the next_event will contain an expiry time, but the
hrtimer will not be started. This happens when the racing callback returns
HRTIMER_NORESTART. The hrtimer might never recover if all further requests
from CPUs to subscribe to tick broadcast have a timeout greater than the
next_event of the tick broadcast clock device. This leads to cascading
failures which are finally noticed as RCU stall warnings.

Here is a depiction of the race condition:

CPU #1 (Running timer callback)                   CPU #2 (Enter idle
                                                  and subscribe to
                                                  tick broadcast)
---------------------                             ---------------------

__run_hrtimer()                                   tick_broadcast_enter()

  bc_handler()                                      __tick_broadcast_oneshot_control()

    tick_handle_oneshot_broadcast()

      raw_spin_lock(&tick_broadcast_lock);

      dev->next_event = KTIME_MAX;                  //wait for tick_broadcast_lock
      //next_event for tick broadcast clock
      set to KTIME_MAX since no other cores
      subscribed to tick broadcasting

      raw_spin_unlock(&tick_broadcast_lock);

    if (dev->next_event == KTIME_MAX)
      return HRTIMER_NORESTART
    // callback function exits without
       restarting the hrtimer                      //tick_broadcast_lock acquired
                                                   raw_spin_lock(&tick_broadcast_lock);

                                                   tick_broadcast_set_event()

                                                     clockevents_program_event()

                                                       dev->next_event = expires;

                                                       bc_set_next()

                                                         hrtimer_try_to_cancel()
                                                         //returns -1 since the timer
                                                         callback is active. Exits without
                                                         restarting the timer
  cpu_base->running = NULL;

The comment claiming that the hrtimer cannot be armed from within the
callback is wrong. It is fine to start the hrtimer from within the
callback. It is also safe to start the hrtimer from the enter/exit idle
code while the broadcast handler is active, because the enter/exit idle
code and the broadcast handler are synchronized via tick_broadcast_lock.
So there is no need for the existing try-to-cancel logic. All of it can be
removed, which eliminates the race condition as well.
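
As a rough sketch of that locking (the demo_* names below are hypothetical
stand-ins for tick_broadcast_lock and bctimer, not the real identifiers):
both the idle path and the broadcast handler take the same raw spinlock
before touching the timer, so an unconditional hrtimer_start() cannot lose
an expiry request even while the callback is running on another CPU.

#include <linux/hrtimer.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_broadcast_lock); /* stand-in for tick_broadcast_lock */
static struct hrtimer demo_bctimer;              /* stand-in for bctimer, assumed
                                                  * hrtimer_init()ed elsewhere */

/* Stand-in for the idle-entry path that subscribes to tick broadcast. */
static void demo_idle_enter(ktime_t expires)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&demo_broadcast_lock, flags);
        /*
         * Starting the timer here is fine even if its callback is running
         * on another CPU right now: hrtimer_start() (re)queues the timer,
         * and the lock serializes this decision against the handler.
         */
        hrtimer_start(&demo_bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
        raw_spin_unlock_irqrestore(&demo_broadcast_lock, flags);
}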

Fixes: 5d1638ac ("tick: Introduce hrtimer based broadcast")
Originally-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Balasubramani Vivekanandan <balasubramani_vivekanandan@mentor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190926135101.12102-2-balasubramani_vivekanandan@mentor.com
parent da05b5ea
kernel/time/tick-broadcast-hrtimer.c

@@ -42,39 +42,39 @@ static int bc_shutdown(struct clock_event_device *evt)
  */
 static int bc_set_next(ktime_t expires, struct clock_event_device *bc)
 {
-        int bc_moved;
         /*
-         * We try to cancel the timer first. If the callback is on
-         * flight on some other cpu then we let it handle it. If we
-         * were able to cancel the timer nothing can rearm it as we
-         * own broadcast_lock.
+         * This is called either from enter/exit idle code or from the
+         * broadcast handler. In all cases tick_broadcast_lock is held.
          *
-         * However we can also be called from the event handler of
-         * ce_broadcast_hrtimer itself when it expires. We cannot
-         * restart the timer because we are in the callback, but we
-         * can set the expiry time and let the callback return
-         * HRTIMER_RESTART.
+         * hrtimer_cancel() cannot be called here neither from the
+         * broadcast handler nor from the enter/exit idle code. The idle
+         * code can run into the problem described in bc_shutdown() and the
+         * broadcast handler cannot wait for itself to complete for obvious
+         * reasons.
          *
-         * Since we are in the idle loop at this point and because
-         * hrtimer_{start/cancel} functions call into tracing,
-         * calls to these functions must be bound within RCU_NONIDLE.
+         * Each caller tries to arm the hrtimer on its own CPU, but if the
+         * hrtimer callback function is currently running, then
+         * hrtimer_start() cannot move it and the timer stays on the CPU on
+         * which it is assigned at the moment.
+         *
+         * As this can be called from idle code, the hrtimer_start()
+         * invocation has to be wrapped with RCU_NONIDLE() as
+         * hrtimer_start() can call into tracing.
          */
-        RCU_NONIDLE(
-                {
-                        bc_moved = hrtimer_try_to_cancel(&bctimer) >= 0;
-                        if (bc_moved) {
-                                hrtimer_start(&bctimer, expires,
-                                              HRTIMER_MODE_ABS_PINNED_HARD);
-                        }
-                }
-        );
-
-        if (bc_moved) {
-                /* Bind the "device" to the cpu */
-                bc->bound_on = smp_processor_id();
-        } else if (bc->bound_on == smp_processor_id()) {
-                hrtimer_set_expires(&bctimer, expires);
-        }
+        RCU_NONIDLE( {
+                hrtimer_start(&bctimer, expires, HRTIMER_MODE_ABS_PINNED_HARD);
+                /*
+                 * The core tick broadcast mode expects bc->bound_on to be set
+                 * correctly to prevent a CPU which has the broadcast hrtimer
+                 * armed from going deep idle.
+                 *
+                 * As tick_broadcast_lock is held, nothing can change the cpu
+                 * base which was just established in hrtimer_start() above. So
+                 * the below access is safe even without holding the hrtimer
+                 * base lock.
+                 */
+                bc->bound_on = bctimer.base->cpu_base->cpu;
+        } );
+
         return 0;
 }

@@ -100,10 +100,6 @@ static enum hrtimer_restart bc_handler(struct hrtimer *t)
 {
         ce_broadcast_hrtimer.event_handler(&ce_broadcast_hrtimer);

-        if (clockevent_state_oneshot(&ce_broadcast_hrtimer))
-                if (ce_broadcast_hrtimer.next_event != KTIME_MAX)
-                        return HRTIMER_RESTART;
-
         return HRTIMER_NORESTART;
 }