Commit b2c4623d authored by Paul E. McKenney

rcu: More on deadlock between CPU hotplug and expedited grace periods

Commit dd56af42 (rcu: Eliminate deadlock between CPU hotplug and
expedited grace periods) was incomplete.  Although it did eliminate
deadlocks involving synchronize_sched_expedited()'s acquisition of
cpu_hotplug.lock via get_online_cpus(), it did nothing about the similar
deadlock involving acquisition of this same lock via put_online_cpus().
This deadlock became apparent with testing involving hibernation.

This commit therefore changes put_online_cpus() acquisition of this lock
to be conditional, and increments a new cpu_hotplug.puts_pending field
in case of acquisition failure.  Then cpu_hotplug_begin() checks for this
new field being non-zero, and folds any accumulated counts into
cpu_hotplug.refcount.  (A distilled user-space sketch of this pattern
follows the diff below.)
Reported-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Jiri Kosina <jkosina@suse.cz>
Tested-by: Borislav Petkov <bp@suse.de>
parent f114040e
@@ -64,6 +64,8 @@ static struct {
 	 * an ongoing cpu hotplug operation.
 	 */
 	int refcount;
+	/* And allows lockless put_online_cpus(). */
+	atomic_t puts_pending;
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lockdep_map dep_map;
@@ -113,7 +115,11 @@ void put_online_cpus(void)
 {
 	if (cpu_hotplug.active_writer == current)
 		return;
-	mutex_lock(&cpu_hotplug.lock);
+	if (!mutex_trylock(&cpu_hotplug.lock)) {
+		atomic_inc(&cpu_hotplug.puts_pending);
+		cpuhp_lock_release();
+		return;
+	}
 
 	if (WARN_ON(!cpu_hotplug.refcount))
 		cpu_hotplug.refcount++; /* try to fix things up */
@@ -155,6 +161,12 @@ void cpu_hotplug_begin(void)
 	cpuhp_lock_acquire();
 	for (;;) {
 		mutex_lock(&cpu_hotplug.lock);
+		if (atomic_read(&cpu_hotplug.puts_pending)) {
+			int delta;
+
+			delta = atomic_xchg(&cpu_hotplug.puts_pending, 0);
+			cpu_hotplug.refcount -= delta;
+		}
 		if (likely(!cpu_hotplug.refcount))
 			break;
 		__set_current_state(TASK_UNINTERRUPTIBLE);
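
The trylock-plus-pending-counter pattern above is not kernel-specific.  What
follows is a minimal user-space sketch of the same idea, not code from this
commit: the names (get_readers(), put_readers(), writer_begin()) are
hypothetical, and C11 atomics plus a POSIX mutex and semaphore stand in for
the kernel's mutex_trylock(), atomic_t, and task wakeups.  A reader that
cannot take the lock, because a writer may already hold it while waiting for
that very reader, records its put in an atomic counter instead of blocking;
the writer folds the counter back into refcount each time around its wait
loop.

/* Hypothetical user-space analogue of the pattern in this commit.
 * Build with: cc -pthread sketch.c */
#include <pthread.h>
#include <semaphore.h>
#include <stdatomic.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t writer_wake;        /* posted whenever a put happens */
static int refcount;             /* active readers; protected by lock */
static atomic_int puts_pending;  /* puts that could not take the lock */

static void get_readers(void)
{
	pthread_mutex_lock(&lock);
	refcount++;
	pthread_mutex_unlock(&lock);
}

static void put_readers(void)
{
	/* Never block here: the lock holder may be a writer that is
	 * waiting for this very reader to drop its reference. */
	if (pthread_mutex_trylock(&lock) != 0) {
		atomic_fetch_add(&puts_pending, 1);
		sem_post(&writer_wake);  /* a semaphore post is never lost */
		return;
	}
	refcount--;
	pthread_mutex_unlock(&lock);
	sem_post(&writer_wake);
}

static void writer_begin(void)
{
	for (;;) {
		pthread_mutex_lock(&lock);
		/* Fold deferred puts back into refcount, as
		 * cpu_hotplug_begin() does with puts_pending. */
		refcount -= atomic_exchange(&puts_pending, 0);
		if (refcount == 0)
			return;          /* exclusive: lock stays held */
		pthread_mutex_unlock(&lock);
		sem_wait(&writer_wake);  /* sleep until some put occurs */
	}
}

static void writer_end(void)
{
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	sem_init(&writer_wake, 0, 0);
	get_readers();
	put_readers();   /* uncontended: trylock succeeds, direct decrement */
	writer_begin();  /* refcount is 0, so this returns holding the lock */
	writer_end();
	return 0;
}

A semaphore is used rather than a condition variable because sem_post() from
the lockless path cannot be lost; signaling a condition variable without
holding the mutex could race with the writer's re-check and sleep.  The
kernel gets the equivalent guarantee from __set_current_state() paired with
wake_up_process().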