Commit 25269871 authored by Frederic Weisbecker, committed by Ingo Molnar

irq_work: Fix irq_work_claim() memory ordering

When irq_work_claim() finds the IRQ_WORK_PENDING flag already set, we just
return and don't raise a new IPI. We expect the destination to see
and handle our latest updates thanks to the pairing atomic_xchg()
in irq_work_run_list().

But cmpxchg() doesn't guarantee a full memory barrier upon failure. So
it's possible that the destination misses our latest updates.

Use atomic_fetch_or() instead: it is unconditionally fully ordered,
does exactly what we want here, and simplifies the code.
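For illustration only (not part of the commit), here is a minimal userspace
C11 analogy of the two claim schemes. The function names claim_cmpxchg() and
claim_fetch_or() are hypothetical; the flag encoding mirrors the kernel's,
where CLAIMED is PENDING plus a BUSY bit, and the relaxed failure ordering
models the kernel rule that a failed cmpxchg() implies no barrier:

#include <stdatomic.h>
#include <stdbool.h>

#define IRQ_WORK_PENDING	(1 << 0)
#define IRQ_WORK_BUSY		(1 << 1)
#define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)

/*
 * Old scheme (sketch): claim via a compare-exchange loop.  A failed
 * compare-exchange is only a load (memory_order_relaxed on failure),
 * so when we bail out because PENDING is already set, nothing orders
 * our earlier stores to the work payload against the CPU that will
 * run the work.
 */
static bool claim_cmpxchg(atomic_int *flags)
{
	int oflags = atomic_load(flags) & ~IRQ_WORK_PENDING;

	for (;;) {
		if (atomic_compare_exchange_weak_explicit(flags, &oflags,
				oflags | IRQ_WORK_CLAIMED,
				memory_order_seq_cst,
				memory_order_relaxed))
			return true;		/* we claimed the work */
		if (oflags & IRQ_WORK_PENDING)
			return false;		/* lost the race; no barrier here */
		/* oflags was refreshed by the failed compare-exchange; retry */
	}
}

/*
 * New scheme (sketch): atomic_fetch_or() is a read-modify-write that
 * is fully ordered (seq_cst by default) whether or not PENDING was
 * already set, so the "already pending" return is just as strongly
 * ordered as a successful claim.
 */
static bool claim_fetch_or(atomic_int *flags)
{
	int oflags = atomic_fetch_or(flags, IRQ_WORK_CLAIMED);

	return !(oflags & IRQ_WORK_PENDING);
}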
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191108160858.31665-3-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 153bedba
@@ -29,24 +29,16 @@ static DEFINE_PER_CPU(struct llist_head, lazy_list);
  */
 static bool irq_work_claim(struct irq_work *work)
 {
-	int flags, oflags, nflags;
+	int oflags;
 
+	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
 	/*
-	 * Start with our best wish as a premise but only trust any
-	 * flag value after cmpxchg() result.
+	 * If the work is already pending, no need to raise the IPI.
+	 * The pairing atomic_xchg() in irq_work_run() makes sure
+	 * everything we did before is visible.
 	 */
-	flags = atomic_read(&work->flags) & ~IRQ_WORK_PENDING;
-	for (;;) {
-		nflags = flags | IRQ_WORK_CLAIMED;
-		oflags = atomic_cmpxchg(&work->flags, flags, nflags);
-		if (oflags == flags)
-			break;
-		if (oflags & IRQ_WORK_PENDING)
-			return false;
-		flags = oflags;
-		cpu_relax();
-	}
+	if (oflags & IRQ_WORK_PENDING)
+		return false;
 	return true;
 }
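For context on the pairing mentioned in the new comment, here is a hedged
userspace C11 sketch of the consumer side. It is simplified from what
irq_work_run_list() does, and run_work_sketch() is a hypothetical name; the
point is that both sides use fully ordered read-modify-writes on the same
flags word, so the callback is guaranteed to observe every store the
claiming CPU made before irq_work_claim() returned:

#include <stdatomic.h>

#define IRQ_WORK_PENDING	(1 << 0)
#define IRQ_WORK_BUSY		(1 << 1)

/*
 * Consumer-side sketch: before invoking the callback, the running CPU
 * clears PENDING with a fully ordered exchange, leaving BUSY set for
 * the duration of the callback.  The exchange reads the value written
 * by the claimer's fetch_or (or a later one in modification order),
 * which establishes the synchronizes-with edge the commit relies on.
 */
static void run_work_sketch(atomic_int *flags, void (*func)(void))
{
	atomic_exchange(flags, IRQ_WORK_BUSY);	/* pairs with the claim-side fetch_or */
	func();
	atomic_fetch_and(flags, ~IRQ_WORK_BUSY);	/* work can be claimed again */
}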