Commit 415d8324 authored by Hector Martin, committed by Linus Torvalds

locking/atomic: Make test_and_*_bit() ordered on failure

These operations are documented as always ordered in
include/asm-generic/bitops/instrumented-atomic.h, and producer-consumer
type use cases where one side needs to ensure a flag is left pending
after some shared data was updated rely on this ordering, even in the
failure case.

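As a concrete illustration of that pattern, here is a minimal user-space C11
sketch (the names data, flags, PENDING, produce() and consume() are invented
for the example; this is not the workqueue code itself):

  #include <stdatomic.h>
  #include <stdbool.h>

  #define PENDING (1UL << 0)

  static _Atomic unsigned long data;   /* shared payload */
  static _Atomic unsigned long flags;  /* the PENDING bit lives here */

  /* Producer: publish the payload, then try to set PENDING.  Even when
   * the bit is already set (the "failure" case), the RMW must stay fully
   * ordered so the consumer that clears PENDING also observes the new
   * payload. */
  static void produce(unsigned long value)
  {
      atomic_store_explicit(&data, value, memory_order_relaxed);
      if (!(atomic_fetch_or(&flags, PENDING) & PENDING)) {
          /* bit was clear: this is where the consumer would be kicked */
      }
  }

  /* Consumer: clear PENDING with an ordered RMW, then read the payload. */
  static bool consume(unsigned long *out)
  {
      if (!(atomic_fetch_and(&flags, ~PENDING) & PENDING))
          return false;   /* nothing was pending */
      *out = atomic_load_explicit(&data, memory_order_relaxed);
      return true;
  }
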
This is the case with the workqueue code, which currently suffers from a
reproducible ordering violation on Apple M1 platforms (which are
notoriously out-of-order) that ends up causing the TTY layer to fail to
deliver data to userspace properly under the right conditions.  This
change fixes that bug.

Change the documentation to restrict the "no order on failure" story to
the _lock() variant (for which it makes sense), and remove the
early-exit from the generic implementation, which is what causes the
missing barrier semantics in that case.  Without the early exit, the
remaining atomic op is fully ordered (including on ARM64 LSE, as of
recent versions of the architecture spec).
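
To make the ordering difference concrete, here is a hedged user-space C11
sketch of the two shapes (test_and_set_early_exit() and test_and_set_ordered()
are made-up names; the kernel helpers are arch_test_and_set_bit() and friends):

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Early-exit shape (what the generic helper used to do): a plain load
   * followed by a bail-out gives no ordering at all when the bit is
   * already set. */
  static bool test_and_set_early_exit(_Atomic unsigned long *p, unsigned long mask)
  {
      if (atomic_load_explicit(p, memory_order_relaxed) & mask)
          return true;    /* "failure": no barrier at all here */
      return atomic_fetch_or(p, mask) & mask;
  }

  /* Always-RMW shape (what this change switches to): the seq_cst fetch_or
   * runs unconditionally, so the operation is fully ordered whether or
   * not the bit was already set. */
  static bool test_and_set_ordered(_Atomic unsigned long *p, unsigned long mask)
  {
      return atomic_fetch_or(p, mask) & mask;
  }
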
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Fixes: e986a0d6 ("locking/atomics, asm-generic/bitops/atomic.h: Rewrite using atomic_*() APIs")
Fixes: 61e02392 ("locking/atomic/bitops: Document and clarify ordering semantics for failed test_and_{}_bit()")
Signed-off-by: Hector Martin <marcan@marcan.st>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 568035b0
Documentation/atomic_bitops.txt
@@ -59,7 +59,7 @@ Like with atomic_t, the rule of thumb is:
  - RMW operations that have a return value are fully ordered.
 
  - RMW operations that are conditional are unordered on FAILURE,
-   otherwise the above rules apply. In the case of test_and_{}_bit() operations,
+   otherwise the above rules apply. In the case of test_and_set_bit_lock(),
    if the bit in memory is unchanged by the operation then it is deemed to have
    failed.
 

include/asm-generic/bitops/atomic.h
@@ -39,9 +39,6 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (READ_ONCE(*p) & mask)
-		return 1;
-
 	old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }
@@ -53,9 +50,6 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 	unsigned long mask = BIT_MASK(nr);
 
 	p += BIT_WORD(nr);
-	if (!(READ_ONCE(*p) & mask))
-		return 0;
-
 	old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
 	return !!(old & mask);
 }