Commit ee202601 authored by Linus Torvalds

Merge tag 'lkmm.2024.05.10a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull LKMM documentation updates from Paul McKenney:
 "This upgrades LKMM documentation, perhaps most notably adding a number
  of litmus tests illustrating cmpxchg() ordering properties.

  TL;DR: Failing cmpxchg() operations provide no ordering"

* tag 'lkmm.2024.05.10a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  Documentation/litmus-tests: Make cmpxchg() tests safe for klitmus
  Documentation/atomic_t: Emphasize that failed atomic operations give no ordering
  Documentation/litmus-tests: Demonstrate unordered failing cmpxchg
  Documentation/litmus-tests: Add locking tests to README
parents 2e57d1d6 2ba5b413
@@ -171,14 +171,14 @@ The rule of thumb:
  - RMW operations that are conditional are unordered on FAILURE,
    otherwise the above rules apply.

-Except of course when an operation has an explicit ordering like:
+Except of course when a successful operation has an explicit ordering like:

  {}_relaxed: unordered
  {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
  {}_release: the W of the RMW (or atomic_set) is a RELEASE

 Where 'unordered' is against other memory locations. Address dependencies are
-not defeated.
+not defeated. Conditional operations are still unordered on FAILURE.

 Fully ordered primitives are ordered against everything prior and everything
 subsequent. Therefore a fully ordered primitive is like having an smp_mb()
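To make the failure-case rule concrete, here is a small illustrative kernel-C sketch. It is not taken from the patch: the names flag, data, publish() and try_consume() are invented, and the shape deliberately mirrors the new cmpxchg-fail-ordered-2.litmus test shown further down. The point is that a failed cmpxchg() by itself orders nothing, so the losing CPU needs smp_mb__after_atomic() before it looks at the published data.

#include <linux/atomic.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

static atomic_t flag = ATOMIC_INIT(0);	/* invented for this sketch */
static int data;

/* CPU 0: store the data, then try to claim the flag. */
static void publish(int value)
{
	WRITE_ONCE(data, value);
	/*
	 * A successful cmpxchg() is fully ordered, so the store to
	 * data above cannot be reordered past the flag update.
	 */
	atomic_cmpxchg(&flag, 0, 1);
}

/* CPU 1: also tries to claim the flag. */
static int try_consume(void)
{
	if (atomic_cmpxchg(&flag, 0, 1) == 0)
		return -1;	/* we won the race, nothing published yet */

	/*
	 * Our cmpxchg() failed, and a failed conditional RMW is
	 * unordered, so this barrier is what makes the read below
	 * observe CPU 0's store to data.
	 */
	smp_mb__after_atomic();
	return READ_ONCE(data);
}
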
@@ -21,6 +21,51 @@ Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
 	Test that atomic_set() cannot break the atomicity of atomic RMWs.
 	NOTE: Require herd7 7.56 or later which supports "(void)expr".

+cmpxchg-fail-ordered-1.litmus
+	Demonstrate that a failing cmpxchg() operation acts as a full barrier
+	when followed by smp_mb__after_atomic().
+
+cmpxchg-fail-ordered-2.litmus
+	Demonstrate that a failing cmpxchg() operation acts as an acquire
+	operation when followed by smp_mb__after_atomic().
+
+cmpxchg-fail-unordered-1.litmus
+	Demonstrate that a failing cmpxchg() operation does not act as a
+	full barrier.
+
+cmpxchg-fail-unordered-2.litmus
+	Demonstrate that a failing cmpxchg() operation does not act as an
+	acquire operation.
+
+
+locking (/locking directory)
+----------------------------
+
+DCL-broken.litmus
+	Demonstrates that double-checked locking needs more than just
+	the obvious lock acquisitions and releases.
+
+DCL-fixed.litmus
+	Demonstrates corrected double-checked locking that uses
+	smp_store_release() and smp_load_acquire() in addition to the
+	obvious lock acquisitions and releases.
+
+RM-broken.litmus
+	Demonstrates problems with "roach motel" locking, where code is
+	freely moved into lock-based critical sections. This example also
+	shows how to use the "filter" clause to discard executions that
+	would be excluded by other code not modeled in the litmus test.
+	Note also that this "roach motel" optimization is emulated by
+	physically moving P1()'s two reads from x under the lock.
+
+	What is a roach motel? This is from an old advertisement for
+	a cockroach trap, much later featured in one of the "Men in
+	Black" movies. "The roaches check in. They don't check out."
+
+RM-fixed.litmus
+	The counterpart to RM-broken.litmus, showing P0()'s two loads from
+	x safely outside of the critical section.
+
 RCU (/rcu directory)
 --------------------
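The DCL-fixed.litmus entry above models the classic double-checked locking idiom. As a rough, hypothetical kernel-C rendering of that pattern (the names dcl_lock, dcl_ready, dcl_data and dcl_get() are invented here, not taken from the litmus test), the acquire/release pair carries the ordering that, per the README entries, the lock acquisitions and releases alone cannot provide:

#include <linux/spinlock.h>
#include <linux/compiler.h>
#include <asm/barrier.h>

static DEFINE_SPINLOCK(dcl_lock);
static int dcl_ready;	/* nonzero once dcl_data has been initialized */
static int dcl_data;

static int dcl_get(void)
{
	/*
	 * First, lockless check: smp_load_acquire() pairs with the
	 * smp_store_release() below, so seeing dcl_ready != 0 also
	 * guarantees that the initialization of dcl_data is visible.
	 */
	if (!smp_load_acquire(&dcl_ready)) {
		spin_lock(&dcl_lock);
		/*
		 * Second check, under the lock: another CPU may have
		 * finished the initialization while we were waiting,
		 * in which case the lock handoff orders its writes
		 * before our reads.
		 */
		if (!READ_ONCE(dcl_ready)) {
			dcl_data = 42;	/* stand-in for real initialization */
			/* Publish: order the data write before the flag. */
			smp_store_release(&dcl_ready, 1);
		}
		spin_unlock(&dcl_lock);
	}
	return READ_ONCE(dcl_data);
}
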
C cmpxchg-fail-ordered-1

(*
 * Result: Never
 *
 * Demonstrate that a failing cmpxchg() operation will act as a full
 * barrier when followed by smp_mb__after_atomic().
 *)

{}

P0(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*x, 1);
	r1 = cmpxchg(z, 1, 0);
	smp_mb__after_atomic();
	r0 = READ_ONCE(*y);
}

P1(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*y, 1);
	r1 = cmpxchg(z, 1, 0);
	smp_mb__after_atomic();
	r0 = READ_ONCE(*x);
}

locations [0:r1;1:r1]
exists (0:r0=0 /\ 1:r0=0)

C cmpxchg-fail-ordered-2

(*
 * Result: Never
 *
 * Demonstrate use of smp_mb__after_atomic() to make a failing cmpxchg
 * operation have acquire ordering.
 *)

{}

P0(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*x, 1);
	r1 = cmpxchg(y, 0, 1);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = cmpxchg(y, 0, 1);
	smp_mb__after_atomic();
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r1=1 /\ 1:r2=0)

C cmpxchg-fail-unordered-1

(*
 * Result: Sometimes
 *
 * Demonstrate that a failing cmpxchg() operation does not act as a
 * full barrier. (In contrast, a successful cmpxchg() does act as a
 * full barrier.)
 *)

{}

P0(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*x, 1);
	r1 = cmpxchg(z, 1, 0);
	r0 = READ_ONCE(*y);
}

P1(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*y, 1);
	r1 = cmpxchg(z, 1, 0);
	r0 = READ_ONCE(*x);
}

locations [0:r1;1:r1]
exists (0:r0=0 /\ 1:r0=0)

C cmpxchg-fail-unordered-2

(*
 * Result: Sometimes
 *
 * Demonstrate that a failing cmpxchg() operation does not act as either
 * an acquire or a release operation. (In contrast, a successful cmpxchg()
 * does act as both an acquire and a release operation.)
 *)

{}

P0(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*x, 1);
	r1 = cmpxchg(y, 0, 1);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = cmpxchg(y, 0, 1);
	r2 = READ_ONCE(*x);
}

exists (0:r1=0 /\ 1:r1=1 /\ 1:r2=0)
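
As a usage note (not part of the merge itself): litmus tests like these are normally checked with the herd7 tool from the herdtools7 suite against the Linux-kernel memory model in tools/memory-model, or converted into kernel modules with klitmus7 for running on real hardware, which is what the "safe for klitmus" change in this merge is about. Assuming the tests live under Documentation/litmus-tests/atomic/ and the command is run from the top of a kernel tree, an invocation would look something like:

	cd tools/memory-model
	herd7 -conf linux-kernel.cfg ../../Documentation/litmus-tests/atomic/cmpxchg-fail-unordered-2.litmus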