Commit 564cbc87 authored by Mark Rutland, committed by Ingo Molnar

locking/atomics, selftests/powerpc: Convert ACCESS_ONCE() to READ_ONCE()/WRITE_ONCE()

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't currently harmful.
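
As a reminder of the intended usage, here is a minimal, simplified sketch of
the two styles. The local definitions below are illustrative only (they mirror
the volatile-cast trick used by the selftest header changed in this patch),
not the kernel's real <linux/compiler.h> implementation:

    #define READ_ONCE(x)      (*(volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, v)  (*(volatile typeof(x) *)&(x) = (v))

    static unsigned long shared;

    unsigned long bump(void)
    {
            /* Old style: one macro covers both directions of access. */
            /* v = ACCESS_ONCE(shared); ACCESS_ONCE(shared) = v + 1;   */

            /* New style: the direction of each access is explicit. */
            unsigned long v = READ_ONCE(shared);

            WRITE_ONCE(shared, v + 1);
            return v;
    }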

However, for some features it is necessary to instrument reads and
writes separately, which is not possible with ACCESS_ONCE(). This
distinction is critical to correct operation.
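
A hypothetical sketch of why the distinction matters: a checker that wants to
observe loads and stores separately needs a direction-specific place to hook.
The instrument_read()/instrument_write() calls below are made-up stand-ins for
such hooks, not an existing kernel API; the macros merely show how
{READ,WRITE}_ONCE() could be wired up to them:

    #define READ_ONCE(x) ({                                     \
            typeof(x) __val = *(volatile typeof(x) *)&(x);      \
            instrument_read((void *)&(x), sizeof(x));           \
            __val;                                              \
    })

    #define WRITE_ONCE(x, v) do {                               \
            instrument_write((void *)&(x), sizeof(x));          \
            *(volatile typeof(x) *)&(x) = (v);                  \
    } while (0)

With ACCESS_ONCE(), the same expression can legally appear on either side of
an assignment (e.g. ACCESS_ONCE(a) = ACCESS_ONCE(b)), so there is no single
point at which a read hook or a write hook could be attached.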

The bulk of the kernel code can be transformed via Coccinelle to use
{READ,WRITE}_ONCE(), though this only modifies users of ACCESS_ONCE(),
and not the implementation itself. As such, it has the potential to
break homebrew ACCESS_ONCE() macros seen in some user code in the kernel
tree (e.g. the virtio code, as fixed in commit ea9156fb).
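
To make the failure mode concrete, a hypothetical standalone helper (similar
in spirit to the selftest header touched below) might carry:

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    s1 = ACCESS_ONCE(sequence);

A tree-wide conversion of the users alone would turn that call site into
READ_ONCE(sequence), which has no definition in such standalone code and would
no longer build; renaming the local macro ahead of time avoids that.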

To avoid fragility if/when that transformation occurs, and to align with
the preferred usage of {READ,WRITE}_ONCE(), this patch updates the DSCR
selftest code to use READ_ONCE() rather than ACCESS_ONCE(). There should
be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-11-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 94bbc9c1
@@ -39,7 +39,7 @@
 #define rmb()  asm volatile("lwsync":::"memory")
 #define wmb()  asm volatile("lwsync":::"memory")
-#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
+#define READ_ONCE(x) (*(volatile typeof(x) *)&(x))
 /* Prilvilege state DSCR access */
 inline unsigned long get_dscr(void)
...
@@ -27,7 +27,7 @@ static void *do_test(void *in)
 	unsigned long d, cur_dscr, cur_dscr_usr;
 	unsigned long s1, s2;
-	s1 = ACCESS_ONCE(sequence);
+	s1 = READ_ONCE(sequence);
 	if (s1 & 1)
 		continue;
 	rmb();
...