Commit 39c8275d authored by Mark Rutland, committed by Will Deacon

arm64: uaccess: permit __smp_store_release() to use zero register

Currently the asm constraints for __smp_store_release() require that the
value is placed in a "real" GPR (i.e. one other than [XW]ZR or SP).
This means that for cases such as:

    __smp_store_release(ptr, 0)

... the compiler has to move '0' into a "real" GPR, e.g.

    mov     xN, #0
    stlr    xN, [<addr>]

This is unfortunate, as using the zero register would require fewer
instructions and would free up a "real" GPR for other uses, allowing the
compiler to generate:

    stlr    xzr, [<addr>]

Modify the asm constraints for __smp_store_release() to permit the use of
the zero register for the value. In the 64-bit case the template operand
also changes from "%1" to "%x1": a constant zero matched by the 'Z'
constraint would otherwise be printed as an immediate, whereas the 'x'
operand modifier prints the 64-bit register name, i.e. "xzr" for zero
(the narrower cases already use "%w1", which prints "wzr").

There should be no functional change as a result of this patch.
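
As a standalone illustration (a minimal sketch, not part of the patch;
the helper and caller names are hypothetical), the effect of the new
constraints can be reproduced with a recent aarch64 GCC or Clang:

    #include <stdint.h>

    /* Reduced sketch of the technique: "rZ" accepts either a GPR or the
     * constant zero, and "%x1" prints the 64-bit register name, which
     * becomes "xzr" when the operand is the constant zero.
     */
    static inline void store_release_u64(uint64_t *p, uint64_t v)
    {
        asm volatile ("stlr %x1, %0"
                        : "=Q" (*p)
                        : "rZ" (v)
                        : "memory");
    }

    void clear_flag(uint64_t *p)
    {
        /* With the old "r" constraint this needed a "mov xN, #0";
         * with "rZ" it should compile to a single "stlr xzr, [x0]".
         */
        store_release_u64(p, 0);
    }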
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230314153700.787701-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
parent e5cacb54
@@ -131,25 +131,25 @@ do { \
 	case 1: \
 		asm volatile ("stlrb %w1, %0" \
 				: "=Q" (*__p) \
-				: "r" (*(__u8 *)__u.__c) \
+				: "rZ" (*(__u8 *)__u.__c) \
 				: "memory"); \
 		break; \
 	case 2: \
 		asm volatile ("stlrh %w1, %0" \
 				: "=Q" (*__p) \
-				: "r" (*(__u16 *)__u.__c) \
+				: "rZ" (*(__u16 *)__u.__c) \
 				: "memory"); \
 		break; \
 	case 4: \
 		asm volatile ("stlr %w1, %0" \
 				: "=Q" (*__p) \
-				: "r" (*(__u32 *)__u.__c) \
+				: "rZ" (*(__u32 *)__u.__c) \
 				: "memory"); \
 		break; \
 	case 8: \
-		asm volatile ("stlr %1, %0" \
+		asm volatile ("stlr %x1, %0" \
 				: "=Q" (*__p) \
-				: "r" (*(__u64 *)__u.__c) \
+				: "rZ" (*(__u64 *)__u.__c) \
 				: "memory"); \
 		break; \
 	} \