    arm64: io: permit offset addressing
    Currently our IO accessors all use register addressing without offsets,
    but we could safely use offset addressing (without writeback) to
    simplify and optimize the generated code.
    
    To function correctly under a hypervisor which emulates IO accesses, we
    must ensure that any faulting/trapped IO access results in an ESR_ELx
    value with ESR_ELx.ISS.ISV=1 and with the transfer register described
    in ESR_ELx.ISS.SRT. This means that we can only use loads/stores of a
    single general purpose register (or the zero register), and must avoid
    writeback addressing modes. However, we can use immediate offset
    addressing modes, as these still provide ESR_ELx.ISS.ISV=1 and a valid
    ESR_ELx.ISS.SRT when those accesses fault at Stage-2.
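
    For illustration, here's a sketch of the distinction (standalone
    assembly examples for this commit message, not taken from the patch):

    | str xzr, [x0]        // OK: register addressing, ISV=1
    | str xzr, [x0, #8]    // OK: immediate offset, no writeback, ISV=1
    | str xzr, [x0, #8]!   // avoid: pre-index writeback
    | str xzr, [x0], #8    // avoid: post-index writeback
    | stp xzr, xzr, [x0]   // avoid: not a single-register access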
    
    Currently we only use register addressing without offsets. We use the
    "r" constraint to place the address into a register, and manually
    generate the register addressing by surrounding the resulting register
    operand with square brackets, e.g.
    
    | static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
    | {
    |         asm volatile("str %x0, [%1]" : : "rZ" (val), "r" (addr));
    | }
    
    Due to this, sequences of adjacent accesses need to generate addresses
    using separate instructions. For example, the following code:
    
    | void writeq_zero_8_times(void *ptr)
    | {
    |        writeq_relaxed(0, ptr + 8 * 0);
    |        writeq_relaxed(0, ptr + 8 * 1);
    |        writeq_relaxed(0, ptr + 8 * 2);
    |        writeq_relaxed(0, ptr + 8 * 3);
    |        writeq_relaxed(0, ptr + 8 * 4);
    |        writeq_relaxed(0, ptr + 8 * 5);
    |        writeq_relaxed(0, ptr + 8 * 6);
    |        writeq_relaxed(0, ptr + 8 * 7);
    | }
    
    ... is compiled to:
    
    | <writeq_zero_8_times>:
    |     str     xzr, [x0]
    |     add     x1, x0, #0x8
    |     str     xzr, [x1]
    |     add     x1, x0, #0x10
    |     str     xzr, [x1]
    |     add     x1, x0, #0x18
    |     str     xzr, [x1]
    |     add     x1, x0, #0x20
    |     str     xzr, [x1]
    |     add     x1, x0, #0x28
    |     str     xzr, [x1]
    |     add     x1, x0, #0x30
    |     str     xzr, [x1]
    |     add     x0, x0, #0x38
    |     str     xzr, [x0]
    |     ret
    
    As described above, we could safely use immediate offset addressing,
    which would allow the ADDs to be folded into the address generation for
    the STRs, resulting in simpler and smaller generated assembly. We can do
    this by using the "o" constraint to allow the compiler to generate
    offset addressing (without writeback) for a memory operand, e.g.
    
    | static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
    | {
    |         volatile u64 __iomem *ptr = addr;
    |         asm volatile("str %x0, %1" : : "rZ" (val), "o" (*ptr));
    | }
    
    ... which results in the earlier code sequence being compiled to:
    
    | <writeq_zero_8_times>:
    |     str     xzr, [x0]
    |     str     xzr, [x0, #8]
    |     str     xzr, [x0, #16]
    |     str     xzr, [x0, #24]
    |     str     xzr, [x0, #32]
    |     str     xzr, [x0, #40]
    |     str     xzr, [x0, #48]
    |     str     xzr, [x0, #56]
    |     ret
    
    As Will notes at:
    
      https://lore.kernel.org/linux-arm-kernel/20240117160528.GA3398@willie-the-truck/
    
    ... some compilers struggle with a plain "o" constraint, so it's
    preferable to use "Qo", where the additional "Q" constraint permits
    using non-offset register addressing.
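
    Combining the two, __raw_writeq takes the following shape (the earlier
    example with "o" replaced by "Qo"):

    | static __always_inline void __raw_writeq(u64 val, volatile void __iomem *addr)
    | {
    |         volatile u64 __iomem *ptr = addr;
    |         asm volatile("str %x0, %1" : : "rZ" (val), "Qo" (*ptr));
    | }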
    
    This patch modifies our IO write accessors to use "Qo" constraints,
    resulting in the better code generation described above. The IO read
    accessors are left as-is because ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE
    requires that non-offset register addressing is used, as the LDAR
    instruction does not support offset addressing.
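
    For example, a read accessor using LDAR can only take plain register
    addressing (a simplified sketch; the kernel's actual read accessors
    select LDAR via the workaround machinery rather than unconditionally):

    | static __always_inline u64 __raw_readq(const volatile void __iomem *addr)
    | {
    |         u64 val;
    |         /* "ldar %0, [%1, #8]" would not assemble: no offset forms */
    |         asm volatile("ldar %0, [%1]" : "=r" (val) : "r" (addr));
    |         return val;
    | }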
    
    When compiling v6.8-rc1 defconfig with GCC 13.2.0, this saves ~4KiB of
    text:
    
    | [mark@lakrids:~/src/linux]% ls -al vmlinux-*
    | -rwxr-xr-x 1 mark mark 153960576 Jan 23 12:01 vmlinux-after
    | -rwxr-xr-x 1 mark mark 153862192 Jan 23 11:57 vmlinux-before
    |
    | [mark@lakrids:~/src/linux]% size vmlinux-before vmlinux-after
    |     text     data    bss      dec     hex filename
    | 26708921 16690350 622736 44022007 29fb8f7 vmlinux-before
    | 26704761 16690414 622736 44017911 29fa8f7 vmlinux-after
    
    ... though due to internal alignment of sections, this has no impact on
    the size of the resulting Image:
    
    | [mark@lakrids:~/src/linux]% ls -al Image-*
    | -rw-r--r-- 1 mark mark 43590144 Jan 23 12:01 Image-after
    | -rw-r--r-- 1 mark mark 43590144 Jan 23 11:57 Image-before
    
    Aside from the better code generation, there should be no functional
    change as a result of this patch. I have lightly tested it, including
    booting under KVM (where some devices, such as the PL011 UART, are
    emulated).
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Jason Gunthorpe <jgg@nvidia.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20240124111259.874975-1-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>