  1. 11 Dec, 2018 2 commits
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Will Deacon authored
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched, meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value, rather than inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
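      
      As a rough illustration of the endianness hazard (a stand-alone
      user-space sketch, not the kernel code), a 32-bit load from the base of
      a 64-bit field returns the low word on a little-endian machine but the
      high word on a big-endian one:
      
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        
        int main(void)
        {
                /* 64-bit "preempt" value: count in the low word, flag in the high word */
                uint64_t preempt = (uint64_t)1 << 32;   /* need_resched set, count == 0 */
                uint32_t first_word;
        
                /* rough analogue of a 32-bit load from TSK_TI_PREEMPT */
                memcpy(&first_word, &preempt, sizeof(first_word));
        
                /* prints 0 on little-endian, 1 on big-endian */
                printf("32-bit load sees %u\n", first_word);
                return 0;
        }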
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kexec_file: include linux/vmalloc.h · 732291c4
      Arnd Bergmann authored
      This is needed for compilation in some configurations that don't
      include it implicitly:
      
      arch/arm64/kernel/machine_kexec_file.c: In function 'arch_kimage_file_post_load_cleanup':
      arch/arm64/kernel/machine_kexec_file.c:37:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]
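      
      The fix is the one-line include the subject describes (sketched here;
      the surrounding include list is elided):
      
        /* arch/arm64/kernel/machine_kexec_file.c */
        #include <linux/vmalloc.h>      /* declares vfree() */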
      
      Fixes: 52b2a8af ("arm64: kexec_file: load initrd and device-tree")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 10 Dec, 2018 33 commits
  3. 07 Dec, 2018 5 commits
    • arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint · 42305099
      Will Deacon authored
      The "L" AArch64 machine constraint, which we use for the "old" value in
      an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit
      logical instruction. However, for cmpxchg() operations on types smaller
      than 64 bits, this constraint can result in an invalid instruction which
      is correctly rejected by GAS, such as EOR W1, W1, #0xffffffff.
      
      Whilst we could special-case the constraint based on the cmpxchg size,
      it's far easier to change the constraint to "K" and put up with using
      a register for large 64-bit immediates. For out-of-line LL/SC atomics,
      this is all moot anyway.
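      
      A stand-alone sketch of the constraint difference (not the kernel's
      cmpxchg macro itself): with "L", the compiler may pass a value such as
      0xffffffff straight through as an immediate and emit
      "eor w0, w0, #0xffffffff", which GAS rejects; with "K", anything not
      encodable in a 32-bit logical instruction falls back to the "r"
      alternative and lands in a register.
      
        /* 32-bit exclusive-OR with an immediate-or-register operand */
        static inline unsigned int eor32(unsigned int val, unsigned int mask)
        {
                /*
                 * "K": immediate encodable in a 32-bit logical instruction.
                 * 0xff stays an immediate; 0xffffffff is forced into a register.
                 */
                asm("eor %w0, %w0, %w1" : "+r" (val) : "Kr" (mask));
                return val;
        }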
      Reported-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics · 959bf2fd
      Will Deacon authored
      Our percpu code is a bit of an inconsistent mess:
      
        * It rolls its own xchg(), but reuses cmpxchg_local()
        * It uses various different flavours of preempt_{enable,disable}()
        * It returns values even for the non-returning RmW operations
        * It makes no use of LSE atomics outside of the cmpxchg() ops
        * There are individual macros for different sizes of access, but these
          are all funneled through a switch statement rather than dispatched
          directly to the relevant case
      
      This patch rewrites the per-cpu operations to address these shortcomings.
      Whilst the new code is a lot cleaner, the big advantage is that we can
      use the non-returning ST- atomic instructions when we have LSE.
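      
      A rough sketch of the pay-off (illustrative only, not the kernel's
      macros): with LSE, a non-returning per-cpu add can be a single STADD on
      the already-computed per-cpu address, with no load/store-exclusive loop
      and no value coming back to the CPU:
      
        /* needs LSE, e.g. -march=armv8.1-a; "pcpu_ptr" is the per-cpu address */
        static inline void percpu_add_4_sketch(unsigned int *pcpu_ptr,
                                               unsigned int val)
        {
                asm volatile("stadd %w[val], %[ptr]"
                             : [ptr] "+Q" (*pcpu_ptr)
                             : [val] "r" (val));
        }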
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Avoid masking "old" for LSE cmpxchg() implementation · b4f9209b
      Will Deacon authored
      The CAS instructions implicitly access only the relevant bits of the "old"
      argument, so there is no need for explicit masking via type-casting as
      there is in the LL/SC implementation.
      
      Move the casting into the LL/SC code and remove it altogether for the LSE
      implementation.
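      
      Illustrative sketch only (not the kernel macro): for a byte-sized
      cmpxchg, CASB compares just the bottom byte of the "old" register, so
      the cast can live solely in the LL/SC path, which does a full-register
      EOR against the value returned by LDXRB:
      
        /* needs LSE; upper bits of "old" are ignored by the byte-wide compare */
        static inline unsigned char cmpxchg8_lse_sketch(unsigned char *ptr,
                                                        unsigned long old,
                                                        unsigned char new)
        {
                asm volatile("casb %w[old], %w[new], %[v]"
                             : [old] "+r" (old), [v] "+Q" (*ptr)
                             : [new] "r" (new));
                return old;     /* byte read from memory, zero-extended by casb */
        }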
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Avoid redundant type conversions in xchg() and cmpxchg() · 5ef3fe4c
      Will Deacon authored
      Our atomic instructions (either LSE atomics or LDXR/STXR sequences)
      natively support byte, half-word, word and double-word memory accesses,
      so there is no need to mask the data register prior to being stored.
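      
      As a minimal illustration (again not the kernel code), a byte-sized
      exchange only ever stores bits [7:0] of the data register, so masking
      "val" with a (u8) cast beforehand changes nothing:
      
        static inline unsigned char xchg8_llsc_sketch(unsigned char *ptr,
                                                      unsigned long val)
        {
                unsigned long ret, tmp;
        
                asm volatile(
                "1:     ldxrb   %w[ret], %[v]\n"          /* one byte, zero-extended */
                "       stxrb   %w[tmp], %w[val], %[v]\n" /* stores only bits [7:0] of val */
                "       cbnz    %w[tmp], 1b\n"
                : [ret] "=&r" (ret), [tmp] "=&r" (tmp), [v] "+Q" (*ptr)
                : [val] "r" (val));
        
                return ret;
        }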
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kexec_file: forbid kdump via kexec_file_load() · 394135c1
      James Morse authored
      Now that kexec_walk_memblock() can do the crash-kernel placement itself,
      architectures that don't support kdump via kexec_file_load() need to
      forbid it explicitly.
      
      We don't support this on arm64 until the kernel can add the elfcorehdr
      and usable-memory-range fields to the DT. Without these, the crash
      kernel overwrites the previous kernel's memory during startup.
      
      Add a check to refuse crash image loading.
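      
      A minimal sketch of such a check, assuming it sits at the top of the
      arm64 image loader's load hook (the function name, placement and error
      value here are assumptions, not taken from the log above):
      
        #include <linux/err.h>
        #include <linux/errno.h>
        #include <linux/kexec.h>
        
        static void *image_load_sketch(struct kimage *image, char *kernel,
                                       unsigned long kernel_len)
        {
                /* kdump needs elfcorehdr/usable-memory-range in the DT first */
                if (image->type == KEXEC_TYPE_CRASH)
                        return ERR_PTR(-EOPNOTSUPP);
        
                /* ... normal Image loading would continue here ... */
                return NULL;
        }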
      Reviewed-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>