1. 31 Jul, 2018 2 commits
  2. 30 Jul, 2018 2 commits
  3. 27 Jul, 2018 2 commits
  4. 26 Jul, 2018 3 commits
  5. 24 Jul, 2018 2 commits
  6. 23 Jul, 2018 8 commits
    • rseq/selftests: Add support for arm64 · b9657463
      Will Deacon authored
      Hook up arm64 support to the rseq selftests.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: acpi: fix alignment fault in accessing ACPI tables · 09ffcb0d
      AKASHI Takahiro authored
      This fixes an issue where the crash dump kernel may hang during boot,
      which can happen on any ACPI-based system with "ACPI Reclaim Memory."
      
      (kernel messages after panic kicked off kdump)
      	   (snip...)
      	Bye!
      	   (snip...)
      	ACPI: Core revision 20170728
      	pud=000000002e7d0003, *pmd=000000002e7c0003, *pte=00e8000039710707
      	Internal error: Oops: 96000021 [#1] SMP
      	Modules linked in:
      	CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.14.0-rc6 #1
      	task: ffff000008d05180 task.stack: ffff000008cc0000
      	PC is at acpi_ns_lookup+0x25c/0x3c0
      	LR is at acpi_ds_load1_begin_op+0xa4/0x294
      	   (snip...)
      	Process swapper/0 (pid: 0, stack limit = 0xffff000008cc0000)
      	Call trace:
      	   (snip...)
      	[<ffff0000084a6764>] acpi_ns_lookup+0x25c/0x3c0
      	[<ffff00000849b4f8>] acpi_ds_load1_begin_op+0xa4/0x294
      	[<ffff0000084ad4ac>] acpi_ps_build_named_op+0xc4/0x198
      	[<ffff0000084ad6cc>] acpi_ps_create_op+0x14c/0x270
      	[<ffff0000084acfa8>] acpi_ps_parse_loop+0x188/0x5c8
      	[<ffff0000084ae048>] acpi_ps_parse_aml+0xb0/0x2b8
      	[<ffff0000084a8e10>] acpi_ns_one_complete_parse+0x144/0x184
      	[<ffff0000084a8e98>] acpi_ns_parse_table+0x48/0x68
      	[<ffff0000084a82cc>] acpi_ns_load_table+0x4c/0xdc
      	[<ffff0000084b32f8>] acpi_tb_load_namespace+0xe4/0x264
      	[<ffff000008baf9b4>] acpi_load_tables+0x48/0xc0
      	[<ffff000008badc20>] acpi_early_init+0x9c/0xd0
      	[<ffff000008b70d50>] start_kernel+0x3b4/0x43c
      	Code: b9008fb9 2a000318 36380054 32190318 (b94002c0)
      	---[ end trace c46ed37f9651c58e ]---
      	Kernel panic - not syncing: Fatal exception
      	Rebooting in 10 seconds..
      
      (diagnosis)
      * This fault is a data abort (alignment fault, ESR=0x96000021) taken
        while reading an ACPI table.
      * Initial ACPI tables are normally stored in system RAM and marked as
        "ACPI Reclaim memory" by the firmware.
      * Since commit f56ab9a5 ("efi/arm: Don't mark ACPI reclaim
        memory as MEMBLOCK_NOMAP"), those regions are handled differently:
        they are memblock-reserved, without the NOMAP bit.
      * So they are now excluded from the device tree's "usable-memory-range",
        which kexec-tools determines based on the current view of /proc/iomem.
      * When the crash dump kernel boots, it tries to access ACPI tables by
        mapping them with ioremap(), not ioremap_cache(), in acpi_os_ioremap(),
        since they are no longer part of mapped system RAM.
      * Given that ACPI accessor/helper functions are compiled in without
        unaligned access support (ACPI_MISALIGNMENT_NOT_SUPPORTED),
        any unaligned access to ACPI tables can cause a fatal panic.
      
      With this patch, acpi_os_ioremap() always honors the memory attribute
      information provided by the firmware (EFI); retaining cacheability
      allows the kernel to access ACPI tables safely.
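
      A rough sketch of the resulting behaviour (illustrative only, not the
      exact patch; the real code also has to cope with the case where no EFI
      memory map is available):

      #include <linux/acpi.h>
      #include <linux/efi.h>
      #include <linux/io.h>

      void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
      {
              /*
               * Honour the firmware-provided attributes: if the region is
               * described as write-back capable, keep it cacheable so that
               * ACPICA's (possibly unaligned) accesses do not fault.
               */
              if (efi_enabled(EFI_MEMMAP) &&
                  (efi_mem_attributes(phys) & EFI_MEMORY_WB))
                      return ioremap_cache(phys, size);

              return ioremap(phys, size);
      }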
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by and Tested-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • efi/arm: map UEFI memory map even w/o runtime services enabled · 20d12cf9
      AKASHI Takahiro authored
      Under the current implementation, the UEFI memory map is mapped and made
      available in virtual mappings only if runtime services are enabled.
      But in a later patch, we want to use the UEFI memory map in
      acpi_os_ioremap() to create mappings of ACPI tables using the memory
      attributes described in the UEFI memory map.
      See the following commit:
          arm64: acpi: fix alignment fault in accessing ACPI tables
      
      So, as a first step, arm_enter_runtime_services() is modified, alongside
      Ard's patch [1], so that the UEFI memory map will not be freed even with
      efi=noruntime.
      
      [1] https://marc.info/?l=linux-efi&m=152930773507524&w=2
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • efi/arm: preserve early mapping of UEFI memory map longer for BGRT · 3ea86495
      Ard Biesheuvel authored
      The BGRT code validates the contents of the table against the UEFI
      memory map, and so it expects it to be mapped when the code runs.
      
      On ARM, this is currently not the case, since we tear down the early
      mapping after efi_init() completes, and only create the permanent
      mapping in arm_enable_runtime_services(), which executes as an early
      initcall, but still leaves a window where the UEFI memory map is not
      mapped.
      
      So move the call to efi_memmap_unmap() from efi_init() to
      arm_enable_runtime_services().
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: fold in EFI_MEMMAP attribute check from Ard]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • drivers: acpi: add dependency of EFI for arm64 · 5bcd4408
      AKASHI Takahiro authored
      As Ard suggested, CONFIG_ACPI && !CONFIG_EFI doesn't make sense on arm64,
      and neither does CONFIG_ACPI && CONFIG_CPU_BIG_ENDIAN.

      As CONFIG_EFI already depends on !CONFIG_CPU_BIG_ENDIAN, adding a
      dependency on CONFIG_EFI is enough to avoid these useless configuration
      combinations.
      
      This bug, reported by Will, will be revealed when my patch series,
      "arm64: kexec,kdump: fix boot failures on acpi-only system," is applied
      and the kernel is built under allmodconfig.
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: export memblock_reserve()d regions via /proc/iomem · 50d7ba36
      James Morse authored
      There has been some confusion around what is necessary to prevent kexec
      overwriting important memory regions. memblock: reserve, or nomap?
      Only memblock nomap regions are reported via /proc/iomem; kexec's
      user space doesn't know about memblock_reserve()d regions.
      
      Until commit f56ab9a5 ("efi/arm: Don't mark ACPI reclaim memory
      as MEMBLOCK_NOMAP") the ACPI tables were nomap; now they are reserved,
      and so kexec can overwrite them with the new kernel or initrd.
      But this was always broken, as the UEFI memory map is also reserved
      and not marked as nomap.
      
      Exporting both nomap and reserved memblock types is a nuisance as
      they live in different memblock structures which we can't walk at
      the same time.
      
      Take a second walk over memblock.reserved and add new 'reserved'
      subnodes for the memblock_reserve()d regions that aren't already
      described by the existing code (e.g. kernel code).
      
      We use reserve_region_with_split() to find the gaps in existing named
      regions. This handles the gap between 'kernel code' and 'kernel data'
      which is memblock_reserve()d, but already partially described by
      request_standard_resources(). e.g.:
      | 80000000-dfffffff : System RAM
      |   80080000-80ffffff : Kernel code
      |   81000000-8158ffff : reserved
      |   81590000-8237efff : Kernel data
      |   a0000000-dfffffff : Crash kernel
      | e00f0000-f949ffff : System RAM
      
      reserve_region_with_split() needs kzalloc(), which isn't available when
      request_standard_resources() is called, so use an initcall instead.
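
      A minimal sketch of the idea (not the actual patch, which also has to
      skip regions already described and keeps the existing resource names):

      #include <linux/init.h>
      #include <linux/ioport.h>
      #include <linux/memblock.h>

      static int __init reserve_memblock_reserved_regions(void)
      {
              phys_addr_t start, end;
              u64 i;

              /* Late enough that reserve_region_with_split() can use kzalloc(). */
              for_each_reserved_mem_region(i, &start, &end)
                      reserve_region_with_split(&iomem_resource, start, end - 1,
                                                "reserved");
              return 0;
      }
      arch_initcall(reserve_memblock_reserved_regions);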
      Reported-by: Bhupesh Sharma <bhsharma@redhat.com>
      Reported-by: Tyler Baicar <tbaicar@codeaurora.org>
      Suggested-by: Akashi Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: James Morse <james.morse@arm.com>
      Fixes: d28f6df1 ("arm64/kexec: Add core kexec support")
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      CC: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: build with baremetal linker target instead of Linux when available · c931d34e
      Olof Johansson authored
      Not all toolchains have the baremetal ELF targets; RedHat/Fedora ones in
      particular don't. So probe for whether the target is available and use
      the previous (linux) targets if it isn't.
      Reported-by: Laura Abbott <labbott@redhat.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Paul Kocialkowski <contact@paulk.fr>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: fix possible spectre-v1 write in ptrace_hbp_set_event() · 14d6e289
      Mark Rutland authored
      It's possible for userspace to control idx. Sanitize idx when using it
      as an array index, to inhibit the potential spectre-v1 write gadget.
      
      Found by smatch.
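
      The fix follows the usual pattern for such gadgets; a simplified sketch
      (the real function also takes a note_type and handles the watchpoint
      array, and set_hbp_slot() here is purely illustrative):

      #include <linux/nospec.h>
      #include <linux/perf_event.h>

      static void set_hbp_slot(struct perf_event **slots, unsigned long nr_slots,
                               unsigned long idx, struct perf_event *bp)
      {
              /* idx is user-controlled: clamp it under speculation before the write */
              idx = array_index_nospec(idx, nr_slots);
              slots[idx] = bp;
      }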
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 12 Jul, 2018 21 commits
    • arm64: Drop asmlinkage qualifier from syscall_trace_{enter,exit} · 11527b3e
      Will Deacon authored
      syscall_trace_{enter,exit} are only called from C code, so drop the
      asmlinkage qualifier from their definitions.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: implement syscall wrappers · 4378a7d4
      Mark Rutland authored
      To minimize the risk of userspace-controlled values being used under
      speculation, this patch adds pt_regs based syscall wrappers for arm64,
      which pass the minimum set of required userspace values to syscall
      implementations. For each syscall, a wrapper which takes a pt_regs
      argument is automatically generated, and this extracts the arguments
      before calling the "real" syscall implementation.
      
      Each syscall has three functions generated:
      
      * __do_<compat_>sys_<name> is the "real" syscall implementation, with
        the expected prototype.
      
      * __se_<compat_>sys_<name> is the sign-extension/narrowing wrapper,
        inherited from common code. This takes a series of long parameters,
        casting each to the requisite types required by the "real" syscall
        implementation in __do_<compat_>sys_<name>.
      
        This wrapper *may* not be necessary on arm64 given the AAPCS rules on
        unused register bits, but it seemed safer to keep the wrapper for now.
      
      * __arm64_<compat_>_sys_<name> takes a struct pt_regs pointer, and
        extracts *only* the relevant register values, passing these on to the
        __se_<compat_>sys_<name> wrapper.
      
      The syscall invocation code is updated to handle the calling convention
      required by __arm64_<compat_>_sys_<name>, and passes a single struct
      pt_regs pointer.
      
      The compiler can fold the syscall implementation and its wrappers, such
      that the overhead of this approach is minimized.
      
      Note that we play games with sys_ni_syscall(). It can't be defined with
      SYSCALL_DEFINE0() because we must avoid the possibility of error
      injection. Additionally, there are a couple of locations where we need
      to call it from C code, and we don't (currently) have a
      ksys_ni_syscall().  While it has no wrapper, passing in a redundant
      pt_regs pointer is benign per the AAPCS.
      
      When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for
      sys_ni_syscall(). Since we need to treat it differently for in-kernel
      calls and the syscall tables, the prototype is defined as required.
      
      The wrappers are largely the same as their x86 counterparts, but
      simplified as we don't have a variety of compat calling conventions that
      require separate stubs. Unlike x86, we have some zero-argument compat
      syscalls, and must define COMPAT_SYSCALL_DEFINE0() to ensure that these
      are also given an __arm64_compat_sys_ prefix.
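
      For a hypothetical two-argument syscall (dup2 used purely as an example),
      the generated code is roughly as follows (hand-expanded sketch, not the
      literal macro output):

      #include <linux/linkage.h>
      #include <asm/ptrace.h>

      /* the "real" implementation, with the expected prototype */
      static inline long __do_sys_dup2(unsigned int oldfd, unsigned int newfd)
      {
              /* ... syscall body ... */
              return 0;
      }

      /* sign-extension/narrowing wrapper, casting to the expected types */
      static long __se_sys_dup2(unsigned long oldfd, unsigned long newfd)
      {
              return __do_sys_dup2((unsigned int)oldfd, (unsigned int)newfd);
      }

      /* the only entry the syscall table sees: takes pt_regs, extracts args */
      asmlinkage long __arm64_sys_dup2(const struct pt_regs *regs)
      {
              return __se_sys_dup2(regs->regs[0], regs->regs[1]);
      }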
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: convert compat wrappers to C · 55f84926
      Mark Rutland authored
      In preparation for converting to pt_regs syscall wrappers, convert our
      existing compat wrappers to C. This will allow the pt_regs wrappers to
      be automatically generated, and will allow for the compat register
      manipulation to be folded in with the pt_regs accesses.
      
      To avoid confusion with the upcoming pt_regs wrappers and existing
      compat wrappers provided by core code, the C wrappers are renamed to
      compat_sys_aarch32_<syscall>.
      
      With the assembly wrappers gone, we can get rid of entry32.S and the
      associated boilerplate.
      
      Note that these must call the ksys_* syscall entry points, as the usual
      sys_* entry points will be modified to take a single pt_regs pointer
      argument.
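
      For example, a wrapper that used to merge a 64-bit argument from a pair
      of w registers in assembly becomes plain C along these lines (sketch;
      the lo/hi argument order actually depends on endianness):

      #include <linux/compat.h>
      #include <linux/syscalls.h>

      COMPAT_SYSCALL_DEFINE6(aarch32_pread64, unsigned int, fd, char __user *, buf,
                             size_t, count, u32, __pad, u32, pos_lo, u32, pos_hi)
      {
              return ksys_pread64(fd, buf, count, ((loff_t)pos_hi << 32) | pos_lo);
      }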
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: use SYSCALL_DEFINE6() for mmap · d3516c90
      Mark Rutland authored
      We don't currently annotate our mmap implementation as a syscall, as we
      need to do to use pt_regs syscall wrappers.
      
      Let's mark it as a real syscall.
      
      There should be no functional change as a result of this patch.
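
      The annotated version is roughly (sketch):

      #include <linux/mm.h>
      #include <linux/syscalls.h>

      SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
                      unsigned long, prot, unsigned long, flags,
                      unsigned long, fd, unsigned long, off)
      {
              if (offset_in_page(off))
                      return -EINVAL;

              return ksys_mmap_pgoff(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
      }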
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: use {COMPAT,}SYSCALL_DEFINE0 for sigreturn · bf4ce5cc
      Mark Rutland authored
      We don't currently annotate our various sigreturn functions as syscalls,
      as we need to do to use pt_regs syscall wrappers.
      
      Let's mark them as real syscalls.
      
      For compat_sys_sigreturn and compat_sys_rt_sigreturn, this changes the
      return type from int to long, matching the prototypes in sys32.c.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: remove in-kernel call to sys_personality() · 3f7deccb
      Mark Rutland authored
      With pt_regs syscall wrappers, the calling convention for
      sys_personality() will change. Use ksys_personality(), which is
      functionally equivalent.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • kernel: add kcompat_sys_{f,}statfs64() · 9b54bf9d
      Mark Rutland authored
      Using this helper allows us to avoid the in-kernel calls to the
      compat_sys_{f,}statfs64() syscalls, which are necessary for parameter
      mangling in arm64's compat handling.
      
      Following the example of ksys_* functions, kcompat_sys_* functions are
      intended to be a drop-in replacement for their compat_sys_*
      counterparts, with the same calling convention.
      
      This is necessary to enable conversion of arm64's syscall handling to
      use pt_regs wrappers.
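
      The arm64 caller then ends up looking roughly like this (sketch of the
      aarch32 wrapper; the 88-vs-84 byte fixup is the parameter mangling
      referred to above):

      #include <linux/compat.h>
      #include <linux/syscalls.h>

      COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname,
                             compat_size_t, sz, struct compat_statfs64 __user *, buf)
      {
              /* 32-bit ARM userspace packs the structure to 88 bytes; treat as 84 */
              if (sz == 88)
                      sz = 84;

              return kcompat_sys_statfs64(pathname, sz, buf);
      }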
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • kernel: add ksys_personality() · bf1c77b4
      Mark Rutland authored
      Using this helper allows us to avoid the in-kernel call to the
      sys_personality() syscall. The ksys_ prefix denotes that this function
      is meant as a drop-in replacement for the syscall. In particular, it
      uses the same calling convention as sys_personality().
      
      Since ksys_personality is trivial, it is implemented directly in
      <linux/syscalls.h>, as we do for ksys_close() and friends.
      
      This helper is necessary to enable conversion of arm64's syscall
      handling to use pt_regs wrappers.
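
      The helper is essentially the body of sys_personality() made available
      inline (sketch):

      #include <linux/personality.h>
      #include <linux/sched.h>

      static inline long ksys_personality(unsigned int personality)
      {
              unsigned int old = current->personality;

              if (personality != 0xffffffff)
                      set_personality(personality);

              return old;
      }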
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Dave Martin <dave.martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: drop alignment from syscall tables · 80d63bc3
      Mark Rutland authored
      Our syscall tables are aligned to 4096 bytes, which allowed their
      addresses to be generated with a single adrp in entry.S. This has the
      unfortunate property of wasting space in .rodata for the necessary
      padding.
      
      Now that the address is generated by C code, we can rely on the compiler
      to do the right thing, and drop the alignment.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: zero GPRs upon entry from EL0 · baaa7237
      Mark Rutland authored
      We can zero GPRs x0 - x29 upon entry from EL0 to make it harder for
      userspace to control values consumed by speculative gadgets.
      
      We don't blat x30, since this is stashed much later, and we'll blat it
      before invoking C code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: don't reload GPRs after apply_ssbd · 99ed3ed0
      Mark Rutland authored
      Now that all of the syscall logic works on the saved pt_regs, apply_ssbd
      can safely corrupt x0-x3 in the entry paths, and we no longer need to
      restore them. So let's remove the logic doing so.
      
      With that logic gone, we can fold the branch target into the macro, so
      that callers need not deal with this. GAS provides \@, which provides a
      unique value per macro invocation, which we can use to create a unique
      label.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: don't restore GPRs when context tracking · d9be0325
      Mark Rutland authored
      Now that syscalls are invoked with pt_regs, we no longer need to ensure
      that the argument registers are live in the entry assembly, and it's
      fine not to restore them after context_tracking_user_exit() has
      corrupted them.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: convert native/compat syscall entry to C · 3b714275
      Mark Rutland authored
      Now that the syscall invocation logic is in C, we can migrate the rest
      of the syscall entry logic over, so that the entry assembly needn't look
      at the register values at all.
      
      The SVE reset across syscall logic now unconditionally clears TIF_SVE,
      but sve_user_disable() will only write back to CPACR_EL1 when SVE is
      actually enabled.
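
      The SVE handling in the C entry path is roughly (sketch; the helper name
      is illustrative):

      #include <linux/thread_info.h>
      #include <asm/cpufeature.h>
      #include <asm/fpsimd.h>

      static inline void sve_user_discard(void)
      {
              if (!system_supports_sve())
                      return;

              /* TIF_SVE is now cleared unconditionally... */
              clear_thread_flag(TIF_SVE);

              /* ...but this only writes CPACR_EL1 if EL0 access was enabled. */
              sve_user_disable();
      }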
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: convert syscall trace logic to C · f37099b6
      Mark Rutland authored
      Currently syscall tracing is a tricky assembly state machine, which can
      be rather difficult to follow, and even harder to modify. Before we
      start fiddling with it for pt_regs syscalls, let's convert it to C.
      
      This is not intended to have any functional change.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: convert raw syscall invocation to C · 4141c857
      Mark Rutland authored
      As a first step towards invoking syscalls with a pt_regs argument,
      convert the raw syscall invocation logic to C. We end up with a bit more
      register shuffling, but the unified invocation logic means we can unify
      the tracing paths, too.
      
      Previously, assembly had to open-code calls to ni_sys() when the system
      call number was out-of-bounds for the relevant syscall table. This case
      is now handled by invoke_syscall(), and the assembly no longer needs to
      handle this case explicitly. This allows the tracing paths to be
      simplified and unified, as we no longer need the __ni_sys_trace path and
      the __sys_trace_return label.
      
      This only converts the invocation of the syscall. The rest of the
      syscall triage and tracing is left in assembly for now, and will be
      converted in subsequent patches.
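
      The shape of the C replacement is roughly this (sketch; the helper names
      here are assumptions based on the description above):

      static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
                                 unsigned int sc_nr,
                                 const syscall_fn_t syscall_table[])
      {
              long ret;

              if (scno < sc_nr)
                      ret = __invoke_syscall(regs, syscall_table[scno]);
              else
                      ret = do_ni_syscall(regs);      /* out-of-bounds: -ENOSYS path */

              regs->regs[0] = ret;
      }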
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: introduce syscall_fn_t · 27d83e68
      Mark Rutland authored
      In preparation for invoking arbitrary syscalls from C code, let's define
      a type for an arbitrary syscall, matching the parameter passing rules of
      the AAPCS.
      
      There should be no functional change as a result of this patch.
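
      Presumably something along these lines, matching the six argument
      registers permitted by the AAPCS (sketch):

      typedef long (*syscall_fn_t)(unsigned long, unsigned long, unsigned long,
                                   unsigned long, unsigned long, unsigned long);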
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: remove sigreturn wrappers · 3085e164
      Mark Rutland authored
      The arm64 sigreturn* syscall handlers are non-standard. Rather than
      taking a number of user parameters in registers as per the AAPCS,
      they expect the pt_regs as their sole argument.
      
      To make this work, we override the syscall definitions to invoke
      wrappers written in assembly, which mov the SP into x0, and branch to
      their respective C functions.
      
      On other architectures (such as x86), the sigreturn* functions take no
      argument and instead use current_pt_regs() to acquire the user
      registers. This requires less boilerplate code, and allows for other
      features such as interposing C code in this path.
      
      This patch takes the same approach for arm64.
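
      With the assembly wrapper gone, the C entry point fetches the registers
      itself, along these lines (sketch; restore_rt_frame() is a hypothetical
      stand-in for the existing body):

      #include <linux/ptrace.h>
      #include <linux/syscalls.h>

      asmlinkage long sys_rt_sigreturn(void)
      {
              struct pt_regs *regs = current_pt_regs();

              return restore_rt_frame(regs);  /* hypothetical helper */
      }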
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tentatively-reviewed-by: Dave Martin <dave.martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: move sve_user_{enable,disable} to <asm/fpsimd.h> · f9209e26
      Mark Rutland authored
      In subsequent patches, we'll want to make use of sve_user_enable() and
      sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
      <asm/fpsimd.h> where we can make use of them.
      
      To avoid ifdeffery in sequences like:
      
      if (system_supports_sve() && some_condition)
      	sve_user_disable();
      
      ... empty stubs are provided when support for SVE is not enabled. Note
      that system_supports_sve() contains an IS_ENABLED(CONFIG_ARM64_SVE)
      check, so the sve_user_disable() call should be optimized away entirely
      when CONFIG_ARM64_SVE is not selected.
      
      To ensure that this is the case, the stub definitions contain a
      BUILD_BUG(), as we do for other stubs for which calls should always be
      optimized away when the relevant config option is not selected.
      
      At the same time, the include list of <asm/fpsimd.h> is sorted while
      adding <asm/sysreg.h>.
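
      The stubs presumably take this shape in <asm/fpsimd.h> (sketch, showing
      only sve_user_disable()):

      #ifdef CONFIG_ARM64_SVE
      static inline void sve_user_disable(void)
      {
              sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
      }
      #else /* !CONFIG_ARM64_SVE */
      static inline void sve_user_disable(void)
      {
              BUILD_BUG();    /* all calls must be optimised away when SVE is off */
      }
      #endif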
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kill change_cpacr() · 8d370933
      Mark Rutland authored
      Now that we have sysreg_clear_set(), we can use this instead of
      change_cpacr().
      
      Note that the order of the set and clear arguments differs between
      change_cpacr() and sysreg_clear_set(), so these are flipped as part of
      the conversion. Also, sve_user_enable() redundantly clears
      CPACR_EL1_ZEN_EL0EN before setting it; this is removed for clarity.
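
      For reference, the replacement helper is roughly of this shape (sketch;
      note the (clear, set) argument order mentioned above):

      #define sysreg_clear_set(sysreg, clear, set) do {                      \
              u64 __scs_val = read_sysreg(sysreg);                           \
              u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set);           \
              if (__scs_new != __scs_val)                                    \
                      write_sysreg(__scs_new, sysreg);                       \
      } while (0)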
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kill config_sctlr_el1() · 25be597a
      Mark Rutland authored
      Now that we have sysreg_clear_set(), we can consistently use this
      instead of config_sctlr_el1().
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h> · 1c312e84
      Mark Rutland authored
      Currently we assert that the SCTLR_EL{1,2}_{SET,CLEAR} bits are
      self-consistent with an assertion in config_sctlr_el1(). This is a bit
      unusual, since config_sctlr_el1() doesn't make use of these definitions,
      and is far away from the definitions themselves.
      
      We can use the CPP #error directive to have equivalent assertions in
      <asm/sysreg.h>, next to the definitions of the set/clear bits, which is
      a bit clearer and simpler.
      
      At the same time, let's fill in the upper 32 bits for both registers in
      their respective RES0 definitions. This could be a little nicer with
      GENMASK_ULL(63, 32), but this currently lives in <linux/bitops.h>, which
      cannot safely be included from assembly, as <asm/sysreg.h> can.
      
      Note that when the preprocessor evaluates an expression for an #if
      directive, all signed or unsigned values are treated as intmax_t or
      uintmax_t respectively. To avoid ambiguity, we explicitly define the
      mask of all 64 bits.
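
      The resulting checks look roughly like this, sitting right next to the
      SCTLR_EL1_{SET,CLEAR} definitions (sketch):

      /* every bit must be accounted for as set, clear, RES0 or RES1 */
      #if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffffffffffff
      #error "Inconsistent SCTLR_EL1 set/clear bits"
      #endif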
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>