1. 04 Dec, 2020 13 commits
  2. 03 Dec, 2020 1 commit
  3. 02 Dec, 2020 16 commits
    • arm64: uaccess: remove vestigial UAO support · 1517c4fa
      Mark Rutland authored
      Now that arm64 no longer uses UAO, remove the vestigial feature detection
      code and Kconfig text.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1517c4fa
    • arm64: uaccess: remove redundant PAN toggling · 7cf283c7
      Mark Rutland authored
      Some code (e.g. futex) needs to make privileged accesses to userspace
      memory, and uses uaccess_{enable,disable}_privileged() in order to
      permit this. All other uaccess primitives use LDTR/STTR, and never need
      to toggle PAN.
      
      Remove the redundant PAN toggling.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7cf283c7
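
      The distinction above can be pictured with a small standalone model:
      only the privileged path brackets its access with the
      uaccess_{enable,disable}_privileged() step, while the ordinary accessor
      leaves "PAN" alone, mirroring LDTR/STTR-based uaccess. This is an
      illustrative sketch with stand-in toy_* names, not kernel code.

        #include <stdbool.h>
        #include <stdio.h>

        static bool toy_pan_set = true; /* models PSTATE.PAN */

        static void toy_uaccess_enable_privileged(void)  { toy_pan_set = false; }
        static void toy_uaccess_disable_privileged(void) { toy_pan_set = true;  }

        /* Ordinary accessor: PAN is never touched (LDTR/STTR in the kernel). */
        static int toy_get_user(int *dst, const int *user_src)
        {
                *dst = *user_src;
                return 0;
        }

        /* Privileged path (e.g. futex): PAN is toggled around the access. */
        static int toy_futex_atomic_add(int *user_uaddr, int val)
        {
                int old;

                toy_uaccess_enable_privileged();
                old = __atomic_fetch_add(user_uaddr, val, __ATOMIC_SEQ_CST);
                toy_uaccess_disable_privileged();
                return old;
        }

        int main(void)
        {
                int word = 1, out = 0;

                toy_get_user(&out, &word);
                toy_futex_atomic_add(&word, 2);
                printf("out=%d word=%d pan_set=%d\n", out, word, toy_pan_set);
                return 0;
        }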
    • arm64: uaccess: remove addr_limit_user_check() · b5a5a01d
      Mark Rutland authored
      Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove
      the checks and associated thread flag.
      
      To ensure that _TIF_WORK_MASK can be used as an immediate value in an
      AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is
      renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-11-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b5a5a01d
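
      As a side note on the renumbering, here is a minimal, self-contained
      sketch of the property it preserves: a work mask whose set bits form one
      contiguous run can be encoded as an AArch64 logical immediate, so the
      AND in ret_to_user needs no scratch register. The TOY_TIF_* numbers and
      helper below are illustrative stand-ins, not the kernel's definitions.

        #include <stdbool.h>
        #include <stdio.h>

        #define TOY_TIF_SIGPENDING      0
        #define TOY_TIF_NEED_RESCHED    1
        #define TOY_TIF_NOTIFY_RESUME   2
        #define TOY_TIF_FOREIGN_FPSTATE 3
        #define TOY_TIF_UPROBE          4
        #define TOY_TIF_MTE_ASYNC_FAULT 5       /* renumbered to close the gap */

        #define TOY_WORK_MASK                                                  \
                ((1UL << TOY_TIF_SIGPENDING) | (1UL << TOY_TIF_NEED_RESCHED) | \
                 (1UL << TOY_TIF_NOTIFY_RESUME) |                              \
                 (1UL << TOY_TIF_FOREIGN_FPSTATE) |                            \
                 (1UL << TOY_TIF_UPROBE) | (1UL << TOY_TIF_MTE_ASYNC_FAULT))

        /* True when the mask is 2^n - 1, i.e. one contiguous run from bit 0. */
        static bool contiguous_from_bit0(unsigned long mask)
        {
                return mask != 0 && (mask & (mask + 1)) == 0;
        }

        int main(void)
        {
                printf("work mask %#lx, contiguous: %s\n", TOY_WORK_MASK,
                       contiguous_from_bit0(TOY_WORK_MASK) ? "yes" : "no");
                return 0;
        }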
    • arm64: uaccess: remove set_fs() · 3d2403fd
      Mark Rutland authored
      Now that the uaccess primitives don't take addr_limit into account, we
      have no need to manipulate this via set_fs() and get_fs(). Remove
      support for these, along with some infrastructure this renders
      redundant.
      
      We no longer need to flip UAO to access kernel memory under KERNEL_DS,
      and head.S unconditionally clears UAO for all kernel configurations via
      an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
      nor do we need to context-switch it. However, we still need to adjust
      PAN during SDEI entry.
      
      Masking of __user pointers no longer needs to use the dynamic value of
      addr_limit, and can use a constant derived from the maximum possible
      userspace task size. A new TASK_SIZE_MAX constant is introduced for
      this, which is also used by core code. In configurations supporting
      52-bit VAs, this may include a region of unusable VA space above a
      48-bit TTBR0 limit, but never includes any portion of TTBR1.
      
      Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
      KERNEL_DS were inclusive limits, and is converted to a mask by
      subtracting one.
      
      As the SDEI entry code repurposes the otherwise unnecessary
      pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
      context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
      consider factoring that out.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3d2403fd
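
      A minimal sketch of the masking arithmetic described above, assuming a
      toy 48-bit limit: the exclusive limit becomes an inclusive mask by
      subtracting one, and a pointer with any bit set above that range is
      squashed to 0 rather than being allowed to name a kernel address under
      misspeculation. The constants and helper name are stand-ins for this
      example, not the kernel's implementation.

        #include <stdint.h>
        #include <stdio.h>

        #define TOY_TASK_SIZE_MAX  (1UL << 48)              /* exclusive limit */
        #define TOY_USER_PTR_MASK  (TOY_TASK_SIZE_MAX - 1)  /* inclusive limit */

        static uintptr_t toy_mask_user_ptr(uintptr_t ptr)
        {
                /* 1 when the pointer lies entirely below the limit, else 0. */
                uintptr_t ok = (ptr & ~(uintptr_t)TOY_USER_PTR_MASK) == 0;

                return ptr & -ok;       /* keep the pointer, or force it to 0 */
        }

        int main(void)
        {
                unsigned long user_ptr = 0x0000123456789abcUL;
                unsigned long kern_ptr = 0xffff000012345678UL;

                printf("%#lx -> %#lx\n", user_ptr,
                       (unsigned long)toy_mask_user_ptr(user_ptr));
                printf("%#lx -> %#lx\n", kern_ptr,
                       (unsigned long)toy_mask_user_ptr(kern_ptr));
                return 0;
        }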
    • arm64: uaccess: cleanup macro naming · 7b90dc40
      Mark Rutland authored
      Now the uaccess primitives use LDTR/STTR unconditionally, the
      uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a
      redundant argument. Let's remove the redundant argument and rename these
      to user_{ldp,stp,ldst} respectively to clean this up.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-9-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7b90dc40
    • arm64: uaccess: split user/kernel routines · fc703d80
      Mark Rutland authored
      This patch separates arm64's user and kernel memory access primitives
      into distinct routines, adding new __{get,put}_kernel_nofault() helpers
      to access kernel memory, upon which core code builds larger copy
      routines.
      
      The kernel access routines (using LDR/STR) are not affected by PAN (when
      legitimately accessing kernel memory), nor are they affected by UAO.
      Switching to KERNEL_DS may set UAO, but this does not adversely affect
      the kernel access routines.
      
      The user access routines (using LDTR/STTR) are not affected by PAN (when
      legitimately accessing user memory), but are affected by UAO. As these
      are only legitimate to use under USER_DS with UAO clear, this should not
      be problematic.
      
      Routines performing atomics to user memory (futex and deprecated
      instruction emulation) still need to transiently clear PAN, and these
      are left as-is. These are never used on kernel memory.
      
      Subsequent patches will refactor the uaccess helpers to remove redundant
      code, and will also remove the redundant PAN/UAO manipulation.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-8-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fc703d80
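
      A rough, self-contained model of the "core code builds larger copy
      routines on top of a single-unit accessor" structure described above.
      The toy_* names stand in for __get_kernel_nofault() and friends, and
      the stub always succeeds where the kernel would catch faults via an
      exception-table fixup.

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Single-unit access: in the kernel, an LDR wrapped in fault handling. */
        static int toy_get_kernel_nofault_u8(uint8_t *dst, const uint8_t *src)
        {
                *dst = *src;
                return 0;
        }

        /* Larger copy routine built on the single-unit accessor. */
        static long toy_copy_from_kernel_nofault(void *dst, const void *src,
                                                 size_t size)
        {
                uint8_t *d = dst;
                const uint8_t *s = src;

                for (size_t i = 0; i < size; i++)
                        if (toy_get_kernel_nofault_u8(&d[i], &s[i]))
                                return -14;     /* -EFAULT */
                return 0;
        }

        int main(void)
        {
                char src[] = "hello";
                char dst[8] = { 0 };

                toy_copy_from_kernel_nofault(dst, src, sizeof(src));
                printf("%s\n", dst);
                return 0;
        }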
    • arm64: uaccess: refactor __{get,put}_user · f253d827
      Mark Rutland authored
      As a step towards implementing __{get,put}_kernel_nofault(), this patch
      splits most user-memory specific logic out of __{get,put}_user(), with
      the memory access and fault handling in new __{raw_get,put}_mem()
      helpers.
      
      For now the LDR/LDTR patching is left within the *get_mem() helpers, and
      will be removed in a subsequent patch.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-7-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f253d827
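
      A toy sketch of the layering described above, with names borrowed from
      the commit but behaviour heavily simplified: the raw helper only
      performs the access (plus, in the kernel, fault handling), while the
      user-facing wrapper adds the user-specific steps, modelled here as a
      range check standing in for pointer masking and the uaccess window.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define TOY_USER_LIMIT  (1UL << 48)     /* stand-in user/kernel split */

        /* Access only: where the LDR/LDTR and exception fixup would live. */
        static int toy_raw_get_mem(uint32_t *dst, const uint32_t *src)
        {
                memcpy(dst, src, sizeof(*dst));
                return 0;
        }

        /* User-specific wrapper layered on top of the raw helper. */
        static int toy_get_user(uint32_t *dst, const uint32_t *user_src)
        {
                int ret;

                if ((uintptr_t)user_src >= TOY_USER_LIMIT)
                        return -14;             /* -EFAULT */

                /* kernel: open the uaccess window (TTBR0) here */
                ret = toy_raw_get_mem(dst, user_src);
                /* kernel: close the uaccess window here */
                return ret;
        }

        int main(void)
        {
                uint32_t in = 42, out = 0;

                printf("ret=%d out=%u\n", toy_get_user(&out, &in), out);
                return 0;
        }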
    • arm64: uaccess: simplify __copy_user_flushcache() · 9e94fdad
      Mark Rutland authored
      Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and
      doesn't use uaccess_mask_ptr() on the user address. Let's have it call
      raw_copy_from_user(), which is both a simplification and ensures that
      user pointers are masked under speculation.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9e94fdad
    • arm64: uaccess: rename privileged uaccess routines · 923e1e7d
      Mark Rutland authored
      We currently have many uaccess_*{enable,disable}*() variants, which
      subsequent patches will cut down as part of removing set_fs() and
      friends. Once this simplification is made, most uaccess routines will
      only need to ensure that the user page tables are mapped in TTBR0, as is
      currently dealt with by uaccess_ttbr0_{enable,disable}().
      
      The existing uaccess_{enable,disable}() routines ensure that user page
      tables are mapped in TTBR0, and also disable PAN protections, which is
      necessary to be able to use atomics on user memory, but also permits
      unrelated privileged code to access user memory.
      
      As a preparatory step, let's rename uaccess_{enable,disable}() to
      uaccess_{enable,disable}_privileged(), highlighting this caveat and
      discouraging wider misuse. Subsequent patches can reuse the
      uaccess_{enable,disable}() naming for the common case of ensuring the
      user page tables are mapped in TTBR0.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      923e1e7d
    • arm64: sdei: explicitly simulate PAN/UAO entry · 2376e75c
      Mark Rutland authored
      In preparation for removing addr_limit and set_fs() we must decouple the
      SDEI PAN/UAO manipulation from the uaccess code, and explicitly
      reinitialize these as required.
      
      SDEI enters the kernel with a non-architectural exception, and prior to
      the most recent revision of the specification (ARM DEN 0054B), PSTATE
      bits (e.g. PAN, UAO) are not manipulated in the same way as for
      architectural exceptions. Notably, older versions of the spec can be
      read ambiguously as to whether PSTATE bits are inherited unchanged from
      the interrupted context or whether they are generated from scratch, with
      TF-A doing the latter.
      
      We have three cases to consider:
      
      1) The existing TF-A implementation of SDEI will clear PAN and clear UAO
         (along with other bits in PSTATE) when delivering an SDEI exception.
      
      2) In theory, implementations of SDEI prior to revision B could inherit
         PAN and UAO (along with other bits in PSTATE) unchanged from the
         interrupted context. However, in practice such implementations do not
         exist.
      
      3) Going forward, new implementations of SDEI must clear UAO, and
         depending on SCTLR_ELx.SPAN must either inherit or set PAN.
      
      As we can ignore (2) we can assume that upon SDEI entry, UAO is always
      clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN.
      Therefore, we must explicitly initialize PAN, but do not need to do
      anything for UAO.
      
      Considering what we need to do:
      
      * When set_fs() is removed, force_uaccess_begin() will have no HW
        side-effects. As this only clears UAO, which we can assume has already
        been cleared upon entry, this is not a problem. We do not need to add
        code to manipulate UAO explicitly.
      
      * PAN may be cleared upon entry (in case 1 above), so where a kernel is
        built to use PAN and this is supported by all CPUs, the kernel must
        set PAN upon entry to ensure expected behaviour.
      
      * PAN may be inherited from the interrupted context (in case 3 above),
        and so where a kernel is not built to use PAN or where PAN support is
        not uniform across CPUs, the kernel must clear PAN to ensure expected
        behaviour.
      
      This patch reworks the SDEI code accordingly, explicitly setting PAN to
      the expected state in all cases. To cater for the cases where the kernel
      does not use PAN, or where PAN support is not uniform across CPUs, we add a
      new cpu_has_pan() helper which can be used regardless of whether the
      kernel is built to use PAN.
      
      The existing system_uses_ttbr0_pan() is redefined in terms of
      system_uses_hw_pan() both for clarity and as a minor optimization when
      HW PAN is not selected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2376e75c
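
      A toy model of the PAN decision spelled out above. The two predicates
      are stubbed as booleans; in the kernel they correspond to
      system_uses_hw_pan() and cpu_has_pan(), and the printf calls stand in
      for the PSTATE.PAN updates.

        #include <stdbool.h>
        #include <stdio.h>

        /* Flip these to walk through the cases from the commit text. */
        static bool toy_system_uses_hw_pan = true; /* kernel uses PAN everywhere */
        static bool toy_cpu_has_pan = true;        /* PAN implemented at all     */

        static void toy_sdei_pan_fixup(void)
        {
                if (toy_system_uses_hw_pan)
                        printf("set PAN: firmware may have cleared it on entry\n");
                else if (toy_cpu_has_pan)
                        printf("clear PAN: kernel does not use it, but it may be set\n");
                else
                        printf("no PAN on this CPU, nothing to do\n");
        }

        int main(void)
        {
                toy_sdei_pan_fixup();
                return 0;
        }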
    • arm64: sdei: move uaccess logic to arch/arm64/ · a0ccf2ba
      Mark Rutland authored
      The SDEI support code is split across arch/arm64/ and drivers/firmware/,
      largely so that the arch-specific portions live under arch/arm64/ and the
      management logic lives under drivers/firmware/.
      However, exception entry fixups are currently under drivers/firmware.
      
      Let's move the exception entry fixups under arch/arm64/. This
      de-clutters the management logic, and puts all the arch-specific
      portions in one place. Doing this also allows the fixups to be applied
      earlier, so things like PAN and UAO will be in a known good state before
      we run other logic. This will also make subsequent refactoring easier.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a0ccf2ba
    • arm64: head.S: always initialize PSTATE · d87a8e65
      Mark Rutland authored
      As with SCTLR_ELx and other control registers, some PSTATE bits are
      UNKNOWN out-of-reset, and we may not be able to rely on hardware or
      firmware to initialize them to our liking prior to entry to the kernel,
      e.g. in the primary/secondary boot paths and return from idle/suspend.
      
      It would be more robust (and easier to reason about) if we consistently
      initialized PSTATE to a default value, as we do with control registers.
      This will ensure that the kernel is not adversely affected by bits it is
      not aware of, e.g. when support for a feature such as PAN/UAO is
      disabled.
      
      This patch ensures that PSTATE is consistently initialized at boot time
      via an ERET. This is not intended to relax the existing requirements
      (e.g. DAIF bits must still be set prior to entering the kernel). For
      features detected dynamically (which may require system-wide support),
      it is still necessary to subsequently modify PSTATE.
      
      As ERET is not always a Context Synchronization Event, an ISB is placed
      before each exception return to ensure updates to control registers have
      taken effect. This handles the kernel being entered with SCTLR_ELx.EOS
      clear (or any future control bits being in an UNKNOWN state).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d87a8e65
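
      For reference, a small standalone example composing a "known good" EL1
      PSTATE value with all DAIF bits masked and the EL1h stack pointer
      selected, in the spirit of the change above. The PSR_* values follow
      the architectural SPSR layout; the EXAMPLE_* name is just for this
      sketch.

        #include <stdio.h>

        #define PSR_MODE_EL1h   0x00000005UL
        #define PSR_F_BIT       0x00000040UL    /* FIQ masked    */
        #define PSR_I_BIT       0x00000080UL    /* IRQ masked    */
        #define PSR_A_BIT       0x00000100UL    /* SError masked */
        #define PSR_D_BIT       0x00000200UL    /* Debug masked  */

        /* The value written to SPSR_ELx before the boot-time ERET. */
        #define EXAMPLE_INIT_PSTATE_EL1 \
                (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h)

        int main(void)
        {
                printf("default EL1 PSTATE: %#lx\n", EXAMPLE_INIT_PSTATE_EL1);
                return 0;
        }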
    • arm64: head.S: cleanup SCTLR_ELx initialization · 2ffac9e3
      Mark Rutland authored
      Let's make SCTLR_ELx initialization a bit clearer by using meaningful
      names for the initialization values, following the same scheme for
      SCTLR_EL1 and SCTLR_EL2.
      
      These definitions will be used more widely in subsequent patches.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2ffac9e3
    • arm64: head.S: rename el2_setup -> init_kernel_el · ecbb11ab
      Mark Rutland authored
      For a while now el2_setup has performed some basic initialization of EL1
      even when the kernel is booted at EL1, so the name is a little
      misleading. Further, some comments are stale, as with VHE it doesn't drop
      the CPU to EL1.
      
      To clarify things, rename el2_setup to init_kernel_el, and update
      comments to be clearer as to the function's purpose.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-4-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ecbb11ab
    • arm64: add C wrappers for SET_PSTATE_*() · 515d5c8a
      Mark Rutland authored
      To make callsites easier to read, add trivial C wrappers for the
      SET_PSTATE_*() helpers, and convert trivial uses over to these. The new
      wrappers will be used further in subsequent patches.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      515d5c8a
    • arm64: ensure ERET from kthread is illegal · f80d0340
      Mark Rutland authored
      For consistency, all tasks have a pt_regs reserved at the highest
      portion of their task stack. Among other things, this ensures that a
      task's SP is always pointing within its stack rather than pointing
      immediately past the end.
      
      While it is never legitimate to ERET from a kthread, we take pains to
      initialize pt_regs for kthreads as if this were legitimate. As this is
      never legitimate, the effects of an erroneous return are rarely tested.
      
      Let's simplify things by initializing a kthread's pt_regs such that an
      ERET is caught as an illegal exception return, and removing the explicit
      initialization of other exception context. Note that as
      spectre_v4_enable_task_mitigation() only manipulates the PSTATE within
      the unused regs, this is safe to remove.
      
      As user tasks will have their exception context initialized via
      start_thread() or start_compat_thread(), this should only impact cases
      where something has gone very wrong and we'd like that to be clearly
      indicated.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80d0340
  4. 27 Nov, 2020 3 commits
  5. 09 Nov, 2020 7 commits