1. 30 Sep, 2022 1 commit
  2. 29 Sep, 2022 2 commits
  3. 23 Sep, 2022 1 commit
    • arm64: uaccess: simplify uaccess_mask_ptr() · 2305b809
      Mark Rutland authored
      We introduced uaccess pointer masking for arm64 in commit:
      
        4d8efc2d ("arm64: Use pointer masking to limit uaccess speculation")
      
      ... which was intended to prevent speculative uaccesses to kernel memory on
      CPUs where access permissions were not respected under speculation.
      
      At the time, the uaccess primitives were occasionally used to access
      kernel memory, with the maximum permitted address held in
      thread_info::addr_limit. Consequently, the address masking needed to
      take this dynamic limit into account.
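
      Conceptually, the old helper therefore had to consume the untagged
      pointer and compare it against that dynamic limit. A rough C sketch of
      the old scheme (illustrative only; the real helper used flag-setting
      inline asm plus a csdb() speculation barrier so that the selection
      could not be mispredicted, and untagged_addr()/TASK_SIZE_MAX come from
      the kernel headers):

        static inline void __user *old_mask_ptr(const void __user *ptr)
        {
                /* Select the pointer when it is within the limit,
                 * otherwise NULL; untagged_addr() strips the TBI tag
                 * bits so tagged user pointers still pass the check. */
                if ((unsigned long)untagged_addr(ptr) <= TASK_SIZE_MAX - 1)
                        return (void __user *)ptr;
                return NULL;
        }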
      
      Subsequently the uaccess primitives were reworked such that they are
      only used for user memory, and as of commit:
      
        3d2403fd ("arm64: uaccess: remove set_fs()")
      
      ... the address limit was made a compile-time constant, but the logic
      was otherwise unchanged.
      
      Regardless of the configured VA size or whether TBI is in use, the
      address space can be divided into three ranges:
      
      * The TTBR0 VA range, for which any valid pointer has bit 55 *clear*,
        and any non-tag bits [63:56] must match bit 55 (i.e. must be clear).
      
      * The TTBR1 VA range, for which any valid pointer has bit 55 *set*, and
        any non-tag bits [63:56] must match bit 55 (i.e. must be set).
      
      * The gap between the TTBR0 and TTBR1 ranges, where bit 55 may be set or
        clear, but any access will result in a fault.
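
      For example, with 48-bit VAs and TBI disabled the split is as follows
      (other configurations move the boundaries, but the bit 55 property
      holds for any VA size):

        TTBR0: 0x0000_0000_0000_0000 .. 0x0000_FFFF_FFFF_FFFF  (bit 55 clear)
        gap:   0x0001_0000_0000_0000 .. 0xFFFE_FFFF_FFFF_FFFF  (any access faults)
        TTBR1: 0xFFFF_0000_0000_0000 .. 0xFFFF_FFFF_FFFF_FFFF  (bit 55 set)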
      
      As the uaccess primitives are now only used for user memory in the TTBR0
      VA range, we can prevent generation of TTBR1 addresses by clearing bit
      55, which will either result in a TTBR0 address or a faulting address
      between the TTBR VA ranges.
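
      With that, the mask collapses to a single bit-clear. A minimal C
      sketch, modelled on the kernel's __uaccess_mask_ptr() helper
      (illustrative rather than verbatim source):

        static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
        {
                /* Clearing bit 55 yields either a TTBR0 address or an
                 * address in the faulting gap between the TTBR ranges;
                 * it can never produce a TTBR1 (kernel) address, even
                 * under speculation. BIT() comes from <linux/bits.h>. */
                return (void __user *)((unsigned long)ptr & ~BIT(55));
        }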
      
      This is beneficial for code generation as:
      
      * We no longer clobber the condition codes.
      
      * We no longer burn a register on (TASK_SIZE_MAX - 1).
      
      * We no longer need to consume the untagged pointer.
      
      When building a defconfig v6.0-rc3 with GCC 12.1.0, this change makes
      the resulting Image 64KiB smaller.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20220922151053.3520750-1-mark.rutland@arm.com
      [catalin.marinas@arm.com: remove csdb() as the bit clearing is unconditional]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 22 Sep, 2022 3 commits
  5. 16 Sep, 2022 1 commit
  6. 09 Sep, 2022 2 commits
  7. 06 Sep, 2022 1 commit
    • arm64: compat: Implement misalignment fixups for multiword loads · 3fc24ef3
      Ard Biesheuvel authored
      The 32-bit ARM kernel implements fixups on behalf of user space when
      using LDM/STM or LDRD/STRD instructions on addresses that are not 32-bit
      aligned. This is not something that is supported by the architecture,
      but was done anyway to increase compatibility with user space software,
      which mostly targeted x86 at the time and did not care about aligned
      accesses.
      
      This feature is one of the remaining impediments to being able to switch
      to 64-bit kernels on 64-bit capable hardware running 32-bit user space,
      so let's implement it for the arm64 compat layer as well.
      
      Note that the intent is to implement the exact same handling of
      misaligned multi-word loads and stores as the 32-bit kernel does,
      including what appears to be missing support for user space programs
      that rely on SETEND to switch to a different byte order and back. Also,
      like the 32-bit ARM version, we rely on the faulting address reported by
      the CPU to infer the memory address, instead of decoding the instruction
      fully to obtain this information.
      
      This implementation is taken from the 32-bit ARM tree, with all pieces
      removed that deal with instructions other than LDRD/STRD and LDM/STM, or
      that deal with alignment exceptions taken in kernel mode.
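
      As a rough illustration of the technique (hypothetical helper name and
      shape; the real handler additionally decodes the LDM/STM register list
      and writes results back into the saved user registers), a misaligned
      32-bit word can be assembled from byte accesses, which have no
      alignment requirement:

        #include <linux/uaccess.h>

        /* Hypothetical sketch: emulate one 32-bit load at an unaligned
         * user address using byte accesses. */
        static int emulate_unaligned_load32(unsigned long addr, u32 *val)
        {
                u8 b[4];

                if (copy_from_user(b, (const void __user *)addr, sizeof(b)))
                        return -EFAULT;

                /* Little-endian reassembly; a SETEND (big-endian) user
                 * would need the byte-swapped variant, which the 32-bit
                 * kernel does not appear to handle either. */
                *val = b[0] | (b[1] << 8) | (b[2] << 16) | ((u32)b[3] << 24);
                return 0;
        }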
      
      Cc: debian-arm@lists.debian.org
      Cc: Vagrant Cascadian <vagrant@debian.org>
      Cc: Riku Voipio <riku.voipio@iki.fi>
      Cc: Steve McIntyre <steve@einval.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Link: https://lore.kernel.org/r/20220701135322.3025321-1-ardb@kernel.org
      [catalin.marinas@arm.com: change the option to 'default n']
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  8. 28 Aug, 2022 25 commits
  9. 27 Aug, 2022 4 commits