1. 08 Apr, 2020 1 commit
    • arm64: armv8_deprecated: Fix undef_hook mask for thumb setend · fc226601
      Fredrik Strupe authored
      For thumb instructions, call_undef_hook() in traps.c first reads a u16,
      and if the u16 indicates a T32 instruction (u16 >= 0xe800), a second
      u16 is read, which then makes up the lower half-word of a T32
      instruction. For T16 instructions, the second u16 is not read,
      which makes the resulting u32 opcode always have the upper half set to
      0.
      
      However, having the upper half of instr_mask in the undef_hook set to 0
      masks out the upper half of all thumb instructions - both T16 and T32.
      This results in trapped T32 instructions with the lower half-word equal
      to the T16 encoding of setend (b650) being matched, even though the upper
      half-word is not 0000 and thus indicates a T32 opcode.
      
      An example of such a T32 instruction is eaa0b650, which should raise a
      SIGILL since T32 instructions with an eaa prefix are unallocated as per
      Arm ARM, but instead works as a SETEND because the second half-word is set
      to b650.
      
      This patch fixes the issue by extending instr_mask to cover the upper
      half of the u32, which still matches T16 instructions (whose upper half
      is 0) but not T32 instructions.
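      
      As a standalone illustration of the mask arithmetic (a hedged C sketch,
      not the kernel code; the field names mirror struct undef_hook, and bit 3
      is left out of the mask because it is the SETEND E bit, so both the
      b650 and b658 encodings match):
      
        #include <stdint.h>
        #include <stdio.h>
        
        struct undef_hook {
            uint32_t instr_mask;
            uint32_t instr_val;
        };
        
        static int hook_matches(const struct undef_hook *h, uint32_t instr)
        {
            return (instr & h->instr_mask) == h->instr_val;
        }
        
        int main(void)
        {
            struct undef_hook buggy = { .instr_mask = 0x0000fff7, .instr_val = 0x0000b650 };
            struct undef_hook fixed = { .instr_mask = 0xfffffff7, .instr_val = 0x0000b650 };
            uint32_t t16_setend = 0x0000b650; /* genuine T16 SETEND */
            uint32_t t32_opcode = 0xeaa0b650; /* unallocated T32, lower half b650 */
        
            /* buggy mask matches both (1 1); fixed mask matches only T16 (1 0) */
            printf("buggy: %d %d\n", hook_matches(&buggy, t16_setend),
                   hook_matches(&buggy, t32_opcode));
            printf("fixed: %d %d\n", hook_matches(&fixed, t16_setend),
                   hook_matches(&fixed, t32_opcode));
            return 0;
        }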
      
      Fixes: 2d888f48 ("arm64: Emulate SETEND for AArch32 tasks")
      Cc: <stable@vger.kernel.org> # 4.0.x-
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Fredrik Strupe <fredrik@strupe.net>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 01 Apr, 2020 4 commits
    • arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature · e16e65a0
      Ard Biesheuvel authored
      When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
      different permissions (r-x for .text, r-- for .rodata, rw- for .data,
      etc) are rounded up to 2 MiB so they can be mapped more efficiently.
      In particular, it permits the segments to be mapped using level 2
      block entries when using 4k pages, which is expected to result in less
      TLB pressure.
      
      However, the mappings for the bulk of the kernel will use level 2
      entries anyway, and the misaligned fringes are organized such that they
      can take advantage of the contiguous bit, and use far fewer level 3
      entries than would be needed otherwise.
      
      This makes the value of this feature dubious at best, and since it is not
      enabled in defconfig or in the distro configs, it does not appear to be
      in wide use either. So let's just remove it.
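      
      For a rough sense of the arithmetic (a standalone C sketch; with a 4 KiB
      granule the contiguous hint groups 16 adjacent level-3 PTEs into a
      single TLB entry, so the fringes stay cheap even without 2 MiB
      alignment):
      
        #include <stdio.h>
        
        int main(void)
        {
            const unsigned long SZ_4K = 4096, SZ_2M = 2048 * 1024;
            const unsigned long CONT_PTES = 16;  /* 4 KiB granule contig group */
            unsigned long fringe = 1024 * 1024;  /* e.g. a 1 MiB misaligned fringe */
        
            /* a level-2 block maps 2 MiB with a single entry */
            printf("2 MiB block: 1 entry instead of %lu PTEs\n", SZ_2M / SZ_4K);
            /* the fringe needs level-3 PTEs, but few TLB entries */
            printf("fringe: %lu PTEs, ~%lu TLB entries with the contig bit\n",
                   fringe / SZ_4K, fringe / SZ_4K / CONT_PTES);
            return 0;
        }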
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will@kernel.org>
      Acked-by: Laura Abbott <labbott@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Always force a branch protection mode when the compiler has one · b8fdef31
      Mark Brown authored
      Compilers with branch protection support can be configured to enable it
      by default, and it is likely that distributions will do this as part of
      deploying branch protection system wide. As well as the slight overhead
      of some extra NOPs for unused branch protection features, this can cause
      more serious problems when the kernel is providing pointer
      authentication to userspace but is not built for pointer authentication
      itself. In that case our switching of keys for userspace can affect the
      kernel unexpectedly, causing pointer authentication instructions in the
      kernel to corrupt addresses.
      
      To ensure that we get consistent and reliable behaviour always explicitly
      initialise the branch protection mode, ensuring that the kernel is built
      the same way regardless of the compiler defaults.
      
      Fixes: 75031975 ("arm64: add basic pointer authentication support")
      Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Kconfig: ptrauth: Add binutils version check to fix mismatch · 15cd0e67
      Amit Daniel Kachhap authored
      The recent addition of ARM64_PTR_AUTH exposed a mismatch issue with
      binutils. gcc 9.1+ inserts a .note.gnu.property section, but this
      section is only handled properly by binutils versions newer than 2.33.1.
      If older binutils are used, the following warnings are generated:
      
      aarch64-linux-ld: warning: arch/arm64/kernel/vdso/vgettimeofday.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-objdump: warning: arch/arm64/lib/csum.o: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      aarch64-linux-nm: warning: .tmp_vmlinux1: unsupported GNU_PROPERTY_TYPE (5) type: 0xc0000000
      
      This patch enables ARM64_PTR_AUTH only when the gcc and binutils
      versions are compatible with each other. Older gcc versions, which do
      not insert such a section, continue to work as before.
      
      This scenario may not occur with clang, as a recent commit 3b446c7d
      ("arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH") masks out
      binutils versions older than 2.34.
      Reported-by: kbuild test robot <lkp@intel.com>
      Suggested-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com>
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      [catalin.marinas@arm.com: slight adjustment to the comment]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • init/kconfig: Add LD_VERSION Kconfig · 9553d16f
      Amit Daniel Kachhap authored
      This option can be used in Kconfig files to compare the ld version
      and enable/disable incompatible config options if required.
      
      This option is used in the subsequent patch along with GCC_VERSION to
      filter out an incompatible feature.
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 26 Mar, 2020 1 commit
  4. 25 Mar, 2020 5 commits
    • Merge branch 'for-next/kernel-ptrauth' into for-next/core · 44ca0e00
      Catalin Marinas authored
      * for-next/kernel-ptrauth:
        : Return address signing - in-kernel support
        arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
        lkdtm: arm64: test kernel pointer authentication
        arm64: compile the kernel with ptrauth return address signing
        kconfig: Add support for 'as-option'
        arm64: suspend: restore the kernel ptrauth keys
        arm64: __show_regs: strip PAC from lr in printk
        arm64: unwind: strip PAC from kernel addresses
        arm64: mask PAC bits of __builtin_return_address
        arm64: initialize ptrauth keys for kernel booting task
        arm64: initialize and switch ptrauth kernel keys
        arm64: enable ptrauth earlier
        arm64: cpufeature: handle conflicts based on capability
        arm64: cpufeature: Move cpu capability helpers inside C file
        arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
        arm64: install user ptrauth keys at kernel exit time
        arm64: rename ptrauth key structures to be user-specific
        arm64: cpufeature: add pointer auth meta-capabilities
        arm64: cpufeature: Fix meta-capability cpufeature check
    • Merge branch 'for-next/asm-cleanups' into for-next/core · 806dc825
      Catalin Marinas authored
      * for-next/asm-cleanups:
        : Various asm clean-ups (alignment, mov_q vs ldr, .idmap)
        arm64: move kimage_vaddr to .rodata
        arm64: use mov_q instead of literal ldr
    • Merge branch 'for-next/asm-annotations' into for-next/core · 0829a076
      Catalin Marinas authored
      * for-next/asm-annotations:
        : Modernise arm64 assembly annotations
        arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
        arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
        arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
        arm64: vdso32: Convert to modern assembler annotations
        arm64: vdso: Convert to modern assembler annotations
        arm64: sdei: Annotate SDEI entry points using new style annotations
        arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
        arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
        arm64: kvm: Annotate assembly using modern annoations
        arm64: kernel: Convert to modern annotations for assembly data
        arm64: head: Annotate stext and preserve_boot_args as code
        arm64: head.S: Convert to modern annotations for assembly functions
        arm64: ftrace: Modernise annotation of return_to_handler
        arm64: ftrace: Correct annotation of ftrace_caller assembly
        arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
        arm64: entry: Additional annotation conversions for entry.S
        arm64: entry: Annotate ret_from_fork as code
        arm64: entry: Annotate vector table and handlers as code
        arm64: crypto: Modernize names for AES function macros
        arm64: crypto: Modernize some extra assembly annotations
    • Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei',... · da12d273
      Catalin Marinas authored
      Merge branches 'for-next/memory-hotremove', 'for-next/arm_sdei', 'for-next/amu', 'for-next/final-cap-helper', 'for-next/cpu_ops-cleanup', 'for-next/misc' and 'for-next/perf' into for-next/core
      
      * for-next/memory-hotremove:
        : Memory hot-remove support for arm64
        arm64/mm: Enable memory hot remove
        arm64/mm: Hold memory hotplug lock while walking for kernel page table dump
      
      * for-next/arm_sdei:
        : SDEI: fix double locking on return from hibernate and clean-up
        firmware: arm_sdei: clean up sdei_event_create()
        firmware: arm_sdei: Use cpus_read_lock() to avoid races with cpuhp
        firmware: arm_sdei: fix possible double-lock on hibernate error path
        firmware: arm_sdei: fix double-lock on hibernate with shared events
      
      * for-next/amu:
        : ARMv8.4 Activity Monitors support
        clocksource/drivers/arm_arch_timer: validate arch_timer_rate
        arm64: use activity monitors for frequency invariance
        cpufreq: add function to get the hardware max frequency
        Documentation: arm64: document support for the AMU extension
        arm64/kvm: disable access to AMU registers from kvm guests
        arm64: trap to EL1 accesses to AMU counters from EL0
        arm64: add support for the AMU extension v1
      
      * for-next/final-cap-helper:
        : Introduce cpus_have_final_cap_helper(), migrate arm64 KVM to it
        arm64: kvm: hyp: use cpus_have_final_cap()
        arm64: cpufeature: add cpus_have_final_cap()
      
      * for-next/cpu_ops-cleanup:
        : cpu_ops[] access code clean-up
        arm64: Introduce get_cpu_ops() helper function
        arm64: Rename cpu_read_ops() to init_cpu_ops()
        arm64: Declare ACPI parking protocol CPU operation if needed
      
      * for-next/misc:
        : Various fixes and clean-ups
        arm64: define __alloc_zeroed_user_highpage
        arm64/kernel: Simplify __cpu_up() by bailing out early
        arm64: remove redundant blank for '=' operator
        arm64: kexec_file: Fixed code style.
        arm64: add blank after 'if'
        arm64: fix spelling mistake "ca not" -> "cannot"
        arm64: entry: unmask IRQ in el0_sp()
        arm64: efi: add efi-entry.o to targets instead of extra-$(CONFIG_EFI)
        arm64: csum: Optimise IPv6 header checksum
        arch/arm64: fix typo in a comment
        arm64: remove gratuitious/stray .ltorg stanzas
        arm64: Update comment for ASID() macro
        arm64: mm: convert cpu_do_switch_mm() to C
        arm64: fix NUMA Kconfig typos
      
      * for-next/perf:
        : arm64 perf updates
        arm64: perf: Add support for ARMv8.5-PMU 64-bit counters
        KVM: arm64: limit PMU version to PMUv3 for ARMv8.1
        arm64: cpufeature: Extract capped perfmon fields
        arm64: perf: Clean up enable/disable calls
        perf: arm-ccn: Use scnprintf() for robustness
        arm64: perf: Support new DT compatibles
        arm64: perf: Refactor PMU init callbacks
        perf: arm_spe: Remove unnecessary zero check on 'nr_pages'
    • arm64: head: Convert install_el2_stub to SYM_INNER_LABEL · d4abd29d
      Mark Brown authored
      New assembly annotations have recently been introduced to make the way
      we describe symbols in assembly more consistent. The arm64 assembly was
      recently converted to use these, but install_el2_stub was missed.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [catalin.marinas@arm.com: changed to SYM_L_LOCAL]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 24 Mar, 2020 5 commits
  6. 20 Mar, 2020 1 commit
  7. 18 Mar, 2020 17 commits
  8. 17 Mar, 2020 6 commits
    • arm64: perf: Add support for ARMv8.5-PMU 64-bit counters · 8673e02e
      Andrew Murray authored
      At present ARMv8 event counters are limited to 32-bits, though by
      using the CHAIN event it's possible to combine adjacent counters to
      achieve 64-bits. The perf config1:0 bit can be set to use such a
      configuration.
      
      With the introduction of ARMv8.5-PMU support, all event counters can
      now be used as 64-bit counters.
      
      Let's enable 64-bit event counters where support exists. Unless the
      user sets config1:0 we will adjust the counter value such that it
      overflows upon 32-bit overflow. This follows the same behaviour as
      the cycle counter which has always been (and remains) 64-bits.
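      
      A hedged C sketch of the kind of adjustment described (standalone, not
      the driver code: pre-setting the upper 32 bits of the 64-bit counter
      makes the next 32-bit wrap also wrap, and thus overflow, the full
      64-bit counter):
      
        #include <stdint.h>
        #include <stdio.h>
        
        static uint64_t bias_for_32bit_overflow(uint64_t value)
        {
            return value | 0xFFFFFFFF00000000ULL; /* GENMASK(63, 32) */
        }
        
        int main(void)
        {
            uint64_t counter = bias_for_32bit_overflow(0xFFFFFFF0); /* near a 32-bit wrap */
        
            counter += 0x10; /* 16 more events: the 64-bit counter overflows too */
            printf("counter = 0x%llx\n", (unsigned long long)counter); /* 0x0 */
            return 0;
        }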
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [Mark: fix ID field names, compare with 8.5 value]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • KVM: arm64: limit PMU version to PMUv3 for ARMv8.1 · c854188e
      Andrew Murray authored
      We currently expose the PMU version of the host to the guest via
      emulation of the DFR0_EL1 and AA64DFR0_EL1 debug feature registers.
      However, many of the features offered beyond PMUv3 for ARMv8.1 are not
      supported in KVM. Examples include support for the PMMIR registers
      (added in PMUv3 for ARMv8.4) and 64-bit event counters (added in PMUv3
      for ARMv8.5).
      
      Let's trap accesses to the Debug Feature Registers in order to limit the
      PMUVer/PerfMon fields they report to PMUv3 for ARMv8.1, avoiding
      unexpected behaviour.
      
      Both ID_AA64DFR0.PMUVer and ID_DFR0.PerfMon follow the "Alternative ID
      scheme used for the Performance Monitors Extension version", where 0xF
      means an IMPLEMENTATION DEFINED PMU is implemented and values 0x0-0xE
      are treated as an unsigned field (with 0x0 meaning no PMU is present).
      As we don't expect to expose an IMPLEMENTATION DEFINED PMU, and our cap
      is below 0xF, we can treat these fields as unsigned when applying the
      cap.
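      
      A standalone C sketch of the capping step (hedged; illustrative names,
      not the KVM code, and the field layout is the architectural one for
      ID_AA64DFR0_EL1.PMUVer at bits [11:8]):
      
        #include <stdint.h>
        #include <stdio.h>
        
        /* cap the 4-bit PMUVer field of an emulated ID_AA64DFR0_EL1 read */
        static uint64_t cap_pmuver(uint64_t dfr0, uint64_t cap)
        {
            uint64_t ver = (dfr0 >> 8) & 0xf;
        
            if (ver == 0xf)      /* IMP DEF PMU: hide it entirely */
                ver = 0;
            else if (ver > cap)  /* unsigned compare is safe below 0xf */
                ver = cap;
        
            return (dfr0 & ~0xf00ULL) | (ver << 8);
        }
        
        int main(void)
        {
            /* host reports PMUVer 0x5 (PMUv3 for ARMv8.4); guest sees 0x4 */
            printf("0x%llx\n", (unsigned long long)cap_pmuver(0x550, 0x4));
            return 0;
        }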
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [Mark: make field names consistent, use perfmon cap]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: cpufeature: Extract capped perfmon fields · 8e35aa64
      Andrew Murray authored
      When emulating ID registers there is often a need to cap the version
      bits of a feature such that the guest will not use features that the
      host is not aware of. For example, when KVM mediates access to the PMU
      by emulating register accesses.
      
      Let's add a helper that extracts a performance monitors ID field and
      caps the version to a given value.
      
      Fields that identify the version of the Performance Monitors Extension
      do not follow the standard ID scheme, and instead follow the scheme
      described in ARM DDI 0487E.a page D13-2825 "Alternative ID scheme used
      for the Performance Monitors Extension version". The value 0xF means an
      IMPLEMENTATION DEFINED PMU is present, and values 0x0-0xE can be treated
      the same as an unsigned field with 0x0 meaning no PMU is present.
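      
      A standalone C sketch of such a helper (hedged; the in-kernel version is
      cpuid_feature_cap_perfmon_field(), whose exact interface may differ):
      
        #include <stdint.h>
        #include <stdio.h>
        
        static uint64_t cap_perfmon_field(uint64_t reg, unsigned int shift, uint64_t cap)
        {
            uint64_t field = (reg >> shift) & 0xf;
        
            if (field == 0xf)     /* IMP DEF PMU: treat as no PMU present */
                field = 0;
            else if (field > cap) /* 0x0-0xE order like an unsigned field */
                field = cap;
        
            return (reg & ~(0xfULL << shift)) | (field << shift);
        }
        
        int main(void)
        {
            /* PMUVer 0x6 (PMUv3 for ARMv8.5) capped to 0x4 (ARMv8.1): 0x450 */
            printf("0x%llx\n", (unsigned long long)cap_perfmon_field(0x650, 8, 0x4));
            return 0;
        }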
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [Mark: rework to handle perfmon fields]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: perf: Clean up enable/disable calls · 29227d6e
      Robin Murphy authored
      Reading this code bordered on painful, what with all the repetition and
      pointless return values. More fundamentally, dribbling the hardware
      enables and disables in one bit at a time incurs needless system
      register overhead for chained events and on reset. We already use
      bitmask values for the KVM hooks, so consolidate all the register
      accesses to match, and make a reasonable saving in both source and
      object code.
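      
      A hedged before/after sketch of the idea (standalone C; write_pmcntenset()
      stands in for the real system-register write, which is write-1-to-set):
      
        #include <stdint.h>
        
        static void write_pmcntenset(uint32_t mask) { (void)mask; /* sysreg write elided */ }
        
        /* before: one system-register write per counter bit */
        static void enable_counters_one_by_one(uint32_t mask)
        {
            for (int i = 0; i < 32; i++)
                if (mask & (1u << i))
                    write_pmcntenset(1u << i);
        }
        
        /* after: a single write covering every requested counter */
        static void enable_counters(uint32_t mask)
        {
            write_pmcntenset(mask);
        }
        
        int main(void)
        {
            enable_counters_one_by_one(0x8000000f); /* many sysreg accesses */
            enable_counters(0x8000000f);            /* one sysreg access */
            return 0;
        }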
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • perf: arm-ccn: Use scnprintf() for robustness · 06236821
      Takashi Iwai authored
      snprintf() is a hard-to-use function, it's especially difficult to use
      it for concatenating substrings in a buffer with a limited size.
      Since snprintf() returns the would-be output size, not the actual size,
      subsequent snprintf() calls can easily end up writing at the wrong
      position. Although the current code doesn't actually overflow the
      buffer, it's an incorrect usage.
      
      This patch replaces such snprintf() calls with a safer version,
      scnprintf().
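      
      A standalone C sketch of the failure mode (scnprintf() is a kernel
      helper, so plain snprintf() is used here to show the would-be size):
      
        #include <stdio.h>
        
        int main(void)
        {
            char buf[8];
            int off = 0;
        
            /* snprintf() returns the would-be size, so off can run past the buffer */
            off += snprintf(buf + off, sizeof(buf) - off, "%s", "hello ");
            off += snprintf(buf + off, sizeof(buf) - off, "%s", "world");
            printf("off = %d, but the buffer holds only %zu bytes\n", off, sizeof(buf));
        
            /* a further snprintf(buf + off, sizeof(buf) - off, ...) would pass a
             * pointer past the buffer and a huge size_t; scnprintf() instead
             * returns the length actually written, keeping off within bounds */
            return 0;
        }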
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: define __alloc_zeroed_user_highpage · c17a290f
      glider@google.com authored
      When running the kernel with init_on_alloc=1, calling the default
      implementation of __alloc_zeroed_user_highpage() from
      include/linux/highmem.h leads to double-initialization of the allocated
      page (first by the page allocator, then by clear_user_page()).
      Calling alloc_page_vma() with __GFP_ZERO, similarly to e.g. x86, seems
      to be enough to ensure the user page is zeroed only once.
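      
      A hedged sketch of the kind of definition this adds (modelled on the x86
      one; the exact arm64 macro may differ):
      
        /* __GFP_ZERO makes the page allocator hand back an already-zeroed
         * page, so no second clear_user_page() pass is needed */
        #define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \
                alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)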
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>