1. 26 Apr, 2023 1 commit
  2. 20 Apr, 2023 13 commits
    • Merge branch 'for-next/sysreg' into for-next/core · eeb3557c
      Will Deacon authored
      * for-next/sysreg:
        arm64/sysreg: Convert HFGITR_EL2 to automatic generation
        arm64/idreg: Don't disable SME when disabling SVE
        arm64/sysreg: Update ID_AA64PFR1_EL1 for DDI0601 2022-12
        arm64/sysreg: Convert HFG[RW]TR_EL2 to automatic generation
        arm64/sysreg: allow *Enum blocks in SysregFields blocks
    • Merge branch 'for-next/stacktrace' into for-next/core · 9772b7f0
      Will Deacon authored
      * for-next/stacktrace:
        arm64: move PAC masks to <asm/pointer_auth.h>
        arm64: use XPACLRI to strip PAC
        arm64: avoid redundant PAC stripping in __builtin_return_address()
        arm64: stacktrace: always inline core stacktrace functions
        arm64: stacktrace: move dump functions to end of file
        arm64: stacktrace: recover return address for first entry
    • Merge branch 'for-next/perf' into for-next/core · 9651f00e
      Will Deacon authored
      * for-next/perf: (24 commits)
        KVM: arm64: Ensure CPU PMU probes before pKVM host de-privilege
        drivers/perf: hisi: add NULL check for name
        drivers/perf: hisi: Remove redundant initialized of pmu->name
        perf/arm-cmn: Fix port detection for CMN-700
        arm64: pmuv3: dynamically map PERF_COUNT_HW_BRANCH_INSTRUCTIONS
        perf/arm-cmn: Validate cycles events fully
        Revert "ARM: mach-virt: Select PMUv3 driver by default"
        drivers/perf: apple_m1: Add Apple M2 support
        dt-bindings: arm-pmu: Add PMU compatible strings for Apple M2 cores
        perf: arm_cspmu: Fix variable dereference warning
        perf/amlogic: Fix config1/config2 parsing issue
        drivers/perf: Use devm_platform_get_and_ioremap_resource()
        kbuild, drivers/perf: remove MODULE_LICENSE in non-modules
        perf: qcom: Use devm_platform_get_and_ioremap_resource()
        perf: arm: Use devm_platform_get_and_ioremap_resource()
        perf/arm-cmn: Move overlapping wp_combine field
        ARM: mach-virt: Select PMUv3 driver by default
        ARM: perf: Allow the use of the PMUv3 driver on 32bit ARM
        ARM: Make CONFIG_CPU_V7 valid for 32bit ARMv8 implementations
        perf: pmuv3: Change GENMASK to GENMASK_ULL
        ...
    • KVM: arm64: Ensure CPU PMU probes before pKVM host de-privilege · 87727ba2
      Will Deacon authored
      Although pKVM has supported CPU PMU emulation for non-protected guests
      since 722625c6 ("KVM: arm64: Reenable pmu in Protected Mode"), this relies
      on the PMU driver probing before the host has de-privileged so that the
      'kvm_arm_pmu_available' static key can still be enabled by patching the
      hypervisor text.
      
      As it happens, both of these events hang off device_initcall() but the
      PMU consistently won the race until 7755cec6 ("arm64: perf: Move
      PMUv3 driver to drivers/perf"). Since then, the host will fail to boot
      when pKVM is enabled:
      
        | hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
        | kvm [1]: nVHE hyp BUG at: [<ffff8000090366e0>] __kvm_nvhe_handle_host_mem_abort+0x270/0x284!
        | kvm [1]: Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE
        | kvm [1]: Hyp Offset: 0xfffea41fbdf70000
        | Kernel panic - not syncing: HYP panic:
        | PS:a00003c9 PC:0000dbe04b0c66e0 ESR:00000000f2000800
        | FAR:fffffbfffddfcf00 HPFAR:00000000010b0bf0 PAR:0000000000000000
        | VCPU:0000000000000000
        | CPU: 2 PID: 1 Comm: swapper/0 Not tainted 6.3.0-rc7-00083-g0bce6746d154 #1
        | Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
        | Call trace:
        |  dump_backtrace+0xec/0x108
        |  show_stack+0x18/0x2c
        |  dump_stack_lvl+0x50/0x68
        |  dump_stack+0x18/0x24
        |  panic+0x13c/0x33c
        |  nvhe_hyp_panic_handler+0x10c/0x190
        |  aarch64_insn_patch_text_nosync+0x64/0xc8
        |  arch_jump_label_transform+0x4c/0x5c
        |  __jump_label_update+0x84/0xfc
        |  jump_label_update+0x100/0x134
        |  static_key_enable_cpuslocked+0x68/0xac
        |  static_key_enable+0x20/0x34
        |  kvm_host_pmu_init+0x88/0xa4
        |  armpmu_register+0xf0/0xf4
        |  arm_pmu_acpi_probe+0x2ec/0x368
        |  armv8_pmu_driver_init+0x38/0x44
        |  do_one_initcall+0xcc/0x240
      
      Fix the race properly by deferring the de-privilege step to
      device_initcall_sync(). This will also be needed in future when probing
      IOMMU devices and allows us to separate the pKVM de-privilege logic from
      the core hypervisor initialisation path.
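      The ordering guarantee the fix relies on can be sketched in portable C
      (a toy model, not the kernel's initcall machinery): callbacks at the
      same initcall level run in link order, so two plain device_initcall()s
      race, while a *_sync level is guaranteed to run after the entire
      preceding level.

      ```c
      #include <stdio.h>
      #include <string.h>

      typedef void (*initcall_t)(void);

      static char order[64];

      static void pmu_probe(void)        { strcat(order, "pmu "); }
      static void pkvm_deprivilege(void) { strcat(order, "deprivilege "); }

      int main(void)
      {
          /* After the fix: the PMU probe stays at device_initcall() level... */
          initcall_t device_level[]      = { pmu_probe };
          /* ...and de-privilege moves to device_initcall_sync(), which always
           * runs later, regardless of link order within the device level. */
          initcall_t device_sync_level[] = { pkvm_deprivilege };

          for (unsigned i = 0; i < sizeof(device_level)/sizeof(*device_level); i++)
              device_level[i]();
          for (unsigned i = 0; i < sizeof(device_sync_level)/sizeof(*device_sync_level); i++)
              device_sync_level[i]();

          printf("%s\n", order);  /* pmu always precedes deprivilege */
          return 0;
      }
      ```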
      
      Cc: Oliver Upton <oliver.upton@linux.dev>
      Cc: Fuad Tabba <tabba@google.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Fixes: 7755cec6 ("arm64: perf: Move PMUv3 driver to drivers/perf")
      Tested-by: Fuad Tabba <tabba@google.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20230420123356.2708-1-will@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
    • Merge branch 'for-next/mm' into for-next/core · 1bb31cc7
      Will Deacon authored
      * for-next/mm:
        arm64: mm: always map fixmap at page granularity
        arm64: mm: move fixmap code to its own file
        arm64: add FIXADDR_TOT_{START,SIZE}
        Revert "Revert "arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()""
        arm: uaccess: Remove memcpy_page_flushcache()
        mm,kfence: decouple kfence from page granularity mapping judgement
    • Merge branch 'for-next/misc' into for-next/core · 81444b77
      Will Deacon authored
      * for-next/misc:
        arm64: kexec: include reboot.h
        arm64: delete dead code in this_cpu_set_vectors()
        arm64: kernel: Fix kernel warning when nokaslr is passed to commandline
        arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step
        arm64/sme: Fix some comments of ARM SME
        arm64/signal: Alloc tpidr2 sigframe after checking system_supports_tpidr2()
        arm64/signal: Use system_supports_tpidr2() to check TPIDR2
        arm64: compat: Remove defines now in asm-generic
        arm64: kexec: remove unnecessary (void*) conversions
        arm64: armv8_deprecated: remove unnecessary (void*) conversions
        firmware: arm_sdei: Fix sleep from invalid context BUG
    • Merge branch 'for-next/kdump' into for-next/core · f8863bc8
      Will Deacon authored
      * for-next/kdump:
        arm64: kdump: defer the crashkernel reservation for platforms with no DMA memory zones
        arm64: kdump: do not map crashkernel region specifically
        arm64: kdump : take off the protection on crashkernel memory region
    • Merge branch 'for-next/ftrace' into for-next/core · ea88dc92
      Will Deacon authored
      * for-next/ftrace:
        arm64: ftrace: Simplify get_ftrace_plt
        arm64: ftrace: Add direct call support
        ftrace: selftest: remove broken trace_direct_tramp
        ftrace: Make DIRECT_CALLS work WITH_ARGS and !WITH_REGS
        ftrace: Store direct called addresses in their ops
        ftrace: Rename _ftrace_direct_multi APIs to _ftrace_direct APIs
        ftrace: Remove the legacy _ftrace_direct API
        ftrace: Replace uses of _ftrace_direct APIs with _ftrace_direct_multi
        ftrace: Let unregister_ftrace_direct_multi() call ftrace_free_filter()
    • Merge branch 'for-next/cpufeature' into for-next/core · 31eb87cf
      Will Deacon authored
      * for-next/cpufeature:
        arm64/cpufeature: Use helper macro to specify ID register for capabilites
        arm64/cpufeature: Consistently use symbolic constants for min_field_value
        arm64/cpufeature: Pull out helper for CPUID register definitions
    • Merge branch 'for-next/asm' into for-next/core · 0f6563a3
      Will Deacon authored
      * for-next/asm:
        arm64: uaccess: remove unnecessary earlyclobber
        arm64: uaccess: permit put_{user,kernel} to use zero register
        arm64: uaccess: permit __smp_store_release() to use zero register
        arm64: atomics: lse: improve cmpxchg implementation
    • Merge branch 'for-next/acpi' into for-next/core · 67eacd61
      Will Deacon authored
      * for-next/acpi:
        ACPI: AGDI: Improve error reporting for problems during .remove()
    • arm64: kexec: include reboot.h · b7b4ce84
      Simon Horman authored
      Include reboot.h in machine_kexec.c for declaration of
      machine_crash_shutdown.
      
      gcc-12 with W=1 reports:
      
       arch/arm64/kernel/machine_kexec.c:257:6: warning: no previous prototype for 'machine_crash_shutdown' [-Wmissing-prototypes]
         257 | void machine_crash_shutdown(struct pt_regs *regs)
      
      No functional changes intended.
      Compile tested only.
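      A minimal, host-buildable reproduction of this warning class (the
      function name is hypothetical; in the patch the declaration comes from
      reboot.h rather than a local prototype):

      ```c
      #include <stdio.h>

      /* Built with -Wmissing-prototypes, a non-static function defined
       * without a prior declaration triggers the warning. The prototype
       * below stands in for what including the right header provides. */
      void crash_shutdown_demo(void);

      void crash_shutdown_demo(void)
      {
          printf("ok\n");  /* no warning: the declaration above satisfies -Wmissing-prototypes */
      }

      int main(void)
      {
          crash_shutdown_demo();
          return 0;
      }
      ```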
      Signed-off-by: Simon Horman <horms@kernel.org>
      Link: https://lore.kernel.org/r/20230418-arm64-kexec-include-reboot-v1-1-8453fd4fb3fb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: delete dead code in this_cpu_set_vectors() · 460e70e2
      Dan Carpenter authored
      The "slot" variable is an enum, and in this context it is an unsigned
      int, so it can never be negative and the check for negative values is
      dead code. We also never pass invalid data to this function, and if
      something did, this check would be insufficient protection anyway.
      Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/73859c9e-dea0-4764-bf01-7ae694fa2e37@kili.mountain
      Signed-off-by: Will Deacon <will@kernel.org>
  3. 17 Apr, 2023 7 commits
  4. 14 Apr, 2023 3 commits
    • arm64: kernel: Fix kernel warning when nokaslr is passed to commandline · a2a83eb4
      Pavankumar Kondeti authored
      An 'Unknown kernel command line parameters "nokaslr", will be passed to
      user space' message appears in dmesg when nokaslr is passed on the
      kernel command line on ARM64 platforms. This is because the nokaslr
      parameter is handled by the early cpufeature detection infrastructure
      and is never consumed by a kernel param handler. Fix this warning by
      providing a dummy kernel param handler for nokaslr.
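      The shape of such a fix, sketched so it builds and runs outside the
      kernel (the handler name and the early_param() stand-in macro are
      hypothetical; the real patch uses the kernel's early_param()
      registration):

      ```c
      #include <stdio.h>

      /* Host-side stand-ins so this sketch compiles outside the kernel. */
      #define __init
      #define early_param(name, fn) \
          static void __attribute__((constructor)) register_##fn(void) { fn(NULL); }

      /* A no-op handler that claims "nokaslr", so the core parameter parser
       * no longer reports it as unknown; the option is actually consumed
       * earlier by the cpufeature code. */
      static int __init parse_nokaslr(char *unused)
      {
          (void)unused;
          printf("nokaslr handled\n");
          return 0;
      }
      early_param("nokaslr", parse_nokaslr);

      int main(void) { return 0; }
      ```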
      Signed-off-by: Pavankumar Kondeti <quic_pkondeti@quicinc.com>
      Link: https://lore.kernel.org/r/20230412043258.397455-1-quic_pkondeti@quicinc.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • perf/arm-cmn: Fix port detection for CMN-700 · 2ad91e44
      Robin Murphy authored
      When the "extra device ports" configuration was first added, the
      additional mxp_device_port_connect_info registers were added around the
      existing mxp_mesh_port_connect_info registers. What I missed about
      CMN-700 is that it shuffled them around to remove this discontinuity.
      As such, tweak the definitions and factor out a helper for reading these
      registers so we can deal with this discrepancy easily, which does at
      least allow nicely tidying up the callsites. With this we can then also
      do the nice thing and skip accesses completely rather than relying on
      RES0 behaviour where we know the extra registers aren't defined.
      
      Fixes: 23760a01 ("perf/arm-cmn: Add CMN-700 support")
      Reported-by: Jing Zhang <renyu.zj@linux.alibaba.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/71d129241d4d7923cde72a0e5b4c8d2f6084525f.1681295193.git.robin.murphy@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step · af6c0bd5
      Sumit Garg authored
      Currently only the first attempt to single-step has any effect. After
      that all further stepping remains "stuck" at the same program counter
      value.
      
      Per the ARM Architecture Reference Manual (ARM DDI 0487E.a) D2.12,
      PSTATE.SS=1 should be set at each step before transferring the PE to
      the 'Active-not-pending' state. The problem here is that PSTATE.SS=1
      is not set from the second single-step onwards.
      
      After the first single-step, the PE transfers to the 'Inactive' state,
      with PSTATE.SS=0 and MDSCR.SS=1, so PSTATE.SS won't be set to 1
      because kernel_active_single_step() is true. The PE then transfers to
      the 'Active-pending' state on ERET and returns to the debugger via a
      step exception.
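      The broken versus fixed arming logic can be modelled in portable C (a
      toy state machine, heavily simplified from DDI 0487 D2.12; names like
      step_armed are hypothetical stand-ins for kernel_active_single_step()):

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* The PE only retires one instruction and advances the PC when ERET
       * sees PSTATE.SS=1; with MDSCR.SS=1 but PSTATE.SS=0 the step exception
       * is taken immediately, so the PC never moves. */
      struct pe { unsigned long pc; bool pstate_ss; };

      static bool step_armed;  /* stand-in for kernel_active_single_step() */

      static void kernel_enable_single_step(struct pe *pe, bool fixed)
      {
          if (!fixed && step_armed)
              return;           /* old behaviour: don't re-arm PSTATE.SS */
          pe->pstate_ss = true; /* fix: set PSTATE.SS=1 on every step request */
          step_armed = true;
      }

      static void eret_until_step_exception(struct pe *pe)
      {
          if (pe->pstate_ss) {
              pe->pc += 4;            /* Active-not-pending: one insn retires */
              pe->pstate_ss = false;  /* hardware clears SS for the trap */
          }
          /* debugger re-entered here via step exception */
      }

      int main(void)
      {
          struct pe pe = { .pc = 0x1000, .pstate_ss = false };

          /* Broken flow: the second 'ss' is stuck at the same PC. */
          kernel_enable_single_step(&pe, false); eret_until_step_exception(&pe);
          kernel_enable_single_step(&pe, false); eret_until_step_exception(&pe);
          assert(pe.pc == 0x1004);

          /* Fixed flow: each 'ss' advances the PC. */
          pe.pc = 0x1000; pe.pstate_ss = false; step_armed = false;
          kernel_enable_single_step(&pe, true); eret_until_step_exception(&pe);
          kernel_enable_single_step(&pe, true); eret_until_step_exception(&pe);
          assert(pe.pc == 0x1008);

          printf("fixed flow advances\n");
          return 0;
      }
      ```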
      
      Before this patch:
      ==================
      Entering kdb (current=0xffff3376039f0000, pid 1) on processor 0 due to Keyboard Entry
      [0]kdb>
      
      [0]kdb>
      [0]kdb> bp write_sysrq_trigger
      Instruction(i) BP #0 at 0xffffa45c13d09290 (write_sysrq_trigger)
          is enabled   addr at ffffa45c13d09290, hardtype=0 installed=0
      
      [0]kdb> go
      $ echo h > /proc/sysrq-trigger
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to Breakpoint @ 0xffffad651a309290
      [1]kdb> ss
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
      [1]kdb> ss
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
      [1]kdb>
      
      After this patch:
      =================
      Entering kdb (current=0xffff6851c39f0000, pid 1) on processor 0 due to Keyboard Entry
      [0]kdb> bp write_sysrq_trigger
      Instruction(i) BP #0 at 0xffffc02d2dd09290 (write_sysrq_trigger)
          is enabled   addr at ffffc02d2dd09290, hardtype=0 installed=0
      
      [0]kdb> go
      $ echo h > /proc/sysrq-trigger
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to Breakpoint @ 0xffffc02d2dd09290
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09294
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09298
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd0929c
      [1]kdb>
      
      Fixes: 44679a4f ("arm64: KGDB: Add step debugging support")
      Co-developed-by: Wei Li <liwei391@huawei.com>
      Signed-off-by: Wei Li <liwei391@huawei.com>
      Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
      Tested-by: Douglas Anderson <dianders@chromium.org>
      Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Link: https://lore.kernel.org/r/20230202073148.657746-3-sumit.garg@linaro.org
      Signed-off-by: Will Deacon <will@kernel.org>
  5. 13 Apr, 2023 3 commits
    • arm64: move PAC masks to <asm/pointer_auth.h> · de1702f6
      Mark Rutland authored
      Now that we use XPACLRI to strip PACs within the kernel, the
      ptrauth_user_pac_mask() and ptrauth_kernel_pac_mask() definitions no
      longer need to live in <asm/compiler.h>.
      
      Move them to <asm/pointer_auth.h>, and ensure that this header is
      included where they are used.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kristina Martsenko <kristina.martsenko@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230412160134.306148-4-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: use XPACLRI to strip PAC · ca708599
      Mark Rutland authored
      Currently we strip the PAC from pointers using C code, which requires
      generating bitmasks, and conditionally clearing/setting bits depending
      on bit 55. We can do better by using XPACLRI directly.
      
      When the logic was originally written to strip PACs from user pointers,
      contemporary toolchains used for the kernel had assemblers which were
      unaware of the PAC instructions. As stripping the PAC from userspace
      pointers required unconditional clearing of a fixed set of bits (which
      could be performed with a single instruction), it was simpler to
      implement the masking in C than it was to make use of XPACI or XPACLRI.
      
      When support for in-kernel pointer authentication was added, the
      stripping logic was extended to cover TTBR1 pointers, requiring several
      instructions to handle whether to clear/set bits dependent on bit 55 of
      the pointer.
      
      This patch simplifies the stripping of PACs by using XPACLRI directly,
      as contemporary toolchains do within __builtin_return_address(). This
      saves a number of instructions, especially where
      __builtin_return_address() does not implicitly strip the PAC but is
      heavily used (e.g. with tracepoints). As the kernel might be compiled
      with an assembler without knowledge of XPACLRI, it is assembled using
      the 'HINT #7' alias, which results in an identical opcode.
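      What XPACLRI collapses into a single instruction can be modelled in
      portable C for an assumed 48-bit VA layout (the mask value and layout
      are illustrative assumptions, not the kernel's runtime-computed
      masks): the PAC occupies bits 63:48 except bit 55, which the hardware
      preserves and which distinguishes TTBR1 (kernel) pointers from TTBR0
      (user) ones.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      #define PAC_MASK 0xff7f000000000000ULL  /* assumed: bits 63:48 minus bit 55 */
      #define BIT55    (1ULL << 55)

      /* The pre-XPACLRI C approach: conditionally set or clear the PAC bits
       * depending on bit 55 of the pointer. */
      static uint64_t strip_pac(uint64_t ptr)
      {
          if (ptr & BIT55)
              return ptr | PAC_MASK;   /* kernel pointer: restore all-ones */
          return ptr & ~PAC_MASK;      /* user pointer: clear back to zeros */
      }

      int main(void)
      {
          uint64_t kern = 0xffff800012345678ULL;
          uint64_t user = 0x0000aaaa12345678ULL;

          /* Overlay fake PAC bits, leaving bit 55 and the low bits alone. */
          uint64_t kern_signed = (kern & ~PAC_MASK) | (0x1200000000000000ULL & PAC_MASK);
          uint64_t user_signed = (user & ~PAC_MASK) | (0x3400000000000000ULL & PAC_MASK);

          assert(strip_pac(kern_signed) == kern);
          assert(strip_pac(user_signed) == user);
          printf("stripped ok\n");
          return 0;
      }
      ```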
      
      At the same time, I've split ptrauth_strip_insn_pac() into
      ptrauth_strip_user_insn_pac() and ptrauth_strip_kernel_insn_pac()
      helpers so that we can avoid unnecessary PAC stripping when pointer
      authentication is not in use in userspace or kernel respectively.
      
      The underlying xpaclri() macro uses inline assembly which clobbers x30.
      The clobber causes the compiler to save/restore the original x30 value
      in a frame record (protected with PACIASP and AUTIASP when in-kernel
      authentication is enabled), so this does not provide a gadget to alter
      the return address. Similarly this does not adversely affect unwinding
      due to the presence of the frame record.
      
      The ptrauth_user_pac_mask() and ptrauth_kernel_pac_mask() are exported
      from the kernel in ptrace and core dumps, so these are retained. A
      subsequent patch will move them out of <asm/compiler.h>.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kristina Martsenko <kristina.martsenko@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230412160134.306148-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: avoid redundant PAC stripping in __builtin_return_address() · 9df3f508
      Mark Rutland authored
      In old versions of GCC and Clang, __builtin_return_address() did not
      strip the PAC. This was not the behaviour we desired, and so we wrapped
      this with code to strip the PAC in commit:
      
        689eae42 ("arm64: mask PAC bits of __builtin_return_address")
      
      Since then, both GCC and Clang decided that __builtin_return_address()
      *should* strip the PAC, and the existing behaviour was a bug.
      
      GCC was fixed in 11.1.0, with those fixes backported to 10.2.0, 9.4.0,
      8.5.0, but not earlier:
      
        https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94891
      
      Clang was fixed in 12.0.0, though this was not backported:
      
        https://reviews.llvm.org/D75044
      
      When using a compiler whose __builtin_return_address() strips the PAC,
      our wrapper to strip the PAC is redundant. Similarly, when pointer
      authentication is not in use within the kernel, pointers will not have
      a PAC, so there is no point stripping them.
      
      To avoid this redundant work, this patch updates the
      __builtin_return_address() wrapper to only be used when in-kernel
      pointer authentication is configured and the compiler's
      __builtin_return_address() does not strip the PAC.
      
      This is a cleanup/optimization, and not a fix that requires backporting.
      Stripping a PAC should be an idempotent operation, and so redundantly
      stripping the PAC is not harmful.
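      The idempotence claim is easy to check in a portable sketch (the
      bits-63:48 PAC field on a user pointer is an assumed layout for
      illustration):

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Once the PAC bits are gone, stripping again is a no-op, which is
       * why the redundant wrapper was wasted work rather than a bug. */
      static uint64_t strip_user_pac(uint64_t ptr)
      {
          return ptr & 0x0000ffffffffffffULL;  /* clear assumed PAC bits 63:48 */
      }

      int main(void)
      {
          uint64_t signed_ptr = 0x1234000012345678ULL;  /* fake PAC in top bits */
          uint64_t once  = strip_user_pac(signed_ptr);
          uint64_t twice = strip_user_pac(once);        /* double-stripping */

          assert(once == twice);                /* idempotent */
          assert(once == 0x0000000012345678ULL);
          printf("idempotent\n");
          return 0;
      }
      ```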
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kristina Martsenko <kristina.martsenko@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230412160134.306148-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  6. 12 Apr, 2023 3 commits
  7. 11 Apr, 2023 10 commits