1. 22 Jan, 2020 8 commits
    • arm64: acpi: fix DAIF manipulation with pNMI · e533dbe9
      Mark Rutland authored
      Since commit:
      
        d44f1b8d ("arm64: KVM/mm: Move SEA handling behind a single 'claim' interface")
      
      ... the top-level APEI SEA handler has the shape:
      
      1. current_flags = arch_local_save_flags()
      2. local_daif_restore(DAIF_ERRCTX)
      3. <GHES handler>
      4. local_daif_restore(current_flags)
      
      However, since commit:
      
        4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
      
      ... when pseudo-NMIs (pNMIs) are in use, arch_local_save_flags() will save
      the PMR value rather than the DAIF flags.
      
      The combination of these two commits means that the APEI SEA handler will
      erroneously attempt to restore the PMR value into DAIF. Fix this by
      factoring local_daif_save_flags() out of local_daif_save(), so that we
      can consistently save DAIF in step #1, regardless of whether pNMIs are in
      use.
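
      As a minimal sketch of that factoring (not the literal patch; it assumes
      the usual arm64 helpers such as read_sysreg(), system_uses_irq_prio_masking(),
      GIC_PRIO_IRQON and local_daif_mask()), local_daif_save() becomes a thin
      wrapper around the new flag-reading helper:

        /*
         * Sketch: snapshot the real DAIF bits, folding in the effect of PMR
         * masking when pseudo-NMIs are in use, without masking anything.
         */
        static inline unsigned long local_daif_save_flags(void)
        {
                unsigned long flags;

                flags = read_sysreg(daif);

                if (system_uses_irq_prio_masking()) {
                        /* If IRQs are masked with PMR, reflect that in the flags */
                        if (read_sysreg_s(SYS_ICC_PMR_EL1) != GIC_PRIO_IRQON)
                                flags |= PSR_I_BIT;
                }

                return flags;
        }

        static inline unsigned long local_daif_save(void)
        {
                unsigned long flags;

                flags = local_daif_save_flags();
                local_daif_mask();

                return flags;
        }

      Step #1 of the SEA handler can then take its snapshot with
      local_daif_save_flags(), so step #4 always restores genuine DAIF bits.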
      
      Both commits were introduced concurrently in v5.0.
      
      Cc: <stable@vger.kernel.org>
      Fixes: 4a503217 ("arm64: irqflags: Use ICC_PMR_EL1 for interrupt masking")
      Fixes: d44f1b8d ("arm64: KVM/mm: Move SEA handling behind a single 'claim' interface")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • Merge branch 'for-next/rng' into for-next/core · bc206065
      Will Deacon authored
      * for-next/rng: (2 commits)
        arm64: Use v8.5-RNG entropy for KASLR seed
        ...
    • Merge branch 'for-next/errata' into for-next/core · ab3906c5
      Will Deacon authored
      * for-next/errata: (3 commits)
        arm64: Workaround for Cortex-A55 erratum 1530923
        ...
    • Merge branch 'for-next/asm-annotations' into for-next/core · aa246c05
      Will Deacon authored
      * for-next/asm-annotations: (6 commits)
        arm64: kernel: Correct annotation of end of el0_sync
        ...
    • Merge branches 'for-next/acpi', 'for-next/cpufeatures', 'for-next/csum',... · 4f6cdf29
      Will Deacon authored
      Merge branches 'for-next/acpi', 'for-next/cpufeatures', 'for-next/csum', 'for-next/e0pd', 'for-next/entry', 'for-next/kbuild', 'for-next/kexec/cleanup', 'for-next/kexec/file-kdump', 'for-next/misc', 'for-next/nofpsimd', 'for-next/perf' and 'for-next/scs' into for-next/core
      
      * for-next/acpi:
        ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()
      
      * for-next/cpufeatures: (2 commits)
        arm64: Introduce ID_ISAR6 CPU register
        ...
      
      * for-next/csum: (2 commits)
        arm64: csum: Fix pathological zero-length calls
        ...
      
      * for-next/e0pd: (7 commits)
        arm64: kconfig: Fix alignment of E0PD help text
        ...
      
      * for-next/entry: (5 commits)
        arm64: entry: cleanup sp_el0 manipulation
        ...
      
      * for-next/kbuild: (4 commits)
        arm64: kbuild: remove compressed images on 'make ARCH=arm64 (dist)clean'
        ...
      
      * for-next/kexec/cleanup: (11 commits)
        Revert "arm64: kexec: make dtb_mem always enabled"
        ...
      
      * for-next/kexec/file-kdump: (2 commits)
        arm64: kexec_file: add crash dump support
        ...
      
      * for-next/misc: (12 commits)
        arm64: entry: Avoid empty alternatives entries
        ...
      
      * for-next/nofpsimd: (7 commits)
        arm64: nofpsmid: Handle TIF_FOREIGN_FPSTATE flag cleanly
        ...
      
      * for-next/perf: (2 commits)
        perf/imx_ddr: Fix cpu hotplug state cleanup
        ...
      
      * for-next/scs: (6 commits)
        arm64: kernel: avoid x18 in __cpu_soft_restart
        ...
    • arm64: kconfig: Fix alignment of E0PD help text · e717d93b
      Will Deacon authored
      Remove the additional space.
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Use v8.5-RNG entropy for KASLR seed · 2e8e1ea8
      Mark Brown authored
      When seeding KASLR on a system that has architecture-level random
      number generation, make use of that entropy by mixing it in with the
      seed passed by the bootloader. Since this runs very early in init,
      before feature detection is complete, we open code the check rather
      than use archrandom.h.
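
      A rough sketch of that open coding (illustrative only; __arm64_rndr()
      comes from the archrandom.h patch in the next entry, while get_kaslr_seed(),
      cpuid_feature_extract_unsigned_field() and ID_AA64ISAR0_RNDR_SHIFT are
      assumed helper/constant names):

        /*
         * Sketch: mix architectural entropy into the KASLR seed. cpufeature
         * has not run yet, so check ID_AA64ISAR0_EL1.RNDR directly instead
         * of going through arch_get_random_seed_long().
         */
        seed = get_kaslr_seed(fdt);     /* seed passed by the bootloader */

        if (cpuid_feature_extract_unsigned_field(read_sysreg_s(SYS_ID_AA64ISAR0_EL1),
                                                 ID_AA64ISAR0_RNDR_SHIFT)) {
                unsigned long raw;

                if (__arm64_rndr(&raw))
                        seed ^= raw;
        }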
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Implement archrandom.h for ARMv8.5-RNG · 1a50ec0b
      Richard Henderson authored
      Expose the ID_AA64ISAR0.RNDR field to userspace, as the RNG system
      registers are always available at EL0.
      
      Implement arch_get_random_seed_long using RNDR.  Given that the
      TRNG is likely to be a shared resource between cores and VMs, do not
      explicitly force re-seeding with RNDRRS.  In order to avoid code
      complexity and potential issues with heterogeneous systems, only
      provide values after cpufeature has finalized the system capabilities.
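
      A sketch of what this looks like (illustrative; it assumes the
      SYS_RNDR_EL0 encoding, the ARM64_HAS_RNG capability and the
      __mrs_s()/cpus_have_const_cap() helpers):

        static inline bool __arm64_rndr(unsigned long *v)
        {
                bool ok;

                /* RNDR sets PSTATE.NZCV to 0b0100 on failure, 0b0000 otherwise */
                asm volatile(
                        __mrs_s("%0", SYS_RNDR_EL0) "\n"
                "       cset %w1, ne\n"
                : "=r" (*v), "=r" (ok)
                :
                : "cc");

                return ok;
        }

        static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
        {
                /* Only provide values once cpufeature has finalized the caps */
                if (!cpus_have_const_cap(ARM64_HAS_RNG))
                        return false;

                return __arm64_rndr(v);
        }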
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
      [Modified to only function after cpufeature has finalized the system
      capabilities and move all the code into the header -- broonie]
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      [will: Advertise HWCAP via /proc/cpuinfo]
      Signed-off-by: Will Deacon <will@kernel.org>
  2. 21 Jan, 2020 3 commits
  3. 17 Jan, 2020 10 commits
    • arm64: csum: Fix pathological zero-length calls · c2c24edb
      Robin Murphy authored
      In validating the checksumming results of the new routine, I sadly
      neglected to test its not-checksumming results. Thus it slipped through
      that the one case where @buff is already dword-aligned and @len = 0
      manages to defeat the tail-masking logic and behave as if @len = 8.
      For a zero length it doesn't make much sense to dereference @buff
      anyway, so just add an early return (which has essentially zero impact
      on performance).
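
      A sketch of the shape of the fix, at the top of the C checksum routine
      (illustrative):

        /*
         * Sketch: bail out before touching @buff so the tail-masking logic
         * never sees an aligned, zero-length buffer and treats it as len = 8.
         */
        if (unlikely(len == 0))
                return 0;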
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: entry: cleanup sp_el0 manipulation · 3e393417
      Mark Rutland authored
      The kernel stashes the current task struct in sp_el0 so that this can be
      acquired consistently/cheaply when required. When we take an exception
      from EL0 we have to:
      
      1) stash the original sp_el0 value
      2) find the current task
      3) update sp_el0 with the current task pointer
      
      Currently steps #1 and #2 occur in one place, and step #3 a while later.
      As the value of sp_el0 is immaterial between these points, let's move
      them together to make the code clearer and minimize ifdeffery. This
      necessitates moving the comment for MDSCR_EL1.SS.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: entry: cleanup el0 svc handler naming · 7a2c0944
      Mark Rutland authored
      For most of the exception entry code, <foo>_handler() is the first C
      function called from the entry assembly in entry-common.c, and external
      functions handling the bulk of the logic are called do_<foo>().
      
      For consistency, apply this scheme to el0_svc_handler and
      el0_svc_compat_handler, renaming them to do_el0_svc and
      do_el0_svc_compat respectively.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: entry: mark all entry code as notrace · 2d226c1e
      Mark Rutland authored
      Almost all functions in entry-common.c are marked notrace, with
      el1_undef and el1_inv being the only exceptions. We appear to have done
      this on the assumption that there were no exception registers that we
      needed to snapshot, and thus it was safe to run trace code that might
      result in further exceptions and clobber those registers.
      
      However, until we inherit the DAIF flags, our irq flag tracing is
      stale, and this discrepancy could set off warnings in some
      configurations, for example if CONFIG_DEBUG_LOCKDEP is selected and a
      trace function calls into any flag-checking locking routines. Given we
      don't expect to trigger el1_undef or el1_inv unless something is
      already wrong, any irqflag warnings are liable to mask the information
      we'd actually care about.
      
      Let's keep things simple and mark el1_undef and el1_inv as notrace.
      Developers can trace do_undefinstr and bad_mode if they really want to
      monitor these cases.
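
      For illustration, the change simply adds the notrace annotation the
      rest of entry-common.c already uses, e.g. (sketch;
      local_daif_inherit()/NOKPROBE_SYMBOL() as in the surrounding code):

        /* Sketch: no tracing before the stale irq-flag state is resolved */
        static void notrace el1_undef(struct pt_regs *regs)
        {
                local_daif_inherit(regs);
                do_undefinstr(regs);
        }
        NOKPROBE_SYMBOL(el1_undef);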
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: assembler: remove smp_dmb macro · ddb953f8
      Mark Rutland authored
      These days arm64 kernels are always SMP, and thus smp_dmb is an
      overly-long way of writing dmb. Naturally, no-one uses it.
      
      Remove the unused macro.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: assembler: remove inherit_daif macro · 170b25fa
      Mark Rutland authored
      We haven't needed the inherit_daif macro since commit:
      
        ed3768db ("arm64: entry: convert el1_sync to C")
      
      ... which converted all callers to C and the local_daif_inherit
      function.
      
      Remove the unused macro.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Will Deacon <will@kernel.org>
    • ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map() · 3c23b83a
      Hanjun Guo authored
      The IORT specification [0] (Section 3, table 4, page 9) defines the
      'Number of IDs' as 'The number of IDs in the range minus one'.
      
      However, the IORT ID mapping function iort_id_map() treats the 'Number
      of IDs' field as if it were the full IDs mapping count, with the
      following check in place to detect out of boundary input IDs:
      
      InputID >= Input base + Number of IDs
      
      This check is flawed in that it considers the 'Number of IDs' field as
      the full number of IDs mapping and disregards the 'minus one' from
      the IDs count.
      
      The correct check in iort_id_map() should be implemented as:
      
      InputID > Input base + Number of IDs
      
      This implements the specification correctly, but unfortunately it
      breaks existing firmwares that erroneously set the 'Number of IDs' to
      the full IDs mapping count rather than the count minus one.
      
      e.g.
      
      PCI hostbridge mapping entry 1:
      Input base:  0x1000
      ID Count:    0x100
      Output base: 0x1000
      Output reference: 0xC4  //ITS reference
      
      PCI hostbridge mapping entry 2:
      Input base:  0x1100
      ID Count:    0x100
      Output base: 0x2000
      Output reference: 0xD4  //ITS reference
      
      These are two mapping entries where the second entry's Input base
      equals the first entry's Input base + ID count. With the spec-correct
      check in place in iort_id_map(), InputID 0x1100 still falls within the
      first entry's range, so the kernel would map it to ITS 0xC4, not to
      ITS 0xD4 as the firmware author expected.
      
      Therefore, to keep supporting existing flawed firmwares, introduce a
      workaround that instructs the kernel to use the old InputID range check
      logic in iort_id_map(), so that we can support both firmwares written
      with the flawed 'Number of IDs' logic and the correct one as defined in
      the specifications.
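
      A sketch of the spec-correct bounds check (the field names
      map->input_base/map->id_count are assumed here for illustration; the
      legacy-firmware fallback is omitted):

        /*
         * Sketch: per the IORT spec, 'Number of IDs' is the range size minus
         * one, so an input ID is out of range only when it is strictly
         * greater than input_base + id_count.
         */
        if (rid_in < map->input_base ||
            rid_in > map->input_base + map->id_count)
                return -ENXIO;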
      
      [0]: http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
      Reported-by: Pankaj Bansal <pankaj.bansal@nxp.com>
      Link: https://lore.kernel.org/linux-acpi/20191215203303.29811-1-pankaj.bansal@nxp.com/
      Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Pankaj Bansal <pankaj.bansal@nxp.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • mm: Reserve asm-generic prot flags 0x10 and 0x20 for arch use · d41938d2
      Dave Martin authored
      The asm-generic/mman.h definitions are used by a few architectures that
      also define arch-specific PROT flags with value 0x10 and 0x20. This
      currently applies to sparc and powerpc for 0x10, while arm64 will soon
      join with 0x10 and 0x20.
      
      To help future maintainers, document the use of this flag in the
      asm-generic header too.
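
      Roughly, the reservation is just a pair of comments alongside the
      existing asm-generic PROT_* definitions (sketch; surrounding defines
      shown for context):

        #define PROT_SEM        0x8             /* page may be used for atomic ops */
        /*                      0x10               reserved for arch-specific use */
        /*                      0x20               reserved for arch-specific use */
        #define PROT_NONE       0x0             /* page can not be accessed */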
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      [catalin.marinas@arm.com: reserve 0x20 as well]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Use macros instead of hard-coded constants for MAIR_EL1 · 95b3f74b
      Catalin Marinas authored
      Currently, the arm64 __cpu_setup has hard-coded constants for the memory
      attributes that go into the MAIR_EL1 register. Define proper macros in
      asm/sysreg.h and make use of them in proc.S.
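
      A sketch of the sort of macros this adds to asm/sysreg.h (attribute
      values follow the architectural MAIR encodings; the exact names are
      illustrative):

        /* Memory attribute encodings used in MAIR_EL1 (sketch) */
        #define MAIR_ATTR_DEVICE_nGnRnE         UL(0x00)
        #define MAIR_ATTR_DEVICE_nGnRE          UL(0x04)
        #define MAIR_ATTR_DEVICE_GRE            UL(0x0c)
        #define MAIR_ATTR_NORMAL_NC             UL(0x44)
        #define MAIR_ATTR_NORMAL_WT             UL(0xbb)
        #define MAIR_ATTR_NORMAL                UL(0xff)

        /* Position an attribute at a given MAIR index */
        #define MAIR_ATTRIDX(attr, idx)         ((attr) << ((idx) * 8))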
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Add KRYO{3,4}XX CPU cores to spectre-v2 safe list · 83b0c36b
      Sai Prakash Ranjan authored
      The "silver" KRYO3XX and KRYO4XX CPU cores are not affected by Spectre
      variant 2. Add them to spectre_v2 safe list to correct the spurious
      ARM_SMCCC_ARCH_WORKAROUND_1 warning and vulnerability status reported
      under sysfs.
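
      A sketch of the change, i.e. the entries appended to the existing
      spectre_v2 safe list in cpu_errata.c (the MIDR macro names for the
      "silver" parts are assumed here):

        /* Sketch: additional CPUs known not to be affected by Spectre v2 */
        MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
        MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),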
      Reviewed-by: Stephen Boyd <swboyd@chromium.org>
      Tested-by: Stephen Boyd <swboyd@chromium.org>
      Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
      [will: tweaked commit message to remove stale mention of "gold" cores]
      Signed-off-by: Will Deacon <will@kernel.org>
  4. 16 Jan, 2020 11 commits
  5. 15 Jan, 2020 8 commits