1. 08 Nov, 2019 1 commit
    • Merge branch 'for-next/perf' into for-next/core · 51effa6d
      Catalin Marinas authored
      - Support for additional PMU topologies on HiSilicon platforms
      - Support for CCN-512 interconnect PMU
      - Support for AXI ID filtering in the IMX8 DDR PMU
      - Support for the CCPI2 uncore PMU in ThunderX2
      - Driver cleanup to use devm_platform_ioremap_resource()
      
      * for-next/perf:
        drivers/perf: hisi: update the sccl_id/ccl_id for certain HiSilicon platform
        perf/imx_ddr: Dump AXI ID filter info to userspace
        docs/perf: Add AXI ID filter capabilities information
        perf/imx_ddr: Add driver for DDR PMU in i.MX8MPlus
        perf/imx_ddr: Add enhanced AXI ID filter support
        bindings: perf: imx-ddr: Add new compatible string
        docs/perf: Add explanation for DDR_CAP_AXI_ID_FILTER_ENHANCED quirk
        arm64: perf: Simplify the ARMv8 PMUv3 event attributes
        drivers/perf: Add CCPI2 PMU support in ThunderX2 UNCORE driver.
        Documentation: perf: Update documentation for ThunderX2 PMU uncore driver
        Documentation: Add documentation for CCN-512 DTS binding
        perf: arm-ccn: Enable stats for CCN-512 interconnect
        perf/smmuv3: use devm_platform_ioremap_resource() to simplify code
        perf/arm-cci: use devm_platform_ioremap_resource() to simplify code
        perf/arm-ccn: use devm_platform_ioremap_resource() to simplify code
        perf: xgene: use devm_platform_ioremap_resource() to simplify code
        perf: hisi: use devm_platform_ioremap_resource() to simplify code
      51effa6d
  2. 07 Nov, 2019 2 commits
  3. 06 Nov, 2019 8 commits
    • arm64: ftrace: minimize ifdeffery · 7f08ae53
      Mark Rutland authored
      Now that we no longer refer to mod->arch.ftrace_trampolines in the body
      of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery,
      and make the code easier to follow. Likewise in ftrace_make_nop().
      
      Let's do so.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      7f08ae53
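      As a standalone illustration of the IS_ENABLED() pattern mentioned above, here is a simplified re-creation of the kernel's macro machinery (modelled on include/linux/kconfig.h; CONFIG_DEMO and CONFIG_MISSING are made-up options, not real kernel symbols):

      ```c
      #include <assert.h>

      /* Simplified sketch of the kernel's IS_ENABLED() machinery.
         A defined option expands to 1, which picks the "1" branch via
         the placeholder trick; an undefined one falls through to 0. */
      #define __ARG_PLACEHOLDER_1 0,
      #define __take_second_arg(__ignored, val, ...) val
      #define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
      #define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
      #define __is_defined(x) ___is_defined(x)
      #define IS_ENABLED(option) __is_defined(option)

      #define CONFIG_DEMO 1

      int main(void)
      {
          /* Unlike #ifdef, both branches of an if (IS_ENABLED(...))
             are parsed and type-checked, so bit-rot in rarely-built
             configurations is caught at compile time. */
          assert(IS_ENABLED(CONFIG_DEMO) == 1);
          assert(IS_ENABLED(CONFIG_MISSING) == 0);
          return 0;
      }
      ```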
    • arm64: implement ftrace with regs · 3b23e499
      Torsten Duwe authored
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
      3b23e499
    • arm64: asm-offsets: add S_FP · 1f377e04
      Mark Rutland authored
      So that assembly code can more easily manipulate the FP (x29) within a
      pt_regs, add an S_FP asm-offsets definition.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      1f377e04
    • arm64: insn: add encoder for MOV (register) · e3bf8a67
      Mark Rutland authored
      For FTRACE_WITH_REGS, we're going to want to generate a MOV (register)
      instruction as part of the callsite initialization. As MOV (register) is
      an alias for ORR (shifted register), we can generate this with
      aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and
      difficult to read in-context.
      
      Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
      write callers in a more straightforward way.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      e3bf8a67
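      A sketch of what such a wrapper computes (this is not the kernel's aarch64_insn_gen_*() implementation, just the A64 "ORR (shifted register)" bit layout applied to the MOV alias):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* A64 "ORR (shifted register)", 64-bit variant:
         sf=1 | 01 (ORR) | 01010 | shift=00 | N=0 | Rm | imm6=0 | Rn | Rd
         The fixed bits above give the 0xAA000000 base opcode. */
      static uint32_t gen_orr_shifted_reg(unsigned rd, unsigned rn, unsigned rm)
      {
          return 0xAA000000u | (rm << 16) | (rn << 5) | rd;
      }

      /* MOV (register) is the alias ORR Xd, XZR, Xm: Rn is hard-wired
         to the zero register (encoded as 31). */
      static uint32_t gen_move_reg(unsigned rd, unsigned rm)
      {
          return gen_orr_shifted_reg(rd, 31, rm);
      }

      int main(void)
      {
          /* "mov x9, x30", the instruction the ftrace callsite uses
             to save the LR before calling the ftrace entry code. */
          assert(gen_move_reg(9, 30) == 0xAA1E03E9u);
          return 0;
      }
      ```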
    • arm64: module/ftrace: initialize PLT at load time · f1a54ae9
      Mark Rutland authored
      Currently we lazily-initialize a module's ftrace PLT at runtime when we
      install the first ftrace call. To do so we have to apply a number of
      sanity checks, transiently mark the module text as RW, and perform an
      IPI as part of handling Neoverse-N1 erratum #1542419.
      
      We only expect the ftrace trampoline to point at ftrace_caller() (AKA
      FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
      module load time, before the module loader marks the module RO and
      performs the initial I-cache maintenance for the module.
      
      Thus we can rely on the module having been correctly initialized, and can
      simplify the runtime work necessary to install an ftrace call in a
      module. This will also allow for the removal of module_disable_ro().
      
      Tested by forcing ftrace_make_call() to use the module PLT, and then
      loading up a module after setting up ftrace with:
      
      | echo ":mod:<module-name>" > set_ftrace_filter;
      | echo function > current_tracer;
      | modprobe <module-name>
      
      Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
      selected, we wrap its use along with most of module_init_ftrace_plt()
      with ifdeffery rather than using IS_ENABLED().
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will@kernel.org>
      f1a54ae9
    • arm64: module: rework special section handling · bd8b21d3
      Mark Rutland authored
      When we load a module, we have to perform some special work for a couple
      of named sections. To do this, we iterate over all of the module's
      sections, and perform work for each section we recognize.
      
      To make it easier to handle the unexpected absence of a section, and to
      make the section-specific logic easier to read, let's factor the section
      search into a helper. Similar helpers already exist in the core module
      loader and in other architectures (ideally we'd unify these in future).
      
      If we expect a module to have an ftrace trampoline section, but it
      doesn't have one, we'll now reject loading the module. When
      ARM64_MODULE_PLTS is selected, any correctly built module should have
      one (and this is assumed by arm64's ftrace PLT code) and the absence of
      such a section implies something has gone wrong at build time.
      
      Subsequent patches will make use of the new helper.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      bd8b21d3
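      The section-search helper described above can be sketched as follows (the types are simplified stand-ins, not the kernel's Elf_Shdr handling; the function name mirrors the idea, not necessarily the patch's exact helper):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <string.h>

      /* Hypothetical minimal stand-in for an ELF section header. */
      struct shdr {
          const char *name;
      };

      /* Return the first section with the given name, or NULL when the
         module lacks it, so callers can fail the load cleanly. */
      static const struct shdr *find_section(const struct shdr *secs,
                                             size_t n, const char *name)
      {
          for (size_t i = 0; i < n; i++)
              if (strcmp(secs[i].name, name) == 0)
                  return &secs[i];
          return NULL;
      }

      int main(void)
      {
          struct shdr secs[] = { { ".text" }, { ".text.ftrace_trampoline" } };

          assert(find_section(secs, 2, ".text.ftrace_trampoline") == &secs[1]);
          /* A missing expected section returns NULL, which the loader
             would treat as a malformed module and reject. */
          assert(find_section(secs, 2, ".plt") == NULL);
          return 0;
      }
      ```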
    • module/ftrace: handle patchable-function-entry · a1326b17
      Mark Rutland authored
      When using patchable-function-entry, the compiler will record the
      callsites into a section named "__patchable_function_entries" rather
      than "__mcount_loc". Let's abstract this difference behind a new
      FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this
      explicitly (e.g. with custom module linker scripts).
      
      As parisc currently handles this explicitly, it is fixed up accordingly,
      with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is
      only defined when DYNAMIC_FTRACE is selected, the parisc module loading
      code is updated to only use the definition in that case. When
      DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
      this removes some redundant work in that case.
      
      To make sure that this is kept up to date for modules and the main
      kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery
      simplified for legibility.
      
      I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
      and verified that the section made it into the .ko files for modules.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: linux-parisc@vger.kernel.org
      a1326b17
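      A minimal sketch of the abstraction, assuming the shape the commit describes (the exact kernel definition and guard macros may differ):

      ```c
      #include <assert.h>
      #include <string.h>

      /* Compilers using -fpatchable-function-entry define a symbol like
         this; here we force it on to exercise that branch. */
      #define CC_USING_PATCHABLE_FUNCTION_ENTRY 1

      /* One name for the callsite section, instead of per-arch module
         linker scripts renaming __patchable_function_entries. */
      #ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
      # define FTRACE_CALLSITE_SECTION "__patchable_function_entries"
      #else
      # define FTRACE_CALLSITE_SECTION "__mcount_loc"
      #endif

      int main(void)
      {
          assert(strcmp(FTRACE_CALLSITE_SECTION,
                        "__patchable_function_entries") == 0);
          return 0;
      }
      ```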
    • ftrace: add ftrace_init_nop() · fbf6c73c
      Mark Rutland authored
      Architectures may need to perform special initialization of ftrace
      callsites, and today they do so by special-casing ftrace_make_nop() when
      the expected branch address is MCOUNT_ADDR. In some cases (e.g. for
      patchable-function-entry), we don't have an mcount-like symbol and don't
      want a synthetic MCOUNT_ADDR, but we may need to perform some
      initialization of callsites.
      
      To make it possible to separate initialization from runtime
      modification, and to handle cases without an mcount-like symbol, this
      patch adds an optional ftrace_init_nop() function that architectures can
      implement, which does not pass a branch address.
      
      Where an architecture does not provide ftrace_init_nop(), we will fall
      back to the existing behaviour of calling ftrace_make_nop() with
      MCOUNT_ADDR.
      
      At the same time, ftrace_code_disable() is renamed to
      ftrace_nop_initialize() to make it clearer that it is intended to
      initialize a callsite into a disabled state, and is not for disabling a
      callsite that has been runtime enabled. The kerneldoc description of rec
      arguments is updated to cover non-mcount callsites.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Miroslav Benes <mbenes@suse.cz>
      Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      fbf6c73c
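      The optional-hook-with-fallback pattern can be modelled as below (struct dyn_ftrace, MCOUNT_ADDR, and the state field here are simplified stand-ins, not the kernel's definitions):

      ```c
      #include <assert.h>

      /* Simplified stand-ins: a callsite record and a fake
         MCOUNT_ADDR sentinel value. */
      struct dyn_ftrace {
          unsigned long ip;
          int enabled;
      };
      #define MCOUNT_ADDR 0xdeadUL

      /* Existing arch hook: patch the callsite into a disabled NOP. */
      static int ftrace_make_nop(void *mod, struct dyn_ftrace *rec,
                                 unsigned long addr)
      {
          (void)mod; (void)addr;
          rec->enabled = 0;   /* callsite is now in the disabled state */
          return 0;
      }

      /* The fallback: when an arch does not provide ftrace_init_nop(),
         core ftrace keeps the old behaviour of calling
         ftrace_make_nop() with MCOUNT_ADDR as the expected address. */
      #ifndef ftrace_init_nop
      static int ftrace_init_nop(void *mod, struct dyn_ftrace *rec)
      {
          return ftrace_make_nop(mod, rec, MCOUNT_ADDR);
      }
      #endif

      int main(void)
      {
          struct dyn_ftrace rec = { 0x1000UL, 1 };

          assert(ftrace_init_nop(0, &rec) == 0);
          assert(rec.enabled == 0);
          return 0;
      }
      ```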
  4. 04 Nov, 2019 6 commits
  5. 01 Nov, 2019 1 commit
  6. 29 Oct, 2019 2 commits
  7. 28 Oct, 2019 14 commits
    • Merge branch 'for-next/entry-s-to-c' into for-next/core · 8301ae82
      Catalin Marinas authored
      Move the synchronous exception paths from entry.S into a C file to
      improve the code readability.
      
      * for-next/entry-s-to-c:
        arm64: entry-common: don't touch daif before bp-hardening
        arm64: Remove asmlinkage from updated functions
        arm64: entry: convert el0_sync to C
        arm64: entry: convert el1_sync to C
        arm64: add local_daif_inherit()
        arm64: Add prototypes for functions called by entry.S
        arm64: remove __exception annotations
      8301ae82
    • Merge branch 'kvm-arm64/erratum-1319367' of... · 346f6a46
      Catalin Marinas authored
      Merge branch 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core
      
      Similarly to erratum 1165522, which affects Cortex-A76, the Cortex-A57
      and A72 respectively suffer from errata 1319537 and 1319367, potentially
      resulting in TLB corruption if the CPU speculates an AT instruction
      while switching guests.
      
      The fix is slightly more involved since we don't have VHE to help us
      here, but the idea is the same: when switching a guest in, we must
      prevent any speculated AT from being able to parse the page tables
      until S2 is up and running. Only at this stage can we allow AT to take
      place.
      
      For this, we always restore the guest sysregs first, except for its
      SCTLR and TCR registers, which must be set with SCTLR.M=1 and
      TCR.EPD{0,1} = {1, 1}, effectively disabling the PTW and TLB
      allocation. Once S2 is set up, we restore the guest's SCTLR and
      TCR. Similar things must be done on TLB invalidation...
      
      * 'kvm-arm64/erratum-1319367' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
        arm64: Enable and document ARM errata 1319367 and 1319537
        arm64: KVM: Prevent speculative S1 PTW when restoring vcpu context
        arm64: KVM: Disable EL1 PTW when invalidating S2 TLBs
        arm64: KVM: Reorder system register restoration and stage-2 activation
        arm64: Add ARM64_WORKAROUND_1319367 for all A57 and A72 versions
      346f6a46
    • Merge branch 'for-next/neoverse-n1-stale-instr' into for-next/core · 6a036afb
      Catalin Marinas authored
      Neoverse-N1 cores with the 'COHERENT_ICACHE' feature may fetch stale
      instructions when software depends on prefetch-speculation-protection
      instead of explicit synchronization. [0]
      
      The workaround is to trap I-Cache maintenance and issue an
      inner-shareable TLBI. The affected cores have a Coherent I-Cache, so the
      I-Cache maintenance isn't necessary. The core tells user-space it can
      skip it with CTR_EL0.DIC. We also have to trap this register to hide the
      bit, forcing DIC-aware user-space to perform the maintenance.
      
      To avoid trapping all cache-maintenance, this workaround depends on
      a firmware component that only traps I-cache maintenance from EL0 and
      performs the workaround.
      
      For user-space, the kernel's work is to trap CTR_EL0 to hide DIC, and
      produce a fake IminLine. EL3 traps the now-necessary I-Cache maintenance
      and performs the inner-shareable-TLBI that makes everything better.
      
      [0] https://developer.arm.com/docs/sden885747/latest/arm-neoverse-n1-mp050-software-developer-errata-notice
      
      * for-next/neoverse-n1-stale-instr:
        arm64: Silence clang warning on mismatched value/register sizes
        arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space
        arm64: Fake the IminLine size on systems affected by Neoverse-N1 #1542419
        arm64: errata: Hide CTR_EL0.DIC on systems affected by Neoverse-N1 #1542419
      6a036afb
    • Documentation: Add documentation for CCN-512 DTS binding · 05daff06
      Marek Bykowski authored
      Indicate that the arm-ccn perf back-end now supports CCN-512.
      Acked-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      05daff06
    • perf: arm-ccn: Enable stats for CCN-512 interconnect · 126b0a17
      Marek Bykowski authored
      Add a compatible string for the ARM CCN-512 interconnect.
      Acked-by: Pawel Moll <pawel.moll@arm.com>
      Signed-off-by: Marek Bykowski <marek.bykowski@gmail.com>
      Signed-off-by: Boleslaw Malecki <boleslaw.malecki@tieto.com>
      Signed-off-by: Will Deacon <will@kernel.org>
      126b0a17
    • Merge remote-tracking branch 'arm64/for-next/fixes' into for-next/core · ba95e9bd
      Catalin Marinas authored
      This is required to solve the conflicts with subsequent merges of two
      more errata workaround branches.
      
      * arm64/for-next/fixes:
        arm64: tags: Preserve tags for addresses translated via TTBR1
        arm64: mm: fix inverted PAR_EL1.F check
        arm64: sysreg: fix incorrect definition of SYS_PAR_EL1_F
        arm64: entry.S: Do not preempt from IRQ before all cpufeatures are enabled
        arm64: hibernate: check pgd table allocation
        arm64: cpufeature: Treat ID_AA64ZFR0_EL1 as RAZ when SVE is not enabled
        arm64: Fix kcore macros after 52-bit virtual addressing fallout
        arm64: Allow CAVIUM_TX2_ERRATUM_219 to be selected
        arm64: Avoid Cavium TX2 erratum 219 when switching TTBR
        arm64: Enable workaround for Cavium TX2 erratum 219 when running SMT
        arm64: KVM: Trap VM ops when ARM64_WORKAROUND_CAVIUM_TX2_219_TVM is set
      ba95e9bd
    • arm64: entry-common: don't touch daif before bp-hardening · bfe29874
      James Morse authored
      The previous patches mechanically transformed the assembly version of
      entry.S to entry-common.c for synchronous exceptions.
      
      The C version of local_daif_restore() doesn't quite do the same thing
      as the assembly versions if pseudo-NMI is in use. In particular,
      | local_daif_restore(DAIF_PROCCTX_NOIRQ)
      will still allow pNMI to be delivered. This is not the behaviour
      do_el0_ia_bp_hardening() and do_sp_pc_abort() want, as it should not
      be possible for the PMU handler to run as an NMI until the bp-hardening
      sequence has run.
      
      The bp-hardening calls were placed where they are because this was the
      first C code to run after the relevant exceptions. As we've now moved
      that point earlier, move the checks and calls earlier too.
      
      This makes it clearer that this stuff runs before any kind of exception,
      and saves modifying PSTATE twice.
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bfe29874
    • arm64: Remove asmlinkage from updated functions · afa7c0e5
      James Morse authored
      Now that the callers of these functions have moved into C, they no longer
      need the asmlinkage annotation. Remove it.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      afa7c0e5
    • arm64: entry: convert el0_sync to C · 582f9583
      Mark Rutland authored
      This is largely a 1-1 conversion of asm to C, with a couple of caveats.
      
      The el0_sync{_compat} switches explicitly handle all the EL0 debug
      cases, so el0_dbg doesn't have to try to bail out for unexpected EL1
      debug ESR values. This also means that an unexpected vector catch from
      AArch32 is routed to el0_inv.
      
      We *could* merge the native and compat switches, which would make the
      diffstat negative, but I've tried to stay as close to the existing
      assembly as possible for the moment.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [split out of a bigger series, added nokprobes. removed irq trace
       calls as the C helpers do this. renamed el0_dbg's use of FAR]
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      582f9583
    • arm64: entry: convert el1_sync to C · ed3768db
      Mark Rutland authored
      This patch converts the EL1 sync entry assembly logic to C code.
      
      Doing this will allow us to make changes in a slightly more
      readable way. A case in point is supporting kernel-first RAS.
      do_sea() should be called on the CPU that took the fault.
      
      Largely the assembly code is converted to C in a relatively
      straightforward manner.
      
      Since all sync sites share a common asm entry point, the ASM_BUG()
      instances are no longer required for effective backtraces back to
      assembly, and we don't need similar BUG() entries.
      
      The ESR_ELx.EC codes for all (supported) debug exceptions are now
      checked in the el1_sync_handler's switch statement, which renders the
      check in el1_dbg redundant. This both simplifies the el1_dbg handler,
      and makes the EL1 exception handling more robust to
      currently-unallocated ESR_ELx.EC encodings.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [split out of a bigger series, added nokprobes, moved prototypes]
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ed3768db
    • arm64: add local_daif_inherit() · 51077e03
      Mark Rutland authored
      Some synchronous exceptions can be taken from a number of contexts,
      e.g. where IRQs may or may not be masked. In the entry assembly for
      these exceptions, we use the inherit_daif assembly macro to ensure
      that we only mask those exceptions which were masked when the exception
      was taken.
      
      So that we can do the same from C code, this patch adds a new
      local_daif_inherit() function, following the existing local_daif_*()
      naming scheme.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [moved away from local_daif_restore()]
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      51077e03
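      A toy model of the inherit semantics (the flag values and the global mask variable are illustrative, not the real PSTATE bit positions or the kernel's DAIF accessors):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Illustrative D/A/I/F mask bits (not real PSTATE encodings). */
      #define DAIF_D (1u << 3)
      #define DAIF_A (1u << 2)
      #define DAIF_I (1u << 1)
      #define DAIF_F (1u << 0)

      /* Stand-in for the CPU's current exception-mask state. */
      static uint32_t current_daif;

      struct pt_regs {
          uint32_t pstate;
      };

      /* Re-mask exactly the exceptions that were masked when the
         exception was taken, mirroring the inherit_daif asm macro:
         no more, no less. */
      static void local_daif_inherit(const struct pt_regs *regs)
      {
          current_daif = regs->pstate & (DAIF_D | DAIF_A | DAIF_I | DAIF_F);
      }

      int main(void)
      {
          /* At exception entry the hardware masked everything ... */
          current_daif = DAIF_D | DAIF_A | DAIF_I | DAIF_F;
          /* ... but the interrupted context only had I and F masked. */
          struct pt_regs regs = { .pstate = DAIF_I | DAIF_F };

          local_daif_inherit(&regs);
          assert(current_daif == (DAIF_I | DAIF_F));
          return 0;
      }
      ```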
    • arm64: Add prototypes for functions called by entry.S · e540e0a7
      James Morse authored
      Functions that are only called by assembly don't always have a
      C header file prototype.
      
      Add the prototypes before moving the assembly callers to C.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      e540e0a7
    • arm64: remove __exception annotations · b6e43c0e
      James Morse authored
      Since commit 73267498 ("arm64: unwind: reference pt_regs via embedded
      stack frame") arm64 has not used the __exception annotation to dump
      the pt_regs during stack tracing. in_exception_text() has no callers.
      
      This annotation is only used to blacklist kprobes; it means the same
      as __kprobes.
      
      Section annotations like this require the functions to be grouped
      together between the start/end markers, and placed according to
      the linker script. For kprobes we also have NOKPROBE_SYMBOL() which
      logs the symbol address in a section that kprobes parses and
      blacklists at boot.
      
      Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of
      blacklisted symbols, and saves us from having an arm64 specific
      spelling of __kprobes.
      
      do_debug_exception() already has a NOKPROBE_SYMBOL() annotation.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b6e43c0e
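      A toy model of the NOKPROBE_SYMBOL() approach: the real kernel records the symbol's address in a dedicated ELF section (_kprobe_blacklist) that kprobes parses at boot, whereas this sketch uses constructor functions and a plain array purely to show the address-registry idea.

      ```c
      #include <assert.h>

      /* Stand-in for the kprobes blacklist the core would parse. */
      static void (*kprobe_blacklist[8])(void);
      static int blacklist_len;

      /* Record the function's address in the registry, instead of
         requiring the function body to live inside a grouped
         __kprobes text range placed by the linker script. */
      #define NOKPROBE_SYMBOL(fn)                                      \
          __attribute__((constructor)) static void register_##fn(void) \
          {                                                            \
              kprobe_blacklist[blacklist_len++] = fn;                  \
          }

      static void do_debug_exception(void) { /* handler body elided */ }
      NOKPROBE_SYMBOL(do_debug_exception)

      /* What the kprobes core would check before arming a probe. */
      static int within_kprobe_blacklist(void (*addr)(void))
      {
          for (int i = 0; i < blacklist_len; i++)
              if (kprobe_blacklist[i] == addr)
                  return 1;
          return 0;
      }

      int main(void)
      {
          assert(within_kprobe_blacklist(do_debug_exception));
          return 0;
      }
      ```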
    • arm64: Silence clang warning on mismatched value/register sizes · 27a22fbd
      Catalin Marinas authored
      Clang reports a warning on the __tlbi(aside1is, 0) macro expansion since
      the value size does not match the register size specified in the inline
      asm. Construct the ASID value using the __TLBI_VADDR() macro.
      
      Fixes: 222fc0c8 ("arm64: compat: Workaround Neoverse-N1 #1542419 for compat user-space")
      Reported-by: Nathan Chancellor <natechancellor@gmail.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      27a22fbd
  8. 26 Oct, 2019 4 commits
  9. 25 Oct, 2019 2 commits