1. 30 Aug, 2019 1 commit
    •
      Merge branches 'for-next/52-bit-kva', 'for-next/cpu-topology',... · ac12cf85
      Will Deacon authored
      Merge branches 'for-next/52-bit-kva', 'for-next/cpu-topology', 'for-next/error-injection', 'for-next/perf', 'for-next/psci-cpuidle', 'for-next/rng', 'for-next/smpboot', 'for-next/tbi' and 'for-next/tlbi' into for-next/core
      
      * for-next/52-bit-kva: (25 commits)
        Support for 52-bit virtual addressing in kernel space
      
      * for-next/cpu-topology: (9 commits)
        Move CPU topology parsing into core code and add support for ACPI 6.3
      
      * for-next/error-injection: (2 commits)
        Support for function error injection via kprobes
      
      * for-next/perf: (8 commits)
        Support for i.MX8 DDR PMU and proper SMMUv3 group validation
      
      * for-next/psci-cpuidle: (7 commits)
        Move PSCI idle code into a new CPUidle driver
      
      * for-next/rng: (4 commits)
        Support for 'rng-seed' property being passed in the devicetree
      
      * for-next/smpboot: (3 commits)
        Reduce fragility of secondary CPU bringup in debug configurations
      
      * for-next/tbi: (10 commits)
        Introduce new syscall ABI with relaxed requirements for pointer tags
      
      * for-next/tlbi: (6 commits)
        Handle spurious page faults arising from kernel space
  2. 28 Aug, 2019 4 commits
    •
      docs/perf: Add documentation for the i.MX8 DDR PMU · 3724e186
      Joakim Zhang authored
      Add some documentation describing the DDR PMU residing in the Freescale
      i.MX8 SoC and its perf driver implementation in Linux.
      Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      perf/imx_ddr: Add support for AXI ID filtering · c12c0288
      Joakim Zhang authored
      AXI filtering is used by events 0x41 and 0x42 to count reads or writes
      with an ARID or AWID matching a specified filter. The filter is exposed
      to userspace as an (ID, MASK) pair, where each set bit in the mask
      causes the corresponding bit in the ID to be ignored when matching
      against the ID of memory transactions for the purposes of incrementing
      the counter.
      
      For example:
      
        # perf stat -a -e imx8_ddr0/axid-read,axi_mask=0xff,axi_id=0x800/ cmd
      
      will count all read transactions from AXI IDs 0x800 - 0x8ff. If the
      'axi_mask' is omitted, then it is treated as 0x0 which means that the
      'axi_id' will be matched exactly.
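      
      As a rough illustration of the documented (ID, MASK) semantics (a
      standalone sketch, not the driver's actual matching code), a transaction
      ID matches when every bit outside the mask agrees with 'axi_id':
      
      	#include <stdbool.h>
      	#include <stdint.h>
      
      	/* Set bits in the mask are "don't care" bits when matching. */
      	static bool axi_id_matches(uint32_t txn_id, uint32_t axi_id,
      				   uint32_t axi_mask)
      	{
      		return ((txn_id ^ axi_id) & ~axi_mask) == 0;
      	}
      
      With axi_id=0x800 and axi_mask=0xff this accepts exactly the range
      0x800 - 0x8ff from the example above.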
      Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: kpti: ensure patched kernel text is fetched from PoU · f32c7a8e
      Mark Rutland authored
      While the MMU is disabled, I-cache speculation can result in
      instructions being fetched from the PoC. During boot we may patch
      instructions (e.g. for alternatives and jump labels), and these may be
      dirty at the PoU (and stale at the PoC).
      
      Thus, while the MMU is disabled in the KPTI pagetable fixup code we may
      load stale instructions into the I-cache, potentially leading to
      subsequent crashes when executing regions of code which have been
      modified at runtime.
      
      Similarly to commit:
      
        8ec41987 ("arm64: mm: ensure patched kernel text is fetched from PoU")
      
      ... we can invalidate the I-cache after enabling the MMU to prevent such
      issues.
      
      The KPTI pagetable fixup code itself should be clean to the PoC per the
      boot protocol, so no maintenance is required for this code.
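      
      The change itself is in arm64 assembly; as a hedged C-flavoured sketch of
      the sequence (assuming the usual barrier semantics), the idea is to
      discard anything the I-cache may have fetched from the PoC while the MMU
      was off:
      
      	/* After switching the MMU back on: invalidate all I-cache to PoU. */
      	static inline void invalidate_icache_after_mmu_on(void)
      	{
      		asm volatile("ic	iallu\n\t"
      			     "dsb	nsh\n\t"
      			     "isb" : : : "memory");
      	}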
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: fix fixmap copy for 16K pages and 48-bit VA · b333b0ba
      Mark Rutland authored
      With 16K pages and 48-bit VAs, the PGD level of table has two entries,
      and so the fixmap shares a PGD with the kernel image. Since commit:
      
        f9040773 ("arm64: move kernel image to base of vmalloc area")
      
      ... we copy the existing fixmap to the new fine-grained page tables at
      the PUD level in this case. When walking to the new PUD, we forgot to
      offset the PGD entry and always used the PGD entry at index 0, but this
      worked as the kernel image and fixmap were in the low half of the TTBR1
      address space.
      
      As of commit:
      
        14c127c9 ("arm64: mm: Flip kernel VA space")
      
      ... the kernel image and fixmap are in the high half of the TTBR1
      address space, and hence use the PGD at index 1, but we didn't update
      the fixmap copying code to account for this.
      
      Thus, we'll erroneously try to copy the fixmap slots into a PUD under
      the PGD entry at index 0. At the point we do so this PGD entry has not
      been initialised, and thus we'll try to write a value to a small offset
      from physical address 0, causing a number of potential problems.
      
      Fix this by correctly offsetting the PGD. This is split over a few steps
      for legibility.
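      
      As a hedged sketch of the idea (not the exact kernel code; 'new_pgd' is a
      placeholder for the new table, while PGDIR_SHIFT and PTRS_PER_PGD are the
      usual arm64 page-table macros), the copy should index the PGD by the
      fixmap address rather than hard-coding slot 0:
      
      	/* Pick the PGD slot that actually covers the fixmap. */
      	unsigned long idx = (FIXADDR_START >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
      	pgd_t *fixmap_pgdp = new_pgd + idx;	/* previously: new_pgd + 0 */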
      
      Fixes: 14c127c9 ("arm64: mm: Flip kernel VA space")
      Reported-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Steve Capper <Steve.Capper@arm.com>
      Tested-by: Steve Capper <Steve.Capper@arm.com>
      Tested-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  3. 27 Aug, 2019 13 commits
    •
      perf/smmuv3: Validate groups for global filtering · 3c934735
      Robin Murphy authored
      With global filtering, it becomes possible for users to construct
      self-contradictory groups with conflicting filters. Make sure we
      cover that when initially validating events.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      perf/smmuv3: Validate group size · 33e84ea4
      Robin Murphy authored
      Ensure that a group will actually fit into the available counters.
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: Relax Documentation/arm64/tagged-pointers.rst · 92af2b69
      Vincenzo Frascino authored
      On AArch64 the TCR_EL1.TBI0 bit is set by default, allowing userspace
      (EL0) to perform memory accesses through 64-bit pointers with a non-zero
      top byte. However, such pointers were not allowed at the user-kernel
      syscall ABI boundary.
      
      With the Tagged Address ABI patchset, it is now possible to pass tagged
      pointers to the syscalls. Relax the requirements described in
      tagged-pointers.rst to be compliant with the behaviours guaranteed by
      the AArch64 Tagged Address ABI.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Szabolcs Nagy <szabolcs.nagy@arm.com>
      Cc: Kevin Brodsky <kevin.brodsky@arm.com>
      Acked-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Co-developed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: kvm: Replace hardcoded '1' with SYS_PAR_EL1_F · 5c062ef4
      Will Deacon authored
      Now that we have a definition for the 'F' field of PAR_EL1, use that
      instead of coding the immediate directly.
      Acked-by: Marc Zyngier <maz@kernel.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: mm: Ignore spurious translation faults taken from the kernel · 42f91093
      Will Deacon authored
      Thanks to address translation being performed out of order with respect to
      loads and stores, it is possible for a CPU to take a translation fault when
      accessing a page that was mapped by a different CPU.
      
      For example, in the case that one CPU maps a page and then sets a flag to
      tell another CPU:
      
      	CPU 0
      	-----
      
      	MOV	X0, <valid pte>
      	STR	X0, [Xptep]	// Store new PTE to page table
      	DSB	ISHST
      	ISB
      	MOV	X1, #1
      	STR	X1, [Xflag]	// Set the flag
      
      	CPU 1
      	-----
      
      loop:	LDAR	X0, [Xflag]	// Poll flag with Acquire semantics
      	CBZ	X0, loop
      	LDR	X1, [X2]	// Translates using the new PTE
      
      then the final load on CPU 1 can raise a translation fault because the
      translation can be performed speculatively before the read of the flag and
      marked as "faulting" by the CPU. This isn't quite as bad as it sounds
      since, in reality, code such as:
      
      	CPU 0				CPU 1
      	-----				-----
      	spin_lock(&lock);		spin_lock(&lock);
      	*ptr = vmalloc(size);		if (*ptr)
      	spin_unlock(&lock);			foo = **ptr;
      					spin_unlock(&lock);
      
      will not trigger the fault because there is an address dependency on CPU 1
      which prevents the speculative translation. However, more exotic code where
      the virtual address is known ahead of time, such as:
      
      	CPU 0				CPU 1
      	-----				-----
      	spin_lock(&lock);		spin_lock(&lock);
      	set_fixmap(0, paddr, prot);	if (mapped)
      	mapped = true;				foo = *fix_to_virt(0);
      	spin_unlock(&lock);		spin_unlock(&lock);
      
      could fault. This can be avoided by any of:
      
      	* Introducing broadcast TLB maintenance on the map path
      	* Adding a DSB;ISB sequence after checking a flag which indicates
      	  that a virtual address is now mapped
      	* Handling the spurious fault
      
      Given that we have never observed a problem due to this under Linux and
      future revisions of the architecture are being tightened so that
      translation table walks are effectively ordered in the same way as explicit
      memory accesses, we no longer treat spurious kernel faults as fatal if an
      AT instruction indicates that the access does not trigger a translation
      fault.
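      
      A hedged sketch of the spurious-fault check (names are illustrative; the
      real handler also inspects the fault status code reported in PAR_EL1):
      
      	/* Re-try the translation with AT; if it succeeds now, the original
      	 * fault was spurious and can be safely ignored.
      	 */
      	static bool spurious_kernel_translation_fault(unsigned long addr)
      	{
      		u64 par;
      
      		asm volatile("at s1e1r, %0" :: "r" (addr));
      		isb();
      		par = read_sysreg(par_el1);
      
      		return !(par & SYS_PAR_EL1_F);	/* F == 0: walk succeeded */
      	}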
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: sysreg: Add some field definitions for PAR_EL1 · e8620cff
      Will Deacon authored
      PAR_EL1 is a mysterious creature, but sometimes it's necessary to read
      it when translating addresses in situations where we cannot walk the
      page table directly.
      
      Add a couple of system register definitions for the fault indication
      field ('F') and the fault status code ('FST').
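      
      Presumably the definitions look something like the following, matching
      the bit positions documented in the Armv8 ARM:
      
      	#define SYS_PAR_EL1_F		BIT(0)		/* translation failed */
      	#define SYS_PAR_EL1_FST		GENMASK(6, 1)	/* fault status code */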
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: mm: Add ISB instruction to set_pgd() · eb6a4dcc
      Will Deacon authored
      Commit 6a4cbd63c25a ("Revert "arm64: Remove unnecessary ISBs from
      set_{pte,pmd,pud}"") reintroduced ISB instructions to some of our
      page table setter functions in light of a recent clarification to the
      Armv8 architecture. Although 'set_pgd()' isn't currently used to update
      a live page table, add the ISB instruction there too for consistency
      with the other macros and to provide some future-proofing if we use it
      on live tables in the future.
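      
      A hedged sketch of the resulting pattern (the in-kernel helper also has
      special handling for the swapper pgdir):
      
      	static inline void set_pgd(pgd_t *pgdp, pgd_t pgd)
      	{
      		WRITE_ONCE(*pgdp, pgd);
      		dsb(ishst);	/* make the update visible to the walker */
      		isb();		/* ...before any subsequent instruction */
      	}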
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: tlb: Ensure we execute an ISB following walk cache invalidation · 51696d34
      Will Deacon authored
      Commit 05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
      added a new TLB invalidation helper which is used when freeing
      intermediate levels of page table used for kernel mappings, but is
      missing the required ISB instruction after completion of the TLBI
      instruction.
      
      Add the missing barrier.
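      
      For reference, the fixed helper presumably ends up along these lines
      (using the existing arm64 TLB macros; treat this as a sketch rather than
      the exact code):
      
      	static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
      	{
      		unsigned long addr = __TLBI_VADDR(kaddr, 0);
      
      		dsb(ishst);
      		__tlbi(vaae1is, addr);
      		dsb(ish);
      		isb();		/* the barrier added by this patch */
      	}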
      
      Cc: <stable@vger.kernel.org>
      Fixes: 05f2d2f8 ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      Revert "arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}" · d0b7a302
      Will Deacon authored
      This reverts commit 24fe1b0e.
      
      Commit 24fe1b0e ("arm64: Remove unnecessary ISBs from
      set_{pte,pmd,pud}") removed ISB instructions immediately following updates
      to the page table, on the grounds that they are not required by the
      architecture and a DSB alone is sufficient to ensure that subsequent data
      accesses use the new translation:
      
        DDI0487E_a, B2-128:
      
        | ... no instruction that appears in program order after the DSB
        | instruction can alter any state of the system or perform any part of
        | its functionality until the DSB completes other than:
        |
        | * Being fetched from memory and decoded
        | * Reading the general-purpose, SIMD and floating-point,
        |   Special-purpose, or System registers that are directly or indirectly
        |   read without causing side-effects.
      
      However, the same document also states the following:
      
        DDI0487E_a, B2-125:
      
        | DMB and DSB instructions affect reads and writes to the memory system
        | generated by Load/Store instructions and data or unified cache
        | maintenance instructions being executed by the PE. Instruction fetches
        | or accesses caused by a hardware translation table access are not
        | explicit accesses.
      
      which appears to claim that the DSB alone is insufficient.  Unfortunately,
      some CPU designers have followed the second clause above, whereas in Linux
      we've been relying on the first. This means that our mapping sequence:
      
      	MOV	X0, <valid pte>
      	STR	X0, [Xptep]	// Store new PTE to page table
      	DSB	ISHST
      	LDR	X1, [X2]	// Translates using the new PTE
      
      can actually raise a translation fault on the load instruction because the
      translation can be performed speculatively before the page table update and
      then marked as "faulting" by the CPU. For user PTEs, this is ok because we
      can handle the spurious fault, but for kernel PTEs and intermediate table
      entries this results in a panic().
      
      Revert the offending commit to reintroduce the missing barriers.
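      
      A hedged sketch of the reinstated pattern in set_pte() and friends (the
      pte_valid_not_user() check mirrors the existing code; only valid kernel
      mappings need the extra barriers):
      
      	static inline void set_pte(pte_t *ptep, pte_t pte)
      	{
      		WRITE_ONCE(*ptep, pte);
      
      		if (pte_valid_not_user(pte)) {
      			dsb(ishst);
      			isb();
      		}
      	}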
      
      Cc: <stable@vger.kernel.org>
      Fixes: 24fe1b0e ("arm64: Remove unnecessary ISBs from set_{pte,pmd,pud}")
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: smp: Treat unknown boot failures as being 'stuck in kernel' · ebef7465
      Will Deacon authored
      When we fail to bring a secondary CPU online and it is left in an unknown
      state, we should assume the worst and increment 'cpus_stuck_in_kernel'
      so that things like kexec() are disabled.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: smp: Don't enter kernel with NULL stack pointer or task struct · 5b1cfe3a
      Will Deacon authored
      Although SMP bringup is inherently racy, we can significantly reduce
      the window during which secondary CPUs can unexpectedly enter the
      kernel by sanity checking the 'stack' and 'task' fields of the
      'secondary_data' structure. If the booting CPU gave up waiting for us,
      these fields will have been cleared to NULL and we should spin in a
      WFE/WFI loop instead.
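      
      The real check lives in the early assembly boot path; in C-flavoured
      pseudocode the idea is roughly:
      
      	/* If the boot CPU gave up on us, park rather than enter the kernel. */
      	if (!secondary_data.stack || !secondary_data.task) {
      		for (;;) {
      			wfe();
      			wfi();
      		}
      	}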
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: smp: Increase secondary CPU boot timeout value · 0e164555
      Will Deacon authored
      When many debug options are enabled simultaneously (e.g. PROVE_LOCKING,
      KMEMLEAK, DEBUG_PAGE_ALLOC, KASAN etc), it is possible for us to timeout
      when attempting to boot a secondary CPU and give up. Unfortunately, the
      CPU will /eventually/ appear, and sit in the background happily stuck
      in a recursive exception due to a NULL stack pointer.
      
      Increase the timeout to 5s, which will of course be enough for anybody.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      fdt: Update CRC check for rng-seed · dd753d96
      Hsin-Yi Wang authored
      Commit 428826f5 ("fdt: add support for rng-seed") moves of_fdt_crc32
      from early_init_dt_verify() to early_init_dt_scan() since
      early_init_dt_scan_chosen() may modify fdt to erase rng-seed.
      
      However, arm and some other arch won't call early_init_dt_scan(), they
      call early_init_dt_verify() then early_init_dt_scan_nodes().
      
      Restore of_fdt_crc32 to early_init_dt_verify(), then update it in
      early_init_dt_scan_chosen() if the fdt is updated.
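      
      In other words, the checksum is (re)computed over the whole blob, roughly
      as follows (a sketch using the existing FDT helpers):
      
      	/* In early_init_dt_verify(), and again in early_init_dt_scan_chosen()
      	 * after the rng-seed property has been wiped:
      	 */
      	of_fdt_crc32 = crc32_be(~0, initial_boot_params,
      				fdt_totalsize(initial_boot_params));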
      
      Fixes: 428826f5 ("fdt: add support for rng-seed")
      Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org>
      Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Will Deacon <will@kernel.org>
  4. 23 Aug, 2019 3 commits
  5. 22 Aug, 2019 2 commits
  6. 21 Aug, 2019 2 commits
    •
      arm64: add arch/arm64/Kbuild · 6bfa3134
      Masahiro Yamada authored
      Use the standard obj-y form to specify the sub-directories under
      arch/arm64/. No functional change intended.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: entry: Move ct_user_exit before any other exception · 2671828c
      James Morse authored
      When taking an SError or Debug exception from EL0, we run the C
      handler for these exceptions before updating the context tracking
      code and unmasking lower priority interrupts.
      
      When booting with nohz_full lockdep tells us we got this wrong:
      | =============================
      | WARNING: suspicious RCU usage
      | 5.3.0-rc2-00010-gb4b5e9dcb11b-dirty #11271 Not tainted
      | -----------------------------
      | include/linux/rcupdate.h:643 rcu_read_unlock() used illegally wh!
      |
      | other info that might help us debug this:
      |
      |
      | RCU used illegally from idle CPU!
      | rcu_scheduler_active = 2, debug_locks = 1
      | RCU used illegally from extended quiescent state!
      | 1 lock held by a.out/432:
      |  #0: 00000000c7a79515 (rcu_read_lock){....}, at: brk_handler+0x00
      |
      | stack backtrace:
      | CPU: 1 PID: 432 Comm: a.out Not tainted 5.3.0-rc2-00010-gb4b5e9d1
      | Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno De8
      | Call trace:
      |  dump_backtrace+0x0/0x140
      |  show_stack+0x14/0x20
      |  dump_stack+0xbc/0x104
      |  lockdep_rcu_suspicious+0xf8/0x108
      |  brk_handler+0x164/0x1b0
      |  do_debug_exception+0x11c/0x278
      |  el0_dbg+0x14/0x20
      
      Moving the ct_user_exit calls to be before do_debug_exception() means
      they are also before trace_hardirqs_off() has been updated. Add a new
      ct_user_exit_irqoff macro to avoid the context-tracking code using
      irqsave/restore before we've updated trace_hardirqs_off(). To be
      consistent, do this everywhere.
      
      The C helper is called enter_from_user_mode() to match x86 in the hope
      we can merge them into kernel/context_tracking.c later.
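      
      Presumably the C helper is little more than the following sketch (the
      context-tracking calls are the existing kernel APIs):
      
      	asmlinkage void notrace enter_from_user_mode(void)
      	{
      		CT_WARN_ON(ct_state() != CONTEXT_USER);
      		user_exit_irqoff();
      	}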
      
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  7. 20 Aug, 2019 5 commits
  8. 19 Aug, 2019 1 commit
  9. 15 Aug, 2019 3 commits
    •
      kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE · 34b5560d
      Mark Rutland authored
      The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
      KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.
      
      Since commit:
      
        6bd1d0be ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
      
      ... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
      defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
      CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
      splats due to KASAN_SHADOW_OFFSET not being set:
      
      | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
      | scripts/kconfig/conf  --syncconfig Kconfig
      |   CC      scripts/mod/empty.o
      | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
      | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
      | make[1]: *** [scripts/mod/empty.o] Error 1
      | Makefile:1123: recipe for target 'prepare0' failed
      | make: *** [prepare0] Error 2
      
      Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
      KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
      common definition of CFLAGS_KASAN_NOSANITIZE to the top of
      Makefile.kasan.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: unexport set_memory_x and set_memory_nx · d225bb8d
      Christoph Hellwig authored
      No module currently messes with clearing or setting the execute
      permission of kernel memory, and none really should.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Will Deacon <will@kernel.org>
    •
      arm64: smp: disable hotplug on trusted OS resident CPU · d55c5f28
      Sudeep Holla authored
      The trusted OS may reject CPU_OFF calls to its resident CPU, so we must
      avoid issuing those. We never migrate a Trusted OS and we already take
      care to prevent the CPU_OFF PSCI call. However, this is not reflected
      explicitly to userspace: any user can attempt to hotplug the trusted OS
      resident CPU. The entire sequence of state transitions in the CPU hotplug
      state machine is then executed, only for the PSCI layer to finally refuse
      the CPU_OFF call.
      
      This results in unnecessary unwinding of the CPU hotplug state machine in
      the kernel. Instead, we can mark the trusted OS resident CPU as not
      available for hotplug, so that any such attempt or request is rejected
      immediately.
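      
      A hedged sketch of the approach (psci_tos_resident_on() is the existing
      PSCI helper; the resulting flag feeds into whether a CPU is registered as
      hotpluggable):
      
      	/* Report whether this CPU can be taken offline at all. */
      	static bool cpu_psci_cpu_can_disable(unsigned int cpu)
      	{
      		return !psci_tos_resident_on(cpu);
      	}
      
      so that a hotplug attempt on the trusted OS resident CPU can be rejected
      immediately.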
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  10. 14 Aug, 2019 6 commits