- 01 Apr, 2015 2 commits
-
-
Bo Yan authored
Register MIDR_EL1 is masked to get variant and revision fields, then compared against midr_range_min and midr_range_max when checking whether CPU is affected by any particular erratum. However, variant and revision fields in MIDR_EL1 are separated by 16 bits, so the min and max of midr range should be constructed accordingly, otherwise the patch will not be applied when variant field is non-0. Cc: stable@vger.kernel.org # 3.19+ Acked-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Paul Walmsley <paul@pwsan.com> Signed-off-by: Bo Yan <byan@nvidia.com> [will: use MIDR_VARIANT_SHIFT to construct upper bound] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
When running a compat (AArch32) userspace on Cortex-A53, a load at EL0 from a virtual address that matches the bottom 32 bits of the virtual address used by a recent load at (AArch64) EL1 might return incorrect data. This patch works around the issue by writing to the contextidr_el1 register on the exception return path when returning to a 32-bit task. This workaround is patched in at runtime based on the MIDR value of the processor. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 31 Mar, 2015 1 commit
-
-
Joe Perches authored
Use the normal return values for bool functions Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
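A minimal before/after illustration of the cleanup (hypothetical function, not one the patch actually touched):

```c
#include <stdbool.h>

/* Before: integer literals used where a bool is returned. */
static bool has_feature_old(unsigned long flags, unsigned long bit)
{
	if (flags & bit)
		return 1;
	return 0;
}

/* After: the normal bool return values. */
static bool has_feature_new(unsigned long flags, unsigned long bit)
{
	return flags & bit;
}
```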
-
- 30 Mar, 2015 4 commits
-
-
Will Deacon authored
Enable a few useful options in our defconfig: - New platform support (exynos7, seattle, tegra132) - SKY2 (ethernet in newer revisions of Juno) - Xgene reboot support - Virtio-pci for kvmtool and qemu - EFIVAR_FS (previously selected as a module) - NFSv4 Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
As we detect more architectural features at runtime, it makes sense to reuse the existing framework whilst avoiding calling a feature an erratum... This patch extracts the core capability parsing, moves it into a new file (cpufeature.c), and lets the CPU errata detection code use it. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
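A minimal sketch of the shared capability-table idea (the struct and helper names are assumptions, not the exact cpufeature.c API):

```c
#include <linux/bitops.h>
#include <linux/printk.h>
#include <linux/types.h>

struct cpu_capability {
	const char *desc;
	u16 capability;					/* bit to set when matched */
	bool (*matches)(const struct cpu_capability *self);
};

/* Walk a table of capabilities/errata and record whichever ones match;
 * both the feature code and the errata code can share this loop. */
static void update_cpu_capabilities(const struct cpu_capability *caps,
				    unsigned long *cap_bits, const char *info)
{
	for (; caps->matches; caps++) {
		if (!caps->matches(caps))
			continue;
		if (!test_bit(caps->capability, cap_bits) && caps->desc)
			pr_info("%s %s\n", info, caps->desc);
		set_bit(caps->capability, cap_bits);
	}
}
```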
-
Marc Zyngier authored
Since all immediate branches are PC-relative on AArch64, these instructions cannot be used as an alternative with the simplistic approach we currently have (the immediate has been computed from the .altinstr_replacement section, and ends up being completely off if we insert it directly). This patch handles the b and bl instructions in a different way, using the insn framework to recompute the immediate and generate the right displacement. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
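The displacement recomputation can be pictured with the following hedged C sketch (the helper name and encoding constants are spelled out here for illustration; the kernel does this through the insn framework):

```c
#include <stdint.h>

/* An AArch64 b/bl carries a signed 26-bit word offset in bits [25:0]. */
#define BRANCH_IMM_MASK	0x03ffffffu

static uint32_t fixup_branch(uint32_t insn, uint64_t alt_pc, uint64_t orig_pc)
{
	int64_t imm26 = ((int32_t)(insn << 6)) >> 6;	/* sign-extend imm26 */
	uint64_t target = alt_pc + imm26 * 4;		/* real branch target */
	int64_t new_off = (int64_t)(target - orig_pc);	/* offset from patch site */

	/* Re-encode the immediate for the address the insn is patched into. */
	return (insn & ~BRANCH_IMM_MASK) |
	       (((uint64_t)new_off >> 2) & BRANCH_IMM_MASK);
}
```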
-
Marc Zyngier authored
Patching an instruction sometimes requires extracting the immediate field from this instruction. To facilitate this, and avoid potential duplication of code, add aarch64_insn_decode_immediate as the reciprocal to aarch64_insn_encode_immediate. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 Mar, 2015 2 commits
-
-
Will Deacon authored
Just as we thought we'd fixed this, another old linker reared its ugly head trying to build linux-next. Unfortunately, it's the linker binary provided on kernel.org, so give up trying to be clever and align the hyp page to 4k.
-
Ard Biesheuvel authored
Older binutils do not support expressions involving the values of external symbols so just round up the HYP region to the page size. Tested-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> [will: when will this ever end?!] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 Mar, 2015 2 commits
-
-
Will Deacon authored
linux-next testing found a bug with the PROVIDE keyword and older versions of binutils, so Ard has fixed that here.
-
Ard Biesheuvel authored
Using ASSERT() with an expression that involves a symbol that is only supplied through a PROVIDE() definition in the linker script itself is apparently not supported by some older versions of binutils. So instead, rewrite the expression so that only the section boundaries __hyp_idmap_text_start and __hyp_idmap_text_end are used. Note that this reverts the fix in 06f75a1f ("ARM, arm64: kvm: get rid of the bounce page") for the ASSERT() being triggered erroneously when unrelated linker emitted veneers happen to end up in the HYP idmap region. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 Mar, 2015 4 commits
-
-
Mark Rutland authored
We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the guarantee that the kernel Image is clean, not invalidated, in the caches, and therefore we might read a stale value once the MMU is enabled. This patch ensures we invalidate the corresponding cacheline after the write, as we do for all other data written before we set SCTLR_EL1.{C,M}, guaranteeing that the value will be visible later. We rely on the DSBs in __create_page_tables to complete the maintenance. Signed-off-by: Mark Rutland <mark.rutland@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Historically, the PMU devicetree bindings have expected SPIs to be listed in order of *logical* CPU number. This is problematic for bootloaders, especially when the boot CPU (logical ID 0) isn't listed first in the devicetree. This patch adds a new optional property, interrupt-affinity, to the PMU node which allows the interrupt affinity to be described using a list of phandles to CPU nodes, with each entry in the list corresponding to the SPI at the same index in the interrupts property. Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
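A consumer of the new property could resolve each entry roughly like this (a sketch using the standard of_parse_phandle() helper; not necessarily the exact code added to the PMU driver):

```c
#include <linux/of.h>

/* Entry i in "interrupt-affinity" names the CPU for the SPI at index i
 * in "interrupts". The caller owns the returned reference (of_node_put()). */
static struct device_node *pmu_irq_affinity_cpu(struct device_node *pmu_node,
						int irq_index)
{
	return of_parse_phandle(pmu_node, "interrupt-affinity", irq_index);
}
```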
-
Will Deacon authored
The current ARM PMU binding relies on the PMU interrupts being listed in CPU logical order, which the device-tree author simply cannot know anything about. This patch introduces a new "interrupt-affinity" property, which makes the relationship between the PMU interrupts and their corresponding CPU explicit. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
After writing the page tables, we use __inval_cache_range to invalidate any stale cache entries. Strongly Ordered memory accesses are not ordered w.r.t. cache maintenance instructions, and hence explicit memory barriers are required to provide this ordering. However, __inval_cache_range was written to be used on Normal Cacheable memory once the MMU and caches are on, and does not have any barriers prior to the DC instructions. This patch adds a DMB between the page tables being written and the corresponding cachelines being invalidated, ensuring that the invalidation makes the new data visible to subsequent cacheable accesses. A barrier is not required before the prior invalidate as we do not access the page table memory area prior to this, and earlier barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure ordering w.r.t. any stores performed prior to entering Linux. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot") Signed-off-by: Will Deacon <will.deacon@arm.com>
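Conceptually, the required ordering looks like the following C rendering (head.S does this in assembly; the 64-byte stride is an assumed cache-line size):

```c
static inline void publish_then_invalidate(void *va, unsigned long size)
{
	unsigned long p = (unsigned long)va;
	unsigned long end = p + size;

	asm volatile("dmb sy" ::: "memory");	/* order prior stores vs. DC ops */
	for (; p < end; p += 64)
		asm volatile("dc ivac, %0" :: "r" (p) : "memory");
	asm volatile("dsb sy" ::: "memory");	/* complete the maintenance */
}
```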
-
- 23 Mar, 2015 4 commits
-
-
Ard Biesheuvel authored
This patch modifies the HYP init code so it can deal with system RAM residing at an offset which exceeds the reach of VA_BITS. Like for EL1, this involves configuring an additional level of translation for the ID map. However, in case of EL2, this implies that all translations use the extra level, as we cannot seamlessly switch between translation tables with different numbers of translation levels. So add an extra translation table at the root level. Since the ID map and the runtime HYP map are guaranteed not to overlap, they can share this root level, and we can essentially merge these two tables into one. Tested-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset. This patch relaxes that requirement, by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when system RAM's offset is found to be out of reach at runtime. Tested-by: Laura Abbott <lauraa@codeaurora.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Rework of the KVM HYP bounce page from Ard Biesheuvel. Subsequent arm64 idmap rework depends on this, so merge it here with Marc Zyngier's blessing (kvm-arm co-maintainer).
-
Ard Biesheuvel authored
Commit 06f75a1f ("ARM, arm64: kvm: get rid of the bounce page") uses ld's builtin function LOG2CEIL() to align the KVM init code to a log2 upper bound of its size. However, this function turns out to be a fairly recent addition to binutils, which breaks the build for older toolchains. So instead, implement a replacement LOG2_ROUNDUP() using the C preprocessor. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
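One way such a preprocessor replacement can be written is sketched below; the macro name follows the commit message, but the exact form used in the kernel may differ:

```c
/* Log2 ceiling for small, link-time-evaluable sizes; the chained ?: is
 * plain text to the preprocessor and is evaluated by the linker, which
 * supports conditional expressions. */
#define LOG2_ROUNDUP(size)			\
	((size) <= 1	? 0 :			\
	 (size) <= 2	? 1 :			\
	 (size) <= 4	? 2 :			\
	 (size) <= 8	? 3 :			\
	 (size) <= 16	? 4 :			\
	 (size) <= 32	? 5 :			\
	 (size) <= 64	? 6 :			\
	 (size) <= 128	? 7 :			\
	 (size) <= 256	? 8 :			\
	 (size) <= 512	? 9 : 10)

/* Hypothetical use in a preprocessed linker script:
 *	. = ALIGN(1 << LOG2_ROUNDUP(__hyp_idmap_size));
 */
```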
-
- 19 Mar, 2015 18 commits
-
-
Will Deacon authored
cpu_get_pgd isn't used anywhere and is Probably Not What You Want. Remove it before anybody decides to use it. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
According to the arm64 boot protocol, registers x1 to x3 should be zero upon kernel entry, and non-zero values are reserved for future use. This future use is going to be problematic if we never enforce the current rules, so start enforcing them now, by emitting a warning if non-zero values are detected. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
This removes the function __calc_phys_offset and all open coded virtual to physical address translations using the offset kept in x28. Instead, just use absolute or PC-relative symbol references as appropriate when referring to virtual or physical addresses, respectively. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Enabling of the MMU is split into two functions, with an align and a branch in the middle. On arm64, the entire kernel Image is ID mapped, so this is really not necessary, and we can just merge it into a single function. This also replaces an open-coded adrp/add pair referring to __enable_mmu with adr_l. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
Replace the confusing virtual/physical address arithmetic with a simple PC-relative reference. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
This removes the confusing __switch_data object from head.S, and replaces it with standard PC-relative references to the various symbols it encapsulates. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
The global processor_id is assigned the MIDR_EL1 value of the boot CPU in the early init code, but is never referenced afterwards. As the relevance of the MIDR_EL1 value of the boot CPU is debatable anyway, especially under big.LITTLE, let's remove it before anyone starts using it. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
The adrp instruction is mostly used in combination with either an add, a ldr or a str instruction with the low bits of the referenced symbol in the 12-bit immediate of the followup instruction. Introduce the macros adr_l, ldr_l and str_l that encapsulate these common patterns. Tested-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
struct cpu_table is an artifact left from the (very) early days of the arm64 port, and its only real use is to allow the most beautiful "AArch64 Processor" string to be displayed at boot time. Really? Yes, really. Let's get rid of it. In order to avoid another BogoMips-gate, the aforementioned string is preserved. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ganapatrao Kulkarni authored
Raise the maximum CPU limit to 4096 in preparation for upcoming platforms with large core counts. Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K. Poulose authored
The perf core implicitly rejects events spanning multiple HW PMUs, as in these cases the event->ctx will differ. However this validation is performed after pmu::event_init() is called in perf_init_event(), and thus pmu::event_init() may be called with a group leader from a different HW PMU. The ARM64 PMU driver does not take this fact into account, and when validating groups assumes that it can call to_arm_pmu(event->pmu) for any HW event. When the event in question is from another HW PMU this is wrong, and results in dereferencing garbage. This patch updates the ARM64 PMU driver to first test for and reject events from other PMUs, moving the to_arm_pmu and related logic after this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with a CCI PMU present:

  Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
  CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
  Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
  task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
  PC is at 0x0
  LR is at validate_event+0x90/0xa8
  pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
  sp : ffffffc07b0a3ba0
  [<          (null)>]           (null)
  [<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
  [<ffffffc00015d870>] perf_try_init_event+0x34/0x70
  [<ffffffc000164094>] perf_init_event+0xe0/0x10c
  [<ffffffc000164348>] perf_event_alloc+0x288/0x358
  [<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
  Code: bad PC value

Also cleans up the code to use the arm_pmu only when we know that we are dealing with an arm pmu event. Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
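The shape of the resulting check is roughly as follows (a sketch based on the commit description, not a verbatim copy of the driver change):

```c
static bool validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
			   struct perf_event *event)
{
	struct arm_pmu *armpmu;

	if (is_software_event(event))
		return true;

	/* Reject events owned by a different HW PMU (e.g. a CCI PMU). */
	if (event->pmu != pmu)
		return false;

	if (event->state < PERF_EVENT_STATE_OFF)
		return true;

	/* Only now is to_arm_pmu() known to be safe. */
	armpmu = to_arm_pmu(event->pmu);
	return armpmu->get_event_idx(hw_events, event) >= 0;
}
```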
-
Ard Biesheuvel authored
The HYP init bounce page is a runtime construct that ensures that the HYP init code does not cross a page boundary. However, this is something we can do perfectly well at build time, by aligning the code appropriately. For arm64, we just align to 4 KB, and enforce that the code size is less than 4 KB, regardless of the chosen page size. For ARM, the whole code is less than 256 bytes, so we tweak the linker script to align at a power-of-2 upper bound of the code size. Note that this also fixes a benign off-by-one error in the original bounce page code, where a bounce page would be allocated unnecessarily if the code was exactly 1 page in size. On ARM, it also fixes an issue with very large kernels reported by Arnd Bergmann, where stub sections with linker-emitted veneers could erroneously trigger the size/alignment ASSERT() in the linker script. Tested-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
This changes the AES core transform implementations to issue aese/aesmc (and aesd/aesimc) in pairs. This enables a micro-architectural optimization in recent Cortex-A5x cores that improves performance by 50-90%. Measured performance in cycles per byte (Cortex-A57):

            CBC enc    CBC dec    CTR
  before      3.64       1.34     1.32
  after       1.95       0.85     0.93

Note that this results in a ~5% performance decrease for older cores. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Fixmap indices are in the interval (FIX_HOLE, __end_of_fixed_addresses), but in __set_fixmap we only check idx <= __end_of_fixed_addresses, and therefore indices <= FIX_HOLE are erroneously accepted. If called with such an idx, __set_fixmap may corrupt page tables outside of the fixmap region. This patch ensures that we validate the idx against both endpoints of the interval. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Kees Cook <keescook@chromium.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
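The fixed validation can be pictured like this (fixmap_pte() stands in for the file-local pte lookup helper; a sketch, not necessarily the exact mm code):

```c
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags)
{
	unsigned long addr = __fix_to_virt(idx);
	pte_t *pte;

	/* Reject both out-of-range ends, not just the upper one. */
	BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses);

	pte = fixmap_pte(addr);
	if (pgprot_val(flags)) {
		set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
	} else {
		pte_clear(&init_mm, addr, pte);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
	}
}
```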
-
Mark Rutland authored
The FIX_TEXT_POKE0 is currently at the end of the temporary fixmap slots, despite the fact that it can be used at any point during runtime (e.g. for poking the text of loaded modules), and thus should be a permanent fixmap slot (as is the case on arm and x86). This patch moves FIX_TEXT_POKE0 into the set of permanent fixmap slots. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Kees Cook <keescook@chromium.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Daniel Borkmann authored
This effectively unexports the set_memory_ro and set_memory_rw functions from commit 11d91a77 ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support"). No module user of those is in the mainline kernel and we explicitly do not want modules to use these functions, as they, for example, RO-protect eBPF (interpreted and JIT'ed) images from malicious modifications/bugs. Outside of the eBPF scope, I believe the other set_memory_* functions should also be unexported on arm64 due to non-existent mainline module users. Laura mentioned that they have some uses for modules doing set_memory_*, but none that are in mainline and it's unclear if they would ever get there. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Alexander Graf authored
With binutils 2.25 the default alignment for 32-bit ARM sections changed to have everything 64k aligned. ARMv7 binaries built with this binutils version run successfully on an arm64 system. Since there is now effectively the chance to run ARMv7 code on arm64 even with a 64k page size, it doesn't make sense to block people from enabling CONFIG_COMPAT in those configurations. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Andreas Schwab authored
The arm mmap2 syscall takes the offset in units of 4K, thus with 64K pages the offset needs to be scaled to units of pages. Signed-off-by: Andreas Schwab <schwab@suse.de> Signed-off-by: Alexander Graf <agraf@suse.de> [will: removed redundant lr parameter, localised PAGE_SHIFT #if check] Signed-off-by: Will Deacon <will.deacon@arm.com>
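A hedged C sketch of the conversion (the wrapper name is illustrative; the real change lives in the compat syscall path):

```c
#include <linux/syscalls.h>

#define MMAP2_SHIFT	12	/* mmap2 counts its offset in 4K units */

asmlinkage long compat_mmap2_wrapper(unsigned long addr, unsigned long len,
				     unsigned long prot, unsigned long flags,
				     unsigned long fd, unsigned long off_4k)
{
	/* Reject offsets that are not aligned to the native page size. */
	if (off_4k & ((1 << (PAGE_SHIFT - MMAP2_SHIFT)) - 1))
		return -EINVAL;

	/* Rescale from 4K units to units of PAGE_SIZE (a no-op for 4K pages). */
	return sys_mmap_pgoff(addr, len, prot, flags, fd,
			      off_4k >> (PAGE_SHIFT - MMAP2_SHIFT));
}
```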
-
- 17 Mar, 2015 3 commits
-
-
Steve Capper authored
Commit f4f75ad5 ("efi: efistub: Convert into static library") introduced a static library for the EFI stub, libstub. The EFI libstub directory is referenced by the kernel build system via an obj subdirectory rule in:

  drivers/firmware/efi/Makefile

Unfortunately, arm64 also references the EFI libstub via:

  libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/

If we're unlucky, the kernel build system can enter libstub via two simultaneous threads, resulting in build failures such as:

  fixdep: error opening depfile: drivers/firmware/efi/libstub/.efi-stub-helper.o.d: No such file or directory
  scripts/Makefile.build:257: recipe for target 'drivers/firmware/efi/libstub/efi-stub-helper.o' failed
  make[1]: *** [drivers/firmware/efi/libstub/efi-stub-helper.o] Error 2
  Makefile:939: recipe for target 'drivers/firmware/efi/libstub' failed
  make: *** [drivers/firmware/efi/libstub] Error 2
  make: *** Waiting for unfinished jobs....

This patch adjusts the arm64 Makefile to reference the compiled library explicitly (as is currently done in x86), rather than the directory. Fixes: f4f75ad5 ("efi: efistub: Convert into static library") Signed-off-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
We currently don't log the boot mode for arm64 as we do for arm, and without KVM the user is provided with no indication as to which mode(s) CPUs were booted in, which can seriously hinder debugging in some cases. Add logging to the boot path once all CPUs are up. Where CPUs are mismatched in violation of the boot protocol, WARN and set a taint (as we do for other CPU feature mismatches), given that the firmware/bootloader is buggy and should be fixed. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Commit 828e9834 ("arm64: head: create a new function for setting the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value replacing uses of zero. However it failed to update __boot_cpu_mode appropriately. A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[0], and a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[1]. Later is_hyp_mode_mismatched() determines there to be a mismatch if __boot_cpu_mode[0] != __boot_cpu_mode[1]. If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value of zero, and is_hyp_mode_mismatched will erroneously determine that the boot modes are mismatched. This hasn't been a problem so far, but later patches which will make use of is_hyp_mode_mismatched() expect it to work correctly. This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing the erroneous mismatch detection when all CPUs are booted at EL1. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
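For reference, the check the commit message describes boils down to the following (illustrative; the flag itself is defined and written in head.S):

```c
#include <linux/types.h>

extern u32 __boot_cpu_mode[2];	/* one slot per boot exception level */

static inline bool is_hyp_mode_mismatched(void)
{
	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
}
```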
-