- 03 Oct, 2018 1 commit
-
-
Arnd Bergmann authored
arm64_1188873_read_cntvct_el0() is protected by the correct CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, and causes a warning if that is disabled: drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function] Since the erratum requires that we always apply the workaround in the timer driver, select that symbol as we do for SoC-specific errata. Fixes: 95b861a4 ("arm64: arch_timer: Add workaround for ARM erratum 1188873") Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Oct, 2018 20 commits
-
-
Marc Zyngier authored
It recently came to light that userspace can execute WFI, and that the arm64 kernel doesn't trap this event. This sounds rather benign, but the kernel should decide when it wants to wait for an interrupt, and not userspace. Let's trap WFI and immediately return after having skipped the instruction. This effectively makes WFI a rather expensive NOP. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
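As a rough illustration (not taken from the patch itself), the userspace side is nothing more than executing the instruction; with this change the kernel traps it at EL0, skips it, and returns:

    /* Hedged sketch: AArch64-only userspace program that executes WFI.
     * With the trap in place, the kernel skips the instruction and returns,
     * so it behaves like an expensive NOP. */
    #include <stdio.h>

    int main(void)
    {
            asm volatile("wfi");    /* may trap to the kernel; no visible effect */
            printf("back from wfi\n");
            return 0;
    }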
-
Will Deacon authored
We advertise the MRS/MSR instructions for toggling SSBS at EL0 using an HWCAP, so document it along with the others. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
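For reference, a minimal userspace check of the new HWCAP might look like the sketch below; the fallback bit value is an assumption, and the authoritative definition lives in the kernel's asm/hwcap.h:

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_SSBS
    #define HWCAP_SSBS (1UL << 28)  /* assumed bit; see arch/arm64/include/uapi/asm/hwcap.h */
    #endif

    int main(void)
    {
            unsigned long caps = getauxval(AT_HWCAP);

            printf("SSBS MSR/MRS at EL0: %s\n",
                   (caps & HWCAP_SSBS) ? "advertised" : "not advertised");
            return 0;
    }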
-
Giacomo Travaglini authored
Fix some typos in our hwcap documentation, where we refer to the wrong ID register for some of the capabilities. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Giacomo Travaglini <giacomo.travaglini@arm.com> [will: fix amusing binary constants] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhong jiang authored
There is an extra semicolon in arch_prepare_kprobe, remove it. Signed-off-by: zhong jiang <zhongjiang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
At present numa_free_distance() is being called before numa_distance is even initialized with numa_alloc_distance(), which is really pointless. Instead, let's call numa_free_distance() on the common error path inside numa_init() after numa_alloc_distance() has been successful. Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms") Acked-by: Punit Agrawal <punit.agrawal@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
The dummy node ID is marked into all memory ranges on the system, so the dummy node really extends over the entire memblock.memory. Hence report correct extent information for the dummy node using the memblock range helper functions instead of the range [0LLU, PFN_PHYS(max_pfn) - 1]. Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms") Acked-by: Punit Agrawal <punit.agrawal@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
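A kernel-style sketch of the idea (not the exact hunk from the patch): derive the dummy node's span from memblock instead of assuming it starts at physical address zero.

    /* Sketch only: report the dummy node's extent from memblock rather than
     * from [0, PFN_PHYS(max_pfn) - 1]. */
    phys_addr_t start = memblock_start_of_DRAM();
    phys_addr_t end   = memblock_end_of_DRAM();

    pr_info("NUMA: Faking a node at [mem %#018Lx-%#018Lx]\n",
            (u64)start, (u64)end - 1);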
-
Anshuman Khandual authored
fault_info[] and debug_fault_info[] are static arrays defining memory abort exception handling functions looking into ESR fault status code encodings. As esr_to_fault_info() is already available providing fault_info[] array lookup, it really makes sense to have a corresponding debug_fault_info[] array lookup function as well. This just adds an equivalent helper function esr_to_debug_fault_info(). Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
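The shape of such a helper is roughly the sketch below (simplified; the real lookup uses the kernel's struct fault_info and the DBG_ESR_EVT() macro from asm/debug-monitors.h, whose field extraction is assumed here):

    /* Sketch: index debug_fault_info[] by the ESR debug event field,
     * mirroring esr_to_fault_info(). */
    #define DBG_ESR_EVT(esr)        (((esr) >> 27) & 0x7)  /* assumed bits [29:27] */

    static inline const struct fault_info *esr_to_debug_fault_info(unsigned int esr)
    {
            return debug_fault_info + DBG_ESR_EVT(esr);
    }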
-
Anshuman Khandual authored
Most memory abort exception handling related functions have the arguments in the order (addr, esr, regs) except is_el1_permission_fault(). This changes the argument order in this function as (addr, esr, regs) like others. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
Just replace the hard-coded value of 63 (0b111111) with the existing ESR_ELx_FSC macro when parsing the fault status code during a fault exception. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
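In other words (a sketch; "esr" is an illustrative variable, and the mask value matches the asm/esr.h definition at the time):

    /* ESR_ELx_FSC masks the 6-bit fault status code in ESR bits [5:0]. */
    #define ESR_ELx_FSC     (0x3F)

    /* before: unsigned int fsc = esr & 63; */
    unsigned int fsc = esr & ESR_ELx_FSC;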
-
Marc Zyngier authored
When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Just like CNTVCT, we need to handle userspace trapping into the kernel if we've decided that the timer wasn't fit for purpose... 64bit userspace is already dealt with, but we're missing the equivalent compat handling. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Since people seem to make a point in breaking the userspace visible counter, we have no choice but to trap the access. We already do this for 64bit userspace, but this is lacking for compat. Let's provide the required handler. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
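For context, the 64bit userspace access that is already trapped looks like the sketch below; the compat case is the AArch32 equivalent (an MRRC to CNTVCT), which this patch now handles the same way.

    #include <stdint.h>
    #include <stdio.h>

    /* AArch64 userspace read of the virtual counter. When the kernel has
     * disabled direct access, this traps and is emulated transparently. */
    static inline uint64_t read_cntvct(void)
    {
            uint64_t val;

            asm volatile("mrs %0, cntvct_el0" : "=r" (val));
            return val;
    }

    int main(void)
    {
            printf("cntvct_el0 = %llu\n", (unsigned long long)read_cntvct());
            return 0;
    }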
-
Marc Zyngier authored
We're now ready to start handling CP15 access. Let's add (empty) arrays for both 32 and 64bit accessors, and the code that deals with them. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Here's a /really nice/ part of the architecture: a CP15 access is allowed to trap even if it fails its condition check, and SW must handle it. This includes decoding the IT state if this happens in an IT block. As a consequence, SW must also deal with advancing the IT state machine. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Instead of directly generating an UNDEF when trapping a CP15 access, let's add a new entry point to that effect (which only generates an UNDEF for now). Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
So far, we don't have anything to help decoding ESR_ELx when dealing with ESR_ELx_EC_CP15_{32,64}. As we're about to handle some of those, let's add some useful macros. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
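The sort of helpers this adds looks like the sketch below; the macro names are illustrative placeholders, and the field layout follows the Arm ARM ISS encoding for MCR/MRC traps rather than the patch's exact definitions.

    /* Sketch of CP15 32-bit ISS field extraction (names and layout assumed). */
    #define CP15_32_ISS_OP2(esr)    (((esr) >> 17) & 0x7)
    #define CP15_32_ISS_OP1(esr)    (((esr) >> 14) & 0x7)
    #define CP15_32_ISS_CRN(esr)    (((esr) >> 10) & 0xf)
    #define CP15_32_ISS_RT(esr)     (((esr) >>  5) & 0x1f)
    #define CP15_32_ISS_CRM(esr)    (((esr) >>  1) & 0xf)
    #define CP15_32_ISS_DIR(esr)    ((esr) & 0x1)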
-
Ard Biesheuvel authored
arm64 does not define CONFIG_HAVE_ARCH_COMPILER_H, nor does it keep anything useful in its copy of asm/compiler.h, so let's remove it before anybody starts using it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
arch/arm/ defines a SIGMINSTKSZ of 2k, so we should use the same value for compat tasks. Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Reported-by: Steve McIntyre <steve.mcintyre@arm.com> Tested-by: Steve McIntyre <93sam@debian.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
The sigaltstack(2) system call fails with -ENOMEM if the new alternative signal stack is found to be smaller than SIGMINSTKSZ. On architectures such as arm64, where the native value for SIGMINSTKSZ is larger than the compat value, this can result in an unexpected error being reported to a compat task. See, for example: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=904385 This patch fixes the problem by extending do_sigaltstack to take the minimum signal stack size as an additional parameter, allowing the native and compat system call entry code to pass in their respective values. COMPAT_SIGMINSTKSZ is just defined as SIGMINSTKSZ if it has not been defined by the architecture. Cc: Arnd Bergmann <arnd@arndb.de> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Oleg Nesterov <oleg@redhat.com> Reported-by: Steve McIntyre <steve.mcintyre@arm.com> Tested-by: Steve McIntyre <93sam@debian.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
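The failure mode is easy to reproduce from userspace; a minimal test (using the glibc-visible MINSIGSTKSZ name) looks like this sketch:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>

    int main(void)
    {
            static char buf[1024];          /* deliberately undersized */
            stack_t ss = {
                    .ss_sp    = buf,
                    .ss_size  = sizeof(buf),
                    .ss_flags = 0,
            };

            if (sigaltstack(&ss, NULL) == -1 && errno == ENOMEM)
                    printf("rejected: %zu bytes is below MINSIGSTKSZ (%ld)\n",
                           sizeof(buf), (long)MINSIGSTKSZ);
            else
                    printf("accepted\n");
            return 0;
    }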
-
Rob Herring authored
In preparation to remove the node name pointer from struct device_node, convert printf users to use the %pOFn format specifier. Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
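The conversion itself is mechanical; a kernel-style before/after sketch, where "np" stands for a struct device_node pointer:

    /* before: dereferences the node name pointer directly */
    pr_err("%s: missing reg property\n", np->name);

    /* after: lets the printf core fetch the name via %pOFn */
    pr_err("%pOFn: missing reg property\n", np);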
-
- 25 Sep, 2018 4 commits
-
-
Jun Yao authored
Now that deliberate writes to swapper_pg_dir are made via the fixmap, we can defend against errant writes by moving it into the rodata section. Since tramp_pg_dir and reserved_ttbr0 must be at a fixed offset from swapper_pg_dir, and are not modified at runtime, these are also moved into the rodata section. Likewise, idmap_pg_dir is not modified at runtime, and is moved into rodata. Signed-off-by: Jun Yao <yaojun8558363@gmail.com> Reviewed-by: James Morse <james.morse@arm.com> [Mark: simplify linker script, commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jun Yao authored
Once swapper_pg_dir is in the rodata section, it will not be possible to modify it directly, but we will need to modify it in some cases. To enable this, we can use the fixmap when deliberately modifying swapper_pg_dir. As the pgd is only transiently mapped, this provides some resilience against illicit modification of the pgd, e.g. for Kernel Space Mirror Attack (KSMA). Signed-off-by: Jun Yao <yaojun8558363@gmail.com> Reviewed-by: James Morse <james.morse@arm.com> [Mark: simplify ifdeffery, commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jun Yao authored
Since the address of swapper_pg_dir is fixed for a given kernel image, it is an attractive target for manipulation via an arbitrary write. To mitigate this we'd like to make it read-only by moving it into the rodata section. We require that swapper_pg_dir is at a fixed offset from tramp_pg_dir and reserved_ttbr0, so these will also need to move into rodata. However, swapper_pg_dir is allocated along with some transient page tables used for boot which we do not want to move into rodata. As a step towards this, this patch separates the boot-time page tables into a new init_pg_dir, and reduces swapper_pg_dir to the single page it needs to be. This allows us to retain the relationship between swapper_pg_dir, tramp_pg_dir, and reserved_ttbr0, while cleanly separating these from the boot-time page tables. The init_pg_dir holds all of the pgd/pud/pmd/pte levels needed during boot, and all of these levels will be freed when we switch to the swapper_pg_dir, which is initialized by the existing code in paging_init(). Since we start off on the init_pg_dir, we no longer need to allocate a transient page table in paging_init() in order to ensure that swapper_pg_dir isn't live while we initialize it. There should be no functional change as a result of this patch. Signed-off-by: Jun Yao <yaojun8558363@gmail.com> Reviewed-by: James Morse <james.morse@arm.com> [Mark: place init_pg_dir after BSS, fold mm changes, commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jun Yao authored
In subsequent patches we'll use a transient pgd during the primary cpu's boot process. To make this work while allowing secondary cpus to use the swapper_pg_dir, we need to pass the relevant TTBR1 pgd as a parameter to __enable_mmu(). This patch updates __enable_mmu() to take this as a parameter, updating callsites to pass swapper_pg_dir for now. There should be no functional change as a result of this patch. Signed-off-by: Jun Yao <yaojun8558363@gmail.com> Reviewed-by: James Morse <james.morse@arm.com> [Mark: simplify assembly, clarify commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Sep, 2018 1 commit
-
-
Tri Vo authored
x0 is not callee-saved in the PCS. So there is no need to specify -fcall-used-x0. Clang doesn't currently support -fcall-used flags. This patch will help building the kernel with clang. Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Tri Vo <trong@android.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 Sep, 2018 5 commits
-
-
Andrew Murray authored
Support for VGA_CONSOLE is not allowable due to commit ee23794b ("video: vgacon: Don't build on arm64"), thus remove the associated unused code. Whilst PCI on arm64 would support VGA, a valid screen_info structure is missing. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
James Morse authored
include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as relevant when parts of the memmap have been free()d. This would happen on systems where memory is smaller than a sparsemem-section, and the extra struct pages are expensive. pfn_valid() on these systems returns true for the whole sparsemem-section, so an extra memmap_valid_within() check is needed. On arm64 we have nomap memory, so we always provide pfn_valid() to test for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra checks are already rolled up into pfn_valid(). Remove it. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
The Armv8.4-A extension enables MRS instruction encodings inside ESR_ELx.ISS for exception class ESR_ELx_EC_SYS64 (0x18). This encoding can be used to emulate MRS instructions without fetching/decoding them from user space, thus improving performance. This adds a new sys64_hook structure element with the applicable ESR mask/value pair for MRS instructions on various system registers, constrained to the sysreg encodings which are currently allowed to be emulated. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
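Schematically, the new entry is just another mask/value pair plus handler in the sys64 hook table. The sketch below is simplified; the mask/value macros and the handler name are placeholders, not the patch's identifiers.

    /* Sketch of a sys64_hook entry matching trapped MRS instructions. */
    struct sys64_hook {
            unsigned int esr_mask;
            unsigned int esr_val;
            void (*handler)(unsigned int esr, struct pt_regs *regs);
    };

    static struct sys64_hook sys64_hooks[] = {
            {
                    .esr_mask = MRS_ISS_OP_MASK,    /* placeholder name */
                    .esr_val  = MRS_ISS_OP_VALUE,   /* placeholder name */
                    .handler  = mrs_handler,        /* placeholder name */
            },
            {},
    };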
-
Anshuman Khandual authored
MRS emulation gets triggered with exception class 0x00 or 0x18, eventually calling emulate_mrs(), which fetches the user space instruction and analyses its encodings (OP0, OP1, OP2, CRN, CRM, RT). The kernel tries to emulate the given instruction by looking into the encoding details. Going forward these encodings can also be parsed from the ESR_ELx.ISS fields without needing to fetch/decode the faulting userspace instruction, which can improve performance. This factorizes the emulate_mrs() function so that it can be called directly with the MRS encodings (OP0, OP1, OP2, CRN, CRM) for any given target register, which can then be used directly from the 0x18 exception class. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Anshuman Khandual authored
Extracting the target register from the ESR.ISS encoding is already required in multiple places. Make it a macro definition and replace all the existing open-coded instances. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
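A sketch of such a macro, with illustrative names; the field position follows the sys64 ISS layout (Rt in bits [9:5]):

    /* Extract the target register (Rt) from a sys64 trap's ESR.ISS. */
    #define ESR_SYS64_RT_SHIFT      5
    #define ESR_SYS64_RT_MASK       (0x1fU << ESR_SYS64_RT_SHIFT)
    #define ESR_SYS64_RT(esr)       (((esr) & ESR_SYS64_RT_MASK) >> ESR_SYS64_RT_SHIFT)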
-
- 19 Sep, 2018 1 commit
-
-
Will Deacon authored
There's no need to treat mismatched cache-line sizes reported by CTR_EL0 differently to any other mismatched fields that we treat as "STRICT" in the cpufeature code. In both cases we need to trap and emulate EL0 accesses to the register, so drop ARM64_MISMATCHED_CACHE_LINE_SIZE and rely on ARM64_MISMATCHED_CACHE_TYPE instead. Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: move ARM64_HAS_CNP in the empty cpucaps.h slot] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
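The userspace-visible effect is unchanged: a CTR_EL0 read either executes directly or is trapped and emulated with a sanitised value, as in this sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t ctr;

            /* May be trapped and emulated by the kernel on mismatched systems. */
            asm volatile("mrs %0, ctr_el0" : "=r" (ctr));

            /* DminLine, bits [19:16]: log2 of the smallest D-cache line in words. */
            printf("CTR_EL0 = %#llx, D line = %u bytes\n",
                   (unsigned long long)ctr, 4u << ((ctr >> 16) & 0xf));
            return 0;
    }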
-
- 18 Sep, 2018 2 commits
-
-
Vladimir Murzin authored
We rely on cpufeature framework to detect and enable CNP so for KVM we need to patch hyp to set CNP bit just before TTBR0_EL2 gets written. For the guest we encode CNP bit while building vttbr, so we don't need to bother with that in a world switch. Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Vladimir Murzin authored
Common Not Private (CNP) is a feature of the ARMv8.2 extension which allows translation table entries to be shared between different PEs in the same inner shareable domain, so the hardware can use this fact to optimise the caching of such entries in the TLB. CNP occupies one bit in TTBRx_ELy and VTTBR_EL2, which advertises to the hardware that the translation table entries pointed to by this TTBR are the same as on every PE in the same inner shareable domain for which the equivalent TTBR also has the CNP bit set. If the CNP bit is set but the TTBR does not point at the same translation table entries for a given ASID and VMID, then the system is mis-configured and the results of translations are UNPREDICTABLE. For the kernel we postpone setting CNP until all CPUs are up and rely on the cpufeature framework to 1) patch the code which is sensitive to CNP and 2) update TTBR1_EL1 with the CNP bit set. TTBR1_EL1 can be reprogrammed as a result of hibernation or cpuidle (via __enable_mmu); for these two cases we restore the CNP bit via __cpu_suspend_exit(). There are a few cases where we need to take care of changes in TTBR0_EL1: a switch to the idmap, and software-emulated PAN. We rule out the latter via Kconfig options, and for the former we make sure that CNP is set for non-zero ASIDs only. Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> [catalin.marinas@arm.com: default y for CONFIG_ARM64_CNP] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
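Conceptually, enabling CNP is just setting bit [0] of the TTBR once the cpufeature framework has confirmed that every PE shares the same tables (a sketch; the real helpers live in the arm64 mm code):

    /* Sketch: CNP occupies bit [0] of TTBRx_ELy and VTTBR_EL2. */
    #define TTBR_CNP_BIT    (1UL << 0)

    static inline unsigned long ttbr_with_cnp(unsigned long ttbr, bool system_has_cnp)
    {
            return system_has_cnp ? (ttbr | TTBR_CNP_BIT) : ttbr;
    }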
-
- 17 Sep, 2018 1 commit
-
-
Suzuki K Poulose authored
Instructions for modifying the PSTATE fields which were not supported in the older toolchains (e.g, PAN, UAO) are generated using macros. We have so far used the normal sys_reg() helper for defining the PSTATE fields. While this works fine, it is really difficult to correlate the code with the Arm ARM definition. As per Arm ARM, the PSTATE fields are defined only using Op1, Op2 fields, with fixed values for Op0, CRn. Also the CRm field has been reserved for the Immediate value for the instruction. So using the sys_reg() looks quite confusing. This patch cleans up the instruction helpers by bringing them in line with the Arm ARM definitions to make it easier to correlate code with the document. No functional changes. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 14 Sep, 2018 5 commits
-
-
Hari Vyas authored
The bad_mode() handler is called if we encounter an unknown exception, with the expectation that the subsequent call to panic() will halt the system. Unfortunately, if the exception calling bad_mode() is taken from EL0, then the call to die() can end up killing the current user task and calling schedule() instead of falling through to panic(). Remove the die() call altogether, since we really want to bring down the machine in this "impossible" case. Signed-off-by: Hari Vyas <hari.vyas@broadcom.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
force_signal_inject() is designed to send a fatal signal to userspace, so WARN if the current pt_regs indicates a kernel context. This can currently happen for the undefined instruction trap, so patch that up so we always BUG() if we didn't have a handler. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
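The guard amounts to the following kernel-style sketch at the top of force_signal_inject():

    /* Only userspace contexts may receive the injected signal. */
    if (WARN_ON(!user_mode(regs)))
            return;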
-
Will Deacon authored
The cpu errata and feature enable callbacks are only called via their respective arm64_cpu_capabilities structure and therefore shouldn't exist in the global namespace. Move the PAN, RAS and cache maintenance emulation enable callbacks into the same files as their corresponding arm64_cpu_capabilities structures, making them static in the process. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
When running without VHE, it is necessary to set SCTLR_EL2.DSSBS if SSBD has been forcefully disabled on the kernel command-line. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
On CPUs with support for PSTATE.SSBS, the kernel can toggle the SSBD state without needing to call into firmware. This patch hooks into the existing SSBD infrastructure so that SSBS is used on CPUs that support it, but it's all made horribly complicated by the very real possibility of big/little systems that don't uniformly provide the new capability. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-