- 13 Apr, 2021 1 commit
-
-
Jisheng Zhang authored
If the instruction being single-stepped causes a page fault, the kprobe is cancelled to let the page fault handler continue as a normal page fault. But the local irqflags are disabled, so the CPU will restore the pstate with DAIF masked. After the page fault is serviced, the kprobe is triggered again and we overwrite the saved_irqflag by calling kprobes_save_local_irqflag(). NOTE: DAIF is masked in this newly saved irqflag. After the kprobe is serviced, the CPU pstate is restored with DAIF masked. This patch is inspired by a similar patch for riscv from Liao Chang. Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lore.kernel.org/r/20210412174101.6bfb0594@xhacker.debian Signed-off-by: Will Deacon <will@kernel.org>
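To make the sequence concrete, here is a minimal sketch of the fix as described above, not the literal upstream diff: before the cancelled kprobe is reset, the irqflag saved at kprobe entry is restored, so a later kprobes_save_local_irqflag() cannot capture a DAIF-masked pstate. The handler shape and the KPROBE_HIT_SS/KPROBE_REENTER cases are assumed from the generic kprobes flow.
-----------
/* Sketch only: restore the saved irqflag before cancelling the kprobe
 * when the single-stepped instruction faults, so the next
 * kprobes_save_local_irqflag() does not record DAIF as masked. */
int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
{
	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

	switch (kcb->kprobe_status) {
	case KPROBE_HIT_SS:
	case KPROBE_REENTER:
		kprobes_restore_local_irqflag(kcb, regs);	/* the fix */
		reset_current_kprobe();
		break;
	default:
		break;
	}

	return 0;	/* let the page fault be handled normally */
}
-----------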
-
- 12 Apr, 2021 1 commit
-
-
Catalin Marinas authored
The entry from EL0 code checks the TFSRE0_EL1 register for any asynchronous tag check faults in user space and sets the TIF_MTE_ASYNC_FAULT flag. This is not done atomically, potentially racing with another CPU calling set_tsk_thread_flag(). Replace the non-atomic ORR+STR with an STSET instruction. While STSET requires ARMv8.1 and an assembler that understands LSE atomics, the MTE feature is part of ARMv8.5 and already requires an updated assembler. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Fixes: 637ec831 ("arm64: mte: Handle synchronous and asynchronous tag check faults") Cc: <stable@vger.kernel.org> # 5.10.x Reported-by: Will Deacon <will@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210409173710.18582-1-catalin.marinas@arm.comSigned-off-by: Will Deacon <will@kernel.org>
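The race being closed is the classic lost update between a plain load-OR-store and an atomic bit set on another CPU. A stand-alone user-space C11 illustration (not kernel code; the flag bit values are made up) of why the non-atomic ORR+STR pattern is unsafe and what the single atomic OR provides:
-----------
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned long flags;

#define TIF_MTE_ASYNC_FAULT	(1UL << 5)	/* illustrative bit values */
#define TIF_NEED_RESCHED	(1UL << 1)

static void buggy_set(unsigned long bit)
{
	/* non-atomic RMW: a concurrent set between load and store is lost */
	unsigned long v = atomic_load_explicit(&flags, memory_order_relaxed);
	atomic_store_explicit(&flags, v | bit, memory_order_relaxed);
}

static void atomic_set_bit(unsigned long bit)
{
	/* what STSET / set_tsk_thread_flag() provide: one atomic OR */
	atomic_fetch_or_explicit(&flags, bit, memory_order_relaxed);
}

int main(void)
{
	buggy_set(TIF_MTE_ASYNC_FAULT);		/* racy shape */
	atomic_set_bit(TIF_NEED_RESCHED);	/* safe shape */
	printf("flags = %#lx\n", (unsigned long)atomic_load(&flags));
	return 0;
}
-----------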
-
- 01 Apr, 2021 1 commit
-
-
Peter Collingbourne authored
The inline asm's addr operand is marked as input-only, however in the case where an exception is taken it may be modified by the BIC instruction on the exception path. Fix the problem by using a temporary register as the destination register for the BIC instruction. Signed-off-by: Peter Collingbourne <pcc@google.com> Cc: stable@vger.kernel.org Link: https://linux-review.googlesource.com/id/I84538c8a2307d567b4f45bb20b715451005f9617 Link: https://lore.kernel.org/r/20210401165110.3952103-1-pcc@google.comSigned-off-by: Will Deacon <will@kernel.org>
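The underlying rule is that any register an asm statement (including its exception fixup path) writes must be declared as an output or clobber. A hedged, arm64-only illustration of the two constraint shapes involved; this is not the actual macro the patch touches:
-----------
/* BROKEN shape: the asm modifies the register holding "addr", but the
 * constraint tells the compiler it is a pure input. */
asm volatile("add	%1, %1, #8\n\t"
	     "ldr	%0, [%1]"
	     : "=r" (val)
	     : "r" (addr));

/* FIXED shape: give the instruction a scratch destination register, so
 * "addr" really is read-only from the compiler's point of view. */
unsigned long tmp;
asm volatile("add	%1, %2, #8\n\t"
	     "ldr	%0, [%1]"
	     : "=r" (val), "=&r" (tmp)
	     : "r" (addr));
-----------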
-
- 25 Mar, 2021 2 commits
-
-
Rich Wiley authored
On NVIDIA Carmel cores, CNP behaves differently than it does on standard ARM cores. On Carmel, if two cores have CNP enabled and share an L2 TLB entry created by core0 for a specific ASID, a non-shareable TLBI from core1 may still see the shared entry. On standard ARM cores, that TLBI will invalidate the shared entry as well. This causes issues with patchsets that attempt to do local TLBIs based on cpumasks instead of broadcast TLBIs. Avoid these issues by disabling CNP support for NVIDIA Carmel cores. Signed-off-by: Rich Wiley <rwiley@nvidia.com> Link: https://lore.kernel.org/r/20210324002809.30271-1-rwiley@nvidia.com [will: Fix pre-existing whitespace issue] Signed-off-by: Will Deacon <will@kernel.org>
-
Maninder Singh authored
Fix GCC warnings reported when building with "-Wmissing-prototypes": arch/arm64/kernel/process.c:261:6: warning: no previous prototype for '__show_regs' [-Wmissing-prototypes] 261 | void __show_regs(struct pt_regs *regs) | ^~~~~~~~~~~ arch/arm64/kernel/process.c:307:6: warning: no previous prototype for '__show_regs_alloc_free' [-Wmissing-prototypes] 307 | void __show_regs_alloc_free(struct pt_regs *regs) | ^~~~~~~~~~~~~~~~~~~~~~ arch/arm64/kernel/process.c:365:5: warning: no previous prototype for 'arch_dup_task_struct' [-Wmissing-prototypes] 365 | int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src) | ^~~~~~~~~~~~~~~~~~~~ arch/arm64/kernel/process.c:546:41: warning: no previous prototype for '__switch_to' [-Wmissing-prototypes] 546 | __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, | ^~~~~~~~~~~ arch/arm64/kernel/process.c:710:25: warning: no previous prototype for 'arm64_preempt_schedule_irq' [-Wmissing-prototypes] 710 | asmlinkage void __sched arm64_preempt_schedule_irq(void) | ^~~~~~~~~~~~~~~~~~~~~~~~~~ Link: https://lore.kernel.org/lkml/202103192250.AennsfXM-lkp@intel.comReported-by: kernel test robot <lkp@intel.com> Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Link: https://lore.kernel.org/r/1616568899-986-1-git-send-email-maninder1.s@samsung.comSigned-off-by: Will Deacon <will@kernel.org>
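Warnings like these are normally silenced by making a prototype visible to the translation unit. A sketch of the declarations involved, taken from the warning text itself; the header the patch actually adds them to is not shown here and may differ:
-----------
/* Sketch: prototypes visible to arch/arm64/kernel/process.c quiet
 * -Wmissing-prototypes.  The header chosen by the patch may differ. */
struct pt_regs;
struct task_struct;

void __show_regs(struct pt_regs *regs);
void __show_regs_alloc_free(struct pt_regs *regs);
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
asmlinkage void arm64_preempt_schedule_irq(void);
struct task_struct *__switch_to(struct task_struct *prev,
				struct task_struct *next);
-----------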
-
- 22 Mar, 2021 6 commits
-
-
Andre Przywara authored
The "First Fault Register" (FFR) is an SVE register that mimics a predicate register, but clears bits when a load or store fails to handle an element of a vector. The supposed usage scenario is to initialise this register (using SETFFR), then *read* it later on to learn about elements that failed to load or store. Explicit writes to this register using the WRFFR instruction are only supposed to *restore* values previously read from the register (for context-switching only). As the manual describes, this register holds only certain values, it: "... contains a monotonic predicate value, in which starting from bit 0 there are zero or more 1 bits, followed only by 0 bits in any remaining bit positions." Any other value is UNPREDICTABLE and is not supposed to be "restored" into the register. The SVE test currently tries to write a signature pattern into the register, which is *not* a canonical FFR value. Apparently the existing setups treat UNPREDICTABLE as "read-as-written", but a new implementation actually only stores canonical values. As a consequence, the sve-test fails immediately when comparing the FFR value: ----------- # ./sve-test Vector length: 128 bits PID: 207 Mismatch: PID=207, iteration=0, reg=48 Expected [cf00] Got [0f00] Aborted ----------- Fix this by only populating the FFR with proper canonical values. Effectively the requirement described above limits us to 17 unique values over 16 bits worth of FFR, so we condense our signature down to 4 bits (2 bits from the PID, 2 bits from the generation) and generate the canonical pattern from it. Any bits describing elements above the minimum 128 bit are set to 0. This aligns the FFR usage to the architecture and fixes the test on microarchitectures implementing FFR in a more restricted way. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviwed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210319120128.29452-1-andre.przywara@arm.comSigned-off-by: Will Deacon <will@kernel.org>
-
Pavel Tatashin authored
Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the linear map range is not checked correctly. The start physical address that linear map covers can be actually at the end of the range because of randomization. Check that and if so reduce it to 0. This can be verified on QEMU with setting kaslr-seed to ~0ul: memstart_offset_seed = 0xffff START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000 END: __pa(PAGE_END - 1) = 1000bfffffff Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Fixes: 58284a90 ("arm64/mm: Validate hotplug range before creating linear mapping") Tested-by: Tyler Hicks <tyhicks@linux.microsoft.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20210216150351.129018-2-pasha.tatashin@soleen.comSigned-off-by: Will Deacon <will@kernel.org>
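A sketch of the check described above (helper name is illustrative, the address macros are the arm64 ones quoted in the example): compute the physical bounds of the linear map and, if randomization pushed the start past the end, widen the range to begin at physical address 0.
-----------
/* Sketch, not the upstream diff: bound the hotplug-addressable range. */
static void linear_map_phys_range(u64 *start, u64 *end)
{
	*start = __pa(_PAGE_OFFSET(vabits_actual));
	*end   = __pa(PAGE_END - 1);

	/* With CONFIG_RANDOMIZE_BASE the linear map start can land past
	 * the end (as in the example above); cover the full range then. */
	if (*start > *end)
		*start = 0;
}
-----------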
-
Pavel Tatashin authored
The ppos points to a position in the old kernel memory (and in case of arm64 in the crash kernel since elfcorehdr is passed as a segment). The function should update the ppos by the amount that was read. This bug is not exposed by accident, but other platforms update this value properly. So, fix it in ARM64 version of elfcorehdr_read() as well. Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com> Fixes: e62aaeac ("arm64: kdump: provide /proc/vmcore file") Reviewed-by: Tyler Hicks <tyhicks@linux.microsoft.com> Link: https://lore.kernel.org/r/20210319205054.743368-1-pasha.tatashin@soleen.comSigned-off-by: Will Deacon <will@kernel.org>
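The fix is a one-liner in spirit: advance the read position by the number of bytes copied. A sketch of the resulting function, with the copy simplified and the surrounding kernel plumbing omitted:
-----------
/* Sketch: elfcorehdr_read() must move *ppos forward like other
 * platforms do, otherwise repeated reads return the same bytes. */
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
	memcpy(buf, phys_to_virt((phys_addr_t)*ppos), count);
	*ppos += count;		/* the previously missing update */
	return count;
}
-----------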
-
Bhaskar Chowdhury authored
s/acurate/accurate/ Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com> Acked-by: Randy Dunlap <rdunlap@infradead.org> Link: https://lore.kernel.org/r/20210319222848.29928-1-unixbhaskar@gmail.com Signed-off-by: Will Deacon <will@kernel.org>

-
Tom Saeger authored
In commit 94bccc34 ("iscsi_ibft: make ISCSI_IBFT depend on ACPI instead of ISCSI_IBFT_FIND") Kconfig was disentangled to make ISCSI_IBFT selection not depend on x86. Update the arm64 acpi documentation, changing IBFT support status from "Not Supported" to "Optional". Opportunistically re-flow the paragraph for the changed lines. Link: https://lore.kernel.org/lkml/1563475054-10680-1-git-send-email-thomas.tai@oracle.com/ Signed-off-by: Tom Saeger <tom.saeger@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Link: https://lore.kernel.org/r/9efc652df2b8d6b53d9acb170eb7c9ca3938dfef.1615920441.git.tom.saeger@oracle.com Signed-off-by: Will Deacon <will@kernel.org>
-
Mark Rutland authored
We recently converted arm64 to use arch_stack_walk() in commit: 5fc57df2 ("arm64: stacktrace: Convert to ARCH_STACKWALK") The core stacktrace code expects that (when tracing the current task) arch_stack_walk() starts a trace at its caller, and does not include itself in the trace. However, arm64's arch_stack_walk() includes itself, and so traces include one more entry than callers expect. The core stacktrace code which calls arch_stack_walk() tries to skip a number of entries to prevent itself appearing in a trace, and the additional entry prevents skipping one of the core stacktrace functions, leaving this in the trace unexpectedly. We can fix this by having arm64's arch_stack_walk() begin the trace with its caller. The first value returned by the trace will be __builtin_return_address(0), i.e. the caller of arch_stack_walk(). The first frame record to be unwound will be __builtin_frame_address(1), i.e. the caller's frame record. To prevent surprises, arch_stack_walk() is also marked noinline. While __builtin_frame_address(1) is not safe in portable code, local GCC developers have confirmed that it is safe on arm64. To find the caller's frame record, the builtin can safely dereference the current function's frame record or (in theory) could stash the original FP into another GPR at function entry time, neither of which are problematic. Prior to this patch, the tracing code would unexpectedly show up in traces of the current task, e.g. | # cat /proc/self/stack | [<0>] stack_trace_save_tsk+0x98/0x100 | [<0>] proc_pid_stack+0xb4/0x130 | [<0>] proc_single_show+0x60/0x110 | [<0>] seq_read_iter+0x230/0x4d0 | [<0>] seq_read+0xdc/0x130 | [<0>] vfs_read+0xac/0x1e0 | [<0>] ksys_read+0x6c/0xfc | [<0>] __arm64_sys_read+0x20/0x30 | [<0>] el0_svc_common.constprop.0+0x60/0x120 | [<0>] do_el0_svc+0x24/0x90 | [<0>] el0_svc+0x2c/0x54 | [<0>] el0_sync_handler+0x1a4/0x1b0 | [<0>] el0_sync+0x170/0x180 After this patch, the tracing code will not show up in such traces: | # cat /proc/self/stack | [<0>] proc_pid_stack+0xb4/0x130 | [<0>] proc_single_show+0x60/0x110 | [<0>] seq_read_iter+0x230/0x4d0 | [<0>] seq_read+0xdc/0x130 | [<0>] vfs_read+0xac/0x1e0 | [<0>] ksys_read+0x6c/0xfc | [<0>] __arm64_sys_read+0x20/0x30 | [<0>] el0_svc_common.constprop.0+0x60/0x120 | [<0>] do_el0_svc+0x24/0x90 | [<0>] el0_svc+0x2c/0x54 | [<0>] el0_sync_handler+0x1a4/0x1b0 | [<0>] el0_sync+0x170/0x180 Erring on the side of caution, I've given this a spin with a bunch of toolchains, verifying the output of /proc/self/stack and checking that the assembly looked sound. For GCC (where we require version 5.1.0 or later) I tested with the kernel.org crosstool binares for versions 5.5.0, 6.4.0, 6.5.0, 7.3.0, 7.5.0, 8.1.0, 8.3.0, 8.4.0, 9.2.0, and 10.1.0. For clang (where we require version 10.0.1 or later) I tested with the llvm.org binary releases of 11.0.0, and 11.0.1. Fixes: 5fc57df2 ("arm64: stacktrace: Convert to ARCH_STACKWALK") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chen Jun <chenjun102@huawei.com> Cc: Marco Elver <elver@google.com> Cc: Mark Brown <broonie@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: <stable@vger.kernel.org> # 5.10.x Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210319184106.5688-1-mark.rutland@arm.comSigned-off-by: Will Deacon <will@kernel.org>
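A sketch of the resulting entry point, following the description above: the current-task case seeds the unwinder with __builtin_frame_address(1) and __builtin_return_address(0), and the function is noinline so those builtins refer to a real caller frame. The signature is simplified and the unwind helpers shown are the existing arm64 ones.
-----------
noinline void arch_stack_walk(stack_trace_consume_fn consume_entry,
			      void *cookie, struct task_struct *task,
			      struct pt_regs *regs)
{
	struct stackframe frame;

	if (regs)
		start_backtrace(&frame, regs->regs[29], regs->pc);
	else if (task == current)
		/* start at our caller, not at arch_stack_walk() itself */
		start_backtrace(&frame,
				(unsigned long)__builtin_frame_address(1),
				(unsigned long)__builtin_return_address(0));
	else
		start_backtrace(&frame, thread_saved_fp(task),
				thread_saved_pc(task));

	walk_stackframe(task, &frame, consume_entry, cookie);
}
-----------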
-
- 15 Mar, 2021 1 commit
-
-
Alex Elder authored
The last line of ip_fast_csum() calls csum_fold(), forcing the type of the argument passed to be u32. But csum_fold() takes a __wsum argument (which is __u32 __bitwise for arm64). As long as we're forcing the cast, cast it to the right type. Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/20210315012650.1221328-1-elder@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
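The change amounts to swapping the forced type in the final cast. A sketch of the before/after shape (the rest of the function body is elided):
-----------
/* before: forces the argument to u32 */
return csum_fold((__force u32)(sum >> 32));

/* after: forces it to __wsum, the type csum_fold() actually takes */
return csum_fold((__force __wsum)(sum >> 32));
-----------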
-
- 12 Mar, 2021 1 commit
-
-
Wei Yongjun authored
Return the negative error code -ENOMEM from the error handling case instead of 0, as done elsewhere in this function. Fixes: 53c218da ("driver/perf: Add PMU driver for the ARM DMC-620 memory controller") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Link: https://lore.kernel.org/r/20210312080421.277562-1-weiyongjun1@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
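The pattern being fixed is the usual error-unwind idiom. A generic sketch of it (identifiers are illustrative, not the driver's actual names):
-----------
	buf = kzalloc(len, GFP_KERNEL);
	if (!buf) {
		ret = -ENOMEM;		/* was missing: ret stayed 0 */
		goto out_unwind;
	}
-----------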
-
- 11 Mar, 2021 2 commits
-
-
Ard Biesheuvel authored
These routines lost all existing users during the latest merge window, so we can remove them. This avoids the need to fix them in the context of fixing a regression related to the ID map on 52-bit VA kernels. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210310171515.416643-3-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
Ard Biesheuvel authored
52-bit VA kernels can run on hardware that is only 48-bit capable, but configure the ID map as 52-bit by default. This was not a problem until recently, because the special T0SZ value for a 52-bit VA space was never programmed into the TCR register anyway, and because a 52-bit ID map happens to use the same number of translation levels as a 48-bit one. This behavior was changed by commit 1401bef7 ("arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ value for a 52-bit VA to be programmed into TCR_EL1. While some hardware simply ignores this, Mark reports that Amberwing systems choke on this, resulting in a broken boot. But even before that commit, the unsupported idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly as well. Given that we already have to deal with address spaces being either 48-bit or 52-bit in size, the cleanest approach seems to be to simply default to a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the kernel in DRAM requires it. This is guaranteed not to happen unless the system is actually 52-bit VA capable. Fixes: 90ec95cd ("arm64: mm: Introduce VA_BITS_MIN") Reported-by: Mark Salter <msalter@redhat.com> Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
-
- 10 Mar, 2021 4 commits
-
-
Rob Herring authored
Commit 0fdf1bb7 ("arm64: perf: Avoid PMXEV* indirection") changed armv8pmu_read_evcntr() to return a u32 instead of u64. The result is silent truncation of the event counter when using 64-bit counters. Given the offending commit appears to have passed thru several folks, it seems likely this was a bad rebase after v8.5 PMU 64-bit counters landed. Cc: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Julien Thierry <julien.thierry.kdev@gmail.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: <stable@vger.kernel.org> Fixes: 0fdf1bb7 ("arm64: perf: Avoid PMXEV* indirection") Signed-off-by: Rob Herring <robh@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com> Link: https://lore.kernel.org/r/20210310004412.1450128-1-robh@kernel.orgSigned-off-by: Will Deacon <will@kernel.org>
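The bug and the fix are visible in the helper's return type alone. A sketch (body elided):
-----------
/* before: a 64-bit counter value is silently truncated to 32 bits */
static inline u32 armv8pmu_read_evcntr(int idx);

/* after: the full 64-bit value reaches the caller */
static inline u64 armv8pmu_read_evcntr(int idx);
-----------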
-
James Morse authored
As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1 might contain a range of values to describe supported translation granules (4K and 16K pages sizes in particular) instead of just enabled or disabled values. This changes __enable_mmu() function to handle complete acceptable range of values (depending on whether the field is signed or unsigned) now represented with ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX] pair. While here, also fix similar situations in EFI stub and KVM as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: kvmarm@lists.cs.columbia.edu Cc: linux-efi@vger.kernel.org Cc: linux-kernel@vger.kernel.org Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.comSigned-off-by: Will Deacon <will@kernel.org>
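A sketch of the shape of the check after the change, using the MIN/MAX pair named above. Whether the field is compared signed or unsigned depends on the particular TGRAN field, so the helper below simply assumes the value has already been sign-extended where appropriate:
-----------
/* Illustrative only: "supported" becomes a range test rather than an
 * equality test against one magic value. */
static bool tgran_supported(s64 field)
{
	return field >= ID_AA64MMFR0_TGRAN_SUPPORTED_MIN &&
	       field <= ID_AA64MMFR0_TGRAN_SUPPORTED_MAX;
}
-----------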
-
Mark Brown authored
We track whether sve-ptrace encountered a failure in a variable, but we don't actually use that value when exiting the program; do so. Signed-off-by: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20210309190304.39169-1-broonie@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
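In other words, the test's exit status should reflect the recorded result. A sketch (the test-runner helper name is made up):
-----------
#include <stdlib.h>

int run_sve_ptrace_tests(void);		/* illustrative helper */

int main(void)
{
	int failed = run_sve_ptrace_tests();

	return failed ? EXIT_FAILURE : EXIT_SUCCESS;
}
-----------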
-
Catalin Marinas authored
In a system supporting MTE, the linear map must allow reading/writing allocation tags by setting the memory type as Normal Tagged. Currently, this is only handled for memory present at boot. Hotplugged memory uses Normal non-Tagged memory. Introduce pgprot_mhp() for hotplugged memory and use it in add_memory_resource(). The arm64 code maps pgprot_mhp() to pgprot_tagged(). Note that ZONE_DEVICE memory should not be mapped as Tagged and therefore setting the memory type in arch_add_memory() is not feasible. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Fixes: 0178dc76 ("arm64: mte: Use Normal Tagged attributes for the linear map") Reported-by: Patrick Daly <pdaly@codeaurora.org> Tested-by: Patrick Daly <pdaly@codeaurora.org> Link: https://lore.kernel.org/r/1614745263-27827-1-git-send-email-pdaly@codeaurora.org Cc: <stable@vger.kernel.org> # 5.10.x Cc: Will Deacon <will@kernel.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: David Hildenbrand <david@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/20210309122601.5543-1-catalin.marinas@arm.comSigned-off-by: Will Deacon <will@kernel.org>
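A sketch of the hook as described: a generic no-op default that architectures may override, with arm64 routing it to the tagged attribute. File placement comments are indicative only.
-----------
/* generic fallback (illustrative placement) */
#ifndef pgprot_mhp
#define pgprot_mhp(prot)	(prot)
#endif

/* arm64 override: hotplugged RAM gets the Normal Tagged memory type */
#define pgprot_mhp(prot)	pgprot_tagged(prot)

/* used by the generic hotplug path, roughly: */
params.pgprot = pgprot_mhp(PAGE_KERNEL);
-----------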
-
- 09 Mar, 2021 1 commit
-
-
Andrey Konovalov authored
When CONFIG_DEBUG_VIRTUAL is enabled, the default page_to_virt() macro implementation from include/linux/mm.h is used. That definition doesn't account for KASAN tags, which leads to no tags on page_alloc allocations. Provide an arm64-specific definition for page_to_virt() when CONFIG_DEBUG_VIRTUAL is enabled that takes care of KASAN tags. Fixes: 2813b9c0 ("kasan, mm, arm64: tag non slab memory allocated via pagealloc") Cc: <stable@vger.kernel.org> Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/4b55b35202706223d3118230701c6a59749d9b72.1615219501.git.andreyknvl@google.comSigned-off-by: Will Deacon <will@kernel.org>
-
- 08 Mar, 2021 6 commits
-
-
Anshuman Khandual authored
There are multiple instances of pfn_to_section_nr() and __pfn_to_section() when CONFIG_SPARSEMEM is enabled. This can be optimized if memory section is fetched earlier. This replaces the open coded PFN and ADDR conversion with PFN_PHYS() and PHYS_PFN() helpers. While there, also add a comment. This does not cause any functional change. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1614921898-4099-3-git-send-email-anshuman.khandual@arm.comSigned-off-by: Will Deacon <will@kernel.org>
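For reference, the conversion helpers in question; a trivial sketch of the substitution (no functional change):
-----------
/* open-coded conversions ... */
addr = pfn << PAGE_SHIFT;
pfn  = addr >> PAGE_SHIFT;

/* ... replaced with the existing helpers */
addr = PFN_PHYS(pfn);
pfn  = PHYS_PFN(addr);
-----------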
-
Anshuman Khandual authored
pfn_valid() validates a pfn but basically it checks for a valid struct page backing for that pfn. It should always return positive for memory ranges backed with struct page mapping. But currently pfn_valid() fails for all ZONE_DEVICE based memory types even though they have struct page mapping. pfn_valid() asserts that there is a memblock entry for a given pfn without MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE based memory is that they do not have memblock entries. Hence memblock_is_map_memory() will invariably fail via memblock_search() for a ZONE_DEVICE based address. This eventually fails pfn_valid() which is wrong. memblock_is_map_memory() needs to be skipped for such memory ranges. As ZONE_DEVICE memory gets hotplugged into the system via memremap_pages() called from a driver, their respective memory sections will not have SECTION_IS_EARLY set. Normal hotplug memory will never have MEMBLOCK_NOMAP set in their memblock regions. Because the flag MEMBLOCK_NOMAP was specifically designed and set for firmware reserved memory regions. memblock_is_map_memory() can just be skipped as its always going to be positive and that will be an optimization for the normal hotplug memory. Like ZONE_DEVICE based memory, all normal hotplugged memory too will not have SECTION_IS_EARLY set for their sections Skipping memblock_is_map_memory() for all non early memory sections would fix pfn_valid() problem for ZONE_DEVICE based memory and also improve its performance for normal hotplug memory as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Robin Murphy <robin.murphy@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Acked-by: David Hildenbrand <david@redhat.com> Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support") Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.comSigned-off-by: Will Deacon <will@kernel.org>
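A simplified sketch of the logic described above (CONFIG_SPARSEMEM guards and most comments trimmed; not the full upstream function): only early sections must still pass the memblock check, while ZONE_DEVICE and hotplugged memory are validated through their section mapping alone.
-----------
int pfn_valid(unsigned long pfn)
{
	phys_addr_t addr = PFN_PHYS(pfn);
	struct mem_section *ms;

	if (PHYS_PFN(addr) != pfn)	/* upper bits set: not a real pfn */
		return 0;

	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return 0;

	ms = __pfn_to_section(pfn);
	if (!valid_section(ms))
		return 0;

	/* ZONE_DEVICE and hotplugged memory never set SECTION_IS_EARLY
	 * and have no memblock entries, so skip the memblock check. */
	if (!early_section(ms))
		return pfn_section_valid(ms, pfn);

	return memblock_is_map_memory(addr);
}
-----------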
-
Anshuman Khandual authored
Currently without THP being enabled, MAX_ORDER via FORCE_MAX_ZONEORDER gets reduced to 11, which falls below HUGETLB_PAGE_ORDER for certain 16K and 64K page size configurations. This is problematic which throws up the following warning during boot as pageblock_order via HUGETLB_PAGE_ORDER order exceeds MAX_ORDER. WARNING: CPU: 7 PID: 127 at mm/vmstat.c:1092 __fragmentation_index+0x58/0x70 Modules linked in: CPU: 7 PID: 127 Comm: kswapd0 Not tainted 5.12.0-rc1-00005-g0221e3101a1 #237 Hardware name: linux,dummy-virt (DT) pstate: 20400005 (nzCv daif +PAN -UAO -TCO BTYPE=--) pc : __fragmentation_index+0x58/0x70 lr : fragmentation_index+0x88/0xa8 sp : ffff800016ccfc00 x29: ffff800016ccfc00 x28: 0000000000000000 x27: ffff800011fd4000 x26: 0000000000000002 x25: ffff800016ccfda0 x24: 0000000000000002 x23: 0000000000000640 x22: ffff0005ffcb5b18 x21: 0000000000000002 x20: 000000000000000d x19: ffff0005ffcb3980 x18: 0000000000000004 x17: 0000000000000001 x16: 0000000000000019 x15: ffff800011ca7fb8 x14: 00000000000002b3 x13: 0000000000000000 x12: 00000000000005e0 x11: 0000000000000003 x10: 0000000000000080 x9 : ffff800011c93948 x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000007000 x5 : 0000000000007944 x4 : 0000000000000032 x3 : 000000000000001c x2 : 000000000000000b x1 : ffff800016ccfc10 x0 : 000000000000000d Call trace: __fragmentation_index+0x58/0x70 compaction_suitable+0x58/0x78 wakeup_kcompactd+0x8c/0xd8 balance_pgdat+0x570/0x5d0 kswapd+0x1e0/0x388 kthread+0x154/0x158 ret_from_fork+0x10/0x30 This solves the problem via keeping FORCE_MAX_ZONEORDER unchanged with or without THP on 16K and 64K page size configurations, making sure that the HUGETLB_PAGE_ORDER (and pageblock_order) would never exceed MAX_ORDER. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1614597914-28565-1-git-send-email-anshuman.khandual@arm.comSigned-off-by: Will Deacon <will@kernel.org>
-
Anshuman Khandual authored
There is already an ARCH_WANT_HUGE_PMD_SHARE which is being selected for applicable configurations. Hence just drop the other redundant entry. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1614575192-21307-1-git-send-email-anshuman.khandual@arm.comSigned-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The documented behaviour for CMDLINE_EXTEND is that the arguments from the bootloader are appended to the built-in kernel command line. This also matches the option parsing behaviour for the EFI stub and early ID register overrides. Bizarrely, the fdt behaviour is the other way around: appending the built-in command line to the bootloader arguments, resulting in a command-line that doesn't necessarily line-up with the parsing order and definitely doesn't line-up with the documented behaviour. As it turns out, there is a proposal [1] to replace CMDLINE_EXTEND with CMDLINE_PREPEND and CMDLINE_APPEND options which should hopefully make the intended behaviour much clearer. While we wait for those to land, drop CMDLINE_EXTEND for now as there appears to be little enthusiasm for changing the current FDT behaviour. [1] https://lore.kernel.org/lkml/20190319232448.45964-2-danielwa@cisco.com/ Cc: Max Uvarov <muvarov@gmail.com> Cc: Rob Herring <robh@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Doug Anderson <dianders@chromium.org> Cc: Tyler Hicks <tyhicks@linux.microsoft.com> Cc: Frank Rowand <frowand.list@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/CAL_JsqJX=TCCs7=gg486r9TN4NYscMTCLNfqJF9crskKPq-bTg@mail.gmail.com Link: https://lore.kernel.org/r/20210303134927.18975-3-will@kernel.orgSigned-off-by: Will Deacon <will@kernel.org>
-
Will Deacon authored
The built-in kernel commandline (CONFIG_CMDLINE) can be configured in three different ways: 1. CMDLINE_FORCE: Use CONFIG_CMDLINE instead of any bootloader args 2. CMDLINE_EXTEND: Append the bootloader args to CONFIG_CMDLINE 3. CMDLINE_FROM_BOOTLOADER: Only use CONFIG_CMDLINE if there aren't any bootloader args. The early cmdline parsing to detect idreg overrides gets (2) and (3) slightly wrong: in the case of (2) the bootloader args are parsed first and in the case of (3) the CMDLINE is always parsed. Fix these issues by moving the bootargs parsing out into a helper function and following the same logic as that used by the EFI stub. Reviewed-by: Marc Zyngier <maz@kernel.org> Fixes: 33200303 ("arm64: cpufeature: Add an early command-line cpufeature override facility") Link: https://lore.kernel.org/r/20210303134927.18975-2-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
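A sketch of the parse ordering that matches the EFI stub's behaviour as described above. The FDT-reading helper is shown only as a name, and __parse_cmdline() stands for the existing early-override parser; the real function may differ in detail.
-----------
static void parse_cmdline(void)
{
	/* bootargs from /chosen in the FDT; NULL when absent or empty */
	const u8 *prop = get_bootargs_cmdline();

	/* built-in cmdline first when forcing, extending, or when the
	 * bootloader supplied nothing */
	if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
	    IS_ENABLED(CONFIG_CMDLINE_FORCE) || !prop)
		__parse_cmdline(CONFIG_CMDLINE, true);

	/* bootloader args last (so they win) unless CMDLINE_FORCE */
	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
		__parse_cmdline(prop, true);
}
-----------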
-
- 06 Mar, 2021 4 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
Linus Torvalds authored
Pull rdma fixes from Jason Gunthorpe: "Nothing special here, though Bob's regression fixes for rxe would have made it before the rc cycle had there not been such strong winter weather! - Fix corner cases in the rxe reference counting cleanup that are causing regressions in blktests for SRP - Two kdoc fixes so W=1 is clean - Missing error return in error unwind for mlx5 - Wrong lock type nesting in IB CM" * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: RDMA/rxe: Fix errant WARN_ONCE in rxe_completer() RDMA/rxe: Fix extra deref in rxe_rcv_mcast_pkt() RDMA/rxe: Fix missed IB reference counting in loopback RDMA/uverbs: Fix kernel-doc warning of _uverbs_alloc RDMA/mlx5: Set correct kernel-doc identifier IB/mlx5: Add missing error code RDMA/rxe: Fix missing kconfig dependency on CRYPTO RDMA/cm: Fix IRQ restore in ib_send_cm_sidr_rep
-
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Linus Torvalds authored
Pull gcc-plugins fixes from Kees Cook: "Tiny gcc-plugin fixes for v5.12-rc2. These issues are small but have been reported a couple times now by static analyzers, so best to get them fixed to reduce the noise. :) - Fix coding style issues (Jason Yan)" * tag 'gcc-plugins-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: gcc-plugins: latent_entropy: remove unneeded semicolon gcc-plugins: structleak: remove unneeded variable 'ret'
-
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Linus Torvalds authored
Pull pstore fixes from Kees Cook: - Rate-limit ECC warnings (Dmitry Osipenko) - Fix error path check for NULL (Tetsuo Handa) * tag 'pstore-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: pstore/ram: Rate-limit "uncorrectable error in header" message pstore: Fix warning in pstore_kill_sb()
-
- 05 Mar, 2021 10 commits
-
-
Linus Torvalds authored
Merge tag 'for-5.12/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm Pull device mapper fixes from Mike Snitzer: "Fix DM verity target's optional Forward Error Correction (FEC) for Reed-Solomon roots that are unaligned to block size" * tag 'for-5.12/dm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: dm verity: fix FEC for RS roots unaligned to block size dm bufio: subtract the number of initial sectors in dm_bufio_get_device_size
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe: - NVMe fixes: - more device quirks (Julian Einwag, Zoltán Böszörményi, Pascal Terjan) - fix a hwmon error return (Daniel Wagner) - fix the keep alive timeout initialization (Martin George) - ensure the model_number can't be changed on a used subsystem (Max Gurtovoy) - rsxx missing -EFAULT on copy_to_user() failure (Dan) - rsxx remove unused linux.h include (Tian) - kill unused RQF_SORTED (Jean) - updated outdated BFQ comments (Joseph) - revert work-around commit for bd_size_lock, since we removed the offending user in this merge window (Damien) * tag 'block-5.12-2021-03-05' of git://git.kernel.dk/linux-block: nvmet: model_number must be immutable once set nvme-fabrics: fix kato initialization nvme-hwmon: Return error code when registration fails nvme-pci: add quirks for Lexar 256GB SSD nvme-pci: mark Kingston SKC2000 as not supporting the deepest power state nvme-pci: mark Seagate Nytro XM1440 as QUIRK_NO_NS_DESC_LIST. rsxx: Return -EFAULT if copy_to_user() fails block/bfq: update comments and default value in docs for fifo_expire rsxx: remove unused including <linux/version.h> block: Drop leftover references to RQF_SORTED block: revert "block: fix bd_size_lock use"
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull io_uring fixes from Jens Axboe: "A bit of a mix between fallout from the worker change, cleanups and reductions now possible from that change, and fixes in general. In detail: - Fully serialize manager and worker creation, fixing races due to that. - Clean up some naming that had gone stale. - SQPOLL fixes. - Fix race condition around task_work rework that went into this merge window. - Implement unshare. Used for when the original task does unshare(2) or setuid/seteuid and friends, drops the original workers and forks new ones. - Drop the only remaining piece of state shuffling we had left, which was cred. Move it into issue instead, and we can drop all of that code too. - Kill f_op->flush() usage. That was such a nasty hack that we had out of necessity, we no longer need it. - Following from ->flush() removal, we can also drop various bits of ctx state related to SQPOLL and cancelations. - Fix an issue with IOPOLL retry, which originally was fallout from a filemap change (removing iov_iter_revert()), but uncovered an issue with iovec re-import too late. - Fix an issue with system suspend. - Use xchg() for fallback work, instead of cmpxchg(). - Properly destroy io-wq on exec. - Add create_io_thread() core helper, and use that in io-wq and io_uring. This allows us to remove various silly completion events related to thread setup. - A few error handling fixes. This should be the grunt of fixes necessary for the new workers, next week should be quieter. We've got a pending series from Pavel on cancelations, and how tasks and rings are indexed. Outside of that, should just be minor fixes. Even with these fixes, we're still killing a net ~80 lines" * tag 'io_uring-5.12-2021-03-05' of git://git.kernel.dk/linux-block: (41 commits) io_uring: don't restrict issue_flags for io_openat io_uring: make SQPOLL thread parking saner io-wq: kill hashed waitqueue before manager exits io_uring: clear IOCB_WAITQ for non -EIOCBQUEUED return io_uring: don't keep looping for more events if we can't flush overflow io_uring: move to using create_io_thread() kernel: provide create_io_thread() helper io_uring: reliably cancel linked timeouts io_uring: cancel-match based on flags io-wq: ensure all pending work is canceled on exit io_uring: ensure that threads freeze on suspend io_uring: remove extra in_idle wake up io_uring: inline __io_queue_async_work() io_uring: inline io_req_clean_work() io_uring: choose right tctx->io_wq for try cancel io_uring: fix -EAGAIN retry with IOPOLL io-wq: fix error path leak of buffered write hash map io_uring: remove sqo_task io_uring: kill sqo_dead and sqo submission halting io_uring: ignore double poll add on the same waitqueue head ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Linus Torvalds authored
Pull power management fixes from Rafael Wysocki: "These fix the usage of device links in the runtime PM core code and update the DTPM (Dynamic Thermal Power Management) feature added recently. Specifics: - Make the runtime PM core code avoid attempting to suspend supplier devices before updating the PM-runtime status of a consumer to 'suspended' (Rafael Wysocki). - Fix DTPM (Dynamic Thermal Power Management) root node initialization and label that feature as EXPERIMENTAL in Kconfig (Daniel Lezcano)" * tag 'pm-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: powercap/drivers/dtpm: Add the experimental label to the option description powercap/drivers/dtpm: Fix root node initialization PM: runtime: Update device status before letting suppliers suspend
-
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Linus Torvalds authored
Pull ACPI fix from Rafael Wysocki: "Make the empty stubs of some helper functions used when CONFIG_ACPI is not set actually match those functions (Andy Shevchenko)" * tag 'acpi-5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: ACPI: bus: Constify is_acpi_node() and friends (part 2)
-
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Linus Torvalds authored
Pull iommu fixes from Joerg Roedel: - Fix a sleeping-while-atomic issue in the AMD IOMMU code - Disable lazy IOTLB flush for untrusted devices in the Intel VT-d driver - Fix status code definitions for Intel VT-d - Fix IO Page Fault issue in Tegra IOMMU driver * tag 'iommu-fixes-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: iommu/vt-d: Fix status code for Allocate/Free PASID command iommu: Don't use lazy flush for untrusted device iommu/tegra-smmu: Fix mc errors on tegra124-nyan iommu/amd: Fix sleeping in atomic in increase_address_space()
-
git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
Linus Torvalds authored
Pull btrfs fixes from David Sterba: "More regression fixes and stabilization. Regressions: - zoned mode - count zone sizes in wider int types - fix space accounting for read-only block groups - subpage: fix page tail zeroing Fixes: - fix spurious warning when remounting with free space tree - fix warning when creating a directory with smack enabled - ioctl checks for qgroup inheritance when creating a snapshot - qgroup - fix missing unlock on error path in zero range - fix amount of released reservation on error - fix flushing from unsafe context with open transaction, potentially deadlocking - minor build warning fixes" * tag 'for-5.12-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux: btrfs: zoned: do not account freed region of read-only block group as zone_unusable btrfs: zoned: use sector_t for zone sectors btrfs: subpage: fix the false data csum mismatch error btrfs: fix warning when creating a directory with smack enabled btrfs: don't flush from btrfs_delayed_inode_reserve_metadata btrfs: export and rename qgroup_reserve_meta btrfs: free correct amount of space in btrfs_delayed_inode_reserve_metadata btrfs: fix spurious free_space_tree remount warning btrfs: validate qgroup inherit for SNAP_CREATE_V2 ioctl btrfs: unlock extents in btrfs_zero_range in case of quota reservation errors btrfs: ref-verify: use 'inline void' keyword ordering
-
git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Linus Torvalds authored
Pull devicetree fixes from Rob Herring: - Another batch of graph and video-interfaces schema conversions - Drop DT header symlink for dropped C6X arch - Fix bcm2711-hdmi schema error * tag 'devicetree-fixes-for-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: dt-bindings: media: Use graph and video-interfaces schemas, round 2 dts: drop dangling c6x symlink dt-bindings: bcm2711-hdmi: Fix broken schema
-
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Linus Torvalds authored
Pull tracing fixes from Steven Rostedt: "Functional fixes: - Fix big endian conversion for arm64 in recordmcount processing - Fix timestamp corruption in ring buffer on discarding events - Fix memory leak in __create_synth_event() - Skip selftests if tracing is disabled as it will cause them to fail. Non-functional fixes: - Fix help text in Kconfig - Remove duplicate prototype for trace_empty() - Fix stale comment about the trace_event_call flags. Self test update: - Add more information to the validation output of when a corrupt timestamp is found in the ring buffer, and also trigger a warning to make sure that tests catch it" * tag 'trace-v5.12-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: tracing: Fix comment about the trace_event_call flags tracing: Skip selftests if tracing is disabled tracing: Fix memory leak in __create_synth_event() ring-buffer: Add a little more information and a WARN when time stamp going backwards is detected ring-buffer: Force before_stamp and write_stamp to be different on discard tracing: Fix help text of TRACEPOINT_BENCHMARK in Kconfig tracing: Remove duplicate declaration from trace.h ftrace: Have recordmcount use w8 to read relp->r_info in arm64_is_fake_mcount
-
Bob Pearson authored
In rxe_comp.c, in rxe_completer(), the function free_pkt() did not clear skb, which triggered a warning at 'done:' and could possibly have triggered one at 'exit:'. The WARN_ONCE() calls are not actually needed. The call to free_pkt() is moved to the end to clearly show that all skbs are freed. Fixes: 899aba89 ("RDMA/rxe: Fix FIXME in rxe_udp_encap_recv()") Link: https://lore.kernel.org/r/20210304192048.2958-1-rpearson@hpe.com Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
-