- 28 Nov, 2022 6 commits
-
-
Amit Daniel Kachhap authored
Int8 matrix multiplication (FEAT_AA32I8MM) is a feature present in AArch32 state for Armv8 and is represented by the ISAR6.I8MM identification register field. This feature denotes the presence of the VSMMLA, VSUDOT, VUMMLA, VUSMMLA and VUSDOT instructions, so adding a hwcap lets userspace check for it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
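As an illustration, a userspace check could look like the sketch below. The macro name and bit position are placeholders (consult the installed <asm/hwcap.h> for the real name and whether the bit is reported via AT_HWCAP or AT_HWCAP2):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_I8MM
    #define HWCAP_I8MM (1UL << 27)   /* assumed bit position, illustrative only */
    #endif

    int main(void)
    {
            unsigned long hwcaps = getauxval(AT_HWCAP);

            if (hwcaps & HWCAP_I8MM)
                    printf("int8 matrix multiply (VSMMLA/VUSDOT/...) available\n");
            else
                    printf("int8 matrix multiply instructions not available\n");
            return 0;
    }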
-
Amit Daniel Kachhap authored
Advanced SIMD BFloat16 (FEAT_AA32BF16) is a feature present in AArch32 state for Armv8 and is represented by the ISAR6.BF16 identification register field. This feature denotes the presence of the VCVT, VCVTB, VCVTT, VDOT, VFMAB, VFMAT and VMMLA instructions, so adding a hwcap lets userspace check for it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Amit Daniel Kachhap authored
Floating-point half-precision multiplication (FHM) is a feature present in AArch32 state for Armv8 and is represented by the ISAR6.FHM identification register field. This feature denotes the presence of the VFMAL and VFMSL instructions, so adding a hwcap lets userspace check for it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Amit Daniel Kachhap authored
Advanced Dot product is a feature present in AArch32 state for Armv8 and is represented by the ISAR6.DP identification register field. This feature denotes the presence of the UDOT and SDOT instructions, so adding a hwcap lets userspace check for it before trying to use those instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Amit Daniel Kachhap authored
Floating point half-precision (FPHP) and Advanced SIMD half-precision (ASIMDHP) are VFP features (FEAT_FP16) represented by the MVFR1 identification register. These capabilities are optional with VFPv3 and mandatory with VFPv4. Both of these features exist for the Armv8 architecture in AArch32 state. These hwcaps let userspace add a conditional check before trying to use FEAT_FP16-specific instructions. Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Amit Daniel Kachhap authored
AArch32 Instruction Set Attribute Register 6 (ID_ISAR6_EL1) and AArch32 Processor Feature Register 2 (ID_PFR2_EL1) identify some new features of the Armv8 architecture. These registers will be used to add hwcaps for those CPU features. They are marked as reserved for Armv7 and should read as zero (RAZ). Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 14 Nov, 2022 6 commits
-
-
Russell King (Oracle) authored
Add unwinder information so that an oops in the findbit functions can produce a proper backtrace. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Russell King (Oracle) authored
Convert the implementations to operate on words rather than bytes, which makes bitmap searching faster. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
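A rough C equivalent of the word-at-a-time idea (the kernel's findbit routines are hand-written assembly, so this is only an illustration, not the patched code):

    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    /* Scan whole words first; only do bit arithmetic on a non-zero word. */
    static unsigned long sketch_find_first_bit(const unsigned long *addr,
                                               unsigned long nbits)
    {
            unsigned long i;

            for (i = 0; i * BITS_PER_LONG < nbits; i++) {
                    if (addr[i]) {
                            unsigned long bit = i * BITS_PER_LONG +
                                                __builtin_ctzl(addr[i]);
                            return bit < nbits ? bit : nbits;
                    }
            }
            return nbits;   /* no bit set */
    }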
-
Russell King (Oracle) authored
Since the pairs of _find_first and _find_next functions are pretty similar, use macros to generate this code. This commit does not change the generated code. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Russell King (Oracle) authored
Provide a more efficient ARMv7 implementation to determine the first set bit in the supplied value. Signed-off-by: Russell King (Oracle) <rmk@armlinux.org.uk>
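The usual ARMv7 idiom for this is RBIT followed by CLZ; a sketch of that approach (not the actual library code) looks like:

    /* Index of the least significant set bit; callers must handle x == 0,
     * for which this returns 32.  Requires ARMv7 (RBIT/CLZ). */
    static inline unsigned int sketch_ffs(unsigned int x)
    {
            unsigned int bit;

            asm("rbit %0, %1\n\t"
                "clz  %0, %0"
                : "=r" (bit) : "r" (x));
            return bit;
    }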
-
Russell King (Oracle) authored
Document the ARMv5 bit offset calculation code. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Li Huafei authored
Historically architectures have had duplicated code in their stack trace implementations for filtering what gets traced. In order to avoid this duplication some generic code has been provided using a new interface arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which factors all this out into the generic stack trace code. Convert ARM to use this common infrastructure. When initializing the stack frame of the current task, arm64 uses __builtin_frame_address(1) to initialize the frame pointer, skipping arch_stack_walk(), see the commit c607ab4f ("arm64: stacktrace: don't trace arch_stack_walk()"). Since __builtin_frame_address(1) does not work on ARM, unwind_frame() is used to unwind the stack one layer forward before calling walk_stackframe(). Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 08 Nov, 2022 4 commits
-
-
Li Huafei authored
As with the generic arch_stack_walk() code, the ARM stack walk code takes a callback that is called per stack frame. Currently the ARM code always passes a struct stackframe to the callback and the generic code just passes the pc; however, none of the users ever reference anything in the struct other than the pc value. The ARM code also uses a return type of int while the generic code uses a return type of bool, though in both cases the return value is a boolean value and the sense is inverted between the two. In order to reduce code duplication when ARM is converted to use arch_stack_walk(), change the signature and return sense of the ARM specific callback to match that of the generic code. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Mark Brown <broonie@kernel.org> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
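The generic callback type consumes one return address per frame and returns true to keep walking. A minimal consumer in that shape (purely illustrative, not the kernel's code) could be:

    /* Matches bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr) */
    struct trace_buf {
            unsigned long *entries;
            unsigned int len, max;
    };

    static bool save_pc(void *cookie, unsigned long pc)
    {
            struct trace_buf *buf = cookie;

            if (buf->len >= buf->max)
                    return false;   /* false stops the walk */
            buf->entries[buf->len++] = pc;
            return true;            /* true continues the walk */
    }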
-
Nick Desaulniers authored
When both -march= and -Wa,-march= are specified for assembler or assembler-with-cpp sources, GCC and Clang will prefer the -Wa,-march= value, but Clang will warn that -march= is unused:

  warning: argument unused during compilation: '-march=armv6k' [-Wunused-command-line-argument]

This is the top group of warnings we observe when using clang to assemble the kernel via `ARCH=arm make LLVM=1`. Split the arch-y make variable into two, so that -march= flags only get passed to the compiler, not the assembler. -D flags are added to KBUILD_CPPFLAGS, which is used for both C and assembler-with-cpp sources. Clang is trying to warn that it doesn't support different values for -march= and -Wa,-march= (as GCC does, but the kernel doesn't need this), though the value of the preprocessor define __thumb2__ is based on -march=. Make sure to re-set __thumb2__ via a -D flag for assembler sources now that we're no longer passing -march= to the assembler. Set it to a different value than the preprocessor would for -march= in case -march= gets accidentally re-added to KBUILD_AFLAGS in the future. Thanks to Ard and Nathan for this suggestion.
Link: https://github.com/ClangBuiltLinux/linux/issues/1315
Link: https://github.com/ClangBuiltLinux/linux/issues/1587
Link: https://github.com/llvm/llvm-project/issues/55656
Suggested-by: Ard Biesheuvel <ardb@kernel.org> Suggested-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Nick Desaulniers authored
Avoids an error from the assembler for CONFIG_THUMB2 kernels: clang-15: error: hardware TLS register is not supported for the thumbv4t sub-architecture This flag only makes sense to pass to the compiler, not the assembler. Perhaps CFLAGS_ABI can be renamed to CPPFLAGS_ABI to reflect that they will be passed to both the compiler and assembler for sources that require pre-processing. Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Nick Desaulniers authored
Similar to commit a6c30873 ("ARM: 8989/1: use .fpu assembler directives instead of assembler arguments"). GCC and GNU binutils support setting the "sub arch" via -march=, -Wa,-march=, the target function attribute, and the .arch assembler directive. Clang was missing support for -Wa,-march=, but this was implemented in clang-13. The behavior of both GCC and Clang is to prefer -Wa,-march= over -march= for assembler and assembler-with-cpp sources, but Clang will warn about the -march= being unused:

  clang: warning: argument unused during compilation: '-march=armv6k' [-Wunused-command-line-argument]

Since most assembler sources are unconditionally assembled with one sub arch (modulo arch/arm/delay-loop.S, which is conditionally assembled as armv4 based on CONFIG_ARCH_RPC, and arch/arm/mach-at91/pm-suspend.S, which is conditionally assembled as armv7-a based on CONFIG_CPU_V7), prefer the .arch assembler directive. Add a few more instances found in compile testing by Arnd and Nathan.
Link: https://github.com/llvm/llvm-project/commit/1d51c699b9e2ebc5bcfdbe85c74cc871426333d4
Link: https://bugs.llvm.org/show_bug.cgi?id=48894
Link: https://github.com/ClangBuiltLinux/linux/issues/1195
Link: https://github.com/ClangBuiltLinux/linux/issues/1315
Suggested-by: Arnd Bergmann <arnd@arndb.de> Suggested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 07 Nov, 2022 9 commits
-
-
Nick Desaulniers authored
arch-y and tune-y used lazy evaluation since they used to contain cc-option checks. They don't any longer, so just eagerly evaluate these command line flags. Reviewed-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Ulf Hansson authored
If the system PM callbacks haven't been assigned, the PM core falls back to invoking the corresponding pm_generic_* helpers for the device. Let's rely on this behaviour and drop the redundant assignments. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Nick Desaulniers authored
kbuild test robot reports:

  In file included from crypto/xor.c:17:
  ./arch/arm/include/asm/xor.h:61:3: error: write to reserved register 'R7'
    GET_BLOCK_4(p1);
    ^
  ./arch/arm/include/asm/xor.h:20:10: note: expanded from macro 'GET_BLOCK_4'
    __asm__("ldmia %0, {%1, %2, %3, %4}"
            ^
  ./arch/arm/include/asm/xor.h:63:3: error: write to reserved register 'R7'
    PUT_BLOCK_4(p1);
    ^
  ./arch/arm/include/asm/xor.h:42:23: note: expanded from macro 'PUT_BLOCK_4'
    __asm__ __volatile__("stmia %0!, {%2, %3, %4, %5}"
                         ^
  ./arch/arm/include/asm/xor.h:83:3: error: write to reserved register 'R7'
    GET_BLOCK_4(p1);
    ^
  ./arch/arm/include/asm/xor.h:20:10: note: expanded from macro 'GET_BLOCK_4'
    __asm__("ldmia %0, {%1, %2, %3, %4}"
            ^
  ./arch/arm/include/asm/xor.h:86:3: error: write to reserved register 'R7'
    PUT_BLOCK_4(p1);
    ^
  ./arch/arm/include/asm/xor.h:42:23: note: expanded from macro 'PUT_BLOCK_4'
    __asm__ __volatile__("stmia %0!, {%2, %3, %4, %5}"
                         ^

Thumb2 uses r7 rather than r11 as the frame pointer. Let's use r10 rather than r7 for these temporaries.
Link: https://github.com/ClangBuiltLinux/linux/issues/1732
Link: https://lore.kernel.org/llvm/202210072120.V1O2SuKY-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com> Suggested-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
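The fix amounts to pinning the inline-asm temporaries to registers that are never used as a frame pointer. The declarations below illustrate that pattern at function scope; the variable and register names are illustrative rather than a copy of xor.h:

    /* Thumb-2 reserves r7 as the frame pointer, so keep the asm
     * temporaries on r4-r6 and r10 instead. */
    register unsigned int a1 __asm__("r4");
    register unsigned int a2 __asm__("r5");
    register unsigned int a3 __asm__("r6");
    register unsigned int a4 __asm__("r10");   /* previously r7 */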
-
Guilherme G. Piccoli authored
Currently the regular CPU shutdown path for ARM disables IRQs/FIQs in the secondary CPUs - smp_send_stop() calls ipi_cpu_stop(), which is responsible for that. IRQs are architecturally masked when we take an interrupt, but FIQs are higher priority than IRQs, hence they aren't masked. With that said, it makes sense to disable FIQs here, but there's no need for (re-)disabling IRQs. More than that: there is an alternative path for disabling CPUs, in the form of the function crash_smp_send_stop(), which is used for the kexec/panic path. This function relies on an SMP call that also triggers a busy-wait loop [at machine_crash_nonpanic_core()], but without disabling FIQs. This might lead to odd scenarios, like early interrupts in the boot of a kexec'd kernel or even interrupts in secondary "disabled" CPUs while the main one still works in the panic path and assumes all secondary CPUs are (really!) off. So, let's disable FIQs in both paths and *not* disable IRQs a second time, since they are already masked in both paths by the architecture. This way, we keep both CPU quiesce paths consistent and safe. Cc: Marc Zyngier <maz@kernel.org> Cc: Michael Kelley <mikelley@microsoft.com> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Nick Desaulniers authored
clang-15's ability to elide loops completely became more aggressive when it can deduce how a variable is being updated in a loop. Counting down one variable by an increment of another can be replaced by a modulo operation. For 64b variables on 32b ARM EABI targets, this can result in the compiler generating calls to __aeabi_uldivmod, which it does for a do-while loop in float64_rem(). For the kernel, we'd generally prefer that developers not open-code 64b division via binary / operators and instead use the more explicit helpers from div64.h. On arm-linux-gnueabi targets, failure to do so can result in linkage failures due to undefined references to __aeabi_uldivmod(). While developers can avoid open-coding divisions on 64b variables, the compiler doesn't know that the Linux kernel has a partial implementation of a compiler runtime (--rtlib) to enforce this convention. It's also undecidable for the compiler whether the code in question would be faster executing the loop or eliding it and doing the 64b division. While I actively avoid using the internal -mllvm command line flags, I think we get better code here than with barrier(), which would force reloads+spills in the loop for all toolchains.
Link: https://github.com/ClangBuiltLinux/linux/issues/1666
Reported-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nathan Chancellor <nathan@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
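The div64.h convention mentioned above looks roughly like this (function names and values here are made up for illustration):

    #include <linux/math64.h>
    #include <linux/time64.h>

    /* 64-bit divisions written via the div64 helpers so that 32-bit ARM
     * never needs __aeabi_uldivmod from a runtime library. */
    static u64 avg_ns_per_event(u64 total_ns, u64 events)
    {
            return div64_u64(total_ns, events ?: 1);   /* 64-bit / 64-bit */
    }

    static u64 ns_to_ms(u64 ns)
    {
            return div_u64(ns, NSEC_PER_MSEC);         /* 64-bit / 32-bit */
    }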
-
Wang Kefeng authored
UEFI runtime page tables can currently be dumped only on ARM64, but ARM now supports EFI and ARM_PTDUMP_DEBUGFS. Since ARM could potentially execute with a 1G/3G user/kernel split, choose 1G as the upper limit for the UEFI runtime end; with this, we can enable the UEFI runtime page table dump on ARM. Acked-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Wang Kefeng authored
If there is a kernel fault (see do_kernel_fault()), we only print the generic "paging request" or "NULL pointer dereference" message, which doesn't show read, write or execute information; let's provide a better fault message for these cases. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Seung-Woo Kim authored
To enable UBSAN on ARM, this patch enables ARCH_HAS_UBSAN_SANITIZE_ALL in the arm configuration. A basic kernel boot test passes on arm with CONFIG_UBSAN_SANITIZE_ALL enabled. [florian: rebased against v6.0-rc7] Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Alex Sverdlin authored
"unwind: Index not found eef26358" warnings keep popping up on CONFIG_ARM_MODULE_PLTS-enabled systems if the PC points to a PLT veneer. Teach the unwinder how to deal with them, taking into account they don't change state of the stack or register file except loading PC. Link: https://lore.kernel.org/linux-arm-kernel/20200402153845.30985-1-kursad.oney@broadcom.com/Tested-by: Kursad Oney <kursad.oney@broadcom.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 04 Oct, 2022 6 commits
-
-
Wang Kefeng authored
ARM can have three page table levels if ARM_LPAE is enabled, or only two otherwise; let's show the page table level name when dumping. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Wang Kefeng authored
Since commit 7a1be318 ("ARM: 9012/1: move device tree mapping out of linear region"), the FDT is placed between the end of the vmalloc region and the start of the fixmap region; let's show it in the dump. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Alex Sverdlin authored
In case CONFIG_KASAN_VMALLOC=y kasan_populate_vmalloc() allocates the shadow pages dynamically. But even worse is that kasan_release_vmalloc() releases them, which is not compatible with create_mapping() of MODULES_VADDR..MODULES_END range:

  BUG: Bad page state in process kworker/9:1  pfn:2068b
  page:e5e06160 refcount:0 mapcount:0 mapping:00000000 index:0x0
  flags: 0x1000(reserved)
  raw: 00001000 e5e06164 e5e06164 00000000 00000000 00000000 ffffffff 00000000
  page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
  bad because of flags: 0x1000(reserved)
  Modules linked in: ip_tables
  CPU: 9 PID: 154 Comm: kworker/9:1 Not tainted 5.4.188-... #1
  Hardware name: LSI Axxia AXM55XX
  Workqueue: events do_free_init
  unwind_backtrace
  show_stack
  dump_stack
  bad_page
  free_pcp_prepare
  free_unref_page
  kasan_depopulate_vmalloc_pte
  __apply_to_page_range
  apply_to_existing_page_range
  kasan_release_vmalloc
  __purge_vmap_area_lazy
  _vm_unmap_aliases.part.0
  __vunmap
  do_free_init
  process_one_work
  worker_thread
  kthread

Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Linus Walleij authored
Pointers to virtual memory functions are (void *) but the __dma_update_pte() function is passing an unsigned long. Fix this up with an explicit cast. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Li Huafei authored
Because an exception stack frame is not created in the exception entry, save_trace() does special handling for the exception PC, but this is only needed when CONFIG_FRAME_POINTER_UNWIND=y. When CONFIG_ARM_UNWIND=y, unwind annotations have been added to the exception entry and save_trace() will repeatedly save the exception PC:

  [0x7f000090] hrtimer_hander+0x8/0x10 [hrtimer]
  [0x8019ec50] __hrtimer_run_queues+0x18c/0x394
  [0x8019f760] hrtimer_run_queues+0xbc/0xd0
  [0x8019def0] update_process_times+0x34/0x80
  [0x801ad2a4] tick_periodic+0x48/0xd0
  [0x801ad3dc] tick_handle_periodic+0x1c/0x7c
  [0x8010f2e0] twd_handler+0x30/0x40
  [0x80177620] handle_percpu_devid_irq+0xa0/0x23c
  [0x801718d0] generic_handle_domain_irq+0x24/0x34
  [0x80502d28] gic_handle_irq+0x74/0x88
  [0x8085817c] generic_handle_arch_irq+0x58/0x78
  [0x80100ba8] __irq_svc+0x88/0xc8
  [0x80108114] arch_cpu_idle+0x38/0x3c
  [0x80108114] arch_cpu_idle+0x38/0x3c    <==== duplicate saved exception PC
  [0x80861bf8] default_idle_call+0x38/0x130
  [0x8015d5cc] do_idle+0x150/0x214
  [0x8015d978] cpu_startup_entry+0x18/0x1c
  [0x808589c0] rest_init+0xd8/0xdc
  [0x80c00a44] arch_post_acpi_subsys_init+0x0/0x8

We can move the special handling of the exception PC in save_trace() to the unwind_frame() of the frame pointer unwinder. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Li Huafei authored
When using the frame pointer unwinder, it was found that the stack trace output of stack_trace_save() is incomplete if the stack contains call_with_stack():

  [0x7f00002c] dump_stack_task+0x2c/0x90 [hrtimer]
  [0x7f0000a0] hrtimer_hander+0x10/0x18 [hrtimer]
  [0x801a67f0] __hrtimer_run_queues+0x1b0/0x3b4
  [0x801a7350] hrtimer_run_queues+0xc4/0xd8
  [0x801a597c] update_process_times+0x3c/0x88
  [0x801b5a98] tick_periodic+0x50/0xd8
  [0x801b5bf4] tick_handle_periodic+0x24/0x84
  [0x8010ffc4] twd_handler+0x38/0x48
  [0x8017d220] handle_percpu_devid_irq+0xa8/0x244
  [0x80176e9c] generic_handle_domain_irq+0x2c/0x3c
  [0x8052e3a8] gic_handle_irq+0x7c/0x90
  [0x808ab15c] generic_handle_arch_irq+0x60/0x80
  [0x8051191c] call_with_stack+0x1c/0x20

For the frame pointer unwinder, unwind_frame() validates stackframe::fp against stackframe::sp. Since call_with_stack() switches the SP from one stack to another, stackframe::fp and stackframe::sp will point to different stacks, so we can no longer check stackframe::fp against stackframe::sp. Skip checking stackframe::fp at this point to avoid this problem. Signed-off-by: Li Huafei <lihuafei1@huawei.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 22 Sep, 2022 1 commit
-
-
Zhen Lei authored
Function show_regs() is usually called from an interrupt or exception handler; it prints the registers specified by the parameter 'regs' and then dumps the stack trace. Although not explicitly documented, dumping the stack trace based on 'regs' seems to make the most sense, because 'regs' is saved at the entry of the current interrupt or exception. dump_stack() can eventually print the desired content, but in the following example we can see that: 1) the backtrace of the interrupt or exception handler itself is not expected and causes confusion, and 2) some things are printed repeatedly: the line with the kernel version "CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8", and the registers saved in the "Exception stack", which is what 'regs' actually points to. For example:

  rcu: INFO: rcu_sched self-detected stall on CPU
  rcu: 0-....: (499 ticks this GP) idle=379/1/0x40000002 softirq=91/91 fqs=249
   (t=500 jiffies g=-911 q=13 ncpus=4)
  CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8
  Hardware name: ARM-Versatile Express
  PC is at ktime_get+0x4c/0xe8
  LR is at ktime_get+0x4c/0xe8
  pc : 8019a474  lr : 8019a474  psr: 60000013
  sp : cabd1f28  ip : 00000001  fp : 00000005
  r10: 527bf1b8  r9 : 431bde82  r8 : d7b634db
  r7 : 0000156e  r6 : 61f234f8  r5 : 00000001  r4 : 80ca86c0
  r3 : ffffffff  r2 : fe5bce0b  r1 : 00000000  r0 : 01a431f4
  Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
  Control: 10c5387d  Table: 6121406a  DAC: 00000051
  CPU: 0 PID: 70 Comm: test0 Not tainted 5.19.0+ #8    <-----------start----------
  Hardware name: ARM-Versatile Express                              |
   unwind_backtrace from show_stack+0x10/0x14                       |
   show_stack from dump_stack_lvl+0x40/0x4c                         |
   dump_stack_lvl from rcu_dump_cpu_stacks+0x10c/0x134              |
   rcu_dump_cpu_stacks from rcu_sched_clock_irq+0x780/0xaf4         |
   rcu_sched_clock_irq from update_process_times+0x54/0x74          |
   update_process_times from tick_periodic+0x3c/0xd4                |
   tick_periodic from tick_handle_periodic+0x20/0x80            worthless
   tick_handle_periodic from twd_handler+0x30/0x40                  or
   twd_handler from handle_percpu_devid_irq+0x8c/0x1c8          duplicated
   handle_percpu_devid_irq from generic_handle_domain_irq+0x24/0x34 |
   generic_handle_domain_irq from gic_handle_irq+0x74/0x88          |
   gic_handle_irq from generic_handle_arch_irq+0x34/0x44            |
   generic_handle_arch_irq from call_with_stack+0x18/0x20           |
   call_with_stack from __irq_svc+0x98/0xb0                         |
   Exception stack(0xcabd1ed8 to 0xcabd1f20)                        |
   1ec0:                                     01a431f4 00000000      |
   1ee0: fe5bce0b ffffffff 80ca86c0 00000001 61f234f8 0000156e d7b634db 431bde82 |
   1f00: 527bf1b8 00000005 00000001 cabd1f28 8019a474 8019a474 60000013 ffffffff |
   __irq_svc from ktime_get+0x4c/0xe8       <---------end--------------
   ktime_get from test_task+0x44/0x110
   test_task from kthread+0xd8/0xf4
   kthread from ret_from_fork+0x14/0x2c
   Exception stack(0xcabd1fb0 to 0xcabd1ff8)
   1fa0:                                     00000000 00000000 00000000 00000000
   1fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   1fe0: 00000000 00000000 00000000 00000000 00000013 00000000

After replacing dump_stack() with dump_backtrace():

  rcu: INFO: rcu_sched self-detected stall on CPU
  rcu: 0-....: (500 ticks this GP) idle=8f7/1/0x40000002 softirq=129/129 fqs=241
   (t=500 jiffies g=-915 q=13 ncpus=4)
  CPU: 0 PID: 69 Comm: test0 Not tainted 5.19.0+ #9
  Hardware name: ARM-Versatile Express
  PC is at ktime_get+0x4c/0xe8
  LR is at ktime_get+0x4c/0xe8
  pc : 8019a494  lr : 8019a494  psr: 60000013
  sp : cabddf28  ip : 00000001  fp : 00000002
  r10: 0779cb48  r9 : 431bde82  r8 : d7b634db
  r7 : 00000a66  r6 : e835ab70  r5 : 00000001  r4 : 80ca86c0
  r3 : ffffffff  r2 : ff337d39  r1 : 00000000  r0 : 00cc82c6
  Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
  Control: 10c5387d  Table: 611d006a  DAC: 00000051
   ktime_get from test_task+0x44/0x110
   test_task from kthread+0xd8/0xf4
   kthread from ret_from_fork+0x14/0x2c
   Exception stack(0xcabddfb0 to 0xcabddff8)
   dfa0:                                     00000000 00000000 00000000 00000000
   dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
   dfe0: 00000000 00000000 00000000 00000000 00000013 00000000

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 31 Aug, 2022 3 commits
-
-
Zhen Lei authored
The hardware automatically disables IRQs before jumping to an interrupt or exception vector. Therefore, the preempt_disable() operation that this_cpu_read() expands to is unnecessary. In fact, this_cpu_read() may even trigger scheduling; see the pseudocode below.

Pseudocode of this_cpu_read(xx):
    preempt_disable_notrace();
    raw_cpu_read(xx);
    if (unlikely(__preempt_count_dec_and_test()))
        __preempt_schedule_notrace();

Therefore, use raw_cpu_* instead of this_cpu_* to eliminate potential hazards. At the very least, it saves a few lines of assembly code. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
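As an illustration of the substitution (the per-CPU variable here is made up), in a path that is only ever entered with interrupts already disabled by the hardware:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, vector_depth);   /* hypothetical per-CPU variable */

    static void vector_entry_accounting(void)
    {
            /* IRQs are already off here, so no preempt protection is needed. */
            int depth = raw_cpu_read(vector_depth);   /* instead of this_cpu_read() */

            raw_cpu_write(vector_depth, depth + 1);
    }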
-
Wang Kefeng authored
Those functions have been removed since the 2006 commit d6551e88 ("[ARM] Add thread_notify infrastructure"). Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
Ben Wolsieffer authored
When building with an arm-*-uclinuxfdpiceabi toolchain, the FDPIC ABI is enabled by default but should not be used to build the kernel. Therefore, pass -mno-fdpic if supported by the compiler. Signed-off-by: Ben Wolsieffer <ben.wolsieffer@hefring.com> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- 30 Aug, 2022 1 commit
-
-
Baruch Siach authored
When user undefined-instruction debugging is enabled, the pc value is hashed like kernel pointers for security reasons. But the security benefit of this hash is very limited, because the code goes on to call __show_regs(), which prints the plain pointer value. The pc is a user pointer anyway, so the kernel does not leak anything. The only result is confusion about the difference between the pc value on the first printed line and the value that __show_regs() prints. Always print the plain value of pc. Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
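Illustratively (not the exact hunk), the change is from the hashed "%p" style to printing the raw value, matching what __show_regs() prints further down:

    #include <linux/printk.h>
    #include <linux/ptrace.h>

    /* Sketch: print the user-space PC unhashed with "%08lx". */
    static void report_undef_instr(struct pt_regs *regs)
    {
            pr_err("undefined instruction: pc=%08lx\n", instruction_pointer(regs));
    }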
-
- 28 Aug, 2022 4 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Linus Torvalds authored
Pull more hotfixes from Andrew Morton:
 "Seventeen hotfixes. Mostly memory management things. Ten patches are
  cc:stable, addressing pre-6.0 issues"

* tag 'mm-hotfixes-stable-2022-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  .mailmap: update Luca Ceresoli's e-mail address
  mm/mprotect: only reference swap pfn page if type match
  squashfs: don't call kmalloc in decompressors
  mm/damon/dbgfs: avoid duplicate context directory creation
  mailmap: update email address for Colin King
  asm-generic: sections: refactor memory_intersects
  bootmem: remove the vmemmap pages from kmemleak in put_page_bootmem
  ocfs2: fix freeing uninitialized resource on ocfs2_dlm_shutdown
  Revert "memcg: cleanup racy sum avoidance code"
  mm/zsmalloc: do not attempt to free IS_ERR handle
  binder_alloc: add missing mmap_lock calls when using the VMA
  mm: re-allow pinning of zero pfns (again)
  vmcoreinfo: add kallsyms_num_syms symbol
  mailmap: update Guilherme G. Piccoli's email addresses
  writeback: avoid use-after-free after removing device
  shmem: update folio if shmem_replace_page() updates the page
  mm/hugetlb: avoid corrupting page->mapping in hugetlb_mcopy_atomic_pte
-
Linus Torvalds authored
Pull bitmap fixes from Yury Norov:
 "Fix the reported issues, and implements the suggested improvements, for
  the version of the cpumask tests [1] that was merged with commit
  c41e8866 ("lib/test: introduce cpumask KUnit test suite").

  These changes include fixes for the tests, and better alignment with the
  KUnit style guidelines"

* tag 'bitmap-6.0-rc3' of github.com:/norov/linux:
  lib/cpumask_kunit: add tests file to MAINTAINERS
  lib/cpumask_kunit: log mask contents
  lib/test_cpumask: follow KUnit style guidelines
  lib/test_cpumask: fix cpu_possible_mask last test
  lib/test_cpumask: drop cpu_possible_mask full test
-
Luca Ceresoli authored
My Bootlin address is preferred from now on.
Link: https://lkml.kernel.org/r/20220826130515.3011951-1-luca.ceresoli@bootlin.com
Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Atish Patra <atishp@atishpatra.org> Cc: Hans Verkuil <hverkuil-cisco@xs4all.nl> Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-