- 30 Apr, 2024 1 commit
Jisheng Zhang authored
Currently, riscv Linux requires at least IMA, so all platforms have a multiplier, and the 'mul' efficiency is assumed to be comparable to or better than a sequence of five or so register-dependent arithmetic instructions. Select ARCH_HAS_FAST_MULTIPLIER to get slightly nicer codegen. Refer to commit f9b41929 ("[PATCH] bitops: hweight() speedup") for more details. In a simple benchmark calling hweight64() in a loop, this gives: about 14% performance improvement on JH7110, tested on Milkv Mars; about 23% performance improvement on TH1520 and SG2042, tested on the Sipeed LPI4A and SG2042 platforms; a slight performance drop on CV1800B, tested on Milkv Duo. Among all riscv platforms in my hands, this is the only one which sees a slight performance drop, meaning its 'mul' isn't quick enough. However, the same situation exists on x86: for example, P4 doesn't have fast integer multiplies, as noted in the above commit, yet x86 also selects ARCH_HAS_FAST_MULTIPLIER. So let's select ARCH_HAS_FAST_MULTIPLIER, which benefits almost all riscv platforms. Samuel also provided some performance numbers: on Unmatched, a 20% speedup for __sw_hweight32 and a 30% speedup for __sw_hweight64; on D1, an 8% speedup for __sw_hweight32 and an 8% slowdown for __sw_hweight64. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Samuel Holland <samuel.holland@sifive.com> Tested-by:
Samuel Holland <samuel.holland@sifive.com> Reviewed-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20240325105823.1483-1-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
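To make the trade-off concrete, here is a standalone sketch (not the kernel's lib/hweight.c code) of the popcount step that ARCH_HAS_FAST_MULTIPLIER changes: with the option set, the final horizontal sum of the per-byte counts collapses into a single multiply and shift instead of the longer chain of register-dependent shifts and adds mentioned above.
```
/* Illustration only, not the kernel implementation. */
static unsigned int hweight32_with_mul(unsigned int w)
{
        w -= (w >> 1) & 0x55555555;                      /* count bits in pairs   */
        w  = (w & 0x33333333) + ((w >> 2) & 0x33333333); /* ... then in nibbles   */
        w  = (w + (w >> 4)) & 0x0f0f0f0f;                /* per-byte counts       */
        return (w * 0x01010101) >> 24;                   /* sum the bytes via mul */
}
```
Without a fast multiplier, that last line is what expands into the five-or-so dependent instructions the commit refers to.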
- 29 Apr, 2024 1 commit
Samuel Holland authored
In SMP configurations, all TLB flushing narrower than flush_tlb_all() goes through __flush_tlb_range(). Do the same in UP configurations. This allows UP configurations to take advantage of recent improvements to the code in tlbflush.c, such as support for huge pages and flushing multiple-page ranges. Reviewed-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Signed-off-by:
Samuel Holland <samuel.holland@sifive.com> Reviewed-by:
Yunhui Cui <cuiyunhui@bytedance.com> Link: https://lore.kernel.org/r/20240327045035.368512-7-samuel.holland@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
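As a rough sketch of the idea (the exact riscv helper and its argument list are assumptions here, not the patch's literal code), a single-page flush in a UP build now simply funnels into the common range helper:
```
/* Sketch only: the argument order/meaning of __flush_tlb_range() is assumed. */
void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
{
        /* one PAGE_SIZE-sized range with a natural page stride */
        __flush_tlb_range(vma->vm_mm, addr, PAGE_SIZE, PAGE_SIZE);
}
```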
- 28 Apr, 2024 1 commit
Miguel Ojeda authored
The rust modules work on 64-bit RISC-V, with no twiddling required. Select HAVE_RUST and provide the required flags to kbuild so that the modules can be used. The Makefile and Kconfig changes are lifted from work done by Miguel in the Rust-for-Linux tree, hence his authorship. Following the rabbit hole, the Makefile changes originated in a script, created based on config files originally added by Gary, hence his co-authorship. 32-bit is broken in core rust code, so support is limited to 64-bit: ld.lld: error: undefined symbol: __udivdi3 As 64-bit RISC-V is now supported, add it to the arch support table. Co-developed-by:
Gary Guo <gary@garyguo.net> Signed-off-by:
Gary Guo <gary@garyguo.net> Signed-off-by:
Miguel Ojeda <ojeda@kernel.org> Co-developed-by:
Conor Dooley <conor.dooley@microchip.com> Signed-off-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20240409-silencer-book-ce1320f06aab@spud Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 26 Apr, 2024 1 commit
David Hildenbrand authored
Nowadays, we call it "GUP-fast": the external interface includes functions like "get_user_pages_fast()", and we renamed all internal functions to reflect that as well. Let's make the config option reflect that. Link: https://lkml.kernel.org/r/20240402125516.223131-3-david@redhat.com Signed-off-by:
David Hildenbrand <david@redhat.com> Reviewed-by:
Mike Rapoport (IBM) <rppt@kernel.org> Reviewed-by:
Jason Gunthorpe <jgg@nvidia.com> Reviewed-by:
John Hubbard <jhubbard@nvidia.com> Cc: Peter Xu <peterx@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
- 24 Apr, 2024 1 commit
Jisheng Zhang authored
Select ARCH_USE_CMPXCHG_LOCKREF to enable the cmpxchg-based lockless lockref implementation for riscv. Using Linus' test case[1] on the TH1520 platform, I see an 11.2% improvement; on the JH7110 platform, I see a 12.0% improvement. Link: http://marc.info/?l=linux-fsdevel&m=137782380714721&w=4 [1] Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Andrea Parri <parri.andrea@gmail.com> Link: https://lore.kernel.org/r/20240325111038.1700-2-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
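For readers unfamiliar with lockref, the fast path that ARCH_USE_CMPXCHG_LOCKREF unlocks looks roughly like the sketch below (the real code is the CMPXCHG_LOOP() machinery in lib/lockref.c; this is a simplified illustration): the spinlock and the reference count are packed into one 64-bit word, and the count is bumped with a cmpxchg as long as the lock is observed unlocked.
```
/* Simplified sketch of the lib/lockref.c fast path, not the actual code. */
struct lockref_sketch {
        union {
                u64 lock_count;            /* lock + count viewed as one word */
                struct {
                        spinlock_t lock;   /* assumes a 4-byte spinlock_t */
                        int count;
                };
        };
};

static bool lockref_get_not_locked(struct lockref_sketch *lr)
{
        u64 old = READ_ONCE(lr->lock_count);

        for (;;) {
                struct lockref_sketch new = { .lock_count = old };
                u64 prev;

                if (spin_is_locked(&new.lock))
                        return false;      /* contended: caller falls back to the spinlock */
                new.count++;
                prev = cmpxchg64_relaxed(&lr->lock_count, old, new.lock_count);
                if (prev == old)
                        return true;       /* reference taken without touching the lock */
                old = prev;
        }
}
```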
- 09 Apr, 2024 2 commits
Samuel Holland authored
For ease of testing, it is convenient to run NOMMU kernels in supervisor mode. The only required change is to offset the kernel load address, since the beginning of RAM is usually reserved for M-mode firmware. Signed-off-by:
Samuel Holland <samuel.holland@sifive.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20240227003630.3634533-5-samuel.holland@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Samuel Holland authored
The Zbb and Zicboz ISA extensions have no dependency on an MMU and are equally useful on NOMMU kernels. Signed-off-by:
Samuel Holland <samuel.holland@sifive.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20240227003630.3634533-4-samuel.holland@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 25 Mar, 2024 1 commit
Anup Patel authored
The QEMU virt machine supports AIA emulation and quite a few RISC-V platforms with AIA support are under development so select APLIC and IMSIC drivers for all RISC-V platforms. Signed-off-by:
Anup Patel <apatel@ventanamicro.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Björn Töpel <bjorn@rivosinc.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Reviewed-by:
Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20240307140307.646078-9-apatel@ventanamicro.com
- 13 Mar, 2024 1 commit
Charlie Jenkins authored
Introduce Kconfig options to set the kernel unaligned access support. These options provide a non-portable alternative to the runtime unaligned access probe. To support this, the unaligned access probing code is moved into its own file and gated behind a new RISCV_PROBE_UNALIGNED_ACCESS_SUPPORT option. Signed-off-by:
Charlie Jenkins <charlie@rivosinc.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Tested-by:
Samuel Holland <samuel.holland@sifive.com> Link: https://lore.kernel.org/r/20240308-disable_misaligned_probe_config-v9-4-a388770ba0ce@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 06 Mar, 2024 2 commits
Arnd Bergmann authored
Most architectures only support a single hardcoded page size. In order to ensure that each one of these sets the corresponding Kconfig symbols, change over the PAGE_SHIFT definition to the common one and allow only the hardware page size to be selected. Acked-by:
Guo Ren <guoren@kernel.org> Acked-by:
Heiko Carstens <hca@linux.ibm.com> Acked-by:
Stafford Horne <shorne@gmail.com> Acked-by:
Johannes Berg <johannes@sipsolutions.net> Acked-by:
Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
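In practice this means an architecture's page.h no longer hardcodes the shift; roughly speaking (shown for illustration, not verbatim from the patch), the common definition reduces to:
```
/* Approximate shape of the common definition once Kconfig owns the value. */
#define PAGE_SHIFT      CONFIG_PAGE_SHIFT
#define PAGE_SIZE       (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE - 1))
```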
Yangyu Chen authored
BUILTIN_DTB_SOURCE was previously only configured for the K210. Since SOC_BUILTIN_DTB_DECLARE was removed in commit d5805af9 ("riscv: Fix builtin DTB handling") from patch [1], the kernel can no longer choose among the built-in dtbs and always takes the first one. A later commit 0ddd7eaf ("riscv: Fix BUILTIN_DTB for sifive and microchip soc") from patch [2] added BUILTIN_DTB_SOURCE support for other SoCs, but that only works if the Kconfig in use links the expected dtb first, as discussed in thread [3]. Thus, a BUILTIN_DTB_SOURCE config is needed for all SoCs so a specific dtb can be chosen. This patch also removes "default y if XIP_KERNEL" for BUILTIN_DTB, since that default requires a proper dtb to be set in BUILTIN_DTB_SOURCE; otherwise an XIP kernel that does not set BUILTIN_DTB_SOURCE, or that deselects BUILTIN_DTB, will not boot. It also moves the default dtb string for the K210 from Kconfig into nommu_k210_defconfig and nommu_k210_sdcard_defconfig, to avoid complicating the Kconfig settings for other SoCs in the future. [1] https://lore.kernel.org/linux-riscv/20201208073355.40828-5-damien.lemoal@wdc.com/ [2] https://lore.kernel.org/linux-riscv/20210604120639.1447869-1-alex@ghiti.fr/ [3] https://lore.kernel.org/linux-riscv/CAK7LNATt_56mO2Le4v4EnPnAfd3gC8S_Sm5-GCsfa=qXy=8Lrg@mail.gmail.com/ Signed-off-by:
Yangyu Chen <cyy@cyyself.name> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Acked-by:
Palmer Dabbelt <palmer@rivosinc.com> Signed-off-by:
Conor Dooley <conor.dooley@microchip.com>
- 28 Feb, 2024 1 commit
Eric Biggers authored
LLVM commit 8e01042da9d3 ("[RISCV] Add missing dependency check for Zvkb (#79467)") broke the check used by the TOOLCHAIN_HAS_VECTOR_CRYPTO kconfig symbol because it made zvkb start depending on v or zve*. Fix this by specifying both v and zvkb when checking for support for zvkb. Signed-off-by:
Eric Biggers <ebiggers@google.com> Reviewed-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20240127090055.124336-1-ebiggers@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 24 Feb, 2024 1 commit
Baoquan He authored
Patch series "Split crash out from kexec and clean up related config items", v3. Motivation: ============= Previously, LKP reported a building error. When investigating, it can't be resolved reasonablly with the present messy kdump config items. https://lore.kernel.org/oe-kbuild-all/202312182200.Ka7MzifQ-lkp@intel.com/ The kdump (crash dumping) related config items could causes confusions: Firstly, CRASH_CORE enables codes including - crashkernel reservation; - elfcorehdr updating; - vmcoreinfo exporting; - crash hotplug handling; Now fadump of powerpc, kcore dynamic debugging and kdump all selects CRASH_CORE, while fadump - fadump needs crashkernel parsing, vmcoreinfo exporting, and accessing global variable 'elfcorehdr_addr'; - kcore only needs vmcoreinfo exporting; - kdump needs all of the current kernel/crash_core.c. So only enabling PROC_CORE or FA_DUMP will enable CRASH_CORE, this mislead people that we enable crash dumping, actual it's not. Secondly, It's not reasonable to allow KEXEC_CORE select CRASH_CORE. Because KEXEC_CORE enables codes which allocate control pages, copy kexec/kdump segments, and prepare for switching. These codes are shared by both kexec reboot and kdump. We could want kexec reboot, but disable kdump. In that case, CRASH_CORE should not be selected. -------------------- CONFIG_CRASH_CORE=y CONFIG_KEXEC_CORE=y CONFIG_KEXEC=y CONFIG_KEXEC_FILE=y --------------------- Thirdly, It's not reasonable to allow CRASH_DUMP select KEXEC_CORE. That could make KEXEC_CORE, CRASH_DUMP are enabled independently from KEXEC or KEXEC_FILE. However, w/o KEXEC or KEXEC_FILE, the KEXEC_CORE code built in doesn't make any sense because no kernel loading or switching will happen to utilize the KEXEC_CORE code. --------------------- CONFIG_CRASH_CORE=y CONFIG_KEXEC_CORE=y CONFIG_CRASH_DUMP=y --------------------- In this case, what is worse, on arch sh and arm, KEXEC relies on MMU, while CRASH_DUMP can still be enabled when !MMU, then compiling error is seen as the lkp test robot reported in above link. ------arch/sh/Kconfig------ config ARCH_SUPPORTS_KEXEC def_bool MMU config ARCH_SUPPORTS_CRASH_DUMP def_bool BROKEN_ON_SMP --------------------------- Changes: =========== 1, split out crash_reserve.c from crash_core.c; 2, split out vmcore_infoc. from crash_core.c; 3, move crash related codes in kexec_core.c into crash_core.c; 4, remove dependency of FA_DUMP on CRASH_DUMP; 5, clean up kdump related config items; 6, wrap up crash codes in crash related ifdefs on all 8 arch-es which support crash dumping, except of ppc; Achievement: =========== With above changes, I can rearrange the config item logic as below (the right item depends on or is selected by the left item): PROC_KCORE -----------> VMCORE_INFO |----------> VMCORE_INFO FA_DUMP----| |----------> CRASH_RESERVE ---->VMCORE_INFO / |---->CRASH_RESERVE KEXEC --| /| |--> KEXEC_CORE--> CRASH_DUMP-->/-|---->PROC_VMCORE KEXEC_FILE --| \ | \---->CRASH_HOTPLUG KEXEC --| |--> KEXEC_CORE (for kexec reboot only) KEXEC_FILE --| Test ======== On all 8 architectures, including x86_64, arm64, s390x, sh, arm, mips, riscv, loongarch, I did below three cases of config item setting and building all passed. 
Take configs on x86_64 as exampmle here: (1) Both CONFIG_KEXEC and KEXEC_FILE is unset, then all kexec/kdump items are unset automatically: # Kexec and crash features # CONFIG_KEXEC is not set # CONFIG_KEXEC_FILE is not set # end of Kexec and crash features (2) set CONFIG_KEXEC_FILE and 'make olddefconfig': --------------- # Kexec and crash features CONFIG_CRASH_RESERVE=y CONFIG_VMCORE_INFO=y CONFIG_KEXEC_CORE=y CONFIG_KEXEC_FILE=y CONFIG_CRASH_DUMP=y CONFIG_CRASH_HOTPLUG=y CONFIG_CRASH_MAX_MEMORY_RANGES=8192 # end of Kexec and crash features --------------- (3) unset CONFIG_CRASH_DUMP in case 2 and execute 'make olddefconfig': ------------------------ # Kexec and crash features CONFIG_KEXEC_CORE=y CONFIG_KEXEC_FILE=y # end of Kexec and crash features ------------------------ Note: For ppc, it needs investigation to make clear how to split out crash code in arch folder. Hope Hari and Pingfan can help have a look, see if it's doable. Now, I make it either have both kexec and crash enabled, or disable both of them altogether. This patch (of 14): Both kdump and fa_dump of ppc rely on crashkernel reservation. Move the relevant codes into separate files: crash_reserve.c, include/linux/crash_reserve.h. And also add config item CRASH_RESERVE to control its enabling of the codes. And update config items which has relationship with crashkernel reservation. And also change ifdeffery from CONFIG_CRASH_CORE to CONFIG_CRASH_RESERVE when those scopes are only crashkernel reservation related. And also rename arch/XXX/include/asm/{crash_core.h => crash_reserve.h} on arm64, x86 and risc-v because those architectures' crash_core.h is only related to crashkernel reservation. [akpm@linux-foundation.org: s/CRASH_RESEERVE/CRASH_RESERVE/, per Klara Modin] Link: https://lkml.kernel.org/r/20240124051254.67105-1-bhe@redhat.com Link: https://lkml.kernel.org/r/20240124051254.67105-2-bhe@redhat.comSigned-off-by:
Baoquan He <bhe@redhat.com> Acked-by:
Hari Bathini <hbathini@linux.ibm.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Pingfan Liu <piliu@redhat.com> Cc: Klara Modin <klarasmodin@gmail.com> Cc: Michael Kelley <mhklinux@outlook.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
- 22 Feb, 2024 2 commits
Nathan Chancellor authored
Now that the minimum supported version of LLVM for building the kernel has been bumped to 13.0.1, this condition is always true, as the build will fail during the configuration stage for older LLVM versions. Remove it. Link: https://lkml.kernel.org/r/20240125-bump-min-llvm-ver-to-13-0-1-v1-8-f5ff9bda41c5@kernel.org Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Kees Cook <keescook@chromium.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Conor Dooley <conor@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masahiro Yamada <masahiroy@kernel.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Nicolas Schier <nicolas@fjasle.eu> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
Nathan Chancellor authored
reviews.llvm.org was LLVM's Phabricator instance for code review. It has been abandoned in favor of GitHub pull requests. While the majority of links in the kernel sources still work because of the work Fangrui has done turning the dynamic Phabricator instance into a static archive, there are some issues with that work, so preemptively convert all the links in the kernel sources to point to the commit on GitHub. Most of the commits have the corresponding differential review link in the commit message itself, so there should not be any loss of fidelity in the relevant information. Link: https://discourse.llvm.org/t/update-on-github-pull-requests/71540/172 Link: https://lkml.kernel.org/r/20240109-update-llvm-links-v1-2-eb09b59db071@kernel.org Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Acked-by:
Fangrui Song <maskray@google.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Mykola Lysenko <mykolal@fb.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
- 17 Feb, 2024 1 commit
Nathan Chancellor authored
Commit e4bb020f ("riscv: detect assembler support for .option arch") added two tests: one for a valid value to '.option arch' that should succeed, and one for an invalid value that is expected to fail, to make sure that support for '.option arch' is properly detected, because Clang does not error when '.option arch' is not supported:

  $ clang --target=riscv64-linux-gnu -Werror -x assembler -c -o /dev/null <(echo '.option arch, +m')
  /dev/fd/63:1:9: warning: unknown option, expected 'push', 'pop', 'rvc', 'norvc', 'relax' or 'norelax'
  .option arch, +m
          ^
  $ echo $?
  0

Unfortunately, the invalid test started being accepted by Clang after the linked llvm-project change, which causes CONFIG_AS_HAS_OPTION_ARCH and configurations that depend on it to be silently disabled, even though those versions do support '.option arch'. The invalid test can be avoided altogether by using '-Wa,--fatal-warnings', which turns all assembler warnings into errors, like '-Werror' does for the compiler:

  $ clang --target=riscv64-linux-gnu -Werror -Wa,--fatal-warnings -x assembler -c -o /dev/null <(echo '.option arch, +m')
  /dev/fd/63:1:9: error: unknown option, expected 'push', 'pop', 'rvc', 'norvc', 'relax' or 'norelax'
  .option arch, +m
          ^
  $ echo $?
  1

The as-instr macros have been updated to make use of this flag, so remove the invalid test, which allows CONFIG_AS_HAS_OPTION_ARCH to work for all compiler versions. Cc: stable@vger.kernel.org Fixes: e4bb020f ("riscv: detect assembler support for .option arch") Link: https://github.com/llvm/llvm-project/commit/3ac9fe69f70a2b3541266daedbaaa7dc9c007a2a Reported-by:
Eric Biggers <ebiggers@kernel.org> Closes: https://lore.kernel.org/r/20240121011341.GA97368@sol.localdomain/ Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Tested-by:
Eric Biggers <ebiggers@google.com> Tested-by:
Andy Chiu <andybnac@gmail.com> Reviewed-by:
Andy Chiu <andybnac@gmail.com> Tested-by:
Conor Dooley <conor.dooley@microchip.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Acked-by:
Masahiro Yamada <masahiroy@kernel.org> Link: https://lore.kernel.org/r/20240125-fix-riscv-option-arch-llvm-18-v1-2-390ac9cc3cd0@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 15 Feb, 2024 2 commits
Andrea Parri authored
RISC-V uses xRET instructions on return from interrupt and to go back to user-space; the xRET instruction is not core serializing. Use FENCE.I for providing core serialization as follows: - by calling sync_core_before_usermode() on return from interrupt (cf. ipi_sync_core()), - via switch_mm() and sync_core_before_usermode() (respectively, for uthread->uthread and kthread->uthread transitions) before returning to user-space. On RISC-V, the serialization in switch_mm() is activated by resetting the icache_stale_mask of the mm at prepare_sync_core_cmd(). Suggested-by:
Palmer Dabbelt <palmer@dabbelt.com> Signed-off-by:
Andrea Parri <parri.andrea@gmail.com> Reviewed-by:
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/r/20240131144936.29190-5-parri.andrea@gmail.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
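The core-serializing instruction itself is a single FENCE.I on the local hart; a sketch of the kind of helper the paths above end up invoking (illustrative, not the exact kernel code):
```
/* Illustration: FENCE.I serializes instruction fetch on the local hart. */
static inline void riscv_sync_core_sketch(void)
{
        asm volatile ("fence.i" ::: "memory");
}
```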
Andrea Parri authored
The membarrier system call requires a full memory barrier after storing to rq->curr, before going back to user-space. The barrier is only needed when switching between processes: the barrier is implied by mmdrop() when switching from kernel to userspace, and it's not needed when switching from userspace to kernel. Rely on the feature/mechanism ARCH_HAS_MEMBARRIER_CALLBACKS and on the primitive membarrier_arch_switch_mm(), already adopted by the PowerPC architecture, to insert the required barrier. Fixes: fab957c1 ("RISC-V: Atomic and Locking Code") Signed-off-by:
Andrea Parri <parri.andrea@gmail.com> Reviewed-by:
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/r/20240131144936.29190-2-parri.andrea@gmail.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
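Modeled on the powerpc hook mentioned above, the arch callback boils down to a conditional full barrier on the switch-in path; the state checks below are illustrative rather than copied from the riscv patch:
```
/* Sketch in the style of powerpc's membarrier_arch_switch_mm(). */
static inline void membarrier_arch_switch_mm_sketch(struct mm_struct *prev,
                                                    struct mm_struct *next,
                                                    struct task_struct *tsk)
{
        if (likely(!(atomic_read(&next->membarrier_state) &
                     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
                      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
                return;

        /* full barrier between the rq->curr store and the return to user-space */
        smp_mb();
}
```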
- 06 Feb, 2024 1 commit
Kees Cook authored
For simplicity in splitting out UBSan options into separate rules, remove CONFIG_UBSAN_SANITIZE_ALL, effectively defaulting to "y", which is how it is generally used anyway. (There are no ":= y" cases beyond where a specific file is enabled when a top-level ":= n" is in effect.) Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Marco Elver <elver@google.com> Cc: linux-doc@vger.kernel.org Cc: linux-kbuild@vger.kernel.org Signed-off-by:
Kees Cook <keescook@chromium.org>
- 25 Jan, 2024 1 commit
Song Shuai authored
Inspired by arm64's implementation in commit 70918779 ("arm64: entry: Enable random_kstack_offset support"), add support for kernel stack offset randomization while handling syscalls; the offset is limited by default to KSTACK_OFFSET_MAX() (i.e. 10 bits). In order to avoid triggering stack canaries (due to __builtin_alloca) and slowing down the entry path, use the __no_stack_protector attribute to disable the stack protector for do_trap_ecall_u() at the function level. Acked-by:
Palmer Dabbelt <palmer@rivosinc.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Song Shuai <songshuaishuai@tinylab.org> Link: https://lore.kernel.org/r/20231109133751.212079-1-songshuaishuai@tinylab.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
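A sketch of how a syscall entry path typically consumes the generic helpers from <linux/randomize_kstack.h> (the entropy source shown is an assumption, not necessarily what the riscv patch uses):
```
/* Sketch only; not the literal riscv do_trap_ecall_u(). */
static noinline void do_syscall_entry_sketch(struct pt_regs *regs)
{
        add_random_kstack_offset();     /* apply the offset chosen on the previous entry */

        /* ... dispatch the system call ... */

        choose_random_kstack_offset(get_random_u16());  /* pick one for next time */
}
```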
- 24 Jan, 2024 2 commits
Jisheng Zhang authored
Activate the fast gup for riscv mmu platforms. Here are some GUP_FAST_BENCHMARK performance numbers: Before the patch: GUP_FAST_BENCHMARK: Time: get:53203 put:5085 us After the patch: GUP_FAST_BENCHMARK: Time: get:17711 put:5060 us The get time is reduced by 66.7%! IOW, 3x get speed! Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20231219175046.2496-5-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Jisheng Zhang authored
In order to implement fast gup we need to ensure that the page table walker is protected from page table pages being freed from under it. The riscv situation is more complicated than on other architectures: some riscv platforms use IPIs to perform TLB shootdown (for example, those which support AIA; riscv_ipi_for_rfence is usually true on these platforms), while others rely on the SBI to perform TLB shootdown (riscv_ipi_for_rfence is usually false there). To keep software page table walkers safe in this case we switch to RCU-based table free (MMU_GATHER_RCU_TABLE_FREE); see the comment below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h for more details. This patch enables MMU_GATHER_RCU_TABLE_FREE and then uses tlb_remove_page_ptdesc() on platforms which use IPIs for TLB shootdown, and tlb_remove_ptdesc() on platforms which use the SBI for TLB shootdown. In both cases disabling interrupts will block the free and protect the fast gup page walker. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Link: https://lore.kernel.org/r/20231219175046.2496-4-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 23 Jan, 2024 2 commits
Conor Dooley authored
Revert commit ed309ce5 ("RISC-V: mark hibernation as nonportable") as it appears the broken versions of OpenSBI have not made it to production on any systems that support hibernation. Signed-off-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20230802-chef-throng-d9de8b672a49@wendy Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Eric Biggers authored
Add a kconfig symbol that indicates whether the toolchain supports the vector crypto extensions. This is needed by the RISC-V crypto code. Signed-off-by:
Eric Biggers <ebiggers@google.com> Link: https://lore.kernel.org/r/20240122002024.27477-3-ebiggers@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 22 Jan, 2024 1 commit
Wende Tan authored
Allow LTO to be selected for RISC-V, but only when LLD >= 14, since there is an issue [1] in prior LLD versions that prevents LLD from generating proper machine code for RISC-V when writing `nop`s. To avoid boot failures in QEMU [2], '-mattr=+c' and '-mattr=+relax' need to be passed via '-mllvm' to ld.lld, as there appears to be an issue with LLVM's target-features and LTO [3], which can result in incorrect relocations to branch targets [4]. Once this is fixed in LLVM, it can be made conditional on affected ld.lld versions. Disable LTO for arch/riscv/kernel/pi, as llvm-objcopy expects an ELF object file when manipulating the files in that subfolder, rather than LLVM bitcode. [1] https://github.com/llvm/llvm-project/issues/50505, resolved by LLVM commit e63455d5e0e5 ("[MC] Use local MCSubtargetInfo in writeNops") [2] https://github.com/ClangBuiltLinux/linux/issues/1942 [3] https://github.com/llvm/llvm-project/issues/59350 [4] https://github.com/llvm/llvm-project/issues/65090 Tested-by:
Wende Tan <twd2.me@gmail.com> Signed-off-by:
Wende Tan <twd2.me@gmail.com> Co-developed-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231017-riscv-lto-v4-1-e7810b24e805@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 18 Jan, 2024 5 commits
Song Shuai authored
Add RISC-V variants of the ftrace-direct* samples. Tested-by:
Evgenii Shatokhin <e.shatokhin@yadro.com> Signed-off-by:
Song Shuai <suagrfillet@gmail.com> Tested-by:
Guo Ren <guoren@kernel.org> Signed-off-by:
Guo Ren <guoren@kernel.org> Acked-by:
Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20231130121531.1178502-5-bjorn@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Song Shuai authored
Select DYNAMIC_FTRACE_WITH_DIRECT_CALLS to provide the register_ftrace_direct[_multi] interfaces, allowing users to register a custom trampoline (direct_caller) as the mcount for one or more target functions. modify_ftrace_direct[_multi] is also provided for modifying direct_caller. To make direct_caller and the other ftrace hooks (e.g. function/fgraph tracer, k[ret]probes) coexist, a temporary register is nominated to store the address of direct_caller in ftrace_regs_caller. After the address of direct_caller is set by direct_ops->func and the RESTORE_REGS in ftrace_regs_caller, direct_caller is jumped to by a `jr` instruction. Add DYNAMIC_FTRACE_WITH_DIRECT_CALLS support for RISC-V. Signed-off-by:
Song Shuai <suagrfillet@gmail.com> Tested-by:
Guo Ren <guoren@kernel.org> Signed-off-by:
Guo Ren <guoren@kernel.org> Acked-by:
Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20231130121531.1178502-4-bjorn@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
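For context, the C side of a direct-call user looks roughly like the ftrace-direct samples (the architecture-specific trampoline asm, which is what this series adds for RISC-V, is omitted here; the sketch below is illustrative, not one of the actual sample files):
```
/* Sketch of a direct-call user; my_direct_tramp would be an asm trampoline. */
extern void my_direct_tramp(void);

static struct ftrace_ops direct;

static int __init direct_sketch_init(void)
{
        int ret;

        /* trace exactly one function with the custom trampoline */
        ret = ftrace_set_filter_ip(&direct, (unsigned long)wake_up_process, 0, 0);
        if (ret)
                return ret;

        return register_ftrace_direct(&direct, (unsigned long)my_direct_tramp);
}
```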
Song Shuai authored
In commit afc76b8b ("riscv: Using PATCHABLE_FUNCTION_ENTRY instead of MCOUNT") RISC-V added support for -fpatchable-function-entry, which removes the need for recordmcount. Select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY to tell the build system not to run recordmcount. Link: https://lore.kernel.org/linux-riscv/CAAYs2=j3Eak9vU6xbAw0zPuoh00rh8v5C2U3fePkokZFibWs2g@mail.gmail.com/T/#t Link: https://lore.kernel.org/linux-riscv/Y4jtfrJt+%2FQ5nMOz@spud/ Signed-off-by:
Song Shuai <suagrfillet@gmail.com> Tested-by:
Guo Ren <guoren@kernel.org> Signed-off-by:
Guo Ren <guoren@kernel.org> Acked-by:
Björn Töpel <bjorn@rivosinc.com> Link: https://lore.kernel.org/r/20231130121531.1178502-2-bjorn@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Nathan Chancellor authored
LLVM prior to 18.0.0 would generate incorrect debug info for DWARF5 due to linker relaxation, which was worked around in clang by defaulting RISC-V to DWARF4 [1]. Unfortunately, this workaround does not work for the kernel because the DWARF version can be independently changed from the default in Kconfig. Do not allow DWARF5 to be selected for RISC-V when using linker relaxation (ld.lld >= 15.0.0) and a version of LLVM that does not have the fixes (the integrated assembler [2] and ld.lld [3] < 18.0.0) necessary to generate the correct debug info. Link: https://github.com/llvm/llvm-project/commit/bbc0f99f3bc96f1db16f649fc21dd18e5b0918f6 [1] Link: https://github.com/llvm/llvm-project/commit/1df5ea29b43690b6622db2cad7b745607ca4de6a [2] Link: https://github.com/llvm/llvm-project/commit/7ffabb61a5569444b5ac9322e22e5471cc5e4a77 [3] Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Fangrui Song <maskray@google.com> Link: https://lore.kernel.org/r/20231205-riscv-restrict-dwarf5-llvm-v2-2-aedf00a382ac@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Nathan Chancellor authored
Certain configurations may need to be disabled if linker relaxation is in use, such as DWARF5 with ld.lld < 18. Hoist the logic of whether or not linker relaxation is in use into Kconfig so decisions can be made at configuration time. Reviewed-by:
Fangrui Song <maskray@google.com> Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20231205-riscv-restrict-dwarf5-llvm-v2-1-aedf00a382ac@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 16 Jan, 2024 2 commits
Andy Chiu authored
Add kernel_vstate to keep track of kernel-mode Vector registers when a trap-introduced context switch happens. Also, provide riscv_v_flags to let the context save/restore routines track context status. Context tracking happens whenever the core starts its in-kernel Vector execution. An active (dirty) kernel task's V context will be saved to memory whenever a trap-introduced context switch happens, or when a softirq that happens to nest on top of it uses Vector. Context restoring happens when execution transfers back to the original kernel context where it first enabled preempt_v. Also, provide a config option, CONFIG_RISCV_ISA_V_PREEMPTIVE, to give users the option to disable preemptible kernel-mode Vector at build time. Users with constrained memory may want to disable this config, as preemptible kernel-mode Vector needs extra space to track each thread's kernel-mode V context. Users may also want to disable it if all kernel-mode Vector code is time-sensitive and cannot tolerate context switch overhead. Signed-off-by:
Andy Chiu <andy.chiu@sifive.com> Tested-by:
Björn Töpel <bjorn@rivosinc.com> Tested-by:
Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-11-andy.chiu@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
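Call sites bracket their Vector use so the tracking described above knows when a task's kernel-mode V context is live; schematically (a sketch, not a particular in-tree user):
```
/* Sketch: bracketing a kernel-mode Vector section. */
static void vector_accelerated_op_sketch(void *dst, const void *src, size_t len)
{
        kernel_vector_begin();  /* V context becomes active/dirty from here on */
        /* ... vector-accelerated body, e.g. an inline-asm copy loop ... */
        kernel_vector_end();    /* context is clean again and can be switched freely */
}
```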
Andy Chiu authored
This patch utilizes Vector to perform copy_to_user/copy_from_user. If Vector is available and the copy size is large enough for Vector to perform better than scalar code, then direct the kernel to do Vector copies for userspace. Though the best programming practice for users is to reduce copies, this provides a faster variant when copies are inevitable. The optimal size for using Vector, copy_to_user_thres, is only a heuristic for now; we can add DT parsing if people feel the need to customize it. The exception fixup code of __asm_vector_usercopy must fall back to the scalar version, because accessing user pages might fault and must be sleepable. The current kernel-mode Vector does not allow tasks to be preemptible, so we must deactivate Vector and perform a scalar fallback in such cases. The original implementation of the Vector operations comes from https://github.com/sifive/sifive-libc, which we agreed to contribute to the Linux kernel. Co-developed-by:
Jerry Shih <jerry.shih@sifive.com> Signed-off-by:
Jerry Shih <jerry.shih@sifive.com> Co-developed-by:
Nick Knight <nick.knight@sifive.com> Signed-off-by:
Nick Knight <nick.knight@sifive.com> Suggested-by:
Guo Ren <guoren@kernel.org> Signed-off-by:
Andy Chiu <andy.chiu@sifive.com> Tested-by:
Björn Töpel <bjorn@rivosinc.com> Tested-by:
Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> Link: https://lore.kernel.org/r/20240115055929.4736-6-andy.chiu@sifive.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 11 Jan, 2024 2 commits
Alexandre Ghiti authored
Allow the flushing of the TLB to be deferred when unmapping pages, which reduces the number of IPIs and the number of sfence.vma instructions. The ubenchmark used in commit 43b3dfdd ("arm64: support batched/deferred tlb shootdown during page reclamation/migration"), made multithreaded to force the usage of IPIs, shows a good performance improvement on all platforms:

* Unmatched: ~34%
* TH1520   : ~78%
* Qemu     : ~81%

In addition, perf on qemu reports an important decrease in time spent dealing with IPIs:

Before: 68.17%  main  [kernel.kallsyms]  [k] __sbi_rfence_v02_call
After :  8.64%  main  [kernel.kallsyms]  [k] __sbi_rfence_v02_call

* Benchmark:

int stick_this_thread_to_core(int core_id) {
        int num_cores = sysconf(_SC_NPROCESSORS_ONLN);
        if (core_id < 0 || core_id >= num_cores)
                return EINVAL;

        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(core_id, &cpuset);

        pthread_t current_thread = pthread_self();
        return pthread_setaffinity_np(current_thread, sizeof(cpu_set_t), &cpuset);
}

static void *fn_thread (void *p_data)
{
        int ret;
        pthread_t thread;

        stick_this_thread_to_core((int)p_data);

        while (1) {
                sleep(1);
        }

        return NULL;
}

int main()
{
        volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        pthread_t threads[4];
        int ret;

        for (int i = 0; i < 4; ++i) {
                ret = pthread_create(&threads[i], NULL, fn_thread, (void *)i);
                if (ret) {
                        printf("%s", strerror (ret));
                }
        }

        memset(p, 0x88, SIZE);

        for (int k = 0; k < 10000; k++) {
                /* swap in */
                for (int i = 0; i < SIZE; i += 4096) {
                        (void)p[i];
                }

                /* swap out */
                madvise(p, SIZE, MADV_PAGEOUT);
        }

        for (int i = 0; i < 4; i++) {
                pthread_cancel(threads[i]);
        }

        for (int i = 0; i < 4; i++) {
                pthread_join(threads[i], NULL);
        }

        return 0;
}

Signed-off-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by:
Jisheng Zhang <jszhang@kernel.org> Tested-by: Jisheng Zhang <jszhang@kernel.org> # Tested on TH1520 Tested-by:
Nam Cao <namcao@linutronix.de> Link: https://lore.kernel.org/r/20240108193640.344929-1-alexghiti@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Andrew Jones authored
When the SUSP SBI extension is present it implies that the standard "suspend to RAM" type is available. Wire it up to the generic platform suspend support, also applying the already present support for non-retentive CPU suspend. When the kernel is built with CONFIG_SUSPEND, one can do 'echo mem > /sys/power/state' to suspend. Resumption will occur when a platform-specific wake-up event arrives. Signed-off-by:
Andrew Jones <ajones@ventanamicro.com> Tested-by:
Samuel Holland <samuel.holland@sifive.com> Reviewed-by:
Conor Dooley <conor.dooley@microchip.com> Link: https://lore.kernel.org/r/20231206110807.35882-4-ajones@ventanamicro.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
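The wiring into the generic suspend framework is conceptually small; in the sketch below the *_sketch helpers are hypothetical stand-ins for the SBI SUSP call path, not the patch's actual function names.
```
/* Sketch only: the SBI SUSP helper below is hypothetical. */
extern int sbi_susp_suspend_to_ram_sketch(void);    /* hypothetical SBI call wrapper */

static int sbi_susp_enter_sketch(suspend_state_t state)
{
        /* non-retentive "suspend to RAM" via the SUSP extension */
        return sbi_susp_suspend_to_ram_sketch();
}

static const struct platform_suspend_ops sbi_susp_ops_sketch = {
        .valid = suspend_valid_only_mem,        /* only "mem" is wired up */
        .enter = sbi_susp_enter_sketch,
};

static int __init sbi_susp_init_sketch(void)
{
        suspend_set_ops(&sbi_susp_ops_sketch);  /* enables 'echo mem > /sys/power/state' */
        return 0;
}
```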
- 10 Jan, 2024 4 commits
Jisheng Zhang authored
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string comparisons in the vfs layer. This patch implements support for load_unaligned_zeropad in much the same way as has been done for arm64. Here are the test program and steps:

$ cat tt.c
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

#define ITERATIONS 1000000
#define PATH "123456781234567812345678123456781"

int main(void)
{
        unsigned long i;
        struct stat buf;

        for (i = 0; i < ITERATIONS; i++)
                stat(PATH, &buf);

        return 0;
}

$ gcc -O2 tt.c
$ touch 123456781234567812345678123456781
$ time ./a.out

Per my tests on T-HEAD C910 platforms, the performance of the above test is improved by about 7.5%. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20231225044207.3821-3-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
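The core of the word-at-a-time technique that DCACHE_WORD_ACCESS feeds on is the classic zero-byte test below (a generic illustration; the riscv-specific part of the patch is the faulting-load fixup behind load_unaligned_zeropad(), which is not shown here):
```
/* Generic illustration of the zero-byte test used by word-at-a-time code. */
#define WORD_ONES       0x0101010101010101UL
#define WORD_HIGHS      0x8080808080808080UL

static inline unsigned long has_zero_byte(unsigned long word)
{
        /* non-zero iff at least one byte of 'word' is 0x00 */
        return (word - WORD_ONES) & ~word & WORD_HIGHS;
}
```
This lets path lookup compare and hash a name eight bytes per iteration instead of one byte at a time.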
Jisheng Zhang authored
Some riscv implementations, such as T-HEAD's C906, C908, C910 and C920, support efficient unaligned access, and for performance reasons we want to enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To avoid performance regressions on platforms without efficient unaligned access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally. Runtime code patching based on the detected speed would be a good solution, but that's not easy: it involves lots of work to modify various subsystems such as net, mm, lib and so on, and can be done step by step. So let's take an easier solution: add support for efficient unaligned access and hide the support under NONPORTABLE. Introduce RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends on NONPORTABLE; if users know at config time that the kernel will only run on hardware platforms with efficient unaligned access, they can enable it. Obviously, a generic unified kernel Image shouldn't enable it. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reviewed-by:
Charlie Jenkins <charlie@rivosinc.com> Reviewed-by:
Eric Biggers <ebiggers@google.com> Link: https://lore.kernel.org/r/20231225044207.3821-2-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Jisheng Zhang authored
As said in the help of ARCH_WANTS_NO_INSTR entry in arch/Kconfig: "An architecture should select this if the noinstr macro is being used on functions to denote that the toolchain should avoid instrumenting such functions and is required for correctness." Select ARCH_WANTS_NO_INSTR for correctness. PS: The reason we didn't find any issue so far is that the CC_HAS_NO_PROFILE_FN_ATTR is true. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20231123142223.1787-1-jszhang@kernel.org Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
Frederik Haxel authored
This enables, among other things, testing with the QEMU virt machine. To build an XIP kernel for the QEMU virt machine, configure the kernel as desired and apply the following configuration:
```
CONFIG_NONPORTABLE=y
CONFIG_XIP_KERNEL=y
CONFIG_XIP_PHYS_ADDR=0x20000000
CONFIG_PHYS_RAM_BASE=0x80200000
CONFIG_BUILTIN_DTB=n
```
Since the QEMU virt flash memory expects a 32 MB file, the built image must be padded, for example with `truncate -s 32M arch/riscv/boot/xipImage`. The kernel can then be started using the following command in QEMU (v8+):
```
qemu-system-riscv64 -M virt,pflash0=pflash0 \
    -blockdev node-name=pflash0,driver=file,read-only=on,\
filename=arch/riscv/boot/xipImage <optional parameters>
```
Signed-off-by:
Frederik Haxel <haxel@fzi.de> Link: https://lore.kernel.org/r/20231212130116.848530-4-haxel@fzi.de Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
- 30 Dec, 2023 1 commit
Andrew Jones authored
When the SBI STA extension exists we can use it to implement paravirt steal-time support. Fill in the empty pv-time functions with an SBI STA implementation and add the Kconfig knobs allowing it to be enabled. Acked-by:
Palmer Dabbelt <palmer@rivosinc.com> Reviewed-by:
Atish Patra <atishp@rivosinc.com> Reviewed-by:
Anup Patel <anup@brainfault.org> Signed-off-by:
Andrew Jones <ajones@ventanamicro.com> Signed-off-by:
Anup Patel <anup@brainfault.org>
- 20 Dec, 2023 1 commit
Arnd Bergmann authored
The cleanup for the CONFIG_KEXEC Kconfig logic accidentally changed the 'depends on CRYPTO=y' dependency to a plain 'depends on CRYPTO', which causes a link failure when all the crypto support is in a loadable module and kexec_file support is built-in: x86_64-linux-ld: vmlinux.o: in function `__x64_sys_kexec_file_load': (.text+0x32e30a): undefined reference to `crypto_alloc_shash' x86_64-linux-ld: (.text+0x32e58e): undefined reference to `crypto_shash_update' x86_64-linux-ld: (.text+0x32e6ee): undefined reference to `crypto_shash_final' Both s390 and x86 have this problem, while ppc64 and riscv have the correct dependency already. On riscv, the dependency is only used for the purgatory, not for the kexec_file code itself, which may be a bit surprising as it means that with CONFIG_CRYPTO=m, it is possible to enable KEXEC_FILE but then the purgatory code is silently left out. Move this into the common Kconfig.kexec file in a way that is correct everywhere, using the dependency on CRYPTO_SHA256=y only when the purgatory code is available. This requires reversing the dependency between ARCH_SUPPORTS_KEXEC_PURGATORY and KEXEC_FILE, but the effect remains the same, other than making riscv behave like the other ones. On s390, there is an additional dependency on CRYPTO_SHA256_S390, which should technically not be required but gives better performance. Remove this dependency here, noting that it was not present in the initial Kconfig code but was brought in without an explanation in commit 71406883 ("s390/kexec_file: Add kexec_file_load system call"). [arnd@arndb.de: fix riscv build] Link: https://lkml.kernel.org/r/67ddd260-d424-4229-a815-e3fcfb864a77@app.fastmail.com Link: https://lkml.kernel.org/r/20231023110308.1202042-1-arnd@kernel.org Fixes: 6af51380 ("x86/kexec: refactor for kernel/Kconfig.kexec") Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Reviewed-by:
Eric DeVolder <eric_devolder@yahoo.com> Tested-by:
Eric DeVolder <eric_devolder@yahoo.com> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Alexander Gordeev <agordeev@linux.ibm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Christian Borntraeger <borntraeger@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Conor Dooley <conor@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sven Schnelle <svens@linux.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>