- 20 Dec, 2018 13 commits
-
-
Christoph Hellwig authored
The unmap methods need to transfer memory ownership back from the device to
the CPU by the same means as dma_sync_*_to_cpu(). I'm not sure powerpc needs
to do any work in this transfer direction, but given that it already
invalidates the caches in dma_sync_*_to_cpu() we should make sure we also do
so on unmapping.

Signed-off-by: Christoph Hellwig <hch@lst.de>
[mpe: s/dir/direction in dma_nommu_unmap_page()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
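For illustration only, a non-coherent unmap path performing the same cache
maintenance as the sync-for-CPU path could look roughly like the sketch below.
This is not the actual patch: it assumes a direct dma-to-virtual mapping and
leans on the existing __dma_sync() helper (a no-op on cache-coherent configs).

  static void dma_nommu_unmap_page(struct device *dev, dma_addr_t dma_address,
                                   size_t size, enum dma_data_direction direction,
                                   unsigned long attrs)
  {
      /* Hand ownership back to the CPU exactly like the sync-for-cpu path:
       * on non-coherent platforms this invalidates/flushes the caches. */
      if (IS_ENABLED(CONFIG_NOT_COHERENT_CACHE))
          __dma_sync(phys_to_virt(dma_address), size, direction);
  }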
-
Christoph Hellwig authored
AMIGAONE selects NOT_COHERENT_CACHE, so we better allow it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
This patch fixes early DEBUG messages in prom.c:
  - Use %px instead of %p to see the addresses
  - Cast memblock_phys_mem_size() to (unsigned long long) to avoid a build
    failure when phys_addr_t is not 64 bits.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
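For illustration, the two fixes look roughly like the following, using the
DBG()-style early debug printing in prom.c; the message texts here are
illustrative, not the actual call sites.

  /* %p hashes pointers, and %lx truncates phys_addr_t on 32-bit.
   * %px prints the raw address, and an explicit cast keeps the format
   * string valid whether phys_addr_t is 32 or 64 bits wide. */
  DBG("boot params at: %px\n", initial_boot_params);        /* was %p */
  DBG("memory size = 0x%llx\n",
      (unsigned long long)memblock_phys_mem_size());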
-
Greg Kurz authored
All fields in the PE are big-endian. Use cpu_to_be32() like everywhere else something is written to the PE. Otherwise a wrong TID will be used by the NPU. If this TID happens to point to an existing thread sharing the same mm, it could be woken up by error. This is highly improbable though. The likely outcome of this is the NPU not finding the target thread and forcing the AFU into sending an interrupt, which userspace is supposed to handle anyway. Fixes: e948e06f ("ocxl: Expose the thread_id needed for wait on POWER9") Cc: stable@vger.kernel.org # v4.18 Signed-off-by: Greg Kurz <groug@kaod.org> Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
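As a minimal sketch of the kind of fix described (the structure layout and
field names below are illustrative assumptions, not the actual ocxl
definitions):

  struct ocxl_process_element {       /* illustrative layout */
      __be32 tid;                     /* all PE fields are big-endian */
      /* other fields omitted */
  };

  static void pe_set_tid(struct ocxl_process_element *pe, u32 tid)
  {
      /* cpu_to_be32() instead of a plain store, so the NPU reads the
       * correct thread ID regardless of host endianness */
      pe->tid = cpu_to_be32(tid);
  }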
-
Joel Stanley authored
When building for ppc32 with clang these flags are unsupported:

  -ffixed-r2 and -mmultiple

llvm's lib/Target/PowerPC/PPCRegisterInfo.cpp marks r2 as reserved when
building for SVR4ABI and !ppc64:

  // The SVR4 ABI reserves r2 and r13
  if (Subtarget.isSVR4ABI()) {
    // We only reserve r2 if we need to use the TOC pointer. If we have no
    // explicit uses of the TOC pointer (meaning we're a leaf function with
    // no constant-pool loads, etc.) and we have no potential uses inside an
    // inline asm block, then we can treat r2 has an ordinary callee-saved
    // register.
    const PPCFunctionInfo *FuncInfo = MF.getInfo<PPCFunctionInfo>();
    if (!TM.isPPC64() || FuncInfo->usesTOCBasePtr() || MF.hasInlineAsm())
      markSuperRegs(Reserved, PPC::R2);  // System-reserved register
    markSuperRegs(Reserved, PPC::R13); // Small Data Area pointer register
  }

This means we can safely omit -ffixed-r2 when building for 32-bit targets.

The -mmultiple/-mno-multiple flags are not supported by clang, so platforms
that might support them miss out on using multiple word instructions.

We wrap these flags in cc-option so that when clang gains support the kernel
will be able to use these flags.

Clang 8 can then build a ppc44x_defconfig which boots in QEMU:

  make CC=clang-8 ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu- ppc44x_defconfig
  ./scripts/config -e CONFIG_DEVTMPFS -d DEVTMPFS_MOUNT
  make CC=clang-8 ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-

  qemu-system-ppc -M bamboo \
    -kernel arch/powerpc/boot/zImage \
    -dtb arch/powerpc/boot/dts/bamboo.dtb \
    -initrd ~/ppc32-440-rootfs.cpio \
    -nographic -serial stdio -monitor pty -append "console=ttyS0"

Link: https://github.com/ClangBuiltLinux/linux/issues/261
Link: https://bugs.llvm.org/show_bug.cgi?id=39556
Link: https://bugs.llvm.org/show_bug.cgi?id=39555
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Joel Stanley authored
We cannot build these files with clang as it does not allow altivec
instructions in assembly when -msoft-float is passed.

Jinsong Ji <jji@us.ibm.com> wrote:

> We currently disable Altivec/VSX support when enabling soft-float. So
> any usage of vector builtins will break.
>
> Enable Altivec/VSX with soft-float may need quite some clean up work, so
> I guess this is currently a limitation.
>
> Removing -msoft-float will make it work (and we are lucky that no
> floating point instructions will be generated as well).

This is a workaround until the issue is resolved in clang.

Link: https://bugs.llvm.org/show_bug.cgi?id=31177
Link: https://github.com/ClangBuiltLinux/linux/issues/239
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Madhavan Srinivasan authored
Remove PM_L2_ST_MISS and PM_L2_ST from the HW cache event array: these are
bus events, and bus events need to be programmed in groups.

Fixes: f1fb60bf ("powerpc/perf: Export Power9 generic and cache events to sysfs")
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Madhavan Srinivasan authored
In previous generation processors, both bus events and direct events of the
performance monitoring unit can be individually programmed and monitored in
the PMCs. But in Power9, L2/L3 bus events are always available as a "bank" of
4 events. To obtain the counts for any of the L2/L3 bus events in a given
bank, the user has to program PMC4 with the corresponding L2/L3 bus event for
that bank.

This patch enforces two constraints for L2/L3 bus events:

1) Any group containing an L2/L3 event is also expected to program the
   corresponding PMC4 event from that group.
2) The PMC4 event should always be programmed first, due to a limitation in
   the group constraint logic.

For example, consider these L3 bus events:

  PM_L3_PF_ON_CHIP_MEM (0x460A0),
  PM_L3_PF_MISS_L3 (0x160A0),
  PM_L3_CO_MEM (0x260A0),
  PM_L3_PF_ON_CHIP_CACHE (0x360A0),

1) This is an INVALID group for L3 bus event monitoring, since it is missing
   the PMC4 event:
     perf stat -e "{r160A0,r260A0,r360A0}" < >

   And this is a VALID group for L3 bus events:
     perf stat -e "{r460A0,r160A0,r260A0,r360A0}" < >

2) This is an INVALID group for L3 bus event monitoring, since it is missing
   the PMC4 event:
     perf stat -e "{r260A0,r360A0}" < >

   And this is a VALID group for L3 bus events:
     perf stat -e "{r460A0,r260A0,r360A0}" < >

3) This is an INVALID group for L3 bus event monitoring, since it is missing
   the PMC4 event:
     perf stat -e "{r360A0}" < >

   And this is a VALID group for L3 bus events:
     perf stat -e "{r460A0,r360A0}" < >

This patch implements the group constraint logic suggested by Michael Ellerman.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
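A self-contained sketch of the rule being enforced is shown below. It is not
the kernel's isa207 constraint code: the bit positions of the PMC and unit
fields are assumptions inferred from the raw codes above (0x460A0 decodes to
PMC4, unit 6), and the 6-9 unit range for L2/L3 is likewise an assumption.

  #include <stdbool.h>
  #include <stdint.h>

  /* Assumed raw-event layout: PMC in bits 19:16, unit in bits 15:12 */
  static unsigned int event_pmc(uint64_t ev)  { return (ev >> 16) & 0xf; }
  static unsigned int event_unit(uint64_t ev) { return (ev >> 12) & 0xf; }

  static bool is_l2l3_bus_event(uint64_t ev)
  {
      unsigned int unit = event_unit(ev);

      return unit >= 6 && unit <= 9;    /* assumption: L2/L3 bus units */
  }

  /* A group containing any L2/L3 bus event is valid only if its first
   * event is the matching PMC4 bus event for that bank. */
  static bool l2l3_group_valid(const uint64_t *events, int n)
  {
      bool has_bus_event = false;
      int i;

      for (i = 0; i < n; i++)
          if (is_l2l3_bus_event(events[i]))
              has_bus_event = true;

      if (!has_bus_event)
          return true;

      return n > 0 && is_l2l3_bus_event(events[0]) && event_pmc(events[0]) == 4;
  }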
-
Madhavan Srinivasan authored
The raw event code has a couple of fields, "unit" and "cache", used to capture
the unit to monitor for a given pmcxsel and the cache reload qualifier to
program in MMCR1. isa207_get_constraint() refers to the "unit" field to update
the MMCRC (L2/L3) Event Bus Control fields with the "cache" bits of the raw
event code. These are power8 specific and not supported by the PowerISA v3.0
PMU, so wrap the checks to be power8 specific. Similarly, the "cache" bit
field is used to update MMCR1[16:17], and this check can also be made power8
specific.

Fixes: 7ffd948f ("powerpc/perf: factor out power8 pmu functions")
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Madhavan Srinivasan authored
Update the raw event code comment in power9-pmu.c with respect to "cache" bits, since power9 MMCRC does not support these. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Madhavan Srinivasan authored
On each sample, the Sample Instruction Event Register (SIER) content is saved
in pt_regs. SIER does not have an entry of its own in pt_regs; instead its
content is saved in the "dar" slot of pt_regs.

This patch adds another entry to the perf_regs structure to include "SIER",
which internally maps to the "dar" of pt_regs. It also checks whether SIER is
available on the platform and presents the value accordingly.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
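A minimal sketch of the idea, assuming a PERF_REG_POWERPC_SIER enum entry and
an availability check on the ISA 2.07 feature bit; both the enum name and the
exact test are assumptions, not a quote of the patch.

  u64 perf_reg_value(struct pt_regs *regs, int idx)
  {
      if (idx == PERF_REG_POWERPC_SIER) {
          /* SIER was stashed in the dar slot at sample time; report 0
           * on platforms without SIER. */
          if (!cpu_has_feature(CPU_FTR_ARCH_207S))
              return 0;
          return regs->dar;
      }

      /* existing path for the regular GPR/SPR entries (assumed) */
      return regs_get_register(regs, pt_regs_offset[idx]);
  }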
-
Madhavan Srinivasan authored
MMCRA[34:36] and MMCRA[38:44] expose the thresholding counter value. The
thresholding counter can be used to count latency cycles, such as from load
miss to reload. But the threshold counter value is not relevant when the
sampled instruction type is unknown or reserved. This patch forces the
thresholding counter value to zero in that case.

Fixes: 170a315f ("powerpc/perf: Support to export MMCRA[TEC*] field to userspace")
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
In commit 2865d08d ("powerpc/mm: Move the DSISR_PROTFAULT sanity check") we
moved the protection fault access check before the vma lookup. That means we
hit that WARN_ON when user space accesses a kernel address. Before that commit
this was handled by find_vma() not finding a vma for the kernel address and
treating that access as a bad area access.

Avoid the confusing WARN_ON and convert it to a ratelimited printk.

With the patch we now get:

for load:
  a.out[5997]: User access of kernel address (c00000000000dea0) - exploit attempt? (uid: 1000)
  a.out[5997]: segfault (11) at c00000000000dea0 nip 1317c0798 lr 7fff80d6441c code 1 in a.out[1317c0000+10000]
  a.out[5997]: code: 60000000 60420000 3c4c0002 38427790 4bffff20 3c4c0002 38427784 fbe1fff8
  a.out[5997]: code: f821ffc1 7c3f0b78 60000000 e9228030 <89290000> 993f002f 60000000 383f0040

for exec:
  a.out[6067]: User access of kernel address (c00000000000dea0) - exploit attempt? (uid: 1000)
  a.out[6067]: segfault (11) at c00000000000dea0 nip c00000000000dea0 lr 129d507b0 code 1
  a.out[6067]: Bad NIP, not dumping instructions.

Fixes: 2865d08d ("powerpc/mm: Move the DSISR_PROTFAULT sanity check")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Tested-by: Breno Leitao <leitao@debian.org>
[mpe: Don't split printk() string across lines]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
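A minimal sketch of the conversion, as a helper called from the spot in the
fault path where the old WARN_ON fired; the helper name and the surrounding
condition are placeholders, not the actual fault.c code.

  static void report_kernel_address_access(unsigned long address)
  {
      /* Replace the WARN_ON with a ratelimited diagnostic so a user space
       * access to a kernel address no longer looks like a kernel bug. */
      pr_crit_ratelimited("%s[%d]: User access of kernel address (%lx) - exploit attempt? (uid: %d)\n",
                          current->comm, current->pid, address,
                          from_kuid(&init_user_ns, current_uid()));
  }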
-
- 19 Dec, 2018 19 commits
-
-
Christophe Leroy authored
The 603 doesn't have a HASH table; TLB misses are handled by software. It is
therefore possible to generate a page fault when _PAGE_EXEC is not set, like
in nohash/32. There is one "reserved" PTE bit available; this patch uses it
for _PAGE_EXEC. In order to support it, set_pte_filter() and
set_access_flags_filter() are made common, and the handling is made dependent
on MMU_FTR_HPTE_TABLE.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
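A rough sketch of the shape of the common filter after the change; the body is
purely illustrative (the real set_pte_filter() also handles icache coherency
for the hash case), only the MMU_FTR_HPTE_TABLE gating is the point here.

  static pte_t set_pte_filter(pte_t pte)
  {
      /* On hash-less 603-style MMUs the software TLB miss handler enforces
       * _PAGE_EXEC directly, so no filtering is needed here. */
      if (!mmu_has_feature(MMU_FTR_HPTE_TABLE))
          return pte;

      /* Hash MMUs: the existing exec/icache filtering would go here. */
      return pte;
  }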
-
Christophe Leroy authored
Define slice_init_new_context_exec() at all times to avoid

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
With the following piece of code, the following compilation warning is
encountered:

  if (_IOC_DIR(ioc) != _IOC_NONE) {
      int verify = _IOC_DIR(ioc) & _IOC_READ ? VERIFY_WRITE : VERIFY_READ;

      if (!access_ok(verify, ioarg, _IOC_SIZE(ioc))) {

  drivers/platform/test/dev.c: In function 'my_ioctl':
  drivers/platform/test/dev.c:219:7: warning: unused variable 'verify' [-Wunused-variable]
    int verify = _IOC_DIR(ioc) & _IOC_READ ? VERIFY_WRITE : VERIFY_READ;

This patch fixes it by referencing 'type' in the macro although doing nothing
with it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
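A sketch of the kind of macro change described, assuming the old
three-argument access_ok(): evaluating the type argument as (void) keeps
callers like the one above warning-free without changing behaviour. The exact
powerpc definition may differ from this.

  #define access_ok(type, addr, size)          \
      (__chk_user_ptr(addr), (void)(type),     \
       __access_ok((unsigned long)(addr), (size), get_fs()))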
-
Christophe Leroy authored
commit f21f49ea ("[POWERPC] Remove the dregs of APUS support from arch/powerpc") removed CONFIG_APUS, but forgot to remove the logic which adapts tophys() and tovirt() for it. This patch removes the last stale pieces. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
mfsrin() takes segment num from bits 31-28 (IBM bits 0-3). Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> [mpe: Clarify bit numbering] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
This patch adds STACK_FRAME_REGS_MARKER in the stack at exception entry in
order to see interrupts in call traces as below:

[ 0.013964] Call Trace:
[ 0.014014] [c0745db0] [c007a9d4] tick_periodic.constprop.5+0xd8/0x104 (unreliable)
[ 0.014086] [c0745dc0] [c007aa20] tick_handle_periodic+0x20/0x9c
[ 0.014181] [c0745de0] [c0009cd0] timer_interrupt+0xa0/0x264
[ 0.014258] [c0745e10] [c000e484] ret_from_except+0x0/0x14
[ 0.014390] --- interrupt: 901 at console_unlock.part.7+0x3f4/0x528
[ 0.014390]     LR = console_unlock.part.7+0x3f0/0x528
[ 0.014455] [c0745ee0] [c0050334] console_unlock.part.7+0x114/0x528 (unreliable)
[ 0.014542] [c0745f30] [c00524e0] register_console+0x3d8/0x44c
[ 0.014625] [c0745f60] [c0675aac] cpm_uart_console_init+0x18/0x2c
[ 0.014709] [c0745f70] [c06614f4] console_init+0x114/0x1cc
[ 0.014795] [c0745fb0] [c0658b68] start_kernel+0x300/0x3d8
[ 0.014864] [c0745ff0] [c00022cc] start_here+0x44/0x98

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
This patch fixes the loop in p_block_mapped() and v_block_mapped() to scan the entire bat_addrs[] array. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
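A sketch of the loop-bounds fix being described; the bat_addrs[] element
layout (start/limit/phys) is an assumption used only for illustration.

  phys_addr_t v_block_mapped(unsigned long va)
  {
      int b;

      /* Scan every entry of bat_addrs[], not just the first few. */
      for (b = 0; b < ARRAY_SIZE(bat_addrs); ++b)
          if (va >= bat_addrs[b].start && va < bat_addrs[b].limit)
              return bat_addrs[b].phys + (va - bat_addrs[b].start);
      return 0;
  }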
-
Christophe Leroy authored
Depending on the CONFIG selected, many of the MMU features are not possible.
Let's only include the possible ones in MMU_FTRS_POSSIBLE. This allows gcc to
eliminate, at compile time, code related to features that cannot be present.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Instead of hardcoding reset vector restore, use patch_instruction() Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Use patch sites and associated helpers to manage TLB handlers patching instead of hardcoding. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Instead of hardcoding code modifications, use code patching functions. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Instead of hardcoding the TLB handlers patching, use the newly created modify_instruction_site() helper. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Use patch_sites and the new modify_instruction_site() function instead of hardcoding hash functions patching. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Instead of manually patching a blr at hash_page() entry in MMU_init_hw(), this patch adds a features section in head_32.S Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Use patch_site_addr() instead of hardcoding the address calculation in machine_init() Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
Add two helpers to avoid hardcoding instruction modifications.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
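A sketch of what such helpers could look like, built on the existing
patch_instruction() and patch_site_addr() primitives; the names and signatures
below are assumptions, not necessarily the ones added by the patch.

  /* Read-modify-write an instruction: clear the bits in 'clr', set those in 'set'. */
  static inline int modify_instruction(unsigned int *addr, unsigned int clr,
                                       unsigned int set)
  {
      return patch_instruction(addr, (*addr & ~clr) | set);
  }

  /* Same, but addressed through a patch site descriptor. */
  static inline int modify_instruction_site(s32 *site, unsigned int clr,
                                            unsigned int set)
  {
      return modify_instruction((unsigned int *)patch_site_addr(site), clr, set);
  }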
-
Christophe Leroy authored
Using patch_site_addr() helper, patch_instruction_site() and patch_branch_site() can be simplified and inlined. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
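For illustration, the inlined versions would reduce to thin wrappers along
these lines (a sketch; patch_instruction(), patch_branch() and
patch_site_addr() are existing helpers, the exact bodies may differ):

  static inline int patch_instruction_site(s32 *site, unsigned int instr)
  {
      return patch_instruction((unsigned int *)patch_site_addr(site), instr);
  }

  static inline int patch_branch_site(s32 *site, unsigned long target, int flags)
  {
      return patch_branch((unsigned int *)patch_site_addr(site), target, flags);
  }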
-
Christophe Leroy authored
In file included from ./include/linux/hugetlb.h:445:0,
                 from arch/powerpc/kernel/setup-common.c:37:
./arch/powerpc/include/asm/hugetlb.h: In function ‘huge_ptep_clear_flush’:
./arch/powerpc/include/asm/hugetlb.h:154:8: error: variable ‘pte’ set but not used [-Werror=unused-but-set-variable]

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Christophe Leroy authored
This patch implements CONFIG_DEBUG_VIRTUAL to warn about incorrect use of
virt_to_phys() and page_to_phys().

Below is the result of test_debug_virtual:

[ 1.438746] WARNING: CPU: 0 PID: 1 at ./arch/powerpc/include/asm/io.h:808 test_debug_virtual_init+0x3c/0xd4
[ 1.448156] CPU: 0 PID: 1 Comm: swapper Not tainted 4.20.0-rc5-00560-g6bfb52e23a00-dirty #532
[ 1.457259] NIP:  c066c550 LR: c0650ccc CTR: c066c514
[ 1.462257] REGS: c900bdb0 TRAP: 0700   Not tainted  (4.20.0-rc5-00560-g6bfb52e23a00-dirty)
[ 1.471184] MSR:  00029032 <EE,ME,IR,DR,RI>  CR: 48000422  XER: 20000000
[ 1.477811]
[ 1.477811] GPR00: c0650ccc c900be60 c60d0000 00000000 006000c0 c9000000 00009032 c7fa0020
[ 1.477811] GPR08: 00002400 00000001 09000000 00000000 c07b5d04 00000000 c00037d8 00000000
[ 1.477811] GPR16: 00000000 00000000 00000000 00000000 c0760000 c0740000 00000092 c0685bb0
[ 1.477811] GPR24: c065042c c068a734 c0685b8c 00000006 00000000 c0760000 c075c3c0 ffffffff
[ 1.512711] NIP [c066c550] test_debug_virtual_init+0x3c/0xd4
[ 1.518315] LR [c0650ccc] do_one_initcall+0x8c/0x1cc
[ 1.523163] Call Trace:
[ 1.525595] [c900be60] [c0567340] 0xc0567340 (unreliable)
[ 1.530954] [c900be90] [c0650ccc] do_one_initcall+0x8c/0x1cc
[ 1.536551] [c900bef0] [c0651000] kernel_init_freeable+0x1f4/0x2cc
[ 1.542658] [c900bf30] [c00037ec] kernel_init+0x14/0x110
[ 1.547913] [c900bf40] [c000e1d0] ret_from_kernel_thread+0x14/0x1c
[ 1.553971] Instruction dump:
[ 1.556909] 3ca50100 bfa10024 54a5000e 3fa0c076 7c0802a6 3d454000 813dc204 554893be
[ 1.564566] 7d294010 7d294910 90010034 39290001 <0f090000> 7c3e0b78 955e0008 3fe0c062
[ 1.572425] ---[ end trace 6f6984225b280ad6 ]---
[ 1.577467] PA: 0x09000000 for VA: 0xc9000000
[ 1.581799] PA: 0x061e8f50 for VA: 0xc61e8f50

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
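A minimal sketch of the kind of check this adds in io.h, following the pattern
other architectures use for CONFIG_DEBUG_VIRTUAL; the exact powerpc condition
may differ.

  static inline unsigned long virt_to_phys(volatile void *address)
  {
      /* Warn when the address is not part of the linear mapping. */
      WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && !virt_addr_valid(address));

      return __pa((unsigned long)address);
  }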
-
- 17 Dec, 2018 4 commits
-
-
Christophe Leroy authored
On several arches, virt_to_phys() is in io.h.

Build fails without it:

  CC      lib/test_debug_virtual.o
  lib/test_debug_virtual.c: In function 'test_debug_virtual_init':
  lib/test_debug_virtual.c:26:7: error: implicit declaration of function 'virt_to_phys' [-Werror=implicit-function-declaration]
    pa = virt_to_phys(va);
         ^

Fixes: e4dace36 ("lib: add test module for CONFIG_DEBUG_VIRTUAL")
CC: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Mathieu Malaterre authored
The code:

  ifdef CONFIG_6xx
  KBUILD_CFLAGS += -mcpu=powerpc
  endif

was added in 2006 in commit f48b8296 ("[PATCH] powerpc32: Set cpu explicitly
in kernel compiles"). This was acceptable at the time since the TARGET_CPU
logic was 64-bit only.

Since commit 0e00a8c9 ("powerpc: Allow CPU selection also on PPC32") this is
no longer acceptable: the -mcpu=powerpc is appended at the end of the command
line, after any TARGET_CPU-specific -mcpu flag:

  gcc -Wp,-MD,init/.do_mounts.o.d ... -mcpu=powerpc -mbig-endian -m32 ... -mcpu=e300c2 ... -mcpu=powerpc ... ../init/do_mounts.c

Fixes: 0e00a8c9 ("powerpc: Allow CPU selection also on PPC32")
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
ipic_set_priority() has been unused since 2006 when the last usage was removed in commit b9f0f1bb ("[POWERPC] Adapt ipic driver to new host_ops interface, add set_irq_type to set IRQ sense"). Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Merge our fixes branch again, this has a couple of build fixes and also a change to do_syscall_trace_enter() that will conflict with a patch we want to apply in next.
-
- 10 Dec, 2018 1 commit
-
-
Elvira Khabirova authored
Arch code should use tracehook_*() helpers, as documented in include/linux/tracehook.h, ptrace_report_syscall() is not expected to be used outside that file. The patch does not look very nice, but at least it is correct and opens the way for PTRACE_GET_SYSCALL_INFO API. Co-authored-by: Dmitry V. Levin <ldv@altlinux.org> Fixes: 5521eb4b ("powerpc/ptrace: Add support for PTRACE_SYSEMU") Signed-off-by: Elvira Khabirova <lineprinter@altlinux.org> Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> [mpe: Take this as a minimal fix for 4.20, we'll rework it later] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
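A simplified sketch of the direction of the change; the real
do_syscall_trace_enter() also handles PTRACE_SYSEMU, seccomp and tracepoints,
so only the tracehook portion is shown, and the surrounding logic is an
assumption rather than the patch itself.

  long do_syscall_trace_enter(struct pt_regs *regs)
  {
      /* Let the tracer see (and possibly abort) the syscall via the
       * documented helper instead of calling ptrace_report_syscall()
       * directly. */
      if (test_thread_flag(TIF_SYSCALL_TRACE) &&
          tracehook_report_syscall_entry(regs))
          return -1;              /* tracer asked to skip the syscall */

      return regs->gpr[0];        /* syscall number lives in r0 on powerpc */
  }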
-
- 09 Dec, 2018 3 commits
-
-
Oliver O'Halloran authored
The "altmap" is used to provide a pool of memory that is reserved for the vmemmap backing of hot-plugged memory. This is useful when adding large amount of ZONE_DEVICE memory to a system with a limited amount of normal memory. On ppc64 we use huge pages to map the vmemmap which requires the backing storage to be contigious and aligned to the hugepage size. The altmap implementation allows for the altmap provider to reserve a few PFNs at the start of the range for it's own uses and when this occurs the first chunk of the altmap is not usable for hugepage mappings. On hash there is no sane way to fall back to a normal sized page mapping so we fail the allocation. This results in memory hotplug failing with ENOMEM when the new range doesn't fall into an existing vmemmap block. This patch handles this case by falling back to using system memory rather than failing if we cannot allocate from the altmap. This fallback should only ever be used for the first vmemmap block so it should not cause excess memory consumption. Fixes: 7b73d978 ("mm: pass the vmem_altmap to vmemmap_populate") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Oliver O'Halloran authored
The interleave set cookie is used to determine if a label stored in the metadata space should be applied to the current region. This is important in the case of NVDIMMs since the firmware may change the interleaving configuration of a DIMM which would invalidate the existing labels. In our case the hypervisor hides those details from us so we don't really care, but libnvdimm still requires the interleave set cookie to be non-zero. For our purposes we just need the set cookie to be unique and fixed for a given PAPR SCM region and using the unit-guid (really a UUID) is fine for this purpose. Fixes: b5beae5e ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> [mpe: Use kernel types (u64)] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Oliver O'Halloran authored
When a new nvdimm device is registered with libnvdimm via nvdimm_create() it is added as a device on the nvdimm bus. The probe function for the DIMM driver is potentially quite slow so actually registering and probing the device is done in an async domain rather than immediately after device creation. This can result in a race where the region device (created 2nd) is probed first and fails to activate at boot. To fix this we use the same approach as the ACPI/NFIT driver which is to check that all the DIMM devices registered successfully. LibNVDIMM provides the nvdimm_bus_count_dimms() function which synchronises with the async domain and verifies that the dimm was successfully registered with the bus. If either of these does not occur then we bail. Fixes: b5beae5e ("powerpc/pseries: Add driver for PAPR SCM regions") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-