- 10 Dec, 2020 23 commits
-
-
Cédric Le Goater authored
Previous patches removed the need for the first argument, which was a hack for Firmware EOI. Remove it and flatten the routine, which has become simpler. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-12-clg@kaod.org
-
Cédric Le Goater authored
This flag was used to support the P9 DD1, and we stopped supporting this CPU when DD2 came out. See skiboot commit: https://github.com/open-power/skiboot/commit/0b0d15e3c170 Also, remove the EOI handler, which is now unused. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-11-clg@kaod.org
-
Cédric Le Goater authored
This flag was used to support the PHB4 LSIs on P9 DD1, and we stopped supporting this CPU when DD2 came out. See skiboot commit: https://github.com/open-power/skiboot/commit/0b0d15e3c170 Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-10-clg@kaod.org
-
Cédric Le Goater authored
This flag was used to support the PHB4 LSIs on P9 DD1, and we stopped supporting this CPU when DD2 came out. See skiboot commit: https://github.com/open-power/skiboot/commit/0b0d15e3c170 Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-9-clg@kaod.org
-
Cédric Le Goater authored
PowerNV systems can handle up to 4K guests and 1M interrupt numbers per chip. Increase the range of allowed interrupts to support a larger number of guests. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-8-clg@kaod.org
-
Cédric Le Goater authored
Full state of the Linux interrupt descriptors can be dumped under debugfs when compiled with CONFIG_GENERIC_IRQ_DEBUGFS. Add support for the XIVE interrupt controller. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-7-clg@kaod.org
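For context, CONFIG_GENERIC_IRQ_DEBUGFS lets an irq_domain expose extra per-interrupt state through the debug_show() callback of irq_domain_ops. A minimal sketch of how a driver can hook into it; the function and the state dumped below are illustrative, not the actual XIVE code:

    #include <linux/irq.h>
    #include <linux/irqdomain.h>
    #include <linux/seq_file.h>

    #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
    /* Called for each interrupt when its debugfs file is read. */
    static void xive_debug_show_irq(struct seq_file *m, struct irq_domain *d,
                                    struct irq_data *irqd, int ind)
    {
            /* Dump driver-specific state, e.g. the HW interrupt number. */
            seq_printf(m, "%*shwirq: 0x%lx\n", ind, "",
                       (unsigned long)irqd_to_hwirq(irqd));
    }
    #endif

    static const struct irq_domain_ops xive_domain_ops = {
            /* ...map/unmap/xlate callbacks... */
    #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
            .debug_show = xive_debug_show_irq,
    #endif
    };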
-
Cédric Le Goater authored
We hope one day to handle multiple irq_domain in the XIVE driver. Start simple by setting the name using the DT node. Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-6-clg@kaod.org
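As a reminder of the generic API involved, an irq_domain created from a device-tree node gets its name from that node, so no extra naming code is needed. A hedged sketch, with xive_irq_domain_ops standing in for the driver's real ops structure:

    #include <linux/irqdomain.h>
    #include <linux/of.h>

    static const struct irq_domain_ops xive_irq_domain_ops = {
            /* ...map/unmap/xlate callbacks... */
    };

    static struct irq_domain *xive_init_domain(struct device_node *np)
    {
            /* The domain name is derived from the device-tree node. */
            return irq_domain_add_tree(np, &xive_irq_domain_ops, NULL);
    }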
-
Cédric Le Goater authored
The XIVE driver deals with CPU IPIs in a peculiar way. Each CPU has its own XIVE IPI interrupt allocated at the HW level for PowerNV, or at the hypervisor level for pSeries. In practice, these interrupts are not always used. pSeries/PowerVM prefers local doorbells for local threads since they are faster. On PowerNV, global doorbells are also preferred for the same reason. The mapping in Linux is reduced to a single interrupt using HW interrupt number 0 and a custom irq_chip to handle EOI. This can cause performance issues in some benchmarks (ipistorm) on multichip systems. Clarify the use of the 0 value; it will help in improving multichip support. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-4-clg@kaod.org
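To illustrate the "clarify the use of the 0 value" part, here is a hedged sketch of naming and testing such a reserved HW number; the constant and helper names are illustrative, not necessarily what the patch introduces:

    #include <linux/types.h>

    /* HW interrupt number reserved for the per-CPU IPI. */
    #define XIVE_IPI_HW_IRQ    0

    static inline bool xive_hwirq_is_ipi(unsigned long hw_irq)
    {
            /* The IPI is the only source mapped on HW number 0. */
            return hw_irq == XIVE_IPI_HW_IRQ;
    }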
-
Cédric Le Goater authored
This is a simple cleanup to identify easily all flags of the XIVE interrupt structure. The interrupts flagged with XIVE_IRQ_FLAG_NO_EOI are the escalations used to wake up vCPUs in KVM. They are handled very differently from the rest. Signed-off-by: Cédric Le Goater <clg@kaod.org> Reviewed-by: Greg Kurz <groug@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-3-clg@kaod.org
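For reference, this flag is what the EOI path checks so that KVM escalation interrupts are left alone. A minimal sketch of that test, assuming the existing xive_irq_data structure and its flags field; the surrounding function is reduced to the relevant check:

    #include <asm/xive.h>

    static void xive_source_eoi(struct xive_irq_data *xd)
    {
            /* Escalation interrupts used by KVM must not be EOIed here. */
            if (xd->flags & XIVE_IRQ_FLAG_NO_EOI)
                    return;
            /* ...perform the normal ESB load/store EOI sequence... */
    }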
-
Cédric Le Goater authored
This is useful to track allocation of the HW resources on a per-guest basis. Making sure IPIs are local to the chip of the vCPUs reduces rerouting between interrupt controllers and gives better performance in case of pinning. Checking the distribution of VP structures on the chips also helps in reducing PowerBUS traffic. [ clg: resurrected show_sources and reworked output ] Signed-off-by: Greg Kurz <groug@kaod.org> Signed-off-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201210171450.1933725-2-clg@kaod.org
-
Gautham R. Shenoy authored
On POWER platforms where only some groups of threads within a core share the L2-cache (indicated by the ibm,thread-groups device-tree property), we currently print the incorrect shared_cpu_map/list for L2-cache in the sysfs. This patch reports the correct shared_cpu_map/list on such platforms. Example: On a platform with "ibm,thread-groups" set to 00000001 00000002 00000004 00000000 00000002 00000004 00000006 00000001 00000003 00000005 00000007 00000002 00000002 00000004 00000000 00000002 00000004 00000006 00000001 00000003 00000005 00000007 This indicates that threads {0,2,4,6} in the core share the L2-cache and threads {1,3,5,7} in the core share the L2 cache. However, without the patch, the shared_cpu_map/list for L2 for CPUs 0, 1 is reported in the sysfs as follows: /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list:0-7 /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_map:000000,000000ff /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_list:0-7 /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map:000000,000000ff With the patch, the shared_cpu_map/list for L2 cache for CPUs 0, 1 is correctly reported as follows: /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list:0,2,4,6 /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_map:000000,00000055 /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_list:1,3,5,7 /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map:000000,000000aa This patch also defines cpu_l2_cache_mask() for !CONFIG_SMP case. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1607596739-32439-6-git-send-email-ego@linux.vnet.ibm.com
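As a side note on reading those masks, each CPU is one bit, so 0x55 covers CPUs {0,2,4,6} and 0xaa covers CPUs {1,3,5,7}. A tiny, purely illustrative sketch of building such a mask from a thread-group list:

    #include <stdio.h>

    int main(void)
    {
            int group[] = { 0, 2, 4, 6 };   /* threads sharing L2 with CPU 0 */
            unsigned long mask = 0;

            for (int i = 0; i < 4; i++)
                    mask |= 1UL << group[i];

            printf("shared_cpu_map: %lx\n", mask);  /* prints 55 */
            return 0;
    }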
-
Gautham R. Shenoy authored
On POWER systems, groups of threads within a core sharing the L2-cache can be indicated by the "ibm,thread-groups" property array with the identifier "2". This patch adds support for detecting this and, when present, populating the cpu_l2_cache_mask of every CPU with the core siblings that share the L2 cache with that CPU, as specified by the "ibm,thread-groups" property array. On a platform with the following "ibm,thread-groups" configuration 00000001 00000002 00000004 00000000 00000002 00000004 00000006 00000001 00000003 00000005 00000007 00000002 00000002 00000004 00000000 00000002 00000004 00000006 00000001 00000003 00000005 00000007 Without this patch, the sched-domain hierarchy for CPUs 0,1 would be CPU0 attaching sched-domain(s): domain-0: span=0,2,4,6 level=SMT domain-1: span=0-7 level=CACHE domain-2: span=0-15,24-39,48-55 level=MC domain-3: span=0-55 level=DIE CPU1 attaching sched-domain(s): domain-0: span=1,3,5,7 level=SMT domain-1: span=0-7 level=CACHE domain-2: span=0-15,24-39,48-55 level=MC domain-3: span=0-55 level=DIE The CACHE domain at 0-7 is incorrect since the ibm,thread-groups sub-array [00000002 00000002 00000004 00000000 00000002 00000004 00000006 00000001 00000003 00000005 00000007] indicates that L2 (Property "2") is shared only between the threads of a single group. There are "2" groups of threads where each group contains "4" threads. The groups are {0,2,4,6} and {1,3,5,7}. With this patch, the sched-domain hierarchy for CPUs 0,1 would be CPU0 attaching sched-domain(s): domain-0: span=0,2,4,6 level=SMT domain-1: span=0-15,24-39,48-55 level=MC domain-2: span=0-55 level=DIE CPU1 attaching sched-domain(s): domain-0: span=1,3,5,7 level=SMT domain-1: span=0-15,24-39,48-55 level=MC domain-2: span=0-55 level=DIE The CACHE domain with span=0,2,4,6 for CPU 0 (span=1,3,5,7 for CPU 1 resp.) gets degenerated into the SMT domain. Furthermore, the last-level-cache domain gets correctly set to the SMT sched-domain. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1607596739-32439-5-git-send-email-ego@linux.vnet.ibm.com
-
Gautham R. Shenoy authored
init_thread_group_l1_cache_map() initializes the per-cpu cpumask thread_group_l1_cache_map with the core-siblings which share L1 cache with the CPU. Make this function generic to the cache-property (L1 or L2) and update a suitable mask. This is a preparatory patch for the next patch where we will introduce discovery of thread-groups that share L2-cache. No functional change. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1607596739-32439-4-git-send-email-ego@linux.vnet.ibm.com
-
Gautham R. Shenoy authored
On platforms which have the "ibm,thread-groups" property, the per-cpu variable cpu_l1_cache_map keeps track of which group of threads within the same core share the L1 cache (Instruction and Data flow). This patch renames the variable to "thread_group_l1_cache_map" to make it consistent with a subsequent patch which will introduce thread_group_l2_cache_map. This patch introduces no functional change. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1607596739-32439-3-git-send-email-ego@linux.vnet.ibm.com
-
Gautham R. Shenoy authored
The "ibm,thread-groups" device-tree property is an array that is used to indicate if groups of threads within a core share certain properties. It provides details of which property is being shared by which groups of threads. This array can encode information about multiple properties being shared by different thread-groups within the core. Example: Suppose, "ibm,thread-groups" = [1,2,4,8,10,12,14,9,11,13,15,2,2,4,8,10,12,14,9,11,13,15] This can be decomposed up into two consecutive arrays: a) [1,2,4,8,10,12,14,9,11,13,15] b) [2,2,4,8,10,12,14,9,11,13,15] where in, a) provides information of Property "1" being shared by "2" groups, each with "4" threads each. The "ibm,ppc-interrupt-server#s" of the first group is {8,10,12,14} and the "ibm,ppc-interrupt-server#s" of the second group is {9,11,13,15}. Property "1" is indicative of the thread in the group sharing L1 cache, translation cache and Instruction Data flow. b) provides information of Property "2" being shared by "2" groups, each group with "4" threads. The "ibm,ppc-interrupt-server#s" of the first group is {8,10,12,14} and the "ibm,ppc-interrupt-server#s" of the second group is {9,11,13,15}. Property "2" indicates that the threads in each group share the L2-cache. The existing code assumes that the "ibm,thread-groups" encodes information about only one property. Hence even on platforms which encode information about multiple properties being shared by the corresponding groups of threads, the current code will only pick the first one. (In the above example, it will only consider [1,2,4,8,10,12,14,9,11,13,15] but not [2,2,4,8,10,12,14,9,11,13,15]). This patch extends the parsing support on platforms which encode information about multiple properties being shared by the corresponding groups of threads. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1607596739-32439-2-git-send-email-ego@linux.vnet.ibm.com
-
Ravi Bangoria authored
POWER10 DD1 has an issue where it generates watchpoint exceptions when it shouldn't. The conditions where this occurs are: - octword op - ending address of DAWR range is less than starting address of op - those addresses need to be in the same or in two consecutive 512B blocks - 'op address + 64B' generates an address that has a carry into bit 52 (crosses 2K boundary) Handle such spurious exceptions by considering them extraneous and emulating/single-stepping the instruction without generating an event. [ravi: Fixed build warning reported by lkp@intel.com] Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201106045650.278987-1-ravi.bangoria@linux.ibm.com
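A hedged sketch of testing those conditions together; bit 52 in IBM (MSB=0) numbering corresponds to the 2K (0x800) boundary of the effective address, and the names below are illustrative rather than the actual hw_breakpoint code:

    #include <stdbool.h>

    /* ea: effective address of the octword op, dawr_end: last byte of the DAWR range */
    static bool is_spurious_octword_hit(unsigned long ea, unsigned long dawr_end)
    {
            /* The DAWR range must end before the op starts. */
            if (dawr_end >= ea)
                    return false;
            /* Both addresses in the same or two consecutive 512B blocks. */
            if ((ea >> 9) - (dawr_end >> 9) > 1)
                    return false;
            /* 'ea + 64' must carry into IBM bit 52, i.e. cross a 2K boundary. */
            if (((ea + 64) & ~0x7ffUL) == (ea & ~0x7ffUL))
                    return false;
            return true;
    }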
-
Balamuruhan S authored
Add testcases for VSX vector paired load/store instructions. Sample o/p: emulate_step_test: lxvp : PASS emulate_step_test: stxvp : PASS emulate_step_test: lxvpx : PASS emulate_step_test: stxvpx : PASS emulate_step_test: plxvp : PASS emulate_step_test: pstxvp : PASS Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201011050908.72173-6-ravi.bangoria@linux.ibm.com
-
Balamuruhan S authored
Add instruction encodings, DQ, D0, D1 immediate, XTP, XSP operands as macros for new VSX vector paired instructions, * Load VSX Vector Paired (lxvp) * Load VSX Vector Paired Indexed (lxvpx) * Prefixed Load VSX Vector Paired (plxvp) * Store VSX Vector Paired (stxvp) * Store VSX Vector Paired Indexed (stxvpx) * Prefixed Store VSX Vector Paired (pstxvp) Suggested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201011050908.72173-5-ravi.bangoria@linux.ibm.com
-
Balamuruhan S authored
VSX Vector Paired instructions load/store an octword (32 bytes) between storage and two sequential VSRs. Add emulation support for these new instructions: * Load VSX Vector Paired (lxvp) * Load VSX Vector Paired Indexed (lxvpx) * Prefixed Load VSX Vector Paired (plxvp) * Store VSX Vector Paired (stxvp) * Store VSX Vector Paired Indexed (stxvpx) * Prefixed Store VSX Vector Paired (pstxvp) [kernel test robot reported a build failure] Reported-by: kernel test robot <lkp@intel.com> Suggested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201011050908.72173-4-ravi.bangoria@linux.ibm.com
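Conceptually, emulating one of these loads means reading a 32-byte octword and placing it in the register pair {XTp, XTp+1}. A minimal, hedged sketch of that split, ignoring endianness handling; the helper name and register representation are illustrative, not the actual sstep.c code:

    #include <string.h>

    /* One 128-bit VSR viewed as raw bytes. */
    struct vsr {
            unsigned char bytes[16];
    };

    /* Copy a 32-byte octword from 'mem' into two sequential VSRs. */
    static void emulate_lxvp(struct vsr *vsrs, int xtp, const void *mem)
    {
            memcpy(vsrs[xtp].bytes, mem, 16);
            memcpy(vsrs[xtp + 1].bytes, (const unsigned char *)mem + 16, 16);
    }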
-
Ravi Bangoria authored
Recently added Power10 prefixed VSX instructions are included unconditionally in the kernel. If they are executed on a machine without VSX support, it might create issues. Fix that. Also fix one mnemonic spelling mistake in a comment. Fixes: 50b80a12 ("powerpc sstep: Add support for prefixed load/stores") Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201011050908.72173-3-ravi.bangoria@linux.ibm.com
-
Balamuruhan S authored
Unconditional emulation of prefixed instructions would allow them to be emulated on Power10 predecessors, which might cause issues. Restrict that. Fixes: 3920742b ("powerpc sstep: Add support for prefixed fixed-point arithmetic") Fixes: 50b80a12 ("powerpc sstep: Add support for prefixed load/stores") Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201011050908.72173-2-ravi.bangoria@linux.ibm.com
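The usual way to make such emulation conditional is a CPU feature check keyed on the ISA v3.1 feature bit. A hedged sketch of the idea; the helper name is invented for illustration, while cpu_has_feature()/CPU_FTR_ARCH_31 are existing kernel symbols:

    #include <linux/types.h>
    #include <asm/cpu_has_feature.h>
    #include <asm/cputable.h>

    static bool can_emulate_prefixed(void)
    {
            /* Prefixed instructions only exist from ISA v3.1 (POWER10) on. */
            return cpu_has_feature(CPU_FTR_ARCH_31);
    }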
-
Nicholas Piggin authored
Idle code no longer uses the .cpu_restore CPU operation to restore SPRs, so this workaround is no longer required. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190711022404.18132-2-npiggin@gmail.com
-
Athira Rajeev authored
The perf event attribute supports the exclude_kernel flag to avoid sampling/profiling in supervisor state (kernel). Based on this event attr flag, the Monitor Mode Control Register bit is set to freeze on supervisor state. But sometimes (due to a hardware limitation), the Sampled Instruction Address Register (SIAR) locks on to a kernel address even when freeze on supervisor is set. This patch adds a check to drop those samples. Cc: stable@vger.kernel.org Signed-off-by: Athira Rajeev <atrajeev@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/1606289215-1433-1-git-send-email-atrajeev@linux.vnet.ibm.com
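A hedged sketch of such a check in the sample path; the exact hook point in the PMU code differs, but the idea is to compare the SIAR against kernel space when exclude_kernel is set:

    #include <linux/perf_event.h>
    #include <asm/page.h>   /* is_kernel_addr() */
    #include <asm/reg.h>    /* SPRN_SIAR, mfspr() */

    static bool siar_sample_allowed(struct perf_event *event)
    {
            /* Drop the sample if SIAR points into the kernel although the
             * event asked to exclude supervisor state. */
            if (event->attr.exclude_kernel && is_kernel_addr(mfspr(SPRN_SIAR)))
                    return false;
            return true;
    }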
-
- 09 Dec, 2020 17 commits
-
-
Nicholas Piggin authored
This is a way to catch some cases of decrementer overflow, when the decrementer has underflowed an odd number of times while MSR[EE] was disabled. With a typical small decrementer, a timer that fires when MSR[EE] is disabled will be "lost" if MSR[EE] remains disabled for between 4.3 and 8.6 seconds after the timer expires. In any case, the decrementer interrupt would be taken at 8.6 seconds and the timer would be found at that point. So this check is for catching extreme latency events, and it prevents those latencies from being a further few seconds long. It's not obvious this is a good tradeoff. This is already a watchdog magnitude event and that situation is not improved significantly by this check. For large decrementers, it's useless. Therefore remove this check, which avoids an mftb when enabling hard disabled interrupts (e.g., when enabling after coming from hardware interrupt handlers). Perhaps more importantly, it also removes the clunky MSR[EE] vs PACA_IRQ_HARD_DIS incoherency in soft-interrupt replay, which simplifies the code. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201107014336.2337337-1-npiggin@gmail.com
-
Nicholas Piggin authored
No supported processor implements this mode. Setting the bit in MSR values can be a bit confusing (and would prevent the bit from ever being reused). Remove it. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201106045340.1935841-1-npiggin@gmail.com
-
Nicholas Piggin authored
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201111110723.3148665-3-npiggin@gmail.com
-
Christophe Leroy authored
PTE_FLAGS_OFFSET is defined in asm/page_32.h and used only in hash_low.S, and whether PTE_FLAGS_OFFSET is zero depends on CONFIG_PTE_64BIT. Instead of tests like #if (PTE_FLAGS_OFFSET != 0), use CONFIG_PTE_64BIT-related code. Also move the definition of PTE_FLAGS_OFFSET into hash_low.S directly; that improves readability. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/f5bc21db7a33dab55924734e6060c2e9daed562e.1606247495.git.christophe.leroy@csgroup.eu
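For background, PTE_FLAGS_OFFSET is only non-zero when PTEs are 64-bit, which is why an #ifdef CONFIG_PTE_64BIT conveys the same information as the (PTE_FLAGS_OFFSET != 0) test. Its definition looks roughly like this (quoted from memory, so treat it as a sketch):

    #ifdef CONFIG_PTE_64BIT
    #define PTE_FLAGS_OFFSET    4   /* offset of the PTE flags word, in bytes */
    #else
    #define PTE_FLAGS_OFFSET    0
    #endif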
-
Christophe Leroy authored
VSID is only for create_hpte(). When _PAGE_HASHPTE is already set, add_hash_page() bails out without calling create_hpte() and doesn't need the value of VSID. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/3907199974c89b85a3441cf3f528751173b7649c.1606247495.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
primary_pteg_full and htab_hash_searches are not used. Remove them. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6470ab99e58c84a5445af43ce4d1d772b0dc3e93.1606247495.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
All hugetlb range freeing functions have a verification like the following, which only differs by the mask used, depending on the page table level. start &= MASK; if (start < floor) return; if (ceiling) { ceiling &= MASK; if (! ceiling) return; } if (end - 1 > ceiling - 1) return; Refactor that into a helper function which takes the mask as an argument, returning true when [start;end[ is not fully contained inside [floor;ceiling[ Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/16a571bb32eb6e8cd44bda484c8d81cd8a25e6d7.1604668827.git.christophe.leroy@csgroup.eu
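A hedged sketch of what such a mask-parameterised helper can look like, mirroring the checks quoted above; the name is illustrative:

    #include <stdbool.h>

    /* Return true when [start; end[ is not fully contained inside
     * [floor; ceiling[ at the granularity given by 'mask'. */
    static bool range_is_outside(unsigned long start, unsigned long end,
                                 unsigned long floor, unsigned long ceiling,
                                 unsigned long mask)
    {
            start &= mask;
            if (start < floor)
                    return true;
            if (ceiling) {
                    ceiling &= mask;
                    if (!ceiling)
                            return true;
            }
            return end - 1 > ceiling - 1;
    }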
-
Christophe Leroy authored
Exception fixup doesn't require the heavy full regs saving; do it from do_page_fault() directly. For that, split bad_page_fault() in two parts. As bad_page_fault() can also be called from places other than handle_page_fault(), it will still perform exception fixup and fall back on __bad_page_fault(). handle_page_fault() directly calls __bad_page_fault(), as the exception fixup will now be done by do_page_fault(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bd07d6fef9237614cd6d318d8f19faeeadaa816b.1607491748.git.christophe.leroy@csgroup.eu
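A hedged sketch of the resulting split, with __bad_page_fault() doing only the reporting and bad_page_fault() keeping the exception-table fixup for its other callers; bodies and signatures are reduced to the control flow, not copied from the patch:

    #include <linux/extable.h>
    #include <asm/ptrace.h>

    /* Reporting only: callers are expected to have tried exception fixup. */
    static void __bad_page_fault(struct pt_regs *regs, int sig)
    {
            /* ...print the oops message and die()... */
    }

    /* Kept for callers other than handle_page_fault(). */
    void bad_page_fault(struct pt_regs *regs, int sig)
    {
            const struct exception_table_entry *entry;

            /* Are we prepared to handle this fault? */
            entry = search_exception_tables(regs->nip);
            if (entry) {
                    regs->nip = extable_fixup(entry);
                    return;
            }
            __bad_page_fault(regs, sig);
    }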
-
Christophe Leroy authored
search_exception_tables() is a heavy operation, we have to avoid it. When KUAP is selected, we'll know the fault has been blocked by KUAP. When it is blocked by KUAP, check whether we are in an expected userspace access place. If so, emit a warning to spot that something is going wrong. Otherwise, just remain silent; it will likely Oops soon. When KUAP is not selected, it behaves just as if the address was already in the TLBs and no fault was generated. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9870f01e293a5a76c4f4e4ddd4a6b0f63038c591.1607491748.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to prepare the removal of calls to search_exception_tables() on the fast path, move the WARN() out of bad_kuap_fault(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/9501311014bd6507e04b27a0c3035186ccf65cd5.1607491748.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
To make it more readable, separate page_fault_is_write() and page_fault_is_bad() to avoid several levels of #ifdefs. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/6afaac2495248d68f94c438c5ec36b6010931de5.1607491748.git.christophe.leroy@csgroup.eu
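A hedged sketch of what the two helpers can look like on a Book3S-64 style configuration; the DSISR bits used for the "bad" test vary per platform, so the mask below is illustrative:

    #include <linux/types.h>
    #include <asm/reg.h>    /* DSISR_* bit definitions */

    static inline bool page_fault_is_write(unsigned long err)
    {
            return err & DSISR_ISSTORE;
    }

    static inline bool page_fault_is_bad(unsigned long err)
    {
            /* Anything that is not a plain access/protection fault. */
            return err & DSISR_BAD_FAULT_64S;
    }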
-
Christophe Leroy authored
The verification and message introduced by commit 374f3f59 ("powerpc/mm/hash: Handle user access of kernel address gracefully") applies to all platforms, it should not be limited to BOOK3S. Make the BOOK3S version of sanity_check_fault() the one for all, and bail out earlier if not BOOK3S. Fixes: 374f3f59 ("powerpc/mm/hash: Handle user access of kernel address gracefully") Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/fe199d5af3578d3bf80035d203a94d742a7a28af.1607491748.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Add PPC_RAW_MFSPR() to replace open coding done in 8xx-pmu.c Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e281e3a611eead8817c49cf06a60072a021af823.1606231483.git.christophe.leroy@csgroup.eu
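For context, such a macro just assembles the mfspr encoding (primary opcode 31, extended opcode 339) with the destination register and the split SPR field filled in. A hedged sketch with the field helpers written out longhand instead of the kernel's ___PPC_RT()/__PPC_SPR() macros; the macro name is illustrative:

    /* mfspr RT,SPR */
    #define RAW_MFSPR(rt, spr)                                          \
            (0x7c0002a6 |                                               \
             ((rt) & 0x1f) << 21 |                                      \
             ((spr) & 0x1f) << 16 |      /* low 5 bits of the SPR */    \
             (((spr) >> 5) & 0x1f) << 11)   /* high 5 bits of the SPR */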
-
Christophe Leroy authored
Use SPRN_SPRG_SCRATCH2 in DTLB miss exception instead of DAR in order to be similar to ITLB miss exception. This also simplifies mpc8xx_pmu_del() Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/e3cc8f023ef40e1e8ae144e4dd1330a5ff022528.1606231483.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In order to re-enable MMU earlier, ensure ITLB miss exception cannot clobber SPRN_SPRG_SCRATCH0 and SPRN_SPRG_SCRATCH1. Do so by using SPRN_SPRG_SCRATCH2 and SPRN_M_TW instead, like the DTLB miss exception. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/abc78e8e9577d473691ebb9996c6413b37bfd9ca.1606231483.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
We now have r11 available as a scratch register so INVALIDATE_ADJACENT_PAGES_CPU15() can be simplified. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/bdafd651b4ac3a851fd09249f5f3699c50da29f2.1606231483.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
There is no big point in not pinning kernel text anymore, as we can now keep pinned TLB entries even with things like DEBUG_PAGEALLOC. Remove CONFIG_PIN_TLB_TEXT, making kernel text pinning unconditional. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> [mpe: Drop ifdef around mmu_pin_tlb() to fix build errors] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/203b89de491e1379f1677a2685211b7c32adfff0.1606231483.git.christophe.leroy@csgroup.eu
-