- 20 Oct, 2023 7 commits
-
-
Nicholas Piggin authored
When the PMU is disabled, MMCRA is not updated to disable BHRB and instruction sampling. This can lead to those features remaining enabled, which can slow down a real or emulated CPU. Fixes: 1cade527 ("powerpc/perf: BHRB control to disable BHRB logic when not used") Cc: stable@vger.kernel.org # v5.9+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231018153423.298373-1-npiggin@gmail.com
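A minimal sketch of the shape of the fix, assuming the MMCRA_BHRB_DISABLE bit added by the Fixes: commit; the real power_pmu_disable() path has more feature checks:

  static void disable_bhrb_and_sampling(unsigned long mmcra)
  {
          /* Make sure branch history recording is really off while the
           * PMU is disabled, instead of leaving MMCRA untouched. */
          unsigned long val = mmcra | MMCRA_BHRB_DISABLE;

          if (val != mmcra) {
                  mtspr(SPRN_MMCRA, val);
                  mb();
                  isync();
          }
  }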
-
Naveen N Rao authored
When creating a kprobe on function entry through tracefs, allow specifying the arguments to record using the $argN syntax. Signed-off-by: Naveen N Rao <naveen@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230614085926.2176641-1-naveen@kernel.org
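As a usage illustration (probe and target function names are hypothetical; the tracefs mount point may differ), a userspace snippet registering a probe that records the first function argument:

  #include <stdio.h>

  int main(void)
  {
          /* Record the first argument of kernel_clone() at entry using
           * the $argN syntax this change enables on powerpc. */
          FILE *f = fopen("/sys/kernel/tracing/kprobe_events", "w");

          if (!f)
                  return 1;
          fprintf(f, "p:myprobe kernel_clone $arg1\n");
          return fclose(f) ? 1 : 0;
  }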
-
Naveen N Rao authored
Toolchains don't always default to the ELFv2 ABI. This is true with at least the kernel.org toolchains. As such, pass -mabi=elfv2 explicitly to ensure that we are testing against the correct compiler output. Signed-off-by: Naveen N Rao <naveen@kernel.org> [mpe: Tweak comment wording] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230530093821.298590-1-naveen@kernel.org
-
Nick Child authored
Rather than replacing the versionless vmlinux and System.map files, copy to files with the version info appended. Additionally, since executing the script is a last resort option, inform the user about the missing `installkernel` command and the location of the installation. This work is adapted from `arch/s390/boot/install.sh`, and also matches the behaviour of arm, arm64 and riscv. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230314164442.124929-1-nnac123@linux.ibm.com
-
Wang Yufen authored
If the vcpu_associativity allocation succeeds but the pcpu_associativity allocation fails, the vcpu_associativity memory is leaked. Fixes: d62c8dee ("powerpc/pseries: Provide vcpu dispatch statistics") Signed-off-by: Wang Yufen <wangyufen@huawei.com> Reviewed-by: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/1671003983-10794-1-git-send-email-wangyufen@huawei.com
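A minimal sketch of the fix pattern, with hypothetical allocation sizes (the real init function differs in detail):

  static u32 *vcpu_associativity;
  static u32 *pcpu_associativity;

  static int init_cpu_associativity(void)
  {
          vcpu_associativity = kcalloc(num_possible_cpus(), sizeof(u32), GFP_KERNEL);
          pcpu_associativity = kcalloc(num_possible_cpus(), sizeof(u32), GFP_KERNEL);

          if (!vcpu_associativity || !pcpu_associativity) {
                  /* kfree(NULL) is a no-op, so both paths are safe. */
                  kfree(vcpu_associativity);
                  kfree(pcpu_associativity);
                  return -ENOMEM;
          }
          return 0;
  }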
-
Sebastian Andrzej Siewior authored
The macro __SPIN_LOCK_INITIALIZER() is implementation specific. Users that desire to initialize a spinlock in a struct must use __SPIN_LOCK_UNLOCKED(). Use __SPIN_LOCK_UNLOCKED() for the spinlock_t in imc_global_refc. Fixes: 76d588dd ("powerpc/imc-pmu: Fix use of mutex in IRQs disabled section") Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230309134831.Nz12nqsU@linutronix.de
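The fix is just the initializer (struct layout abbreviated; field names per the imc-pmu code):

  static struct imc_pmu_ref imc_global_refc = {
          /* Portable initializer, not the implementation-specific
           * __SPIN_LOCK_INITIALIZER(). */
          .lock = __SPIN_LOCK_UNLOCKED(imc_global_refc.lock),
          .id = 0,
          .refc = 0,
  };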
-
Haren Myneni authored
The VAS open window call prints an error message and returns -EBUSY after a migration suspend event is initiated, until the resume event completes on the destination system. This can fill the log buffer with error messages if user space issues continuous open window calls. A similar case occurs for the DLPAR CPU remove event, when no credits are available until credits are freed or returned by a DLPAR CPU add event. So use pr_err_ratelimited() instead of pr_err() to display the open window failure and not-available credits error messages. Use pr_fmt() and make the corresponding changes so that all pr_*() messages in vas-api.c have a consistent prefix. Fixes: 37e67648 ("powerpc/pseries/vas: Add VAS migration handler") Signed-off-by: Haren Myneni <haren@linux.ibm.com> [mpe: Use "vas-api" as the prefix to match the file name.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231019215033.1335251-1-haren@linux.ibm.com
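In pattern form (the message text here is illustrative, not the exact string from the patch):

  /* pr_fmt() must be defined before printk.h is pulled in; every
   * pr_*() call in the file then gets the "vas-api: " prefix. */
  #define pr_fmt(fmt) "vas-api: " fmt

  #include <linux/printk.h>

  static int vas_open_window_example(void)
  {
          /* Rate-limited, so repeated failures while a migration is in
           * flight cannot flood the log. */
          pr_err_ratelimited("open window failed, migration in progress\n");
          return -EBUSY;
  }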
-
- 19 Oct, 2023 33 commits
-
-
Gaurav Batra authored
When a device is initialized, the driver invokes dma_supported() twice - first for streaming mappings, then for coherent mappings. For an SR-IOV device, the default window is deleted and a DDW created. With vPMEM enabled, TCE mappings are dynamically created for both vPMEM and the SR-IOV device; there are no direct mappings. The first time dma_supported() is called with a 64-bit mask, the DDW is created and marked as a dynamic window. The second time dma_supported() is called, enable_ddw() finds the existing window for the device and incorrectly returns it as a "direct mapping". This only happens when the DDW is big enough to map the maximum LPAR memory. The result is that streaming TCEs never get dynamically mapped, since the code incorrectly assumes they are already pre-mapped. The adapter initially comes up but goes down due to EEH. Fixes: 381ceda8 ("powerpc/pseries/iommu: Make use of DDW for indirect mapping") Cc: stable@vger.kernel.org # v5.15+ Signed-off-by: Gaurav Batra <gbatra@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231003030802.47914-1-gbatra@linux.vnet.ibm.com
-
Srikar Dronamraju authored
PowerVM Hypervisor dispatches on a whole core basis. In a shared LPAR, a CPU from a core that is CEDED or preempted may have a larger latency. In such a scenario, it's preferable to choose a different CPU to run. If one of the CPUs in the core is active, i.e. neither CEDED nor preempted, then consider this CPU as not preempted. Also, if any of the CPUs in the core has yielded but the OS has not requested CEDE or CONFER, then consider this CPU to be preempted. Correct detection of preempted CPUs is important for detecting idle CPUs/cores in the task scheduler. Tested-by: Aboorva Devarajan <aboorvad@linux.vnet.ibm.com> Reviewed-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com> Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231019091452.95260-1-srikar@linux.vnet.ibm.com
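A hedged sketch of the core-wide test described above; is_vcpu_idle() stands in for the CEDE/confer bookkeeping and is hypothetical, while cpu_first_thread_sibling() and threads_per_core are the usual powerpc topology helpers:

  static bool vcpu_core_is_preempted(int cpu)
  {
          int first = cpu_first_thread_sibling(cpu);
          int i;

          /* If any thread of the core is active, treat the whole core
           * (and therefore this CPU) as not preempted. */
          for (i = first; i < first + threads_per_core; i++)
                  if (!is_vcpu_idle(i))
                          return false;
          return true;
  }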
-
Kuan-Wei Chiu authored
This patch improves the performance of event alternative lookup by replacing the previous linear search with a more efficient binary search. This change reduces the time complexity for the search process from O(n) to O(log(n)). A pre-sorted table of event values and their corresponding indices has been introduced to expedite the search process. Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com> [mpe: Call the array "presort*ed*_event_table", minor formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231013175714.2142775-1-visitorckw@gmail.com
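A minimal sketch of the lookup (the real presorted table pairs each event value with an index into the alternatives table; bounds handling simplified):

  /* Binary search a table sorted by event value: O(log n) instead of
   * the previous O(n) linear scan. */
  static int find_alternative(u64 event, const u64 table[], int size)
  {
          int lo = 0, hi = size - 1;

          while (lo <= hi) {
                  int mid = lo + (hi - lo) / 2;

                  if (table[mid] < event)
                          lo = mid + 1;
                  else if (table[mid] > event)
                          hi = mid - 1;
                  else
                          return mid;
          }
          return -1;
  }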
-
Michael Ellerman authored
A thread started via eg. user_mode_thread() runs in the kernel to begin with and then may later return to userspace. While it's running in the kernel it has a pt_regs at the base of its kernel stack, but that pt_regs is all zeroes. If the thread oopses in that state, it leads to an ugly stack trace with a big block of zero GPRs, as reported by Joel:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc7-00004-gf7757129-dirty #3
  Hardware name: IBM PowerNV (emulated by qemu) POWER9 0x4e1200 opal:v7.0 PowerNV
  Call Trace:
  [c0000000036afb00] [c0000000010dd058] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c0000000036afb30] [c00000000013c524] panic+0x178/0x424
  [c0000000036afbd0] [c000000002005100] mount_root_generic+0x250/0x324
  [c0000000036afca0] [c0000000020057d0] prepare_namespace+0x2d4/0x344
  [c0000000036afd20] [c0000000020049c0] kernel_init_freeable+0x358/0x3ac
  [c0000000036afdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c0000000036afe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0
  NIP: 0000000000000000 LR: 0000000000000000 CTR: 0000000000000000
  REGS: c0000000036afe80 TRAP: 0000 Not tainted (6.5.0-rc7-00004-gf7757129-dirty)
  MSR: 0000000000000000 <> CR: 00000000 XER: 00000000
  CFAR: 0000000000000000 IRQMASK: 0
  GPR00: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR04: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR12: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR24: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR28: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  NIP [0000000000000000] 0x0
  LR [0000000000000000] 0x0
  --- interrupt: 0

The all-zero pt_regs looks ugly and conveys no useful information, other than its presence. So detect that case and just show the presence of the frame by printing the interrupt marker, eg:

  Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc3-00126-g18e9506562a0-dirty #301
  Hardware name: IBM pSeries (emulated by qemu) POWER9 (raw) 0x4e1202 0xf000005 of:SLOF,HEAD hv:linux,kvm pSeries
  Call Trace:
  [c000000003aabb00] [c000000001143db8] dump_stack_lvl+0x6c/0x9c (unreliable)
  [c000000003aabb30] [c00000000014c624] panic+0x178/0x424
  [c000000003aabbd0] [c0000000020050fc] mount_root_generic+0x250/0x324
  [c000000003aabca0] [c0000000020057cc] prepare_namespace+0x2d4/0x344
  [c000000003aabd20] [c0000000020049bc] kernel_init_freeable+0x358/0x3ac
  [c000000003aabdf0] [c0000000000111b0] kernel_init+0x30/0x1a0
  [c000000003aabe50] [c00000000000debc] ret_from_kernel_user_thread+0x14/0x1c
  --- interrupt: 0 at 0x0

To avoid ever suppressing a valid pt_regs make sure the pt_regs has a zero MSR and TRAP value, and is located at the very base of the stack. Fixes: 6895dfc0 ("powerpc: copy_thread fill in interrupt frame marker and back chain") Reported-by: Joel Stanley <joel@jms.id.au> Reported-by: Nicholas Piggin <npiggin@gmail.com> Reviewed-by: Joel Stanley <joel@jms.id.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230824064210.907266-1-mpe@ellerman.id.au
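A hedged sketch of the suppression test the last paragraph describes (the helper name and exact frame arithmetic are per my reading, not verbatim from the patch):

  /* Only treat a pt_regs as an empty user frame if MSR and TRAP are
   * zero AND it sits at the very base of the kernel stack, so a
   * genuine all-zero register state elsewhere is never hidden. */
  static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
  {
          unsigned long stack_top;

          if (regs->msr || regs->trap)
                  return false;

          stack_top = (unsigned long)task_stack_page(tsk) + THREAD_SIZE;
          stack_top -= STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);

          return (unsigned long)regs == stack_top;
  }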
-
Stanislav Kinsburskii authored
virt_to_phys() doesn't need the address pointer to be mutable. At the same time allowing it to be mutable leads to the following build warning for constant pointers:

  warning: passing argument 1 of ‘virt_to_phys’ discards ‘const’ qualifier from pointer target type

Signed-off-by: Stanislav Kinsburskii <stanislav.kinsburskii@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/168155747391.13678.10634415747614468991.stgit@skinsburskii.localdomain
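The fix is just a const-qualification of the parameter; on powerpc the helper looks roughly like this (sketch, per the powerpc header):

  /* const: virt_to_phys() never writes through the pointer, so const
   * callers no longer trip the discarded-qualifier warning. */
  static inline unsigned long virt_to_phys(const volatile void *address)
  {
          return __pa((unsigned long)address);
  }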
-
Yang Yingliang authored
Add a missing iounmap() if request_irq() fails. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20230105064145.3879356-1-yangyingliang@huawei.com
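In pattern form (function, device and handler names are illustrative, not from the patch):

  static irqreturn_t isr(int irq, void *dev)
  {
          return IRQ_HANDLED;
  }

  static int probe_example(struct resource *res, int irq, void *dev)
  {
          void __iomem *regs;
          int rc;

          regs = ioremap(res->start, resource_size(res));
          if (!regs)
                  return -ENOMEM;

          rc = request_irq(irq, isr, 0, "example", dev);
          if (rc)
                  iounmap(regs);  /* the previously missing cleanup */

          return rc;
  }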
-
Benjamin Gray authored
Sparse reports an endianness error with the else case of

  val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) : (u64)(reg_entry->reg_val));

This is a safe operation because the code is explicitly working with dynamic endianness, so add the __force annotation to tell Sparse to ignore it. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-13-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports a warning when casting to an int. There is no need to cast in the first place, so drop them. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-12-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports dereferencing an __iomem pointer. These routines are clearly low level handlers for IO memory, so force cast away the __iomem annotation to tell sparse the dereferences are safe. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-11-bgray@linux.ibm.com
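The pattern, sketched with a hypothetical accessor name:

  /* Low level handler that deliberately dereferences IO memory; the
   * __force cast tells sparse the address-space crossing is intended. */
  static u8 raw_io_read8(const volatile u8 __iomem *addr)
  {
          return *(const volatile u8 __force *)addr;
  }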
-
Benjamin Gray authored
Sparse reports dereference of a __user pointer. copy_mc_to_user() takes a __user pointer, verifies it, then calls the generic copy routine copy_mc_generic(). As we have verified the pointer, cast out the __user annotation when passing to copy_mc_generic(). Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-10-bgray@linux.ibm.com
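Roughly, mirroring the description (sketch; the real powerpc function also handles size checking):

  unsigned long copy_mc_to_user(void __user *to, const void *from,
                                unsigned long n)
  {
          if (access_ok(to, n)) {
                  allow_write_to_user(to, n);
                  /* Pointer already verified, so drop __user for the
                   * generic machine-check-safe copy. */
                  n = copy_mc_generic((void __force *)to, from, n);
                  prevent_write_to_user(to, n);
          }
          return n;
  }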
-
Benjamin Gray authored
Sparse reports an endian mismatch with args to opal_int_get_xirr(). Checking the skiboot source[1] shows the function takes a __be32* (as expected), so update the function declaration to reflect this. [1]: https://github.com/open-power/skiboot/blob/80e2b1dc73/hw/xive.c#L3479 Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-9-bgray@linux.ibm.com
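The updated declaration, sketched (the actual prototype lives among the OPAL call wrappers):

  /* Match the skiboot definition: the xirr value comes back
   * big-endian, so the out parameter is __be32, not u32. */
  int64_t opal_int_get_xirr(__be32 *out_xirr, bool just_poll);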
-
Benjamin Gray authored
Sparse reports several endianness warnings on variables and functions that are consistently treated as big endian. There are no multi-endianness shenanigans going on here so fix these low hanging fruit up in one patch. All changes are just type annotations; no endianness switching operations are introduced by this patch. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-7-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports several function implementations annotated with extern. This is clearly incorrect, likely just copied from an actual extern declaration in another file. Fix the sparse warnings by removing extern. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-6-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports several uses of 0 for pointer arguments and comparisons. Replace with NULL to better convey the intent. Where the 0 was in a comparison, remove the comparison entirely to follow the kernel style of implicit boolean conversions (e.g. if (!ptr) rather than if (ptr == NULL)). Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-5-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports an invalid endian cast here. The code is written for big endian platforms, so le32_to_cpu() acts as a byte reversal. This file is checked by sparse on a little endian build though, so replace the reverse function with the dedicated swab32() function to better express the intent of the code. Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-4-bgray@linux.ibm.com
-
Benjamin Gray authored
Sparse reports a size mismatch in the endian swap. The Opal implementation[1] passes the value as a __be64, and the receiving variable out_qsize is a u64, so the use of be32_to_cpu() appears to be an error. [1]: https://github.com/open-power/skiboot/blob/80e2b1dc73/hw/xive.c#L3854 Fixes: 88ec6b93 ("powerpc/xive: add OPAL extensions for the XIVE native exploitation support") Signed-off-by: Benjamin Gray <bgray@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/20231011053711.93427-2-bgray@linux.ibm.com
-
Christophe Leroy authored
Introduce the PAGE_EXECONLY_X macro which provides exec-only rights. The _X may be seen as redundant with the EXECONLY but it helps keep consistency: all macros having the EXEC right have _X. And put it next to PAGE_NONE, as PAGE_EXECONLY_X is somehow PAGE_NONE + EXEC, just like all other SOMETHING_X are just SOMETHING + EXEC. On book3s/64 PAGE_EXECONLY becomes PAGE_READONLY_X. On book3s/64, as PAGE_EXECONLY is only valid for Radix, add the VM_READ flag in vm_get_page_prot() for non-Radix. And update access_error() so that a non-exec fault on a VM_EXEC-only mapping is always invalid, even when the underlying layer doesn't always generate a fault for it. For 8xx, set PAGE_EXECONLY_X as _PAGE_NA | _PAGE_EXEC. For others, set it as just _PAGE_EXEC. With that change, 8xx, e500 and 44x fully honor execute-only protection. On 40x this is a partial implementation of execute-only: the implementation won't be complete because once a TLB has been loaded via the Instruction TLB miss handler, it will be possible to read the page. But at least it can't be read unless it is executed first. On 603 MMU, TLB misses are handled by SW and there are separate DTLB and ITLB. Execute-only is therefore now supported by not loading the DTLB when read access is not permitted. On hash (604) MMU it is more tricky because the hash table is common to load/store and execute. Nevertheless it is still possible to check whether _PAGE_READ is set before loading the hash table for a load/store access. At least it can't be read unless it is executed first. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/4283ea9cbef9ff2fbee468904800e1962bc8fc18.1695659959.git.christophe.leroy@csgroup.eu
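Sketched for 8xx per the description above (whether _PAGE_BASE is folded in varies by platform; treat this as illustrative only):

  /* Exec-only: no read access (_PAGE_NA), but exec allowed. */
  #define PAGE_EXECONLY_X	__pgprot(_PAGE_BASE | _PAGE_NA | _PAGE_EXEC)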
-
Christophe Leroy authored
_PAGE_USER is now gone on all targets. Remove it completely. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/76ebe74fdaed4297a1d8203a61174650c1d8d278.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Instead of always displaying either 'rw' or 'r ' depending on _PAGE_RW, display 'r' or ' ' for _PAGE_READ and 'w' or ' ' for _PAGE_WRITE. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/dd8201a0f8fd87ce62a7ff2edc958b604b8ec3c0.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
On 603 MMU, TLB misses are handled by SW and there are separate DTLB and ITLB. It is therefore possible to implement execute-only protection by not loading the DTLB when read access is not permitted. To do that, the _PAGE_READ flag is needed but there is no bit available for it in the PTE. On the other hand the only real use of _PAGE_USER is to implement PAGE_NONE by clearing _PAGE_USER. As PAGE_NONE can also be implemented by clearing _PAGE_READ, remove _PAGE_USER and add _PAGE_READ. Then use the virtual address to know whether user rights or kernel rights are to be used. With that change, 603 MMU now honors execute-only protection. For hash (604) MMU it is more tricky because the hash table is common to load/store and execute. Nevertheless it is still possible to check whether _PAGE_READ is set before loading the hash table for a load/store access. At least it can't be read unless it is executed first. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/b7702dd5a041ec59055ed2880f4952e94c087a2e.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In several places, _PAGE_RW maps to write permission and doesn't always imply read. To make this clearer, do as book3s/64 in commit c7d54842 ("powerpc/mm: Use _PAGE_READ to indicate Read access") and use _PAGE_WRITE when more relevant. For the time being _PAGE_WRITE is equivalent to _PAGE_RW, but that will change when _PAGE_READ gets added in following patches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/5798782869fe4d2698f104948dabd17657b89395.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
_PAGE_USER is used to select the zone. Today zone 0 is kernel and zone 1 is user. To implement PAGE_NONE, _PAGE_USER is cleared, leading to no access for user, but kernel still has access to the page, so it's possible for a user application to write in that page by using a kernel function as a trampoline. What is really wanted is to have user rights on pages below TASK_SIZE and no user rights on pages above TASK_SIZE. Use zones for that. There are 16 zones, so let's use the 4 upper address bits to set the zone and declare zone rights based on TASK_SIZE. Then drop _PAGE_USER and reuse it as _PAGE_READ, which will be checked in the Data TLB miss handler. That will properly handle PAGE_NONE for both kernel and user. In addition, it partially implements execute-only rights: the implementation won't be complete because once a TLB has been loaded via the Instruction TLB miss handler, it will be possible to read the page. But at least it can't be read unless it is executed first. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/2a13e3ba8a5dec43143cc1f9a91ec71ea1529f3c.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
44x MMU has 6 page protection bits:
- R, W, X for supervisor
- R, W, X for user
It means that it can support X without R. To do that, the _PAGE_READ flag is needed but there is no bit available for it in the PTE. On the other hand the only real use of _PAGE_USER is to implement PAGE_NONE by clearing _PAGE_USER. As PAGE_NONE can also be implemented by clearing _PAGE_READ, remove _PAGE_USER and add _PAGE_READ. In order to insert bits in one go during TLB miss, move _PAGE_ACCESSED and put _PAGE_READ just after _PAGE_DIRTY so that _PAGE_DIRTY is copied into SW and _PAGE_READ into SR at once. With that change, 44x now also honors execute-only protection. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/043e17987b260b99b45094138c6cb2e89e63d499.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
e500 MMU has 6 page protection bits:
- R, W, X for supervisor
- R, W, X for user
It means that it can support X without R. To do that, the _PAGE_READ flag is needed. With 32-bit PTEs there is no bit available for it. On the other hand the only real use of _PAGE_USER is to implement PAGE_NONE by clearing _PAGE_USER. As PAGE_NONE can also be implemented by clearing _PAGE_READ, remove _PAGE_USER and add _PAGE_READ. Move _PAGE_PRESENT into bit 30 so that _PAGE_READ can match the SR bit. With 64-bit PTEs _PAGE_USER is already the combination of SR and UR, so all we need to do is rename it _PAGE_READ. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/0849ab6bf7ae2af23f94b0457fa40d0ea3983fe4.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
pte_user() is now only used in pte_access_permitted() to check access on vmas. The user flag is cleared to make a page unreadable. So rename it pte_read() and remove pte_user(), which isn't used anymore. For the time being it checks _PAGE_USER, but in the near future all platforms will be converted to _PAGE_READ, so let's support both for now. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/72cbb5be595e9ef884140def73815ed0b0b37010.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
In several places, _PAGE_RW maps to write permission and doesn't always imply read. To make this clearer, do as book3s/64 in commit c7d54842 ("powerpc/mm: Use _PAGE_READ to indicate Read access") and use _PAGE_WRITE when more relevant. For the time being _PAGE_WRITE is equivalent to _PAGE_RW, but that will change when _PAGE_READ gets added in following patches. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/1f79b88db54d030ada776dc9845e0e88345bfc28.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
book3s64 needs specific masks because it needs _PAGE_PRIVILEGED for PAGE_NONE. book3s64 already has _PAGE_RW and _PAGE_RWX. So add _PAGE_NA, _PAGE_RO and _PAGE_ROX and remove the specific permission masks. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/d37a418de52e2c950cad6797e81663b56991366f.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
8xx already has _PAGE_NA and _PAGE_RO. So add _PAGE_ROX, _PAGE_RW and _PAGE_RWX and remove specific permission masks. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/5d0b5ce43485f697313eee4326ddff97157fb219.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Prepare a common version of the permission masks that will be based on _PAGE_NA, _PAGE_RO, _PAGE_ROX, _PAGE_RW and _PAGE_RWX, which will be defined in platform specific headers in later patches. Put them in a new header, pgtable-masks.h, which will be included by the platforms. And prepare a common version of the flags used for mapping kernel memory, based on _PAGE_RO, _PAGE_ROX, _PAGE_RW and _PAGE_RWX, also defined in platform specific headers. Put them in the same header, used only when _PAGE_KERNEL_RO is not already defined, so that platform specific definitions can be dismantled one by one. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/d31b59efcdb38e675479563307af891aeadf6ec9.1695659959.git.christophe.leroy@csgroup.eu
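A sketch of what such a common header can look like, assuming the per-platform bits named above and a platform-provided _PAGE_BASE:

  /* Common permission masks built from per-platform RO/RW/ROX/RWX/NA
   * bits; each platform supplies the underlying definitions. */
  #define PAGE_NONE	__pgprot(_PAGE_BASE | _PAGE_NA)
  #define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_RW)
  #define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_RWX)
  #define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_RO)
  #define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_ROX)
  #define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_RO)
  #define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_ROX)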
-
Christophe Leroy authored
pte_user() may return 'false' when a user page is PAGE_NONE. In that case it is still a user page and needs to be handled as such. So use is_kernel_addr() instead. And remove the "user" text from ptdump, as ptdump only dumps kernel tables. Note: no change done for book3s/64, which still has its 'privileged' bit. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/c778dad89fad07727c31717a9c62f45357c29ebc.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
pte_mkuser() is never used. Remove it. pte_mkpriviledged() is not used anymore. Remove it too. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/1a1dc18816456c637dc8a9c38d532f7598b60ac4.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
Calling ioremap() with _PAGE_USER (or _PAGE_PRIVILEGED unset) is wrong. Loudly fail the call to ioremap() instead of blindly clearing the flags. Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/b6dd5485ad00d2aafd2bb9b7c2c4eac3ebf2cdaf.1695659959.git.christophe.leroy@csgroup.eu
-
Christophe Leroy authored
ioremap_page_range() calls pgprot_nx(). vmap() and vmap_pfn() also clear execute permission by calling pgprot_nx(). When pgprot_nx() is not defined it falls back to a no-op. Implement it for powerpc, then use it in early_ioremap_range(). Then the call to pte_exprotect() can be removed from ioremap_prot(). Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://msgid.link/5993a7a097e989af1c97fc4a6c011fefc67dbe6e.1695659959.git.christophe.leroy@csgroup.eu
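A plausible shape for the powerpc implementation, reusing the existing pte helpers (a sketch, not the exact patch):

  /* Mask execute permission out of a pgprot. The generic fallback,
   * used when an arch doesn't define pgprot_nx(), is a no-op. */
  #define pgprot_nx pgprot_nx
  static inline pgprot_t pgprot_nx(pgprot_t prot)
  {
          return pte_pgprot(pte_exprotect(__pte(pgprot_val(prot))));
  }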
-