Commit 980bbf7c authored by Christophe Leroy, committed by Michael Ellerman

powerpc/32: Call mmu_mark_initmem_nx() regardless of data block mapping.

mark_initmem_nx() calls either mmu_mark_initmem_nx() or
set_memory_attr(), depending on what v_block_mapped() returns
for _sinittext.
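
Before this patch the dispatch is exclusive, roughly as the removed
lines in the second hunk below show (note that in this tree the
per-page path uses set_memory_nx()/set_memory_rw() rather than
set_memory_attr()):

	if (v_block_mapped((unsigned long)_sinittext)) {
		mmu_mark_initmem_nx();
	} else {
		set_memory_nx((unsigned long)_sinittext, numpages);
		set_memory_rw((unsigned long)_sinittext, numpages);
	}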

But we can now handle text and data independently, so that
text may be mapped by block even when data is mapped by pages.

On the 8xx for instance, 32 Mbytes of memory are pinned in the TLB
at startup. So the pinned entries covering sinittext need to go away.

In the next patch a BAT will be set to also cover sinittext on
book3s/32. So it will also be necessary to call mmu_mark_initmem_nx()
even when the data above sinittext is not mapped with BATs.

As this is highly dependent on the platform, call mmu_mark_initmem_nx()
regardless of data block mapping. Then the platform will know what to
do.
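
With the change, mark_initmem_nx() reads roughly as follows; this is
a sketch assembled from the second hunk below, not verbatim file
contents:

	void mark_initmem_nx(void)
	{
		unsigned long numpages = PFN_UP((unsigned long)_einittext) -
					 PFN_DOWN((unsigned long)_sinittext);

		/* Always let the platform hook decide what to do for block-mapped text. */
		mmu_mark_initmem_nx();

		/* Per-page attributes still apply when init text is not block mapped. */
		if (!v_block_mapped((unsigned long)_sinittext)) {
			set_memory_nx((unsigned long)_sinittext, numpages);
			set_memory_rw((unsigned long)_sinittext, numpages);
		}
	}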

Modify the 8xx mmu_mark_initmem_nx() so that the inittext mapping is
modified only when pagealloc debug and kfence are not active;
otherwise inittext is mapped with standard pages. And don't do
anything to kernel text, which is already mapped with PAGE_KERNEL_TEXT.
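
On the 8xx side, the core of mmu_mark_initmem_nx() after this change
looks roughly like this (a sketch reconstructed from the first hunk
below, with surrounding lines omitted):

	void mmu_mark_initmem_nx(void)
	{
		unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
		unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);

		/* With pagealloc debug or kfence, inittext keeps standard page mappings. */
		if (!debug_pagealloc_enabled_or_kfence())
			mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);

		/* Re-pin the block-mapped RAM entries in the TLB. */
		mmu_pin_tlb(block_mapped_ram, false);
	}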

Fixes: da1adea0 ("powerpc/8xx: Allow STRICT_KERNEL_RwX with pinned TLB")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/db3fc14f3bfa6215b0786ef58a6e2bc1e1f964d7.1655202804.git.christophe.leroy@csgroup.eu
parent f57261e6
arch/powerpc/mm/nohash/8xx.c
@@ -170,8 +170,8 @@ void mmu_mark_initmem_nx(void)
 	unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
 	unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);
 
-	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, false);
-	mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
+	if (!debug_pagealloc_enabled_or_kfence())
+		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);
 
 	mmu_pin_tlb(block_mapped_ram, false);
 }
arch/powerpc/mm/pgtable_32.c
@@ -135,9 +135,9 @@ void mark_initmem_nx(void)
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
-	if (v_block_mapped((unsigned long)_sinittext)) {
-		mmu_mark_initmem_nx();
-	} else {
+	mmu_mark_initmem_nx();
+
+	if (!v_block_mapped((unsigned long)_sinittext)) {
 		set_memory_nx((unsigned long)_sinittext, numpages);
 		set_memory_rw((unsigned long)_sinittext, numpages);
 	}