Commit 8ec41987 authored by Will Deacon

arm64: mm: ensure patched kernel text is fetched from PoU

The arm64 booting document requires that the bootloader has cleaned the
kernel image to the PoC. However, when a CPU re-enters the kernel due to
either a CPU hotplug "on" event or resuming from a low-power state (e.g.
cpuidle), the kernel text may in fact be dirty at the PoU due to things
like alternative patching or even module loading.
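
For background, runtime patching only makes the new instructions visible
to the PoU, not the PoC. A simplified sketch of the per-cache-line
maintenance performed by a helper such as flush_icache_range() (loop and
line alignment omitted; x0 is assumed to hold the patched address):

	dc	cvau, x0	// clean the D-cache line to the PoU only;
				// the copy at the PoC may remain stale
	dsb	ish		// complete the clean before ...
	ic	ivau, x0	// ... invalidating the I-cache line to the PoU
	dsb	ish		// complete the invalidate
	isb			// resynchronize instruction fetch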

Thanks to I-cache speculation with the MMU off, stale instructions could
be fetched prior to enabling the MMU, potentially leading to crashes
when executing regions of code that have been modified at runtime.

This patch addresses the issue by invalidating the local I-cache
immediately after a CPU has enabled its MMU, but before jumping out of
the identity mapping. Any stale instructions fetched from the PoC will
then be discarded and refetched correctly from the PoU. Since only the
local CPU's I-cache needs invalidating, a non-shareable barrier (DSB NSH)
is sufficient to order the maintenance. Patching of kernel text that is
executed prior to the MMU being enabled is prohibited, so the early
entry code will always be clean.
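
Put together, the tail of __enable_mmu after this change reads roughly
as follows (an annotated restatement of the hunk below; x27 holds the
virtual address to branch to):

	msr	sctlr_el1, x0	// set SCTLR_EL1.M: the MMU is now on
	isb			// synchronize the new translation regime
	ic	iallu		// invalidate the entire local I-cache,
				// dropping lines speculated from the PoC
	dsb	nsh		// wait for completion; local CPU only
	isb			// flush the pipeline of any stale fetches
	br	x27		// jump out of the identity mapping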
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
parent 04b8637b
@@ -634,5 +634,13 @@ __enable_mmu:
 	isb
 	msr	sctlr_el1, x0
 	isb
+	/*
+	 * Invalidate the local I-cache so that any instructions fetched
+	 * speculatively from the PoC are discarded, since they may have
+	 * been dynamically patched at the PoU.
+	 */
+	ic	iallu
+	dsb	nsh
+	isb
 	br	x27
 ENDPROC(__enable_mmu)
@@ -133,6 +133,14 @@ ENTRY(cpu_resume_mmu)
 	ldr	x3, =cpu_resume_after_mmu
 	msr	sctlr_el1, x0		// restore sctlr_el1
 	isb
+	/*
+	 * Invalidate the local I-cache so that any instructions fetched
+	 * speculatively from the PoC are discarded, since they may have
+	 * been dynamically patched at the PoU.
+	 */
+	ic	iallu
+	dsb	nsh
+	isb
 	br	x3			// global jump to virtual address
 ENDPROC(cpu_resume_mmu)
 	.popsection
@@ -146,7 +146,6 @@ ENDPROC(cpu_do_switch_mm)
  *	value of the SCTLR_EL1 register.
  */
 ENTRY(__cpu_setup)
-	ic	iallu			// I+BTB cache invalidate
 	tlbi	vmalle1is		// invalidate I + D TLBs
 	dsb	ish
 