Commit 85e68cfc authored by Mark Rutland, committed by Luis Henriques

arm64: head.S: ensure visibility of page tables

commit 91d57155 upstream.

After writing the page tables, we use __inval_cache_range to invalidate
any stale cache entries. Strongly Ordered memory accesses are not
ordered w.r.t. cache maintenance instructions, and hence explicit memory
barriers are required to provide this ordering. However,
__inval_cache_range was written to be used on Normal Cacheable memory
once the MMU and caches are on, and does not have any barriers prior to
the DC instructions.
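
For illustration only, a minimal sketch of an invalidate-by-VA loop in the same spirit as __inval_cache_range (this is not the kernel's actual routine, which also handles unaligned start/end addresses with DC CIVAC; x0 = start, x1 = end, x2 = cache line size are assumed here). Note that nothing in such a routine orders earlier Strongly Ordered stores before the DC instructions:

	// simplified sketch, not the real __inval_cache_range
1:	dc	ivac, x0		// invalidate D/U line by VA to PoC
	add	x0, x0, x2		// advance by one cache line
	cmp	x0, x1
	b.lo	1b
	dsb	sy			// complete the invalidation before returning
	ret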

This patch adds a DMB between the page tables being written and the
corresponding cachelines being invalidated, ensuring that the
invalidation makes the new data visible to subsequent cacheable
accesses. A barrier is not required before the prior invalidate as we do
not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensure
ordering w.r.t. any stores performed prior to entering Linux.
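
In terms of the resulting instruction stream, the required ordering looks roughly like the following (register use is illustrative, not the actual head.S code): the DMB makes the non-cacheable page table stores observable before the cache maintenance performed in the callee.

	str	x3, [x4]		// final page table entry written with the MMU off
	dmb	sy			// order the stores before the DC instructions
	bl	__inval_cache_range	// per-line invalidate loop, completed by a DSB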
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca7 ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent 4138de54
@@ -599,6 +599,7 @@ __create_page_tables:
 	 */
 	mov	x0, x25
 	add	x1, x26, #SWAPPER_DIR_SIZE
+	dmb	sy
 	bl	__inval_cache_range

 	mov	lr, x27