- 01 Mar, 2024 2 commits
-
Ard Biesheuvel authored
arm64_use_ng_mappings will be set to 'true' by the early boot code if it decides to use non-global (nG) attributes for all kernel mappings, typically when enabling KASLR on a system that does not implement E0PD. In this case, the G-to-nG update routines are never called, and so there is no reason to create the writable mapping of the associated status flag in the ID map.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240301104046.1234309-6-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Commit 0dd4f60a ("arm64: mm: Add support for folding PUDs at runtime") implements specialized PUD alloc/free helpers to allow the decision whether or not to fold PUDs to be made at runtime when the number of paging levels is 4 or higher.

Its implementation of pud_free() is based on the generic version that existed when the patch was first written, but in the meantime, the freeing of a PUD has become a bit more involved, and so instead of simply freeing the page, we should invoke the generic __pud_free() that encapsulates whatever needs doing at this point. This fixes a reported warning emitted by the page flags self-diagnostics.

Reported-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20240301104046.1234309-5-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
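A minimal sketch of the shape of the fix, assuming the runtime pgtable_l4_enabled() check used elsewhere in this series (illustrative, not the verbatim diff):

    static inline void pud_free(struct mm_struct *mm, pud_t *pudp)
    {
            if (!pgtable_l4_enabled())
                    return;         /* PUD level folded at runtime: nothing to free */

            /* let the generic helper do the page table accounting and dtor
             * work instead of open-coding free_page() */
            __pud_free(mm, pudp);
    }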
-
- 22 Feb, 2024 1 commit
-
Bartosz Golaszewski authored
Add the generated executable for relacheck to the list of ignored files.

Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Link: https://lore.kernel.org/r/20240222210441.33142-1-brgl@bgdev.pl
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 Feb, 2024 2 commits
-
Marc Zyngier authored
Open-coding the feature matching parameters for LVA/LVA2 leads to issues with upcoming changes to the cpufeature code. By making TGRAN{4,16,64} and VARange signed/unsigned as per the architecture, we can use the existing macros, making the feature match robust against those changes.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
When set_pud() is called on a 4-level paging build config that runs with 3 levels at runtime (which happens with 16k page size builds with support for LPA2), the updated entry is in fact a PGD in swapper_pg_dir[], and this is mapped read-only after boot. So in this case, the existing check needs to be performed as well, even though __PAGETABLE_PUD_FOLDED is not #define'd. So replace the #ifdef with a call to pgtable_l4_enabled().

Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240216235944.3677178-2-ardb+git@google.com
Reviewed-by: Itaru Kitayama <itaru.kitayama@fujitsu.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
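A rough sketch of the runtime check this describes, assuming the existing in_swapper_pgdir()/set_swapper_pgd() helpers and the pgtable_l4_enabled() predicate (illustrative only; the barrier handling of the real set_pud() is omitted):

    static inline void set_pud(pud_t *pudp, pud_t pud)
    {
            /* with the PUD level folded at runtime, pudp really points at a
             * PGD entry in swapper_pg_dir[], which is mapped read-only */
            if (!pgtable_l4_enabled() && in_swapper_pgdir(pudp)) {
                    set_swapper_pgd((pgd_t *)pudp, __pgd(pud_val(pud)));
                    return;
            }

            WRITE_ONCE(*pudp, pud);
    }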
-
- 16 Feb, 2024 35 commits
-
Ard Biesheuvel authored
The AArch64 virtual memory system supports a global WXN control, which can be enabled to make all writable mappings implicitly no-exec. This is a useful hardening feature, as it prevents mistakes in managing page table permissions from being exploited to attack the system. When enabled at EL1, the restrictions apply to both EL1 and EL0. EL1 is completely under our control, and has been cleaned up to allow WXN to be enabled from boot onwards. EL0 is not under our control, but given that widely deployed security features such as SELinux or PaX already limit the ability of user space to create mappings that are writable and executable at the same time, the impact of enabling this for EL0 is expected to be limited. (For this reason, common user space libraries that have a legitimate need for manipulating executable code already carry fallbacks such as [0].)

If enabled at compile time, the feature can still be disabled at boot if needed, by passing arm64.nowxn on the kernel command line.

[0] https://github.com/libffi/libffi/blob/master/src/closures.c#L440

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-88-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add a hook to permit architectures to perform validation on the prot flags passed to mmap(), like arch_validate_prot() does for mprotect(). This will be used by arm64 to reject PROT_WRITE+PROT_EXEC mappings on configurations that run with WXN enabled.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-87-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
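The commit text does not name the hook, so here is only the usual pattern for such per-arch hooks (the name arch_validate_mmap_prot() and the errno are hypothetical):

    /* common code provides a default that architectures may override */
    #ifndef arch_validate_mmap_prot
    #define arch_validate_mmap_prot(prot, addr)     (true)   /* hypothetical name */
    #endif

    /* in the mmap() path, roughly: */
    if (!arch_validate_mmap_prot(prot, addr))
            return -EACCES;         /* exact errno is an assumption */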
-
Ard Biesheuvel authored
We typically enable support in defconfig for all architectural features for which we can detect at runtime if the hardware actually supports them. Now that we have implemented support for LPA2 based 52-bit virtual addressing in a way that should not impact 48-bit operation on non-LPA2 CPUs, we can do the same, and enable 52-bit virtual addressing by default.

Catalin adds:

  Currently the "Virtual address space size" arch/arm64/Kconfig menu entry sets different defaults for each page size. However, all are overridden by the defconfig to 48 bits. Set the new default in Kconfig and remove the defconfig line.

[ardb: squash follow-up fix from Catalin]

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-86-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Update Kconfig to permit 4k and 16k granule configurations to be built with 52-bit virtual addressing, now that all the prerequisites are in place.

While at it, update the feature description so it matches on the appropriate feature bits depending on the page size. For simplicity, let's just keep ARM64_HAS_VA52 as the feature name.

Note that LPA2 based 52-bit virtual addressing requires 52-bit physical addressing support to be enabled as well, as programming TCR.TxSZ to values below 16 is not allowed unless TCR.DS is set, which is what activates the 52-bit physical addressing support. While supporting the converse (52-bit physical addressing without 52-bit virtual addressing) would be possible in principle, let's keep things simple, by only allowing these features to be enabled at the same time.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-85-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
get_user_mapping_size() uses vabits_actual and CONFIG_PGTABLE_LEVELS to provide the starting point for a table walk. This is fine for LVA, as the number of translation levels is the same regardless of whether LVA is enabled. However, with LPA2, this will no longer be the case, so let's derive the number of levels from the number of VA bits directly.

Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-84-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
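As a rough illustration, assuming the existing ARM64_HW_PGTABLE_LEVELS() macro (the exact expression used in the patch may differ):

    /* derive the walk depth from the VA size actually in use, rather than
     * from the build-time CONFIG_PGTABLE_LEVELS */
    unsigned int levels = ARM64_HW_PGTABLE_LEVELS(vabits_actual);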
-
Ard Biesheuvel authored
Currently, the ptdump code deals with folded PMD or PUD levels at build time, by omitting those levels when invoking note_page. IOW, note_page() is never invoked with level == 1 if P4Ds are folded in the build configuration. With the introduction of LPA2 support, we will defer some of these folding decisions to runtime, so let's take care of this by overriding the 'level' argument when this condition triggers.

Substituting the PUD or PMD strings for "PGD" when the level in question is folded at build time is no longer necessary, and so the conditional expressions can be simplified. This also makes the indirection of the 'name' field unnecessary, so change that into a char[] array, and make the whole thing __ro_after_init.

Note that the mm_p?d_folded() functions currently ignore their mm pointer arguments, but let's wire them up correctly anyway.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-83-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Configurations built with support for 52-bit virtual addressing can also run on CPUs that only support 48 bits of VA space, in which case only that part of swapper_pg_dir that represents the 48-bit addressable region is relevant, and everything else is ignored by the hardware.

Our software pagetable walker has little in the way of input address validation, and so it will happily start a walk from an address that is not representable by the number of paging levels that are actually active, resulting in lots of bogus output from the page table dumper unless we take care to start at a valid address. So define the start address at runtime based on vabits_actual.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-82-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
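A minimal sketch of the runtime start address, assuming the existing _PAGE_OFFSET() helper and the base_addr field of the arm64 ptdump code (treat as illustrative):

    /* start the dump at the lowest VA the active VA size can represent,
     * i.e. -(1UL << vabits_actual), instead of the compile-time maximum */
    kernel_ptdump_info.base_addr = _PAGE_OFFSET(vabits_actual);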
-
Ard Biesheuvel authored
In order to support LPA2 on 16k pages in a way that permits non-LPA2 systems to run the same kernel image, we have to be able to fall back to at most 48 bits of virtual addressing.

Falling back to 48 bits would result in a level 0 with only 2 entries, which is suboptimal in terms of TLB utilization. So instead, let's fall back to 47 bits in that case. This means we need to be able to fold PUDs dynamically, similar to how we fold P4Ds for 48 bit virtual addressing on LPA2 with 4k pages.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-81-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
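For concreteness (simple arithmetic, not part of the commit message): with 16k pages each table level resolves 14 - 3 = 11 bits on top of the 14-bit page offset, so a 48-bit VA with four levels leaves 48 - (14 + 3 × 11) = 1 bit, i.e. a two-entry level 0, whereas 47 bits reduces that contribution to zero and lets the top level be folded away entirely.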
-
Ard Biesheuvel authored
Allow the KASAN init code to deal with 5 levels of paging, and relax the requirement that the shadow region is aligned to the top level pgd_t size. This is necessary for LPA2 based 52-bit virtual addressing, where the KASAN shadow will never be aligned to the pgd_t size. Allowing this also enables the 16k/48-bit case for KASAN, which is a nice bonus.

This involves some hackery to manipulate the root and next level page tables without having to distinguish all the various configurations, including 16k/48-bits (which has a two entry pgd_t level), and LPA2 configurations running with one translation level less on non-LPA2 hardware.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-80-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add support for using 5 levels of paging in the fixmap, as well as in the kernel page table handling code which uses fixmaps internally. This also handles the case where a 5 level build runs on hardware that only supports 4 levels of paging.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-79-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Update the early kernel mapping code to take 52-bit virtual addressing into account based on the LPA2 feature. This is a bit more involved than LVA (which is supported with 64k pages only), given that some page table descriptor bits change meaning in this case.

To keep the handling in asm to a minimum, the initial ID map is still created with 48-bit virtual addressing, which implies that the kernel image must be loaded into 48-bit addressable physical memory. This is currently required by the boot protocol, even though we happen to support placement outside of that for LVA/64k based configurations.

Enabling LPA2 involves more than setting TCR.T1SZ to a lower value; there is also a DS bit in TCR that needs to be set, and which changes the meaning of bits [9:8] in all page table descriptors. Since we cannot enable DS and update every live page table descriptor at the same time, let's pivot through another temporary mapping. This avoids the need to reintroduce manipulations of the page tables with the MMU and caches disabled.

To permit the LPA2 feature to be overridden on the kernel command line, which may be necessary to work around silicon errata, or to deal with mismatched features on heterogeneous SoC designs, test for CPU feature overrides first, and only then enable LPA2.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-78-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add support for 5 level paging in the G-to-nG routine that creates its own temporary page tables to traverse the swapper page tables. Also add support for running the 5 level configuration with the top level folded at runtime, to support CPUs that do not implement the LPA2 extension.

While at it, wire up the level skipping logic so it will also trigger on 4 level configurations with LPA2 enabled at build time but not active at runtime, as we'll fall back to 3 level paging in that case.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-77-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add the required types and descriptor accessors to support 5 levels of paging in the common code. This is one of the prerequisites for supporting 52-bit virtual addressing with 4k pages.

Note that this does not cover the code that handles kernel mappings or the fixmap.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-76-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation for enabling LPA2 support, introduce the mask values for converting between physical addresses and their representations in a page table descriptor.

While at it, move the pte_to_phys asm macro into its only user, so that we can freely modify it to use its input value register as a temp register.

For LPA2, the PTE_ADDR_MASK contains two non-adjacent sequences of zero bits, which means it no longer fits into the immediate field of an ordinary ALU instruction. So let's redefine it to include the bits in between as well, and only use it when converting from physical address to PTE representation, where the distinction does not matter. Also update the name accordingly to emphasize this.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-75-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
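A rough illustration of the descriptor layout being described; the macro names are made up for the example, only the bit positions follow from the text:

    /* with TCR.DS == 1, OA[49:PAGE_SHIFT] stay in their usual position and
     * OA[51:50] move into bits [9:8]; the exact address mask therefore has
     * two non-adjacent runs of set bits and cannot be encoded as a single
     * logical-immediate AND */
    #define EXAMPLE_PTE_ADDR_LOW        GENMASK_ULL(49, PAGE_SHIFT)
    #define EXAMPLE_PTE_ADDR_HIGH       GENMASK_ULL(9, 8)

    /* widened, contiguous variant, used only when converting a physical
     * address into its PTE representation, where including the bits in
     * between is harmless */
    #define EXAMPLE_PHYS_TO_PTE_MASK    GENMASK_ULL(49, 8)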
-
Ard Biesheuvel authored
When LPA2 is enabled, bits 8 and 9 of page and block descriptors become part of the output address instead of carrying shareability attributes for the region in question. So avoid setting these bits if TCR.DS == 1, which means LPA2 is enabled.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-74-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
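A minimal sketch of how the conditional attribute can be expressed; PTE_SHARED/PMD_SECT_S are the existing shareability attributes, while the lpa2_is_enabled() helper name is an assumption:

    /* only emit the shareability attribute while bits [9:8] still mean
     * SH[1:0]; with TCR.DS == 1 they carry OA[51:50] instead */
    #define PTE_MAYBE_SHARED        (lpa2_is_enabled() ? 0 : PTE_SHARED)
    #define PMD_MAYBE_SHARED        (lpa2_is_enabled() ? 0 : PMD_SECT_S)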
-
Ard Biesheuvel authored
The LPA2 feature introduces new FSC values to report abort exceptions related to translation level -1. Define these and wire them up.

Reuse the new ESR FSC classification helpers that arrived via the KVM arm64 tree, and update the one for translation faults to check specifically for a translation fault at level -1. (Access flag or permission faults cannot occur at level -1 because they always involve a descriptor at the superior level, so changing those helpers is not needed.)

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-73-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The PROT_* macros resolve to expressions that are only valid in C and not in assembler, and so they are only usable from C code. Currently, we make an exception for the permission indirection init code in proc.S, which doesn't care about the bits that are conditionally set, and so we just #define PTE_MAYBE_NG to 0x0 for any assembler file that includes these definitions.

This is dodgy because this means that PROT_NORMAL and friends are generally available in asm code, but defined in a way that deviates from the definition that C code will observe, which might lead to hard-to-diagnose issues down the road.

So instead, #define PTE_MAYBE_NG only in the place where the PIE constants are evaluated, and #undef it again right after. This allows us to drop the #define from pgtable-prot.h, and avoid the risk of deviating definitions between asm and C.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-72-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
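The #define/#undef pattern described above, sketched (placement and surrounding code are illustrative):

    /* the asm user only needs the permission-indirection (PIE) index
     * constants and does not care about the nG bit, so give PTE_MAYBE_NG a
     * dummy value just while those constants are evaluated, and take it
     * back immediately */
    #define PTE_MAYBE_NG    0
    /* ... evaluate the PIE constants here ... */
    #undef PTE_MAYBE_NG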
-
Ard Biesheuvel authored
Add support for overriding the VARange field of the MMFR2 CPU ID register. This permits the associated LVA feature to be overridden early enough for the boot code that creates the kernel mapping to take it into account.

Given that LPA2 implies LVA, disabling the latter should disable the former as well. So override the ID_AA64MMFR0.TGran field of the current page size as well if it advertises support for 52-bit addressing.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-71-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Currently, we detect CPU support for 52-bit virtual addressing (LVA) extremely early, before creating the kernel page tables or enabling the MMU. We cannot override the feature this early, and so large virtual addressing is always enabled on CPUs that implement support for it if the software support for it was enabled at build time. It also means we rely on non-trivial code in asm to deal with this feature.

Given that both the ID map and the TTBR1 mapping of the kernel image are guaranteed to be 48-bit addressable, it is not actually necessary to enable support this early, and instead, we can model it as a CPU feature. That way, we can rely on code patching to get the correct TCR.T1SZ values programmed on secondary boot and resume from suspend.

On the primary boot path, we simply enable the MMU with 48-bit virtual addressing initially, and update TCR.T1SZ if LVA is supported from C code, right before creating the kernel mapping. Given that TTBR1 still points to reserved_pg_dir at this point, updating TCR.T1SZ should be safe without the need for explicit TLB maintenance.

Since this gets rid of all accesses to the vabits_actual variable from asm code that occurred before TCR.T1SZ had been programmed, we no longer have a need for this variable, and we can replace it with a C expression that produces the correct value directly, based on the value of TCR.T1SZ.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-70-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
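A sketch of what a TCR-derived replacement for the variable could look like, assuming the existing TCR_T1SZ_OFFSET/TCR_T1SZ_MASK definitions (the kernel's actual macro may differ in detail):

    /* the active VA size follows directly from the programmed T1SZ field,
     * so no separate variable needs to be kept in sync from asm code */
    #define vabits_actual                                               \
    ({                                                                  \
            u64 __tcr = read_sysreg(tcr_el1);                           \
            u64 __t1sz = (__tcr & TCR_T1SZ_MASK) >> TCR_T1SZ_OFFSET;    \
            64 - __t1sz;                                                \
    })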
-
Ard Biesheuvel authored
This reverts commit 1682c45b, which is no longer needed now that we create the permanent kernel mapping directly during early boot.

This is a RINO (revert in name only) given that some of the code has moved around, but the changes are straight-forward.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-69-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Now that the early kernel mapping is created with all the right attributes and segment boundaries, there is no longer a need to recreate it and switch to it. This also means we no longer have to copy the kasan shadow or some parts of the fixmap from one set of page tables to the other.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-68-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Early in the boot, when .rodata is still writable, we can poke swapper_pg_dir entries directly, and there is no need to go through the fixmap. After a future patch, we will enter the kernel with swapper_pg_dir already active, and early swapper_pg_dir updates for creating the fixmap page table hierarchy itself cannot go through the fixmap for obvious reasons. So let's keep track of whether rodata is writable, and update the descriptor directly in that case.

As the same reasoning applies to early KASAN init, make the function noinstr as well.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-67-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
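A minimal sketch of the idea, with a boolean flag tracking whether .rodata is still writable (the flag name is illustrative):

    static bool rodata_is_rw = true;        /* cleared once .rodata is sealed */

    void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
    {
            /* while .rodata is still writable we can poke the swapper_pg_dir
             * entry directly; afterwards the update has to go through the
             * fixmap */
            if (rodata_is_rw) {
                    WRITE_ONCE(*pgdp, pgd);
                    return;
            }
            /* ... existing fixmap-based update path ... */
    }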
-
Ard Biesheuvel authored
The asm code that creates the initial ID map is rather intricate and hard to follow. This is problematic because it makes adding support for things like LPA2 or WXN more difficult than necessary. Also, it is parameterized like the rest of the MM code to run with a configurable number of levels, which is rather pointless, given that all AArch64 CPUs implement support for 48-bit virtual addressing, and that many systems exist with DRAM located outside of the 39-bit addressable range, which is the only smaller VA size that is widely used, and we need additional tricks to make things work in that combination.

So let's bite the bullet, rip out all the asm macros and fiddly code, and replace it with a C implementation based on the newly added routines for creating the early kernel VA mappings. And while at it, create the initial ID map based on 48-bit virtual addressing as well, regardless of the number of configured levels for the kernel proper.

Note that this code may execute with the MMU and caches disabled, and is therefore not permitted to make unaligned accesses. This shouldn't generally happen in any case for the algorithm as implemented, but to be sure, let's pass -mstrict-align to the compiler just in case.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-66-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The mapping from PGD/PUD/PMD to levels and shifts is very confusing, given that, due to folding, the shifts may be equal for different levels, if the macros are even #define'd to begin with.

In a subsequent patch, we will modify the ID mapping code to decouple the number of levels from the kernel's view of how these types are folded, so prepare for this by reformulating the macros without the use of these types. Instead, use SWAPPER_BLOCK_SHIFT as the base quantity, and derive it from either PAGE_SHIFT or PMD_SHIFT, which, if defined at all, are defined unambiguously for a given page size, regardless of the number of configured levels.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-65-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
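For illustration, the kind of reformulation being described, with the usual values (a sketch, not the exact diff):

    /* base the early-map geometry on SWAPPER_BLOCK_SHIFT alone, which is
     * well defined for every page size regardless of how many levels the
     * kernel proper was configured with */
    #ifdef CONFIG_ARM64_4K_PAGES
    #define SWAPPER_BLOCK_SHIFT     PMD_SHIFT       /* block (section) mappings */
    #else
    #define SWAPPER_BLOCK_SHIFT     PAGE_SHIFT      /* page mappings */
    #endif
    #define SWAPPER_BLOCK_SIZE      (UL(1) << SWAPPER_BLOCK_SHIFT)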
-
Ard Biesheuvel authored
Even though we support loading kernels anywhere in 48-bit addressable physical memory, we create the ID maps based on the number of levels that we happened to configure for the kernel VA and user VA spaces.

The reason for this is that the PGD/PUD/PMD based classification of translation levels, along with the associated folding when the number of levels is less than 5, does not permit creating a page table hierarchy of a set number of levels. This means that, for instance, on 39-bit VA kernels we need to configure an additional level above PGD level on the fly, and 36-bit VA kernels still only support 47-bit virtual addressing with this trick applied.

Now that we have a separate helper to populate page table hierarchies that does not define the levels in terms of PUDS/PMDS/etc at all, let's reuse it to create the permanent ID map with a fixed VA size of 48 bits.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-64-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The asm version of the kernel mapping code works fine for creating a coarse grained identity map, but for mapping the kernel down to its exact boundaries with the right attributes, it is not suitable. This is why we create a preliminary RWX kernel mapping first, and then rebuild it from scratch later on.

So let's reimplement this in C, in a way that will make it unnecessary to create the kernel page tables yet another time in paging_init().

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-63-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
__cpu_replace_ttbr1() is a static inline, and so it gets instantiated wherever it is used. This is not really necessary, as it is never called on a hot path. It also has the unfortunate side effect that the symbol idmap_cpu_replace_ttbr1 may never be referenced from kCFI-enabled C code, and this means the type id symbol may not exist either. This will result in a build error once we start referring to this symbol from asm code as well. (Note that this problem only occurs when CnP, KASAN and suspend/resume are all disabled in the Kconfig, but that is a valid config, if unusual.)

So let's just move it out of line so all callers will share the same implementation, which will reference idmap_cpu_replace_ttbr1 unconditionally.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-62-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation for moving the first assignment of arm64_use_ng_mappings to an earlier stage in the boot, ensure that kaslr_requires_kpti() is accessible without relying on the core kernel's view on whether or not KASLR is enabled. So make it a static inline, and move the kaslr_enabled() check out of it and into the callers, one of which will disappear in a subsequent patch.

Once/when support for the obsolete ThunderX 1 platform is dropped, this check reduces to an E0PD feature check on the local CPU.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-61-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
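A sketch of a caller after the change; the specific assignment shown is an assumption about one such call site:

    /* the helper itself no longer consults kaslr_enabled(); callers that
     * only care about the KASLR case check it explicitly */
    arm64_use_ng_mappings = kaslr_enabled() && kaslr_requires_kpti();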
-
Ard Biesheuvel authored
Now that we can set BSS variables from the early code running from the ID map, we can set memstart_offset_seed directly from the C code that derives the value instead of passing it back and forth between C and asm code.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-60-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation for switching to an early kernel mapping routine that maps each segment according to its precise boundaries, and with the correct attributes, let's allocate some extra pages for page tables for the 4k page size configuration. This is necessary because the start and end of each segment may not be aligned to the block size, and so we'll need an extra page table at each segment boundary.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-59-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add some helpers that will be used by the early kernel mapping code to check feature support on the local CPU. This permits the early kernel mapping to be created with the right attributes, removing the need for tearing it down and recreating it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-58-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add rodata=off to the set of kernel command line options that is parsed early using the CPU feature override detection code, so we can easily refer to it when creating the kernel mapping.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-57-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The early kaslr code open codes the detection of 'nokaslr' on the kernel command line, and this is no longer necessary now that the feature detection code, which also looks for the same string, executes before this code.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-56-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Add some helpers to extract and apply feature overrides to the bare idreg values. This involves inspecting the value and mask of the specific field that we are interested in, given that an override value/mask pair might be invalid for one field but valid for another.

Then, wire up the new helper for the hVHE test - note that we can drop the sysreg test here, as the override will be invalid when trying to enable hVHE on non-VHE capable hardware.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-55-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
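A simplified sketch of applying an override to a bare idreg value, using the existing struct arm64_ftr_override layout (the helper name and the omitted per-field validity check are assumptions):

    /* an override only takes effect for bits that are set in the mask;
     * apply it on top of the raw register value */
    static u64 idreg_with_override(u64 val, const struct arm64_ftr_override *ovr)
    {
            return (val & ~ovr->mask) | (ovr->val & ovr->mask);
    }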
-
Ard Biesheuvel authored
Once we update the early kernel mapping code to only map the kernel once with the right permissions, we can no longer perform code patching via this mapping. So move this code to an earlier stage of the boot, right after applying the relocations.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20240214122845.2033971-54-ardb+git@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-