- 05 Apr, 2017 9 commits
-
-
AKASHI Takahiro authored
In addition to the common VMCOREINFO entries defined in crash_save_vmcoreinfo_init(), the crash utility needs kimage_voffset and PHYS_OFFSET to examine the contents of a dump file (/proc/vmcore) correctly, due to the introduction of KASLR (CONFIG_RANDOMIZE_BASE) in v4.6. VA_BITS is also required by the makedumpfile command. arch_crash_save_vmcoreinfo() appends them to the dump file. More VMCOREINFO entries may be added later. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
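For illustration, a minimal sketch of such an arch hook using the generic vmcoreinfo helpers (not necessarily the exact arm64 code):

  void arch_crash_save_vmcoreinfo(void)
  {
      /* VMCOREINFO_NUMBER() emits "NUMBER(x)=<decimal>" */
      VMCOREINFO_NUMBER(VA_BITS);
      /* kimage_voffset and PHYS_OFFSET are appended as hex strings */
      vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
                            kimage_voffset);
      vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
                            PHYS_OFFSET);
  }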
-
AKASHI Takahiro authored
The primary kernel calls machine_crash_shutdown() to shut down non-boot cpus and save register state in per-cpu ELF notes before starting the crash dump kernel; see kernel_kexec(). Even if not all secondary cpus have shut down, we do kdump anyway. Since the non-boot (crashed) cpus should not be taken offline before shutting down (so that the correct status of the cpus is preserved in the crash dump), this patch also adds a variant of smp_send_stop(). Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
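Roughly, the shutdown path looks like the sketch below (crash_smp_send_stop() stands in for the smp_send_stop() variant mentioned above; a simplified sketch, not the exact arm64 code):

  void machine_crash_shutdown(struct pt_regs *regs)
  {
      local_irq_disable();

      /* stop the other cpus without marking them offline */
      crash_smp_send_stop();

      /* save the register state of the crashing cpu in its ELF note */
      crash_save_cpu(regs, smp_processor_id());

      pr_info("Starting crashdump kernel...\n");
  }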
-
AKASHI Takahiro authored
Since arch_kexec_protect_crashkres() removes a mapping for crash dump kernel image, the loaded data won't be preserved around hibernation. In this patch, helper functions, crash_prepare_suspend()/ crash_post_resume(), are additionally called before/after hibernation so that the relevant memory segments will be mapped again and preserved just as the others are. In addition, to minimize the size of hibernation image, crash_is_nosave() is added to pfn_is_nosave() in order to recognize only the pages that hold loaded crash dump kernel image as saveable. Hibernation excludes any pages that are marked as Reserved and yet "nosave." Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
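As a sketch (assuming the usual __nosave_begin/__nosave_end symbols and the crashk_res resource; not necessarily the exact arm64 code), the pfn_is_nosave() hook then becomes:

  int pfn_is_nosave(unsigned long pfn)
  {
      unsigned long nosave_begin_pfn = sym_to_pfn(&__nosave_begin);
      unsigned long nosave_end_pfn = sym_to_pfn(&__nosave_end - 1);

      return ((pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn)) ||
              crash_is_nosave(pfn);
  }

  bool crash_is_nosave(unsigned long pfn)
  {
      int i;
      phys_addr_t addr;

      if (!crashk_res.end)
          return false;

      addr = __pfn_to_phys(pfn);
      if ((addr < crashk_res.start) || (crashk_res.end < addr))
          return false;

      if (!kexec_crash_image)
          return true;

      /* pages holding the loaded image are saveable, the rest are not */
      for (i = 0; i < kexec_crash_image->nr_segments; i++) {
          struct kexec_segment *seg = &kexec_crash_image->segment[i];

          if ((seg->mem <= addr) && (addr < seg->mem + seg->memsz))
              return false;
      }

      return true;
  }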
-
Takahiro Akashi authored
arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres() are meant to be called by kexec_load() in order to protect the memory allocated for the crash dump kernel once the image is loaded. The protection is implemented by unmapping the relevant segments in crash dump kernel memory, rather than making them read-only as other architectures do, to prevent coherency issues due to potential cache aliasing (with mismatched attributes). Page-level mappings are consistently used here so that we can change the attributes of segments at page granularity, as well as shrink the region, also at page granularity, through /sys/kernel/kexec_crash_size, putting the freed memory back into the buddy system. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
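A minimal sketch of the protect/unprotect pair, assuming a helper such as set_memory_valid() (introduced by a separate patch in this series) that flips the valid bit on a range of pages:

  void arch_kexec_protect_crashkres(void)
  {
      int i;

      /* unmap (invalidate) every loaded segment of the crash kernel */
      for (i = 0; i < kexec_crash_image->nr_segments; i++)
          set_memory_valid(
              __phys_to_virt(kexec_crash_image->segment[i].mem),
              kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
  }

  void arch_kexec_unprotect_crashkres(void)
  {
      int i;

      /* map the segments again before they are accessed or overwritten */
      for (i = 0; i < kexec_crash_image->nr_segments; i++)
          set_memory_valid(
              __phys_to_virt(kexec_crash_image->segment[i].mem),
              kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 1);
  }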
-
AKASHI Takahiro authored
This function validates and invalidates PTE entries, and will be utilized in kdump to protect loaded crash dump kernel image. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
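The interface is along these lines (a sketch assuming the existing __change_memory_common() helper in arch/arm64/mm/pageattr.c):

  int set_memory_valid(unsigned long addr, int numpages, int enable)
  {
      if (enable)
          return __change_memory_common(addr, PAGE_SIZE * numpages,
                                        __pgprot(PTE_VALID),
                                        __pgprot(0));
      else
          return __change_memory_common(addr, PAGE_SIZE * numpages,
                                        __pgprot(0),
                                        __pgprot(PTE_VALID));
  }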
-
AKASHI Takahiro authored
"crashkernel=" kernel parameter specifies the size (and optionally the start address) of the system ram to be used by crash dump kernel. reserve_crashkernel() will allocate and reserve that memory at boot time of primary kernel. The memory range will be exposed to userspace as a resource named "Crash kernel" in /proc/iomem. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Pratyush Anand <panand@redhat.com> Reviewed-by: James Morse <james.morse@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
AKASHI Takahiro authored
The crash dump kernel uses only a limited range of the available memory as System RAM. On arm64 kdump, this memory range is advertised to the crash dump kernel via a device-tree property under /chosen, linux,usable-memory-range = <BASE SIZE> The crash dump kernel reads this property at boot time and calls memblock_cap_memory_range() to limit the usable memory out of what is listed either in the UEFI memory map table or in the "memory" nodes of a device tree blob. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Geoff Levand <geoff@infradead.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
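A sketch of the consuming side in the crash dump kernel (the callback name early_init_dt_scan_usablemem is an assumed helper that parses the <BASE SIZE> cells under /chosen; its body is omitted):

  static void __init fdt_enforce_memory_region(void)
  {
      struct memblock_region reg = {
          .size = 0,
      };

      /* assumed callback: fills reg from /chosen/linux,usable-memory-range */
      of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);

      if (reg.size)
          memblock_cap_memory_range(reg.base, reg.size);
  }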
-
AKASHI Takahiro authored
Add memblock_cap_memory_range(), which removes all memblock regions except the memory range specified in the arguments. In addition, memblock_mem_limit_remove_map() is reworked so that it is re-implemented using memblock_cap_memory_range(). This function, like memblock_mem_limit_remove_map(), will not remove memblocks with the MEMBLOCK_NOMAP attribute as they may be mapped and accessed later as "device memory." See the commit a571d4eb ("mm/memblock.c: add new infrastructure to address the mem limit issue"). This function is used, in a succeeding patch in the arm64 kdump support series, to limit the range of usable memory, or System RAM, on the crash dump kernel. (Please note that the "mem=" parameter is of little use for this purpose.) Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Dennis Chen <dennis.chen@arm.com> Cc: linux-mm@kvack.org Cc: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
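Conceptually the new helper looks like this (a sketch inside mm/memblock.c, relying on the internal region helpers there):

  void __init memblock_cap_memory_range(phys_addr_t base, phys_addr_t size)
  {
      int start_rgn, end_rgn;
      int i, ret;

      if (!size)
          return;

      /* split regions so that [base, base + size) falls on region edges */
      ret = memblock_isolate_range(&memblock.memory, base, size,
                                   &start_rgn, &end_rgn);
      if (ret)
          return;

      /* remove mapped regions outside the requested range */
      for (i = memblock.memory.cnt - 1; i >= end_rgn; i--)
          if (!memblock_is_nomap(&memblock.memory.regions[i]))
              memblock_remove_region(&memblock.memory, i);

      for (i = start_rgn - 1; i >= 0; i--)
          if (!memblock_is_nomap(&memblock.memory.regions[i]))
              memblock_remove_region(&memblock.memory, i);

      /* truncate the reserved regions as well */
      memblock_remove_range(&memblock.reserved, 0, base);
      memblock_remove_range(&memblock.reserved,
                            base + size, (phys_addr_t)ULLONG_MAX);
  }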
-
AKASHI Takahiro authored
This function, in combination with memblock_mark_nomap(), will be used in a later arm64 kdump patch, which temporarily isolates a range of memory from the other memory blocks in order to create a specific kernel mapping at boot time. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 Apr, 2017 12 commits
-
-
Catalin Marinas authored
Merge branch 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core

* 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: sysreg: add Set/Way sys encodings
  arm64: sysreg: add register encodings used by KVM
  arm64: sysreg: add physical timer registers
  arm64: sysreg: subsume GICv3 sysreg definitions
  arm64: sysreg: add performance monitor registers
  arm64: sysreg: add debug system registers
  arm64: sysreg: sort by encoding
-
Catalin Marinas authored
Merge tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into for-next/core

ACPI ARM64 specific changes for v4.12. The patches contain:
- IORT kernel interface misc clean-ups
- IORT id mapping interface refactoring in preparation for the platform MSI (IORT named components -> GIC ITS mappings) devid mapping code
- IORT id mapping implementation for named component nodes to ITS nodes, in order to provide the kernel with a firmware interface to map platform device devids to GIC ITS components

* tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux:
  ACPI: platform: setup MSI domain for ACPI based platform device
  ACPI: platform-msi: retrieve devid from IORT
  ACPI/IORT: Introduce iort_node_map_platform_id() to retrieve dev id
  ACPI/IORT: Rename iort_node_map_rid() to make it generic
  ACPI/IORT: Rework iort_match_node_callback() return value handling
  ACPI/IORT: Add missing comment for iort_dev_find_its_id()
  ACPI/IORT: Fix the indentation in iort_scan_node()
-
Ard Biesheuvel authored
To prevent unintended modifications to the kernel text (malicious or otherwise) while running the EFI stub, describe the kernel image as two separate sections: a .text section with read-execute permissions, covering .text, .rodata and .init.text, and a .data section with read-write permissions, covering .init.data, .data and .bss. This relies on the firmware to actually take the section permission flags into account, but this is something that is currently being implemented in EDK2, which means we will likely start seeing it in the wild between one and two years from now. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Replace open coded constants with symbolic ones throughout the Image and the EFI headers. No binary level changes are intended. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The kernel's EFI PE/COFF header contains a dummy .reloc section, and an explanatory comment that claims that this is required for the EFI application loader to accept the Image as a relocatable image (i.e., one that can be loaded at any offset and fixed up in place). This was inherited from the x86 implementation, which has elaborate host tooling to mangle the PE/COFF header post-link time, and which populates the .reloc section with a single dummy base relocation. On ARM, no such tooling exists, and the .reloc section remains empty, and is never even exposed via the BaseRelocationTable directory entry, which is where the PE/COFF loader looks for it. The PE/COFF spec is unclear about relocatable images that do not require any fixups, but the EDK2 implementation, which is the de facto reference for PE/COFF in the UEFI space, clearly does not care, and explicitly mentions (in a comment) that relocatable images with no base relocations are perfectly fine, as long as they don't have the RELOCS_STRIPPED attribute set (which is not the case for our PE/COFF image). So simply remove the .reloc section altogether. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Peter Jones <pjones@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Bring the PE/COFF header in line with the PE/COFF spec, by setting NumberOfSymbols to 0, and removing the section alignment flags. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
After having split off the PE header, clean up the bits that remain: use .long consistently, merge two adjacent #ifdef CONFIG_EFI blocks, fix the offset of the PE header pointer and remove the redundant .align that follows it. Also, since we will be eliminating all open coded constants from the EFI header in subsequent patches, let's replace the open coded "ARM\x64" magic number with its .ascii equivalent. No changes to the resulting binary image are intended. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation of yet another round of modifications to the PE/COFF header, macroize it and move the definition into a separate source file. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Add the missing IMAGE_FILE_MACHINE_ARM64 and IMAGE_DEBUG_TYPE_CODEVIEW definitions. We'll need them for the arm64 EFI stub... Signed-off-by: Mark Rutland <mark.rutland@arm.com> [ardb: add IMAGE_DEBUG_TYPE_CODEVIEW as well] Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
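For reference, the two additions amount to the following defines (values per the PE/COFF specification):

  #define IMAGE_FILE_MACHINE_ARM64    0xaa64
  #define IMAGE_DEBUG_TYPE_CODEVIEW   2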
-
Mark Rutland authored
Some of the definitions in include/linux/pe.h would be useful for the EFI stub headers, where values are currently open-coded. Unfortunately they cannot be used as-is, since some structures are also defined in pe.h without !__ASSEMBLY__ guards. This patch moves the structure definitions into an #ifndef __ASSEMBLY__ block, so that the common value definitions can be used from assembly. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
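The resulting layout of pe.h is, schematically (values shown are the standard PE/COFF magic numbers; the struct body is elided):

  /* numeric definitions: visible to both C and assembly users */
  #define MZ_MAGIC    0x5a4d          /* "MZ" */
  #define PE_MAGIC    0x00004550      /* "PE\0\0" */

  #ifndef __ASSEMBLY__

  /* structure definitions: hidden from the assembler */
  struct mz_hdr {
      uint16_t magic;                 /* MZ_MAGIC */
      /* ... */
  };

  #endif /* !__ASSEMBLY__ */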
-
Ard Biesheuvel authored
This module tests the module loader's ELF relocation processing routines. When loaded, it logs output like below.

  Relocation test:
  -------------------------------------------------------
  R_AARCH64_ABS64                 0xffff880000cccccc pass
  R_AARCH64_ABS32                 0x00000000f800cccc pass
  R_AARCH64_ABS16                 0x000000000000f8cc pass
  R_AARCH64_MOVW_SABS_Gn          0xffff880000cccccc pass
  R_AARCH64_MOVW_UABS_Gn          0xffff880000cccccc pass
  R_AARCH64_ADR_PREL_LO21         0xffffff9cf4d1a400 pass
  R_AARCH64_PREL64                0xffffff9cf4d1a400 pass
  R_AARCH64_PREL32                0xffffff9cf4d1a400 pass
  R_AARCH64_PREL16                0xffffff9cf4d1a400 pass

Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Dave Martin authored
read_system_reg() can readily be confused with read_sysreg(), whereas these are really quite different in their meaning. This patch attempts to reduce the ambiguity by reserving "sysreg" for the actual system register accessors. read_system_reg() is instead renamed to read_sanitised_ftr_reg(), to make it more obvious that the Linux-defined sanitised feature register cache is being accessed here, not the underlying architectural system registers. cpufeature.c's internal __raw_read_system_reg() function is renamed in line with its actual purpose: a form of read_sysreg() that indexes on (non-compiletime-constant) encoding rather than symbolic register name. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 Mar, 2017 2 commits
-
-
Hanjun Guo authored
By allowing the platform MSI domain to be created on ACPI platforms, a platform device MSI domain can be set up when the device is probed. In order to do that, the MSI domain the platform device connects to should be retrieved, so iort_get_platform_device_domain() is introduced to retrieve the domain from the IORT kernel layer. With the domain retrieved, we need a proper way to set the domain for the platform device. Given that some platform devices (irqchips) require the MSI irqdomain to be their interrupt parent domain, the MSI irqdomain should be determined before the platform device is probed but after the platform device is allocated, which means that the code setting up the MSI irqdomain, i.e. acpi_configure_pmsi_domain(), should be called in acpi_platform_notify() (which is triggered after adding a device but before the respective driver is probed) for the platform MSI domain set-up path to work properly. Acked-by: Rafael J. Wysocki <rafael@kernel.org> [for glue.c] Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: rewrote commit log] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Ming Lei <ming.lei@canonical.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Tested-by: Sinan Kaya <okaya@codeaurora.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Tomasz Nowicki <tn@semihalf.com>
-
Hanjun Guo authored
Devices connecting to an ITS need to identify themselves through a devid; for platform devices this devid is represented in the IORT table in the named component node [1], so this patch adds code that scans the IORT table to retrieve the devices' devid. Add an IORT interface to collect ITS device ids in order to carry out platform device MSI mappings with IORT tables. [1]: https://static.docs.arm.com/den0049/b/DEN0049B_IO_Remapping_Table.pdf Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: rewrote commit log/dropped ITS changes] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Ming Lei <ming.lei@canonical.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Tested-by: Sinan Kaya <okaya@codeaurora.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Tomasz Nowicki <tn@semihalf.com> Cc: Thomas Gleixner <tglx@linutronix.de>
-
- 29 Mar, 2017 2 commits
-
-
Hanjun Guo authored
To retrieve the dev id for IORT named component nodes there are two steps involved (the second is optional): (1) retrieve the initial id (this may well provide the final mapping); (2) map the id (optional if (1) already represents the map type we need), which is needed for use cases such as NC (named component) -> SMMU -> ITS mappings. The iort_node_get_id() function was created for step (1) above and iort_node_map_rid() for step (2). Create a wrapper, named iort_node_map_platform_id(), that encompasses the two steps at once to retrieve the dev id, providing steps (1)-(2) functionality. Since iort_node_map_platform_id() handles the parent type, the type handling in iort_node_get_id() becomes duplicated; remove it and update the current iort_node_get_id() users to move them over to iort_node_map_platform_id(). Suggested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Suggested-by: Tomasz Nowicki <tn@semihalf.com> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: rewrote commit log] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Ming Lei <ming.lei@canonical.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Tested-by: Sinan Kaya <okaya@codeaurora.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Tomasz Nowicki <tn@semihalf.com>
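Purely as an illustration of the two-step flow (signatures, macro names and helper names here are assumptions, not the exact driver code):

  static struct acpi_iort_node *iort_node_map_platform_id(
          struct acpi_iort_node *node, u32 *id_out, u8 type_mask,
          int index)
  {
      struct acpi_iort_node *parent;
      u32 id;

      /* step (1): get the initial id out of the named component node */
      parent = iort_node_get_id(node, &id, index);
      if (!parent)
          return NULL;

      /* step (2): follow the mapping entries if (1) did not already
       * land on the requested node type (e.g. NC -> SMMU -> ITS) */
      if (!(IORT_TYPE_MASK(parent->type) & type_mask))
          parent = iort_node_map_id(parent, id, id_out, type_mask);
      else if (id_out)
          *id_out = id;

      return parent;
  }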
-
Hanjun Guo authored
iort_node_map_rid() was designed to take an input id (that is not necessarily a PCI requester id) and map it to an output id (eg an SMMU streamid or an ITS deviceid) according to the mappings provided by an IORT node's mapping entries. This means that the iort_node_map_rid() input id is not always a PCI requester id, as its name, parameters and local variables suggest, which is misleading. Apply the s/rid/id substitution to the iort_node_map_rid() mapping function and its users to make sure its intended usage is clearer. Suggested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Ming Lei <ming.lei@canonical.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Tested-by: Sinan Kaya <okaya@codeaurora.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Tomasz Nowicki <tn@semihalf.com>
-
- 23 Mar, 2017 13 commits
-
-
Kefeng Wang authored
There are two unnecessary newlines: one in show_regs() and another in __show_regs(). Drop them. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
This is the third attempt at enabling the use of contiguous hints for kernel mappings. The most recent attempt 0bfc445d was reverted after it turned out that updating permission attributes on live contiguous ranges may result in TLB conflicts. So this time, the contiguous hint is not set for .rodata or for the linear alias of .text/.rodata, both of which are mapped read-write initially, and remapped read-only at a later stage. (Note that the latter region could also be unmapped and remapped again with updated permission attributes, given that the region, while live, is only mapped for the convenience of the hibernation code, but that also means the TLB footprint is negligible anyway, so why bother) This enables the following contiguous range sizes for the virtual mapping of the kernel image, and for the linear mapping:

  granule size |  cont PTE  |  cont PMD  |
  -------------+------------+------------+
         4 KB  |     64 KB  |     32 MB  |
        16 KB  |      2 MB  |      1 GB* |
        64 KB  |      2 MB  |     16 GB* |

  * Only when built for 3 or more levels of translation. This is due to the fact that a 2 level configuration only consists of PGDs and PTEs, and the added complexity of dealing with folded PMDs is not justified considering that 16 GB contiguous ranges are likely to be ignored by the hardware (and 16k/2 levels is a niche configuration)

Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The routines __pud_populate and __pmd_populate only create a table entry at their respective level which refers to the next level page by its physical address, so there is no reason to map this page and then unmap it immediately after. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation of extending the policy for manipulating kernel mappings with whether or not contiguous hints may be used in the page tables, replace the bool 'page_mappings_only' with a flags field and a flag NO_BLOCK_MAPPINGS. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
A mapping with the contiguous bit cannot be safely manipulated while live, regardless of whether the bit changes between the old and new mapping. So take this into account when deciding whether the change is safe. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The debug_pagealloc facility manipulates kernel mappings in the linear region at page granularity to detect out of bounds or use-after-free accesses. Since the kernel segments are not allocated dynamically, there is no point in taking the debug_pagealloc_enabled flag into account for them, and we can use block mappings unconditionally. Note that this applies equally to the linear alias of text/rodata: we will never have dynamic allocations there given that the same memory is statically in use by the kernel image. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Align the function prototype of alloc_init_pte() with its pmd and pud counterparts by replacing the pfn parameter with the equivalent physical address. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
To avoid having mappings that are writable and executable at the same time, split the init region into a .init.text region that is mapped read-only, and a .init.data region that is mapped non-executable. This is possible now that the alternative patching occurs via the linear mapping, and the linear alias of the init region is always mapped writable (but never executable). Since the alternatives descriptions themselves are read-only data, move those into the .init.text region. Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Now that the alternatives patching code no longer relies on the primary mapping of .text being writable, we can remove the code that removes the writable permissions post-init time, and map it read-only from the outset. To preserve the existing behavior under rodata=off, which is relied upon by external debuggers to manage software breakpoints (as pointed out by Mark), add an early_param() check for rodata=, and use RWX permissions if it is set to 'off'. Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
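A sketch of such an early_param() hook (assuming strtobool() and a rodata_enabled flag that the mapping code consults when choosing permissions):

  static bool rodata_enabled = true;

  static int __init parse_rodata(char *arg)
  {
      /* accepts "on"/"off"/"0"/"1" etc. on the kernel command line */
      return strtobool(arg, &rodata_enabled);
  }
  early_param("rodata", parse_rodata);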
-
Ard Biesheuvel authored
One important rule of thumb when designing a secure software system is that memory should never be writable and executable at the same time. We mostly adhere to this rule in the kernel, except at boot time, when regions may be mapped RWX until after we are done applying alternatives or making other one-off changes. For the alternative patching, we can improve the situation by applying the fixups via the linear mapping, which is never mapped with executable permissions. So map the linear alias of .text with RW- permissions initially, and remove the write permissions as soon as alternative patching has completed. Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
In preparation of refactoring the kernel mapping logic so that text regions are never mapped writable, which would require adding explicit TLB maintenance to new call sites of create_mapping_late() (which is currently invoked twice from the same function), move the TLB maintenance from the call site into create_mapping_late() itself, and change it from a full TLB flush into a flush by VA, which is more appropriate here. Also, given that create_mapping_late() has evolved into a routine that only updates protection bits on existing mappings, rename it to update_mapping_prot() Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
The kvm_vgic_global_state struct contains a static key which is written to by jump_label_init() at boot time. So in preparation of making .text regions truly (well, almost truly) read-only, mark kvm_vgic_global_state __ro_after_init so it moves to the .rodata section instead. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Laura Abbott <labbott@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
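The change itself is a one-line annotation, along the lines of:

  /* written once by jump_label_init() at boot, read-only afterwards */
  struct vgic_global kvm_vgic_global_state __ro_after_init;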
-
Ard Biesheuvel authored
This reverts commit 9c0e83c3, which is no longer needed now that the modversions code plays nice with relocatable PIE kernels. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 Mar, 2017 2 commits
-
-
Mark Rutland authored
We only need to initialise sctlr_el1 if we're installing an EL2 stub, so we may as well defer this until we're doing so. Similarly, we can defer initialising CPTR_EL2 until then, as we do not access any trapped functionality as part of el2_setup. This patch modifies el2_setup accordingly, allowing us to remove a branch and simplify the code flow. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The early el2_setup code is a little convoluted, with two branches where one would do. This makes the code more painful to read than is necessary. We can remove a branch and simplify the logic by moving the early return in the booted-at-EL1 case earlier in the function. This separates it from all the setup logic that only makes sense for EL2. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-