- 01 Mar, 2016 1 commit
-
Marc Zyngier authored
In order to reduce the risk of a bad merge, let's move the new kvm_call_hyp back to its original location in the file. This has zero impact from a code point of view. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 Feb, 2016 2 commits
-
Ard Biesheuvel authored
Commit c031a421 ("arm64: kaslr: randomize the linear region") implements randomization of the linear region, by subtracting a random multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping of system RAM to move upwards in the linear region, and at the same time causes memstart_addr to assume a value which may be negative if the offset of system RAM in the physical space is smaller than its offset relative to PAGE_OFFSET in the virtual space. Since memstart_addr is effectively an offset now, redefine its type as s64 so that expressions involving shifting or division preserve its sign. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
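The sign-preservation point can be seen in isolation with plain C and a hypothetical offset value (this is a standalone demo, not kernel code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* if RAM's offset in physical space is smaller than its offset
         * from PAGE_OFFSET, memstart_addr ends up negative */
        int64_t memstart_addr = -(1LL << 30);  /* hypothetical: -1 GB */

        /* arithmetic shift on s64 keeps the sign ... */
        printf("s64: %lld\n", (long long)(memstart_addr >> 12));
        /* ... logical shift on u64 produces a huge bogus value */
        printf("u64: %llu\n",
               (unsigned long long)((uint64_t)memstart_addr >> 12));
        return 0;
    }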
-
Ard Biesheuvel authored
In the boot log, instead of listing .init first, list .text, .rodata, .init and .data in the same order they appear in memory.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Feb, 2016 9 commits
-
Ard Biesheuvel authored
The LSE atomics implementation uses runtime patching to patch in calls to out-of-line non-LSE atomics implementations on cores that lack hardware support for LSE. To avoid paying the overhead cost of a function call even if no call ends up being made, the bl instruction is kept invisible to the compiler, and the out-of-line implementations preserve all registers, not just the ones that they are required to preserve as per the AAPCS64.

However, commit fd045f6c ("arm64: add support for module PLTs") added support for routing branch instructions via veneers if the branch target offset exceeds the range of the ordinary relative branch instructions. Since this deals with jump and call instructions that are exposed to ELF relocations, the PLT code uses x16 to hold the address of the branch target when it performs an indirect branch-to-register, something which is explicitly allowed by the AAPCS64 (and ordinary compiler-generated code does not expect register x16 or x17 to retain their values across a bl instruction).

Since the LSE runtime-patched bl instructions don't adhere to the AAPCS64, they don't deal with this clobbering of registers x16 and x17. So add them to the clobber list of the asm() statements that perform the call instructions, and drop x16 and x17 from the list of registers that are callee-saved in the out-of-line non-LSE implementations. In addition, since we have given these functions two scratch registers, they no longer need to stack/unstack temp registers.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
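Per Will's note, the clobber list was factored into a #define; the shape of the fix is roughly the following (a sketch, not the exact kernel macros):

    /* x16/x17 may be clobbered by a PLT veneer, x30 by the bl itself */
    #define __LL_SC_CLOBBERS	"x16", "x17", "x30"

    static inline void atomic_add_sketch(int i, atomic_t *v)
    {
        register int w0 asm ("w0") = i;
        register atomic_t *x1 asm ("x1") = v;

        asm volatile(
        /* patched at runtime: a bl to the out-of-line ll/sc version,
         * or an LSE "stadd" on cores that have the atomics */
        "	bl	__ll_sc_atomic_add\n"
        : "+r" (w0), "+Q" (v->counter)
        : "r" (x1)
        : __LL_SC_CLOBBERS);
    }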
-
Kefeng Wang authored
Use VA_START macro in asm/memory.h instead of private LOWEST_ADDR definition in dump.c. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
UAO is a feature of ARMv8.2, so add a submenu like we have for 8.1. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Lorenzo Pieralisi authored
When secondary cpus are booted through the ACPI parking protocol, the booted cpu should check that FW has correctly cleared its mailbox entry point value to make sure the boot process was correctly executed. The entry point check is carried out in the cpu_ops->cpu_postboot method, which is executed by secondary cpus when entering the kernel with irqs disabled.

The ACPI parking protocol cpu_ops maps/unmaps the mailboxes on the primary CPU to trigger secondary boot in the cpu_ops->cpu_boot method and on secondary processors to carry out FW checks on the booted CPU to verify the boot protocol was successfully executed in the cpu_ops->cpu_postboot method. Therefore, the cpu_ops->cpu_postboot method is forced to ioremap/unmap the mailboxes, which is wrong in that ioremap cannot safely be carried out with irqs disabled.

To fix this issue, this patch reshuffles the code so that the mailboxes are still mapped after the boot processor executes the cpu_ops->cpu_boot method for a given cpu, and the VA at which a mailbox is mapped for a given cpu is stashed in the per-cpu data struct so that secondary cpus can retrieve them in the cpu_ops->cpu_postboot and complete the required FW checks.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Tested-by: Loc Ho <lho@apm.com>
Tested-by: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Loc Ho <lho@apm.com>
Cc: Itaru Kitayama <itaru.kitayama@riken.jp>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
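A sketch of the data structures the fix implies (field names are illustrative; the mailbox layout follows the parking protocol specification):

    /* mailbox layout defined by the ACPI parking protocol */
    struct parking_protocol_mailbox {
        __le32 cpu_id;
        __le32 reserved;
        __le64 entry_point;
    };

    /* per-cpu bookkeeping: the boot CPU stashes the mapped VA here so
     * the secondary can run its FW checks without calling ioremap
     * itself with irqs disabled */
    struct cpu_mailbox_entry {
        struct parking_protocol_mailbox __iomem *mailbox; /* stashed VA */
        phys_addr_t mailbox_addr;
        u8 version;
        u8 gic_cpu_id;
    };

    static struct cpu_mailbox_entry cpu_mailbox_entries[NR_CPUS];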
-
Suzuki K Poulose authored
ARMv8.2 extensions [1] include an optional feature which supports half-precision (16-bit) floating point/ASIMD data processing instructions. This patch adds support for detecting and exposing the same to userspace via HWCAPs.

[1] https://community.arm.com/groups/processors/blog/2016/01/05/armv8-a-architecture-evolution

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
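Conceptually, the detection boils down to something like the following sketch (the shift and hwcap identifiers are from memory and may differ from the patch):

    /* a field value of 1 in ID_AA64PFR0_EL1.FP / .AdvSIMD indicates
     * FP/ASIMD support including half-precision data processing */
    u64 pfr0 = read_cpuid(ID_AA64PFR0_EL1);

    if (cpuid_feature_extract_signed_field(pfr0, ID_AA64PFR0_FP_SHIFT) == 1)
        elf_hwcap |= HWCAP_FPHP;
    if (cpuid_feature_extract_signed_field(pfr0, ID_AA64PFR0_ASIMD_SHIFT) == 1)
        elf_hwcap |= HWCAP_ASIMDHP;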
-
Mark Rutland authored
The asm-generic fixmap.h depends on each architecture's fixmap.h to pull in the definition of PAGE_KERNEL_RO, if this exists. In the absence of this, FIXMAP_PAGE_RO will not be defined. In mm/early_ioremap.c the definition of early_memremap_ro is predicated on FIXMAP_PAGE_RO being defined.

Currently, the arm64 fixmap.h doesn't include pgtable.h for the definition of PAGE_KERNEL_RO, and as a knock-on effect early_memremap_ro is not always defined, leading to link-time failures when it is used. This has been observed with defconfig on next-20160226.

Unfortunately, as pgtable.h includes fixmap.h, adding the include introduces a circular dependency, which is just as fragile. Instead, this patch factors out PAGE_KERNEL_RO and other prot definitions into a new pgtable-prot header which can be included by both pgtable.h and fixmap.h, avoiding the circular dependency, and ensuring that early_memremap_ro is always defined where it is used.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Andrew Pinski authored
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching. Signed-off-by: Andrew Pinski <apinski@cavium.com> Signed-off-by: David Daney <david.daney@cavium.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
Currently the .rodata section is actually still executable when DEBUG_RODATA is enabled. This changes that so the .rodata section is actually read-only, no-execute. It also adds the .rodata section to the mem_init banner.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Miles Chen authored
Remove the unnecessary boundary check, since there is a huge gap between the user and kernel address ranges and they can never overlap (arm64 does not have enough levels of page tables to cover 64-bit virtual addresses). See Documentation/arm64/memory.txt.

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Feb, 2016 9 commits
-
Catalin Marinas authored
In such a configuration, Linux uses only two pages of page tables and __pud_populate() should not be used. However, the BUILD_BUG() triggers since pud_sect() is still defined and the compiler cannot eliminate such code, even though at run-time it should not be triggered. This patch extends the #ifdef ARM64_64K_PAGES condition for pud_sect to include PGTABLE_LEVELS < 3.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
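The resulting guard looks roughly like this (a sketch; the non-trivial definitions are abbreviated from pgtable.h):

    #if defined(CONFIG_ARM64_64K_PAGES) || CONFIG_PGTABLE_LEVELS < 3
    /* with fewer than 3 levels there are no section maps at the pud
     * level, so these fold to constants and the compiler can eliminate
     * the code path containing the BUILD_BUG() */
    #define pud_sect(pud)	(0)
    #define pud_table(pud)	(1)
    #else
    #define pud_sect(pud)	((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_SECT)
    #define pud_table(pud)	((pud_val(pud) & PUD_TYPE_MASK) == PUD_TYPE_TABLE)
    #endif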
-
Suzuki K Poulose authored
Now that we have a clear understanding of the sign of a feature, rename the routines to reflect the sign, so that they are not misused. The cpuid_feature_extract_field() now accepts a 'sign' parameter.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
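A sketch of the resulting accessors, assuming the usual 4-bit ID register fields (signatures approximated from the description):

    static inline int cpuid_feature_extract_signed_field(u64 features, int field)
    {
        /* arithmetic shift right sign-extends the 4-bit field */
        return (s64)(features << (64 - 4 - field)) >> (64 - 4);
    }

    static inline unsigned int cpuid_feature_extract_unsigned_field(u64 features, int field)
    {
        /* logical shift right zero-extends it */
        return (u64)(features << (64 - 4 - field)) >> (64 - 4);
    }

    static inline int cpuid_feature_extract_field(u64 features, int field, bool sign)
    {
        return sign ? cpuid_feature_extract_signed_field(features, field)
                    : (int)cpuid_feature_extract_unsigned_field(features, field);
    }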
-
Suzuki K Poulose authored
Use the appropriate accessor for the feature bit by keeping track of the sign of the feature.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
There is confusion about whether the values of a feature are signed or not in ARM. This is not clearly mentioned in the ARM ARM either. We have treated most of the bits as signed so far, and marked the rest as unsigned explicitly. This is being fixed in the ARM ARM and will be rolled out soon. Here are the criteria in a nutshell:

1) The fields, which are either signed or unsigned, use increasing numerical values to indicate an increase in functionality. Thus, if a value of 0x1 indicates the presence of some instructions, then the 0x2 value will indicate the presence of those instructions plus some additional instructions or functionality.

2) For ID field values where the value 0x0 defines that a feature is not present, the number is an unsigned value.

3) For some features where the feature was made optional or removed after the start of the definition of the architecture, the value 0x0 is used to indicate the presence of a feature, and 0xF indicates the absence of the feature. In these cases, the fields are, in effect, holding signed values.

So with these rules applied, we have only the following fields which are signed, and the rest are unsigned:

a) ID_AA64PFR0_EL1: {FP, ASIMD}
b) ID_AA64MMFR0_EL1: {TGran4K, TGran64K}
c) ID_AA64DFR0_EL1: PMUVer (0xf - PMUv3 not implemented)
d) ID_DFR0_EL1: PerfMon
e) ID_MMFR0_EL1: {InnerShr, OuterShr}

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
Correct the feature bit entries for ID_DFR0 and ID_MMFR0 to fix the default safe value for some of the bits.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
Adds a hook for checking whether a secondary CPU has the features used already by the kernel during early boot, based on the boot CPU and plugs in the check for ASID size. The ID_AA64MMFR0_EL1:ASIDBits determines the size of the mm context id and is used in the early boot to make decisions. The value is picked up from the Boot CPU and cannot be delayed until other CPUs are up. If a secondary CPU has a smaller size than that of the Boot CPU, things will break horribly and the usual SANITY check is not good enough to prevent the system from crashing. So, crash the system with enough information. Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
Add a helper to extract ASIDBits on the current cpu.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
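A sketch of such a helper, using the cpufeature accessors above. The field-value mapping follows ID_AA64MMFR0_EL1.ASIDBits (0 means 8-bit ASIDs, 2 means 16-bit); treat the exact identifiers as approximations:

    u32 get_cpu_asid_bits(void)
    {
        u32 asid;
        u32 fld = cpuid_feature_extract_unsigned_field(
                        read_cpuid(ID_AA64MMFR0_EL1),
                        ID_AA64MMFR0_ASID_SHIFT);

        switch (fld) {
        default:
            pr_warn("CPU%d: Unknown ASID size (%d); assuming 8-bit\n",
                    smp_processor_id(), fld);
            /* fall through */
        case 0:
            asid = 8;
            break;
        case 2:
            asid = 16;
            break;
        }
        return asid;
    }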
-
Suzuki K Poulose authored
We verify the capabilities of the secondary CPUs only when hotplug is enabled. The boot time activated CPUs do not go through the verification by checking whether the system wide capabilities were initialised or not. This patch removes the capability check dependency on CONFIG_HOTPLUG_CPU, to make sure that all the secondary CPUs go through the check. The boot time activated CPUs will still skip the system wide capability check. The plan is to hook in a check for CPU features used by the kernel at early boot up, based on the Boot CPU values. Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
A secondary CPU could fail to come online due to insufficient capabilities and could simply die or loop in the kernel. E.g., a CPU with no support for the selected kernel PAGE_SIZE loops in the kernel with the MMU turned off, or a hotplugged CPU which doesn't have one of the advertised system capabilities will die during the activation. There is no way to synchronise the status of the failing CPU back to the master.

This patch solves the issue by adding a field to the secondary_data which can be updated by the failing CPU. If the secondary CPU fails even before turning the MMU on, it updates the status in a special variable reserved in the head.text section to make sure that the update can be cache-invalidated safely without possible sharing of the cache writeback granule. Here are the possible states:

-1. CPU_MMU_OFF - Initial value set by the master CPU; this value indicates that the CPU could not turn the MMU on, hence the status could not be reliably updated in the secondary_data. Instead, the CPU has updated the status @ __early_cpu_boot_status.

0. CPU_BOOT_SUCCESS - CPU has booted successfully.

1. CPU_KILL_ME - CPU has invoked cpu_ops->die, indicating the master CPU to synchronise by issuing a cpu_ops->cpu_kill.

2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(), instead is looping in the kernel. This information could be used by, say, kexec to check if it is really safe to do a kexec reboot.

3. CPU_PANIC_KERNEL - CPU detected some serious issues which require the kernel to crash immediately. The secondary CPU cannot call panic() until it has initialised the GIC. This flag can be used to instruct the master to do so.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[catalin.marinas@arm.com: conflict resolution]
[catalin.marinas@arm.com: converted "status" from int to long]
[catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
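Expressed as constants, the states above are a direct transcription of the list (names and values per the text):

    #define CPU_MMU_OFF		(-1)	/* status lives at __early_cpu_boot_status */
    #define CPU_BOOT_SUCCESS	(0)
    #define CPU_KILL_ME		(1)	/* master should issue cpu_ops->cpu_kill */
    #define CPU_STUCK_IN_KERNEL	(2)	/* looping in kernel; kexec beware */
    #define CPU_PANIC_KERNEL	(3)	/* master should panic() on our behalf */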
-
- 24 Feb, 2016 17 commits
-
Suzuki K Poulose authored
This patch moves cpu_die_early to smp.c, where it fits better. No functional changes, except for adding the necessary checks for CONFIG_HOTPLUG_CPU. Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
Or in other words, make fail_incapable_cpu() reusable. We use fail_incapable_cpu() to kill a secondary CPU early during the bringup, which doesn't have the system advertised capabilities. This patch makes the routine more generic, to kill a secondary booting CPU, getting rid of the dependency on the capability struct. This can be used by checks which are not necessarily attached to a capability struct (e.g., cpu ASIDBits). In the process, it renames the function to cpu_die_early() to better match its functionality. This will be moved to arch/arm64/kernel/smp.c later.

Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K Poulose authored
Adds a routine which can be used to park CPUs (spinning in kernel) when they can't be killed. Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
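A plausible shape for such a routine, assuming the arm64 wfe()/wfi() low-power wait macros (a sketch, not necessarily the exact patch):

    void cpu_park_loop(void)
    {
        for (;;) {
            wfe();	/* wait for event */
            wfi();	/* wait for interrupt */
        }
    }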
-
Ard Biesheuvel authored
Since arm64 does not use a decompressor that supplies an execution environment where it is feasible to some extent to provide a source of randomness, the arm64 KASLR kernel depends on the bootloader to supply some random bits in the /chosen/kaslr-seed DT property upon kernel entry. On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain some random bits. At the same time, use it to randomize the offset of the kernel Image in physical memory. Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Before we can move the command line processing before the allocation of the kernel, which is required for detecting the 'nokaslr' option which controls that allocation, move the converted command line higher up in memory, to prevent it from interfering with the kernel itself. Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper bound there. Otherwise, use ULONG_MAX (i.e., no limit).

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
This implements efi_random_alloc(), which allocates a chunk of memory of a certain size at a certain alignment, and uses the random_seed argument it receives to randomize the address of the allocation. This is implemented by iterating over the UEFI memory map, counting the number of suitable slots (aligned offsets) within each region, and picking a random number between 0 and 'number of slots - 1' to select the slot. This should guarantee that each possible offset is equally likely to be chosen.

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
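The per-region slot count described above boils down to arithmetic like this (a sketch; the helper name is illustrative):

    /* number of alloc_size-byte, align-aligned slots inside one region */
    static unsigned long region_slots(unsigned long start, unsigned long size,
                                      unsigned long alloc_size,
                                      unsigned long align)
    {
        unsigned long first = round_up(start, align);    /* first aligned offset */
        unsigned long last = start + size - alloc_size;  /* last usable offset */

        if (size < alloc_size || first > last)
            return 0;
        return (last - first) / align + 1;
    }

    /* pick target_slot uniformly from [0, total_slots - 1] using
     * random_seed, then walk the memory map a second time to find the
     * region that contains that slot */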
-
Ard Biesheuvel authored
This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new function efi_get_random_bytes(). Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
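A sketch of the wrapper, assuming the EFI stub's efi_call_early() helper and an efi_rng_protocol struct with a get_rng method:

    efi_status_t efi_get_random_bytes(efi_system_table_t *sys_table_arg,
                                      unsigned long size, u8 *out)
    {
        efi_guid_t rng_proto = EFI_RNG_PROTOCOL_GUID;
        struct efi_rng_protocol *rng;
        efi_status_t status;

        /* fails gracefully if the firmware does not expose the protocol */
        status = efi_call_early(locate_protocol, &rng_proto, NULL,
                                (void **)&rng);
        if (status != EFI_SUCCESS)
            return status;

        return rng->get_rng(rng, NULL, size, out);
    }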
-
Ard Biesheuvel authored
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been provided by the bootloader, randomize the placement of RAM inside the linear region if sufficient space is available. For instance, on a 4KB granule/3 levels kernel, the linear region is 256 GB in size, and we can choose any 1 GB aligned offset that is far enough from the top of the address space to fit the distance between the start of the lowest memblock and the top of the highest memblock. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
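Sketched from the description, the randomization step looks something like the following, with ARM64_MEMSTART_ALIGN standing in for the PUD_SIZE-derived alignment and memstart_offset_seed for the bootloader-provided entropy:

    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
        extern u16 memstart_offset_seed;	/* from the kaslr seed */
        u64 range = linear_region_size -
                    (memblock_end_of_DRAM() - memblock_start_of_DRAM());

        /* only randomize if there is slack left in the linear region */
        if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
            range /= ARM64_MEMSTART_ALIGN;
            memstart_addr -= ARM64_MEMSTART_ALIGN *
                             ((range * memstart_offset_seed) >> 16);
        }
    }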
-
Ard Biesheuvel authored
This adds support for KASLR, based on entropy provided by the bootloader in the /chosen/kaslr-seed DT property. Depending on the size of the address space (VA_BITS) and the page size, the entropy in the virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all 4 levels), with the sidenote that displacements that result in the kernel image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB granule kernels, respectively) are not allowed, and will be rounded up to an acceptable value.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is randomized independently from the core kernel. This makes it less likely that the location of core kernel data structures can be determined by an adversary, but causes all function calls from modules into the core kernel to be resolved via entries in the module PLTs.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is randomized by choosing a page-aligned 128 MB region inside the interval [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of entropy (depending on page size), independently of the kernel randomization, but still guarantees that modules are within the range of relative branch and jump instructions (with the caveat that, since the module region is shared with other uses of the vmalloc area, modules may need to be loaded further away if the module region is exhausted).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
This implements CONFIG_RELOCATABLE, which links the final vmlinux image with a dynamic relocation section, allowing the early boot code to perform a relocation to a different virtual address at runtime. This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Instead of using absolute addresses for both the exception location and the fixup, use offsets relative to the exception table entry values. Not only does this cut the size of the exception table in half, it is also a prerequisite for KASLR, since absolute exception table entries are subject to dynamic relocation, which is incompatible with the sorting of the exception table that occurs at build time. This patch also introduces the _ASM_EXTABLE preprocessor macro (which exists on x86 as well) and its _asm_extable assembly counterpart, as shorthands to emit exception table entries. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
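The _ASM_EXTABLE macro mentioned above emits a pair of place-relative 32-bit offsets rather than two 64-bit absolute addresses; a sketch of the C-side variant:

    #define _ASM_EXTABLE(from, to)					\
        "	.pushsection	__ex_table, \"a\"\n"			\
        "	.align		3\n"					\
        "	.long		(" #from " - .), (" #to " - .)\n"	\
        "	.popsection\n"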
-
Ard Biesheuvel authored
This adds support to the generic search_extable() and sort_extable() implementations for dealing with exception table entries whose fields contain relative offsets rather than absolute addresses.

Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
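One way the generic code can abstract over the two entry layouts (a sketch; the guard macro name is an assumption):

    static inline unsigned long
    ex_to_insn(const struct exception_table_entry *x)
    {
    #ifdef ARCH_HAS_RELATIVE_EXTABLE
        /* the field holds an offset relative to its own address */
        return (unsigned long)&x->insn + x->insn;
    #else
        /* the field holds an absolute address */
        return x->insn;
    #endif
    }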
-
Ard Biesheuvel authored
Add support to scripts/sortextable for handling relocatable (PIE) executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding support for the new type, no changes are needed. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__ around its C definitions so that the CPP defines can be used in asm source files as well. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Before implementing KASLR for arm64 by building a self-relocating PIE executable, we have to ensure that values we use before the relocation routine is executed are not subject to dynamic relocation themselves. This applies not only to virtual addresses, but also to values that are supplied by the linker at build time and relocated using R_AARCH64_ABS64 relocations. So instead, use assemble time constants, or force the use of static relocations by folding the constants into the instructions. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
Unfortunately, the current way of using the linker to emit build time constants into the Image header will no longer work once we switch to the use of PIE executables. The reason is that such constants are emitted into the binary using R_AARCH64_ABS64 relocations, which are resolved at runtime, not at build time, and the places targeted by those relocations will contain zeroes before that. So refactor the endian swapping linker script constant generation code so that it emits the upper and lower 32-bit words separately. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
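The reworked constant generation presumably takes a shape like this (a sketch; the macro names approximate the arm64 image header code, and the result is consumed from the linker script):

    #ifdef CONFIG_CPU_BIG_ENDIAN
    #define DATA_LE32(data)				\
        ((((data) & 0x000000ff) << 24) |		\
         (((data) & 0x0000ff00) <<  8) |		\
         (((data) & 0x00ff0000) >>  8) |		\
         (((data) & 0xff000000) >> 24))
    #else
    #define DATA_LE32(data)	((data) & 0xffffffff)
    #endif

    /* emit two 32-bit halves as build-time assembler symbols instead of
     * one 64-bit linker constant needing an R_AARCH64_ABS64 relocation */
    #define DEFINE_IMAGE_LE64(sym, data)			\
        sym##_lo32 = DATA_LE32((data) & 0xffffffff);	\
        sym##_hi32 = DATA_LE32((data) >> 32)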
-
Ard Biesheuvel authored
This adds support for emitting PLTs at module load time for relative branches that are out of range. This is a prerequisite for KASLR, which may place the kernel and the modules anywhere in the vmalloc area, making it more likely that branch target offsets exceed the maximum range of +/- 128 MB.

In this version, I removed the distinction between relocations against .init executable sections and ordinary executable sections. The reason is that it is hardly worth the trouble, given that .init.text usually does not contain that many far branches, and this version now only reserves PLT entry space for jump and call relocations against undefined symbols (since symbols defined in the same module can be assumed to be within +/- 128 MB).

For example, the mac80211.ko module (which is fairly sizable at ~400 KB) built with -mcmodel=large gives the following relocation counts:

                    relocs  branches  unique  !local
    .text             3925      3347     518     219
    .init.text          11         8       7       1
    .exit.text           4         4       4       1
    .text.unlikely      81        67      36      17

('unique' means branches to unique type/symbol/addend combos, of which !local is the subset referring to undefined symbols)

IOW, we are only emitting a single PLT entry for the .init sections, and we are better off just adding it to the core PLT section instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
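Each PLT entry loads the full 64-bit target into x16 (IP0, which the AAPCS64 reserves for exactly this kind of veneer) and branches to it; a sketch of the entry layout:

    struct plt_entry {
        /*
         * A program that conforms to the AArch64 PCS can assume that
         * x16/x17 may be clobbered at a function call boundary, so a
         * veneer is free to use them:
         *
         *	movn	x16, #0x....			// bits [15:0]
         *	movk	x16, #0x...., lsl #16		// bits [31:16]
         *	movk	x16, #0x...., lsl #32		// bits [47:32]
         *	br	x16
         */
        __le32	mov0;
        __le32	mov1;
        __le32	mov2;
        __le32	br;
    };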
-
- 23 Feb, 2016 2 commits
-
Ard Biesheuvel authored
Instead of reversing the header dependency between asm/bug.h and asm/debug-monitors.h, split off the brk instruction immediate value defines into a new header asm/brk-imm.h, and include it from both. This solves the circular dependency issue that prevents BUG() from being used in some header files, and keeps the definitions together. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
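The new leaf header plausibly reduces to a handful of immediates; the values below are my best recollection of the kernel's BRK allocation at the time, so treat them as illustrative:

    /* arch/arm64/include/asm/brk-imm.h (sketch) */
    #ifndef __ASM_BRK_IMM_H
    #define __ASM_BRK_IMM_H

    #define FAULT_BRK_IMM			0x100	/* triggering a fault on purpose */
    #define KGDB_DYN_DBG_BRK_IMM		0x400	/* kgdb dynamic breakpoint */
    #define KGDB_COMPILED_DBG_BRK_IMM	0x401	/* kgdb compiled-in breakpoint */
    #define BUG_BRK_IMM			0x800	/* BUG()/WARN() traps */

    #endif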
-
Ard Biesheuvel authored
Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right in half, and since the size of the kernel VA space itself is always a power of 2, we can treat PAGE_OFFSET as a bitmask and replace the additions/subtractions with 'or' and 'and-not' operations. For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends up getting replaced with a single tbz instruction. For the additions and subtractions, we save a mov instruction since the mask is folded into the instruction's immediate field. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
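A standalone demonstration of the identity being exploited (the VA_BITS value here is arbitrary):

    #include <stdio.h>
    #include <stdint.h>

    #define VA_BITS      48
    /* a single run of top bits: halves the VA space, e.g. 0xffff8000_00000000 */
    #define PAGE_OFFSET  (~((1ULL << (VA_BITS - 1)) - 1))

    int main(void)
    {
        uint64_t lm_addr = PAGE_OFFSET + 0x1234000;  /* some linear-map VA */

        /* subtraction == and-not; addition == or (both print 1);
         * the "is it a linear-map address?" test reduces to checking
         * bit VA_BITS - 1, i.e. a single tbz at the asm level */
        printf("%d\n", (lm_addr - PAGE_OFFSET) == (lm_addr & ~PAGE_OFFSET));
        printf("%d\n", ((lm_addr & ~PAGE_OFFSET) + PAGE_OFFSET) ==
                       ((lm_addr & ~PAGE_OFFSET) | PAGE_OFFSET));
        return 0;
    }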
-