- 20 Feb, 2024 12 commits
-
Josh Poimboeuf authored
On s390, the kernel currently uses the '-fPIE' compiler flag for compiling vmlinux. This has a few problems:
- It uses dynamic symbols (.dynsym), for which the linker refuses to allow more than 64k sections. This can break features which use '-ffunction-sections' and '-fdata-sections', including kpatch-build [1] and Function Granular KASLR.
- It unnecessarily uses GOT relocations, adding an extra layer of indirection for many memory accesses.
Instead of using '-fPIE', resolve all the relocations at link time and then manually adjust any absolute relocations (R_390_64) during boot. This is done by first telling the linker to preserve all relocations during the vmlinux link. (Note this is harmless: they are later stripped in the vmlinux.bin link.) Then use the 'relocs' tool to find all absolute relocations (R_390_64) which apply to allocatable sections. The offsets of those relocations are saved in a special section which is then used to adjust the relocations during boot; see the sketch below. (Note: For some reason, Clang occasionally creates a GOT reference, even without '-fPIE'. So Clang-compiled kernels have a GOT, which needs to be adjusted.) On my mostly-defconfig kernel, this reduces kernel text size by ~1.3%.
[1] https://github.com/dynup/kpatch/issues/1284
[2] https://gcc.gnu.org/pipermail/gcc-patches/2023-June/622872.html
[3] https://gcc.gnu.org/pipermail/gcc-patches/2023-August/625986.html
Compiler consideration: gcc recently implemented an optimization [2] for loading symbols without explicit alignment, in line with the IBM Z ELF ABI. This ABI mandates that symbols reside on a 2-byte boundary, enabling the use of the larl instruction. However, kernel linker scripts may still generate unaligned symbols. To address this, a new -munaligned-symbols option has been introduced [3] in recent gcc versions; this option has to be used with such gcc versions. Older Clang lacks support for handling unaligned symbols generated by kernel linker scripts when the kernel is built without -fPIE; however, future versions of Clang will include support for the -munaligned-symbols option. When that support is unavailable, compile the kernel with -fPIE to maintain the existing behavior.
In addition: move vmlinux.relocs to a safe location. When the kernel is built with CONFIG_KERNEL_UNCOMPRESSED, the entire uncompressed vmlinux.bin is positioned in the bzImage decompressor image at the default kernel LMA of 0x100000, enabling it to be executed in-place. However, the size of .vmlinux.relocs could be large enough to cause an overlap with the uncompressed kernel at the address 0x100000. To address this issue, .vmlinux.relocs is positioned after .rodata.compressed in the bzImage. Nevertheless, in this configuration, vmlinux.relocs will overlap with the .bss section of vmlinux.bin. To overcome that, move vmlinux.relocs to a safe location before clearing .bss and handling relocs.
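A minimal sketch of the boot-time fix-up described above; the symbol and function names here are illustrative assumptions, not necessarily the kernel's exact ones:

    /* The linker script is assumed to export the saved offsets as a
     * [start, end) array of 32-bit link-time locations. */
    extern int __vmlinux_relocs_64_start[], __vmlinux_relocs_64_end[];

    static void adjust_absolute_relocs(unsigned long load_offset)
    {
            int *entry;

            for (entry = __vmlinux_relocs_64_start;
                 entry < __vmlinux_relocs_64_end; entry++) {
                    /* Each entry is the location of a 64-bit absolute
                     * address; rebase the location, then its value. */
                    unsigned long *loc = (unsigned long *)(*entry + load_offset);

                    *loc += load_offset;
            }
    }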
Compile warning fix from Sumanth Korikkar: When the kernel is built with CONFIG_LD_ORPHAN_WARN and -fno-PIE, there are several warnings:

    ld: warning: orphan section `.rela.iplt' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
    ld: warning: orphan section `.rela.head.text' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
    ld: warning: orphan section `.rela.init.text' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
    ld: warning: orphan section `.rela.rodata.cst8' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'

Orphan sections are sections that exist in an object file but don't have a corresponding output section in the final executable; ld raises a warning when it identifies such sections. Eliminate the warning by placing all .rela orphan sections in .rela.dyn and raising an error when the size of .rela.dyn is greater than zero - i.e. don't just ignore orphan sections (see the linker-script sketch at the end of this entry). This is similar to the adjustment performed on x86, where the kernel is also built with -fno-PIE: commit 5354e845 ("x86/build: Add asserts for unwanted sections"). [sumanthk@linux.ibm.com: rebased Josh Poimboeuf's patches and moved vmlinux.relocs to a safe location] [hca@linux.ibm.com: merged compile warning fix from Sumanth] Tested-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-4-sumanthk@linux.ibm.com Link: https://lore.kernel.org/r/20240219132734.22881-5-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
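A hedged linker-script sketch of that arrangement (vmlinux.lds.S conventions; the exact symbol names and input-section patterns are assumptions):

    .rela.dyn : {
            __rela_dyn_start = .;
            *(.rela*)
            __rela_dyn_end = .;
    }
    ASSERT(__rela_dyn_end - __rela_dyn_start == 0, "Unexpected run-time relocations (.rela) detected!")

Collecting every .rela input section into one output section silences the orphan warnings; the ASSERT then turns any relocation that actually survives the link into a hard build error.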
-
Josh Poimboeuf authored
This 'relocs' tool is copied from the x86 version, ported for s390, and greatly simplified to remove unnecessary features. It reads vmlinux and outputs assembly to create a .vmlinux.relocs_64 section which contains the offsets of all R_390_64 relocations which apply to allocatable sections. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-3-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
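A toy sketch of the tool's core loop (not the actual source): walk the SHT_RELA sections of vmlinux, keep the R_390_64 entries that apply to allocatable (SHF_ALLOC) sections, and print their offsets as assembly:

    #include <elf.h>
    #include <stdio.h>

    /* ehdr points at a complete vmlinux ELF image mapped into memory */
    static void emit_relocs(Elf64_Ehdr *ehdr)
    {
            Elf64_Shdr *shdr = (Elf64_Shdr *)((char *)ehdr + ehdr->e_shoff);
            Elf64_Rela *rela;
            int i, j, entries;

            printf("\t.section .vmlinux.relocs_64,\"a\"\n");
            for (i = 0; i < ehdr->e_shnum; i++) {
                    if (shdr[i].sh_type != SHT_RELA)
                            continue;
                    /* sh_info names the section the relocations apply to */
                    if (!(shdr[shdr[i].sh_info].sh_flags & SHF_ALLOC))
                            continue;
                    rela = (Elf64_Rela *)((char *)ehdr + shdr[i].sh_offset);
                    entries = shdr[i].sh_size / sizeof(*rela);
                    for (j = 0; j < entries; j++)
                            if (ELF64_R_TYPE(rela[j].r_info) == R_390_64)
                                    printf("\t.long\t0x%08lx\n",
                                           (unsigned long)rela[j].r_offset);
            }
    }

This assumes a byte-for-byte mapped, native-endian ELF file; the real tool also performs sanity checks and byte swapping.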
-
Sumanth Korikkar authored
Gcc recently implemented an optimization [1] for loading symbols without explicit alignment, in line with the IBM Z ELF ABI. This ABI mandates that symbols reside on a 2-byte boundary, enabling the use of the larl instruction. However, kernel linker scripts may still generate unaligned symbols. To address this, a new -munaligned-symbols option has been introduced [2] in recent gcc versions.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2023-June/622872.html
[2] https://gcc.gnu.org/pipermail/gcc-patches/2023-August/625986.html
However, when -munaligned-symbols is used in vdso code, it leads to the following compilation error:

    `.data.rel.ro.local' referenced in section `.text' of arch/s390/kernel/vdso64/vdso64_generic.o: defined in discarded section `.data.rel.ro.local' of arch/s390/kernel/vdso64/vdso64_generic.o

The vdso linker script discards the .data section to keep the vdso lightweight. However, with -munaligned-symbols, vdso object files reference the literal pool to access _vdso_data. Hence, compile vdso code without -munaligned-symbols. This means that, in the future, vdso code has to deal with the alignment of newly introduced unaligned linker symbols itself. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-2-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Nathan Chancellor authored
When attempting to boot a kernel compiled with OBJCOPY=llvm-objcopy, there is a crash right at boot:

    Out of memory allocating 6d7800 bytes 8 aligned in range 0:20000000
    Reserved memory ranges:
    0000000000000000 a394c3c30d90cdaf DECOMPRESSOR
    Usable online memory ranges (info source: sclp read info [3]):
    0000000000000000 0000000020000000
    Usable online memory total: 20000000 Reserved: a394c3c30d90cdaf Free: 0
    Call Trace:
    (sp:0000000000033e90 [<0000000000012fbc>] physmem_alloc_top_down+0x5c/0x104)
     sp:0000000000033f00 [<0000000000011d56>] startup_kernel+0x3a6/0x77c
     sp:0000000000033f60 [<00000000000100f4>] startup_normal+0xd4/0xd4

GNU objcopy does not have any issues. Looking at differences between the object files in each build reveals info.bin does not get properly populated with llvm-objcopy, which results in an empty .vmlinux.info section.

    $ file {gnu,llvm}-objcopy/arch/s390/boot/info.bin
    gnu-objcopy/arch/s390/boot/info.bin:  data
    llvm-objcopy/arch/s390/boot/info.bin: empty

    $ llvm-readelf --section-headers {gnu,llvm}-objcopy/arch/s390/boot/vmlinux | rg 'File:|\.vmlinux\.info|\.decompressor\.syms'
    File: gnu-objcopy/arch/s390/boot/vmlinux
    [12] .vmlinux.info      PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1
    [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1
    File: llvm-objcopy/arch/s390/boot/vmlinux
    [12] .vmlinux.info      PROGBITS 0000000000034000 035000 000000 00 WA 0 0 1
    [13] .decompressor.syms PROGBITS 0000000000034000 035000 000b00 00 WA 0 0 1

Ulrich points out that llvm-objcopy only copies sections marked as alloc with a binary output target, whereas the .vmlinux.info section is only marked as load. Add 'alloc' in addition to 'load', so that both objcopy implementations work properly:

    $ file {gnu,llvm}-objcopy/arch/s390/boot/info.bin
    gnu-objcopy/arch/s390/boot/info.bin:  data
    llvm-objcopy/arch/s390/boot/info.bin: data

    $ llvm-readelf --section-headers {gnu,llvm}-objcopy/arch/s390/boot/vmlinux | rg 'File:|\.vmlinux\.info|\.decompressor\.syms'
    File: gnu-objcopy/arch/s390/boot/vmlinux
    [12] .vmlinux.info      PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1
    [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1
    File: llvm-objcopy/arch/s390/boot/vmlinux
    [12] .vmlinux.info      PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1
    [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1

Closes: https://github.com/ClangBuiltLinux/linux/issues/1996 Link: https://github.com/llvm/llvm-project/commit/3c02cb7492fc78fb678264cebf57ff88e478e14f Suggested-by: Ulrich Weigand <ulrich.weigand@de.ibm.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20240216-s390-fix-boot-with-llvm-objcopy-v1-1-0ac623daf42b@kernel.org Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
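For illustration, the kind of objcopy invocation at issue, now with both flags (a hedged example; the kernel's actual Makefile spelling may differ):

    $ objcopy -O binary --only-section=.vmlinux.info \
              --set-section-flags .vmlinux.info=alloc,load vmlinux info.bin

With only 'load', llvm-objcopy treats .vmlinux.info as non-allocatable and omits it from the binary output; 'alloc,load' makes both objcopy implementations extract it.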
-
Gerd Bayer authored
Found and fixed these issues while working on synchronizing the state handling of zpci_devs. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Gerd Bayer authored
Centralize the removal so all paths are covered and the hotplug slot will remain active until the device is really destroyed. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Gerd Bayer authored
There are a number of tasks that need the state of a zpci device to be stable. Other tasks need to be synchronized as they change the state. State changes could be generated by the system as availability or error events, or be requested by the user through manipulations in sysfs. Some other actions accessible through sysfs - like device resets - need the state to be stable. Unsynchronized state handling could lead to unusable devices. This has been observed in cases of concurrent state changes through systemd udev rules and DPM boot control. Some breakage can be provoked by artificial tests, e.g. through repetitively injecting "recover" on a PCI function through sysfs while running a "hotplug remove/add" in a loop through a PCI slot's "power" attribute in sysfs. After a few iterations this could result in a kernel oops. So introduce a new mutex "state_lock" to guard the state property of struct zpci_dev. Acquire this lock in all tasks that modify the state:
- hotplug add and remove, through the PCI hotplug slot entry,
- availability events, as reported by the platform,
- error events, as reported by the platform,
- during device resets, explicitly through sysfs requests or implicitly through the common PCI layer.
Break an inner _do_recover() routine out of recover_store() to separate the necessary synchronization from the actual manipulations of the zpci_dev required for the reset; a sketch follows below. With these changes I was able to run the inject loops for hours without hitting an error. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
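A minimal sketch of the locking pattern described, assuming the names used in this entry (state_lock, _do_recover()) plus standard zpci helpers; not the verbatim kernel code:

    static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
                                 const char *buf, size_t count)
    {
            struct pci_dev *pdev = to_pci_dev(dev);
            struct zpci_dev *zdev = to_zpci(pdev);
            int ret = 0;

            /* Serialize against hotplug and availability/error events. */
            mutex_lock(&zdev->state_lock);
            if (zdev->state == ZPCI_FN_STATE_CONFIGURED)
                    ret = _do_recover(pdev, zdev);  /* reset with stable state */
            mutex_unlock(&zdev->state_lock);

            return ret ? ret : count;
    }

The point is that every reader and writer of zdev->state takes the same mutex, so a concurrent "recover" and hotplug "power" toggle can no longer interleave their state transitions.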
-
Gerd Bayer authored
Since this guards only the Function Measurement Block, rename the generic lock to fmb_lock, in preparation for introducing another lock that guards the state member. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Thomas Richter authored
Adjust whitespace indentation. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Thomas Richter authored
When an event is started, read the current value of the PAI counter. This value is saved in event::hw.prev_count. When an event is stopped, this value is subtracted from the current value read out at event stop time. The difference is the delta of this counter. Simplify the logic and read the event value every time the event is started. This scheme is identical to other device drivers. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
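The scheme is the usual perf prev_count pattern; a sketch, where read_pai_counter() is an assumed stand-in for the driver's low-level counter read:

    static void pai_event_update(struct perf_event *event)
    {
            u64 prev, new;

            do {
                    prev = local64_read(&event->hw.prev_count);
                    new = read_pai_counter(event);  /* assumed helper */
            } while (local64_cmpxchg(&event->hw.prev_count, prev, new) != prev);
            local64_add(new - prev, &event->count);
    }

On event start, the same low-level read simply initializes event->hw.prev_count, so the counter never needs to be reset.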
-
Thomas Richter authored
When the PAI events ALL_CRYPTO or ALL_NNPA are created for system wide sampling, all PAI counters are monitored. On each process schedule out, the values of all PAI counters are investigated. Non-zero values are saved in the event's ring buffer as raw data. This scheme expects the start value of each counter to be reset to zero after each read operation performed by the PAI PMU device driver. This allows for only one active event at any one time as it relies on the start value of counters to be reset to zero. Create a save area for each installed PAI XXXX_ALL event and save all PAI counter values in this save area. Instead of clearing the PAI counter lowcore area to zero after each read operation, copy them from the lowcore area to the event's save area at process schedule out time. The delta of each PAI counter is calculated by subtracting the old counter's value stored in the event's save area from the current value stored in the lowcore area. With this scheme, multiple events of the PAI counters XXXX_ALL can be handled at the same time. This will be addressed in a follow-on patch. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
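A sketch of the per-event save-area scheme at schedule out (structure and helper names are assumptions):

    static void pai_sched_out(struct pai_event *evt, const u64 *lowcore, int nr)
    {
            int i;

            for (i = 0; i < nr; i++) {
                    u64 cur = lowcore[i];

                    if (cur == evt->save[i])
                            continue;
                    /* record the delta since the last schedule out ... */
                    record_raw_sample(evt, i, cur - evt->save[i]); /* assumed */
                    /* ... and make the current value the new baseline
                     * instead of zeroing the lowcore counter */
                    evt->save[i] = cur;
            }
    }

Since each event keeps its own baseline, several XXXX_ALL events could observe the same lowcore counters without interfering - which is what the mentioned follow-on patch enables.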
-
Holger Dengler authored
The ap_bus is using inline functions of the ultravisor (uv) in-kernel API. The related header file is implicitly included via several other headers. Replace this by an explicit include of the ultravisor header in the ap_bus file. Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 16 Feb, 2024 28 commits
-
Heiko Carstens authored
Convert CRC-32 LE variants to C. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Convert CRC-32 BE variant to C. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Provide various vector instruction inline assemblies for crc32 calculations. This is just preparation to keep the conversion of the existing crc32 implementations from assembly to C small. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Provide several single-instruction fpu inline assemblies and use them to implement the bogomips calculation in a C-like style. This is mainly for illustration purposes, showing how kernel fpu code can be written in C. This has the advantage that the author only has to take care of the floating point instructions, but doesn't need to take care of general purpose register allocation (if needed), or the semantics of all other instructions not related to the fpu. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Move the s390 specific raid6 inline assemblies, make them generic, and reuse them for the raid6 gen/xor implementation. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
With csum_partial(), which reads all bytes into registers, it is easy to also implement csum_partial_copy_nocheck(), which copies the buffer while calculating its checksum. For a 512 byte buffer this reduces the runtime by 19%. Compared to the old generic variant (memcpy() + cksm instruction), runtime is reduced by 42%. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
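A portable sketch of the fused copy-and-checksum idea (illustration only; the s390 implementation uses vector loads and stores rather than this scalar loop):

    /* ones' complement sum over 64-bit chunks while copying */
    static unsigned long csum_copy_chunks(void *dst, const void *src, size_t len)
    {
            const unsigned long *s = src;
            unsigned long *d = dst;
            unsigned long sum = 0;

            while (len >= sizeof(*s)) {
                    unsigned long v = *s++;

                    *d++ = v;               /* copy ... */
                    sum += v;               /* ... and accumulate */
                    sum += sum < v;         /* fold in the carry */
                    len -= sizeof(v);
            }
            return sum;     /* caller folds to 16 bits and handles the tail */
    }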
-
Heiko Carstens authored
Provide a faster variant of csum_partial() which uses vector registers instead of the cksm instruction. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Convert those callers of csum_partial() which are either very early or in critical paths, like panic/dump, to use the cksm instruction directly, so they don't have to rely on a working kernel infrastructure, which will be introduced with a subsequent patch. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
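For reference, a cksm-based checksum can be written as a small inline assembly along these lines (a sketch following the kernel's register-pair convention; treat the details as assumptions):

    #include <asm/types.h>  /* union register_pair */

    static inline u32 cksm(const void *buff, int len, u32 sum)
    {
            /* cksm consumes an even/odd register pair: address and length */
            union register_pair rp = {
                    .even = (unsigned long)buff,
                    .odd = (unsigned long)len,
            };

            asm volatile("0:	cksm	%[sum],%[rp]\n"
                         "	jo	0b\n"  /* not done yet, continue */
                         : [sum] "+&d" (sum), [rp] "+&d" (rp.pair)
                         : : "cc", "memory");
            return sum;
    }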
-
Heiko Carstens authored
Call instrument_read() from csum_partial() instead of kasan_check_read(). instrument_read() covers all memory access instrumentation methods. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
TIF_FPU is unused - remove it. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The first invocation of kernel_fpu_begin() after switching from user to kernel context will save all vector registers, even if only parts of the vector registers are used within the kernel fpu context. Given that save and restore of all vector registers is quite expensive, change the current approach in several ways:
- Instead of saving and restoring all user registers, limit this to those registers which are actually used within the kernel fpu context.
- On context switch save all remaining user fpu registers, so they can be restored when the task is rescheduled.
- Saving user registers within kernel_fpu_begin() is done without disabling and enabling interrupts - which also slightly reduces runtime. In the worst case (e.g. interrupt context uses the same registers) this may lead to the situation that registers are saved several times; however the assumption is that this will not happen frequently, so that the new method is faster in nearly all cases.
- save_user_fpu_regs() can still be called from all contexts and saves all (or all remaining) user registers to a task's ufpu user fpu save area.
Overall this reduces the time required to save and restore the user fpu context for nearly all cases. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The kernel_fpu structure has a quite large size of 520 bytes. In order to reduce stack footprint introduce several kernel fpu structures with different and also smaller sizes. This way every kernel fpu user must use the correct variant. A compile time check verifies that the correct variant is used. There are several users which use only 16 instead of all 32 vector registers. For those users the new kernel_fpu_16 structure with a size of only 266 bytes can be used. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
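Usage then looks roughly like this (KERNEL_VXR_LOW and the sized structures are as described above; treat the exact macro spelling as an assumption):

    /* only vector registers V0-V15 are used: the 16-register variant
     * with its smaller on-stack footprint is sufficient */
    DECLARE_KERNEL_FPU_ONSTACK16(vxstate);

    kernel_fpu_begin(&vxstate, KERNEL_VXR_LOW);
    /* ... vector instructions touching only V0-V15 ... */
    kernel_fpu_end(&vxstate, KERNEL_VXR_LOW);

Requesting a register mask that does not fit the declared structure is what the compile-time check rejects.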
-
Heiko Carstens authored
Let the fpu_vlm() and fpu_vstm() macros return the number of registers loaded / saved. This helps to keep code easy to read in case there are several subsequent fpu_vlm() or fpu_vstm() calls:

    __vector128 *vxrs = ....
    vxrs += fpu_vstm(0, 15, vxrs);
    vxrs += fpu_vstm(16, 31, vxrs);

Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The anonymous union within struct fpu contains a floating point register array and a vector register array. Given that the vector register array is always present, remove the floating point register array. For configurations without vector registers, save the floating point register contents within their corresponding vector register location. This allows removing the union, and also simplifies ptrace and perf code. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
KVM was the only user which modified the regs pointer in struct fpu. Remove the pointer and convert the rest of the core fpu code to directly access the save area embedded within struct fpu. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
KVM modifies the kernel fpu's regs pointer to its own area to implement its custom version of preemptible kernel fpu context. With general support for preemptible kernel fpu context there is no need for the extra complexity in KVM code anymore. Therefore convert KVM to a regular kernel fpu user. In particular this means that all TIF_FPU checks can be removed, since the fpu register context will never be changed by other kernel fpu users, and also the fpu register context will be restored if a thread is preempted. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Make the kernel fpu context preemptible. Add another fpu structure to the thread_struct, and use it to save and restore the kernel fpu context if its task uses fpu registers when it is preempted. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Change type of fpu mask consistently from u32 to int. This is a prerequisite to make the kernel fpu usage preemptible. Upcoming code uses __atomic* ops which work with int pointers. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Rename save_fpu_regs(), load_fpu_regs(), and struct thread_struct's fpu member to save_user_fpu_regs(), load_user_fpu_regs(), and ufpu. This way the function and variable names reflect for which context they are supposed to be used. This large and trivial conversion is a prerequisite for making the kernel fpu usage preemptible. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The FPU state, as represented by the CIF_FPU flag, reflects the FPU state of a task, not the CPU it is running on. Therefore convert the flag to a regular TIF flag. This removes the magic in switch_to(), where a save_fpu_regs() call for the currently (previously) running task sets the per-cpu CIF_FPU flag, which is required to restore FPU register contents of the next task when it returns to user space. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Convert the rather large __kernel_fpu_begin()/__kernel_fpu_end() inline assemblies to C. The C variant is much more readable, and this also allows getting rid of the non-obvious usage of the KERNEL_VXR_* constants within the inline assemblies. E.g. "tmll %[m],6" correlates with the two bits set in KERNEL_VXR_LOW. If the corresponding defines were changed, the inline assemblies would break in a subtle way. Therefore convert to C, use the proper defines, and allow the compiler to generate code using the (hopefully) most efficient instructions; see the sketch below. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
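A simplified sketch of the resulting C shape (the real function does more, e.g. it handles machines without vector registers and tests the register banks individually):

    void __kernel_fpu_begin(struct kernel_fpu *state, int flags)
    {
            /* save only the register banks the caller asked for */
            if (flags & KERNEL_VXR_LOW)
                    fpu_vstm(0, 15, &state->vxrs[0]);   /* V0-V15 */
            if (flags & KERNEL_VXR_HIGH)
                    fpu_vstm(16, 31, &state->vxrs[16]); /* V16-V31 */
    }

The mask tests are written against the named KERNEL_VXR_* defines, so changing the defines can no longer silently break the code the way it could with the hard-coded "tmll %[m],6".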
-
Heiko Carstens authored
Instead of open-coding vlm and vstm inline assemblies at several locations, provide an fpu_* function for each instruction, and use them in the new save_vx_regs() and load_vx_regs() helper functions. Note that "O" and "R" inline assembly operand modifiers are used in order to pass the displacement and base register of the memory operands to the existing VLM and VSTM macros. The two operand modifiers are not available for clang. Therefore provide two variants of each inline assembly. The clang variant always uses and clobbers general purpose register 1, like in the previous inline assemblies, so it can be used as base register with a zero displacement. This generates slightly less efficient code, but can be removed as soon as clang has support for the used operand modifiers. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Instead of open-coding lfpc, sfpc, and stfpc inline assemblies at several locations, provide an fpu_* function for each instruction and use the function instead. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Deduplicate the 64 ld and std inline assemblies. Provide an fpu inline assembly for both instructions, and use them in the new save_fp_regs() and load_fp_regs() helper functions. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
The only user of sfpc_safe() needs to read the new fpc register value from memory before it is set with sfpc. Avoid this indirection and use lfpc, which reads the new value from memory. Also add the "fpu_" prefix to have a common name space for fpu related inline assemblies, and provide memory access instrumentation. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Add documentation which describes what the fpu helper functions are good for, and why they should be used. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Move, rename, and merge the fpu and vx header files. This way fpu header files have a consistent naming scheme (fpu*.h). Also get rid of the fpu subdirectory and move header files to the asm directory, so that all fpu and vx header files can be found at the same location. Merge the internal.h header file into other header files, since the internal helpers are used at many locations, so those helper functions are really not internal. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Address various checkpatch warnings, adjust whitespace, and try to increase readability. This is just preparation, in order to avoid that subsequent patches contain any distracting drive-by coding style changes. Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-