- 07 Mar, 2024 10 commits
-
-
Harald Freudenberger authored
This patch reworks and improves the pkey retry behavior for the pkey_ep11key2pkey() function. In contrast to the pkey_skey2pkey() function, which is used to trigger a protected key derivation from a CCA secure data or cipher key, the EP11 counterpart function had no proper retry loop implemented. This patch now introduces code which acts similarly to the retry already done for CCA keys for this function used for EP11 keys. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
This patch reworks and improves the zcrypt retry behavior:

- The zcrypt_rescan_req counter has been removed. This counter variable was increased on some transport errors and used as a gatekeeper for AP bus rescans.
- The zcrypt_process_rescan() function has been reworked to no longer use the above counter variable. Instead, the ap_bus_force_rescan() function is now always called (as it has been improved with a previous patch).
- The zcrypt_process_rescan() function is called in all CPRB send functions before the next attempt to send a CPRB is started, in case the first attempt failed with ENODEV.
- Introduce a define ZCRYPT_WAIT_BINDINGS_COMPLETE_MS for the amount of milliseconds the zcrypt API waits for AP bindings to complete. This amount has been reduced to 30s (was 60s). Some playing around showed that 30s is a really fair limit.

The result of the above, together with the patches to improve the AP scan bus functions, is that after the first loop of CPRB send retries, when the result is ENODEV, the AP bus scan is always triggered (synchronously). If the AP bus scan detects changes in the configuration, all the send functions now retry when the first attempt failed with ENODEV, in the hope that a suitable device has now appeared.

About concurrency: ap_bus_force_rescan() uses a mutex to ensure that only one AP bus scan is active at a time. Another caller of this function is blocked as long as the scan is running, but does not cause yet another scan. Instead the result of the 'other' scan is used. This affects only tasks which run into an initial ENODEV. Tasks with successful delivery of CPRBs never invoke the bus scan and thus never get blocked by the mutex.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
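The resulting send/rescan/retry interplay can be sketched roughly like this (a simplified sketch; zcrypt_send_cprb_once() and the return-value convention of zcrypt_process_rescan() are illustrative stand-ins, not the actual zcrypt code):

```c
/*
 * Rough sketch of the retry flow described above. Function names and
 * return conventions are illustrative only.
 */
static long send_cprb_with_retry(struct ica_xcRB *xcrb)
{
	long rc;

	rc = zcrypt_send_cprb_once(xcrb);	/* hypothetical first attempt */
	if (rc == -ENODEV) {
		/*
		 * Trigger a synchronous AP bus rescan; if the bus
		 * configuration changed, a suitable device may have
		 * appeared, so retry the send once.
		 */
		if (zcrypt_process_rescan())
			rc = zcrypt_send_cprb_once(xcrb);
	}
	return rc;
}
```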
-
Harald Freudenberger authored
The two functions zcrypt_send_cprb() and zcrypt_send_ep11_cprb() are used to send CPRBs in-kernel from different sources. For example, the pkey module may call one of the functions in zcrypt_ep11misc.c to trigger the derivation of a protected key from a secure key blob via an existing crypto card. These two functions are the internal API to send the CPRB and receive the response. All the ioctl functions which send a CPRB down to the addressed crypto card use some kind of retry mechanism: when the first attempt fails with ENODEV, a bus rescan is triggered and a loop with retries is carried out. For the two named internal functions no retry was ever attempted. This patch now introduces the retry code for these two internal functions as well, so that sending a CPRB from an in-kernel source effectively behaves the same as sending a CPRB from userspace via ioctl. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
Rework the invocations around ap_scan_bus():

- Protect ap_scan_bus() with a mutex to make sure only one scan at a time is running.
- The workqueue invocation, which is triggered by either the module init or AP bus scan timer expiration, uses this mutex; if there is already a scan running, the work is simply aborted (as the job is done by another task).
- ap_bus_force_rescan(), which is invoked by higher level layers mostly on failures which indicate that a bus scan may help, is reworked to call ap_scan_bus() directly instead of enqueuing work into a system workqueue and waiting for that to finish. Of course the mutex is respected, and in case another task is already running a bus scan, the shortcut of waiting for this scan to finish and reusing its result is taken.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
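A minimal sketch of the locking scheme just described (illustrative only; the real ap_bus code differs in detail, and ap_scan_bus_result is an assumed variable name):

```c
static DEFINE_MUTEX(ap_scan_bus_mutex);	/* only one scan at a time */
static bool ap_scan_bus_result;		/* outcome of the last scan */

void ap_bus_force_rescan(void)
{
	if (mutex_trylock(&ap_scan_bus_mutex)) {
		/* No scan in flight - run the scan ourselves. */
		ap_scan_bus_result = ap_scan_bus();
	} else {
		/*
		 * Another task is scanning right now: block until it is
		 * done and simply reuse its result instead of starting
		 * yet another scan.
		 */
		mutex_lock(&ap_scan_bus_mutex);
	}
	mutex_unlock(&ap_scan_bus_mutex);
}
```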
-
Harald Freudenberger authored
The AP scan bus function now returns true if any config changes have been detected. This will become important in a follow-up patch, which will exploit this hint for further actions. This also required reworking the AP scan bus timer callback, as the function signature has changed to bool ap_scan_bus(void). Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
This patch tries to clarify the functions and variables around the AP scan bus job. All these variables and functions start with ap_scan_bus and are now declared in one place. No functional changes in this patch - only renaming and moving of code or declarations. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
The APQN bindings complete completion was used to reflect that, first, the AP bus initial scan is done and, second, all the detected APQNs have been bound to a device driver. This was a single-shot action. However, as the AP bus supports hotplug, new APQNs may appear, reflected as new AP queue and card devices which need to be bound to appropriate device drivers. So the condition that all existing AP queue devices are bound to device drivers may go away for a certain time. This patch now checks during the AP bus scan whether new AP devices have appeared and does a re-init of the internal completion variable. So the AP bus function ap_wait_apqn_bindings_complete() may now block on this condition variable even later, after the initial scan is through, when new APQNs appear which need to get bound. This patch also moves the check for the binding complete invocation from the probe function to the end of the AP bus scan function. This change also covers some weird scenarios where, during a card hotplug, the binding of the card device was sufficient for binding complete, but the queue devices were still in the process of being discovered. As of now this change has no impact on existing code. The behavior change in the now later bindings complete should not impact any code (and has been tested so far). The only exploiter is the zcrypt function zcrypt_wait_api_operational(), which only initially calls ap_wait_apqn_bindings_complete(). However, this new behavior of the AP bus wait for APQN bindings complete function will be used in a later patch exploiting this for the zcrypt API layer. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
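The wait/complete interplay could be pictured like this (a sketch; the two condition checks are hypothetical placeholders for the logic done during the bus scan, and the signature of ap_wait_apqn_bindings_complete() is simplified):

```c
/* Tracks the "all known APQNs are bound to a device driver" condition. */
static DECLARE_COMPLETION(ap_apqn_bindings_complete);

/* Sketch: invoked at the end of the AP bus scan function. */
static void ap_check_bindings_complete(void)
{
	if (new_unbound_apqns_appeared())	/* hypothetical check */
		reinit_completion(&ap_apqn_bindings_complete);
	else if (all_apqns_bound())		/* hypothetical check */
		complete_all(&ap_apqn_bindings_complete);
}

/* May now block again even after the initial scan is through. */
long ap_wait_apqn_bindings_complete(unsigned long timeout)
{
	return wait_for_completion_interruptible_timeout(
			&ap_apqn_bindings_complete, timeout);
}
```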
-
Heiko Carstens authored
Set LOCKDEP_BITS to 16 and LOCKDEP_CHAINS_BITS to 17, since test systems frequently run out of lockdep entries and lockdep chains. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Jason J. Herne authored
Update vfio_ap_mdev_reset_queue() to handle an unexpected checkstop (hardware error) the same as the deconfigured case. This prevents unexpected and unhelpful warnings in the event of a hardware error. We also stop lying about a queue's reset response code. This was originally done so we could force vfio_ap_mdev_filter_matrix to pass a deconfigured device through to the guest for the hotplug scenario. vfio_ap_mdev_filter_matrix is instead modified to allow passthrough for all queues with reset state normal, deconfigured, or checkstopped. In the checkstopped case we choose to pass the device through and let the error state be reflected at the guest level. Signed-off-by: "Jason J. Herne" <jjherne@linux.ibm.com> Reviewed-by: Anthony Krowiak <akrowiak@linux.ibm.com> Link: https://lore.kernel.org/r/20240215153144.14747-1-jjherne@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
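The resulting filter decision amounts to something like the following (a sketch; the helper name is made up and the actual vfio_ap code is structured differently):

```c
#include <asm/ap.h>	/* AP_RESPONSE_* reset response codes */

/*
 * Sketch: a queue may be passed through to the guest if its last reset
 * completed normally, or if the APQN is deconfigured or checkstopped -
 * in the latter cases the error state is reflected at the guest level.
 */
static bool queue_passable_sketch(unsigned int reset_rc)
{
	switch (reset_rc) {
	case AP_RESPONSE_NORMAL:
	case AP_RESPONSE_DECONFIGURED:
	case AP_RESPONSE_CHECKSTOPPED:
		return true;
	default:
		return false;
	}
}
```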
-
Thomas Richter authored
Currently only one PAI sampling event can be created and active at any one time. The PMU device drivers store a pointer to this event in their data structures even when the event is created for counting and the PMU device driver reference to this counting event is never needed. Change this and assign the pointer to the PMU device driver only when a sampling event is created. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 26 Feb, 2024 5 commits
-
-
Alexander Gordeev authored
Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Alexander Gordeev authored
The relocation table is not expected to contain a zero-termination entry. The existing check is likely a left-over from similar x86 code that uses zero entries as delimiters. s390 does not have such entries, and therefore the check can be avoided. Suggested-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
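Without a delimiter entry the boot code can simply walk the table between its start and end symbols, along these lines (a sketch with illustrative names; the 32-bit entry width is an assumption for illustration):

```c
/*
 * Sketch: apply the KASLR offset to every R_390_64 location listed in
 * the relocation table; no zero delimiter entry is expected or checked.
 * The entry width (int) is an assumption made for this illustration.
 */
static void adjust_relocs_sketch(int *start, int *end, unsigned long offset)
{
	unsigned long loc;
	int *reloc;

	for (reloc = start; reloc < end; reloc++) {
		loc = (unsigned long)*reloc + offset;
		*(unsigned long *)loc += offset;
	}
}
```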
-
Alexander Gordeev authored
Declare the __vmlinux_relocs_64_start|end symbols as char arrays, just like it is done for all other sections. The rescue_relocs() function is simplified as a result. Suggested-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Alexander Gordeev authored
Do not use vmlinux.image_size within the kaslr_adjust_relocs() function to calculate the upper relocation table boundary. Instead, make both the lower and upper boundaries input parameters of the function. Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Alexander Gordeev authored
The end of the GOT is calculated dynamically at boot. The size of the GOT is calculated at build time from the start and end of the GOT. Avoid both calculations and use the end of the GOT directly. Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 25 Feb, 2024 1 commit
-
-
Heiko Carstens authored
Naresh reported this build error on linux-next: s390x-linux-gnu-ld: Unexpected GOT/PLT entries detected! make[3]: *** [/builds/linux/arch/s390/boot/Makefile:87: arch/s390/boot/vmlinux.syms] Error 1 make[3]: Target 'arch/s390/boot/bzImage' not remade because of errors. The reason for the build error is an incorrect/incomplete assertion which checks the size of the .got.plt section. Similar to x86, the size is either zero or 24 bytes (three entries). See commit 262b5cae ("x86/boot/compressed: Move .got.plt entries out of the .got section") for more details. The three reserved/additional entries for s390 are described in chapter 3.2.2 of the s390x ABI [1] (thanks to Andreas Krebbel for pointing this out!). [1] https://github.com/IBM/s390x-abi/releases/download/v1.6.1/lzsabi_s390x.pdf Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Reported-by: Linux Kernel Functional Testing <lkft@linaro.org> Closes: https://lore.kernel.org/all/CA+G9fYvWp8TY-fMEvc3UhoVtoR_eM5VsfHj3+n+kexcfJJ+Cvw@mail.gmail.com Fixes: 30226853 ("s390: vmlinux.lds.S: explicitly handle '.got' and '.plt' sections") Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 22 Feb, 2024 3 commits
-
-
Nathan Chancellor authored
When building with OBJDUMP=llvm-objdump, there is a series of warnings from the section comparisons that arch/s390/boot/Makefile performs between vmlinux and arch/s390/boot/vmlinux: llvm-objdump: warning: section '.boot.preserved.data' mentioned in a -j/--section option, but not found in any input file llvm-objdump: warning: section '.boot.data' mentioned in a -j/--section option, but not found in any input file llvm-objdump: warning: section '.boot.preserved.data' mentioned in a -j/--section option, but not found in any input file llvm-objdump: warning: section '.boot.data' mentioned in a -j/--section option, but not found in any input file The warning is a little misleading, as these sections do exist in the input files. It is really pointing out that llvm-objdump does not match GNU objdump's behavior of respecting '-j' / '--section' in combination with '-t' / '--syms': $ s390x-linux-gnu-objdump -t -j .boot.data vmlinux.full vmlinux.full: file format elf64-s390 SYMBOL TABLE: 0000000001951000 l O .boot.data 0000000000003000 sclp_info_sccb 00000000019550e0 l O .boot.data 0000000000000001 sclp_info_sccb_valid 00000000019550e2 g O .boot.data 0000000000001000 early_command_line ... $ llvm-objdump -t -j .boot.data vmlinux.full vmlinux.full: file format elf64-s390 SYMBOL TABLE: 0000000000100040 l O .text 0000000000000010 dw_psw 0000000000000000 l df *ABS* 0000000000000000 main.c 00000000001001b0 l F .text 00000000000000c6 trace_event_raw_event_initcall_level 0000000000100280 l F .text 0000000000000100 perf_trace_initcall_level ... It may be possible to change llvm-objdump's behavior to match GNU objdump's behavior, but the difficulty of that task has not yet been explored. The combination of '$(OBJDUMP) -t -j' is not common in the kernel tree as a whole, so work around this tool difference by grepping for the sections in the full symbol table output, in a similar manner to the sed invocation. This results in no visible change for GNU objdump users while fixing the warnings for OBJDUMP=llvm-objdump, further enabling use of LLVM=1 for ARCH=s390 with versions of LLVM that have support for s390 in ld.lld and llvm-objcopy. Reported-by: Heiko Carstens <hca@linux.ibm.com> Closes: https://lore.kernel.org/20240219113248.16287-C-hca@linux.ibm.com/ Link: https://github.com/ClangBuiltLinux/linux/issues/859 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20240220-s390-work-around-llvm-objdump-t-j-v1-1-47bb0366a831@kernel.org Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Eric Farman authored
There is a selftest that checks for an (expected) error when an invalid AR is specified, but not one that exercises the AR path. Add a simple test that mirrors the vanilla write/read test while providing an AR. An AR that contains zero will direct the CPU to use the primary address space normally used anyway. AR[1] is selected for this test because the host AR[1] is usually non-zero, and KVM needs to correctly swap those values. Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Link: https://lore.kernel.org/r/20240220211211.3102609-3-farman@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Eric Farman authored
The routine ar_translation() can be reached by both the instruction intercept path (where the access registers had been loaded with the guest register contents), and the MEM_OP ioctls (which hadn't). Since this routine saves the current registers to vcpu->run, this routine erroneously saves host registers into the guest space. Introduce a boolean in the kvm_vcpu_arch struct to indicate whether the registers contain guest contents. If they do (the instruction intercept path), the save can be performed and the AR translation is done just as it is today. If they don't (the MEM_OP path), the AR can be read from vcpu->run without stashing the current contents. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Nina Schoetterl-Glausch <nsg@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com> Signed-off-by: Eric Farman <farman@linux.ibm.com> Link: https://lore.kernel.org/r/20240220211211.3102609-2-farman@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
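The distinction can be sketched like this (the flag name acrs_loaded and the helper are illustrative, not necessarily the names used in the patch):

```c
/*
 * Sketch: only save the CPU's access registers into vcpu->run when they
 * are known to hold guest contents (instruction intercept path); on the
 * MEM_OP path the previously stored guest values are read as-is.
 */
static u32 fetch_guest_ar_sketch(struct kvm_vcpu *vcpu, int ar)
{
	if (vcpu->arch.acrs_loaded)	/* assumed name of the new flag */
		save_access_regs(vcpu->run->s.regs.acrs);
	return vcpu->run->s.regs.acrs[ar];
}
```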
-
- 21 Feb, 2024 1 commit
-
-
Janosch Frank authored
It's a bit nicer than having multiple lines and will help if there's another re-work since we'll only have to change one location. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 20 Feb, 2024 12 commits
-
-
Josh Poimboeuf authored
On s390, the kernel is currently compiled with the '-fPIE' compiler flag for vmlinux. This has a few problems:

- It uses dynamic symbols (.dynsym), for which the linker refuses to allow more than 64k sections. This can break features which use '-ffunction-sections' and '-fdata-sections', including kpatch-build [1] and Function Granular KASLR.
- It unnecessarily uses GOT relocations, adding an extra layer of indirection for many memory accesses.

Instead of using '-fPIE', resolve all the relocations at link time and then manually adjust any absolute relocations (R_390_64) during boot. This is done by first telling the linker to preserve all relocations during the vmlinux link. (Note this is harmless: they are later stripped in the vmlinux.bin link.) Then the 'relocs' tool is used to find all absolute relocations (R_390_64) which apply to allocatable sections. The offsets of those relocations are saved in a special section which is then used to adjust the relocations during boot. (Note: for some reason, Clang occasionally creates a GOT reference even without '-fPIE'. So Clang-compiled kernels have a GOT, which needs to be adjusted.) On my mostly-defconfig kernel, this reduces kernel text size by ~1.3%.

[1] https://github.com/dynup/kpatch/issues/1284
[2] https://gcc.gnu.org/pipermail/gcc-patches/2023-June/622872.html
[3] https://gcc.gnu.org/pipermail/gcc-patches/2023-August/625986.html

Compiler consideration: GCC recently implemented an optimization [2] for loading symbols without explicit alignment, in line with the IBM Z ELF ABI. This ABI mandates that symbols reside on a 2-byte boundary, enabling the use of the larl instruction. However, kernel linker scripts may still generate unaligned symbols. To address this, a new -munaligned-symbols option has been introduced [3] in recent GCC versions. This option has to be used with future GCC versions. Older Clang lacks support for handling unaligned symbols generated by kernel linker scripts when the kernel is built without -fPIE; however, future versions of Clang will include support for the -munaligned-symbols option. When that support is unavailable, compile the kernel with -fPIE to maintain the existing behavior.

In addition to that, move vmlinux.relocs to a safe location: when the kernel is built with CONFIG_KERNEL_UNCOMPRESSED, the entire uncompressed vmlinux.bin is positioned in the bzImage decompressor image at the default kernel LMA of 0x100000, enabling it to be executed in-place. However, the size of .vmlinux.relocs could be large enough to cause an overlap with the uncompressed kernel at the address 0x100000. To address this issue, .vmlinux.relocs is positioned after .rodata.compressed in the bzImage. Nevertheless, in this configuration, vmlinux.relocs will overlap with the .bss section of vmlinux.bin. To overcome that, move vmlinux.relocs to a safe location before clearing .bss and handling relocs.

Compile warning fix from Sumanth Korikkar: when the kernel is built with CONFIG_LD_ORPHAN_WARN and -fno-PIE, there are several warnings:

ld: warning: orphan section `.rela.iplt' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
ld: warning: orphan section `.rela.head.text' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
ld: warning: orphan section `.rela.init.text' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'
ld: warning: orphan section `.rela.rodata.cst8' from `arch/s390/kernel/head64.o' being placed in section `.rela.dyn'

Orphan sections are sections that exist in an object file but don't have a corresponding output section in the final executable. ld raises a warning when it identifies such sections. Eliminate the warning by placing all .rela orphan sections in .rela.dyn and raising an error when the size of .rela.dyn is greater than zero, i.e. don't just neglect orphan sections. This is similar to the adjustment performed on x86, where the kernel is built with -fno-PIE: commit 5354e845 ("x86/build: Add asserts for unwanted sections"). [sumanthk@linux.ibm.com: rebased Josh Poimboeuf patches and moved vmlinux.relocs to a safe location] [hca@linux.ibm.com: merged compile warning fix from Sumanth] Tested-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-4-sumanthk@linux.ibm.com Link: https://lore.kernel.org/r/20240219132734.22881-5-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Josh Poimboeuf authored
This 'relocs' tool is copied from the x86 version, ported for s390, and greatly simplified to remove unnecessary features. It reads vmlinux and outputs assembly to create a .vmlinux.relocs_64 section which contains the offsets of all R_390_64 relocations which apply to allocatable sections. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-3-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
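The output stage of such a tool is conceptually tiny; roughly along these lines (a simplified sketch of the idea, assuming 32-bit offset entries are sufficient - not the actual tool source):

```c
#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Sketch: emit assembly that collects relocation offsets into a
 * dedicated section. 'offsets' stands in for the R_390_64 relocation
 * offsets gathered while parsing vmlinux's ELF relocation entries.
 */
static void emit_relocs_section(const uint32_t *offsets, size_t count)
{
	size_t i;

	printf("\t.section \".vmlinux.relocs_64\",\"a\"\n");
	for (i = 0; i < count; i++)
		printf("\t.long\t0x%08" PRIx32 "\n", offsets[i]);
}
```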
-
Sumanth Korikkar authored
GCC recently implemented an optimization [1] for loading symbols without explicit alignment, in line with the IBM Z ELF ABI. This ABI mandates that symbols reside on a 2-byte boundary, enabling the use of the larl instruction. However, kernel linker scripts may still generate unaligned symbols. To address this, a new -munaligned-symbols option has been introduced [2] in recent GCC versions. [1] https://gcc.gnu.org/pipermail/gcc-patches/2023-June/622872.html [2] https://gcc.gnu.org/pipermail/gcc-patches/2023-August/625986.html However, when -munaligned-symbols is used in vdso code, it leads to the following compilation error: `.data.rel.ro.local' referenced in section `.text' of arch/s390/kernel/vdso64/vdso64_generic.o: defined in discarded section `.data.rel.ro.local' of arch/s390/kernel/vdso64/vdso64_generic.o The vdso linker script discards the .data section to make the vdso lightweight. However, with -munaligned-symbols the vdso object files reference the literal pool and access _vdso_data. Hence, compile vdso code without -munaligned-symbols. This means that in the future, vdso code should deal with the alignment of newly introduced unaligned linker symbols. Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Link: https://lore.kernel.org/r/20240219132734.22881-2-sumanthk@linux.ibm.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Nathan Chancellor authored
When attempting to boot a kernel compiled with OBJCOPY=llvm-objcopy, there is a crash right at boot: Out of memory allocating 6d7800 bytes 8 aligned in range 0:20000000 Reserved memory ranges: 0000000000000000 a394c3c30d90cdaf DECOMPRESSOR Usable online memory ranges (info source: sclp read info [3]): 0000000000000000 0000000020000000 Usable online memory total: 20000000 Reserved: a394c3c30d90cdaf Free: 0 Call Trace: (sp:0000000000033e90 [<0000000000012fbc>] physmem_alloc_top_down+0x5c/0x104) sp:0000000000033f00 [<0000000000011d56>] startup_kernel+0x3a6/0x77c sp:0000000000033f60 [<00000000000100f4>] startup_normal+0xd4/0xd4 GNU objcopy does not have any issues. Looking at differences between the object files in each build reveals info.bin does not get properly populated with llvm-objcopy, which results in an empty .vmlinux.info section. $ file {gnu,llvm}-objcopy/arch/s390/boot/info.bin gnu-objcopy/arch/s390/boot/info.bin: data llvm-objcopy/arch/s390/boot/info.bin: empty $ llvm-readelf --section-headers {gnu,llvm}-objcopy/arch/s390/boot/vmlinux | rg 'File:|\.vmlinux\.info|\.decompressor\.syms' File: gnu-objcopy/arch/s390/boot/vmlinux [12] .vmlinux.info PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1 [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1 File: llvm-objcopy/arch/s390/boot/vmlinux [12] .vmlinux.info PROGBITS 0000000000034000 035000 000000 00 WA 0 0 1 [13] .decompressor.syms PROGBITS 0000000000034000 035000 000b00 00 WA 0 0 1 Ulrich points out that llvm-objcopy only copies sections marked as alloc with a binary output target, whereas the .vmlinux.info section is only marked as load. Add 'alloc' in addition to 'load', so that both objcopy implementations work properly: $ file {gnu,llvm}-objcopy/arch/s390/boot/info.bin gnu-objcopy/arch/s390/boot/info.bin: data llvm-objcopy/arch/s390/boot/info.bin: data $ llvm-readelf --section-headers {gnu,llvm}-objcopy/arch/s390/boot/vmlinux | rg 'File:|\.vmlinux\.info|\.decompressor\.syms' File: gnu-objcopy/arch/s390/boot/vmlinux [12] .vmlinux.info PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1 [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1 File: llvm-objcopy/arch/s390/boot/vmlinux [12] .vmlinux.info PROGBITS 0000000000034000 035000 000078 00 WA 0 0 1 [13] .decompressor.syms PROGBITS 0000000000034078 035078 000b00 00 WA 0 0 1 Closes: https://github.com/ClangBuiltLinux/linux/issues/1996 Link: https://github.com/llvm/llvm-project/commit/3c02cb7492fc78fb678264cebf57ff88e478e14f Suggested-by: Ulrich Weigand <ulrich.weigand@de.ibm.com> Signed-off-by: Nathan Chancellor <nathan@kernel.org> Link: https://lore.kernel.org/r/20240216-s390-fix-boot-with-llvm-objcopy-v1-1-0ac623daf42b@kernel.org Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Gerd Bayer authored
Found and fixed these while working on synchronizing the state handling of zpci_dev's. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Gerd Bayer authored
Centralize the removal so all paths are covered and the hotplug slot will remain active until the device is really destroyed. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Gerd Bayer authored
There are a number of tasks that need the state of a zpci device to be stable. Other tasks need to be synchronized as they change the state. State changes could be generated by the system, as availability or error events, or be requested by the user through manipulations in sysfs. Some other actions accessible through sysfs - like device resets - need the state to be stable. Unsynchronized state handling could lead to unusable devices. This has been observed in cases of concurrent state changes through systemd udev rules and DPM boot control. Some breakage can be provoked by artificial tests, e.g. by repetitively injecting "recover" on a PCI function through sysfs while running a "hotplug remove/add" in a loop through a PCI slot's "power" attribute in sysfs. After a few iterations this could result in a kernel oops. So introduce a new mutex "state_lock" to guard the state property of struct zpci_dev. Acquire this lock in all tasks that modify the state:

- hotplug add and remove, through the PCI hotplug slot entry,
- availability events, as reported by the platform,
- error events, as reported by the platform,
- during device resets, explicitly through sysfs requests or implicitly through the common PCI layer.

Break an inner _do_recover() routine out of recover_store() to separate the necessary synchronization from the actual manipulations of the zpci_dev required for the reset. With the following changes I was able to run the inject loops for hours without hitting an error. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
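The sysfs-initiated reset then follows roughly this pattern (a sketch; locking against the PCI rescan/remove path and most error handling are omitted):

```c
/* Sketch: the new state_lock serializes all state changes of a zpci_dev. */
static ssize_t recover_store(struct device *dev, struct device_attribute *attr,
			     const char *buf, size_t count)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct zpci_dev *zdev = to_zpci(pdev);
	int rc;

	mutex_lock(&zdev->state_lock);	/* keep the state stable... */
	rc = _do_recover(pdev, zdev);	/* ...while the reset work runs */
	mutex_unlock(&zdev->state_lock);

	return rc ? rc : count;
}
```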
-
Gerd Bayer authored
Since this lock guards only the Function Measurement Block, rename it from the generic 'lock' to 'fmb_lock', in preparation for introducing another lock that guards the state member. Signed-off-by: Gerd Bayer <gbayer@linux.ibm.com> Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Thomas Richter authored
Adjust whitespace indentation. No functional change. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Thomas Richter authored
When an event is started, read the current value of the PAI counter. This value is saved in event::hw.prev_count. When an event is stopped, this value is subtracted from the current value read out at event stop time. The difference is the delta of this counter. Simplify the logic and read the event value every time the event is started. This scheme is identical to other device drivers. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
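This is the usual prev_count/delta bookkeeping found in many PMU drivers, sketched in simplified form (pai_read_counter() is a placeholder for the hardware read):

```c
/* Sketch: read the counter at event start, account the delta at stop. */
static void pai_event_start_sketch(struct perf_event *event)
{
	local64_set(&event->hw.prev_count, pai_read_counter(event));
}

static void pai_event_stop_sketch(struct perf_event *event)
{
	u64 prev = local64_read(&event->hw.prev_count);
	u64 now = pai_read_counter(event);	/* placeholder hardware read */

	local64_add(now - prev, &event->count);	/* delta since start */
}
```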
-
Thomas Richter authored
When the PAI events ALL_CRYPTO or ALL_NNPA are created for system wide sampling, all PAI counters are monitored. On each process schedule out, the values of all PAI counters are investigated. Non-zero values are saved in the event's ring buffer as raw data. This scheme expects the start value of each counter to be reset to zero after each read operation performed by the PAI PMU device driver. This allows for only one active event at any one time, as it relies on the start value of counters being reset to zero. Create a save area for each installed PAI XXXX_ALL event and save all PAI counter values in this save area. Instead of clearing the PAI counter lowcore area to zero after each read operation, copy the counters from the lowcore area to the event's save area at process schedule out time. The delta of each PAI counter is calculated by subtracting the old counter value stored in the event's save area from the current value stored in the lowcore area. With this scheme, multiple events of the PAI counters XXXX_ALL can be handled at the same time. This will be addressed in a follow-on patch. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Holger Dengler authored
The ap_bus is using inline functions of the ultravisor (uv) in-kernel API. The related header file is implicitly included via several other headers. Replace this by an explicit include of the ultravisor header in the ap_bus file. Signed-off-by: Holger Dengler <dengler@linux.ibm.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 16 Feb, 2024 8 commits
-
-
Heiko Carstens authored
Convert CRC-32 LE variants to C. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Convert CRC-32 BE variant to C. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Provide various vector instruction inline assemblies for crc32 calculations. This is just preparation to keep the conversion of the existing crc32 implementations from assembly to C small. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Provide several one-instruction fpu inline assemblies and use them to implement the bogomips calculation in a C-like style. This is more for illustration purposes of how kernel fpu code can be written in C. This has the advantage that the author only has to take care of the floating point instructions, but doesn't need to take care of general purpose register allocation (if needed), or the semantics of all other instructions not related to fpu. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Move the s390 specific raid6 inline assemblies, make them generic, and reuse them to implement the raid6 gen/xor implementation. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
With csum_partial(), which reads all bytes into registers, it is easy to also implement csum_partial_copy_nocheck(), which copies the buffer while calculating its checksum. For a 512 byte buffer this reduces the runtime by 19%. Compared to the old generic variant (memcpy() + cksm instruction), runtime is reduced by 42%. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
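Ignoring the vector-register details, the combined copy-and-checksum idea looks like this (a plain C sketch of the concept, not the optimized s390 code):

```c
/*
 * Plain C sketch: copy the buffer while accumulating the 16-bit one's
 * complement sum (big-endian word order, carries folded at the end).
 * The s390 implementation does the same job on much larger chunks
 * using vector instructions.
 */
static u64 copy_and_csum_sketch(u8 *dst, const u8 *src, size_t len, u64 sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2) {
		dst[i] = src[i];
		dst[i + 1] = src[i + 1];
		sum += ((u64)src[i] << 8) | src[i + 1];
	}
	if (len & 1) {				/* trailing odd byte */
		dst[len - 1] = src[len - 1];
		sum += (u64)src[len - 1] << 8;
	}
	while (sum >> 32)			/* fold carries into 32 bits */
		sum = (sum & 0xffffffff) + (sum >> 32);
	return sum;
}
```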
-
Heiko Carstens authored
Provide a faster variant of csum_partial() which uses vector registers instead of the cksm instruction. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Convert those callers of csum_partial() which are either very early or in critical paths, like panic/dump, to use the cksm instruction directly, so that they don't have to rely on a working kernel infrastructure, which will be introduced with a subsequent patch. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-