- 20 Mar, 2023 26 commits
-
-
Harald Freudenberger authored
Extend the ap inline functions ap_rapq() (calls PQAP(RAPQ)) and ap_zapq() (calls PQAP(ZAPQ)) with a new parameter to enable the new architected F bit, which forces an unassociate and/or unbind on a secure execution associated and/or bound queue.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
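An interface-level sketch of the extended helpers, based only on the description above; the PQAP inline assembly and exact bit positions are omitted, and the fbit parameter name is an assumption, not a confirmed detail of the upstream code:

    /* Sketch only: prototypes of the extended inline helpers. */
    struct ap_queue_status ap_rapq(ap_qid_t qid, int fbit);   /* PQAP(RAPQ) */
    struct ap_queue_status ap_zapq(ap_qid_t qid, int fbit);   /* PQAP(ZAPQ) */
    /* fbit != 0 sets the architected F bit, forcing the unbind/unassociate
     * of a secure execution bound/associated queue. */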
-
Harald Freudenberger authored
With SE SB (Secure Binding), some previously unused and thus always zero bits in the TAPQ GR2 result are now used to show the binding state of a queue. The base for checking whether a card has changed is exactly this GR2 value, shown as 'ap_functions' in sysfs (/sys/devices/ap/cardxx/ap_functions). Since this value now also carries queue-specific information, a new mask TAPQ_CARD_FUNC_CMP_MASK is used to filter out only the bits relevant for the card compare. For the same reason the function bits (including exactly this bind/associate information) now need to be exposed to user space, so that tools like lszcrypt can evaluate the binding/association state on a per-queue basis. Hence a new sysfs attribute /sys/devices/ap/cardxx/xx.yyyy/ap_functions is introduced. It is similar to the already existing ap_functions attribute at AP card level and shows the upper 32 bits of GR2 from a TAPQ invocation for this AP queue.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
This patch introduces a new struct ap_tapq_gr2 which covers the response in GR2 on TAPQ invocation. This makes it much easier and less error-prone for the calling functions to access the right field without shifting and masking.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
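A rough illustration of the idea; the field names and split shown here are assumptions for demonstration, not the upstream layout:

    /* Before: every caller extracts fields from the raw GR2 value itself,
     * e.g. qdepth = (gr2 >> 24) & 0xff;  (positions illustrative) */

    /* After (sketch): describe the layout once and access named fields. */
    struct ap_tapq_gr2_example {
            union {
                    unsigned long value;            /* raw 64-bit GR2 */
                    struct {
                            unsigned int fac;       /* high 32 bits on big-endian s390: function bits */
                            unsigned int apinfo;    /* low 32 bits: AP type, queue depth, ... */
                    };
            };
    };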
-
Harald Freudenberger authored
Introduce a new AP bus sysfs attribute /sys/bus/ap/features which shows the features from the QCI information. Currently these feature bits are evaluated:
- QCI S bit is shown as 'APSC'
- QCI N bit is shown as 'APXA'
- QCI C bit is shown as 'QACT'
- QCI R bit is shown as 'RC8A'
- QCI B bit is shown as 'APSB'
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
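A minimal sketch of what such a bus attribute could look like, assuming the QCI bits are cached in an ap_qci_info structure; the field names and prototype details are assumptions, not the upstream code:

    static ssize_t features_show(struct bus_type *bus, char *buf)
    {
            int n = 0;

            if (ap_qci_info->apsc)          /* QCI S bit */
                    n += sysfs_emit_at(buf, n, "APSC ");
            if (ap_qci_info->apsb)          /* QCI B bit */
                    n += sysfs_emit_at(buf, n, "APSB ");
            /* remaining bits (APXA, QACT, RC8A) handled the same way */
            n += sysfs_emit_at(buf, n, "\n");
            return n;
    }
    static BUS_ATTR_RO(features);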
-
Harald Freudenberger authored
This patch introduces an update to the ap_config_info struct which is filled by the QCI subfunction. There is a new bit apsb (short 'B') showing whether the AP secure bind facility is available. The patch also includes a simple function ap_sb_available() wrapping this bit test.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
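A minimal sketch of such a wrapper, assuming the QCI info is cached behind a global pointer; the variable and field names are assumptions:

    static inline bool ap_sb_available(void)
    {
            /* 'B' bit from QCI: AP secure bind facility installed */
            return ap_qci_info && ap_qci_info->apsb;
    }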
-
Harald Freudenberger authored
Replace scnprintf() with sysfs_emit() and friends where possible.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
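A generic example of the conversion (the attribute value is made up): sysfs_emit() already knows the sysfs buffer is a full page, so the PAGE_SIZE bookkeeping goes away:

    /* before */
    return scnprintf(buf, PAGE_SIZE, "%d\n", value);

    /* after */
    return sysfs_emit(buf, "%d\n", value);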
-
Harald Freudenberger authored
The inline ap_dqap function does not return the number of bytes actually written into the message buffer. The calling code inspects the AP message header to figure out what kind of AP message has been received and pulls the length information from this header. This processing may not work correctly in cases where only a fragment of the reply is received. With this patch the ap_dqap inline function now returns the number of actually written bytes in the *length parameter, so the calling function has a chance to compare the number of received bytes against what the AP message header length field states. This is especially useful in cases where a message could only be partially received. The low level reply processing functions needed some rework to be able to catch this new length information and compare it correctly. The rework also deals with some situations where until now the reply length was not correctly calculated and/or set. All this has been heavily tested, as the modifications to the reply length information may affect crypto load.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
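The kind of consistency check this enables on the caller side might look roughly like the kernel-context sketch below; the header layout and field name are illustrative assumptions, not the actual AP/CPRB message format:

    static int check_reply_len(const void *reply, size_t received)
    {
            /* illustrative reply header carrying its own length field */
            const struct { unsigned short len; } *hdr = reply;

            if (received < sizeof(*hdr) || received < hdr->len)
                    return -EMSGSIZE;   /* only a fragment of the reply arrived */
            return 0;
    }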
-
Harald Freudenberger authored
Since the s390 kernel can no longer be built 32 bit, there is no difference between long and long long. So this patch reworks all occurrences of psmid (a 64-bit value) to use unsigned long now.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Allow enforcing 64 byte function alignment, as is possible on a couple of other architectures. This may or may not be helpful for debugging performance problems, as described in the Kconfig option help text. Since the kernel also works with 64 byte function alignment, there is no reason not to allow enforcing it.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Use __ALIGN instead of an open-coded .align statement to make sure that vdso code follows the global kernel function alignment rules.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Use __ALIGN instead of an open-coded .align statement to make sure that the external expoline thunks follow the global function alignment rules.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Move the ftrace hotpatch trampolines to mcount.S. This allows using the standard SYM_CODE macros, which in turn makes sure that the hotpatch trampolines follow the function alignment rules of the rest of the kernel.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Make use of CONFIG_FUNCTION_ALIGNMENT, which was introduced with commit d49a0626 ("arch: Introduce CONFIG_FUNCTION_ALIGNMENT"). Select FUNCTION_ALIGNMENT_8B for gcc in order to reflect gcc's default function alignment. For all other compilers, which currently means clang only, select a function alignment of 16 bytes, which reflects clang's default function alignment. Also change the __ALIGN define to follow whatever the value of CONFIG_FUNCTION_ALIGNMENT is. This makes sure that the alignment of C and assembler functions is the same. As a result everything still uses the default function alignment for both compilers; in addition, however, this is now also true for all assembly functions, so that all functions have a consistent alignment.
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Vasily Gorbik says:
===================
Combine and generalize all methods for finding unused memory in decompressor, while decreasing complexity, add memory holes support, while improving error handling (especially in low-memory conditions) and debug-ability.
===================
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Since regular paging structs are already initialized in the decompressor, move the KASAN shadow mapping to the decompressor as well. This helps to avoid allocating the memory KASAN requires in one large chunk, de-duplicates the paging struct creation code and starts the uncompressed kernel with KASAN instrumentation right away. It also avoids all the pitfalls of accidentally calling KASAN-instrumented code during KASAN initialization.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Allow changing page table attributes for KASAN shadow memory ranges.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Currently several approaches for finding unused memory in the decompressor are utilized. While "safe_addr" grows towards higher addresses, the vmem code allocates paging structures top down. The former requires careful ordering. In addition, the ipl report handling code verifies potential intersections with secure boot certificates on its own. Neither of the two approaches is memory-hole aware, and they are not consistent with each other in low memory conditions.

To solve that, the existing approaches are generalized and combined, and online memory ranges are now taken into consideration.

physmem_info has been extended to contain reserved memory ranges. A new set of functions allows handling reserves and finding unused memory. All reserves and memory allocations are "typed". In an out of memory condition the decompressor fails with detailed info on the current reserved ranges and usable online memory:

  Linux version 6.2.0 ...
  Kernel command line: ... mem=100M
  Out of memory allocating 100000 bytes 100000 aligned in range 0:5800000
  Reserved memory ranges:
  0000000000000000 0000000003e33000 DECOMPRESSOR
  0000000003f00000 00000000057648a3 INITRD
  00000000063e0000 00000000063e8000 VMEM
  00000000063eb000 00000000063f4000 VMEM
  00000000063f7800 0000000006400000 VMEM
  0000000005800000 0000000006300000 KASAN
  Usable online memory ranges (info source: sclp read info [3]):
  0000000000000000 0000000006400000
  Usable online memory total: 6400000 Reserved: 61b10a3 Free: 24ef5d
  Call Trace:
  (sp:000000000002bd58 [<0000000000012a70>] physmem_alloc_top_down+0x60/0x14c)
   sp:000000000002bdc8 [<0000000000013756>] _pa+0x56/0x6a
   sp:000000000002bdf0 [<0000000000013bcc>] pgtable_populate+0x45c/0x65e
   sp:000000000002be90 [<00000000000140aa>] setup_vmem+0x2da/0x424
   sp:000000000002bec8 [<0000000000011c20>] startup_kernel+0x428/0x8b4
   sp:000000000002bf60 [<00000000000100f4>] startup_normal+0xd4/0xd4

physmem_alloc_range() allows finding free memory in a specified range. It should be used for one-time allocations only, like finding a position for amode31 and vmlinux. physmem_alloc_top_down() can be used just like physmem_alloc_range(), but it also allows multiple allocations per type and tries to merge sequential allocations together, which is useful for paging structure allocations. If sequential allocations cannot be merged together they are "chained", allowing easy per-type reserved range enumeration and migration to memblock later. Extra "struct reserved_range" allocated for chaining are not tracked or reserved but rely on the fact that both physmem_alloc_range() and physmem_alloc_top_down() search for free memory only below the current top-down allocator position. All reserved ranges should be transferred to memblock before memblock allocations are enabled.

The startup code has been reordered to delay any memory allocations until online memory ranges are detected and occupied memory ranges are marked as reserved to be excluded from follow-up allocations. Ipl report certificates are a special case: the ipl report certificates list is checked together with other memory reserves until the certificates are saved elsewhere. The memory KASAN requires for shadow memory allocation and mapping is reserved as one large chunk which is later passed to the KASAN early initialization code.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
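A conceptual sketch of the "typed reserve" bookkeeping described above; the names mirror the commit text, but the definitions and the merge logic are assumptions, not the actual arch/s390/boot code:

    enum reserved_range_type { RR_DECOMPRESSOR, RR_INITRD, RR_VMEM, RR_KASAN };

    struct reserved_range {
            unsigned long start;
            unsigned long end;
            struct reserved_range *chain;   /* next range of the same type */
    };

    /* Merge a new top-down allocation into the previous one when they are
     * adjacent, otherwise chain it, so each type can later be handed over
     * to memblock as a short list of ranges. */
    static void record_alloc(struct reserved_range *prev, struct reserved_range *new)
    {
            if (prev->start == new->end)    /* top down: new sits right below prev */
                    prev->start = new->start;
            else
                    prev->chain = new;
    }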
-
Vasily Gorbik authored
In preparation for extending mem_detect with additional information like reserved ranges, rename it to the more generic physmem_info. The new naming also helps to avoid confusion by using more exact terms like "physmem online ranges", etc.
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
check_image_bootable() was introduced with commit 627c9b62 ("s390/boot: block uncompressed vmlinux booting attempts") to make sure that users don't try to boot the uncompressed vmlinux ELF image in qemu, which used to be possible quite some time ago. That commit prevented the confusion of an uncompressed vmlinux image starting to boot and even printing kernel messages until it crashed; users might have tried to report the problem without realizing they were doing something which was not intended. Since commit f1d3c532 ("s390/boot: move sclp early buffer from fixed address in asm to C") check_image_bootable() doesn't function properly anymore, and booting an uncompressed vmlinux image in qemu doesn't really produce any output and just crashes. Moving forward it doesn't make sense to fix check_image_bootable() anymore, so simply remove it.
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Ilya Leoshkevich authored
report_user_fault() currently does not show which library last_break points to. Call print_vma_addr() to find out. The output now looks like this:

  Last Breaking-Event-Address:
  [<000003ffaa2a56e4>] libc.so.6[3ffaa180000+251000]

For the kernel it's unchanged:

  Last Breaking-Event-Address:
  [<000000000030fd06>] trace_hardirqs_on+0x56/0xc8

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
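A rough sketch of the pattern, simplified and not the exact s390 report_user_fault() code; the surrounding printk calls and the %pSR fallback are assumptions:

    printk(KERN_ALERT "Last Breaking-Event-Address:\n");
    printk(KERN_ALERT " [<%016lx>] ", regs->last_break);
    if (user_mode(regs))
            /* prints e.g. "libc.so.6[3ffaa180000+251000]" */
            print_vma_addr(KERN_CONT, regs->last_break);
    else
            printk(KERN_CONT "%pSR", (void *)regs->last_break);
    printk(KERN_CONT "\n");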
-
Luis Chamberlain authored
The routine appldata_register_ops() allocates a sysctl table with 4 entries. The first one, ops->ctl_table[0], is the parent directory with an empty entry following it, ops->ctl_table[1]. The next entry is for ops->name, ops->ctl_table[2], and it needs an empty entry following it, ops->ctl_table[3]. Hence the kcalloc(4, sizeof(struct ctl_table), GFP_KERNEL). We can simplify this considerably since register_sysctl("foo", table) can create the parent directory for us if it does not exist. So we can just remove the first two entries, move ops->name back to the first entry, and use kcalloc(2, ...).
[gor@linux.ibm.com: appldata_generic_handler fixup ctl_table index 2->0]
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-7-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
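A sketch of the resulting registration, assuming the entry is set up roughly as described above (the field values and data pointer are illustrative): register_sysctl() creates the "appldata" parent directory on demand, so only the leaf entry plus its terminating empty entry remain:

    ops->ctl_table = kcalloc(2, sizeof(struct ctl_table), GFP_KERNEL);
    if (!ops->ctl_table)
            return -ENOMEM;

    ops->ctl_table[0].procname = ops->name;
    ops->ctl_table[0].mode = S_IRUGO | S_IWUSR;
    ops->ctl_table[0].proc_handler = appldata_generic_handler;
    ops->ctl_table[0].data = ops;

    /* parent directory "appldata" is created by register_sysctl() itself */
    ops->sysctl_header = register_sysctl("appldata", ops->ctl_table);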
-
Luis Chamberlain authored
There is no need to declare extra tables just to create a directory; this can easily be done with a prefix path with register_sysctl(). Simplify this registration.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-6-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Luis Chamberlain authored
There is no need to declare extra tables just to create a directory; this can easily be done with a prefix path with register_sysctl(). Simplify this registration.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-5-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Luis Chamberlain authored
There is no need to declare extra tables just to create a directory; this can easily be done with a prefix path with register_sysctl(). Simplify this registration.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-4-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Luis Chamberlain authored
There is no need to declare extra tables just to create a directory; this can easily be done with a prefix path with register_sysctl(). Simplify this registration.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-3-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Luis Chamberlain authored
There is no need to declare extra tables just to create a directory; this can easily be done with a prefix path with register_sysctl(). Simplify this registration.
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/20230310234525.3986352-2-mcgrof@kernel.org
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
- 13 Mar, 2023 11 commits
-
-
Niklas Schnelle authored
Prior to commit 960ac362 ("s390/pci: allow zPCI zbus without a function zero"), enabling and scanning a PCI function potentially had to be postponed until the function with devfn zero on that bus was plugged. While that commit removed the waiting itself, the extra code that scans all functions on the PCI bus once function zero appears was missed. Remove that code and the outdated comments about waiting for function zero.
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20230306151014.60913-5-schnelle@linux.ibm.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
The pci_bus_add_devices() call in zpci_bus_create_pci_bus() has no effect, since at this point no device can have been added to the freshly created PCI bus.
Suggested-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20230306151014.60913-4-schnelle@linux.ibm.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
As the name suggests, zpci_bus_scan_device() is used to scan a specific device, and thus pci_bus_add_device() for that device is sufficient. Furthermore, move this call inside the rescan/remove locking.
Suggested-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Link: https://lore.kernel.org/r/20230306151014.60913-3-schnelle@linux.ibm.com
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
gen_lpswe() contains a BUILD_BUG_ON() statement which depends on a function parameter. If the compiler decides to generate a not inlined function this will lead to a build error, even if all call sites pass a valid parameter. To avoid this always inline gen_lpswe().
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
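A kernel-context sketch of roughly the shape of the helper (the opcode constant is shown for illustration): with __always_inline the argument stays a compile-time constant at every call site, so BUILD_BUG_ON() can be evaluated; an out-of-line copy would make the condition non-constant and break the build even for valid callers:

    static __always_inline unsigned long gen_lpswe(unsigned long addr)
    {
            BUILD_BUG_ON(addr > 0xfff);     /* needs a compile-time constant addr */
            return 0xb2b20000 | addr;       /* LPSWE opcode pattern, illustrative */
    }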
-
Al Viro authored
Setting ->psw.mask and ->psw.addr in childregs of a kernel thread is a rudiment of the old kernel_thread()/kernel_execve() implementation; mainline hasn't been using them since 2012. And clarify the assignments to frame->sf.gprs - the array stores the gpr6..gpr15 values to be set by __switch_to(), so frame->sf.gprs[5] actually affects gpr11, etc. Better spell that as frame->sf.gprs[11 - 6]...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/ZAU6BYFisE8evmYf@ZenIV
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Yu Zhe authored
Pointer variables of void * type do not require a type cast.
Signed-off-by: Yu Zhe <yuzhe@nfschina.com>
Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Link: https://lore.kernel.org/r/20230303052155.21072-1-yuzhe@nfschina.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
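A generic illustration of the cleanup (the names here are made up, not taken from the patch):

    void *data = dev_get_drvdata(dev);
    struct foo *ctx;

    ctx = (struct foo *)data;   /* before: redundant cast */
    ctx = data;                 /* after: void * converts implicitly in C */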
-
Heiko Carstens authored
There is no point in changing branch prediction state of a cpu shortly before it enters stop state. Therefore remove __bpon().
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
s390_isolate_bp_guest() is unused. Remove it.
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
TIF_ISOLATE_BP is unused since it was introduced with commit 6b73044b ("s390: run user space and KVM guests with modified branch prediction"). Given that there is no use case, remove it again.
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
When leaving interpretive execution because of a program check, BPENTER should be called like it is done on interrupt exit as well.
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Sven Schnelle authored
The code which handles the ipl report is searching for a free location in memory where it could copy the component and certificate entries to. It checks for intersection between the sections required for the kernel and the component/certificate data area, but fails to check whether the data structures linking these data areas together intersect. This might cause the ipl report copy code to overwrite the ipl report itself. Fix this by adding two additional intersection checks.
Cc: <stable@vger.kernel.org>
Fixes: 9641b8cc ("s390/ipl: read IPL report at early boot")
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
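The kind of interval check being added looks roughly like the sketch below; the actual boot code helper and argument names may differ:

    static inline int intersects(unsigned long addr1, unsigned long size1,
                                 unsigned long addr2, unsigned long size2)
    {
            /* two half-open ranges [addr, addr + size) overlap */
            return addr1 + size1 > addr2 && addr2 + size2 > addr1;
    }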
-
- 12 Mar, 2023 3 commits
-
-
Linus Torvalds authored
-
Hector Martin authored
This reverts part of commit 015b8cc5 ("wifi: cfg80211: Fix use after free for wext"). That commit broke WPA offload by unconditionally clearing the crypto modes for non-WEP connections. Drop that part of the patch.
Signed-off-by: Hector Martin <marcan@marcan.st>
Reported-by: Ilya <me@0upti.me>
Reported-and-tested-by: Janne Grunau <j@jannau.net>
Reviewed-by: Eric Curtin <ecurtin@redhat.com>
Fixes: 015b8cc5 ("wifi: cfg80211: Fix use after free for wext")
Cc: stable@kernel.org
Link: https://lore.kernel.org/linux-wireless/ZAx0TWRBlGfv7pNl@kroah.com/T/#m11e6e0915ab8fa19ce8bc9695ab288c0fe018edf
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Merge tag 'tpm-v6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd
Linus Torvalds authored
Pull tpm fixes from Jarkko Sakkinen:
 "Two additional bug fixes for v6.3"

* tag 'tpm-v6.3-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
  tpm: disable hwrng for fTPM on some AMD designs
  tpm/eventlog: Don't abort tpm_read_log on faulty ACPI address
-