- 04 Apr, 2023 14 commits
-
-
Heiko Carstens authored
Make STACK_INIT_OFFSET also available for assembler code, and use it everywhere instead of open-coding it in several places. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
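For reference, the usual kernel pattern for sharing such a constant between C and assembler is to define it with plain preprocessor arithmetic (no casts or sizeof), so both languages can evaluate it. A minimal sketch; the values and helper below are illustrative assumptions, not taken from the patch:

    /* shared header sketch -- values are assumed, not from the patch */
    #define THREAD_SIZE             (1 << 16)   /* assumed kernel stack size */
    #define STACK_FRAME_OVERHEAD    160         /* s390 ELF ABI register save area */
    #define STACK_INIT_OFFSET       (THREAD_SIZE - STACK_FRAME_OVERHEAD)

    #ifndef __ASSEMBLY__
    /* C code can use the constant directly ... */
    static inline unsigned long stack_init_sp(unsigned long stack_base)
    {
            return stack_base + STACK_INIT_OFFSET;
    }
    #endif
    /* ... and assembler code can reference STACK_INIT_OFFSET in expressions,
     * since the definition contains no C-only syntax. */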
-
Heiko Carstens authored
The pattern for all in_<type>_stack() functions is the same; in particular, the size of all stacks is the same. Simplify the code by passing only the stack address to the generic in_stack() helper, which can then assume a THREAD_SIZE-sized stack. Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
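A minimal sketch of what the simplified helper might look like; struct stack_info and enum stack_type follow the s390 dumpstack code, but treat the details as assumptions:

    /* all s390 kernel stacks are THREAD_SIZE bytes, so the helper only
     * needs the stack's start address */
    static bool in_stack(unsigned long sp, struct stack_info *info,
                         enum stack_type type, unsigned long stack)
    {
            if (sp < stack || sp >= stack + THREAD_SIZE)
                    return false;
            info->type = type;
            info->begin = stack;
            info->end = stack + THREAD_SIZE;
            return true;
    }

Each in_<type>_stack() wrapper then shrinks to fetching its stack base and calling in_stack() with the matching type.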
-
Harald Freudenberger authored
The preparation of the key data struct for a CCA RSA ME operation included an optimization to skip leading zeros in the key's exponent. However, all supported CCA cards nowadays accept leading zeros in key tokens. So, to simplify the CCA key preparation code, this patch removes this optimization. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Juergen Christ <jchrist@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Harald Freudenberger authored
There was some ancient code which padded the results of a clear key ME or CRT operation with a PKCS 1.2 header. According to the comment this was only needed by crypto cards older than the CEX2. These cards are no longer supported, so this patch removes this obscure result padding code. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Juergen Christ <jchrist@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Currently, exception tables are marked as ro_after_init. However, since they are sorted at compile time using scripts/sorttable, they can be moved to RO_DATA using the RO_EXCEPTION_TABLE_ALIGN macro, which is specifically designed for this purpose. Suggested-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
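The mechanism is roughly this (a sketch; the alignment value is an assumed example): vmlinux.lds.S is run through the C preprocessor, so defining the macro before including the generic linker script header makes the RO_DATA macro place the already-sorted __ex_table section into read-only data:

    /* arch/s390/kernel/vmlinux.lds.S -- sketch, not the verbatim change */
    #define RO_EXCEPTION_TABLE_ALIGN        16      /* assumed alignment */
    #include <asm-generic/vmlinux.lds.h>

    /* With the macro defined, RO_DATA(PAGE_SIZE) inside the SECTIONS
     * description now contains the exception table, so the separate
     * ro_after_init placement can be dropped. */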
-
Vasily Gorbik authored
Since commit 4efd417f ("s390: raise minimum supported machine generation to z10"), the long-displacement facility is assumed and required for the kernel. Clean up a couple of places in the entry code where long-displacement instructions can be used directly instead of a base register. However, there are still a few other places where a base register has to be used to extend short-displacement for the second lowcore page access. Notable cases are boot/head.S, which still has to be built for z900, and mcck_int_handler, where spt and lbear, which have no long-displacement variants, need to access save areas at the second lowcore page. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Heiko Carstens says: =================== There are a couple of oddities within the s390 uaccess library functions. Therefore clean up the whole uaccess.c file. There is no functional change, only improved readability. The output of "objdump -Dr" was compared before/after each patch to make sure that the generated object file is identical, wherever that could be expected. The series therefore also includes more patches than strictly required to clean up the code. Furthermore, the kunit usercopy tests still pass. =================== Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
In order to get uaccess.c (nearly) checkpatch warning free, remove an extra blank line:

CHECK: Blank lines aren't necessary before a close brace '}'
+
+}

Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Get rid of the unneeded val local variable and pass the constant value directly as operand value. In addition this turns the val operand into an input operand, since it is not changed within the inline assemblies. This in turn also requires adding the earlyclobber constraint modifier to all output operands, since the (former) val operand is used after all output operands have been modified. The usercopy kunit tests still pass after this change. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
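A hypothetical inline assembly (not from the patch) showing both points: the constant is passed directly as an input operand, and the outputs carry the '&' earlyclobber modifier because they are written before the last read of that input:

    /* hypothetical helper: subtract 16 from two values in one asm block */
    static inline void sub16_pair(unsigned long *a, unsigned long *b)
    {
            unsigned long x = *a, y = *b;

            asm volatile(
                    "       slgr    %[x],%[val]\n"  /* x written here ... */
                    "       slgr    %[y],%[val]\n"  /* ... but val read again */
                    : [x] "+&d" (x), [y] "+&d" (y)  /* '&' = earlyclobber */
                    : [val] "d" (16UL)              /* constant passed directly */
                    : "cc");
            *a = x;
            *b = y;
    }

Without the '&' on x, the compiler would be free to allocate val into the same register as x, and the second slgr would read a corrupted value.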
-
Heiko Carstens authored
Rename tmp1 and tmp2 variables to more meaningful val (for value) and rem (for remainder). Except for debug sections the output of "objdump -Dr" of the uaccess object file is identical before/after this change. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Rename and sort labels in uaccess inline assemblies to increase readability. In addition have only one EX_TABLE entry per line - also to increase readability. Except for debug sections the output of "objdump -Dr" of the uaccess object file is identical before/after this change. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Remove an unused label in all three uaccess inline assemblies. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Improve readability of the uaccess inline assemblies by using symbolic names for all input and output operands. Except for debug sections the output of "objdump -Dr" of the uaccess object file is identical before/after this change. Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 27 Mar, 2023 4 commits
-
-
Heiko Carstens authored
Add missing earlyclobber annotation to size, to, and tmp2 operands of the __clear_user() inline assembly since they are modified or written to before the last usage of all input operands. This can lead to incorrect register allocation for the inline assembly. Fixes: 6c2a9e6d ("[S390] Use alternative user-copy operations for new hardware.") Reported-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/all/20230321122514.1743889-3-mark.rutland@arm.com/ Cc: stable@vger.kernel.org Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Lizhe authored
If there is no driver match function, the driver core assumes that each candidate pair (driver, device) matches; see driver_match_device(). Drop the matrix bus's match function, which always returned 1 and thus implemented the same behaviour as having no match function at all. Signed-off-by: Lizhe <sensor1010@163.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Link: https://lore.kernel.org/r/20230319041941.259830-1-sensor1010@163.com Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
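For context, this is the driver core logic being relied upon, as found in drivers/base/base.h:

    static inline int driver_match_device(struct device_driver *drv,
                                          struct device *dev)
    {
            /* no bus match callback: every (driver, device) pair matches */
            return drv->bus->match ? drv->bus->match(dev, drv) : 1;
    }

A bus match function that unconditionally returns 1 is therefore pure dead weight.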
-
Heiko Carstens authored
s390 trivially supports the ARCH_HAS_MEMBARRIER_SYNC_CORE requirements, since the lpswe(y) instruction used to return from any kernel context to user space performs CPU serialization. This is very similar to arm, arm64 and powerpc. See commit 70216e18 ("membarrier: Provide core serializing command, *_SYNC_CORE") for further details. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Thomas Richter authored
This flag is used to process only fully populated sampling buffers when a sampling event is stopped on a CPU. By default the last sampling buffer is also scanned for samples, even if the block-full indicator is not set in the trailer entry of a sampling buffer page. This flag can be set via the perf_event_attr::config1 field. It was never used and never documented, and it is useless now.

With PERF_CPUM_SF_FULL_BLOCKS: when a process is scheduled off the CPU, sampling is stopped and the samples are copied to the perf ring buffer and marked invalid. When stopped at the last full sample buffer page (which is what the PERF_CPUM_SF_FULL_BLOCKS option achieves), hardware sampling resumes at the first free sample entry in the current, partially filled sample buffer.

Without PERF_CPUM_SF_FULL_BLOCKS (default behavior): the partially filled last sample buffer is scanned, valid samples are saved to the perf ring buffer, and the valid samples are marked invalid. Sampling resumes when the process is scheduled on this CPU; again, hardware sampling resumes at the first free sample entry in the current, partially filled sample buffer. The next interrupt handler invocation then scans the full sample block and saves the valid samples to the ring buffer, omitting the already-invalidated samples at the top of the buffer.

The default behavior is fully sufficient, therefore remove this feature. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Acked-by: Hendrik Brueckner <brueckner@linux.ibm.com> Acked-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 20 Mar, 2023 22 commits
-
-
Heiko Carstens authored
Make use of atomic_fetch_xor() instead of an atomic_cmpxchg() loop to implement atomic_xor_bits() (aka atomic_xor_return()). This makes the C code more readable and in addition generates better code, since for z196 and newer a single lax instruction is generated instead of a cmpxchg() loop. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
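A sketched before/after of the change described above (simplified; the real function lives in the s390 page table allocation code):

    /* before: open-coded compare-and-swap loop */
    static inline unsigned int atomic_xor_bits_old(atomic_t *v, unsigned int bits)
    {
            unsigned int old, new;

            do {
                    old = atomic_read(v);
                    new = old ^ bits;
            } while (atomic_cmpxchg(v, old, new) != old);
            return new;
    }

    /* after: one atomic_fetch_xor(); compiles to a single lax on z196+ */
    static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
    {
            /* atomic_fetch_xor() returns the old value; xor again for new */
            return atomic_fetch_xor(bits, v) ^ bits;
    }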
-
Harald Freudenberger authored
Review and extend the low level AP code to be able to deal with asynchronously reported errors on APQNs. The hypervisor and the SE guest may be confronted with an asynchronously reported error at return of an AP instruction, so all places where AP instructions are called need review and may eventually need extensions. However, not all places need rework, as together with the AP status and the enabled async bit there is always a response code set. The async error reporting comes with new response codes, which can simply be handled in the default case of a switch statement. The idea behind this patch is to report async errors as -EPERM (read this as "Operation not permitted"), which reflects the fact that only a rapq (with the F bit enabled) is a valid AP instruction when an async error is flagged. The AP queue state machine functions return AP_SM_WAIT_NONE when an async error is detected, to reflect the fact that the state machine can't do anything with such an error until the queue has been reset. Unfortunately the AP bus scan function needed some update, as ap_queue_info() now needs to return three states: 1 if an APQN exists and info is available, -1 if it is assumed an APQN does not exist, and the new return value 0 without any info values filled in. This 0 return code is handled as "there is an APQN but we currently don't know any more hw info about it, so please use your previous info and try again later". Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
Implementation of the new functions for SE AP support: bind, unbind and associate. There are two new sysfs attributes for this:

/sys/devices/ap/cardxx/xx.yyyy/se_bind
/sys/devices/ap/cardxx/xx.yyyy/se_associate

Writing a 1 into the se_bind attribute triggers the SE AP bind for this AP queue; writing a 0 into it does an unbind, that is, a reset (RAPQ) with the F bit enabled. The se_associate attribute takes an integer value in the range 0..2^16-1. This is the index into a secrets table fed into the ultravisor. For more details please see the architecture documents. Both of these new AP queue attributes are only visible inside an SE guest with SB (Secure Binding) available. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
For some events the AP bus needs to poll, for example when an AP queue is reset, until the reset is complete. Also when no interrupt support is available (e.g. z/VM), there is a need to poll until all requests have been processed and all replies have been delivered. Polling is done with a high resolution timer, by default run at a rate of 4kHz (LPAR) or 666Hz (z/VM guest). For some events (wait for reset complete, wait for irq enabled complete) this is a much too high poll rate which triggers a lot of TAPQ invocations. This patch introduces the possibility for the state machine functions to return a new wait enum value AP_SM_WAIT_LOW_TIMEOUT, which hints the ap_wait() function to set up the timer with a more relaxed poll rate of 25Hz. This patch also includes a slight rework of the sysfs functions parsing the timer related values: use kstrtobool and kstrtoul instead of sscanf. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
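A hedged sketch of the sysfs parsing rework (the attribute name is illustrative): kstrtobool()/kstrtoul() replace sscanf() for single-value attributes and return proper error codes on malformed input:

    static ssize_t poll_thread_store(struct bus_type *bus,
                                     const char *buf, size_t count)
    {
            bool value;
            int rc;

            rc = kstrtobool(buf, &value);   /* accepts 0/1, y/n, on/off */
            if (rc)
                    return rc;
            /* ... start or stop polling depending on value ... */
            return count;
    }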
-
Harald Freudenberger authored
Introduce two new low level functions ap_bapq() (calls PQAP(BAPQ)) and ap_aapq() (calls PQAP(AAPQ)). Both functions are only meant to be used in an SE environment with the SE AP binding facility available. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
Extend the AP inline functions ap_rapq() (calls PQAP(RAPQ)) and ap_zapq() (calls PQAP(ZAPQ)) with a new parameter to enable the newly architected F bit, which forces an unassociate and/or unbind on a secure execution associated and/or bound queue. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
With SE SB (Secure Binding), some previously unused and thus always zero bits in the TAPQ GR2 result are now used to show the binding state of a queue. The basis for checking whether a card has changed is exactly this GR2 value, shown as 'ap_functions' in sysfs (/sys/devices/ap/cardxx/ap_functions). Since there is now queue-specific info in this value, a new mask TAPQ_CARD_FUNC_CMP_MASK is used to filter out only the bits relevant for the card comparison. For the same reason the function bits (including exactly this bind/associate information) now need to be exposed to user space, so tools like lszcrypt can evaluate binding/association state on a per-queue basis. So here comes a new sysfs attribute:

/sys/devices/ap/cardxx/xx.yyyy/ap_functions

This sysfs attribute is similar to the already existing ap_functions attribute at AP card level. It shows the upper 32 bits of GR2 from an invocation of TAPQ for this AP queue. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
This patch introduces a new struct ap_tapq_gr2 which covers the response in GR2 on TAPQ invocation. This makes it much easier and less error-prone for the calling functions to access the right field without shifting and masking. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
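The shape of the idea is a union over the raw GR2 value so callers read named bitfields instead of open-coded shifts and masks. Field names and widths below are illustrative assumptions, not the exact layout from the patch:

    struct ap_tapq_gr2 {
            union {
                    unsigned long value;            /* raw 64-bit GR2 contents */
                    struct {
                            unsigned int fac    : 32;  /* facility bits (upper half) */
                            unsigned int apinfo : 32;  /* queue info (lower half) */
                    };
            };
    };

    /* callers then write gr2.fac instead of (gr2 >> 32) & 0xffffffff */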
-
Harald Freudenberger authored
Introduce a new AP bus sysfs attribute /sys/bus/ap/features which shows the features from the QCI information. Currently these feature bits are evaluated:
- QCI S bit is shown as 'APSC'
- QCI N bit is shown as 'APXA'
- QCI C bit is shown as 'QACT'
- QCI R bit is shown as 'RC8A'
- QCI B bit is shown as 'APSB'
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
This patch introduces an update to the ap_config_info struct, which is filled by the QCI subfunction. There is a new bit apsb (short 'B') showing whether the AP secure bind facility is available. The patch also includes a simple function ap_sb_available() wrapping this bit test. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
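A plausible shape of the wrapper, assuming the QCI info is kept in a global struct as the AP bus does for other facility bits (names are assumptions):

    /* global QCI info, filled once at AP bus init (assumed name) */
    static struct ap_config_info *ap_qci_info;

    static inline bool ap_sb_available(void)
    {
            /* QCI unavailable (e.g. old machine): no secure bind either */
            return ap_qci_info && ap_qci_info->apsb;
    }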
-
Harald Freudenberger authored
Replace scnprintf() with sysfs_emit() and friends where possible. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Reviewed-by: Tony Krowiak <akrowiak@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
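A typical shape of such a conversion (illustrative attribute, not a verbatim hunk from the patch; sysfs_emit() enforces the single-page sysfs contract that the scnprintf() pattern only implied):

    static ssize_t online_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
            struct ap_card *ac = to_ap_card(dev);

            /* before: return scnprintf(buf, PAGE_SIZE, "%d\n", ...); */
            return sysfs_emit(buf, "%d\n", ac->config ? 1 : 0);
    }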
-
Harald Freudenberger authored
The inline ap_dqap() function did not return the number of bytes actually written into the message buffer. The calling code inspects the AP message header to figure out what kind of AP message has been received and pulls the length information from this header. This processing may not work correctly in cases where only a fragment of the reply is received. With this patch the ap_dqap() inline function now returns the number of actually written bytes in the *length parameter, so the calling function has a chance to compare the number of received bytes against what the AP message header's length field states. This is especially useful in cases where a message was only partially received. The low level reply processing functions needed some rework to be able to catch this new length information and compare it the right way. The rework also deals with some situations where until now the reply length was not correctly calculated and/or set. All this has been heavily tested, as the modifications to the reply length information may affect crypto load. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Harald Freudenberger authored
Since the s390 kernel build does not support 32 bit builds any more, there is no difference between long and long long. So this patch reworks all occurrences of psmid (a 64 bit value) to use unsigned long. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Acked-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Holger Dengler <dengler@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Allow enforcing 64 byte function alignment, like a couple of other architectures already do. This may or may not be helpful for debugging performance problems, as described with the Kconfig option. Since the kernel also works with 64 byte function alignment, there is no reason not to allow enforcing it. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Use __ALIGN instead of open coded .align statement to make sure that vdso code follows global kernel function alignment rules. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Use __ALIGN instead of open coded .align statement to make sure that external expoline thunks follow global function alignment rules. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Move the ftrace hotpatch trampolines to mcount.S. This allows use of the standard SYM_CODE macros, which in turn makes sure that the hotpatch trampolines follow the function alignment rules of the rest of the kernel. Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Acked-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Heiko Carstens authored
Make use of CONFIG_FUNCTION_ALIGNMENT, which was introduced with commit d49a0626 ("arch: Introduce CONFIG_FUNCTION_ALIGNMENT"). Select FUNCTION_ALIGNMENT_8B for gcc in order to reflect gcc's default function alignment. For all other compilers, which currently means only clang, select a function alignment of 16 bytes, which reflects clang's default. Also change the __ALIGN define to follow whatever the value of CONFIG_FUNCTION_ALIGNMENT is. This makes sure that the alignment of C and assembler functions is the same. As a result, everything still uses the default function alignment for both compilers; however, this is now also true for all assembly functions, so that all functions have a consistent alignment. Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
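The linkage.h side of such a change might look like this (a sketch; the padding byte is an assumption):

    /* arch/s390/include/asm/linkage.h -- sketch */
    #include <linux/stringify.h>

    /* asm function entry alignment tracks the Kconfig-chosen value */
    #define __ALIGN         .balign CONFIG_FUNCTION_ALIGNMENT, 0x07
    #define __ALIGN_STR     __stringify(__ALIGN)

With this, SYM_FUNC_START() and friends emit the same alignment the compiler applies to C functions.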
-
Heiko Carstens authored
Vasily Gorbik says: =================== Combine and generalize all methods for finding unused memory in decompressor, while decreasing complexity, add memory holes support, while improving error handling (especially in low-memory conditions) and debug-ability. =================== Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Since regular paging structs are initialized in the decompressor already, move the KASAN shadow mapping to the decompressor as well. This helps to avoid allocating the memory KASAN requires in one large chunk, de-duplicates the paging struct creation code, and starts the uncompressed kernel with KASAN instrumentation right away. This also avoids all the pitfalls of accidentally calling KASAN instrumented code during KASAN initialization. Acked-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Allow changing page table attributes for KASAN shadow memory ranges. Acked-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
-
Vasily Gorbik authored
Currently several approaches for finding unused memory in the decompressor are utilized. While "safe_addr" grows towards higher addresses, the vmem code allocates paging structures top down. The former requires careful ordering. In addition, the ipl report handling code verifies potential intersections with secure boot certificates on its own. Neither of the two approaches is memory hole aware, and they are not consistent with each other in low memory conditions.

To solve that, the existing approaches are generalized and combined, and online memory ranges are now taken into consideration. physmem_info has been extended to contain reserved memory ranges. A new set of functions allows handling reserves and finding unused memory. All reserves and memory allocations are "typed". In an out of memory condition the decompressor fails with detailed info on the current reserved ranges and usable online memory:

Linux version 6.2.0 ...
Kernel command line: ... mem=100M
Out of memory allocating 100000 bytes 100000 aligned in range 0:5800000
Reserved memory ranges:
0000000000000000 0000000003e33000 DECOMPRESSOR
0000000003f00000 00000000057648a3 INITRD
00000000063e0000 00000000063e8000 VMEM
00000000063eb000 00000000063f4000 VMEM
00000000063f7800 0000000006400000 VMEM
0000000005800000 0000000006300000 KASAN
Usable online memory ranges (info source: sclp read info [3]):
0000000000000000 0000000006400000
Usable online memory total: 6400000 Reserved: 61b10a3 Free: 24ef5d
Call Trace:
(sp:000000000002bd58 [<0000000000012a70>] physmem_alloc_top_down+0x60/0x14c)
 sp:000000000002bdc8 [<0000000000013756>] _pa+0x56/0x6a
 sp:000000000002bdf0 [<0000000000013bcc>] pgtable_populate+0x45c/0x65e
 sp:000000000002be90 [<00000000000140aa>] setup_vmem+0x2da/0x424
 sp:000000000002bec8 [<0000000000011c20>] startup_kernel+0x428/0x8b4
 sp:000000000002bf60 [<00000000000100f4>] startup_normal+0xd4/0xd4

physmem_alloc_range() allows finding free memory in a specified range. It should be used for one time allocations only, like finding a position for amode31 and vmlinux. physmem_alloc_top_down() can be used just like physmem_alloc_range(), but it also allows multiple allocations per type and tries to merge sequential allocations together, which is useful for paging structure allocations. If sequential allocations cannot be merged together they are "chained", allowing easy per type reserved range enumeration and migration to memblock later. Extra "struct reserved_range" entries allocated for chaining are not tracked or reserved, but rely on the fact that both physmem_alloc_range() and physmem_alloc_top_down() search for free memory only below the current top down allocator position.

All reserved ranges should be transferred to memblock before memblock allocations are enabled. The startup code has been reordered to delay any memory allocations until online memory ranges are detected and occupied memory ranges are marked as reserved, to be excluded from follow-up allocations. Ipl report certificates are a special case: the certificate list is checked together with other memory reserves until the certificates are saved elsewhere. The memory KASAN requires for shadow memory allocation and mapping is reserved as one large chunk which is later passed to the KASAN early initialization code.

Acked-by: Heiko Carstens <hca@linux.ibm.com> Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
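An illustrative sketch of the chained, typed reserved-range bookkeeping described above; all names and fields are assumptions derived from the commit text, and the print helper stands in for whatever the decompressor actually uses:

    struct reserved_range {
            unsigned long start;
            unsigned long end;
            struct reserved_range *chain;   /* next reserve of the same type */
    };

    /* walk all reserves of one type, e.g. for the out-of-memory report */
    static void report_reserved(const char *name, struct reserved_range *head)
    {
            struct reserved_range *range;

            for (range = head; range; range = range->chain)
                    boot_printk("%016lx %016lx %s\n",   /* assumed helper */
                                range->start, range->end, name);
    }

Because chained ranges only ever live below the top-down allocator's current position, the extra chaining nodes never need to be tracked as reserves themselves.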
-