Commit 41b7a347 authored by Daniel Axtens, committed by Michael Ellerman

powerpc: Book3S 64-bit outline-only KASAN support

Implement a limited form of KASAN for Book3S 64-bit machines running under
the Radix MMU, supporting only outline mode.

 - Enable the compiler instrumentation to check addresses and maintain the
   shadow region. (This is the guts of KASAN which we can easily reuse.)

 - Require kasan-vmalloc support to handle modules and anything else in
   vmalloc space.

 - KASAN needs to be able to validate all pointer accesses, but we can't
   instrument all kernel addresses - only the linear map and vmalloc. On
   boot, set up a single page of read-only shadow that marks all iomap and
   vmemmap accesses as valid. (A sketch of the resulting shadow layout
   follows this list.)

 - Document KASAN in powerpc docs.
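
To make that layout concrete, here is a minimal userspace sketch (not kernel
code) of the generic KASAN mapping shadow(addr) = (addr >> 3) + offset,
plugging in the PPC64 offset this patch sets in Kconfig (0xa80e000000000000)
and the region boundaries from the radix memory map:

  #include <stdio.h>

  #define KASAN_SHADOW_OFFSET 0xa80e000000000000UL  /* from Kconfig below */

  /* Generic KASAN: one shadow byte covers 8 bytes of memory. */
  static unsigned long mem_to_shadow(unsigned long addr)
  {
          return (addr >> 3) + KASAN_SHADOW_OFFSET;
  }

  int main(void)
  {
          printf("linear map start c000... -> %016lx\n", mem_to_shadow(0xc000000000000000UL));
          printf("vmalloc start    c008... -> %016lx\n", mem_to_shadow(0xc008000000000000UL));
          printf("iomap start      c00a... -> %016lx\n", mem_to_shadow(0xc00a000000000000UL));
          printf("vmemmap end      c00e... -> %016lx\n", mem_to_shadow(0xc00e000000000000UL));
          return 0;
  }

The whole shadow therefore sits in c00e000000000000..c00fc00000000000, just
past vmemmap, and the last value printed is exactly the KASAN_SHADOW_END
defined in kasan.h below: the shadow region needs no shadow of its own.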

Background
----------

KASAN support on Book3S is a bit tricky to get right:

 - It would be good to support inline instrumentation so as to be able to
   catch stack issues that cannot be caught with outline mode.

 - Inline instrumentation requires a fixed offset.

 - Book3S runs code with translations off ("real mode") during boot,
   including a lot of generic device-tree parsing code which is used to
   determine MMU features.

    [ppc64 mm note: The kernel installs a linear mapping at effective
    address c000...-c008.... This is a one-to-one mapping with physical
    memory from 0000... onward. Because of how memory accesses work on
    powerpc 64-bit Book3S, a kernel pointer in the linear map accesses the
    same memory both with translations on (accessing as an 'effective
    address'), and with translations off (accessing as a 'real
    address'). This works in both guests and the hypervisor. For more
    details, see s5.7 of Book III of version 3 of the ISA, in particular
    the Storage Control Overview, s5.7.3, and s5.7.5 - noting that this
    KASAN implementation currently only supports Radix.]

 - Some code - most notably a lot of KVM code - also runs with translations
   off after boot.

 - Therefore any offset has to point to memory that is valid with
   translations on or off.
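
   As a worked illustration of that constraint (the numbers are examples, not
   from the patch): the linear-map pointer 0xc000000000001000 reaches the same
   bytes whether the MMU is translating (as an effective address in the linear
   mapping) or not (as real address 0x1000), so memory inside the linear map
   is usable either way; a shadow placed at a fixed offset outside the linear
   map is only reachable with translations on.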

One approach is just to give up on inline instrumentation. This way
boot-time checks can be delayed until after the MMU is set up, and we can
just not instrument any code that runs with translations off after
booting. Take this approach for now and require outline instrumentation.
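
Concretely, outline mode turns every access into a call into generic KASAN,
which is what gives the architecture a place to say "not yet". A simplified
sketch of one such entry point (names approximate mm/kasan/generic.c; this is
not the patch's code):

  /* The compiler rewrites "*p = x;" roughly as:
   *         __asan_store8((unsigned long)p);
   *         *p = x;
   */
  void __asan_store8(unsigned long addr)
  {
          /* The hook this patch provides: a static key that stays false
           * until kasan_init() runs, so real-mode boot code returns here
           * before ever dereferencing the (nonexistent) shadow. */
          if (!kasan_arch_is_ready())
                  return;

          kasan_check_range(addr, 8, true, _RET_IP_); /* shadow check + report */
  }

Paul's rebase note below about HPT crashes is precisely about generic entry
points that reach the shadow without passing this gate.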

Previous attempts allowed inline instrumentation. However, they came with
some unfortunate restrictions: only physically contiguous memory could be
used and it had to be specified at compile time. Maybe we can do better in
the future.

[paulus@ozlabs.org - Rebased onto 5.17.  Note that a kernel with
 CONFIG_KASAN=y will crash during boot on a machine using HPT
 translation because not all the entry points to the generic
 KASAN code are protected with a call to kasan_arch_is_ready().]

Originally-by: Balbir Singh <bsingharora@gmail.com> # ppc64 out-of-line radix version
Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
[mpe: Update copyright year and comment formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/YoTE69OQwiG7z+Gu@cleo
parent 2ab2d579
-KASAN is supported on powerpc on 32-bit only.
+KASAN is supported on powerpc on 32-bit and Radix 64-bit only.
 
 32 bit support
 ==============
@@ -10,3 +10,49 @@ fixmap area and occupies one eighth of the total kernel virtual memory space.
 Instrumentation of the vmalloc area is optional, unless built with modules,
 in which case it is required.
+
+64 bit support
+==============
+
+Currently, only the radix MMU is supported. There have been versions for hash
+and Book3E processors floating around on the mailing list, but nothing has been
+merged.
+
+KASAN support on Book3S is a bit tricky to get right:
+
+ - It would be good to support inline instrumentation so as to be able to catch
+   stack issues that cannot be caught with outline mode.
+
+ - Inline instrumentation requires a fixed offset.
+
+ - Book3S runs code with translations off ("real mode") during boot, including a
+   lot of generic device-tree parsing code which is used to determine MMU
+   features.
+
+ - Some code - most notably a lot of KVM code - also runs with translations off
+   after boot.
+
+ - Therefore any offset has to point to memory that is valid with
+   translations on or off.
+
+One approach is just to give up on inline instrumentation. This way boot-time
+checks can be delayed until after the MMU is set up, and we can just not
+instrument any code that runs with translations off after booting. This is the
+current approach.
+
+To avoid this limitation, the KASAN shadow would have to be placed inside the
+linear mapping, using the same high-bits trick we use for the rest of the
+linear mapping. This is tricky:
+
+ - We'd like to place it near the start of physical memory. In theory we can do
+   this at run-time based on how much physical memory we have, but this requires
+   being able to arbitrarily relocate the kernel, which is basically the tricky
+   part of KASLR. Not being game to implement both tricky things at once, this
+   is hopefully something we can revisit once we get KASLR for Book3S.
+
+ - Alternatively, we can place the shadow at the _end_ of memory, but this
+   requires knowing how much contiguous physical memory a system has _at compile
+   time_. This is a big hammer, and has some unfortunate consequences: inability
+   to handle discontiguous physical memory, total failure to boot on machines
+   with less memory than specified, and machines with more memory than specified
+   being unable to use it. This was deemed unacceptable.
@@ -109,6 +109,7 @@ config PPC
 	# Please keep this list sorted alphabetically.
 	#
 	select ARCH_32BIT_OFF_T if PPC32
+	select ARCH_DISABLE_KASAN_INLINE	if PPC_RADIX_MMU
 	select ARCH_ENABLE_MEMORY_HOTPLUG
 	select ARCH_ENABLE_MEMORY_HOTREMOVE
 	select ARCH_HAS_COPY_MC			if PPC64
@@ -157,6 +158,7 @@ config PPC
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
 	select ARCH_WANT_LD_ORPHAN_WARN
+	select ARCH_WANTS_NO_INSTR
 	select ARCH_WEAK_RELEASE_ACQUIRE
 	select BINFMT_ELF
 	select BUILDTIME_TABLE_SORT
@@ -188,7 +190,8 @@ config PPC
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if PPC32 && PPC_PAGE_SHIFT <= 14
-	select HAVE_ARCH_KASAN_VMALLOC		if PPC32 && PPC_PAGE_SHIFT <= 14
+	select HAVE_ARCH_KASAN			if PPC_RADIX_MMU
+	select HAVE_ARCH_KASAN_VMALLOC		if HAVE_ARCH_KASAN
 	select HAVE_ARCH_KFENCE			if PPC_BOOK3S_32 || PPC_8xx || 40x
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
...
@@ -374,4 +374,5 @@ config PPC_FAST_ENDIAN_SWITCH
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xe0000000
+	default 0xe0000000 if PPC32
+	default 0xa80e000000000000 if PPC64
@@ -18,6 +18,10 @@
 #include <asm/book3s/64/hash-4k.h>
 #endif
 
+#define H_PTRS_PER_PTE		(1 << H_PTE_INDEX_SIZE)
+#define H_PTRS_PER_PMD		(1 << H_PMD_INDEX_SIZE)
+#define H_PTRS_PER_PUD		(1 << H_PUD_INDEX_SIZE)
+
 /* Bits to set in a PMD/PUD/PGD entry valid bit*/
 #define HASH_PMD_VAL_BITS	(0x8000000000000000UL)
 #define HASH_PUD_VAL_BITS	(0x8000000000000000UL)
...
@@ -232,6 +232,9 @@ extern unsigned long __pmd_frag_size_shift;
 #define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
 #define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
 
+#define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? H_PTRS_PER_PTE : R_PTRS_PER_PTE)
+#define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? H_PTRS_PER_PMD : R_PTRS_PER_PMD)
+#define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? H_PTRS_PER_PUD : R_PTRS_PER_PUD)
 #define MAX_PTRS_PER_PGD	(1 << (H_PGD_INDEX_SIZE > RADIX_PGD_INDEX_SIZE ? \
 				       H_PGD_INDEX_SIZE : RADIX_PGD_INDEX_SIZE))
...
@@ -35,6 +35,11 @@
 #define RADIX_PMD_SHIFT		(PAGE_SHIFT + RADIX_PTE_INDEX_SIZE)
 #define RADIX_PUD_SHIFT		(RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE)
 #define RADIX_PGD_SHIFT		(RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE)
+
+#define R_PTRS_PER_PTE		(1 << RADIX_PTE_INDEX_SIZE)
+#define R_PTRS_PER_PMD		(1 << RADIX_PMD_INDEX_SIZE)
+#define R_PTRS_PER_PUD		(1 << RADIX_PUD_INDEX_SIZE)
+
 /*
  * Size of EA range mapped by our pagetables.
  */
@@ -68,11 +73,11 @@
  *
  *
  * 3rd quadrant expanded:
- * +------------------------------+
- * |                              |
+ * +------------------------------+  Highest address (0xc010000000000000)
+ * +------------------------------+  KASAN shadow end (0xc00fc00000000000)
  * |                              |
  * |                              |
- * +------------------------------+  Kernel vmemmap end (0xc010000000000000)
+ * +------------------------------+  Kernel vmemmap end/shadow start (0xc00e000000000000)
  * |                              |
  * |            512TB             |
  * |                              |
@@ -91,6 +96,7 @@
  * +------------------------------+  Kernel linear (0xc.....)
  */
 
+/* For the sizes of the shadow area, see kasan.h */
 
 /*
  * If we store section details in page->flags we can't increase the MAX_PHYSMEM_BITS
...
@@ -30,9 +30,31 @@
 #define KASAN_SHADOW_OFFSET	ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET)
 
+#ifdef CONFIG_PPC32
 #define KASAN_SHADOW_END	(-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT))
+#elif defined(CONFIG_PPC_BOOK3S_64)
+/*
+ * The shadow ends before the highest accessible address
+ * because we don't need a shadow for the shadow. Instead:
+ * c00e000000000000 >> 3 + a80e000000000000 = c00fc00000000000
+ */
+#define KASAN_SHADOW_END	0xc00fc00000000000UL
+#endif
 
 #ifdef CONFIG_KASAN
+#ifdef CONFIG_PPC_BOOK3S_64
+DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
+
+static __always_inline bool kasan_arch_is_ready(void)
+{
+	if (static_branch_likely(&powerpc_kasan_enabled_key))
+		return true;
+	return false;
+}
+
+#define kasan_arch_is_ready kasan_arch_is_ready
+#endif
+
 void kasan_early_init(void);
 void kasan_mmu_init(void);
 void kasan_init(void);
...
@@ -33,6 +33,17 @@ KASAN_SANITIZE_early_32.o := n
 KASAN_SANITIZE_cputable.o := n
 KASAN_SANITIZE_prom_init.o := n
 KASAN_SANITIZE_btext.o := n
+KASAN_SANITIZE_paca.o := n
+KASAN_SANITIZE_setup_64.o := n
+KASAN_SANITIZE_mce.o := n
+KASAN_SANITIZE_mce_power.o := n
+
+# we have to be particularly careful in ppc64 to exclude code that
+# runs with translations off, as we cannot access the shadow with
+# translations off. However, ppc32 can sanitize this.
+ifdef CONFIG_PPC64
+KASAN_SANITIZE_traps.o := n
+endif
 
 ifdef CONFIG_KASAN
 CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING
...
@@ -131,3 +131,8 @@ obj-$(CONFIG_KVM_BOOK3S_64_PR) += kvm-pr.o
 obj-$(CONFIG_KVM_BOOK3S_64_HV) += kvm-hv.o
 
 obj-y += $(kvm-book3s_64-builtin-objs-y)
+
+# KVM does a lot in real-mode, and 64-bit Book3S KASAN doesn't support that
+ifdef CONFIG_PPC_BOOK3S_64
+KASAN_SANITIZE := n
+endif
@@ -24,3 +24,12 @@ obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
 
 # Instrumenting the SLB fault path can lead to duplicate SLB entries
 KCOV_INSTRUMENT_slb.o := n
+
+# Parts of these can run in real mode and therefore are
+# not safe with the current outline KASAN implementation
+KASAN_SANITIZE_mmu_context.o := n
+KASAN_SANITIZE_pgtable.o := n
+KASAN_SANITIZE_radix_pgtable.o := n
+KASAN_SANITIZE_radix_tlb.o := n
+KASAN_SANITIZE_slb.o := n
+KASAN_SANITIZE_pkeys.o := n
@@ -5,3 +5,4 @@ KASAN_SANITIZE := n
 obj-$(CONFIG_PPC32)		+= init_32.o
 obj-$(CONFIG_PPC_8xx)		+= 8xx.o
 obj-$(CONFIG_PPC_BOOK3S_32)	+= book3s_32.o
+obj-$(CONFIG_PPC_BOOK3S_64)	+= init_book3s_64.o
// SPDX-License-Identifier: GPL-2.0
/*
 * KASAN for 64-bit Book3S powerpc
 *
 * Copyright 2019-2022, Daniel Axtens, IBM Corporation.
 */

/*
 * ppc64 turns on virtual memory late in boot, after calling into generic code
 * like the device-tree parser, so it uses this in conjunction with a hook in
 * outline mode to avoid invalid access early in boot.
 */

#define DISABLE_BRANCH_PROFILING

#include <linux/kasan.h>
#include <linux/printk.h>
#include <linux/sched/task.h>
#include <linux/memblock.h>
#include <asm/pgalloc.h>

DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);

static void __init kasan_init_phys_region(void *start, void *end)
{
	unsigned long k_start, k_end, k_cur;
	void *va;

	if (start >= end)
		return;

	k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE);
	k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE);

	va = memblock_alloc(k_end - k_start, PAGE_SIZE);
	for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE)
		map_kernel_page(k_cur, __pa(va), PAGE_KERNEL);
}
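
/*
 * The linear map's shadow is backed by real, writable pages here:
 * memblock_alloc() returns zeroed memory, so every covered region starts
 * out unpoisoned, and later poisoning can write to it. That is unlike
 * vmalloc space (whose shadow is managed on demand by KASAN_VMALLOC) and
 * the iomap/vmemmap ranges (which share one read-only zero page, mapped
 * below).
 */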
void __init kasan_init(void)
{
	/*
	 * We want to do the following things:
	 *  1) Map real memory into the shadow for all physical memblocks.
	 *     This takes us from c000... to c008...
	 *  2) Leave a hole over the shadow of vmalloc space. KASAN_VMALLOC
	 *     will manage this for us.
	 *     This takes us from c008... to c00a...
	 *  3) Map the 'early shadow'/zero page over iomap and vmemmap space.
	 *     This takes us up to where we start at c00e...
	 */
	void *k_start = kasan_mem_to_shadow((void *)RADIX_VMALLOC_END);
	void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END);
	phys_addr_t start, end;
	u64 i;
	pte_t zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL);

	if (!early_radix_enabled()) {
		pr_warn("KASAN not enabled as it requires radix!");
		return;
	}

	for_each_mem_range(i, &start, &end)
		kasan_init_phys_region((void *)start, (void *)end);

	for (i = 0; i < PTRS_PER_PTE; i++)
		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
			     &kasan_early_shadow_pte[i], zero_pte, 0);

	for (i = 0; i < PTRS_PER_PMD; i++)
		pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i],
				    kasan_early_shadow_pte);

	for (i = 0; i < PTRS_PER_PUD; i++)
		pud_populate(&init_mm, &kasan_early_shadow_pud[i],
			     kasan_early_shadow_pmd);

	/* map the early shadow over the iomap and vmemmap space */
	kasan_populate_early_shadow(k_start, k_end);

	/* mark early shadow region as RO and wipe it */
	zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL_RO);
	for (i = 0; i < PTRS_PER_PTE; i++)
		__set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
			     &kasan_early_shadow_pte[i], zero_pte, 0);

	/*
	 * clear_page relies on some cache info that hasn't been set up yet.
	 * It ends up looping ~forever and blows up other data.
	 * Use memset instead.
	 */
	memset(kasan_early_shadow_page, 0, PAGE_SIZE);

	static_branch_inc(&powerpc_kasan_enabled_key);

	/* Enable error messages */
	init_task.kasan_depth = 0;
	pr_info("KASAN init done\n");
}

void __init kasan_late_init(void) { }
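
A quick way to exercise this (a suggested smoke test, not part of the patch):
boot a Radix machine or guest with CONFIG_KASAN=y and look for "KASAN init
done" in dmesg, then run the KASAN KUnit tests (CONFIG_KASAN_KUNIT_TEST) to
trigger example reports. Per Paul's rebase note above, don't expect an HPT
machine to survive to the "requires radix" warning until the remaining
generic entry points gain kasan_arch_is_ready() checks.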
@@ -21,6 +21,7 @@
 #include <linux/seq_file.h>
 #include <asm/fixmap.h>
 #include <linux/const.h>
+#include <linux/kasan.h>
 #include <asm/page.h>
 #include <asm/hugetlb.h>
@@ -289,11 +290,11 @@ static void populate_markers(void)
 #endif
 	address_markers[i++].start_address = FIXADDR_START;
 	address_markers[i++].start_address = FIXADDR_TOP;
-#endif /* CONFIG_PPC64 */
 #ifdef CONFIG_KASAN
 	address_markers[i++].start_address = KASAN_SHADOW_START;
 	address_markers[i++].start_address = KASAN_SHADOW_END;
 #endif
+#endif /* CONFIG_PPC64 */
 }
 
 static int ptdump_show(struct seq_file *m, void *v)
...
@@ -104,6 +104,7 @@ config PPC_BOOK3S_64
 	select HAVE_MOVE_PUD
 	select IRQ_WORK
 	select PPC_64S_HASH_MMU if !PPC_RADIX_MMU
+	select KASAN_VMALLOC if KASAN
 
 config PPC_BOOK3E_64
 	bool "Embedded processors"
...
 # SPDX-License-Identifier: GPL-2.0
+
+# nothing that deals with real mode is safe to KASAN
+# in particular, idle code runs a bunch of things in real mode
+KASAN_SANITIZE_idle.o := n
+KASAN_SANITIZE_pci-ioda.o := n
+# pnv_machine_check_early
+KASAN_SANITIZE_setup.o := n
+
 obj-y			+= setup.o opal-call.o opal-wrappers.o opal.o opal-async.o
 obj-y			+= idle.o opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o
 obj-y			+= rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o
...
@@ -34,4 +34,6 @@ obj-$(CONFIG_PPC_VAS)	+= vas.o vas-sysfs.o
 obj-$(CONFIG_ARCH_HAS_CC_PLATFORM)	+= cc_platform.o
 
+# nothing that operates in real mode is safe for KASAN
+KASAN_SANITIZE_ras.o := n
 KASAN_SANITIZE_kexec.o := n