Commit 1109a5d9 authored by Matthew Wilcox (Oracle), committed by Kees Cook

usercopy: Remove HARDENED_USERCOPY_PAGESPAN

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220110231530.665970-5-willy@infradead.org
parent ab502103
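For context on the removal below: check_page_span() could only group pages that belong to a compound allocation, and a multi-page allocation made without __GFP_COMP provides no such grouping. A minimal kernel-style sketch of that ambiguity (illustrative only; the helper name is invented, not from the patch):

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustration only: a two-page buffer allocated without __GFP_COMP is
 * two independent-looking order-0 pages. virt_to_head_page() on the
 * second page does not point back at the first, so a copy that crosses
 * the page boundary is indistinguishable from one that overflows into
 * an unrelated neighbouring allocation.
 */
static void *alloc_two_plain_pages(void)
{
	struct page *page = alloc_pages(GFP_KERNEL, 1);	/* no __GFP_COMP */

	return page ? page_address(page) : NULL;
}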
...
@@ -158,64 +158,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
...
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - folio_address(folio);
 
 		if (offset + n > folio_size(folio))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, folio_page(folio, 0), to_user);
 	}
 }
...
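The bounds check that survives in check_heap_object() above relies on the folio's size covering the whole allocation. A simplified restatement of that test as a standalone predicate (the wrapper function is a sketch, not code from the patch):

#include <linux/mm.h>

/*
 * Sketch of the surviving check: the copy must end inside the folio
 * backing the object. For a compound allocation, folio_size() spans
 * every page of the allocation, so the object is bounded as a whole
 * rather than walked page by page.
 */
static inline bool copy_within_folio(struct folio *folio,
				     const void *ptr, unsigned long n)
{
	unsigned long offset = ptr - folio_address(folio);

	return offset + n <= folio_size(folio);
}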
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on BROKEN
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
...
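As an illustration of the bug class the HARDENED_USERCOPY help text describes, consider a hypothetical driver read path (all names invented) where a user-supplied length is passed straight to copy_to_user():

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical heap object and handler, for illustration only. */
struct reply {
	u32 len;
	char data[64];
};

static long leaky_read(struct reply *r, char __user *buf, size_t count)
{
	/*
	 * If count exceeds sizeof(r->data), copy_to_user() would read
	 * past the end of the heap object. With HARDENED_USERCOPY, the
	 * hardened copy path rejects the oversized range instead of
	 * leaking adjacent heap memory.
	 */
	return copy_to_user(buf, r->data, count) ? -EFAULT : 0;
}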