Commit ab502103 authored by Matthew Wilcox (Oracle), committed by Kees Cook

mm/usercopy: Detect large folio overruns

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN and convert it to use folios so it's
enabled for more people.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220110231530.665970-4-willy@infradead.org
parent 0aef499f
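In short, the patch replaces the old virt_to_head_page() comparison in check_page_span() with an explicit bounds check against the folio: the copy is rejected as soon as offset + n runs past folio_size(). Below is a minimal userspace sketch of that bounds test; struct folio_sketch and copy_within_folio() are hypothetical stand-ins for the kernel's folio_address()/folio_size() helpers, not kernel API.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for kernel state: a "folio" reduced to a base
 * address plus its total size (PAGE_SIZE << order). */
struct folio_sketch {
	const char *base;
	size_t size;
};

/* Mirrors the new check in check_heap_object(): the copy is allowed
 * only if [ptr, ptr + n) lies entirely inside the folio. */
static bool copy_within_folio(const struct folio_sketch *folio,
			      const char *ptr, size_t n)
{
	size_t offset = (size_t)(ptr - folio->base);

	return offset + n <= folio->size;
}

int main(void)
{
	char buf[4096 * 4];	/* pretend order-2 folio */
	struct folio_sketch folio = { buf, sizeof(buf) };

	printf("in-bounds copy ok: %d\n",
	       copy_within_folio(&folio, buf + 100, 200));		/* 1 */
	printf("overrun rejected:  %d\n",
	       copy_within_folio(&folio, buf + sizeof(buf) - 8, 64));	/* 0 */
	return 0;
}

Where this sketch returns false, the kernel code instead aborts the copy via usercopy_abort(), as the last hunk of the diff shows.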
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -164,7 +164,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -195,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -259,6 +253,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, folio_slab(folio), to_user);
+	} else if (folio_test_large(folio)) {
+		unsigned long offset = ptr - folio_address(folio);
+		if (offset + n > folio_size(folio))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, folio_page(folio, 0), to_user);