Commit d2e527f0 authored by Vlastimil Babka

mm/slab: remove HAVE_HARDENED_USERCOPY_ALLOCATOR

With SLOB removed, both remaining allocators support hardened usercopy,
so remove the config and associated #ifdef.
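
For context, a minimal userspace-style sketch of the invariant that
__check_heap_object() backs (the struct slab here is a simplified
stand-in for illustration, not the kernel's real layout, and
check_heap_object() models, not reproduces, the kernel function):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>
	#include <stdlib.h>

	/* Simplified stand-in for the kernel's struct slab bookkeeping. */
	struct slab {
		const char *base;   /* start of the slab's memory        */
		size_t obj_size;    /* size of each object in the cache  */
	};

	/*
	 * Models what __check_heap_object() enforces: a copy starting
	 * inside a slab object must not run past that object's end.
	 * With SLOB gone, both SLAB and SLUB implement this check, so
	 * the empty fallback stub and the config gating it can go away.
	 */
	static void check_heap_object(const void *ptr, unsigned long n,
				      const struct slab *slab, bool to_user)
	{
		size_t offset = (const char *)ptr - slab->base;

		if (offset % slab->obj_size + n > slab->obj_size) {
			fprintf(stderr,
				"usercopy: kernel memory %s attempt (%lu bytes)\n",
				to_user ? "exposure" : "overwrite", n);
			abort();
		}
	}

	int main(void)
	{
		char pool[256];
		struct slab s = { .base = pool, .obj_size = 64 };

		check_heap_object(pool + 64, 64, &s, true); /* fits one object: ok */
		check_heap_object(pool + 96, 64, &s, true); /* overruns: aborts    */
		return 0;
	}
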
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
parent 8040cbf5
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -221,7 +221,6 @@ choice
 config SLAB
 	bool "SLAB"
 	depends on !PREEMPT_RT
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
 	  well in all environments. It organizes cache hot objects in
@@ -229,7 +228,6 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  SLUB is a slab allocator that minimizes cache line usage
 	  instead of managing queues of cached objects (SLAB approach).
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -832,16 +832,8 @@ struct kmem_obj_info {
 void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
 #endif
 
-#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n,
 			 const struct slab *slab, bool to_user);
-#else
-static inline
-void __check_heap_object(const void *ptr, unsigned long n,
-			 const struct slab *slab, bool to_user)
-{
-}
-#endif
 
 #ifdef CONFIG_SLUB_DEBUG
 void skip_orig_size_check(struct kmem_cache *s, const void *object);
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -127,16 +127,8 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
-config HAVE_HARDENED_USERCOPY_ALLOCATOR
-	bool
-	help
-	  The heap allocator implements __check_heap_object() for
-	  validating memory ranges against heap object sizes in
-	  support of CONFIG_HARDENED_USERCOPY.
-
 config HARDENED_USERCOPY
 	bool "Harden memory copies between kernel and userspace"
-	depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
 	imply STRICT_DEVMEM
 	help
 	  This option checks for obviously wrong memory regions when