Commit fc8d8620 authored by Stanislaw Gruszka, committed by Linus Torvalds

slub: min order when debug_guardpage_minorder > 0

Disable slub debug facilities and allocate slabs at minimal order when
debug_guardpage_minorder > 0, to increase the probability of catching
random memory corruption via a CPU exception.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Stanislaw Gruszka <sgruszka@redhat.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent c6968e73
@@ -3654,6 +3654,9 @@ void __init kmem_cache_init(void)
 	struct kmem_cache *temp_kmem_cache_node;
 	unsigned long kmalloc_size;
 
+	if (debug_guardpage_minorder())
+		slub_max_order = 0;
+
 	kmem_size = offsetof(struct kmem_cache, node) +
 				nr_node_ids * sizeof(struct kmem_cache_node *);
...