Commit 9cf7a111 authored by Abel Wu, committed by Linus Torvalds

mm/slub: make add_full() condition more explicit

Commit a4d3f891 ("slub: remove useless kmem_cache_debug() before
remove_full()") is incomplete, as it didn't handle the add_full() part.

This patch checks for SLAB_STORE_USER instead of kmem_cache_debug(), since
that should be the only context in which we need the list_lock for
add_full().
Signed-off-by: Abel Wu <wuyun.wu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Liu Xiang <liu.xiang6@zte.com.cn>
Link: https://lkml.kernel.org/r/20200811020240.1231-1-wuyun.wu@huawei.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9f986d99
@@ -2245,7 +2245,8 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 			}
 		} else {
 			m = M_FULL;
-			if (kmem_cache_debug(s) && !lock) {
+#ifdef CONFIG_SLUB_DEBUG
+			if ((s->flags & SLAB_STORE_USER) && !lock) {
 				lock = 1;
 				/*
 				 * This also ensures that the scanning of full
@@ -2254,6 +2255,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 				 */
 				spin_lock(&n->list_lock);
 			}
+#endif
 		}
 		if (l != m) {