Commit 1ed7ce57 authored by Shakeel Butt, committed by Linus Torvalds

slub: fix kmalloc_pagealloc_invalid_free unit test

The unit test kmalloc_pagealloc_invalid_free makes sure that, for a
higher-order slub allocation which goes to the page allocator, the free
is called with the correct address, i.e. the virtual address of the
head page.
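
For reference, the test (lightly paraphrased from lib/test_kasan.c; the
exact assertions may differ across kernel versions) allocates an object
large enough to bypass the kmalloc caches and then frees it through a
deliberately offset pointer, expecting KASAN to report the invalid free:

	static void kmalloc_pagealloc_invalid_free(struct kunit *test)
	{
		char *ptr;
		size_t size = KMALLOC_MAX_CACHE_SIZE + 10; /* forces the page allocator path */

		ptr = kmalloc(size, GFP_KERNEL);
		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

		/* ptr + 1 is not the head-page address: KASAN must flag this free */
		KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
	}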

Commit f227f0fa ("slub: fix unreclaimable slab stat for bulk free")
unified the free code paths for page allocator based slub allocations
but instead of using the address passed by the caller, it extracted the
address from the page.  Thus making the unit test
kmalloc_pagealloc_invalid_free moot.  So, fix this by using the address
passed by the caller.
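
To see why deriving the address from the page hides the bug, here is a
sketch, simplified from the kfree() path in mm/slub.c with the
surrounding checks omitted, assuming the caller passed an invalid
pointer into a page-allocator-backed object:

	void kfree(const void *x)
	{
		/*
		 * virt_to_head_page() rounds any address inside the compound
		 * page down to its head page, so it succeeds even for
		 * x == head + 1.
		 */
		struct page *page = virt_to_head_page(x);

		if (unlikely(!PageSlab(page))) {
			/*
			 * Pre-fix behaviour was effectively
			 *   kfree_hook(page_address(page));
			 * which always hands KASAN the valid head address, so
			 * the invalid free goes unreported.  Passing x itself
			 * restores the check.
			 */
			free_nonslab_page(page, (void *)x);
			return;
		}
		...
	}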

Should we fix this? I think yes, because developers expect KASAN to
catch these kinds of programming bugs.

Link: https://lkml.kernel.org/r/20210802180819.1110165-1-shakeelb@google.com
Fixes: f227f0fa ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 340caf17
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	kfree_hook(page_address(page));
+	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page);
+			free_nonslab_page(page, object);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4258,7 +4258,7 @@ void kfree(const void *x)
 
 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page);
+		free_nonslab_page(page, object);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);