Commit 999d8795 authored by Ezequiel Garcia, committed by Pekka Enberg

mm/slob: Drop usage of page->private for storing page-sized allocations

This field was being used to store the allocation size so that it
could be retrieved by ksize(). However, it is bad practice to leave
a page unmarked as a slab page and then repurpose its fields.
There is no need to store the allocated size: ksize() can simply
return PAGE_SIZE << compound_order(page).
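Since a PAGE_SIZE-or-larger request is already rounded up to a whole
number of pages by alloc_pages(), the page order alone determines the
usable size. A minimal userspace sketch of that arithmetic (not kernel
code; PAGE_SHIFT and the local order_for() helper are assumed stand-ins
for the kernel's definitions and its get_order()):

#include <stdio.h>
#include <stddef.h>

#define PAGE_SHIFT 12			/* assumed; arch-dependent in the kernel */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Smallest order such that (PAGE_SIZE << order) >= size,
 * mimicking the kernel's get_order(). */
static unsigned int order_for(size_t size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	size_t sizes[] = { 4096, 5000, 8192, 20000 };

	for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int order = order_for(sizes[i]);

		/* What the patched ksize() reports for a page-sized allocation. */
		printf("kmalloc(%zu) -> order %u -> ksize() = %lu\n",
		       sizes[i], order, PAGE_SIZE << order);
	}
	return 0;
}

For example, a 5000-byte request lands on an order-1 compound page, so
the patched ksize() reports 8192: the caller may use the whole
rounded-up allocation, exactly as the page allocator granted it.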

Cc: Pekka Enberg <penberg@kernel.org>
Cc: Matt Mackall <mpm@selenic.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
parent 1b4f59e3
@@ -28,9 +28,8 @@
  * from kmalloc are prepended with a 4-byte header with the kmalloc size.
  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
  * alloc_pages() directly, allocating compound pages so the page order
- * does not have to be separately tracked, and also stores the exact
- * allocation size in page->private so that it can be used to accurately
- * provide ksize(). These objects are detected in kfree() because slob_page()
+ * does not have to be separately tracked.
+ * These objects are detected in kfree() because PageSlab()
  * is false for them.
  *
  * SLAB is emulated on top of SLOB by simply calling constructors and
@@ -455,11 +454,6 @@ __do_kmalloc_node(size_t size, gfp_t gfp, int node, unsigned long caller)
 		if (likely(order))
 			gfp |= __GFP_COMP;
 		ret = slob_new_pages(gfp, order, node);
-		if (ret) {
-			struct page *page;
-			page = virt_to_page(ret);
-			page->private = size;
-		}
 
 		trace_kmalloc_node(caller, ret,
 				   size, PAGE_SIZE << order, gfp, node);
@@ -514,18 +508,20 @@ EXPORT_SYMBOL(kfree);
 
 size_t ksize(const void *block)
 {
 	struct page *sp;
+	int align;
+	unsigned int *m;
 
 	BUG_ON(!block);
 	if (unlikely(block == ZERO_SIZE_PTR))
 		return 0;
 
 	sp = virt_to_page(block);
-	if (PageSlab(sp)) {
-		int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
-		unsigned int *m = (unsigned int *)(block - align);
-		return SLOB_UNITS(*m) * SLOB_UNIT;
-	} else
-		return sp->private;
+	if (unlikely(!PageSlab(sp)))
+		return PAGE_SIZE << compound_order(sp);
+
+	align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+	m = (unsigned int *)(block - align);
+	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
 EXPORT_SYMBOL(ksize);
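Two details make the new ksize() paths safe. For the large-object path,
__do_kmalloc_node() sets __GFP_COMP for any order greater than zero, so
the page is a compound page and compound_order() yields the true order;
for an order-0 page, compound_order() returns 0 and PAGE_SIZE << 0 is
simply PAGE_SIZE. For the small-object path, kmalloc() under SLOB
prepends a 4-byte header holding the requested size, which ksize()
reads back and rounds up to whole SLOB units. A userspace model of that
header scheme follows; SLOB_UNIT and MINALIGN are assumed example
values standing in for sizeof(slob_t) and max(ARCH_KMALLOC_MINALIGN,
ARCH_SLAB_MINALIGN), which vary by architecture and config.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SLOB_UNIT 2	/* assumed sizeof(slob_t); config-dependent in the kernel */
#define SLOB_UNITS(size) (((size) + SLOB_UNIT - 1) / SLOB_UNIT)
#define MINALIGN 8	/* assumed stand-in for max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN) */

/* Allocate with a size header prepended, as SLOB's kmalloc() does
 * for objects smaller than PAGE_SIZE. */
static void *toy_kmalloc(unsigned int size)
{
	char *base = malloc(MINALIGN + size);

	if (!base)
		return NULL;
	memcpy(base, &size, sizeof(size));	/* header lives at block - MINALIGN */
	return base + MINALIGN;
}

/* Read the header back and round up to whole SLOB units,
 * mirroring the slab-page branch of the patched ksize(). */
static size_t toy_ksize(const void *block)
{
	unsigned int size;

	memcpy(&size, (const char *)block - MINALIGN, sizeof(size));
	return SLOB_UNITS(size) * SLOB_UNIT;
}

int main(void)
{
	void *p = toy_kmalloc(13);

	if (!p)
		return 1;
	printf("requested 13, toy_ksize() = %zu\n", toy_ksize(p));	/* prints 14 */
	free((char *)p - MINALIGN);
	return 0;
}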