Commit afcd93a4 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] slab alignment fixes

From: Manfred Spraul <manfred@colorfullife.com>

Below is a patch that redefines the kmem_cache_create `align' argument (a caller-side usage sketch follows the diff below):

- align not zero: use the specified alignment.  I think values smaller than
  sizeof(void*) will work, even on archs with strict alignment requirements
  (or at least slab shouldn't crash; obviously the user must handle the
  alignment properly).

- align zero:
* debug on: align to sizeof(void*)
* debug off, SLAB_HWCACHE_ALIGN clear: align to sizeof(void*)
* debug off, SLAB_HWCACHE_ALIGN set: align to the smaller of
   - cache_line_size()
   - the object size, rounded up to the next power of two.
  Slab has never honored cache alignment for tiny objects: otherwise the
  32-byte kmalloc cache would end up with 128-byte objects.  The resulting
  defaulting rule is sketched right after this list.
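
To make the align==0 defaulting concrete, here is a minimal user-space sketch
of the rule above.  It is only an illustration: default_align() is not a slab
function, and EXAMPLE_CACHE_LINE_SIZE merely stands in for whatever
cache_line_size() reports on a given arch.

#include <stdio.h>

/* Hypothetical stand-in for the arch-provided cache_line_size(); 128 is just
 * an example value, not what every arch reports. */
#define EXAMPLE_CACHE_LINE_SIZE 128
#define BYTES_PER_WORD sizeof(void *)

/* Mirrors the defaulting rule: start at the cache line size and halve it
 * while the object would still fit in half of it. */
static size_t default_align(size_t size, int hwcache_align)
{
        size_t align;

        if (!hwcache_align)
                return BYTES_PER_WORD;

        align = EXAMPLE_CACHE_LINE_SIZE;
        while (size <= align / 2)
                align /= 2;
        return align;
}

int main(void)
{
        /* 32-byte objects get 32-byte alignment instead of a 128-byte slot;
         * larger objects keep the full cache-line alignment. */
        printf("size  32, HWCACHE_ALIGN -> %zu\n", default_align(32, 1));
        printf("size 200, HWCACHE_ALIGN -> %zu\n", default_align(200, 1));
        printf("size  32, no flag       -> %zu\n", default_align(32, 0));
        return 0;
}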

There is one additional point: right now slab uses ints for the bufctls.
Using short would save two bytes for each object.  Initially I had used
short, but davem objected, IIRC because some archs do not handle shorts
efficiently.
Should I allow arch overrides for the bufctls?  On i386, saving two bytes
might allow a few additional anon_vma objects in each page.
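
Purely to illustrate that question, one possible shape for such an override is
sketched below.  kmem_bufctl_t is the existing slab type name, but
ARCH_HAS_SHORT_BUFCTL is a made-up per-arch opt-in, not an existing kernel
macro.

/* Illustrative only: let an arch opt in to 2-byte bufctls.  The opt-in macro
 * is hypothetical; today slab unconditionally uses an int-sized type. */
#ifdef ARCH_HAS_SHORT_BUFCTL
typedef unsigned short kmem_bufctl_t;   /* 2 bytes per object */
#else
typedef unsigned int kmem_bufctl_t;     /* current behaviour */
#endif

Each on-slab object has one kmem_bufctl_t in the slab management area, so a
smaller type saves two bytes per object, which is where the extra anon_vma
objects per page on i386 would come from.
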
parent 01305153
@@ -1150,14 +1150,22 @@ kmem_cache_create (const char *name, size_t size, size_t align,
                 BUG();
         if (align) {
+                /* minimum supported alignment: */
+                if (align < BYTES_PER_WORD)
+                        align = BYTES_PER_WORD;
                 /* combinations of forced alignment and advanced debugging is
                  * not yet implemented.
                  */
                 flags &= ~(SLAB_RED_ZONE|SLAB_STORE_USER);
+        } else {
+                if (flags & SLAB_HWCACHE_ALIGN) {
+                        /* Default alignment: as specified by the arch code.
+                         * Except if an object is really small, then squeeze multiple
+                         * into one cacheline.
+                         */
+                        align = cache_line_size();
+                        while (size <= align/2)
+                                align /= 2;
+                } else {
+                        align = BYTES_PER_WORD;
+                }
         }
         /* Get cache's description obj. */
@@ -1210,15 +1218,6 @@ kmem_cache_create (const char *name, size_t size, size_t align,
          */
                 flags |= CFLGS_OFF_SLAB;
-        if (!align) {
-                /* Default alignment: compile time specified l1 cache size.
-                 * Except if an object is really small, then squeeze multiple
-                 * into one cacheline.
-                 */
-                align = cache_line_size();
-                while (size <= align/2)
-                        align /= 2;
-        }
         size = ALIGN(size, align);
         /* Cal size (in pages) of slabs, and the num of objs per slab.
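
To show the changed convention from the caller's side, here is a hedged
in-kernel usage sketch.  The struct, cache name, and init function are made
up; only kmem_cache_create() and its 2.6-era six-argument signature come from
the tree.

#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>

struct my_obj {                         /* hypothetical object type */
        void *ptr;
        int flags;
};

static kmem_cache_t *my_cache;

static int __init my_cache_init(void)
{
        /* align == 0 plus SLAB_HWCACHE_ALIGN: slab now picks the default
         * described above (cache line size, or less for tiny objects).
         * A non-zero align would instead be honoured directly, silently
         * raised to at least BYTES_PER_WORD. */
        my_cache = kmem_cache_create("my_cache", sizeof(struct my_obj),
                                     0, SLAB_HWCACHE_ALIGN, NULL, NULL);
        return my_cache ? 0 : -ENOMEM;
}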