Commit e3176036 authored by Tejun Heo

percpu: fix too low alignment restriction on UP

UP __alloc_percpu() triggers WARN_ON_ONCE() if the requested
alignment is larger than that of unsigned long long, which is too
small for cacheline-aligned allocations.  Bump it up to
SMP_CACHE_BYTES, which kmalloc() allocations generally guarantee.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Ingo Molnar <mingo@elte.hu>
parent d2b02615
@@ -156,7 +156,7 @@ static inline void *__alloc_percpu(size_t size, size_t align)
 	 * on it.  Larger alignment should only be used for module
 	 * percpu sections on SMP for which this path isn't used.
 	 */
-	WARN_ON_ONCE(align > __alignof__(unsigned long long));
+	WARN_ON_ONCE(align > SMP_CACHE_BYTES);
 	return kzalloc(size, GFP_KERNEL);
 }
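
A hypothetical caller sketch (not part of this commit) illustrating why the old check was too strict: allocating per-CPU storage for a cacheline-aligned structure requests SMP_CACHE_BYTES alignment, which exceeds __alignof__(unsigned long long) even though the UP kzalloc() fallback can satisfy it.  The struct and init function names below are made up for illustration.

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/percpu.h>

/*
 * Hypothetical structure; ____cacheline_aligned raises its alignment to
 * SMP_CACHE_BYTES (typically 32 or 64 bytes depending on the arch).
 */
struct my_counters {
	unsigned long hits;
	unsigned long misses;
} ____cacheline_aligned;

static struct my_counters __percpu *counters;

static int __init my_counters_init(void)
{
	/*
	 * alloc_percpu(type) expands to
	 * __alloc_percpu(sizeof(type), __alignof__(type)), so the
	 * requested align here is SMP_CACHE_BYTES.  Before this commit
	 * the UP fallback would WARN_ON_ONCE() on such a request.
	 */
	counters = alloc_percpu(struct my_counters);
	if (!counters)
		return -ENOMEM;
	return 0;
}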