BugLink: https://bugs.launchpad.net/bugs/1868629

commit fd4d9c7d upstream.

When kmem_cache_alloc_bulk() attempts to allocate N objects from a
percpu freelist of length M, and N > M > 0, it will first remove the M
elements from the percpu freelist, then call ___slab_alloc() to
allocate the next element and repopulate the percpu freelist.
___slab_alloc() can re-enable IRQs via allocate_slab(), so the TID
must be bumped before ___slab_alloc() to properly commit the freelist
head change.

Fix it by unconditionally bumping c->tid when entering the slowpath.

Cc: stable@vger.kernel.org
Fixes: ebe909e0 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
Signed-off-by: Kelsey Skunberg <kelsey.skunberg@canonical.com>
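
For readers tracing the change, the sketch below shows the shape of the
allocation loop in mm/slub.c's kmem_cache_alloc_bulk() with the bump in
place. It is a condensed paraphrase of the upstream fix, not the
verbatim diff: debug hooks (e.g. maybe_wipe_obj_freeptr()) and the
error path are omitted, and it is a kernel-internal fragment rather
than standalone code. next_tid(), ___slab_alloc(), get_freepointer(),
and this_cpu_ptr() are real slub/percpu internals.

    local_irq_disable();
    c = this_cpu_ptr(s->cpu_slab);

    for (i = 0; i < size; i++) {
            void *object = c->freelist;

            if (unlikely(!object)) {
                    /*
                     * Earlier iterations may have popped objects off
                     * c->freelist without bumping c->tid. Since
                     * ___slab_alloc() can re-enable IRQs via
                     * allocate_slab(), bump the TID now to commit the
                     * freelist head change before anything else can
                     * run the fastpath on this CPU.
                     */
                    c->tid = next_tid(c->tid);      /* the fix */

                    object = ___slab_alloc(s, flags, NUMA_NO_NODE,
                                           _RET_IP_, c);
                    if (unlikely(!object))
                            goto error;

                    /* The slowpath may have re-enabled IRQs and moved
                     * us to another CPU; reload the percpu pointer. */
                    c = this_cpu_ptr(s->cpu_slab);
                    p[i] = object;
                    continue;
            }
            c->freelist = get_freepointer(s, object);
            p[i] = object;
    }
    /* One bump here commits the whole bulk operation on the fastpath. */
    c->tid = next_tid(c->tid);
    local_irq_enable();

Without the added bump, a fastpath that sampled (freelist, tid) before
the bulk removal could still find a matching pair after the slowpath's
IRQ-enabled window and commit a stale cmpxchg, corrupting the freelist
(the ABA problem the TID exists to prevent).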