From: Rusty Russell <rusty@rustcorp.com.au>
From: Srivatsa Vaddagiri <vatsa@in.ibm.com>

Hit a couple of (cpu hotplug) races in the slab allocator during my tests.
Mostly they were caused by continuously loading/unloading fs/minix/minix.ko
while simultaneously offlining/onlining CPUs.  As part of its init and exit
routines, minix.ko creates/destroys caches, which led to several oopses.

1. kmem_cache_create

   In brief, kmem_cache_create does:

	a) calls enable_cpucache to create a per-cpu cache for all online CPUs
	b) adds the cache to the global list of caches

   These two steps are not done atomically, and that is what causes problems.
   For example, say that at the time of step a) CPU1 is not online.  Hence no
   per-cpu cache is created for CPU1 (cachep->array[1] is NULL).  However,
   CPU1 is not completely dead, in the sense that CPU_DEAD processing for it
   is not yet over.  By the time CPU_DEAD processing starts for CPU1, step b)
   is complete.  So cpuup_callback finds this cache and tries to free its
   per-cpu cache associated with CPU1.  In the process it dereferences a NULL
   pointer and dies.

2. kmem_cache_destroy

   In brief, kmem_cache_destroy does:

	a) deletes the cache from the global list of caches
	b) drains the per-cpu caches (drain_cpu_caches), which basically uses
	   smp_call_function to run do_drain on all online CPUs

   One possible race: say that CPU1 is coming up.  By the time CPU_UP_PREPARE
   is processed for CPU1, step a) is complete, so cpuup_callback does not
   allocate a per-cpu cache for the cache that is being destroyed.  However,
   by the time step b) runs, CPU1 is fully online (taking interrupts).  It
   receives the IPI, tries to drain its per-cpu cache (which is NULL) and dies
   there.

I think we need to serialize kmem_cache_create/destroy against CPU hotplug to
prevent these problems.  The patch below does that by taking the CPU hotplug
semaphore (which should be acceptable, since kmem_cache_create/destroy are not
used very frequently).
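For illustration only, here is a minimal sketch of the serialization described
above.  It assumes the lock_cpu_hotplug()/unlock_cpu_hotplug() interface and
the mm/slab.c internals of that era (cache_chain, enable_cpucache,
drain_cpu_caches); the helper kmem_cache_alloc_and_init and the surrounding
set-up/teardown are hypothetical stand-ins, not the actual patch:

	kmem_cache_t *
	kmem_cache_create(const char *name, size_t size, size_t align,
			  unsigned long flags,
			  void (*ctor)(void *, kmem_cache_t *, unsigned long),
			  void (*dtor)(void *, kmem_cache_t *, unsigned long))
	{
		kmem_cache_t *cachep;

		/* keep cpuup_callback (CPU_UP_PREPARE/CPU_DEAD) out until both
		 * steps below have completed */
		lock_cpu_hotplug();

		/* hypothetical helper standing in for the real set-up code */
		cachep = kmem_cache_alloc_and_init(name, size, align, flags,
						   ctor, dtor);

		enable_cpucache(cachep);	/* step a: per-cpu arrays for online CPUs */
		list_add(&cachep->next, &cache_chain);	/* step b: visible to cpuup_callback */

		unlock_cpu_hotplug();
		return cachep;
	}

	void kmem_cache_destroy(kmem_cache_t *cachep)
	{
		/* no CPU can come online between the unlink and the drain */
		lock_cpu_hotplug();

		list_del(&cachep->next);	/* step a: unlink from the global cache list */
		drain_cpu_caches(cachep);	/* step b: IPIs only reach CPUs that have arrays */

		unlock_cpu_hotplug();
		/* remaining teardown (freeing slabs and the cache itself) elided */
	}

With the hotplug semaphore held across both steps, cpuup_callback either sees
the cache after its per-cpu arrays exist, or does not see it at all, which
closes both windows described above.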