Commit f07116d7 authored by Uladzislau Rezki (Sony), committed by Linus Torvalds

mm/vmalloc: respect passed gfp_mask when doing preloading

Allocation functions should comply with the given gfp_mask as much as
possible.  The preallocation code in alloc_vmap_area does not follow that
pattern: it uses a hardcoded GFP_KERNEL.  In practice this makes little
difference, because vmalloc is not GFP_NOWAIT compliant in general
(e.g. page table allocations are GFP_KERNEL), but there is no reason to
spread that bad habit, and it is good to fix the antipattern.
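
To illustrate the pattern outside the kernel, here is a minimal userspace C
sketch of the idea: filter the caller-supplied gfp_mask through
GFP_RECLAIM_MASK once, then reuse the filtered mask for every internal slab
allocation, including the preload path, instead of hardcoding GFP_KERNEL
there.  All types, flag values, and helper names below (cache_alloc,
alloc_area, the bit values) are simplified stand-ins for the kernel
definitions, not the real API.

/* gfp_sketch.c -- illustrative only, NOT the kernel implementation. */
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int gfp_t;         /* stand-in for the kernel's gfp_t */

#define __GFP_RECLAIM  0x01u        /* stand-in flag bits, not real values */
#define __GFP_IO       0x02u
#define __GFP_FS       0x04u
#define GFP_RECLAIM_MASK (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_KERNEL       (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_NOWAIT       0x00u      /* no reclaim/IO/FS flags set */

/* Stand-in for kmem_cache_alloc_node(): shows which mask it received. */
static void *cache_alloc(const char *what, gfp_t mask)
{
	printf("%-9s allocated with mask 0x%x\n", what, mask);
	return malloc(1);
}

/* Sketch of the fixed alloc_vmap_area() flow. */
static void *alloc_area(gfp_t gfp_mask)
{
	/* Filter the caller's mask once, up front ... */
	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;

	void *va = cache_alloc("vmap_area", gfp_mask);

	/*
	 * ... and reuse the same filtered mask for the preload
	 * allocation.  The pre-fix code hardcoded GFP_KERNEL here,
	 * ignoring what the caller asked for.
	 */
	void *pva = cache_alloc("preload", gfp_mask);

	free(pva);
	return va;
}

int main(void)
{
	/* A GFP_NOWAIT caller no longer sees GFP_KERNEL in the preload path. */
	free(alloc_area(GFP_NOWAIT));
	free(alloc_area(GFP_KERNEL));
	return 0;
}

With GFP_NOWAIT the sketch prints mask 0x0 for both allocations, mirroring
how the patch keeps the preload allocation inside the caller's constraints.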

[mhocko@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20191016095438.12391-2-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 81f1ba58
@@ -1063,9 +1063,9 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		return ERR_PTR(-EBUSY);
 
 	might_sleep();
+	gfp_mask = gfp_mask & GFP_RECLAIM_MASK;
 
-	va = kmem_cache_alloc_node(vmap_area_cachep,
-			gfp_mask & GFP_RECLAIM_MASK, node);
+	va = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 	if (unlikely(!va))
 		return ERR_PTR(-ENOMEM);
 
@@ -1073,7 +1073,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	 * Only scan the relevant parts containing pointers to other objects
 	 * to avoid false negatives.
 	 */
-	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
+	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask);
 
 retry:
 	/*
@@ -1099,7 +1099,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 		 * Just proceed as it is. If needed "overflow" path
 		 * will refill the cache we allocate from.
 		 */
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
+		pva = kmem_cache_alloc_node(vmap_area_cachep, gfp_mask, node);
 
 	spin_lock(&vmap_area_lock);