Commit 855e57a1 authored by Christoph Hellwig, committed by Linus Torvalds

mm: remove unmap_vmap_area

This function has just a single caller; open-code it there.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Cc: Gao Xiang <xiang@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Link: http://lkml.kernel.org/r/20200414131348.444715-18-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ed1f324c
mm/vmalloc.c
@@ -1248,14 +1248,6 @@ int unregister_vmap_purge_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
 
-/*
- * Clear the pagetable entries of a given vmap_area
- */
-static void unmap_vmap_area(struct vmap_area *va)
-{
-	unmap_kernel_range_noflush(va->va_start, va->va_end - va->va_start);
-}
-
 /*
  * lazy_max_pages is the maximum amount of virtual address space we gather up
  * before attempting to purge with a TLB flush.
@@ -1417,7 +1409,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 static void free_unmap_vmap_area(struct vmap_area *va)
 {
 	flush_cache_vunmap(va->va_start, va->va_end);
-	unmap_vmap_area(va);
+	unmap_kernel_range_noflush(va->va_start, va->va_end - va->va_start);
 	if (debug_pagealloc_enabled_static())
 		flush_tlb_kernel_range(va->va_start, va->va_end);
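For reference, a sketch of how the caller reads after this patch, assembled from the hunk above (only the lines visible in the hunk are shown; the remainder of free_unmap_vmap_area() lies outside the hunk and is unchanged):

static void free_unmap_vmap_area(struct vmap_area *va)
{
	flush_cache_vunmap(va->va_start, va->va_end);
	/* was unmap_vmap_area(va); the helper's single line is now open-coded */
	unmap_kernel_range_noflush(va->va_start, va->va_end - va->va_start);
	if (debug_pagealloc_enabled_static())
		flush_tlb_kernel_range(va->va_start, va->va_end);
	/* ... rest of the function not shown in the hunk ... */
}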