Commit e0aceaae authored by Russell King's avatar Russell King Committed by Linus Torvalds

[PATCH] flush_cache_mm in zap_page_range

unmap_vmas() eventually calls tlb_start_vma(), where most architectures
flush caches as necessary.  That flush appears to make the
flush_cache_range() call in zap_page_range() redundant, so it can be
removed.
parent 9a72322d
@@ -601,7 +601,6 @@ void zap_page_range(struct vm_area_struct *vma,
 	lru_add_drain();
 	spin_lock(&mm->page_table_lock);
-	flush_cache_range(vma, address, end);
 	tlb = tlb_gather_mmu(mm, 0);
 	unmap_vmas(&tlb, mm, vma, address, end, &nr_accounted);
 	tlb_finish_mmu(tlb, address, end);
...