Commit 9f4d1ba1 authored by Andy Lutomirski, committed by Greg Kroah-Hartman

x86/mm: Make flush_tlb_mm_range() more predictable

commit ce27374f upstream.

I'm about to rewrite the function almost completely, but first I
want to get a functional change out of the way.  Currently, if
flush_tlb_mm_range() does not flush the local TLB at all, it will
never do individual page flushes on remote CPUs.  This seems to be
an accident, and preserving it will be awkward.  Let's change it
first so that any regressions in the rewrite will be easier to
bisect and so that the rewrite can attempt to change no visible
behavior at all.

The fix is simple: we can simply avoid short-circuiting the
calculation of base_pages_to_flush.

As a side effect, this also eliminates a potential corner case: if
tlb_single_page_flush_ceiling == TLB_FLUSH_ALL, flush_tlb_mm_range()
could have ended up flushing the entire address space one page at a
time.
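
As a condensed sketch of the resulting control flow (the actual change is in the diff below), the computation now runs before the current->active_mm check, so the local and remote paths see the same base_pages_to_flush value:

	/*
	 * Condensed sketch of the reordered logic; the real code is in
	 * the patch below.  Computing base_pages_to_flush up front means
	 * the remote-flush path sees the page count even when the local
	 * TLB is not flushed, and applying the ceiling check here removes
	 * the corner case where tlb_single_page_flush_ceiling ==
	 * TLB_FLUSH_ALL caused a page-at-a-time flush of the whole
	 * address space.
	 */
	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;

	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
	if (base_pages_to_flush > tlb_single_page_flush_ceiling)
		base_pages_to_flush = TLB_FLUSH_ALL;

	if (current->active_mm != mm) {
		/* ... remote/lazy handling, unchanged ... */
	}

	if (base_pages_to_flush == TLB_FLUSH_ALL) {
		local_flush_tlb();	/* one full local flush */
	} else {
		/* flush [start, end) one page at a time with INVLPG */
	}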
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/4b29b771d9975aad7154c314534fec235618175a.1492844372.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 227d6f0e
@@ -292,6 +292,12 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 	unsigned long base_pages_to_flush = TLB_FLUSH_ALL;
 
 	preempt_disable();
+
+	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
+		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
+	if (base_pages_to_flush > tlb_single_page_flush_ceiling)
+		base_pages_to_flush = TLB_FLUSH_ALL;
+
 	if (current->active_mm != mm) {
 		/* Synchronize with switch_mm. */
 		smp_mb();
@@ -308,15 +314,11 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 		goto out;
 	}
 
-	if ((end != TLB_FLUSH_ALL) && !(vmflag & VM_HUGETLB))
-		base_pages_to_flush = (end - start) >> PAGE_SHIFT;
-
 	/*
 	 * Both branches below are implicit full barriers (MOV to CR or
 	 * INVLPG) that synchronize with switch_mm.
 	 */
-	if (base_pages_to_flush > tlb_single_page_flush_ceiling) {
-		base_pages_to_flush = TLB_FLUSH_ALL;
+	if (base_pages_to_flush == TLB_FLUSH_ALL) {
 		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
 		local_flush_tlb();
 	} else {