Commit 044b622e authored by Hugh Dickins, committed by Kleber Sacilotto de Souza

kaiser: asm/tlbflush.h handle noPGE at lower level

I found asm/tlbflush.h too twisty, and think it safer not to avoid
__native_flush_tlb_global_irq_disabled() in the kaiser_enabled case,
but instead let it handle kaiser_enabled along with cr3: it can just
use __native_flush_tlb() for that, no harm in re-disabling preemption.
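
For reference, here is roughly how __native_flush_tlb_global_irq_disabled()
ends up looking with this change applied (a sketch reconstructed from the
hunks below, not copied verbatim from the tree; the cpu_tlbstate.cr4 cache
and the native_write_cr4()/__native_flush_tlb() helpers are the usual
4.4-era ones):

	static inline void __native_flush_tlb_global_irq_disabled(void)
	{
		unsigned long cr4;

		cr4 = this_cpu_read(cpu_tlbstate.cr4);
		if (cr4 & X86_CR4_PGE) {
			/* clear PGE: flushes everything, globals included */
			native_write_cr4(cr4 & ~X86_CR4_PGE);
			/* restore PGE as it was before */
			native_write_cr4(cr4);
		} else {
			/* noPGE (and kaiser): flush with cr3 instead, which
			 * also lets kaiser queue the user PCID flush */
			__native_flush_tlb();
		}
	}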

(This is not the same change as Kirill and Dave have suggested for
upstream, flipping PGE in cr4: that's neat, but needs a cpu_has_pge
check; cr3 is enough for kaiser, and thought to be cheaper than cr4.)
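
(A sketch of that cr4 alternative, for illustration only, roughly as it was
proposed for upstream; this is exactly the variant that would need the
cpu_has_pge guard:)

	/* would have to be guarded by cpu_has_pge */
	unsigned long cr4 = this_cpu_read(cpu_tlbstate.cr4);
	/* toggle PGE: clearing it flushes all TLB entries, globals included */
	native_write_cr4(cr4 ^ X86_CR4_PGE);
	/* write the old value back: flushes again and restores PGE */
	native_write_cr4(cr4);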

Also delete the X86_FEATURE_INVPCID invpcid_flush_all_nonglobals()
preference from __native_flush_tlb(): unlike the invpcid_flush_all()
preference in __native_flush_tlb_global(), it's not seen in upstream
4.14, and was recently reported to be surprisingly slow.
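
With both changes, __native_flush_tlb() itself reduces to roughly the
following (again a sketch of the 4.4-era kaiser tree, not verbatim): no
INVPCID fast path, just the preemption-safe cr3 reload, with kaiser
deferring the user PCID flush to the return-to-user path:

	static inline void __native_flush_tlb(void)
	{
		/*
		 * If current->mm == NULL then we borrow a mm which may change
		 * during a task switch, so we must not be preempted while we
		 * write CR3 back; nesting inside an irq-disabled caller such
		 * as __native_flush_tlb_global_irq_disabled() is harmless.
		 */
		preempt_disable();
		/* exact guard in the tree may differ; sketch only */
		if (kaiser_enabled && this_cpu_has(X86_FEATURE_PCID))
			kaiser_flush_tlb_on_return_to_user();
		native_write_cr3(native_read_cr3());
		preempt_enable();
	}
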
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

CVE-2017-5754
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
parent 25cc746c
@@ -153,14 +153,6 @@ static inline void kaiser_flush_tlb_on_return_to_user(void)
 
 static inline void __native_flush_tlb(void)
 {
-	if (this_cpu_has(X86_FEATURE_INVPCID)) {
-		/*
-		 * Note, this works with CR4.PCIDE=0 or 1.
-		 */
-		invpcid_flush_all_nonglobals();
-		return;
-	}
-
 	/*
 	 * If current->mm == NULL then we borrow a mm which may change during a
 	 * task switch and therefore we must not be preempted while we write CR3
@@ -184,11 +176,8 @@ static inline void __native_flush_tlb_global_irq_disabled(void)
 		/* restore PGE as it was before */
 		native_write_cr4(cr4);
 	} else {
-		/*
-		 * x86_64 microcode update comes this way when CR4.PGE is not
-		 * enabled, and it's safer for all callers to allow this case.
-		 */
-		native_write_cr3(native_read_cr3());
+		/* do it with cr3, letting kaiser flush user PCID */
+		__native_flush_tlb();
 	}
 }
 
@@ -196,12 +185,6 @@ static inline void __native_flush_tlb_global(void)
 {
 	unsigned long flags;
 
-	if (kaiser_enabled) {
-		/* Globals are not used at all */
-		__native_flush_tlb();
-		return;
-	}
-
 	if (this_cpu_has(X86_FEATURE_INVPCID)) {
 		/*
 		 * Using INVPCID is considerably faster than a pair of writes
@@ -255,11 +238,7 @@ static inline void __native_flush_tlb_single(unsigned long addr)
 
 static inline void __flush_tlb_all(void)
 {
-	if (cpu_has_pge)
-		__flush_tlb_global();
-	else
-		__flush_tlb();
-
+	__flush_tlb_global();
 	/*
 	 * Note: if we somehow had PCID but not PGE, then this wouldn't work --
 	 * we'd end up flushing kernel translations for the current ASID but