Commit a8318c13 authored by Gustavo Romero, committed by Michael Ellerman

powerpc/tm: Fix restoring FP/VMX facility incorrectly on interrupts

When in userspace and MSR FP=0, the hardware FP state is unrelated to
the current process. This extends to transactions: if tbegin is run
with FP=0, the hardware checkpointed FP state will also be unrelated
to the current process. Because of this, we need to ensure the
hardware checkpoint is updated with the correct state before we
enable FP for this process.
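
For context, the userspace pattern in question looks roughly like
this (an illustrative sketch using GCC's powerpc HTM builtins and
-mhtm, not code from the commit):

	#include <htmintrin.h>	/* GCC powerpc HTM builtins */

	void tm_fp_example(void)
	{
		if (__builtin_tbegin(0)) {
			/* Transactional path: tbegin checkpointed all
			 * register state, including FP. If the kernel
			 * had lazily disabled FP (MSR FP=0), that
			 * checkpointed FP state is whatever was left
			 * in the hardware, not this process's data. */
			__builtin_tend(0);
		} else {
			/* Rollback path: the FP registers are restored
			 * from the tbegin checkpoint. */
		}
	}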

Unfortunately we get this wrong when returning to a process from a
hardware interrupt. A process that starts a transaction with FP=0 can
take an interrupt. When the kernel returns to that process, we change
to FP=1 but without updating the hardware checkpointed FP state. If
the transaction is then rolled back, the FP registers now contain the
wrong state.

The process looks like this:
   Userspace:                      Kernel

               Start userspace
                with MSR FP=0 TM=1
                  < -----
   ...
   tbegin
   bne
               Hardware interrupt
                   ---- >
                                    <do_IRQ...>
                                    ....
                                    ret_from_except
                                      restore_math()
                                        /* sees FP=0 */
                                        restore_fp()
                                          tm_active_with_fp()
                                            /* sees FP=1 (Incorrect) */
                                          load_fp_state()
                                        FP = 0 -> 1
                  < -----
               Return to userspace
                 with MSR TM=1 FP=1
                 with junk in the FP TM checkpoint
   TM rollback
   reads FP junk

When returning from the hardware exception, tm_active_with_fp() is
incorrectly making restore_fp() call load_fp_state(), which sets
FP=1.

The fix is to remove tm_active_with_fp().

tm_active_with_fp() is attempting to handle the case where FP state
has been changed inside a transaction. In this case the checkpointed
and transactional FP states are different and hence we must restore
the FP state (i.e. we can't do lazy FP restore inside a transaction
that's used FP). It's safe to remove tm_active_with_fp() as this case
is handled by restore_tm_state(). restore_tm_state() detects if FP
has been used inside a transaction and will set load_fp and call
restore_math() to ensure the FP state (checkpointed and
transactional) is restored.
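
For reference, here is a simplified sketch of the restore_tm_state()
logic the message relies on, paraphrased from the kernel around this
commit (not the verbatim source; the VSX feature check and TIF flag
handling are omitted):

	/* Which facilities were on at the checkpoint but are off in
	 * the live MSR? Those must be fully restored, not lazily. */
	void restore_tm_state(struct pt_regs *regs)
	{
		unsigned long msr_diff;

		if (!MSR_TM_ACTIVE(regs->msr))
			return;

		msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
		msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;

		if (msr_diff & MSR_FP)
			current->thread.load_fp = 1;
		if (msr_diff & MSR_VEC)
			current->thread.load_vec = 1;

		restore_math(regs);	/* restores ckpt + live state */

		regs->msr |= msr_diff;
	}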

This is a data integrity problem for the current process as the FP
registers are corrupted. It's also a security problem as the FP
registers from one process may be leaked to another.

Similarly for VMX.

A simple testcase to replicate this will be posted to
tools/testing/selftests/powerpc/tm/tm-poison.c
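
In outline, such a test can work as follows (a hypothetical sketch of
the approach; the helper names and the sleep-based lazy-disable
assumption are illustrative, not the posted selftest):

	#include <htmintrin.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Dump FP register fr31 to memory so it can be inspected. */
	static double read_fr31(void)
	{
		double val;
		asm volatile("stfd 31, %0" : "=m"(val));
		return val;
	}

	int main(void)
	{
		double before = read_fr31();

		/* Assumption: after sleeping/context switching, the
		 * kernel may run us with FP lazily disabled, and the
		 * hardware FP registers may hold another process's
		 * (poisoned) values. */
		sleep(1);

		if (__builtin_tbegin(0)) {
			/* Spin until a hardware interrupt aborts the
			 * transaction and forces a rollback. */
			for (;;)
				;
		}

		/* Rollback path: with the bug, fr31 now holds junk
		 * from the stale checkpoint instead of our value. */
		if (read_fr31() != before)
			printf("FAIL: fr31 changed across rollback\n");
		return 0;
	}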

This fixes CVE-2019-15031.

Fixes: a7771176 ("powerpc: Don't enable FP/Altivec if not checkpointed")
Cc: stable@vger.kernel.org # 4.15+
Signed-off-by: Gustavo Romero <gromero@linux.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190904045529.23002-2-gromero@linux.vnet.ibm.com
parent 8205d5d9
arch/powerpc/kernel/process.c
@@ -101,21 +101,8 @@ static void check_if_tm_restore_required(struct task_struct *tsk)
 	}
 }
 
-static bool tm_active_with_fp(struct task_struct *tsk)
-{
-	return MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
-		(tsk->thread.ckpt_regs.msr & MSR_FP);
-}
-
-static bool tm_active_with_altivec(struct task_struct *tsk)
-{
-	return MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
-		(tsk->thread.ckpt_regs.msr & MSR_VEC);
-}
 #else
 static inline void check_if_tm_restore_required(struct task_struct *tsk) { }
-static inline bool tm_active_with_fp(struct task_struct *tsk) { return false; }
-static inline bool tm_active_with_altivec(struct task_struct *tsk) { return false; }
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
 
 bool strict_msr_control;
@@ -252,7 +239,7 @@ EXPORT_SYMBOL(enable_kernel_fp);
 
 static int restore_fp(struct task_struct *tsk)
 {
-	if (tsk->thread.load_fp || tm_active_with_fp(tsk)) {
+	if (tsk->thread.load_fp) {
 		load_fp_state(&current->thread.fp_state);
 		current->thread.load_fp++;
 		return 1;
@@ -334,8 +321,7 @@ EXPORT_SYMBOL_GPL(flush_altivec_to_thread);
 
 static int restore_altivec(struct task_struct *tsk)
 {
-	if (cpu_has_feature(CPU_FTR_ALTIVEC) &&
-	    (tsk->thread.load_vec || tm_active_with_altivec(tsk))) {
+	if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) {
 		load_vr_state(&tsk->thread.vr_state);
 		tsk->thread.used_vr = 1;
 		tsk->thread.load_vec++;