Commit f7329c08 authored by Nicolai Stange, committed by Stefan Bader

x86/KVM/VMX: Move the l1tf_flush_l1d test to vmx_l1d_flush()

Currently, vmx_vcpu_run() checks if l1tf_flush_l1d is set and invokes
vmx_l1d_flush() if so.

This test is unnecessary for the "always flush L1D" mode.

Move the check to vmx_l1d_flush()'s conditional mode code path.

Notes:
- vmx_l1d_flush() is likely to get inlined anyway, and thus there's no
  extra function call.

- This inverts the (static) branch prediction, but there had not been any
  explicit likely()/unlikely() annotations before, so it stays as is.
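
For illustration, here is a minimal standalone sketch of the reordered flag
handling (hypothetical userspace model, not the kernel code itself: "cond"
stands in for the vmx_l1d_flush_cond static key, "flush_flag" for
vcpu->arch.l1tf_flush_l1d). In conditional mode the flag is sampled and
cleared inside the flush helper, so "always" mode no longer pays for the
per-call test in the run loop:

    #include <stdbool.h>
    #include <stdio.h>

    struct vcpu_model {
            bool flush_flag;           /* models vcpu->arch.l1tf_flush_l1d */
            unsigned long flush_count; /* models vcpu->stat.l1d_flush */
    };

    static bool cond = true;           /* models the vmx_l1d_flush_cond key */

    static void l1d_flush(struct vcpu_model *vcpu)
    {
            if (cond) {
                    bool flush_l1d = vcpu->flush_flag;

                    /* Clear the flag; it would be set again from
                     * vcpu_run() or an unsafe VMEXIT handler. */
                    vcpu->flush_flag = false;
                    if (!flush_l1d)
                            return;
            }
            vcpu->flush_count++;       /* the real code flushes L1D here */
    }

    int main(void)
    {
            struct vcpu_model v = { .flush_flag = true };

            l1d_flush(&v);  /* flag set -> counts as a flush */
            l1d_flush(&v);  /* flag already cleared -> early return */
            printf("flushes: %lu\n", v.flush_count);  /* prints 1 */
            return 0;
    }

With cond set to false ("always" mode), every call falls straight through to
the flush, which is exactly why the test can be dropped from the caller.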
Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

CVE-2018-3620
CVE-2018-3646

[smb: Some minor context adjustments in second hunk]
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
parent f9a1bc63
@@ -8396,12 +8396,16 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
 	 * 'always'
 	 */
 	if (static_branch_likely(&vmx_l1d_flush_cond)) {
+		bool flush_l1d = vcpu->arch.l1tf_flush_l1d;
+
 		/*
 		 * Clear the flush bit, it gets set again either from
 		 * vcpu_run() or from one of the unsafe VMEXIT
 		 * handlers.
 		 */
 		vcpu->arch.l1tf_flush_l1d = false;
+		if (!flush_l1d)
+			return;
 	}
 
 	vcpu->stat.l1d_flush++;
@@ -8860,10 +8864,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	x86_spec_ctrl_set_guest(vcpu->arch.spec_ctrl, 0);
 
-	if (static_branch_unlikely(&vmx_l1d_should_flush)) {
-		if (vcpu->arch.l1tf_flush_l1d)
-			vmx_l1d_flush(vcpu);
-	}
+	if (static_branch_unlikely(&vmx_l1d_should_flush))
+		vmx_l1d_flush(vcpu);
 
 	vmx->__launched = vmx->loaded_vmcs->launched;
 	asm(