Commit c44d9b34 authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86: Invoke vendor's vcpu_after_set_cpuid() after all common updates

Move the call to kvm_x86_ops.vcpu_after_set_cpuid() to the very end of
kvm_vcpu_after_set_cpuid() to allow the vendor implementation to react
to changes made by the common code.  In the near future, this will be
used by VMX to update its CR4 guest/host masks to account for reserved
bits.  In the long term, SGX support will update the allowed XCR0 mask
for enclaves based on the vCPU's allowed XCR0.

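As an illustration of the kind of consumer this ordering enables, the sketch
below shows how a vendor vcpu_after_set_cpuid() hook could fold the freshly
computed vcpu->arch.cr4_guest_rsvd_bits into the VMCS CR4 guest/host mask.
This is a minimal sketch, not the actual follow-up VMX patch; the helper name
vmx_sketch_update_cr4_mask is hypothetical and the nested and !EPT special
cases are deliberately omitted.

	/*
	 * Illustrative sketch, not the real VMX change: because the vendor
	 * callback now runs last, vcpu->arch.cr4_guest_rsvd_bits has already
	 * been recomputed by the common code and can be consumed here.
	 */
	static void vmx_sketch_update_cr4_mask(struct kvm_vcpu *vcpu)
	{
		/* Bits KVM could let the guest own, minus this vCPU's reserved bits. */
		unsigned long owned = KVM_POSSIBLE_CR4_GUEST_BITS &
				      ~vcpu->arch.cr4_guest_rsvd_bits;

		vcpu->arch.cr4_guest_owned_bits = owned;

		/* Bits set in the mask are host-owned; guest writes to them VM-Exit. */
		vmcs_writel(CR4_GUEST_HOST_MASK, ~owned);
	}

The only ordering requirement is the one this patch creates: the common code
must populate cr4_guest_rsvd_bits before the vendor callback runs.
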
vcpu_after_set_cpuid() (nee kvm_update_cpuid()) was originally added by
commit 2acf923e ("KVM: VMX: Enable XSAVE/XRSTOR for guest"), and was
called separately after kvm_x86_ops.vcpu_after_set_cpuid() (nee
kvm_x86_ops->cpuid_update()).  There is no indication that the placement
of the common code updates after the vendor updates was anything more
than a "new function at the end" decision.

Inspection of the current code reveals no dependency on kvm_x86_ops'
vcpu_after_set_cpuid() in kvm_vcpu_after_set_cpuid() or any of its
helpers.  The bulk of the common code depends only on the guest's CPUID
configuration, kvm_mmu_reset_context() does not consume dynamic vendor
state, and there are no collisions between kvm_pmu_refresh() and VMX's
update of PT state.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200930041659.28181-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 6e1d849f
@@ -145,8 +145,6 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	struct kvm_lapic *apic = vcpu->arch.apic;
 	struct kvm_cpuid_entry2 *best;
 
-	kvm_x86_ops.vcpu_after_set_cpuid(vcpu);
-
 	best = kvm_find_cpuid_entry(vcpu, 1, 0);
 	if (best && apic) {
 		if (cpuid_entry_has(best, X86_FEATURE_TSC_DEADLINE_TIMER))
@@ -170,6 +168,9 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
 	kvm_pmu_refresh(vcpu);
 	vcpu->arch.cr4_guest_rsvd_bits =
 	    __cr4_reserved_bits(guest_cpuid_has, vcpu);
 
+	/* Invoke the vendor callback only after the above state is updated. */
+	kvm_x86_ops.vcpu_after_set_cpuid(vcpu);
+
 	kvm_x86_ops.update_exception_bitmap(vcpu);
 }