Commit 3f2739bd authored by Sean Christopherson, committed by Paolo Bonzini

KVM: x86: Acquire SRCU read lock when handling fastpath MSR writes

Temporarily acquire kvm->srcu for read when potentially emulating WRMSR in
the VM-Exit fastpath handler, as several of the common helpers used during
emulation expect the caller to provide SRCU protection.  E.g. if the guest
is counting instructions retired, KVM will query the PMU event filter when
stepping over the WRMSR.

  dump_stack+0x85/0xdf
  lockdep_rcu_suspicious+0x109/0x120
  pmc_event_is_allowed+0x165/0x170
  kvm_pmu_trigger_event+0xa5/0x190
  handle_fastpath_set_msr_irqoff+0xca/0x1e0
  svm_vcpu_run+0x5c3/0x7b0 [kvm_amd]
  vcpu_enter_guest+0x2108/0x2580
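
For reference, the kvm_vcpu_srcu_read_{lock,unlock}() helpers used by this
fix are thin wrappers around the generic SRCU read-side API.  A minimal
sketch of the pattern, assuming the helper and field names from
include/linux/kvm_host.h (the in-tree versions also carry PROVE_RCU sanity
checks):

  /* Illustrative sketch; see include/linux/kvm_host.h for the real thing. */
  static inline void kvm_vcpu_srcu_read_lock(struct kvm_vcpu *vcpu)
  {
          /* Stash the SRCU index so the matching unlock can pass it back. */
          vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
  }

  static inline void kvm_vcpu_srcu_read_unlock(struct kvm_vcpu *vcpu)
  {
          srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
  }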

Alternatively, check_pmu_event_filter() could acquire kvm->srcu, but this
isn't the first bug of this nature, e.g. see commit 5c30e810 ("KVM:
SVM: Skip WRMSR fastpath on VM-Exit if next RIP isn't valid").  Providing
protection for the entirety of WRMSR emulation will allow reverting the
aforementioned commit, and will avoid having to play whack-a-mole when new
uses of SRCU-protected structures are inevitably added in common emulation
helpers.
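
For comparison, the narrower alternative mentioned above would have been a
one-off inside the PMU code.  A purely hypothetical sketch (the inner
__check_pmu_event_filter() helper and the body details are invented for
illustration; only the SRCU usage is the point):

  /* Hypothetical narrower fix, NOT what this patch does. */
  static bool check_pmu_event_filter(struct kvm_pmc *pmc)
  {
          struct kvm *kvm = pmc->vcpu->kvm;
          bool allowed;
          int idx;

          idx = srcu_read_lock(&kvm->srcu);
          allowed = __check_pmu_event_filter(pmc);  /* hypothetical inner helper */
          srcu_read_unlock(&kvm->srcu, idx);

          return allowed;
  }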

Fixes: dfdeda67 ("KVM: x86/pmu: Prevent the PMU from counting disallowed events")
Reported-by: Greg Thelen <gthelen@google.com>
Reported-by: Aaron Lewis <aaronlewis@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20230721224337.2335137-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent a062dad7
arch/x86/kvm/x86.c
@@ -2172,6 +2172,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	u64 data;
 	fastpath_t ret = EXIT_FASTPATH_NONE;
 
+	kvm_vcpu_srcu_read_lock(vcpu);
+
 	switch (msr) {
 	case APIC_BASE_MSR + (APIC_ICR >> 4):
 		data = kvm_read_edx_eax(vcpu);
@@ -2194,6 +2196,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	if (ret != EXIT_FASTPATH_NONE)
 		trace_kvm_msr_write(msr, data);
 
+	kvm_vcpu_srcu_read_unlock(vcpu);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);