Commit 264d3dc1 authored by Lai Jiangshan, committed by Paolo Bonzini

KVM: X86: pair smp_wmb() of mmu_try_to_unsync_pages() with smp_rmb()

The commit 578e1c4d ("kvm: x86: Avoid taking MMU lock
in kvm_mmu_sync_roots if no sync is needed") added smp_wmb() in
mmu_try_to_unsync_pages(), but the read side relies on smp_load_acquire()
of sp->unsync, which does not cover the earlier load of SPTE.W.
smp_load_acquire() orders _subsequent_ loads after sp->unsync; it does
not order _earlier_ loads before the load of sp->unsync.

This has no functional change; smp_rmb() is a NOP on x86, and no
compiler barrier is required because there is a VMEXIT between the
load of SPTE.W and kvm_mmu_sync_roots.

Cc: Junaid Shahid <junaids@google.com>
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20211019110154.4091-4-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 509bfe3d
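
The barrier pairing described in the message above can be modelled in ordinary userspace C, which may help when reasoning about the patch. The sketch below is only an analogy, not kernel code: atomic_thread_fence(memory_order_release) stands in for smp_wmb() on the write side, atomic_thread_fence(memory_order_acquire) stands in for the smp_rmb() added in is_unsync_root(), and the variables spte_w and unsync are illustrative placeholders for the SPTE.W bit and sp->unsync. The 1.1/1.2/2.1/2.3 labels refer to the scenario numbered in the mmu_try_to_unsync_pages() comment.

/*
 * Userspace model only (not KVM code): spte_w and unsync are placeholder
 * names, and C11 fences stand in for the kernel barriers.
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int unsync;	/* models sp->unsync (1.1 / 2.3) */
static atomic_int spte_w;	/* models the writable SPTE bit (1.2 / 2.1) */

/* Write side: mark the page unsync, then publish the writable SPTE. */
static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&unsync, 1, memory_order_relaxed);	/* 1.1 */
	atomic_thread_fence(memory_order_release);			/* ~ smp_wmb() */
	atomic_store_explicit(&spte_w, 1, memory_order_relaxed);	/* 1.2 */
	return NULL;
}

/* Read side: if the writable SPTE is observed, the unsync flag must be seen. */
static void *reader(void *arg)
{
	(void)arg;
	if (atomic_load_explicit(&spte_w, memory_order_relaxed)) {	/* 2.1 */
		atomic_thread_fence(memory_order_acquire);		/* ~ smp_rmb() */
		/* Fence pairing: the store at 1.1 is now guaranteed visible. */
		assert(atomic_load_explicit(&unsync, memory_order_relaxed));	/* 2.3 */
	}
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Built with cc -pthread, the assertion in reader() cannot fire: observing spte_w == 1 at 2.1, combined with the release/acquire fence pairing, guarantees the store at 1.1 is visible at 2.3, which is the same property the smp_wmb()/smp_rmb() pair provides in the kernel code.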
@@ -2669,8 +2669,8 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 	 * (sp->unsync = true)
 	 *
 	 * The write barrier below ensures that 1.1 happens before 1.2 and thus
-	 * the situation in 2.4 does not arise. The implicit barrier in 2.2
-	 * pairs with this write barrier.
+	 * the situation in 2.4 does not arise. It pairs with the read barrier
+	 * in is_unsync_root(), placed between 2.1's load of SPTE.W and 2.3.
 	 */
 	smp_wmb();
@@ -3643,6 +3643,30 @@ static int mmu_alloc_special_roots(struct kvm_vcpu *vcpu)
 #endif
 }
 
+static bool is_unsync_root(hpa_t root)
+{
+	struct kvm_mmu_page *sp;
+
+	/*
+	 * The read barrier orders the CPU's read of SPTE.W during the page table
+	 * walk before the reads of sp->unsync/sp->unsync_children here.
+	 *
+	 * Even if another CPU was marking the SP as unsync-ed simultaneously,
+	 * any guest page table changes are not guaranteed to be visible anyway
+	 * until this VCPU issues a TLB flush strictly after those changes are
+	 * made. We only need to ensure that the other CPU sets these flags
+	 * before any actual changes to the page tables are made. The comments
+	 * in mmu_try_to_unsync_pages() describe what could go wrong if this
+	 * requirement isn't satisfied.
+	 */
+	smp_rmb();
+	sp = to_shadow_page(root);
+	if (sp->unsync || sp->unsync_children)
+		return true;
+
+	return false;
+}
+
 void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 {
 	int i;
@@ -3660,18 +3684,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu->root_hpa;
 		sp = to_shadow_page(root);
 
-		/*
-		 * Even if another CPU was marking the SP as unsync-ed
-		 * simultaneously, any guest page table changes are not
-		 * guaranteed to be visible anyway until this VCPU issues a TLB
-		 * flush strictly after those changes are made. We only need to
-		 * ensure that the other CPU sets these flags before any actual
-		 * changes to the page tables are made. The comments in
-		 * mmu_try_to_unsync_pages() describe what could go wrong if
-		 * this requirement isn't satisfied.
-		 */
-		if (!smp_load_acquire(&sp->unsync) &&
-		    !smp_load_acquire(&sp->unsync_children))
+		if (!is_unsync_root(root))
 			return;
 
 		write_lock(&vcpu->kvm->mmu_lock);