Commit 982b3394 authored by Takuya Yoshikawa, committed by Gleb Natapov

KVM: x86: Optimize mmio spte zapping when creating/moving memslot

When we create or move a memory slot, we need to zap mmio sptes.
Currently, zap_all() is used for this, which causes two problems:
 - extra page faults after zapping mmu pages
 - long mmu_lock hold time during zapping mmu pages

For the latter, Marcelo reported a disastrous mmu_lock hold time during
hot-plug, which made the guest unresponsive for a long time.

This patch takes a simple approach to fix these problems: do not zap mmu
pages unless they are marked mmio cached.  On our test box, this took
only 50us for a 4GB guest and we no longer saw millisecond-scale
mmu_lock hold times.

Note that we still need to do zap_all() for other cases, so further
work is also needed: Xiao's work may be the one.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
parent 95b0430d
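
Editorial context, not part of the commit: kvm_mmu_zap_mmio_sptes() below relies on the per-shadow-page mmio_cached flag set up by the parent commit (95b0430d). A minimal sketch of how that flag is assumed to be set when an mmio spte is cached, simplified here rather than the exact kernel code, looks roughly like:

static void mark_mmio_spte(u64 *sptep, u64 gfn, unsigned access)
{
	/* The shadow page that contains this spte (sketch; helpers assumed
	 * from the KVM MMU of this period). */
	struct kvm_mmu_page *sp = page_header(__pa(sptep));

	/*
	 * Remember that this shadow page holds at least one mmio spte,
	 * so kvm_mmu_zap_mmio_sptes() can find it later without having
	 * to zap every shadow page.
	 */
	sp->mmio_cached = true;

	mmu_spte_set(sptep, shadow_mmio_mask | access | (gfn << PAGE_SHIFT));
}
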
@@ -767,6 +767,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
+void kvm_mmu_zap_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
@@ -4189,6 +4189,24 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 	spin_unlock(&kvm->mmu_lock);
 }
 
+void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
+{
+	struct kvm_mmu_page *sp, *node;
+	LIST_HEAD(invalid_list);
+
+	spin_lock(&kvm->mmu_lock);
+restart:
+	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+		if (!sp->mmio_cached)
+			continue;
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+			goto restart;
+	}
+
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
+	spin_unlock(&kvm->mmu_lock);
+}
+
 static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct kvm *kvm;
@@ -6991,7 +6991,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * mmio sptes.
 	 */
 	if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
-		kvm_mmu_zap_all(kvm);
+		kvm_mmu_zap_mmio_sptes(kvm);
 		kvm_reload_remote_mmus(kvm);
 	}
 }
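
For readers less familiar with mmu.c conventions, here is the new function again with editorial comments; the comments are added for this write-up and are not part of the commit:

void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		/* Skip shadow pages that never cached an mmio spte. */
		if (!sp->mmio_cached)
			continue;
		/*
		 * kvm_mmu_prepare_zap_page() can unlink other pages from
		 * active_mmu_pages as well, so the cursor kept by
		 * list_for_each_entry_safe() may become stale; whenever it
		 * reports that it zapped anything, walk the list again
		 * from the head.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;
	}

	/* Flush and free the whole batch once, after the walk finishes. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}

Because only pages marked mmio_cached are visited for zapping, the mmu_lock hold time scales with the number of mmio-caching shadow pages rather than with the total number of shadow pages, which is what yields the ~50us figure quoted in the commit message.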