Commit 9a3aad70 authored by Xiao Guangrong, committed by Avi Kivity

KVM: MMU: use __xchg_spte more smartly

Sometimes an atomic spte update is not needed; this patch calls __xchg_spte()
only when it actually is.

Note: if the old mapping's accessed bit is already set, no atomic operation
is needed, since the accessed bit cannot be lost.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
parent e4b502ea
@@ -682,9 +682,14 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 static void set_spte_track_bits(u64 *sptep, u64 new_spte)
 {
 	pfn_t pfn;
-	u64 old_spte;
+	u64 old_spte = *sptep;
 
-	old_spte = __xchg_spte(sptep, new_spte);
+	if (!shadow_accessed_mask || !is_shadow_present_pte(old_spte) ||
+	      old_spte & shadow_accessed_mask) {
+		__set_spte(sptep, new_spte);
+	} else
+		old_spte = __xchg_spte(sptep, new_spte);
+
 	if (!is_rmap_spte(old_spte))
 		return;
 	pfn = spte_to_pfn(old_spte);
...
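
For readers outside the kernel tree, the logic of the hunk above can be exercised in a minimal user-space sketch. This is not the kernel code: the bit positions, the function name set_spte_track_bits_sketch, and GCC's __atomic_exchange_n builtin are illustrative stand-ins for the kernel's shadow_accessed_mask, is_shadow_present_pte(), __set_spte(), and __xchg_spte() helpers.

#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions; the real masks live in the KVM MMU code. */
#define PRESENT_MASK   (1ULL << 0)
#define ACCESSED_MASK  (1ULL << 5)

static void set_spte_track_bits_sketch(uint64_t *sptep, uint64_t new_spte)
{
	uint64_t old_spte = *sptep;

	if (!(old_spte & PRESENT_MASK) || (old_spte & ACCESSED_MASK)) {
		/* No mapping, or the accessed bit is already set: a racing
		 * hardware update cannot add information that this plain
		 * store would discard, so skip the atomic exchange. */
		*sptep = new_spte;
	} else {
		/* The accessed bit may be set by hardware concurrently;
		 * fetch the old value atomically so that bit is observed
		 * rather than lost. */
		old_spte = __atomic_exchange_n(sptep, new_spte,
					       __ATOMIC_SEQ_CST);
	}

	printf("old spte: 0x%llx\n", (unsigned long long)old_spte);
}

int main(void)
{
	uint64_t spte = PRESENT_MASK;         /* present, accessed bit clear */

	set_spte_track_bits_sketch(&spte, 0); /* takes the atomic path */
	return 0;
}

The cheaper non-atomic store is safe exactly when losing a concurrent accessed-bit update is impossible, which is the observation the commit message makes.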