1. 23 Feb, 2024 3 commits
    • KVM: x86: Drop superfluous check on direct MMU vs. WRITE_PF_TO_SP flag · dfeef3d3
      Sean Christopherson authored
      Remove reexecute_instruction()'s final check on the MMU being direct, as
      EMULTYPE_WRITE_PF_TO_SP is only ever set if the MMU is indirect, i.e. is a
      shadow MMU.  Prior to commit 93c05d3e ("KVM: x86: improve
      reexecute_instruction"), the flag simply didn't exist (and KVM actually
      returned "true" unconditionally for both types of MMUs).  I.e. the
      explicit check for a direct MMU is simply a leftover artifact from the
      old code.
      
      Link: https://lore.kernel.org/r/20240203002343.383056-4-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
    • KVM: x86: Drop dedicated logic for direct MMUs in reexecute_instruction() · 515c18a6
      Sean Christopherson authored
      Now that KVM doesn't pointlessly acquire mmu_lock for direct MMUs, drop
      the dedicated path entirely and always query indirect_shadow_pages when
      deciding whether or not to try unprotecting the gfn.  For indirect, a.k.a.
      shadow MMUs, checking indirect_shadow_pages is harmless; unless *every*
      shadow page was somehow zapped while KVM was attempting to emulate the
      instruction, indirect_shadow_pages is guaranteed to be non-zero.
      
      Well, unless the instruction used a direct hugepage with 2-level paging
      for its code page, but in that case, there's obviously nothing to
      unprotect.  And in the extremely unlikely case all shadow pages were
      zapped, there's again obviously nothing to unprotect.
      
      Link: https://lore.kernel.org/r/20240203002343.383056-3-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
    • KVM: x86/mmu: Don't acquire mmu_lock when using indirect_shadow_pages as a heuristic · 474b99ed
      Mingwei Zhang authored
      Drop KVM's completely pointless acquisition of mmu_lock when deciding
      whether or not to unprotect any shadow pages residing at the gfn before
      resuming the guest to let it retry an instruction that KVM failed to
      emulate.  In this case, indirect_shadow_pages is used as a coarse-grained
      heuristic to check if there is any chance of there being a relevant shadow
      page to unprotect.  But acquiring mmu_lock largely defeats any benefit
      to the heuristic, as taking mmu_lock for write is likely far more costly
      to the VM as a whole than unnecessarily walking mmu_page_hash.
      
      Furthermore, the current code is already prone to false negatives and
      false positives, as it drops mmu_lock before checking indirect_shadow_pages and
      unprotecting shadow pages.  And as evidenced by the lack of bug reports,
      neither false positives nor false negatives are problematic.  A false
      positive simply means that KVM will try to unprotect shadow pages that
      have already been zapped.  And a false negative means that KVM will
      resume the guest without unprotecting the gfn, i.e. if a shadow page was
      _just_ created, the vCPU will hit the same page fault and do the whole
      dance all over again, and detect and unprotect the shadow page the second
      time around (or not, if something else zaps it first).
      Reported-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Mingwei Zhang <mizhang@google.com>
      [sean: drop READ_ONCE() and comment change, rewrite changelog]
      Link: https://lore.kernel.org/r/20240203002343.383056-2-seanjc@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>