1. 10 Feb, 2022 32 commits
  2. 08 Feb, 2022 8 commits
    • KVM: x86: SVM: move avic definitions from AMD's spec to svm.h · 39150352
      Maxim Levitsky authored
      asm/svm.h is the correct place for all values that are defined in
      the SVM spec, and that includes AVIC.
      
      Also add some values from the spec that were not defined before
      and will soon be useful.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220207155447.840194-10-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      39150352
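      The commit above is mostly about where architectural constants live. As a
      purely illustrative C sketch (the names and bit positions below are
      placeholders, not the real values from the AMD APM or from asm/svm.h),
      spec-defined AVIC values end up as plain macros in the shared header:

          /* Placeholder AVIC-related constants, illustrating the kind of
             spec-defined values that belong in asm/svm.h. */

          /* VMCB int_ctl bit enabling AVIC (placeholder position) */
          #define TOY_AVIC_ENABLE_MASK                      (1u << 31)

          /* AVIC physical ID table entry layout (placeholder fields) */
          #define TOY_AVIC_PHYS_ID_ENTRY_HOST_PHYS_ID_MASK  0xffull
          #define TOY_AVIC_PHYS_ID_ENTRY_VALID_MASK         (1ull << 63)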
    • KVM: x86: lapic: don't touch irr_pending in kvm_apic_update_apicv when inhibiting it · 755c2bf8
      Maxim Levitsky authored
      kvm_apic_update_apicv is called while AVIC is still active, so IRR bits
      can be set by the CPU after it runs without causing irr_pending to be
      set to true.

      Also, the logic in avic_kick_target_vcpu doesn't expect a race with this
      function, so to keep it simple, just leave irr_pending set to true and
      let the next interrupt injection into the guest clear it.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220207155447.840194-9-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      755c2bf8
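      Below is a minimal, self-contained C sketch of the idea in the commit
      above; all names (toy_lapic, toy_apic_update_apicv, ...) are hypothetical
      and this is not KVM's actual code, only the described behaviour: when
      AVIC/APICv is being inhibited, leave irr_pending set so an IRR bit set by
      the still-active hardware after the update is not lost.

          #include <stdbool.h>
          #include <stdint.h>

          struct toy_lapic {
                  uint32_t irr[8];      /* 256 interrupt-request bits */
                  bool     irr_pending; /* cached "some IRR bit may be set" hint */
          };

          /* Called when the APICv/AVIC activation state changes. */
          static void toy_apic_update_apicv(struct toy_lapic *apic, bool activated)
          {
                  if (activated) {
                          /* Hardware handles IRR directly from now on; re-sync the hint. */
                          apic->irr_pending = false;
                          for (int i = 0; i < 8; i++)
                                  if (apic->irr[i])
                                          apic->irr_pending = true;
                  } else {
                          /*
                           * Inhibition path: the hardware may still set IRR bits after
                           * this point, so conservatively keep irr_pending true and let
                           * the next interrupt injection into the guest clear it.
                           */
                          apic->irr_pending = true;
                  }
          }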
    • KVM: x86: nSVM: deal with L1 hypervisor that intercepts interrupts but lets L2 control them · 2b0ecccb
      Maxim Levitsky authored
      Fix a corner case in which the L1 hypervisor intercepts
      interrupts (INTERCEPT_INTR) and either doesn't set
      virtual interrupt masking (V_INTR_MASKING) or enters a
      nested guest with EFLAGS.IF disabled prior to the entry.
      
      In this case, despite the fact that L1 intercepts the interrupts,
      KVM still needs to set up an interrupt window to wait before
      injecting the INTR vmexit.
      
      Currently, KVM instead enters an endless loop of 'req_immediate_exit'.
      
      Exactly the same issue also happens for SMIs and NMIs; fix those as well.
      
      Note that on VMX this case is impossible, as there is only the
      'vmexit on external interrupts' execution control, which is either set,
      in which case both the host's and the guest's EFLAGS.IF are ignored,
      or not set, in which case no VM exits are delivered.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220207155447.840194-8-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2b0ecccb
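      A hypothetical, self-contained C sketch of the condition described above
      (not KVM's code; all names are made up): even when L1 intercepts physical
      interrupts, the INTR vmexit can only be delivered if V_INTR_MASKING is set
      or the currently running guest has EFLAGS.IF enabled; otherwise an
      interrupt window has to be opened instead of looping on an immediate exit.

          #include <stdbool.h>

          struct toy_nested_state {
                  bool l1_intercepts_intr; /* INTERCEPT_INTR set by L1 */
                  bool v_intr_masking;     /* V_INTR_MASKING set by L1 */
                  bool guest_eflags_if;    /* EFLAGS.IF of the running guest */
          };

          /* Can a pending external interrupt be taken/forwarded right now? */
          static bool toy_interrupt_allowed(const struct toy_nested_state *s)
          {
                  if (s->l1_intercepts_intr && s->v_intr_masking)
                          return true;          /* L1 takes it regardless of guest IF */
                  return s->guest_eflags_if;    /* otherwise the guest's IF gates it */
          }

          /* Open an interrupt window instead of requesting an immediate exit. */
          static bool toy_need_interrupt_window(const struct toy_nested_state *s,
                                                bool interrupt_pending)
          {
                  return interrupt_pending && !toy_interrupt_allowed(s);
          }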
    • KVM: x86: nSVM: expose clean bit support to the guest · 91f673b3
      Maxim Levitsky authored
      KVM already honours a few clean bits, so it makes sense
      to let the nested guest know about it.

      Note that KVM also doesn't check whether the hardware supports
      clean bits, so a nested KVM was already setting clean bits and
      L0 KVM was already honouring them.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220207155447.840194-6-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      91f673b3
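      As a hypothetical illustration (not KVM's code) of what "honouring a
      clean bit" means in the nested case described above: before re-reading a
      piece of vmcb12 state on nested VM entry, the consumer checks whether the
      L1 hypervisor marked that state as unchanged since the last run.

          #include <stdbool.h>
          #include <stdint.h>

          /* Placeholder clean-bit assignments, not the architectural ones. */
          #define TOY_CLEAN_BIT_DT  (1u << 0) /* descriptor tables unchanged */
          #define TOY_CLEAN_BIT_SEG (1u << 1) /* segment registers unchanged */

          struct toy_vmcb {
                  uint32_t clean_bits; /* set by the hypervisor before VMRUN */
                  /* ... guest state fields ... */
          };

          /* A set bit means the cached copy of that state can be reused. */
          static bool toy_state_needs_reload(const struct toy_vmcb *vmcb12, uint32_t bit)
          {
                  return !(vmcb12->clean_bits & bit);
          }

      Exposing the corresponding SVM feature bit to the nested guest simply
      tells a nested hypervisor that setting these bits is worthwhile.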
    • KVM: x86: nSVM/nVMX: set nested_run_pending on VM entry which is a result of RSM · 759cbd59
      Maxim Levitsky authored
      While RSM-induced VM entries are not full VM entries,
      they still need to be followed by an actual VM entry to complete them,
      unlike setting the nested state.

      This patch fixes the boot of a Hyper-V and SMM enabled
      Windows VM running nested on KVM, which fails due
      to this issue combined with the lack of dirty bit setting.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Cc: stable@vger.kernel.org
      Message-Id: <20220207155447.840194-5-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      759cbd59
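      A minimal sketch (hypothetical names, not KVM's code) of the distinction
      the commit above draws: an RSM-induced VM entry still has to be completed
      by a real hardware VM entry, whereas restoring nested state does not.

          #include <stdbool.h>

          struct toy_nested {
                  bool nested_run_pending; /* a VM entry was started but not yet completed */
          };

          /* Leaving SMM via RSM while the saved state says "in guest mode". */
          static void toy_leave_smm_to_guest(struct toy_nested *n)
          {
                  /* The RSM-induced entry must be followed by an actual VM entry. */
                  n->nested_run_pending = true;
          }

          /* Restoring nested state from userspace (e.g. on migration). */
          static void toy_set_nested_state(struct toy_nested *n)
          {
                  /* No VM entry is in progress, so nothing is pending. */
                  n->nested_run_pending = false;
          }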
    • KVM: x86: nSVM: mark vmcb01 as dirty when restoring SMM saved state · e8efa4ff
      Maxim Levitsky authored
      While restoring the SMM state usually makes KVM enter the
      nested guest, and thus a different vmcb (vmcb02 vs vmcb01),
      KVM should still mark vmcb01 as dirty, since hardware
      can in theory cache multiple vmcbs.
      
      Failure to do so, combined with not setting
      nested_run_pending (which is fixed in the next patch),
      might make KVM re-enter vmcb01, which was just exited from,
      with a completely different set of guest state registers
      (SMM vs non-SMM) and without the proper dirty bits set;
      the CPU then reuses a stale IDTR pointer,
      which leads to a guest shutdown on any interrupt.
      
      On real hardware this usually doesn't happen,
      but when running nested, L0's KVM does check and
      honour a few dirty bits, causing this issue to surface.

      This patch fixes the boot of a Hyper-V and SMM enabled
      Windows VM running nested on KVM.
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Cc: stable@vger.kernel.org
      Message-Id: <20220207155447.840194-4-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e8efa4ff
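      Illustrated as a hypothetical C sketch (not KVM's code): "marking a vmcb
      fully dirty" amounts to clearing all of its clean bits, so the CPU (or a
      nested L0 that honours them) re-reads every cached field, including the
      IDTR, instead of reusing stale state.

          #include <stdint.h>

          struct toy_vmcb {
                  uint32_t clean_bits;
                  /* ... guest state fields, including the IDTR base/limit ... */
          };

          static void toy_vmcb_mark_all_dirty(struct toy_vmcb *vmcb)
          {
                  /* 0 means "nothing is clean": every field is re-read on VMRUN. */
                  vmcb->clean_bits = 0;
          }

          /* Restoring SMM saved state may have rewritten the vmcb behind the
             hardware's back, so conservatively dirty the vmcb being left behind. */
          static void toy_restore_smm_state(struct toy_vmcb *vmcb01)
          {
                  toy_vmcb_mark_all_dirty(vmcb01);
          }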
    • KVM: x86: nSVM: fix potential NULL dereference on nested migration · e1779c27
      Maxim Levitsky authored
      It turns out that due to review feedback and/or rebases
      I accidentally moved the call to nested_svm_load_cr3 too early,
      before the NPT is enabled, which is very wrong to do.

      KVM can't even access guest memory at that point, as nested NPT
      is needed for that, and of course it won't initialize the walk_mmu,
      which is the main issue the patch was addressing.
      
      Fix this for real.
      
      Fixes: 232f75d3 ("KVM: nSVM: call nested_svm_load_cr3 on nested state load")
      Cc: stable@vger.kernel.org
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220207155447.840194-3-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      e1779c27
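      The ordering issue above, sketched with hypothetical names (this is not
      KVM's code): the guest CR3 can only be loaded once the nested MMU used
      for guest page walks has been set up, otherwise uninitialized state is
      dereferenced.

          #include <stdbool.h>
          #include <stdint.h>

          struct toy_mmu { int dummy; };

          struct toy_vcpu {
                  struct toy_mmu *walk_mmu; /* NULL until nested NPT is switched in */
          };

          static void toy_enter_nested_npt(struct toy_vcpu *v, struct toy_mmu *nested_mmu)
          {
                  v->walk_mmu = nested_mmu; /* must happen first */
          }

          static bool toy_nested_load_cr3(struct toy_vcpu *v, uint64_t cr3)
          {
                  if (!v->walk_mmu)
                          return false; /* too early: this is the bug being fixed */
                  /* ... validate/walk cr3 via v->walk_mmu ... */
                  (void)cr3;
                  return true;
          }

          /* Correct order on nested state load: set up the MMU, then load CR3. */
          static bool toy_nested_state_load(struct toy_vcpu *v, struct toy_mmu *m, uint64_t cr3)
          {
                  toy_enter_nested_npt(v, m);
                  return toy_nested_load_cr3(v, cr3);
          }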
    • KVM: x86: SVM: don't passthrough SMAP/SMEP/PKE bits in !NPT && !gCR0.PG case · c53bbe21
      Maxim Levitsky authored
      When the guest doesn't enable paging, and NPT/EPT is disabled, we
      use the guest's paging CR3s as KVM's shadow paging pointer and
      we are technically in direct mode, as if we were using NPT/EPT.

      In direct mode we create SPTEs with user mode permissions
      because usually in direct mode the NPT/EPT doesn't
      need to restrict access based on guest CPL
      (there are MBE/GMET extensions for that, but KVM doesn't use them).
      
      In this special "use guest paging as direct" mode, however,
      if CR4.SMAP/CR4.SMEP are enabled, the CPU will fault on each
      access and KVM will enter an endless loop of page faults.

      Since page protection doesn't have any meaning in the !PG case,
      just don't pass through these bits.
      
      The fix is the same as was done for VMX in
      commit 656ec4a4 ("KVM: VMX: fix SMEP and SMAP without EPT").
      
      This fixes the boot of Windows 10 without NPT for good.
      (Without this patch, the BSP boots, but the APs get stuck in an
      endless loop of page faults, causing the VM to boot with 1 CPU.)
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Cc: stable@vger.kernel.org
      Message-Id: <20220207155447.840194-2-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c53bbe21
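      A hypothetical sketch of the fix above (the CR4 bit positions are the
      architectural ones; everything else is made up and not KVM's code): when
      shadowing a non-paging guest without NPT/EPT, the MMU is effectively in
      direct mode with user-mode SPTEs, so the guest's SMEP/SMAP/PKE must not
      be reflected into the CR4 used for the shadow walk.

          #include <stdbool.h>
          #include <stdint.h>

          #define TOY_CR4_SMEP (1ull << 20)
          #define TOY_CR4_SMAP (1ull << 21)
          #define TOY_CR4_PKE  (1ull << 22)

          static uint64_t toy_shadow_cr4(uint64_t guest_cr4, bool npt_enabled,
                                         bool guest_paging_enabled)
          {
                  uint64_t cr4 = guest_cr4;

                  /*
                   * !NPT && !gCR0.PG: direct-style shadow paging with user-mode
                   * SPTEs; dropping these bits keeps supervisor accesses from
                   * faulting forever.
                   */
                  if (!npt_enabled && !guest_paging_enabled)
                          cr4 &= ~(TOY_CR4_SMEP | TOY_CR4_SMAP | TOY_CR4_PKE);

                  return cr4;
          }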