Commit c53bbe21 authored by Maxim Levitsky, committed by Paolo Bonzini

KVM: x86: SVM: don't passthrough SMAP/SMEP/PKE bits in !NPT && !gCR0.PG case

When the guest doesn't enable paging, and NPT/EPT is disabled, we
use the guest's paging CR3 as KVM's shadow paging root, and we are
technically in direct mode, as if we were using NPT/EPT.

In direct mode we create SPTEs with user-mode permissions,
because in direct mode NPT/EPT usually doesn't need to restrict
access based on the guest's CPL
(there are MBE/GMET extensions for that, but KVM doesn't use them).

In this special "use guest paging as direct" mode, however, if
CR4.SMAP/CR4.SMEP are enabled, the CPU will fault on each access
and KVM will enter an endless loop of page faults.

Since page protection has no meaning in the !PG case,
just don't pass these bits through.
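
Conceptually, the change amounts to masking the protection bits out of the
CR4 value handed to hardware whenever shadow paging is active and the guest
has paging off. A minimal standalone sketch of that masking (illustrative
only: effective_hcr4() and the constants below are hypothetical names, not
KVM code; the bit positions match the architectural CR0/CR4 layout):

/* Minimal sketch of the masking idea; not the kernel code itself. */
#include <stdint.h>
#include <stdio.h>

#define CR0_PG   (1ULL << 31)
#define CR4_SMEP (1ULL << 20)
#define CR4_SMAP (1ULL << 21)
#define CR4_PKE  (1ULL << 22)

/*
 * Compute the CR4 value that would be loaded into hardware when shadow
 * paging is in use: if the guest runs with CR0.PG = 0, strip the
 * protection bits that would otherwise make every user-mode SPTE access
 * fault.
 */
static uint64_t effective_hcr4(uint64_t guest_cr0, uint64_t guest_cr4,
			       int npt_enabled)
{
	uint64_t hcr4 = guest_cr4;

	if (!npt_enabled && !(guest_cr0 & CR0_PG))
		hcr4 &= ~(CR4_SMEP | CR4_SMAP | CR4_PKE);

	return hcr4;
}

int main(void)
{
	/* Unpaged guest with SMEP/SMAP set: both bits are dropped. */
	printf("hcr4 = %#llx\n",
	       (unsigned long long)effective_hcr4(0, CR4_SMEP | CR4_SMAP, 0));
	return 0;
}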

The fix is the same as the one done for VMX in
commit 656ec4a4 ("KVM: VMX: fix SMEP and SMAP without EPT").

This fixes booting Windows 10 without NPT for good.
(Without this patch, the BSP boots, but the APs get stuck in an endless
loop of page faults, so the VM comes up with only 1 CPU.)
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Cc: stable@vger.kernel.org
Message-Id: <20220207155447.840194-2-mlevitsk@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent dd4589ee
@@ -1585,6 +1585,7 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 	u64 hcr0 = cr0;
+	bool old_paging = is_paging(vcpu);
 
 #ifdef CONFIG_X86_64
 	if (vcpu->arch.efer & EFER_LME && !vcpu->arch.guest_state_protected) {
@@ -1601,8 +1602,11 @@ void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 #endif
 	vcpu->arch.cr0 = cr0;
 
-	if (!npt_enabled)
+	if (!npt_enabled) {
 		hcr0 |= X86_CR0_PG | X86_CR0_WP;
+		if (old_paging != is_paging(vcpu))
+			svm_set_cr4(vcpu, kvm_read_cr4(vcpu));
+	}
 
 	/*
 	 * re-enable caching here because the QEMU bios
@@ -1646,8 +1650,12 @@ void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
 		svm_flush_tlb(vcpu);
 
 	vcpu->arch.cr4 = cr4;
-	if (!npt_enabled)
+	if (!npt_enabled) {
 		cr4 |= X86_CR4_PAE;
+
+		if (!is_paging(vcpu))
+			cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_PKE);
+	}
 	cr4 |= host_cr4_mce;
 	to_svm(vcpu)->vmcb->save.cr4 = cr4;
 	vmcb_mark_dirty(to_svm(vcpu)->vmcb, VMCB_CR);