Commit 4bbef7e8 authored by Sean Christopherson, committed by Paolo Bonzini

KVM: SVM: Simplify and harden helper to flush SEV guest page(s)

Rework sev_flush_guest_memory() to explicitly handle only a single page,
and harden it to fall back to WBINVD if VM_PAGE_FLUSH fails.  Per-page
flushing is currently used only to flush the VMSA, and in its current
form, the helper is completely broken with respect to flushing actual
guest memory, i.e. won't work correctly for an arbitrary memory range.

VM_PAGE_FLUSH takes a host virtual address, and is subject to normal page
walks, i.e. will fault if the address is not present in the host page
tables or does not have the correct permissions.  Current AMD CPUs also
do not honor SMAP overrides (undocumented in kernel versions of the APM),
so passing in a userspace address is completely out of the question.  In
other words, KVM would need to manually walk the host page tables to get
the pfn, ensure the pfn is stable, and then use the direct map to invoke
VM_PAGE_FLUSH.  And the latter might not even work, e.g. if userspace is
particularly evil/clever and backs the guest with Secret Memory (which
unmaps memory from the direct map).

Fixes: add5e2f0 ("KVM: SVM: Add support for the SEV-ES VMSA")
Reported-by: Mingwei Zhang <mizhang@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Mingwei Zhang <mizhang@google.com>
Message-Id: <20220421031407.2516575-2-mizhang@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 266a19a0
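For orientation before the diff, here is a condensed, standalone sketch of the reworked helper as the commit message describes it. Every identifier is taken from the patch below except the cache-coherency feature check: that line is cut off in the truncated hunk, and X86_FEATURE_SME_COHERENT is assumed here based on the surrounding upstream code. The fallback is also written as a direct call rather than the goto label used in the actual patch.

/*
 * Condensed sketch, not the literal patch.  The VM_PAGE_FLUSH MSR is
 * written with the page's host virtual address OR'd with the guest
 * ASID, flushing exactly one page for one guest.  If the WRMSR faults
 * (e.g. the address is not mapped in the host page tables), fall back
 * to WBINVD rather than leave stale encrypted data in the cache.
 */
static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
{
        int asid = to_kvm_svm(vcpu->kvm)->sev_info.asid;
        unsigned long addr = (unsigned long)va; /* must be a kernel address */

        /*
         * Nothing to do if the CPU enforces cache coherency for encrypted
         * mappings (assumed to be X86_FEATURE_SME_COHERENT; the check is
         * elided from the truncated hunk below).
         */
        if (boot_cpu_has(X86_FEATURE_SME_COHERENT))
                return;

        /* Flush only this page; fall back to WBINVD if the write faults. */
        if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
                wbinvd_on_all_cpus();
}

The actual diff follows.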
@@ -2226,9 +2226,18 @@ int sev_cpu_init(struct svm_cpu_data *sd)
  * Pages used by hardware to hold guest encrypted state must be flushed before
  * returning them to the system.
  */
-static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
-                                   unsigned long len)
+static void sev_flush_encrypted_page(struct kvm_vcpu *vcpu, void *va)
 {
+        int asid = to_kvm_svm(vcpu->kvm)->sev_info.asid;
+
+        /*
+         * Note!  The address must be a kernel address, as regular page walk
+         * checks are performed by VM_PAGE_FLUSH, i.e. operating on a user
+         * address is non-deterministic and unsafe.  This function deliberately
+         * takes a pointer to deter passing in a user address.
+         */
+        unsigned long addr = (unsigned long)va;
+
         /*
          * If hardware enforced cache coherency for encrypted mappings of the
          * same physical page is supported, nothing to do.
@@ -2237,40 +2246,16 @@ static void sev_flush_guest_memory(struct vcpu_svm *svm, void *va,
                 return;
 
         /*
-         * If the VM Page Flush MSR is supported, use it to flush the page
-         * (using the page virtual address and the guest ASID).
+         * VM Page Flush takes a host virtual address and a guest ASID.  Fall
+         * back to WBINVD if this faults so as not to make any problems worse
+         * by leaving stale encrypted data in the cache.
          */
-        if (boot_cpu_has(X86_FEATURE_VM_PAGE_FLUSH)) {
-                struct kvm_sev_info *sev;
-                unsigned long va_start;
-                u64 start, stop;
-
-                /* Align start and stop to page boundaries. */
-                va_start = (unsigned long)va;
-                start = (u64)va_start & PAGE_MASK;
-                stop = PAGE_ALIGN((u64)va_start + len);
-
-                if (start < stop) {
-                        sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;
-
-                        while (start < stop) {
-                                wrmsrl(MSR_AMD64_VM_PAGE_FLUSH,
-                                       start | sev->asid);
-
-                                start += PAGE_SIZE;
-                        }
-
-                        return;
-                }
-
-                WARN(1, "Address overflow, using WBINVD\n");
-        }
-
-        /*
-         * Hardware should always have one of the above features,
-         * but if not, use WBINVD and issue a warning.
-         */
-        WARN_ONCE(1, "Using WBINVD to flush guest memory\n");
+        if (WARN_ON_ONCE(wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH, addr | asid)))
+                goto do_wbinvd;
+
+        return;
+
+do_wbinvd:
         wbinvd_on_all_cpus();
 }
 
@@ -2284,7 +2269,8 @@ void sev_free_vcpu(struct kvm_vcpu *vcpu)
         svm = to_svm(vcpu);
 
         if (vcpu->arch.guest_state_protected)
-                sev_flush_guest_memory(svm, svm->sev_es.vmsa, PAGE_SIZE);
+                sev_flush_encrypted_page(vcpu, svm->sev_es.vmsa);
+
         __free_page(virt_to_page(svm->sev_es.vmsa));
 
         if (svm->sev_es.ghcb_sa_free)
...