Commit 3b51022b authored by Alexey Kardashevskiy, committed by Greg Kroah-Hartman

KVM: PPC: Book3S PR: Exit KVM on failed mapping


[ Upstream commit bd9166ff ]

At the moment kvmppc_mmu_map_page() returns -1 if
mmu_hash_ops.hpte_insert() fails for any reason, so the page fault handler
resumes the guest, which then faults on the same address again.

This changes kvmppc_mmu_map_page() to return -EIO if
mmu_hash_ops.hpte_insert() fails for any reason other than a full PTEG.
At the moment only pSeries_lpar_hpte_insert() returns -2, when
plpar_pte_enter() fails with a code other than H_PTEG_FULL; all other
mmu_hash_ops.hpte_insert() implementations can only fail with
-1 ("full PTEG").

With this change, if PR KVM fails to update the HPT, it signals
userspace about this instead of returning to the guest and taking
the very same page fault over and over again.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 5593c2ba
@@ -177,12 +177,15 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 	ret = ppc_md.hpte_insert(hpteg, vpn, hpaddr, rflags, vflags,
 				 hpsize, hpsize, MMU_SEGSIZE_256M);
 
-	if (ret < 0) {
+	if (ret == -1) {
 		/* If we couldn't map a primary PTE, try a secondary */
 		hash = ~hash;
 		vflags ^= HPTE_V_SECONDARY;
 		attempt++;
 		goto map_again;
+	} else if (ret < 0) {
+		r = -EIO;
+		goto out_unlock;
 	} else {
 		trace_kvm_book3s_64_mmu_map(rflags, hpteg,
 					    vpn, hpaddr, orig_pte);
...
@@ -625,7 +625,11 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			kvmppc_mmu_unmap_page(vcpu, &pte);
 		}
 		/* The guest's PTE is not mapped yet. Map on the host */
-		kvmppc_mmu_map_page(vcpu, &pte, iswrite);
+		if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
+			/* Exit KVM if mapping failed */
+			run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+			return RESUME_HOST;
+		}
 		if (data)
 			vcpu->stat.sp_storage++;
 		else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&
...