Commit ce690e9c authored by David Matlack, committed by Paolo Bonzini

KVM: selftests: Refactor nested_map() to specify target level

Refactor nested_map() to specify that it explicitly wants 4K mappings
(the existing behavior) and push the implementation down into
__nested_map(), which can be used in subsequent commits to create huge
page mappings.

No functional change intended.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220520233249.3776001-5-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent b8ca01ea
@@ -486,6 +486,7 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  * nested_paddr - Nested guest physical address to map
  * paddr - VM Physical Address
  * size - The size of the range to map
+ * level - The level at which to map the range
  *
  * Output Args: None
  *
@@ -494,22 +495,29 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
  * Within the VM given by vm, creates a nested guest translation for the
  * page range starting at nested_paddr to the page range starting at paddr.
  */
-void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
-		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+void __nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		  uint64_t nested_paddr, uint64_t paddr, uint64_t size,
+		  int level)
 {
-	size_t page_size = vm->page_size;
+	size_t page_size = PG_LEVEL_SIZE(level);
 	size_t npages = size / page_size;
 
 	TEST_ASSERT(nested_paddr + size > nested_paddr, "Vaddr overflow");
 	TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
 
 	while (npages--) {
-		nested_pg_map(vmx, vm, nested_paddr, paddr);
+		__nested_pg_map(vmx, vm, nested_paddr, paddr, level);
 		nested_paddr += page_size;
 		paddr += page_size;
 	}
 }
 
+void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
+		uint64_t nested_paddr, uint64_t paddr, uint64_t size)
+{
+	__nested_map(vmx, vm, nested_paddr, paddr, size, PG_LEVEL_4K);
+}
+
 /* Prepare an identity extended page table that maps all the
  * physical pages in VM.
  */