Commit ad6260da authored by Paolo Bonzini, committed by Radim Krčmář

KVM: x86: drop legacy device assignment

Legacy device assignment has been deprecated since 4.2 (released
1.5 years ago).  VFIO is better and everyone should have switched to it.
If they haven't, this should convince them. :)
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent 2c82878b
Documentation/virtual/kvm/api.txt
@@ -1326,130 +1326,6 @@ The flags bitmap is defined as:
   /* the host supports the ePAPR idle hcall */
   #define KVM_PPC_PVINFO_FLAGS_EV_IDLE   (1<<0)
4.48 KVM_ASSIGN_PCI_DEVICE (deprecated)
Capability: none
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_pci_dev (in)
Returns: 0 on success, -1 on error
Assigns a host PCI device to the VM.

struct kvm_assigned_pci_dev {
        __u32 assigned_dev_id;
        __u32 busnr;
        __u32 devfn;
        __u32 flags;
        __u32 segnr;
        union {
                __u32 reserved[11];
        };
};

The PCI device is specified by the triple segnr, busnr, and devfn.
Identification in subsequent service requests is done via assigned_dev_id. The
following flags are specified:
/* Depends on KVM_CAP_IOMMU */
#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
/* The following two depend on KVM_CAP_PCI_2_3 */
#define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
#define KVM_DEV_ASSIGN_MASK_INTX (1 << 2)
If KVM_DEV_ASSIGN_PCI_2_3 is set, the kernel will manage legacy INTx interrupts
via the PCI-2.3-compliant device-level mask, thus enabling IRQ sharing with
other assigned devices or host devices. KVM_DEV_ASSIGN_MASK_INTX specifies the
guest's view of the INTx mask; see KVM_ASSIGN_SET_INTX_MASK for details.

The KVM_DEV_ASSIGN_ENABLE_IOMMU flag is mandatory, as it ensures isolation
of the device. Usages not specifying this flag are deprecated.
Only PCI header type 0 devices with PCI BAR resources are supported by
device assignment. The user requesting this ioctl must have read/write
access to the PCI sysfs resource files associated with the device.
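
For illustration, a minimal userspace sketch of how this (since removed) ioctl
was invoked. The vm_fd descriptor, the PCI address 0000:03:00.0 and the
assigned_dev_id handle chosen here are assumptions of the example, not part of
the API; <linux/kvm.h> and a VM fd from KVM_CREATE_VM are taken as given:

struct kvm_assigned_pci_dev dev = {
        .assigned_dev_id = 1,                   /* caller-chosen handle */
        .segnr = 0,                             /* PCI segment (domain) */
        .busnr = 0x03,                          /* bus number */
        .devfn = (0x00 << 3) | 0x0,             /* device 0x00, function 0 */
        .flags = KVM_DEV_ASSIGN_ENABLE_IOMMU,   /* mandatory, see above */
};

if (ioctl(vm_fd, KVM_ASSIGN_PCI_DEVICE, &dev) < 0)
        perror("KVM_ASSIGN_PCI_DEVICE");
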
Errors:
  ENOTTY: kernel does not support this ioctl

  Other error conditions may be defined by individual device types or
  have their standard meanings.
4.49 KVM_DEASSIGN_PCI_DEVICE (deprecated)
Capability: none
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_pci_dev (in)
Returns: 0 on success, -1 on error
Ends PCI device assignment, releasing all associated resources.
See KVM_ASSIGN_PCI_DEVICE for the data structure. Only assigned_dev_id is
used in kvm_assigned_pci_dev to identify the device.
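
A matching teardown sketch; only the handle chosen at assignment time (assumed
to be 1 here, as in the example above) is needed:

struct kvm_assigned_pci_dev dev = {
        .assigned_dev_id = 1,   /* handle used with KVM_ASSIGN_PCI_DEVICE */
};

ioctl(vm_fd, KVM_DEASSIGN_PCI_DEVICE, &dev);
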
Errors:
  ENOTTY: kernel does not support this ioctl

  Other error conditions may be defined by individual device types or
  have their standard meanings.
4.50 KVM_ASSIGN_DEV_IRQ (deprecated)
Capability: KVM_CAP_ASSIGN_DEV_IRQ
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_irq (in)
Returns: 0 on success, -1 on error
Assigns an IRQ to a passed-through device.

struct kvm_assigned_irq {
        __u32 assigned_dev_id;
        __u32 host_irq; /* ignored (legacy field) */
        __u32 guest_irq;
        __u32 flags;
        union {
                __u32 reserved[12];
        };
};

The following flags are defined:
#define KVM_DEV_IRQ_HOST_INTX (1 << 0)
#define KVM_DEV_IRQ_HOST_MSI (1 << 1)
#define KVM_DEV_IRQ_HOST_MSIX (1 << 2)
#define KVM_DEV_IRQ_GUEST_INTX (1 << 8)
#define KVM_DEV_IRQ_GUEST_MSI (1 << 9)
#define KVM_DEV_IRQ_GUEST_MSIX (1 << 10)
It is not valid to specify multiple types per host or guest IRQ. However, the
IRQ type of host and guest can differ or can even be null.
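
For illustration, a sketch that backs a guest MSI with a host MSI for the
device assigned above; vm_fd, the handle and the guest_irq GSI value are
assumptions of the example:

struct kvm_assigned_irq irq = {
        .assigned_dev_id = 1,   /* handle from KVM_ASSIGN_PCI_DEVICE */
        .guest_irq = 24,        /* a GSI routed via KVM_SET_GSI_ROUTING */
        .flags = KVM_DEV_IRQ_HOST_MSI | KVM_DEV_IRQ_GUEST_MSI,
};

if (ioctl(vm_fd, KVM_ASSIGN_DEV_IRQ, &irq) < 0)
        perror("KVM_ASSIGN_DEV_IRQ");
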
Errors:
  ENOTTY: kernel does not support this ioctl

  Other error conditions may be defined by individual device types or
  have their standard meanings.
4.51 KVM_DEASSIGN_DEV_IRQ (deprecated)
Capability: KVM_CAP_ASSIGN_DEV_IRQ
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_irq (in)
Returns: 0 on success, -1 on error
Ends an IRQ assignment to a passed-through device.
See KVM_ASSIGN_DEV_IRQ for the data structure. The target device is specified
by assigned_dev_id; flags must correspond to the IRQ type specified on
KVM_ASSIGN_DEV_IRQ. Partial deassignment of host or guest IRQ is allowed.
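
As an example of such a partial deassignment, a sketch that tears down only
the guest side of the MSI configured above (all values are assumptions):

struct kvm_assigned_irq irq = {
        .assigned_dev_id = 1,
        .flags = KVM_DEV_IRQ_GUEST_MSI,   /* host side stays configured */
};

ioctl(vm_fd, KVM_DEASSIGN_DEV_IRQ, &irq);
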
4.52 KVM_SET_GSI_ROUTING
Capability: KVM_CAP_IRQ_ROUTING
@@ -1536,52 +1412,6 @@ struct kvm_irq_routing_hv_sint {
        __u32 sint;
};
4.53 KVM_ASSIGN_SET_MSIX_NR (deprecated)
Capability: none
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_msix_nr (in)
Returns: 0 on success, -1 on error
Set the number of MSI-X interrupts for an assigned device. The number is
reset by terminating the MSI-X assignment of the device via
KVM_DEASSIGN_DEV_IRQ. Calling this service more than once before such a
deassignment will fail.

struct kvm_assigned_msix_nr {
        __u32 assigned_dev_id;
        __u16 entry_nr;
        __u16 padding;
};

#define KVM_MAX_MSIX_PER_DEV 256
4.54 KVM_ASSIGN_SET_MSIX_ENTRY (deprecated)
Capability: none
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_msix_entry (in)
Returns: 0 on success, -1 on error
Specifies the routing of an MSI-X assigned device interrupt to a GSI. Setting
the GSI to zero disables the interrupt.

struct kvm_assigned_msix_entry {
        __u32 assigned_dev_id;
        __u32 gsi;
        __u16 entry; /* The index of entry in the MSI-X table */
        __u16 padding[3];
};

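Taken together with KVM_ASSIGN_SET_MSIX_NR, a sketch of a two-vector MSI-X
setup; vm_fd, the handle and the GSI number are assumptions of the example:

struct kvm_assigned_msix_nr nr = {
        .assigned_dev_id = 1,
        .entry_nr = 2,          /* two MSI-X table entries */
};
ioctl(vm_fd, KVM_ASSIGN_SET_MSIX_NR, &nr);

struct kvm_assigned_msix_entry entry = {
        .assigned_dev_id = 1,
        .entry = 0,             /* first entry of the MSI-X table */
        .gsi = 25,              /* 0 would disable this interrupt */
};
ioctl(vm_fd, KVM_ASSIGN_SET_MSIX_ENTRY, &entry);
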
Errors:
  ENOTTY: kernel does not support this ioctl

  Other error conditions may be defined by individual device types or
  have their standard meanings.
4.55 KVM_SET_TSC_KHZ
@@ -1733,40 +1563,6 @@ should skip processing the bitmap and just invalidate everything. It must
be set to the number of set bits in the bitmap.
4.61 KVM_ASSIGN_SET_INTX_MASK (deprecated)
Capability: KVM_CAP_PCI_2_3
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_assigned_pci_dev (in)
Returns: 0 on success, -1 on error
Allows userspace to mask PCI INTx interrupts from the assigned device.  The
kernel will not deliver INTx interrupts to the guest between setting and
clearing of KVM_ASSIGN_SET_INTX_MASK via this interface.  This enables use and
emulation of the PCI 2.3 INTx disable bit in the command register.

This may be used both for PCI 2.3 devices that support INTx disable natively
and for older devices lacking this support. Userspace is responsible for
emulating the read value of the INTx disable bit in the guest-visible PCI
command register. When modifying the INTx disable state, userspace should call
this ioctl before updating the physical device's command register, informing
the kernel of the new intended INTx mask state.
Note that the kernel uses the device INTx disable bit to internally manage the
device interrupt state for PCI 2.3 devices. Reads of this register may
therefore not match the expected value. Writes should always use the guest
intended INTx disable value rather than attempting to read-copy-update the
current physical device state. Races between user and kernel updates to the
INTx disable bit are handled lazily in the kernel. It's possible the device
may generate unintended interrupts, but they will not be injected into the
guest.
See KVM_ASSIGN_DEV_IRQ for the data structure. The target device is specified
by assigned_dev_id. In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
evaluated.
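
To illustrate the required ordering, a sketch that masks INTx before the
physical command register is touched (vm_fd and the handle are assumptions):

struct kvm_assigned_pci_dev dev = {
        .assigned_dev_id = 1,
        .flags = KVM_DEV_ASSIGN_MASK_INTX,      /* clear the flag to unmask */
};

/* Inform the kernel first... */
ioctl(vm_fd, KVM_ASSIGN_SET_INTX_MASK, &dev);
/* ...then update the INTx disable bit in the device's command register. */
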
4.62 KVM_CREATE_SPAPR_TCE
Capability: KVM_CAP_SPAPR_TCE
arch/x86/kvm/Kconfig
@@ -86,18 +86,6 @@ config KVM_MMU_AUDIT
          This option adds an R/W KVM module parameter 'mmu_audit', which allows
          auditing of KVM MMU events at runtime.

config KVM_DEVICE_ASSIGNMENT
        bool "KVM legacy PCI device assignment support (DEPRECATED)"
        depends on KVM && PCI && IOMMU_API
        default n
        ---help---
          Provide support for legacy PCI device assignment through KVM.  The
          kernel now also supports a full featured userspace device driver
          framework through VFIO, which supersedes this support and provides
          better security.

          If unsure, say N.
# OK, it's a little counter-intuitive to do this, but it puts it neatly under
# the virtualization menu.
source drivers/vhost/Kconfig
arch/x86/kvm/Makefile
@@ -15,8 +15,6 @@ kvm-y += x86.o mmu.o emulate.o i8259.o irq.o lapic.o \
                           i8254.o ioapic.o irq_comm.o cpuid.o pmu.o mtrr.o \
                           hyperv.o page_track.o debugfs.o

kvm-$(CONFIG_KVM_DEVICE_ASSIGNMENT) += assigned-dev.o iommu.o

kvm-intel-y += vmx.o pmu_intel.o
kvm-amd-y   += svm.o pmu_amd.o
arch/x86/kvm/assigned-dev.c (deleted; diff collapsed)
arch/x86/kvm/assigned-dev.h (deleted)

#ifndef ARCH_X86_KVM_ASSIGNED_DEV_H
#define ARCH_X86_KVM_ASSIGNED_DEV_H
#include <linux/kvm_host.h>
#ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
int kvm_assign_device(struct kvm *kvm, struct pci_dev *pdev);
int kvm_deassign_device(struct kvm *kvm, struct pci_dev *pdev);

int kvm_iommu_map_guest(struct kvm *kvm);
int kvm_iommu_unmap_guest(struct kvm *kvm);

long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl,
                                  unsigned long arg);
void kvm_free_all_assigned_devices(struct kvm *kvm);
#else
static inline int kvm_iommu_unmap_guest(struct kvm *kvm)
{
        return 0;
}

static inline long kvm_vm_ioctl_assigned_device(struct kvm *kvm, unsigned ioctl,
                                                unsigned long arg)
{
        return -ENOTTY;
}

static inline void kvm_free_all_assigned_devices(struct kvm *kvm) {}
#endif /* CONFIG_KVM_DEVICE_ASSIGNMENT */
#endif /* ARCH_X86_KVM_ASSIGNED_DEV_H */
arch/x86/kvm/iommu.c (deleted)

/*
* Copyright (c) 2006, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
* Copyright (C) 2006-2008 Intel Corporation
* Copyright IBM Corporation, 2008
* Copyright 2010 Red Hat, Inc. and/or its affiliates.
*
* Author: Allen M. Kay <allen.m.kay@intel.com>
* Author: Weidong Han <weidong.han@intel.com>
* Author: Ben-Ami Yassour <benami@il.ibm.com>
*/
#include <linux/list.h>
#include <linux/kvm_host.h>
#include <linux/moduleparam.h>
#include <linux/pci.h>
#include <linux/stat.h>
#include <linux/iommu.h>
#include "assigned-dev.h"
static bool allow_unsafe_assigned_interrupts;
module_param_named(allow_unsafe_assigned_interrupts,
                   allow_unsafe_assigned_interrupts, bool, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(allow_unsafe_assigned_interrupts,
 "Enable device assignment on platforms without interrupt remapping support.");

static int kvm_iommu_unmap_memslots(struct kvm *kvm);
static void kvm_iommu_put_pages(struct kvm *kvm,
                                gfn_t base_gfn, unsigned long npages);
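
/*
 * Pin the host pages backing npages guest frames starting at gfn and
 * return the pfn of the first page.  Each gfn_to_pfn_memslot() call
 * takes a page reference that kvm_unpin_pages() drops again later; if
 * the first frame cannot be resolved, an error pfn is returned and
 * nothing stays pinned.
 */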
static kvm_pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn,
                               unsigned long npages)
{
        gfn_t end_gfn;
        kvm_pfn_t pfn;

        pfn     = gfn_to_pfn_memslot(slot, gfn);
        end_gfn = gfn + npages;
        gfn    += 1;

        if (is_error_noslot_pfn(pfn))
                return pfn;

        while (gfn < end_gfn)
                gfn_to_pfn_memslot(slot, gfn++);

        return pfn;
}

static void kvm_unpin_pages(struct kvm *kvm, kvm_pfn_t pfn,
                            unsigned long npages)
{
        unsigned long i;

        for (i = 0; i < npages; ++i)
                kvm_release_pfn_clean(pfn + i);
}
int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
{
        gfn_t gfn, end_gfn;
        kvm_pfn_t pfn;
        int r = 0;
        struct iommu_domain *domain = kvm->arch.iommu_domain;
        int flags;

        /* check if iommu exists and in use */
        if (!domain)
                return 0;

        gfn     = slot->base_gfn;
        end_gfn = gfn + slot->npages;

        flags = IOMMU_READ;
        if (!(slot->flags & KVM_MEM_READONLY))
                flags |= IOMMU_WRITE;
        if (!kvm->arch.iommu_noncoherent)
                flags |= IOMMU_CACHE;

        while (gfn < end_gfn) {
                unsigned long page_size;

                /* Check if already mapped */
                if (iommu_iova_to_phys(domain, gfn_to_gpa(gfn))) {
                        gfn += 1;
                        continue;
                }

                /* Get the page size we could use to map */
                page_size = kvm_host_page_size(kvm, gfn);

                /* Make sure the page_size does not exceed the memslot */
                while ((gfn + (page_size >> PAGE_SHIFT)) > end_gfn)
                        page_size >>= 1;

                /* Make sure gfn is aligned to the page size we want to map */
                while ((gfn << PAGE_SHIFT) & (page_size - 1))
                        page_size >>= 1;

                /* Make sure hva is aligned to the page size we want to map */
                while (__gfn_to_hva_memslot(slot, gfn) & (page_size - 1))
                        page_size >>= 1;

                /*
                 * Pin all pages we are about to map in memory. This is
                 * important because we unmap and unpin in 4kb steps later.
                 */
                pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT);
                if (is_error_noslot_pfn(pfn)) {
                        gfn += 1;
                        continue;
                }

                /* Map into IO address space */
                r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
                              page_size, flags);
                if (r) {
                        printk(KERN_ERR "kvm_iommu_map_address:"
                               "iommu failed to map pfn=%llx\n", pfn);
                        kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT);
                        goto unmap_pages;
                }

                gfn += page_size >> PAGE_SHIFT;

                cond_resched();
        }

        return 0;

unmap_pages:
        kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);
        return r;
}
static int kvm_iommu_map_memslots(struct kvm *kvm)
{
        int idx, r = 0;
        struct kvm_memslots *slots;
        struct kvm_memory_slot *memslot;

        if (kvm->arch.iommu_noncoherent)
                kvm_arch_register_noncoherent_dma(kvm);

        idx = srcu_read_lock(&kvm->srcu);
        slots = kvm_memslots(kvm);

        kvm_for_each_memslot(memslot, slots) {
                r = kvm_iommu_map_pages(kvm, memslot);
                if (r)
                        break;
        }
        srcu_read_unlock(&kvm->srcu, idx);

        return r;
}

int kvm_assign_device(struct kvm *kvm, struct pci_dev *pdev)
{
        struct iommu_domain *domain = kvm->arch.iommu_domain;
        int r;
        bool noncoherent;

        /* check if iommu exists and in use */
        if (!domain)
                return 0;

        if (pdev == NULL)
                return -ENODEV;

        r = iommu_attach_device(domain, &pdev->dev);
        if (r) {
                dev_err(&pdev->dev, "kvm assign device failed ret %d", r);
                return r;
        }

        noncoherent = !iommu_capable(&pci_bus_type, IOMMU_CAP_CACHE_COHERENCY);

        /* Check if need to update IOMMU page table for guest memory */
        if (noncoherent != kvm->arch.iommu_noncoherent) {
                kvm_iommu_unmap_memslots(kvm);
                kvm->arch.iommu_noncoherent = noncoherent;
                r = kvm_iommu_map_memslots(kvm);
                if (r)
                        goto out_unmap;
        }

        kvm_arch_start_assignment(kvm);
        pci_set_dev_assigned(pdev);

        dev_info(&pdev->dev, "kvm assign device\n");

        return 0;
out_unmap:
        kvm_iommu_unmap_memslots(kvm);
        return r;
}

int kvm_deassign_device(struct kvm *kvm, struct pci_dev *pdev)
{
        struct iommu_domain *domain = kvm->arch.iommu_domain;

        /* check if iommu exists and in use */
        if (!domain)
                return 0;

        if (pdev == NULL)
                return -ENODEV;

        iommu_detach_device(domain, &pdev->dev);

        pci_clear_dev_assigned(pdev);
        kvm_arch_end_assignment(kvm);

        dev_info(&pdev->dev, "kvm deassign device\n");

        return 0;
}
int kvm_iommu_map_guest(struct kvm *kvm)
{
        int r;

        if (!iommu_present(&pci_bus_type)) {
                printk(KERN_ERR "%s: iommu not found\n", __func__);
                return -ENODEV;
        }

        mutex_lock(&kvm->slots_lock);

        kvm->arch.iommu_domain = iommu_domain_alloc(&pci_bus_type);
        if (!kvm->arch.iommu_domain) {
                r = -ENOMEM;
                goto out_unlock;
        }

        if (!allow_unsafe_assigned_interrupts &&
            !iommu_capable(&pci_bus_type, IOMMU_CAP_INTR_REMAP)) {
                printk(KERN_WARNING "%s: No interrupt remapping support,"
                       " disallowing device assignment."
                       " Re-enable with \"allow_unsafe_assigned_interrupts=1\""
                       " module option.\n", __func__);
                iommu_domain_free(kvm->arch.iommu_domain);
                kvm->arch.iommu_domain = NULL;
                r = -EPERM;
                goto out_unlock;
        }

        r = kvm_iommu_map_memslots(kvm);
        if (r)
                kvm_iommu_unmap_memslots(kvm);

out_unlock:
        mutex_unlock(&kvm->slots_lock);
        return r;
}

static void kvm_iommu_put_pages(struct kvm *kvm,
                                gfn_t base_gfn, unsigned long npages)
{
        struct iommu_domain *domain;
        gfn_t end_gfn, gfn;
        kvm_pfn_t pfn;
        u64 phys;

        domain  = kvm->arch.iommu_domain;
        end_gfn = base_gfn + npages;
        gfn     = base_gfn;

        /* check if iommu exists and in use */
        if (!domain)
                return;

        while (gfn < end_gfn) {
                unsigned long unmap_pages;
                size_t size;

                /* Get physical address */
                phys = iommu_iova_to_phys(domain, gfn_to_gpa(gfn));

                if (!phys) {
                        gfn++;
                        continue;
                }

                pfn = phys >> PAGE_SHIFT;

                /* Unmap address from IO address space */
                size        = iommu_unmap(domain, gfn_to_gpa(gfn), PAGE_SIZE);
                unmap_pages = 1ULL << get_order(size);

                /* Unpin all pages we just unmapped to not leak any memory */
                kvm_unpin_pages(kvm, pfn, unmap_pages);

                gfn += unmap_pages;

                cond_resched();
        }
}

void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
{
        kvm_iommu_put_pages(kvm, slot->base_gfn, slot->npages);
}

static int kvm_iommu_unmap_memslots(struct kvm *kvm)
{
        int idx;
        struct kvm_memslots *slots;
        struct kvm_memory_slot *memslot;

        idx = srcu_read_lock(&kvm->srcu);
        slots = kvm_memslots(kvm);

        kvm_for_each_memslot(memslot, slots)
                kvm_iommu_unmap_pages(kvm, memslot);

        srcu_read_unlock(&kvm->srcu, idx);

        if (kvm->arch.iommu_noncoherent)
                kvm_arch_unregister_noncoherent_dma(kvm);

        return 0;
}

int kvm_iommu_unmap_guest(struct kvm *kvm)
{
        struct iommu_domain *domain = kvm->arch.iommu_domain;

        /* check if iommu exists and in use */
        if (!domain)
                return 0;

        mutex_lock(&kvm->slots_lock);
        kvm_iommu_unmap_memslots(kvm);
        kvm->arch.iommu_domain = NULL;
        kvm->arch.iommu_noncoherent = false;
        mutex_unlock(&kvm->slots_lock);

        iommu_domain_free(domain);
        return 0;
}
arch/x86/kvm/x86.c
@@ -27,7 +27,6 @@
#include "kvm_cache_regs.h"
#include "x86.h"
#include "cpuid.h"
#include "assigned-dev.h"
#include "pmu.h"
#include "hyperv.h"
@@ -2675,10 +2674,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
        case KVM_CAP_SET_BOOT_CPU_ID:
        case KVM_CAP_SPLIT_IRQCHIP:
        case KVM_CAP_IMMEDIATE_EXIT:
#ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
        case KVM_CAP_ASSIGN_DEV_IRQ:
        case KVM_CAP_PCI_2_3:
#endif
                r = 1;
                break;
        case KVM_CAP_ADJUST_CLOCK:
@@ -2713,11 +2708,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
        case KVM_CAP_PV_MMU:    /* obsolete */
                r = 0;
                break;
#ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
        case KVM_CAP_IOMMU:
                r = iommu_present(&pci_bus_type);
                break;
#endif
        case KVM_CAP_MCE:
                r = KVM_MAX_MCE_BANKS;
                break;
@@ -4230,7 +4220,7 @@ long kvm_arch_vm_ioctl(struct file *filp,
                break;
        }
        default:
-               r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
+               r = -ENOTTY;
        }
out:
        return r;
@@ -8068,7 +8058,6 @@ void kvm_arch_sync_events(struct kvm *kvm)
{
        cancel_delayed_work_sync(&kvm->arch.kvmclock_sync_work);
        cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
        kvm_free_all_assigned_devices(kvm);
        kvm_free_pit(kvm);
}
@@ -8152,7 +8141,6 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
        }
        if (kvm_x86_ops->vm_destroy)
                kvm_x86_ops->vm_destroy(kvm);
        kvm_iommu_unmap_guest(kvm);
        kvm_pic_destroy(kvm);
        kvm_ioapic_destroy(kvm);
        kvm_free_vcpus(kvm);
include/linux/kvm_host.h
@@ -877,22 +877,6 @@ void kvm_unregister_irq_ack_notifier(struct kvm *kvm,
int kvm_request_irq_source_id(struct kvm *kvm);
void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
#ifdef CONFIG_KVM_DEVICE_ASSIGNMENT
int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
#else
static inline int kvm_iommu_map_pages(struct kvm *kvm,
                                      struct kvm_memory_slot *slot)
{
        return 0;
}

static inline void kvm_iommu_unmap_pages(struct kvm *kvm,
                                         struct kvm_memory_slot *slot)
{
}
#endif
/*
* search_memslots() and __gfn_to_memslot() are here because they are
* used in non-modular code in arch/powerpc/kvm/book3s_hv_rm_mmu.c.
virt/kvm/kvm_main.c
@@ -1019,8 +1019,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
        old_memslots = install_new_memslots(kvm, as_id, slots);

        /* slot was deleted or moved, clear iommu mapping */
        kvm_iommu_unmap_pages(kvm, &old);

        /* From this point no new shadow pages pointing to a deleted,
         * or moved, memslot will be created.
         *
@@ -1055,21 +1053,6 @@ int __kvm_set_memory_region(struct kvm *kvm,
        kvm_free_memslot(kvm, &old, &new);
        kvfree(old_memslots);

        /*
         * IOMMU mapping:  New slots need to be mapped.  Old slots need to be
         * un-mapped and re-mapped if their base changes.  Since base change
         * unmapping is handled above with slot deletion, mapping alone is
         * needed here.  Anything else the iommu might care about for existing
         * slots (size changes, userspace addr changes and read-only flag
         * changes) is disallowed above, so any other attribute changes getting
         * here can be skipped.
         */
        if (as_id == 0 && (change == KVM_MR_CREATE || change == KVM_MR_MOVE)) {
                r = kvm_iommu_map_pages(kvm, &new);
                return r;
        }

        return 0;

out_slots: