Commit 66dcff86 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM update from Paolo Bonzini:
 "3.19 changes for KVM:

   - spring cleaning: removed support for IA64, and for hardware-
     assisted virtualization on the PPC970

   - ARM, PPC, s390 all had only small fixes

  For x86:
   - small performance improvements (though only on weird guests)
   - usual round of hardware-compliancy fixes from Nadav
   - APICv fixes
   - XSAVES support for hosts and guests.  XSAVES hosts were broken
     because the (non-KVM) XSAVES patches inadvertently changed the KVM
     userspace ABI whenever XSAVES was enabled; hence, this part is
     going to stable.  Guest support is just a matter of exposing the
     feature and CPUID leaves support"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (179 commits)
  KVM: move APIC types to arch/x86/
  KVM: PPC: Book3S: Enable in-kernel XICS emulation by default
  KVM: PPC: Book3S HV: Improve H_CONFER implementation
  KVM: PPC: Book3S HV: Fix endianness of instruction obtained from HEIR register
  KVM: PPC: Book3S HV: Remove code for PPC970 processors
  KVM: PPC: Book3S HV: Tracepoints for KVM HV guest interactions
  KVM: PPC: Book3S HV: Simplify locking around stolen time calculations
  arch: powerpc: kvm: book3s_paired_singles.c: Remove unused function
  arch: powerpc: kvm: book3s_pr.c: Remove unused function
  arch: powerpc: kvm: book3s.c: Remove some unused functions
  arch: powerpc: kvm: book3s_32_mmu.c: Remove unused function
  KVM: PPC: Book3S HV: Check wait conditions before sleeping in kvmppc_vcore_blocked
  KVM: PPC: Book3S HV: ptes are big endian
  KVM: PPC: Book3S HV: Fix inaccuracies in ICP emulation for H_IPI
  KVM: PPC: Book3S HV: Fix KSM memory corruption
  KVM: PPC: Book3S HV: Fix an issue where guest is paused on receiving HMI
  KVM: PPC: Book3S HV: Fix computation of tlbie operand
  KVM: PPC: Book3S HV: Add missing HPTE unlock
  KVM: PPC: BookE: Improve irq inject tracepoint
  arm/arm64: KVM: Require in-kernel vgic for the arch timers
  ...
parents 91ed9e8a 2c4aa55a
Currently, the kvm module is in the EXPERIMENTAL stage on IA64. This means that
its interfaces are not yet stable, so please do not run critical
applications in a virtual machine.
We will try our best to improve it in future versions!
Guide: How to boot up guests on kvm/ia64
This guide describes how to enable kvm support for IA-64 systems.
1. Get the kvm source from git.kernel.org.
Userspace source:
git clone git://git.kernel.org/pub/scm/virt/kvm/kvm-userspace.git
Kernel Source:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/xiantao/kvm-ia64.git
2. Compile the source code.
2.1 Compile userspace code:
(1) cd ./kvm-userspace
(2) ./configure
(3) cd kernel
(4) make sync LINUX=$kernel_dir (kernel_dir is the directory of the kernel source.)
(5) cd ..
(6) make qemu
(7) cd qemu; make install
2.2 Compile kernel source code:
(1) cd ./$kernel_dir
(2) make menuconfig
(3) Enter the virtualization option, and choose kvm.
(4) make
(5) Once (4) is done, make modules_install
(6) Make an initrd, and reboot the host machine with the new kernel.
(7) Once (6) is done, cd $kernel_dir/arch/ia64/kvm
(8) insmod kvm.ko; insmod kvm-intel.ko
Note: For step 2, please make sure that the host page size == TARGET_PAGE_SIZE of qemu; otherwise it may fail.
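As a quick way to check the host side of that note, here is a minimal, illustrative probe (not part of the original guide; it only prints the host page size, which you can compare against the TARGET_PAGE_SIZE your qemu build uses):

/* pagesize.c - print the host page size for comparison with qemu's
 * TARGET_PAGE_SIZE. Build with: gcc -o pagesize pagesize.c */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* sysconf(_SC_PAGESIZE) returns the host page size in bytes. */
	printf("host page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
	return 0;
}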
3. Get the guest firmware named Flash.fd, and put it in the right place:
(1) If you have the guest firmware (binary) released by Intel Corp for Xen, use it directly.
(2) If you have no firmware at hand, download its source with
hg clone http://xenbits.xensource.com/ext/efi-vfirmware.hg
You can find the firmware binary in the directory efi-vfirmware.hg/binaries.
(3) Rename your firmware to Flash.fd, and copy it to /usr/local/share/qemu
4. Boot up Linux or Windows guests:
4.1 Create or install an image for the guest to boot. If you have Xen experience, this should be easy.
4.2 Boot up guests using the following command:
/usr/local/bin/qemu-system-ia64 -smp xx -m 512 -hda $your_image
(xx is the number of virtual processors for the guest, now the maximum value is 4)
5. Known possible issue on some platforms with old firmware.
In the event of strange host crashes, try either of the following:
(1) Upgrade your firmware to the latest version.
(2) Apply the patch below to the kernel source.
diff --git a/arch/ia64/kernel/pal.S b/arch/ia64/kernel/pal.S
index 0b53344..f02b0f7 100644
--- a/arch/ia64/kernel/pal.S
+++ b/arch/ia64/kernel/pal.S
@@ -84,7 +84,8 @@ GLOBAL_ENTRY(ia64_pal_call_static)
mov ar.pfs = loc1
mov rp = loc0
;;
- srlz.d // serialize restoration of psr.l
+ srlz.i // serialize restoration of psr.l
+ ;;
br.ret.sptk.many b0
END(ia64_pal_call_static)
6. Bug report:
If you find any issues when using kvm/ia64, please post the bug info to the kvm-ia64-devel mailing list:
https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel/
Thanks for your interest! Let's work together, and make kvm/ia64 stronger and stronger!
Xiantao Zhang <xiantao.zhang@intel.com>
2008.3.10
@@ -12,14 +12,14 @@ specific.
 1. GROUP: KVM_S390_VM_MEM_CTRL
 Architectures: s390
-1.1. ATTRIBUTE: KVM_S390_VM_MEM_CTRL
+1.1. ATTRIBUTE: KVM_S390_VM_MEM_ENABLE_CMMA
 Parameters: none
-Returns: -EBUSY if already a vcpus is defined, otherwise 0
+Returns: -EBUSY if a vcpu is already defined, otherwise 0
-Enables CMMA for the virtual machine
+Enables Collaborative Memory Management Assist (CMMA) for the virtual machine.
-1.2. ATTRIBUTE: KVM_S390_VM_CLR_CMMA
+1.2. ATTRIBUTE: KVM_S390_VM_MEM_CLR_CMMA
-Parameteres: none
+Parameters: none
 Returns: 0
 Clear the CMMA status for all guest pages, so any pages the guest marked
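For context, these s390 VM attributes are set through the KVM_SET_DEVICE_ATTR ioctl on the VM file descriptor. A minimal sketch (assuming vm_fd was obtained via KVM_CREATE_VM on an s390 host; error handling omitted):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: enable CMMA for a VM. Per the text above, this fails (errno EBUSY)
 * once any vcpu has already been created. */
static int enable_cmma(int vm_fd)
{
	struct kvm_device_attr attr = {
		.group = KVM_S390_VM_MEM_CTRL,
		.attr  = KVM_S390_VM_MEM_ENABLE_CMMA,
	};

	return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
}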
@@ -168,7 +168,7 @@ MSR_KVM_ASYNC_PF_EN: 0x4b564d02
 64 byte memory area which must be in guest RAM and must be
 zeroed. Bits 5-2 are reserved and should be zero. Bit 0 is 1
 when asynchronous page faults are enabled on the vcpu 0 when
-disabled. Bit 2 is 1 if asynchronous page faults can be injected
+disabled. Bit 1 is 1 if asynchronous page faults can be injected
 when vcpu is in cpl == 0.
 First 4 byte of 64 byte memory location will be written to by
@@ -5495,15 +5495,6 @@ S: Supported
 F: arch/powerpc/include/asm/kvm*
 F: arch/powerpc/kvm/
-KERNEL VIRTUAL MACHINE For Itanium (KVM/IA64)
-M: Xiantao Zhang <xiantao.zhang@intel.com>
-L: kvm-ia64@vger.kernel.org
-W: http://kvm.qumranet.com
-S: Supported
-F: Documentation/ia64/kvm.txt
-F: arch/ia64/include/asm/kvm*
-F: arch/ia64/kvm/
 KERNEL VIRTUAL MACHINE for s390 (KVM/s390)
 M: Christian Borntraeger <borntraeger@de.ibm.com>
 M: Cornelia Huck <cornelia.huck@de.ibm.com>
@@ -33,6 +33,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr = HCR_GUEST_MASK;
+}
+
 static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
 {
 	return 1;
@@ -150,8 +150,6 @@ struct kvm_vcpu_stat {
 	u32 halt_wakeup;
 };
-int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init);
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
@@ -52,6 +52,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
@@ -161,9 +162,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 }
 static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
-					     unsigned long size)
+					     unsigned long size,
+					     bool ipa_uncached)
 {
-	if (!vcpu_has_cache_enabled(vcpu))
+	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
 		kvm_flush_dcache_to_poc((void *)hva, size);
 	/*
@@ -213,6 +213,11 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
 	int err;
 	struct kvm_vcpu *vcpu;
+	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) {
+		err = -EBUSY;
+		goto out;
+	}
+
 	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
 	if (!vcpu) {
 		err = -ENOMEM;
@@ -263,6 +268,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 {
 	/* Force users to call KVM_ARM_VCPU_INIT */
 	vcpu->arch.target = -1;
+	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
@@ -419,6 +425,7 @@ static void update_vttbr(struct kvm *kvm)
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = vcpu->kvm;
 	int ret;
 	if (likely(vcpu->arch.has_run_once))
@@ -427,15 +434,23 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 	vcpu->arch.has_run_once = true;
 	/*
-	 * Initialize the VGIC before running a vcpu the first time on
-	 * this VM.
+	 * Map the VGIC hardware resources before running a vcpu the first
+	 * time on this VM.
 	 */
-	if (unlikely(!vgic_initialized(vcpu->kvm))) {
-		ret = kvm_vgic_init(vcpu->kvm);
+	if (unlikely(!vgic_ready(kvm))) {
+		ret = kvm_vgic_map_resources(kvm);
 		if (ret)
 			return ret;
 	}
+	/*
+	 * Enable the arch timers only if we have an in-kernel VGIC
+	 * and it has been properly initialized, since we cannot handle
+	 * interrupts from the virtual timer with a userspace gic.
+	 */
+	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
+		kvm_timer_enable(kvm);
+
 	return 0;
 }
@@ -649,6 +664,48 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
 	return -EINVAL;
 }
+static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+			       const struct kvm_vcpu_init *init)
+{
+	unsigned int i;
+	int phys_target = kvm_target_cpu();
+
+	if (init->target != phys_target)
+		return -EINVAL;
+
+	/*
+	 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
+	 * use the same target.
+	 */
+	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
+		return -EINVAL;
+
+	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
+	for (i = 0; i < sizeof(init->features) * 8; i++) {
+		bool set = (init->features[i / 32] & (1 << (i % 32)));
+
+		if (set && i >= KVM_VCPU_MAX_FEATURES)
+			return -ENOENT;
+
+		/*
+		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
+		 * use the same feature set.
+		 */
+		if (vcpu->arch.target != -1 && i < KVM_VCPU_MAX_FEATURES &&
+		    test_bit(i, vcpu->arch.features) != set)
+			return -EINVAL;
+
+		if (set)
+			set_bit(i, vcpu->arch.features);
+	}
+
+	vcpu->arch.target = phys_target;
+
+	/* Now we know what it is, we can reset it. */
+	return kvm_reset_vcpu(vcpu);
+}
+
 static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 					 struct kvm_vcpu_init *init)
 {
@@ -658,11 +715,22 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
 	if (ret)
 		return ret;
+	/*
+	 * Ensure a rebooted VM will fault in RAM pages and detect if the
+	 * guest MMU is turned off and flush the caches as needed.
+	 */
+	if (vcpu->arch.has_run_once)
+		stage2_unmap_vm(vcpu->kvm);
+
+	vcpu_reset_hcr(vcpu);
+
 	/*
 	 * Handle the "start in power-off" case by marking the VCPU as paused.
 	 */
-	if (__test_and_clear_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
+	if (test_bit(KVM_ARM_VCPU_POWER_OFF, vcpu->arch.features))
 		vcpu->arch.pause = true;
+	else
+		vcpu->arch.pause = false;
 	return 0;
 }
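For context, the rework above changes the userspace-visible semantics of KVM_ARM_VCPU_INIT: a repeated init call must now pass the same target and feature set, or the kernel returns -EINVAL. A hedged userspace sketch (vm_fd and vcpu_fd assumed already created via KVM_CREATE_VM/KVM_CREATE_VCPU; error handling trimmed):

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int init_vcpu(int vm_fd, int vcpu_fd, int start_powered_off)
{
	struct kvm_vcpu_init init;

	/* Ask the kernel for the preferred target for this host. */
	if (ioctl(vm_fd, KVM_ARM_VCPU_PREFERRED_TARGET, &init) < 0)
		return -1;

	if (start_powered_off)
		init.features[0] |= 1 << KVM_ARM_VCPU_POWER_OFF;

	/* With the hunk above, calling this a second time with a different
	 * target or feature set fails with EINVAL instead of silently
	 * re-accumulating feature bits. */
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}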
@@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr = HCR_GUEST_MASK;
 	return 0;
 }
@@ -274,31 +273,6 @@ int __attribute_const__ kvm_target_cpu(void)
 	}
 }
-int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init)
-{
-	unsigned int i;
-
-	/* We can only cope with guest==host and only on A15/A7 (for now). */
-	if (init->target != kvm_target_cpu())
-		return -EINVAL;
-
-	vcpu->arch.target = init->target;
-	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
-
-	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
-	for (i = 0; i < sizeof(init->features) * 8; i++) {
-		if (test_bit(i, (void *)init->features)) {
-			if (i >= KVM_VCPU_MAX_FEATURES)
-				return -ENOENT;
-			set_bit(i, vcpu->arch.features);
-		}
-	}
-
-	/* Now we know what it is, we can reset it. */
-	return kvm_reset_vcpu(vcpu);
-}
-
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 {
 	int target = kvm_target_cpu();
@@ -187,15 +187,18 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	}
 	rt = vcpu->arch.mmio_decode.rt;
-	data = vcpu_data_guest_to_host(vcpu, *vcpu_reg(vcpu, rt), mmio.len);
-	trace_kvm_mmio((mmio.is_write) ? KVM_TRACE_MMIO_WRITE :
-					 KVM_TRACE_MMIO_READ_UNSATISFIED,
-			mmio.len, fault_ipa,
-			(mmio.is_write) ? data : 0);
-	if (mmio.is_write)
+	if (mmio.is_write) {
+		data = vcpu_data_guest_to_host(vcpu, *vcpu_reg(vcpu, rt),
+					       mmio.len);
+
+		trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, mmio.len,
+			       fault_ipa, data);
 		mmio_write_buf(mmio.data, mmio.len, data);
+	} else {
+		trace_kvm_mmio(KVM_TRACE_MMIO_READ_UNSATISFIED, mmio.len,
+			       fault_ipa, 0);
+	}
 	if (vgic_handle_mmio(vcpu, run, &mmio))
 		return 1;
@@ -612,6 +612,71 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
 	unmap_range(kvm, kvm->arch.pgd, start, size);
 }
+static void stage2_unmap_memslot(struct kvm *kvm,
+				 struct kvm_memory_slot *memslot)
+{
+	hva_t hva = memslot->userspace_addr;
+	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
+	phys_addr_t size = PAGE_SIZE * memslot->npages;
+	hva_t reg_end = hva + size;
+
+	/*
+	 * A memory region could potentially cover multiple VMAs, and any holes
+	 * between them, so iterate over all of them to find out if we should
+	 * unmap any of them.
+	 *
+	 *     +--------------------------------------------+
+	 * +---------------+----------------+   +----------------+
+	 * |   : VMA 1     |      VMA 2     |   |    VMA 3  :    |
+	 * +---------------+----------------+   +----------------+
+	 *     |               memory region                |
+	 *     +--------------------------------------------+
+	 */
+	do {
+		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		hva_t vm_start, vm_end;
+
+		if (!vma || vma->vm_start >= reg_end)
+			break;
+
+		/*
+		 * Take the intersection of this VMA with the memory region
+		 */
+		vm_start = max(hva, vma->vm_start);
+		vm_end = min(reg_end, vma->vm_end);
+
+		if (!(vma->vm_flags & VM_PFNMAP)) {
+			gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
+			unmap_stage2_range(kvm, gpa, vm_end - vm_start);
+		}
+		hva = vm_end;
+	} while (hva < reg_end);
+}
+
+/**
+ * stage2_unmap_vm - Unmap Stage-2 RAM mappings
+ * @kvm: The struct kvm pointer
+ *
+ * Go through the memregions and unmap any regular RAM
+ * backing memory already mapped to the VM.
+ */
+void stage2_unmap_vm(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		stage2_unmap_memslot(kvm, memslot);
+
+	spin_unlock(&kvm->mmu_lock);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
 /**
  * kvm_free_stage2_pgd - free all stage-2 tables
  * @kvm: The KVM struct pointer for the VM.
@@ -853,6 +918,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	struct vm_area_struct *vma;
 	pfn_t pfn;
 	pgprot_t mem_type = PAGE_S2;
+	bool fault_ipa_uncached;
 	write_fault = kvm_is_write_fault(vcpu);
 	if (fault_status == FSC_PERM && !write_fault) {
@@ -919,6 +985,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (!hugetlb && !force_pte)
 		hugetlb = transparent_hugepage_adjust(&pfn, &fault_ipa);
+	fault_ipa_uncached = memslot->flags & KVM_MEMSLOT_INCOHERENT;
+
 	if (hugetlb) {
 		pmd_t new_pmd = pfn_pmd(pfn, mem_type);
 		new_pmd = pmd_mkhuge(new_pmd);
@@ -926,7 +994,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			kvm_set_s2pmd_writable(&new_pmd);
 			kvm_set_pfn_dirty(pfn);
 		}
-		coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE);
+		coherent_cache_guest_page(vcpu, hva & PMD_MASK, PMD_SIZE,
+					  fault_ipa_uncached);
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
 		pte_t new_pte = pfn_pte(pfn, mem_type);
@@ -934,7 +1003,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			kvm_set_s2pte_writable(&new_pte);
 			kvm_set_pfn_dirty(pfn);
 		}
-		coherent_cache_guest_page(vcpu, hva, PAGE_SIZE);
+		coherent_cache_guest_page(vcpu, hva, PAGE_SIZE,
+					  fault_ipa_uncached);
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte,
 			pgprot_val(mem_type) == pgprot_val(PAGE_S2_DEVICE));
 	}
@@ -1294,11 +1364,12 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		hva = vm_end;
 	} while (hva < reg_end);
-	if (ret) {
-		spin_lock(&kvm->mmu_lock);
+	spin_lock(&kvm->mmu_lock);
+	if (ret)
 		unmap_stage2_range(kvm, mem->guest_phys_addr, mem->memory_size);
-		spin_unlock(&kvm->mmu_lock);
-	}
+	else
+		stage2_flush_memslot(kvm, memslot);
+	spin_unlock(&kvm->mmu_lock);
 	return ret;
 }
@@ -1310,6 +1381,15 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
 int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 			    unsigned long npages)
 {
+	/*
+	 * Readonly memslots are not incoherent with the caches by definition,
+	 * but in practice, they are used mostly to emulate ROMs or NOR flashes
+	 * that the guest may consider devices and hence map as uncached.
+	 * To prevent incoherency issues in these cases, tag all readonly
+	 * regions as incoherent.
+	 */
+	if (slot->flags & KVM_MEM_READONLY)
+		slot->flags |= KVM_MEMSLOT_INCOHERENT;
 	return 0;
 }
@@ -15,6 +15,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
+#include <linux/preempt.h>
 #include <linux/kvm_host.h>
 #include <linux/wait.h>
@@ -166,6 +167,23 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
 static void kvm_prepare_system_event(struct kvm_vcpu *vcpu, u32 type)
 {
+	int i;
+	struct kvm_vcpu *tmp;
+
+	/*
+	 * The KVM ABI specifies that a system event exit may call KVM_RUN
+	 * again and may perform shutdown/reboot at a later time than when the
+	 * actual request is made.  Since we are implementing PSCI and a
+	 * caller of PSCI reboot and shutdown expects that the system shuts
+	 * down or reboots immediately, let's make sure that VCPUs are not run
+	 * after this call is handled and before the VCPUs have been
+	 * re-initialized.
+	 */
+	kvm_for_each_vcpu(i, tmp, vcpu->kvm) {
+		tmp->arch.pause = true;
+		kvm_vcpu_kick(tmp);
+	}
+
 	memset(&vcpu->run->system_event, 0, sizeof(vcpu->run->system_event));
 	vcpu->run->system_event.type = type;
 	vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
@@ -38,6 +38,11 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu);
 void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
 void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
+static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+}
+
 static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
 {
 	return (unsigned long *)&vcpu_gp_regs(vcpu)->regs.pc;
@@ -165,8 +165,6 @@ struct kvm_vcpu_stat {
 	u32 halt_wakeup;
 };
-int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init);
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
@@ -200,6 +198,7 @@ struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 u64 kvm_call_hyp(void *hypfn, ...);
+void force_vm_exit(const cpumask_t *mask);
 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		int exception_index);
@@ -83,6 +83,7 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
 void free_boot_hyp_pgd(void);
 void free_hyp_pgds(void);
+void stage2_unmap_vm(struct kvm *kvm);
 int kvm_alloc_stage2_pgd(struct kvm *kvm);
 void kvm_free_stage2_pgd(struct kvm *kvm);
 int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
@@ -243,9 +244,10 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 }
 static inline void coherent_cache_guest_page(struct kvm_vcpu *vcpu, hva_t hva,
-					     unsigned long size)
+					     unsigned long size,
+					     bool ipa_uncached)
 {
-	if (!vcpu_has_cache_enabled(vcpu))
+	if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
 		kvm_flush_dcache_to_poc((void *)hva, size);
 	if (!icache_is_aliasing()) {		/* PIPT */
@@ -38,7 +38,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
-	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
 	return 0;
 }
@@ -297,31 +296,6 @@ int __attribute_const__ kvm_target_cpu(void)
 	return -EINVAL;
 }
-int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
-			const struct kvm_vcpu_init *init)
-{
-	unsigned int i;
-	int phys_target = kvm_target_cpu();
-
-	if (init->target != phys_target)
-		return -EINVAL;
-
-	vcpu->arch.target = phys_target;
-	bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
-
-	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
-	for (i = 0; i < sizeof(init->features) * 8; i++) {
-		if (init->features[i / 32] & (1 << (i % 32))) {
-			if (i >= KVM_VCPU_MAX_FEATURES)
-				return -ENOENT;
-			set_bit(i, vcpu->arch.features);
-		}
-	}
-
-	/* Now we know what it is, we can reset it. */
-	return kvm_reset_vcpu(vcpu);
-}
-
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 {
 	int target = kvm_target_cpu();
@@ -20,7 +20,6 @@ config IA64
 	select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
 	select HAVE_FUNCTION_TRACER
 	select HAVE_DMA_ATTRS
-	select HAVE_KVM
 	select TTY
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DMA_API_DEBUG
@@ -640,8 +639,6 @@ source "security/Kconfig"
 source "crypto/Kconfig"
-source "arch/ia64/kvm/Kconfig"
-
 source "lib/Kconfig"
 config IOMMU_HELPER
@@ -53,7 +53,6 @@ core-$(CONFIG_IA64_HP_ZX1)	+= arch/ia64/dig/
 core-$(CONFIG_IA64_HP_ZX1_SWIOTLB) += arch/ia64/dig/
 core-$(CONFIG_IA64_SGI_SN2)	+= arch/ia64/sn/
 core-$(CONFIG_IA64_SGI_UV)	+= arch/ia64/uv/
-core-$(CONFIG_KVM)		+= arch/ia64/kvm/
 drivers-$(CONFIG_PCI)		+= arch/ia64/pci/
 drivers-$(CONFIG_IA64_HP_SIM)	+= arch/ia64/hp/sim/
/*
 * Same structure as x86's.
 * Hopefully asm-x86/pvclock-abi.h will be moved to somewhere more generic.
 * For now, define the same duplicated definitions.
 */
#ifndef _ASM_IA64__PVCLOCK_ABI_H
#define _ASM_IA64__PVCLOCK_ABI_H
#ifndef __ASSEMBLY__
/*
* These structs MUST NOT be changed.
* They are the ABI between hypervisor and guest OS.
* KVM is using this.
*
* pvclock_vcpu_time_info holds the system time and the tsc timestamp
* of the last update. So the guest can use the tsc delta to get a
* more precise system time. There is one per virtual cpu.
*
* pvclock_wall_clock references the point in time when the system
* time was zero (usually boot time), thus the guest calculates the
* current wall clock by adding the system time.
*
* Protocol for the "version" fields is: hypervisor raises it (making
* it uneven) before it starts updating the fields and raises it again
* (making it even) when it is done. Thus the guest can make sure the
* time values it got are consistent by checking the version before
* and after reading them.
*/
struct pvclock_vcpu_time_info {
u32 version;
u32 pad0;
u64 tsc_timestamp;
u64 system_time;
u32 tsc_to_system_mul;
s8 tsc_shift;
u8 pad[3];
} __attribute__((__packed__)); /* 32 bytes */
struct pvclock_wall_clock {
u32 version;
u32 sec;
u32 nsec;
} __attribute__((__packed__));
#endif /* __ASSEMBLY__ */
#endif /* _ASM_IA64__PVCLOCK_ABI_H */
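To make the version protocol above concrete, a guest-side reader might look like the following sketch (illustrative only, not code from this tree; memory barriers between the version and field reads are elided, and read_tsc stands in for the platform's cycle counter):

#include <stdint.h>

static uint64_t pvclock_read(volatile struct pvclock_vcpu_time_info *ti,
			     uint64_t (*read_tsc)(void))
{
	uint32_t version;
	uint64_t delta, time;

	do {
		/* Odd version: the hypervisor is mid-update, so spin. */
		do {
			version = ti->version;
		} while (version & 1);

		/* Scale the tsc delta: shift by tsc_shift, then multiply by
		 * tsc_to_system_mul as a 32.32 fixed-point factor (a 128-bit
		 * intermediate avoids overflow). */
		delta = read_tsc() - ti->tsc_timestamp;
		if (ti->tsc_shift >= 0)
			delta <<= ti->tsc_shift;
		else
			delta >>= -ti->tsc_shift;
		time = ti->system_time +
		       (uint64_t)(((unsigned __int128)delta *
				   ti->tsc_to_system_mul) >> 32);
	} while (version != ti->version);	/* retry if it changed */

	return time;
}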
#ifndef __ASM_IA64_KVM_H
#define __ASM_IA64_KVM_H
/*
* kvm structure definitions for ia64
*
* Copyright (C) 2007 Xiantao Zhang <xiantao.zhang@intel.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
*/
#include <linux/types.h>
#include <linux/ioctl.h>
/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_IOAPIC
#define __KVM_HAVE_IRQ_LINE
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
#define KVM_IOAPIC_NUM_PINS 48
struct kvm_ioapic_state {
__u64 base_address;
__u32 ioregsel;
__u32 id;
__u32 irr;
__u32 pad;
union {
__u64 bits;
struct {
__u8 vector;
__u8 delivery_mode:3;
__u8 dest_mode:1;
__u8 delivery_status:1;
__u8 polarity:1;
__u8 remote_irr:1;
__u8 trig_mode:1;
__u8 mask:1;
__u8 reserve:7;
__u8 reserved[4];
__u8 dest_id;
} fields;
} redirtbl[KVM_IOAPIC_NUM_PINS];
};
#define KVM_IRQCHIP_PIC_MASTER 0
#define KVM_IRQCHIP_PIC_SLAVE 1
#define KVM_IRQCHIP_IOAPIC 2
#define KVM_NR_IRQCHIPS 3
#define KVM_CONTEXT_SIZE 8*1024
struct kvm_fpreg {
union {
unsigned long bits[2];
long double __dummy; /* force 16-byte alignment */
} u;
};
union context {
/* 8K size */
char dummy[KVM_CONTEXT_SIZE];
struct {
unsigned long psr;
unsigned long pr;
unsigned long caller_unat;
unsigned long pad;
unsigned long gr[32];
unsigned long ar[128];
unsigned long br[8];
unsigned long cr[128];
unsigned long rr[8];
unsigned long ibr[8];
unsigned long dbr[8];
unsigned long pkr[8];
struct kvm_fpreg fr[128];
};
};
struct thash_data {
union {
struct {
unsigned long p : 1; /* 0 */
unsigned long rv1 : 1; /* 1 */
unsigned long ma : 3; /* 2-4 */
unsigned long a : 1; /* 5 */
unsigned long d : 1; /* 6 */
unsigned long pl : 2; /* 7-8 */
unsigned long ar : 3; /* 9-11 */
unsigned long ppn : 38; /* 12-49 */
unsigned long rv2 : 2; /* 50-51 */
unsigned long ed : 1; /* 52 */
unsigned long ig1 : 11; /* 53-63 */
};
struct {
unsigned long __rv1 : 53; /* 0-52 */
unsigned long contiguous : 1; /*53 */
unsigned long tc : 1; /* 54 TR or TC */
unsigned long cl : 1;
/* 55 I side or D side cache line */
unsigned long len : 4; /* 56-59 */
unsigned long io : 1; /* 60 entry is for io or not */
unsigned long nomap : 1;
/* 61 entry can't be inserted into machine TLB.*/
unsigned long checked : 1;
/* 62 for VTLB/VHPT sanity check */
unsigned long invalid : 1;
/* 63 invalid entry */
};
unsigned long page_flags;
}; /* same for VHPT and TLB */
union {
struct {
unsigned long rv3 : 2;
unsigned long ps : 6;
unsigned long key : 24;
unsigned long rv4 : 32;
};
unsigned long itir;
};
union {
struct {
unsigned long ig2 : 12;
unsigned long vpn : 49;
unsigned long vrn : 3;
};
unsigned long ifa;
unsigned long vadr;
struct {
unsigned long tag : 63;
unsigned long ti : 1;
};
unsigned long etag;
};
union {
struct thash_data *next;
unsigned long rid;
unsigned long gpaddr;
};
};
#define NITRS 8
#define NDTRS 8
struct saved_vpd {
unsigned long vhpi;
unsigned long vgr[16];
unsigned long vbgr[16];
unsigned long vnat;
unsigned long vbnat;
unsigned long vcpuid[5];
unsigned long vpsr;
unsigned long vpr;
union {
unsigned long vcr[128];
struct {
unsigned long dcr;
unsigned long itm;
unsigned long iva;
unsigned long rsv1[5];
unsigned long pta;
unsigned long rsv2[7];
unsigned long ipsr;
unsigned long isr;
unsigned long rsv3;
unsigned long iip;
unsigned long ifa;
unsigned long itir;
unsigned long iipa;
unsigned long ifs;
unsigned long iim;
unsigned long iha;
unsigned long rsv4[38];
unsigned long lid;
unsigned long ivr;
unsigned long tpr;
unsigned long eoi;
unsigned long irr[4];
unsigned long itv;
unsigned long pmv;
unsigned long cmcv;
unsigned long rsv5[5];
unsigned long lrr0;
unsigned long lrr1;
unsigned long rsv6[46];
};
};
};
struct kvm_regs {
struct saved_vpd vpd;
/*Arch-regs*/
int mp_state;
unsigned long vmm_rr;
/* TR and TC. */
struct thash_data itrs[NITRS];
struct thash_data dtrs[NDTRS];
/* Bit is set if there is a tr/tc for the region. */
unsigned char itr_regions;
unsigned char dtr_regions;
unsigned char tc_regions;
char irq_check;
unsigned long saved_itc;
unsigned long itc_check;
unsigned long timer_check;
unsigned long timer_pending;
unsigned long last_itc;
unsigned long vrr[8];
unsigned long ibr[8];
unsigned long dbr[8];
unsigned long insvc[4]; /* Interrupt in service. */
unsigned long xtp;
unsigned long metaphysical_rr0; /* from kvm_arch (so is pinned) */
unsigned long metaphysical_rr4; /* from kvm_arch (so is pinned) */
unsigned long metaphysical_saved_rr0; /* from kvm_arch */
unsigned long metaphysical_saved_rr4; /* from kvm_arch */
unsigned long fp_psr; /* used for lazy float register */
unsigned long saved_gp;
/* for physical emulation */
union context saved_guest;
unsigned long reserved[64]; /* for future use */
};
struct kvm_sregs {
};
struct kvm_fpu {
};
#define KVM_IA64_VCPU_STACK_SHIFT 16
#define KVM_IA64_VCPU_STACK_SIZE (1UL << KVM_IA64_VCPU_STACK_SHIFT)
struct kvm_ia64_vcpu_stack {
unsigned char stack[KVM_IA64_VCPU_STACK_SIZE];
};
struct kvm_debug_exit_arch {
};
/* for KVM_SET_GUEST_DEBUG */
struct kvm_guest_debug_arch {
};
/* definition of registers in kvm_run */
struct kvm_sync_regs {
};
#endif
#
# KVM configuration
#
source "virt/kvm/Kconfig"
menuconfig VIRTUALIZATION
bool "Virtualization"
depends on HAVE_KVM || IA64
default y
---help---
Say Y here to get to see options for using your Linux host to run other
operating systems inside virtual machines (guests).
This option alone does not add any kernel code.
If you say N, all options in this submenu will be skipped and disabled.
if VIRTUALIZATION
config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
depends on BROKEN
depends on HAVE_KVM && MODULES
select PREEMPT_NOTIFIERS
select ANON_INODES
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQFD
select HAVE_KVM_IRQ_ROUTING
select KVM_APIC_ARCHITECTURE
select KVM_MMIO
---help---
Support hosting fully virtualized guest machines using hardware
virtualization extensions. You will need a fairly recent
processor equipped with virtualization extensions. You will also
need to select one or more of the processor modules below.
This module provides access to the hardware capabilities through
a character device node named /dev/kvm.
To compile this as a module, choose M here: the module
will be called kvm.
If unsure, say N.
config KVM_INTEL
tristate "KVM for Intel Itanium 2 processors support"
depends on KVM && m
---help---
Provides support for KVM on Itanium 2 processors equipped with the VT
extensions.
config KVM_DEVICE_ASSIGNMENT
bool "KVM legacy PCI device assignment support"
depends on KVM && PCI && IOMMU_API
default y
---help---
Provide support for legacy PCI device assignment through KVM. The
kernel now also supports a full featured userspace device driver
framework through VFIO, which supersedes much of this support.
If unsure, say Y.
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
# This Makefile is used to generate asm-offsets.h and build the source.
#
# Generate asm-offsets.h for the vmm module build
offsets-file := asm-offsets.h
always := $(offsets-file)
targets := $(offsets-file)
targets += arch/ia64/kvm/asm-offsets.s
# Default sed regexp - multiline due to syntax constraints
define sed-y
"/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}"
endef
quiet_cmd_offsets = GEN $@
define cmd_offsets
(set -e; \
echo "#ifndef __ASM_KVM_OFFSETS_H__"; \
echo "#define __ASM_KVM_OFFSETS_H__"; \
echo "/*"; \
echo " * DO NOT MODIFY."; \
echo " *"; \
echo " * This file was generated by Makefile"; \
echo " *"; \
echo " */"; \
echo ""; \
sed -ne $(sed-y) $<; \
echo ""; \
echo "#endif" ) > $@
endef
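To illustrate the generation step above (the numeric value below is made up): kbuild's DEFINE() macro makes the compiler emit marker lines into asm-offsets.s, and the sed-y expression rewrites each marker into a #define, with the [\$$#]* part stripping the architecture's immediate prefix:

/* asm-offsets.c:  DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu));
 * asm-offsets.s:  ->VMM_TASK_SIZE $28672 sizeof(struct kvm_vcpu)
 * asm-offsets.h:  #define VMM_TASK_SIZE 28672  (the original expression is
 *                 kept as a trailing comment)
 * The value 28672 is illustrative only, not the real sizeof(struct kvm_vcpu).
 */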
# We use internal rules to avoid the "is up to date" message from make
arch/ia64/kvm/asm-offsets.s: arch/ia64/kvm/asm-offsets.c \
$(wildcard $(srctree)/arch/ia64/include/asm/*.h)\
$(wildcard $(srctree)/include/linux/*.h)
$(call if_changed_dep,cc_s_c)
$(obj)/$(offsets-file): arch/ia64/kvm/asm-offsets.s
$(call cmd,offsets)
FORCE : $(obj)/$(offsets-file)
#
# Makefile for Kernel-based Virtual Machine module
#
ccflags-y := -Ivirt/kvm -Iarch/ia64/kvm/
asflags-y := -Ivirt/kvm -Iarch/ia64/kvm/
KVM := ../../../virt/kvm
common-objs = $(KVM)/kvm_main.o $(KVM)/ioapic.o \
$(KVM)/coalesced_mmio.o $(KVM)/irq_comm.o
ifeq ($(CONFIG_KVM_DEVICE_ASSIGNMENT),y)
common-objs += $(KVM)/assigned-dev.o $(KVM)/iommu.o
endif
kvm-objs := $(common-objs) kvm-ia64.o kvm_fw.o
obj-$(CONFIG_KVM) += kvm.o
CFLAGS_vcpu.o += -mfixed-range=f2-f5,f12-f127
kvm-intel-objs = vmm.o vmm_ivt.o trampoline.o vcpu.o optvfault.o mmio.o \
vtlb.o process.o kvm_lib.o
# Link in memcpy and memset to avoid possible structure assignment errors
kvm-intel-objs += memcpy.o memset.o
obj-$(CONFIG_KVM_INTEL) += kvm-intel.o
/*
* asm-offsets.c Generate definitions needed by assembly language modules.
* This code generates raw asm output which is post-processed
* to extract and format the required data.
*
* Anthony Xu <anthony.xu@intel.com>
* Xiantao Zhang <xiantao.zhang@intel.com>
* Copyright (c) 2007 Intel Corporation KVM support.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
*/
#include <linux/kvm_host.h>
#include <linux/kbuild.h>
#include "vcpu.h"
void foo(void)
{
DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu));
DEFINE(VMM_PT_REGS_SIZE, sizeof(struct kvm_pt_regs));
BLANK();
DEFINE(VMM_VCPU_META_RR0_OFFSET,
offsetof(struct kvm_vcpu, arch.metaphysical_rr0));
DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET,
offsetof(struct kvm_vcpu,
arch.metaphysical_saved_rr0));
DEFINE(VMM_VCPU_VRR0_OFFSET,
offsetof(struct kvm_vcpu, arch.vrr[0]));
DEFINE(VMM_VPD_IRR0_OFFSET,
offsetof(struct vpd, irr[0]));
DEFINE(VMM_VCPU_ITC_CHECK_OFFSET,
offsetof(struct kvm_vcpu, arch.itc_check));
DEFINE(VMM_VCPU_IRQ_CHECK_OFFSET,
offsetof(struct kvm_vcpu, arch.irq_check));
DEFINE(VMM_VPD_VHPI_OFFSET,
offsetof(struct vpd, vhpi));
DEFINE(VMM_VCPU_VSA_BASE_OFFSET,
offsetof(struct kvm_vcpu, arch.vsa_base));
DEFINE(VMM_VCPU_VPD_OFFSET,
offsetof(struct kvm_vcpu, arch.vpd));
DEFINE(VMM_VCPU_IRQ_CHECK,
offsetof(struct kvm_vcpu, arch.irq_check));
DEFINE(VMM_VCPU_TIMER_PENDING,
offsetof(struct kvm_vcpu, arch.timer_pending));
DEFINE(VMM_VCPU_META_SAVED_RR0_OFFSET,
offsetof(struct kvm_vcpu, arch.metaphysical_saved_rr0));
DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET,
offsetof(struct kvm_vcpu, arch.mode_flags));
DEFINE(VMM_VCPU_ITC_OFS_OFFSET,
offsetof(struct kvm_vcpu, arch.itc_offset));
DEFINE(VMM_VCPU_LAST_ITC_OFFSET,
offsetof(struct kvm_vcpu, arch.last_itc));
DEFINE(VMM_VCPU_SAVED_GP_OFFSET,
offsetof(struct kvm_vcpu, arch.saved_gp));
BLANK();
DEFINE(VMM_PT_REGS_B6_OFFSET,
offsetof(struct kvm_pt_regs, b6));
DEFINE(VMM_PT_REGS_B7_OFFSET,
offsetof(struct kvm_pt_regs, b7));
DEFINE(VMM_PT_REGS_AR_CSD_OFFSET,
offsetof(struct kvm_pt_regs, ar_csd));
DEFINE(VMM_PT_REGS_AR_SSD_OFFSET,
offsetof(struct kvm_pt_regs, ar_ssd));
DEFINE(VMM_PT_REGS_R8_OFFSET,
offsetof(struct kvm_pt_regs, r8));
DEFINE(VMM_PT_REGS_R9_OFFSET,
offsetof(struct kvm_pt_regs, r9));
DEFINE(VMM_PT_REGS_R10_OFFSET,
offsetof(struct kvm_pt_regs, r10));
DEFINE(VMM_PT_REGS_R11_OFFSET,
offsetof(struct kvm_pt_regs, r11));
DEFINE(VMM_PT_REGS_CR_IPSR_OFFSET,
offsetof(struct kvm_pt_regs, cr_ipsr));
DEFINE(VMM_PT_REGS_CR_IIP_OFFSET,
offsetof(struct kvm_pt_regs, cr_iip));
DEFINE(VMM_PT_REGS_CR_IFS_OFFSET,
offsetof(struct kvm_pt_regs, cr_ifs));
DEFINE(VMM_PT_REGS_AR_UNAT_OFFSET,
offsetof(struct kvm_pt_regs, ar_unat));
DEFINE(VMM_PT_REGS_AR_PFS_OFFSET,
offsetof(struct kvm_pt_regs, ar_pfs));
DEFINE(VMM_PT_REGS_AR_RSC_OFFSET,
offsetof(struct kvm_pt_regs, ar_rsc));
DEFINE(VMM_PT_REGS_AR_RNAT_OFFSET,
offsetof(struct kvm_pt_regs, ar_rnat));
DEFINE(VMM_PT_REGS_AR_BSPSTORE_OFFSET,
offsetof(struct kvm_pt_regs, ar_bspstore));
DEFINE(VMM_PT_REGS_PR_OFFSET,
offsetof(struct kvm_pt_regs, pr));
DEFINE(VMM_PT_REGS_B0_OFFSET,
offsetof(struct kvm_pt_regs, b0));
DEFINE(VMM_PT_REGS_LOADRS_OFFSET,
offsetof(struct kvm_pt_regs, loadrs));
DEFINE(VMM_PT_REGS_R1_OFFSET,
offsetof(struct kvm_pt_regs, r1));
DEFINE(VMM_PT_REGS_R12_OFFSET,
offsetof(struct kvm_pt_regs, r12));
DEFINE(VMM_PT_REGS_R13_OFFSET,
offsetof(struct kvm_pt_regs, r13));
DEFINE(VMM_PT_REGS_AR_FPSR_OFFSET,
offsetof(struct kvm_pt_regs, ar_fpsr));
DEFINE(VMM_PT_REGS_R15_OFFSET,
offsetof(struct kvm_pt_regs, r15));
DEFINE(VMM_PT_REGS_R14_OFFSET,
offsetof(struct kvm_pt_regs, r14));
DEFINE(VMM_PT_REGS_R2_OFFSET,
offsetof(struct kvm_pt_regs, r2));
DEFINE(VMM_PT_REGS_R3_OFFSET,
offsetof(struct kvm_pt_regs, r3));
DEFINE(VMM_PT_REGS_R16_OFFSET,
offsetof(struct kvm_pt_regs, r16));
DEFINE(VMM_PT_REGS_R17_OFFSET,
offsetof(struct kvm_pt_regs, r17));
DEFINE(VMM_PT_REGS_R18_OFFSET,
offsetof(struct kvm_pt_regs, r18));
DEFINE(VMM_PT_REGS_R19_OFFSET,
offsetof(struct kvm_pt_regs, r19));
DEFINE(VMM_PT_REGS_R20_OFFSET,
offsetof(struct kvm_pt_regs, r20));
DEFINE(VMM_PT_REGS_R21_OFFSET,
offsetof(struct kvm_pt_regs, r21));
DEFINE(VMM_PT_REGS_R22_OFFSET,
offsetof(struct kvm_pt_regs, r22));
DEFINE(VMM_PT_REGS_R23_OFFSET,
offsetof(struct kvm_pt_regs, r23));
DEFINE(VMM_PT_REGS_R24_OFFSET,
offsetof(struct kvm_pt_regs, r24));
DEFINE(VMM_PT_REGS_R25_OFFSET,
offsetof(struct kvm_pt_regs, r25));
DEFINE(VMM_PT_REGS_R26_OFFSET,
offsetof(struct kvm_pt_regs, r26));
DEFINE(VMM_PT_REGS_R27_OFFSET,
offsetof(struct kvm_pt_regs, r27));
DEFINE(VMM_PT_REGS_R28_OFFSET,
offsetof(struct kvm_pt_regs, r28));
DEFINE(VMM_PT_REGS_R29_OFFSET,
offsetof(struct kvm_pt_regs, r29));
DEFINE(VMM_PT_REGS_R30_OFFSET,
offsetof(struct kvm_pt_regs, r30));
DEFINE(VMM_PT_REGS_R31_OFFSET,
offsetof(struct kvm_pt_regs, r31));
DEFINE(VMM_PT_REGS_AR_CCV_OFFSET,
offsetof(struct kvm_pt_regs, ar_ccv));
DEFINE(VMM_PT_REGS_F6_OFFSET,
offsetof(struct kvm_pt_regs, f6));
DEFINE(VMM_PT_REGS_F7_OFFSET,
offsetof(struct kvm_pt_regs, f7));
DEFINE(VMM_PT_REGS_F8_OFFSET,
offsetof(struct kvm_pt_regs, f8));
DEFINE(VMM_PT_REGS_F9_OFFSET,
offsetof(struct kvm_pt_regs, f9));
DEFINE(VMM_PT_REGS_F10_OFFSET,
offsetof(struct kvm_pt_regs, f10));
DEFINE(VMM_PT_REGS_F11_OFFSET,
offsetof(struct kvm_pt_regs, f11));
DEFINE(VMM_PT_REGS_R4_OFFSET,
offsetof(struct kvm_pt_regs, r4));
DEFINE(VMM_PT_REGS_R5_OFFSET,
offsetof(struct kvm_pt_regs, r5));
DEFINE(VMM_PT_REGS_R6_OFFSET,
offsetof(struct kvm_pt_regs, r6));
DEFINE(VMM_PT_REGS_R7_OFFSET,
offsetof(struct kvm_pt_regs, r7));
DEFINE(VMM_PT_REGS_EML_UNAT_OFFSET,
offsetof(struct kvm_pt_regs, eml_unat));
DEFINE(VMM_VCPU_IIPA_OFFSET,
offsetof(struct kvm_vcpu, arch.cr_iipa));
DEFINE(VMM_VCPU_OPCODE_OFFSET,
offsetof(struct kvm_vcpu, arch.opcode));
DEFINE(VMM_VCPU_CAUSE_OFFSET, offsetof(struct kvm_vcpu, arch.cause));
DEFINE(VMM_VCPU_ISR_OFFSET,
offsetof(struct kvm_vcpu, arch.cr_isr));
DEFINE(VMM_PT_REGS_R16_SLOT,
(((offsetof(struct kvm_pt_regs, r16)
- sizeof(struct kvm_pt_regs)) >> 3) & 0x3f));
DEFINE(VMM_VCPU_MODE_FLAGS_OFFSET,
offsetof(struct kvm_vcpu, arch.mode_flags));
DEFINE(VMM_VCPU_GP_OFFSET, offsetof(struct kvm_vcpu, arch.__gp));
BLANK();
DEFINE(VMM_VPD_BASE_OFFSET, offsetof(struct kvm_vcpu, arch.vpd));
DEFINE(VMM_VPD_VIFS_OFFSET, offsetof(struct vpd, ifs));
DEFINE(VMM_VLSAPIC_INSVC_BASE_OFFSET,
offsetof(struct kvm_vcpu, arch.insvc[0]));
DEFINE(VMM_VPD_VPTA_OFFSET, offsetof(struct vpd, pta));
DEFINE(VMM_VPD_VPSR_OFFSET, offsetof(struct vpd, vpsr));
DEFINE(VMM_CTX_R4_OFFSET, offsetof(union context, gr[4]));
DEFINE(VMM_CTX_R5_OFFSET, offsetof(union context, gr[5]));
DEFINE(VMM_CTX_R12_OFFSET, offsetof(union context, gr[12]));
DEFINE(VMM_CTX_R13_OFFSET, offsetof(union context, gr[13]));
DEFINE(VMM_CTX_KR0_OFFSET, offsetof(union context, ar[0]));
DEFINE(VMM_CTX_KR1_OFFSET, offsetof(union context, ar[1]));
DEFINE(VMM_CTX_B0_OFFSET, offsetof(union context, br[0]));
DEFINE(VMM_CTX_B1_OFFSET, offsetof(union context, br[1]));
DEFINE(VMM_CTX_B2_OFFSET, offsetof(union context, br[2]));
DEFINE(VMM_CTX_RR0_OFFSET, offsetof(union context, rr[0]));
DEFINE(VMM_CTX_RSC_OFFSET, offsetof(union context, ar[16]));
DEFINE(VMM_CTX_BSPSTORE_OFFSET, offsetof(union context, ar[18]));
DEFINE(VMM_CTX_RNAT_OFFSET, offsetof(union context, ar[19]));
DEFINE(VMM_CTX_FCR_OFFSET, offsetof(union context, ar[21]));
DEFINE(VMM_CTX_EFLAG_OFFSET, offsetof(union context, ar[24]));
DEFINE(VMM_CTX_CFLG_OFFSET, offsetof(union context, ar[27]));
DEFINE(VMM_CTX_FSR_OFFSET, offsetof(union context, ar[28]));
DEFINE(VMM_CTX_FIR_OFFSET, offsetof(union context, ar[29]));
DEFINE(VMM_CTX_FDR_OFFSET, offsetof(union context, ar[30]));
DEFINE(VMM_CTX_UNAT_OFFSET, offsetof(union context, ar[36]));
DEFINE(VMM_CTX_FPSR_OFFSET, offsetof(union context, ar[40]));
DEFINE(VMM_CTX_PFS_OFFSET, offsetof(union context, ar[64]));
DEFINE(VMM_CTX_LC_OFFSET, offsetof(union context, ar[65]));
DEFINE(VMM_CTX_DCR_OFFSET, offsetof(union context, cr[0]));
DEFINE(VMM_CTX_IVA_OFFSET, offsetof(union context, cr[2]));
DEFINE(VMM_CTX_PTA_OFFSET, offsetof(union context, cr[8]));
DEFINE(VMM_CTX_IBR0_OFFSET, offsetof(union context, ibr[0]));
DEFINE(VMM_CTX_DBR0_OFFSET, offsetof(union context, dbr[0]));
DEFINE(VMM_CTX_F2_OFFSET, offsetof(union context, fr[2]));
DEFINE(VMM_CTX_F3_OFFSET, offsetof(union context, fr[3]));
DEFINE(VMM_CTX_F32_OFFSET, offsetof(union context, fr[32]));
DEFINE(VMM_CTX_F33_OFFSET, offsetof(union context, fr[33]));
DEFINE(VMM_CTX_PKR0_OFFSET, offsetof(union context, pkr[0]));
DEFINE(VMM_CTX_PSR_OFFSET, offsetof(union context, psr));
BLANK();
}
/*
* irq.h: In-kernel interrupt controller related definitions
* Copyright (c) 2008, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
* Authors:
* Xiantao Zhang <xiantao.zhang@intel.com>
*
*/
#ifndef __IRQ_H
#define __IRQ_H
#include "lapic.h"
static inline int irqchip_in_kernel(struct kvm *kvm)
{
return 1;
}
#endif
/*
* kvm_lib.c: Compile some libraries for kvm-intel module.
*
* Just include kernel's library, and disable symbols export.
* Copyright (C) 2008, Intel Corporation.
* Xiantao Zhang (xiantao.zhang@intel.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#undef CONFIG_MODULES
#include <linux/module.h>
#undef CONFIG_KALLSYMS
#undef EXPORT_SYMBOL
#undef EXPORT_SYMBOL_GPL
#define EXPORT_SYMBOL(sym)
#define EXPORT_SYMBOL_GPL(sym)
#include "../../../lib/vsprintf.c"
#include "../../../lib/ctype.c"
#ifndef __KVM_IA64_LAPIC_H
#define __KVM_IA64_LAPIC_H
#include <linux/kvm_host.h>
/*
* vlsapic
*/
struct kvm_lapic{
struct kvm_vcpu *vcpu;
uint64_t insvc[4];
uint64_t vhpi;
uint8_t xtp;
uint8_t pal_init_pending;
uint8_t pad[2];
};
int kvm_create_lapic(struct kvm_vcpu *vcpu);
void kvm_free_lapic(struct kvm_vcpu *vcpu);
int kvm_apic_match_physical_addr(struct kvm_lapic *apic, u16 dest);
int kvm_apic_match_logical_addr(struct kvm_lapic *apic, u8 mda);
int kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
int short_hand, int dest, int dest_mode);
int kvm_apic_compare_prio(struct kvm_vcpu *vcpu1, struct kvm_vcpu *vcpu2);
int kvm_apic_set_irq(struct kvm_vcpu *vcpu, struct kvm_lapic_irq *irq);
#define kvm_apic_present(x) (true)
#define kvm_lapic_enabled(x) (true)
#endif
#include "../lib/memcpy.S"
#include "../lib/memset.S"
#ifndef __KVM_IA64_MISC_H
#define __KVM_IA64_MISC_H
#include <linux/kvm_host.h>
/*
* misc.h
* Copyright (C) 2007, Intel Corporation.
* Xiantao Zhang (xiantao.zhang@intel.com)
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
*/
/*
*Return p2m base address at host side!
*/
static inline uint64_t *kvm_host_get_pmt(struct kvm *kvm)
{
return (uint64_t *)(kvm->arch.vm_base +
offsetof(struct kvm_vm_data, kvm_p2m));
}
static inline void kvm_set_pmt_entry(struct kvm *kvm, gfn_t gfn,
u64 paddr, u64 mem_flags)
{
uint64_t *pmt_base = kvm_host_get_pmt(kvm);
unsigned long pte;
pte = PAGE_ALIGN(paddr) | mem_flags;
pmt_base[gfn] = pte;
}
/*Function for translating host address to guest address*/
static inline void *to_guest(struct kvm *kvm, void *addr)
{
return (void *)((unsigned long)(addr) - kvm->arch.vm_base +
KVM_VM_DATA_BASE);
}
/*Function for translating guest address to host address*/
static inline void *to_host(struct kvm *kvm, void *addr)
{
return (void *)((unsigned long)addr - KVM_VM_DATA_BASE
+ kvm->arch.vm_base);
}
/* Get host context of the vcpu */
static inline union context *kvm_get_host_context(struct kvm_vcpu *vcpu)
{
union context *ctx = &vcpu->arch.host;
return to_guest(vcpu->kvm, ctx);
}
/* Get guest context of the vcpu */
static inline union context *kvm_get_guest_context(struct kvm_vcpu *vcpu)
{
union context *ctx = &vcpu->arch.guest;
return to_guest(vcpu->kvm, ctx);
}
/* kvm get exit data from gvmm! */
static inline struct exit_ctl_data *kvm_get_exit_data(struct kvm_vcpu *vcpu)
{
return &vcpu->arch.exit_data;
}
/*kvm get vcpu ioreq for kvm module!*/
static inline struct kvm_mmio_req *kvm_get_vcpu_ioreq(struct kvm_vcpu *vcpu)
{
struct exit_ctl_data *p_ctl_data;
if (vcpu) {
p_ctl_data = kvm_get_exit_data(vcpu);
if (p_ctl_data->exit_reason == EXIT_REASON_MMIO_INSTRUCTION)
return &p_ctl_data->u.ioreq;
}
return NULL;
}
#endif
@@ -170,8 +170,6 @@ extern void *kvmppc_pin_guest_page(struct kvm *kvm, unsigned long addr,
 			unsigned long *nb_ret);
 extern void kvmppc_unpin_guest_page(struct kvm *kvm, void *addr,
 			unsigned long gpa, bool dirty);
-extern long kvmppc_virtmode_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
-			long pte_index, unsigned long pteh, unsigned long ptel);
 extern long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 			long pte_index, unsigned long pteh, unsigned long ptel,
 			pgd_t *pgdir, bool realmode, unsigned long *idx_ret);
@@ -37,7 +37,6 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu)
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
-extern unsigned long kvm_rma_pages;
 #endif
 #define VRMA_VSID	0x1ffffffUL	/* 1TB VSID reserved for VRMA */
@@ -148,7 +147,7 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
 	/* This covers 14..54 bits of va*/
 	rb = (v & ~0x7fUL) << 16;		/* AVA field */
-	rb |= v >> (62 - 8);			/*  B field */
+	rb |= (v >> HPTE_V_SSIZE_SHIFT) << 8;	/*  B field */
 	/*
 	 * AVA in v had cleared lower 23 bits. We need to derive
 	 * that from pteg index
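A standalone illustration of the B-field fix above (not kernel code; HPTE_V_SSIZE_SHIFT is 62 on ppc64, so the 2-bit segment-size field sits in bits 63:62 of v and must land in bits 9:8 of rb):

#include <assert.h>
#include <stdint.h>

#define HPTE_V_SSIZE_SHIFT 62

int main(void)
{
	/* B = 1 in bits 63:62, plus noise in bits 61:54. */
	uint64_t v = (1ULL << 62) | (0xffULL << 54);

	uint64_t old_b = v >> (62 - 8);			 /* 0x1ff: bits 61:54 leak into bits 7:0 */
	uint64_t new_b = (v >> HPTE_V_SSIZE_SHIFT) << 8; /* 0x100: clean B field */

	assert(new_b == 0x100);
	assert(old_b == 0x1ff);
	return 0;
}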
@@ -170,8 +170,6 @@ extern long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 			     unsigned long ioba, unsigned long tce);
 extern long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 			     unsigned long ioba);
-extern struct kvm_rma_info *kvm_alloc_rma(void);
-extern void kvm_release_rma(struct kvm_rma_info *ri);
 extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
 extern void kvm_release_hpt(struct page *page, unsigned long nr_pages);
 extern int kvmppc_core_init_vm(struct kvm *kvm);
@@ -738,3 +738,4 @@ void *get_xsave_addr(struct xsave_struct *xsave, int xstate)
 	return (void *)xsave + xstate_comp_offsets[feature];
 }
+EXPORT_SYMBOL_GPL(get_xsave_addr);