Commit 76260774 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "Bugfixes, a pvspinlock optimization, and documentation moving"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption
  Documentation: move Documentation/virtual to Documentation/virt
  KVM: nVMX: Set cached_vmcs12 and cached_shadow_vmcs12 NULL after free
  KVM: X86: Dynamically allocate user_fpu
  KVM: X86: Fix fpu state crash in kvm guest
  Revert "kvm: x86: Use task structs fpu field for user"
  KVM: nVMX: Clear pending KVM_REQ_GET_VMCS12_PAGES when leaving nested
parents c2626876 266e85a5
@@ -2545,7 +2545,7 @@
 			mem_encrypt=on:		Activate SME
 			mem_encrypt=off:	Do not activate SME
-			Refer to Documentation/virtual/kvm/amd-memory-encryption.rst
+			Refer to Documentation/virt/kvm/amd-memory-encryption.rst
 			for details on when memory encryption can be activated.
 	mem_sleep_default=	[SUSPEND] Default system suspend mode:
...
@@ -3781,7 +3781,7 @@ encrypted VMs.
 Currently, this ioctl is used for issuing Secure Encrypted Virtualization
 (SEV) commands on AMD Processors. The SEV commands are defined in
-Documentation/virtual/kvm/amd-memory-encryption.rst.
+Documentation/virt/kvm/amd-memory-encryption.rst.
 4.111 KVM_MEMORY_ENCRYPT_REG_REGION
...
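The passage above is from the KVM API documentation for KVM_MEMORY_ENCRYPT_OP. For context, a minimal userspace sketch of issuing an SEV command through this ioctl; the command chosen (KVM_SEV_INIT) and the error handling are illustrative, not prescriptive:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: issue one SEV command on a KVM VM fd. Assumes vm_fd is an
	 * existing VM file descriptor and /dev/sev is present; real code
	 * must check the open() result. */
	static int sev_vm_init(int vm_fd)
	{
		struct kvm_sev_cmd cmd = {
			.id = KVM_SEV_INIT,	/* illustrative command id */
			.sev_fd = open("/dev/sev", O_RDWR),
		};

		if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0) {
			perror("KVM_MEMORY_ENCRYPT_OP");
			return -1;
		}
		return cmd.error;	/* SEV firmware status, 0 on success */
	}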
@@ -18,7 +18,7 @@ S390:
   number in R1.
 For further information on the S390 diagnose call as supported by KVM,
-refer to Documentation/virtual/kvm/s390-diag.txt.
+refer to Documentation/virt/kvm/s390-diag.txt.
 PowerPC:
   It uses R3-R10 and hypercall number in R11. R4-R11 are used as output registers.
@@ -26,7 +26,7 @@ S390:
 KVM hypercalls uses 4 byte opcode, that are patched with 'hypercall-instructions'
 property inside the device tree's /hypervisor node.
-For more information refer to Documentation/virtual/kvm/ppc-pv.txt
+For more information refer to Documentation/virt/kvm/ppc-pv.txt
 MIPS:
   KVM hypercalls use the HYPCALL instruction with code 0 and the hypercall
...
@@ -298,7 +298,7 @@ Handling a page fault is performed as follows:
   vcpu->arch.mmio_gfn, and call the emulator
 - If both P bit and R/W bit of error code are set, this could possibly
   be handled as a "fast page fault" (fixed without taking the MMU lock). See
-  the description in Documentation/virtual/kvm/locking.txt.
+  the description in Documentation/virt/kvm/locking.txt.
 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)
 - if permissions are insufficient, reflect the fault back to the guest
...
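Since this excerpt leans on the "fast page fault" concept, a hedged sketch of the core trick described in the (now moved) locking.txt: the repair is a lock-free compare-and-exchange on the shadow PTE, so a racing update simply forces a fall-back to the locked slow path. The bit layout below is illustrative, not the real SPTE encoding:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define SPTE_WRITABLE (1ULL << 1)	/* illustrative bit position */

	/* Sketch: atomically mark a shadow PTE writable, but only if it
	 * still holds the value observed during the lock-free walk. If
	 * another CPU changed the SPTE meanwhile, give up; the slow path
	 * (under the MMU lock) will handle the fault. */
	static bool fast_fix_spte(_Atomic uint64_t *sptep, uint64_t old_spte)
	{
		return atomic_compare_exchange_strong(sptep, &old_spte,
						      old_spte | SPTE_WRITABLE);
	}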
@@ -7,7 +7,7 @@ Review checklist for kvm patches
 2. Patches should be against kvm.git master branch.
 3. If the patch introduces or modifies a new userspace API:
-   - the API must be documented in Documentation/virtual/kvm/api.txt
+   - the API must be documented in Documentation/virt/kvm/api.txt
    - the API must be discoverable using KVM_CHECK_EXTENSION
 4. New state must include support for save/restore.
...
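For illustration, "discoverable using KVM_CHECK_EXTENSION" means userspace can probe a capability before relying on the API it guards; a minimal sketch (the capability constant is just an example):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: probe a capability on the main /dev/kvm fd. 0 means
	 * unsupported; a positive value means supported and may carry
	 * extra information (e.g. a recommended limit). Real code must
	 * check the open() result. */
	static int kvm_has_cap(void)
	{
		int kvm_fd = open("/dev/kvm", O_RDWR);

		return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
	}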
@@ -8808,7 +8808,7 @@ L: kvm@vger.kernel.org
 W:	http://www.linux-kvm.org
 T:	git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
 S:	Supported
-F:	Documentation/virtual/kvm/
+F:	Documentation/virt/kvm/
 F:	include/trace/events/kvm.h
 F:	include/uapi/asm-generic/kvm*
 F:	include/uapi/linux/kvm*
@@ -12137,7 +12137,7 @@ M: Thomas Hellstrom <thellstrom@vmware.com>
 M:	"VMware, Inc." <pv-drivers@vmware.com>
 L:	virtualization@lists.linux-foundation.org
 S:	Supported
-F:	Documentation/virtual/paravirt_ops.txt
+F:	Documentation/virt/paravirt_ops.txt
 F:	arch/*/kernel/paravirt*
 F:	arch/*/include/asm/paravirt*.h
 F:	include/linux/hypervisor.h
@@ -16854,7 +16854,7 @@ W: http://user-mode-linux.sourceforge.net
 Q:	https://patchwork.ozlabs.org/project/linux-um/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml.git
 S:	Maintained
-F:	Documentation/virtual/uml/
+F:	Documentation/virt/uml/
 F:	arch/um/
 F:	arch/x86/um/
 F:	fs/hostfs/
...
@@ -31,7 +31,7 @@
  * Struct fields are always 32 or 64 bit aligned, depending on them being 32
  * or 64 bit wide respectively.
  *
- * See Documentation/virtual/kvm/ppc-pv.txt
+ * See Documentation/virt/kvm/ppc-pv.txt
  */
 struct kvm_vcpu_arch_shared {
 	__u64 scratch1;
...
@@ -607,15 +607,16 @@ struct kvm_vcpu_arch {
 	/*
 	 * QEMU userspace and the guest each have their own FPU state.
-	 * In vcpu_run, we switch between the user, maintained in the
-	 * task_struct struct, and guest FPU contexts. While running a VCPU,
-	 * the VCPU thread will have the guest FPU context.
+	 * In vcpu_run, we switch between the user and guest FPU contexts.
+	 * While running a VCPU, the VCPU thread will have the guest FPU
+	 * context.
 	 *
 	 * Note that while the PKRU state lives inside the fpu registers,
 	 * it is switched out separately at VMENTER and VMEXIT time. The
 	 * "guest_fpu" state here contains the guest FPU context, with the
 	 * host PRKU bits.
 	 */
+	struct fpu *user_fpu;
 	struct fpu *guest_fpu;
 	u64 xcr0;
...
@@ -3466,7 +3466,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 		/*
 		 * Currently, fast page fault only works for direct mapping
 		 * since the gfn is not stable for indirect shadow page. See
-		 * Documentation/virtual/kvm/locking.txt to get more detail.
+		 * Documentation/virt/kvm/locking.txt to get more detail.
 		 */
 		fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
 							iterator.sptep, spte,
...
@@ -2143,12 +2143,20 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto out;
 	}
+	svm->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
+						    GFP_KERNEL_ACCOUNT);
+	if (!svm->vcpu.arch.user_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
+		err = -ENOMEM;
+		goto free_partial_svm;
+	}
+
 	svm->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache,
 						     GFP_KERNEL_ACCOUNT);
 	if (!svm->vcpu.arch.guest_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
 		err = -ENOMEM;
-		goto free_partial_svm;
+		goto free_user_fpu;
 	}
 	err = kvm_vcpu_init(&svm->vcpu, kvm, id);
@@ -2211,6 +2219,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	kvm_vcpu_uninit(&svm->vcpu);
 free_svm:
 	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
+free_user_fpu:
+	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.user_fpu);
 free_partial_svm:
 	kmem_cache_free(kvm_vcpu_cache, svm);
 out:
@@ -2241,6 +2251,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
+	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.user_fpu);
 	kmem_cache_free(x86_fpu_cache, svm->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
...
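The new free_user_fpu label slots into KVM's usual unwind idiom, where each failing allocation jumps to a label that releases only what was allocated before it, so the labels stack in reverse allocation order. A stripped-down model of the same shape (all names and allocations here are illustrative):

	#include <stdlib.h>

	/* Sketch of the unwind idiom used in svm_create_vcpu(): inserting
	 * user_fpu between two existing allocations only requires one new
	 * allocation and one new label pair. On success the caller owns
	 * all three objects. */
	static int create_sketch(void)
	{
		void *vcpu = malloc(64), *user_fpu = NULL, *guest_fpu = NULL;

		if (!vcpu)
			goto out;
		user_fpu = malloc(64);
		if (!user_fpu)
			goto free_partial;	/* only vcpu exists so far */
		guest_fpu = malloc(64);
		if (!guest_fpu)
			goto free_user_fpu;	/* unwind: user_fpu, then vcpu */

		return 0;

	free_user_fpu:
		free(user_fpu);
	free_partial:
		free(vcpu);
	out:
		return -1;
	}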
@@ -220,6 +220,8 @@ static void free_nested(struct kvm_vcpu *vcpu)
 	if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
 		return;
+	kvm_clear_request(KVM_REQ_GET_VMCS12_PAGES, vcpu);
+
 	vmx->nested.vmxon = false;
 	vmx->nested.smm.vmxon = false;
 	free_vpid(vmx->nested.vpid02);
@@ -232,7 +234,9 @@ static void free_nested(struct kvm_vcpu *vcpu)
 		vmx->vmcs01.shadow_vmcs = NULL;
 	}
 	kfree(vmx->nested.cached_vmcs12);
+	vmx->nested.cached_vmcs12 = NULL;
 	kfree(vmx->nested.cached_shadow_vmcs12);
+	vmx->nested.cached_shadow_vmcs12 = NULL;
 	/* Unpin physical memory we referred to in the vmcs02 */
 	if (vmx->nested.apic_access_page) {
 		kvm_release_page_dirty(vmx->nested.apic_access_page);
...
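Setting cached_vmcs12 and cached_shadow_vmcs12 to NULL after kfree() is what the shortlog's fix amounts to: free_nested() can be reached more than once, and kfree(NULL) is a no-op. The pattern in miniature (struct and field names invented for illustration):

	#include <linux/slab.h>

	/* Sketch: make teardown idempotent. Because kfree(NULL) does
	 * nothing, NULLing the pointer right after freeing lets this run
	 * twice (e.g. once on nested exit and again on vCPU destruction)
	 * without a double free. `cached` stands in for cached_vmcs12 /
	 * cached_shadow_vmcs12. */
	struct nested_cache {
		void *cached;
	};

	static void free_cached(struct nested_cache *n)
	{
		kfree(n->cached);
		n->cached = NULL;	/* a second call frees NULL: harmless */
	}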
@@ -6598,6 +6598,7 @@ static void vmx_free_vcpu(struct kvm_vcpu *vcpu)
 	free_loaded_vmcs(vmx->loaded_vmcs);
 	kfree(vmx->guest_msrs);
 	kvm_vcpu_uninit(vcpu);
+	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.user_fpu);
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 }
@@ -6613,12 +6614,20 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 	if (!vmx)
 		return ERR_PTR(-ENOMEM);
+	vmx->vcpu.arch.user_fpu = kmem_cache_zalloc(x86_fpu_cache,
+						    GFP_KERNEL_ACCOUNT);
+	if (!vmx->vcpu.arch.user_fpu) {
+		printk(KERN_ERR "kvm: failed to allocate kvm userspace's fpu\n");
+		err = -ENOMEM;
+		goto free_partial_vcpu;
+	}
+
 	vmx->vcpu.arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache,
 						     GFP_KERNEL_ACCOUNT);
 	if (!vmx->vcpu.arch.guest_fpu) {
 		printk(KERN_ERR "kvm: failed to allocate vcpu's fpu\n");
 		err = -ENOMEM;
-		goto free_partial_vcpu;
+		goto free_user_fpu;
 	}
 	vmx->vpid = allocate_vpid();
@@ -6721,6 +6730,8 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
 free_vcpu:
 	free_vpid(vmx->vpid);
 	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.guest_fpu);
+free_user_fpu:
+	kmem_cache_free(x86_fpu_cache, vmx->vcpu.arch.user_fpu);
 free_partial_vcpu:
 	kmem_cache_free(kvm_vcpu_cache, vmx);
 	return ERR_PTR(err);
...
@@ -3306,6 +3306,10 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	kvm_x86_ops->vcpu_load(vcpu, cpu);
+	fpregs_assert_state_consistent();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		switch_fpu_return();
+
 	/* Apply any externally detected TSC adjustments (due to suspend) */
 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
@@ -7202,7 +7206,7 @@ static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
 	rcu_read_unlock();
-	if (target)
+	if (target && READ_ONCE(target->ready))
 		kvm_vcpu_yield_to(target);
 }
@@ -7242,6 +7246,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 		break;
 	case KVM_HC_KICK_CPU:
 		kvm_pv_kick_cpu_op(vcpu->kvm, a0, a1);
+		kvm_sched_yield(vcpu->kvm, a1);
 		ret = 0;
 		break;
 #ifdef CONFIG_X86_64
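The two kvm_sched_yield hunks implement the shortlog's "boost queue head vCPU" change: KVM_HC_KICK_CPU, which a pv-spinlock guest issues when unlocking, now also donates the kicker's timeslice to the woken waiter, and kvm_sched_yield() declines to yield to a target that is not marked ready, since that donation would be wasted. A toy model of the control flow (names are illustrative, not kernel API):

	#include <stdbool.h>

	/* Sketch: when a vCPU releasing a pv spinlock kicks the waiter at
	 * the head of the lock queue, it also yields its timeslice to that
	 * waiter so a preempted queue head runs sooner. */
	struct vcpu_model { bool ready; };

	static void kick_and_boost(struct vcpu_model *waiter,
				   void (*wake)(struct vcpu_model *),
				   void (*yield_to)(struct vcpu_model *))
	{
		if (!waiter)
			return;
		wake(waiter);		/* KVM_HC_KICK_CPU: wake the queue head */
		if (waiter->ready)	/* mirrors if (target && READ_ONCE(target->ready)) */
			yield_to(waiter); /* boost: donate the remaining timeslice */
	}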
@@ -7990,9 +7995,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	trace_kvm_entry(vcpu->vcpu_id);
 	guest_enter_irqoff();
-	fpregs_assert_state_consistent();
-	if (test_thread_flag(TIF_NEED_FPU_LOAD))
-		switch_fpu_return();
+	/* The preempt notifier should have taken care of the FPU already. */
+	WARN_ON_ONCE(test_thread_flag(TIF_NEED_FPU_LOAD));
 	if (unlikely(vcpu->arch.switch_db_regs)) {
 		set_debugreg(0, 7);
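Read together with the kvm_arch_vcpu_load() hunk above, the FPU reload has moved from the innermost run loop to the vcpu_load/sched-in path, so by the time vcpu_enter_guest() runs, TIF_NEED_FPU_LOAD should already be clear, and the old in-loop reload (which the series' shortlog blames for crashing guest FPU state) becomes an assertion. A toy model of that invariant (the flag and call sites are stand-ins, not the real kernel paths):

	#include <assert.h>
	#include <stdbool.h>

	/* `need_fpu_load` stands in for TIF_NEED_FPU_LOAD. */
	static bool need_fpu_load;

	static void vcpu_load_sketch(void)	/* vcpu_load / sched-in path */
	{
		if (need_fpu_load) {
			/* switch_fpu_return(): restore this task's FPU regs */
			need_fpu_load = false;
		}
	}

	static void enter_guest_sketch(void)	/* innermost run loop */
	{
		/* the reload must already have happened on the load path */
		assert(!need_fpu_load);
	}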
@@ -8270,7 +8274,7 @@ static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
 {
 	fpregs_lock();
-	copy_fpregs_to_fpstate(&current->thread.fpu);
+	copy_fpregs_to_fpstate(vcpu->arch.user_fpu);
 	/* PKRU is separately restored in kvm_x86_ops->run. */
 	__copy_kernel_to_fpregs(&vcpu->arch.guest_fpu->state,
 				~XFEATURE_MASK_PKRU);
@@ -8287,7 +8291,7 @@ static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
 	fpregs_lock();
 	copy_fpregs_to_fpstate(vcpu->arch.guest_fpu);
-	copy_kernel_to_fpregs(&current->thread.fpu.state);
+	copy_kernel_to_fpregs(&vcpu->arch.user_fpu->state);
 	fpregs_mark_activate();
 	fpregs_unlock();
...
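With user_fpu reinstated (the "Revert kvm: x86: Use task structs fpu field for user" patch), userspace FPU state is parked in a dedicated per-vCPU buffer instead of current->thread.fpu while guest state occupies the registers. A toy model of the load/put symmetry (buffers reduced to byte arrays; purely illustrative):

	/* `hw_regs` stands in for the real FPU registers. */
	struct fpu_buf { unsigned char state[512]; };
	static struct fpu_buf hw_regs;

	static void load_guest_fpu_sketch(struct fpu_buf *user, struct fpu_buf *guest)
	{
		*user = hw_regs;	/* save userspace FPU state to its own buffer */
		hw_regs = *guest;	/* install guest FPU state */
	}

	static void put_guest_fpu_sketch(struct fpu_buf *user, struct fpu_buf *guest)
	{
		*guest = hw_regs;	/* save guest FPU state */
		hw_regs = *user;	/* restore userspace FPU state */
	}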
@@ -116,7 +116,7 @@ struct kvm_irq_level {
 	 * ACPI gsi notion of irq.
 	 * For IA-64 (APIC model) IOAPIC0: irq 0-23; IOAPIC1: irq 24-47..
 	 * For X86 (standard AT mode) PIC0/1: irq 0-15. IOAPIC0: 0-23..
-	 * For ARM: See Documentation/virtual/kvm/api.txt
+	 * For ARM: See Documentation/virt/kvm/api.txt
 	 */
 	union {
 		__u32 irq;
@@ -1086,7 +1086,7 @@ struct kvm_xen_hvm_config {
  *
  * KVM_IRQFD_FLAG_RESAMPLE indicates resamplefd is valid and specifies
  * the irqfd to operate in resampling mode for level triggered interrupt
- * emulation. See Documentation/virtual/kvm/api.txt.
+ * emulation. See Documentation/virt/kvm/api.txt.
  */
 #define KVM_IRQFD_FLAG_RESAMPLE (1 << 1)
...
@@ -116,7 +116,7 @@ struct kvm_irq_level {
 	 * ACPI gsi notion of irq.
 	 * For IA-64 (APIC model) IOAPIC0: irq 0-23; IOAPIC1: irq 24-47..
 	 * For X86 (standard AT mode) PIC0/1: irq 0-15. IOAPIC0: 0-23..
-	 * For ARM: See Documentation/virtual/kvm/api.txt
+	 * For ARM: See Documentation/virt/kvm/api.txt
 	 */
 	union {
 		__u32 irq;
@@ -1085,7 +1085,7 @@ struct kvm_xen_hvm_config {
  *
  * KVM_IRQFD_FLAG_RESAMPLE indicates resamplefd is valid and specifies
  * the irqfd to operate in resampling mode for level triggered interrupt
- * emulation. See Documentation/virtual/kvm/api.txt.
+ * emulation. See Documentation/virt/kvm/api.txt.
  */
 #define KVM_IRQFD_FLAG_RESAMPLE (1 << 1)
...
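For reference on the resampling mode this comment describes, userspace requests it via the KVM_IRQFD ioctl with two eventfds: one to inject the level-triggered interrupt and one that KVM signals on guest EOI so the device model can re-check its line. A minimal sketch (error handling elided):

	#include <string.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: register a level-triggered irqfd in resampling mode on an
	 * existing KVM VM fd; gsi is the guest interrupt line. */
	static int add_resample_irqfd(int vm_fd, unsigned int gsi)
	{
		struct kvm_irqfd irqfd;

		memset(&irqfd, 0, sizeof(irqfd));
		irqfd.fd = eventfd(0, 0);		/* signals the interrupt */
		irqfd.resamplefd = eventfd(0, 0);	/* notified on guest EOI */
		irqfd.gsi = gsi;
		irqfd.flags = KVM_IRQFD_FLAG_RESAMPLE;

		return ioctl(vm_fd, KVM_IRQFD, &irqfd);
	}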
@@ -727,7 +727,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * Ensure we set mode to IN_GUEST_MODE after we disable
 		 * interrupts and before the final VCPU requests check.
 		 * See the comment in kvm_vcpu_exiting_guest_mode() and
-		 * Documentation/virtual/kvm/vcpu-requests.rst
+		 * Documentation/virt/kvm/vcpu-requests.rst
 		 */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
...
@@ -250,7 +250,7 @@ static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu,
 	 * pending state of interrupt is latched in pending_latch variable.
 	 * Userspace will save and restore pending state and line_level
 	 * separately.
-	 * Refer to Documentation/virtual/kvm/devices/arm-vgic-v3.txt
+	 * Refer to Documentation/virt/kvm/devices/arm-vgic-v3.txt
 	 * for handling of ISPENDR and ICPENDR.
 	 */
 	for (i = 0; i < len * 8; i++) {
...
@@ -42,7 +42,7 @@
 		VGIC_AFFINITY_LEVEL(val, 3))
 /*
- * As per Documentation/virtual/kvm/devices/arm-vgic-v3.txt,
+ * As per Documentation/virt/kvm/devices/arm-vgic-v3.txt,
  * below macros are defined for CPUREG encoding.
  */
 #define KVM_REG_ARM_VGIC_SYSREG_OP0_MASK 0x000000000000c000
@@ -63,7 +63,7 @@
 	 KVM_REG_ARM_VGIC_SYSREG_OP2_MASK)
 /*
- * As per Documentation/virtual/kvm/devices/arm-vgic-its.txt,
+ * As per Documentation/virt/kvm/devices/arm-vgic-its.txt,
  * below macros are defined for ITS table entry encoding.
  */
 #define KVM_ITS_CTE_VALID_SHIFT 63
...