1. 19 Oct, 2015 1 commit
    • kvm: x86: zero EFER on INIT · 5690891b
      Paolo Bonzini authored
      Not zeroing EFER means that a 32-bit firmware cannot enter paging mode
      without clearing EFER.LME first (which it should not know about).
      Yang Zhang from Intel confirmed that the manual is wrong and EFER is
      cleared to zero on INIT.
      
      Fixes: d28bc9dd
      Cc: stable@vger.kernel.org
      Cc: Yang Z Zhang <yang.z.zhang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
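      A minimal sketch of the resulting behaviour (illustrative placement, not the exact upstream diff; the field name vcpu->arch.efer follows arch/x86/kvm conventions):

          /*
           * In the common RESET/INIT path (sketch): clear EFER on INIT as well,
           * so EFER.LME/EFER.LMA start out as 0 and 32-bit firmware can enable
           * paging without knowing about long mode.
           */
          static void vcpu_reset_sketch(struct kvm_vcpu *vcpu, bool init_event)
          {
                  vcpu->arch.efer = 0;    /* zeroed on both RESET and INIT */
                  /* ... the rest of the RESET/INIT state follows ... */
          }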
  2. 16 Oct, 2015 15 commits
  3. 14 Oct, 2015 9 commits
    • KVM: VMX: introduce __vmx_flush_tlb to handle specific vpid · dd5f5341
      Wanpeng Li authored
      Introduce __vmx_flush_tlb() to handle a specific vpid.
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
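      A rough sketch of the shape of the helper (simplified; EPT handling elided, names follow vmx.c conventions but treat the details as illustrative):

          static inline void __vmx_flush_tlb(struct kvm_vcpu *vcpu, int vpid)
          {
                  /* Invalidate TLB entries tagged with the given vpid, which may
                   * be a nested guest's vpid rather than the current vcpu's. */
                  vpid_sync_context(vpid);
                  /* ... EPT synchronization elided ... */
          }

          static inline void vmx_flush_tlb(struct kvm_vcpu *vcpu)
          {
                  /* Existing behaviour: flush the current vcpu's own vpid. */
                  __vmx_flush_tlb(vcpu, to_vmx(vcpu)->vpid);
          }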
    • KVM: VMX: adjust interface to allocate/free_vpid · 991e7a0e
      Wanpeng Li authored
      Adjust allocate/free_vpid so that they can be reused for the nested vpid.
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
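      A sketch of the adjusted interface (simplified from vmx.c; the bitmap and lock names follow that file, but treat the details as illustrative): instead of operating on a struct vcpu_vmx, the helpers deal in plain vpid numbers, so the same allocator can also back nested vpids.

          static int allocate_vpid(void)
          {
                  int vpid;

                  if (!enable_vpid)
                          return 0;

                  spin_lock(&vmx_vpid_lock);
                  vpid = find_first_zero_bit(vmx_vpid_bitmap, VMX_NR_VPIDS);
                  if (vpid < VMX_NR_VPIDS)
                          __set_bit(vpid, vmx_vpid_bitmap);
                  else
                          vpid = 0;       /* exhausted: fall back to vpid 0 (no vpid) */
                  spin_unlock(&vmx_vpid_lock);

                  return vpid;
          }

          static void free_vpid(int vpid)
          {
                  if (!enable_vpid || vpid == 0)
                          return;

                  spin_lock(&vmx_vpid_lock);
                  __clear_bit(vpid, vmx_vpid_bitmap);
                  spin_unlock(&vmx_vpid_lock);
          }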
    • kvm: fix waitqueue_active without memory barrier in virt/kvm/async_pf.c · 6003a420
      Kosuke Tatsukawa authored
      async_pf_execute() seems to be missing a memory barrier which might
      cause the waker to not notice the waiter and miss sending a wake_up as
      in the following figure.
      
              async_pf_execute                    kvm_vcpu_block
      ------------------------------------------------------------------------
      spin_lock(&vcpu->async_pf.lock);
      if (waitqueue_active(&vcpu->wq))
      /* The CPU might reorder the test for
         the waitqueue up here, before
         prior writes complete */
                                          prepare_to_wait(&vcpu->wq, &wait,
                                            TASK_INTERRUPTIBLE);
                                          /*if (kvm_vcpu_check_block(vcpu) < 0) */
                                           /*if (kvm_arch_vcpu_runnable(vcpu)) { */
                                            ...
                                            return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
                                              !vcpu->arch.apf.halted)
                                              || !list_empty_careful(&vcpu->async_pf.done)
                                           ...
                                           return 0;
      list_add_tail(&apf->link,
        &vcpu->async_pf.done);
      spin_unlock(&vcpu->async_pf.lock);
                                          waited = true;
                                          schedule();
      ------------------------------------------------------------------------
      
      The attached patch adds the missing memory barrier.
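      The fix follows the usual waitqueue_active() pattern: a full barrier between the write the waiter will check and the waitqueue_active() test. A minimal sketch of the waker side (placement illustrative, names taken from the figure above):

          /* async_pf_execute(), waker side (sketch) */
          spin_lock(&vcpu->async_pf.lock);
          list_add_tail(&apf->link, &vcpu->async_pf.done);
          spin_unlock(&vcpu->async_pf.lock);

          /* Pairs with the barrier implied by prepare_to_wait()/set_current_state()
           * in kvm_vcpu_block(): make the list update visible before testing
           * whether anyone is waiting, so the wake_up cannot be missed. */
          smp_mb();
          if (waitqueue_active(&vcpu->wq))
                  wake_up_interruptible(&vcpu->wq);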
      
      I found this issue while looking through the Linux source code for
      places that call waitqueue_active() before wake_up*() without a
      preceding memory barrier, after sending a patch to fix a similar
      issue in drivers/tty/n_tty.c (details about the original issue can be
      found here: https://lkml.org/lkml/2015/9/28/849).
      Signed-off-by: Kosuke Tatsukawa <tatsu@ab.jp.nec.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: don't notify userspace IOAPIC on edge EOI · 13db7734
      Radim Krčmář authored
      On real hardware, edge-triggered interrupts don't set a bit in TMR,
      which means that the IOAPIC isn't notified on EOI.  Do the same here.
      
      Staying in guest/kernel mode after edge EOI is what we want for most
      devices.  If some bugs could be nicely worked around with edge EOI
      notifications, we should invest in a better interface.
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
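      An illustrative fragment of the resulting check when building the EOI-exit set for a userspace IOAPIC (from inside the per-route scan; structure fields follow the IOAPIC redirection-entry layout, but treat this as a sketch):

          /* Inside the loop scanning IRQ routes targeting this vcpu (sketch): */
          if (entry->fields.trig_mode == IOAPIC_LEVEL_TRIG)
                  __set_bit(entry->fields.vector, (unsigned long *)eoi_exit_bitmap);
          /* Edge-triggered entries set no bit: as on real hardware (no TMR bit),
           * their EOI never notifies the IOAPIC. */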
    • KVM: x86: fix edge EOI and IOAPIC reconfig race · db2bdcbb
      Radim Krčmář authored
      KVM uses eoi_exit_bitmap to track vectors that need an action on EOI.
      The problem is that the IOAPIC can be reconfigured while an interrupt
      with the old configuration is pending, and eoi_exit_bitmap only
      remembers the newest configuration; thus the EOI from the pending
      interrupt is not recognized.
      
      (Reconfiguration is not a problem for level interrupts, because the
       IOAPIC sends the interrupt with the new configuration.)
      
      For an edge interrupt with ACK notifiers, like the i8254 timer, things
      can happen in this order:
       1) IOAPIC injects a vector from the i8254
       2) guest reconfigures that vector's VCPU and therefore eoi_exit_bitmap
          on the original VCPU gets cleared
       3) guest's handler for the vector does EOI
       4) KVM's EOI handler doesn't pass that vector to IOAPIC because it is
          not in that VCPU's eoi_exit_bitmap
       5) i8254 stops working
      
      A simple solution is to set the IOAPIC vector in eoi_exit_bitmap if the
      vector is in PIR/IRR/ISR.
      
      This creates an unwanted situation if the vector is reused by a
      non-IOAPIC source, but I think it is so rare that we don't want to make
      the solution more sophisticated.  The simple solution also doesn't work
      if we are reconfiguring the vector.  (Shouldn't happen in the wild and
      I'd rather fix users of ACK notifiers instead of working around that.)
      
      There are no races because IOAPIC injection and reconfiguration are locked.
      
      Fixes: b053b2ae ("KVM: x86: Add EOI exit bitmap inference")
      [Before b053b2ae, this bug happened only with APICv.]
      Fixes: c7c9c56c ("x86, apicv: add virtual interrupt delivery support")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
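      A sketch of the simple solution described above (helper names approximate; the pending-EOI helper tests whether the vector is still set in PIR/IRR/ISR for this vcpu):

          /* Inside the per-entry scan that rebuilds eoi_exit_bitmap (sketch): */
          if (kvm_apic_match_dest(vcpu, NULL, 0, e->fields.dest_id,
                                  e->fields.dest_mode) ||
              kvm_apic_pending_eoi(vcpu, e->fields.vector))
                  /* Keep the vector even if the routing changed: an interrupt
                   * injected under the old configuration may still be pending
                   * or in service, and its EOI must reach the IOAPIC. */
                  __set_bit(e->fields.vector, (unsigned long *)eoi_exit_bitmap);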
    • kvm: x86: set KVM_REQ_EVENT when updating IRR · c77f3fab
      Radim Krčmář authored
      After moving PIR to IRR, the interrupt needs to be delivered manually.
      Reported-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
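      A minimal sketch of the idea (placement illustrative): once the PIR bits have been copied into IRR, request an event-injection pass so the vcpu loop actually delivers the interrupt.

          /* After syncing PIR into IRR (sketch): */
          kvm_make_request(KVM_REQ_EVENT, vcpu);  /* re-evaluate injectable events */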
    • Merge branch 'kvm-master' into HEAD · bff98d3b
      Paolo Bonzini authored
      Merge more important SMM fixes.
    • KVM: x86: fix RSM into 64-bit protected mode · b10d92a5
      Paolo Bonzini authored
      In order to get into 64-bit protected mode, you need to enable
      paging while EFER.LMA=1.  For this to work, CS.L must be 0.
      Currently, we load the segments before CR0 and CR4, which means
      that if RSM returns into 64-bit protected mode CS.L is already 1
      and everything breaks.
      
      Luckily, CS.L=0 is always the case when executing RSM, because it
      is forbidden to execute RSM from 64-bit protected mode.  Hence it
      is enough to load CR0 and CR4 first, and only then the segments.
      
      Fixes: 660a5d51
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
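      An ordering sketch only (the helper names are made up; the real code lives in the x86 emulator's RSM path): restore the control registers before the segment registers, relying on CS.L still being 0 because RSM cannot be executed from 64-bit mode.

          static int rsm_restore_state_sketch(struct x86_emulate_ctxt *ctxt)
          {
                  /* 1. CR0/CR4 (and EFER) first: CS.L is still 0 here, so
                   *    enabling paging with EFER.LMA=1 is architecturally valid. */
                  rsm_load_control_regs(ctxt);    /* hypothetical helper */

                  /* 2. Only then load CS/SS/DS/... from the SMRAM state save
                   *    area; CS.L may now legitimately become 1. */
                  rsm_load_segments(ctxt);        /* hypothetical helper */

                  return X86EMUL_CONTINUE;
          }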
    • KVM: x86: fix previous commit for 32-bit · 25188b99
      Paolo Bonzini authored
      Unfortunately I only noticed this after pushing.
      
      Fixes: f0d648bd
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 13 Oct, 2015 15 commits