1. 27 Dec, 2011 5 commits
    • KVM: MMU: avoid pte_list_desc running out in kvm_mmu_pte_write · f759e2b4
      Xiao Guangrong authored
      kvm_mmu_pte_write is unsafe since we need to allocate pte_list_desc in the
      function when sptes are prefetched; unfortunately, we cannot know in advance
      how many sptes need to be prefetched on this path, which means we can run out
      of free pte_list_desc objects in the cache and trigger the BUG_ON().  In
      addition, some paths do not fill the cache at all, for example emulation of an
      INS instruction that does not trigger a page fault.
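      A rough sketch of the resulting guard (illustrative only; rmap_can_add() and
      the exact cache field are assumptions, not necessarily the upstream code):
      prefetch stops as soon as the per-vcpu pte_list_desc cache has no free object
      left, instead of allocating past the cache and hitting the BUG_ON().

          /* illustrative helper: is there still a preallocated pte_list_desc
           * left in the per-vcpu cache? */
          static bool rmap_can_add(struct kvm_vcpu *vcpu)
          {
                  return vcpu->arch.mmu_pte_list_desc_cache.nobjs > 0;
          }

          /* inside the kvm_mmu_pte_write() prefetch loop */
          if (!rmap_can_add(vcpu))
                  break;          /* cache exhausted - stop prefetching */
          /* ...otherwise prefetch the next spte as before... */
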
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: nVMX: Fix warning-causing idt-vectoring-info behavior · 51cfe38e
      Nadav Har'El authored
      When L0 wishes to inject an interrupt while L2 is running, it emulates an exit
      to L1 with EXIT_REASON_EXTERNAL_INTERRUPT. This was explained in the original
      nVMX patch 23, titled "Correct handling of interrupt injection".
      
      Unfortunately, it is possible (though rare) that at this point there is valid
      idt_vectoring_info in vmcs02. For example, L1 injected some interrupt to L2,
      and when L2 tried to run this interrupt's handler, it got a page fault - so
      it returns the original interrupt vector in idt_vectoring_info. The problem
      is that if this is the case, we cannot exit to L1 with EXTERNAL_INTERRUPT
      like we wished to, because the VMX spec guarantees that idt_vectoring_info
      and EXIT_REASON_EXTERNAL_INTERRUPT can never happen together. This is not
      just specified in the spec - a KVM L1 actually prints a kernel warning
      "unexpected, valid vectoring info" if we violate this guarantee, and some
      users noticed these warnings in L1's logs.
      
      In order to better emulate a processor, which would never return the external
      interrupt and the idt-vectoring-info together, we need to separate the two
      injection steps: First, complete L1's injection into L2 (i.e., enter L2,
      injecting to it the idt-vectoring-info); Second, after entry into L2 succeeds
      and it exits back to L0, exit to L1 with the EXIT_REASON_EXTERNAL_INTERRUPT.
      Most of this is already in the code - the only change we need is to remain
      in L2 (and not exit to L1) in this case.
      
      Note that the previous patch ensures (by using KVM_REQ_IMMEDIATE_EXIT) that
      although we do enter L2 first, it will exit immediately after processing its
      injection, allowing us to promptly inject to L1.
      
      Note how we test vmcs12->idt_vectoring_info_field; this isn't really the
      vmcs12 value (we haven't exited to L1 yet, so vmcs12 hasn't been updated),
      but rather the place we save, at the end of vmx_vcpu_run, the vmcs02 value
      of this field. This was explained in patch 25 ("Correct handling of idt
      vectoring info") of the original nVMX patch series.
      
      Thanks to Dave Allan and to Federico Simoncelli for reporting this bug,
      to Abel Gordon for helping me figure out the solution, and to Avi Kivity
      for helping to improve it.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: nVMX: Add KVM_REQ_IMMEDIATE_EXIT · d6185f20
      Nadav Har'El authored
      This patch adds a new vcpu->requests bit, KVM_REQ_IMMEDIATE_EXIT.
      This bit requests that, when next entering the guest, we run it for as short
      a time as possible and then exit again.
      
      We use this new option in nested VMX: When L1 launches L2, but L0 wishes L1
      to continue running so it can inject an event to it, we unfortunately cannot
      just pretend to have run L2 for a little while - we must really launch L2,
      otherwise certain one-off vmcs12 parameters (namely, L1 injection into L2)
      will be lost. So the existing code runs L2 in this case.
      But L2 could potentially run for a long time until it exits, and the
      injection into L1 will be delayed. The new KVM_REQ_IMMEDIATE_EXIT allows us
      to request that L2 will be entered, as necessary, but will exit as soon as
      possible after entry.
      
      Our implementation of this request uses smp_send_reschedule() to send a
      self-IPI, with interrupts disabled. The interrupts remain disabled until the
      guest is entered, and then, after the entry is complete (often including
      processing an injection and jumping to the relevant handler), the physical
      interrupt is noticed and causes an exit.
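      A minimal sketch of that mechanism (illustrative; the exact placement inside
      vcpu_enter_guest() is simplified here): the request is consumed just before
      entry, while interrupts are already disabled, so the rescheduling IPI stays
      pending and fires as soon as the guest is entered.

          /* requesting side, e.g. nested VMX wanting a prompt exit from L2 */
          kvm_make_request(KVM_REQ_IMMEDIATE_EXIT, vcpu);

          /* entry side, sketched */
          local_irq_disable();
          if (kvm_check_request(KVM_REQ_IMMEDIATE_EXIT, vcpu))
                  smp_send_reschedule(vcpu->cpu);  /* self-IPI, left pending */

          kvm_x86_ops->run(vcpu);  /* guest entry; the pending IPI forces an exit */
          local_irq_enable();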
      
      On recent Intel processors, we could have achieved the same goal by using
      MTF instead of a self-IPI. Another technique worth considering in the future
      is to use VM_EXIT_ACK_INTR_ON_EXIT and a highest-priority vector IPI - to
      slightly improve performance by avoiding the useless interrupt handler
      which ends up being called when smp_send_reschedule() is used.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • drm/i915: Disable RC6 on Sandybridge by default · 371de6e4
      Keith Packard authored
      RC6 fails again.
      
      > I found my system freeze mostly during starting up X and KDE. Sometimes it
      > works for some minutes, sometimes it freezes immediately. When the freeze
      > happens, everything is dead (even the reset button does not work, I need to
      > power cycle).
      
      > I disabled RC6, and my system runs wonderfully.
      
      > The system is a Z68 Pro board with Sandybridge i5-2500K processor, 8
      > GB of RAM and UEFI firmware.
      Reported-by: Kai Krakow <hurikhan77@gmail.com>
      Signed-off-by: Keith Packard <keithp@keithp.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • drm/i915: Disable semaphores by default on SNB · ebbd857e
      Keith Packard authored
      Semaphores still cause problems on some machines:
      
      > From Udo Steinberg:
      >
      > With Linux-3.2-rc6 I'm frequently seeing GPU hangs when large amounts of
      > text scroll in an xterm, such as when extracting a tar archive. Such as this
      > one (note the timestamps):
      >
      >  I can reproduce it fairly easily with something
      >  as simple as:
      >
      >	  while true; do dmesg; done
      
      This patch turns them off on SNB while leaving them on for IVB.
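      The default policy is tiny; roughly sketched below (illustrative - the helper
      name is an assumption, and the real patch routes this through the
      i915_semaphores module parameter): with no explicit override, semaphores are
      disabled on SNB (gen 6) and left enabled on IVB.

          /* sketch, assuming i915_semaphores < 0 means "auto" */
          static bool intel_enable_semaphores(struct drm_device *dev)
          {
                  if (i915_semaphores >= 0)
                          return i915_semaphores;  /* explicit user override */

                  /* default: off on SNB (hangs reported), on elsewhere */
                  return !IS_GEN6(dev);
          }
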
      Reported-by: Udo Steinberg <udo@hypervisor.org>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Eugeni Dodonov <eugeni@dodonov.net>
      Signed-off-by: Keith Packard <keithp@keithp.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 26 Dec, 2011 8 commits
  3. 25 Dec, 2011 4 commits
  4. 24 Dec, 2011 4 commits
  5. 23 Dec, 2011 14 commits
  6. 22 Dec, 2011 5 commits
    • Merge branch 'for-linus' of git://neil.brown.name/md · ad1fca20
      Linus Torvalds authored
      * 'for-linus' of git://neil.brown.name/md:
        md/bitmap: It is OK to clear bits during recovery.
        md: don't give up looking for spares on first failure-to-add
        md/raid5: ensure correct assessment of drives during degraded reshape.
        md/linear: fix hot-add of devices to linear arrays.
    • md/bitmap: It is OK to clear bits during recovery. · 961902c0
      NeilBrown authored
      commit d0a4bb49 introduced a
      regression which is annoying but fairly harmless.
      
      When writing to an array that is undergoing recovery (a spare is
      being integrated into the array), the write will set bits in the
      bitmap, but they will not be cleared when the write completes.
      
      For bits covering areas that have not been recovered yet this is not a
      problem, as the recovery will clear the bits.  However, bits set in an
      already-recovered region will stay set and never be cleared.
      This doesn't risk data integrity.  The only negatives are:
       - next time there is a crash, more resyncing than necessary will
         be done.
       - the bitmap doesn't look clean, which is confusing.
      
      While an array is recovering we don't want to update the
      'events_cleared' setting in the bitmap but we do still want to clear
      bits that have very recently been set - providing they were written to
      the recovering device.
      
      So split those two needs - which previously both depended on 'success' -
      and always clear the bit if the write went to all devices.
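      Roughly sketched (illustrative; the change lives in bitmap_endwrite() and the
      details around the per-chunk counter are elided):

          /* only a clean, successful write may advance events_cleared */
          if (success && !bitmap->mddev->degraded &&
              bitmap->events_cleared < bitmap->mddev->events) {
                  bitmap->events_cleared = bitmap->mddev->events;
                  bitmap->need_sync = 1;
          }

          /*
           * The bit itself, however, is now cleared whenever the write has
           * completed on every device it was sent to - including a device
           * that is still being recovered - regardless of 'success'.
           */
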
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: don't give up looking for spares on first failure-to-add · 60fc1370
      NeilBrown authored
      Before performing a recovery we try to remove any spares that
      might not be working, then add any that might have become relevant.
      
      Currently we abort on the first spare that cannot be added.
      This is a false optimisation.
      It is conceivable that - depending on rules in the personality - a
      subsequent spare might be accepted.
      Also the loop does other things like count the available spares and
      reset the 'recovery_offset' value.
      
      If we abort early these might not happen properly.
      
      So remove the early abort.
      
      In particular, if you have an array that is undergoing recovery and
      which has extra spares, then the recovery may not restart after a
      reboot, as the count of 'spares' might end up as zero.
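      The shape of the change, roughly sketched (illustrative; the iteration macro
      and the usability check are placeholders for the real remove_and_add_spares()
      logic): every candidate is still tried and counted even if an earlier hot-add
      fails.

          /* iterate over every candidate spare, never break out early */
          rdev_for_each(rdev, mddev) {
                  if (!rdev_is_usable_spare(rdev))        /* illustrative check */
                          continue;
                  rdev->recovery_offset = 0;
                  if (mddev->pers->hot_add_disk(mddev, rdev) == 0)
                          spares++;
                  /* previously a failed hot_add_disk aborted the whole loop,
                   * so later spares were never tried or counted */
          }
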
      Reported-by: Anssi Hannula <anssi.hannula@iki.fi>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: ensure correct assessment of drives during degraded reshape. · 30d7a483
      NeilBrown authored
      While reshaping a degraded array (as when reshaping a RAID0 by first
      converting it to a degraded RAID4) we currently get confused about
      which devices are in_sync.  In most cases we get it right, but in the
      region that is being reshaped we need to treat non-failed devices as
      in-sync when we have the data but haven't actually written it out yet.
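      The rule in one illustrative condition (the helper testing the reshape window
      is a placeholder, not the upstream code):

          /* per-device assessment, sketched */
          if (test_bit(Faulty, &rdev->flags))
                  ;       /* a failed device is never in_sync */
          else if (stripe_in_reshape_region(sh))   /* illustrative helper */
                  /* we already hold this data in the stripe cache, it just
                   * has not been written back yet - count it as in_sync */
                  set_bit(R5_Insync, &dev->flags);
          else if (test_bit(In_sync, &rdev->flags))
                  set_bit(R5_Insync, &dev->flags);
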
      Reported-by: Adam Kwolek <adam.kwolek@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/linear: fix hot-add of devices to linear arrays. · 09cd9270
      NeilBrown authored
      commit d70ed2e4
      broke hot-add to a linear array.
      After that commit, metadata is not written to devices until they
      have been fully integrated into the array as determined by
      saved_raid_disk.  That patch arranged to clear that field after
      a recovery completed.
      
      However for linear arrays, there is no recovery - the integration is
      instantaneous.  So we need to explicitly clear the saved_raid_disk
      field.
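      Sketched (illustrative; the hot-add entry point is linear_add() and the
      surrounding setup is elided): once the new device is wired into the linear
      configuration, saved_raid_disk is cleared on the spot.

          /* at the end of linear_add(), after the new conf is live */
          rdev->raid_disk = rdev->saved_raid_disk;
          rdev->saved_raid_disk = -1;     /* no recovery pass will clear this */
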
      Signed-off-by: NeilBrown <neilb@suse.de>