26 Jul, 2018 3 commits
    • KVM: PPC: Book3S HV: Read kvm->arch.emul_smt_mode under kvm->lock · b5c6f760
      Paul Mackerras authored
      Commit 1e175d2e ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full
      VCPU ID space", 2018-07-25) added code that uses kvm->arch.emul_smt_mode
      before any VCPUs are created.  However, userspace can change
      kvm->arch.emul_smt_mode at any time up until the first VCPU is created.
      Hence it is (theoretically) possible for the check in
      kvmppc_core_vcpu_create_hv() to race with another userspace thread
      changing kvm->arch.emul_smt_mode.
      
      This fixes it by moving the test that uses kvm->arch.emul_smt_mode into
      the block where kvm->lock is held.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
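      
      As an illustration, here is a rough sketch of the resulting shape of
      kvmppc_core_vcpu_create_hv() after the fix (simplified; vcore lookup
      and error handling are elided, and the exact code may differ):
      
          mutex_lock(&kvm->lock);
          /*
           * kvm->arch.emul_smt_mode is stable here: userspace can only
           * change it before the first VCPU exists, and it does so while
           * holding kvm->lock.
           */
          if (cpu_has_feature(CPU_FTR_ARCH_300)) {
              if (id >= (KVM_MAX_VCPUS * kvm->arch.emul_smt_mode))
                  core = KVM_MAX_VCORES;  /* invalid: ID too high */
              else
                  core = kvmppc_pack_vcpu_id(kvm, id);
          } else {
              core = id / kvm->arch.smt_mode;
          }
          /* ... find or create the vcore for 'core' ... */
          mutex_unlock(&kvm->lock);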
    • KVM: PPC: Book3S HV: Allow creating max number of VCPUs on POWER9 · 1ebe6b81
      Paul Mackerras authored
      Commit 1e175d2e ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full
      VCPU ID space", 2018-07-25) allowed use of VCPU IDs up to
      KVM_MAX_VCPU_ID on POWER9 in all guest SMT modes and guest emulated
      hardware SMT modes.  However, with the current definition of
      KVM_MAX_VCPU_ID, a guest SMT mode of 1 and an emulated SMT mode of 8,
      it is only possible to create KVM_MAX_VCPUS / 2 VCPUs, because
      threads_per_subcore is 4 on POWER9 CPUs.  (Using an emulated SMT mode
      of 8 is useful when migrating VMs to or from POWER8 hosts.)
      
      This increases KVM_MAX_VCPU_ID to 8 * KVM_MAX_VCPUS when HV KVM is
      configured in, so that a full complement of KVM_MAX_VCPUS VCPUs can
      be created on POWER9 in all guest SMT modes and emulated hardware
      SMT modes.
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
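      
      To see where the factor of two came from: with an emulated SMT mode
      of 8, userspace allocates VCPU IDs at a stride of 8 (one per
      emulated core), so N single-threaded cores need IDs up to
      8 * (N - 1).  The previous limit was threads_per_subcore *
      KVM_MAX_VCORES = 4 * KVM_MAX_VCPUS on POWER9, which caps N at
      KVM_MAX_VCPUS / 2.  A sketch of the new definition (assuming the
      existing MAX_SMT_THREADS constant, which is 8 on POWER9; the exact
      header arrangement may differ):
      
          #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
          #include <asm/kvm_book3s_asm.h>    /* for MAX_SMT_THREADS */
          #define KVM_MAX_VCPU_ID    (MAX_SMT_THREADS * KVM_MAX_VCORES)
          #else
          #define KVM_MAX_VCPU_ID    KVM_MAX_VCPUS
          #endif
      
      Since KVM_MAX_VCORES equals KVM_MAX_VCPUS, this yields the
      8 * KVM_MAX_VCPUS limit described above.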
    • KVM: PPC: Book3S HV: Pack VCORE IDs to access full VCPU ID space · 1e175d2e
      Sam Bobroff authored
      It is not currently possible to create the full number of possible
      VCPUs (KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses fewer
      threads per core than its core stride (or "VSMT mode"). This is
      because the VCORE ID and XIVE offsets grow beyond KVM_MAX_VCPUS
      even though the VCPU ID is less than KVM_MAX_VCPU_ID.
      
      To address this, "pack" the VCORE ID and XIVE offsets by using
      knowledge of the way the VCPU IDs will be used when there are fewer
      guest threads per core than the core stride. The primary thread of
      each core will always be used first. Then, if the guest uses more than
      one thread per core, these secondary threads will sequentially follow
      the primary in each core.
      
      So the only way an ID above KVM_MAX_VCPUS can be seen is if the
      VCPUs are being spaced apart, so at least half of each core is empty,
      and IDs between KVM_MAX_VCPUS and (KVM_MAX_VCPUS * 2) can be mapped
      into the second half of each core (4..7, in an 8-thread core).
      
      Similarly, if IDs above KVM_MAX_VCPUS * 2 are seen, at least 3/4 of
      each core is being left empty, and we can map down into the second and
      fourth quarters of each core (2, 3 and 6, 7 in an 8-thread core).
      
      Lastly, if IDs above KVM_MAX_VCPUS * 4 are seen, only the primary
      threads are being used and 7/8 of the core is empty, allowing use of
      the 1, 5, 3 and 7 thread slots.
      
      (Strides less than 8 are handled similarly.)
      
      This allows the VCORE ID or offset to be calculated quickly from the
      VCPU ID or XIVE server numbers, without access to the VCPU structure.
      
      [paulus@ozlabs.org - tidied up comment a little, changed some WARN_ONCE
       to pr_devel, wrapped line, fixed id check.]
      Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
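      
      The heart of the change is a small helper that maps a VCPU ID (or
      XIVE server number) down into the range [0, KVM_MAX_VCPUS).  A
      sketch reconstructed from the description above (MAX_SMT_THREADS is
      8 on POWER9; note that the offset table is just the 3-bit
      bit-reversal permutation, which is what keeps the mapping
      collision-free at every power-of-two stride):
      
          static inline u32 kvmppc_pack_vcpu_id(struct kvm *kvm, u32 id)
          {
              /* Block 0 keeps offset 0; higher blocks fold into thread
               * slots that the ID spacing guarantees are unused. */
              const int block_offsets[MAX_SMT_THREADS] = {0, 4, 2, 6, 1, 5, 3, 7};
              int stride = kvm->arch.emul_smt_mode;
              int block = (id / KVM_MAX_VCPUS) * (MAX_SMT_THREADS / stride);
              u32 packed_id;
      
              if (WARN_ONCE(block >= MAX_SMT_THREADS, "VCPU ID too large to pack"))
                  return 0;
              packed_id = (id % KVM_MAX_VCPUS) + block_offsets[block];
              if (WARN_ONCE(packed_id >= KVM_MAX_VCPUS, "VCPU ID packing failed"))
                  return 0;
              return packed_id;
          }
      
      For example, with a hypothetical KVM_MAX_VCPUS of 2048 and an
      emulated SMT mode of 8, ID 2048 (block 1) packs to 4, ID 2056
      packs to 12, and ID 4096 (block 2) packs to 2.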