  1. 04 Oct, 2023 2 commits
  2. 02 Oct, 2023 1 commit
  3. 27 Sep, 2023 2 commits
  4. 18 Aug, 2023 1 commit
  5. 09 Aug, 2023 1 commit
  6. 08 Aug, 2023 1 commit
  7. 21 Jun, 2023 1 commit
  8. 16 May, 2023 1 commit
  9. 05 Apr, 2023 4 commits
  10. 04 Apr, 2023 1 commit
  11. 30 Mar, 2023 2 commits
  12. 23 Mar, 2023 1 commit
  13. 16 Mar, 2023 1 commit
    • KVM: arm64: Limit length in kvm_vm_ioctl_mte_copy_tags() to INT_MAX · 2def950c
      Thomas Huth authored
      On success, this function returns the number of bytes handled.
      However, this does not work for large values: the function is called
      from kvm_arch_vm_ioctl() (which still returns a long), which in turn
      is called from kvm_vm_ioctl() in virt/kvm/kvm_main.c, and that
      function stores the return value in an "int r" variable. The upper
      32 bits of the "long" return value are therefore lost there.
      
      KVM ioctl functions should only return "int" values, so let's limit
      the number of bytes that can be requested here to INT_MAX to avoid
      the problem with the truncated return value. We can then also change
      the return type of the function to "int" to make it clearer that it
      is not possible to return a "long" here. A minimal illustration of
      the truncation follows this entry.
      
      Fixes: f0376edb ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Cornelia Huck <cohuck@redhat.com>
      Reviewed-by: Gavin Shan <gshan@redhat.com>
      Reviewed-by: Steven Price <steven.price@arm.com>
      Message-Id: <20230208140105.655814-5-thuth@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
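      The following is a minimal, self-contained illustration of the
      truncation described above. It is hypothetical userspace code, not
      the kernel ioctl path; the helper name handle_bytes() is made up for
      the example.

      #include <limits.h>
      #include <stdio.h>

      /* Stand-in for a helper that reports how many bytes it handled. */
      static long handle_bytes(long requested)
      {
              return requested;
      }

      int main(void)
      {
              long requested = (long)INT_MAX + 1;     /* one byte more than INT_MAX */
              int r = handle_bytes(requested);        /* like the "int r" in kvm_vm_ioctl() */

              /* On 64-bit hosts r no longer matches the requested length. */
              printf("requested=%ld, caller sees r=%d\n", requested, r);
              return 0;
      }

      Capping the request at INT_MAX, as the commit describes, keeps the
      return value representable in the caller's "int".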
  14. 07 Feb, 2023 2 commits
  15. 29 Jan, 2023 2 commits
  16. 24 Jan, 2023 1 commit
    • KVM: x86/pmu: Introduce masked events to the pmu event filter · 14329b82
      Aaron Lewis authored
      When building a list of filter events, it can sometimes be a challenge
      to fit all the events needed to adequately restrict the guest into the
      limited space available in the pmu event filter.  This stems from the
      fact that the pmu event filter requires that each event (i.e. event
      select + unit mask) be listed, when the intention might be to restrict
      the event select altogether, regardless of its unit mask.  Instead of
      increasing the number of filter events in the pmu event filter, add a
      new encoding that can do a more generalized match on the unit mask.
      
      Introduce masked events as another encoding the pmu event filter
      understands.  Masked events have three fields: mask, match, and
      exclude.  When filtering based on these events, the mask is applied to
      the guest's unit mask to see if it matches the match value (i.e.
      umask & mask == match).  The exclude bit can then be used to exclude
      events from that match.  E.g. for a given event select, if it's easier
      to say which unit mask values shouldn't be filtered, a masked event
      can be set up to match all possible unit mask values, then another
      masked event can be set up to match the unit mask values that
      shouldn't be filtered.  A small illustration of this matching rule
      follows this entry.
      
      Userspace can query to see if this feature exists by looking for the
      capability, KVM_CAP_PMU_EVENT_MASKED_EVENTS.
      
      This feature is enabled by setting the flags field in the pmu event
      filter to KVM_PMU_EVENT_FLAG_MASKED_EVENTS.
      
      Events can be encoded by using KVM_PMU_ENCODE_MASKED_ENTRY().
      
      It is an error to have a bit set outside the valid bits for a masked
      event; calls to KVM_SET_PMU_EVENT_FILTER will return -EINVAL in such
      cases.  On Intel this includes the high bits of the event select
      (35:32).
      
      The filter matching code has also been updated to match on a common
      event.  Masked events were flexible enough to handle both event types,
      so they were used as the common event.  This changes how guest events
      get filtered: regardless of the type of event used in the uAPI, they
      will be converted to masked events.  There could be a slight
      performance hit, because instead of matching the filter event with a
      lookup on event select + unit mask, it does a lookup on event select
      and then walks the unit masks to find the match.  This shouldn't be a
      big problem, because I would expect the set of common event selects to
      be small, and if it isn't, the set can likely be reduced by using
      masked events to generalize the unit mask.  Using one type of event
      when filtering guest events allows for a common code path to be used.
      Signed-off-by: Aaron Lewis <aaronlewis@google.com>
      Link: https://lore.kernel.org/r/20221220161236.555143-5-aaronlewis@google.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
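      Below is a minimal sketch of the matching rule described above,
      i.e. (guest umask & mask) == match, with exclude turning a match into
      a rejection.  It is a simplified userspace illustration with made-up
      values; it is not the in-kernel filter code and does not use the uAPI
      encoding macro.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct masked_event {
              uint8_t mask;
              uint8_t match;
              bool    exclude;
      };

      /* The core rule: apply the mask to the guest's unit mask, then compare. */
      static bool umask_matches(const struct masked_event *e, uint8_t guest_umask)
      {
              return (guest_umask & e->mask) == e->match;
      }

      int main(void)
      {
              /* For one event select: allow every unit mask ... */
              struct masked_event allow_all = { .mask = 0x00, .match = 0x00, .exclude = false };
              /* ... except unit masks whose low nibble is 0x1. */
              struct masked_event deny_some = { .mask = 0x0f, .match = 0x01, .exclude = true };

              for (uint8_t umask = 0x40; umask <= 0x42; umask++) {
                      bool allowed = umask_matches(&allow_all, umask) &&
                                     !(deny_some.exclude && umask_matches(&deny_some, umask));
                      printf("umask 0x%02x -> %s\n", umask,
                             allowed ? "allowed" : "filtered out");
              }
              return 0;
      }

      This mirrors the commit's example of matching all unit mask values
      with one entry and carving out exclusions with another.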
  17. 09 Jan, 2023 1 commit
    • KVM: x86: Do not return host topology information from KVM_GET_SUPPORTED_CPUID · 45e966fc
      Paolo Bonzini authored
      Passing the host topology to the guest is almost certainly wrong
      and will confuse the scheduler.  In addition, several fields of
      these CPUID leaves vary on each processor; it is simply impossible to
      return the right values from KVM_GET_SUPPORTED_CPUID in such a way that
      they can be passed to KVM_SET_CPUID2.
      
      The values that will most likely prevent confusion are all zeroes.
      Userspace will have to override them anyway if it wishes to present a
      specific topology to the guest.  A hedged sketch of that override step
      follows this entry.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
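      Below is a hedged sketch of the "override them anyway" step: after
      KVM_GET_SUPPORTED_CPUID, the VMM fills the topology leaves itself
      before calling KVM_SET_CPUID2.  Leaf 0xb and the shift parameters are
      illustrative assumptions, not taken from the commit; a real VMM has to
      fill all four registers of every subleaf consistently.

      #include <linux/kvm.h>
      #include <stdint.h>

      static void override_topology_leaves(struct kvm_cpuid2 *cpuid,
                                           uint32_t smt_shift, uint32_t core_shift)
      {
              for (uint32_t i = 0; i < cpuid->nent; i++) {
                      struct kvm_cpuid_entry2 *e = &cpuid->entries[i];

                      if (e->function != 0xb)         /* extended topology enumeration */
                              continue;

                      /* Subleaf 0 = SMT level, subleaf 1 = core level (simplified). */
                      e->eax = (e->index == 0) ? smt_shift : core_shift;
                      /* ebx/ecx/edx (logical CPU counts, level type, x2APIC ID)
                       * would also need consistent, VMM-chosen values here. */
              }
      }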
  18. 03 Jan, 2023 1 commit
  19. 27 Dec, 2022 2 commits
  20. 14 Dec, 2022 1 commit
  21. 02 Dec, 2022 5 commits
  22. 01 Dec, 2022 3 commits
  23. 30 Nov, 2022 1 commit
    • KVM: x86/xen: Allow XEN_RUNSTATE_UPDATE flag behaviour to be configured · d8ba8ba4
      David Woodhouse authored
      Closer inspection of the Xen code shows that we aren't supposed to be
      using the XEN_RUNSTATE_UPDATE flag unconditionally. It should be
      explicitly enabled by guests through the HYPERVISOR_vm_assist hypercall.
      If we randomly set the top bit of ->state_entry_time for a guest that
      hasn't asked for it and doesn't expect it, that could make the runtimes
      fail to add up and confuse the guest. Without the flag it's perfectly
      safe for a vCPU to read its own vcpu_runstate_info; just not for one
      vCPU to read *another's*.  A small sketch of the reader-side handling
      of the flag follows this entry.
      
      I briefly pondered adding a word for the whole set of VMASST_TYPE_*
      flags but the only one we care about for HVM guests is this, so it
      seemed a bit pointless.
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Message-Id: <20221127122210.248427-3-dwmw2@infradead.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
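      Below is a minimal sketch of how a reader of *another* vCPU's runstate
      consumes the XEN_RUNSTATE_UPDATE bit: retry while the top bit of
      state_entry_time signals an update in progress.  The struct layout
      mirrors the Xen public vcpu interface, but this is an illustration,
      not the guest or KVM code touched by the commit; opting in via
      HYPERVISOR_vm_assist is not shown.

      #include <stdint.h>

      #define XEN_RUNSTATE_UPDATE (1ULL << 63)  /* top bit of state_entry_time */

      struct vcpu_runstate_info {
              int      state;
              uint64_t state_entry_time;
              uint64_t time[4];       /* nanoseconds spent in each runstate */
      };

      /* Sum the per-state runtimes without tearing against a concurrent update. */
      static uint64_t read_total_runtime(volatile struct vcpu_runstate_info *rs)
      {
              uint64_t entry, total;

              do {
                      entry = rs->state_entry_time;
                      __sync_synchronize();   /* read the flag before the data */
                      total = rs->time[0] + rs->time[1] + rs->time[2] + rs->time[3];
                      __sync_synchronize();   /* re-check the flag after the data */
              } while ((entry & XEN_RUNSTATE_UPDATE) ||
                       entry != rs->state_entry_time);

              return total;
      }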
  24. 29 Nov, 2022 1 commit
  25. 23 Nov, 2022 1 commit