- 28 Jul, 2022 6 commits
-
-
Sean Christopherson authored
Rename kvm_unmap_rmap() and kvm_zap_rmap() to kvm_zap_rmap() and __kvm_zap_rmap() respectively to show that what was the "unmap" helper is just a wrapper for the "zap" helper, i.e. that they do the exact same thing; one just exists to deal with its caller passing in more params. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220715224226.3749507-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
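In sketch form (parameter lists follow the rmap walkers; bodies abbreviated and assumed):

    /* the double-underscore helper does the actual zap */
    static bool __kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
                               const struct kvm_memory_slot *slot)
    {
            return pte_list_destroy(kvm, rmap_head);
    }

    /* the former "unmap" helper exists only to absorb the extra
     * parameters its callers pass in */
    static bool kvm_zap_rmap(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
                             struct kvm_memory_slot *slot, gfn_t gfn,
                             int level, pte_t unused)
    {
            return __kvm_zap_rmap(kvm, rmap_head, slot);
    }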
-
Sean Christopherson authored
Rename __kvm_zap_rmaps() to kvm_rmap_zap_gfn_range() to avoid future confusion with a soon-to-be-introduced __kvm_zap_rmap(). Using a plural "rmaps" is somewhat ambiguous without additional context, as it's not obvious whether it's referring to multiple rmap lists, versus multiple rmap entries within a single list. Use kvm_rmap_zap_gfn_range() to align with the pattern established by kvm_rmap_zap_collapsible_sptes(), without losing the information that it zaps only rmap-based MMUs, i.e. don't rename it to __kvm_zap_gfn_range(). No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220715224226.3749507-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop the trailing "p" from rmap helpers, i.e. rename functions to simply be kvm_<action>_rmap(). Declaring that a function takes a pointer is completely unnecessary and goes against kernel style. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220715224226.3749507-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use pte_list_destroy() directly when recycling rmaps instead of bouncing through kvm_unmap_rmapp() and kvm_zap_rmapp(). Calling kvm_unmap_rmapp() is unnecessary and odd as it requires passing dummy parameters; passing NULL for @slot when __rmap_add() already has a valid slot is especially weird and confusing. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220715224226.3749507-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Return a u64, not an int, from mmu_spte_clear_track_bits(). The return value is the old SPTE value, which is very much a 64-bit value. The sole caller that consumes the return value, drop_spte(), already uses a u64. The only reason that truncating the SPTE value is not problematic is because drop_spte() only queries the shadow-present bit, which is in the lower 32 bits. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220715224226.3749507-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
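In sketch form, only the return type changes:

    /* before: the 64-bit SPTE value was silently truncated to int */
    static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);

    /* after: the full old SPTE value reaches drop_spte() intact */
    static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep);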
-
Maciej S. Szmigiero authored
enter_svm_guest_mode() first calls nested_vmcb02_prepare_control() to copy control fields from VMCB12 to the current VMCB, then nested_vmcb02_prepare_save() to perform a similar copy of the save area. This means that nested_vmcb02_prepare_control() still runs with the previous save area values in the current VMCB, so it shouldn't take the L2 guest CS.Base from this area. Explicitly pull CS.Base from the actual VMCB12 instead in enter_svm_guest_mode(). Granted, having a non-zero CS.Base is a very rare thing (and even impossible in 64-bit mode), and having it change between nested VMRUNs is probably even rarer, but if it happens it would create a really subtle bug, so it's better to fix it upfront. Fixes: 6ef88d6e ("KVM: SVM: Re-inject INT3/INTO instead of retrying the instruction") Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <4caa0f67589ae3c22c311ee0e6139496902f2edc.1658159083.git.maciej.szmigiero@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
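Illustratively (soft_int_csbase comes from the re-injection series referenced in the Fixes tag; the exact placement of the assignment is an assumption of this sketch):

    /* in enter_svm_guest_mode(): take CS.Base from the actual VMCB12,
     * not from the current VMCB, whose save area still holds L1 state */
    svm->soft_int_csbase = vmcb12->save.cs.base;  /* was: svm->vmcb->save.cs.base */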
-
- 22 Jul, 2022 1 commit
-
-
Paolo Bonzini authored
Merge tag 'kvm-s390-next-5.20-1' of https://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390x: Fixes and features for 5.20

* First part of deferred teardown
* CPU Topology
* interpretive execution for PCI instructions
* PV attestation
* Minor fixes
-
- 20 Jul, 2022 3 commits
-
-
Pierre Morel authored
During a subsystem reset the Topology-Change-Report is cleared. Let's give userland the possibility to clear the MTCR in the case of a subsystem reset. To migrate the MTCR, we give userland the possibility to query the MTCR state. We indicate KVM support for the CPU topology facility with a new KVM capability: KVM_CAP_S390_CPU_TOPOLOGY. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Message-Id: <20220714194334.127812-1-pmorel@linux.ibm.com> Link: https://lore.kernel.org/all/20220714194334.127812-1-pmorel@linux.ibm.com/ [frankja@linux.ibm.com: Simple conflict resolution in Documentation/virt/kvm/api.rst] Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
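A hypothetical userspace sketch of the resulting flow (the device-attribute group name is an assumption of this sketch; the ioctls are the generic KVM ones):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static void clear_topology_change_report(int vm_fd)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_S390_VM_CPU_TOPOLOGY,  /* assumed name */
            };

            /* query support, then clear the MTCR, e.g. on subsystem reset */
            if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_S390_CPU_TOPOLOGY) > 0)
                    ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
    }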
-
Pierre Morel authored
We report a topology change to the guest for any CPU hotplug. The reporting to the guest is done using the Multiprocessor Topology-Change-Report (MTCR) bit of the utility entry in the guest's SCA, which will be cleared during the interpretation of PTF. On every vCPU creation we set the MTCR bit to let the guest know, the next time it uses the PTF instruction with command 2, that the topology changed and that it should use the STSI(15.1.x) instruction to get the topology details. STSI(15.1.x) gives information on the CPU configuration topology. Let's accept the interception of STSI with the function code 15 and let the userland part of the hypervisor handle it when userland supports the CPU Topology facility. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Reviewed-by: Janis Schoetterl-Glausch <scgl@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20220714101824.101601-2-pmorel@linux.ibm.com Message-Id: <20220714101824.101601-2-pmorel@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
Pierre Morel authored
We can check if SIIF is enabled by testing the sclp_info struct instead of testing the sie control block eca variable, as that facility is always enabled if available. Also let's clean up all the ipte related struct member accesses which currently happen by referencing the KVM struct via the VCPU struct. Making the KVM struct the parameter to the ipte_* functions removes one level of indirection, which makes the code more readable. Signed-off-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Link: https://lore.kernel.org/all/20220711084148.25017-2-pmorel@linux.ibm.com/ Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
- 19 Jul, 2022 5 commits
-
-
Nico Boehr authored
When the SIGP interpretation facility is present and a VCPU sends an ecall to another VCPU in enabled wait, the sending VCPU receives a 56 intercept (partial execution), so KVM can wake up the receiving CPU. Note that the SIGP interpretation facility will take care of the interrupt delivery and KVM's only job is to wake the receiving VCPU. For PV, the sending VCPU will receive a 108 intercept (pv notify) and should continue like in the non-PV case, i.e. wake the receiving VCPU. For PV and non-PV guests the interrupt delivery will occur through the SIGP interpretation facility on SIE entry when SIE finds the X bit in the status field set. However, in handle_pv_notification(), there was no special handling for SIGP, which leads to interrupt injection being requested by KVM for the next SIE entry. This results in the interrupt being delivered twice: once by the SIGP interpretation facility and once by KVM through the IICTL. Add the necessary special handling in handle_pv_notification(), similar to handle_partial_execution(), which simply wakes the receiving VCPU and leaves interrupt delivery to the SIGP interpretation facility. In contrast to external calls, emergency calls are not interpreted but also cause a 108 intercept, which is why we still need to call handle_instruction() for SIGP orders other than ecall. Since kvm_s390_handle_sigp_pei() is now called for all SIGP orders which cause a 108 intercept - even if they are actually handled by handle_instruction() - move the tracepoint in kvm_s390_handle_sigp_pei() to avoid possibly confusing trace messages. Signed-off-by: Nico Boehr <nrb@linux.ibm.com> Cc: <stable@vger.kernel.org> # 5.7 Fixes: da24a0cc ("KVM: s390: protvirt: Instruction emulation") Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com> Link: https://lore.kernel.org/r/20220718130434.73302-1-nrb@linux.ibm.com Message-Id: <20220718130434.73302-1-nrb@linux.ibm.com> Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
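A hedged sketch of the shape of the fix (other 108 sub-cases elided; the return-value convention of kvm_s390_handle_sigp_pei() is assumed):

    static int handle_pv_notification(struct kvm_vcpu *vcpu)
    {
            /* ... other pv notify sub-cases elided ... */

            /* 0xae is the SIGP opcode; ecall is fully handled by the SIGP
             * interpretation facility, so only wake the receiving VCPU */
            if (vcpu->arch.sie_block->ipa >> 8 == 0xae) {
                    int ret = kvm_s390_handle_sigp_pei(vcpu);

                    if (!ret)
                            return 0;
            }
            /* SIGP orders other than ecall still need emulation */
            return handle_instruction(vcpu);
    }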
-
Claudio Imbrenda authored
Move the Destroy Secure Configuration UVC before the loop to destroy the memory. If the protected VM has memory, it will be cleaned up and made accessible by the Destroy Secure Configuration UVC. The struct page for the relevant pages will still have the protected bit set, so the loop is still needed to clean that up. Switching the order of those two operations does not change the outcome, but it is significantly faster. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20220628135619.32410-13-imbrenda@linux.ibm.com Message-Id: <20220628135619.32410-13-imbrenda@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
Claudio Imbrenda authored
Refactor kvm_s390_pv_deinit_vm to improve readability and simplify the improvements that are coming in subsequent patches. No functional change intended. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Janosch Frank <frankja@linux.ibm.com> Link: https://lore.kernel.org/r/20220628135619.32410-12-imbrenda@linux.ibm.com Message-Id: <20220628135619.32410-12-imbrenda@linux.ibm.com> [frankja@linux.ibm.com: Dropped commit message line regarding review] Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
Claudio Imbrenda authored
When ptep_get_and_clear_full is called for a mm teardown, we will now attempt to destroy the secure pages. This will be faster than export. In case it was not a teardown, or if for some reason the destroy page UVC failed, we try with an export page, like before. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Acked-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Link: https://lore.kernel.org/r/20220628135619.32410-11-imbrenda@linux.ibm.com Message-Id: <20220628135619.32410-11-imbrenda@linux.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
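A hedged sketch of the new teardown path (context simplified; paddr/res are illustrative names, while uv_destroy_owned_page() and uv_convert_owned_from_secure() are the existing s390 UV helpers):

    /* inside ptep_get_and_clear_full(): "full" indicates mm teardown */
    if (full && !uv_destroy_owned_page(paddr))
            return res;                      /* destroy page UVC succeeded */
    /* not a teardown, or the destroy page UVC failed: export as before */
    uv_convert_owned_from_secure(paddr);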
-
Claudio Imbrenda authored
Add an mmu_notifier for protected VMs. The callback function is triggered when the mm is torn down, and will attempt to convert all protected vCPUs to non-protected. This allows the mm teardown to use the destroy page UVC instead of export. Also make KVM select CONFIG_MMU_NOTIFIER, needed to use mmu_notifiers. Signed-off-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Acked-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Nico Boehr <nrb@linux.ibm.com> Link: https://lore.kernel.org/r/20220628135619.32410-10-imbrenda@linux.ibm.com Message-Id: <20220628135619.32410-10-imbrenda@linux.ibm.com> [frankja@linux.ibm.com: Conflict resolution for mmu_notifier.h include and struct kvm_s390_pv] Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
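A minimal sketch of the wiring (callback and ops names are assumptions of this sketch; only .release is needed since the callback fires on mm teardown):

    static void kvm_s390_pv_mmu_notifier_release(struct mmu_notifier *subscription,
                                                 struct mm_struct *mm)
    {
            /* try to convert all protected vCPUs to non-protected, so the
             * rest of the teardown can use the destroy page UVC */
    }

    static const struct mmu_notifier_ops kvm_s390_pv_mmu_notifier_ops = {
            .release = kvm_s390_pv_mmu_notifier_release,
    };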
-
- 14 Jul, 2022 25 commits
-
-
Sean Christopherson authored
When applying the hotplug hack to match x2APIC IDs for vCPUs in xAPIC mode, check the target APIC ID for being unaddressable in xAPIC mode instead of checking the vCPU's x2APIC ID, and in that case proceed as if apic_x2apic_mode(vcpu) were true. Functionally, it does not matter whether you compare kvm_x2apic_id(apic) or mda with 0xff, since the two values are then checked for equality. But in isolation, checking the x2APIC ID takes an unnecessary dependency on the x2APIC ID being read-only (which isn't strictly true on AMD CPUs, and is difficult to document as well); it also requires KVM to fall through and check the xAPIC ID as well to deal with a writable xAPIC ID, whereas the xAPIC ID _can't_ match a target ID greater than 0xff. Opportunistically reword the comment to call out the various subtleties, and to fix a typo reported by Zhang Jiaming. No functional change intended. Cc: Zhang Jiaming <jiaming@nfschina.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
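The resulting check, roughly (a hedged sketch of kvm_apic_match_physical_addr(); surrounding logic elided):

    /* Hotplug hack: if the target APIC ID can't be encoded as an xAPIC
     * ID, proceed as if the vCPU were in x2APIC mode, instead of relying
     * on the x2APIC ID being read-only. */
    if (apic_x2apic_mode(apic) || mda > 0xff)
            return mda == kvm_x2apic_id(apic);

    return mda == kvm_xapic_id(apic);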
-
Paolo Bonzini authored
Merge bugfix needed in both 5.19 (because it's bad) and 5.20 (because it is a prerequisite to test new features).
-
Sean Christopherson authored
Restrict get_mt_mask() to a u8 and reintroduce using a RET0 static_call for the SVM implementation. EPT stores the memtype information in the lower 8 bits (bits 6:3 to be precise), and even returns a shifted u8 without an explicit cast to a larger type; there's no need to return a full u64. Note, RET0 doesn't play nice with a u64 return on 32-bit kernels, see commit bf07be36 ("KVM: x86: do not use KVM_X86_OP_OPTIONAL_RET0 for get_mt_mask"). Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220714153707.3239119-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
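In sketch form (hook declaration plus the RET0 fallback; file placement assumed):

    /* kvm-x86-ops.h: SVM provides no implementation, so the static_call
     * falls back to returning 0 */
    KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)

    /* kvm_x86_ops: EPT memtype occupies bits 6:3, so a u8 is enough */
    u8 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);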
-
Sean Christopherson authored
Add a second CPUID helper, kvm_find_cpuid_entry_index(), to handle KVM queries for CPUID leaves whose index _may_ be significant, and drop the index param from the existing kvm_find_cpuid_entry(). Add a WARN in the inner helper, cpuid_entry2_find(), to detect attempts to retrieve a CPUID entry whose index is significant without explicitly providing an index. Using an explicit magic number and letting callers omit the index avoids confusion by eliminating the myriad cases where KVM specifies '0' as a dummy value. Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
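A sketch of the resulting API (the magic-number name is an assumption of this sketch):

    #define KVM_CPUID_INDEX_NOT_SIGNIFICANT -1ull  /* assumed name */

    /* for leaves whose index is not significant */
    struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
                                                  u32 function);
    /* for leaves whose index _may_ be significant */
    struct kvm_cpuid_entry2 *kvm_find_cpuid_entry_index(struct kvm_vcpu *vcpu,
                                                        u32 function, u32 index);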
-
Maxim Levitsky authored
Recently KVM's SVM code switched to re-injecting software interrupt events if something prevented their delivery. A task switch due to a task gate in the IDT, however, is an exception to this rule, because in this case the INTn instruction causes a task switch intercept, and its emulation completes the INTn emulation as well. Add a missing case to task_switch_interception() for that. This fixes the 32-bit KVM-unit-test taskswitch2. Fixes: 7e5b5ef8 ("KVM: SVM: Re-inject INTn instead of retrying the insn on "failure"") Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com> Message-Id: <20220714124453.188655-1-mlevitsk@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Remove a spurious closing parenthesis and tweak the comment about the capacity of the cache of PTE descriptors (rmaps) used for eager page splitting to tone down the assertion slightly, and to call out that topup requires dropping mmu_lock, which is the real motivation for avoiding topup (as opposed to memory usage). Cc: David Matlack <dmatlack@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220712020724.1262121-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Tweak the comment above the computation of the quadrant for PG_LEVEL_4K shadow pages to explicitly call out how and why KVM uses role.quadrant to consume gPTE bits. Opportunistically wrap an unnecessarily long line. No functional change intended. Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com Reviewed-by: David Matlack <dmatlack@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220712020724.1262121-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add spte_index() to dedup all the code that calculates a SPTE's index into its parent's page table and/or spt array. Opportunistically tweak the calculation to avoid pointer arithmetic, which is subtle (subtract in 8-byte chunks) and less performant (requires the compiler to generate the subtraction). Suggested-by: David Matlack <dmatlack@google.com> Reviewed-by: David Matlack <dmatlack@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20220712020724.1262121-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
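The helper itself is a one-liner; a sketch (the per-page-entry constant name is assumed):

    static inline int spte_index(u64 *sptep)
    {
            /* divide the address itself instead of subtracting pointers */
            return ((unsigned long)sptep / sizeof(*sptep)) & (SPTE_ENT_PER_PAGE - 1);
    }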
-
Vitaly Kuznetsov authored
Windows 10/11 guests with Hyper-V role (WSL2) enabled are observed to hang upon boot or shortly after when a non-default TSC frequency was set for L1. The issue is observed on a host where TSC scaling is supported. The problem appears to be that Windows doesn't use TSC frequency for its guests even when the feature is advertised and KVM filters SECONDARY_EXEC_TSC_SCALING out when creating L2 controls from L1's. This leads to L2 running with the default frequency (matching host's) while L1 is running with an altered one. Keep SECONDARY_EXEC_TSC_SCALING in secondary exec controls for L2 when it was set for L1. TSC_MULTIPLIER is already correctly computed and written by prepare_vmcs02(). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com> Message-Id: <20220712135009.952805-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Update the Processor Trace (PT) MSR intercepts during a filter change if and only if PT may be exposed to the guest, i.e. only if KVM is operating in the so called "host+guest" mode where PT can be used simultaneously by both the host and guest. If PT is in system mode, the host is the sole owner of PT and the MSRs should never be passed through to the guest. Luckily the missed check only results in unnecessary work, as select RTIT MSRs are passed through only when RTIT tracing is enabled "in" the guest, and tracing can't be enabled in the guest when KVM is in system mode (writes to guest.MSR_IA32_RTIT_CTL are disallowed). Cc: Xiaoyao Li <xiaoyao.li@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20220712015838.1253995-1-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com>
-
Sean Christopherson authored
Drop SVM_CPUID_FUNC to reduce the probability of tests open coding CPUID checks instead of using kvm_cpu_has() or this_cpu_has(). Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-43-seanjc@google.com
-
Sean Christopherson authored
Use cpuid() to get CPUID.0x0 in cpu_vendor_string_is(), thus eliminating the last open coded usage of CPUID (ignoring debug_regs.c, which emits CPUID from the guest to trigger a VM-Exit and doesn't actually care about the results of CPUID). Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-42-seanjc@google.com
-
Sean Christopherson authored
Provide informative error messages for the various checks related to requesting access to XSAVE features that are buried behind XSAVE Feature Disabling (XFD). Opportunistically rename the helper to have "require" in the name so that it's somewhat obvious that the helper may skip the test. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-41-seanjc@google.com
-
Sean Christopherson authored
Skip the AMX test instead of silently returning if the host kernel doesn't support ARCH_REQ_XCOMP_GUEST_PERM. KVM didn't support XFD until v5.17, so it's extremely unlikely that silently passing the test on an older kernel is the right thing to do. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-40-seanjc@google.com
-
Sean Christopherson authored
Use kvm_cpu_has() to check for XFD support in vm_xsave_req_perm(); simply checking host CPUID doesn't guarantee KVM supports AMX/XFD. Opportunistically hoist the check above the bit check; if XFD isn't supported, it's far better to get a "not supported at all" message, as opposed to a "feature X isn't supported" message. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-39-seanjc@google.com
-
Sean Christopherson authored
Make the "get max CPUID leaf" helpers static inline, there's no reason to bury the one liners in processor.c. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-38-seanjc@google.com
-
Sean Christopherson authored
Rename kvm_get_supported_cpuid_index() to __kvm_get_supported_cpuid_entry() to better show its relationship to kvm_get_supported_cpuid_entry(), and because the helper returns a CPUID entry, not the index of an entry. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-37-seanjc@google.com
-
Sean Christopherson authored
Use kvm_get_supported_cpuid_entry() instead of kvm_get_supported_cpuid_index() when passing in '0' for the index, which just so happens to be the case in all remaining users of kvm_get_supported_cpuid_index() except kvm_get_supported_cpuid_entry(). Keep the helper as there may be users in the future, and it's not doing any harm. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-36-seanjc@google.com
-
Sean Christopherson authored
Replace an evil open coded instance of querying CPUID from L1 with this_cpu_has(X86_FEATURE_SVM). No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-35-seanjc@google.com
-
Sean Christopherson authored
Use this_cpu_has() to query OSXSAVE from the L1 guest in the CR4=>CPUID sync test. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-34-seanjc@google.com
-
Sean Christopherson authored
Add this_cpu_has() to query an X86_FEATURE_* via cpuid(), i.e. to query a feature from L1 (or L2) guest code. Arbitrarily select the AMX test to be the first user. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-33-seanjc@google.com
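A hedged sketch of the helper (the feature-descriptor layout is an assumption of this sketch, not necessarily the selftests' exact struct):

    #include <stdbool.h>
    #include <stdint.h>

    struct kvm_x86_cpu_feature {
            uint32_t function;
            uint32_t index;
            uint8_t reg;    /* 0=EAX, 1=EBX, 2=ECX, 3=EDX */
            uint8_t bit;
    };

    static inline bool this_cpu_has(struct kvm_x86_cpu_feature feature)
    {
            uint32_t r[4];

            /* query CPUID from whatever context this runs in (L1 or L2) */
            asm volatile("cpuid"
                         : "=a"(r[0]), "=b"(r[1]), "=c"(r[2]), "=d"(r[3])
                         : "a"(feature.function), "c"(feature.index));
            return r[feature.reg] & (1u << feature.bit);
    }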
-
Sean Christopherson authored
Set the function/index for CPUID in the helper instead of relying on the caller to do so. In addition to reducing the risk of consuming an uninitialized ECX, having the function/index embedded in the call makes it easier to understand what is being checked. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-32-seanjc@google.com
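In sketch form, the function/index now live in the helper itself (exact signatures assumed):

    static inline void __cpuid(uint32_t function, uint32_t index,
                               uint32_t *eax, uint32_t *ebx,
                               uint32_t *ecx, uint32_t *edx)
    {
            /* ECX is always initialized, never consumed uninitialized */
            *eax = function;
            *ecx = index;
            asm volatile("cpuid"
                         : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
                         : "0"(*eax), "2"(*ecx)
                         : "memory");
    }

    /* leaves with an insignificant index implicitly use ECX=0 */
    static inline void cpuid(uint32_t function, uint32_t *eax, uint32_t *ebx,
                             uint32_t *ecx, uint32_t *edx)
    {
            __cpuid(function, 0, eax, ebx, ecx, edx);
    }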
-
Sean Christopherson authored
Tag the returned CPUID pointers from kvm_get_supported_cpuid(), kvm_get_supported_hv_cpuid(), and vcpu_get_supported_hv_cpuid() "const" to prevent reintroducing the broken pattern of modifying the static "cpuid" variable used by kvm_get_supported_cpuid() to cache the results of KVM_GET_SUPPORTED_CPUID. Update downstream consumers as needed. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-31-seanjc@google.com
-
Sean Christopherson authored
Add X86_FEATURE_X2APIC and use vcpu_clear_cpuid_feature() to clear x2APIC support in the xAPIC state test. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-30-seanjc@google.com
-
Sean Christopherson authored
Use vcpu_{set,clear}_cpuid_feature() to toggle nested VMX support in the vCPU CPUID module in the nVMX state test. Drop CPUID_VMX as there are no longer any users. Signed-off-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20220614200707.3315957-29-seanjc@google.com
-