- 16 Mar, 2020 40 commits
-
Sean Christopherson authored
Expose kvm_mpx_supported() as a static inline so that it can be inlined in kvm_intel.ko. No functional change intended. Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Query supported_xcr0 when checking for MPX support instead of invoking ->mpx_supported() and drop ->mpx_supported() as kvm_mpx_supported() was its last user. Rename vmx_mpx_supported() to cpu_has_vmx_mpx() to better align with VMX/VMCS nomenclature. Modify VMX's adjustment of xcr0 to call cpu_has_vmx_mpx() directly to avoid reading supported_xcr0 before it's fully configured. No functional change intended. Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> [Test that *all* bits are set. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
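A minimal sketch of the resulting helper, assuming the standard xfeature mask names: kvm_mpx_supported() becomes a static inline over supported_xcr0, testing that *all* MPX bits are set.

    static inline bool kvm_mpx_supported(void)
    {
            /* MPX is usable only if both BNDREGS and BNDCSR are supported. */
            return (supported_xcr0 & (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR))
                    == (XFEATURE_MASK_BNDREGS | XFEATURE_MASK_BNDCSR);
    }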
-
Sean Christopherson authored
Add a new global variable, supported_xcr0, to track which xcr0 bits can be exposed to the guest instead of calculating the mask on every call. The supported bits are constant for a given instance of KVM. This paves the way toward eliminating the ->mpx_supported() call in kvm_mpx_supported(), e.g. eliminates multiple retpolines in VMX's nested VM-Enter path, and eventually toward eliminating ->mpx_supported() altogether. No functional change intended. Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add helpers to query which of the (two) supported PT modes is active. The primary motivation is to help document that there is a third PT mode (host-only) that's currently not supported by KVM. As is, it's not obvious that PT_MODE_SYSTEM != !PT_MODE_HOST_GUEST and vice versa, e.g. that "pt_mode == PT_MODE_SYSTEM" and "pt_mode != PT_MODE_HOST_GUEST" are two distinct checks. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
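A hedged sketch of the helpers; the exact names are assumptions based on the commit's description.

    static inline bool vmx_pt_mode_is_system(void)
    {
            return pt_mode == PT_MODE_SYSTEM;
    }

    static inline bool vmx_pt_mode_is_host_guest(void)
    {
            return pt_mode == PT_MODE_HOST_GUEST;
    }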
-
Sean Christopherson authored
Use __do_cpuid_func()'s common loop iterator, "i", when enumerating the sub-leafs for CPUID 0xD now that the CPUID 0xD loop doesn't need to manually maintain separate counts for the entries index and the CPUID index. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop a "nent >= maxnent" check in kvm_get_cpuid() that's fully redundant now that kvm_get_cpuid() isn't indexing the array to pass an entry to do_cpuid_func(). Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add a struct to hold the array of CPUID entries and its associated metadata when handling KVM_GET_SUPPORTED_CPUID. Look up and provide the correct entry in do_host_cpuid(), which eliminates the majority of array indexing shenanigans, e.g. entries[i - 1], and generally makes the code more readable. The last array indexing holdout is kvm_get_cpuid(), which can't really be avoided without throwing the baby out with the bathwater. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
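A minimal sketch of such a container, with illustrative field names (the actual layout is whatever the patch defines):

    struct kvm_cpuid_array {
            struct kvm_cpuid_entry2 *entries;
            int maxnent;    /* capacity of @entries, from userspace */
            int nent;       /* number of entries populated so far */
    };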
-
Sean Christopherson authored
Refactor the sub-leaf handling for CPUID 0x4/0x8000001d to eliminate a one-off variable and its associated brackets. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Declare "i" and "max_idx" at the top of __do_cpuid_func() to consolidate a handful of declarations in various case statements. More importantly, establish the pattern of using max_idx instead of e.g. entry->eax as the loop terminator in preparation for refactoring how entry is handled in __do_cpuid_func(). No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move the nent vs. maxnent check and nent increment into do_host_cpuid() to consolidate what is now identical code. To signal success vs. failure, return the entry and NULL respectively. A future patch will build on this to also move the entry retrieval into do_host_cpuid(). No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
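A hedged sketch of the consolidated helper once entry retrieval is also folded in (simplified; the real code additionally executes CPUID to fill the entry):

    static struct kvm_cpuid_entry2 *do_host_cpuid(struct kvm_cpuid_array *array,
                                                  u32 function, u32 index)
    {
            struct kvm_cpuid_entry2 *entry;

            /* Signal failure if the userspace array is already full. */
            if (array->nent >= array->maxnent)
                    return NULL;

            entry = &array->entries[array->nent++];
            entry->function = function;
            entry->index = index;
            /* ... run CPUID and fill entry->eax/ebx/ecx/edx ... */
            return entry;
    }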
-
Sean Christopherson authored
Drop redundant checks when "emulating" SSBD feature across vendors, i.e. advertising the AMD variant when running on an Intel CPU and vice versa. Both SPEC_CTRL_SSBD and AMD_SSBD are already defined in the leaf-specific feature masks and are *not* forcefully set by the kernel, i.e. will already be set in the entry when supported by the host. Functionally, this changes nothing, but the redundant check is confusing, especially when considering future patches that will further differentiate between "real" and "emulated" feature bits. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop the index param from do_cpuid_7_mask() and instead switch on the entry's index, which is guaranteed to be set by do_host_cpuid(). No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Refactor the sub-leaf loop for CPUID 0x7 to move the main leaf out of said loop. The emitted code savings are basically a mirage, as the handling of the main leaf can easily be split into its own helper to avoid code bloat. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Increment the number of CPUID entries immediately after do_host_cpuid() in preparation for moving the logic into do_host_cpuid(). Handle the rare/impossible case of encountering a bogus sub-leaf by decrementing the number of entries on failure. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
WARN if the save state size for a valid XCR0-managed sub-leaf is zero, which would indicate a KVM or CPU bug. Add a comment to explain why KVM WARNs so the reader doesn't have to tease out the relevant bits from Intel's SDM and KVM's XCR0/XSS code. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
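A hedged sketch of the check, assuming @entry holds the raw CPUID 0xD.N output for a sub-leaf already deemed valid per supported_xcr0:

    /*
     * Per the SDM, a valid XCR0-managed sub-leaf must report a
     * non-zero save state size in EAX; zero would mean KVM's XCR0
     * mask or the CPU itself is buggy.
     */
    if (WARN_ON_ONCE(!entry->eax))
            continue;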
-
Sean Christopherson authored
Now that sub-leaf 1 is handled separately, verify the next sub-leaf is needed before rejecting KVM_GET_SUPPORTED_CPUID due to an insufficiently sized userspace array. Note, although this is technically a bug, it's not visible to userspace as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE, which is hardcoded to be added after leaf 0xD. The real motivation for the change is to tightly couple the nent/maxnent and do_host_cpuid() sequences in preparation for future cleanup. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move the sub-leaf 1 handling for CPUID 0xD out of the index>0 loop so that the loop only handles index>1. Sub-leafs 2+ have identical semantics, whereas sub-leaf 1 is effectively a feature sub-leaf. Moving sub-leaf 1 out of the loop does duplicate a bit of code, but the nent/maxnent code will be consolidated in a future patch, and duplicating the clear of ECX/EDX is arguably a good thing as the reasons for clearing said registers are completely different. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Verify that the next sub-leaf of CPUID 0x4 (or 0x8000001d) is valid before rejecting the entire KVM_GET_SUPPORTED_CPUID due to insufficient space in the userspace array. Note, although this is technically a bug, it's not visible to userspace as KVM_GET_SUPPORTED_CPUID is guaranteed to fail on KVM_CPUID_SIGNATURE, which is hardcoded to be added after the affected leafs. The real motivation for the change is to tightly couple the nent/maxnent and do_host_cpuid() sequences in preparation for future cleanup. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Clean up the error handling in kvm_dev_ioctl_get_cpuid(), which has gotten a bit crusty as the function has evolved over the years. Opportunistically hoist the static @funcs declaration to the top of the function to make it more obvious that it's a "static const". No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Refactor the handling of the Centaur-only CPUID leaf to detect the leaf via a runtime query instead of adding a one-off callback in the static array. When the callback was introduced, there were additional fields in the array's structs, and more importantly, retpoline wasn't a thing. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move the guts of kvm_dev_ioctl_get_cpuid()'s CPUID func loop to a separate helper to improve code readability and pave the way for future cleanup. No functional change intended. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Fix a long-standing bug that causes KVM to return 0 instead of -E2BIG when userspace's array is insufficiently sized. This technically breaks backwards compatibility, e.g. a userspace with a hardcoded cpuid->nent could theoretically be broken as it would see an error instead of success if cpuid->nent is less than the number of entries required to fully enumerate the host CPU. But, the lowest known cpuid->nent hardcoded by a VMM is 100 (lkvm and selftests), and the limit for current processors on Intel and AMD is well under 100. E.g. Intel's Icelake server with all the bells and whistles tops out at ~60 entries (variable due to SGX sub-leafs), and AMD's CPUID documentation allows for fewer than 50. The number of CPUID 0xD sub-leafs on current kernels is capped by KVM_SUPPORTED_XCR0, so nowhere near that many sub-leafs can appear. Note, while the Fixes: tag is accurate with respect to the immediate bug, it's likely that similar bugs in KVM_GET_SUPPORTED_CPUID existed prior to the refactoring, e.g. Qemu contains a workaround for the broken KVM_GET_SUPPORTED_CPUID behavior that predates the buggy commit by over two years. The Qemu workaround is also likely the main reason the bug has gone unreported for so long.

Qemu hack:

    commit 76ae317f7c16aec6b469604b1764094870a75470
    Author: Mark McLoughlin <markmc@redhat.com>
    Date:   Tue May 19 18:55:21 2009 +0100

        kvm: work around supported cpuid ioctl() brokenness

        KVM_GET_SUPPORTED_CPUID has been known to fail to return -E2BIG
        when it runs out of entries. Detect this by always trying again
        with a bigger table if the ioctl() fills the table.

Fixes: 831bf664 ("KVM: Refactor and simplify kvm_dev_ioctl_get_supported_cpuid") Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Shuffle a few operand structs to the end of struct x86_emulate_ctxt and update the cache creation to whitelist only the region of the emulation context that is expected to be copied to/from user memory, e.g. the instruction operands, registers, and fetch/io/mem caches. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
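A hedged sketch of creating such a whitelisted cache with kmem_cache_create_usercopy(); the offset and size below are illustrative, assuming the operand structs occupy one contiguous region at the end of the context:

    unsigned int useroffset = offsetof(struct x86_emulate_ctxt, src);
    unsigned int usersize = sizeof(struct x86_emulate_ctxt) - useroffset;

    /* Only [useroffset, useroffset + usersize) may be copied to/from user. */
    emulator_cache = kmem_cache_create_usercopy("x86_emulator",
                            sizeof(struct x86_emulate_ctxt),
                            __alignof__(struct x86_emulate_ctxt),
                            SLAB_ACCOUNT, useroffset, usersize, NULL);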
-
Sean Christopherson authored
Now that the emulation context is dynamically allocated and not embedded in struct kvm_vcpu, move its header, kvm_emulate.h, out of the public asm directory and into KVM's private x86 directory. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Allocate the emulation context instead of embedding it in struct kvm_vcpu_arch. Dynamic allocation provides several benefits:

- Shrinks the size of x86 vcpus by ~2.5k bytes, dropping them back below the PAGE_ALLOC_COSTLY_ORDER threshold.
- Allows for dropping the include of kvm_emulate.h from asm/kvm_host.h and moving kvm_emulate.h into KVM's private directory.
- Allows reducing KVM's attack surface by shrinking the amount of vCPU data that is exposed to usercopy.
- Allows a future patch to disable the emulator entirely, which may or may not be a realistic endeavor.

Mark the entire struct as valid for usercopy to maintain existing behavior with respect to hardened usercopy. Future patches can shrink the usercopy range to cover only what is necessary. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
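A minimal sketch of the allocation at vCPU creation, assuming a cache named x86_emulator_cache (illustrative; error unwinding elided):

    vcpu->arch.emulate_ctxt = kmem_cache_zalloc(x86_emulator_cache,
                                                GFP_KERNEL_ACCOUNT);
    if (!vcpu->arch.emulate_ctxt)
            return -ENOMEM;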
-
Sean Christopherson authored
Move ctxt_virt_addr_bits() and emul_is_noncanonical_address() from x86.h to emulate.c. This eliminates all references to struct x86_emulate_ctxt from x86.h, and sets the stage for a future patch to stop including kvm_emulate.h in asm/kvm_host.h. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Explicitly pass an exception struct when checking for intercept from the emulator, which eliminates the last reference to arch.emulate_ctxt in vendor specific code. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add variants of the I/O helpers that take a vCPU instead of an emulation context. This will eventually allow KVM to limit use of the emulation context to the full emulation path. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
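A hedged sketch of the pattern: the emulation-context helper becomes a thin wrapper over a new vCPU-based variant (signatures are illustrative):

    static int emulator_pio_in_emulated(struct x86_emulate_ctxt *ctxt,
                                        int size, unsigned short port,
                                        void *val, unsigned int count)
    {
            /* Defer to the variant that operates directly on the vCPU. */
            return emulator_pio_in(emul_to_vcpu(ctxt), size, port, val, count);
    }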
-
Peter Xu authored
It is no longer used anywhere. Signed-off-by: Peter Xu <peterx@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Explicitly cast the integer literal to an unsigned long when stuffing a non-canonical value into the host virtual address during private memslot deletion. The explicit cast fixes a warning that gets promoted to an error when running with KVM's newfangled -Werror setting. arch/x86/kvm/x86.c:9739:9: error: large integer implicitly truncated to unsigned type [-Werror=overflow] Fixes: a3e967c0b87d3 ("KVM: Terminate memslot walks via used_slots") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop the call to cpu_has_vmx_ept_execute_only() when calculating which EPT capabilities will be exposed to L1 for nested EPT. The resulting configuration is immediately sanitized by the passed-in @ept_caps, and except for the call from vmx_check_processor_compat(), @ept_caps holds the very capabilities that cpu_has_vmx_ept_execute_only() queries. For vmx_check_processor_compat(), KVM *wants* to ignore vmx_capability.ept so that a divergence in EPT capabilities between CPUs is detected. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Rename kvm_mmu->get_cr3() to kvm_mmu->get_guest_pgd() to call out that it is retrieving a guest value, as opposed to kvm_mmu->set_cr3(), which sets a host value, and to note that it will return something other than CR3 when nested EPT is in use. Hopefully the new name will also make it more obvious that L1's nested_cr3 is returned in SVM's nested NPT case. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Rename valid_ept_address() to nested_vmx_check_eptp() to follow the nVMX nomenclature and to reflect that the function now checks a lot more than just the address contained in the EPTP. Rename address to new_eptp in associated code. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Rename the accessor for vmcs12.EPTP to use "eptp" instead of "cr3". The accessor has no relation to cr3 whatsoever, other than it being assigned to the also poorly named kvm_mmu->get_cr3() hook. No functional change intended. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add support for 5-level nested EPT, and advertise said support in the EPT capabilities MSR. KVM's MMU can already handle 5-level legacy page tables, there's no reason to force an L1 VMM to use shadow paging if it wants to employ 5-level page tables. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
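A hedged sketch of the page-walk-length portion of nested_vmx_check_eptp() with 5-level support (simplified; constant names follow asm/vmx.h, and the real check consults more of the advertised EPT capabilities):

    switch (new_eptp & VMX_EPTP_PWL_MASK) {
    case VMX_EPTP_PWL_4:
            /* 4-level EPT walks were already supported. */
            break;
    case VMX_EPTP_PWL_5:
            /* 5-level walks are valid iff advertised in the EPT caps MSR. */
            if (!(ept_caps & VMX_EPT_PAGE_WALK_5_BIT))
                    return false;
            break;
    default:
            return false;
    }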
-
Sean Christopherson authored
Drop kvm_mmu_extended_role.cr4_la57 now that mmu_role doesn't mask off level, which already incorporates the guest's CR4.LA57 for a shadow MMU by querying is_la57_mode(). Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Use the calculated role as-is when propagating it to kvm_mmu.mmu_role, i.e. stop masking off meaningful fields. The concept of masking off fields came from kvm_mmu_pte_write(), which (correctly) ignores certain fields when comparing kvm_mmu_page.role against kvm_mmu.mmu_role, e.g. the current mmu's access and level have no relation to a shadow page's access and level. Masking off the level causes problems for 5-level paging, e.g. CR4.LA57 has its own redundant flag in the extended role, and nested EPT would need a similar hack to support 5-level paging for L2. Opportunistically rework the mask for kvm_mmu_pte_write() to define the fields that should be ignored as opposed to the fields that should be checked, i.e. make it opt-out instead of opt-in so that new fields are automatically picked up. While doing so, stop ignoring "direct". The field is effectively ignored anyways because kvm_mmu_pte_write() is only reached with an indirect mmu and the loop only walks indirect shadow pages, but double checking "direct" literally costs nothing. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Jay Zhou authored
Since the new capability KVM_DIRTY_LOG_INITIALLY_SET of KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 has been introduced, tweak the clear_dirty_log_test to use it. Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
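A hedged usage sketch, in selftest style, of turning on the new behavior (vm_enable_cap() is the selftest helper; flag names per the KVM API):

    struct kvm_enable_cap cap = {
            .cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
            .args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                       KVM_DIRTY_LOG_INITIALLY_SET,
    };
    vm_enable_cap(vm, &cap);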
-
Sean Christopherson authored
Return true for vmx_interrupt_allowed() if the vCPU is in L2 and L1 has external interrupt exiting enabled. IRQs are never blocked in hardware if the CPU is in the guest (L2 from L1's perspective) when IRQs trigger VM-Exit. The new check percolates up to kvm_vcpu_ready_for_interrupt_injection() and thus vcpu_run(), and so KVM will exit to userspace if userspace has requested an interrupt window (to inject an IRQ into L1). Remove the @external_intr param from vmx_check_nested_events(), which is actually an indicator that userspace wants an interrupt window, e.g. it's named @req_int_win further up the stack. Injecting a VM-Exit into L1 to try and bounce out to L0 userspace is all kinds of broken and is no longer necessary. Remove the hack in nested_vmx_vmexit() that attempted to workaround the breakage in vmx_check_nested_events() by only filling interrupt info if there's an actual interrupt pending. The hack actually made things worse because it caused KVM to _never_ fill interrupt info when the LAPIC resides in userspace (kvm_cpu_has_interrupt() queries interrupt.injected, which is always cleared by prepare_vmcs12() before reaching the hack in nested_vmx_vmexit()). Fixes: 6550c4df ("KVM: nVMX: Fix interrupt window request with "Acknowledge interrupt on exit"") Cc: stable@vger.kernel.org Cc: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
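A hedged sketch of the new check in vmx_interrupt_allowed() (simplified; nested_exit_on_intr() tests whether L1 enabled "external-interrupt exiting"):

    /* An IRQ in L2 causes VM-Exit to L1, so it is never blocked. */
    if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
            return true;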
-
Wanpeng Li authored
During vCPU creation, KVM queues a kvmclock sync worker onto the global workqueue before each vCPU's creation completes. The workqueue subsystem guarantees that already-queued work isn't queued again; however, the logic is clearer if just one leader triggers the kvmclock sync request, and doing so also saves the cacheline bouncing caused by test_and_set_bit. Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
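A minimal sketch of leader-only scheduling, assuming vcpu->vcpu_idx identifies the first-created vCPU (the exact condition is an assumption):

    /* Only the first vCPU ("the leader") queues the periodic sync work. */
    if (kvmclock_periodic_sync && vcpu->vcpu_idx == 0)
            schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
                                  KVMCLOCK_SYNC_PERIOD);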
-