- 26 Apr, 2021 36 commits
-
-
Vitaly Kuznetsov authored
After commit 4fc096a9 ("KVM: Raise the maximum number of user memslots"), set_memory_region_test may take too long; reports are that the default timeout value we have (120s) may not be enough even on a physical host. Speed things up a bit by throwing away the vm_userspace_mem_region_add() usage from test_add_max_memory_regions(); we don't really need to do the majority of the stuff it does for the sake of this test.

On my AMD EPYC 7401P, "# time ./set_memory_region_test" gives:

pre-patch:
    Testing KVM_RUN with zero added memory regions
    Allowed number of memory slots: 32764
    Adding slots 0..32763, each memory region with 2048K size
    Testing MOVE of in-use region, 10 loops
    Testing DELETE of in-use region, 10 loops

    real    0m44.917s
    user    0m7.416s
    sys     0m34.601s

post-patch:
    Testing KVM_RUN with zero added memory regions
    Allowed number of memory slots: 32764
    Adding slots 0..32763, each memory region with 2048K size
    Testing MOVE of in-use region, 10 loops
    Testing DELETE of in-use region, 10 loops

    real    0m20.714s
    user    0m0.109s
    sys     0m18.359s

Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210426130121.758229-1-vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
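A minimal sketch of what the speed-up amounts to, assuming the standard KVM_SET_USER_MEMORY_REGION UAPI; the slot layout, the single shared anonymous mapping, and the helper name are illustrative rather than the selftest's exact code:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>

    static void add_slots_fast(int vm_fd, uint32_t nr_slots, uint64_t slot_size)
    {
        /* One host mapping aliased by every slot is enough to probe the
         * slot limit; only guest_phys_addr must not overlap. */
        void *mem = mmap(NULL, slot_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        for (uint32_t i = 0; i < nr_slots; i++) {
            struct kvm_userspace_memory_region region = {
                .slot            = i,
                .guest_phys_addr = (uint64_t)i * slot_size,
                .memory_size     = slot_size,
                .userspace_addr  = (uint64_t)mem,
            };

            /* Bypasses vm_userspace_mem_region_add()'s per-region
             * tracking and allocations -- the bulk of the old runtime. */
            ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
        }
    }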
-
Zhenzhong Duan authored
In vm_vcpu_rm() and kvm_vm_release(), a stale return value is checked in the TEST_ASSERT macro. Fix it by assigning the correct return value to the variable ret. Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com> Message-Id: <20210426193138.118276-1-zhenzhong.duan@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
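The bug class in miniature; TEST_ASSERT is the selftest harness macro, and the exact call sites are in vm_vcpu_rm() and kvm_vm_release() (the surrounding names here are paraphrased):

    int ret;

    ret = munmap(vcpu->state, vcpu_mmap_sz());
    TEST_ASSERT(!ret, "munmap failed, rc: %i errno: %i", ret, errno);

    /* Pre-fix pattern: the close() result was discarded, so TEST_ASSERT
     * re-checked the stale munmap() status. The fix is the assignment. */
    ret = close(vcpu->fd);
    TEST_ASSERT(!ret, "close failed, rc: %i errno: %i", ret, errno);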
-
Haiwei Li authored
`kvm_arch_dy_runnable` checks pending_interrupt the same way `kvm_arch_dy_has_pending_interrupt` does, so take advantage of the latter instead of duplicating the check. Signed-off-by: Haiwei Li <lihaiwei@tencent.com> Message-Id: <20210421032513.1921-1-lihaiwei.kernel@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Skip SEV's expensive WBINVD and DF_FLUSH if there are no SEV ASIDs waiting to be reclaimed, e.g. if SEV was never used. This "fixes" an issue where the DF_FLUSH fails during hardware teardown if the original SEV_INIT failed. Ideally, SEV wouldn't be marked as enabled in KVM if SEV_INIT fails, but that's a problem for another day. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-16-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
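A hedged sketch of the early-out, reusing sev.c's bitmap and ASID names; sev_guest_df_flush() is the PSP driver's SEV_CMD_DF_FLUSH entry point:

    static void sev_flush_asids(void)
    {
        int error;

        /* No ASIDs waiting for reclaim (e.g. SEV never used): skip the
         * expensive WBINVD on all CPUs and the DF_FLUSH command. */
        if (bitmap_empty(sev_reclaim_asid_bitmap, max_sev_asid))
            return;

        wbinvd_on_all_cpus();
        if (sev_guest_df_flush(&error))
            pr_err("SEV: DF_FLUSH failed, error %#x\n", error);
    }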
-
Sean Christopherson authored
Remove the forward declaration of sev_flush_asids(), which is only a few lines above the function itself. No functional change intended. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-15-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Replace calls to svm_sev_enabled() with direct checks on sev_enabled, or in the case of svm_mem_enc_op, simply drop the call to svm_sev_enabled(). This effectively replaces checks against a valid max_sev_asid with checks against sev_enabled. sev_enabled is forced off by sev_hardware_setup() if max_sev_asid is invalid, all call sites are guaranteed to run after sev_hardware_setup(), and all of the checks care about SEV being fully enabled (as opposed to intentionally handling the scenario where max_sev_asid is valid but SEV enabling fails due to OOM). Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-14-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Move the allocation of the SEV VMCB array to sev.c to help pave the way toward encapsulating SEV enabling wholly within sev.c. No functional change intended. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-13-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Query max_sev_asid directly after setting it instead of bouncing through its wrapper, svm_sev_enabled(). Using the wrapper is unnecessary obfuscation. No functional change intended. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-12-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Remove the redundant svm_sev_enabled() check when calling sev_hardware_teardown(); the teardown helper itself does the check. Removing the check from svm.c will eventually allow dropping svm_sev_enabled() entirely. No functional change intended. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-11-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Enable the 'sev' and 'sev_es' module params by default instead of having them conditioned on CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT. The extra Kconfig is pointless as KVM SEV/SEV-ES support is already controlled via CONFIG_KVM_AMD_SEV, and CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT has the unfortunate side effect of enabling all the SEV-ES _guest_ code due to it being dependent on CONFIG_AMD_MEM_ENCRYPT=y. Cc: Borislav Petkov <bp@suse.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-10-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Define sev_enabled and sev_es_enabled as 'false' and explicitly #ifdef out all of sev_hardware_setup() if CONFIG_KVM_AMD_SEV=n. This kills three birds at once:

- Makes sev_enabled and sev_es_enabled off by default if CONFIG_KVM_AMD_SEV=n. Previously, they could be on by default if CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y, regardless of KVM SEV support.
- Hides the sev and sev_es module params when CONFIG_KVM_AMD_SEV=n.
- Resolves a false positive -Wnonnull in __sev_recycle_asids() that is currently masked by the equivalent IS_ENABLED(CONFIG_KVM_AMD_SEV) check in svm_sev_enabled(), which will be dropped in a future patch.

Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-9-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
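A sketch of the gating described above; the exact parameter permission bits are an assumption:

    #ifdef CONFIG_KVM_AMD_SEV
    /* enable/disable SEV support */
    static bool sev_enabled = true;
    module_param_named(sev, sev_enabled, bool, 0444);

    /* enable/disable SEV-ES support */
    static bool sev_es_enabled = true;
    module_param_named(sev_es, sev_es_enabled, bool, 0444);
    #else
    #define sev_enabled false
    #define sev_es_enabled false
    #endif /* CONFIG_KVM_AMD_SEV */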
-
Sean Christopherson authored
Rename sev and sev_es to sev_enabled and sev_es_enabled respectively to better align with other KVM terminology, and to avoid pseudo-shadowing when the variables are moved to sev.c in a future patch ('sev' is often used for local struct kvm_sev_info pointers). No functional change intended. Acked-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Add a reverse-CPUID entry for the memory encryption word, 0x8000001F.EAX, and use it to override the supported CPUID flags reported to userspace. Masking the reported CPUID flags avoids over-reporting KVM support, e.g. without the mask a SEV-SNP capable CPU may incorrectly advertise SNP support to userspace. Clear SEV/SEV-ES if their corresponding module parameters are disabled, and clear the memory encryption leaf completely if SEV is not fully supported in KVM. Advertise SME_COHERENT in addition to SEV and SEV-ES, as the guest can use SME_COHERENT to avoid CLFLUSH operations. Explicitly omit SME and VM_PAGE_FLUSH from the reporting. These features are used by KVM, but are not exposed to the guest, e.g. guest access to related MSRs will fault. Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Co-developed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Unconditionally invoke sev_hardware_setup() when configuring SVM and handle clearing the module params/variables 'sev' and 'sev_es' in sev_hardware_setup(). This allows making said variables static within sev.c and reduces the odds of a collision with guest code, e.g. the guest side of things has already laid claim to 'sev_enabled'. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Disable SEV and SEV-ES if NPT is disabled. While the APM doesn't clearly state that NPT is mandatory, it's alluded to by: The guest page tables, managed by the guest, may mark data memory pages as either private or shared, thus allowing selected pages to be shared outside the guest. And practically speaking, shadow paging can't work since KVM can't read the guest's page tables. Fixes: e9df0942 ("KVM: SVM: Add sev module_param") Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Free sev_asid_bitmap if the reclaim bitmap allocation fails, otherwise KVM will unnecessarily keep the bitmap when SEV is not fully enabled. Freeing the page is also necessary to avoid introducing a bug when a future patch eliminates svm_sev_enabled() in favor of using the global 'sev' flag directly. While svm_sev_enabled() checks max_sev_asid, which is true even if KVM setup fails, 'sev' will be true if and only if KVM setup fully succeeds. Fixes: 33af3a7e ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations") Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Zero out the array of VMCB pointers so that pre_sev_run() won't see garbage when querying the array to detect when an SEV ASID is being associated with a new VMCB. In practice, reading random values is all but guaranteed to be benign as a false negative (which is extremely unlikely on its own) can only happen on CPU0 on the first VMRUN and would only cause KVM to skip the ASID flush. For anything bad to happen, a previous instance of KVM would have to exit without flushing the ASID, _and_ KVM would have to not flush the ASID at any time while building the new SEV guest. Cc: Borislav Petkov <bp@suse.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Fixes: 70cd94e6 ("KVM: SVM: VMRUN should use associated ASID when SEV is enabled") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
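The gist of the fix in one allocation, hedged; the field name and bound follow the surrounding text, not necessarily the exact patch:

    /* kcalloc() zero-initializes, so pre_sev_run() can't mistake stale
     * heap contents for a VMCB previously run on this CPU. */
    sd->sev_vmcbs = kcalloc(max_sev_asid + 1, sizeof(void *), GFP_KERNEL);
    if (!sd->sev_vmcbs)
        return -ENOMEM;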
-
Sean Christopherson authored
Drop the sev_enabled flag and switch its one user over to sev_active(). sev_enabled was made redundant with the introduction of sev_status in commit b57de6cd ("x86/sev-es: Add SEV-ES Feature Detection"). sev_enabled and sev_active() are guaranteed to be equivalent, as each is true iff 'sev_status & MSR_AMD64_SEV_ENABLED' is true, and are only ever written in tandem (ignoring compressed boot's version of sev_status). Removing sev_enabled avoids confusion over whether it refers to the guest or the host, and will also allow KVM to usurp "sev_enabled" for its own purposes. No functional change intended. Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422021125.3417167-7-seanjc@google.com> Acked-by: Borislav Petkov <bp@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Ricardo Koller authored
Split out the reverse CPUID machinery to a dedicated header file so that KVM selftests can reuse the reverse CPUID definitions without introducing any '#ifdef __KERNEL__' pollution. Co-developed-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Ricardo Koller <ricarkol@google.com> Message-Id: <20210422005626.564163-2-ricarkol@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Append raw to the direct variants of kvm_register_read/write(), and drop the "l" from the mode-aware variants. I.e. make the mode-aware variants the default, and make the direct variants scary sounding so as to discourage use. Accessing the full 64-bit values irrespective of mode is rarely the desired behavior. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-10-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
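A hedged sketch of the renamed pair; the real helpers also go through KVM's register-caching machinery, elided here:

    static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg)
    {
        /* "Scary" direct variant: full 64 bits regardless of guest mode. */
        return vcpu->arch.regs[reg];
    }

    static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg)
    {
        unsigned long val = kvm_register_read_raw(vcpu, reg);

        /* Mode-aware default: outside 64-bit mode, only bits 31:0 exist. */
        return is_64_bit_mode(vcpu) ? val : (u32)val;
    }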
-
Sean Christopherson authored
Drop bits 63:32 of RAX when grabbing the address for INVLPGA emulation outside of 64-bit mode to make KVM's emulation slightly less wrong. The address for INVLPGA is determined by the effective address size, i.e. it's not hardcoded to 64/32 bits for a given mode. Add a FIXME to call out that the emulation is wrong. Opportunistically tweak the ASID handling to make it clear that it's defined by ECX, not rCX. Per the APM: The portion of rAX used to form the address is determined by the effective address size (current execution mode and optional address size prefix). The ASID is taken from ECX. Fixes: ff092385 ("KVM: SVM: Implement INVLPGA") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-9-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Truncate RAX to 32 bits, i.e. consume EAX, when retrieving the hypercall index for a Xen hypercall. Per Xen documentation[*], the index is EAX when the vCPU is not in 64-bit mode. [*] http://xenbits.xenproject.org/docs/sphinx-unstable/guest-guide/x86/hypercall-abi.html Fixes: 23200b7a ("KVM: x86/xen: intercept xen hypercalls if enabled") Cc: Joao Martins <joao.m.martins@oracle.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-8-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
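The truncation itself is tiny; a sketch per the commit and the Xen ABI document:

    u64 input = kvm_rax_read(vcpu);

    /* Outside 64-bit mode the hypercall index is EAX, not RAX. */
    if (!is_64_bit_mode(vcpu))
        input = (u32)input;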
-
Sean Christopherson authored
Drop bits 63:32 of the base and/or index GPRs when calculating the effective address of a VMX instruction memory operand. Outside of 64-bit mode, memory encodings are strictly limited to E*X and below. Fixes: 064aea77 ("KVM: nVMX: Decoding memory operands of VMX instructions") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-7-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
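An effective-address sketch for get_vmx_mem_address(); the local names (base_reg, index_reg, scaling) follow nested.c, and with a mode-aware kvm_register_read() (see the sketch above) the 63:32 truncation happens implicitly:

    gva_t off = exit_qualification;              /* displacement */
    if (base_is_valid)
        off += kvm_register_read(vcpu, base_reg);
    if (index_is_valid)
        off += kvm_register_read(vcpu, index_reg) << scaling;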
-
Sean Christopherson authored
Drop bits 63:32 of the VMCS field encoding when checking for a nested VM-Exit on VMREAD/VMWRITE in !64-bit mode. VMREAD and VMWRITE always use 32-bit operands outside of 64-bit mode. The actual emulation of VMREAD/VMWRITE does the right thing, this bug is purely limited to incorrectly causing a nested VM-Exit if a GPR happens to have bits 63:32 set outside of 64-bit mode. Fixes: a7cde481 ("KVM: nVMX: Do not forward VMREAD/VMWRITE VMExits to L1 if required so by vmcs12 vmread/vmwrite bitmaps") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-6-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop bits 63:32 when storing a DR/CR to a GPR when the vCPU is not in 64-bit mode. Per the SDM: The operand size for these instructions is always 32 bits in non-64-bit modes, regardless of the operand-size attribute. CR8 technically isn't affected as CR8 isn't accessible outside of 64-bit mode, but fix it up for consistency and to allow for future cleanup. Fixes: 6aa8b732 ("[PATCH] kvm: userspace interface") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Drop bits 63:32 on loads/stores to/from DRs and CRs when the vCPU is not in 64-bit mode. The APM states bits 63:32 are dropped for both DRs and CRs: In 64-bit mode, the operand size is fixed at 64 bits without the need for a REX prefix. In non-64-bit mode, the operand size is fixed at 32 bits and the upper 32 bits of the destination are forced to 0. Fixes: 7ff76d58 ("KVM: SVM: enhance MOV CR intercept handler") Fixes: cae3797a ("KVM: SVM: enhance mov DR intercept handler") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
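The common shape of this fix and the previous one, hedged (MOV from CR3 is chosen arbitrarily as the example):

    unsigned long val = kvm_read_cr3(vcpu);

    /* Operand size is fixed at 32 bits outside 64-bit mode. */
    if (!is_64_bit_mode(vcpu))
        val = (u32)val;
    kvm_register_write(vcpu, reg, val);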
-
Sean Christopherson authored
Check CR3 for an invalid GPA even if the vCPU isn't in long mode. For bigger emulation flows, notably RSM, the vCPU mode may not be accurate if CR0/CR4 are loaded after CR3. For MOV CR3 and similar flows, the caller is responsible for truncating the value. Fixes: 660a5d51 ("KVM: x86: save/load state on SMM switch") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Remove the emulator's checks for illegal CR0, CR3, and CR4 values, as the checks are redundant, outdated, and in the case of SEV's C-bit, broken. The emulator manually calculates MAXPHYADDR from CPUID and neglects to mask off the C-bit. For all other checks, kvm_set_cr*() are a superset of the emulator checks, e.g. see CR4.LA57. Fixes: a780a3ea ("KVM: X86: Fix reserved bits check for MOV to CR3") Cc: Babu Moger <babu.moger@amd.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422022128.3464144-2-seanjc@google.com> Cc: stable@vger.kernel.org [Unify check_cr_read and check_cr_write. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Disable pass-through of the FS and GS base MSRs for 32-bit KVM. Intel's SDM unequivocally states that the MSRs exist if and only if the CPU supports x86-64. FS_BASE and GS_BASE are mostly a non-issue; a clever guest could opportunistically use the MSRs without issue. KERNEL_GS_BASE is a bigger problem, as a clever guest would subtly be broken if it were migrated, as KVM disallows software access to the MSRs, and unlike the direct variants, KERNEL_GS_BASE needs to be explicitly migrated as it's not captured in the VMCS. Fixes: 25c5f225 ("KVM: VMX: Enable MSR Bitmap feature") Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210422023831.3473491-1-seanjc@google.com> [*NOT* for stable kernels. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
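The shape of the fix, hedged; vmx_disable_intercept_for_msr() is the existing helper that opens a pass-through hole in the MSR bitmap:

    #ifdef CONFIG_X86_64
        /* Per the SDM, these MSRs exist iff the CPU supports x86-64,
         * so only pass them through on 64-bit hosts. */
        vmx_disable_intercept_for_msr(vcpu, MSR_FS_BASE, MSR_TYPE_RW);
        vmx_disable_intercept_for_msr(vcpu, MSR_GS_BASE, MSR_TYPE_RW);
        vmx_disable_intercept_for_msr(vcpu, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
    #endif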
-
Sean Christopherson authored
Use KVM's "user return MSRs" framework to defer restoring the host's MSR_TSC_AUX until the CPU returns to userspace. Add/improve comments to clarify why MSR_TSC_AUX is intercepted on both RDMSR and WRMSR, and why it's safe for KVM to keep the guest's value loaded even if KVM is scheduled out. Cc: Reiji Watanabe <reijiw@google.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210423223404.3860547-5-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Force clear bits 63:32 of MSR_TSC_AUX on write to emulate current AMD CPUs, which completely ignore the upper 32 bits, including dropping them on write. Emulating AMD hardware will also allow migrating a vCPU from AMD hardware to Intel hardware without requiring userspace to manually clear the upper bits, which are reserved on Intel hardware. Presumably, MSR_TSC_AUX[63:32] are intended to be reserved on AMD, but sadly the APM doesn't say _anything_ about those bits in the context of MSR access. The RDTSCP entry simply states that RCX contains bits 31:0 of the MSR, zero extended. And even worse is that the RDPID description implies that it can consume all 64 bits of the MSR: RDPID reads the value of TSC_AUX MSR used by the RDTSCP instruction into the specified destination register. Normal operand size prefixes do not apply and the update is either 32 bit or 64 bit based on the current mode. Emulate current hardware behavior to give KVM the best odds of playing nice with whatever the behavior of future AMD CPUs happens to be. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210423223404.3860547-3-seanjc@google.com> [Fix broken patch. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
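The crux of the change, as a hypothetical helper:

    static u64 svm_sanitize_tsc_aux(u64 data)
    {
        /* Current AMD CPUs drop TSC_AUX[63:32] on write; emulating
         * that also keeps the value legal on Intel hosts, where the
         * upper bits are reserved, easing AMD -> Intel migration. */
        return (u32)data;
    }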
-
Sean Christopherson authored
Inject #GP on guest accesses to MSR_TSC_AUX if RDTSCP is unsupported in the guest's CPUID model. Fixes: 46896c73 ("KVM: svm: add support for RDTSCP") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210423223404.3860547-2-seanjc@google.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
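A sketch of the guard in SVM's MSR_TSC_AUX read/write paths:

    if (!guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
        return 1;   /* non-zero return => #GP injected into the guest */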
-
Sean Christopherson authored
Invert the inline declarations of the MSR interception helpers between the wrapper, vmx_set_intercept_for_msr(), and the core implementations, vmx_{dis,en}able_intercept_for_msr(). Letting the compiler _not_ inline the implementation reduces KVM's code footprint by ~3k bytes. Back when the helpers were added in commit 904e14fb ("KVM: VMX: make MSR bitmaps per-VCPU"), both the wrapper and the implementations were __always_inline because the end code distilled down to a few conditionals and a bit operation. Today, the implementations involve a variety of checks and bit ops in order to support userspace MSR filtering. Furthermore, the vast majority of calls to manipulate MSR interception are not performance sensitive, e.g. vCPU creation and x2APIC toggling. On the other hand, the one path that is performance sensitive, dynamic LBR passthrough, uses the wrappers, i.e. is largely untouched by inverting the inlining. In short, forcing the low level MSR interception code to be inlined no longer makes sense. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210423221912.3857243-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Wanpeng Li authored
Commit f1c6366e ("KVM: SVM: Add required changes to support intercepts under SEV-ES") prevents the hypervisor from accessing guest register state when the guest is running under SEV-ES. The initial value of vcpu->arch.guest_state_protected is false, and it is no longer updated in the preemption notifiers after this commit, which means a kernel spinlock holder in the guest will always be skipped for boosting. Fix it by always treating a preempted vCPU as being in guest kernel mode; a false positive is better than skipping it entirely. Fixes: f1c6366e ("KVM: SVM: Add required changes to support intercepts under SEV-ES") Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Message-Id: <1619080459-30032-1-git-send-email-wanpengli@tencent.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
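One plausible shape of the fix, hedged; the real hook sits in x86.c's sched-out path and the exact expression may differ:

    /* CPL is unreadable for protected guests, so assume kernel mode
     * rather than never boosting the lock holder. */
    vcpu->arch.preempted_in_kernel = vcpu->arch.guest_state_protected ||
                                     !kvm_x86_ops.get_cpl(vcpu);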
-
Vitaly Kuznetsov authored
Async PF 'page ready' events may happen when the LAPIC is (temporarily) disabled. In particular, Sebastien reports that when a Linux kernel is directly booted by Cloud Hypervisor, the LAPIC is 'software disabled' when the APF mechanism is initialized. On initialization, KVM tries to inject a 'wakeup all' event and puts the corresponding token into the slot. It is, however, failing to inject an interrupt (kvm_apic_set_irq() -> __apic_accept_irq() -> !apic_enabled()), so the guest never gets notified and the whole APF mechanism gets stuck. The same issue is likely to happen if the guest temporarily disables the LAPIC and a previously unavailable page becomes available. Do two things to resolve the issue:

- Avoid dequeuing 'page ready' events from the APF queue when the LAPIC is disabled.
- Trigger an attempt to deliver pending 'page ready' events when the LAPIC becomes enabled (SPIV or MSR_IA32_APICBASE).

Reported-by: Sebastien Boeuf <sebastien.boeuf@intel.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Message-Id: <20210422092948.568327-1-vkuznets@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
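A hedged sketch of the two halves; kvm_apic_present() stands in here for lapic.c's actual hardware+software enablement test:

    /* 1) Delivery path: keep 'page ready' events queued while the
     *    LAPIC cannot deliver an interrupt. */
    if (!kvm_apic_present(vcpu))
        return false;

    /* 2) LAPIC enable paths (SPIV write, MSR_IA32_APICBASE): have the
     *    vCPU retry any pending 'page ready' events. */
    kvm_make_request(KVM_REQ_APF_READY, vcpu);
    kvm_vcpu_kick(vcpu);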
-
- 23 Apr, 2021 3 commits
-
-
Sean Christopherson authored
Take "enum kvm_only_cpuid_leafs" in scattered specific CPUID helpers (which is obvious in hindsight), and use "unsigned int" for leafs that can be the kernel's standard "enum cpuid_leaf" or the aforementioned KVM-only variant. Loss of the enum params is a bit disapponting, but gcc obviously isn't providing any extra sanity checks, and the various BUILD_BUG_ON() assertions ensure the input is in range. This fixes implicit enum conversions that are detected by clang-11: arch/x86/kvm/cpuid.c:499:29: warning: implicit conversion from enumeration type 'enum kvm_only_cpuid_leafs' to different enumeration type 'enum cpuid_leafs' [-Wenum-conversion] kvm_cpu_cap_init_scattered(CPUID_12_EAX, ~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~ arch/x86/kvm/cpuid.c:837:31: warning: implicit conversion from enumeration type 'enum kvm_only_cpuid_leafs' to different enumeration type 'enum cpuid_leafs' [-Wenum-conversion] cpuid_entry_override(entry, CPUID_12_EAX); ~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~ 2 warnings generated. Fixes: 4e66c0cb ("KVM: x86: Add support for reverse CPUID lookup of scattered features") Cc: Kai Huang <kai.huang@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Message-Id: <20210421010850.3009718-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Isaku Yamahata authored
Use symbolic value, EPT_VIOLATION_GVA_TRANSLATED, instead of 0x100 in handle_ept_violation(). Signed-off-by: Yao Yuan <yuan.yao@intel.com> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com> Message-Id: <724e8271ea301aece3eb2afe286a9e2e92a70b18.1619136576.git.isaku.yamahata@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
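The resulting line in handle_ept_violation(), per the commit (symbolic instead of magic):

    error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
                  PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;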
-
Paolo Bonzini authored
Merge of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm: KVM/arm64 updates for Linux 5.13

New features:
- Stage-2 isolation for the host kernel when running in protected mode
- Guest SVE support when running in nVHE mode
- Force W^X hypervisor mappings in nVHE mode
- ITS save/restore for guests using direct injection with GICv4.1
- nVHE panics now produce readable backtraces
- Guest support for PTP using the ptp_kvm driver
- Performance improvements in the S2 fault handler
- Alexandru is now a reviewer (not really a new feature...)

Fixes:
- Proper emulation of the GICR_TYPER register
- Handle the complete set of relocations in the nVHE EL2 object
- Get rid of the oprofile dependency in the PMU code (and of the oprofile body parts at the same time)
- Debug and SPE fixes
- Fix vcpu reset
-
- 22 Apr, 2021 1 commit
-
-
Paolo Bonzini authored
-