- 22 May, 2011 8 commits
-
-
Avi Kivity authored
Avoid using ctxt->vcpu; we can do everything with ->get_cr() and ->set_cr(). A side effect is that we no longer activate the fpu on emulated CLTS, but that should be very rare. Signed-off-by: Avi Kivity <avi@redhat.com>
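A minimal sketch of the callback-only approach, assuming a pared-down ops table with get_cr()/set_cr() hooks that take the emulation context (simplified stand-ins, not the kernel's actual structures):

    #include <stdint.h>

    #define X86_CR0_TS (1UL << 3)   /* task-switched bit cleared by CLTS */

    struct x86_emulate_ctxt;

    /* Hypothetical subset of the emulator ops table: the emulator sees only
     * these callbacks, never the vcpu itself. */
    struct x86_emulate_ops {
        unsigned long (*get_cr)(struct x86_emulate_ctxt *ctxt, int cr);
        void (*set_cr)(struct x86_emulate_ctxt *ctxt, int cr, unsigned long val);
    };

    struct x86_emulate_ctxt {
        const struct x86_emulate_ops *ops;
    };

    /* CLTS: clear CR0.TS using only the ops callbacks. */
    void emulate_clts(struct x86_emulate_ctxt *ctxt)
    {
        unsigned long cr0 = ctxt->ops->get_cr(ctxt, 0);

        cr0 &= ~X86_CR0_TS;
        ctxt->ops->set_cr(ctxt, 0, cr0);
    }

The real callbacks may differ in signature and error handling; the point is that CR0 is touched only through the ops table, so the emulator no longer needs ctxt->vcpu.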
-
Avi Kivity authored
Avoid use of ctxt->vcpu. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Requires ctxt->vcpu, which is to be abolished. Replace with open calls to get_msr(). Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Replacing direct calls to realmode_lgdt(), realmode_lidt(). Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Unneeded for register access. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Making the emulator caller agnostic. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Making the emulator caller agnostic. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Making the emulator caller agnostic. [Takuya Yoshikawa: fix typo leading to LDT failures] Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 11 May, 2011 32 commits
-
-
Avi Kivity authored
Making the emulator caller agnostic. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Making the emulator caller agnostic. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Clean up lines longer than 80 columns. No code changes. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Nelson Elhage authored
Since segments need to be handled slightly differently when fetching instructions, we add a __linearize helper that accepts a new 'fetch' boolean. [avi: fix oops caused by wrong segmented_address initialization order] Signed-off-by: Nelson Elhage <nelhage@ksplice.com> Signed-off-by: Avi Kivity <avi@redhat.com>
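A hedged sketch of the idea with simplified types and a stubbed segment lookup (the real helper works on the emulator context and raises the appropriate fault; names here are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified segmented address: segment register index plus offset. */
    struct segmented_address {
        unsigned seg;
        uint64_t ea;
    };

    /* Illustrative stubs standing in for the real per-segment state. */
    static uint64_t seg_base(unsigned seg)      { (void)seg; return 0x10000; }
    static bool seg_is_executable(unsigned seg) { return seg == 1; /* CS only */ }

    /*
     * Turn a segmented address into a linear address. The 'fetch' flag marks
     * instruction fetches, which need slightly different segment checks
     * (e.g. the segment must be executable) than ordinary data accesses.
     */
    int linearize(struct segmented_address addr, bool fetch, uint64_t *linear)
    {
        if (fetch && !seg_is_executable(addr.seg))
            return -1;              /* real code would queue a fault here */

        *linear = seg_base(addr.seg) + addr.ea;
        return 0;
    }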
-
Joerg Roedel authored
The last_guest_tsc is used in vcpu_load to adjust the tsc_offset since tsc-scaling was merged. So the last_guest_tsc needs to be updated in vcpu_put instead of the last_host_tsc. This is fixed with this patch. Reported-by: Jan Kiszka <jan.kiszka@web.de> Tested-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
This patch fixes a bug in the nested-svm path when decode assists are available on the machine. After a selective-cr0 intercept is detected, the rip is advanced unconditionally. This causes the l1-guest to continue running with an l2-rip. This bug was found with the sel_cr0 unit-test on decode-assist capable hardware. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Nelson Elhage authored
Currently, setting a large (i.e. negative) base address for %cs does not work on a 64-bit host. The "JOS" teaching operating system, used by MIT and other universities, relies on such segments while bootstrapping its way to full virtual memory management. Signed-off-by: Nelson Elhage <nelhage@ksplice.com> Signed-off-by: Avi Kivity <avi@redhat.com>
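The failure mode is easiest to see with plain 32-bit arithmetic: outside long mode the linear address must wrap modulo 2^32 rather than being sign-extended to 64 bits on the host. A hedged sketch of that truncation (not the actual emulator code):

    #include <stdint.h>

    /*
     * Protected/real mode linear address: keeping the math in uint32_t makes
     * a "negative" (high) segment base plus offset wrap around, instead of
     * producing a huge 64-bit address on a 64-bit host.
     */
    uint32_t linear_addr_32(uint32_t seg_base, uint32_t offset)
    {
        return seg_base + offset;   /* wraps modulo 2^32 by unsigned rules */
    }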
-
Duan Jiong authored
Just remove the unused declaration of kvm_inject_pit_timer_irqs() from arch/x86/kvm/i8254.h. Signed-off-by: Duan Jiong <djduanjiong@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Duan Jiong authored
Just remove the unused declarations of kvm_pic_clear_isr_ack() and pit_has_pending_timer(). Signed-off-by: Duan Jiong <djduanjiong@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Jeff Mahoney authored
This patch avoids gcc issuing the following warning when KVM_MAX_VCPUS=1: "warning: array subscript is above array bounds". kvm_for_each_vcpu currently checks to see if the index for the vcpu is valid /after/ loading it. We don't run into problems because the address is still inside the enclosing struct kvm and we never dereference or write to it, so this isn't a security issue. The warning occurs when KVM_MAX_VCPUS=1 because the increment portion of the loop will *always* cause the loop to load an invalid location since ++idx will always be > 0. This patch moves the load so that the check occurs before the load and we don't run into the compiler warning. Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Avi Kivity <avi@redhat.com>
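Roughly, the change reorders the macro so the bounds check happens before kvm_get_vcpu() is called. A hedged sketch of both shapes (the _old/_new suffixes are only for illustration; the real macro lives in the KVM headers):

    /* Old shape: vcpup is loaded in the increment expression, before the
     * index is rechecked -- with KVM_MAX_VCPUS=1, ++idx always points past
     * the array, which is what gcc warns about. */
    #define kvm_for_each_vcpu_old(idx, vcpup, kvm) \
        for (idx = 0, vcpup = kvm_get_vcpu(kvm, idx); \
             idx < atomic_read(&kvm->online_vcpus) && vcpup; \
             vcpup = kvm_get_vcpu(kvm, ++idx))

    /* New shape: check idx first, and only then load the vcpu. */
    #define kvm_for_each_vcpu_new(idx, vcpup, kvm) \
        for (idx = 0; \
             idx < atomic_read(&kvm->online_vcpus) && \
             (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
             idx++)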
-
Serge E. Hallyn authored
When doing a soft int, we need to bump eip before pushing it to the stack. Otherwise we'll do the int a second time. [apw@canonical.com: merged eip update as per Jan's recommendation.] Signed-off-by: Serge E. Hallyn <serge.hallyn@ubuntu.com> Signed-off-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Takuya Yoshikawa authored
em_push() is a simple wrapper of emulate_push(). So this patch replaces emulate_push() with em_push() and removes the former, which is no longer necessary. In addition, the unused ops arguments are removed from emulate_pusha() and emulate_grp45(). Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Takuya Yoshikawa authored
PUSH emulation stores the value by calling writeback() after setting the dst operand appropriately in emulate_push(). This writeback() using dst is not needed at all because we know the target is the stack. So this patch makes emulate_push() call the newly introduced segmented_write() directly. This removes many inlined writeback() calls. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
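A hedged sketch of the direct-write shape, with simplified types and an illustrative callback standing in for segmented_write() (not the emulator's real signatures):

    #include <stddef.h>
    #include <stdint.h>

    struct push_ctxt;

    /* Illustrative callback: write 'bytes' bytes at SS:offset. */
    typedef int (*segmented_write_fn)(struct push_ctxt *ctxt, uint64_t ss_offset,
                                      const void *data, size_t bytes);

    struct push_ctxt {
        uint64_t rsp;              /* stack pointer */
        unsigned op_bytes;         /* operand size: 2, 4 or 8 */
        segmented_write_fn segmented_write;
    };

    /* Push: decrement the stack pointer, then store the value at SS:RSP
     * directly -- no intermediate dst operand, no writeback() pass. */
    int do_push(struct push_ctxt *ctxt, uint64_t value)
    {
        ctxt->rsp -= ctxt->op_bytes;
        return ctxt->segmented_write(ctxt, ctxt->rsp, &value, ctxt->op_bytes);
    }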
-
Takuya Yoshikawa authored
This stops "CMP r/m, reg" to write back the data into memory. Pointed out by Avi. The writeback suppression now covers CMP, CMPS, SCAS. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Jan Kiszka authored
In case certain allocations fail, vmx_create_vcpu may return 0 as error instead of a negative value encoded via ERR_PTR. This causes a NULL pointer dereference later on in kvm_vm_ioctl_vcpu_create. Reported-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
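The fix pattern is the standard kernel ERR_PTR()/IS_ERR() idiom: error paths hand back an encoded errno instead of NULL/0, so the caller can tell failure from success. A hedged sketch of the shape (not the exact vmx.c change):

    #include <linux/err.h>
    #include <linux/kvm_host.h>
    #include <linux/slab.h>

    /* Sketch of an allocation-failure path in a create-vcpu style function. */
    static struct kvm_vcpu *example_create_vcpu(void)
    {
        struct kvm_vcpu *vcpu = kzalloc(sizeof(*vcpu), GFP_KERNEL);

        if (!vcpu)
            return ERR_PTR(-ENOMEM);   /* not NULL/0: caller checks IS_ERR() */

        /* ... later failures likewise unwind and return ERR_PTR(err) ... */
        return vcpu;
    }

The caller then tests IS_ERR() and converts with PTR_ERR() instead of dereferencing the pointer.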
-
Gleb Natapov authored
Currently we sync registers back and forth before/after exiting to userspace for IO, but during IO the device model shouldn't need to read or write the registers, so we can just as well skip those sync points. The only exception is the broken vmware backdoor interface. The new code syncs register content during IO only if the registers are read from or written to by userspace in the middle of the IO operation, and this almost never happens in practice. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
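A hedged sketch of the lazy-sync idea, using hypothetical field names rather than the patch's own bookkeeping:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define NR_REGS 16

    /* Hypothetical per-vcpu register state for the sketch. */
    struct vcpu_regs {
        uint64_t regs[NR_REGS];
        bool synced_from_emulator;   /* emulator copy already folded in? */
    };

    /*
     * Userspace-facing register read: pay for the emulator->vcpu sync only
     * here, i.e. only when userspace actually looks at the registers in the
     * middle of an IO operation -- the rare case mentioned above.
     */
    uint64_t get_reg_for_userspace(struct vcpu_regs *v,
                                   const uint64_t emul_regs[NR_REGS], int reg)
    {
        if (!v->synced_from_emulator) {
            memcpy(v->regs, emul_regs, sizeof(v->regs));   /* rare slow path */
            v->synced_from_emulator = true;
        }
        return v->regs[reg];
    }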
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
For reuse later. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
So it can call emulate_gp() without forward declarations. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Needed for segment read/write checks. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Preparing to add segment checks. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
It's going to get more complicated soon. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Will help later adding proper segment checks. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
When the emulation of vmload or vmsave fails because the guest passed an unsupported physical address, it gets a #GP with rip pointing to the instruction after vmsave/vmload. This is a bug and is fixed by this patch. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
This patch implements two new vm-ioctls to get and set the virtual_tsc_khz if the machine supports tsc-scaling. Setting the tsc-frequency is only possible before userspace creates any vcpu. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
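A hedged userspace sketch, assuming the conventional KVM_SET_TSC_KHZ/KVM_GET_TSC_KHZ ioctl names behind the KVM_CAP_TSC_CONTROL capability; per the description above, 'fd' is the descriptor the ioctls are defined on and the frequency must be set before any vcpu exists:

    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Returns 0 on success; assumes KVM_CAP_TSC_CONTROL was checked first. */
    int set_guest_tsc_khz(int fd, unsigned long khz)
    {
        if (ioctl(fd, KVM_SET_TSC_KHZ, khz) < 0) {
            perror("KVM_SET_TSC_KHZ");
            return -1;
        }
        printf("guest tsc frequency: %d kHz\n", ioctl(fd, KVM_GET_TSC_KHZ));
        return 0;
    }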
-
Joerg Roedel authored
With TSC scaling in SVM the tsc-offset needs to be calculated differently. This patch propagates this calculation into the architecture specific modules so that this complexity can be handled there. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
This patch implements a call-back into the architecture code to allow the propagation of changes to the virtual tsc_khz of the vcpu. On SVM it updates the tsc_ratio variable, on VMX it does nothing. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
The calculation of the tsc_delta value to ensure a forward-going tsc for the guest is a function of the host-tsc. This works as long as the guest's tsc_khz is equal to the host's tsc_khz. With tsc-scaling hardware support this is no longer true and the tsc_delta needs to be calculated using guest_tsc values. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
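In sketch form, the forward-going guarantee has to compare TSC values that are already in guest units (host TSC run through the scaling ratio), not raw host cycles:

    #include <stdint.h>

    /*
     * Hedged sketch: compensation so the guest never sees its TSC go
     * backwards across a vcpu_load. Both arguments must be guest-scaled
     * TSC values when tsc-scaling is active.
     */
    uint64_t forward_tsc_delta(uint64_t last_guest_tsc, uint64_t cur_guest_tsc)
    {
        return cur_guest_tsc < last_guest_tsc ?
               last_guest_tsc - cur_guest_tsc : 0;
    }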
-
Joerg Roedel authored
This patch changes the kvm_guest_time_update function to use the TSC frequency the guest actually has for updating its clock. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Joerg Roedel authored
This patch enhances the kvm_amd module with functions to support the TSC_RATE_MSR which can be used to set a given tsc frequency for the guest vcpu. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com>
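A hedged sketch of how such a frequency maps onto the ratio register, assuming the documented fixed-point format with a 32-bit fractional part (ratio 1.0 == 1ULL << 32):

    #include <stdint.h>

    /* Guest-to-host TSC frequency ratio in fixed point (32 fractional bits). */
    uint64_t tsc_ratio(uint32_t guest_khz, uint32_t host_khz)
    {
        return ((uint64_t)guest_khz << 32) / host_khz;
    }

For example, a guest frequency of half the host's yields 0x80000000, i.e. 0.5 in this format.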
-
Avi Kivity authored
VMMCALL requires EFER.SVME to be enabled in the host, not in the guest, which is what check_svme() checks. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
VMMCALL needs the VendorSpecific tag so that #UD emulation (called if a guest running on AMD was migrated to an Intel host) is allowed to process the instruction. Signed-off-by: Avi Kivity <avi@redhat.com>
-