- 01 Mar, 2010 40 commits
-
Avi Kivity authored
We will use this later to give the guest ownership of cr0.ts. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Avi Kivity authored
Since we'd like to allow the guest to own a few bits of cr0 at times, we need to know when we access those bits. Signed-off-by: Avi Kivity <avi@redhat.com>
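For illustration, a minimal sketch of the kind of accessor this implies (all names are illustrative, not the kernel's exact code):

```c
/* Self-contained sketch of the idea: every cr0 read goes through a helper
 * that knows which bits the guest owns, so those bits can be refreshed
 * from hardware before they are used. */
struct vcpu_model {
        unsigned long cr0;                  /* host-side cached copy        */
        unsigned long cr0_guest_owned_bits; /* bits the guest writes freely */
};

/* Stand-in for re-reading guest-owned bits from the VMCS/VMCB. */
static void decache_cr0_guest_bits(struct vcpu_model *v)
{
        (void)v; /* a hardware read would refresh the owned bits of v->cr0 */
}

/* Read selected cr0 bits, refreshing them first if the guest owns any. */
static unsigned long read_cr0_bits(struct vcpu_model *v, unsigned long mask)
{
        if (v->cr0_guest_owned_bits & mask)
                decache_cr0_guest_bits(v);
        return v->cr0 & mask;
}
```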
-
Avi Kivity authored
clts writes cr0.ts; lmsw writes cr0[0:15] - record that in ftrace. Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
An SLB entry contains two pieces of information related to size: 1) PTE size, 2) SLB size. The L bit defines the PTE to be "large" (usually meaning 16MB), SLB_VSID_B_1T defines that the SLB should span 1 GB instead of the default 256MB. Apparently I messed things up, put those two in one box, shook it heavily and came up with the current code, which handles large pages incorrectly because it also treats large page SLB entries as "1TB" segment entries. This patch splits those two features apart, making Linux guests boot even when they have > 256MB. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
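For illustration, a self-contained sketch of keeping the two properties separate (the flag values are made up, not the kernel's SLB_VSID_* definitions):

```c
#include <stdbool.h>
#include <stdio.h>

/* Self-contained sketch: decode the two size properties independently. */
#define FLAG_L    (1ul << 8)  /* PTE size: large page (e.g. 16MB)   */
#define FLAG_B_1T (1ul << 9)  /* segment size: 1TB instead of 256MB */

struct slbe { unsigned long vsid_flags; };

/* A large-page entry is not automatically a 1TB-segment entry. */
static bool slbe_is_large_page(const struct slbe *s)  { return s->vsid_flags & FLAG_L; }
static bool slbe_is_1tb_segment(const struct slbe *s) { return s->vsid_flags & FLAG_B_1T; }

int main(void)
{
        struct slbe s = { .vsid_flags = FLAG_L };
        printf("large page: %d, 1TB segment: %d\n",
               slbe_is_large_page(&s), slbe_is_1tb_segment(&s));
        return 0;
}
```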
-
Alexander Graf authored
When we get a program interrupt in guest kernel mode, we try to emulate the instruction. If that fails, we report to the user and try again - at the exact same instruction pointer. So if the guest kernel really does trigger an invalid instruction, we loop forever. So let's instead forward program exceptions to the guest when we don't know the instruction we're supposed to emulate. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
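A self-contained model of the changed control flow (all names are illustrative):

```c
#include <stdio.h>

/* Self-contained model of the changed flow. */
enum emu_result { EMU_DONE, EMU_FAIL };

struct vcpu_model { unsigned long pc; int pending_program_irq; };

/* Stand-in emulator that does not recognize the instruction. */
static enum emu_result emulate_instruction(struct vcpu_model *v) { (void)v; return EMU_FAIL; }

static void queue_program_interrupt(struct vcpu_model *v) { v->pending_program_irq = 1; }

static void handle_program_interrupt(struct vcpu_model *v)
{
        if (emulate_instruction(v) == EMU_FAIL) {
                /* Unknown instruction: forward the fault to the guest
                 * instead of retrying the same instruction pointer. */
                queue_program_interrupt(v);
                return;
        }
        v->pc += 4;     /* emulated successfully: step past it */
}

int main(void)
{
        struct vcpu_model v = { .pc = 0xc0000000ul };
        handle_program_interrupt(&v);
        printf("pending program interrupt: %d\n", v.pending_program_irq);
        return 0;
}
```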
-
Alexander Graf authored
When we need to reinject a program interrupt into the guest, we also need to reinject the corresponding flags into the guest. Signed-off-by: Alexander Graf <agraf@suse.de> Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
The code to unset HID5.dcbz32 is broken. This patch makes it do the right rotate magic. Signed-off-by: Alexander Graf <agraf@suse.de> Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Book3S needs some flags in SRR1 to get to know details about an interrupt. One such example is the trap instruction. It tells the guest kernel that a program interrupt is due to a trap using a bit in SRR1. This patch implements the above behavior, making the guest's WARN_ON behave like a real WARN_ON. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Currently we're racy when doing the transition from IR=1 to IR=0, from the module memory entry code to the real mode SLB switching code. To work around that I took a look at the RTAS entry code, which is faced with a similar problem, and did the same thing: a small helper in linear mapped memory that does mtmsr with IR=0 and then RFIs into the actual handler. Thanks to that trick we can safely take page faults in the entry code and only need to be really careful from the SLB switching part onwards. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Using an RFI while IR=1 is dangerous. We need to set two SRRs and then do an RFI without getting interrupted at all, because every interrupt could potentially overwrite the SRR values. Fortunately, we don't need an RFI in this particular part of the code, so we can just replace it with an mtmsr and a branch. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
To fetch the last instruction we were interrupted on, we enable DR in the early exit code, where we are still in a very transitional phase between guest and host state. Most of the time this seemed to work, but another CPU can easily flush our TLB and HTAB, which makes us go into the Linux page fault handler, which totally breaks because we still use the guest's SLB entries. To work around that, let's introduce a second KVM guest mode that defines that whenever we get a trap, we don't call the Linux handler or go into the KVM exit code, but just jump over the faulting instruction. That way a potentially bad lwz doesn't trigger any faults and we can later on interpret the invalid instruction we fetched as "fetch didn't work". Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
We're being horribly racy right now. All the entry and exit code hijacks random fields from the PACA that could easily be used by different code in case we get interrupted, for example by a #MC or even a page fault. After discussing this with Ben, we figured it's best to reserve some more space in the PACA and just shove off some vcpu state to there. That way we can drastically improve the readability of the code, make it less racy and less complex. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
We now have helpers for the GPRs, so let's also add some for CR and XER. Having them in the PACA simplifies code a lot, as we don't need to care about where to store the CC or about overflowing any integers. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
All code in PPC KVM currently accesses GPRs in the vcpu struct directly. While there's nothing wrong with that with respect to the current way GPRs are stored and loaded, it doesn't suffice for the PACA acceleration that will follow in this patchset. So let's just create little wrapper inline functions that we call whenever a GPR needs to be read from or written to. The compiled code shouldn't really change at all for now. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
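A sketch of what such wrappers look like; the accessor names follow the kvmppc convention, while the surrounding structs are heavily simplified:

```c
/* Sketch of the wrappers; the backing store is simplified here. */
struct kvm_vcpu_arch_model { unsigned long gpr[32]; };
struct kvm_vcpu_model      { struct kvm_vcpu_arch_model arch; };

static inline void kvmppc_set_gpr(struct kvm_vcpu_model *vcpu, int num, unsigned long val)
{
        vcpu->arch.gpr[num] = val;
}

static inline unsigned long kvmppc_get_gpr(struct kvm_vcpu_model *vcpu, int num)
{
        return vcpu->arch.gpr[num];
}
```

Callers then use kvmppc_set_gpr(vcpu, 3, val) instead of writing vcpu->arch.gpr[3] directly, so the backing store can later move (partially into the PACA) without touching every call site.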
-
Takuya Yoshikawa authored
The explanation of write_emulated is confused with that of read_emulated. This patch fixes it. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Sheng Yang authored
Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Sheng Yang authored
Then the callback can provide the maximum supported large page level, which is more flexible. Also move the GB page support into x86_64-specific code. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
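Roughly, the callback shape looks like this (a simplified model; names and the ops-table layout are illustrative, and the level constants mirror the usual 4K/2M/1G hierarchy):

```c
/* Simplified model: instead of a boolean "GB pages?" flag, the backend
 * reports the largest page level it supports and common code clamps
 * against it. */
enum { PT_PAGE_TABLE_LEVEL = 1, PT_DIRECTORY_LEVEL = 2, PT_PDPE_LEVEL = 3 };

struct backend_ops {
        int (*get_lpage_level)(void);   /* max large page level supported */
};

/* Example backend: large pages yes, 1GB pages no. */
static int example_get_lpage_level(void) { return PT_DIRECTORY_LEVEL; }

static const struct backend_ops ops = { .get_lpage_level = example_get_lpage_level };

static int clamp_mapping_level(int wanted)
{
        int max = ops.get_lpage_level();
        return wanted > max ? max : wanted;
}
```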
-
Sheng Yang authored
We can use them in x86.c and vmx.c now... Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Alexander Graf authored
The PowerPC C ABI defines that registers r14-r31 need to be preserved across function calls. Since our exit handler is written in C, we can make use of that and don't need to reload r14-r31 on every entry/exit cycle. This technique is also used in the BookE code and is called "lightweight exits" there. To follow the tradition, it's called the same in Book3S. So far this optimization was disabled, though, as the code didn't do what it was expected to do and simply failed to work. This patch fixes and enables lightweight exits again. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Alexander Graf authored
When we're loading bolted entries into the SLB again, we're checking if an entry is in use and only slbmte it when it is. Unfortunately, the check always goes to the skip label of the first entry, resulting in an endless loop when it actually gets triggered. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Avi Kivity authored
When the guest acknowledges an interrupt, it sends an EOI message to the local apic, which broadcasts it to the ioapic. To handle the EOI, we need to take the ioapic mutex. On large guests, this causes a lot of contention on this mutex. Since large guests usually don't route interrupts via the ioapic (they use msi instead), this is completely unnecessary. Avoid taking the mutex by introducing a handled_vectors bitmap. Before taking the mutex, check if the ioapic was actually responsible for the acked vector. If not, we can return early. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
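Schematically (a simplified, self-contained model; in the real code the bitmap lives in the ioapic state and is rebuilt whenever redirection entries change):

```c
#include <limits.h>
#include <stdbool.h>

/* Simplified model: keep a bitmap of the vectors the ioapic actually
 * routes and consult it locklessly on EOI, so guests that only use MSI
 * never touch the ioapic mutex. */
#define NR_VECTORS    256
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

struct ioapic_model {
        unsigned long handled_vectors[NR_VECTORS / BITS_PER_WORD];
        /* ... redirection table, mutex, ... */
};

/* Rebuilt whenever a redirection entry changes. */
static void mark_handled(struct ioapic_model *io, unsigned int vec)
{
        io->handled_vectors[vec / BITS_PER_WORD] |= 1ul << (vec % BITS_PER_WORD);
}

static bool vector_handled(const struct ioapic_model *io, unsigned int vec)
{
        return io->handled_vectors[vec / BITS_PER_WORD] >> (vec % BITS_PER_WORD) & 1;
}

static void ioapic_eoi(struct ioapic_model *io, unsigned int vec)
{
        if (!vector_handled(io, vec))
                return;         /* fast path: no mutex, no table scan */
        /* slow path: take the ioapic mutex and scan the redirection table */
}
```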
-
Avi Kivity authored
Some exit reasons missed their strings; fill out the table. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Avi Kivity authored
With slots_lock converted to rcu, the entire kvm hotpath on modern processors (with npt or ept) now scales beautifully. Increase the maximum vcpu count to 64 to reflect this. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Using a similar two-step procedure as for memslots. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Use two steps for memslot deletion: mark the slot invalid (which stops instantiation of new shadow pages for that slot, but allows destruction), then instantiate the new empty slot. Also simplifies kvm_handle_hva locking. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
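For illustration, a self-contained model of the two-step delete; the kernel uses rcu_assign_pointer()/synchronize_srcu() where publish()/wait_readers() appear below, and the structures are heavily simplified:

```c
#include <stdlib.h>
#include <string.h>

/* Self-contained model of the two-step memslot delete. */
struct memslot  { unsigned long base_gfn, npages; int invalid; };
struct memslots { struct memslot slots[32]; };
struct vm       { struct memslots *memslots; };

static void publish(struct vm *vm, struct memslots *s) { vm->memslots = s; }
static void wait_readers(void) { /* synchronize_srcu() in the kernel */ }

static struct memslots *copy_slots(const struct memslots *old)
{
        struct memslots *n = malloc(sizeof(*n));
        memcpy(n, old, sizeof(*n));
        return n;
}

static void delete_memslot(struct vm *vm, int id)
{
        struct memslots *old = vm->memslots;

        /* Step 1: mark the slot invalid -- no new shadow pages can be
         * instantiated for it, but existing ones can still be destroyed. */
        struct memslots *tmp = copy_slots(old);
        tmp->slots[id].invalid = 1;
        publish(vm, tmp);
        wait_readers();

        /* Step 2: install the slot as actually empty. */
        struct memslots *fin = copy_slots(tmp);
        memset(&fin->slots[id], 0, sizeof(fin->slots[id]));
        publish(vm, fin);
        wait_readers();

        free(old);
        free(tmp);
}
```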
-
Marcelo Tosatti authored
So it's possible to IOMMU-map a memslot before making it visible to KVM. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Which takes a memslot pointer instead of using kvm->memslots. To be used by the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Required for the SRCU conversion later. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Have a pointer to an allocated region inside x86's kvm_arch. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Marcelo Tosatti authored
Have a pointer to an allocated region inside struct kvm. [alex: fix ppc book 3s] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
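Schematically, the layout change looks like this (struct names and sizes are simplified; the real code allocates the region at VM creation and frees it on destruction):

```c
/* Simplified before/after of the layout change: with a pointer, a new
 * memslots copy can be built on the side and published with one pointer
 * update, which the later SRCU conversion relies on. */
struct kvm_memory_slot { unsigned long base_gfn, npages; };

/* before: the slot array embedded directly in struct kvm */
struct kvm_before {
        struct kvm_memory_slot memslots[32];
};

/* after: one dynamically allocated region, reached through a pointer */
struct kvm_memslots {
        int nmemslots;
        struct kvm_memory_slot memslots[32];
};

struct kvm_after {
        struct kvm_memslots *memslots;  /* allocated at VM creation */
};
```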
-
Wu Fengguang authored
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Alexander Graf authored
Progress on KVM for Embedded PowerPC has stalled, but for Book3S there's quite a lot of work to do and going on. So in agreement with Hollis and Avi, we should switch maintainers for PowerPC. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Hollis Blanchard <hollis@penguinppc.org> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Because we now emulate the DEC interrupt according to real-life behavior, there's no need to keep the AGGRESSIVE_DEC hack around. Let's just remove it. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Hollis Blanchard <hollis@penguinppc.org> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
We treated the DEC interrupt like an edge-based one. This is not true for Book3S. The DEC keeps firing until mtdec is issued again and thus clears the interrupt line. So let's implement this logic in KVM too. This patch moves the line clearing from the firing of the interrupt to the mtdec emulation. This makes PPC64 guests work without AGGRESSIVE_DEC defined. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Hollis Blanchard <hollis@penguinppc.org> Signed-off-by: Avi Kivity <avi@redhat.com>
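A simplified model of the behavioral change (names and the pending-bit encoding are illustrative):

```c
/* Simplified model: the DEC interrupt is level-triggered on Book3S, so
 * delivery no longer clears it; only the guest's mtdec does. */
#define IRQPRIO_DECREMENTER 1

struct vcpu_model { unsigned long pending_irqs, dec; };

static void queue_dec(struct vcpu_model *v)
{
        v->pending_irqs |= 1ul << IRQPRIO_DECREMENTER;
}

static void dequeue_dec(struct vcpu_model *v)
{
        v->pending_irqs &= ~(1ul << IRQPRIO_DECREMENTER);
}

/* Before: the delivery path called dequeue_dec().  Now only mtdec does. */
static void emulate_mtdec(struct vcpu_model *v, unsigned long val)
{
        v->dec = val;
        dequeue_dec(v);         /* line clearing moved here from delivery */
        /* ... restart the decrementer timer ... */
}
```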
-
Alexander Graf authored
We're using a switch table to find the irqprio that belongs to a specific interrupt vector. This table is part of the interrupt inject logic. Since we'll add a new function to stop interrupts, let's move this table out of the injection logic into a separate function. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Hollis Blanchard <hollis@penguinppc.org> Signed-off-by: Avi Kivity <avi@redhat.com>
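The separated helper is essentially a vector-to-priority switch, roughly like this (the real Book3S code covers many more vectors; the constants here are illustrative):

```c
/* Simplified vector -> irqprio lookup, pulled out of the injection path
 * so a future "dequeue interrupt" helper can reuse it. */
enum {
        VEC_EXTERNAL    = 0x500,
        VEC_PROGRAM     = 0x700,
        VEC_DECREMENTER = 0x900,
};

enum {
        IRQPRIO_PROGRAM = 0,
        IRQPRIO_EXTERNAL,
        IRQPRIO_DECREMENTER,
        IRQPRIO_MAX,
};

static unsigned int vec2irqprio(unsigned int vec)
{
        switch (vec) {
        case VEC_PROGRAM:     return IRQPRIO_PROGRAM;
        case VEC_EXTERNAL:    return IRQPRIO_EXTERNAL;
        case VEC_DECREMENTER: return IRQPRIO_DECREMENTER;
        default:              return IRQPRIO_MAX;      /* unknown vector */
        }
}
```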
-
Avi Kivity authored
- add destructor function
- move related allocation into constructor
- add stubs for !CONFIG_KVM_MMIO
Signed-off-by: Avi Kivity <avi@redhat.com>
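The !CONFIG_KVM_MMIO stubs follow the usual empty-inline pattern, roughly like this (bodies simplified):

```c
/* The usual pattern: real constructor/destructor when the feature is
 * built in, empty inline stubs otherwise so callers need no #ifdefs.
 * The real init allocates the ring and registers the coalesced-MMIO
 * device; free tears both down again. */
struct kvm;

#ifdef CONFIG_KVM_MMIO
int  kvm_coalesced_mmio_init(struct kvm *kvm);
void kvm_coalesced_mmio_free(struct kvm *kvm);
#else
static inline int  kvm_coalesced_mmio_init(struct kvm *kvm) { return 0; }
static inline void kvm_coalesced_mmio_free(struct kvm *kvm) { }
#endif
```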
-