Commit 98edb6ca authored by Linus Torvalds

Merge branch 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (269 commits)
  KVM: x86: Add missing locking to arch specific vcpu ioctls
  KVM: PPC: Add missing vcpu_load()/vcpu_put() in vcpu ioctls
  KVM: MMU: Segregate shadow pages with different cr0.wp
  KVM: x86: Check LMA bit before set_efer
  KVM: Don't allow lmsw to clear cr0.pe
  KVM: Add cpuid.txt file
  KVM: x86: Tell the guest we'll warn it about tsc stability
  x86, paravirt: don't compute pvclock adjustments if we trust the tsc
  x86: KVM guest: Try using new kvm clock msrs
  KVM: x86: export paravirtual cpuid flags in KVM_GET_SUPPORTED_CPUID
  KVM: x86: add new KVMCLOCK cpuid feature
  KVM: x86: change msr numbers for kvmclock
  x86, paravirt: Add a global synchronization point for pvclock
  x86, paravirt: Enable pvclock flags in vcpu_time_info structure
  KVM: x86: Inject #GP with the right rip on efer writes
  KVM: SVM: Don't allow nested guest to VMMCALL into host
  KVM: x86: Fix exception reinjection forced to true
  KVM: Fix wallclock version writing race
  KVM: MMU: Don't read pdptrs with mmu spinlock held in mmu_alloc_roots
  KVM: VMX: enable VMXON check with SMX enabled (Intel TXT)
  ...
parents a8251096 8fbf065d
@@ -656,6 +656,7 @@ struct kvm_clock_data {

4.29 KVM_GET_VCPU_EVENTS

Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (out)

@@ -676,7 +677,7 @@ struct kvm_vcpu_events {
		__u8 injected;
		__u8 nr;
		__u8 soft;
-		__u8 pad;
+		__u8 shadow;
	} interrupt;
	struct {
		__u8 injected;

@@ -688,9 +689,13 @@ struct kvm_vcpu_events {
	__u32 flags;
};
KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
interrupt.shadow contains a valid state. Otherwise, this field is undefined.
4.30 KVM_SET_VCPU_EVENTS

Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_vcpu_event (in)

@@ -709,6 +714,183 @@ current in-kernel state. The bits are:

KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
the flags field to signal that interrupt.shadow contains a valid state and
shall be written into the VCPU.
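
As a rough usage sketch (vcpu_fd, the includes and the chosen values are
assumptions, not part of the ABI described above), userspace could transfer
the interrupt shadow like this:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void clear_interrupt_shadow(int vcpu_fd)
{
	struct kvm_vcpu_events events;

	if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
		perror("KVM_GET_VCPU_EVENTS");

	/* Only claim interrupt.shadow is valid when writing it back. */
	events.flags = KVM_VCPUEVENT_VALID_SHADOW;
	events.interrupt.shadow = 0;	/* example: no shadow active */

	if (ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events) < 0)
		perror("KVM_SET_VCPU_EVENTS");
}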
4.32 KVM_GET_DEBUGREGS
Capability: KVM_CAP_DEBUGREGS
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_debugregs (out)
Returns: 0 on success, -1 on error
Reads debug registers from the vcpu.
struct kvm_debugregs {
__u64 db[4];
__u64 dr6;
__u64 dr7;
__u64 flags;
__u64 reserved[9];
};
4.33 KVM_SET_DEBUGREGS
Capability: KVM_CAP_DEBUGREGS
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_debugregs (in)
Returns: 0 on success, -1 on error
Writes debug registers into the vcpu.
See KVM_GET_DEBUGREGS for the data structure. The flags field is not used
yet and must be cleared on entry.
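
A minimal sketch of both ioctls (vcpu_fd and breakpoint_addr are assumptions;
error handling is reduced to perror()):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void arm_hw_breakpoint(int vcpu_fd, unsigned long breakpoint_addr)
{
	struct kvm_debugregs dbg;

	memset(&dbg, 0, sizeof(dbg));
	if (ioctl(vcpu_fd, KVM_GET_DEBUGREGS, &dbg) < 0)
		perror("KVM_GET_DEBUGREGS");

	dbg.db[0] = breakpoint_addr;	/* linear address to watch */
	dbg.dr7 |= 0x1;			/* enable breakpoint 0 (local) */
	dbg.flags = 0;			/* must be cleared, see above */

	if (ioctl(vcpu_fd, KVM_SET_DEBUGREGS, &dbg) < 0)
		perror("KVM_SET_DEBUGREGS");
}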
4.34 KVM_SET_USER_MEMORY_REGION
Capability: KVM_CAP_USER_MEM
Architectures: all
Type: vm ioctl
Parameters: struct kvm_userspace_memory_region (in)
Returns: 0 on success, -1 on error
struct kvm_userspace_memory_region {
__u32 slot;
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size; /* bytes */
__u64 userspace_addr; /* start of the userspace allocated memory */
};
/* for kvm_memory_region::flags */
#define KVM_MEM_LOG_DIRTY_PAGES 1UL
This ioctl allows the user to create or modify a guest physical memory
slot. When changing an existing slot, it may be moved in the guest
physical memory space, or its flags may be modified. It may not be
resized. Slots may not overlap in guest physical address space.
Memory for the region is taken starting at the address denoted by the
field userspace_addr, which must point at user addressable memory for
the entire memory slot size. Any object may back this memory, including
anonymous memory, ordinary files, and hugetlbfs.
It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
be identical. This allows large pages in the guest to be backed by large
pages in the host.
The flags field supports just one flag, KVM_MEM_LOG_DIRTY_PAGES, which
instructs kvm to keep track of writes to memory within the slot. See
the KVM_GET_DIRTY_LOG ioctl.
When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory
region are automatically reflected into the guest. For example, an mmap()
that affects the region will be made visible immediately. Another example
is madvise(MADV_DROP).
It is recommended to use this API instead of the KVM_SET_MEMORY_REGION ioctl.
The KVM_SET_MEMORY_REGION ioctl does not allow fine-grained control over
memory allocation and is deprecated.
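
For illustration only (vm_fd, the 16 MB size and the anonymous mmap() backing
are assumptions, and error handling is kept minimal), registering a single
slot could look like this:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static void add_memory_slot(int vm_fd)
{
	struct kvm_userspace_memory_region region;
	size_t size = 16 * 1024 * 1024;
	void *mem;

	mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(&region, 0, sizeof(region));
	region.slot = 0;
	region.flags = KVM_MEM_LOG_DIRTY_PAGES;	/* optional dirty tracking */
	region.guest_phys_addr = 0;
	region.memory_size = size;
	region.userspace_addr = (unsigned long)mem;

	if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
		perror("KVM_SET_USER_MEMORY_REGION");
}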
4.35 KVM_SET_TSS_ADDR
Capability: KVM_CAP_SET_TSS_ADDR
Architectures: x86
Type: vm ioctl
Parameters: unsigned long tss_address (in)
Returns: 0 on success, -1 on error
This ioctl defines the physical address of a three-page region in the guest
physical address space. The region must be within the first 4GB of the
guest physical address space and must not conflict with any memory slot
or any mmio address. The guest may malfunction if it accesses this memory
region.
This ioctl is required on Intel-based hosts. This is needed on Intel hardware
because of a quirk in the virtualization implementation (see the internals
documentation when it pops into existence).
4.36 KVM_ENABLE_CAP
Capability: KVM_CAP_ENABLE_CAP
Architectures: ppc
Type: vcpu ioctl
Parameters: struct kvm_enable_cap (in)
Returns: 0 on success; -1 on error
Not all extensions are enabled by default. Using this ioctl the application
can enable an extension, making it available to the guest.
On systems that do not support this ioctl, it always fails. On systems that
do support it, it only works for extensions that are supported for enablement.
To check if a capability can be enabled, the KVM_CHECK_EXTENSION ioctl should
be used.
struct kvm_enable_cap {
/* in */
__u32 cap;
The capability that is supposed to get enabled.
__u32 flags;
A bitfield indicating future enhancements. Has to be 0 for now.
__u64 args[4];
Arguments for enabling a feature. If a feature needs initial values to
function properly, this is the place to put them.
__u8 pad[64];
};
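
A rough sketch of enabling one extension on a vcpu (KVM_CAP_PPC_OSI is used
purely as an example capability; kvm_fd and vcpu_fd are assumptions):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void enable_osi(int kvm_fd, int vcpu_fd)
{
	struct kvm_enable_cap enable;

	memset(&enable, 0, sizeof(enable));
	enable.cap = KVM_CAP_PPC_OSI;
	enable.flags = 0;		/* must be 0 for now, see above */

	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_OSI) > 0 &&
	    ioctl(vcpu_fd, KVM_ENABLE_CAP, &enable) < 0)
		perror("KVM_ENABLE_CAP");
}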
4.37 KVM_GET_MP_STATE
Capability: KVM_CAP_MP_STATE
Architectures: x86, ia64
Type: vcpu ioctl
Parameters: struct kvm_mp_state (out)
Returns: 0 on success; -1 on error
struct kvm_mp_state {
__u32 mp_state;
};
Returns the vcpu's current "multiprocessing state" (though also valid on
uniprocessor guests).
Possible values are:
- KVM_MP_STATE_RUNNABLE: the vcpu is currently running
- KVM_MP_STATE_UNINITIALIZED: the vcpu is an application processor (AP)
which has not yet received an INIT signal
- KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
now ready for a SIPI
- KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
is waiting for an interrupt
- KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
accessible via KVM_GET_VCPU_EVENTS)
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace.
4.38 KVM_SET_MP_STATE
Capability: KVM_CAP_MP_STATE
Architectures: x86, ia64
Type: vcpu ioctl
Parameters: struct kvm_mp_state (in)
Returns: 0 on success; -1 on error
Sets the vcpu's current "multiprocessing state"; see KVM_GET_MP_STATE for
arguments.
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace.
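
A small sketch combining both calls (vcpu_fd is an assumption):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void kick_halted_vcpu(int vcpu_fd)
{
	struct kvm_mp_state mp_state;

	if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp_state) < 0)
		perror("KVM_GET_MP_STATE");

	if (mp_state.mp_state == KVM_MP_STATE_HALTED) {
		mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
		if (ioctl(vcpu_fd, KVM_SET_MP_STATE, &mp_state) < 0)
			perror("KVM_SET_MP_STATE");
	}
}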
5. The kvm_run structure

@@ -820,6 +1002,13 @@ executed a memory-mapped I/O instruction which could not be satisfied
by kvm. The 'data' member contains the written data if 'is_write' is
true, and should be filled by application code otherwise.
NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO and KVM_EXIT_OSI, the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN. The kernel side will first finish
incomplete operations and then check for pending signals. Userspace
can re-enter the guest with an unmasked signal pending to complete
pending operations.
		/* KVM_EXIT_HYPERCALL */
		struct {
			__u64 nr;

@@ -829,7 +1018,9 @@ true, and should be filled by application code otherwise.
			__u32 pad;
		} hypercall;
Unused. This was once used for 'hypercall to userspace'. To implement
such functionality, use KVM_EXIT_IO (x86) or KVM_EXIT_MMIO (all except s390).
Note KVM_EXIT_IO is significantly faster than KVM_EXIT_MMIO.
		/* KVM_EXIT_TPR_ACCESS */
		struct {

@@ -870,6 +1061,19 @@ s390 specific.

powerpc specific.
/* KVM_EXIT_OSI */
struct {
__u64 gprs[32];
} osi;
MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
hypercalls and exit with this exit struct that contains all the guest gprs.
If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
Userspace can now handle the hypercall and when it's done modify the gprs as
necessary. Upon guest entry all guest GPRs will then be replaced by the values
in this struct.
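
A hedged sketch of the userspace side (run is the mmap'ed kvm_run region;
handle_osi_call() stands in for a hypothetical MOL-style dispatcher):

static void handle_exit_osi(struct kvm_run *run)
{
	__u64 *gprs = run->osi.gprs;

	/* Dispatch on the OSI call number/arguments carried in the gprs. */
	handle_osi_call(gprs);

	/* The (possibly modified) gprs are loaded back into the guest
	 * on the next KVM_RUN. */
}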
		/* Fix the size of the union. */
		char padding[256];
	};
KVM CPUID bits
Glauber Costa <glommer@redhat.com>, Red Hat Inc, 2010
=====================================================
A guest running on a kvm host can check some of its features using
cpuid. This is not always guaranteed to work, since userspace can
mask out some, or even all, KVM-related cpuid features before launching
a guest.
KVM cpuid functions are:
function: KVM_CPUID_SIGNATURE (0x40000000)
returns : eax = 0,
ebx = 0x4b4d564b,
ecx = 0x564b4d56,
edx = 0x4d.
Note that this value in ebx, ecx and edx corresponds to the string "KVMKVMKVM".
This function queries the presence of KVM cpuid leaves.
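
A minimal guest-side detection sketch (using GCC's <cpuid.h> __cpuid macro;
an inline-asm variant would work just as well):

#include <cpuid.h>
#include <string.h>

static int kvm_para_present(void)
{
	unsigned int eax, ebx, ecx, edx;
	char signature[13];

	__cpuid(0x40000000, eax, ebx, ecx, edx);	/* KVM_CPUID_SIGNATURE */
	memcpy(signature + 0, &ebx, 4);
	memcpy(signature + 4, &ecx, 4);
	memcpy(signature + 8, &edx, 4);
	signature[12] = '\0';

	return strcmp(signature, "KVMKVMKVM") == 0;
}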
function: KVM_CPUID_FEATURES (0x40000001)
returns : ebx, ecx, edx = 0
          eax = an OR'ed group of (1 << flag), where each flag is:
flag || value || meaning
=============================================================================
KVM_FEATURE_CLOCKSOURCE || 0 || kvmclock available at msrs
|| || 0x11 and 0x12.
------------------------------------------------------------------------------
KVM_FEATURE_NOP_IO_DELAY || 1 || not necessary to perform delays
|| || on PIO operations.
------------------------------------------------------------------------------
KVM_FEATURE_MMU_OP || 2 || deprecated.
------------------------------------------------------------------------------
KVM_FEATURE_CLOCKSOURCE2 || 3 || kvmclock available at msrs
|| || 0x4b564d00 and 0x4b564d01
------------------------------------------------------------------------------
KVM_FEATURE_CLOCKSOURCE_STABLE_BIT || 24 || host will warn if no guest-side
|| || per-cpu warps are expected in
|| || kvmclock.
------------------------------------------------------------------------------
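
In the same spirit, individual feature bits can then be tested against the
table above (the numeric flag values come from the table; this is only a
sketch):

#include <cpuid.h>

static int kvm_clocksource_generation(void)
{
	unsigned int eax, ebx, ecx, edx;

	__cpuid(0x40000001, eax, ebx, ecx, edx);	/* KVM_CPUID_FEATURES */

	/* Prefer the new kvmclock MSRs, fall back to the old ones. */
	if (eax & (1 << 3))	/* KVM_FEATURE_CLOCKSOURCE2 */
		return 2;
	if (eax & (1 << 0))	/* KVM_FEATURE_CLOCKSOURCE */
		return 1;
	return 0;
}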
This diff is collapsed.
...@@ -979,11 +979,13 @@ long kvm_arch_vm_ioctl(struct file *filp, ...@@ -979,11 +979,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
r = -EFAULT; r = -EFAULT;
if (copy_from_user(&irq_event, argp, sizeof irq_event)) if (copy_from_user(&irq_event, argp, sizeof irq_event))
goto out; goto out;
r = -ENXIO;
if (irqchip_in_kernel(kvm)) { if (irqchip_in_kernel(kvm)) {
__s32 status; __s32 status;
status = kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID, status = kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID,
irq_event.irq, irq_event.level); irq_event.irq, irq_event.level);
if (ioctl == KVM_IRQ_LINE_STATUS) { if (ioctl == KVM_IRQ_LINE_STATUS) {
r = -EFAULT;
irq_event.status = status; irq_event.status = status;
if (copy_to_user(argp, &irq_event, if (copy_to_user(argp, &irq_event,
sizeof irq_event)) sizeof irq_event))
@@ -1379,7 +1381,7 @@ static void kvm_release_vm_pages(struct kvm *kvm)
	int i, j;
	unsigned long base_gfn;

-	slots = rcu_dereference(kvm->memslots);
+	slots = kvm_memslots(kvm);
	for (i = 0; i < slots->nmemslots; i++) {
		memslot = &slots->memslots[i];
		base_gfn = memslot->base_gfn;

@@ -1535,8 +1537,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
		goto out;
	if (copy_to_user(user_stack, stack,
-			 sizeof(struct kvm_ia64_vcpu_stack)))
+			 sizeof(struct kvm_ia64_vcpu_stack))) {
+		r = -EFAULT;
		goto out;
+	}
	break;
}
......
@@ -51,7 +51,7 @@ static int __init kvm_vmm_init(void)
	vmm_fpswa_interface = fpswa_interface;

	/*Register vmm data to kvm side*/
-	return kvm_init(&vmm_info, 1024, THIS_MODULE);
+	return kvm_init(&vmm_info, 1024, 0, THIS_MODULE);
}

static void __exit kvm_vmm_exit(void)
......
@@ -21,6 +21,7 @@

/* operations for longs and pointers */
#define PPC_LL		stringify_in_c(ld)
#define PPC_STL		stringify_in_c(std)
+#define PPC_STLU	stringify_in_c(stdu)
#define PPC_LCMPI	stringify_in_c(cmpdi)
#define PPC_LONG	stringify_in_c(.llong)
#define PPC_LONG_ALIGN	stringify_in_c(.balign 8)

@@ -44,6 +45,7 @@

/* operations for longs and pointers */
#define PPC_LL		stringify_in_c(lwz)
#define PPC_STL		stringify_in_c(stw)
+#define PPC_STLU	stringify_in_c(stwu)
#define PPC_LCMPI	stringify_in_c(cmpwi)
#define PPC_LONG	stringify_in_c(.long)
#define PPC_LONG_ALIGN	stringify_in_c(.balign 4)
......
...@@ -77,4 +77,14 @@ struct kvm_debug_exit_arch { ...@@ -77,4 +77,14 @@ struct kvm_debug_exit_arch {
struct kvm_guest_debug_arch { struct kvm_guest_debug_arch {
}; };
#define KVM_REG_MASK 0x001f
#define KVM_REG_EXT_MASK 0xffe0
#define KVM_REG_GPR 0x0000
#define KVM_REG_FPR 0x0020
#define KVM_REG_QPR 0x0040
#define KVM_REG_FQPR 0x0060
#define KVM_INTERRUPT_SET -1U
#define KVM_INTERRUPT_UNSET -2U
#endif /* __LINUX_KVM_POWERPC_H */ #endif /* __LINUX_KVM_POWERPC_H */
...@@ -88,6 +88,8 @@ ...@@ -88,6 +88,8 @@
#define BOOK3S_HFLAG_DCBZ32 0x1 #define BOOK3S_HFLAG_DCBZ32 0x1
#define BOOK3S_HFLAG_SLB 0x2 #define BOOK3S_HFLAG_SLB 0x2
#define BOOK3S_HFLAG_PAIRED_SINGLE 0x4
#define BOOK3S_HFLAG_NATIVE_PS 0x8
#define RESUME_FLAG_NV (1<<0) /* Reload guest nonvolatile state? */ #define RESUME_FLAG_NV (1<<0) /* Reload guest nonvolatile state? */
#define RESUME_FLAG_HOST (1<<1) /* Resume host? */ #define RESUME_FLAG_HOST (1<<1) /* Resume host? */
......
...@@ -22,46 +22,47 @@ ...@@ -22,46 +22,47 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#include <asm/kvm_book3s_64_asm.h> #include <asm/kvm_book3s_asm.h>
struct kvmppc_slb { struct kvmppc_slb {
u64 esid; u64 esid;
u64 vsid; u64 vsid;
u64 orige; u64 orige;
u64 origv; u64 origv;
bool valid; bool valid : 1;
bool Ks; bool Ks : 1;
bool Kp; bool Kp : 1;
bool nx; bool nx : 1;
bool large; /* PTEs are 16MB */ bool large : 1; /* PTEs are 16MB */
bool tb; /* 1TB segment */ bool tb : 1; /* 1TB segment */
bool class; bool class : 1;
}; };
struct kvmppc_sr { struct kvmppc_sr {
u32 raw; u32 raw;
u32 vsid; u32 vsid;
bool Ks; bool Ks : 1;
bool Kp; bool Kp : 1;
bool nx; bool nx : 1;
bool valid : 1;
}; };
struct kvmppc_bat { struct kvmppc_bat {
u64 raw; u64 raw;
u32 bepi; u32 bepi;
u32 bepi_mask; u32 bepi_mask;
bool vs;
bool vp;
u32 brpn; u32 brpn;
u8 wimg; u8 wimg;
u8 pp; u8 pp;
bool vs : 1;
bool vp : 1;
}; };
struct kvmppc_sid_map { struct kvmppc_sid_map {
u64 guest_vsid; u64 guest_vsid;
u64 guest_esid; u64 guest_esid;
u64 host_vsid; u64 host_vsid;
bool valid; bool valid : 1;
}; };
#define SID_MAP_BITS 9 #define SID_MAP_BITS 9
...@@ -70,7 +71,7 @@ struct kvmppc_sid_map { ...@@ -70,7 +71,7 @@ struct kvmppc_sid_map {
struct kvmppc_vcpu_book3s { struct kvmppc_vcpu_book3s {
struct kvm_vcpu vcpu; struct kvm_vcpu vcpu;
struct kvmppc_book3s_shadow_vcpu shadow_vcpu; struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
struct kvmppc_sid_map sid_map[SID_MAP_NUM]; struct kvmppc_sid_map sid_map[SID_MAP_NUM];
struct kvmppc_slb slb[64]; struct kvmppc_slb slb[64];
struct { struct {
...@@ -82,9 +83,10 @@ struct kvmppc_vcpu_book3s { ...@@ -82,9 +83,10 @@ struct kvmppc_vcpu_book3s {
struct kvmppc_bat ibat[8]; struct kvmppc_bat ibat[8];
struct kvmppc_bat dbat[8]; struct kvmppc_bat dbat[8];
u64 hid[6]; u64 hid[6];
u64 gqr[8];
int slb_nr; int slb_nr;
u32 dsisr;
u64 sdr1; u64 sdr1;
u64 dsisr;
u64 hior; u64 hior;
u64 msr_mask; u64 msr_mask;
u64 vsid_first; u64 vsid_first;
...@@ -98,15 +100,15 @@ struct kvmppc_vcpu_book3s { ...@@ -98,15 +100,15 @@ struct kvmppc_vcpu_book3s {
#define CONTEXT_GUEST 1 #define CONTEXT_GUEST 1
#define CONTEXT_GUEST_END 2 #define CONTEXT_GUEST_END 2
#define VSID_REAL 0xfffffffffff00000 #define VSID_REAL 0x1fffffffffc00000ULL
#define VSID_REAL_DR 0xffffffffffe00000 #define VSID_BAT 0x1fffffffffb00000ULL
#define VSID_REAL_IR 0xffffffffffd00000 #define VSID_REAL_DR 0x2000000000000000ULL
#define VSID_BAT 0xffffffffffc00000 #define VSID_REAL_IR 0x4000000000000000ULL
#define VSID_PR 0x8000000000000000 #define VSID_PR 0x8000000000000000ULL
extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 ea, u64 ea_mask); extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong ea, ulong ea_mask);
extern void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 vp, u64 vp_mask); extern void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 vp, u64 vp_mask);
extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end); extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end);
extern void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 new_msr); extern void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 new_msr);
extern void kvmppc_mmu_book3s_64_init(struct kvm_vcpu *vcpu); extern void kvmppc_mmu_book3s_64_init(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu); extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
...@@ -114,11 +116,13 @@ extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte); ...@@ -114,11 +116,13 @@ extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte);
extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr); extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu); extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
extern struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data); extern struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data);
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong eaddr, int size, void *ptr, bool data); extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong eaddr, int size, void *ptr); extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec); extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec);
extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
bool upper, u32 val); bool upper, u32 val);
extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
extern int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern u32 kvmppc_trampoline_lowmem; extern u32 kvmppc_trampoline_lowmem;
extern u32 kvmppc_trampoline_enter; extern u32 kvmppc_trampoline_enter;
...@@ -126,6 +130,8 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1); ...@@ -126,6 +130,8 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
extern void kvmppc_load_up_fpu(void); extern void kvmppc_load_up_fpu(void);
extern void kvmppc_load_up_altivec(void); extern void kvmppc_load_up_altivec(void);
extern void kvmppc_load_up_vsx(void); extern void kvmppc_load_up_vsx(void);
extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu) static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{ {
...@@ -140,7 +146,108 @@ static inline ulong dsisr(void) ...@@ -140,7 +146,108 @@ static inline ulong dsisr(void)
} }
extern void kvm_return_point(void); extern void kvm_return_point(void);
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu);
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
if ( num < 14 ) {
to_svcpu(vcpu)->gpr[num] = val;
to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
} else
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
if ( num < 14 )
return to_svcpu(vcpu)->gpr[num];
else
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
to_svcpu(vcpu)->cr = val;
to_book3s(vcpu)->shadow_vcpu->cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
to_svcpu(vcpu)->xer = val;
to_book3s(vcpu)->shadow_vcpu->xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->xer;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->pc;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
ulong pc = kvmppc_get_pc(vcpu);
struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
/* Load the instruction manually if it failed to do so in the
* exit path */
if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
return svcpu->last_inst;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->fault_dar;
}
/* Magic register values loaded into r3 and r4 before the 'sc' assembly
* instruction for the OSI hypercalls */
#define OSI_SC_MAGIC_R3 0x113724FA
#define OSI_SC_MAGIC_R4 0x77810F9B
#define INS_DCBZ 0x7c0007ec #define INS_DCBZ 0x7c0007ec
/* Also add subarch specific defines */
#ifdef CONFIG_PPC_BOOK3S_32
#include <asm/kvm_book3s_32.h>
#else
#include <asm/kvm_book3s_64.h>
#endif
#endif /* __ASM_KVM_BOOK3S_H__ */ #endif /* __ASM_KVM_BOOK3S_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOK3S_32_H__
#define __ASM_KVM_BOOK3S_32_H__
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
{
return to_book3s(vcpu)->shadow_vcpu;
}
#define PTE_SIZE 12
#define VSID_ALL 0
#define SR_INVALID 0x00000001 /* VSID 1 should always be unused */
#define SR_KP 0x20000000
#define PTE_V 0x80000000
#define PTE_SEC 0x00000040
#define PTE_M 0x00000010
#define PTE_R 0x00000100
#define PTE_C 0x00000080
#define SID_SHIFT 28
#define ESID_MASK 0xf0000000
#define VSID_MASK 0x00fffffff0000000ULL
#endif /* __ASM_KVM_BOOK3S_32_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOK3S_64_H__
#define __ASM_KVM_BOOK3S_64_H__
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
{
return &get_paca()->shadow_vcpu;
}
#endif /* __ASM_KVM_BOOK3S_64_H__ */
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
#ifdef __ASSEMBLY__ #ifdef __ASSEMBLY__
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_HANDLER
#include <asm/kvm_asm.h> #include <asm/kvm_asm.h>
...@@ -55,7 +55,7 @@ kvmppc_resume_\intno: ...@@ -55,7 +55,7 @@ kvmppc_resume_\intno:
.macro DO_KVM intno .macro DO_KVM intno
.endm .endm
#endif /* CONFIG_KVM_BOOK3S_64_HANDLER */ #endif /* CONFIG_KVM_BOOK3S_HANDLER */
#else /*__ASSEMBLY__ */ #else /*__ASSEMBLY__ */
...@@ -63,12 +63,33 @@ struct kvmppc_book3s_shadow_vcpu { ...@@ -63,12 +63,33 @@ struct kvmppc_book3s_shadow_vcpu {
ulong gpr[14]; ulong gpr[14];
u32 cr; u32 cr;
u32 xer; u32 xer;
u32 fault_dsisr;
u32 last_inst;
ulong ctr;
ulong lr;
ulong pc;
ulong shadow_srr1;
ulong fault_dar;
ulong host_r1; ulong host_r1;
ulong host_r2; ulong host_r2;
ulong handler; ulong handler;
ulong scratch0; ulong scratch0;
ulong scratch1; ulong scratch1;
ulong vmhandler; ulong vmhandler;
u8 in_guest;
#ifdef CONFIG_PPC_BOOK3S_32
u32 sr[16]; /* Guest SRs */
#endif
#ifdef CONFIG_PPC_BOOK3S_64
u8 slb_max; /* highest used guest slb entry */
struct {
u64 esid;
u64 vsid;
} slb[64]; /* guest SLB */
#endif
}; };
#endif /*__ASSEMBLY__ */ #endif /*__ASSEMBLY__ */
......
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOKE_H__
#define __ASM_KVM_BOOKE_H__
#include <linux/types.h>
#include <linux/kvm_host.h>
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return vcpu->arch.xer;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
return vcpu->arch.last_inst;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
return vcpu->arch.pc;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault_dear;
}
#endif /* __ASM_KVM_BOOKE_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright Novell Inc. 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_FPU_H__
#define __ASM_KVM_FPU_H__
#include <linux/types.h>
extern void fps_fres(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_frsqrte(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_fsqrts(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_fadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fdivs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmuls(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fnmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fnmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fsel(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
#define FPD_ONE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1);
#define FPD_TWO_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1, u64 *src2);
#define FPD_THREE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1, u64 *src2, u64 *src3);
extern void fpd_fcmpu(u64 *fpscr, u32 *cr, u64 *src1, u64 *src2);
extern void fpd_fcmpo(u64 *fpscr, u32 *cr, u64 *src1, u64 *src2);
FPD_ONE_IN(fsqrts)
FPD_ONE_IN(frsqrtes)
FPD_ONE_IN(fres)
FPD_ONE_IN(frsp)
FPD_ONE_IN(fctiw)
FPD_ONE_IN(fctiwz)
FPD_ONE_IN(fsqrt)
FPD_ONE_IN(fre)
FPD_ONE_IN(frsqrte)
FPD_ONE_IN(fneg)
FPD_ONE_IN(fabs)
FPD_TWO_IN(fadds)
FPD_TWO_IN(fsubs)
FPD_TWO_IN(fdivs)
FPD_TWO_IN(fmuls)
FPD_TWO_IN(fcpsgn)
FPD_TWO_IN(fdiv)
FPD_TWO_IN(fadd)
FPD_TWO_IN(fmul)
FPD_TWO_IN(fsub)
FPD_THREE_IN(fmsubs)
FPD_THREE_IN(fmadds)
FPD_THREE_IN(fnmsubs)
FPD_THREE_IN(fnmadds)
FPD_THREE_IN(fsel)
FPD_THREE_IN(fmsub)
FPD_THREE_IN(fmadd)
FPD_THREE_IN(fnmsub)
FPD_THREE_IN(fnmadd)
#endif
...@@ -66,7 +66,7 @@ struct kvm_vcpu_stat { ...@@ -66,7 +66,7 @@ struct kvm_vcpu_stat {
u32 dec_exits; u32 dec_exits;
u32 ext_intr_exits; u32 ext_intr_exits;
u32 halt_wakeup; u32 halt_wakeup;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC_BOOK3S
u32 pf_storage; u32 pf_storage;
u32 pf_instruc; u32 pf_instruc;
u32 sp_storage; u32 sp_storage;
...@@ -124,12 +124,12 @@ struct kvm_arch { ...@@ -124,12 +124,12 @@ struct kvm_arch {
}; };
struct kvmppc_pte { struct kvmppc_pte {
u64 eaddr; ulong eaddr;
u64 vpage; u64 vpage;
u64 raddr; ulong raddr;
bool may_read; bool may_read : 1;
bool may_write; bool may_write : 1;
bool may_execute; bool may_execute : 1;
}; };
struct kvmppc_mmu { struct kvmppc_mmu {
...@@ -145,7 +145,7 @@ struct kvmppc_mmu { ...@@ -145,7 +145,7 @@ struct kvmppc_mmu {
int (*xlate)(struct kvm_vcpu *vcpu, gva_t eaddr, struct kvmppc_pte *pte, bool data); int (*xlate)(struct kvm_vcpu *vcpu, gva_t eaddr, struct kvmppc_pte *pte, bool data);
void (*reset_msr)(struct kvm_vcpu *vcpu); void (*reset_msr)(struct kvm_vcpu *vcpu);
void (*tlbie)(struct kvm_vcpu *vcpu, ulong addr, bool large); void (*tlbie)(struct kvm_vcpu *vcpu, ulong addr, bool large);
int (*esid_to_vsid)(struct kvm_vcpu *vcpu, u64 esid, u64 *vsid); int (*esid_to_vsid)(struct kvm_vcpu *vcpu, ulong esid, u64 *vsid);
u64 (*ea_to_vp)(struct kvm_vcpu *vcpu, gva_t eaddr, bool data); u64 (*ea_to_vp)(struct kvm_vcpu *vcpu, gva_t eaddr, bool data);
bool (*is_dcbz32)(struct kvm_vcpu *vcpu); bool (*is_dcbz32)(struct kvm_vcpu *vcpu);
}; };
...@@ -160,7 +160,7 @@ struct hpte_cache { ...@@ -160,7 +160,7 @@ struct hpte_cache {
struct kvm_vcpu_arch { struct kvm_vcpu_arch {
ulong host_stack; ulong host_stack;
u32 host_pid; u32 host_pid;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC_BOOK3S
ulong host_msr; ulong host_msr;
ulong host_r2; ulong host_r2;
void *host_retip; void *host_retip;
...@@ -175,7 +175,7 @@ struct kvm_vcpu_arch { ...@@ -175,7 +175,7 @@ struct kvm_vcpu_arch {
ulong gpr[32]; ulong gpr[32];
u64 fpr[32]; u64 fpr[32];
u32 fpscr; u64 fpscr;
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
vector128 vr[32]; vector128 vr[32];
...@@ -186,19 +186,23 @@ struct kvm_vcpu_arch { ...@@ -186,19 +186,23 @@ struct kvm_vcpu_arch {
u64 vsr[32]; u64 vsr[32];
#endif #endif
#ifdef CONFIG_PPC_BOOK3S
/* For Gekko paired singles */
u32 qpr[32];
#endif
#ifdef CONFIG_BOOKE
ulong pc; ulong pc;
ulong ctr; ulong ctr;
ulong lr; ulong lr;
#ifdef CONFIG_BOOKE
ulong xer; ulong xer;
u32 cr; u32 cr;
#endif #endif
ulong msr; ulong msr;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC_BOOK3S
ulong shadow_msr; ulong shadow_msr;
ulong shadow_srr1;
ulong hflags; ulong hflags;
ulong guest_owned_ext; ulong guest_owned_ext;
#endif #endif
...@@ -253,20 +257,22 @@ struct kvm_vcpu_arch { ...@@ -253,20 +257,22 @@ struct kvm_vcpu_arch {
struct dentry *debugfs_exit_timing; struct dentry *debugfs_exit_timing;
#endif #endif
#ifdef CONFIG_BOOKE
u32 last_inst; u32 last_inst;
#ifdef CONFIG_PPC64
ulong fault_dsisr;
#endif
ulong fault_dear; ulong fault_dear;
ulong fault_esr; ulong fault_esr;
ulong queued_dear; ulong queued_dear;
ulong queued_esr; ulong queued_esr;
#endif
gpa_t paddr_accessed; gpa_t paddr_accessed;
u8 io_gpr; /* GPR used as IO source/target */ u8 io_gpr; /* GPR used as IO source/target */
u8 mmio_is_bigendian; u8 mmio_is_bigendian;
u8 mmio_sign_extend;
u8 dcr_needed; u8 dcr_needed;
u8 dcr_is_write; u8 dcr_is_write;
u8 osi_needed;
u8 osi_enabled;
u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */ u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
...@@ -275,7 +281,7 @@ struct kvm_vcpu_arch { ...@@ -275,7 +281,7 @@ struct kvm_vcpu_arch {
u64 dec_jiffies; u64 dec_jiffies;
unsigned long pending_exceptions; unsigned long pending_exceptions;
#ifdef CONFIG_PPC64 #ifdef CONFIG_PPC_BOOK3S
struct hpte_cache hpte_cache[HPTEG_CACHE_NUM]; struct hpte_cache hpte_cache[HPTEG_CACHE_NUM];
int hpte_cache_offset; int hpte_cache_offset;
#endif #endif
......
...@@ -30,6 +30,8 @@ ...@@ -30,6 +30,8 @@
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#ifdef CONFIG_PPC_BOOK3S #ifdef CONFIG_PPC_BOOK3S
#include <asm/kvm_book3s.h> #include <asm/kvm_book3s.h>
#else
#include <asm/kvm_booke.h>
#endif #endif
enum emulation_result { enum emulation_result {
...@@ -37,6 +39,7 @@ enum emulation_result { ...@@ -37,6 +39,7 @@ enum emulation_result {
EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */ EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */
EMULATE_DO_DCR, /* kvm_run filled with DCR request */ EMULATE_DO_DCR, /* kvm_run filled with DCR request */
EMULATE_FAIL, /* can't emulate this instruction */ EMULATE_FAIL, /* can't emulate this instruction */
EMULATE_AGAIN, /* something went wrong. go again */
}; };
extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu); extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
...@@ -48,8 +51,11 @@ extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu); ...@@ -48,8 +51,11 @@ extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu, extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes, unsigned int rt, unsigned int bytes,
int is_bigendian); int is_bigendian);
extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_bigendian);
extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu, extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
u32 val, unsigned int bytes, int is_bigendian); u64 val, unsigned int bytes, int is_bigendian);
extern int kvmppc_emulate_instruction(struct kvm_run *run, extern int kvmppc_emulate_instruction(struct kvm_run *run,
struct kvm_vcpu *vcpu); struct kvm_vcpu *vcpu);
...@@ -63,6 +69,7 @@ extern void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr, ...@@ -63,6 +69,7 @@ extern void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr,
extern void kvmppc_mmu_priv_switch(struct kvm_vcpu *vcpu, int usermode); extern void kvmppc_mmu_priv_switch(struct kvm_vcpu *vcpu, int usermode);
extern void kvmppc_mmu_switch_pid(struct kvm_vcpu *vcpu, u32 pid); extern void kvmppc_mmu_switch_pid(struct kvm_vcpu *vcpu, u32 pid);
extern void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu); extern void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_init(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_dtlb_index(struct kvm_vcpu *vcpu, gva_t eaddr); extern int kvmppc_mmu_dtlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
extern int kvmppc_mmu_itlb_index(struct kvm_vcpu *vcpu, gva_t eaddr); extern int kvmppc_mmu_itlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index, extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
...@@ -88,6 +95,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu); ...@@ -88,6 +95,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu); extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu, extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq); struct kvm_interrupt *irq);
extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq);
extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu, extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int op, int *advance); unsigned int op, int *advance);
...@@ -99,81 +108,37 @@ extern void kvmppc_booke_exit(void); ...@@ -99,81 +108,37 @@ extern void kvmppc_booke_exit(void);
extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu); extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu);
#ifdef CONFIG_PPC_BOOK3S /*
* Cuts out inst bits with ordering according to spec.
/* We assume we're always acting on the current vcpu */ * That means the leftmost bit is zero. All given bits are included.
*/
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val) static inline u32 kvmppc_get_field(u64 inst, int msb, int lsb)
{
if ( num < 14 ) {
get_paca()->shadow_vcpu.gpr[num] = val;
to_book3s(vcpu)->shadow_vcpu.gpr[num] = val;
} else
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
if ( num < 14 )
return get_paca()->shadow_vcpu.gpr[num];
else
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.cr = val;
to_book3s(vcpu)->shadow_vcpu.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return get_paca()->shadow_vcpu.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
get_paca()->shadow_vcpu.xer = val;
to_book3s(vcpu)->shadow_vcpu.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{ {
return get_paca()->shadow_vcpu.xer; u32 r;
} u32 mask;
#else BUG_ON(msb > lsb);
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val) mask = (1 << (lsb - msb + 1)) - 1;
{ r = (inst >> (63 - lsb)) & mask;
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num) return r;
{
return vcpu->arch.gpr[num];
} }
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val) /*
* Replaces inst bits with ordering according to spec.
*/
static inline u32 kvmppc_set_field(u64 inst, int msb, int lsb, int value)
{ {
vcpu->arch.cr = val; u32 r;
} u32 mask;
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu) BUG_ON(msb > lsb);
{
return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val) mask = ((1 << (lsb - msb + 1)) - 1) << (63 - lsb);
{ r = (inst & ~mask) | ((value << (63 - lsb)) & mask);
vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu) return r;
{
return vcpu->arch.xer;
} }
#endif
#endif /* __POWERPC_KVM_PPC_H__ */ #endif /* __POWERPC_KVM_PPC_H__ */
...@@ -27,6 +27,8 @@ extern int __init_new_context(void); ...@@ -27,6 +27,8 @@ extern int __init_new_context(void);
extern void __destroy_context(int context_id); extern void __destroy_context(int context_id);
static inline void mmu_context_init(void) { } static inline void mmu_context_init(void) { }
#else #else
extern unsigned long __init_new_context(void);
extern void __destroy_context(unsigned long context_id);
extern void mmu_context_init(void); extern void mmu_context_init(void);
#endif #endif
......
...@@ -23,7 +23,7 @@ ...@@ -23,7 +23,7 @@
#include <asm/page.h> #include <asm/page.h>
#include <asm/exception-64e.h> #include <asm/exception-64e.h>
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/kvm_book3s_64_asm.h> #include <asm/kvm_book3s_asm.h>
#endif #endif
register struct paca_struct *local_paca asm("r13"); register struct paca_struct *local_paca asm("r13");
...@@ -137,15 +137,9 @@ struct paca_struct { ...@@ -137,15 +137,9 @@ struct paca_struct {
u64 startpurr; /* PURR/TB value snapshot */ u64 startpurr; /* PURR/TB value snapshot */
u64 startspurr; /* SPURR value snapshot */ u64 startspurr; /* SPURR value snapshot */
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_HANDLER
struct {
u64 esid;
u64 vsid;
} kvm_slb[64]; /* guest SLB */
/* We use this to store guest state in */ /* We use this to store guest state in */
struct kvmppc_book3s_shadow_vcpu shadow_vcpu; struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
u8 kvm_slb_max; /* highest used guest slb entry */
u8 kvm_in_guest; /* are we inside the guest? */
#endif #endif
}; };
......
...@@ -229,6 +229,9 @@ struct thread_struct { ...@@ -229,6 +229,9 @@ struct thread_struct {
unsigned long spefscr; /* SPE & eFP status */ unsigned long spefscr; /* SPE & eFP status */
int used_spe; /* set if process has used spe */ int used_spe; /* set if process has used spe */
#endif /* CONFIG_SPE */ #endif /* CONFIG_SPE */
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void* kvm_shadow_vcpu; /* KVM internal data */
#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
}; };
#define ARCH_MIN_TASKALIGN 16 #define ARCH_MIN_TASKALIGN 16
......
...@@ -293,10 +293,12 @@ ...@@ -293,10 +293,12 @@
#define HID1_ABE (1<<10) /* 7450 Address Broadcast Enable */ #define HID1_ABE (1<<10) /* 7450 Address Broadcast Enable */
#define HID1_PS (1<<16) /* 750FX PLL selection */ #define HID1_PS (1<<16) /* 750FX PLL selection */
#define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */ #define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */
#define SPRN_HID2_GEKKO 0x398 /* Gekko HID2 Register */
#define SPRN_IABR 0x3F2 /* Instruction Address Breakpoint Register */ #define SPRN_IABR 0x3F2 /* Instruction Address Breakpoint Register */
#define SPRN_IABR2 0x3FA /* 83xx */ #define SPRN_IABR2 0x3FA /* 83xx */
#define SPRN_IBCR 0x135 /* 83xx Insn Breakpoint Control Reg */ #define SPRN_IBCR 0x135 /* 83xx Insn Breakpoint Control Reg */
#define SPRN_HID4 0x3F4 /* 970 HID4 */ #define SPRN_HID4 0x3F4 /* 970 HID4 */
#define SPRN_HID4_GEKKO 0x3F3 /* Gekko HID4 */
#define SPRN_HID5 0x3F6 /* 970 HID5 */ #define SPRN_HID5 0x3F6 /* 970 HID5 */
#define SPRN_HID6 0x3F9 /* BE HID 6 */ #define SPRN_HID6 0x3F9 /* BE HID 6 */
#define HID6_LB (0x0F<<12) /* Concurrent Large Page Modes */ #define HID6_LB (0x0F<<12) /* Concurrent Large Page Modes */
...@@ -465,6 +467,14 @@ ...@@ -465,6 +467,14 @@
#define SPRN_VRSAVE 0x100 /* Vector Register Save Register */ #define SPRN_VRSAVE 0x100 /* Vector Register Save Register */
#define SPRN_XER 0x001 /* Fixed Point Exception Register */ #define SPRN_XER 0x001 /* Fixed Point Exception Register */
#define SPRN_MMCR0_GEKKO 0x3B8 /* Gekko Monitor Mode Control Register 0 */
#define SPRN_MMCR1_GEKKO 0x3BC /* Gekko Monitor Mode Control Register 1 */
#define SPRN_PMC1_GEKKO 0x3B9 /* Gekko Performance Monitor Control 1 */
#define SPRN_PMC2_GEKKO 0x3BA /* Gekko Performance Monitor Control 2 */
#define SPRN_PMC3_GEKKO 0x3BD /* Gekko Performance Monitor Control 3 */
#define SPRN_PMC4_GEKKO 0x3BE /* Gekko Performance Monitor Control 4 */
#define SPRN_WPAR_GEKKO 0x399 /* Gekko Write Pipe Address Register */
#define SPRN_SCOMC 0x114 /* SCOM Access Control */ #define SPRN_SCOMC 0x114 /* SCOM Access Control */
#define SPRN_SCOMD 0x115 /* SCOM Access DATA */ #define SPRN_SCOMD 0x115 /* SCOM Access DATA */
......
...@@ -50,6 +50,9 @@ ...@@ -50,6 +50,9 @@
#endif #endif
#ifdef CONFIG_KVM #ifdef CONFIG_KVM
#include <linux/kvm_host.h> #include <linux/kvm_host.h>
#ifndef CONFIG_BOOKE
#include <asm/kvm_book3s.h>
#endif
#endif #endif
#ifdef CONFIG_PPC32 #ifdef CONFIG_PPC32
...@@ -105,6 +108,9 @@ int main(void) ...@@ -105,6 +108,9 @@ int main(void)
DEFINE(THREAD_USED_SPE, offsetof(struct thread_struct, used_spe)); DEFINE(THREAD_USED_SPE, offsetof(struct thread_struct, used_spe));
#endif /* CONFIG_SPE */ #endif /* CONFIG_SPE */
#endif /* CONFIG_PPC64 */ #endif /* CONFIG_PPC64 */
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
DEFINE(THREAD_KVM_SVCPU, offsetof(struct thread_struct, kvm_shadow_vcpu));
#endif
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags)); DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags)); DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags));
...@@ -191,33 +197,9 @@ int main(void) ...@@ -191,33 +197,9 @@ int main(void)
DEFINE(PACA_DATA_OFFSET, offsetof(struct paca_struct, data_offset)); DEFINE(PACA_DATA_OFFSET, offsetof(struct paca_struct, data_offset));
DEFINE(PACA_TRAP_SAVE, offsetof(struct paca_struct, trap_save)); DEFINE(PACA_TRAP_SAVE, offsetof(struct paca_struct, trap_save));
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
DEFINE(PACA_KVM_IN_GUEST, offsetof(struct paca_struct, kvm_in_guest)); DEFINE(PACA_KVM_SVCPU, offsetof(struct paca_struct, shadow_vcpu));
DEFINE(PACA_KVM_SLB, offsetof(struct paca_struct, kvm_slb)); DEFINE(SVCPU_SLB, offsetof(struct kvmppc_book3s_shadow_vcpu, slb));
DEFINE(PACA_KVM_SLB_MAX, offsetof(struct paca_struct, kvm_slb_max)); DEFINE(SVCPU_SLB_MAX, offsetof(struct kvmppc_book3s_shadow_vcpu, slb_max));
DEFINE(PACA_KVM_CR, offsetof(struct paca_struct, shadow_vcpu.cr));
DEFINE(PACA_KVM_XER, offsetof(struct paca_struct, shadow_vcpu.xer));
DEFINE(PACA_KVM_R0, offsetof(struct paca_struct, shadow_vcpu.gpr[0]));
DEFINE(PACA_KVM_R1, offsetof(struct paca_struct, shadow_vcpu.gpr[1]));
DEFINE(PACA_KVM_R2, offsetof(struct paca_struct, shadow_vcpu.gpr[2]));
DEFINE(PACA_KVM_R3, offsetof(struct paca_struct, shadow_vcpu.gpr[3]));
DEFINE(PACA_KVM_R4, offsetof(struct paca_struct, shadow_vcpu.gpr[4]));
DEFINE(PACA_KVM_R5, offsetof(struct paca_struct, shadow_vcpu.gpr[5]));
DEFINE(PACA_KVM_R6, offsetof(struct paca_struct, shadow_vcpu.gpr[6]));
DEFINE(PACA_KVM_R7, offsetof(struct paca_struct, shadow_vcpu.gpr[7]));
DEFINE(PACA_KVM_R8, offsetof(struct paca_struct, shadow_vcpu.gpr[8]));
DEFINE(PACA_KVM_R9, offsetof(struct paca_struct, shadow_vcpu.gpr[9]));
DEFINE(PACA_KVM_R10, offsetof(struct paca_struct, shadow_vcpu.gpr[10]));
DEFINE(PACA_KVM_R11, offsetof(struct paca_struct, shadow_vcpu.gpr[11]));
DEFINE(PACA_KVM_R12, offsetof(struct paca_struct, shadow_vcpu.gpr[12]));
DEFINE(PACA_KVM_R13, offsetof(struct paca_struct, shadow_vcpu.gpr[13]));
DEFINE(PACA_KVM_HOST_R1, offsetof(struct paca_struct, shadow_vcpu.host_r1));
DEFINE(PACA_KVM_HOST_R2, offsetof(struct paca_struct, shadow_vcpu.host_r2));
DEFINE(PACA_KVM_VMHANDLER, offsetof(struct paca_struct,
shadow_vcpu.vmhandler));
DEFINE(PACA_KVM_SCRATCH0, offsetof(struct paca_struct,
shadow_vcpu.scratch0));
DEFINE(PACA_KVM_SCRATCH1, offsetof(struct paca_struct,
shadow_vcpu.scratch1));
#endif #endif
#endif /* CONFIG_PPC64 */ #endif /* CONFIG_PPC64 */
...@@ -228,8 +210,8 @@ int main(void) ...@@ -228,8 +210,8 @@ int main(void)
/* Interrupt register frame */ /* Interrupt register frame */
DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD); DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD);
DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE); DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
#ifdef CONFIG_PPC64
DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs)); DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
#ifdef CONFIG_PPC64
/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */ /* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16); DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16); DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
...@@ -412,9 +394,6 @@ int main(void) ...@@ -412,9 +394,6 @@ int main(void)
DEFINE(VCPU_HOST_STACK, offsetof(struct kvm_vcpu, arch.host_stack)); DEFINE(VCPU_HOST_STACK, offsetof(struct kvm_vcpu, arch.host_stack));
DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid)); DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid));
DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr)); DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr));
DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr)); DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
DEFINE(VCPU_SPRG4, offsetof(struct kvm_vcpu, arch.sprg4)); DEFINE(VCPU_SPRG4, offsetof(struct kvm_vcpu, arch.sprg4));
DEFINE(VCPU_SPRG5, offsetof(struct kvm_vcpu, arch.sprg5)); DEFINE(VCPU_SPRG5, offsetof(struct kvm_vcpu, arch.sprg5));
...@@ -422,27 +401,68 @@ int main(void) ...@@ -422,27 +401,68 @@ int main(void)
DEFINE(VCPU_SPRG7, offsetof(struct kvm_vcpu, arch.sprg7)); DEFINE(VCPU_SPRG7, offsetof(struct kvm_vcpu, arch.sprg7));
DEFINE(VCPU_SHADOW_PID, offsetof(struct kvm_vcpu, arch.shadow_pid)); DEFINE(VCPU_SHADOW_PID, offsetof(struct kvm_vcpu, arch.shadow_pid));
DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst)); /* book3s */
DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear)); #ifdef CONFIG_PPC_BOOK3S
DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
/* book3s_64 */
#ifdef CONFIG_PPC64
DEFINE(VCPU_FAULT_DSISR, offsetof(struct kvm_vcpu, arch.fault_dsisr));
DEFINE(VCPU_HOST_RETIP, offsetof(struct kvm_vcpu, arch.host_retip)); DEFINE(VCPU_HOST_RETIP, offsetof(struct kvm_vcpu, arch.host_retip));
DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr)); DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr)); DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem)); DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter)); DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler)); DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall)); DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall));
DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags)); DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags));
DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(SVCPU_CR, offsetof(struct kvmppc_book3s_shadow_vcpu, cr));
DEFINE(SVCPU_XER, offsetof(struct kvmppc_book3s_shadow_vcpu, xer));
DEFINE(SVCPU_CTR, offsetof(struct kvmppc_book3s_shadow_vcpu, ctr));
DEFINE(SVCPU_LR, offsetof(struct kvmppc_book3s_shadow_vcpu, lr));
DEFINE(SVCPU_PC, offsetof(struct kvmppc_book3s_shadow_vcpu, pc));
DEFINE(SVCPU_R0, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[0]));
DEFINE(SVCPU_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[1]));
DEFINE(SVCPU_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[2]));
DEFINE(SVCPU_R3, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[3]));
DEFINE(SVCPU_R4, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[4]));
DEFINE(SVCPU_R5, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[5]));
DEFINE(SVCPU_R6, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[6]));
DEFINE(SVCPU_R7, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[7]));
DEFINE(SVCPU_R8, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[8]));
DEFINE(SVCPU_R9, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[9]));
DEFINE(SVCPU_R10, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[10]));
DEFINE(SVCPU_R11, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[11]));
DEFINE(SVCPU_R12, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[12]));
DEFINE(SVCPU_R13, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[13]));
DEFINE(SVCPU_HOST_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r1));
DEFINE(SVCPU_HOST_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r2));
DEFINE(SVCPU_VMHANDLER, offsetof(struct kvmppc_book3s_shadow_vcpu,
vmhandler));
DEFINE(SVCPU_SCRATCH0, offsetof(struct kvmppc_book3s_shadow_vcpu,
scratch0));
DEFINE(SVCPU_SCRATCH1, offsetof(struct kvmppc_book3s_shadow_vcpu,
scratch1));
DEFINE(SVCPU_IN_GUEST, offsetof(struct kvmppc_book3s_shadow_vcpu,
in_guest));
DEFINE(SVCPU_FAULT_DSISR, offsetof(struct kvmppc_book3s_shadow_vcpu,
fault_dsisr));
DEFINE(SVCPU_FAULT_DAR, offsetof(struct kvmppc_book3s_shadow_vcpu,
fault_dar));
DEFINE(SVCPU_LAST_INST, offsetof(struct kvmppc_book3s_shadow_vcpu,
last_inst));
DEFINE(SVCPU_SHADOW_SRR1, offsetof(struct kvmppc_book3s_shadow_vcpu,
shadow_srr1));
#ifdef CONFIG_PPC_BOOK3S_32
DEFINE(SVCPU_SR, offsetof(struct kvmppc_book3s_shadow_vcpu, sr));
#endif
#else
DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
#endif /* CONFIG_PPC64 */ DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
#endif /* CONFIG_PPC_BOOK3S */
#endif
#ifdef CONFIG_44x
DEFINE(PGD_T_LOG2, PGD_T_LOG2);
......
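The asm-offsets hunk above is what lets the Book3S assembly refer to struct kvm_vcpu fields by name. A minimal sketch of the mechanism, assuming the usual kbuild DEFINE() pattern; the struct here is a hypothetical stand-in, not the real kvm_vcpu:

    #include <stddef.h>

    /* Hypothetical stand-in for a couple of the arch fields used above. */
    struct example_vcpu_arch {
            unsigned long host_msr;
            unsigned long shadow_msr;
    };

    /* Roughly what the kernel's DEFINE() does: emit a marker that the build
     * system turns into "#define EX_VCPU_SHADOW_MSR <offset>" in
     * asm-offsets.h, so assembly can say "ld rN, EX_VCPU_SHADOW_MSR(rVCPU)". */
    #define DEFINE(sym, val) \
            asm volatile("\n->" #sym " %0 " #val : : "i" (val))

    void example_asm_offsets(void)
    {
            DEFINE(EX_VCPU_SHADOW_MSR,
                   offsetof(struct example_vcpu_arch, shadow_msr));
    }

These generated constants are why the later assembly hunks can switch from PACA_KVM_* to SVCPU_* offsets without hard-coding any numbers.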
...@@ -33,6 +33,7 @@ ...@@ -33,6 +33,7 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/bug.h> #include <asm/bug.h>
#include <asm/kvm_book3s_asm.h>
/* 601 only have IBAT; cr0.eq is set on 601 when using this macro */ /* 601 only have IBAT; cr0.eq is set on 601 when using this macro */
#define LOAD_BAT(n, reg, RA, RB) \ #define LOAD_BAT(n, reg, RA, RB) \
...@@ -303,6 +304,7 @@ __secondary_hold_acknowledge: ...@@ -303,6 +304,7 @@ __secondary_hold_acknowledge:
*/ */
#define EXCEPTION(n, label, hdlr, xfer) \ #define EXCEPTION(n, label, hdlr, xfer) \
. = n; \ . = n; \
DO_KVM n; \
label: \ label: \
EXCEPTION_PROLOG; \ EXCEPTION_PROLOG; \
addi r3,r1,STACK_FRAME_OVERHEAD; \ addi r3,r1,STACK_FRAME_OVERHEAD; \
...@@ -358,6 +360,7 @@ i##n: \ ...@@ -358,6 +360,7 @@ i##n: \
* -- paulus. * -- paulus.
*/ */
. = 0x200 . = 0x200
DO_KVM 0x200
mtspr SPRN_SPRG_SCRATCH0,r10 mtspr SPRN_SPRG_SCRATCH0,r10
mtspr SPRN_SPRG_SCRATCH1,r11 mtspr SPRN_SPRG_SCRATCH1,r11
mfcr r10 mfcr r10
...@@ -381,6 +384,7 @@ i##n: \ ...@@ -381,6 +384,7 @@ i##n: \
/* Data access exception. */ /* Data access exception. */
. = 0x300 . = 0x300
DO_KVM 0x300
DataAccess: DataAccess:
EXCEPTION_PROLOG EXCEPTION_PROLOG
mfspr r10,SPRN_DSISR mfspr r10,SPRN_DSISR
...@@ -397,6 +401,7 @@ DataAccess: ...@@ -397,6 +401,7 @@ DataAccess:
/* Instruction access exception. */ /* Instruction access exception. */
. = 0x400 . = 0x400
DO_KVM 0x400
InstructionAccess: InstructionAccess:
EXCEPTION_PROLOG EXCEPTION_PROLOG
andis. r0,r9,0x4000 /* no pte found? */ andis. r0,r9,0x4000 /* no pte found? */
...@@ -413,6 +418,7 @@ InstructionAccess: ...@@ -413,6 +418,7 @@ InstructionAccess:
/* Alignment exception */ /* Alignment exception */
. = 0x600 . = 0x600
DO_KVM 0x600
Alignment: Alignment:
EXCEPTION_PROLOG EXCEPTION_PROLOG
mfspr r4,SPRN_DAR mfspr r4,SPRN_DAR
...@@ -427,6 +433,7 @@ Alignment: ...@@ -427,6 +433,7 @@ Alignment:
/* Floating-point unavailable */ /* Floating-point unavailable */
. = 0x800 . = 0x800
DO_KVM 0x800
FPUnavailable: FPUnavailable:
BEGIN_FTR_SECTION BEGIN_FTR_SECTION
/* /*
...@@ -450,6 +457,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE) ...@@ -450,6 +457,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
/* System call */ /* System call */
. = 0xc00 . = 0xc00
DO_KVM 0xc00
SystemCall: SystemCall:
EXCEPTION_PROLOG EXCEPTION_PROLOG
EXC_XFER_EE_LITE(0xc00, DoSyscall) EXC_XFER_EE_LITE(0xc00, DoSyscall)
...@@ -467,9 +475,11 @@ SystemCall: ...@@ -467,9 +475,11 @@ SystemCall:
* by executing an altivec instruction. * by executing an altivec instruction.
*/ */
. = 0xf00 . = 0xf00
DO_KVM 0xf00
b PerformanceMonitor b PerformanceMonitor
. = 0xf20 . = 0xf20
DO_KVM 0xf20
b AltiVecUnavailable b AltiVecUnavailable
/* /*
...@@ -882,6 +892,10 @@ __secondary_start: ...@@ -882,6 +892,10 @@ __secondary_start:
RFI RFI
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
#ifdef CONFIG_KVM_BOOK3S_HANDLER
#include "../kvm/book3s_rmhandlers.S"
#endif
/* /*
* Those generic dummy functions are kept for CPUs not * Those generic dummy functions are kept for CPUs not
* included in CONFIG_6xx * included in CONFIG_6xx
......
...@@ -37,7 +37,7 @@ ...@@ -37,7 +37,7 @@
#include <asm/firmware.h> #include <asm/firmware.h>
#include <asm/page_64.h> #include <asm/page_64.h>
#include <asm/irqflags.h> #include <asm/irqflags.h>
#include <asm/kvm_book3s_64_asm.h> #include <asm/kvm_book3s_asm.h>
/* The physical memory is layed out such that the secondary processor /* The physical memory is layed out such that the secondary processor
* spin code sits at 0x0000...0x00ff. On server, the vectors follow * spin code sits at 0x0000...0x00ff. On server, the vectors follow
...@@ -169,7 +169,7 @@ exception_marker: ...@@ -169,7 +169,7 @@ exception_marker:
/* KVM trampoline code needs to be close to the interrupt handlers */ /* KVM trampoline code needs to be close to the interrupt handlers */
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include "../kvm/book3s_64_rmhandlers.S" #include "../kvm/book3s_rmhandlers.S"
#endif #endif
_GLOBAL(generic_secondary_thread_init) _GLOBAL(generic_secondary_thread_init)
......
...@@ -101,6 +101,10 @@ EXPORT_SYMBOL(pci_dram_offset); ...@@ -101,6 +101,10 @@ EXPORT_SYMBOL(pci_dram_offset);
EXPORT_SYMBOL(start_thread); EXPORT_SYMBOL(start_thread);
EXPORT_SYMBOL(kernel_thread); EXPORT_SYMBOL(kernel_thread);
#ifndef CONFIG_BOOKE
EXPORT_SYMBOL_GPL(cvt_df);
EXPORT_SYMBOL_GPL(cvt_fd);
#endif
EXPORT_SYMBOL(giveup_fpu); EXPORT_SYMBOL(giveup_fpu);
#ifdef CONFIG_ALTIVEC #ifdef CONFIG_ALTIVEC
EXPORT_SYMBOL(giveup_altivec); EXPORT_SYMBOL(giveup_altivec);
......
...@@ -147,7 +147,7 @@ static int __init kvmppc_44x_init(void) ...@@ -147,7 +147,7 @@ static int __init kvmppc_44x_init(void)
if (r) if (r)
return r; return r;
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), THIS_MODULE); return kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), 0, THIS_MODULE);
} }
static void __exit kvmppc_44x_exit(void) static void __exit kvmppc_44x_exit(void)
......
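Both PPC module init paths in this merge (44x above, e500 further down) now pass a third argument to kvm_init(). A sketch of the updated call follows; treating the new argument as the requested vcpu structure alignment (0 = allocator default) is an assumption based on the added parameter, not something stated in the hunk:

    #include <linux/kvm_host.h>
    #include <linux/module.h>
    #include <asm/kvm_44x.h>        /* for struct kvmppc_vcpu_44x */

    static int __init example_kvmppc_44x_init(void)
    {
            /* Third argument is new in this series; 0 keeps the default
             * vcpu alignment (assumption, see lead-in). */
            return kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), 0, THIS_MODULE);
    }
    module_init(example_kvmppc_44x_init);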
...@@ -22,12 +22,34 @@ config KVM ...@@ -22,12 +22,34 @@ config KVM
select ANON_INODES
select KVM_MMIO
config KVM_BOOK3S_HANDLER
bool
config KVM_BOOK3S_32_HANDLER
bool
select KVM_BOOK3S_HANDLER
config KVM_BOOK3S_64_HANDLER
bool
select KVM_BOOK3S_HANDLER
config KVM_BOOK3S_32
tristate "KVM support for PowerPC book3s_32 processors"
depends on EXPERIMENTAL && PPC_BOOK3S_32 && !SMP && !PTE_64BIT
select KVM
select KVM_BOOK3S_32_HANDLER
---help---
Support running unmodified book3s_32 guest kernels
in virtual machines on book3s_32 host processors.
This module provides access to the hardware capabilities through
a character device node named /dev/kvm.
If unsure, say N.
config KVM_BOOK3S_64
tristate "KVM support for PowerPC book3s_64 processors"
depends on EXPERIMENTAL && PPC64 depends on EXPERIMENTAL && PPC_BOOK3S_64
select KVM
select KVM_BOOK3S_64_HANDLER
---help---
......
...@@ -14,7 +14,7 @@ CFLAGS_emulate.o := -I. ...@@ -14,7 +14,7 @@ CFLAGS_emulate.o := -I.
common-objs-y += powerpc.o emulate.o common-objs-y += powerpc.o emulate.o
obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_64_exports.o obj-$(CONFIG_KVM_BOOK3S_HANDLER) += book3s_exports.o
AFLAGS_booke_interrupts.o := -I$(obj) AFLAGS_booke_interrupts.o := -I$(obj)
...@@ -40,17 +40,31 @@ kvm-objs-$(CONFIG_KVM_E500) := $(kvm-e500-objs) ...@@ -40,17 +40,31 @@ kvm-objs-$(CONFIG_KVM_E500) := $(kvm-e500-objs)
kvm-book3s_64-objs := \ kvm-book3s_64-objs := \
$(common-objs-y) \ $(common-objs-y) \
fpu.o \
book3s_paired_singles.o \
book3s.o \ book3s.o \
book3s_64_emulate.o \ book3s_emulate.o \
book3s_64_interrupts.o \ book3s_interrupts.o \
book3s_64_mmu_host.o \ book3s_64_mmu_host.o \
book3s_64_mmu.o \ book3s_64_mmu.o \
book3s_32_mmu.o book3s_32_mmu.o
kvm-objs-$(CONFIG_KVM_BOOK3S_64) := $(kvm-book3s_64-objs) kvm-objs-$(CONFIG_KVM_BOOK3S_64) := $(kvm-book3s_64-objs)
kvm-book3s_32-objs := \
$(common-objs-y) \
fpu.o \
book3s_paired_singles.o \
book3s.o \
book3s_emulate.o \
book3s_interrupts.o \
book3s_32_mmu_host.o \
book3s_32_mmu.o
kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs)
kvm-objs := $(kvm-objs-m) $(kvm-objs-y) kvm-objs := $(kvm-objs-m) $(kvm-objs-y)
obj-$(CONFIG_KVM_440) += kvm.o obj-$(CONFIG_KVM_440) += kvm.o
obj-$(CONFIG_KVM_E500) += kvm.o obj-$(CONFIG_KVM_E500) += kvm.o
obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o
obj-$(CONFIG_KVM_BOOK3S_32) += kvm.o
...@@ -37,7 +37,7 @@
#define dprintk(X...) do { } while(0)
#endif
#ifdef DEBUG_PTE #ifdef DEBUG_MMU_PTE
#define dprintk_pte(X...) printk(KERN_INFO X)
#else
#define dprintk_pte(X...) do { } while(0)
...@@ -45,6 +45,9 @@
#define PTEG_FLAG_ACCESSED 0x00000100
#define PTEG_FLAG_DIRTY 0x00000080
#ifndef SID_SHIFT
#define SID_SHIFT 28
#endif
static inline bool check_debug_ip(struct kvm_vcpu *vcpu) static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
{ {
...@@ -57,6 +60,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu) ...@@ -57,6 +60,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr, static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_pte *pte, bool data); struct kvmppc_pte *pte, bool data);
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid);
static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr) static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
{ {
...@@ -66,13 +71,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e ...@@ -66,13 +71,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e
static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr, static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
bool data) bool data)
{ {
struct kvmppc_sr *sre = find_sr(to_book3s(vcpu), eaddr); u64 vsid;
struct kvmppc_pte pte; struct kvmppc_pte pte;
if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data)) if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data))
return pte.vpage; return pte.vpage;
return (((u64)eaddr >> 12) & 0xffff) | (((u64)sre->vsid) << 16); kvmppc_mmu_book3s_32_esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
return (((u64)eaddr >> 12) & 0xffff) | (vsid << 16);
} }
static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu) static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
...@@ -142,8 +148,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr, ...@@ -142,8 +148,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
bat->bepi_mask); bat->bepi_mask);
} }
if ((eaddr & bat->bepi_mask) == bat->bepi) { if ((eaddr & bat->bepi_mask) == bat->bepi) {
u64 vsid;
kvmppc_mmu_book3s_32_esid_to_vsid(vcpu,
eaddr >> SID_SHIFT, &vsid);
vsid <<= 16;
pte->vpage = (((u64)eaddr >> 12) & 0xffff) | vsid;
pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask); pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask);
pte->vpage = (eaddr >> 12) | VSID_BAT;
pte->may_read = bat->pp; pte->may_read = bat->pp;
pte->may_write = bat->pp > 1; pte->may_write = bat->pp > 1;
pte->may_execute = true; pte->may_execute = true;
...@@ -172,7 +183,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr, ...@@ -172,7 +183,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_sr *sre; struct kvmppc_sr *sre;
hva_t ptegp; hva_t ptegp;
u32 pteg[16]; u32 pteg[16];
u64 ptem = 0; u32 ptem = 0;
int i; int i;
int found = 0; int found = 0;
...@@ -302,6 +313,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum, ...@@ -302,6 +313,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
/* And then put in the new SR */ /* And then put in the new SR */
sre->raw = value; sre->raw = value;
sre->vsid = (value & 0x0fffffff); sre->vsid = (value & 0x0fffffff);
sre->valid = (value & 0x80000000) ? false : true;
sre->Ks = (value & 0x40000000) ? true : false; sre->Ks = (value & 0x40000000) ? true : false;
sre->Kp = (value & 0x20000000) ? true : false; sre->Kp = (value & 0x20000000) ? true : false;
sre->nx = (value & 0x10000000) ? true : false; sre->nx = (value & 0x10000000) ? true : false;
...@@ -312,36 +324,48 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum, ...@@ -312,36 +324,48 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large) static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
{ {
kvmppc_mmu_pte_flush(vcpu, ea, ~0xFFFULL); kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
} }
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid, static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid) u64 *vsid)
{ {
ulong ea = esid << SID_SHIFT;
struct kvmppc_sr *sr;
u64 gvsid = esid;
if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
sr = find_sr(to_book3s(vcpu), ea);
if (sr->valid)
gvsid = sr->vsid;
}
/* In case we only have one of MSR_IR or MSR_DR set, let's put
   that in the real-mode context (and hope RM doesn't access
   high memory) */
switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) { switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
case 0: case 0:
*vsid = (VSID_REAL >> 16) | esid; *vsid = VSID_REAL | esid;
break; break;
case MSR_IR: case MSR_IR:
*vsid = (VSID_REAL_IR >> 16) | esid; *vsid = VSID_REAL_IR | gvsid;
break; break;
case MSR_DR: case MSR_DR:
*vsid = (VSID_REAL_DR >> 16) | esid; *vsid = VSID_REAL_DR | gvsid;
break; break;
case MSR_DR|MSR_IR: case MSR_DR|MSR_IR:
{ if (!sr->valid)
ulong ea; return -1;
ea = esid << SID_SHIFT;
*vsid = find_sr(to_book3s(vcpu), ea)->vsid; *vsid = sr->vsid;
break; break;
}
default: default:
BUG(); BUG();
} }
if (vcpu->arch.msr & MSR_PR)
*vsid |= VSID_PR;
return 0; return 0;
} }
......
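The 32-bit MMU changes above replace the direct segment-register lookup with an esid_to_vsid() hook keyed on SID_SHIFT. A tiny worked example of the address arithmetic involved (plain user-space C; the values are chosen only for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define SID_SHIFT 28    /* one PowerPC segment covers 256 MB */

    int main(void)
    {
            uint32_t ea = 0xc0003456;           /* example effective address */
            uint32_t esid = ea >> SID_SHIFT;    /* effective segment id: 0xc */
            uint64_t vsid = 0x123;              /* as returned by esid_to_vsid() */

            /* Same construction as in kvmppc_mmu_book3s_32_ea_to_vp() above. */
            uint64_t vpage = (((uint64_t)ea >> 12) & 0xffff) | (vsid << 16);

            printf("esid=0x%x vpage=0x%llx\n", esid, (unsigned long long)vpage);
            return 0;
    }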
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2009
*
* Authors: Alexander Graf <agraf@suse.de>
*/
/******************************************************************************
* *
* Entry code *
* *
*****************************************************************************/
.macro LOAD_GUEST_SEGMENTS
/* Required state:
*
* MSR = ~IR|DR
* R1 = host R1
* R2 = host R2
* R3 = shadow vcpu
* all other volatile GPRS = free
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*/
#define XCHG_SR(n) lwz r9, (SVCPU_SR+(n*4))(r3); \
mtsr n, r9
XCHG_SR(0)
XCHG_SR(1)
XCHG_SR(2)
XCHG_SR(3)
XCHG_SR(4)
XCHG_SR(5)
XCHG_SR(6)
XCHG_SR(7)
XCHG_SR(8)
XCHG_SR(9)
XCHG_SR(10)
XCHG_SR(11)
XCHG_SR(12)
XCHG_SR(13)
XCHG_SR(14)
XCHG_SR(15)
/* Clear BATs. */
#define KVM_KILL_BAT(n, reg) \
mtspr SPRN_IBAT##n##U,reg; \
mtspr SPRN_IBAT##n##L,reg; \
mtspr SPRN_DBAT##n##U,reg; \
mtspr SPRN_DBAT##n##L,reg; \
li r9, 0
KVM_KILL_BAT(0, r9)
KVM_KILL_BAT(1, r9)
KVM_KILL_BAT(2, r9)
KVM_KILL_BAT(3, r9)
.endm
/******************************************************************************
* *
* Exit code *
* *
*****************************************************************************/
.macro LOAD_HOST_SEGMENTS
/* Register usage at this point:
*
* R1 = host R1
* R2 = host R2
* R12 = exit handler id
* R13 = shadow vcpu - SHADOW_VCPU_OFF
* SVCPU.* = guest *
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*
*/
/* Restore BATs */
/* We only overwrite the upper part, so we only restore
   the upper part. */
#define KVM_LOAD_BAT(n, reg, RA, RB) \
lwz RA,(n*16)+0(reg); \
lwz RB,(n*16)+4(reg); \
mtspr SPRN_IBAT##n##U,RA; \
mtspr SPRN_IBAT##n##L,RB; \
lwz RA,(n*16)+8(reg); \
lwz RB,(n*16)+12(reg); \
mtspr SPRN_DBAT##n##U,RA; \
mtspr SPRN_DBAT##n##L,RB; \
lis r9, BATS@ha
addi r9, r9, BATS@l
tophys(r9, r9)
KVM_LOAD_BAT(0, r9, r10, r11)
KVM_LOAD_BAT(1, r9, r10, r11)
KVM_LOAD_BAT(2, r9, r10, r11)
KVM_LOAD_BAT(3, r9, r10, r11)
/* Restore Segment Registers */
/* 0xc - 0xf */
li r0, 4
mtctr r0
LOAD_REG_IMMEDIATE(r3, 0x20000000 | (0x111 * 0xc))
lis r4, 0xc000
3: mtsrin r3, r4
addi r3, r3, 0x111 /* increment VSID */
addis r4, r4, 0x1000 /* address of next segment */
bdnz 3b
/* 0x0 - 0xb */
/* 'current->mm' needs to be in r4 */
tophys(r4, r2)
lwz r4, MM(r4)
tophys(r4, r4)
/* This only clobbers r0, r3, r4 and r5 */
bl switch_mmu_context
.endm
...@@ -232,7 +232,7 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr, ...@@ -232,7 +232,7 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
} }
dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx " dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
"-> 0x%llx\n", "-> 0x%lx\n",
eaddr, avpn, gpte->vpage, gpte->raddr); eaddr, avpn, gpte->vpage, gpte->raddr);
found = true; found = true;
break; break;
...@@ -383,7 +383,7 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu) ...@@ -383,7 +383,7 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
if (vcpu->arch.msr & MSR_IR) { if (vcpu->arch.msr & MSR_IR) {
kvmppc_mmu_flush_segments(vcpu); kvmppc_mmu_flush_segments(vcpu);
kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc); kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
} }
} }
...@@ -439,37 +439,43 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va, ...@@ -439,37 +439,43 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask); kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
} }
static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid, static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid) u64 *vsid)
{ {
ulong ea = esid << SID_SHIFT;
struct kvmppc_slb *slb;
u64 gvsid = esid;
if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
if (slb)
gvsid = slb->vsid;
}
switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) { switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
case 0: case 0:
*vsid = (VSID_REAL >> 16) | esid; *vsid = VSID_REAL | esid;
break; break;
case MSR_IR: case MSR_IR:
*vsid = (VSID_REAL_IR >> 16) | esid; *vsid = VSID_REAL_IR | gvsid;
break; break;
case MSR_DR: case MSR_DR:
*vsid = (VSID_REAL_DR >> 16) | esid; *vsid = VSID_REAL_DR | gvsid;
break; break;
case MSR_DR|MSR_IR: case MSR_DR|MSR_IR:
{ if (!slb)
ulong ea;
struct kvmppc_slb *slb;
ea = esid << SID_SHIFT;
slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
if (slb)
*vsid = slb->vsid;
else
return -ENOENT; return -ENOENT;
*vsid = gvsid;
break; break;
}
default: default:
BUG(); BUG();
break; break;
} }
if (vcpu->arch.msr & MSR_PR)
*vsid |= VSID_PR;
return 0; return 0;
} }
......
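Both the 32-bit and 64-bit hunks above converge on the same esid_to_vsid() shape: look up the guest VSID first, tag it with a real-mode marker when translation is partially off, and OR in VSID_PR for problem state. A condensed sketch of that shared pattern (the lookup helper is hypothetical, the flag names come from the diff, and the not-found error path of the real code is omitted):

    #include <linux/kvm_host.h>

    /* Hypothetical stand-in for the SR/SLB lookup both versions perform. */
    static u64 lookup_guest_vsid(struct kvm_vcpu *vcpu, unsigned long ea)
    {
            return ea >> SID_SHIFT;             /* placeholder */
    }

    static int example_esid_to_vsid(struct kvm_vcpu *vcpu, unsigned long esid,
                                    u64 *vsid)
    {
            u64 gvsid = esid;                   /* fall back to a 1:1 mapping */

            if (vcpu->arch.msr & (MSR_DR|MSR_IR))
                    gvsid = lookup_guest_vsid(vcpu, esid << SID_SHIFT);

            switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
            case 0:
                    *vsid = VSID_REAL | esid;       /* fully real mode */
                    break;
            case MSR_IR:
                    *vsid = VSID_REAL_IR | gvsid;   /* only instructions translated */
                    break;
            case MSR_DR:
                    *vsid = VSID_REAL_DR | gvsid;   /* only data translated */
                    break;
            case MSR_DR|MSR_IR:
                    *vsid = gvsid;                  /* fully translated */
                    break;
            }

            if (vcpu->arch.msr & MSR_PR)
                    *vsid |= VSID_PR;       /* keep user-mode mappings distinct */

            return 0;
    }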
...@@ -48,21 +48,25 @@ ...@@ -48,21 +48,25 @@
static void invalidate_pte(struct hpte_cache *pte) static void invalidate_pte(struct hpte_cache *pte)
{ {
dprintk_mmu("KVM: Flushing SPT %d: 0x%llx (0x%llx) -> 0x%llx\n", dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
i, pte->pte.eaddr, pte->pte.vpage, pte->host_va); pte->pte.eaddr, pte->pte.vpage, pte->host_va);
ppc_md.hpte_invalidate(pte->slot, pte->host_va, ppc_md.hpte_invalidate(pte->slot, pte->host_va,
MMU_PAGE_4K, MMU_SEGSIZE_256M, MMU_PAGE_4K, MMU_SEGSIZE_256M,
false); false);
pte->host_va = 0; pte->host_va = 0;
kvm_release_pfn_dirty(pte->pfn);
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
} }
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 guest_ea, u64 ea_mask) void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{ {
int i; int i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%llx & 0x%llx\n", dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, guest_ea, ea_mask); vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM); BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
...@@ -106,12 +110,12 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask) ...@@ -106,12 +110,12 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
} }
} }
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end) void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{ {
int i; int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n", dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, guest_pa, pa_mask); vcpu->arch.hpte_cache_offset, pa_start, pa_end);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM); BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) { for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
...@@ -182,7 +186,7 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid) ...@@ -182,7 +186,7 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
sid_map_mask = kvmppc_sid_hash(vcpu, gvsid); sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
map = &to_book3s(vcpu)->sid_map[sid_map_mask]; map = &to_book3s(vcpu)->sid_map[sid_map_mask];
if (map->guest_vsid == gvsid) { if (map->guest_vsid == gvsid) {
dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n", dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
gvsid, map->host_vsid); gvsid, map->host_vsid);
return map; return map;
} }
...@@ -194,7 +198,8 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid) ...@@ -194,7 +198,8 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
return map; return map;
} }
dprintk_slb("SLB: Searching 0x%llx -> not found\n", gvsid); dprintk_slb("SLB: Searching %d/%d: 0x%llx -> not found\n",
sid_map_mask, SID_MAP_MASK - sid_map_mask, gvsid);
return NULL; return NULL;
} }
...@@ -212,7 +217,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -212,7 +217,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
/* Get host physical address for gpa */ /* Get host physical address for gpa */
hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT); hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
if (kvm_is_error_hva(hpaddr)) { if (kvm_is_error_hva(hpaddr)) {
printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n", orig_pte->eaddr); printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n", orig_pte->eaddr);
return -EINVAL; return -EINVAL;
} }
hpaddr <<= PAGE_SHIFT; hpaddr <<= PAGE_SHIFT;
...@@ -227,10 +232,16 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -227,10 +232,16 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid); vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
map = find_sid_vsid(vcpu, vsid); map = find_sid_vsid(vcpu, vsid);
if (!map) { if (!map) {
kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr); ret = kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr);
WARN_ON(ret < 0);
map = find_sid_vsid(vcpu, vsid); map = find_sid_vsid(vcpu, vsid);
} }
BUG_ON(!map); if (!map) {
printk(KERN_ERR "KVM: Segment map for 0x%llx (0x%lx) failed\n",
vsid, orig_pte->eaddr);
WARN_ON(true);
return -EINVAL;
}
vsid = map->host_vsid; vsid = map->host_vsid;
va = hpt_va(orig_pte->eaddr, vsid, MMU_SEGSIZE_256M); va = hpt_va(orig_pte->eaddr, vsid, MMU_SEGSIZE_256M);
...@@ -257,26 +268,26 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte) ...@@ -257,26 +268,26 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
if (ret < 0) { if (ret < 0) {
/* If we couldn't map a primary PTE, try a secondary */ /* If we couldn't map a primary PTE, try a secondary */
#ifdef USE_SECONDARY
hash = ~hash; hash = ~hash;
vflags ^= HPTE_V_SECONDARY;
attempt++; attempt++;
if (attempt % 2)
vflags = HPTE_V_SECONDARY;
else
vflags = 0;
#else
attempt = 2;
#endif
goto map_again; goto map_again;
} else { } else {
int hpte_id = kvmppc_mmu_hpte_cache_next(vcpu); int hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
struct hpte_cache *pte = &vcpu->arch.hpte_cache[hpte_id]; struct hpte_cache *pte = &vcpu->arch.hpte_cache[hpte_id];
dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%lx (0x%llx) -> %lx\n", dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
((rflags & HPTE_R_PP) == 3) ? '-' : 'w', ((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
(rflags & HPTE_R_N) ? '-' : 'x', (rflags & HPTE_R_N) ? '-' : 'x',
orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr); orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr);
/* The ppc_md code may give us a secondary entry even though we
asked for a primary. Fix up. */
if ((ret & _PTEIDX_SECONDARY) && !(vflags & HPTE_V_SECONDARY)) {
hash = ~hash;
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);
}
pte->slot = hpteg + (ret & 7); pte->slot = hpteg + (ret & 7);
pte->host_va = va; pte->host_va = va;
pte->pte = *orig_pte; pte->pte = *orig_pte;
...@@ -321,6 +332,9 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid) ...@@ -321,6 +332,9 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
map->guest_vsid = gvsid; map->guest_vsid = gvsid;
map->valid = true; map->valid = true;
dprintk_slb("SLB: New mapping at %d: 0x%llx -> 0x%llx\n",
sid_map_mask, gvsid, map->host_vsid);
return map; return map;
} }
...@@ -331,14 +345,14 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid) ...@@ -331,14 +345,14 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
int found_inval = -1; int found_inval = -1;
int r; int r;
if (!get_paca()->kvm_slb_max) if (!to_svcpu(vcpu)->slb_max)
get_paca()->kvm_slb_max = 1; to_svcpu(vcpu)->slb_max = 1;
/* Are we overwriting? */ /* Are we overwriting? */
for (i = 1; i < get_paca()->kvm_slb_max; i++) { for (i = 1; i < to_svcpu(vcpu)->slb_max; i++) {
if (!(get_paca()->kvm_slb[i].esid & SLB_ESID_V)) if (!(to_svcpu(vcpu)->slb[i].esid & SLB_ESID_V))
found_inval = i; found_inval = i;
else if ((get_paca()->kvm_slb[i].esid & ESID_MASK) == esid) else if ((to_svcpu(vcpu)->slb[i].esid & ESID_MASK) == esid)
return i; return i;
} }
...@@ -352,11 +366,11 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid) ...@@ -352,11 +366,11 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
max_slb_size = mmu_slb_size; max_slb_size = mmu_slb_size;
/* Overflowing -> purge */ /* Overflowing -> purge */
if ((get_paca()->kvm_slb_max) == max_slb_size) if ((to_svcpu(vcpu)->slb_max) == max_slb_size)
kvmppc_mmu_flush_segments(vcpu); kvmppc_mmu_flush_segments(vcpu);
r = get_paca()->kvm_slb_max; r = to_svcpu(vcpu)->slb_max;
get_paca()->kvm_slb_max++; to_svcpu(vcpu)->slb_max++;
return r; return r;
} }
...@@ -374,7 +388,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr) ...@@ -374,7 +388,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) { if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
/* Invalidate an entry */ /* Invalidate an entry */
get_paca()->kvm_slb[slb_index].esid = 0; to_svcpu(vcpu)->slb[slb_index].esid = 0;
return -ENOENT; return -ENOENT;
} }
...@@ -388,8 +402,8 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr) ...@@ -388,8 +402,8 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
slb_vsid &= ~SLB_VSID_KP; slb_vsid &= ~SLB_VSID_KP;
slb_esid |= slb_index; slb_esid |= slb_index;
get_paca()->kvm_slb[slb_index].esid = slb_esid; to_svcpu(vcpu)->slb[slb_index].esid = slb_esid;
get_paca()->kvm_slb[slb_index].vsid = slb_vsid; to_svcpu(vcpu)->slb[slb_index].vsid = slb_vsid;
dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid); dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid);
...@@ -398,11 +412,29 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr) ...@@ -398,11 +412,29 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu) void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
{ {
get_paca()->kvm_slb_max = 1; to_svcpu(vcpu)->slb_max = 1;
get_paca()->kvm_slb[0].esid = 0; to_svcpu(vcpu)->slb[0].esid = 0;
} }
void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu) void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
{ {
kvmppc_mmu_pte_flush(vcpu, 0, 0); kvmppc_mmu_pte_flush(vcpu, 0, 0);
__destroy_context(to_book3s(vcpu)->context_id);
}
int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
int err;
err = __init_new_context();
if (err < 0)
return -1;
vcpu3s->context_id = err;
vcpu3s->vsid_max = ((vcpu3s->context_id + 1) << USER_ESID_BITS) - 1;
vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
vcpu3s->vsid_next = vcpu3s->vsid_first;
return 0;
} }
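kvmppc_mmu_init() above reserves a whole Linux MMU context for the guest's shadow VSIDs and carves a private window out of it. The arithmetic behind vsid_first/vsid_max, as a stand-alone sketch (the USER_ESID_BITS value is an assumption for illustration):

    #include <stdio.h>

    int main(void)
    {
            unsigned long context_id = 5;   /* as handed out by __init_new_context() */
            unsigned long esid_bits = 16;   /* assumed value of USER_ESID_BITS */

            unsigned long vsid_first = context_id << esid_bits;
            unsigned long vsid_max   = ((context_id + 1) << esid_bits) - 1;
            unsigned long vsid_next  = vsid_first;

            /* Shadow VSIDs for this guest are handed out from this window,
             * which the host's context allocator guarantees is private. */
            printf("vsid window: 0x%lx .. 0x%lx (next: 0x%lx)\n",
                   vsid_first, vsid_max, vsid_next);
            return 0;
    }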
...@@ -44,8 +44,7 @@ slb_exit_skip_ ## num: ...@@ -44,8 +44,7 @@ slb_exit_skip_ ## num:
* * * *
*****************************************************************************/ *****************************************************************************/
.global kvmppc_handler_trampoline_enter .macro LOAD_GUEST_SEGMENTS
kvmppc_handler_trampoline_enter:
/* Required state: /* Required state:
* *
...@@ -53,20 +52,14 @@ kvmppc_handler_trampoline_enter: ...@@ -53,20 +52,14 @@ kvmppc_handler_trampoline_enter:
* R13 = PACA * R13 = PACA
* R1 = host R1 * R1 = host R1
* R2 = host R2 * R2 = host R2
* R9 = guest IP * R3 = shadow vcpu
* R10 = guest MSR * all other volatile GPRS = free
* all other GPRS = free * SVCPU[CR] = guest CR
* PACA[KVM_CR] = guest CR * SVCPU[XER] = guest XER
* PACA[KVM_XER] = guest XER * SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*/ */
mtsrr0 r9
mtsrr1 r10
/* Activate guest mode, so faults get handled by KVM */
li r11, KVM_GUEST_MODE_GUEST
stb r11, PACA_KVM_IN_GUEST(r13)
/* Remove LPAR shadow entries */ /* Remove LPAR shadow entries */
#if SLB_NUM_BOLTED == 3 #if SLB_NUM_BOLTED == 3
...@@ -101,14 +94,14 @@ kvmppc_handler_trampoline_enter: ...@@ -101,14 +94,14 @@ kvmppc_handler_trampoline_enter:
/* Fill SLB with our shadow */ /* Fill SLB with our shadow */
lbz r12, PACA_KVM_SLB_MAX(r13) lbz r12, SVCPU_SLB_MAX(r3)
mulli r12, r12, 16 mulli r12, r12, 16
addi r12, r12, PACA_KVM_SLB addi r12, r12, SVCPU_SLB
add r12, r12, r13 add r12, r12, r3
/* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */ /* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */
li r11, PACA_KVM_SLB li r11, SVCPU_SLB
add r11, r11, r13 add r11, r11, r3
slb_loop_enter: slb_loop_enter:
...@@ -127,34 +120,7 @@ slb_loop_enter_skip: ...@@ -127,34 +120,7 @@ slb_loop_enter_skip:
slb_do_enter: slb_do_enter:
/* Enter guest */ .endm
ld r0, (PACA_KVM_R0)(r13)
ld r1, (PACA_KVM_R1)(r13)
ld r2, (PACA_KVM_R2)(r13)
ld r3, (PACA_KVM_R3)(r13)
ld r4, (PACA_KVM_R4)(r13)
ld r5, (PACA_KVM_R5)(r13)
ld r6, (PACA_KVM_R6)(r13)
ld r7, (PACA_KVM_R7)(r13)
ld r8, (PACA_KVM_R8)(r13)
ld r9, (PACA_KVM_R9)(r13)
ld r10, (PACA_KVM_R10)(r13)
ld r12, (PACA_KVM_R12)(r13)
lwz r11, (PACA_KVM_CR)(r13)
mtcr r11
ld r11, (PACA_KVM_XER)(r13)
mtxer r11
ld r11, (PACA_KVM_R11)(r13)
ld r13, (PACA_KVM_R13)(r13)
RFI
kvmppc_handler_trampoline_enter_end:
/****************************************************************************** /******************************************************************************
* * * *
...@@ -162,99 +128,22 @@ kvmppc_handler_trampoline_enter_end: ...@@ -162,99 +128,22 @@ kvmppc_handler_trampoline_enter_end:
* * * *
*****************************************************************************/ *****************************************************************************/
.global kvmppc_handler_trampoline_exit .macro LOAD_HOST_SEGMENTS
kvmppc_handler_trampoline_exit:
/* Register usage at this point: /* Register usage at this point:
* *
* SPRG_SCRATCH0 = guest R13 * R1 = host R1
* R12 = exit handler id * R2 = host R2
* R13 = PACA * R12 = exit handler id
* PACA.KVM.SCRATCH0 = guest R12 * R13 = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
* PACA.KVM.SCRATCH1 = guest CR * SVCPU.* = guest *
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
* *
*/ */
/* Save registers */
std r0, PACA_KVM_R0(r13)
std r1, PACA_KVM_R1(r13)
std r2, PACA_KVM_R2(r13)
std r3, PACA_KVM_R3(r13)
std r4, PACA_KVM_R4(r13)
std r5, PACA_KVM_R5(r13)
std r6, PACA_KVM_R6(r13)
std r7, PACA_KVM_R7(r13)
std r8, PACA_KVM_R8(r13)
std r9, PACA_KVM_R9(r13)
std r10, PACA_KVM_R10(r13)
std r11, PACA_KVM_R11(r13)
/* Restore R1/R2 so we can handle faults */
ld r1, PACA_KVM_HOST_R1(r13)
ld r2, PACA_KVM_HOST_R2(r13)
/* Save guest PC and MSR in GPRs */
mfsrr0 r3
mfsrr1 r4
/* Get scratch'ed off registers */
mfspr r9, SPRN_SPRG_SCRATCH0
std r9, PACA_KVM_R13(r13)
ld r8, PACA_KVM_SCRATCH0(r13)
std r8, PACA_KVM_R12(r13)
lwz r7, PACA_KVM_SCRATCH1(r13)
stw r7, PACA_KVM_CR(r13)
/* Save more register state */
mfxer r6
stw r6, PACA_KVM_XER(r13)
mfdar r5
mfdsisr r6
/*
* In order for us to easily get the last instruction,
* we got the #vmexit at, we exploit the fact that the
* virtual layout is still the same here, so we can just
* ld from the guest's PC address
*/
/* We only load the last instruction when it's safe */
cmpwi r12, BOOK3S_INTERRUPT_DATA_STORAGE
beq ld_last_inst
cmpwi r12, BOOK3S_INTERRUPT_PROGRAM
beq ld_last_inst
b no_ld_last_inst
ld_last_inst:
/* Save off the guest instruction we're at */
/* Set guest mode to 'jump over instruction' so if lwz faults
* we'll just continue at the next IP. */
li r9, KVM_GUEST_MODE_SKIP
stb r9, PACA_KVM_IN_GUEST(r13)
/* 1) enable paging for data */
mfmsr r9
ori r11, r9, MSR_DR /* Enable paging for data */
mtmsr r11
/* 2) fetch the instruction */
li r0, KVM_INST_FETCH_FAILED /* In case lwz faults */
lwz r0, 0(r3)
/* 3) disable paging again */
mtmsr r9
no_ld_last_inst:
/* Unset guest mode */
li r9, KVM_GUEST_MODE_NONE
stb r9, PACA_KVM_IN_GUEST(r13)
/* Restore bolted entries from the shadow and fix it along the way */ /* Restore bolted entries from the shadow and fix it along the way */
/* We don't store anything in entry 0, so we don't need to take care of it */ /* We don't store anything in entry 0, so we don't need to take care of it */
...@@ -275,28 +164,4 @@ no_ld_last_inst: ...@@ -275,28 +164,4 @@ no_ld_last_inst:
slb_do_exit: slb_do_exit:
/* Register usage at this point: .endm
*
* R0 = guest last inst
* R1 = host R1
* R2 = host R2
* R3 = guest PC
* R4 = guest MSR
* R5 = guest DAR
* R6 = guest DSISR
* R12 = exit handler id
* R13 = PACA
* PACA.KVM.* = guest *
*
*/
/* RFI into the highmem handler */
mfmsr r7
ori r7, r7, MSR_IR|MSR_DR|MSR_RI /* Enable paging */
mtsrr1 r7
ld r8, PACA_KVM_VMHANDLER(r13) /* Highmem handler address */
mtsrr0 r8
RFI
kvmppc_handler_trampoline_exit_end:
...@@ -24,36 +24,56 @@ ...@@ -24,36 +24,56 @@
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/exception-64s.h> #include <asm/exception-64s.h>
#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit #if defined(CONFIG_PPC_BOOK3S_64)
#define ULONG_SIZE 8
#define VCPU_GPR(n) (VCPU_GPRS + (n * ULONG_SIZE))
.macro DISABLE_INTERRUPTS #define ULONG_SIZE 8
mfmsr r0 #define FUNC(name) GLUE(.,name)
rldicl r0,r0,48,1
rotldi r0,r0,16
mtmsrd r0,1
.endm
#define GET_SHADOW_VCPU(reg) \
addi reg, r13, PACA_KVM_SVCPU
#define DISABLE_INTERRUPTS \
mfmsr r0; \
rldicl r0,r0,48,1; \
rotldi r0,r0,16; \
mtmsrd r0,1; \
#elif defined(CONFIG_PPC_BOOK3S_32)
#define ULONG_SIZE 4
#define FUNC(name) name
#define GET_SHADOW_VCPU(reg) \
lwz reg, (THREAD + THREAD_KVM_SVCPU)(r2)
#define DISABLE_INTERRUPTS \
mfmsr r0; \
rlwinm r0,r0,0,17,15; \
mtmsr r0; \
#endif /* CONFIG_PPC_BOOK3S_XX */
#define VCPU_GPR(n) (VCPU_GPRS + (n * ULONG_SIZE))
#define VCPU_LOAD_NVGPRS(vcpu) \ #define VCPU_LOAD_NVGPRS(vcpu) \
ld r14, VCPU_GPR(r14)(vcpu); \ PPC_LL r14, VCPU_GPR(r14)(vcpu); \
ld r15, VCPU_GPR(r15)(vcpu); \ PPC_LL r15, VCPU_GPR(r15)(vcpu); \
ld r16, VCPU_GPR(r16)(vcpu); \ PPC_LL r16, VCPU_GPR(r16)(vcpu); \
ld r17, VCPU_GPR(r17)(vcpu); \ PPC_LL r17, VCPU_GPR(r17)(vcpu); \
ld r18, VCPU_GPR(r18)(vcpu); \ PPC_LL r18, VCPU_GPR(r18)(vcpu); \
ld r19, VCPU_GPR(r19)(vcpu); \ PPC_LL r19, VCPU_GPR(r19)(vcpu); \
ld r20, VCPU_GPR(r20)(vcpu); \ PPC_LL r20, VCPU_GPR(r20)(vcpu); \
ld r21, VCPU_GPR(r21)(vcpu); \ PPC_LL r21, VCPU_GPR(r21)(vcpu); \
ld r22, VCPU_GPR(r22)(vcpu); \ PPC_LL r22, VCPU_GPR(r22)(vcpu); \
ld r23, VCPU_GPR(r23)(vcpu); \ PPC_LL r23, VCPU_GPR(r23)(vcpu); \
ld r24, VCPU_GPR(r24)(vcpu); \ PPC_LL r24, VCPU_GPR(r24)(vcpu); \
ld r25, VCPU_GPR(r25)(vcpu); \ PPC_LL r25, VCPU_GPR(r25)(vcpu); \
ld r26, VCPU_GPR(r26)(vcpu); \ PPC_LL r26, VCPU_GPR(r26)(vcpu); \
ld r27, VCPU_GPR(r27)(vcpu); \ PPC_LL r27, VCPU_GPR(r27)(vcpu); \
ld r28, VCPU_GPR(r28)(vcpu); \ PPC_LL r28, VCPU_GPR(r28)(vcpu); \
ld r29, VCPU_GPR(r29)(vcpu); \ PPC_LL r29, VCPU_GPR(r29)(vcpu); \
ld r30, VCPU_GPR(r30)(vcpu); \ PPC_LL r30, VCPU_GPR(r30)(vcpu); \
ld r31, VCPU_GPR(r31)(vcpu); \ PPC_LL r31, VCPU_GPR(r31)(vcpu); \
/***************************************************************************** /*****************************************************************************
* * * *
...@@ -69,11 +89,11 @@ _GLOBAL(__kvmppc_vcpu_entry) ...@@ -69,11 +89,11 @@ _GLOBAL(__kvmppc_vcpu_entry)
kvm_start_entry: kvm_start_entry:
/* Write correct stack frame */ /* Write correct stack frame */
mflr r0 mflr r0
std r0,16(r1) PPC_STL r0,PPC_LR_STKOFF(r1)
/* Save host state to the stack */ /* Save host state to the stack */
stdu r1, -SWITCH_FRAME_SIZE(r1) PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
/* Save r3 (kvm_run) and r4 (vcpu) */ /* Save r3 (kvm_run) and r4 (vcpu) */
SAVE_2GPRS(3, r1) SAVE_2GPRS(3, r1)
...@@ -82,33 +102,28 @@ kvm_start_entry: ...@@ -82,33 +102,28 @@ kvm_start_entry:
SAVE_NVGPRS(r1) SAVE_NVGPRS(r1)
/* Save LR */ /* Save LR */
std r0, _LINK(r1) PPC_STL r0, _LINK(r1)
/* Load non-volatile guest state from the vcpu */ /* Load non-volatile guest state from the vcpu */
VCPU_LOAD_NVGPRS(r4) VCPU_LOAD_NVGPRS(r4)
GET_SHADOW_VCPU(r5)
/* Save R1/R2 in the PACA */ /* Save R1/R2 in the PACA */
std r1, PACA_KVM_HOST_R1(r13) PPC_STL r1, SVCPU_HOST_R1(r5)
std r2, PACA_KVM_HOST_R2(r13) PPC_STL r2, SVCPU_HOST_R2(r5)
/* XXX swap in/out on load? */ /* XXX swap in/out on load? */
ld r3, VCPU_HIGHMEM_HANDLER(r4) PPC_LL r3, VCPU_HIGHMEM_HANDLER(r4)
std r3, PACA_KVM_VMHANDLER(r13) PPC_STL r3, SVCPU_VMHANDLER(r5)
kvm_start_lightweight: kvm_start_lightweight:
ld r9, VCPU_PC(r4) /* r9 = vcpu->arch.pc */ PPC_LL r10, VCPU_SHADOW_MSR(r4) /* r10 = vcpu->arch.shadow_msr */
ld r10, VCPU_SHADOW_MSR(r4) /* r10 = vcpu->arch.shadow_msr */
/* Load some guest state in the respective registers */
ld r5, VCPU_CTR(r4) /* r5 = vcpu->arch.ctr */
/* will be swapped in by rmcall */
ld r3, VCPU_LR(r4) /* r3 = vcpu->arch.lr */
mtlr r3 /* LR = r3 */
DISABLE_INTERRUPTS DISABLE_INTERRUPTS
#ifdef CONFIG_PPC_BOOK3S_64
/* Some guests may need to have dcbz set to 32 byte length. /* Some guests may need to have dcbz set to 32 byte length.
* *
* Usually we ensure that by patching the guest's instructions * Usually we ensure that by patching the guest's instructions
...@@ -118,7 +133,7 @@ kvm_start_lightweight: ...@@ -118,7 +133,7 @@ kvm_start_lightweight:
* because that's a lot faster. * because that's a lot faster.
*/ */
ld r3, VCPU_HFLAGS(r4) PPC_LL r3, VCPU_HFLAGS(r4)
rldicl. r3, r3, 0, 63 /* CR = ((r3 & 1) == 0) */ rldicl. r3, r3, 0, 63 /* CR = ((r3 & 1) == 0) */
beq no_dcbz32_on beq no_dcbz32_on
...@@ -128,13 +143,15 @@ kvm_start_lightweight: ...@@ -128,13 +143,15 @@ kvm_start_lightweight:
no_dcbz32_on: no_dcbz32_on:
ld r6, VCPU_RMCALL(r4) #endif /* CONFIG_PPC_BOOK3S_64 */
PPC_LL r6, VCPU_RMCALL(r4)
mtctr r6 mtctr r6
ld r3, VCPU_TRAMPOLINE_ENTER(r4) PPC_LL r3, VCPU_TRAMPOLINE_ENTER(r4)
LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR)) LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
/* Jump to SLB patching handlder and into our guest */ /* Jump to segment patching handler and into our guest */
bctr bctr
/* /*
...@@ -149,31 +166,20 @@ kvmppc_handler_highmem: ...@@ -149,31 +166,20 @@ kvmppc_handler_highmem:
/* /*
* Register usage at this point: * Register usage at this point:
* *
* R0 = guest last inst * R1 = host R1
* R1 = host R1 * R2 = host R2
* R2 = host R2 * R12 = exit handler id
* R3 = guest PC * R13 = PACA
* R4 = guest MSR * SVCPU.* = guest *
* R5 = guest DAR
* R6 = guest DSISR
* R13 = PACA
* PACA.KVM.* = guest *
* *
*/ */
/* R7 = vcpu */ /* R7 = vcpu */
ld r7, GPR4(r1) PPC_LL r7, GPR4(r1)
/* Now save the guest state */ #ifdef CONFIG_PPC_BOOK3S_64
stw r0, VCPU_LAST_INST(r7) PPC_LL r5, VCPU_HFLAGS(r7)
std r3, VCPU_PC(r7)
std r4, VCPU_SHADOW_SRR1(r7)
std r5, VCPU_FAULT_DEAR(r7)
std r6, VCPU_FAULT_DSISR(r7)
ld r5, VCPU_HFLAGS(r7)
rldicl. r5, r5, 0, 63 /* CR = ((r5 & 1) == 0) */ rldicl. r5, r5, 0, 63 /* CR = ((r5 & 1) == 0) */
beq no_dcbz32_off beq no_dcbz32_off
...@@ -184,35 +190,29 @@ kvmppc_handler_highmem: ...@@ -184,35 +190,29 @@ kvmppc_handler_highmem:
no_dcbz32_off: no_dcbz32_off:
std r14, VCPU_GPR(r14)(r7) #endif /* CONFIG_PPC_BOOK3S_64 */
std r15, VCPU_GPR(r15)(r7)
std r16, VCPU_GPR(r16)(r7) PPC_STL r14, VCPU_GPR(r14)(r7)
std r17, VCPU_GPR(r17)(r7) PPC_STL r15, VCPU_GPR(r15)(r7)
std r18, VCPU_GPR(r18)(r7) PPC_STL r16, VCPU_GPR(r16)(r7)
std r19, VCPU_GPR(r19)(r7) PPC_STL r17, VCPU_GPR(r17)(r7)
std r20, VCPU_GPR(r20)(r7) PPC_STL r18, VCPU_GPR(r18)(r7)
std r21, VCPU_GPR(r21)(r7) PPC_STL r19, VCPU_GPR(r19)(r7)
std r22, VCPU_GPR(r22)(r7) PPC_STL r20, VCPU_GPR(r20)(r7)
std r23, VCPU_GPR(r23)(r7) PPC_STL r21, VCPU_GPR(r21)(r7)
std r24, VCPU_GPR(r24)(r7) PPC_STL r22, VCPU_GPR(r22)(r7)
std r25, VCPU_GPR(r25)(r7) PPC_STL r23, VCPU_GPR(r23)(r7)
std r26, VCPU_GPR(r26)(r7) PPC_STL r24, VCPU_GPR(r24)(r7)
std r27, VCPU_GPR(r27)(r7) PPC_STL r25, VCPU_GPR(r25)(r7)
std r28, VCPU_GPR(r28)(r7) PPC_STL r26, VCPU_GPR(r26)(r7)
std r29, VCPU_GPR(r29)(r7) PPC_STL r27, VCPU_GPR(r27)(r7)
std r30, VCPU_GPR(r30)(r7) PPC_STL r28, VCPU_GPR(r28)(r7)
std r31, VCPU_GPR(r31)(r7) PPC_STL r29, VCPU_GPR(r29)(r7)
PPC_STL r30, VCPU_GPR(r30)(r7)
/* Save guest CTR */ PPC_STL r31, VCPU_GPR(r31)(r7)
mfctr r5
std r5, VCPU_CTR(r7)
/* Save guest LR */
mflr r5
std r5, VCPU_LR(r7)
/* Restore host msr -> SRR1 */ /* Restore host msr -> SRR1 */
ld r6, VCPU_HOST_MSR(r7) PPC_LL r6, VCPU_HOST_MSR(r7)
/* /*
* For some interrupts, we need to call the real Linux * For some interrupts, we need to call the real Linux
...@@ -228,9 +228,12 @@ no_dcbz32_off: ...@@ -228,9 +228,12 @@ no_dcbz32_off:
beq call_linux_handler beq call_linux_handler
cmpwi r12, BOOK3S_INTERRUPT_DECREMENTER cmpwi r12, BOOK3S_INTERRUPT_DECREMENTER
beq call_linux_handler beq call_linux_handler
cmpwi r12, BOOK3S_INTERRUPT_PERFMON
beq call_linux_handler
/* Back to EE=1 */ /* Back to EE=1 */
mtmsr r6 mtmsr r6
sync
b kvm_return_point b kvm_return_point
call_linux_handler: call_linux_handler:
...@@ -249,14 +252,14 @@ call_linux_handler: ...@@ -249,14 +252,14 @@ call_linux_handler:
*/ */
/* Restore host IP -> SRR0 */ /* Restore host IP -> SRR0 */
ld r5, VCPU_HOST_RETIP(r7) PPC_LL r5, VCPU_HOST_RETIP(r7)
/* XXX Better move to a safe function? /* XXX Better move to a safe function?
* What if we get an HTAB flush in between mtsrr0 and mtsrr1? */ * What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
mtlr r12 mtlr r12
ld r4, VCPU_TRAMPOLINE_LOWMEM(r7) PPC_LL r4, VCPU_TRAMPOLINE_LOWMEM(r7)
mtsrr0 r4 mtsrr0 r4
LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR)) LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
mtsrr1 r3 mtsrr1 r3
...@@ -274,7 +277,7 @@ kvm_return_point: ...@@ -274,7 +277,7 @@ kvm_return_point:
/* Restore r3 (kvm_run) and r4 (vcpu) */ /* Restore r3 (kvm_run) and r4 (vcpu) */
REST_2GPRS(3, r1) REST_2GPRS(3, r1)
bl KVMPPC_HANDLE_EXIT bl FUNC(kvmppc_handle_exit)
/* If RESUME_GUEST, get back in the loop */ /* If RESUME_GUEST, get back in the loop */
cmpwi r3, RESUME_GUEST cmpwi r3, RESUME_GUEST
...@@ -285,7 +288,7 @@ kvm_return_point: ...@@ -285,7 +288,7 @@ kvm_return_point:
kvm_exit_loop: kvm_exit_loop:
ld r4, _LINK(r1) PPC_LL r4, _LINK(r1)
mtlr r4 mtlr r4
/* Restore non-volatile host registers (r14 - r31) */ /* Restore non-volatile host registers (r14 - r31) */
...@@ -296,8 +299,8 @@ kvm_exit_loop: ...@@ -296,8 +299,8 @@ kvm_exit_loop:
kvm_loop_heavyweight: kvm_loop_heavyweight:
ld r4, _LINK(r1) PPC_LL r4, _LINK(r1)
std r4, (16 + SWITCH_FRAME_SIZE)(r1) PPC_STL r4, (PPC_LR_STKOFF + SWITCH_FRAME_SIZE)(r1)
/* Load vcpu and cpu_run */ /* Load vcpu and cpu_run */
REST_2GPRS(3, r1) REST_2GPRS(3, r1)
...@@ -315,4 +318,3 @@ kvm_loop_lightweight: ...@@ -315,4 +318,3 @@ kvm_loop_lightweight:
/* Jump back into the beginning of this function */ /* Jump back into the beginning of this function */
b kvm_start_lightweight b kvm_start_lightweight
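Most of the book3s_interrupts.S churn above is mechanical: every ld/std of a vcpu field becomes PPC_LL/PPC_STL so the same file assembles on 32-bit and 64-bit hosts, and direct calls go through FUNC() to cope with the ppc64 dot-symbol ABI. A preprocessor-level sketch of the idea (the macro bodies are an assumption of roughly what the compat headers provide, not a copy of them):

    #include <stdio.h>

    /* Size-agnostic load/store mnemonics, selected at preprocessing time. */
    #ifdef __powerpc64__
    #define PPC_LL_EXAMPLE          "ld"    /* 64-bit loads */
    #define PPC_STL_EXAMPLE         "std"
    #define FUNC_EXAMPLE(name)      "." name        /* ppc64 dot symbols */
    #else
    #define PPC_LL_EXAMPLE          "lwz"   /* 32-bit loads */
    #define PPC_STL_EXAMPLE         "stw"
    #define FUNC_EXAMPLE(name)      name
    #endif

    int main(void)
    {
            printf("load: %s  store: %s  call: %s\n",
                   PPC_LL_EXAMPLE, PPC_STL_EXAMPLE,
                   FUNC_EXAMPLE("kvmppc_handle_exit"));
            return 0;
    }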
...@@ -22,7 +22,10 @@ ...@@ -22,7 +22,10 @@
#include <asm/reg.h> #include <asm/reg.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/exception-64s.h> #include <asm/exception-64s.h>
#endif
/***************************************************************************** /*****************************************************************************
* * * *
...@@ -30,6 +33,39 @@ ...@@ -30,6 +33,39 @@
* * * *
****************************************************************************/ ****************************************************************************/
#if defined(CONFIG_PPC_BOOK3S_64)
#define LOAD_SHADOW_VCPU(reg) \
mfspr reg, SPRN_SPRG_PACA
#define SHADOW_VCPU_OFF PACA_KVM_SVCPU
#define MSR_NOIRQ MSR_KERNEL & ~(MSR_IR | MSR_DR)
#define FUNC(name) GLUE(.,name)
#elif defined(CONFIG_PPC_BOOK3S_32)
#define LOAD_SHADOW_VCPU(reg) \
mfspr reg, SPRN_SPRG_THREAD; \
lwz reg, THREAD_KVM_SVCPU(reg); \
/* PPC32 can have a NULL pointer - let's check for that */ \
mtspr SPRN_SPRG_SCRATCH1, r12; /* Save r12 */ \
mfcr r12; \
cmpwi reg, 0; \
bne 1f; \
mfspr reg, SPRN_SPRG_SCRATCH0; \
mtcr r12; \
mfspr r12, SPRN_SPRG_SCRATCH1; \
b kvmppc_resume_\intno; \
1:; \
mtcr r12; \
mfspr r12, SPRN_SPRG_SCRATCH1; \
tophys(reg, reg)
#define SHADOW_VCPU_OFF 0
#define MSR_NOIRQ MSR_KERNEL
#define FUNC(name) name
#endif
.macro INTERRUPT_TRAMPOLINE intno .macro INTERRUPT_TRAMPOLINE intno
...@@ -42,19 +78,19 @@ kvmppc_trampoline_\intno: ...@@ -42,19 +78,19 @@ kvmppc_trampoline_\intno:
* First thing to do is to find out if we're coming * First thing to do is to find out if we're coming
* from a KVM guest or a Linux process. * from a KVM guest or a Linux process.
* *
* To distinguish, we check a magic byte in the PACA * To distinguish, we check a magic byte in the PACA/current
*/ */
mfspr r13, SPRN_SPRG_PACA /* r13 = PACA */ LOAD_SHADOW_VCPU(r13)
std r12, PACA_KVM_SCRATCH0(r13) PPC_STL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfcr r12 mfcr r12
stw r12, PACA_KVM_SCRATCH1(r13) stw r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
lbz r12, PACA_KVM_IN_GUEST(r13) lbz r12, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
cmpwi r12, KVM_GUEST_MODE_NONE cmpwi r12, KVM_GUEST_MODE_NONE
bne ..kvmppc_handler_hasmagic_\intno bne ..kvmppc_handler_hasmagic_\intno
/* No KVM guest? Then jump back to the Linux handler! */ /* No KVM guest? Then jump back to the Linux handler! */
lwz r12, PACA_KVM_SCRATCH1(r13) lwz r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
mtcr r12 mtcr r12
ld r12, PACA_KVM_SCRATCH0(r13) PPC_LL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfspr r13, SPRN_SPRG_SCRATCH0 /* r13 = original r13 */ mfspr r13, SPRN_SPRG_SCRATCH0 /* r13 = original r13 */
b kvmppc_resume_\intno /* Get back original handler */ b kvmppc_resume_\intno /* Get back original handler */
...@@ -76,9 +112,7 @@ kvmppc_trampoline_\intno: ...@@ -76,9 +112,7 @@ kvmppc_trampoline_\intno:
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSTEM_RESET INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSTEM_RESET
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_MACHINE_CHECK INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_MACHINE_CHECK
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_STORAGE INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_STORAGE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_STORAGE INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_STORAGE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_EXTERNAL INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_EXTERNAL
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALIGNMENT INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALIGNMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PROGRAM INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PROGRAM
...@@ -88,7 +122,14 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSCALL ...@@ -88,7 +122,14 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSCALL
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_TRACE INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_TRACE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PERFMON INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PERFMON
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALTIVEC INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALTIVEC
/* Those are only available on 64 bit machines */
#ifdef CONFIG_PPC_BOOK3S_64
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX
#endif
/* /*
* Bring us back to the faulting code, but skip the * Bring us back to the faulting code, but skip the
...@@ -99,11 +140,11 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX ...@@ -99,11 +140,11 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX
* *
* Input Registers: * Input Registers:
* *
* R12 = free * R12 = free
* R13 = PACA * R13 = Shadow VCPU (PACA)
* PACA.KVM.SCRATCH0 = guest R12 * SVCPU.SCRATCH0 = guest R12
* PACA.KVM.SCRATCH1 = guest CR * SVCPU.SCRATCH1 = guest CR
* SPRG_SCRATCH0 = guest R13 * SPRG_SCRATCH0 = guest R13
* *
*/ */
kvmppc_handler_skip_ins: kvmppc_handler_skip_ins:
...@@ -114,9 +155,9 @@ kvmppc_handler_skip_ins: ...@@ -114,9 +155,9 @@ kvmppc_handler_skip_ins:
mtsrr0 r12 mtsrr0 r12
/* Clean up all state */ /* Clean up all state */
lwz r12, PACA_KVM_SCRATCH1(r13) lwz r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
mtcr r12 mtcr r12
ld r12, PACA_KVM_SCRATCH0(r13) PPC_LL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfspr r13, SPRN_SPRG_SCRATCH0 mfspr r13, SPRN_SPRG_SCRATCH0
/* And get back into the code */ /* And get back into the code */
...@@ -147,41 +188,48 @@ kvmppc_handler_lowmem_trampoline_end: ...@@ -147,41 +188,48 @@ kvmppc_handler_lowmem_trampoline_end:
* *
* R3 = function * R3 = function
* R4 = MSR * R4 = MSR
* R5 = CTR * R5 = scratch register
* *
*/ */
_GLOBAL(kvmppc_rmcall) _GLOBAL(kvmppc_rmcall)
mtmsr r4 /* Disable relocation, so mtsrr LOAD_REG_IMMEDIATE(r5, MSR_NOIRQ)
mtmsr r5 /* Disable relocation and interrupts, so mtsrr
doesn't get interrupted */ doesn't get interrupted */
mtctr r5 sync
mtsrr0 r3 mtsrr0 r3
mtsrr1 r4 mtsrr1 r4
RFI RFI
#if defined(CONFIG_PPC_BOOK3S_32)
#define STACK_LR INT_FRAME_SIZE+4
#elif defined(CONFIG_PPC_BOOK3S_64)
#define STACK_LR _LINK
#endif
/* /*
* Activate current's external feature (FPU/Altivec/VSX) * Activate current's external feature (FPU/Altivec/VSX)
*/ */
#define define_load_up(what) \ #define define_load_up(what) \
\ \
_GLOBAL(kvmppc_load_up_ ## what); \ _GLOBAL(kvmppc_load_up_ ## what); \
subi r1, r1, INT_FRAME_SIZE; \ PPC_STLU r1, -INT_FRAME_SIZE(r1); \
mflr r3; \ mflr r3; \
std r3, _LINK(r1); \ PPC_STL r3, STACK_LR(r1); \
mfmsr r4; \ PPC_STL r20, _NIP(r1); \
std r31, GPR3(r1); \ mfmsr r20; \
mr r31, r4; \ LOAD_REG_IMMEDIATE(r3, MSR_DR|MSR_EE); \
li r5, MSR_DR; \ andc r3,r20,r3; /* Disable DR,EE */ \
oris r5, r5, MSR_EE@h; \ mtmsr r3; \
andc r4, r4, r5; \ sync; \
mtmsr r4; \ \
\ bl FUNC(load_up_ ## what); \
bl .load_up_ ## what; \ \
\ mtmsr r20; /* Enable DR,EE */ \
mtmsr r31; \ sync; \
ld r3, _LINK(r1); \ PPC_LL r3, STACK_LR(r1); \
ld r31, GPR3(r1); \ PPC_LL r20, _NIP(r1); \
addi r1, r1, INT_FRAME_SIZE; \ mtlr r3; \
mtlr r3; \ addi r1, r1, INT_FRAME_SIZE; \
blr blr
define_load_up(fpu) define_load_up(fpu)
...@@ -194,11 +242,10 @@ define_load_up(vsx) ...@@ -194,11 +242,10 @@ define_load_up(vsx)
.global kvmppc_trampoline_lowmem .global kvmppc_trampoline_lowmem
kvmppc_trampoline_lowmem: kvmppc_trampoline_lowmem:
.long kvmppc_handler_lowmem_trampoline - _stext .long kvmppc_handler_lowmem_trampoline - CONFIG_KERNEL_START
.global kvmppc_trampoline_enter .global kvmppc_trampoline_enter
kvmppc_trampoline_enter: kvmppc_trampoline_enter:
.long kvmppc_handler_trampoline_enter - _stext .long kvmppc_handler_trampoline_enter - CONFIG_KERNEL_START
#include "book3s_64_slb.S"
#include "book3s_segment.S"
...@@ -133,6 +133,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu, ...@@ -133,6 +133,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_EXTERNAL); kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_EXTERNAL);
} }
void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq)
{
clear_bit(BOOKE_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions);
}
/* Deliver the interrupt of the corresponding priority, if possible. */ /* Deliver the interrupt of the corresponding priority, if possible. */
static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu, static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
unsigned int priority) unsigned int priority)
...@@ -479,6 +485,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -479,6 +485,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
regs->pc = vcpu->arch.pc; regs->pc = vcpu->arch.pc;
regs->cr = kvmppc_get_cr(vcpu); regs->cr = kvmppc_get_cr(vcpu);
regs->ctr = vcpu->arch.ctr; regs->ctr = vcpu->arch.ctr;
...@@ -499,6 +507,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -499,6 +507,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
regs->gpr[i] = kvmppc_get_gpr(vcpu, i); regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -506,6 +516,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -506,6 +516,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{ {
int i; int i;
vcpu_load(vcpu);
vcpu->arch.pc = regs->pc; vcpu->arch.pc = regs->pc;
kvmppc_set_cr(vcpu, regs->cr); kvmppc_set_cr(vcpu, regs->cr);
vcpu->arch.ctr = regs->ctr; vcpu->arch.ctr = regs->ctr;
...@@ -525,6 +537,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) ...@@ -525,6 +537,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++) for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
kvmppc_set_gpr(vcpu, i, regs->gpr[i]); kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
vcpu_put(vcpu);
return 0; return 0;
} }
...@@ -553,7 +567,12 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) ...@@ -553,7 +567,12 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
struct kvm_translation *tr) struct kvm_translation *tr)
{ {
return kvmppc_core_vcpu_translate(vcpu, tr); int r;
vcpu_load(vcpu);
r = kvmppc_core_vcpu_translate(vcpu, tr);
vcpu_put(vcpu);
return r;
} }
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log) int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
......
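The booke.c changes above wrap the register get/set and translate ioctls in vcpu_load()/vcpu_put(), matching the "Add missing vcpu_load()/vcpu_put()" entry in the merge log. The pattern, reduced to a sketch (do_the_work() is a placeholder for the real ioctl body):

    #include <linux/kvm_host.h>

    /* Placeholder for the real ioctl body. */
    static int do_the_work(struct kvm_vcpu *vcpu, void *arg)
    {
            return 0;
    }

    static int example_vcpu_ioctl(struct kvm_vcpu *vcpu, void *arg)
    {
            int r;

            vcpu_load(vcpu);        /* take vcpu->mutex and load arch state */
            r = do_the_work(vcpu, arg);
            vcpu_put(vcpu);         /* and drop it again */

            return r;
    }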
...@@ -161,7 +161,7 @@ static int __init kvmppc_e500_init(void) ...@@ -161,7 +161,7 @@ static int __init kvmppc_e500_init(void)
flush_icache_range(kvmppc_booke_handlers, flush_icache_range(kvmppc_booke_handlers,
kvmppc_booke_handlers + max_ivor + kvmppc_handler_len); kvmppc_booke_handlers + max_ivor + kvmppc_handler_len);
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), THIS_MODULE); return kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
} }
static void __init kvmppc_e500_exit(void) static void __init kvmppc_e500_exit(void)
......
...@@ -72,7 +72,7 @@ static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu) ...@@ -72,7 +72,7 @@ static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu)
struct kvm_memslots *memslots; struct kvm_memslots *memslots;
idx = srcu_read_lock(&vcpu->kvm->srcu); idx = srcu_read_lock(&vcpu->kvm->srcu);
memslots = rcu_dereference(vcpu->kvm->memslots); memslots = kvm_memslots(vcpu->kvm);
mem = &memslots->memslots[0]; mem = &memslots->memslots[0];
......
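The s390 hunk above swaps a raw rcu_dereference() for kvm_memslots(). A sketch of that accessor, assuming it is essentially a wrapper around the RCU dereference it replaces (the real helper likely adds lockdep/SRCU annotations; only the plain form is shown):

    #include <linux/kvm_host.h>

    static inline struct kvm_memslots *example_kvm_memslots(struct kvm *kvm)
    {
            return rcu_dereference(kvm->memslots);
    }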