Commit 221bb8a4 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM updates from Paolo Bonzini:

 - ARM: GICv3 ITS emulation and various fixes.  Removal of the
   old VGIC implementation.

 - s390: support for trapping software breakpoints, nested
   virtualization (vSIE), the STHYI opcode, initial extensions
   for CPU model support.

 - MIPS: support for MIPS64 hosts (32-bit guests only) and lots
   of cleanups, preliminary to this and the upcoming support for
   hardware virtualization extensions.

 - x86: support for execute-only mappings in nested EPT; reduced
   vmexit latency for TSC deadline timer (by about 30%) on Intel
   hosts; support for more than 255 vCPUs.

 - PPC: bugfixes.

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits)
  KVM: PPC: Introduce KVM_CAP_PPC_HTM
  MIPS: Select HAVE_KVM for MIPS64_R{2,6}
  MIPS: KVM: Reset CP0_PageMask during host TLB flush
  MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX()
  MIPS: KVM: Sign extend MFC0/RDHWR results
  MIPS: KVM: Fix 64-bit big endian dynamic translation
  MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase
  MIPS: KVM: Use 64-bit CP0_EBase when appropriate
  MIPS: KVM: Set CP0_Status.KX on MIPS64
  MIPS: KVM: Make entry code MIPS64 friendly
  MIPS: KVM: Use kmap instead of CKSEG0ADDR()
  MIPS: KVM: Use virt_to_phys() to get commpage PFN
  MIPS: Fix definition of KSEGX() for 64-bit
  KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD
  kvm: x86: nVMX: maintain internal copy of current VMCS
  KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
  KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
  KVM: arm64: vgic-its: Simplify MAPI error handling
  KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers
  KVM: arm64: vgic-its: Turn device_id validation into generic ID validation
  ...
parents f7b32e4c 23528bb2
......@@ -1482,6 +1482,11 @@ struct kvm_irq_routing_msi {
__u32 pad;
};
On x86, address_hi is ignored unless the KVM_X2APIC_API_USE_32BIT_IDS
feature of KVM_CAP_X2APIC_API capability is enabled. If it is enabled,
address_hi bits 31-8 provide bits 31-8 of the destination id. Bits 7-0 of
address_hi must be zero.
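As an illustrative sketch (the variable names and surrounding setup are assumptions, not part of this patch), a VMM that has enabled KVM_X2APIC_API_USE_32BIT_IDS could encode a 32-bit destination ID into an MSI routing entry like this:

  /* assumes <linux/kvm.h>; dest_id and vector come from the VMM's own state */
  struct kvm_irq_routing_entry e = { 0 };
  __u32 dest_id = 0x1234;                 /* example 32-bit APIC ID */
  __u8  vector  = 0x30;

  e.type = KVM_IRQ_ROUTING_MSI;
  /* bits 7-0 of the ID go in the usual xAPIC destination field (addr bits 19:12) */
  e.u.msi.address_lo = 0xfee00000 | ((dest_id & 0xff) << 12);
  /* bits 31-8 of the ID go in bits 31-8 of address_hi; bits 7-0 must stay zero */
  e.u.msi.address_hi = dest_id & 0xffffff00;
  e.u.msi.data = vector;                  /* fixed delivery, edge triggered */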
struct kvm_irq_routing_s390_adapter {
__u64 ind_addr;
__u64 summary_addr;
......@@ -1583,6 +1588,17 @@ struct kvm_lapic_state {
Reads the Local APIC registers and copies them into the input argument. The
data format and layout are the same as documented in the architecture manual.
If KVM_X2APIC_API_USE_32BIT_IDS feature of KVM_CAP_X2APIC_API is
enabled, then the format of APIC_ID register depends on the APIC mode
(reported by MSR_IA32_APICBASE) of its VCPU. x2APIC stores APIC ID in
the APIC_ID register (bytes 32-35). xAPIC only allows an 8-bit APIC ID
which is stored in bits 31-24 of the APIC register, or equivalently in
byte 35 of struct kvm_lapic_state's regs field. KVM_GET_LAPIC must then
be called after MSR_IA32_APICBASE has been set with KVM_SET_MSR.
If KVM_X2APIC_API_USE_32BIT_IDS feature is disabled, struct kvm_lapic_state
always uses xAPIC format.
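A minimal helper sketch for pulling the ID back out of the returned state; it assumes the caller already determined the VCPU's APIC mode from MSR_IA32_APICBASE, and the helper itself is hypothetical:

  #include <string.h>      /* memcpy */

  static __u32 lapic_state_apic_id(const struct kvm_lapic_state *s, int x2apic)
  {
          __u32 reg;

          /* the APIC_ID register occupies bytes 32-35 (offset 0x20) of ->regs */
          memcpy(&reg, &s->regs[0x20], sizeof(reg));

          /* x2APIC: full 32-bit ID; xAPIC: 8-bit ID in bits 31-24 */
          return x2apic ? reg : reg >> 24;
  }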
4.58 KVM_SET_LAPIC
......@@ -1600,6 +1616,10 @@ struct kvm_lapic_state {
Copies the input argument into the Local APIC registers. The data format
and layout are the same as documented in the architecture manual.
The format of the APIC ID register (bytes 32-35 of struct kvm_lapic_state's
regs field) depends on the state of the KVM_CAP_X2APIC_API capability.
See the note in KVM_GET_LAPIC.
4.59 KVM_IOEVENTFD
......@@ -2032,6 +2052,12 @@ registers, find a list below:
MIPS | KVM_REG_MIPS_CP0_CONFIG5 | 32
MIPS | KVM_REG_MIPS_CP0_CONFIG7 | 32
MIPS | KVM_REG_MIPS_CP0_ERROREPC | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH1 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH2 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH3 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH4 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH5 | 64
MIPS | KVM_REG_MIPS_CP0_KSCRATCH6 | 64
MIPS | KVM_REG_MIPS_COUNT_CTL | 64
MIPS | KVM_REG_MIPS_COUNT_RESUME | 64
MIPS | KVM_REG_MIPS_COUNT_HZ | 64
......@@ -2156,7 +2182,7 @@ after pausing the vcpu, but before it is resumed.
4.71 KVM_SIGNAL_MSI
Capability: KVM_CAP_SIGNAL_MSI
Architectures: x86
Architectures: x86 arm64
Type: vm ioctl
Parameters: struct kvm_msi (in)
Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
......@@ -2169,10 +2195,22 @@ struct kvm_msi {
__u32 address_hi;
__u32 data;
__u32 flags;
__u8 pad[16];
__u32 devid;
__u8 pad[12];
};
No flags are defined so far. The corresponding field must be 0.
flags: KVM_MSI_VALID_DEVID: devid contains a valid value
devid: If KVM_MSI_VALID_DEVID is set, contains a unique device identifier
for the device that wrote the MSI message.
For PCI, this is usually a BFD identifier in the lower 16 bits.
The per-VM KVM_CAP_MSI_DEVID capability advertises the need to provide
the device ID. If this capability is not set, userland cannot rely on
the kernel to allow the KVM_MSI_VALID_DEVID flag being set.
On x86, address_hi is ignored unless the KVM_CAP_X2APIC_API capability is
enabled. If it is enabled, address_hi bits 31-8 provide bits 31-8 of the
destination id. Bits 7-0 of address_hi must be zero.
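A hedged sketch of how an arm64 VMM might inject an ITS-translated MSI once KVM_CAP_MSI_DEVID indicates the device ID is required (the helper name, doorbell and ID parameters are illustrative):

  /* needs <sys/ioctl.h> and <linux/kvm.h> */
  static int signal_its_msi(int vm_fd, __u64 doorbell, __u32 devid, __u32 eventid)
  {
          struct kvm_msi msi = { 0 };

          msi.address_lo = (__u32)doorbell;          /* GITS_TRANSLATER address */
          msi.address_hi = (__u32)(doorbell >> 32);
          msi.data       = eventid;
          msi.flags      = KVM_MSI_VALID_DEVID;
          msi.devid      = devid;                    /* e.g. PCI bus/dev/fn of the source */

          return ioctl(vm_fd, KVM_SIGNAL_MSI, &msi); /* vm ioctl */
  }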
4.71 KVM_CREATE_PIT2
......@@ -2520,6 +2558,7 @@ Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
ENXIO: The group or attribute is unknown/unsupported for this device
or hardware support is missing.
EPERM: The attribute cannot (currently) be accessed this way
(e.g. read-only attribute, or attribute that only makes
sense when the device is in a different state)
......@@ -2547,6 +2586,7 @@ Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
ENXIO: The group or attribute is unknown/unsupported for this device
or hardware support is missing.
Tests whether a device supports a particular attribute. A successful
return indicates the attribute is implemented. It does not necessarily
......@@ -3803,6 +3843,42 @@ Allows use of runtime-instrumentation introduced with zEC12 processor.
Will return -EINVAL if the machine does not support runtime-instrumentation.
Will return -EBUSY if a VCPU has already been created.
7.7 KVM_CAP_X2APIC_API
Architectures: x86
Parameters: args[0] - features that should be enabled
Returns: 0 on success, -EINVAL when args[0] contains invalid features
Valid feature flags in args[0] are
#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
Enabling KVM_X2APIC_API_USE_32BIT_IDS changes the behavior of
KVM_SET_GSI_ROUTING, KVM_SIGNAL_MSI, KVM_SET_LAPIC, and KVM_GET_LAPIC,
allowing the use of 32-bit APIC IDs. See KVM_CAP_X2APIC_API in their
respective sections.
KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK must be enabled for x2APIC to work
in logical mode or with more than 255 VCPUs. Otherwise, KVM treats 0xff
as a broadcast even in x2APIC mode in order to support physical x2APIC
without interrupt remapping. This is undesirable in logical mode,
where 0xff represents CPUs 0-7 in cluster 0.
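A minimal enabling sketch (the vm_fd parameter and helper name are assumptions, error handling omitted):

  static int enable_x2apic_api(int vm_fd)
  {
          struct kvm_enable_cap cap = { 0 };

          cap.cap     = KVM_CAP_X2APIC_API;
          cap.args[0] = KVM_X2APIC_API_USE_32BIT_IDS |
                        KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK;

          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);   /* vm ioctl */
  }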
7.8 KVM_CAP_S390_USER_INSTR0
Architectures: s390
Parameters: none
With this capability enabled, all illegal instructions 0x0000 (2 bytes) will
be intercepted and forwarded to user space. User space can use this
mechanism e.g. to realize 2-byte software breakpoints. The kernel will
not inject an operating exception for these instructions, user space has
to take care of that.
This capability can be enabled dynamically even if VCPUs were already
created and are running.
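A small sketch of enabling it on a running VM (helper name assumed, no arguments are passed):

  static int enable_user_instr0(int vm_fd)
  {
          struct kvm_enable_cap cap = { 0 };

          cap.cap = KVM_CAP_S390_USER_INSTR0;

          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }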
8. Other capabilities.
----------------------
......
......@@ -4,16 +4,22 @@ ARM Virtual Generic Interrupt Controller (VGIC)
Device types supported:
KVM_DEV_TYPE_ARM_VGIC_V2 ARM Generic Interrupt Controller v2.0
KVM_DEV_TYPE_ARM_VGIC_V3 ARM Generic Interrupt Controller v3.0
KVM_DEV_TYPE_ARM_VGIC_ITS ARM Interrupt Translation Service Controller
Only one VGIC instance may be instantiated through either this API or the
legacy KVM_CREATE_IRQCHIP api. The created VGIC will act as the VM interrupt
controller, requiring emulated user-space devices to inject interrupts to the
VGIC instead of directly to CPUs.
Only one VGIC instance of the V2/V3 types above may be instantiated through
either this API or the legacy KVM_CREATE_IRQCHIP api. The created VGIC will
act as the VM interrupt controller, requiring emulated user-space devices to
inject interrupts to the VGIC instead of directly to CPUs.
Creating a guest GICv3 device requires a host GICv3 as well.
GICv3 implementations with hardware compatibility support allow a guest GICv2
as well.
Creating a virtual ITS controller requires a host GICv3 (but does not depend
on having physical ITS controllers).
There can be multiple ITS controllers per guest, each of them has to have
a separate, non-overlapping MMIO region.
Groups:
KVM_DEV_ARM_VGIC_GRP_ADDR
Attributes:
......@@ -39,6 +45,13 @@ Groups:
Only valid for KVM_DEV_TYPE_ARM_VGIC_V3.
This address needs to be 64K aligned.
KVM_VGIC_V3_ADDR_TYPE_ITS (rw, 64-bit)
Base address in the guest physical address space of the GICv3 ITS
control register frame. The ITS allows MSI(-X) interrupts to be
injected into guests. This extension is optional. If the kernel
does not support the ITS, the call returns -ENODEV.
Only valid for KVM_DEV_TYPE_ARM_VGIC_ITS.
This address needs to be 64K aligned and the region covers 128K.
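A sketch of how a VMM could create a virtual ITS and place its register frame (the helper and guest physical base are illustrative; the final KVM_DEV_ARM_VGIC_CTRL_INIT step can equally be deferred until all VGIC addresses are configured):

  static int create_its(int vm_fd, __u64 its_base /* 64K-aligned GPA */)
  {
          struct kvm_create_device cd = { 0 };
          struct kvm_device_attr attr = { 0 };
          int ret;

          cd.type = KVM_DEV_TYPE_ARM_VGIC_ITS;
          ret = ioctl(vm_fd, KVM_CREATE_DEVICE, &cd);
          if (ret)
                  return ret;

          /* base of the 128K ITS control register frame */
          attr.group = KVM_DEV_ARM_VGIC_GRP_ADDR;
          attr.attr  = KVM_VGIC_ITS_ADDR_TYPE;
          attr.addr  = (__u64)(unsigned long)&its_base;
          ret = ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
          if (ret)
                  return ret;

          /* initialise the ITS (see KVM_DEV_ARM_VGIC_CTRL_INIT below) */
          attr.group = KVM_DEV_ARM_VGIC_GRP_CTRL;
          attr.attr  = KVM_DEV_ARM_VGIC_CTRL_INIT;
          attr.addr  = 0;
          return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
  }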
KVM_DEV_ARM_VGIC_GRP_DIST_REGS
Attributes:
......@@ -109,8 +122,8 @@ Groups:
KVM_DEV_ARM_VGIC_GRP_CTRL
Attributes:
KVM_DEV_ARM_VGIC_CTRL_INIT
request the initialization of the VGIC, no additional parameter in
kvm_device_attr.addr.
request the initialization of the VGIC or ITS, no additional parameter
in kvm_device_attr.addr.
Errors:
-ENXIO: VGIC not properly configured as required prior to calling
this attribute
......
......@@ -20,7 +20,8 @@ Enables Collaborative Memory Management Assist (CMMA) for the virtual machine.
1.2. ATTRIBUTE: KVM_S390_VM_MEM_CLR_CMMA
Parameters: none
Returns: 0
Returns: -EINVAL if CMMA was not enabled
0 otherwise
Clear the CMMA status for all guest pages, so any pages the guest marked
as unused are again used and may not be reclaimed by the host.
......@@ -85,6 +86,90 @@ Returns: -EBUSY in case 1 or more vcpus are already activated (only in write
-ENOMEM if not enough memory is available to process the ioctl
0 in case of success
2.3. ATTRIBUTE: KVM_S390_VM_CPU_MACHINE_FEAT (r/o)
Allows user space to retrieve available cpu features. A feature is available if
provided by the hardware and supported by kvm. In theory, cpu features could
even be completely emulated by kvm.
struct kvm_s390_vm_cpu_feat {
__u64 feat[16]; # Bitmap (1 = feature available), MSB 0 bit numbering
};
Parameters: address of a buffer to load the feature list from.
Returns: -EFAULT if the given address is not accessible from kernel space.
0 in case of success.
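A retrieval sketch: the s390 VM attributes are reached through the KVM_GET_DEVICE_ATTR vm ioctl; the helper name below is an assumption:

  static int get_machine_cpu_feat(int vm_fd, struct kvm_s390_vm_cpu_feat *feat)
  {
          struct kvm_device_attr attr = { 0 };

          attr.group = KVM_S390_VM_CPU_MODEL;
          attr.attr  = KVM_S390_VM_CPU_MACHINE_FEAT;
          attr.addr  = (__u64)(unsigned long)feat;

          return ioctl(vm_fd, KVM_GET_DEVICE_ATTR, &attr);
  }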
2.4. ATTRIBUTE: KVM_S390_VM_CPU_PROCESSOR_FEAT (r/w)
Allows user space to retrieve or change enabled cpu features for all VCPUs of a
VM. Features that are not available cannot be enabled.
See 2.3. for a description of the parameter struct.
Parameters: address of a buffer to store/load the feature list from.
Returns: -EFAULT if the given address is not accessible from kernel space.
-EINVAL if a cpu feature that is not available is to be enabled.
-EBUSY if at least one VCPU has already been defined.
0 in case of success.
2.5. ATTRIBUTE: KVM_S390_VM_CPU_MACHINE_SUBFUNC (r/o)
Allows user space to retrieve available cpu subfunctions without any filtering
done by a set IBC. These subfunctions are indicated to the guest VCPU via
query or "test bit" subfunctions and used e.g. by cpacf functions, plo and ptff.
A subfunction block is only valid if KVM_S390_VM_CPU_MACHINE contains the
STFL(E) bit introducing the affected instruction. If the affected instruction
indicates subfunctions via a "query subfunction", the response block is
contained in the returned struct. If the affected instruction
indicates subfunctions via a "test bit" mechanism, the subfunction codes are
contained in the returned struct in MSB 0 bit numbering.
struct kvm_s390_vm_cpu_subfunc {
u8 plo[32]; # always valid (ESA/390 feature)
u8 ptff[16]; # valid with TOD-clock steering
u8 kmac[16]; # valid with Message-Security-Assist
u8 kmc[16]; # valid with Message-Security-Assist
u8 km[16]; # valid with Message-Security-Assist
u8 kimd[16]; # valid with Message-Security-Assist
u8 klmd[16]; # valid with Message-Security-Assist
u8 pckmo[16]; # valid with Message-Security-Assist-Extension 3
u8 kmctr[16]; # valid with Message-Security-Assist-Extension 4
u8 kmf[16]; # valid with Message-Security-Assist-Extension 4
u8 kmo[16]; # valid with Message-Security-Assist-Extension 4
u8 pcc[16]; # valid with Message-Security-Assist-Extension 4
u8 ppno[16]; # valid with Message-Security-Assist-Extension 5
u8 reserved[1824]; # reserved for future instructions
};
Parameters: address of a buffer to load the subfunction blocks from.
Returns: -EFAULT if the given address is not accessible from kernel space.
0 in case of success.
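Subfunction blocks are fetched the same way (group KVM_S390_VM_CPU_MODEL, attribute KVM_S390_VM_CPU_MACHINE_SUBFUNC). Since the codes use MSB 0 bit numbering, a small hypothetical helper can test whether a given function code is present:

  /* bit 0 is the most significant bit of byte 0 (MSB 0 numbering) */
  static int subfunc_bit_set(const __u8 *block, unsigned int nr)
  {
          return (block[nr / 8] >> (7 - (nr % 8))) & 1;
  }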
2.6. ATTRIBUTE: KVM_S390_VM_CPU_PROCESSOR_SUBFUNC (r/w)
Allows user space to retrieve or change cpu subfunctions to be indicated for
all VCPUs of a VM. This attribute will only be available if kernel and
hardware support are in place.
The kernel uses the configured subfunction blocks for indication to
the guest. A subfunction block will only be used if the associated STFL(E) bit
has not been disabled by user space (so the instruction to be queried is
actually available for the guest).
As long as no data has been written, a read will fail. The IBC will be used
to determine available subfunctions in this case, this will guarantee backward
compatibility.
See 2.5. for a description of the parameter struct.
Parameters: address of a buffer to store/load the subfunction blocks from.
Returns: -EFAULT if the given address is not accessible from kernel space.
-EINVAL when reading, if there was no write yet.
-EBUSY if at least one VCPU has already been defined.
0 in case of success.
3. GROUP: KVM_S390_VM_TOD
Architectures: s390
......
......@@ -89,7 +89,7 @@ In mmu_spte_clear_track_bits():
old_spte = *spte;
/* 'if' condition is satisfied. */
if (old_spte.Accssed == 1 &&
if (old_spte.Accessed == 1 &&
old_spte.W == 0)
spte = 0ull;
on fast page fault path:
......@@ -102,7 +102,7 @@ In mmu_spte_clear_track_bits():
old_spte = xchg(spte, 0ull)
if (old_spte.Accssed == 1)
if (old_spte.Accessed == 1)
kvm_set_pfn_accessed(spte.pfn);
if (old_spte.Dirty == 1)
kvm_set_pfn_dirty(spte.pfn);
......
......@@ -66,6 +66,8 @@ extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
extern void __init_stage2_translation(void);
extern void __kvm_hyp_reset(unsigned long);
#endif
#endif /* __ARM_KVM_ASM_H__ */
......@@ -241,8 +241,7 @@ int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
int exception_index);
static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
phys_addr_t pgd_ptr,
static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
unsigned long hyp_stack_ptr,
unsigned long vector_ptr)
{
......@@ -251,18 +250,13 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
* code. The init code doesn't need to preserve these
* registers as r0-r3 are already callee saved according to
* the AAPCS.
* Note that we slightly misuse the prototype by casing the
* Note that we slightly misuse the prototype by casting the
* stack pointer to a void *.
*
* We don't have enough registers to perform the full init in
* one go. Install the boot PGD first, and then install the
* runtime PGD, stack pointer and vectors. The PGDs are always
* passed as the third argument, in order to be passed into
* r2-r3 to the init code (yes, this is compliant with the
* PCS!).
*/
kvm_call_hyp(NULL, 0, boot_pgd_ptr);
* The PGDs are always passed as the third argument, in order
* to be passed into r2-r3 to the init code (yes, this is
* compliant with the PCS!).
*/
kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
}
......@@ -272,16 +266,13 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
}
static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
phys_addr_t phys_idmap_start)
{
/*
* TODO
* kvm_call_reset(boot_pgd_ptr, phys_idmap_start);
*/
kvm_call_hyp((void *)virt_to_idmap(__kvm_hyp_reset), vector_ptr);
}
static inline int kvm_arch_dev_ioctl_check_extension(long ext)
static inline int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
{
return 0;
}
......
......@@ -25,9 +25,6 @@
#define __hyp_text __section(.hyp.text) notrace
#define kern_hyp_va(v) (v)
#define hyp_kern_va(v) (v)
#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
#define __ACCESS_CP15_64(Op1, CRm) \
......
......@@ -26,16 +26,7 @@
* We directly use the kernel VA for the HYP, as we can directly share
* the mapping (HTTBR "covers" TTBR1).
*/
#define HYP_PAGE_OFFSET_MASK UL(~0)
#define HYP_PAGE_OFFSET PAGE_OFFSET
#define KERN_TO_HYP(kva) (kva)
/*
* Our virtual mapping for the boot-time MMU-enable code. Must be
* shared across all the page-tables. Conveniently, we use the vectors
* page, where no kernel data will ever be shared with HYP.
*/
#define TRAMPOLINE_VA UL(CONFIG_VECTORS_BASE)
#define kern_hyp_va(kva) (kva)
/*
* KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
......@@ -49,9 +40,8 @@
#include <asm/pgalloc.h>
#include <asm/stage2_pgtable.h>
int create_hyp_mappings(void *from, void *to);
int create_hyp_mappings(void *from, void *to, pgprot_t prot);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
void free_hyp_pgds(void);
void stage2_unmap_vm(struct kvm *kvm);
......@@ -65,7 +55,6 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_mmu_get_boot_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
......
......@@ -97,7 +97,9 @@ extern pgprot_t pgprot_s2_device;
#define PAGE_READONLY_EXEC _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
#define PAGE_KERNEL _MOD_PROT(pgprot_kernel, L_PTE_XN)
#define PAGE_KERNEL_EXEC pgprot_kernel
#define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP)
#define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)
#define PAGE_HYP_EXEC _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)
#define PAGE_HYP_RO _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY | L_PTE_XN)
#define PAGE_HYP_DEVICE _MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
#define PAGE_S2 _MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
#define PAGE_S2_DEVICE _MOD_PROT(pgprot_s2_device, L_PTE_S2_RDONLY)
......
......@@ -80,6 +80,10 @@ static inline bool is_kernel_in_hyp_mode(void)
return false;
}
/* The section containing the hypervisor idmap text */
extern char __hyp_idmap_text_start[];
extern char __hyp_idmap_text_end[];
/* The section containing the hypervisor text */
extern char __hyp_text_start[];
extern char __hyp_text_end[];
......
......@@ -46,13 +46,6 @@ config KVM_ARM_HOST
---help---
Provides host support for ARM processors.
config KVM_NEW_VGIC
bool "New VGIC implementation"
depends on KVM
default y
---help---
uses the new VGIC implementation
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
......@@ -22,7 +22,6 @@ obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
ifeq ($(CONFIG_KVM_NEW_VGIC),y)
obj-y += $(KVM)/arm/vgic/vgic.o
obj-y += $(KVM)/arm/vgic/vgic-init.o
obj-y += $(KVM)/arm/vgic/vgic-irqfd.o
......@@ -30,9 +29,4 @@ obj-y += $(KVM)/arm/vgic/vgic-v2.o
obj-y += $(KVM)/arm/vgic/vgic-mmio.o
obj-y += $(KVM)/arm/vgic/vgic-mmio-v2.o
obj-y += $(KVM)/arm/vgic/vgic-kvm-device.o
else
obj-y += $(KVM)/arm/vgic.o
obj-y += $(KVM)/arm/vgic-v2.o
obj-y += $(KVM)/arm/vgic-v2-emul.o
endif
obj-y += $(KVM)/arm/arch_timer.o
......@@ -20,6 +20,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
......@@ -122,7 +123,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
if (ret)
goto out_fail_alloc;
ret = create_hyp_mappings(kvm, kvm + 1);
ret = create_hyp_mappings(kvm, kvm + 1, PAGE_HYP);
if (ret)
goto out_free_stage2_pgd;
......@@ -201,7 +202,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = KVM_MAX_VCPUS;
break;
default:
r = kvm_arch_dev_ioctl_check_extension(ext);
r = kvm_arch_dev_ioctl_check_extension(kvm, ext);
break;
}
return r;
......@@ -239,7 +240,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id)
if (err)
goto free_vcpu;
err = create_hyp_mappings(vcpu, vcpu + 1);
err = create_hyp_mappings(vcpu, vcpu + 1, PAGE_HYP);
if (err)
goto vcpu_uninit;
......@@ -377,7 +378,7 @@ void force_vm_exit(const cpumask_t *mask)
/**
* need_new_vmid_gen - check that the VMID is still valid
* @kvm: The VM's VMID to checkt
* @kvm: The VM's VMID to check
*
* return true if there is a new generation of VMIDs being used
*
......@@ -616,7 +617,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
* Enter the guest
*/
trace_kvm_entry(*vcpu_pc(vcpu));
__kvm_guest_enter();
guest_enter_irqoff();
vcpu->mode = IN_GUEST_MODE;
ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
......@@ -642,14 +643,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
local_irq_enable();
/*
* We do local_irq_enable() before calling kvm_guest_exit() so
* We do local_irq_enable() before calling guest_exit() so
* that if a timer interrupt hits while running the guest we
* account that tick as being spent in the guest. We enable
* preemption after calling kvm_guest_exit() so that if we get
* preemption after calling guest_exit() so that if we get
* preempted we make sure ticks after that is not counted as
* guest time.
*/
kvm_guest_exit();
guest_exit();
trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
/*
......@@ -1039,7 +1040,6 @@ long kvm_arch_vm_ioctl(struct file *filp,
static void cpu_init_hyp_mode(void *dummy)
{
phys_addr_t boot_pgd_ptr;
phys_addr_t pgd_ptr;
unsigned long hyp_stack_ptr;
unsigned long stack_page;
......@@ -1048,13 +1048,12 @@ static void cpu_init_hyp_mode(void *dummy)
/* Switch from the HYP stub to our own HYP init vector */
__hyp_set_vectors(kvm_get_idmap_vector());
boot_pgd_ptr = kvm_mmu_get_boot_httbr();
pgd_ptr = kvm_mmu_get_httbr();
stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
hyp_stack_ptr = stack_page + PAGE_SIZE;
vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
__cpu_init_stage2();
kvm_arm_init_debug();
......@@ -1076,15 +1075,9 @@ static void cpu_hyp_reinit(void)
static void cpu_hyp_reset(void)
{
phys_addr_t boot_pgd_ptr;
phys_addr_t phys_idmap_start;
if (!is_kernel_in_hyp_mode()) {
boot_pgd_ptr = kvm_mmu_get_boot_httbr();
phys_idmap_start = kvm_get_idmap_start();
__cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start);
}
if (!is_kernel_in_hyp_mode())
__cpu_reset_hyp_mode(hyp_default_vectors,
kvm_get_idmap_start());
}
static void _kvm_arch_hardware_enable(void *discard)
......@@ -1294,14 +1287,14 @@ static int init_hyp_mode(void)
* Map the Hyp-code called directly from the host
*/
err = create_hyp_mappings(kvm_ksym_ref(__hyp_text_start),
kvm_ksym_ref(__hyp_text_end));
kvm_ksym_ref(__hyp_text_end), PAGE_HYP_EXEC);
if (err) {
kvm_err("Cannot map world-switch code\n");
goto out_err;
}
err = create_hyp_mappings(kvm_ksym_ref(__start_rodata),
kvm_ksym_ref(__end_rodata));
kvm_ksym_ref(__end_rodata), PAGE_HYP_RO);
if (err) {
kvm_err("Cannot map rodata section\n");
goto out_err;
......@@ -1312,7 +1305,8 @@ static int init_hyp_mode(void)
*/
for_each_possible_cpu(cpu) {
char *stack_page = (char *)per_cpu(kvm_arm_hyp_stack_page, cpu);
err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE);
err = create_hyp_mappings(stack_page, stack_page + PAGE_SIZE,
PAGE_HYP);
if (err) {
kvm_err("Cannot map hyp stack\n");
......@@ -1324,7 +1318,7 @@ static int init_hyp_mode(void)
kvm_cpu_context_t *cpu_ctxt;
cpu_ctxt = per_cpu_ptr(kvm_host_cpu_state, cpu);
err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1);
err = create_hyp_mappings(cpu_ctxt, cpu_ctxt + 1, PAGE_HYP);
if (err) {
kvm_err("Cannot map host CPU state: %d\n", err);
......@@ -1332,10 +1326,6 @@ static int init_hyp_mode(void)
}
}
#ifndef CONFIG_HOTPLUG_CPU
free_boot_hyp_pgd();
#endif
/* set size of VMID supported by CPU */
kvm_vmid_bits = kvm_get_vmid_bits();
kvm_info("%d-bit VMID\n", kvm_vmid_bits);
......
......@@ -210,7 +210,7 @@ bool kvm_condition_valid(struct kvm_vcpu *vcpu)
* @vcpu: The VCPU pointer
*
* When exceptions occur while instructions are executed in Thumb IF-THEN
* blocks, the ITSTATE field of the CPSR is not advanved (updated), so we have
* blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
* to do this little bit of work manually. The fields map like this:
*
* IT[7:0] -> CPSR[26:25],CPSR[15:10]
......
......@@ -182,7 +182,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
/**
* kvm_arm_copy_reg_indices - get indices of all registers.
*
* We do core registers right here, then we apppend coproc regs.
* We do core registers right here, then we append coproc regs.
*/
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
......
......@@ -32,23 +32,13 @@
* r2,r3 = Hypervisor pgd pointer
*
* The init scenario is:
* - We jump in HYP with four parameters: boot HYP pgd, runtime HYP pgd,
* runtime stack, runtime vectors
* - Enable the MMU with the boot pgd
* - Jump to a target into the trampoline page (remember, this is the same
* physical page!)
* - Now switch to the runtime pgd (same VA, and still the same physical
* page!)
* - We jump in HYP with 3 parameters: runtime HYP pgd, runtime stack,
* runtime vectors
* - Invalidate TLBs
* - Set stack and vectors
* - Setup the page tables
* - Enable the MMU
* - Profit! (or eret, if you only care about the code).
*
* As we only have four registers available to pass parameters (and we
* need six), we split the init in two phases:
* - Phase 1: r0 = 0, r1 = 0, r2,r3 contain the boot PGD.
* Provides the basic HYP init, and enable the MMU.
* - Phase 2: r0 = ToS, r1 = vectors, r2,r3 contain the runtime PGD.
* Switches to the runtime PGD, set stack and vectors.
*/
.text
......@@ -68,8 +58,11 @@ __kvm_hyp_init:
W(b) .
__do_hyp_init:
cmp r0, #0 @ We have a SP?
bne phase2 @ Yes, second stage init
@ Set stack pointer
mov sp, r0
@ Set HVBAR to point to the HYP vectors
mcr p15, 4, r1, c12, c0, 0 @ HVBAR
@ Set the HTTBR to point to the hypervisor PGD pointer passed
mcrr p15, 4, rr_lo_hi(r2, r3), c2
......@@ -114,34 +107,25 @@ __do_hyp_init:
THUMB( ldr r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE) )
orr r1, r1, r2
orr r0, r0, r1
isb
mcr p15, 4, r0, c1, c0, 0 @ HSCR
isb
@ End of init phase-1
eret
phase2:
@ Set stack pointer
mov sp, r0
@ Set HVBAR to point to the HYP vectors
mcr p15, 4, r1, c12, c0, 0 @ HVBAR
@ Jump to the trampoline page
ldr r0, =TRAMPOLINE_VA
adr r1, target
bfi r0, r1, #0, #PAGE_SHIFT
ret r0
@ r0 : stub vectors address
ENTRY(__kvm_hyp_reset)
/* We're now in idmap, disable MMU */
mrc p15, 4, r1, c1, c0, 0 @ HSCTLR
ldr r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I)
bic r1, r1, r2
mcr p15, 4, r1, c1, c0, 0 @ HSCTLR
target: @ We're now in the trampoline code, switch page tables
mcrr p15, 4, rr_lo_hi(r2, r3), c2
/* Install stub vectors */
mcr p15, 4, r0, c12, c0, 0 @ HVBAR
isb
@ Invalidate the old TLBs
mcr p15, 4, r0, c8, c7, 0 @ TLBIALLH
dsb ish
eret
ENDPROC(__kvm_hyp_reset)
.ltorg
......
......@@ -32,8 +32,6 @@
#include "trace.h"
extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
static pgd_t *boot_hyp_pgd;
static pgd_t *hyp_pgd;
static pgd_t *merged_hyp_pgd;
......@@ -483,28 +481,6 @@ static void unmap_hyp_range(pgd_t *pgdp, phys_addr_t start, u64 size)
} while (pgd++, addr = next, addr != end);
}
/**
* free_boot_hyp_pgd - free HYP boot page tables
*
* Free the HYP boot page tables. The bounce page is also freed.
*/
void free_boot_hyp_pgd(void)
{
mutex_lock(&kvm_hyp_pgd_mutex);
if (boot_hyp_pgd) {
unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
unmap_hyp_range(boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
boot_hyp_pgd = NULL;
}
if (hyp_pgd)
unmap_hyp_range(hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
mutex_unlock(&kvm_hyp_pgd_mutex);
}
/**
* free_hyp_pgds - free Hyp-mode page tables
*
......@@ -519,15 +495,20 @@ void free_hyp_pgds(void)
{
unsigned long addr;
free_boot_hyp_pgd();
mutex_lock(&kvm_hyp_pgd_mutex);
if (boot_hyp_pgd) {
unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
boot_hyp_pgd = NULL;
}
if (hyp_pgd) {
unmap_hyp_range(hyp_pgd, hyp_idmap_start, PAGE_SIZE);
for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
unmap_hyp_range(hyp_pgd, kern_hyp_va(addr), PGDIR_SIZE);
for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
unmap_hyp_range(hyp_pgd, kern_hyp_va(addr), PGDIR_SIZE);
free_pages((unsigned long)hyp_pgd, hyp_pgd_order);
hyp_pgd = NULL;
......@@ -679,17 +660,18 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
* create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
* @from: The virtual kernel start address of the range
* @to: The virtual kernel end address of the range (exclusive)
* @prot: The protection to be applied to this range
*
* The same virtual address as the kernel virtual address is also used
* in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
* physical pages.
*/
int create_hyp_mappings(void *from, void *to)
int create_hyp_mappings(void *from, void *to, pgprot_t prot)
{
phys_addr_t phys_addr;
unsigned long virt_addr;
unsigned long start = KERN_TO_HYP((unsigned long)from);
unsigned long end = KERN_TO_HYP((unsigned long)to);
unsigned long start = kern_hyp_va((unsigned long)from);
unsigned long end = kern_hyp_va((unsigned long)to);
if (is_kernel_in_hyp_mode())
return 0;
......@@ -704,7 +686,7 @@ int create_hyp_mappings(void *from, void *to)
err = __create_hyp_mappings(hyp_pgd, virt_addr,
virt_addr + PAGE_SIZE,
__phys_to_pfn(phys_addr),
PAGE_HYP);
prot);
if (err)
return err;
}
......@@ -723,8 +705,8 @@ int create_hyp_mappings(void *from, void *to)
*/
int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
{
unsigned long start = KERN_TO_HYP((unsigned long)from);
unsigned long end = KERN_TO_HYP((unsigned long)to);
unsigned long start = kern_hyp_va((unsigned long)from);
unsigned long end = kern_hyp_va((unsigned long)to);
if (is_kernel_in_hyp_mode())
return 0;
......@@ -1687,14 +1669,6 @@ phys_addr_t kvm_mmu_get_httbr(void)
return virt_to_phys(hyp_pgd);
}
phys_addr_t kvm_mmu_get_boot_httbr(void)
{
if (__kvm_cpu_uses_extended_idmap())
return virt_to_phys(merged_hyp_pgd);
else
return virt_to_phys(boot_hyp_pgd);
}
phys_addr_t kvm_get_idmap_vector(void)
{
return hyp_idmap_vector;
......@@ -1705,6 +1679,22 @@ phys_addr_t kvm_get_idmap_start(void)
return hyp_idmap_start;
}
static int kvm_map_idmap_text(pgd_t *pgd)
{
int err;
/* Create the idmap in the boot page tables */
err = __create_hyp_mappings(pgd,
hyp_idmap_start, hyp_idmap_end,
__phys_to_pfn(hyp_idmap_start),
PAGE_HYP_EXEC);
if (err)
kvm_err("Failed to idmap %lx-%lx\n",
hyp_idmap_start, hyp_idmap_end);
return err;
}
int kvm_mmu_init(void)
{
int err;
......@@ -1719,28 +1709,41 @@ int kvm_mmu_init(void)
*/
BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
kvm_info("IDMAP page: %lx\n", hyp_idmap_start);
kvm_info("HYP VA range: %lx:%lx\n",
kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
if (!hyp_pgd || !boot_hyp_pgd) {
kvm_err("Hyp mode PGD not allocated\n");
err = -ENOMEM;
if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
hyp_idmap_start < kern_hyp_va(~0UL)) {
/*
* The idmap page is intersecting with the VA space,
* it is not safe to continue further.
*/
kvm_err("IDMAP intersecting with HYP VA, unable to continue\n");
err = -EINVAL;
goto out;
}
/* Create the idmap in the boot page tables */
err = __create_hyp_mappings(boot_hyp_pgd,
hyp_idmap_start, hyp_idmap_end,
__phys_to_pfn(hyp_idmap_start),
PAGE_HYP);
if (err) {
kvm_err("Failed to idmap %lx-%lx\n",
hyp_idmap_start, hyp_idmap_end);
hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, hyp_pgd_order);
if (!hyp_pgd) {
kvm_err("Hyp mode PGD not allocated\n");
err = -ENOMEM;
goto out;
}
if (__kvm_cpu_uses_extended_idmap()) {
boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
hyp_pgd_order);
if (!boot_hyp_pgd) {
kvm_err("Hyp boot PGD not allocated\n");
err = -ENOMEM;
goto out;
}
err = kvm_map_idmap_text(boot_hyp_pgd);
if (err)
goto out;
merged_hyp_pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
if (!merged_hyp_pgd) {
kvm_err("Failed to allocate extra HYP pgd\n");
......@@ -1748,29 +1751,10 @@ int kvm_mmu_init(void)
}
__kvm_extend_hypmap(boot_hyp_pgd, hyp_pgd, merged_hyp_pgd,
hyp_idmap_start);
return 0;
}
/* Map the very same page at the trampoline VA */
err = __create_hyp_mappings(boot_hyp_pgd,
TRAMPOLINE_VA, TRAMPOLINE_VA + PAGE_SIZE,
__phys_to_pfn(hyp_idmap_start),
PAGE_HYP);
if (err) {
kvm_err("Failed to map trampoline @%lx into boot HYP pgd\n",
TRAMPOLINE_VA);
goto out;
}
/* Map the same page again into the runtime page tables */
err = __create_hyp_mappings(hyp_pgd,
TRAMPOLINE_VA, TRAMPOLINE_VA + PAGE_SIZE,
__phys_to_pfn(hyp_idmap_start),
PAGE_HYP);
if (err) {
kvm_err("Failed to map trampoline @%lx into runtime HYP pgd\n",
TRAMPOLINE_VA);
goto out;
} else {
err = kvm_map_idmap_text(hyp_pgd);
if (err)
goto out;
}
return 0;
......
......@@ -52,7 +52,7 @@ static const struct kvm_irq_level cortexa_vtimer_irq = {
* @vcpu: The VCPU pointer
*
* This function finds the right table above and sets the registers on the
* virtual CPU struct to their architectually defined reset values.
* virtual CPU struct to their architecturally defined reset values.
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
......
......@@ -36,8 +36,9 @@
#define ARM64_HAS_VIRT_HOST_EXTN 11
#define ARM64_WORKAROUND_CAVIUM_27456 12
#define ARM64_HAS_32BIT_EL0 13
#define ARM64_HYP_OFFSET_LOW 14
#define ARM64_NCAPS 14
#define ARM64_NCAPS 15
#ifndef __ASSEMBLY__
......
......@@ -178,7 +178,7 @@
/* Hyp System Trap Register */
#define HSTR_EL2_T(x) (1 << x)
/* Hyp Coproccessor Trap Register Shifts */
/* Hyp Coprocessor Trap Register Shifts */
#define CPTR_EL2_TFP_SHIFT 10
/* Hyp Coprocessor Trap Register */
......
......@@ -47,8 +47,7 @@
int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
int kvm_arch_dev_ioctl_check_extension(long ext);
unsigned long kvm_hyp_reset_entry(void);
int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
struct kvm_arch {
......@@ -348,8 +347,7 @@ int kvm_perf_teardown(void);
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
phys_addr_t pgd_ptr,
static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr,
unsigned long hyp_stack_ptr,
unsigned long vector_ptr)
{
......@@ -357,19 +355,14 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
* Call initialization code, and switch to the full blown
* HYP code.
*/
__kvm_call_hyp((void *)boot_pgd_ptr, pgd_ptr,
hyp_stack_ptr, vector_ptr);
__kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
}
static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
void __kvm_hyp_teardown(void);
static inline void __cpu_reset_hyp_mode(unsigned long vector_ptr,
phys_addr_t phys_idmap_start)
{
/*
* Call reset code, and switch back to stub hyp vectors.
* Uses __kvm_call_hyp() to avoid kaslr's kvm_ksym_ref() translation.
*/
__kvm_call_hyp((void *)kvm_hyp_reset_entry(),
boot_pgd_ptr, phys_idmap_start);
kvm_call_hyp(__kvm_hyp_teardown, phys_idmap_start);
}
static inline void kvm_arch_hardware_unsetup(void) {}
......
......@@ -25,29 +25,6 @@
#define __hyp_text __section(.hyp.text) notrace
static inline unsigned long __kern_hyp_va(unsigned long v)
{
asm volatile(ALTERNATIVE("and %0, %0, %1",
"nop",
ARM64_HAS_VIRT_HOST_EXTN)
: "+r" (v) : "i" (HYP_PAGE_OFFSET_MASK));
return v;
}
#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
static inline unsigned long __hyp_kern_va(unsigned long v)
{
u64 offset = PAGE_OFFSET - HYP_PAGE_OFFSET;
asm volatile(ALTERNATIVE("add %0, %0, %1",
"nop",
ARM64_HAS_VIRT_HOST_EXTN)
: "+r" (v) : "r" (offset));
return v;
}
#define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
#define read_sysreg_elx(r,nvh,vh) \
({ \
u64 reg; \
......
......@@ -29,21 +29,48 @@
*
* Instead, give the HYP mode its own VA region at a fixed offset from
* the kernel by just masking the top bits (which are all ones for a
* kernel address).
* kernel address). We need to find out how many bits to mask.
*
* ARMv8.1 (using VHE) does have a TTBR1_EL2, and doesn't use these
* macros (the entire kernel runs at EL2).
* We want to build a set of page tables that cover both parts of the
* idmap (the trampoline page used to initialize EL2), and our normal
* runtime VA space, at the same time.
*
* Given that the kernel uses VA_BITS for its entire address space,
* and that half of that space (VA_BITS - 1) is used for the linear
* mapping, we can also limit the EL2 space to (VA_BITS - 1).
*
* The main question is "Within the VA_BITS space, does EL2 use the
* top or the bottom half of that space to shadow the kernel's linear
* mapping?". As we need to idmap the trampoline page, this is
* determined by the range in which this page lives.
*
* If the page is in the bottom half, we have to use the top half. If
* the page is in the top half, we have to use the bottom half:
*
* T = __virt_to_phys(__hyp_idmap_text_start)
* if (T & BIT(VA_BITS - 1))
* HYP_VA_MIN = 0 //idmap in upper half
* else
* HYP_VA_MIN = 1 << (VA_BITS - 1)
* HYP_VA_MAX = HYP_VA_MIN + (1 << (VA_BITS - 1)) - 1
*
* This of course assumes that the trampoline page exists within the
* VA_BITS range. If it doesn't, then it means we're in the odd case
* where the kernel idmap (as well as HYP) uses more levels than the
* kernel runtime page tables (as seen when the kernel is configured
* for 4k pages, 39bits VA, and yet memory lives just above that
* limit, forcing the idmap to use 4 levels of page tables while the
* kernel itself only uses 3). In this particular case, it doesn't
* matter which side of VA_BITS we use, as we're guaranteed not to
* conflict with anything.
*
* When using VHE, there are no separate hyp mappings and all KVM
* functionality is already mapped as part of the main kernel
* mappings, and none of this applies in that case.
*/
#define HYP_PAGE_OFFSET_SHIFT VA_BITS
#define HYP_PAGE_OFFSET_MASK ((UL(1) << HYP_PAGE_OFFSET_SHIFT) - 1)
#define HYP_PAGE_OFFSET (PAGE_OFFSET & HYP_PAGE_OFFSET_MASK)
/*
* Our virtual mapping for the idmap-ed MMU-enable code. Must be
* shared across all the page-tables. Conveniently, we use the last
* possible page, where no kernel mapping will ever exist.
*/
#define TRAMPOLINE_VA (HYP_PAGE_OFFSET_MASK & PAGE_MASK)
#define HYP_PAGE_OFFSET_HIGH_MASK ((UL(1) << VA_BITS) - 1)
#define HYP_PAGE_OFFSET_LOW_MASK ((UL(1) << (VA_BITS - 1)) - 1)
#ifdef __ASSEMBLY__
......@@ -53,13 +80,33 @@
/*
* Convert a kernel VA into a HYP VA.
* reg: VA to be converted.
*
* This generates the following sequences:
* - High mask:
* and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
* nop
* - Low mask:
* and x0, x0, #HYP_PAGE_OFFSET_HIGH_MASK
* and x0, x0, #HYP_PAGE_OFFSET_LOW_MASK
* - VHE:
* nop
* nop
*
* The "low mask" version works because the mask is a strict subset of
* the "high mask", hence performing the first mask for nothing.
* Should be completely invisible on any viable CPU.
*/
.macro kern_hyp_va reg
alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
and \reg, \reg, #HYP_PAGE_OFFSET_MASK
alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
and \reg, \reg, #HYP_PAGE_OFFSET_HIGH_MASK
alternative_else
nop
alternative_endif
alternative_if_not ARM64_HYP_OFFSET_LOW
nop
alternative_else
and \reg, \reg, #HYP_PAGE_OFFSET_LOW_MASK
alternative_endif
.endm
#else
......@@ -70,7 +117,22 @@ alternative_endif
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#define KERN_TO_HYP(kva) ((unsigned long)kva - PAGE_OFFSET + HYP_PAGE_OFFSET)
static inline unsigned long __kern_hyp_va(unsigned long v)
{
asm volatile(ALTERNATIVE("and %0, %0, %1",
"nop",
ARM64_HAS_VIRT_HOST_EXTN)
: "+r" (v)
: "i" (HYP_PAGE_OFFSET_HIGH_MASK));
asm volatile(ALTERNATIVE("nop",
"and %0, %0, %1",
ARM64_HYP_OFFSET_LOW)
: "+r" (v)
: "i" (HYP_PAGE_OFFSET_LOW_MASK));
return v;
}
#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
/*
* We currently only support a 40bit IPA.
......@@ -81,9 +143,8 @@ alternative_endif
#include <asm/stage2_pgtable.h>
int create_hyp_mappings(void *from, void *to);
int create_hyp_mappings(void *from, void *to, pgprot_t prot);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
void free_hyp_pgds(void);
void stage2_unmap_vm(struct kvm *kvm);
......@@ -97,7 +158,6 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_mmu_get_boot_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
......
......@@ -164,6 +164,7 @@
#define PTE_CONT (_AT(pteval_t, 1) << 52) /* Contiguous range */
#define PTE_PXN (_AT(pteval_t, 1) << 53) /* Privileged XN */
#define PTE_UXN (_AT(pteval_t, 1) << 54) /* User XN */
#define PTE_HYP_XN (_AT(pteval_t, 1) << 54) /* HYP XN */
/*
* AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
......
......@@ -55,7 +55,9 @@
#define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
#define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
#define PAGE_HYP __pgprot(_PAGE_DEFAULT | PTE_HYP)
#define PAGE_HYP __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
#define PAGE_HYP_EXEC __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
#define PAGE_HYP_RO __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
#define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
#define PAGE_S2 __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
......
......@@ -87,6 +87,10 @@ extern void verify_cpu_run_el(void);
static inline void verify_cpu_run_el(void) {}
#endif
/* The section containing the hypervisor idmap text */
extern char __hyp_idmap_text_start[];
extern char __hyp_idmap_text_end[];
/* The section containing the hypervisor text */
extern char __hyp_text_start[];
extern char __hyp_text_end[];
......
......@@ -87,9 +87,11 @@ struct kvm_regs {
/* Supported VGICv3 address types */
#define KVM_VGIC_V3_ADDR_TYPE_DIST 2
#define KVM_VGIC_V3_ADDR_TYPE_REDIST 3
#define KVM_VGIC_ITS_ADDR_TYPE 4
#define KVM_VGIC_V3_DIST_SIZE SZ_64K
#define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
#define KVM_VGIC_V3_ITS_SIZE (2 * SZ_64K)
#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
#define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
......
......@@ -726,6 +726,19 @@ static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused
return is_kernel_in_hyp_mode();
}
static bool hyp_offset_low(const struct arm64_cpu_capabilities *entry,
int __unused)
{
phys_addr_t idmap_addr = virt_to_phys(__hyp_idmap_text_start);
/*
* Activate the lower HYP offset only if:
* - the idmap doesn't clash with it,
* - the kernel is not running at EL2.
*/
return idmap_addr > GENMASK(VA_BITS - 2, 0) && !is_kernel_in_hyp_mode();
}
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
......@@ -803,6 +816,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64PFR0_EL0_SHIFT,
.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
},
{
.desc = "Reduced HYP mapping offset",
.capability = ARM64_HYP_OFFSET_LOW,
.def_scope = SCOPE_SYSTEM,
.matches = hyp_offset_low,
},
{},
};
......
......@@ -36,6 +36,7 @@ config KVM
select HAVE_KVM_IRQFD
select KVM_ARM_VGIC_V3
select KVM_ARM_PMU if HW_PERF_EVENTS
select HAVE_KVM_MSI
---help---
Support hosting virtualized guest machines.
We don't support KVM with 16K page tables yet, due to the multiple
......@@ -54,13 +55,6 @@ config KVM_ARM_PMU
Adds support for a virtual Performance Monitoring Unit (PMU) in
virtual machines.
config KVM_NEW_VGIC
bool "New VGIC implementation"
depends on KVM
default y
---help---
uses the new VGIC implementation
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
......@@ -20,7 +20,6 @@ kvm-$(CONFIG_KVM_ARM_HOST) += emulate.o inject_fault.o regmap.o
kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
ifeq ($(CONFIG_KVM_NEW_VGIC),y)
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-init.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-irqfd.o
......@@ -30,12 +29,6 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v2.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-kvm-device.o
else
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
endif
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-its.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
......@@ -211,7 +211,7 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu)
/**
* kvm_arm_copy_reg_indices - get indices of all registers.
*
* We do core registers right here, then we apppend system regs.
* We do core registers right here, then we append system regs.
*/
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
......
......@@ -53,10 +53,9 @@ __invalid:
b .
/*
* x0: HYP boot pgd
* x1: HYP pgd
* x2: HYP stack
* x3: HYP vectors
* x0: HYP pgd
* x1: HYP stack
* x2: HYP vectors
*/
__do_hyp_init:
......@@ -110,71 +109,27 @@ __do_hyp_init:
msr sctlr_el2, x4
isb
/* Skip the trampoline dance if we merged the boot and runtime PGDs */
cmp x0, x1
b.eq merged
/* MMU is now enabled. Get ready for the trampoline dance */
ldr x4, =TRAMPOLINE_VA
adr x5, target
bfi x4, x5, #0, #PAGE_SHIFT
br x4
target: /* We're now in the trampoline code, switch page tables */
msr ttbr0_el2, x1
isb
/* Invalidate the old TLBs */
tlbi alle2
dsb sy
merged:
/* Set the stack and new vectors */
kern_hyp_va x1
mov sp, x1
kern_hyp_va x2
mov sp, x2
kern_hyp_va x3
msr vbar_el2, x3
msr vbar_el2, x2
/* Hello, World! */
eret
ENDPROC(__kvm_hyp_init)
/*
* Reset kvm back to the hyp stub. This is the trampoline dance in
* reverse. If kvm used an extended idmap, __extended_idmap_trampoline
* calls this code directly in the idmap. In this case switching to the
* boot tables is a no-op.
*
* x0: HYP boot pgd
* x1: HYP phys_idmap_start
* Reset kvm back to the hyp stub.
*/
ENTRY(__kvm_hyp_reset)
/* We're in trampoline code in VA, switch back to boot page tables */
msr ttbr0_el2, x0
isb
/* Ensure the PA branch doesn't find a stale tlb entry or stale code. */
ic iallu
tlbi alle2
dsb sy
isb
/* Branch into PA space */
adr x0, 1f
bfi x1, x0, #0, #PAGE_SHIFT
br x1
/* We're now in idmap, disable MMU */
1: mrs x0, sctlr_el2
mrs x0, sctlr_el2
ldr x1, =SCTLR_ELx_FLAGS
bic x0, x0, x1 // Clear SCTL_M and etc
msr sctlr_el2, x0
isb
/* Invalidate the old TLBs */
tlbi alle2
dsb sy
/* Install stub vectors */
adr_l x0, __hyp_stub_vectors
msr vbar_el2, x0
......
......@@ -164,22 +164,3 @@ alternative_endif
eret
ENDPROC(__fpsimd_guest_restore)
/*
* When using the extended idmap, we don't have a trampoline page we can use
* while we switch pages tables during __kvm_hyp_reset. Accessing the idmap
* directly would be ideal, but if we're using the extended idmap then the
* idmap is located above HYP_PAGE_OFFSET, and the address will be masked by
* kvm_call_hyp using kern_hyp_va.
*
* x0: HYP boot pgd
* x1: HYP phys_idmap_start
*/
ENTRY(__extended_idmap_trampoline)
mov x4, x1
adr_l x3, __kvm_hyp_reset
/* insert __kvm_hyp_reset()s offset into phys_idmap_start */
bfi x4, x3, #0, #PAGE_SHIFT
br x4
ENDPROC(__extended_idmap_trampoline)
......@@ -62,6 +62,21 @@ ENTRY(__vhe_hyp_call)
isb
ret
ENDPROC(__vhe_hyp_call)
/*
* Compute the idmap address of __kvm_hyp_reset based on the idmap
* start passed as a parameter, and jump there.
*
* x0: HYP phys_idmap_start
*/
ENTRY(__kvm_hyp_teardown)
mov x4, x0
adr_l x3, __kvm_hyp_reset
/* insert __kvm_hyp_reset()s offset into phys_idmap_start */
bfi x4, x3, #0, #PAGE_SHIFT
br x4
ENDPROC(__kvm_hyp_teardown)
el1_sync: // Guest trapped into EL2
save_x0_to_x3
......
......@@ -299,9 +299,16 @@ static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%
static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par)
{
unsigned long str_va = (unsigned long)__hyp_panic_string;
unsigned long str_va;
__hyp_do_panic(hyp_kern_va(str_va),
/*
* Force the panic string to be loaded from the literal pool,
* making sure it is a kernel address and not a PC-relative
* reference.
*/
asm volatile("ldr %0, =__hyp_panic_string" : "=r" (str_va));
__hyp_do_panic(str_va,
spsr, elr,
read_sysreg(esr_el2), read_sysreg_el2(far),
read_sysreg(hpfar_el2), par,
......
......@@ -65,7 +65,7 @@ static bool cpu_has_32bit_el1(void)
* We currently assume that the number of HW registers is uniform
* across all CPUs (see cpuinfo_sanity_check).
*/
int kvm_arch_dev_ioctl_check_extension(long ext)
int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext)
{
int r;
......@@ -86,6 +86,12 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
case KVM_CAP_VCPU_ATTRIBUTES:
r = 1;
break;
case KVM_CAP_MSI_DEVID:
if (!kvm)
r = -EINVAL;
else
r = kvm->arch.vgic.msis_require_devid;
break;
default:
r = 0;
}
......@@ -98,7 +104,7 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
* @vcpu: The VCPU pointer
*
* This function finds the right table above and sets the registers on
* the virtual CPU struct to their architectually defined reset
* the virtual CPU struct to their architecturally defined reset
* values.
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
......@@ -132,31 +138,3 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset timer */
return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
}
extern char __hyp_idmap_text_start[];
unsigned long kvm_hyp_reset_entry(void)
{
if (!__kvm_cpu_uses_extended_idmap()) {
unsigned long offset;
/*
* Find the address of __kvm_hyp_reset() in the trampoline page.
* This is present in the running page tables, and the boot page
* tables, so we call the code here to start the trampoline
* dance in reverse.
*/
offset = (unsigned long)__kvm_hyp_reset
- ((unsigned long)__hyp_idmap_text_start & PAGE_MASK);
return TRAMPOLINE_VA + offset;
} else {
/*
* KVM is running with merged page tables, which don't have the
* trampoline page mapped. We know the idmap is still mapped,
* but can't be called into directly. Use
* __extended_idmap_trampoline to do the call.
*/
return (unsigned long)kvm_ksym_ref(__extended_idmap_trampoline);
}
}
......@@ -1546,7 +1546,7 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
struct sys_reg_params *params)
{
u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
int cp;
int cp = -1;
switch(hsr_ec) {
case ESR_ELx_EC_CP15_32:
......@@ -1558,7 +1558,7 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
cp = 14;
break;
default:
WARN_ON((cp = -1));
WARN_ON(1);
}
kvm_err("Unsupported guest CP%d access at: %08lx\n",
......
......@@ -1488,6 +1488,7 @@ config CPU_MIPS64_R2
select CPU_SUPPORTS_HIGHMEM
select CPU_SUPPORTS_HUGEPAGES
select CPU_SUPPORTS_MSA
select HAVE_KVM
help
Choose this option to build a kernel for release 2 or later of the
MIPS64 architecture. Many modern embedded systems with a 64-bit
......@@ -1505,6 +1506,7 @@ config CPU_MIPS64_R6
select CPU_SUPPORTS_MSA
select GENERIC_CSUM
select MIPS_O32_FP64_SUPPORT if MIPS32_O32
select HAVE_KVM
help
Choose this option to build a kernel for release 6 or later of the
MIPS64 architecture. New MIPS processors, starting with the Warrior
......
......@@ -45,7 +45,7 @@
/*
* Returns the kernel segment base of a given address
*/
#define KSEGX(a) ((_ACAST32_ (a)) & 0xe0000000)
#define KSEGX(a) ((_ACAST32_(a)) & _ACAST32_(0xe0000000))
/*
* Returns the physical address of a CKSEGx / XKPHYS address
......
......@@ -55,7 +55,7 @@
#define cpu_has_mipsmt 0
#define cpu_has_vint 0
#define cpu_has_veic 0
#define cpu_hwrena_impl_bits 0xc0000000
#define cpu_hwrena_impl_bits (MIPS_HWRENA_IMPL1 | MIPS_HWRENA_IMPL2)
#define cpu_has_wsbh 1
#define cpu_has_rixi (cpu_data[0].cputype != CPU_CAVIUM_OCTEON)
......
......@@ -53,7 +53,7 @@
#define CP0_SEGCTL2 $5, 4
#define CP0_WIRED $6
#define CP0_INFO $7
#define CP0_HWRENA $7, 0
#define CP0_HWRENA $7
#define CP0_BADVADDR $8
#define CP0_BADINSTR $8, 1
#define CP0_COUNT $9
......@@ -533,6 +533,7 @@
#define TX49_CONF_CWFON (_ULCAST_(1) << 27)
/* Bits specific to the MIPS32/64 PRA. */
#define MIPS_CONF_VI (_ULCAST_(1) << 3)
#define MIPS_CONF_MT (_ULCAST_(7) << 7)
#define MIPS_CONF_MT_TLB (_ULCAST_(1) << 7)
#define MIPS_CONF_MT_FTLB (_ULCAST_(4) << 7)
......@@ -853,6 +854,24 @@
#define MIPS_CDMMBASE_ADDR_SHIFT 11
#define MIPS_CDMMBASE_ADDR_START 15
/* RDHWR register numbers */
#define MIPS_HWR_CPUNUM 0 /* CPU number */
#define MIPS_HWR_SYNCISTEP 1 /* SYNCI step size */
#define MIPS_HWR_CC 2 /* Cycle counter */
#define MIPS_HWR_CCRES 3 /* Cycle counter resolution */
#define MIPS_HWR_ULR 29 /* UserLocal */
#define MIPS_HWR_IMPL1 30 /* Implementation dependent */
#define MIPS_HWR_IMPL2 31 /* Implementation dependent */
/* Bits in HWREna register */
#define MIPS_HWRENA_CPUNUM (_ULCAST_(1) << MIPS_HWR_CPUNUM)
#define MIPS_HWRENA_SYNCISTEP (_ULCAST_(1) << MIPS_HWR_SYNCISTEP)
#define MIPS_HWRENA_CC (_ULCAST_(1) << MIPS_HWR_CC)
#define MIPS_HWRENA_CCRES (_ULCAST_(1) << MIPS_HWR_CCRES)
#define MIPS_HWRENA_ULR (_ULCAST_(1) << MIPS_HWR_ULR)
#define MIPS_HWRENA_IMPL1 (_ULCAST_(1) << MIPS_HWR_IMPL1)
#define MIPS_HWRENA_IMPL2 (_ULCAST_(1) << MIPS_HWR_IMPL2)
/*
* Bitfields in the TX39 family CP0 Configuration Register 3
*/
......
......@@ -21,6 +21,7 @@ extern void *set_vi_handler(int n, vi_handler_t addr);
extern void *set_except_vector(int n, void *addr);
extern unsigned long ebase;
extern unsigned int hwrena;
extern void per_cpu_trap_init(bool);
extern void cpu_cache_init(void);
......
......@@ -104,8 +104,13 @@ Ip_u1s2(_bltz);
Ip_u1s2(_bltzl);
Ip_u1u2s3(_bne);
Ip_u2s3u1(_cache);
Ip_u1u2(_cfc1);
Ip_u2u1(_cfcmsa);
Ip_u1u2(_ctc1);
Ip_u2u1(_ctcmsa);
Ip_u2u1s3(_daddiu);
Ip_u3u1u2(_daddu);
Ip_u1(_di);
Ip_u2u1msbu3(_dins);
Ip_u2u1msbu3(_dinsm);
Ip_u1u2(_divu);
......@@ -141,6 +146,8 @@ Ip_u1(_mfhi);
Ip_u1(_mflo);
Ip_u1u2u3(_mtc0);
Ip_u1u2u3(_mthc0);
Ip_u1(_mthi);
Ip_u1(_mtlo);
Ip_u3u1u2(_mul);
Ip_u3u1u2(_or);
Ip_u2u1u3(_ori);
......
......@@ -21,20 +21,20 @@
enum major_op {
spec_op, bcond_op, j_op, jal_op,
beq_op, bne_op, blez_op, bgtz_op,
addi_op, cbcond0_op = addi_op, addiu_op, slti_op, sltiu_op,
addi_op, pop10_op = addi_op, addiu_op, slti_op, sltiu_op,
andi_op, ori_op, xori_op, lui_op,
cop0_op, cop1_op, cop2_op, cop1x_op,
beql_op, bnel_op, blezl_op, bgtzl_op,
daddi_op, cbcond1_op = daddi_op, daddiu_op, ldl_op, ldr_op,
daddi_op, pop30_op = daddi_op, daddiu_op, ldl_op, ldr_op,
spec2_op, jalx_op, mdmx_op, msa_op = mdmx_op, spec3_op,
lb_op, lh_op, lwl_op, lw_op,
lbu_op, lhu_op, lwr_op, lwu_op,
sb_op, sh_op, swl_op, sw_op,
sdl_op, sdr_op, swr_op, cache_op,
ll_op, lwc1_op, lwc2_op, bc6_op = lwc2_op, pref_op,
lld_op, ldc1_op, ldc2_op, beqzcjic_op = ldc2_op, ld_op,
lld_op, ldc1_op, ldc2_op, pop66_op = ldc2_op, ld_op,
sc_op, swc1_op, swc2_op, balc6_op = swc2_op, major_3b_op,
scd_op, sdc1_op, sdc2_op, bnezcjialc_op = sdc2_op, sd_op
scd_op, sdc1_op, sdc2_op, pop76_op = sdc2_op, sd_op
};
/*
......@@ -92,6 +92,50 @@ enum spec3_op {
rdhwr_op = 0x3b
};
/*
* Bits 10-6 minor opcode for r6 spec mult/div encodings
*/
enum mult_op {
mult_mult_op = 0x0,
mult_mul_op = 0x2,
mult_muh_op = 0x3,
};
enum multu_op {
multu_multu_op = 0x0,
multu_mulu_op = 0x2,
multu_muhu_op = 0x3,
};
enum div_op {
div_div_op = 0x0,
div_div6_op = 0x2,
div_mod_op = 0x3,
};
enum divu_op {
divu_divu_op = 0x0,
divu_divu6_op = 0x2,
divu_modu_op = 0x3,
};
enum dmult_op {
dmult_dmult_op = 0x0,
dmult_dmul_op = 0x2,
dmult_dmuh_op = 0x3,
};
enum dmultu_op {
dmultu_dmultu_op = 0x0,
dmultu_dmulu_op = 0x2,
dmultu_dmuhu_op = 0x3,
};
enum ddiv_op {
ddiv_ddiv_op = 0x0,
ddiv_ddiv6_op = 0x2,
ddiv_dmod_op = 0x3,
};
enum ddivu_op {
ddivu_ddivu_op = 0x0,
ddivu_ddivu6_op = 0x2,
ddivu_dmodu_op = 0x3,
};
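Editor's note: these bits-10..6 minor opcodes ride on top of the classic SPECIAL funct codes, which is how the uasm table later in this merge encodes the r6 MUL (the insn_mul entry under CONFIG_CPU_MIPSR6). A hand-rolled encoder sketch, assuming the standard MIPS32r6 SPECIAL layout with funct 0x18 for MULT/MUL and 0x1a for DIV/MOD; this is an illustration, not the kernel's uasm implementation:

#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative SPECIAL-format r6 encoder:
 * op | rs | rt | rd | minor(10..6) | funct.
 */
static uint32_t mips_r6_special(unsigned int rs, unsigned int rt,
				unsigned int rd, unsigned int minor,
				unsigned int funct)
{
	return (0u << 26) | (rs << 21) | (rt << 16) | (rd << 11) |
	       (minor << 6) | funct;
}

int main(void)
{
	/* mul $v0, $a0, $a1: funct 0x18 (mult_op), minor 0x2 (mult_mul_op). */
	uint32_t mul = mips_r6_special(4, 5, 2, 0x2, 0x18);
	/* mod $v0, $a0, $a1: funct 0x1a (div_op), minor 0x3 (div_mod_op). */
	uint32_t mod = mips_r6_special(4, 5, 2, 0x3, 0x1a);

	printf("mul: 0x%08x\nmod: 0x%08x\n", (unsigned int)mul, (unsigned int)mod);
	return 0;
}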
/*
* rt field of bcond opcodes.
*/
......@@ -103,7 +147,7 @@ enum rt_op {
bltzal_op, bgezal_op, bltzall_op, bgezall_op,
rt_op_0x14, rt_op_0x15, rt_op_0x16, rt_op_0x17,
rt_op_0x18, rt_op_0x19, rt_op_0x1a, rt_op_0x1b,
bposge32_op, rt_op_0x1d, rt_op_0x1e, rt_op_0x1f
bposge32_op, rt_op_0x1d, rt_op_0x1e, synci_op
};
/*
......@@ -237,6 +281,21 @@ enum bshfl_func {
seh_op = 0x18,
};
/*
* MSA minor opcodes.
*/
enum msa_func {
msa_elm_op = 0x19,
};
/*
* MSA ELM opcodes.
*/
enum msa_elm {
msa_ctc_op = 0x3e,
msa_cfc_op = 0x7e,
};
/*
* func field for MSA MI10 format.
*/
......@@ -264,7 +323,7 @@ enum mm_major_op {
mm_pool32b_op, mm_pool16b_op, mm_lhu16_op, mm_andi16_op,
mm_addiu32_op, mm_lhu32_op, mm_sh32_op, mm_lh32_op,
mm_pool32i_op, mm_pool16c_op, mm_lwsp16_op, mm_pool16d_op,
mm_ori32_op, mm_pool32f_op, mm_reserved1_op, mm_reserved2_op,
mm_ori32_op, mm_pool32f_op, mm_pool32s_op, mm_reserved2_op,
mm_pool32c_op, mm_lwgp16_op, mm_lw16_op, mm_pool16e_op,
mm_xori32_op, mm_jals32_op, mm_addiupc_op, mm_reserved3_op,
mm_reserved4_op, mm_pool16f_op, mm_sb16_op, mm_beqz16_op,
......@@ -360,7 +419,10 @@ enum mm_32axf_minor_op {
mm_mflo32_op = 0x075,
mm_jalrhb_op = 0x07c,
mm_tlbwi_op = 0x08d,
mm_mthi32_op = 0x0b5,
mm_tlbwr_op = 0x0cd,
mm_mtlo32_op = 0x0f5,
mm_di_op = 0x11d,
mm_jalrs_op = 0x13c,
mm_jalrshb_op = 0x17c,
mm_sync_op = 0x1ad,
......@@ -478,6 +540,13 @@ enum mm_32f_73_minor_op {
mm_fcvts1_op = 0xed,
};
/*
* (microMIPS) POOL32S minor opcodes.
*/
enum mm_32s_minor_op {
mm_32s_elm_op = 0x16,
};
/*
* (microMIPS) POOL16C minor opcodes.
*/
......@@ -586,6 +655,36 @@ struct r_format { /* Register format */
;))))))
};
struct c0r_format { /* C0 register format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int rs : 5,
__BITFIELD_FIELD(unsigned int rt : 5,
__BITFIELD_FIELD(unsigned int rd : 5,
__BITFIELD_FIELD(unsigned int z: 8,
__BITFIELD_FIELD(unsigned int sel : 3,
;))))))
};
struct mfmc0_format { /* MFMC0 register format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int rs : 5,
__BITFIELD_FIELD(unsigned int rt : 5,
__BITFIELD_FIELD(unsigned int rd : 5,
__BITFIELD_FIELD(unsigned int re : 5,
__BITFIELD_FIELD(unsigned int sc : 1,
__BITFIELD_FIELD(unsigned int : 2,
__BITFIELD_FIELD(unsigned int sel : 3,
;))))))))
};
struct co_format { /* C0 CO format */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int co : 1,
__BITFIELD_FIELD(unsigned int code : 19,
__BITFIELD_FIELD(unsigned int func : 6,
;))))
};
struct p_format { /* Performance counter format (R10000) */
__BITFIELD_FIELD(unsigned int opcode : 6,
__BITFIELD_FIELD(unsigned int rs : 5,
......@@ -937,6 +1036,9 @@ union mips_instruction {
struct u_format u_format;
struct c_format c_format;
struct r_format r_format;
struct c0r_format c0r_format;
struct mfmc0_format mfmc0_format;
struct co_format co_format;
struct p_format p_format;
struct f_format f_format;
struct ma_format ma_format;
......
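Editor's note: the new c0r_format view makes the rd/sel extraction that kvm/dyntrans.c used to do with open-coded shifts ((inst >> 11) & 0x1f, inst & 0x7) self-describing. A user-space sketch of the same field positions, done with explicit shifts to stay endian-neutral; the example opcode word is mfc0 $k0, $12, 0, i.e. a CP0 Status read:

#include <stdio.h>
#include <stdint.h>

/* Field positions matching struct c0r_format: op|rs|rt|rd|zero(8)|sel(3). */
struct c0r {
	unsigned int opcode, rs, rt, rd, sel;
};

static struct c0r decode_c0r(uint32_t inst)
{
	struct c0r d;

	d.opcode = (inst >> 26) & 0x3f;	/* cop0_op */
	d.rs     = (inst >> 21) & 0x1f;	/* mfc_op=0, dmfc_op=1, mtc_op=4, dmtc_op=5 */
	d.rt     = (inst >> 16) & 0x1f;	/* GPR operand */
	d.rd     = (inst >> 11) & 0x1f;	/* CP0 register number */
	d.sel    =  inst        & 0x7;	/* CP0 select */
	return d;
}

int main(void)
{
	/* mfc0 $k0, $12, 0: read CP0 Status into GPR 26. */
	struct c0r d = decode_c0r(0x401a6000);

	printf("op=%u rs=%u rt=%u rd=%u sel=%u\n",
	       d.opcode, d.rs, d.rt, d.rd, d.sel);
	return 0;
}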
......@@ -339,71 +339,9 @@ void output_pm_defines(void)
}
#endif
void output_cpuinfo_defines(void)
{
COMMENT(" MIPS cpuinfo offsets. ");
DEFINE(CPUINFO_SIZE, sizeof(struct cpuinfo_mips));
#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
OFFSET(CPUINFO_ASID_MASK, cpuinfo_mips, asid_mask);
#endif
}
void output_kvm_defines(void)
{
COMMENT(" KVM/MIPS Specfic offsets. ");
DEFINE(VCPU_ARCH_SIZE, sizeof(struct kvm_vcpu_arch));
OFFSET(VCPU_RUN, kvm_vcpu, run);
OFFSET(VCPU_HOST_ARCH, kvm_vcpu, arch);
OFFSET(VCPU_HOST_EBASE, kvm_vcpu_arch, host_ebase);
OFFSET(VCPU_GUEST_EBASE, kvm_vcpu_arch, guest_ebase);
OFFSET(VCPU_HOST_STACK, kvm_vcpu_arch, host_stack);
OFFSET(VCPU_HOST_GP, kvm_vcpu_arch, host_gp);
OFFSET(VCPU_HOST_CP0_BADVADDR, kvm_vcpu_arch, host_cp0_badvaddr);
OFFSET(VCPU_HOST_CP0_CAUSE, kvm_vcpu_arch, host_cp0_cause);
OFFSET(VCPU_HOST_EPC, kvm_vcpu_arch, host_cp0_epc);
OFFSET(VCPU_HOST_ENTRYHI, kvm_vcpu_arch, host_cp0_entryhi);
OFFSET(VCPU_GUEST_INST, kvm_vcpu_arch, guest_inst);
OFFSET(VCPU_R0, kvm_vcpu_arch, gprs[0]);
OFFSET(VCPU_R1, kvm_vcpu_arch, gprs[1]);
OFFSET(VCPU_R2, kvm_vcpu_arch, gprs[2]);
OFFSET(VCPU_R3, kvm_vcpu_arch, gprs[3]);
OFFSET(VCPU_R4, kvm_vcpu_arch, gprs[4]);
OFFSET(VCPU_R5, kvm_vcpu_arch, gprs[5]);
OFFSET(VCPU_R6, kvm_vcpu_arch, gprs[6]);
OFFSET(VCPU_R7, kvm_vcpu_arch, gprs[7]);
OFFSET(VCPU_R8, kvm_vcpu_arch, gprs[8]);
OFFSET(VCPU_R9, kvm_vcpu_arch, gprs[9]);
OFFSET(VCPU_R10, kvm_vcpu_arch, gprs[10]);
OFFSET(VCPU_R11, kvm_vcpu_arch, gprs[11]);
OFFSET(VCPU_R12, kvm_vcpu_arch, gprs[12]);
OFFSET(VCPU_R13, kvm_vcpu_arch, gprs[13]);
OFFSET(VCPU_R14, kvm_vcpu_arch, gprs[14]);
OFFSET(VCPU_R15, kvm_vcpu_arch, gprs[15]);
OFFSET(VCPU_R16, kvm_vcpu_arch, gprs[16]);
OFFSET(VCPU_R17, kvm_vcpu_arch, gprs[17]);
OFFSET(VCPU_R18, kvm_vcpu_arch, gprs[18]);
OFFSET(VCPU_R19, kvm_vcpu_arch, gprs[19]);
OFFSET(VCPU_R20, kvm_vcpu_arch, gprs[20]);
OFFSET(VCPU_R21, kvm_vcpu_arch, gprs[21]);
OFFSET(VCPU_R22, kvm_vcpu_arch, gprs[22]);
OFFSET(VCPU_R23, kvm_vcpu_arch, gprs[23]);
OFFSET(VCPU_R24, kvm_vcpu_arch, gprs[24]);
OFFSET(VCPU_R25, kvm_vcpu_arch, gprs[25]);
OFFSET(VCPU_R26, kvm_vcpu_arch, gprs[26]);
OFFSET(VCPU_R27, kvm_vcpu_arch, gprs[27]);
OFFSET(VCPU_R28, kvm_vcpu_arch, gprs[28]);
OFFSET(VCPU_R29, kvm_vcpu_arch, gprs[29]);
OFFSET(VCPU_R30, kvm_vcpu_arch, gprs[30]);
OFFSET(VCPU_R31, kvm_vcpu_arch, gprs[31]);
OFFSET(VCPU_LO, kvm_vcpu_arch, lo);
OFFSET(VCPU_HI, kvm_vcpu_arch, hi);
OFFSET(VCPU_PC, kvm_vcpu_arch, pc);
BLANK();
OFFSET(VCPU_FPR0, kvm_vcpu_arch, fpu.fpr[0]);
OFFSET(VCPU_FPR1, kvm_vcpu_arch, fpu.fpr[1]);
......@@ -441,14 +379,6 @@ void output_kvm_defines(void)
OFFSET(VCPU_FCR31, kvm_vcpu_arch, fpu.fcr31);
OFFSET(VCPU_MSA_CSR, kvm_vcpu_arch, fpu.msacsr);
BLANK();
OFFSET(VCPU_COP0, kvm_vcpu_arch, cop0);
OFFSET(VCPU_GUEST_KERNEL_ASID, kvm_vcpu_arch, guest_kernel_asid);
OFFSET(VCPU_GUEST_USER_ASID, kvm_vcpu_arch, guest_user_asid);
OFFSET(COP0_TLB_HI, mips_coproc, reg[MIPS_CP0_TLB_HI][0]);
OFFSET(COP0_STATUS, mips_coproc, reg[MIPS_CP0_STATUS][0]);
BLANK();
}
#ifdef CONFIG_MIPS_CPS
......
......@@ -790,7 +790,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
epc += 4 + (insn.i_format.simmediate << 2);
regs->cp0_epc = epc;
break;
case beqzcjic_op:
case pop66_op:
if (!cpu_has_mips_r6) {
ret = -SIGILL;
break;
......@@ -798,7 +798,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
/* Compact branch: BEQZC || JIC */
regs->cp0_epc += 8;
break;
case bnezcjialc_op:
case pop76_op:
if (!cpu_has_mips_r6) {
ret = -SIGILL;
break;
......@@ -809,8 +809,8 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
regs->cp0_epc += 8;
break;
#endif
case cbcond0_op:
case cbcond1_op:
case pop10_op:
case pop30_op:
/* Only valid for MIPS R6 */
if (!cpu_has_mips_r6) {
ret = -SIGILL;
......
......@@ -619,17 +619,17 @@ static int simulate_rdhwr(struct pt_regs *regs, int rd, int rt)
perf_sw_event(PERF_COUNT_SW_EMULATION_FAULTS,
1, regs, 0);
switch (rd) {
case 0: /* CPU number */
case MIPS_HWR_CPUNUM: /* CPU number */
regs->regs[rt] = smp_processor_id();
return 0;
case 1: /* SYNCI length */
case MIPS_HWR_SYNCISTEP: /* SYNCI length */
regs->regs[rt] = min(current_cpu_data.dcache.linesz,
current_cpu_data.icache.linesz);
return 0;
case 2: /* Read count register */
case MIPS_HWR_CC: /* Read count register */
regs->regs[rt] = read_c0_count();
return 0;
case 3: /* Count register resolution */
case MIPS_HWR_CCRES: /* Count register resolution */
switch (current_cpu_type()) {
case CPU_20KC:
case CPU_25KF:
......@@ -639,7 +639,7 @@ static int simulate_rdhwr(struct pt_regs *regs, int rd, int rt)
regs->regs[rt] = 2;
}
return 0;
case 29:
case MIPS_HWR_ULR: /* Read UserLocal register */
regs->regs[rt] = ti->tp_value;
return 0;
default:
......@@ -1859,6 +1859,7 @@ void __noreturn nmi_exception_handler(struct pt_regs *regs)
#define VECTORSPACING 0x100 /* for EI/VI mode */
unsigned long ebase;
EXPORT_SYMBOL_GPL(ebase);
unsigned long exception_handlers[32];
unsigned long vi_handlers[64];
......@@ -2063,16 +2064,22 @@ static void configure_status(void)
status_set);
}
unsigned int hwrena;
EXPORT_SYMBOL_GPL(hwrena);
/* configure HWRENA register */
static void configure_hwrena(void)
{
unsigned int hwrena = cpu_hwrena_impl_bits;
hwrena = cpu_hwrena_impl_bits;
if (cpu_has_mips_r2_r6)
hwrena |= 0x0000000f;
hwrena |= MIPS_HWRENA_CPUNUM |
MIPS_HWRENA_SYNCISTEP |
MIPS_HWRENA_CC |
MIPS_HWRENA_CCRES;
if (!noulri && cpu_has_userlocal)
hwrena |= (1 << 29);
hwrena |= MIPS_HWRENA_ULR;
if (hwrena)
write_c0_hwrena(hwrena);
......
......@@ -17,6 +17,7 @@ if VIRTUALIZATION
config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
depends on HAVE_KVM
select EXPORT_UASM
select PREEMPT_NOTIFIERS
select ANON_INODES
select KVM_MMIO
......
......@@ -7,9 +7,10 @@ EXTRA_CFLAGS += -Ivirt/kvm -Iarch/mips/kvm
common-objs-$(CONFIG_CPU_HAS_MSA) += msa.o
kvm-objs := $(common-objs-y) mips.o emulate.o locore.o \
kvm-objs := $(common-objs-y) mips.o emulate.o entry.o \
interrupt.o stats.o commpage.o \
dyntrans.o trap_emul.o fpu.o
kvm-objs += mmu.o
obj-$(CONFIG_KVM) += kvm.o
obj-y += callback.o tlb.o
......@@ -4,7 +4,7 @@
* for more details.
*
* commpage, currently used for Virtual COP0 registers.
* Mapped into the guest kernel @ 0x0.
* Mapped into the guest kernel @ KVM_GUEST_COMMPAGE_ADDR.
*
* Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
* Authors: Sanjay Lal <sanjayl@kymasys.com>
......
......@@ -11,6 +11,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/highmem.h>
#include <linux/kvm_host.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
......@@ -20,125 +21,114 @@
#include "commpage.h"
#define SYNCI_TEMPLATE 0x041f0000
#define SYNCI_BASE(x) (((x) >> 21) & 0x1f)
#define SYNCI_OFFSET ((x) & 0xffff)
/**
* kvm_mips_trans_replace() - Replace trapping instruction in guest memory.
* @vcpu: Virtual CPU.
* @opc: PC of instruction to replace.
* @replace: Instruction to write
*/
static int kvm_mips_trans_replace(struct kvm_vcpu *vcpu, u32 *opc,
union mips_instruction replace)
{
unsigned long paddr, flags;
void *vaddr;
if (KVM_GUEST_KSEGX((unsigned long)opc) == KVM_GUEST_KSEG0) {
paddr = kvm_mips_translate_guest_kseg0_to_hpa(vcpu,
(unsigned long)opc);
vaddr = kmap_atomic(pfn_to_page(PHYS_PFN(paddr)));
vaddr += paddr & ~PAGE_MASK;
memcpy(vaddr, (void *)&replace, sizeof(u32));
local_flush_icache_range((unsigned long)vaddr,
(unsigned long)vaddr + 32);
kunmap_atomic(vaddr);
} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
local_irq_save(flags);
memcpy((void *)opc, (void *)&replace, sizeof(u32));
local_flush_icache_range((unsigned long)opc,
(unsigned long)opc + 32);
local_irq_restore(flags);
} else {
kvm_err("%s: Invalid address: %p\n", __func__, opc);
return -EFAULT;
}
#define LW_TEMPLATE 0x8c000000
#define CLEAR_TEMPLATE 0x00000020
#define SW_TEMPLATE 0xac000000
return 0;
}
int kvm_mips_trans_cache_index(uint32_t inst, uint32_t *opc,
int kvm_mips_trans_cache_index(union mips_instruction inst, u32 *opc,
struct kvm_vcpu *vcpu)
{
int result = 0;
unsigned long kseg0_opc;
uint32_t synci_inst = 0x0;
union mips_instruction nop_inst = { 0 };
/* Replace the CACHE instruction, with a NOP */
kseg0_opc =
CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
(vcpu, (unsigned long) opc));
memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
return result;
return kvm_mips_trans_replace(vcpu, opc, nop_inst);
}
/*
* Address based CACHE instructions are transformed into synci(s). A little
* heavy for just D-cache invalidates, but avoids an expensive trap
*/
int kvm_mips_trans_cache_va(uint32_t inst, uint32_t *opc,
int kvm_mips_trans_cache_va(union mips_instruction inst, u32 *opc,
struct kvm_vcpu *vcpu)
{
int result = 0;
unsigned long kseg0_opc;
uint32_t synci_inst = SYNCI_TEMPLATE, base, offset;
base = (inst >> 21) & 0x1f;
offset = inst & 0xffff;
synci_inst |= (base << 21);
synci_inst |= offset;
kseg0_opc =
CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
(vcpu, (unsigned long) opc));
memcpy((void *)kseg0_opc, (void *)&synci_inst, sizeof(uint32_t));
local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
return result;
union mips_instruction synci_inst = { 0 };
synci_inst.i_format.opcode = bcond_op;
synci_inst.i_format.rs = inst.i_format.rs;
synci_inst.i_format.rt = synci_op;
if (cpu_has_mips_r6)
synci_inst.i_format.simmediate = inst.spec3_format.simmediate;
else
synci_inst.i_format.simmediate = inst.i_format.simmediate;
return kvm_mips_trans_replace(vcpu, opc, synci_inst);
}
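Editor's note: the rewrite above turns an address-based CACHE into SYNCI by reusing the guest's base register and offset; SYNCI is a REGIMM encoding (major opcode bcond_op with rt = synci_op = 0x1f), which is also what the old SYNCI_TEMPLATE value 0x041f0000 spelled out. A stand-alone sketch of that assembly, assuming only those field positions:

#include <stdio.h>
#include <stdint.h>

/* Build SYNCI offset(base): REGIMM major opcode 1, rt field 0x1f, 16-bit offset. */
static uint32_t encode_synci(unsigned int base, uint16_t offset)
{
	return (1u << 26) | (base << 21) | (0x1fu << 16) | offset;
}

int main(void)
{
	/* synci 0($zero) reproduces the old SYNCI_TEMPLATE value. */
	printf("synci 0($0)  = 0x%08x\n", (unsigned int)encode_synci(0, 0));
	/* synci 64($a0), base/offset as they would be lifted from a guest CACHE. */
	printf("synci 64($4) = 0x%08x\n", (unsigned int)encode_synci(4, 64));
	return 0;
}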
int kvm_mips_trans_mfc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
int kvm_mips_trans_mfc0(union mips_instruction inst, u32 *opc,
struct kvm_vcpu *vcpu)
{
int32_t rt, rd, sel;
uint32_t mfc0_inst;
unsigned long kseg0_opc, flags;
rt = (inst >> 16) & 0x1f;
rd = (inst >> 11) & 0x1f;
sel = inst & 0x7;
union mips_instruction mfc0_inst = { 0 };
u32 rd, sel;
if ((rd == MIPS_CP0_ERRCTL) && (sel == 0)) {
mfc0_inst = CLEAR_TEMPLATE;
mfc0_inst |= ((rt & 0x1f) << 16);
} else {
mfc0_inst = LW_TEMPLATE;
mfc0_inst |= ((rt & 0x1f) << 16);
mfc0_inst |= offsetof(struct kvm_mips_commpage,
cop0.reg[rd][sel]);
}
rd = inst.c0r_format.rd;
sel = inst.c0r_format.sel;
if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
kseg0_opc =
CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
(vcpu, (unsigned long) opc));
memcpy((void *)kseg0_opc, (void *)&mfc0_inst, sizeof(uint32_t));
local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
local_irq_save(flags);
memcpy((void *)opc, (void *)&mfc0_inst, sizeof(uint32_t));
local_flush_icache_range((unsigned long)opc,
(unsigned long)opc + 32);
local_irq_restore(flags);
if (rd == MIPS_CP0_ERRCTL && sel == 0) {
mfc0_inst.r_format.opcode = spec_op;
mfc0_inst.r_format.rd = inst.c0r_format.rt;
mfc0_inst.r_format.func = add_op;
} else {
kvm_err("%s: Invalid address: %p\n", __func__, opc);
return -EFAULT;
mfc0_inst.i_format.opcode = lw_op;
mfc0_inst.i_format.rt = inst.c0r_format.rt;
mfc0_inst.i_format.simmediate = KVM_GUEST_COMMPAGE_ADDR |
offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
#ifdef CONFIG_CPU_BIG_ENDIAN
if (sizeof(vcpu->arch.cop0->reg[0][0]) == 8)
mfc0_inst.i_format.simmediate |= 4;
#endif
}
return 0;
return kvm_mips_trans_replace(vcpu, opc, mfc0_inst);
}
int kvm_mips_trans_mtc0(uint32_t inst, uint32_t *opc, struct kvm_vcpu *vcpu)
int kvm_mips_trans_mtc0(union mips_instruction inst, u32 *opc,
struct kvm_vcpu *vcpu)
{
int32_t rt, rd, sel;
uint32_t mtc0_inst = SW_TEMPLATE;
unsigned long kseg0_opc, flags;
rt = (inst >> 16) & 0x1f;
rd = (inst >> 11) & 0x1f;
sel = inst & 0x7;
mtc0_inst |= ((rt & 0x1f) << 16);
mtc0_inst |= offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
if (KVM_GUEST_KSEGX(opc) == KVM_GUEST_KSEG0) {
kseg0_opc =
CKSEG0ADDR(kvm_mips_translate_guest_kseg0_to_hpa
(vcpu, (unsigned long) opc));
memcpy((void *)kseg0_opc, (void *)&mtc0_inst, sizeof(uint32_t));
local_flush_icache_range(kseg0_opc, kseg0_opc + 32);
} else if (KVM_GUEST_KSEGX((unsigned long) opc) == KVM_GUEST_KSEG23) {
local_irq_save(flags);
memcpy((void *)opc, (void *)&mtc0_inst, sizeof(uint32_t));
local_flush_icache_range((unsigned long)opc,
(unsigned long)opc + 32);
local_irq_restore(flags);
} else {
kvm_err("%s: Invalid address: %p\n", __func__, opc);
return -EFAULT;
}
return 0;
union mips_instruction mtc0_inst = { 0 };
u32 rd, sel;
rd = inst.c0r_format.rd;
sel = inst.c0r_format.sel;
mtc0_inst.i_format.opcode = sw_op;
mtc0_inst.i_format.rt = inst.c0r_format.rt;
mtc0_inst.i_format.simmediate = KVM_GUEST_COMMPAGE_ADDR |
offsetof(struct kvm_mips_commpage, cop0.reg[rd][sel]);
#ifdef CONFIG_CPU_BIG_ENDIAN
if (sizeof(vcpu->arch.cop0->reg[0][0]) == 8)
mtc0_inst.i_format.simmediate |= 4;
#endif
return kvm_mips_trans_replace(vcpu, opc, mtc0_inst);
}
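Editor's note: the simmediate |= 4 in the big-endian branches here exists because the translated LW/SW moves only 32 bits, while a saved COP0 register in the commpage may occupy 8 bytes; on a big-endian host the low word of such a slot sits at byte offset 4. A quick host-side illustration (prints 4 on big-endian, 0 on little-endian):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	uint64_t reg = 0x00000000deadbeefull;	/* an 8-byte register slot */
	uint32_t words[2];
	int low_word_offset;

	memcpy(words, &reg, sizeof(words));
	/* Which 32-bit half of the slot holds the low bits of the value? */
	low_word_offset = (words[0] == 0xdeadbeefu) ? 0 : 4;

	printf("low 32 bits live at byte offset %d\n", low_word_offset);
	return 0;
}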
......@@ -14,13 +14,16 @@
#include <asm/mipsregs.h>
#include <asm/regdef.h>
/* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */
#undef fp
.set noreorder
.set noat
LEAF(__kvm_save_fpu)
.set push
.set mips64r2
SET_HARDFLOAT
.set fp=64
mfc0 t0, CP0_STATUS
sll t0, t0, 5 # is Status.FR set?
bgez t0, 1f # no: skip odd doubles
......@@ -63,8 +66,8 @@ LEAF(__kvm_save_fpu)
LEAF(__kvm_restore_fpu)
.set push
.set mips64r2
SET_HARDFLOAT
.set fp=64
mfc0 t0, CP0_STATUS
sll t0, t0, 5 # is Status.FR set?
bgez t0, 1f # no: skip odd doubles
......
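Editor's note: the sll t0, t0, 5 / bgez t0 pairs in __kvm_save_fpu and __kvm_restore_fpu test Status.FR by shifting it into the sign bit; FR is bit 26 of CP0_Status and 26 + 5 = 31. Equivalent C, with the bit position as the only assumption:

#include <stdio.h>
#include <stdint.h>

#define ST0_FR	(1u << 26)	/* Status.FR: 64-bit FPU register mode */

int main(void)
{
	uint32_t status = ST0_FR;	/* pretend FR is set */

	/* The asm trick: after << 5, FR sits in bit 31, which bgez tests. */
	unsigned int fr_by_shift = (status << 5) >> 31;
	unsigned int fr_by_mask  = (status & ST0_FR) ? 1 : 0;

	printf("shift test: %u, mask test: %u\n", fr_by_shift, fr_by_mask);
	return 0;
}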
......@@ -22,12 +22,12 @@
#include "interrupt.h"
void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, unsigned int priority)
{
set_bit(priority, &vcpu->arch.pending_exceptions);
}
void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority)
void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int priority)
{
clear_bit(priority, &vcpu->arch.pending_exceptions);
}
......@@ -114,10 +114,10 @@ void kvm_mips_dequeue_io_int_cb(struct kvm_vcpu *vcpu,
/* Deliver the interrupt of the corresponding priority, if possible. */
int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
uint32_t cause)
u32 cause)
{
int allowed = 0;
uint32_t exccode;
u32 exccode;
struct kvm_vcpu_arch *arch = &vcpu->arch;
struct mips_coproc *cop0 = vcpu->arch.cop0;
......@@ -196,12 +196,12 @@ int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
}
int kvm_mips_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
uint32_t cause)
u32 cause)
{
return 1;
}
void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause)
void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, u32 cause)
{
unsigned long *pending = &vcpu->arch.pending_exceptions;
unsigned long *pending_clr = &vcpu->arch.pending_exceptions_clr;
......
......@@ -28,17 +28,13 @@
#define MIPS_EXC_MAX 12
/* XXXSL More to follow */
extern char __kvm_mips_vcpu_run_end[];
extern char mips32_exception[], mips32_exceptionEnd[];
extern char mips32_GuestException[], mips32_GuestExceptionEnd[];
#define C_TI (_ULCAST_(1) << 30)
#define KVM_MIPS_IRQ_DELIVER_ALL_AT_ONCE (0)
#define KVM_MIPS_IRQ_CLEAR_ALL_AT_ONCE (0)
void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, uint32_t priority);
void kvm_mips_queue_irq(struct kvm_vcpu *vcpu, unsigned int priority);
void kvm_mips_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int priority);
int kvm_mips_pending_timer(struct kvm_vcpu *vcpu);
void kvm_mips_queue_timer_int_cb(struct kvm_vcpu *vcpu);
......@@ -48,7 +44,7 @@ void kvm_mips_queue_io_int_cb(struct kvm_vcpu *vcpu,
void kvm_mips_dequeue_io_int_cb(struct kvm_vcpu *vcpu,
struct kvm_mips_interrupt *irq);
int kvm_mips_irq_deliver_cb(struct kvm_vcpu *vcpu, unsigned int priority,
uint32_t cause);
u32 cause);
int kvm_mips_irq_clear_cb(struct kvm_vcpu *vcpu, unsigned int priority,
uint32_t cause);
void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, uint32_t cause);
u32 cause);
void kvm_mips_deliver_interrupts(struct kvm_vcpu *vcpu, u32 cause);
......@@ -11,27 +11,6 @@
#include <linux/kvm_host.h>
char *kvm_mips_exit_types_str[MAX_KVM_MIPS_EXIT_TYPES] = {
"WAIT",
"CACHE",
"Signal",
"Interrupt",
"COP0/1 Unusable",
"TLB Mod",
"TLB Miss (LD)",
"TLB Miss (ST)",
"Address Err (ST)",
"Address Error (LD)",
"System Call",
"Reserved Inst",
"Break Inst",
"Trap Inst",
"MSA FPE",
"FPE",
"MSA Disabled",
"D-Cache Flushes",
};
char *kvm_cop0_str[N_MIPS_COPROC_REGS] = {
"Index",
"Random",
......
......@@ -17,8 +17,75 @@
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE trace
/* Tracepoints for VM eists */
extern char *kvm_mips_exit_types_str[MAX_KVM_MIPS_EXIT_TYPES];
/*
* Tracepoints for VM enters
*/
DECLARE_EVENT_CLASS(kvm_transition,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu),
TP_STRUCT__entry(
__field(unsigned long, pc)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
),
TP_printk("PC: 0x%08lx",
__entry->pc)
);
DEFINE_EVENT(kvm_transition, kvm_enter,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
DEFINE_EVENT(kvm_transition, kvm_reenter,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
DEFINE_EVENT(kvm_transition, kvm_out,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
/* The first 32 exit reasons correspond to Cause.ExcCode */
#define KVM_TRACE_EXIT_INT 0
#define KVM_TRACE_EXIT_TLBMOD 1
#define KVM_TRACE_EXIT_TLBMISS_LD 2
#define KVM_TRACE_EXIT_TLBMISS_ST 3
#define KVM_TRACE_EXIT_ADDRERR_LD 4
#define KVM_TRACE_EXIT_ADDRERR_ST 5
#define KVM_TRACE_EXIT_SYSCALL 8
#define KVM_TRACE_EXIT_BREAK_INST 9
#define KVM_TRACE_EXIT_RESVD_INST 10
#define KVM_TRACE_EXIT_COP_UNUSABLE 11
#define KVM_TRACE_EXIT_TRAP_INST 13
#define KVM_TRACE_EXIT_MSA_FPE 14
#define KVM_TRACE_EXIT_FPE 15
#define KVM_TRACE_EXIT_MSA_DISABLED 21
/* Further exit reasons */
#define KVM_TRACE_EXIT_WAIT 32
#define KVM_TRACE_EXIT_CACHE 33
#define KVM_TRACE_EXIT_SIGNAL 34
/* Tracepoints for VM exits */
#define kvm_trace_symbol_exit_types \
{ KVM_TRACE_EXIT_INT, "Interrupt" }, \
{ KVM_TRACE_EXIT_TLBMOD, "TLB Mod" }, \
{ KVM_TRACE_EXIT_TLBMISS_LD, "TLB Miss (LD)" }, \
{ KVM_TRACE_EXIT_TLBMISS_ST, "TLB Miss (ST)" }, \
{ KVM_TRACE_EXIT_ADDRERR_LD, "Address Error (LD)" }, \
{ KVM_TRACE_EXIT_ADDRERR_ST, "Address Err (ST)" }, \
{ KVM_TRACE_EXIT_SYSCALL, "System Call" }, \
{ KVM_TRACE_EXIT_BREAK_INST, "Break Inst" }, \
{ KVM_TRACE_EXIT_RESVD_INST, "Reserved Inst" }, \
{ KVM_TRACE_EXIT_COP_UNUSABLE, "COP0/1 Unusable" }, \
{ KVM_TRACE_EXIT_TRAP_INST, "Trap Inst" }, \
{ KVM_TRACE_EXIT_MSA_FPE, "MSA FPE" }, \
{ KVM_TRACE_EXIT_FPE, "FPE" }, \
{ KVM_TRACE_EXIT_MSA_DISABLED, "MSA Disabled" }, \
{ KVM_TRACE_EXIT_WAIT, "WAIT" }, \
{ KVM_TRACE_EXIT_CACHE, "CACHE" }, \
{ KVM_TRACE_EXIT_SIGNAL, "Signal" }
TRACE_EVENT(kvm_exit,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
......@@ -34,10 +101,173 @@ TRACE_EVENT(kvm_exit,
),
TP_printk("[%s]PC: 0x%08lx",
kvm_mips_exit_types_str[__entry->reason],
__print_symbolic(__entry->reason,
kvm_trace_symbol_exit_types),
__entry->pc)
);
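Editor's note: because the first 32 KVM_TRACE_EXIT_* values are defined to equal Cause.ExcCode, the exit reason for a hardware exception can be read straight out of the guest's CP0_Cause (ExcCode occupies bits 6..2); only WAIT, CACHE and Signal need the out-of-band values 32..34. A sketch of that mapping, with the field position as the only assumption:

#include <stdio.h>
#include <stdint.h>

#define CAUSEB_EXC	2
#define CAUSEF_EXC	(0x1fu << CAUSEB_EXC)

/* The first 32 trace exit reasons mirror Cause.ExcCode (see defines above). */
static unsigned int exit_reason_from_cause(uint32_t cause)
{
	return (cause & CAUSEF_EXC) >> CAUSEB_EXC;
}

int main(void)
{
	uint32_t cause = 8u << CAUSEB_EXC;	/* ExcCode 8: system call */

	printf("exit reason = %u (KVM_TRACE_EXIT_SYSCALL)\n",
	       exit_reason_from_cause(cause));
	return 0;
}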
#define KVM_TRACE_MFC0 0
#define KVM_TRACE_MTC0 1
#define KVM_TRACE_DMFC0 2
#define KVM_TRACE_DMTC0 3
#define KVM_TRACE_RDHWR 4
#define KVM_TRACE_HWR_COP0 0
#define KVM_TRACE_HWR_HWR 1
#define KVM_TRACE_COP0(REG, SEL) ((KVM_TRACE_HWR_COP0 << 8) | \
((REG) << 3) | (SEL))
#define KVM_TRACE_HWR(REG, SEL) ((KVM_TRACE_HWR_HWR << 8) | \
((REG) << 3) | (SEL))
#define kvm_trace_symbol_hwr_ops \
{ KVM_TRACE_MFC0, "MFC0" }, \
{ KVM_TRACE_MTC0, "MTC0" }, \
{ KVM_TRACE_DMFC0, "DMFC0" }, \
{ KVM_TRACE_DMTC0, "DMTC0" }, \
{ KVM_TRACE_RDHWR, "RDHWR" }
#define kvm_trace_symbol_hwr_cop \
{ KVM_TRACE_HWR_COP0, "COP0" }, \
{ KVM_TRACE_HWR_HWR, "HWR" }
#define kvm_trace_symbol_hwr_regs \
{ KVM_TRACE_COP0( 0, 0), "Index" }, \
{ KVM_TRACE_COP0( 2, 0), "EntryLo0" }, \
{ KVM_TRACE_COP0( 3, 0), "EntryLo1" }, \
{ KVM_TRACE_COP0( 4, 0), "Context" }, \
{ KVM_TRACE_COP0( 4, 2), "UserLocal" }, \
{ KVM_TRACE_COP0( 5, 0), "PageMask" }, \
{ KVM_TRACE_COP0( 6, 0), "Wired" }, \
{ KVM_TRACE_COP0( 7, 0), "HWREna" }, \
{ KVM_TRACE_COP0( 8, 0), "BadVAddr" }, \
{ KVM_TRACE_COP0( 9, 0), "Count" }, \
{ KVM_TRACE_COP0(10, 0), "EntryHi" }, \
{ KVM_TRACE_COP0(11, 0), "Compare" }, \
{ KVM_TRACE_COP0(12, 0), "Status" }, \
{ KVM_TRACE_COP0(12, 1), "IntCtl" }, \
{ KVM_TRACE_COP0(12, 2), "SRSCtl" }, \
{ KVM_TRACE_COP0(13, 0), "Cause" }, \
{ KVM_TRACE_COP0(14, 0), "EPC" }, \
{ KVM_TRACE_COP0(15, 0), "PRId" }, \
{ KVM_TRACE_COP0(15, 1), "EBase" }, \
{ KVM_TRACE_COP0(16, 0), "Config" }, \
{ KVM_TRACE_COP0(16, 1), "Config1" }, \
{ KVM_TRACE_COP0(16, 2), "Config2" }, \
{ KVM_TRACE_COP0(16, 3), "Config3" }, \
{ KVM_TRACE_COP0(16, 4), "Config4" }, \
{ KVM_TRACE_COP0(16, 5), "Config5" }, \
{ KVM_TRACE_COP0(16, 7), "Config7" }, \
{ KVM_TRACE_COP0(26, 0), "ECC" }, \
{ KVM_TRACE_COP0(30, 0), "ErrorEPC" }, \
{ KVM_TRACE_COP0(31, 2), "KScratch1" }, \
{ KVM_TRACE_COP0(31, 3), "KScratch2" }, \
{ KVM_TRACE_COP0(31, 4), "KScratch3" }, \
{ KVM_TRACE_COP0(31, 5), "KScratch4" }, \
{ KVM_TRACE_COP0(31, 6), "KScratch5" }, \
{ KVM_TRACE_COP0(31, 7), "KScratch6" }, \
{ KVM_TRACE_HWR( 0, 0), "CPUNum" }, \
{ KVM_TRACE_HWR( 1, 0), "SYNCI_Step" }, \
{ KVM_TRACE_HWR( 2, 0), "CC" }, \
{ KVM_TRACE_HWR( 3, 0), "CCRes" }, \
{ KVM_TRACE_HWR(29, 0), "ULR" }
TRACE_EVENT(kvm_hwr,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op, unsigned int reg,
unsigned long val),
TP_ARGS(vcpu, op, reg, val),
TP_STRUCT__entry(
__field(unsigned long, val)
__field(u16, reg)
__field(u8, op)
),
TP_fast_assign(
__entry->val = val;
__entry->reg = reg;
__entry->op = op;
),
TP_printk("%s %s (%s:%u:%u) 0x%08lx",
__print_symbolic(__entry->op,
kvm_trace_symbol_hwr_ops),
__print_symbolic(__entry->reg,
kvm_trace_symbol_hwr_regs),
__print_symbolic(__entry->reg >> 8,
kvm_trace_symbol_hwr_cop),
(__entry->reg >> 3) & 0x1f,
__entry->reg & 0x7,
__entry->val)
);
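Editor's note: the kvm_hwr tracepoint packs its register identity as (cop << 8) | (reg << 3) | sel, and the TP_printk above unpacks it with the matching shifts. A self-contained round-trip of that encoding, constants copied from the macros above:

#include <stdio.h>

#define TRACE_HWR_COP0	0
#define TRACE_HWR_HWR	1

#define TRACE_COP0(reg, sel)	((TRACE_HWR_COP0 << 8) | ((reg) << 3) | (sel))
#define TRACE_HWR(reg, sel)	((TRACE_HWR_HWR << 8) | ((reg) << 3) | (sel))

static void unpack(unsigned int id)
{
	/* Same arithmetic as the TP_printk: cop, then reg, then sel. */
	printf("cop=%u reg=%u sel=%u\n", id >> 8, (id >> 3) & 0x1f, id & 0x7);
}

int main(void)
{
	unpack(TRACE_COP0(12, 0));	/* CP0 Status  -> cop=0 reg=12 sel=0 */
	unpack(TRACE_COP0(16, 3));	/* CP0 Config3 -> cop=0 reg=16 sel=3 */
	unpack(TRACE_HWR(29, 0));	/* RDHWR ULR   -> cop=1 reg=29 sel=0 */
	return 0;
}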
#define KVM_TRACE_AUX_RESTORE 0
#define KVM_TRACE_AUX_SAVE 1
#define KVM_TRACE_AUX_ENABLE 2
#define KVM_TRACE_AUX_DISABLE 3
#define KVM_TRACE_AUX_DISCARD 4
#define KVM_TRACE_AUX_FPU 1
#define KVM_TRACE_AUX_MSA 2
#define KVM_TRACE_AUX_FPU_MSA 3
#define kvm_trace_symbol_aux_op \
{ KVM_TRACE_AUX_RESTORE, "restore" }, \
{ KVM_TRACE_AUX_SAVE, "save" }, \
{ KVM_TRACE_AUX_ENABLE, "enable" }, \
{ KVM_TRACE_AUX_DISABLE, "disable" }, \
{ KVM_TRACE_AUX_DISCARD, "discard" }
#define kvm_trace_symbol_aux_state \
{ KVM_TRACE_AUX_FPU, "FPU" }, \
{ KVM_TRACE_AUX_MSA, "MSA" }, \
{ KVM_TRACE_AUX_FPU_MSA, "FPU & MSA" }
TRACE_EVENT(kvm_aux,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
unsigned int state),
TP_ARGS(vcpu, op, state),
TP_STRUCT__entry(
__field(unsigned long, pc)
__field(u8, op)
__field(u8, state)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
__entry->op = op;
__entry->state = state;
),
TP_printk("%s %s PC: 0x%08lx",
__print_symbolic(__entry->op,
kvm_trace_symbol_aux_op),
__print_symbolic(__entry->state,
kvm_trace_symbol_aux_state),
__entry->pc)
);
TRACE_EVENT(kvm_asid_change,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int old_asid,
unsigned int new_asid),
TP_ARGS(vcpu, old_asid, new_asid),
TP_STRUCT__entry(
__field(unsigned long, pc)
__field(u8, old_asid)
__field(u8, new_asid)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
__entry->old_asid = old_asid;
__entry->new_asid = new_asid;
),
TP_printk("PC: 0x%08lx old: 0x%02x new: 0x%02x",
__entry->pc,
__entry->old_asid,
__entry->new_asid)
);
#endif /* _TRACE_KVM_H */
/* This part must be outside protection */
......
......@@ -627,8 +627,8 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
dec_insn.pc_inc +
dec_insn.next_pc_inc;
return 1;
case cbcond0_op:
case cbcond1_op:
case pop10_op:
case pop30_op:
if (!cpu_has_mips_r6)
break;
if (insn.i_format.rt && !insn.i_format.rs)
......@@ -683,14 +683,14 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
dec_insn.next_pc_inc;
return 1;
case beqzcjic_op:
case pop66_op:
if (!cpu_has_mips_r6)
break;
*contpc = regs->cp0_epc + dec_insn.pc_inc +
dec_insn.next_pc_inc;
return 1;
case bnezcjialc_op:
case pop76_op:
if (!cpu_has_mips_r6)
break;
if (!insn.i_format.rs)
......
......@@ -1206,7 +1206,7 @@ static void probe_pcache(void)
c->icache.linesz;
c->icache.waybit = __ffs(icache_size/c->icache.ways);
if (config & 0x8) /* VI bit */
if (config & MIPS_CONF_VI)
c->icache.flags |= MIPS_CACHE_VTAG;
/*
......
......@@ -67,9 +67,14 @@ static struct insn insn_table[] = {
#else
{ insn_cache, M6(cache_op, 0, 0, 0, cache6_op), RS | RT | SIMM9 },
#endif
{ insn_cfc1, M(cop1_op, cfc_op, 0, 0, 0, 0), RT | RD },
{ insn_cfcmsa, M(msa_op, 0, msa_cfc_op, 0, 0, msa_elm_op), RD | RE },
{ insn_ctc1, M(cop1_op, ctc_op, 0, 0, 0, 0), RT | RD },
{ insn_ctcmsa, M(msa_op, 0, msa_ctc_op, 0, 0, msa_elm_op), RD | RE },
{ insn_daddiu, M(daddiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
{ insn_daddu, M(spec_op, 0, 0, 0, 0, daddu_op), RS | RT | RD },
{ insn_dinsm, M(spec3_op, 0, 0, 0, 0, dinsm_op), RS | RT | RD | RE },
{ insn_di, M(cop0_op, mfmc0_op, 0, 12, 0, 0), RT },
{ insn_dins, M(spec3_op, 0, 0, 0, 0, dins_op), RS | RT | RD | RE },
{ insn_divu, M(spec_op, 0, 0, 0, 0, divu_op), RS | RT },
{ insn_dmfc0, M(cop0_op, dmfc_op, 0, 0, 0, 0), RT | RD | SET},
......@@ -114,7 +119,13 @@ static struct insn insn_table[] = {
{ insn_mflo, M(spec_op, 0, 0, 0, 0, mflo_op), RD },
{ insn_mtc0, M(cop0_op, mtc_op, 0, 0, 0, 0), RT | RD | SET},
{ insn_mthc0, M(cop0_op, mthc0_op, 0, 0, 0, 0), RT | RD | SET},
{ insn_mthi, M(spec_op, 0, 0, 0, 0, mthi_op), RS },
{ insn_mtlo, M(spec_op, 0, 0, 0, 0, mtlo_op), RS },
#ifndef CONFIG_CPU_MIPSR6
{ insn_mul, M(spec2_op, 0, 0, 0, 0, mul_op), RS | RT | RD},
#else
{ insn_mul, M(spec_op, 0, 0, 0, mult_mul_op, mult_op), RS | RT | RD},
#endif
{ insn_ori, M(ori_op, 0, 0, 0, 0, 0), RS | RT | UIMM },
{ insn_or, M(spec_op, 0, 0, 0, 0, or_op), RS | RT | RD },
#ifndef CONFIG_CPU_MIPSR6
......
......@@ -302,7 +302,6 @@ int kvmppc_emulate_instruction(struct kvm_run *run, struct kvm_vcpu *vcpu)
advance = 0;
printk(KERN_ERR "Couldn't emulate instruction 0x%08x "
"(op %d xop %d)\n", inst, get_op(inst), get_xop(inst));
kvmppc_core_queue_program(vcpu, 0);
}
}
......