Commit 10dc3747 authored by Linus Torvalds

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM updates from Paolo Bonzini:
 "One of the largest releases for KVM...  Hardly any generic
  changes, but lots of architecture-specific updates.

  ARM:
   - VHE support so that we can run the kernel at EL2 on ARMv8.1 systems
   - PMU support for guests
   - 32bit world switch rewritten in C
   - various optimizations to the vgic save/restore code.

  PPC:
   - enabled KVM-VFIO integration ("VFIO device")
   - optimizations to speed up IPIs between vcpus
   - in-kernel handling of IOMMU hypercalls
   - support for dynamic DMA windows (DDW).

  s390:
   - provide the floating point registers via sync regs
   - separated instruction vs. data accesses
   - dirty log improvements for huge guests
   - bugfixes and documentation improvements.

  x86:
   - Hyper-V VMBus hypercall userspace exit
   - alternative implementation of lowest-priority interrupts using
     vector hashing (for better VT-d posted interrupt support)
   - fixed guest debugging with nested virtualizations
   - improved interrupt tracking in the in-kernel IOAPIC
   - generic infrastructure for tracking writes to guest
     memory - currently its only use is to speedup the legacy shadow
     paging (pre-EPT) case, but in the future it will be used for
     virtual GPUs as well
   - much cleanup (LAPIC, kvmclock, MMU, PIT), including ubsan fixes"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (217 commits)
  KVM: x86: remove eager_fpu field of struct kvm_vcpu_arch
  KVM: x86: disable MPX if host did not enable MPX XSAVE features
  arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit
  arm64: KVM: vgic-v3: Reset LRs at boot time
  arm64: KVM: vgic-v3: Do not save an LR known to be empty
  arm64: KVM: vgic-v3: Save maintenance interrupt state only if required
  arm64: KVM: vgic-v3: Avoid accessing ICH registers
  KVM: arm/arm64: vgic-v2: Make GICD_SGIR quicker to hit
  KVM: arm/arm64: vgic-v2: Only wipe LRs on vcpu exit
  KVM: arm/arm64: vgic-v2: Reset LRs at boot time
  KVM: arm/arm64: vgic-v2: Do not save an LR known to be empty
  KVM: arm/arm64: vgic-v2: Move GICH_ELRSR saving to its own function
  KVM: arm/arm64: vgic-v2: Save maintenance interrupt state only if required
  KVM: arm/arm64: vgic-v2: Avoid accessing GICH registers
  KVM: s390: allocate only one DMA page per VM
  KVM: s390: enable STFLE interpretation only if enabled for the guest
  KVM: s390: wake up when the VCPU cpu timer expires
  KVM: s390: step the VCPU timer while in enabled wait
  KVM: s390: protect VCPU cpu timer with a seqcount
  KVM: s390: step VCPU cpu timer during kvm_run ioctl
  ...
parents 047486d8 f958ee74
@@ -2507,8 +2507,9 @@ struct kvm_create_device {
4.80 KVM_SET_DEVICE_ATTR/KVM_GET_DEVICE_ATTR

Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
            KVM_CAP_VCPU_ATTRIBUTES for vcpu device
Type: device ioctl, vm ioctl, vcpu ioctl
Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
@@ -2533,8 +2534,9 @@ struct kvm_device_attr {
4.81 KVM_HAS_DEVICE_ATTR

Capability: KVM_CAP_DEVICE_CTRL, KVM_CAP_VM_ATTRIBUTES for vm device,
            KVM_CAP_VCPU_ATTRIBUTES for vcpu device
Type: device ioctl, vm ioctl, vcpu ioctl
Parameters: struct kvm_device_attr
Returns: 0 on success, -1 on error
Errors:
@@ -2577,6 +2579,8 @@ Possible features:
	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
	  Depends on KVM_CAP_ARM_PSCI_0_2.
	- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
	  Depends on KVM_CAP_ARM_PMU_V3.
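Since the same kvm_device_attr flow now works on a vcpu file descriptor,
userspace can probe an attribute before setting it. A minimal sketch,
assuming the standard uapi headers and an existing vcpu fd (error handling
elided):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Returns 0 if the vcpu exposes the given attribute group/attr,
	 * non-zero (with errno set, e.g. ENXIO) otherwise. */
	static int vcpu_has_attr(int vcpu_fd, __u32 group, __u64 attr)
	{
		struct kvm_device_attr dattr = {
			.group = group,
			.attr  = attr,
		};

		return ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &dattr);
	}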
4.83 KVM_ARM_PREFERRED_TARGET
@@ -3035,6 +3039,87 @@ Returns: 0 on success, -1 on error
Queues an SMI on the thread's vcpu.
4.97 KVM_CAP_PPC_MULTITCE

Capability: KVM_CAP_PPC_MULTITCE
Architectures: ppc
Type: vm

This capability means the kernel is capable of handling hypercalls
H_PUT_TCE_INDIRECT and H_STUFF_TCE without passing those into user
space. This significantly accelerates DMA operations for PPC KVM
guests. User space should expect that its handlers for these hypercalls
are not going to be called if user space previously registered LIOBN
in KVM (via KVM_CREATE_SPAPR_TCE or similar calls).

In order to enable H_PUT_TCE_INDIRECT and H_STUFF_TCE use in the guest,
user space might have to advertise it for the guest. For example,
an IBM pSeries (sPAPR) guest starts using them if "hcall-multi-tce" is
present in the "ibm,hypertas-functions" device-tree property.

The hypercalls mentioned above may or may not be processed successfully
in the kernel-based fast path. If they cannot be handled by the kernel,
they will get passed on to user space, so user space still has to
implement them despite the in-kernel acceleration.

This capability is always enabled.
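Before relying on the in-kernel path, a VMM would typically probe the
capability; a hedged sketch, with vm_fd obtained from KVM_CREATE_VM:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* A positive return means the kernel handles H_PUT_TCE_INDIRECT and
	 * H_STUFF_TCE itself; userspace must still keep its own handlers as
	 * the fallback described above. */
	static int multitce_in_kernel(int vm_fd)
	{
		return ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_MULTITCE) > 0;
	}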
4.98 KVM_CREATE_SPAPR_TCE_64

Capability: KVM_CAP_SPAPR_TCE_64
Architectures: powerpc
Type: vm ioctl
Parameters: struct kvm_create_spapr_tce_64 (in)
Returns: file descriptor for manipulating the created TCE table

This is an extension of KVM_CREATE_SPAPR_TCE (4.62), which only supports
32bit windows.

This capability uses an extended struct in the ioctl interface:

/* for KVM_CAP_SPAPR_TCE_64 */
struct kvm_create_spapr_tce_64 {
	__u64 liobn;
	__u32 page_shift;
	__u32 flags;
	__u64 offset;	/* in pages */
	__u64 size;	/* in pages */
};

The aim of this extension is to support an additional, bigger DMA window
with a variable page size. KVM_CREATE_SPAPR_TCE_64 receives a 64bit window
size, an IOMMU page shift and a bus offset of the corresponding DMA window;
@size and @offset are numbers of IOMMU pages.

@flags is not used at the moment.

The rest of the functionality is identical to KVM_CREATE_SPAPR_TCE.
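For illustration, creating a second, 64bit window could look like the sketch
below; the LIOBN, page shift and window geometry are made-up example values,
not ABI requirements:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int create_ddw(int vm_fd)
	{
		struct kvm_create_spapr_tce_64 args = {
			.liobn      = 0x80000001,		/* hypothetical LIOBN */
			.page_shift = 16,			/* 64K IOMMU pages */
			.flags      = 0,			/* unused, must be 0 */
			.offset     = (1ULL << 59) >> 16,	/* bus offset, in pages */
			.size       = (1ULL << 30) >> 16,	/* 1G window, in pages */
		};

		/* On success this returns an fd for manipulating the TCE table. */
		return ioctl(vm_fd, KVM_CREATE_SPAPR_TCE_64, &args);
	}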
4.99 KVM_REINJECT_CONTROL

Capability: KVM_CAP_REINJECT_CONTROL
Architectures: x86
Type: vm ioctl
Parameters: struct kvm_reinject_control (in)
Returns: 0 on success,
         -EFAULT if struct kvm_reinject_control cannot be read,
         -ENXIO if KVM_CREATE_PIT or KVM_CREATE_PIT2 didn't succeed earlier.

The i8254 (PIT) has two modes, reinject and !reinject. The default is
reinject, where KVM queues elapsed i8254 ticks and monitors completion of
interrupts from the vector(s) that i8254 injects. Reinject mode dequeues a
tick and injects its interrupt whenever there isn't a pending interrupt from
i8254. !reinject mode injects an interrupt as soon as a tick arrives.

struct kvm_reinject_control {
	__u8 pit_reinject;
	__u8 reserved[31];
};

pit_reinject = 0 (!reinject mode) is recommended, unless running an old
operating system that uses the PIT for timing (e.g. Linux 2.4.x).
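Following that recommendation, a VMM would switch the PIT out of reinject
mode right after creating it; a minimal sketch:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Assumes KVM_CREATE_PIT2 already succeeded on vm_fd; otherwise the
	 * ioctl fails with -ENXIO as documented above. */
	static int pit_disable_reinject(int vm_fd)
	{
		struct kvm_reinject_control ctl = { .pit_reinject = 0 };

		return ioctl(vm_fd, KVM_REINJECT_CONTROL, &ctl);
	}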
5. The kvm_run structure
------------------------
@@ -3339,6 +3424,7 @@ EOI was received.
		struct kvm_hyperv_exit {
#define KVM_EXIT_HYPERV_SYNIC          1
#define KVM_EXIT_HYPERV_HCALL          2
			__u32 type;
			union {
				struct {
@@ -3347,6 +3433,11 @@ EOI was received.
					__u64 evt_page;
					__u64 msg_page;
				} synic;
				struct {
					__u64 input;
					__u64 result;
					__u64 params[2];
				} hcall;
			} u;
		};
		/* KVM_EXIT_HYPERV */
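Userspace consuming this exit would dispatch on the new type in its run
loop; a hedged sketch, where `run` is the vcpu's mmap'ed struct kvm_run and
handle_hv_hcall() is a hypothetical userspace handler:

	#include <linux/kvm.h>

	extern __u64 handle_hv_hcall(__u64 input, __u64 *params);	/* hypothetical */

	static void handle_hyperv_exit(struct kvm_run *run)
	{
		struct kvm_hyperv_exit *hv = &run->hyperv;

		switch (hv->type) {
		case KVM_EXIT_HYPERV_SYNIC:
			/* SynIC control/event/message page state changed */
			break;
		case KVM_EXIT_HYPERV_HCALL:
			/* the result is read back by KVM on the next KVM_RUN */
			hv->u.hcall.result = handle_hv_hcall(hv->u.hcall.input,
							     hv->u.hcall.params);
			break;
		}
	}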
...
@@ -88,6 +88,8 @@ struct kvm_s390_io_adapter_req {
	perform a gmap translation for the guest address provided in addr,
	pin a userspace page for the translated address and add it to the
	list of mappings
	Note: A new mapping will be created unconditionally; therefore,
	      the calling code should avoid making duplicate mappings.

KVM_S390_IO_ADAPTER_UNMAP
	release a userspace page for the translated address specified in addr
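For reference, these requests go through the flic device's attribute
interface; a hedged sketch, assuming a flic fd from KVM_CREATE_DEVICE and an
adapter registered earlier:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int map_adapter_page(int flic_fd, __u32 adapter_id, __u64 guest_addr)
	{
		struct kvm_s390_io_adapter_req req = {
			.id   = adapter_id,
			.type = KVM_S390_IO_ADAPTER_MAP,
			.addr = guest_addr,
		};
		struct kvm_device_attr attr = {
			.group = KVM_DEV_FLIC_ADAPTER_MODIFY,
			.addr  = (__u64)(unsigned long)&req,
		};

		return ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr);
	}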
...
Generic vcpu interface
======================

The virtual cpu "device" also accepts the ioctls KVM_SET_DEVICE_ATTR,
KVM_GET_DEVICE_ATTR, and KVM_HAS_DEVICE_ATTR. The interface uses the same
struct kvm_device_attr as other devices, but targets VCPU-wide settings
and controls.

The groups and attributes per virtual cpu, if any, are architecture
specific.

1. GROUP: KVM_ARM_VCPU_PMU_V3_CTRL
Architectures: ARM64

1.1. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_IRQ
Parameters: in kvm_device_attr.addr, a pointer to an int holding the PMU
            overflow interrupt number
Returns: -EBUSY: The PMU overflow interrupt is already set
         -ENXIO: The overflow interrupt not set when attempting to get it
         -ENODEV: PMUv3 not supported
         -EINVAL: Invalid PMU overflow interrupt number supplied

A value describing the PMUv3 (Performance Monitor Unit v3) overflow interrupt
number for this vcpu. This interrupt can be a PPI or an SPI, but the interrupt
type must be the same for every vcpu. As a PPI, the interrupt number is the
same for all vcpus, while as an SPI it must be a separate number per vcpu.

1.2. ATTRIBUTE: KVM_ARM_VCPU_PMU_V3_INIT
Parameters: no additional parameter in kvm_device_attr.addr
Returns: -ENODEV: PMUv3 not supported
         -ENXIO: PMUv3 not properly configured as required prior to calling
                 this attribute
         -EBUSY: PMUv3 already initialized

Request the initialization of the PMUv3.
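Putting 1.1 and 1.2 together, PMU setup on a vcpu would look roughly like
this sketch (the vcpu must have been created with the KVM_ARM_VCPU_PMU_V3
feature bit; `irq` is an example SPI number):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int vcpu_enable_pmu(int vcpu_fd, int irq)
	{
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
			.addr  = (__u64)(unsigned long)&irq,	/* pointer to an int */
		};

		if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
			return -1;

		attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
		attr.addr = 0;
		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}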
@@ -84,3 +84,55 @@ Returns: -EBUSY in case 1 or more vcpus are already activated (only in write
         -EFAULT if the given address is not accessible from kernel space
         -ENOMEM if not enough memory is available to process the ioctl
         0 in case of success
3. GROUP: KVM_S390_VM_TOD
Architectures: s390

3.1. ATTRIBUTE: KVM_S390_VM_TOD_HIGH

Allows user space to set/get the TOD clock extension (u8).

Parameters: address of a buffer in user space to store the data (u8) to
Returns:    -EFAULT if the given address is not accessible from kernel space
            -EINVAL if setting the TOD clock extension to != 0 is not supported

3.2. ATTRIBUTE: KVM_S390_VM_TOD_LOW

Allows user space to set/get bits 0-63 of the TOD clock register as defined in
the POP (u64).

Parameters: address of a buffer in user space to store the data (u64) to
Returns:    -EFAULT if the given address is not accessible from kernel space
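As an illustration, reading the low word of the guest TOD clock goes through
the same kvm_device_attr plumbing as any other vm attribute; a minimal
sketch:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int get_guest_tod_low(int vm_fd, __u64 *tod)
	{
		struct kvm_device_attr attr = {
			.group = KVM_S390_VM_TOD,
			.attr  = KVM_S390_VM_TOD_LOW,
			.addr  = (__u64)(unsigned long)tod,	/* u64 buffer */
		};

		return ioctl(vm_fd, KVM_GET_DEVICE_ATTR, &attr);
	}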
4. GROUP: KVM_S390_VM_CRYPTO
Architectures: s390

4.1. ATTRIBUTE: KVM_S390_VM_CRYPTO_ENABLE_AES_KW (w/o)

Allows user space to enable aes key wrapping, including generating a new
wrapping key.

Parameters: none
Returns:    0

4.2. ATTRIBUTE: KVM_S390_VM_CRYPTO_ENABLE_DEA_KW (w/o)

Allows user space to enable dea key wrapping, including generating a new
wrapping key.

Parameters: none
Returns:    0

4.3. ATTRIBUTE: KVM_S390_VM_CRYPTO_DISABLE_AES_KW (w/o)

Allows user space to disable aes key wrapping, clearing the wrapping key.

Parameters: none
Returns:    0

4.4. ATTRIBUTE: KVM_S390_VM_CRYPTO_DISABLE_DEA_KW (w/o)

Allows user space to disable dea key wrapping, clearing the wrapping key.

Parameters: none
Returns:    0
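All four attributes are write-only triggers with no payload, so enabling a
wrapping key reduces to one ioctl; a sketch for the AES case:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int enable_aes_key_wrapping(int vm_fd)
	{
		struct kvm_device_attr attr = {
			.group = KVM_S390_VM_CRYPTO,
			.attr  = KVM_S390_VM_CRYPTO_ENABLE_AES_KW,	/* no .addr payload */
		};

		return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
	}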
@@ -392,11 +392,11 @@ To instantiate a large spte, four constraints must be satisfied:
  write-protected pages
- the guest page must be wholly contained by a single memory slot

To check the last two conditions, the mmu maintains a ->disallow_lpage set of
arrays for each memory slot and large page size. Every write protected page
causes its disallow_lpage to be incremented, thus preventing instantiation of
a large spte. The frames at the end of an unaligned memory slot have
artificially inflated ->disallow_lpages so they can never be instantiated.
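The mechanism is just a per-slot counter array indexed by large-page frame;
a hedged sketch of the idea (names simplified from arch/x86/kvm/mmu.c, not
the exact kernel code):

	/* One counter per large-page-sized region of a memory slot. A non-zero
	 * count means no large spte may cover that region. */
	struct lpage_info {
		int disallow_lpage;
	};

	/* +1 when a page in the region is write-protected (or the slot edge is
	 * unaligned), -1 when the reason goes away. */
	static void update_gfn_disallow_lpage_count(struct lpage_info *info,
						    unsigned long lpage_idx, int count)
	{
		info[lpage_idx].disallow_lpage += count;
	}

	static int lpage_allowed(const struct lpage_info *info, unsigned long lpage_idx)
	{
		return info[lpage_idx].disallow_lpage == 0;
	}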
Zapping all pages (page generation count)
=========================================
...
@@ -19,38 +19,7 @@
#ifndef __ARM_KVM_ASM_H__
#define __ARM_KVM_ASM_H__

#include <asm/virt.h>
#define c0_MPIDR 1 /* MultiProcessor ID Register */
#define c0_CSSELR 2 /* Cache Size Selection Register */
#define c1_SCTLR 3 /* System Control Register */
#define c1_ACTLR 4 /* Auxiliary Control Register */
#define c1_CPACR 5 /* Coprocessor Access Control */
#define c2_TTBR0 6 /* Translation Table Base Register 0 */
#define c2_TTBR0_high 7 /* TTBR0 top 32 bits */
#define c2_TTBR1 8 /* Translation Table Base Register 1 */
#define c2_TTBR1_high 9 /* TTBR1 top 32 bits */
#define c2_TTBCR 10 /* Translation Table Base Control R. */
#define c3_DACR 11 /* Domain Access Control Register */
#define c5_DFSR 12 /* Data Fault Status Register */
#define c5_IFSR 13 /* Instruction Fault Status Register */
#define c5_ADFSR 14 /* Auxilary Data Fault Status R */
#define c5_AIFSR 15 /* Auxilary Instrunction Fault Status R */
#define c6_DFAR 16 /* Data Fault Address Register */
#define c6_IFAR 17 /* Instruction Fault Address Register */
#define c7_PAR 18 /* Physical Address Register */
#define c7_PAR_high 19 /* PAR top 32 bits */
#define c9_L2CTLR 20 /* Cortex A15/A7 L2 Control Register */
#define c10_PRRR 21 /* Primary Region Remap Register */
#define c10_NMRR 22 /* Normal Memory Remap Register */
#define c12_VBAR 23 /* Vector Base Address Register */
#define c13_CID 24 /* Context ID Register */
#define c13_TID_URW 25 /* Thread ID, User R/W */
#define c13_TID_URO 26 /* Thread ID, User R/O */
#define c13_TID_PRIV 27 /* Thread ID, Privileged */
#define c14_CNTKCTL 28 /* Timer Control Register (PL1) */
#define c10_AMAIR0 29 /* Auxilary Memory Attribute Indirection Reg0 */
#define c10_AMAIR1 30 /* Auxilary Memory Attribute Indirection Reg1 */
#define NR_CP15_REGS 31 /* Number of regs (incl. invalid) */
#define ARM_EXCEPTION_RESET		0
#define ARM_EXCEPTION_UNDEFINED		1
@@ -86,19 +55,15 @@ struct kvm_vcpu;
extern char __kvm_hyp_init[];
extern char __kvm_hyp_init_end[];

extern char __kvm_hyp_exit[];
extern char __kvm_hyp_exit_end[];

extern char __kvm_hyp_vector[];

extern char __kvm_hyp_code_start[];
extern char __kvm_hyp_code_end[];

extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);

extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);

extern void __init_stage2_translation(void);

#endif

#endif /* __ARM_KVM_ASM_H__ */
@@ -68,12 +68,12 @@ static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
{
	return &vcpu->arch.ctxt.gp_regs.usr_regs.ARM_pc;
}

static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
{
	return &vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr;
}

static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
@@ -83,13 +83,13 @@ static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
static inline bool mode_has_spsr(struct kvm_vcpu *vcpu)
{
	unsigned long cpsr_mode = vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr & MODE_MASK;
	return (cpsr_mode > USR_MODE && cpsr_mode < SYSTEM_MODE);
}

static inline bool vcpu_mode_priv(struct kvm_vcpu *vcpu)
{
	unsigned long cpsr_mode = vcpu->arch.ctxt.gp_regs.usr_regs.ARM_cpsr & MODE_MASK;
	return cpsr_mode > USR_MODE;
}
@@ -108,11 +108,6 @@ static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
	return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
}

static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.fault.hyp_pc;
}

static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
{
	return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
@@ -143,6 +138,11 @@ static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
	return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
}

static inline bool kvm_vcpu_dabt_is_cm(struct kvm_vcpu *vcpu)
{
	return !!(kvm_vcpu_get_hsr(vcpu) & HSR_DABT_CM);
}

/* Get Access Size from a data abort */
static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
{
@@ -192,7 +192,7 @@ static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
{
	return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
}

static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
...
@@ -85,20 +85,61 @@ struct kvm_vcpu_fault_info {
	u32 hsr;		/* Hyp Syndrome Register */
	u32 hxfar;		/* Hyp Data/Inst. Fault Address Register */
	u32 hpfar;		/* Hyp IPA Fault Address Register */
	u32 hyp_pc;		/* PC when exception was taken from Hyp mode */
};

typedef struct vfp_hard_struct kvm_cpu_context_t;

/*
* 0 is reserved as an invalid value.
* Order should be kept in sync with the save/restore code.
*/
enum vcpu_sysreg {
__INVALID_SYSREG__,
c0_MPIDR, /* MultiProcessor ID Register */
c0_CSSELR, /* Cache Size Selection Register */
c1_SCTLR, /* System Control Register */
c1_ACTLR, /* Auxiliary Control Register */
c1_CPACR, /* Coprocessor Access Control */
c2_TTBR0, /* Translation Table Base Register 0 */
c2_TTBR0_high, /* TTBR0 top 32 bits */
c2_TTBR1, /* Translation Table Base Register 1 */
c2_TTBR1_high, /* TTBR1 top 32 bits */
c2_TTBCR, /* Translation Table Base Control R. */
c3_DACR, /* Domain Access Control Register */
c5_DFSR, /* Data Fault Status Register */
c5_IFSR, /* Instruction Fault Status Register */
c5_ADFSR, /* Auxilary Data Fault Status R */
c5_AIFSR, /* Auxilary Instrunction Fault Status R */
c6_DFAR, /* Data Fault Address Register */
c6_IFAR, /* Instruction Fault Address Register */
c7_PAR, /* Physical Address Register */
c7_PAR_high, /* PAR top 32 bits */
c9_L2CTLR, /* Cortex A15/A7 L2 Control Register */
c10_PRRR, /* Primary Region Remap Register */
c10_NMRR, /* Normal Memory Remap Register */
c12_VBAR, /* Vector Base Address Register */
c13_CID, /* Context ID Register */
c13_TID_URW, /* Thread ID, User R/W */
c13_TID_URO, /* Thread ID, User R/O */
c13_TID_PRIV, /* Thread ID, Privileged */
c14_CNTKCTL, /* Timer Control Register (PL1) */
c10_AMAIR0, /* Auxilary Memory Attribute Indirection Reg0 */
c10_AMAIR1, /* Auxilary Memory Attribute Indirection Reg1 */
NR_CP15_REGS /* Number of regs (incl. invalid) */
};
struct kvm_cpu_context {
struct kvm_regs gp_regs;
struct vfp_hard_struct vfp;
u32 cp15[NR_CP15_REGS];
};
typedef struct kvm_cpu_context kvm_cpu_context_t;
struct kvm_vcpu_arch {
	struct kvm_cpu_context ctxt;

	int target; /* Processor target */
	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);

	/* System control coprocessor (cp15) */
	u32 cp15[NR_CP15_REGS];

	/* The CPU type we expose to the VM */
	u32 midr;
@@ -111,9 +152,6 @@ struct kvm_vcpu_arch {
	/* Exception Information */
	struct kvm_vcpu_fault_info fault;

	/* Floating point registers (VFP and Advanced SIMD/NEON) */
	struct vfp_hard_struct vfp_guest;

	/* Host FP context */
	kvm_cpu_context_t *host_cpu_context;
@@ -158,12 +196,14 @@ struct kvm_vcpu_stat {
	u64 exits;
};

#define vcpu_cp15(v,r)	(v)->arch.ctxt.cp15[r]

int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
unsigned long kvm_call_hyp(void *hypfn, ...);
void force_vm_exit(const cpumask_t *mask);

#define KVM_ARCH_WANT_MMU_NOTIFIER
@@ -220,6 +260,11 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
	kvm_call_hyp((void*)hyp_stack_ptr, vector_ptr, pgd_ptr);
}
static inline void __cpu_init_stage2(void)
{
kvm_call_hyp(__init_stage2_translation);
}
static inline int kvm_arch_dev_ioctl_check_extension(long ext)
{
	return 0;
@@ -242,5 +287,20 @@ static inline void kvm_arm_init_debug(void) {}
static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
static inline int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
return -ENXIO;
}
static inline int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
return -ENXIO;
}
static inline int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
return -ENXIO;
}
#endif /* __ARM_KVM_HOST_H__ */
/*
* Copyright (C) 2015 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ARM_KVM_HYP_H__
#define __ARM_KVM_HYP_H__
#include <linux/compiler.h>
#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>
#include <asm/vfp.h>
#define __hyp_text __section(.hyp.text) notrace
#define kern_hyp_va(v) (v)
#define hyp_kern_va(v) (v)
#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \
"mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32
#define __ACCESS_CP15_64(Op1, CRm) \
"mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64
#define __ACCESS_VFP(CRn) \
"mrc", "mcr", __stringify(p10, 7, %0, CRn, cr0, 0), u32
#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v)))
#define write_sysreg(v, ...) __write_sysreg(v, __VA_ARGS__)
#define __read_sysreg(r, w, c, t) ({ \
t __val; \
asm volatile(r " " c : "=r" (__val)); \
__val; \
})
#define read_sysreg(...) __read_sysreg(__VA_ARGS__)
#define write_special(v, r) \
asm volatile("msr " __stringify(r) ", %0" : : "r" (v))
#define read_special(r) ({ \
u32 __val; \
asm volatile("mrs %0, " __stringify(r) : "=r" (__val)); \
__val; \
})
#define TTBR0 __ACCESS_CP15_64(0, c2)
#define TTBR1 __ACCESS_CP15_64(1, c2)
#define VTTBR __ACCESS_CP15_64(6, c2)
#define PAR __ACCESS_CP15_64(0, c7)
#define CNTV_CVAL __ACCESS_CP15_64(3, c14)
#define CNTVOFF __ACCESS_CP15_64(4, c14)
#define MIDR __ACCESS_CP15(c0, 0, c0, 0)
#define CSSELR __ACCESS_CP15(c0, 2, c0, 0)
#define VPIDR __ACCESS_CP15(c0, 4, c0, 0)
#define VMPIDR __ACCESS_CP15(c0, 4, c0, 5)
#define SCTLR __ACCESS_CP15(c1, 0, c0, 0)
#define CPACR __ACCESS_CP15(c1, 0, c0, 2)
#define HCR __ACCESS_CP15(c1, 4, c1, 0)
#define HDCR __ACCESS_CP15(c1, 4, c1, 1)
#define HCPTR __ACCESS_CP15(c1, 4, c1, 2)
#define HSTR __ACCESS_CP15(c1, 4, c1, 3)
#define TTBCR __ACCESS_CP15(c2, 0, c0, 2)
#define HTCR __ACCESS_CP15(c2, 4, c0, 2)
#define VTCR __ACCESS_CP15(c2, 4, c1, 2)
#define DACR __ACCESS_CP15(c3, 0, c0, 0)
#define DFSR __ACCESS_CP15(c5, 0, c0, 0)
#define IFSR __ACCESS_CP15(c5, 0, c0, 1)
#define ADFSR __ACCESS_CP15(c5, 0, c1, 0)
#define AIFSR __ACCESS_CP15(c5, 0, c1, 1)
#define HSR __ACCESS_CP15(c5, 4, c2, 0)
#define DFAR __ACCESS_CP15(c6, 0, c0, 0)
#define IFAR __ACCESS_CP15(c6, 0, c0, 2)
#define HDFAR __ACCESS_CP15(c6, 4, c0, 0)
#define HIFAR __ACCESS_CP15(c6, 4, c0, 2)
#define HPFAR __ACCESS_CP15(c6, 4, c0, 4)
#define ICIALLUIS __ACCESS_CP15(c7, 0, c1, 0)
#define ATS1CPR __ACCESS_CP15(c7, 0, c8, 0)
#define TLBIALLIS __ACCESS_CP15(c8, 0, c3, 0)
#define TLBIALLNSNHIS __ACCESS_CP15(c8, 4, c3, 4)
#define PRRR __ACCESS_CP15(c10, 0, c2, 0)
#define NMRR __ACCESS_CP15(c10, 0, c2, 1)
#define AMAIR0 __ACCESS_CP15(c10, 0, c3, 0)
#define AMAIR1 __ACCESS_CP15(c10, 0, c3, 1)
#define VBAR __ACCESS_CP15(c12, 0, c0, 0)
#define CID __ACCESS_CP15(c13, 0, c0, 1)
#define TID_URW __ACCESS_CP15(c13, 0, c0, 2)
#define TID_URO __ACCESS_CP15(c13, 0, c0, 3)
#define TID_PRIV __ACCESS_CP15(c13, 0, c0, 4)
#define HTPIDR __ACCESS_CP15(c13, 4, c0, 2)
#define CNTKCTL __ACCESS_CP15(c14, 0, c1, 0)
#define CNTV_CTL __ACCESS_CP15(c14, 0, c3, 1)
#define CNTHCTL __ACCESS_CP15(c14, 4, c1, 0)
#define VFP_FPEXC __ACCESS_VFP(FPEXC)
/* AArch64 compatibility macros, only for the timer so far */
#define read_sysreg_el0(r) read_sysreg(r##_el0)
#define write_sysreg_el0(v, r) write_sysreg(v, r##_el0)
#define cntv_ctl_el0 CNTV_CTL
#define cntv_cval_el0 CNTV_CVAL
#define cntvoff_el2 CNTVOFF
#define cnthctl_el2 CNTHCTL
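Each accessor expands to a single coprocessor instruction; for example,
under the definitions above, the following illustrative helper (not part of
the patch) compiles to one mrrc and, when needed, one mcrr:

static void __hyp_text example_vttbr_switch(u64 vttbr)
{
	u64 old = read_sysreg(VTTBR);		/* mrrc p15, 6, %Q0, %R0, c2 */

	if (old != vttbr)
		write_sysreg(vttbr, VTTBR);	/* mcrr p15, 6, %Q0, %R0, c2 */
}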
void __timer_save_state(struct kvm_vcpu *vcpu);
void __timer_restore_state(struct kvm_vcpu *vcpu);
void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
void __sysreg_save_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
void asmlinkage __vfp_save_state(struct vfp_hard_struct *vfp);
void asmlinkage __vfp_restore_state(struct vfp_hard_struct *vfp);
static inline bool __vfp_enabled(void)
{
return !(read_sysreg(HCPTR) & (HCPTR_TCP(11) | HCPTR_TCP(10)));
}
void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt);
void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt);
int asmlinkage __guest_enter(struct kvm_vcpu *vcpu,
struct kvm_cpu_context *host);
int asmlinkage __hyp_do_panic(const char *, int, u32);
#endif /* __ARM_KVM_HYP_H__ */
@@ -179,7 +179,7 @@ struct kvm;
static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
{
	return (vcpu_cp15(vcpu, c1_SCTLR) & 0b101) == 0b101;
}

static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
...
@@ -74,6 +74,15 @@ static inline bool is_hyp_mode_mismatched(void)
{
	return !!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
}
static inline bool is_kernel_in_hyp_mode(void)
{
return false;
}
/* The section containing the hypervisor text */
extern char __hyp_text_start[];
extern char __hyp_text_end[];
#endif

#endif /* __ASSEMBLY__ */
...
@@ -170,41 +170,11 @@ int main(void)
	DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
	BLANK();
#ifdef CONFIG_KVM_ARM_HOST
	DEFINE(VCPU_GUEST_CTXT,		offsetof(struct kvm_vcpu, arch.ctxt));
	DEFINE(VCPU_HOST_CTXT,		offsetof(struct kvm_vcpu, arch.host_cpu_context));
	DEFINE(CPU_CTXT_VFP,		offsetof(struct kvm_cpu_context, vfp));
	DEFINE(CPU_CTXT_GP_REGS,	offsetof(struct kvm_cpu_context, gp_regs));
	DEFINE(GP_REGS_USR,		offsetof(struct kvm_regs, usr_regs));
	DEFINE(VCPU_KVM,		offsetof(struct kvm_vcpu, kvm));
	DEFINE(VCPU_MIDR,		offsetof(struct kvm_vcpu, arch.midr));
	DEFINE(VCPU_CP15,		offsetof(struct kvm_vcpu, arch.cp15));
	DEFINE(VCPU_VFP_GUEST,		offsetof(struct kvm_vcpu, arch.vfp_guest));
	DEFINE(VCPU_VFP_HOST,		offsetof(struct kvm_vcpu, arch.host_cpu_context));
DEFINE(VCPU_REGS, offsetof(struct kvm_vcpu, arch.regs));
DEFINE(VCPU_USR_REGS, offsetof(struct kvm_vcpu, arch.regs.usr_regs));
DEFINE(VCPU_SVC_REGS, offsetof(struct kvm_vcpu, arch.regs.svc_regs));
DEFINE(VCPU_ABT_REGS, offsetof(struct kvm_vcpu, arch.regs.abt_regs));
DEFINE(VCPU_UND_REGS, offsetof(struct kvm_vcpu, arch.regs.und_regs));
DEFINE(VCPU_IRQ_REGS, offsetof(struct kvm_vcpu, arch.regs.irq_regs));
DEFINE(VCPU_FIQ_REGS, offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
DEFINE(VCPU_HCR, offsetof(struct kvm_vcpu, arch.hcr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.fault.hpfar));
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
DEFINE(VGIC_V2_CPU_HCR, offsetof(struct vgic_cpu, vgic_v2.vgic_hcr));
DEFINE(VGIC_V2_CPU_VMCR, offsetof(struct vgic_cpu, vgic_v2.vgic_vmcr));
DEFINE(VGIC_V2_CPU_MISR, offsetof(struct vgic_cpu, vgic_v2.vgic_misr));
DEFINE(VGIC_V2_CPU_EISR, offsetof(struct vgic_cpu, vgic_v2.vgic_eisr));
DEFINE(VGIC_V2_CPU_ELRSR, offsetof(struct vgic_cpu, vgic_v2.vgic_elrsr));
DEFINE(VGIC_V2_CPU_APR, offsetof(struct vgic_cpu, vgic_v2.vgic_apr));
DEFINE(VGIC_V2_CPU_LR, offsetof(struct vgic_cpu, vgic_v2.vgic_lr));
DEFINE(VGIC_CPU_NR_LR, offsetof(struct vgic_cpu, nr_lr));
DEFINE(VCPU_TIMER_CNTV_CTL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
DEFINE(VCPU_TIMER_CNTV_CVAL, offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_cval));
DEFINE(KVM_TIMER_CNTVOFF, offsetof(struct kvm, arch.timer.cntvoff));
DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled));
DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
#endif
	BLANK();
#ifdef CONFIG_VDSO
...
@@ -18,6 +18,11 @@
		*(.proc.info.init)					\
		VMLINUX_SYMBOL(__proc_info_end) = .;
#define HYPERVISOR_TEXT \
VMLINUX_SYMBOL(__hyp_text_start) = .; \
*(.hyp.text) \
VMLINUX_SYMBOL(__hyp_text_end) = .;
#define IDMAP_TEXT							\
	ALIGN_FUNCTION();						\
	VMLINUX_SYMBOL(__idmap_text_start) = .;				\
@@ -108,6 +113,7 @@ SECTIONS
		TEXT_TEXT
		SCHED_TEXT
		LOCK_TEXT
		HYPERVISOR_TEXT
		KPROBES_TEXT
		*(.gnu.warning)
		*(.glue_7)
...
@@ -17,6 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
KVM := ../../../virt/kvm
kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o

obj-$(CONFIG_KVM_ARM_HOST) += hyp/
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
...
@@ -28,6 +28,7 @@
#include <linux/sched.h>
#include <linux/kvm.h>
#include <trace/events/kvm.h>
#include <kvm/arm_pmu.h>

#define CREATE_TRACE_POINTS
#include "trace.h"
@@ -265,6 +266,7 @@ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
	kvm_mmu_free_memory_caches(vcpu);
	kvm_timer_vcpu_terminate(vcpu);
	kvm_vgic_vcpu_destroy(vcpu);
	kvm_pmu_vcpu_destroy(vcpu);
	kmem_cache_free(kvm_vcpu_cache, vcpu);
}
@@ -320,6 +322,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
	vcpu->cpu = -1;

	kvm_arm_set_running_vcpu(NULL);
	kvm_timer_vcpu_put(vcpu);
}

int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
@@ -577,6 +580,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		 * non-preemptible context.
		 */
		preempt_disable();
		kvm_pmu_flush_hwstate(vcpu);
		kvm_timer_flush_hwstate(vcpu);
		kvm_vgic_flush_hwstate(vcpu);
@@ -593,6 +597,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
		    vcpu->arch.power_off || vcpu->arch.pause) {
			local_irq_enable();
			kvm_pmu_sync_hwstate(vcpu);
			kvm_timer_sync_hwstate(vcpu);
			kvm_vgic_sync_hwstate(vcpu);
			preempt_enable();
@@ -642,10 +647,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));

		/*
		 * We must sync the PMU and timer state before the vgic state so
		 * that the vgic can properly sample the updated state of the
		 * interrupt line.
		 */
		kvm_pmu_sync_hwstate(vcpu);
		kvm_timer_sync_hwstate(vcpu);
		kvm_vgic_sync_hwstate(vcpu);
@@ -823,11 +829,54 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
	return 0;
}
static int kvm_arm_vcpu_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret = -ENXIO;
switch (attr->group) {
default:
ret = kvm_arm_vcpu_arch_set_attr(vcpu, attr);
break;
}
return ret;
}
static int kvm_arm_vcpu_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret = -ENXIO;
switch (attr->group) {
default:
ret = kvm_arm_vcpu_arch_get_attr(vcpu, attr);
break;
}
return ret;
}
static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret = -ENXIO;
switch (attr->group) {
default:
ret = kvm_arm_vcpu_arch_has_attr(vcpu, attr);
break;
}
return ret;
}
long kvm_arch_vcpu_ioctl(struct file *filp,
			 unsigned int ioctl, unsigned long arg)
{
	struct kvm_vcpu *vcpu = filp->private_data;
	void __user *argp = (void __user *)arg;
	struct kvm_device_attr attr;

	switch (ioctl) {
	case KVM_ARM_VCPU_INIT: {
@@ -870,6 +919,21 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
			return -E2BIG;
		return kvm_arm_copy_reg_indices(vcpu, user_list->reg);
	}
case KVM_SET_DEVICE_ATTR: {
if (copy_from_user(&attr, argp, sizeof(attr)))
return -EFAULT;
return kvm_arm_vcpu_set_attr(vcpu, &attr);
}
case KVM_GET_DEVICE_ATTR: {
if (copy_from_user(&attr, argp, sizeof(attr)))
return -EFAULT;
return kvm_arm_vcpu_get_attr(vcpu, &attr);
}
case KVM_HAS_DEVICE_ATTR: {
if (copy_from_user(&attr, argp, sizeof(attr)))
return -EFAULT;
return kvm_arm_vcpu_has_attr(vcpu, &attr);
}
	default:
		return -EINVAL;
	}
@@ -967,6 +1031,11 @@ long kvm_arch_vm_ioctl(struct file *filp,
	}
}
static void cpu_init_stage2(void *dummy)
{
__cpu_init_stage2();
}
static void cpu_init_hyp_mode(void *dummy)
{
	phys_addr_t boot_pgd_ptr;
@@ -985,6 +1054,7 @@ static void cpu_init_hyp_mode(void *dummy)
	vector_ptr = (unsigned long)__kvm_hyp_vector;

	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
	__cpu_init_stage2();

	kvm_arm_init_debug();
}
@@ -1035,6 +1105,82 @@ static inline void hyp_cpu_pm_init(void)
}
#endif
static void teardown_common_resources(void)
{
free_percpu(kvm_host_cpu_state);
}
static int init_common_resources(void)
{
kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);
if (!kvm_host_cpu_state) {
kvm_err("Cannot allocate host CPU state\n");
return -ENOMEM;
}
return 0;
}
static int init_subsystems(void)
{
int err;
/*
* Init HYP view of VGIC
*/
err = kvm_vgic_hyp_init();
switch (err) {
case 0:
vgic_present = true;
break;
case -ENODEV:
case -ENXIO:
vgic_present = false;
break;
default:
return err;
}
/*
* Init HYP architected timer support
*/
err = kvm_timer_hyp_init();
if (err)
return err;
kvm_perf_init();
kvm_coproc_table_init();
return 0;
}
static void teardown_hyp_mode(void)
{
int cpu;
if (is_kernel_in_hyp_mode())
return;
free_hyp_pgds();
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
}
static int init_vhe_mode(void)
{
/*
* Execute the init code on each CPU.
*/
on_each_cpu(cpu_init_stage2, NULL, 1);
/* set size of VMID supported by CPU */
kvm_vmid_bits = kvm_get_vmid_bits();
kvm_info("%d-bit VMID\n", kvm_vmid_bits);
kvm_info("VHE mode initialized successfully\n");
return 0;
}
/**
 * Inits Hyp-mode on all online CPUs
 */
@@ -1065,7 +1211,7 @@ static int init_hyp_mode(void)
		stack_page = __get_free_page(GFP_KERNEL);
		if (!stack_page) {
			err = -ENOMEM;
			goto out_err;
		}

		per_cpu(kvm_arm_hyp_stack_page, cpu) = stack_page;
@@ -1074,16 +1220,16 @@ static int init_hyp_mode(void)
	/*
	 * Map the Hyp-code called directly from the host
	 */
	err = create_hyp_mappings(__hyp_text_start, __hyp_text_end);
	if (err) {
		kvm_err("Cannot map world-switch code\n");
		goto out_err;
	}

	err = create_hyp_mappings(__start_rodata, __end_rodata);
	if (err) {
		kvm_err("Cannot map rodata section\n");
		goto out_err;
	}

	/*
@@ -1095,20 +1241,10 @@ static int init_hyp_mode(void)
		if (err) {
			kvm_err("Cannot map hyp stack\n");
			goto out_err;
		}
	}
/*
* Map the host CPU structures
*/
kvm_host_cpu_state = alloc_percpu(kvm_cpu_context_t);
if (!kvm_host_cpu_state) {
err = -ENOMEM;
kvm_err("Cannot allocate host CPU state\n");
goto out_free_mappings;
}
	for_each_possible_cpu(cpu) {
		kvm_cpu_context_t *cpu_ctxt;
@@ -1117,7 +1253,7 @@ static int init_hyp_mode(void)
		if (err) {
			kvm_err("Cannot map host CPU state: %d\n", err);
			goto out_err;
		}
	}
@@ -1126,34 +1262,22 @@ static int init_hyp_mode(void)
	 */
	on_each_cpu(cpu_init_hyp_mode, NULL, 1);
/*
* Init HYP view of VGIC
*/
err = kvm_vgic_hyp_init();
switch (err) {
case 0:
vgic_present = true;
break;
case -ENODEV:
case -ENXIO:
vgic_present = false;
break;
default:
goto out_free_context;
}
/*
* Init HYP architected timer support
*/
err = kvm_timer_hyp_init();
if (err)
goto out_free_context;
#ifndef CONFIG_HOTPLUG_CPU
	free_boot_hyp_pgd();
#endif
	cpu_notifier_register_begin();
err = __register_cpu_notifier(&hyp_init_cpu_nb);
cpu_notifier_register_done();
if (err) {
kvm_err("Cannot register HYP init CPU notifier (%d)\n", err);
goto out_err;
}
hyp_cpu_pm_init();
	/* set size of VMID supported by CPU */
	kvm_vmid_bits = kvm_get_vmid_bits();
@@ -1162,14 +1286,9 @@ static int init_hyp_mode(void)
kvm_info("Hyp mode initialized successfully\n"); kvm_info("Hyp mode initialized successfully\n");
return 0; return 0;
out_free_context:
free_percpu(kvm_host_cpu_state);
out_free_mappings:
free_hyp_pgds();
out_free_stack_pages:
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
out_err:
	teardown_hyp_mode();
	kvm_err("error initializing Hyp mode: %d\n", err);
	return err;
}
@@ -1213,26 +1332,27 @@ int kvm_arch_init(void *opaque)
		}
	}

	err = init_common_resources();
	if (err)
		return err;

	if (is_kernel_in_hyp_mode())
		err = init_vhe_mode();
	else
		err = init_hyp_mode();
	if (err)
		goto out_err;

	err = init_subsystems();
	if (err)
		goto out_hyp;

	return 0;

out_hyp:
	teardown_hyp_mode();
out_err:
	teardown_common_resources();
	return err;
}
...
@@ -37,7 +37,7 @@ struct coproc_reg {
	unsigned long Op1;
	unsigned long Op2;

	bool is_64bit;

	/* Trapped access from guest, if non-NULL. */
	bool (*access)(struct kvm_vcpu *,
@@ -47,7 +47,7 @@ struct coproc_reg {
	/* Initialization for vcpu. */
	void (*reset)(struct kvm_vcpu *, const struct coproc_reg *);

	/* Index into vcpu_cp15(vcpu, ...), or 0 if we don't need to save it. */
	unsigned long reg;

	/* Value (usually reset value) */
@@ -104,25 +104,25 @@ static inline void reset_unknown(struct kvm_vcpu *vcpu,
				 const struct coproc_reg *r)
{
	BUG_ON(!r->reg);
	BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));
	vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
}

static inline void reset_val(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
	BUG_ON(!r->reg);
	BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));
	vcpu_cp15(vcpu, r->reg) = r->val;
}

static inline void reset_unknown64(struct kvm_vcpu *vcpu,
				   const struct coproc_reg *r)
{
	BUG_ON(!r->reg);
	BUG_ON(r->reg + 1 >= ARRAY_SIZE(vcpu->arch.ctxt.cp15));

	vcpu_cp15(vcpu, r->reg) = 0xdecafbad;
	vcpu_cp15(vcpu, r->reg+1) = 0xd0c0ffee;
}

static inline int cmp_reg(const struct coproc_reg *i1,
@@ -141,7 +141,7 @@ static inline int cmp_reg(const struct coproc_reg *i1,
		return i1->Op1 - i2->Op1;
	if (i1->Op2 != i2->Op2)
		return i1->Op2 - i2->Op2;
	return i2->is_64bit - i1->is_64bit;
}

@@ -150,8 +150,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
#define CRm64(_x)	.CRn = _x, .CRm = 0
#define Op1(_x)		.Op1 = _x
#define Op2(_x)		.Op2 = _x
#define is64		.is_64bit = true
#define is32		.is_64bit = false

bool access_vm_reg(struct kvm_vcpu *vcpu,
		   const struct coproc_params *p,
...
@@ -112,7 +112,7 @@ static const unsigned long vcpu_reg_offsets[VCPU_NR_MODES][15] = {
 */
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
{
	unsigned long *reg_array = (unsigned long *)&vcpu->arch.ctxt.gp_regs;
	unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;

	switch (mode) {
@@ -147,15 +147,15 @@ unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
	unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
	switch (mode) {
	case SVC_MODE:
		return &vcpu->arch.ctxt.gp_regs.KVM_ARM_SVC_spsr;
	case ABT_MODE:
		return &vcpu->arch.ctxt.gp_regs.KVM_ARM_ABT_spsr;
	case UND_MODE:
		return &vcpu->arch.ctxt.gp_regs.KVM_ARM_UND_spsr;
	case IRQ_MODE:
		return &vcpu->arch.ctxt.gp_regs.KVM_ARM_IRQ_spsr;
	case FIQ_MODE:
		return &vcpu->arch.ctxt.gp_regs.KVM_ARM_FIQ_spsr;
	default:
		BUG();
	}
@@ -266,8 +266,8 @@ void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
static u32 exc_vector_base(struct kvm_vcpu *vcpu)
{
	u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);
	u32 vbar = vcpu_cp15(vcpu, c12_VBAR);

	if (sctlr & SCTLR_V)
		return 0xffff0000;
@@ -282,7 +282,7 @@ static u32 exc_vector_base(struct kvm_vcpu *vcpu)
static void kvm_update_psr(struct kvm_vcpu *vcpu, unsigned long mode)
{
	unsigned long cpsr = *vcpu_cpsr(vcpu);
	u32 sctlr = vcpu_cp15(vcpu, c1_SCTLR);

	*vcpu_cpsr(vcpu) = (cpsr & ~MODE_MASK) | mode;
@@ -357,22 +357,22 @@ static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
	if (is_pabt) {
		/* Set IFAR and IFSR */
		vcpu_cp15(vcpu, c6_IFAR) = addr;
		is_lpae = (vcpu_cp15(vcpu, c2_TTBCR) >> 31);
		/* Always give debug fault for now - should give guest a clue */
		if (is_lpae)
			vcpu_cp15(vcpu, c5_IFSR) = 1 << 9 | 0x22;
		else
			vcpu_cp15(vcpu, c5_IFSR) = 2;
	} else { /* !iabt */
		/* Set DFAR and DFSR */
		vcpu_cp15(vcpu, c6_DFAR) = addr;
		is_lpae = (vcpu_cp15(vcpu, c2_TTBCR) >> 31);
		/* Always give debug fault for now - should give guest a clue */
		if (is_lpae)
			vcpu_cp15(vcpu, c5_DFSR) = 1 << 9 | 0x22;
		else
			vcpu_cp15(vcpu, c5_DFSR) = 2;
	}
}
...
@@ -25,7 +25,6 @@
#include <asm/cputype.h>
#include <asm/uaccess.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
@@ -55,7 +54,7 @@ static u64 core_reg_offset_from_id(u64 id)
static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
	u32 __user *uaddr = (u32 __user *)(long)reg->addr;
	struct kvm_regs *regs = &vcpu->arch.ctxt.gp_regs;
	u64 off;

	if (KVM_REG_SIZE(reg->id) != 4)
@@ -72,7 +71,7 @@ static int get_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
	u32 __user *uaddr = (u32 __user *)(long)reg->addr;
	struct kvm_regs *regs = &vcpu->arch.ctxt.gp_regs;
	u64 off, val;

	if (KVM_REG_SIZE(reg->id) != 4)
...
@@ -147,13 +147,6 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
	switch (exception_index) {
	case ARM_EXCEPTION_IRQ:
		return 1;
	case ARM_EXCEPTION_UNDEFINED:
		kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
			kvm_vcpu_get_hyp_pc(vcpu));
		BUG();
		panic("KVM: Hypervisor undefined exception!\n");
	case ARM_EXCEPTION_DATA_ABORT:
	case ARM_EXCEPTION_PREF_ABORT:
	case ARM_EXCEPTION_HVC:
		/*
		 * See ARM ARM B1.14.1: "Hyp traps on instructions
...
#
# Makefile for Kernel-based Virtual Machine module, HYP part
#
KVM=../../../../virt/kvm
obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
obj-$(CONFIG_KVM_ARM_HOST) += entry.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
obj-$(CONFIG_KVM_ARM_HOST) += switch.o
obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
/*
* Original code:
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/kvm_hyp.h>
__asm__(".arch_extension virt");
void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt)
{
ctxt->gp_regs.usr_regs.ARM_sp = read_special(SP_usr);
ctxt->gp_regs.usr_regs.ARM_pc = read_special(ELR_hyp);
ctxt->gp_regs.usr_regs.ARM_cpsr = read_special(SPSR);
ctxt->gp_regs.KVM_ARM_SVC_sp = read_special(SP_svc);
ctxt->gp_regs.KVM_ARM_SVC_lr = read_special(LR_svc);
ctxt->gp_regs.KVM_ARM_SVC_spsr = read_special(SPSR_svc);
ctxt->gp_regs.KVM_ARM_ABT_sp = read_special(SP_abt);
ctxt->gp_regs.KVM_ARM_ABT_lr = read_special(LR_abt);
ctxt->gp_regs.KVM_ARM_ABT_spsr = read_special(SPSR_abt);
ctxt->gp_regs.KVM_ARM_UND_sp = read_special(SP_und);
ctxt->gp_regs.KVM_ARM_UND_lr = read_special(LR_und);
ctxt->gp_regs.KVM_ARM_UND_spsr = read_special(SPSR_und);
ctxt->gp_regs.KVM_ARM_IRQ_sp = read_special(SP_irq);
ctxt->gp_regs.KVM_ARM_IRQ_lr = read_special(LR_irq);
ctxt->gp_regs.KVM_ARM_IRQ_spsr = read_special(SPSR_irq);
ctxt->gp_regs.KVM_ARM_FIQ_r8 = read_special(R8_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_r9 = read_special(R9_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_r10 = read_special(R10_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_fp = read_special(R11_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_ip = read_special(R12_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_sp = read_special(SP_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_lr = read_special(LR_fiq);
ctxt->gp_regs.KVM_ARM_FIQ_spsr = read_special(SPSR_fiq);
}
void __hyp_text __banked_restore_state(struct kvm_cpu_context *ctxt)
{
write_special(ctxt->gp_regs.usr_regs.ARM_sp, SP_usr);
write_special(ctxt->gp_regs.usr_regs.ARM_pc, ELR_hyp);
write_special(ctxt->gp_regs.usr_regs.ARM_cpsr, SPSR_cxsf);
write_special(ctxt->gp_regs.KVM_ARM_SVC_sp, SP_svc);
write_special(ctxt->gp_regs.KVM_ARM_SVC_lr, LR_svc);
write_special(ctxt->gp_regs.KVM_ARM_SVC_spsr, SPSR_svc);
write_special(ctxt->gp_regs.KVM_ARM_ABT_sp, SP_abt);
write_special(ctxt->gp_regs.KVM_ARM_ABT_lr, LR_abt);
write_special(ctxt->gp_regs.KVM_ARM_ABT_spsr, SPSR_abt);
write_special(ctxt->gp_regs.KVM_ARM_UND_sp, SP_und);
write_special(ctxt->gp_regs.KVM_ARM_UND_lr, LR_und);
write_special(ctxt->gp_regs.KVM_ARM_UND_spsr, SPSR_und);
write_special(ctxt->gp_regs.KVM_ARM_IRQ_sp, SP_irq);
write_special(ctxt->gp_regs.KVM_ARM_IRQ_lr, LR_irq);
write_special(ctxt->gp_regs.KVM_ARM_IRQ_spsr, SPSR_irq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_r8, R8_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_r9, R9_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_r10, R10_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_fp, R11_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_ip, R12_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_sp, SP_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_lr, LR_fiq);
write_special(ctxt->gp_regs.KVM_ARM_FIQ_spsr, SPSR_fiq);
}
/*
* Original code:
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/kvm_hyp.h>
static u64 *cp15_64(struct kvm_cpu_context *ctxt, int idx)
{
return (u64 *)(ctxt->cp15 + idx);
}
void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
{
ctxt->cp15[c0_MPIDR] = read_sysreg(VMPIDR);
ctxt->cp15[c0_CSSELR] = read_sysreg(CSSELR);
ctxt->cp15[c1_SCTLR] = read_sysreg(SCTLR);
ctxt->cp15[c1_CPACR] = read_sysreg(CPACR);
*cp15_64(ctxt, c2_TTBR0) = read_sysreg(TTBR0);
*cp15_64(ctxt, c2_TTBR1) = read_sysreg(TTBR1);
ctxt->cp15[c2_TTBCR] = read_sysreg(TTBCR);
ctxt->cp15[c3_DACR] = read_sysreg(DACR);
ctxt->cp15[c5_DFSR] = read_sysreg(DFSR);
ctxt->cp15[c5_IFSR] = read_sysreg(IFSR);
ctxt->cp15[c5_ADFSR] = read_sysreg(ADFSR);
ctxt->cp15[c5_AIFSR] = read_sysreg(AIFSR);
ctxt->cp15[c6_DFAR] = read_sysreg(DFAR);
ctxt->cp15[c6_IFAR] = read_sysreg(IFAR);
*cp15_64(ctxt, c7_PAR) = read_sysreg(PAR);
ctxt->cp15[c10_PRRR] = read_sysreg(PRRR);
ctxt->cp15[c10_NMRR] = read_sysreg(NMRR);
ctxt->cp15[c10_AMAIR0] = read_sysreg(AMAIR0);
ctxt->cp15[c10_AMAIR1] = read_sysreg(AMAIR1);
ctxt->cp15[c12_VBAR] = read_sysreg(VBAR);
ctxt->cp15[c13_CID] = read_sysreg(CID);
ctxt->cp15[c13_TID_URW] = read_sysreg(TID_URW);
ctxt->cp15[c13_TID_URO] = read_sysreg(TID_URO);
ctxt->cp15[c13_TID_PRIV] = read_sysreg(TID_PRIV);
ctxt->cp15[c14_CNTKCTL] = read_sysreg(CNTKCTL);
}
void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
{
write_sysreg(ctxt->cp15[c0_MPIDR], VMPIDR);
write_sysreg(ctxt->cp15[c0_CSSELR], CSSELR);
write_sysreg(ctxt->cp15[c1_SCTLR], SCTLR);
write_sysreg(ctxt->cp15[c1_CPACR], CPACR);
write_sysreg(*cp15_64(ctxt, c2_TTBR0), TTBR0);
write_sysreg(*cp15_64(ctxt, c2_TTBR1), TTBR1);
write_sysreg(ctxt->cp15[c2_TTBCR], TTBCR);
write_sysreg(ctxt->cp15[c3_DACR], DACR);
write_sysreg(ctxt->cp15[c5_DFSR], DFSR);
write_sysreg(ctxt->cp15[c5_IFSR], IFSR);
write_sysreg(ctxt->cp15[c5_ADFSR], ADFSR);
write_sysreg(ctxt->cp15[c5_AIFSR], AIFSR);
write_sysreg(ctxt->cp15[c6_DFAR], DFAR);
write_sysreg(ctxt->cp15[c6_IFAR], IFAR);
write_sysreg(*cp15_64(ctxt, c7_PAR), PAR);
write_sysreg(ctxt->cp15[c10_PRRR], PRRR);
write_sysreg(ctxt->cp15[c10_NMRR], NMRR);
write_sysreg(ctxt->cp15[c10_AMAIR0], AMAIR0);
write_sysreg(ctxt->cp15[c10_AMAIR1], AMAIR1);
write_sysreg(ctxt->cp15[c12_VBAR], VBAR);
write_sysreg(ctxt->cp15[c13_CID], CID);
write_sysreg(ctxt->cp15[c13_TID_URW], TID_URW);
write_sysreg(ctxt->cp15[c13_TID_URO], TID_URO);
write_sysreg(ctxt->cp15[c13_TID_PRIV], TID_PRIV);
write_sysreg(ctxt->cp15[c14_CNTKCTL], CNTKCTL);
}
/*
* Copyright (C) 2016 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
#include <asm/kvm_arm.h>
.arch_extension virt
.text
.pushsection .hyp.text, "ax"
#define USR_REGS_OFFSET (CPU_CTXT_GP_REGS + GP_REGS_USR)
/* int __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host) */
ENTRY(__guest_enter)
@ Save host registers
add r1, r1, #(USR_REGS_OFFSET + S_R4)
stm r1!, {r4-r12}
str lr, [r1, #4] @ Skip SP_usr (already saved)
@ Restore guest registers
add r0, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
ldr lr, [r0, #S_LR]
ldm r0, {r0-r12}
clrex
eret
ENDPROC(__guest_enter)
ENTRY(__guest_exit)
/*
* return convention:
* guest r0, r1, r2 saved on the stack
* r0: vcpu pointer
* r1: exception code
*/
add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R3)
stm r2!, {r3-r12}
str lr, [r2, #4]
add r2, r0, #(VCPU_GUEST_CTXT + USR_REGS_OFFSET + S_R0)
pop {r3, r4, r5} @ r0, r1, r2
stm r2, {r3-r5}
ldr r0, [r0, #VCPU_HOST_CTXT]
add r0, r0, #(USR_REGS_OFFSET + S_R4)
ldm r0!, {r4-r12}
ldr lr, [r0, #4]
mov r0, r1
bx lr
ENDPROC(__guest_exit)
/*
* If VFPv3 support is not available, then we will not switch the VFP
* registers; however cp10 and cp11 accesses will still trap and fallback
* to the regular coprocessor emulation code, which currently will
* inject an undefined exception to the guest.
*/
#ifdef CONFIG_VFPv3
ENTRY(__vfp_guest_restore)
push {r3, r4, lr}
@ NEON/VFP used. Turn on VFP access.
mrc p15, 4, r1, c1, c1, 2 @ HCPTR
bic r1, r1, #(HCPTR_TCP(10) | HCPTR_TCP(11))
mcr p15, 4, r1, c1, c1, 2 @ HCPTR
isb
@ Switch VFP/NEON hardware state to the guest's
mov r4, r0
ldr r0, [r0, #VCPU_HOST_CTXT]
add r0, r0, #CPU_CTXT_VFP
bl __vfp_save_state
add r0, r4, #(VCPU_GUEST_CTXT + CPU_CTXT_VFP)
bl __vfp_restore_state
pop {r3, r4, lr}
pop {r0, r1, r2}
clrex
eret
ENDPROC(__vfp_guest_restore)
#endif
.popsection
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/linkage.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
.arch_extension virt
.text
.pushsection .hyp.text, "ax"
.macro load_vcpu reg
mrc p15, 4, \reg, c13, c0, 2 @ HTPIDR
.endm
/********************************************************************
* Hypervisor exception vector and handlers
*
*
* The KVM/ARM Hypervisor ABI is defined as follows:
*
* Entry to Hyp mode from the host kernel will happen _only_ when an HVC
* instruction is issued since all traps are disabled when running the host
* kernel as per the Hyp-mode initialization at boot time.
*
* HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
* below) when the HVC instruction is called from SVC mode (i.e. a guest or the
* host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
* instructions are called from within Hyp-mode.
*
* Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
* Switching to Hyp mode is done through a simple HVC #0 instruction. The
* exception vector code will check that the HVC comes from VMID==0.
* - r0 contains a pointer to a HYP function
* - r1, r2, and r3 contain arguments to the above function.
* - The HYP function will be called with its arguments in r0, r1 and r2.
* On HYP function return, we return directly to SVC.
*
* Note that the above is used to execute code in Hyp-mode from a host-kernel
* point of view, and is a different concept from performing a world-switch and
 * executing guest code in SVC mode (with a VMID != 0).
*/
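(Editorial aside, not part of the patch: from the host side this ABI is wrapped by kvm_call_hyp(). A minimal sketch of a call, using the __kvm_tlb_flush_vmid symbol declared in asm/kvm_asm.h:)

	/* Illustrative sketch only: the call traps to hyp_hvc below with
	 * r0 = HYP function pointer and r1-r3 = arguments; the handler
	 * shifts r1-r3 down into r0-r2 and branches to the function. */
	kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);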
.align 5
__kvm_hyp_vector:
.global __kvm_hyp_vector
@ Hyp-mode exception vector
W(b) hyp_reset
W(b) hyp_undef
W(b) hyp_svc
W(b) hyp_pabt
W(b) hyp_dabt
W(b) hyp_hvc
W(b) hyp_irq
W(b) hyp_fiq
.macro invalid_vector label, cause
.align
\label: mov r0, #\cause
b __hyp_panic
.endm
invalid_vector hyp_reset ARM_EXCEPTION_RESET
invalid_vector hyp_undef ARM_EXCEPTION_UNDEFINED
invalid_vector hyp_svc ARM_EXCEPTION_SOFTWARE
invalid_vector hyp_pabt ARM_EXCEPTION_PREF_ABORT
invalid_vector hyp_dabt ARM_EXCEPTION_DATA_ABORT
invalid_vector hyp_fiq ARM_EXCEPTION_FIQ
ENTRY(__hyp_do_panic)
mrs lr, cpsr
bic lr, lr, #MODE_MASK
orr lr, lr, #SVC_MODE
THUMB( orr lr, lr, #PSR_T_BIT )
msr spsr_cxsf, lr
ldr lr, =panic
msr ELR_hyp, lr
ldr lr, =kvm_call_hyp
clrex
eret
ENDPROC(__hyp_do_panic)
hyp_hvc:
/*
* Getting here is either because of a trap from a guest,
* or from executing HVC from the host kernel, which means
* "do something in Hyp mode".
*/
push {r0, r1, r2}
@ Check syndrome register
mrc p15, 4, r1, c5, c2, 0 @ HSR
lsr r0, r1, #HSR_EC_SHIFT
cmp r0, #HSR_EC_HVC
bne guest_trap @ Not HVC instr.
/*
* Let's check if the HVC came from VMID 0 and allow simple
* switch to Hyp mode
*/
mrrc p15, 6, r0, r2, c2
lsr r2, r2, #16
and r2, r2, #0xff
cmp r2, #0
bne guest_trap @ Guest called HVC
/*
* Getting here means host called HVC, we shift parameters and branch
* to Hyp function.
*/
pop {r0, r1, r2}
/* Check for __hyp_get_vectors */
cmp r0, #-1
mrceq p15, 4, r0, c12, c0, 0 @ get HVBAR
beq 1f
push {lr}
mov lr, r0
mov r0, r1
mov r1, r2
mov r2, r3
THUMB( orr lr, #1)
blx lr @ Call the HYP function
pop {lr}
1: eret
guest_trap:
load_vcpu r0 @ Load VCPU pointer to r0
#ifdef CONFIG_VFPv3
@ Check for a VFP access
lsr r1, r1, #HSR_EC_SHIFT
cmp r1, #HSR_EC_CP_0_13
beq __vfp_guest_restore
#endif
mov r1, #ARM_EXCEPTION_HVC
b __guest_exit
hyp_irq:
push {r0, r1, r2}
mov r1, #ARM_EXCEPTION_IRQ
load_vcpu r0 @ Load VCPU pointer to r0
b __guest_exit
.ltorg
.popsection
/*
* Copyright (C) 2016 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/types.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_hyp.h>
void __hyp_text __init_stage2_translation(void)
{
u64 val;
val = read_sysreg(VTCR) & ~VTCR_MASK;
val |= read_sysreg(HTCR) & VTCR_HTCR_SH;
val |= KVM_VTCR_SL0 | KVM_VTCR_T0SZ | KVM_VTCR_S;
write_sysreg(val, VTCR);
}
/*
* Copyright (C) 2015 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/kvm_asm.h>
#include <asm/kvm_hyp.h>
__asm__(".arch_extension virt");
/*
* Activate the traps, saving the host's fpexc register before
* overwriting it. We'll restore it on VM exit.
*/
static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu, u32 *fpexc_host)
{
u32 val;
/*
* We are about to set HCPTR.TCP10/11 to trap all floating point
* register accesses to HYP, however, the ARM ARM clearly states that
* traps are only taken to HYP if the operation would not otherwise
* trap to SVC. Therefore, always make sure that for 32-bit guests,
* we set FPEXC.EN to prevent traps to SVC, when setting the TCP bits.
*/
val = read_sysreg(VFP_FPEXC);
*fpexc_host = val;
if (!(val & FPEXC_EN)) {
write_sysreg(val | FPEXC_EN, VFP_FPEXC);
isb();
}
write_sysreg(vcpu->arch.hcr | vcpu->arch.irq_lines, HCR);
/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
write_sysreg(HSTR_T(15), HSTR);
write_sysreg(HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11), HCPTR);
val = read_sysreg(HDCR);
write_sysreg(val | HDCR_TPM | HDCR_TPMCR, HDCR);
}
static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
{
u32 val;
write_sysreg(0, HCR);
write_sysreg(0, HSTR);
val = read_sysreg(HDCR);
write_sysreg(val & ~(HDCR_TPM | HDCR_TPMCR), HDCR);
write_sysreg(0, HCPTR);
}
static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
write_sysreg(kvm->arch.vttbr, VTTBR);
write_sysreg(vcpu->arch.midr, VPIDR);
}
static void __hyp_text __deactivate_vm(struct kvm_vcpu *vcpu)
{
write_sysreg(0, VTTBR);
write_sysreg(read_sysreg(MIDR), VPIDR);
}
static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu)
{
__vgic_v2_save_state(vcpu);
}
static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
{
__vgic_v2_restore_state(vcpu);
}
static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
{
u32 hsr = read_sysreg(HSR);
u8 ec = hsr >> HSR_EC_SHIFT;
u32 hpfar, far;
vcpu->arch.fault.hsr = hsr;
if (ec == HSR_EC_IABT)
far = read_sysreg(HIFAR);
else if (ec == HSR_EC_DABT)
far = read_sysreg(HDFAR);
else
return true;
/*
* B3.13.5 Reporting exceptions taken to the Non-secure PL2 mode:
*
* Abort on the stage 2 translation for a memory access from a
* Non-secure PL1 or PL0 mode:
*
* For any Access flag fault or Translation fault, and also for any
* Permission fault on the stage 2 translation of a memory access
* made as part of a translation table walk for a stage 1 translation,
* the HPFAR holds the IPA that caused the fault. Otherwise, the HPFAR
* is UNKNOWN.
*/
if (!(hsr & HSR_DABT_S1PTW) && (hsr & HSR_FSC_TYPE) == FSC_PERM) {
u64 par, tmp;
par = read_sysreg(PAR);
write_sysreg(far, ATS1CPR);
isb();
tmp = read_sysreg(PAR);
write_sysreg(par, PAR);
if (unlikely(tmp & 1))
return false; /* Translation failed, back to guest */
hpfar = ((tmp >> 12) & ((1UL << 28) - 1)) << 4;
} else {
hpfar = read_sysreg(HPFAR);
}
vcpu->arch.fault.hxfar = far;
vcpu->arch.fault.hpfar = hpfar;
return true;
}
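(Editorial note on the permission-fault path above: when HPFAR is UNKNOWN, the code performs the stage-1 resolution itself. It preserves PAR, issues an ATS1CPR translation of the faulting VA, and reads the result back from PAR; if bit 0 of the result is set the translation failed, so the function returns false and the guest is re-entered to take the stage-1 fault itself. Otherwise PAR[39:12] is repackaged into HPFAR's FIPA field, bits [31:4], which is what the `((tmp >> 12) & ((1UL << 28) - 1)) << 4` expression computes.)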
static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
bool fp_enabled;
u64 exit_code;
u32 fpexc;
vcpu = kern_hyp_va(vcpu);
write_sysreg(vcpu, HTPIDR);
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
guest_ctxt = &vcpu->arch.ctxt;
__sysreg_save_state(host_ctxt);
__banked_save_state(host_ctxt);
__activate_traps(vcpu, &fpexc);
__activate_vm(vcpu);
__vgic_restore_state(vcpu);
__timer_restore_state(vcpu);
__sysreg_restore_state(guest_ctxt);
__banked_restore_state(guest_ctxt);
/* Jump in the fire! */
again:
exit_code = __guest_enter(vcpu, host_ctxt);
/* And we're baaack! */
if (exit_code == ARM_EXCEPTION_HVC && !__populate_fault_info(vcpu))
goto again;
fp_enabled = __vfp_enabled();
__banked_save_state(guest_ctxt);
__sysreg_save_state(guest_ctxt);
__timer_save_state(vcpu);
__vgic_save_state(vcpu);
__deactivate_traps(vcpu);
__deactivate_vm(vcpu);
__banked_restore_state(host_ctxt);
__sysreg_restore_state(host_ctxt);
if (fp_enabled) {
__vfp_save_state(&guest_ctxt->vfp);
__vfp_restore_state(&host_ctxt->vfp);
}
write_sysreg(fpexc, VFP_FPEXC);
return exit_code;
}
__alias(__guest_run) int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
static const char * const __hyp_panic_string[] = {
[ARM_EXCEPTION_RESET] = "\nHYP panic: RST PC:%08x CPSR:%08x",
[ARM_EXCEPTION_UNDEFINED] = "\nHYP panic: UNDEF PC:%08x CPSR:%08x",
[ARM_EXCEPTION_SOFTWARE] = "\nHYP panic: SVC PC:%08x CPSR:%08x",
[ARM_EXCEPTION_PREF_ABORT] = "\nHYP panic: PABRT PC:%08x CPSR:%08x",
[ARM_EXCEPTION_DATA_ABORT] = "\nHYP panic: DABRT PC:%08x ADDR:%08x",
[ARM_EXCEPTION_IRQ] = "\nHYP panic: IRQ PC:%08x CPSR:%08x",
[ARM_EXCEPTION_FIQ] = "\nHYP panic: FIQ PC:%08x CPSR:%08x",
[ARM_EXCEPTION_HVC] = "\nHYP panic: HVC PC:%08x CPSR:%08x",
};
void __hyp_text __noreturn __hyp_panic(int cause)
{
u32 elr = read_special(ELR_hyp);
u32 val;
if (cause == ARM_EXCEPTION_DATA_ABORT)
val = read_sysreg(HDFAR);
else
val = read_special(SPSR);
if (read_sysreg(VTTBR)) {
struct kvm_vcpu *vcpu;
struct kvm_cpu_context *host_ctxt;
vcpu = (struct kvm_vcpu *)read_sysreg(HTPIDR);
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
__deactivate_traps(vcpu);
__deactivate_vm(vcpu);
__sysreg_restore_state(host_ctxt);
}
/* Call panic for real */
__hyp_do_panic(__hyp_panic_string[cause], elr, val);
unreachable();
}
/*
* Original code:
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* Mostly rewritten in C by Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/kvm_hyp.h>
/**
* Flush per-VMID TLBs
*
* __kvm_tlb_flush_vmid(struct kvm *kvm);
*
* We rely on the hardware to broadcast the TLB invalidation to all CPUs
* inside the inner-shareable domain (which is the case for all v7
* implementations). If we come across a non-IS SMP implementation, we'll
* have to use an IPI based mechanism. Until then, we stick to the simple
* hardware assisted version.
*
* As v7 does not support flushing per IPA, just nuke the whole TLB
* instead, ignoring the ipa value.
*/
static void __hyp_text __tlb_flush_vmid(struct kvm *kvm)
{
dsb(ishst);
/* Switch to requested VMID */
kvm = kern_hyp_va(kvm);
write_sysreg(kvm->arch.vttbr, VTTBR);
isb();
write_sysreg(0, TLBIALLIS);
dsb(ish);
isb();
write_sysreg(0, VTTBR);
}
__alias(__tlb_flush_vmid) void __kvm_tlb_flush_vmid(struct kvm *kvm);
static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
__tlb_flush_vmid(kvm);
}
__alias(__tlb_flush_vmid_ipa) void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm,
phys_addr_t ipa);
static void __hyp_text __tlb_flush_vm_context(void)
{
write_sysreg(0, TLBIALLNSNHIS);
write_sysreg(0, ICIALLUIS);
dsb(ish);
}
__alias(__tlb_flush_vm_context) void __kvm_flush_vm_context(void);
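(Editorial note: the __alias() wrapper used in the file above is the kernel's shorthand for the GCC alias attribute, so each __hyp_text function is also exported under the ABI name declared in asm/kvm_asm.h without an extra trampoline. The first alias, for example, expands to roughly:)

	/* Sketch of the expansion, per __alias() in include/linux/compiler-gcc.h: */
	void __kvm_tlb_flush_vmid(struct kvm *kvm)
		__attribute__((alias("__tlb_flush_vmid")));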
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/linkage.h>
#include <asm/vfpmacros.h>
.text
.pushsection .hyp.text, "ax"
/* void __vfp_save_state(struct vfp_hard_struct *vfp); */
ENTRY(__vfp_save_state)
push {r4, r5}
VFPFMRX r1, FPEXC
@ Make sure *really* VFP is enabled so we can touch the registers.
orr r5, r1, #FPEXC_EN
tst r5, #FPEXC_EX @ Check for VFP Subarchitecture
bic r5, r5, #FPEXC_EX @ FPEXC_EX disable
VFPFMXR FPEXC, r5
isb
VFPFMRX r2, FPSCR
beq 1f
	@ If FPEXC_EX is 0, then FPINST/FPINST2 reads are unpredictable, so
@ we only need to save them if FPEXC_EX is set.
VFPFMRX r3, FPINST
tst r5, #FPEXC_FP2V
VFPFMRX r4, FPINST2, ne @ vmrsne
1:
VFPFSTMIA r0, r5 @ Save VFP registers
stm r0, {r1-r4} @ Save FPEXC, FPSCR, FPINST, FPINST2
pop {r4, r5}
bx lr
ENDPROC(__vfp_save_state)
/* void __vfp_restore_state(struct vfp_hard_struct *vfp);
* Assume FPEXC_EN is on and FPEXC_EX is off */
ENTRY(__vfp_restore_state)
VFPFLDMIA r0, r1 @ Load VFP registers
ldm r0, {r0-r3} @ Load FPEXC, FPSCR, FPINST, FPINST2
VFPFMXR FPSCR, r1
tst r0, #FPEXC_EX @ Check for VFP Subarchitecture
beq 1f
VFPFMXR FPINST, r2
tst r0, #FPEXC_FP2V
VFPFMXR FPINST2, r3, ne
1:
VFPFMXR FPEXC, r0 @ FPEXC (last, in case !EN)
bx lr
ENDPROC(__vfp_restore_state)
.popsection
@@ -84,14 +84,6 @@ __do_hyp_init:
	orr	r0, r0, r1
	mcr	p15, 4, r0, c2, c0, 2	@ HTCR

-	mrc	p15, 4, r1, c2, c1, 2	@ VTCR
-	ldr	r2, =VTCR_MASK
-	bic	r1, r1, r2
-	bic	r0, r0, #(~VTCR_HTCR_SH)	@ clear non-reusable HTCR bits
-	orr	r1, r0, r1
-	orr	r1, r1, #(KVM_VTCR_SL0 | KVM_VTCR_T0SZ | KVM_VTCR_S)
-	mcr	p15, 4, r1, c2, c1, 2	@ VTCR

	@ Use the same memory attributes for hyp. accesses as the kernel
	@ (copy MAIRx to HMAIRx).
	mrc	p15, 0, r0, c10, c2, 0
...
@@ -28,6 +28,7 @@
#include <asm/kvm_mmio.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/virt.h>

#include "trace.h"
@@ -598,6 +599,9 @@ int create_hyp_mappings(void *from, void *to)
	unsigned long start = KERN_TO_HYP((unsigned long)from);
	unsigned long end = KERN_TO_HYP((unsigned long)to);

	if (is_kernel_in_hyp_mode())
		return 0;

	start = start & PAGE_MASK;
	end = PAGE_ALIGN(end);
@@ -630,6 +634,9 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
	unsigned long start = KERN_TO_HYP((unsigned long)from);
	unsigned long end = KERN_TO_HYP((unsigned long)to);

	if (is_kernel_in_hyp_mode())
		return 0;

	/* Check for a valid kernel IO mapping */
	if (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1))
		return -EINVAL;
@@ -1430,6 +1437,22 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
		goto out_unlock;
	}

	/*
	 * Check for a cache maintenance operation. Since we
	 * ended-up here, we know it is outside of any memory
	 * slot. But we can't find out if that is for a device,
	 * or if the guest is just being stupid. The only thing
	 * we know for sure is that this range cannot be cached.
	 *
	 * So let's assume that the guest is just being
	 * cautious, and skip the instruction.
	 */
	if (kvm_vcpu_dabt_is_cm(vcpu)) {
		kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
		ret = 1;
		goto out_unlock;
	}

	/*
	 * The IPA is reported as [MAX:12], so we need to
	 * complement it with the bottom 12 bits from the
...
@@ -71,7 +71,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
	}

	/* Reset core registers */
-	memcpy(&vcpu->arch.regs, reset_regs, sizeof(vcpu->arch.regs));
	memcpy(&vcpu->arch.ctxt.gp_regs, reset_regs, sizeof(vcpu->arch.ctxt.gp_regs));

	/* Reset CP15 registers */
	kvm_reset_coprocs(vcpu);
...
@@ -750,6 +750,19 @@ config ARM64_LSE_ATOMICS
	  not support these instructions and requires the kernel to be
	  built with binutils >= 2.25.

config ARM64_VHE
	bool "Enable support for Virtualization Host Extensions (VHE)"
	default y
	help
	  Virtualization Host Extensions (VHE) allow the kernel to run
	  directly at EL2 (instead of EL1) on processors that support
	  them. This leads to better performance for KVM, as it reduces
	  the cost of the world switch.

	  Selecting this option allows the VHE feature to be detected
	  at runtime, and does not affect processors that do not
	  implement this feature.

endmenu

endmenu
...
@@ -30,8 +30,12 @@
#define ARM64_HAS_LSE_ATOMICS			5
#define ARM64_WORKAROUND_CAVIUM_23154		6
#define ARM64_WORKAROUND_834220			7
/* #define ARM64_HAS_NO_HW_PREFETCH		8 */
/* #define ARM64_HAS_UAO				9 */
/* #define ARM64_ALT_PAN_NOT_UAO			10 */
#define ARM64_HAS_VIRT_HOST_EXTN		11

-#define ARM64_NCAPS				8
#define ARM64_NCAPS				12

#ifndef __ASSEMBLY__
...
@@ -18,6 +18,7 @@
#include <asm/cputype.h>
#include <asm/cpufeature.h>
#include <asm/virt.h>

#ifdef __KERNEL__
@@ -35,10 +36,21 @@ struct arch_hw_breakpoint {
	struct arch_hw_breakpoint_ctrl ctrl;
};

/* Privilege Levels */
#define AARCH64_BREAKPOINT_EL1	1
#define AARCH64_BREAKPOINT_EL0	2

#define DBG_HMC_HYP		(1 << 13)

static inline u32 encode_ctrl_reg(struct arch_hw_breakpoint_ctrl ctrl)
{
-	return (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
	u32 val = (ctrl.len << 5) | (ctrl.type << 3) | (ctrl.privilege << 1) |
		ctrl.enabled;

	if (is_kernel_in_hyp_mode() && ctrl.privilege == AARCH64_BREAKPOINT_EL1)
		val |= DBG_HMC_HYP;

	return val;
}

static inline void decode_ctrl_reg(u32 reg,
@@ -61,10 +73,6 @@ static inline void decode_ctrl_reg(u32 reg,
#define ARM_BREAKPOINT_STORE	2
#define AARCH64_ESR_ACCESS_MASK	(1 << 6)

-/* Privilege Levels */
-#define AARCH64_BREAKPOINT_EL1	1
-#define AARCH64_BREAKPOINT_EL0	2

/* Lengths */
#define ARM_BREAKPOINT_LEN_1	0x1
#define ARM_BREAKPOINT_LEN_2	0x3
...
@@ -23,6 +23,7 @@
#include <asm/types.h>

/* Hyp Configuration Register (HCR) bits */
#define HCR_E2H		(UL(1) << 34)
#define HCR_ID		(UL(1) << 33)
#define HCR_CD		(UL(1) << 32)
#define HCR_RW_SHIFT	31
@@ -81,7 +82,7 @@
			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
#define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)

/* Hyp System Control Register (SCTLR_EL2) bits */
#define SCTLR_EL2_EE	(1 << 25)
@@ -216,4 +217,7 @@
		ECN(SOFTSTP_CUR), ECN(WATCHPT_LOW), ECN(WATCHPT_CUR), \
		ECN(BKPT32), ECN(VECTOR32), ECN(BRK64)

#define CPACR_EL1_FPEN		(3 << 20)
#define CPACR_EL1_TTA		(1 << 28)

#endif /* __ARM64_KVM_ARM_H__ */
@@ -35,9 +35,6 @@ extern char __kvm_hyp_init_end[];

extern char __kvm_hyp_vector[];

-#define __kvm_hyp_code_start	__hyp_text_start
-#define __kvm_hyp_code_end	__hyp_text_end

extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
@@ -45,9 +42,12 @@ extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);

extern u64 __vgic_v3_get_ich_vtr_el2(void);
extern void __vgic_v3_init_lrs(void);

extern u32 __kvm_get_mdcr_el2(void);

extern void __init_stage2_translation(void);

#endif

#endif /* __ARM_KVM_ASM_H__ */
@@ -29,6 +29,7 @@
#include <asm/kvm_mmio.h>
#include <asm/ptrace.h>
#include <asm/cputype.h>
#include <asm/virt.h>

unsigned long *vcpu_reg32(const struct kvm_vcpu *vcpu, u8 reg_num);
unsigned long *vcpu_spsr32(const struct kvm_vcpu *vcpu);
@@ -43,6 +44,8 @@ void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
	if (is_kernel_in_hyp_mode())
		vcpu->arch.hcr_el2 |= HCR_E2H;
	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
		vcpu->arch.hcr_el2 &= ~HCR_RW;
}
@@ -189,6 +192,11 @@ static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
}

static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
{
	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);
}

static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
{
	return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
...
@@ -25,7 +25,9 @@
#include <linux/types.h>
#include <linux/kvm_types.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
#include <asm/kvm_perf_event.h>

#define __KVM_HAVE_ARCH_INTC_INITIALIZED
@@ -36,10 +38,11 @@
#include <kvm/arm_vgic.h>
#include <kvm/arm_arch_timer.h>
#include <kvm/arm_pmu.h>

#define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS

-#define KVM_VCPU_MAX_FEATURES 3
#define KVM_VCPU_MAX_FEATURES 4

int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -114,6 +117,21 @@ enum vcpu_sysreg {
	MDSCR_EL1,	/* Monitor Debug System Control Register */
	MDCCINT_EL1,	/* Monitor Debug Comms Channel Interrupt Enable Reg */

	/* Performance Monitors Registers */
	PMCR_EL0,	/* Control Register */
	PMSELR_EL0,	/* Event Counter Selection Register */
	PMEVCNTR0_EL0,	/* Event Counter Register (0-30) */
	PMEVCNTR30_EL0 = PMEVCNTR0_EL0 + 30,
	PMCCNTR_EL0,	/* Cycle Counter Register */
	PMEVTYPER0_EL0,	/* Event Type Register (0-30) */
	PMEVTYPER30_EL0 = PMEVTYPER0_EL0 + 30,
	PMCCFILTR_EL0,	/* Cycle Count Filter Register */
	PMCNTENSET_EL0,	/* Count Enable Set Register */
	PMINTENSET_EL1,	/* Interrupt Enable Set Register */
	PMOVSSET_EL0,	/* Overflow Flag Status Set Register */
	PMSWINC_EL0,	/* Software Increment Register */
	PMUSERENR_EL0,	/* User Enable Register */

	/* 32bit specific registers. Keep them at the end of the range */
	DACR32_EL2,	/* Domain Access Control Register */
	IFSR32_EL2,	/* Instruction Fault Status Register */
@@ -211,6 +229,7 @@ struct kvm_vcpu_arch {
	/* VGIC state */
	struct vgic_cpu vgic_cpu;
	struct arch_timer_cpu timer_cpu;
	struct kvm_pmu pmu;

	/*
	 * Anything that is not used directly from assembly code goes
@@ -342,5 +361,18 @@ void kvm_arm_init_debug(void);
void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
			       struct kvm_device_attr *attr);
int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
			       struct kvm_device_attr *attr);
int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
			       struct kvm_device_attr *attr);

/* #define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__) */

static inline void __cpu_init_stage2(void)
{
	kvm_call_hyp(__init_stage2_translation);
}

#endif /* __ARM64_KVM_HOST_H__ */
@@ -21,13 +21,105 @@
#include <linux/compiler.h>
#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_perf_event.h>
#include <asm/sysreg.h>

#define __hyp_text __section(.hyp.text) notrace

-#define kern_hyp_va(v) (typeof(v))((unsigned long)(v) & HYP_PAGE_OFFSET_MASK)
-#define hyp_kern_va(v) (typeof(v))((unsigned long)(v) - HYP_PAGE_OFFSET \
-					     + PAGE_OFFSET)
static inline unsigned long __kern_hyp_va(unsigned long v)
{
	asm volatile(ALTERNATIVE("and %0, %0, %1",
"nop",
ARM64_HAS_VIRT_HOST_EXTN)
: "+r" (v) : "i" (HYP_PAGE_OFFSET_MASK));
return v;
}
#define kern_hyp_va(v) (typeof(v))(__kern_hyp_va((unsigned long)(v)))
static inline unsigned long __hyp_kern_va(unsigned long v)
{
u64 offset = PAGE_OFFSET - HYP_PAGE_OFFSET;
asm volatile(ALTERNATIVE("add %0, %0, %1",
"nop",
ARM64_HAS_VIRT_HOST_EXTN)
: "+r" (v) : "r" (offset));
return v;
}
#define hyp_kern_va(v) (typeof(v))(__hyp_kern_va((unsigned long)(v)))
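(Editorial note: on ARM64_HAS_VIRT_HOST_EXTN systems the ALTERNATIVE sequences above patch the mask/offset arithmetic into a nop, so under VHE kern_hyp_va(v) and hyp_kern_va(v) simply return v: the kernel already runs at EL2 and the hyp code uses the kernel's own mapping.)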
#define read_sysreg_elx(r,nvh,vh) \
({ \
u64 reg; \
asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##nvh),\
"mrs_s %0, " __stringify(r##vh),\
ARM64_HAS_VIRT_HOST_EXTN) \
: "=r" (reg)); \
reg; \
})
#define write_sysreg_elx(v,r,nvh,vh) \
do { \
u64 __val = (u64)(v); \
asm volatile(ALTERNATIVE("msr " __stringify(r##nvh) ", %x0",\
"msr_s " __stringify(r##vh) ", %x0",\
ARM64_HAS_VIRT_HOST_EXTN) \
: : "rZ" (__val)); \
} while (0)
/*
* Unified accessors for registers that have a different encoding
* between VHE and non-VHE. They must be specified without their "ELx"
* encoding.
*/
#define read_sysreg_el2(r) \
({ \
u64 reg; \
asm volatile(ALTERNATIVE("mrs %0, " __stringify(r##_EL2),\
"mrs %0, " __stringify(r##_EL1),\
ARM64_HAS_VIRT_HOST_EXTN) \
: "=r" (reg)); \
reg; \
})
#define write_sysreg_el2(v,r) \
do { \
u64 __val = (u64)(v); \
asm volatile(ALTERNATIVE("msr " __stringify(r##_EL2) ", %x0",\
"msr " __stringify(r##_EL1) ", %x0",\
ARM64_HAS_VIRT_HOST_EXTN) \
: : "rZ" (__val)); \
} while (0)
#define read_sysreg_el0(r) read_sysreg_elx(r, _EL0, _EL02)
#define write_sysreg_el0(v,r) write_sysreg_elx(v, r, _EL0, _EL02)
#define read_sysreg_el1(r) read_sysreg_elx(r, _EL1, _EL12)
#define write_sysreg_el1(v,r) write_sysreg_elx(v, r, _EL1, _EL12)
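(Editorial usage sketch: with the accessors above, hyp code names the register without its ELx suffix and the alternative selects the right encoding at runtime. Assuming a kvm_cpu_context pointer ctxt, as in the sysreg save/restore code:)

	/* Sketch: reads SCTLR_EL1 on non-VHE, the SCTLR_EL12 encoding under VHE. */
	ctxt->sys_regs[SCTLR_EL1] = read_sysreg_el1(sctlr);
	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1], sctlr);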
/* The VHE specific system registers and their encoding */
#define sctlr_EL12 sys_reg(3, 5, 1, 0, 0)
#define cpacr_EL12 sys_reg(3, 5, 1, 0, 2)
#define ttbr0_EL12 sys_reg(3, 5, 2, 0, 0)
#define ttbr1_EL12 sys_reg(3, 5, 2, 0, 1)
#define tcr_EL12 sys_reg(3, 5, 2, 0, 2)
#define afsr0_EL12 sys_reg(3, 5, 5, 1, 0)
#define afsr1_EL12 sys_reg(3, 5, 5, 1, 1)
#define esr_EL12 sys_reg(3, 5, 5, 2, 0)
#define far_EL12 sys_reg(3, 5, 6, 0, 0)
#define mair_EL12 sys_reg(3, 5, 10, 2, 0)
#define amair_EL12 sys_reg(3, 5, 10, 3, 0)
#define vbar_EL12 sys_reg(3, 5, 12, 0, 0)
#define contextidr_EL12 sys_reg(3, 5, 13, 0, 1)
#define cntkctl_EL12 sys_reg(3, 5, 14, 1, 0)
#define cntp_tval_EL02 sys_reg(3, 5, 14, 2, 0)
#define cntp_ctl_EL02 sys_reg(3, 5, 14, 2, 1)
#define cntp_cval_EL02 sys_reg(3, 5, 14, 2, 2)
#define cntv_tval_EL02 sys_reg(3, 5, 14, 3, 0)
#define cntv_ctl_EL02 sys_reg(3, 5, 14, 3, 1)
#define cntv_cval_EL02 sys_reg(3, 5, 14, 3, 2)
#define spsr_EL12 sys_reg(3, 5, 4, 0, 0)
#define elr_EL12 sys_reg(3, 5, 4, 0, 1)
/**
 * hyp_alternate_select - Generates patchable code sequences that are
@@ -62,8 +154,10 @@ void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
void __timer_save_state(struct kvm_vcpu *vcpu);
void __timer_restore_state(struct kvm_vcpu *vcpu);

-void __sysreg_save_state(struct kvm_cpu_context *ctxt);
-void __sysreg_restore_state(struct kvm_cpu_context *ctxt);
void __sysreg_save_host_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_host_state(struct kvm_cpu_context *ctxt);
void __sysreg_save_guest_state(struct kvm_cpu_context *ctxt);
void __sysreg_restore_guest_state(struct kvm_cpu_context *ctxt);
void __sysreg32_save_state(struct kvm_vcpu *vcpu);
void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
@@ -78,10 +172,7 @@ void __debug_cond_restore_host_state(struct kvm_vcpu *vcpu);
void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);

-static inline bool __fpsimd_enabled(void)
-{
-	return !(read_sysreg(cptr_el2) & CPTR_EL2_TFP);
-}
bool __fpsimd_enabled(void);

u64 __guest_enter(struct kvm_vcpu *vcpu, struct kvm_cpu_context *host_ctxt);
void __noreturn __hyp_do_panic(unsigned long, ...);
...
@@ -23,13 +23,16 @@
#include <asm/cpufeature.h>

/*
- * As we only have the TTBR0_EL2 register, we cannot express
 * As ARMv8.0 only has the TTBR0_EL2 register, we cannot express
 * "negative" addresses. This makes it impossible to directly share
 * mappings with the kernel.
 *
 * Instead, give the HYP mode its own VA region at a fixed offset from
 * the kernel by just masking the top bits (which are all ones for a
 * kernel address).
 *
 * ARMv8.1 (using VHE) does have a TTBR1_EL2, and doesn't use these
 * macros (the entire kernel runs at EL2).
 */
#define HYP_PAGE_OFFSET_SHIFT	VA_BITS
#define HYP_PAGE_OFFSET_MASK	((UL(1) << HYP_PAGE_OFFSET_SHIFT) - 1)
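(Editorial worked example, assuming a VA_BITS == 48 configuration; the mask is then 0x0000ffffffffffff:)

	/* Kernel VAs have all ones in the top bits, so masking them off
	 * yields the HYP VA at a fixed offset (assumes VA_BITS = 48):
	 *   kernel VA : 0xffff800001234567
	 *   & mask    : 0x0000ffffffffffff
	 *   HYP VA    : 0x0000800001234567
	 */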
@@ -56,12 +59,19 @@

#ifdef __ASSEMBLY__

#include <asm/alternative.h>
#include <asm/cpufeature.h>

/*
 * Convert a kernel VA into a HYP VA.
 * reg: VA to be converted.
 */
.macro kern_hyp_va	reg
alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
	and	\reg, \reg, #HYP_PAGE_OFFSET_MASK
alternative_else
	nop
alternative_endif
.endm

#else
...
/*
* Copyright (C) 2012 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_KVM_PERF_EVENT_H
#define __ASM_KVM_PERF_EVENT_H
#define ARMV8_PMU_MAX_COUNTERS 32
#define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1)
/*
* Per-CPU PMCR: config reg
*/
#define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */
#define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */
#define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */
#define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */
#define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */
#define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/
/* Determines which bit of PMCCNTR_EL0 generates an overflow */
#define ARMV8_PMU_PMCR_LC (1 << 6)
#define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */
#define ARMV8_PMU_PMCR_N_MASK 0x1f
#define ARMV8_PMU_PMCR_MASK 0x7f /* Mask for writable bits */
/*
* PMOVSR: counters overflow flag status reg
*/
#define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */
#define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK
/*
* PMXEVTYPER: Event selection reg
*/
#define ARMV8_PMU_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
#define ARMV8_PMU_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR 0 /* Software increment event */
/*
* Event filters for PMUv3
*/
#define ARMV8_PMU_EXCLUDE_EL1 (1 << 31)
#define ARMV8_PMU_EXCLUDE_EL0 (1 << 30)
#define ARMV8_PMU_INCLUDE_EL2 (1 << 27)
/*
* PMUSERENR: user enable reg
*/
#define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */
#define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */
#define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */
#define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */
#define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */
#endif
@@ -23,6 +23,8 @@

#ifndef __ASSEMBLY__

#include <asm/ptrace.h>

/*
 * __boot_cpu_mode records what mode CPUs were booted in.
 * A correctly-implemented bootloader must start all CPUs in the same mode:
@@ -50,6 +52,14 @@ static inline bool is_hyp_mode_mismatched(void)
	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
}

static inline bool is_kernel_in_hyp_mode(void)
{
	u64 el;

	asm("mrs %0, CurrentEL" : "=r" (el));
	return el == CurrentEL_EL2;
}

/* The section containing the hypervisor text */
extern char __hyp_text_start[];
extern char __hyp_text_end[];
...
@@ -94,6 +94,7 @@ struct kvm_regs {
#define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
#define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
#define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
#define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */

struct kvm_vcpu_init {
	__u32 target;
@@ -204,6 +205,11 @@ struct kvm_arch_memory_slot {
#define KVM_DEV_ARM_VGIC_GRP_CTRL	4
#define KVM_DEV_ARM_VGIC_CTRL_INIT	0
/* Device Control API on vcpu fd */
#define KVM_ARM_VCPU_PMU_V3_CTRL 0
#define KVM_ARM_VCPU_PMU_V3_IRQ 0
#define KVM_ARM_VCPU_PMU_V3_INIT 1
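(Editorial userspace sketch: these constants feed the per-vcpu KVM_SET_DEVICE_ATTR ioctl mentioned in the api.txt change above; vcpu_fd and the PPI number 23 are placeholders:)

	/* Sketch: attach the guest PMU overflow interrupt, then init the PMU. */
	int irq = 23;				/* example PPI, an assumption */
	struct kvm_device_attr attr = {
		.group	= KVM_ARM_VCPU_PMU_V3_CTRL,
		.attr	= KVM_ARM_VCPU_PMU_V3_IRQ,
		.addr	= (__u64)(unsigned long)&irq,
	};
	ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);

	attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
	attr.addr = 0;
	ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);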
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT		24
#define KVM_ARM_IRQ_TYPE_MASK		0xff
...
@@ -110,9 +110,6 @@ int main(void)
  DEFINE(CPU_USER_PT_REGS,	offsetof(struct kvm_regs, regs));
  DEFINE(CPU_FP_REGS,		offsetof(struct kvm_regs, fp_regs));
  DEFINE(VCPU_FPEXC32_EL2,	offsetof(struct kvm_vcpu, arch.ctxt.sys_regs[FPEXC32_EL2]));
-  DEFINE(VCPU_ESR_EL2,		offsetof(struct kvm_vcpu, arch.fault.esr_el2));
-  DEFINE(VCPU_FAR_EL2,		offsetof(struct kvm_vcpu, arch.fault.far_el2));
-  DEFINE(VCPU_HPFAR_EL2,	offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
  DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
#endif
#ifdef CONFIG_CPU_PM
...
@@ -26,6 +26,7 @@
#include <asm/cpu_ops.h>
#include <asm/processor.h>
#include <asm/sysreg.h>
#include <asm/virt.h>

unsigned long elf_hwcap __read_mostly;
EXPORT_SYMBOL_GPL(elf_hwcap);
@@ -621,6 +622,11 @@ static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry)
	return has_sre;
}

static bool runs_at_el2(const struct arm64_cpu_capabilities *entry)
{
	return is_kernel_in_hyp_mode();
}

static const struct arm64_cpu_capabilities arm64_features[] = {
	{
		.desc = "GIC system register CPU interface",
@@ -651,6 +657,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
		.min_field_value = 2,
	},
#endif /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */
	{
		.desc = "Virtualization Host Extensions",
		.capability = ARM64_HAS_VIRT_HOST_EXTN,
		.matches = runs_at_el2,
	},
	{},
};
...
@@ -30,6 +30,7 @@
#include <asm/cache.h>
#include <asm/cputype.h>
#include <asm/kernel-pgtable.h>
#include <asm/kvm_arm.h>
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h>
@@ -464,9 +465,27 @@ CPU_LE( bic x0, x0, #(3 << 24) )	// Clear the EE and E0E bits for EL1
	isb
	ret

2:
#ifdef CONFIG_ARM64_VHE
	/*
	 * Check for VHE being present. For the rest of the EL2 setup,
	 * x2 being non-zero indicates that we do have VHE, and that the
	 * kernel is intended to run at EL2.
	 */
	mrs	x2, id_aa64mmfr1_el1
	ubfx	x2, x2, #8, #4
#else
	mov	x2, xzr
#endif

	/* Hyp configuration. */
-2:	mov	x0, #(1 << 31)		// 64-bit EL1
	mov	x0, #HCR_RW		// 64-bit EL1
	cbz	x2, set_hcr
	orr	x0, x0, #HCR_TGE	// Enable Host Extensions
	orr	x0, x0, #HCR_E2H
set_hcr:
	msr	hcr_el2, x0
	isb

	/* Generic timers. */
	mrs	x0, cnthctl_el2
@@ -526,6 +545,13 @@ CPU_LE( movk x0, #0x30d0, lsl #16 )	// Clear EE and E0E on LE systems
	/* Stage-2 translation */
	msr	vttbr_el2, xzr

	cbz	x2, install_el2_stub

	mov	w20, #BOOT_CPU_MODE_EL2	// This CPU booted in EL2
	isb
	ret

install_el2_stub:
	/* Hypervisor stub */
	adrp	x0, __hyp_stub_vectors
	add	x0, x0, #:lo12:__hyp_stub_vectors
...
@@ -20,6 +20,7 @@
 */

#include <asm/irq_regs.h>
#include <asm/virt.h>

#include <linux/of.h>
#include <linux/perf/arm_pmu.h>
@@ -691,9 +692,12 @@ static int armv8pmu_set_event_filter(struct hw_perf_event *event,
	if (attr->exclude_idle)
		return -EPERM;
	if (is_kernel_in_hyp_mode() &&
	    attr->exclude_kernel != attr->exclude_hv)
		return -EINVAL;
	if (attr->exclude_user)
		config_base |= ARMV8_EXCLUDE_EL0;
-	if (attr->exclude_kernel)
	if (!is_kernel_in_hyp_mode() && attr->exclude_kernel)
		config_base |= ARMV8_EXCLUDE_EL1;
	if (!attr->exclude_hv)
		config_base |= ARMV8_INCLUDE_EL2;
...
@@ -36,6 +36,7 @@ config KVM
	select HAVE_KVM_EVENTFD
	select HAVE_KVM_IRQFD
	select KVM_ARM_VGIC_V3
	select KVM_ARM_PMU if HW_PERF_EVENTS
	---help---
	  Support hosting virtualized guest machines.
	  We don't support KVM with 16K page tables yet, due to the multiple
@@ -48,6 +49,12 @@ config KVM_ARM_HOST
	---help---
	  Provides host support for ARM processors.

config KVM_ARM_PMU
	bool
	---help---
	  Adds support for a virtual Performance Monitoring Unit (PMU) in
	  virtual machines.

source drivers/vhost/Kconfig

endif # VIRTUALIZATION
@@ -26,3 +26,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
@@ -380,3 +380,54 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu,
	}
	return 0;
}
int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret;
switch (attr->group) {
case KVM_ARM_VCPU_PMU_V3_CTRL:
ret = kvm_arm_pmu_v3_set_attr(vcpu, attr);
break;
default:
ret = -ENXIO;
break;
}
return ret;
}
int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret;
switch (attr->group) {
case KVM_ARM_VCPU_PMU_V3_CTRL:
ret = kvm_arm_pmu_v3_get_attr(vcpu, attr);
break;
default:
ret = -ENXIO;
break;
}
return ret;
}
int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret;
switch (attr->group) {
case KVM_ARM_VCPU_PMU_V3_CTRL:
ret = kvm_arm_pmu_v3_has_attr(vcpu, attr);
break;
default:
ret = -ENXIO;
break;
}
return ret;
}
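These three entry points back the KVM_{SET,GET,HAS}_DEVICE_ATTR ioctls on a
vcpu file descriptor (see the KVM_CAP_VCPU_ATTRIBUTES change further down). A
minimal userspace sketch, assuming a vcpu fd and the PMU attribute names added
by this series (KVM_ARM_VCPU_PMU_V3_IRQ being the overflow-interrupt attribute):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int set_pmu_irq(int vcpu_fd, int irq)	/* irq: guest PPI number */
	{
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_PMU_V3_CTRL,
			.attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
			.addr  = (__u64)(unsigned long)&irq,
		};

		if (ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr))
			return -1;	/* PMU attributes not supported */
		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}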
arch/arm64/kvm/hyp-init.S
@@ -87,26 +87,13 @@ __do_hyp_init:
 #endif
 	/*
 	 * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS bits in
-	 * TCR_EL2 and VTCR_EL2.
+	 * TCR_EL2.
 	 */
 	mrs	x5, ID_AA64MMFR0_EL1
 	bfi	x4, x5, #16, #3

 	msr	tcr_el2, x4

-	ldr	x4, =VTCR_EL2_FLAGS
-	bfi	x4, x5, #16, #3
-	/*
-	 * Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS bit in
-	 * VTCR_EL2.
-	 */
-	mrs	x5, ID_AA64MMFR1_EL1
-	ubfx	x5, x5, #5, #1
-	lsl	x5, x5, #VTCR_EL2_VS
-	orr	x4, x4, x5
-	msr	vtcr_el2, x4
-
 	mrs	x4, mair_el1
 	msr	mair_el2, x4
 	isb
arch/arm64/kvm/hyp.S
@@ -17,7 +17,9 @@

 #include <linux/linkage.h>

+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>

 /*
  * u64 kvm_call_hyp(void *hypfn, ...);
@@ -38,6 +40,11 @@
  * arch/arm64/kernel/hyp_stub.S.
  */
 ENTRY(kvm_call_hyp)
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	hvc	#0
 	ret
+alternative_else
+	b	__vhe_hyp_call
+	nop
+alternative_endif
 ENDPROC(kvm_call_hyp)
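Both alternatives are two instructions long (the nop is padding so the patched
sequences match in size), and the choice is applied once at boot, so there is
no runtime test. Conceptually (pseudo-C annotation, not in the patch):

	/*
	 * if the CPU lacks ARM64_HAS_VIRT_HOST_EXTN:
	 *	hvc #0           -> trap to EL2 and run the hyp function there
	 * else (VHE: the kernel already runs at EL2):
	 *	b __vhe_hyp_call -> a plain branch; no exception is needed
	 */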
arch/arm64/kvm/hyp/Makefile
@@ -2,9 +2,12 @@
 # Makefile for Kernel-based Virtual Machine module, HYP part
 #

-obj-$(CONFIG_KVM_ARM_HOST) += vgic-v2-sr.o
+KVM=../../../../virt/kvm
+
+obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o
+obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o
+
 obj-$(CONFIG_KVM_ARM_HOST) += vgic-v3-sr.o
-obj-$(CONFIG_KVM_ARM_HOST) += timer-sr.o
 obj-$(CONFIG_KVM_ARM_HOST) += sysreg-sr.o
 obj-$(CONFIG_KVM_ARM_HOST) += debug-sr.o
 obj-$(CONFIG_KVM_ARM_HOST) += entry.o
@@ -12,3 +15,4 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
 obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
 obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
arch/arm64/kvm/hyp/debug-sr.c
@@ -19,9 +19,7 @@
 #include <linux/kvm_host.h>

 #include <asm/kvm_asm.h>
-#include <asm/kvm_mmu.h>
-
-#include "hyp.h"
+#include <asm/kvm_hyp.h>

 #define read_debug(r,n)		read_sysreg(r##n##_el1)
 #define write_debug(v,r,n)	write_sysreg(v, r##n##_el1)
arch/arm64/kvm/hyp/entry.S
@@ -130,9 +130,15 @@ ENDPROC(__guest_exit)

 ENTRY(__fpsimd_guest_restore)
 	stp	x4, lr, [sp, #-16]!

+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	x2, cptr_el2
 	bic	x2, x2, #CPTR_EL2_TFP
 	msr	cptr_el2, x2
+alternative_else
+	mrs	x2, cpacr_el1
+	orr	x2, x2, #CPACR_EL1_FPEN
+	msr	cpacr_el1, x2
+alternative_endif
 	isb

 	mrs	x3, tpidr_el2
arch/arm64/kvm/hyp/hyp-entry.S
@@ -19,7 +19,6 @@

 #include <asm/alternative.h>
 #include <asm/assembler.h>
-#include <asm/asm-offsets.h>
 #include <asm/cpufeature.h>
 #include <asm/kvm_arm.h>
 #include <asm/kvm_asm.h>
@@ -38,10 +37,42 @@
 	ldp	x0, x1, [sp], #16
 .endm

+.macro do_el2_call
+	/*
+	 * Shuffle the parameters before calling the function
+	 * pointed to in x0. Assumes parameters in x[1,2,3].
+	 */
+	sub	sp, sp, #16
+	str	lr, [sp]
+	mov	lr, x0
+	mov	x0, x1
+	mov	x1, x2
+	mov	x2, x3
+	blr	lr
+	ldr	lr, [sp]
+	add	sp, sp, #16
+.endm
+
+ENTRY(__vhe_hyp_call)
+	do_el2_call
+	/*
+	 * We used to rely on having an exception return to get
+	 * an implicit isb. In the E2H case, we don't have it anymore.
+	 * rather than changing all the leaf functions, just do it here
+	 * before returning to the rest of the kernel.
+	 */
+	isb
+	ret
+ENDPROC(__vhe_hyp_call)
+
 el1_sync:				// Guest trapped into EL2
 	save_x0_to_x3

+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
 	mrs	x1, esr_el2
+alternative_else
+	mrs	x1, esr_el1
+alternative_endif
 	lsr	x2, x1, #ESR_ELx_EC_SHIFT

 	cmp	x2, #ESR_ELx_EC_HVC64
@@ -58,19 +89,13 @@ el1_sync:				// Guest trapped into EL2
 	mrs	x0, vbar_el2
 	b	2f

-1:	stp	lr, xzr, [sp, #-16]!
-
+1:
 	/*
-	 * Compute the function address in EL2, and shuffle the parameters.
+	 * Perform the EL2 call
 	 */
 	kern_hyp_va	x0
-	mov	lr, x0
-	mov	x0, x1
-	mov	x1, x2
-	mov	x2, x3
-	blr	lr
-	ldp	lr, xzr, [sp], #16
+	do_el2_call

 2:	eret

 el1_trap:
@@ -83,72 +108,10 @@ el1_trap:
 	cmp	x2, #ESR_ELx_EC_FP_ASIMD
 	b.eq	__fpsimd_guest_restore

-	cmp	x2, #ESR_ELx_EC_DABT_LOW
-	mov	x0, #ESR_ELx_EC_IABT_LOW
-	ccmp	x2, x0, #4, ne
-	b.ne	1f		// Not an abort we care about
-
-	/* This is an abort. Check for permission fault */
-alternative_if_not ARM64_WORKAROUND_834220
-	and	x2, x1, #ESR_ELx_FSC_TYPE
-	cmp	x2, #FSC_PERM
-	b.ne	1f		// Not a permission fault
-alternative_else
-	nop			// Use the permission fault path to
-	nop			// check for a valid S1 translation,
-	nop			// regardless of the ESR value.
-alternative_endif
-
-	/*
-	 * Check for Stage-1 page table walk, which is guaranteed
-	 * to give a valid HPFAR_EL2.
-	 */
-	tbnz	x1, #7, 1f	// S1PTW is set
-
-	/* Preserve PAR_EL1 */
-	mrs	x3, par_el1
-	stp	x3, xzr, [sp, #-16]!
-
-	/*
-	 * Permission fault, HPFAR_EL2 is invalid.
-	 * Resolve the IPA the hard way using the guest VA.
-	 * Stage-1 translation already validated the memory access rights.
-	 * As such, we can use the EL1 translation regime, and don't have
-	 * to distinguish between EL0 and EL1 access.
-	 */
-	mrs	x2, far_el2
-	at	s1e1r, x2
-	isb
-
-	/* Read result */
-	mrs	x3, par_el1
-	ldp	x0, xzr, [sp], #16	// Restore PAR_EL1 from the stack
-	msr	par_el1, x0
-	tbnz	x3, #0, 3f		// Bail out if we failed the translation
-	ubfx	x3, x3, #12, #36	// Extract IPA
-	lsl	x3, x3, #4		// and present it like HPFAR
-	b	2f
-
-1:	mrs	x3, hpfar_el2
-	mrs	x2, far_el2
-
-2:	mrs	x0, tpidr_el2
-	str	w1, [x0, #VCPU_ESR_EL2]
-	str	x2, [x0, #VCPU_FAR_EL2]
-	str	x3, [x0, #VCPU_HPFAR_EL2]
-
+	mrs	x0, tpidr_el2
 	mov	x1, #ARM_EXCEPTION_TRAP
 	b	__guest_exit

-	/*
-	 * Translation failed. Just return to the guest and
-	 * let it fault again. Another CPU is probably playing
-	 * behind our back.
-	 */
-3:	restore_x0_to_x3
-	eret
-
 el1_irq:
 	save_x0_to_x3
 	mrs	x0, tpidr_el2
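For reference, do_el2_call implements the kvm_call_hyp() convention: x0 carries
the hyp function pointer and x1-x3 its arguments. A sketch of one real caller
(annotation, not in the patch):

	// EL1 (or VHE EL2) caller:  kvm_call_hyp(__kvm_vcpu_run, vcpu);
	//   on entry: x0 = __kvm_vcpu_run, x1 = vcpu
	// do_el2_call:  lr = x0; x0 = x1; ...; blr lr
	//   i.e. __kvm_vcpu_run(vcpu) executes at EL2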
arch/arm64/kvm/hyp/s2-setup.c (new file)
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/types.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+
+void __hyp_text __init_stage2_translation(void)
+{
+	u64 val = VTCR_EL2_FLAGS;
+	u64 tmp;
+
+	/*
+	 * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS
+	 * bits in VTCR_EL2. Amusingly, the PARange is 4 bits, while
+	 * PS is only 3. Fortunately, bit 19 is RES0 in VTCR_EL2...
+	 */
+	val |= (read_sysreg(id_aa64mmfr0_el1) & 7) << 16;
+
+	/*
+	 * Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS
+	 * bit in VTCR_EL2.
+	 */
+	tmp = (read_sysreg(id_aa64mmfr1_el1) >> 4) & 0xf;
+	val |= (tmp == 2) ? VTCR_EL2_VS : 0;
+
+	write_sysreg(val, vtcr_el2);
+}
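For reference (architecture facts, not part of the patch): the VMIDBits field
sits at ID_AA64MMFR1_EL1[7:4], where 0 encodes 8-bit VMIDs and 2 encodes
16-bit VMIDs, so only tmp == 2 sets VTCR_EL2.VS:

	/* id_aa64mmfr1_el1[7:4] == 2  =>  16-bit VMID  =>  val |= VTCR_EL2_VS */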
arch/arm64/kvm/hyp/switch.c
@@ -15,7 +15,53 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */

-#include "hyp.h"
+#include <linux/types.h>
+#include <asm/kvm_asm.h>
+#include <asm/kvm_hyp.h>
+
+static bool __hyp_text __fpsimd_enabled_nvhe(void)
+{
+	return !(read_sysreg(cptr_el2) & CPTR_EL2_TFP);
+}
+
+static bool __hyp_text __fpsimd_enabled_vhe(void)
+{
+	return !!(read_sysreg(cpacr_el1) & CPACR_EL1_FPEN);
+}
+
+static hyp_alternate_select(__fpsimd_is_enabled,
+			    __fpsimd_enabled_nvhe, __fpsimd_enabled_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+bool __hyp_text __fpsimd_enabled(void)
+{
+	return __fpsimd_is_enabled()();
+}
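hyp_alternate_select() comes from the asm/kvm_hyp.h header introduced in this
series: it defines a function that returns one of two function pointers, with
the choice patched in at boot by the alternatives framework, hence the double
call in __fpsimd_is_enabled()(). Roughly (an abbreviated sketch of the macro,
from memory, not verbatim):

	#define hyp_alternate_select(fname, orig, alt, cond)		\
	typeof(orig) * __hyp_text fname(void)				\
	{								\
		typeof(alt) *val = orig;				\
		asm volatile(ALTERNATIVE("nop\n",			\
					 "mov %0, %1\n", cond)		\
			     : "+r" (val) : "r" (alt));			\
		return val;						\
	}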
+static void __hyp_text __activate_traps_vhe(void)
+{
+	u64 val;
+
+	val = read_sysreg(cpacr_el1);
+	val |= CPACR_EL1_TTA;
+	val &= ~CPACR_EL1_FPEN;
+	write_sysreg(val, cpacr_el1);
+
+	write_sysreg(__kvm_hyp_vector, vbar_el1);
+}
+
+static void __hyp_text __activate_traps_nvhe(void)
+{
+	u64 val;
+
+	val = CPTR_EL2_DEFAULT;
+	val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
+	write_sysreg(val, cptr_el2);
+}
+
+static hyp_alternate_select(__activate_traps_arch,
+			    __activate_traps_nvhe, __activate_traps_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);

 static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 {
@@ -36,20 +82,37 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(val, hcr_el2);
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
+	/* Make sure we trap PMU access from EL0 to EL2 */
+	write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);
+	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	__activate_traps_arch()();
+}

-	val = CPTR_EL2_DEFAULT;
-	val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
-	write_sysreg(val, cptr_el2);
+static void __hyp_text __deactivate_traps_vhe(void)
+{
+	extern char vectors[];	/* kernel exception vectors */

-	write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
+	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+	write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
+	write_sysreg(vectors, vbar_el1);
 }

-static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+static void __hyp_text __deactivate_traps_nvhe(void)
 {
 	write_sysreg(HCR_RW, hcr_el2);
+	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
+}
+
+static hyp_alternate_select(__deactivate_traps_arch,
+			    __deactivate_traps_nvhe, __deactivate_traps_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+{
+	__deactivate_traps_arch()();
 	write_sysreg(0, hstr_el2);
 	write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
-	write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
+	write_sysreg(0, pmuserenr_el0);
 }

@@ -89,6 +152,86 @@ static void __hyp_text __vgic_restore_state(struct kvm_vcpu *vcpu)
 	__vgic_call_restore_state()(vcpu);
 }

+static bool __hyp_text __true_value(void)
+{
+	return true;
+}
+
+static bool __hyp_text __false_value(void)
+{
+	return false;
+}
+
+static hyp_alternate_select(__check_arm_834220,
+			    __false_value, __true_value,
+			    ARM64_WORKAROUND_834220);
+
+static bool __hyp_text __translate_far_to_hpfar(u64 far, u64 *hpfar)
+{
+	u64 par, tmp;
+
+	/*
+	 * Resolve the IPA the hard way using the guest VA.
+	 *
+	 * Stage-1 translation already validated the memory access
+	 * rights. As such, we can use the EL1 translation regime, and
+	 * don't have to distinguish between EL0 and EL1 access.
+	 *
+	 * We do need to save/restore PAR_EL1 though, as we haven't
+	 * saved the guest context yet, and we may return early...
+	 */
+	par = read_sysreg(par_el1);
+	asm volatile("at s1e1r, %0" : : "r" (far));
+	isb();
+
+	tmp = read_sysreg(par_el1);
+	write_sysreg(par, par_el1);
+
+	if (unlikely(tmp & 1))
+		return false; /* Translation failed, back to guest */
+
+	/* Convert PAR to HPFAR format */
+	*hpfar = ((tmp >> 12) & ((1UL << 36) - 1)) << 4;
+	return true;
+}
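The shift-and-mask at the end is just relocating the physical-address field
(annotation, not part of the patch):

	/*
	 * PAR_EL1[47:12]  = PA[47:12] after a successful AT S1E1R
	 * HPFAR_EL2[39:4] = faulting IPA[47:12]
	 * so (par >> 12) & ((1UL << 36) - 1) extracts PA[47:12],
	 * and << 4 re-homes it in the HPFAR layout.
	 */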
+static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
+{
+	u64 esr = read_sysreg_el2(esr);
+	u8 ec = esr >> ESR_ELx_EC_SHIFT;
+	u64 hpfar, far;
+
+	vcpu->arch.fault.esr_el2 = esr;
+
+	if (ec != ESR_ELx_EC_DABT_LOW && ec != ESR_ELx_EC_IABT_LOW)
+		return true;
+
+	far = read_sysreg_el2(far);
+
+	/*
+	 * The HPFAR can be invalid if the stage 2 fault did not
+	 * happen during a stage 1 page table walk (the ESR_EL2.S1PTW
+	 * bit is clear) and one of the two following cases are true:
+	 *   1. The fault was due to a permission fault
+	 *   2. The processor carries errata 834220
+	 *
+	 * Therefore, for all non S1PTW faults where we either have a
+	 * permission fault or the errata workaround is enabled, we
+	 * resolve the IPA using the AT instruction.
+	 */
+	if (!(esr & ESR_ELx_S1PTW) &&
+	    (__check_arm_834220()() || (esr & ESR_ELx_FSC_TYPE) == FSC_PERM)) {
+		if (!__translate_far_to_hpfar(far, &hpfar))
+			return false;
+	} else {
+		hpfar = read_sysreg(hpfar_el2);
+	}
+
+	vcpu->arch.fault.far_el2 = far;
+	vcpu->arch.fault.hpfar_el2 = hpfar;
+	return true;
+}
+
 static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
@@ -102,7 +245,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 	guest_ctxt = &vcpu->arch.ctxt;

-	__sysreg_save_state(host_ctxt);
+	__sysreg_save_host_state(host_ctxt);
 	__debug_cond_save_host_state(vcpu);

 	__activate_traps(vcpu);
@@ -116,16 +259,20 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	 * to Cortex-A57 erratum #852523.
 	 */
 	__sysreg32_restore_state(vcpu);
-	__sysreg_restore_state(guest_ctxt);
+	__sysreg_restore_guest_state(guest_ctxt);
 	__debug_restore_state(vcpu, kern_hyp_va(vcpu->arch.debug_ptr), guest_ctxt);

 	/* Jump in the fire! */
+again:
 	exit_code = __guest_enter(vcpu, host_ctxt);
 	/* And we're baaack! */

+	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
+		goto again;
+
 	fp_enabled = __fpsimd_enabled();

-	__sysreg_save_state(guest_ctxt);
+	__sysreg_save_guest_state(guest_ctxt);
 	__sysreg32_save_state(vcpu);
 	__timer_save_state(vcpu);
 	__vgic_save_state(vcpu);
@@ -133,7 +280,7 @@ static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 	__deactivate_traps(vcpu);
 	__deactivate_vm(vcpu);

-	__sysreg_restore_state(host_ctxt);
+	__sysreg_restore_host_state(host_ctxt);

 	if (fp_enabled) {
 		__fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs);
@@ -150,11 +297,34 @@ __alias(__guest_run) int __kvm_vcpu_run(struct kvm_vcpu *vcpu);

 static const char __hyp_panic_string[] = "HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n";

-void __hyp_text __noreturn __hyp_panic(void)
+static void __hyp_text __hyp_call_panic_nvhe(u64 spsr, u64 elr, u64 par)
 {
 	unsigned long str_va = (unsigned long)__hyp_panic_string;
-	u64 spsr = read_sysreg(spsr_el2);
-	u64 elr = read_sysreg(elr_el2);
+
+	__hyp_do_panic(hyp_kern_va(str_va),
+		       spsr, elr,
+		       read_sysreg(esr_el2), read_sysreg_el2(far),
+		       read_sysreg(hpfar_el2), par,
+		       (void *)read_sysreg(tpidr_el2));
+}
+
+static void __hyp_text __hyp_call_panic_vhe(u64 spsr, u64 elr, u64 par)
+{
+	panic(__hyp_panic_string,
+	      spsr, elr,
+	      read_sysreg_el2(esr), read_sysreg_el2(far),
+	      read_sysreg(hpfar_el2), par,
+	      (void *)read_sysreg(tpidr_el2));
+}
+
+static hyp_alternate_select(__hyp_call_panic,
+			    __hyp_call_panic_nvhe, __hyp_call_panic_vhe,
+			    ARM64_HAS_VIRT_HOST_EXTN);
+
+void __hyp_text __noreturn __hyp_panic(void)
+{
+	u64 spsr = read_sysreg_el2(spsr);
+	u64 elr = read_sysreg_el2(elr);
 	u64 par = read_sysreg(par_el1);

 	if (read_sysreg(vttbr_el2)) {
@@ -165,15 +335,11 @@ void __hyp_text __noreturn __hyp_panic(void)
 		host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 		__deactivate_traps(vcpu);
 		__deactivate_vm(vcpu);
-		__sysreg_restore_state(host_ctxt);
+		__sysreg_restore_host_state(host_ctxt);
 	}

 	/* Call panic for real */
-	__hyp_do_panic(hyp_kern_va(str_va),
-		       spsr, elr,
-		       read_sysreg(esr_el2), read_sysreg(far_el2),
-		       read_sysreg(hpfar_el2), par,
-		       (void *)read_sysreg(tpidr_el2));
+	__hyp_call_panic()(spsr, elr, par);

 	unreachable();
 }
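__populate_fault_info() is the C replacement for the abort-decoding assembly
deleted from hyp-entry.S above, and the again: loop keeps the old retry policy
(annotation, not part of the patch):

	/*
	 * again:
	 *	exit_code = __guest_enter(vcpu, host_ctxt);
	 *	if (exit_code == ARM_EXCEPTION_TRAP &&
	 *	    !__populate_fault_info(vcpu))
	 *		goto again;	// AT failed: let the guest refault
	 *
	 * A failed AT means another CPU changed the stage-1 tables under
	 * us; re-entering the guest makes it fault again and redo the walk.
	 */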
arch/arm64/kvm/hyp/tlb.c
@@ -15,7 +15,7 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */

-#include "hyp.h"
+#include <asm/kvm_hyp.h>

 static void __hyp_text __tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
 {
arch/arm64/kvm/reset.c
@@ -77,7 +77,11 @@ int kvm_arch_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_GUEST_DEBUG_HW_WPS:
 		r = get_num_wrps();
 		break;
+	case KVM_CAP_ARM_PMU_V3:
+		r = kvm_arm_support_pmu_v3();
+		break;
 	case KVM_CAP_SET_GUEST_DEBUG:
+	case KVM_CAP_VCPU_ATTRIBUTES:
 		r = 1;
 		break;
 	default:
@@ -120,6 +124,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
 	/* Reset system registers */
 	kvm_reset_sys_regs(vcpu);

+	/* Reset PMU */
+	kvm_pmu_vcpu_reset(vcpu);
+
 	/* Reset timer */
 	return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
 }
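Userspace probes these like any other capability before touching the vcpu PMU
attributes shown earlier; a one-line sketch (hypothetical fd):

	int has_pmu = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PMU_V3);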
arch/powerpc/include/asm/kvm_book3s_64.h
@@ -33,8 +33,6 @@ static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu)
 }
 #endif

-#define SPAPR_TCE_SHIFT		12
-
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
 #endif
arch/powerpc/include/asm/kvm_host.h
@@ -182,7 +182,10 @@ struct kvmppc_spapr_tce_table {
 	struct list_head list;
 	struct kvm *kvm;
 	u64 liobn;
-	u32 window_size;
+	struct rcu_head rcu;
+	u32 page_shift;
+	u64 offset;		/* in pages */
+	u64 size;		/* window size in pages */
 	struct page *pages[0];
 };
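The fixed window_size in bytes (with 4K TCE pages implied) becomes an explicit
geometry, which is what dynamic DMA windows need; the rcu field allows the
table to be freed after a grace period. In bytes (hypothetical values):

	/*
	 * window start = offset << page_shift
	 * window span  = size   << page_shift
	 * e.g. page_shift = 16 (64K TCE pages), size = 0x4000 => a 1GB window
	 */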
arch/powerpc/include/asm/kvm_ppc.h
@@ -165,9 +165,25 @@ extern void kvmppc_map_vrma(struct kvm_vcpu *vcpu,
 extern int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu);

 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm,
-				struct kvm_create_spapr_tce *args);
+				struct kvm_create_spapr_tce_64 *args);
+extern struct kvmppc_spapr_tce_table *kvmppc_find_table(
+		struct kvm_vcpu *vcpu, unsigned long liobn);
+extern long kvmppc_ioba_validate(struct kvmppc_spapr_tce_table *stt,
+		unsigned long ioba, unsigned long npages);
+extern long kvmppc_tce_validate(struct kvmppc_spapr_tce_table *tt,
+		unsigned long tce);
+extern long kvmppc_gpa_to_ua(struct kvm *kvm, unsigned long gpa,
+		unsigned long *ua, unsigned long **prmap);
+extern void kvmppc_tce_put(struct kvmppc_spapr_tce_table *tt,
+		unsigned long idx, unsigned long tce);
 extern long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 			     unsigned long ioba, unsigned long tce);
+extern long kvmppc_h_put_tce_indirect(struct kvm_vcpu *vcpu,
+		unsigned long liobn, unsigned long ioba,
+		unsigned long tce_list, unsigned long npages);
+extern long kvmppc_h_stuff_tce(struct kvm_vcpu *vcpu,
+		unsigned long liobn, unsigned long ioba,
+		unsigned long tce_value, unsigned long npages);
 extern long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
 			     unsigned long ioba);
 extern struct page *kvm_alloc_hpt(unsigned long nr_pages);
@@ -437,6 +453,8 @@ static inline int kvmppc_xics_enabled(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.irq_type == KVMPPC_IRQ_XICS;
 }
+extern void kvmppc_alloc_host_rm_ops(void);
+extern void kvmppc_free_host_rm_ops(void);
 extern void kvmppc_xics_free_icp(struct kvm_vcpu *vcpu);
 extern int kvmppc_xics_create_icp(struct kvm_vcpu *vcpu, unsigned long server);
 extern int kvm_vm_ioctl_xics_irq(struct kvm *kvm, struct kvm_irq_level *args);
@@ -445,7 +463,11 @@ extern u64 kvmppc_xics_get_icp(struct kvm_vcpu *vcpu);
 extern int kvmppc_xics_set_icp(struct kvm_vcpu *vcpu, u64 icpval);
 extern int kvmppc_xics_connect_vcpu(struct kvm_device *dev,
 			struct kvm_vcpu *vcpu, u32 cpu);
+extern void kvmppc_xics_ipi_action(void);
+extern int h_ipi_redirect;
 #else
+static inline void kvmppc_alloc_host_rm_ops(void) {};
+static inline void kvmppc_free_host_rm_ops(void) {};
 static inline int kvmppc_xics_enabled(struct kvm_vcpu *vcpu)
 	{ return 0; }
 static inline void kvmppc_xics_free_icp(struct kvm_vcpu *vcpu) { }
@@ -459,6 +481,33 @@ static inline int kvmppc_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd)
 	{ return 0; }
 #endif

+/*
+ * Host-side operations we want to set up while running in real
+ * mode in the guest operating on the xics.
+ * Currently only VCPU wakeup is supported.
+ */
+union kvmppc_rm_state {
+	unsigned long raw;
+	struct {
+		u32 in_host;
+		u32 rm_action;
+	};
+};
+
+struct kvmppc_host_rm_core {
+	union kvmppc_rm_state rm_state;
+	void *rm_data;
+	char pad[112];
+};
+
+struct kvmppc_host_rm_ops {
+	struct kvmppc_host_rm_core *rm_core;
+	void (*vcpu_kick)(struct kvm_vcpu *vcpu);
+};
+
+extern struct kvmppc_host_rm_ops *kvmppc_host_rm_ops_hv;
+
 static inline unsigned long kvmppc_get_epr(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_KVM_BOOKE_HV
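A note on the 112-byte pad (annotation, not part of the patch): it sizes each
per-core slot to one POWER cache line, presumably so that real-mode updates of
one core's rm_state never false-share with a neighbouring core's slot:

	/* 8 (rm_state) + 8 (rm_data) + 112 (pad) = 128 bytes on 64-bit */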
arch/powerpc/include/asm/pgtable.h
@@ -78,6 +78,9 @@ static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
 	}
 	return __find_linux_pte_or_hugepte(pgdir, ea, is_thp, shift);
 }
+
+unsigned long vmalloc_to_phys(void *vmalloc_addr);
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_PGTABLE_H */
arch/powerpc/include/asm/smp.h
@@ -114,6 +114,9 @@ extern int cpu_to_core_id(int cpu);
 #define PPC_MSG_TICK_BROADCAST	2
 #define PPC_MSG_DEBUGGER_BREAK	3

+/* This is only used by the powernv kernel */
+#define PPC_MSG_RM_HOST_ACTION	4
+
 /* for irq controllers that have dedicated ipis per message (4) */
 extern int smp_request_message_ipi(int virq, int message);
 extern const char *smp_ipi_name[];
@@ -121,6 +124,7 @@ extern const char *smp_ipi_name[];
 /* for irq controllers with only a single ipi */
 extern void smp_muxed_ipi_set_data(int cpu, unsigned long data);
 extern void smp_muxed_ipi_message_pass(int cpu, int msg);
+extern void smp_muxed_ipi_set_message(int cpu, int msg);
 extern irqreturn_t smp_ipi_demux(void);

 void smp_init_pSeries(void);