Commit 0586e28a authored by Marc Zyngier

Merge branch kvm-arm64/hcall-selection into kvmarm-master/next

* kvm-arm64/hcall-selection:
  : .
  : Introduce a new set of virtual sysregs for userspace to
  : select the hypercalls it wants to see exposed to the guest.
  :
  : Patches courtesy of Raghavendra and Oliver.
  : .
  KVM: arm64: Fix hypercall bitmap writeback when vcpus have already run
  KVM: arm64: Hide KVM_REG_ARM_*_BMAP_BIT_COUNT from userspace
  Documentation: Fix index.rst after psci.rst renaming
  selftests: KVM: aarch64: Add the bitmap firmware registers to get-reg-list
  selftests: KVM: aarch64: Introduce hypercall ABI test
  selftests: KVM: Create helper for making SMCCC calls
  selftests: KVM: Rename psci_cpu_on_test to psci_test
  tools: Import ARM SMCCC definitions
  Docs: KVM: Add doc for the bitmap firmware registers
  Docs: KVM: Rename psci.rst to hypercalls.rst
  KVM: arm64: Add vendor hypervisor firmware register
  KVM: arm64: Add standard hypervisor firmware register
  KVM: arm64: Setup a framework for hypercall bitmap firmware registers
  KVM: arm64: Factor out firmware register handling from psci.c
Signed-off-by: Marc Zyngier <maz@kernel.org>
parents d25f30fe 528ada28
...@@ -2601,6 +2601,24 @@ EINVAL.
After the vcpu's SVE configuration is finalized, further attempts to
write this register will fail with EPERM.
arm64 bitmap feature firmware pseudo-registers have the following bit pattern::

  0x6030 0000 0016 <regno:16>

The bitmap feature firmware registers expose the hypercall services that
are available for userspace to configure. The set bits correspond to the
services that are available for the guests to access. By default, KVM
sets all the supported bits during VM initialization. Userspace can
discover the available services via KVM_GET_ONE_REG, and write back the
bitmap corresponding to the features that it wishes guests to see via
KVM_SET_ONE_REG.

Note: These registers are immutable once any of the vCPUs of the VM has
run at least once. A KVM_SET_ONE_REG in such a scenario will return
-EBUSY to userspace.

(See Documentation/virt/kvm/arm/hypercalls.rst for more details.)
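For illustration, a minimal userspace sequence might look like the following
sketch (error handling omitted; vcpu_fd is assumed to be a file descriptor
obtained from KVM_CREATE_VCPU)::

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  uint64_t bmap;
  struct kvm_one_reg reg = {
          .id   = KVM_REG_ARM_STD_BMAP,
          .addr = (uint64_t)&bmap,
  };

  /* Discover the supported standard services... */
  ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
  /* ...keep only TRNG v1.0... */
  bmap &= 1ULL << KVM_REG_ARM_STD_BIT_TRNG_V1_0;
  /* ...and write the bitmap back before any vCPU runs. */
  ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);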
MIPS registers are mapped using the lower 32 bits. The upper 16 of that is
the register group type:
......
.. SPDX-License-Identifier: GPL-2.0

=======================
ARM Hypercall Interface
=======================

KVM handles the hypercall services as requested by the guests. New hypercall
services are regularly made available by the ARM specification or by KVM (as
vendor services) if they make sense from a virtualization point of view.

This means that a guest booted on two different versions of KVM can observe
two different "firmware" revisions. This could cause issues if a given guest
is tied to a particular version of a hypercall service, or if a migration
causes a different version to be exposed out of the blue to an unsuspecting
guest.

In order to remedy this situation, KVM exposes a set of "firmware
pseudo-registers" that can be manipulated using the GET/SET_ONE_REG
interface. These registers can be saved/restored by userspace, and set
to a convenient value as required.

The following registers are defined:
* KVM_REG_ARM_PSCI_VERSION:

  KVM implements the PSCI (Power State Coordination Interface)
  specification in order to provide services such as CPU on/off, reset
  and power-off to the guest.

  - Only valid if the vcpu has the KVM_ARM_VCPU_PSCI_0_2 feature set
    (and thus has already been initialized)
  - Returns the current PSCI version on GET_ONE_REG (defaulting to the
...@@ -74,4 +74,65 @@ The following register is defined:
  KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED:
    The workaround is always active on this vCPU or it is not needed.
Bitmap Feature Firmware Registers
---------------------------------

Contrary to the above registers, the following registers expose the
hypercall services in the form of a feature bitmap to userspace. This
bitmap is translated to the services that are available to the guest.
One register is defined per service call owner, and it can be accessed
via the GET/SET_ONE_REG interface.

By default, these registers are set with the upper limit of the features
that are supported. This way userspace can discover all the usable
hypercall services via GET_ONE_REG. Userspace can then write back the
desired bitmap via SET_ONE_REG. The features for any registers left
untouched, probably because userspace isn't aware of them, will be
exposed as-is to the guest.

Note that KVM will not allow userspace to configure these registers
anymore once any of the vCPUs has run at least once. Instead, it will
return -EBUSY.

The pseudo-firmware bitmap registers are as follows:
* KVM_REG_ARM_STD_BMAP:
Controls the bitmap of the ARM Standard Secure Service Calls.
The following bits are accepted:
Bit-0: KVM_REG_ARM_STD_BIT_TRNG_V1_0:
The bit represents the services offered under v1.0 of ARM True Random
Number Generator (TRNG) specification, ARM DEN0098.
* KVM_REG_ARM_STD_HYP_BMAP:
Controls the bitmap of the ARM Standard Hypervisor Service Calls.
The following bits are accepted:
Bit-0: KVM_REG_ARM_STD_HYP_BIT_PV_TIME:
The bit represents the Paravirtualized Time service as represented by
ARM DEN0057A.
* KVM_REG_ARM_VENDOR_HYP_BMAP:
Controls the bitmap of the Vendor specific Hypervisor Service Calls.
The following bits are accepted:
Bit-0: KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT:
The bit represents the ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID
and ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID function-ids.
Bit-1: KVM_REG_ARM_VENDOR_HYP_BIT_PTP:
The bit represents the Precision Time Protocol KVM service.
Errors:
======= =============================================================
-ENOENT Unknown register accessed.
-EBUSY Attempt a 'write' to the register after the VM has started.
-EINVAL Invalid bitmap written to the register.
======= =============================================================
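As a guest-side illustration of the vendor bits above, a minimal probe
sketch using the Linux arm_smccc_1_1_hvc() helper (a full check would
compare all four UID words)::

  #include <linux/arm-smccc.h>

  struct arm_smccc_res res;

  /* Ask for the vendor-hyp UID; KVM replies with its fixed UID. */
  arm_smccc_1_1_hvc(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, &res);
  if (res.a0 != ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0)
          return;         /* not KVM, or the UID/FEAT bit is disabled */

  /* Query the vendor feature bitmap and test for the PTP service. */
  arm_smccc_1_1_hvc(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID, &res);
  if (res.a0 & (1UL << ARM_SMCCC_KVM_FUNC_PTP))
          /* ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID is available */;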
.. [1] https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
...@@ -8,6 +8,6 @@ ARM
   :maxdepth: 2

   hyp-abi
   hypercalls
   pvtime
   ptp_kvm
...@@ -101,6 +101,19 @@ struct kvm_s2_mmu {
struct kvm_arch_memory_slot {
};
/**
* struct kvm_smccc_features: Descriptor of the hypercall services exposed to the guests
*
* @std_bmap: Bitmap of standard secure service calls
* @std_hyp_bmap: Bitmap of standard hypervisor service calls
* @vendor_hyp_bmap: Bitmap of vendor specific hypervisor service calls
*/
struct kvm_smccc_features {
unsigned long std_bmap;
unsigned long std_hyp_bmap;
unsigned long vendor_hyp_bmap;
};
struct kvm_arch {
        struct kvm_s2_mmu mmu;
...@@ -150,6 +163,9 @@ struct kvm_arch {
        u8 pfr0_csv2;
        u8 pfr0_csv3;
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
};

struct kvm_vcpu_fault_info {
......
...@@ -332,6 +332,40 @@ struct kvm_arm_copy_mte_tags {
#define KVM_ARM64_SVE_VLS_WORDS \
((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
/* Bitmap feature firmware registers */
#define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
KVM_REG_ARM_FW_FEAT_BMAP | \
((r) & 0xffff))
#define KVM_REG_ARM_STD_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(0)
enum {
KVM_REG_ARM_STD_BIT_TRNG_V1_0 = 0,
#ifdef __KERNEL__
KVM_REG_ARM_STD_BMAP_BIT_COUNT,
#endif
};
#define KVM_REG_ARM_STD_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(1)
enum {
KVM_REG_ARM_STD_HYP_BIT_PV_TIME = 0,
#ifdef __KERNEL__
KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT,
#endif
};
#define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(2)
enum {
KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0,
KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1,
#ifdef __KERNEL__
KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT,
#endif
};
/* Device Control API: ARM VGIC */
#define KVM_DEV_ARM_VGIC_GRP_ADDR 0
#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
......
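For reference, expanding KVM_REG_ARM_FW_FEAT_BMAP_REG(0) with the generic
KVM register constants reproduces the bit pattern documented in api.rst
(a worked expansion, not part of the patch):

/*
 * KVM_REG_ARM64                        = 0x6000000000000000
 * KVM_REG_SIZE_U64                     = 0x0030000000000000
 * KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << 16) = 0x0000000000160000
 * regno                                = 0x0000
 * --------------------------------------------------------------
 * KVM_REG_ARM_STD_BMAP                 = 0x6030000000160000
 */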
...@@ -156,6 +156,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
        kvm->arch.max_vcpus = kvm_arm_default_max_vcpus();
        set_default_spectre(kvm);
kvm_arm_init_hypercalls(kvm);
        return ret;

out_free_stage2_pgd:
......
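kvm_arm_init_hypercalls() itself lives in the collapsed hypercalls.c diff
further down; a plausible sketch, assuming the kernel-only *_BIT_COUNT enums
are used to build the default masks:

void kvm_arm_init_hypercalls(struct kvm *kvm)
{
        struct kvm_smccc_features *smccc_feat = &kvm->arch.smccc_feat;

        /* Default to every supported service being exposed to the guest. */
        smccc_feat->std_bmap = GENMASK(KVM_REG_ARM_STD_BMAP_BIT_COUNT - 1, 0);
        smccc_feat->std_hyp_bmap = GENMASK(KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT - 1, 0);
        smccc_feat->vendor_hyp_bmap = GENMASK(KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT - 1, 0);
}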
...@@ -18,7 +18,7 @@
#include <linux/string.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <kvm/arm_hypercalls.h>
#include <asm/cputype.h>
#include <linux/uaccess.h>
#include <asm/fpsimd.h>
...@@ -756,7 +756,9 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
        case KVM_REG_ARM_CORE: return get_core_reg(vcpu, reg);
        case KVM_REG_ARM_FW:
        case KVM_REG_ARM_FW_FEAT_BMAP:
                return kvm_arm_get_fw_reg(vcpu, reg);
        case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg);
        }
...@@ -774,7 +776,9 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
        switch (reg->id & KVM_REG_ARM_COPROC_MASK) {
        case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg);
        case KVM_REG_ARM_FW:
        case KVM_REG_ARM_FW_FEAT_BMAP:
                return kvm_arm_set_fw_reg(vcpu, reg);
        case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg);
        }
......
This diff is collapsed.
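The collapsed diff here is the new arch/arm64/kvm/hypercalls.c; its core is
a per-owner bitmap check before a call is serviced, roughly along these
lines (a sketch, not the verbatim patch):

static bool kvm_hvc_call_allowed(struct kvm_vcpu *vcpu, u32 func_id)
{
        struct kvm_smccc_features *smccc_feat = &vcpu->kvm->arch.smccc_feat;

        switch (func_id) {
        case ARM_SMCCC_TRNG_VERSION:
        case ARM_SMCCC_TRNG_FEATURES:
        case ARM_SMCCC_TRNG_GET_UUID:
        case ARM_SMCCC_TRNG_RND32:
        case ARM_SMCCC_TRNG_RND64:
                return test_bit(KVM_REG_ARM_STD_BIT_TRNG_V1_0,
                                &smccc_feat->std_bmap);
        case ARM_SMCCC_HV_PV_TIME_FEATURES:
        case ARM_SMCCC_HV_PV_TIME_ST:
                return test_bit(KVM_REG_ARM_STD_HYP_BIT_PV_TIME,
                                &smccc_feat->std_hyp_bmap);
        case ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID:
                return test_bit(KVM_REG_ARM_VENDOR_HYP_BIT_PTP,
                                &smccc_feat->vendor_hyp_bmap);
        default:
                /* Calls without a bitmap bit (e.g. PSCI) remain allowed. */
                return true;
        }
}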
...@@ -437,186 +437,3 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
                return -EINVAL;
        }
}
int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu)
{
return 4; /* PSCI version and three workaround registers */
}
int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
{
if (put_user(KVM_REG_ARM_PSCI_VERSION, uindices++))
return -EFAULT;
if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1, uindices++))
return -EFAULT;
if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2, uindices++))
return -EFAULT;
if (put_user(KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3, uindices++))
return -EFAULT;
return 0;
}
#define KVM_REG_FEATURE_LEVEL_WIDTH 4
#define KVM_REG_FEATURE_LEVEL_MASK (BIT(KVM_REG_FEATURE_LEVEL_WIDTH) - 1)
/*
* Convert the workaround level into an easy-to-compare number, where higher
* values mean better protection.
*/
static int get_kernel_wa_level(u64 regid)
{
switch (regid) {
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
switch (arm64_get_spectre_v2_state()) {
case SPECTRE_VULNERABLE:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
case SPECTRE_MITIGATED:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL;
case SPECTRE_UNAFFECTED:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_REQUIRED;
}
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL;
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
switch (arm64_get_spectre_v4_state()) {
case SPECTRE_MITIGATED:
/*
* As for the hypercall discovery, we pretend we
* don't have any FW mitigation if SSBS is there at
* all times.
*/
if (cpus_have_final_cap(ARM64_SSBS))
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
fallthrough;
case SPECTRE_UNAFFECTED:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED;
case SPECTRE_VULNERABLE:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
}
break;
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
switch (arm64_get_spectre_bhb_state()) {
case SPECTRE_VULNERABLE:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
case SPECTRE_MITIGATED:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_AVAIL;
case SPECTRE_UNAFFECTED:
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_REQUIRED;
}
return KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3_NOT_AVAIL;
}
return -EINVAL;
}
int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
void __user *uaddr = (void __user *)(long)reg->addr;
u64 val;
switch (reg->id) {
case KVM_REG_ARM_PSCI_VERSION:
val = kvm_psci_version(vcpu);
break;
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
val = get_kernel_wa_level(reg->id) & KVM_REG_FEATURE_LEVEL_MASK;
break;
default:
return -ENOENT;
}
if (copy_to_user(uaddr, &val, KVM_REG_SIZE(reg->id)))
return -EFAULT;
return 0;
}
int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
{
void __user *uaddr = (void __user *)(long)reg->addr;
u64 val;
int wa_level;
if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)))
return -EFAULT;
switch (reg->id) {
case KVM_REG_ARM_PSCI_VERSION:
{
bool wants_02;
wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features);
switch (val) {
case KVM_ARM_PSCI_0_1:
if (wants_02)
return -EINVAL;
vcpu->kvm->arch.psci_version = val;
return 0;
case KVM_ARM_PSCI_0_2:
case KVM_ARM_PSCI_1_0:
case KVM_ARM_PSCI_1_1:
if (!wants_02)
return -EINVAL;
vcpu->kvm->arch.psci_version = val;
return 0;
}
break;
}
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3:
if (val & ~KVM_REG_FEATURE_LEVEL_MASK)
return -EINVAL;
if (get_kernel_wa_level(reg->id) < val)
return -EINVAL;
return 0;
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
if (val & ~(KVM_REG_FEATURE_LEVEL_MASK |
KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED))
return -EINVAL;
/* The enabled bit must not be set unless the level is AVAIL. */
if ((val & KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED) &&
(val & KVM_REG_FEATURE_LEVEL_MASK) != KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL)
return -EINVAL;
/*
* Map all the possible incoming states to the only two we
* really want to deal with.
*/
switch (val & KVM_REG_FEATURE_LEVEL_MASK) {
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN:
wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL;
break;
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL:
case KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED:
wa_level = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_REQUIRED;
break;
default:
return -EINVAL;
}
/*
* We can deal with NOT_AVAIL on NOT_REQUIRED, but not the
* other way around.
*/
if (get_kernel_wa_level(reg->id) < wa_level)
return -EINVAL;
return 0;
default:
return -ENOENT;
}
return -EINVAL;
}
...@@ -40,4 +40,12 @@ static inline void smccc_set_retval(struct kvm_vcpu *vcpu,
        vcpu_set_reg(vcpu, 3, a3);
}
struct kvm_one_reg;
void kvm_arm_init_hypercalls(struct kvm *kvm);
int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
#endif
...@@ -39,11 +39,4 @@ static inline int kvm_psci_version(struct kvm_vcpu *vcpu)
int kvm_psci_call(struct kvm_vcpu *vcpu);
struct kvm_one_reg;
int kvm_arm_get_fw_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_fw_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
int kvm_arm_get_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_fw_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
#endif /* __KVM_ARM_PSCI_H__ */
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015, Linaro Limited
*/
#ifndef __LINUX_ARM_SMCCC_H
#define __LINUX_ARM_SMCCC_H
#include <linux/const.h>
/*
* This file provides common defines for ARM SMC Calling Convention as
* specified in
* https://developer.arm.com/docs/den0028/latest
*
* This code is up-to-date with version DEN 0028 C
*/
#define ARM_SMCCC_STD_CALL _AC(0,U)
#define ARM_SMCCC_FAST_CALL _AC(1,U)
#define ARM_SMCCC_TYPE_SHIFT 31
#define ARM_SMCCC_SMC_32 0
#define ARM_SMCCC_SMC_64 1
#define ARM_SMCCC_CALL_CONV_SHIFT 30
#define ARM_SMCCC_OWNER_MASK 0x3F
#define ARM_SMCCC_OWNER_SHIFT 24
#define ARM_SMCCC_FUNC_MASK 0xFFFF
#define ARM_SMCCC_IS_FAST_CALL(smc_val) \
((smc_val) & (ARM_SMCCC_FAST_CALL << ARM_SMCCC_TYPE_SHIFT))
#define ARM_SMCCC_IS_64(smc_val) \
((smc_val) & (ARM_SMCCC_SMC_64 << ARM_SMCCC_CALL_CONV_SHIFT))
#define ARM_SMCCC_FUNC_NUM(smc_val) ((smc_val) & ARM_SMCCC_FUNC_MASK)
#define ARM_SMCCC_OWNER_NUM(smc_val) \
(((smc_val) >> ARM_SMCCC_OWNER_SHIFT) & ARM_SMCCC_OWNER_MASK)
#define ARM_SMCCC_CALL_VAL(type, calling_convention, owner, func_num) \
(((type) << ARM_SMCCC_TYPE_SHIFT) | \
((calling_convention) << ARM_SMCCC_CALL_CONV_SHIFT) | \
(((owner) & ARM_SMCCC_OWNER_MASK) << ARM_SMCCC_OWNER_SHIFT) | \
((func_num) & ARM_SMCCC_FUNC_MASK))
#define ARM_SMCCC_OWNER_ARCH 0
#define ARM_SMCCC_OWNER_CPU 1
#define ARM_SMCCC_OWNER_SIP 2
#define ARM_SMCCC_OWNER_OEM 3
#define ARM_SMCCC_OWNER_STANDARD 4
#define ARM_SMCCC_OWNER_STANDARD_HYP 5
#define ARM_SMCCC_OWNER_VENDOR_HYP 6
#define ARM_SMCCC_OWNER_TRUSTED_APP 48
#define ARM_SMCCC_OWNER_TRUSTED_APP_END 49
#define ARM_SMCCC_OWNER_TRUSTED_OS 50
#define ARM_SMCCC_OWNER_TRUSTED_OS_END 63
#define ARM_SMCCC_FUNC_QUERY_CALL_UID 0xff01
#define ARM_SMCCC_QUIRK_NONE 0
#define ARM_SMCCC_QUIRK_QCOM_A6 1 /* Save/restore register a6 */
#define ARM_SMCCC_VERSION_1_0 0x10000
#define ARM_SMCCC_VERSION_1_1 0x10001
#define ARM_SMCCC_VERSION_1_2 0x10002
#define ARM_SMCCC_VERSION_1_3 0x10003
#define ARM_SMCCC_1_3_SVE_HINT 0x10000
#define ARM_SMCCC_VERSION_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 0)
#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 1)
#define ARM_SMCCC_ARCH_SOC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 2)
#define ARM_SMCCC_ARCH_WORKAROUND_1 \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 0x8000)
#define ARM_SMCCC_ARCH_WORKAROUND_2 \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 0x7fff)
#define ARM_SMCCC_ARCH_WORKAROUND_3 \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
0, 0x3fff)
#define ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_FUNC_QUERY_CALL_UID)
/* KVM UID value: 28b46fb6-2ec5-11e9-a9ca-4b564d003a74 */
#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_0 0xb66fb428U
#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_1 0xe911c52eU
#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_2 0x564bcaa9U
#define ARM_SMCCC_VENDOR_HYP_UID_KVM_REG_3 0x743a004dU
/* KVM "vendor specific" services */
#define ARM_SMCCC_KVM_FUNC_FEATURES 0
#define ARM_SMCCC_KVM_FUNC_PTP 1
#define ARM_SMCCC_KVM_FUNC_FEATURES_2 127
#define ARM_SMCCC_KVM_NUM_FUNCS 128
#define ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_KVM_FUNC_FEATURES)
#define SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED 1
/*
* ptp_kvm is a feature used for time sync between vm and host.
* ptp_kvm module in guest kernel will get service from host using
* this hypercall ID.
*/
#define ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_VENDOR_HYP, \
ARM_SMCCC_KVM_FUNC_PTP)
/* ptp_kvm counter type ID */
#define KVM_PTP_VIRT_COUNTER 0
#define KVM_PTP_PHYS_COUNTER 1
/* Paravirtualised time calls (defined by ARM DEN0057A) */
#define ARM_SMCCC_HV_PV_TIME_FEATURES \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_64, \
ARM_SMCCC_OWNER_STANDARD_HYP, \
0x20)
#define ARM_SMCCC_HV_PV_TIME_ST \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_64, \
ARM_SMCCC_OWNER_STANDARD_HYP, \
0x21)
/* TRNG entropy source calls (defined by ARM DEN0098) */
#define ARM_SMCCC_TRNG_VERSION \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_STANDARD, \
0x50)
#define ARM_SMCCC_TRNG_FEATURES \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_STANDARD, \
0x51)
#define ARM_SMCCC_TRNG_GET_UUID \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_STANDARD, \
0x52)
#define ARM_SMCCC_TRNG_RND32 \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_32, \
ARM_SMCCC_OWNER_STANDARD, \
0x53)
#define ARM_SMCCC_TRNG_RND64 \
ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
ARM_SMCCC_SMC_64, \
ARM_SMCCC_OWNER_STANDARD, \
0x53)
/*
* Return codes defined in ARM DEN 0070A
* ARM DEN 0070A is now merged/consolidated into ARM DEN 0028 C
*/
#define SMCCC_RET_SUCCESS 0
#define SMCCC_RET_NOT_SUPPORTED -1
#define SMCCC_RET_NOT_REQUIRED -2
#define SMCCC_RET_INVALID_PARAMETER -3
#endif /*__LINUX_ARM_SMCCC_H*/
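As a quick sanity check on ARM_SMCCC_CALL_VAL, the PTP function ID defined
above expands to the well-known value 0x86000001:

/*
 * ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID
 *   = (ARM_SMCCC_FAST_CALL        << 31)  ->  0x80000000
 *   | (ARM_SMCCC_SMC_32           << 30)  ->  0x00000000
 *   | (ARM_SMCCC_OWNER_VENDOR_HYP << 24)  ->  0x06000000
 *   |  ARM_SMCCC_KVM_FUNC_PTP             ->  0x00000001
 *   = 0x86000001
 */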
...@@ -2,7 +2,8 @@
/aarch64/arch_timer
/aarch64/debug-exceptions
/aarch64/get-reg-list
/aarch64/hypercalls
/aarch64/psci_test
/aarch64/vcpu_width_config
/aarch64/vgic_init
/aarch64/vgic_irq
......
...@@ -105,7 +105,8 @@ TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_aarch64 += aarch64/arch_timer
TEST_GEN_PROGS_aarch64 += aarch64/debug-exceptions
TEST_GEN_PROGS_aarch64 += aarch64/get-reg-list
TEST_GEN_PROGS_aarch64 += aarch64/hypercalls
TEST_GEN_PROGS_aarch64 += aarch64/psci_test
TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config
TEST_GEN_PROGS_aarch64 += aarch64/vgic_init
TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq
......
...@@ -294,6 +294,11 @@ static void print_reg(struct vcpu_config *c, __u64 id)
                       "%s: Unexpected bits set in FW reg id: 0x%llx", config_name(c), id);
                printf("\tKVM_REG_ARM_FW_REG(%lld),\n", id & 0xffff);
                break;
case KVM_REG_ARM_FW_FEAT_BMAP:
TEST_ASSERT(id == KVM_REG_ARM_FW_FEAT_BMAP_REG(id & 0xffff),
"%s: Unexpected bits set in the bitmap feature FW reg id: 0x%llx", config_name(c), id);
printf("\tKVM_REG_ARM_FW_FEAT_BMAP_REG(%lld),\n", id & 0xffff);
break;
        case KVM_REG_ARM64_SVE:
                if (has_cap(c, KVM_CAP_ARM_SVE))
                        printf("\t%s,\n", sve_id_to_str(c, id));
...@@ -692,6 +697,9 @@ static __u64 base_regs[] = {
        KVM_REG_ARM_FW_REG(1),          /* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1 */
        KVM_REG_ARM_FW_REG(2),          /* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2 */
        KVM_REG_ARM_FW_REG(3),          /* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_3 */
KVM_REG_ARM_FW_FEAT_BMAP_REG(0), /* KVM_REG_ARM_STD_BMAP */
KVM_REG_ARM_FW_FEAT_BMAP_REG(1), /* KVM_REG_ARM_STD_HYP_BMAP */
KVM_REG_ARM_FW_FEAT_BMAP_REG(2), /* KVM_REG_ARM_VENDOR_HYP_BMAP */
        ARM64_SYS_REG(3, 3, 14, 3, 1),  /* CNTV_CTL_EL0 */
        ARM64_SYS_REG(3, 3, 14, 3, 2),  /* CNTV_CVAL_EL0 */
        ARM64_SYS_REG(3, 3, 14, 0, 2),
......
// SPDX-License-Identifier: GPL-2.0-only
/* hypercalls: Check the ARM64's pseudo-firmware bitmap register interface.
 *
 * The test validates the basic hypercall functionalities that are exposed
 * via the pseudo-firmware bitmap registers. This includes the registers'
 * read/write behavior before and after the VM has started, and whether the
 * hypercalls are properly masked or unmasked to the guest when disabled or
 * enabled from the KVM userspace, respectively.
 */
#include <errno.h>
#include <linux/arm-smccc.h>
#include <asm/kvm.h>
#include <kvm_util.h>
#include "processor.h"
#define FW_REG_ULIMIT_VAL(max_feat_bit) (GENMASK(max_feat_bit, 0))
/* Last valid bits of the bitmapped firmware registers */
#define KVM_REG_ARM_STD_BMAP_BIT_MAX 0
#define KVM_REG_ARM_STD_HYP_BMAP_BIT_MAX 0
#define KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_MAX 1
struct kvm_fw_reg_info {
uint64_t reg; /* Register definition */
uint64_t max_feat_bit; /* Bit that represents the upper limit of the feature-map */
};
#define FW_REG_INFO(r) \
{ \
.reg = r, \
.max_feat_bit = r##_BIT_MAX, \
}
static const struct kvm_fw_reg_info fw_reg_info[] = {
FW_REG_INFO(KVM_REG_ARM_STD_BMAP),
FW_REG_INFO(KVM_REG_ARM_STD_HYP_BMAP),
FW_REG_INFO(KVM_REG_ARM_VENDOR_HYP_BMAP),
};
enum test_stage {
TEST_STAGE_REG_IFACE,
TEST_STAGE_HVC_IFACE_FEAT_DISABLED,
TEST_STAGE_HVC_IFACE_FEAT_ENABLED,
TEST_STAGE_HVC_IFACE_FALSE_INFO,
TEST_STAGE_END,
};
static int stage = TEST_STAGE_REG_IFACE;
struct test_hvc_info {
uint32_t func_id;
uint64_t arg1;
};
#define TEST_HVC_INFO(f, a1) \
{ \
.func_id = f, \
.arg1 = a1, \
}
static const struct test_hvc_info hvc_info[] = {
/* KVM_REG_ARM_STD_BMAP */
TEST_HVC_INFO(ARM_SMCCC_TRNG_VERSION, 0),
TEST_HVC_INFO(ARM_SMCCC_TRNG_FEATURES, ARM_SMCCC_TRNG_RND64),
TEST_HVC_INFO(ARM_SMCCC_TRNG_GET_UUID, 0),
TEST_HVC_INFO(ARM_SMCCC_TRNG_RND32, 0),
TEST_HVC_INFO(ARM_SMCCC_TRNG_RND64, 0),
/* KVM_REG_ARM_STD_HYP_BMAP */
TEST_HVC_INFO(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, ARM_SMCCC_HV_PV_TIME_FEATURES),
TEST_HVC_INFO(ARM_SMCCC_HV_PV_TIME_FEATURES, ARM_SMCCC_HV_PV_TIME_ST),
TEST_HVC_INFO(ARM_SMCCC_HV_PV_TIME_ST, 0),
/* KVM_REG_ARM_VENDOR_HYP_BMAP */
TEST_HVC_INFO(ARM_SMCCC_VENDOR_HYP_KVM_FEATURES_FUNC_ID,
ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID),
TEST_HVC_INFO(ARM_SMCCC_VENDOR_HYP_CALL_UID_FUNC_ID, 0),
TEST_HVC_INFO(ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID, KVM_PTP_VIRT_COUNTER),
};
/* Feed false hypercall info to test the KVM behavior */
static const struct test_hvc_info false_hvc_info[] = {
/* Feature support check against a different family of hypercalls */
TEST_HVC_INFO(ARM_SMCCC_TRNG_FEATURES, ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID),
TEST_HVC_INFO(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, ARM_SMCCC_TRNG_RND64),
TEST_HVC_INFO(ARM_SMCCC_HV_PV_TIME_FEATURES, ARM_SMCCC_TRNG_RND64),
};
static void guest_test_hvc(const struct test_hvc_info *hc_info)
{
unsigned int i;
struct arm_smccc_res res;
unsigned int hvc_info_arr_sz;
hvc_info_arr_sz =
hc_info == hvc_info ? ARRAY_SIZE(hvc_info) : ARRAY_SIZE(false_hvc_info);
for (i = 0; i < hvc_info_arr_sz; i++, hc_info++) {
memset(&res, 0, sizeof(res));
smccc_hvc(hc_info->func_id, hc_info->arg1, 0, 0, 0, 0, 0, 0, &res);
switch (stage) {
case TEST_STAGE_HVC_IFACE_FEAT_DISABLED:
case TEST_STAGE_HVC_IFACE_FALSE_INFO:
GUEST_ASSERT_3(res.a0 == SMCCC_RET_NOT_SUPPORTED,
res.a0, hc_info->func_id, hc_info->arg1);
break;
case TEST_STAGE_HVC_IFACE_FEAT_ENABLED:
GUEST_ASSERT_3(res.a0 != SMCCC_RET_NOT_SUPPORTED,
res.a0, hc_info->func_id, hc_info->arg1);
break;
default:
GUEST_ASSERT_1(0, stage);
}
}
}
static void guest_code(void)
{
while (stage != TEST_STAGE_END) {
switch (stage) {
case TEST_STAGE_REG_IFACE:
break;
case TEST_STAGE_HVC_IFACE_FEAT_DISABLED:
case TEST_STAGE_HVC_IFACE_FEAT_ENABLED:
guest_test_hvc(hvc_info);
break;
case TEST_STAGE_HVC_IFACE_FALSE_INFO:
guest_test_hvc(false_hvc_info);
break;
default:
GUEST_ASSERT_1(0, stage);
}
GUEST_SYNC(stage);
}
GUEST_DONE();
}
static int set_fw_reg(struct kvm_vm *vm, uint64_t id, uint64_t val)
{
struct kvm_one_reg reg = {
.id = id,
.addr = (uint64_t)&val,
};
return _vcpu_ioctl(vm, 0, KVM_SET_ONE_REG, &reg);
}
static void get_fw_reg(struct kvm_vm *vm, uint64_t id, uint64_t *addr)
{
struct kvm_one_reg reg = {
.id = id,
.addr = (uint64_t)addr,
};
vcpu_ioctl(vm, 0, KVM_GET_ONE_REG, &reg);
}
struct st_time {
uint32_t rev;
uint32_t attr;
uint64_t st_time;
};
#define STEAL_TIME_SIZE ((sizeof(struct st_time) + 63) & ~63)
#define ST_GPA_BASE (1 << 30)
static void steal_time_init(struct kvm_vm *vm)
{
uint64_t st_ipa = (ulong)ST_GPA_BASE;
unsigned int gpages;
struct kvm_device_attr dev = {
.group = KVM_ARM_VCPU_PVTIME_CTRL,
.attr = KVM_ARM_VCPU_PVTIME_IPA,
.addr = (uint64_t)&st_ipa,
};
gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE);
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0);
vcpu_ioctl(vm, 0, KVM_SET_DEVICE_ATTR, &dev);
}
static void test_fw_regs_before_vm_start(struct kvm_vm *vm)
{
uint64_t val;
unsigned int i;
int ret;
for (i = 0; i < ARRAY_SIZE(fw_reg_info); i++) {
const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i];
/* First 'read' should be an upper limit of the features supported */
get_fw_reg(vm, reg_info->reg, &val);
TEST_ASSERT(val == FW_REG_ULIMIT_VAL(reg_info->max_feat_bit),
"Expected all the features to be set for reg: 0x%lx; expected: 0x%lx; read: 0x%lx\n",
reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit), val);
/* Test a 'write' by disabling all the features of the register map */
ret = set_fw_reg(vm, reg_info->reg, 0);
TEST_ASSERT(ret == 0,
"Failed to clear all the features of reg: 0x%lx; ret: %d\n",
reg_info->reg, errno);
get_fw_reg(vm, reg_info->reg, &val);
TEST_ASSERT(val == 0,
"Expected all the features to be cleared for reg: 0x%lx\n", reg_info->reg);
/*
* Test enabling a feature that's not supported.
* Avoid this check if all the bits are occupied.
*/
if (reg_info->max_feat_bit < 63) {
ret = set_fw_reg(vm, reg_info->reg, BIT(reg_info->max_feat_bit + 1));
TEST_ASSERT(ret != 0 && errno == EINVAL,
"Unexpected behavior or return value (%d) while setting an unsupported feature for reg: 0x%lx\n",
errno, reg_info->reg);
}
}
}
static void test_fw_regs_after_vm_start(struct kvm_vm *vm)
{
uint64_t val;
unsigned int i;
int ret;
for (i = 0; i < ARRAY_SIZE(fw_reg_info); i++) {
const struct kvm_fw_reg_info *reg_info = &fw_reg_info[i];
/*
* Before starting the VM, the test clears all the bits.
* Check if that's still the case.
*/
get_fw_reg(vm, reg_info->reg, &val);
TEST_ASSERT(val == 0,
"Expected all the features to be cleared for reg: 0x%lx\n",
reg_info->reg);
/*
* Since the VM has run at least once, KVM shouldn't allow modification of
* the registers and should return EBUSY. Set the registers and check for
* the expected errno.
*/
ret = set_fw_reg(vm, reg_info->reg, FW_REG_ULIMIT_VAL(reg_info->max_feat_bit));
TEST_ASSERT(ret != 0 && errno == EBUSY,
"Unexpected behavior or return value (%d) while setting a feature while VM is running for reg: 0x%lx\n",
errno, reg_info->reg);
}
}
static struct kvm_vm *test_vm_create(void)
{
struct kvm_vm *vm;
vm = vm_create_default(0, 0, guest_code);
ucall_init(vm, NULL);
steal_time_init(vm);
return vm;
}
static struct kvm_vm *test_guest_stage(struct kvm_vm *vm)
{
struct kvm_vm *ret_vm = vm;
pr_debug("Stage: %d\n", stage);
switch (stage) {
case TEST_STAGE_REG_IFACE:
test_fw_regs_after_vm_start(vm);
break;
case TEST_STAGE_HVC_IFACE_FEAT_DISABLED:
/* Start a new VM so that all the features are now enabled by default */
kvm_vm_free(vm);
ret_vm = test_vm_create();
break;
case TEST_STAGE_HVC_IFACE_FEAT_ENABLED:
case TEST_STAGE_HVC_IFACE_FALSE_INFO:
break;
default:
TEST_FAIL("Unknown test stage: %d\n", stage);
}
stage++;
sync_global_to_guest(vm, stage);
return ret_vm;
}
static void test_run(void)
{
struct kvm_vm *vm;
struct ucall uc;
bool guest_done = false;
vm = test_vm_create();
test_fw_regs_before_vm_start(vm);
while (!guest_done) {
vcpu_run(vm, 0);
switch (get_ucall(vm, 0, &uc)) {
case UCALL_SYNC:
vm = test_guest_stage(vm);
break;
case UCALL_DONE:
guest_done = true;
break;
case UCALL_ABORT:
TEST_FAIL("%s at %s:%ld\n\tvalues: 0x%lx, 0x%lx; 0x%lx, stage: %u",
(const char *)uc.args[0], __FILE__, uc.args[1],
uc.args[2], uc.args[3], uc.args[4], stage);
break;
default:
TEST_FAIL("Unexpected guest exit\n");
}
}
kvm_vm_free(vm);
}
int main(void)
{
setbuf(stdout, NULL);
test_run();
return 0;
}
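The new test builds along with the rest of the KVM selftests; assuming a
configured kernel tree, something like the following should build and run it
(paths per the Makefile hunk above):

        make -C tools/testing/selftests TARGETS=kvm
        ./tools/testing/selftests/kvm/aarch64/hypercalls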
...@@ -26,32 +26,23 @@
static uint64_t psci_cpu_on(uint64_t target_cpu, uint64_t entry_addr,
                            uint64_t context_id)
{
        struct arm_smccc_res res;

        smccc_hvc(PSCI_0_2_FN64_CPU_ON, target_cpu, entry_addr, context_id,
                  0, 0, 0, 0, &res);

        return res.a0;
}

static uint64_t psci_affinity_info(uint64_t target_affinity,
                                   uint64_t lowest_affinity_level)
{
        struct arm_smccc_res res;

        smccc_hvc(PSCI_0_2_FN64_AFFINITY_INFO, target_affinity, lowest_affinity_level,
                  0, 0, 0, 0, 0, &res);

        return res.a0;
}

static void guest_main(uint64_t target_cpu)
......
...@@ -185,4 +185,26 @@ static inline void local_irq_disable(void)
        asm volatile("msr daifset, #3" : : : "memory");
}
/**
* struct arm_smccc_res - Result from SMC/HVC call
* @a0-a3 result values from registers 0 to 3
*/
struct arm_smccc_res {
unsigned long a0;
unsigned long a1;
unsigned long a2;
unsigned long a3;
};
/**
* smccc_hvc - Invoke a SMCCC function using the hvc conduit
* @function_id: the SMCCC function to be called
* @arg0-arg6: SMCCC function arguments, corresponding to registers x1-x7
* @res: pointer to write the return values from registers x0-x3
*
*/
void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
uint64_t arg6, struct arm_smccc_res *res);
#endif /* SELFTEST_KVM_PROCESSOR_H */
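For example, guest code can call the new helper directly; a usage sketch
querying the SMCCC version implemented by the hypervisor:

        struct arm_smccc_res res;

        smccc_hvc(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0, 0, 0, 0, 0, &res);
        GUEST_ASSERT(res.a0 != SMCCC_RET_NOT_SUPPORTED); /* e.g. 0x10002 for SMCCC v1.2 */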
...@@ -500,3 +500,28 @@ void __attribute__((constructor)) init_guest_modes(void)
{
        guest_modes_append_default();
}
void smccc_hvc(uint32_t function_id, uint64_t arg0, uint64_t arg1,
uint64_t arg2, uint64_t arg3, uint64_t arg4, uint64_t arg5,
uint64_t arg6, struct arm_smccc_res *res)
{
asm volatile("mov w0, %w[function_id]\n"
"mov x1, %[arg0]\n"
"mov x2, %[arg1]\n"
"mov x3, %[arg2]\n"
"mov x4, %[arg3]\n"
"mov x5, %[arg4]\n"
"mov x6, %[arg5]\n"
"mov x7, %[arg6]\n"
"hvc #0\n"
"mov %[res0], x0\n"
"mov %[res1], x1\n"
"mov %[res2], x2\n"
"mov %[res3], x3\n"
: [res0] "=r"(res->a0), [res1] "=r"(res->a1),
[res2] "=r"(res->a2), [res3] "=r"(res->a3)
: [function_id] "r"(function_id), [arg0] "r"(arg0),
[arg1] "r"(arg1), [arg2] "r"(arg2), [arg3] "r"(arg3),
[arg4] "r"(arg4), [arg5] "r"(arg5), [arg6] "r"(arg6)
: "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7");
}
...@@ -118,17 +118,10 @@ struct st_time {
static int64_t smccc(uint32_t func, uint64_t arg)
{
        struct arm_smccc_res res;

        smccc_hvc(func, arg, 0, 0, 0, 0, 0, 0, &res);

        return res.a0;
}

static void check_status(struct st_time *st)
......