Commit ef12ea62 authored by Paolo Bonzini's avatar Paolo Bonzini

Merge tag 'loongarch-kvm-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson into HEAD

LoongArch KVM changes for v6.7

Add LoongArch KVM support. Loongson 3A5000/3A6000 support hardware-assisted
virtualization. With CPU virtualization, guest mode provides separate
hardware-supported user and kernel modes. With memory virtualization, there
is a two-level hardware MMU table for guest mode and host mode. There is
also a separate hardware CPU timer with constant frequency in guest mode, so
that a VM can migrate between hosts running at different frequencies.
Currently, we are able to boot LoongArch Linux guests.

A few key aspects of KVM/LoongArch added by this series are:
1. Enable the KVM hardware function when the kvm module is loaded.
2. Implement VM and vcpu related ioctl interfaces such as vcpu create,
   vcpu run etc. The GET_ONE_REG/SET_ONE_REG ioctl commands are used to
   get general registers one by one.
3. Hardware accesses to the MMU, timer and CSRs are emulated in the kernel.
4. MMIO and IOCSR devices (IPIs, irqchips, PCI devices, etc.) are
   emulated in user space.
parents ffc25326 2c10cda4
@@ -416,6 +416,13 @@ Reads the general purpose registers from the vcpu.
__u64 pc;
};
/* LoongArch */
struct kvm_regs {
/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
unsigned long gpr[32];
unsigned long pc;
};
4.12 KVM_SET_REGS
-----------------
@@ -506,7 +513,7 @@ translation mode.
------------------
:Capability: basic
:Architectures: x86, ppc, mips, riscv
:Architectures: x86, ppc, mips, riscv, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_interrupt (in)
:Returns: 0 on success, negative on failure.
@@ -592,6 +599,14 @@ b) KVM_INTERRUPT_UNSET
This is an asynchronous vcpu ioctl and can be invoked from any thread.
LOONGARCH:
^^^^^^^^^^
Queues an external interrupt to be injected into the virtual CPU. A negative
interrupt number dequeues the interrupt.
This is an asynchronous vcpu ioctl and can be invoked from any thread.
4.17 KVM_DEBUG_GUEST
--------------------
@@ -737,7 +752,7 @@ signal mask.
----------------
:Capability: basic
:Architectures: x86
:Architectures: x86, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_fpu (out)
:Returns: 0 on success, -1 on error
@@ -746,7 +761,7 @@ Reads the floating point state from the vcpu.
::
/* for KVM_GET_FPU and KVM_SET_FPU */
/* x86: for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u8 fpr[8][16];
__u16 fcw;
@@ -761,12 +776,21 @@ Reads the floating point state from the vcpu.
__u32 pad2;
};
/* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u32 fcsr;
__u64 fcc;
struct kvm_fpureg {
__u64 val64[4];
	} fpr[32];
};
4.23 KVM_SET_FPU
----------------
:Capability: basic
:Architectures: x86
:Architectures: x86, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_fpu (in)
:Returns: 0 on success, -1 on error
@@ -775,7 +799,7 @@ Writes the floating point state to the vcpu.
::
/* for KVM_GET_FPU and KVM_SET_FPU */
/* x86: for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u8 fpr[8][16];
__u16 fcw;
@@ -790,6 +814,15 @@ Writes the floating point state to the vcpu.
__u32 pad2;
};
/* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u32 fcsr;
__u64 fcc;
struct kvm_fpureg {
__u64 val64[4];
	} fpr[32];
};
4.24 KVM_CREATE_IRQCHIP
-----------------------
@@ -1387,7 +1420,7 @@ documentation when it pops into existence).
-------------------
:Capability: KVM_CAP_ENABLE_CAP
:Architectures: mips, ppc, s390, x86
:Architectures: mips, ppc, s390, x86, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_enable_cap (in)
:Returns: 0 on success; -1 on error
@@ -1442,7 +1475,7 @@ for vm-wide capabilities.
---------------------
:Capability: KVM_CAP_MP_STATE
:Architectures: x86, s390, arm64, riscv
:Architectures: x86, s390, arm64, riscv, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_mp_state (out)
:Returns: 0 on success; -1 on error
@@ -1460,7 +1493,7 @@ Possible values are:
========================== ===============================================
KVM_MP_STATE_RUNNABLE the vcpu is currently running
[x86,arm64,riscv]
[x86,arm64,riscv,loongarch]
KVM_MP_STATE_UNINITIALIZED the vcpu is an application processor (AP)
which has not yet received an INIT signal [x86]
KVM_MP_STATE_INIT_RECEIVED the vcpu has received an INIT signal, and is
@@ -1516,11 +1549,14 @@ For riscv:
The only states that are valid are KVM_MP_STATE_STOPPED and
KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not.
On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
whether the vcpu is runnable.
4.39 KVM_SET_MP_STATE
---------------------
:Capability: KVM_CAP_MP_STATE
:Architectures: x86, s390, arm64, riscv
:Architectures: x86, s390, arm64, riscv, loongarch
:Type: vcpu ioctl
:Parameters: struct kvm_mp_state (in)
:Returns: 0 on success; -1 on error
@@ -1538,6 +1574,9 @@ For arm64/riscv:
The only states that are valid are KVM_MP_STATE_STOPPED and
KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not.
On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect
whether the vcpu is runnable.
4.40 KVM_SET_IDENTITY_MAP_ADDR
------------------------------
@@ -2841,6 +2880,19 @@ Following are the RISC-V D-extension registers:
0x8020 0000 0600 0020 fcsr Floating point control and status register
======================= ========= =============================================
LoongArch registers are mapped using the lower 32 bits; the upper 16 bits of
those (bits 31..16) encode the register group type.

LoongArch CSRs are used to control the guest CPU or query its status, and
they have the following id bit patterns::
0x9030 0000 0001 00 <reg:5> <sel:3> (64-bit)
LoongArch KVM control registers are used to implement KVM-specific functions,
such as setting the vcpu counter or resetting the vcpu, and they have the
following id bit patterns::
0x9030 0000 0002 <reg:16>
4.69 KVM_GET_ONE_REG
--------------------
@@ -11522,6 +11522,18 @@ F: include/kvm/arm_*
F: tools/testing/selftests/kvm/*/aarch64/
F: tools/testing/selftests/kvm/aarch64/
KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
M: Tianrui Zhao <zhaotianrui@loongson.cn>
M: Bibo Mao <maobibo@loongson.cn>
M: Huacai Chen <chenhuacai@kernel.org>
L: kvm@vger.kernel.org
L: loongarch@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
F: arch/loongarch/include/asm/kvm*
F: arch/loongarch/include/uapi/asm/kvm*
F: arch/loongarch/kvm/
KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
M: Huacai Chen <chenhuacai@kernel.org>
L: linux-mips@vger.kernel.org
@@ -3,5 +3,7 @@ obj-y += mm/
obj-y += net/
obj-y += vdso/
obj-$(CONFIG_KVM) += kvm/
# for cleaning
subdir- += boot
@@ -129,6 +129,7 @@ config LOONGARCH
select HAVE_KPROBES
select HAVE_KPROBES_ON_FTRACE
select HAVE_KRETPROBES
select HAVE_KVM
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_PCI
@@ -263,6 +264,9 @@ config AS_HAS_LASX_EXTENSION
config AS_HAS_LBT_EXTENSION
def_bool $(as-instr,movscr2gr \$a0$(comma)\$scr0)
config AS_HAS_LVZ_EXTENSION
def_bool $(as-instr,hvcl 0)
menu "Kernel type and options"
source "kernel/Kconfig.hz"
@@ -676,3 +680,5 @@ source "kernel/power/Kconfig"
source "drivers/acpi/Kconfig"
endmenu
source "arch/loongarch/kvm/Kconfig"
@@ -66,6 +66,8 @@ CONFIG_EFI_ZBOOT=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
CONFIG_EFI_CAPSULE_LOADER=m
CONFIG_EFI_TEST=m
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
@@ -65,6 +65,14 @@ enum reg2_op {
revbd_op = 0x0f,
revh2w_op = 0x10,
revhd_op = 0x11,
iocsrrdb_op = 0x19200,
iocsrrdh_op = 0x19201,
iocsrrdw_op = 0x19202,
iocsrrdd_op = 0x19203,
iocsrwrb_op = 0x19204,
iocsrwrh_op = 0x19205,
iocsrwrw_op = 0x19206,
iocsrwrd_op = 0x19207,
};
enum reg2i5_op {
@@ -318,6 +326,13 @@ struct reg2bstrd_format {
unsigned int opcode : 10;
};
struct reg2csr_format {
unsigned int rd : 5;
unsigned int rj : 5;
unsigned int csr : 14;
unsigned int opcode : 8;
};
struct reg3_format {
unsigned int rd : 5;
unsigned int rj : 5;
@@ -346,6 +361,7 @@ union loongarch_instruction {
struct reg2i14_format reg2i14_format;
struct reg2i16_format reg2i16_format;
struct reg2bstrd_format reg2bstrd_format;
struct reg2csr_format reg2csr_format;
struct reg3_format reg3_format;
struct reg3sa2_format reg3sa2_format;
};
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef __ASM_LOONGARCH_KVM_CSR_H__
#define __ASM_LOONGARCH_KVM_CSR_H__
#include <linux/uaccess.h>
#include <linux/kvm_host.h>
#include <asm/loongarch.h>
#include <asm/kvm_vcpu.h>
#define gcsr_read(csr) \
({ \
register unsigned long __v; \
__asm__ __volatile__( \
" gcsrrd %[val], %[reg]\n\t" \
: [val] "=r" (__v) \
: [reg] "i" (csr) \
: "memory"); \
__v; \
})
#define gcsr_write(v, csr) \
({ \
register unsigned long __v = v; \
__asm__ __volatile__ ( \
" gcsrwr %[val], %[reg]\n\t" \
: [val] "+r" (__v) \
: [reg] "i" (csr) \
: "memory"); \
})
#define gcsr_xchg(v, m, csr) \
({ \
register unsigned long __v = v; \
__asm__ __volatile__( \
" gcsrxchg %[val], %[mask], %[reg]\n\t" \
: [val] "+r" (__v) \
: [mask] "r" (m), [reg] "i" (csr) \
: "memory"); \
__v; \
})
/* Guest CSRS read and write */
#define read_gcsr_crmd() gcsr_read(LOONGARCH_CSR_CRMD)
#define write_gcsr_crmd(val) gcsr_write(val, LOONGARCH_CSR_CRMD)
#define read_gcsr_prmd() gcsr_read(LOONGARCH_CSR_PRMD)
#define write_gcsr_prmd(val) gcsr_write(val, LOONGARCH_CSR_PRMD)
#define read_gcsr_euen() gcsr_read(LOONGARCH_CSR_EUEN)
#define write_gcsr_euen(val) gcsr_write(val, LOONGARCH_CSR_EUEN)
#define read_gcsr_misc() gcsr_read(LOONGARCH_CSR_MISC)
#define write_gcsr_misc(val) gcsr_write(val, LOONGARCH_CSR_MISC)
#define read_gcsr_ecfg() gcsr_read(LOONGARCH_CSR_ECFG)
#define write_gcsr_ecfg(val) gcsr_write(val, LOONGARCH_CSR_ECFG)
#define read_gcsr_estat() gcsr_read(LOONGARCH_CSR_ESTAT)
#define write_gcsr_estat(val) gcsr_write(val, LOONGARCH_CSR_ESTAT)
#define read_gcsr_era() gcsr_read(LOONGARCH_CSR_ERA)
#define write_gcsr_era(val) gcsr_write(val, LOONGARCH_CSR_ERA)
#define read_gcsr_badv() gcsr_read(LOONGARCH_CSR_BADV)
#define write_gcsr_badv(val) gcsr_write(val, LOONGARCH_CSR_BADV)
#define read_gcsr_badi() gcsr_read(LOONGARCH_CSR_BADI)
#define write_gcsr_badi(val) gcsr_write(val, LOONGARCH_CSR_BADI)
#define read_gcsr_eentry() gcsr_read(LOONGARCH_CSR_EENTRY)
#define write_gcsr_eentry(val) gcsr_write(val, LOONGARCH_CSR_EENTRY)
#define read_gcsr_asid() gcsr_read(LOONGARCH_CSR_ASID)
#define write_gcsr_asid(val) gcsr_write(val, LOONGARCH_CSR_ASID)
#define read_gcsr_pgdl() gcsr_read(LOONGARCH_CSR_PGDL)
#define write_gcsr_pgdl(val) gcsr_write(val, LOONGARCH_CSR_PGDL)
#define read_gcsr_pgdh() gcsr_read(LOONGARCH_CSR_PGDH)
#define write_gcsr_pgdh(val) gcsr_write(val, LOONGARCH_CSR_PGDH)
#define write_gcsr_pgd(val) gcsr_write(val, LOONGARCH_CSR_PGD)
#define read_gcsr_pgd() gcsr_read(LOONGARCH_CSR_PGD)
#define read_gcsr_pwctl0() gcsr_read(LOONGARCH_CSR_PWCTL0)
#define write_gcsr_pwctl0(val) gcsr_write(val, LOONGARCH_CSR_PWCTL0)
#define read_gcsr_pwctl1() gcsr_read(LOONGARCH_CSR_PWCTL1)
#define write_gcsr_pwctl1(val) gcsr_write(val, LOONGARCH_CSR_PWCTL1)
#define read_gcsr_stlbpgsize() gcsr_read(LOONGARCH_CSR_STLBPGSIZE)
#define write_gcsr_stlbpgsize(val) gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE)
#define read_gcsr_rvacfg() gcsr_read(LOONGARCH_CSR_RVACFG)
#define write_gcsr_rvacfg(val) gcsr_write(val, LOONGARCH_CSR_RVACFG)
#define read_gcsr_cpuid() gcsr_read(LOONGARCH_CSR_CPUID)
#define write_gcsr_cpuid(val) gcsr_write(val, LOONGARCH_CSR_CPUID)
#define read_gcsr_prcfg1() gcsr_read(LOONGARCH_CSR_PRCFG1)
#define write_gcsr_prcfg1(val) gcsr_write(val, LOONGARCH_CSR_PRCFG1)
#define read_gcsr_prcfg2() gcsr_read(LOONGARCH_CSR_PRCFG2)
#define write_gcsr_prcfg2(val) gcsr_write(val, LOONGARCH_CSR_PRCFG2)
#define read_gcsr_prcfg3() gcsr_read(LOONGARCH_CSR_PRCFG3)
#define write_gcsr_prcfg3(val) gcsr_write(val, LOONGARCH_CSR_PRCFG3)
#define read_gcsr_kscratch0() gcsr_read(LOONGARCH_CSR_KS0)
#define write_gcsr_kscratch0(val) gcsr_write(val, LOONGARCH_CSR_KS0)
#define read_gcsr_kscratch1() gcsr_read(LOONGARCH_CSR_KS1)
#define write_gcsr_kscratch1(val) gcsr_write(val, LOONGARCH_CSR_KS1)
#define read_gcsr_kscratch2() gcsr_read(LOONGARCH_CSR_KS2)
#define write_gcsr_kscratch2(val) gcsr_write(val, LOONGARCH_CSR_KS2)
#define read_gcsr_kscratch3() gcsr_read(LOONGARCH_CSR_KS3)
#define write_gcsr_kscratch3(val) gcsr_write(val, LOONGARCH_CSR_KS3)
#define read_gcsr_kscratch4() gcsr_read(LOONGARCH_CSR_KS4)
#define write_gcsr_kscratch4(val) gcsr_write(val, LOONGARCH_CSR_KS4)
#define read_gcsr_kscratch5() gcsr_read(LOONGARCH_CSR_KS5)
#define write_gcsr_kscratch5(val) gcsr_write(val, LOONGARCH_CSR_KS5)
#define read_gcsr_kscratch6() gcsr_read(LOONGARCH_CSR_KS6)
#define write_gcsr_kscratch6(val) gcsr_write(val, LOONGARCH_CSR_KS6)
#define read_gcsr_kscratch7() gcsr_read(LOONGARCH_CSR_KS7)
#define write_gcsr_kscratch7(val) gcsr_write(val, LOONGARCH_CSR_KS7)
#define read_gcsr_timerid() gcsr_read(LOONGARCH_CSR_TMID)
#define write_gcsr_timerid(val) gcsr_write(val, LOONGARCH_CSR_TMID)
#define read_gcsr_timercfg() gcsr_read(LOONGARCH_CSR_TCFG)
#define write_gcsr_timercfg(val) gcsr_write(val, LOONGARCH_CSR_TCFG)
#define read_gcsr_timertick() gcsr_read(LOONGARCH_CSR_TVAL)
#define write_gcsr_timertick(val) gcsr_write(val, LOONGARCH_CSR_TVAL)
#define read_gcsr_timeroffset() gcsr_read(LOONGARCH_CSR_CNTC)
#define write_gcsr_timeroffset(val) gcsr_write(val, LOONGARCH_CSR_CNTC)
#define read_gcsr_llbctl() gcsr_read(LOONGARCH_CSR_LLBCTL)
#define write_gcsr_llbctl(val) gcsr_write(val, LOONGARCH_CSR_LLBCTL)
#define read_gcsr_tlbidx() gcsr_read(LOONGARCH_CSR_TLBIDX)
#define write_gcsr_tlbidx(val) gcsr_write(val, LOONGARCH_CSR_TLBIDX)
#define read_gcsr_tlbrentry() gcsr_read(LOONGARCH_CSR_TLBRENTRY)
#define write_gcsr_tlbrentry(val) gcsr_write(val, LOONGARCH_CSR_TLBRENTRY)
#define read_gcsr_tlbrbadv() gcsr_read(LOONGARCH_CSR_TLBRBADV)
#define write_gcsr_tlbrbadv(val) gcsr_write(val, LOONGARCH_CSR_TLBRBADV)
#define read_gcsr_tlbrera() gcsr_read(LOONGARCH_CSR_TLBRERA)
#define write_gcsr_tlbrera(val) gcsr_write(val, LOONGARCH_CSR_TLBRERA)
#define read_gcsr_tlbrsave() gcsr_read(LOONGARCH_CSR_TLBRSAVE)
#define write_gcsr_tlbrsave(val) gcsr_write(val, LOONGARCH_CSR_TLBRSAVE)
#define read_gcsr_tlbrelo0() gcsr_read(LOONGARCH_CSR_TLBRELO0)
#define write_gcsr_tlbrelo0(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO0)
#define read_gcsr_tlbrelo1() gcsr_read(LOONGARCH_CSR_TLBRELO1)
#define write_gcsr_tlbrelo1(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO1)
#define read_gcsr_tlbrehi() gcsr_read(LOONGARCH_CSR_TLBREHI)
#define write_gcsr_tlbrehi(val) gcsr_write(val, LOONGARCH_CSR_TLBREHI)
#define read_gcsr_tlbrprmd() gcsr_read(LOONGARCH_CSR_TLBRPRMD)
#define write_gcsr_tlbrprmd(val) gcsr_write(val, LOONGARCH_CSR_TLBRPRMD)
#define read_gcsr_directwin0() gcsr_read(LOONGARCH_CSR_DMWIN0)
#define write_gcsr_directwin0(val) gcsr_write(val, LOONGARCH_CSR_DMWIN0)
#define read_gcsr_directwin1() gcsr_read(LOONGARCH_CSR_DMWIN1)
#define write_gcsr_directwin1(val) gcsr_write(val, LOONGARCH_CSR_DMWIN1)
#define read_gcsr_directwin2() gcsr_read(LOONGARCH_CSR_DMWIN2)
#define write_gcsr_directwin2(val) gcsr_write(val, LOONGARCH_CSR_DMWIN2)
#define read_gcsr_directwin3() gcsr_read(LOONGARCH_CSR_DMWIN3)
#define write_gcsr_directwin3(val) gcsr_write(val, LOONGARCH_CSR_DMWIN3)
/* Guest related CSRs */
#define read_csr_gtlbc() csr_read64(LOONGARCH_CSR_GTLBC)
#define write_csr_gtlbc(val) csr_write64(val, LOONGARCH_CSR_GTLBC)
#define read_csr_trgp() csr_read64(LOONGARCH_CSR_TRGP)
#define read_csr_gcfg() csr_read64(LOONGARCH_CSR_GCFG)
#define write_csr_gcfg(val) csr_write64(val, LOONGARCH_CSR_GCFG)
#define read_csr_gstat() csr_read64(LOONGARCH_CSR_GSTAT)
#define write_csr_gstat(val) csr_write64(val, LOONGARCH_CSR_GSTAT)
#define read_csr_gintc() csr_read64(LOONGARCH_CSR_GINTC)
#define write_csr_gintc(val) csr_write64(val, LOONGARCH_CSR_GINTC)
#define read_csr_gcntc() csr_read64(LOONGARCH_CSR_GCNTC)
#define write_csr_gcntc(val) csr_write64(val, LOONGARCH_CSR_GCNTC)
#define __BUILD_GCSR_OP(name) __BUILD_CSR_COMMON(gcsr_##name)
__BUILD_CSR_OP(gcfg)
__BUILD_CSR_OP(gstat)
__BUILD_CSR_OP(gtlbc)
__BUILD_CSR_OP(gintc)
__BUILD_GCSR_OP(llbctl)
__BUILD_GCSR_OP(tlbidx)
#define set_gcsr_estat(val) \
gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT)
#define clear_gcsr_estat(val) \
gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT)
#define kvm_read_hw_gcsr(id) gcsr_read(id)
#define kvm_write_hw_gcsr(id, val) gcsr_write(val, id)
#define kvm_save_hw_gcsr(csr, gid) (csr->csrs[gid] = gcsr_read(gid))
#define kvm_restore_hw_gcsr(csr, gid) (gcsr_write(csr->csrs[gid], gid))
int kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu);
static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid)
{
return csr->csrs[gid];
}
static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val)
{
csr->csrs[gid] = val;
}
static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr,
int gid, unsigned long val)
{
csr->csrs[gid] |= val;
}
static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr,
int gid, unsigned long mask, unsigned long val)
{
unsigned long _mask = mask;
csr->csrs[gid] &= ~_mask;
csr->csrs[gid] |= val & _mask;
}
#endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef __ASM_LOONGARCH_KVM_HOST_H__
#define __ASM_LOONGARCH_KVM_HOST_H__
#include <linux/cpumask.h>
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/kvm.h>
#include <linux/kvm_types.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/threads.h>
#include <linux/types.h>
#include <asm/inst.h>
#include <asm/kvm_mmu.h>
#include <asm/loongarch.h>
/* LoongArch KVM register ids */
#define KVM_GET_IOC_CSR_IDX(id) ((id & KVM_CSR_IDX_MASK) >> LOONGARCH_REG_SHIFT)
#define KVM_GET_IOC_CPUCFG_IDX(id) ((id & KVM_CPUCFG_IDX_MASK) >> LOONGARCH_REG_SHIFT)
#define KVM_MAX_VCPUS 256
#define KVM_MAX_CPUCFG_REGS 21
/* memory slots that are not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 0
#define KVM_HALT_POLL_NS_DEFAULT 500000
struct kvm_vm_stat {
struct kvm_vm_stat_generic generic;
u64 pages;
u64 hugepages;
};
struct kvm_vcpu_stat {
struct kvm_vcpu_stat_generic generic;
u64 int_exits;
u64 idle_exits;
u64 cpucfg_exits;
u64 signal_exits;
};
struct kvm_arch_memory_slot {
};
struct kvm_context {
unsigned long vpid_cache;
struct kvm_vcpu *last_vcpu;
};
struct kvm_world_switch {
int (*exc_entry)(void);
int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu);
unsigned long page_order;
};
#define MAX_PGTABLE_LEVELS 4
struct kvm_arch {
/* Guest physical mm */
kvm_pte_t *pgd;
unsigned long gpa_size;
unsigned long invalid_ptes[MAX_PGTABLE_LEVELS];
unsigned int pte_shifts[MAX_PGTABLE_LEVELS];
unsigned int root_level;
s64 time_offset;
struct kvm_context __percpu *vmcs;
};
#define CSR_MAX_NUMS 0x800
struct loongarch_csrs {
unsigned long csrs[CSR_MAX_NUMS];
};
/* Resume Flags */
#define RESUME_HOST 0
#define RESUME_GUEST 1
enum emulation_result {
EMULATE_DONE, /* no further processing */
EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */
EMULATE_DO_IOCSR, /* handle IOCSR request */
EMULATE_FAIL, /* can't emulate this instruction */
EMULATE_EXCEPT, /* A guest exception has been generated */
};
#define KVM_LARCH_FPU (0x1 << 0)
#define KVM_LARCH_SWCSR_LATEST (0x1 << 1)
#define KVM_LARCH_HWCSR_USABLE (0x1 << 2)
struct kvm_vcpu_arch {
/*
* Switch pointer-to-function type to unsigned long
* for loading the value into register directly.
*/
unsigned long host_eentry;
unsigned long guest_eentry;
/* Pointers stored here for easy accessing from assembly code */
int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu);
/* Host registers preserved across guest mode execution */
unsigned long host_sp;
unsigned long host_tp;
unsigned long host_pgd;
/* Host CSRs are used when handling exits from guest */
unsigned long badi;
unsigned long badv;
unsigned long host_ecfg;
unsigned long host_estat;
unsigned long host_percpu;
/* GPRs */
unsigned long gprs[32];
unsigned long pc;
/* Which auxiliary state is loaded (KVM_LARCH_*) */
unsigned int aux_inuse;
/* FPU state */
struct loongarch_fpu fpu FPU_ALIGN;
/* CSR state */
struct loongarch_csrs *csr;
/* GPR used as IO source/target */
u32 io_gpr;
/* KVM register to control count timer */
u32 count_ctl;
struct hrtimer swtimer;
/* Bitmask of intr that are pending */
unsigned long irq_pending;
/* Bitmask of pending intr to be cleared */
unsigned long irq_clear;
/* Bitmask of exceptions that are pending */
unsigned long exception_pending;
unsigned int esubcode;
/* Cache for pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
/* vcpu's vpid */
u64 vpid;
/* Frequency of stable timer in Hz */
u64 timer_mhz;
ktime_t expire;
/* Last CPU the vCPU state was loaded on */
int last_sched_cpu;
/* mp state */
struct kvm_mp_state mp_state;
/* cpucfg */
u32 cpucfg[KVM_MAX_CPUCFG_REGS];
};
static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
{
return csr->csrs[reg];
}
static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val)
{
csr->csrs[reg] = val;
}
/* Debug: dump vcpu state */
int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
/* MMU handling */
void kvm_flush_tlb_all(void);
void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
#define KVM_ARCH_WANT_MMU_NOTIFIER
void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, bool blockable);
int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
static inline void update_pc(struct kvm_vcpu_arch *arch)
{
arch->pc += 4;
}
/*
* kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault.
* @vcpu: Virtual CPU.
*
* Returns: Whether the TLBL exception was likely due to an instruction
* fetch fault rather than a data load fault.
*/
static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
{
return arch->pc == arch->badv;
}
/* Misc */
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) {}
void kvm_check_vpid(struct kvm_vcpu *vcpu);
enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer);
void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, const struct kvm_memory_slot *memslot);
void kvm_init_vmcs(struct kvm *kvm);
void kvm_exc_entry(void);
int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern unsigned long vpid_mask;
extern const unsigned long kvm_exception_size;
extern const unsigned long kvm_enter_guest_size;
extern struct kvm_world_switch *kvm_loongarch_ops;
#define SW_GCSR (1 << 0)
#define HW_GCSR (1 << 1)
#define INVALID_GCSR (1 << 2)
int get_gcsr_flag(int csr);
void set_hw_gcsr(int csr_id, unsigned long val);
#endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef __ASM_LOONGARCH_KVM_MMU_H__
#define __ASM_LOONGARCH_KVM_MMU_H__
#include <linux/kvm_host.h>
#include <asm/pgalloc.h>
#include <asm/tlb.h>
/*
* KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels
* for which pages need to be cached.
*/
#define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1)
#define _KVM_FLUSH_PGTABLE 0x1
#define _KVM_HAS_PGMASK 0x2
#define kvm_pfn_pte(pfn, prot) (((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot))
#define kvm_pte_pfn(x) ((phys_addr_t)((x & _PFN_MASK) >> PFN_PTE_SHIFT))
typedef unsigned long kvm_pte_t;
typedef struct kvm_ptw_ctx kvm_ptw_ctx;
typedef int (*kvm_pte_ops)(kvm_pte_t *pte, phys_addr_t addr, kvm_ptw_ctx *ctx);
struct kvm_ptw_ctx {
kvm_pte_ops ops;
unsigned long flag;
/* for kvm_arch_mmu_enable_log_dirty_pt_masked use */
unsigned long mask;
unsigned long gfn;
/* page walk mmu info */
unsigned int level;
unsigned long pgtable_shift;
unsigned long invalid_entry;
unsigned long *invalid_ptes;
unsigned int *pte_shifts;
void *opaque;
/* free pte table page list */
struct list_head list;
};
kvm_pte_t *kvm_pgd_alloc(void);
static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val)
{
WRITE_ONCE(*ptep, val);
}
static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; }
static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; }
static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; }
static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; }
static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte)
{
return pte | _PAGE_ACCESSED;
}
static inline kvm_pte_t kvm_pte_mkold(kvm_pte_t pte)
{
return pte & ~_PAGE_ACCESSED;
}
static inline kvm_pte_t kvm_pte_mkdirty(kvm_pte_t pte)
{
return pte | _PAGE_DIRTY;
}
static inline kvm_pte_t kvm_pte_mkclean(kvm_pte_t pte)
{
return pte & ~_PAGE_DIRTY;
}
static inline kvm_pte_t kvm_pte_mkhuge(kvm_pte_t pte)
{
return pte | _PAGE_HUGE;
}
static inline kvm_pte_t kvm_pte_mksmall(kvm_pte_t pte)
{
return pte & ~_PAGE_HUGE;
}
static inline int kvm_need_flush(kvm_ptw_ctx *ctx)
{
return ctx->flag & _KVM_FLUSH_PGTABLE;
}
static inline kvm_pte_t *kvm_pgtable_offset(kvm_ptw_ctx *ctx, kvm_pte_t *table,
phys_addr_t addr)
{
return table + ((addr >> ctx->pgtable_shift) & (PTRS_PER_PTE - 1));
}
static inline phys_addr_t kvm_pgtable_addr_end(kvm_ptw_ctx *ctx,
phys_addr_t addr, phys_addr_t end)
{
phys_addr_t boundary, size;
size = 0x1UL << ctx->pgtable_shift;
boundary = (addr + size) & ~(size - 1);
return (boundary - 1 < end - 1) ? boundary : end;
}
static inline int kvm_pte_present(kvm_ptw_ctx *ctx, kvm_pte_t *entry)
{
if (!ctx || ctx->level == 0)
return !!(*entry & _PAGE_PRESENT);
return *entry != ctx->invalid_entry;
}
static inline int kvm_pte_none(kvm_ptw_ctx *ctx, kvm_pte_t *entry)
{
return *entry == ctx->invalid_entry;
}
static inline void kvm_ptw_enter(kvm_ptw_ctx *ctx)
{
ctx->level--;
ctx->pgtable_shift = ctx->pte_shifts[ctx->level];
ctx->invalid_entry = ctx->invalid_ptes[ctx->level];
}
static inline void kvm_ptw_exit(kvm_ptw_ctx *ctx)
{
ctx->level++;
ctx->pgtable_shift = ctx->pte_shifts[ctx->level];
ctx->invalid_entry = ctx->invalid_ptes[ctx->level];
}
#endif /* __ASM_LOONGARCH_KVM_MMU_H__ */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef _ASM_LOONGARCH_KVM_TYPES_H
#define _ASM_LOONGARCH_KVM_TYPES_H
#define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40
#endif /* _ASM_LOONGARCH_KVM_TYPES_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef __ASM_LOONGARCH_KVM_VCPU_H__
#define __ASM_LOONGARCH_KVM_VCPU_H__
#include <linux/kvm_host.h>
#include <asm/loongarch.h>
/* Controlled by 0x5 guest estat */
#define CPU_SIP0 (_ULCAST_(1))
#define CPU_SIP1 (_ULCAST_(1) << 1)
#define CPU_PMU (_ULCAST_(1) << 10)
#define CPU_TIMER (_ULCAST_(1) << 11)
#define CPU_IPI (_ULCAST_(1) << 12)
/* Controlled by 0x52 guest exception VIP aligned to estat bit 5~12 */
#define CPU_IP0 (_ULCAST_(1))
#define CPU_IP1 (_ULCAST_(1) << 1)
#define CPU_IP2 (_ULCAST_(1) << 2)
#define CPU_IP3 (_ULCAST_(1) << 3)
#define CPU_IP4 (_ULCAST_(1) << 4)
#define CPU_IP5 (_ULCAST_(1) << 5)
#define CPU_IP6 (_ULCAST_(1) << 6)
#define CPU_IP7 (_ULCAST_(1) << 7)
#define MNSEC_PER_SEC (NSEC_PER_SEC >> 20)
/* KVM_IRQ_LINE irq field index values */
#define KVM_LOONGSON_IRQ_TYPE_SHIFT 24
#define KVM_LOONGSON_IRQ_TYPE_MASK 0xff
#define KVM_LOONGSON_IRQ_VCPU_SHIFT 16
#define KVM_LOONGSON_IRQ_VCPU_MASK 0xff
#define KVM_LOONGSON_IRQ_NUM_SHIFT 0
#define KVM_LOONGSON_IRQ_NUM_MASK 0xffff
typedef union loongarch_instruction larch_inst;
typedef int (*exit_handle_fn)(struct kvm_vcpu *);
int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_emu_idle(struct kvm_vcpu *vcpu);
int kvm_pending_timer(struct kvm_vcpu *vcpu);
int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
void kvm_deliver_intr(struct kvm_vcpu *vcpu);
void kvm_deliver_exception(struct kvm_vcpu *vcpu);
void kvm_own_fpu(struct kvm_vcpu *vcpu);
void kvm_lose_fpu(struct kvm_vcpu *vcpu);
void kvm_save_fpu(struct loongarch_fpu *fpu);
void kvm_restore_fpu(struct loongarch_fpu *fpu);
void kvm_restore_fcsr(struct loongarch_fpu *fpu);
void kvm_acquire_timer(struct kvm_vcpu *vcpu);
void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz);
void kvm_reset_timer(struct kvm_vcpu *vcpu);
void kvm_save_timer(struct kvm_vcpu *vcpu);
void kvm_restore_timer(struct kvm_vcpu *vcpu);
int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq);
/*
 * LoongArch KVM guest interrupt handling
*/
static inline void kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
{
set_bit(irq, &vcpu->arch.irq_pending);
clear_bit(irq, &vcpu->arch.irq_clear);
}
static inline void kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq)
{
clear_bit(irq, &vcpu->arch.irq_pending);
set_bit(irq, &vcpu->arch.irq_clear);
}
static inline int kvm_queue_exception(struct kvm_vcpu *vcpu,
unsigned int code, unsigned int subcode)
{
/* only one exception can be injected */
if (!vcpu->arch.exception_pending) {
set_bit(code, &vcpu->arch.exception_pending);
vcpu->arch.esubcode = subcode;
return 0;
} else
return -1;
}
#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
@@ -226,6 +226,7 @@
#define LOONGARCH_CSR_ECFG 0x4 /* Exception config */
#define CSR_ECFG_VS_SHIFT 16
#define CSR_ECFG_VS_WIDTH 3
#define CSR_ECFG_VS_SHIFT_END (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1)
#define CSR_ECFG_VS (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT)
#define CSR_ECFG_IM_SHIFT 0
#define CSR_ECFG_IM_WIDTH 14
@@ -314,13 +315,14 @@
#define CSR_TLBLO1_V (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT)
#define LOONGARCH_CSR_GTLBC 0x15 /* Guest TLB control */
#define CSR_GTLBC_RID_SHIFT 16
#define CSR_GTLBC_RID_WIDTH 8
#define CSR_GTLBC_RID (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT)
#define CSR_GTLBC_TGID_SHIFT 16
#define CSR_GTLBC_TGID_WIDTH 8
#define CSR_GTLBC_TGID_SHIFT_END (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1)
#define CSR_GTLBC_TGID (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT)
#define CSR_GTLBC_TOTI_SHIFT 13
#define CSR_GTLBC_TOTI (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT)
#define CSR_GTLBC_USERID_SHIFT 12
#define CSR_GTLBC_USERID (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT)
#define CSR_GTLBC_USETGID_SHIFT 12
#define CSR_GTLBC_USETGID (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT)
#define CSR_GTLBC_GMTLBSZ_SHIFT 0
#define CSR_GTLBC_GMTLBSZ_WIDTH 6
#define CSR_GTLBC_GMTLBSZ (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT)
@@ -475,6 +477,7 @@
#define LOONGARCH_CSR_GSTAT 0x50 /* Guest status */
#define CSR_GSTAT_GID_SHIFT 16
#define CSR_GSTAT_GID_WIDTH 8
#define CSR_GSTAT_GID_SHIFT_END (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1)
#define CSR_GSTAT_GID (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT)
#define CSR_GSTAT_GIDBIT_SHIFT 4
#define CSR_GSTAT_GIDBIT_WIDTH 6
@@ -525,6 +528,12 @@
#define CSR_GCFG_MATC_GUEST (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF)
#define CSR_GCFG_MATC_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF)
#define CSR_GCFG_MATC_NEST (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF)
#define CSR_GCFG_MATP_NEST_SHIFT 2
#define CSR_GCFG_MATP_NEST (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT)
#define CSR_GCFG_MATP_ROOT_SHIFT 1
#define CSR_GCFG_MATP_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT)
#define CSR_GCFG_MATP_GUEST_SHIFT 0
#define CSR_GCFG_MATP_GUEST (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT)
#define LOONGARCH_CSR_GINTC 0x52 /* Guest interrupt control */
#define CSR_GINTC_HC_SHIFT 16
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#ifndef __UAPI_ASM_LOONGARCH_KVM_H
#define __UAPI_ASM_LOONGARCH_KVM_H
#include <linux/types.h>
/*
* KVM LoongArch specific structures and definitions.
*
* Some parts derived from the x86 version of this file.
*/
#define __KVM_HAVE_READONLY_MEM
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_DIRTY_LOG_PAGE_OFFSET 64
/*
* for KVM_GET_REGS and KVM_SET_REGS
*/
struct kvm_regs {
/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
__u64 gpr[32];
__u64 pc;
};
/*
* for KVM_GET_FPU and KVM_SET_FPU
*/
struct kvm_fpu {
__u32 fcsr;
__u64 fcc; /* 8x8 */
struct kvm_fpureg {
__u64 val64[4];
} fpr[32];
};
/*
* For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
* registers. The id field is broken down as follows:
*
* bits[63..52] - As per linux/kvm.h
* bits[51..32] - Must be zero.
* bits[31..16] - Register set.
*
* Register set = 0: GP registers from kvm_regs (see definitions below).
*
* Register set = 1: CSR registers.
*
* Register set = 2: KVM specific registers (see definitions below).
*
* Register set = 3: FPU / SIMD registers (see definitions below).
*
* Other register sets may be added in the future. Each set would
* have its own identifier in bits[31..16].
*/
#define KVM_REG_LOONGARCH_GPR (KVM_REG_LOONGARCH | 0x00000ULL)
#define KVM_REG_LOONGARCH_CSR (KVM_REG_LOONGARCH | 0x10000ULL)
#define KVM_REG_LOONGARCH_KVM (KVM_REG_LOONGARCH | 0x20000ULL)
#define KVM_REG_LOONGARCH_FPSIMD (KVM_REG_LOONGARCH | 0x30000ULL)
#define KVM_REG_LOONGARCH_CPUCFG (KVM_REG_LOONGARCH | 0x40000ULL)
#define KVM_REG_LOONGARCH_MASK (KVM_REG_LOONGARCH | 0x70000ULL)
#define KVM_CSR_IDX_MASK 0x7fff
#define KVM_CPUCFG_IDX_MASK 0x7fff
/*
* KVM_REG_LOONGARCH_KVM - KVM specific control registers.
*/
#define KVM_REG_LOONGARCH_COUNTER (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 1)
#define KVM_REG_LOONGARCH_VCPU_RESET (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 2)
#define LOONGARCH_REG_SHIFT 3
#define LOONGARCH_REG_64(TYPE, REG) (TYPE | KVM_REG_SIZE_U64 | (REG << LOONGARCH_REG_SHIFT))
#define KVM_IOC_CSRID(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CSR, REG)
#define KVM_IOC_CPUCFG(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CPUCFG, REG)
struct kvm_debug_exit_arch {
};
/* for KVM_SET_GUEST_DEBUG */
struct kvm_guest_debug_arch {
};
/* definition of registers in kvm_run */
struct kvm_sync_regs {
};
/* dummy definition */
struct kvm_sregs {
};
struct kvm_iocsr_entry {
__u32 addr;
__u32 pad;
__u64 data;
};
#define KVM_NR_IRQCHIPS 1
#define KVM_IRQCHIP_NUM_PINS 64
#define KVM_MAX_CORES 256
#endif /* __UAPI_ASM_LOONGARCH_KVM_H */
@@ -9,6 +9,7 @@
#include <linux/mm.h>
#include <linux/kbuild.h>
#include <linux/suspend.h>
#include <linux/kvm_host.h>
#include <asm/cpu-info.h>
#include <asm/ptrace.h>
#include <asm/processor.h>
@@ -289,3 +290,34 @@ void output_fgraph_ret_regs_defines(void)
BLANK();
}
#endif
void output_kvm_defines(void)
{
COMMENT("KVM/LoongArch Specific offsets.");
OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc);
OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr);
BLANK();
OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch);
OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm);
OFFSET(KVM_VCPU_RUN, kvm_vcpu, run);
BLANK();
OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp);
OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp);
OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd);
OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit);
OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry);
OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry);
OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc);
OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs);
OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi);
OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv);
OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg);
OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat);
OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu);
OFFSET(KVM_GPGD, kvm, arch.pgd);
BLANK();
}
# SPDX-License-Identifier: GPL-2.0
#
# KVM configuration
#
source "virt/kvm/Kconfig"
menuconfig VIRTUALIZATION
bool "Virtualization"
help
Say Y here to get to see options for using your Linux host to run
other operating systems inside virtual machines (guests).
This option alone does not add any kernel code.
If you say N, all options in this submenu will be skipped and
disabled.
if VIRTUALIZATION
config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
depends on AS_HAS_LVZ_EXTENSION
depends on HAVE_KVM
select HAVE_KVM_DIRTY_RING_ACQ_REL
select HAVE_KVM_EVENTFD
select HAVE_KVM_VCPU_ASYNC_IOCTL
select KVM_GENERIC_DIRTYLOG_READ_PROTECT
select KVM_GENERIC_HARDWARE_ENABLING
select KVM_MMIO
select KVM_XFER_TO_GUEST_WORK
select MMU_NOTIFIER
select PREEMPT_NOTIFIERS
help
Support hosting virtualized guest machines using
hardware virtualization extensions. You will need
a processor equipped with virtualization extensions.
If unsure, say N.
endif # VIRTUALIZATION
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for LoongArch KVM support
#
ccflags-y += -I $(srctree)/$(src)
include $(srctree)/virt/kvm/Makefile.kvm
obj-$(CONFIG_KVM) += kvm.o
kvm-y += exit.o
kvm-y += interrupt.o
kvm-y += main.o
kvm-y += mmu.o
kvm-y += switch.o
kvm-y += timer.o
kvm-y += tlb.o
kvm-y += vcpu.o
kvm-y += vm.o
CFLAGS_exit.o += $(call cc-option,-Wno-override-init,)
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#include <linux/err.h>
#include <linux/errno.h>
#include <asm/kvm_csr.h>
#include <asm/kvm_vcpu.h>
static unsigned int priority_to_irq[EXCCODE_INT_NUM] = {
[INT_TI] = CPU_TIMER,
[INT_IPI] = CPU_IPI,
[INT_SWI0] = CPU_SIP0,
[INT_SWI1] = CPU_SIP1,
[INT_HWI0] = CPU_IP0,
[INT_HWI1] = CPU_IP1,
[INT_HWI2] = CPU_IP2,
[INT_HWI3] = CPU_IP3,
[INT_HWI4] = CPU_IP4,
[INT_HWI5] = CPU_IP5,
[INT_HWI6] = CPU_IP6,
[INT_HWI7] = CPU_IP7,
};
static int kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
{
unsigned int irq = 0;
clear_bit(priority, &vcpu->arch.irq_pending);
if (priority < EXCCODE_INT_NUM)
irq = priority_to_irq[priority];
switch (priority) {
case INT_TI:
case INT_IPI:
case INT_SWI0:
case INT_SWI1:
set_gcsr_estat(irq);
break;
case INT_HWI0 ... INT_HWI7:
set_csr_gintc(irq);
break;
default:
break;
}
return 1;
}
static int kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority)
{
unsigned int irq = 0;
clear_bit(priority, &vcpu->arch.irq_clear);
if (priority < EXCCODE_INT_NUM)
irq = priority_to_irq[priority];
switch (priority) {
case INT_TI:
case INT_IPI:
case INT_SWI0:
case INT_SWI1:
clear_gcsr_estat(irq);
break;
case INT_HWI0 ... INT_HWI7:
clear_csr_gintc(irq);
break;
default:
break;
}
return 1;
}
void kvm_deliver_intr(struct kvm_vcpu *vcpu)
{
unsigned int priority;
unsigned long *pending = &vcpu->arch.irq_pending;
unsigned long *pending_clr = &vcpu->arch.irq_clear;
if (!(*pending) && !(*pending_clr))
return;
if (*pending_clr) {
priority = __ffs(*pending_clr);
while (priority <= INT_IPI) {
kvm_irq_clear(vcpu, priority);
priority = find_next_bit(pending_clr,
BITS_PER_BYTE * sizeof(*pending_clr),
priority + 1);
}
}
if (*pending) {
priority = __ffs(*pending);
while (priority <= INT_IPI) {
kvm_irq_deliver(vcpu, priority);
priority = find_next_bit(pending,
BITS_PER_BYTE * sizeof(*pending),
priority + 1);
}
}
}
int kvm_pending_timer(struct kvm_vcpu *vcpu)
{
return test_bit(INT_TI, &vcpu->arch.irq_pending);
}
/*
* Only the illegal instruction and address error exceptions are injected
* here; other exceptions are injected by hardware in KVM mode
*/
static void _kvm_deliver_exception(struct kvm_vcpu *vcpu,
unsigned int code, unsigned int subcode)
{
unsigned long val, vec_size;
/*
* BADV is set for the EXCCODE_ADE exception:
* use the PC register (GVA) for an instruction fetch exception,
* otherwise use BADV from the host side (GPA) for a data exception
*/
if (code == EXCCODE_ADE) {
if (subcode == EXSUBCODE_ADEF)
val = vcpu->arch.pc;
else
val = vcpu->arch.badv;
kvm_write_hw_gcsr(LOONGARCH_CSR_BADV, val);
}
/* Set exception instruction */
kvm_write_hw_gcsr(LOONGARCH_CSR_BADI, vcpu->arch.badi);
/*
* Save CRMD in PRMD
* Set IRQ disabled and PLV0 with CRMD
*/
val = kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD);
kvm_write_hw_gcsr(LOONGARCH_CSR_PRMD, val);
val = val & ~(CSR_CRMD_PLV | CSR_CRMD_IE);
kvm_write_hw_gcsr(LOONGARCH_CSR_CRMD, val);
/* Set exception PC address */
kvm_write_hw_gcsr(LOONGARCH_CSR_ERA, vcpu->arch.pc);
/*
* Set exception code
* An exception and an interrupt can be injected at the same time;
* hardware handles the exception first and then the external interrupt.
* The exception code is Ecode in ESTAT[21:16],
* the interrupt status bits are in ESTAT[12:0]
*/
val = kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT);
val = (val & ~CSR_ESTAT_EXC) | code;
kvm_write_hw_gcsr(LOONGARCH_CSR_ESTAT, val);
/* Calculate the exception entry address */
val = kvm_read_hw_gcsr(LOONGARCH_CSR_ECFG);
vec_size = (val & CSR_ECFG_VS) >> CSR_ECFG_VS_SHIFT;
if (vec_size)
vec_size = (1 << vec_size) * 4;
val = kvm_read_hw_gcsr(LOONGARCH_CSR_EENTRY);
vcpu->arch.pc = val + code * vec_size;
}
void kvm_deliver_exception(struct kvm_vcpu *vcpu)
{
unsigned int code;
unsigned long *pending = &vcpu->arch.exception_pending;
if (*pending) {
code = __ffs(*pending);
_kvm_deliver_exception(vcpu, code, vcpu->arch.esubcode);
*pending = 0;
vcpu->arch.esubcode = 0;
}
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#include <linux/linkage.h>
#include <asm/asm.h>
#include <asm/asmmacro.h>
#include <asm/loongarch.h>
#include <asm/regdef.h>
#include <asm/stackframe.h>
#define HGPR_OFFSET(x) (PT_R0 + 8*x)
#define GGPR_OFFSET(x) (KVM_ARCH_GGPR + 8*x)
.macro kvm_save_host_gpr base
.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
st.d $r\n, \base, HGPR_OFFSET(\n)
.endr
.endm
.macro kvm_restore_host_gpr base
.irp n,1,2,3,22,23,24,25,26,27,28,29,30,31
ld.d $r\n, \base, HGPR_OFFSET(\n)
.endr
.endm
/*
* Save and restore all GPRs except the base register,
* whose default value is a2.
*/
.macro kvm_save_guest_gprs base
.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
st.d $r\n, \base, GGPR_OFFSET(\n)
.endr
.endm
.macro kvm_restore_guest_gprs base
.irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
ld.d $r\n, \base, GGPR_OFFSET(\n)
.endr
.endm
/*
* Prepare switch to guest, save host regs and restore guest regs.
* a2: kvm_vcpu_arch, don't touch it until 'ertn'
* t0, t1: temp register
*/
.macro kvm_switch_to_guest
/* Set host ECFG.VS=0, all exceptions share one exception entry */
csrrd t0, LOONGARCH_CSR_ECFG
bstrins.w t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT
csrwr t0, LOONGARCH_CSR_ECFG
/* Load up the new EENTRY */
ld.d t0, a2, KVM_ARCH_GEENTRY
csrwr t0, LOONGARCH_CSR_EENTRY
/* Set Guest ERA */
ld.d t0, a2, KVM_ARCH_GPC
csrwr t0, LOONGARCH_CSR_ERA
/* Save host PGDL */
csrrd t0, LOONGARCH_CSR_PGDL
st.d t0, a2, KVM_ARCH_HPGD
/* Switch to kvm */
ld.d t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH
/* Load guest PGDL */
li.w t0, KVM_GPGD
ldx.d t0, t1, t0
csrwr t0, LOONGARCH_CSR_PGDL
/* Mix GID and RID */
csrrd t1, LOONGARCH_CSR_GSTAT
bstrpick.w t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT
csrrd t0, LOONGARCH_CSR_GTLBC
bstrins.w t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
csrwr t0, LOONGARCH_CSR_GTLBC
/*
* Enable interrupts in root mode for the coming ertn so that host
* interrupts can be serviced while the VM runs.
* The guest CRMD comes from the separate GCSR_CRMD register
*/
ori t0, zero, CSR_PRMD_PIE
csrxchg t0, t0, LOONGARCH_CSR_PRMD
/* Set PVM bit to setup ertn to guest context */
ori t0, zero, CSR_GSTAT_PVM
csrxchg t0, t0, LOONGARCH_CSR_GSTAT
/* Load Guest GPRs */
kvm_restore_guest_gprs a2
/* Load KVM_ARCH register */
ld.d a2, a2, (KVM_ARCH_GGPR + 8 * REG_A2)
ertn /* Switch to guest: GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0 */
.endm
/*
* Exception entry for a general exception from guest mode
* - IRQ is disabled
* - kernel privilege in root mode
* - page mode kept unchanged from the previous PRMD in root mode
* - Fixme: a TLB exception must not happen here, since TLB-related
*   registers (pgd table, vmid registers etc.) are still in guest mode;
*   this will be fixed once hardware page walk is enabled
* Load kvm_vcpu from the reserved CSR KVM_VCPU_KS, and save a2 to KVM_TEMP_KS
*/
.text
.cfi_sections .debug_frame
SYM_CODE_START(kvm_exc_entry)
csrwr a2, KVM_TEMP_KS
csrrd a2, KVM_VCPU_KS
addi.d a2, a2, KVM_VCPU_ARCH
/* After the guest GPRs are saved, any GPR is free to use */
kvm_save_guest_gprs a2
/* Save guest A2 */
csrrd t0, KVM_TEMP_KS
st.d t0, a2, (KVM_ARCH_GGPR + 8 * REG_A2)
/* A2 is kvm_vcpu_arch, A1 is free to use */
csrrd s1, KVM_VCPU_KS
ld.d s0, s1, KVM_VCPU_RUN
csrrd t0, LOONGARCH_CSR_ESTAT
st.d t0, a2, KVM_ARCH_HESTAT
csrrd t0, LOONGARCH_CSR_ERA
st.d t0, a2, KVM_ARCH_GPC
csrrd t0, LOONGARCH_CSR_BADV
st.d t0, a2, KVM_ARCH_HBADV
csrrd t0, LOONGARCH_CSR_BADI
st.d t0, a2, KVM_ARCH_HBADI
/* Restore host ECFG.VS */
csrrd t0, LOONGARCH_CSR_ECFG
ld.d t1, a2, KVM_ARCH_HECFG
or t0, t0, t1
csrwr t0, LOONGARCH_CSR_ECFG
/* Restore host EENTRY */
ld.d t0, a2, KVM_ARCH_HEENTRY
csrwr t0, LOONGARCH_CSR_EENTRY
/* Restore host pgd table */
ld.d t0, a2, KVM_ARCH_HPGD
csrwr t0, LOONGARCH_CSR_PGDL
/*
* Disable PGM bit to enter root mode by default with next ertn
*/
ori t0, zero, CSR_GSTAT_PVM
csrxchg zero, t0, LOONGARCH_CSR_GSTAT
/*
* Clear GTLBC.TGID field
* 0: for root tlb update in future tlb instr
* others: for guest tlb update like gpa to hpa in future tlb instr
*/
csrrd t0, LOONGARCH_CSR_GTLBC
bstrins.w t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT
csrwr t0, LOONGARCH_CSR_GTLBC
ld.d tp, a2, KVM_ARCH_HTP
ld.d sp, a2, KVM_ARCH_HSP
/* restore per cpu register */
ld.d u0, a2, KVM_ARCH_HPERCPU
addi.d sp, sp, -PT_SIZE
/* Prepare arguments and call the exit handler */
or a0, s0, zero
or a1, s1, zero
ld.d t8, a2, KVM_ARCH_HANDLE_EXIT
jirl ra, t8, 0
or a2, s1, zero
addi.d a2, a2, KVM_VCPU_ARCH
/* Resume host when ret <= 0 */
blez a0, ret_to_host
/*
* Return to guest
* Save the per-CPU register again since we may have switched CPUs
*/
st.d u0, a2, KVM_ARCH_HPERCPU
/* Save kvm_vcpu to kscratch */
csrwr s1, KVM_VCPU_KS
kvm_switch_to_guest
ret_to_host:
ld.d a2, a2, KVM_ARCH_HSP
addi.d a2, a2, -PT_SIZE
kvm_restore_host_gpr a2
jr ra
SYM_INNER_LABEL(kvm_exc_entry_end, SYM_L_LOCAL)
SYM_CODE_END(kvm_exc_entry)
/*
* int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu)
*
* @register_param:
* a0: kvm_run* run
* a1: kvm_vcpu* vcpu
*/
SYM_FUNC_START(kvm_enter_guest)
/* Allocate space at the bottom of the stack */
addi.d a2, sp, -PT_SIZE
/* Save host GPRs */
kvm_save_host_gpr a2
/* Save host CRMD, PRMD to stack */
csrrd a3, LOONGARCH_CSR_CRMD
st.d a3, a2, PT_CRMD
csrrd a3, LOONGARCH_CSR_PRMD
st.d a3, a2, PT_PRMD
addi.d a2, a1, KVM_VCPU_ARCH
st.d sp, a2, KVM_ARCH_HSP
st.d tp, a2, KVM_ARCH_HTP
/* Save per cpu register */
st.d u0, a2, KVM_ARCH_HPERCPU
/* Save kvm_vcpu to kscratch */
csrwr a1, KVM_VCPU_KS
kvm_switch_to_guest
SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL)
SYM_FUNC_END(kvm_enter_guest)
SYM_FUNC_START(kvm_save_fpu)
fpu_save_csr a0 t1
fpu_save_double a0 t1
fpu_save_cc a0 t1 t2
jr ra
SYM_FUNC_END(kvm_save_fpu)
SYM_FUNC_START(kvm_restore_fpu)
fpu_restore_double a0 t1
fpu_restore_csr a0 t1 t2
fpu_restore_cc a0 t1 t2
jr ra
SYM_FUNC_END(kvm_restore_fpu)
.section ".rodata"
SYM_DATA(kvm_exception_size, .quad kvm_exc_entry_end - kvm_exc_entry)
SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#include <linux/kvm_host.h>
#include <asm/kvm_csr.h>
#include <asm/kvm_vcpu.h>
/*
* ktime_to_tick() - Scale ktime_t to timer tick value.
*/
static inline u64 ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now)
{
u64 delta;
delta = ktime_to_ns(now);
return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC);
}
static inline u64 tick_to_ns(struct kvm_vcpu *vcpu, u64 tick)
{
return div_u64(tick * MNSEC_PER_SEC, vcpu->arch.timer_mhz);
}
/*
* Push timer forward on timeout.
* Handle an hrtimer event by pushing the hrtimer forward one period.
*/
static enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu)
{
unsigned long cfg, period;
/* Add periodic tick to current expire time */
cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG);
if (cfg & CSR_TCFG_PERIOD) {
period = tick_to_ns(vcpu, cfg & CSR_TCFG_VAL);
hrtimer_add_expires_ns(&vcpu->arch.swtimer, period);
return HRTIMER_RESTART;
} else
return HRTIMER_NORESTART;
}
/* Low level hrtimer wake routine */
enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer)
{
struct kvm_vcpu *vcpu;
vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer);
kvm_queue_irq(vcpu, INT_TI);
rcuwait_wake_up(&vcpu->wait);
return kvm_count_timeout(vcpu);
}
/*
* Initialise the timer to the specified frequency and zero it
*/
void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz)
{
vcpu->arch.timer_mhz = timer_hz >> 20;
/* Starting at 0 */
kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TVAL, 0);
}
/*
* Restore hard timer state and enable guest to access timer registers
* without trap, should be called with irq disabled
*/
void kvm_acquire_timer(struct kvm_vcpu *vcpu)
{
unsigned long cfg;
cfg = read_csr_gcfg();
if (!(cfg & CSR_GCFG_TIT))
return;
/* Enable guest access to hard timer */
write_csr_gcfg(cfg & ~CSR_GCFG_TIT);
/*
* Freeze the soft-timer and sync the guest stable timer with it. We do
* this with interrupts disabled to avoid latency.
*/
hrtimer_cancel(&vcpu->arch.swtimer);
}
/*
* Restore soft timer state from saved context.
*/
void kvm_restore_timer(struct kvm_vcpu *vcpu)
{
unsigned long cfg, delta, period;
ktime_t expire, now;
struct loongarch_csrs *csr = vcpu->arch.csr;
/*
* Set guest stable timer cfg csr
*/
cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG);
kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
if (!(cfg & CSR_TCFG_EN)) {
/* Guest timer is disabled, just restore timer registers */
kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
return;
}
/*
* Set remainder tick value if not expired
*/
now = ktime_get();
expire = vcpu->arch.expire;
if (ktime_before(now, expire))
delta = ktime_to_tick(vcpu, ktime_sub(expire, now));
else {
if (cfg & CSR_TCFG_PERIOD) {
period = cfg & CSR_TCFG_VAL;
delta = ktime_to_tick(vcpu, ktime_sub(now, expire));
delta = period - (delta % period);
} else
delta = 0;
/*
* Inject the timer interrupt here even though the soft timer
* should already have injected it asynchronously, since the soft
* timer may be cancelled while doing so in kvm_acquire_timer()
*/
kvm_queue_irq(vcpu, INT_TI);
}
write_gcsr_timertick(delta);
}
/*
* Save guest timer state and switch to software emulation of guest
* timer. The hard timer must already be in use, so preemption should be
* disabled.
*/
static void _kvm_save_timer(struct kvm_vcpu *vcpu)
{
unsigned long ticks, delta;
ktime_t expire;
struct loongarch_csrs *csr = vcpu->arch.csr;
ticks = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL);
delta = tick_to_ns(vcpu, ticks);
expire = ktime_add_ns(ktime_get(), delta);
vcpu->arch.expire = expire;
if (ticks) {
/*
* Update hrtimer to use new timeout
* HRTIMER_MODE_PINNED is suggested since the vCPU is likely to
* run on the same physical CPU next time
*/
hrtimer_cancel(&vcpu->arch.swtimer);
hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED);
} else
/*
* Inject a timer interrupt so that halt polling can detect it and exit
*/
kvm_queue_irq(vcpu, INT_TI);
}
/*
* Save guest timer state and switch to soft guest timer if hard timer was in
* use.
*/
void kvm_save_timer(struct kvm_vcpu *vcpu)
{
unsigned long cfg;
struct loongarch_csrs *csr = vcpu->arch.csr;
preempt_disable();
cfg = read_csr_gcfg();
if (!(cfg & CSR_GCFG_TIT)) {
/* Disable guest use of hard timer */
write_csr_gcfg(cfg | CSR_GCFG_TIT);
/* Save hard timer state */
kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG);
kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL);
if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN)
_kvm_save_timer(vcpu);
}
/* Save timer-related state to vCPU context */
kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
preempt_enable();
}
void kvm_reset_timer(struct kvm_vcpu *vcpu)
{
write_gcsr_timercfg(0);
kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0);
hrtimer_cancel(&vcpu->arch.swtimer);
}
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#include <linux/kvm_host.h>
#include <asm/tlb.h>
#include <asm/kvm_csr.h>
/*
* kvm_flush_tlb_all() - Flush all root TLB entries for guests.
*
* Invalidate all entries including GVA-->GPA and GPA-->HPA mappings.
*/
void kvm_flush_tlb_all(void)
{
unsigned long flags;
local_irq_save(flags);
invtlb_all(INVTLB_ALLGID, 0, 0);
local_irq_restore(flags);
}
void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa)
{
unsigned long flags;
local_irq_save(flags);
gpa &= (PAGE_MASK << 1);
invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa);
local_irq_restore(flags);
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2023 Loongson Technology Corporation Limited
*/
#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_KVM_H
#include <linux/tracepoint.h>
#include <asm/kvm_csr.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
/*
* Tracepoints for VM enters
*/
DECLARE_EVENT_CLASS(kvm_transition,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu),
TP_STRUCT__entry(
__field(unsigned long, pc)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
),
TP_printk("PC: 0x%08lx", __entry->pc)
);
DEFINE_EVENT(kvm_transition, kvm_enter,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
DEFINE_EVENT(kvm_transition, kvm_reenter,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
DEFINE_EVENT(kvm_transition, kvm_out,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu));
/* Further exit reasons */
#define KVM_TRACE_EXIT_IDLE 64
#define KVM_TRACE_EXIT_CACHE 65
/* Tracepoints for VM exits */
#define kvm_trace_symbol_exit_types \
{ KVM_TRACE_EXIT_IDLE, "IDLE" }, \
{ KVM_TRACE_EXIT_CACHE, "CACHE" }
DECLARE_EVENT_CLASS(kvm_exit,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason),
TP_STRUCT__entry(
__field(unsigned long, pc)
__field(unsigned int, reason)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
__entry->reason = reason;
),
TP_printk("[%s]PC: 0x%08lx",
__print_symbolic(__entry->reason,
kvm_trace_symbol_exit_types),
__entry->pc)
);
DEFINE_EVENT(kvm_exit, kvm_exit_idle,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason));
DEFINE_EVENT(kvm_exit, kvm_exit_cache,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason));
DEFINE_EVENT(kvm_exit, kvm_exit,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason),
TP_ARGS(vcpu, reason));
TRACE_EVENT(kvm_exit_gspr,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word),
TP_ARGS(vcpu, inst_word),
TP_STRUCT__entry(
__field(unsigned int, inst_word)
),
TP_fast_assign(
__entry->inst_word = inst_word;
),
TP_printk("Inst word: 0x%08x", __entry->inst_word)
);
#define KVM_TRACE_AUX_SAVE 0
#define KVM_TRACE_AUX_RESTORE 1
#define KVM_TRACE_AUX_ENABLE 2
#define KVM_TRACE_AUX_DISABLE 3
#define KVM_TRACE_AUX_DISCARD 4
#define KVM_TRACE_AUX_FPU 1
#define kvm_trace_symbol_aux_op \
{ KVM_TRACE_AUX_SAVE, "save" }, \
{ KVM_TRACE_AUX_RESTORE, "restore" }, \
{ KVM_TRACE_AUX_ENABLE, "enable" }, \
{ KVM_TRACE_AUX_DISABLE, "disable" }, \
{ KVM_TRACE_AUX_DISCARD, "discard" }
#define kvm_trace_symbol_aux_state \
{ KVM_TRACE_AUX_FPU, "FPU" }
TRACE_EVENT(kvm_aux,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op,
unsigned int state),
TP_ARGS(vcpu, op, state),
TP_STRUCT__entry(
__field(unsigned long, pc)
__field(u8, op)
__field(u8, state)
),
TP_fast_assign(
__entry->pc = vcpu->arch.pc;
__entry->op = op;
__entry->state = state;
),
TP_printk("%s %s PC: 0x%08lx",
__print_symbolic(__entry->op,
kvm_trace_symbol_aux_op),
__print_symbolic(__entry->state,
kvm_trace_symbol_aux_state),
__entry->pc)
);
TRACE_EVENT(kvm_vpid_change,
TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid),
TP_ARGS(vcpu, vpid),
TP_STRUCT__entry(
__field(unsigned long, vpid)
),
TP_fast_assign(
__entry->vpid = vpid;
),
TP_printk("VPID: 0x%08lx", __entry->vpid)
);
#endif /* _TRACE_KVM_H */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
/* This part must be outside protection */
#include <trace/define_trace.h>