Commit f557af08 authored by Linus Torvalds

Merge tag 'riscv-for-linus-6.11-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux

Pull RISC-V updates from Palmer Dabbelt:

 - Support for various new ISA extensions:
     * The Zve32[xf] and Zve64[xfd] sub-extensions of the vector
       extension
     * Zimop and Zcmop for may-be-operations
     * The Zca, Zcf, Zcd and Zcb sub-extensions of the C extension
     * Zawrs

 - riscv,cpu-intc is now dtschema

 - A handful of performance improvements and cleanups to text patching

 - Support for memory hot{,un}plug

 - The highest user-allocatable virtual address is now visible in
   hwprobe

* tag 'riscv-for-linus-6.11-mw1' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (58 commits)
  riscv: lib: relax assembly constraints in hweight
  riscv: set trap vector earlier
  KVM: riscv: selftests: Add Zawrs extension to get-reg-list test
  KVM: riscv: Support guest wrs.nto
  riscv: hwprobe: export Zawrs ISA extension
  riscv: Add Zawrs support for spinlocks
  dt-bindings: riscv: Add Zawrs ISA extension description
  riscv: Provide a definition for 'pause'
  riscv: hwprobe: export highest virtual userspace address
  riscv: Improve sbi_ecall() code generation by reordering arguments
  riscv: Add tracepoints for SBI calls and returns
  riscv: Optimize crc32 with Zbc extension
  riscv: Enable DAX VMEMMAP optimization
  riscv: mm: Add support for ZONE_DEVICE
  virtio-mem: Enable virtio-mem for RISC-V
  riscv: Enable memory hotplugging for RISC-V
  riscv: mm: Take memory hotplug read-lock during kernel page table dump
  riscv: mm: Add memory hotplugging support
  riscv: mm: Add pfn_to_kaddr() implementation
  riscv: mm: Refactor create_linear_mapping_range() for memory hot add
  ...
parents d2be38b9 93b63f68
@@ -192,6 +192,53 @@ The following keys are defined:
supported as defined in the RISC-V ISA manual starting from commit
d8ab5c78c207 ("Zihintpause is ratified").
* :c:macro:`RISCV_HWPROBE_EXT_ZVE32X`: The Vector sub-extension Zve32x is
supported, as defined by version 1.0 of the RISC-V Vector extension manual.
* :c:macro:`RISCV_HWPROBE_EXT_ZVE32F`: The Vector sub-extension Zve32f is
supported, as defined by version 1.0 of the RISC-V Vector extension manual.
* :c:macro:`RISCV_HWPROBE_EXT_ZVE64X`: The Vector sub-extension Zve64x is
supported, as defined by version 1.0 of the RISC-V Vector extension manual.
* :c:macro:`RISCV_HWPROBE_EXT_ZVE64F`: The Vector sub-extension Zve64f is
supported, as defined by version 1.0 of the RISC-V Vector extension manual.
* :c:macro:`RISCV_HWPROBE_EXT_ZVE64D`: The Vector sub-extension Zve64d is
supported, as defined by version 1.0 of the RISC-V Vector extension manual.
* :c:macro:`RISCV_HWPROBE_EXT_ZIMOP`: The Zimop May-Be-Operations extension is
supported as defined in the RISC-V ISA manual starting from commit
58220614a5f ("Zimop is ratified/1.0").
* :c:macro:`RISCV_HWPROBE_EXT_ZCA`: The Zca extension part of Zc* standard
extensions for code size reduction, as ratified in commit 8be3419c1c0
("Zcf doesn't exist on RV64 as it contains no instructions") of
riscv-code-size-reduction.
* :c:macro:`RISCV_HWPROBE_EXT_ZCB`: The Zcb extension part of Zc* standard
extensions for code size reduction, as ratified in commit 8be3419c1c0
("Zcf doesn't exist on RV64 as it contains no instructions") of
riscv-code-size-reduction.
* :c:macro:`RISCV_HWPROBE_EXT_ZCD`: The Zcd extension part of Zc* standard
extensions for code size reduction, as ratified in commit 8be3419c1c0
("Zcf doesn't exist on RV64 as it contains no instructions") of
riscv-code-size-reduction.
* :c:macro:`RISCV_HWPROBE_EXT_ZCF`: The Zcf extension part of Zc* standard
extensions for code size reduction, as ratified in commit 8be3419c1c0
("Zcf doesn't exist on RV64 as it contains no instructions") of
riscv-code-size-reduction.
* :c:macro:`RISCV_HWPROBE_EXT_ZCMOP`: The Zcmop May-Be-Operations extension is
supported as defined in the RISC-V ISA manual starting from commit
c732a4f39a4 ("Zcmop is ratified/1.0").
* :c:macro:`RISCV_HWPROBE_EXT_ZAWRS`: The Zawrs extension is supported as
ratified in commit 98918c844281 ("Merge pull request #1217 from
riscv/zawrs") of riscv-isa-manual.
* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
information about the selected set of processors.
@@ -214,3 +261,6 @@ The following keys are defined:
* :c:macro:`RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE`: An unsigned int which
represents the size of the Zicboz block in bytes.
* :c:macro:`RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS`: An unsigned long which
represents the highest usable userspace virtual address.
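To make the new keys concrete, here is a minimal, hypothetical userspace sketch that probes the Zawrs bit and the new highest-virtual-address key through the riscv_hwprobe() syscall (illustrative only; assumes kernel headers from this series are installed, error handling elided):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>

int main(void)
{
	struct riscv_hwprobe pairs[] = {
		{ .key = RISCV_HWPROBE_KEY_IMA_EXT_0 },
		{ .key = RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS },
	};

	/* NULL cpuset with zero size means "probe all online CPUs". */
	if (syscall(__NR_riscv_hwprobe, pairs, 2, 0, NULL, 0))
		return 1;

	if (pairs[0].value & RISCV_HWPROBE_EXT_ZAWRS)
		printf("Zawrs is supported\n");
	printf("highest userspace VA: 0x%llx\n",
	       (unsigned long long)pairs[1].value);
	return 0;
}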
RISC-V Hart-Level Interrupt Controller (HLIC)
---------------------------------------------
RISC-V cores include Control Status Registers (CSRs) which are local to each
CPU core (HART in RISC-V terminology) and can be read or written by software.
Some of these CSRs are used to control local interrupts connected to the core.
Every interrupt is ultimately routed through a hart's HLIC before it
interrupts that hart.
The RISC-V supervisor ISA manual specifies three interrupt sources that are
attached to every HLIC: software interrupts, the timer interrupt, and external
interrupts. Software interrupts are used to send IPIs between cores. The
timer interrupt comes from an architecturally mandated real-time timer that is
controlled via Supervisor Binary Interface (SBI) calls and CSR reads. External
interrupts connect all other device interrupts to the HLIC, which are routed
via the platform-level interrupt controller (PLIC).
All RISC-V systems that conform to the supervisor ISA specification are
required to have a HLIC with these three interrupt sources present. Since the
interrupt map is defined by the ISA it's not listed in the HLIC's device tree
entry, though external interrupt controllers (like the PLIC, for example) will
need to define how their interrupts map to the relevant HLICs. This means
a PLIC interrupt property will typically list the HLICs for all present HARTs
in the system.
Required properties:
- compatible : "riscv,cpu-intc"
- #interrupt-cells : should be <1>. The interrupt sources are defined by the
RISC-V supervisor ISA manual, with only the following three interrupts being
defined for supervisor mode:
- Source 1 is the supervisor software interrupt, which can be sent by an SBI
call and is reserved for use by software.
- Source 5 is the supervisor timer interrupt, which can be configured by
SBI calls and implements a one-shot timer.
- Source 9 is the supervisor external interrupt, which chains to all other
device interrupts.
- interrupt-controller : Identifies the node as an interrupt controller
Furthermore, this interrupt-controller MUST be embedded inside the cpu
definition of the hart whose CSRs control these local interrupts.
An example device tree entry for a HLIC is shown below.
cpu1: cpu@1 {
compatible = "riscv";
...
cpu1-intc: interrupt-controller {
#interrupt-cells = <1>;
compatible = "sifive,fu540-c000-cpu-intc", "riscv,cpu-intc";
interrupt-controller;
};
};
# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/interrupt-controller/riscv,cpu-intc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: RISC-V Hart-Level Interrupt Controller (HLIC)
description:
RISC-V cores include Control Status Registers (CSRs) which are local to
each CPU core (HART in RISC-V terminology) and can be read or written by
software. Some of these CSRs are used to control local interrupts connected
to the core. Every interrupt is ultimately routed through a hart's HLIC
before it interrupts that hart.
The RISC-V supervisor ISA manual specifies three interrupt sources that are
attached to every HLIC namely software interrupts, the timer interrupt, and
external interrupts. Software interrupts are used to send IPIs between
cores. The timer interrupt comes from an architecturally mandated real-
time timer that is controlled via Supervisor Binary Interface (SBI) calls
and CSR reads. External interrupts connect all other device interrupts to
the HLIC, which are routed via the platform-level interrupt controller
(PLIC).
All RISC-V systems that conform to the supervisor ISA specification are
required to have a HLIC with these three interrupt sources present. Since
the interrupt map is defined by the ISA it's not listed in the HLIC's device
tree entry, though external interrupt controllers (like the PLIC, for
example) will need to define how their interrupts map to the relevant HLICs.
This means a PLIC interrupt property will typically list the HLICs for all
present HARTs in the system.
maintainers:
- Palmer Dabbelt <palmer@dabbelt.com>
- Paul Walmsley <paul.walmsley@sifive.com>
properties:
compatible:
oneOf:
- items:
- const: andestech,cpu-intc
- const: riscv,cpu-intc
- const: riscv,cpu-intc
interrupt-controller: true
'#interrupt-cells':
const: 1
description: |
The interrupt sources are defined by the RISC-V supervisor ISA manual,
with only the following three interrupts being defined for
supervisor mode:
- Source 1 is the supervisor software interrupt, which can be sent by
an SBI call and is reserved for use by software.
- Source 5 is the supervisor timer interrupt, which can be configured
by SBI calls and implements a one-shot timer.
- Source 9 is the supervisor external interrupt, which chains to all
other device interrupts.
required:
- compatible
- '#interrupt-cells'
- interrupt-controller
additionalProperties: false
examples:
- |
interrupt-controller {
#interrupt-cells = <1>;
compatible = "riscv,cpu-intc";
interrupt-controller;
};
@@ -103,26 +103,7 @@ properties:
interrupt-controller:
type: object
additionalProperties: false
$ref: /schemas/interrupt-controller/riscv,cpu-intc.yaml#
description: Describes the CPU's local interrupt controller
properties:
'#interrupt-cells':
const: 1
compatible:
oneOf:
- items:
- const: andestech,cpu-intc
- const: riscv,cpu-intc
- const: riscv,cpu-intc
interrupt-controller: true
required:
- '#interrupt-cells'
- compatible
- interrupt-controller
cpu-idle-states:
$ref: /schemas/types.yaml#/definitions/phandle-array
...
@@ -177,6 +177,13 @@ properties:
is supported as ratified at commit 5059e0ca641c ("update to
ratified") of the riscv-zacas.
- const: zawrs
description: |
The Zawrs extension for entering a low-power state or for trapping
to a hypervisor while waiting on a store to a memory location, as
ratified in commit 98918c844281 ("Merge pull request #1217 from
riscv/zawrs") of riscv-isa-manual.
- const: zba
description: |
The standard Zba bit-manipulation extension for address generation
@@ -220,6 +227,43 @@ properties:
instructions as ratified at commit 6d33919 ("Merge pull request #158
from hirooih/clmul-fix-loop-end-condition") of riscv-bitmanip.
- const: zca
description: |
The Zca extension part of Zc* standard extensions for code size
reduction, as ratified in commit 8be3419c1c0 ("Zcf doesn't exist on
RV64 as it contains no instructions") of riscv-code-size-reduction,
merged in the riscv-isa-manual by commit dbc79cf28a2 ("Initial seed
of zc.adoc to src tree.").
- const: zcb
description: |
The Zcb extension part of Zc* standard extensions for code size
reduction, as ratified in commit 8be3419c1c0 ("Zcf doesn't exist on
RV64 as it contains no instructions") of riscv-code-size-reduction,
merged in the riscv-isa-manual by commit dbc79cf28a2 ("Initial seed
of zc.adoc to src tree.").
- const: zcd
description: |
The Zcd extension part of Zc* standard extensions for code size
reduction, as ratified in commit 8be3419c1c0 ("Zcf doesn't exist on
RV64 as it contains no instructions") of riscv-code-size-reduction,
merged in the riscv-isa-manual by commit dbc79cf28a2 ("Initial seed
of zc.adoc to src tree.").
- const: zcf
description: |
The Zcf extension part of Zc* standard extensions for code size
reduction, as ratified in commit 8be3419c1c0 ("Zcf doesn't exist on
RV64 as it contains no instructions") of riscv-code-size-reduction,
merged in the riscv-isa-manual by commit dbc79cf28a2 ("Initial seed
of zc.adoc to src tree.").
- const: zcmop
description:
The standard Zcmop extension version 1.0, as ratified in commit
c732a4f39a4 ("Zcmop is ratified/1.0") of the riscv-isa-manual.
- const: zfa
description:
The standard Zfa extension for additional floating point
@@ -363,6 +407,11 @@ properties:
ratified in the 20191213 version of the unprivileged ISA
specification.
- const: zimop
description:
The standard Zimop extension version 1.0, as ratified in commit
58220614a5f ("Zimop is ratified/1.0") of the riscv-isa-manual.
- const: ztso
description:
The standard Ztso extension for total store ordering, as ratified
@@ -381,6 +430,36 @@ properties:
instructions, as ratified in commit 56ed795 ("Update
riscv-crypto-spec-vector.adoc") of riscv-crypto.
- const: zve32f
description:
The standard Zve32f extension for embedded processors, as ratified
in commit 6f702a2 ("Vector extensions are now ratified") of
riscv-v-spec.
- const: zve32x
description:
The standard Zve32x extension for embedded processors, as ratified
in commit 6f702a2 ("Vector extensions are now ratified") of
riscv-v-spec.
- const: zve64d
description:
The standard Zve64d extension for embedded processors, as ratified
in commit 6f702a2 ("Vector extensions are now ratified") of
riscv-v-spec.
- const: zve64f
description:
The standard Zve64f extension for embedded processors, as ratified
in commit 6f702a2 ("Vector extensions are now ratified") of
riscv-v-spec.
- const: zve64x
description:
The standard Zve64x extension for embedded processors, as ratified
in commit 6f702a2 ("Vector extensions are now ratified") of
riscv-v-spec.
- const: zvfh
description:
The standard Zvfh extension for vectored half-precision
@@ -484,5 +563,58 @@ properties:
Registers in the AX45MP datasheet.
https://www.andestech.com/wp-content/uploads/AX45MP-1C-Rev.-5.0.0-Datasheet.pdf
allOf:
# Zcb depends on Zca
- if:
contains:
const: zcb
then:
contains:
const: zca
# Zcd depends on Zca and D
- if:
contains:
const: zcd
then:
allOf:
- contains:
const: zca
- contains:
const: d
# Zcf depends on Zca and F
- if:
contains:
const: zcf
then:
allOf:
- contains:
const: zca
- contains:
const: f
# Zcmop depends on Zca
- if:
contains:
const: zcmop
then:
contains:
const: zca
allOf:
# Zcf extension does not exist on rv64
- if:
properties:
riscv,isa-extensions:
contains:
const: zcf
riscv,isa-base:
contains:
const: rv64i
then:
properties:
riscv,isa-extensions:
not:
contains:
const: zcf
additionalProperties: true
...
@@ -16,6 +16,8 @@ config RISCV
select ACPI_REDUCED_HARDWARE_ONLY if ACPI
select ARCH_DMA_DEFAULT_COHERENT
select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM_VMEMMAP
select ARCH_ENABLE_MEMORY_HOTREMOVE if MEMORY_HOTPLUG
select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
select ARCH_HAS_BINFMT_FLAT
@@ -35,6 +37,7 @@ config RISCV
select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
select ARCH_HAS_PMEM_API
select ARCH_HAS_PREPARE_SYNC_CORE_CMD
select ARCH_HAS_PTE_DEVMAP if 64BIT && MMU
select ARCH_HAS_PTE_SPECIAL
select ARCH_HAS_SET_DIRECT_MAP if MMU
select ARCH_HAS_SET_MEMORY if MMU
@@ -46,6 +49,7 @@ config RISCV
select ARCH_HAS_UBSAN
select ARCH_HAS_VDSO_DATA
select ARCH_KEEP_MEMBLOCK if ACPI
select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE if 64BIT && MMU
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
select ARCH_STACKWALK
@@ -69,6 +73,7 @@ config RISCV
select ARCH_WANT_GENERAL_HUGETLB if !RISCV_ISA_SVNAPOT
select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
select ARCH_WANT_LD_ORPHAN_WARN if !XIP_KERNEL
select ARCH_WANT_OPTIMIZE_DAX_VMEMMAP
select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
select ARCH_WANTS_NO_INSTR
select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
@@ -595,6 +600,19 @@ config RISCV_ISA_V_PREEMPTIVE
preemption. Enabling this config will result in higher memory
consumption due to the allocation of per-task's kernel Vector context.
config RISCV_ISA_ZAWRS
bool "Zawrs extension support for more efficient busy waiting"
depends on RISCV_ALTERNATIVE
default y
help
The Zawrs extension defines instructions to be used in polling loops
which allow a hart to enter a low-power state or to trap to the
hypervisor while waiting on a store to a memory location. Enable the
use of these instructions in the kernel when the Zawrs extension is
detected at boot.
If you don't know what to do here, say Y.
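To illustrate what the help text describes, a hand-written sketch of the lr/wrs.nto polling pattern might look like the following. This is illustrative only: the kernel's actual implementation is the __cmpwait() helper added to cmpxchg.h later in this series, and the .4byte encoding mirrors the ZAWRS_WRS_NTO definition added to insn-def.h.

/* Illustrative sketch: wait until *addr no longer reads as 'old'. */
static inline void wait_for_change(volatile unsigned int *addr, unsigned int old)
{
	unsigned int tmp;

	asm volatile(
		"1:	lr.w	%0, %1\n"	/* load and register a reservation */
		"	bne	%0, %2, 2f\n"	/* value already changed: done */
		"	.4byte	0x00d00073\n"	/* wrs.nto: stall until the reservation is lost */
		"	j	1b\n"		/* re-check after any wakeup */
		"2:"
		: "=&r" (tmp), "+A" (*addr)
		: "r" (old));
}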
config TOOLCHAIN_HAS_ZBB
bool
default y
@@ -637,6 +655,29 @@ config RISCV_ISA_ZBB
If you don't know what to do here, say Y.
config TOOLCHAIN_HAS_ZBC
bool
default y
depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zbc)
depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zbc)
depends on LLD_VERSION >= 150000 || LD_VERSION >= 23900
depends on AS_HAS_OPTION_ARCH
config RISCV_ISA_ZBC
bool "Zbc extension support for carry-less multiplication instructions"
depends on TOOLCHAIN_HAS_ZBC
depends on MMU
depends on RISCV_ALTERNATIVE
default y
help
Adds support to dynamically detect the presence of the Zbc
extension (carry-less multiplication) and enable its usage.
The Zbc extension could accelerate CRC (cyclic redundancy check)
calculations.
If you don't know what to do here, say Y.
config RISCV_ISA_ZICBOM
bool "Zicbom extension support for non-coherent DMA operation"
depends on MMU
@@ -666,13 +707,6 @@ config RISCV_ISA_ZICBOZ
If you don't know what to do here, say Y.
config TOOLCHAIN_HAS_ZIHINTPAUSE
bool
default y
depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zihintpause)
depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause)
depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600
config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
def_bool y
# https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
@@ -979,6 +1013,17 @@ config EFI
allow the kernel to be booted as an EFI application. This
is only useful on systems that have UEFI firmware.
config DMI
bool "Enable support for SMBIOS (DMI) tables"
depends on EFI
default y
help
This enables SMBIOS/DMI feature for systems.
This option is only useful on systems that have UEFI firmware.
However, even with this option, the resultant kernel should
continue to boot on existing non-UEFI platforms.
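As a hedged sketch of what this option enables: once the EFI stub exposes the SMBIOS tables, the generic helpers from <linux/dmi.h> become usable on RISC-V. The function below is hypothetical, for illustration only:

#include <linux/dmi.h>

/* Illustrative only: both strings may be NULL without SMBIOS tables. */
static void __init report_board_identity(void)
{
	const char *vendor = dmi_get_system_info(DMI_SYS_VENDOR);
	const char *product = dmi_get_system_info(DMI_PRODUCT_NAME);

	pr_info("DMI: %s %s\n", vendor ?: "<unknown>", product ?: "<unknown>");
}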
config CC_HAVE_STACKPROTECTOR_TLS
def_bool $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=tp -mstack-protector-guard-offset=0)
...
@@ -82,9 +82,6 @@ else
riscv-march-$(CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI) := $(riscv-march-y)_zicsr_zifencei
endif

# Check if the toolchain supports Zihintpause extension
riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause

# Remove F,D,V from isa string for all. Keep extensions between "fd" and "v" by
# matching non-v and non-multi-letter extensions out with the filter ([^v_]*)
KBUILD_CFLAGS += -march=$(shell echo $(riscv-march-y) | sed -E 's/(rv32ima|rv64ima)fd([^v_]*)v?/\1\2/')
...
@@ -26,9 +26,9 @@ static __always_inline unsigned int __arch_hweight32(unsigned int w)
asm (".option push\n"
".option arch,+zbb\n"
CPOPW "%0, %0\n"
CPOPW "%0, %1\n"
".option pop\n"
: "+r" (w) : :);
: "=r" (w) : "r" (w) :);
return w;
@@ -57,9 +57,9 @@ static __always_inline unsigned long __arch_hweight64(__u64 w)
asm (".option push\n"
".option arch,+zbb\n"
"cpop %0, %0\n"
"cpop %0, %1\n"
".option pop\n"
: "+r" (w) : :);
: "=r" (w) : "r" (w) :);
return w;
...
@@ -11,6 +11,7 @@
#define _ASM_RISCV_BARRIER_H

#ifndef __ASSEMBLY__
#include <asm/cmpxchg.h>
#include <asm/fence.h>

#define nop() __asm__ __volatile__ ("nop")
@@ -28,21 +29,6 @@
#define __smp_rmb() RISCV_FENCE(r, r)
#define __smp_wmb() RISCV_FENCE(w, w)
#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
RISCV_FENCE(rw, w); \
WRITE_ONCE(*p, v); \
} while (0)
#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
RISCV_FENCE(r, rw); \
___p1; \
})
/*
* This is a very specific barrier: it's currently only used in two places in
* the kernel, both in the scheduler. See include/linux/spinlock.h for the two
@@ -70,6 +56,35 @@ do { \
*/
#define smp_mb__after_spinlock() RISCV_FENCE(iorw, iorw)
#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
RISCV_FENCE(rw, w); \
WRITE_ONCE(*p, v); \
} while (0)
#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
RISCV_FENCE(r, rw); \
___p1; \
})
#ifdef CONFIG_RISCV_ISA_ZAWRS
#define smp_cond_load_relaxed(ptr, cond_expr) ({ \
typeof(ptr) __PTR = (ptr); \
__unqual_scalar_typeof(*ptr) VAL; \
for (;;) { \
VAL = READ_ONCE(*__PTR); \
if (cond_expr) \
break; \
__cmpwait_relaxed(ptr, VAL); \
} \
(typeof(*ptr))VAL; \
})
#endif
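For context, a sketch of the kind of caller this helps: generic primitives such as smp_cond_load_acquire() and the qspinlock slowpath funnel through this macro, so a wait of the following shape now parks the hart in wrs.nto instead of spinning. The caller below is hypothetical, for illustration only:

/* Illustrative only: relaxed-wait until the lock word reads as zero. */
static inline void wait_until_unlocked(atomic_t *lock)
{
	/* VAL names the freshly loaded value inside the macro. */
	smp_cond_load_relaxed(&lock->counter, VAL == 0);
}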
#include <asm-generic/barrier.h>

#endif /* __ASSEMBLY__ */
...
@@ -8,7 +8,10 @@
#include <linux/bug.h>

#include <asm/alternative-macros.h>
#include <asm/fence.h>
#include <asm/hwcap.h>
#include <asm/insn-def.h>

#define __arch_xchg_masked(sc_sfx, prepend, append, r, p, n) \
({ \
@@ -223,4 +226,59 @@
arch_cmpxchg_release((ptr), (o), (n)); \
})
#ifdef CONFIG_RISCV_ISA_ZAWRS
/*
* Despite wrs.nto being "WRS-with-no-timeout", in the absence of changes to
* @val we expect it to still terminate within a "reasonable" amount of time
* for an implementation-specific other reason, a pending, locally-enabled
* interrupt, or because it has been configured to raise an illegal
* instruction exception.
*/
static __always_inline void __cmpwait(volatile void *ptr,
unsigned long val,
int size)
{
unsigned long tmp;
asm goto(ALTERNATIVE("j %l[no_zawrs]", "nop",
0, RISCV_ISA_EXT_ZAWRS, 1)
: : : : no_zawrs);
switch (size) {
case 4:
asm volatile(
" lr.w %0, %1\n"
" xor %0, %0, %2\n"
" bnez %0, 1f\n"
ZAWRS_WRS_NTO "\n"
"1:"
: "=&r" (tmp), "+A" (*(u32 *)ptr)
: "r" (val));
break;
#if __riscv_xlen == 64
case 8:
asm volatile(
" lr.d %0, %1\n"
" xor %0, %0, %2\n"
" bnez %0, 1f\n"
ZAWRS_WRS_NTO "\n"
"1:"
: "=&r" (tmp), "+A" (*(u64 *)ptr)
: "r" (val));
break;
#endif
default:
BUILD_BUG();
}
return;
no_zawrs:
asm volatile(RISCV_PAUSE : : : "memory");
}
#define __cmpwait_relaxed(ptr, val) \
__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
#endif
#endif /* _ASM_RISCV_CMPXCHG_H */
@@ -70,6 +70,7 @@ struct riscv_isa_ext_data {
const char *property;
const unsigned int *subset_ext_ids;
const unsigned int subset_ext_size;
int (*validate)(const struct riscv_isa_ext_data *data, const unsigned long *isa_bitmap);
};

extern const struct riscv_isa_ext_data riscv_isa_ext[];
...
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2024 Intel Corporation
*
* based on arch/arm64/include/asm/dmi.h
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef __ASM_DMI_H
#define __ASM_DMI_H
#include <linux/io.h>
#include <linux/slab.h>
#define dmi_early_remap(x, l) memremap(x, l, MEMREMAP_WB)
#define dmi_early_unmap(x, l) memunmap(x)
#define dmi_remap(x, l) memremap(x, l, MEMREMAP_WB)
#define dmi_unmap(x) memunmap(x)
#define dmi_alloc(l) kzalloc(l, GFP_KERNEL)
#endif
@@ -81,6 +81,18 @@
#define RISCV_ISA_EXT_ZTSO 72
#define RISCV_ISA_EXT_ZACAS 73
#define RISCV_ISA_EXT_XANDESPMU 74
#define RISCV_ISA_EXT_ZVE32X 75
#define RISCV_ISA_EXT_ZVE32F 76
#define RISCV_ISA_EXT_ZVE64X 77
#define RISCV_ISA_EXT_ZVE64F 78
#define RISCV_ISA_EXT_ZVE64D 79
#define RISCV_ISA_EXT_ZIMOP 80
#define RISCV_ISA_EXT_ZCA 81
#define RISCV_ISA_EXT_ZCB 82
#define RISCV_ISA_EXT_ZCD 83
#define RISCV_ISA_EXT_ZCF 84
#define RISCV_ISA_EXT_ZCMOP 85
#define RISCV_ISA_EXT_ZAWRS 86
#define RISCV_ISA_EXT_XLINUXENVCFG 127
...
@@ -8,7 +8,7 @@
#include <uapi/asm/hwprobe.h>

#define RISCV_HWPROBE_MAX_KEY 6
#define RISCV_HWPROBE_MAX_KEY 7

static inline bool riscv_hwprobe_key_is_valid(__s64 key)
{
...
@@ -196,4 +196,8 @@
INSN_I(OPCODE_MISC_MEM, FUNC3(2), __RD(0), \
RS1(base), SIMM12(4))

#define RISCV_PAUSE ".4byte 0x100000f"
#define ZAWRS_WRS_NTO ".4byte 0x00d00073"
#define ZAWRS_WRS_STO ".4byte 0x01d00073"

#endif /* __ASM_INSN_DEF_H */
@@ -12,6 +12,8 @@
#include <linux/types.h>
#include <asm/asm.h>

#define HAVE_JUMP_LABEL_BATCH

#define JUMP_LABEL_NOP_SIZE 4

static __always_inline bool arch_static_branch(struct static_key * const key,
@@ -44,7 +46,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key * const ke
" .option push \n\t"
" .option norelax \n\t"
" .option norvc \n\t"
"1: jal zero, %l[label] \n\t"
"1: j %l[label] \n\t"
" .option pop \n\t"
" .pushsection __jump_table, \"aw\" \n\t"
" .align " RISCV_LGPTR " \n\t"
...
@@ -6,8 +6,6 @@
#ifndef __ASSEMBLY__

#ifdef CONFIG_KASAN

/*
* The following comment was copied from arm64:
* KASAN_SHADOW_START: beginning of the kernel virtual addresses.
@@ -34,6 +32,8 @@
*/
#define KASAN_SHADOW_START ((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK)
#define KASAN_SHADOW_END MODULES_LOWEST_VADDR

#ifdef CONFIG_KASAN
#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)

void kasan_init(void);
...
@@ -80,6 +80,7 @@ struct kvm_vcpu_stat {
struct kvm_vcpu_stat_generic generic;
u64 ecall_exit_stat;
u64 wfi_exit_stat;
u64 wrs_exit_stat;
u64 mmio_exit_user;
u64 mmio_exit_kernel;
u64 csr_exit_user;
...
@@ -31,8 +31,8 @@ typedef struct {
#define cntx2asid(cntx) ((cntx) & SATP_ASID_MASK)
#define cntx2version(cntx) ((cntx) & ~SATP_ASID_MASK)

void __init create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa,
phys_addr_t sz, pgprot_t prot);
void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
pgprot_t prot);

#endif /* __ASSEMBLY__ */
#endif /* _ASM_RISCV_MMU_H */
@@ -188,6 +188,11 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
unsigned long kaslr_offset(void);

static __always_inline void *pfn_to_kaddr(unsigned long pfn)
{
return __va(pfn << PAGE_SHIFT);
}

#endif /* __ASSEMBLY__ */

#define virt_addr_valid(vaddr) ({ \
...
@@ -9,7 +9,7 @@
int patch_insn_write(void *addr, const void *insn, size_t len);
int patch_text_nosync(void *addr, const void *insns, size_t len);
int patch_text_set_nosync(void *addr, u8 c, size_t len);
int patch_text(void *addr, u32 *insns, int ninsns);
int patch_text(void *addr, u32 *insns, size_t len);

extern int riscv_patch_in_stop_machine;
...
@@ -398,4 +398,24 @@ static inline struct page *pgd_page(pgd_t pgd)
#define p4d_offset p4d_offset
p4d_t *p4d_offset(pgd_t *pgd, unsigned long address);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int pte_devmap(pte_t pte);
static inline pte_t pmd_pte(pmd_t pmd);
static inline int pmd_devmap(pmd_t pmd)
{
return pte_devmap(pmd_pte(pmd));
}
static inline int pud_devmap(pud_t pud)
{
return 0;
}
static inline int pgd_devmap(pgd_t pgd)
{
return 0;
}
#endif
#endif /* _ASM_RISCV_PGTABLE_64_H */
@@ -19,6 +19,7 @@
#define _PAGE_SOFT (3 << 8) /* Reserved for software */

#define _PAGE_SPECIAL (1 << 8) /* RSW: 0x1 */
#define _PAGE_DEVMAP (1 << 9) /* RSW, devmap */
#define _PAGE_TABLE _PAGE_PRESENT

/*
...
@@ -165,7 +165,7 @@ struct pt_alloc_ops {
#endif
};

extern struct pt_alloc_ops pt_ops __initdata;
extern struct pt_alloc_ops pt_ops __meminitdata;

#ifdef CONFIG_MMU
/* Number of PGD entries that a user-mode program can use */
@@ -350,6 +350,19 @@ static inline int pte_present(pte_t pte)
return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROT_NONE));
}
#define pte_accessible pte_accessible
static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
{
if (pte_val(a) & _PAGE_PRESENT)
return true;
if ((pte_val(a) & _PAGE_PROT_NONE) &&
atomic_read(&mm->tlb_flush_pending))
return true;
return false;
}
static inline int pte_none(pte_t pte)
{
return (pte_val(pte) == 0);
@@ -390,6 +403,13 @@ static inline int pte_special(pte_t pte)
return pte_val(pte) & _PAGE_SPECIAL;
}
#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP
static inline int pte_devmap(pte_t pte)
{
return pte_val(pte) & _PAGE_DEVMAP;
}
#endif
/* static inline pte_t pte_rdprotect(pte_t pte) */

static inline pte_t pte_wrprotect(pte_t pte)
@@ -431,6 +451,11 @@ static inline pte_t pte_mkspecial(pte_t pte)
return __pte(pte_val(pte) | _PAGE_SPECIAL);
}
static inline pte_t pte_mkdevmap(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DEVMAP);
}
static inline pte_t pte_mkhuge(pte_t pte)
{
return pte;
@@ -721,6 +746,11 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
return pte_pmd(pte_mkdirty(pmd_pte(pmd)));
}
static inline pmd_t pmd_mkdevmap(pmd_t pmd)
{
return pte_pmd(pte_mkdevmap(pmd_pte(pmd)));
}
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, pmd_t pmd)
{
...
@@ -57,6 +57,12 @@
#define STACK_TOP DEFAULT_MAP_WINDOW

#ifdef CONFIG_MMU
#define user_max_virt_addr() arch_get_mmap_end(ULONG_MAX, 0, 0)
#else
#define user_max_virt_addr() 0
#endif /* CONFIG_MMU */

/*
* This decides where the kernel will search for a free chunk of vm
* space during mmap's.
...
@@ -304,10 +304,12 @@ struct sbiret {
};

void sbi_init(void);
struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
unsigned long arg1, unsigned long arg2,
unsigned long arg3, unsigned long arg4,
unsigned long arg5);
struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
unsigned long arg2, unsigned long arg3,
unsigned long arg4, unsigned long arg5,
int fid, int ext);
#define sbi_ecall(e, f, a0, a1, a2, a3, a4, a5) \
__sbi_ecall(a0, a1, a2, a3, a4, a5, f, e)
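Call sites are unchanged: the wrapper flips ext/fid to the last positions so that arg0..arg5 already sit in a0-a5 at the call boundary and only a6/a7 need to be materialized, which is the code-generation win named in the commit. A hypothetical caller in the existing style, using the SBI TIME constants already defined in this header (sketch only):

/* Illustrative only: program the next timer tick via the SBI. */
static void set_next_timer_event(u64 stime_value)
{
	struct sbiret ret;

	ret = sbi_ecall(SBI_EXT_TIME, SBI_EXT_TIME_SET_TIMER,
			stime_value, 0, 0, 0, 0, 0);
	if (ret.error)
		pr_warn("SBI set_timer failed: %ld\n", ret.error);
}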
#ifdef CONFIG_RISCV_SBI_V01
void sbi_console_putchar(int ch);
...
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM riscv
#if !defined(_TRACE_RISCV_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_RISCV_H
#include <linux/tracepoint.h>
TRACE_EVENT_CONDITION(sbi_call,
TP_PROTO(int ext, int fid),
TP_ARGS(ext, fid),
TP_CONDITION(ext != SBI_EXT_HSM),
TP_STRUCT__entry(
__field(int, ext)
__field(int, fid)
),
TP_fast_assign(
__entry->ext = ext;
__entry->fid = fid;
),
TP_printk("ext=0x%x fid=%d", __entry->ext, __entry->fid)
);
TRACE_EVENT_CONDITION(sbi_return,
TP_PROTO(int ext, long error, long value),
TP_ARGS(ext, error, value),
TP_CONDITION(ext != SBI_EXT_HSM),
TP_STRUCT__entry(
__field(long, error)
__field(long, value)
),
TP_fast_assign(
__entry->error = error;
__entry->value = value;
),
TP_printk("error=%ld value=0x%lx", __entry->error, __entry->value)
);
#endif /* _TRACE_RISCV_H */
#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH asm
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>
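A sketch of how these events are intended to be emitted; the real hookup lives in __sbi_ecall() in arch/riscv/kernel/sbi.c, so treat the details below as a reconstruction from the calling convention above rather than the merged code:

struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
			  unsigned long arg2, unsigned long arg3,
			  unsigned long arg4, unsigned long arg5,
			  int fid, int ext)
{
	struct sbiret ret;
	register uintptr_t a0 asm("a0") = arg0;
	register uintptr_t a1 asm("a1") = arg1;
	register uintptr_t a2 asm("a2") = arg2;
	register uintptr_t a3 asm("a3") = arg3;
	register uintptr_t a4 asm("a4") = arg4;
	register uintptr_t a5 asm("a5") = arg5;
	register uintptr_t a6 asm("a6") = fid;
	register uintptr_t a7 asm("a7") = ext;

	trace_sbi_call(ext, fid);	/* suppressed for SBI_EXT_HSM by TP_CONDITION */

	asm volatile("ecall"
		     : "+r" (a0), "+r" (a1)
		     : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
		     : "memory");
	ret.error = a0;
	ret.value = a1;

	trace_sbi_return(ext, ret.error, ret.value);

	return ret;
}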
@@ -5,6 +5,7 @@
#ifndef __ASSEMBLY__

#include <asm/barrier.h>
#include <asm/insn-def.h>

static inline void cpu_relax(void)
{
@@ -14,16 +15,11 @@ static inline void cpu_relax(void)
__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
#endif

#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
/*
* Reduce instruction retirement.
* This assumes the PC changes.
*/
__asm__ __volatile__ ("pause");
__asm__ __volatile__ (RISCV_PAUSE);
#else
/* Encoding of the pause instruction */
__asm__ __volatile__ (".4byte 0x100000F");
#endif

barrier();
}
...
@@ -37,7 +37,7 @@ static inline u32 riscv_v_flags(void)
static __always_inline bool has_vector(void)
{
return riscv_has_extension_unlikely(RISCV_ISA_EXT_v);
return riscv_has_extension_unlikely(RISCV_ISA_EXT_ZVE32X);
}

static inline void __riscv_v_vstate_clean(struct pt_regs *regs)
@@ -91,7 +91,7 @@ static __always_inline void __vstate_csr_restore(struct __riscv_v_ext_state *src
{
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
".option arch, +zve32x\n\t"
"vsetvl x0, %2, %1\n\t"
".option pop\n\t"
"csrw " __stringify(CSR_VSTART) ", %0\n\t"
@@ -109,7 +109,7 @@ static inline void __riscv_v_vstate_save(struct __riscv_v_ext_state *save_to,
__vstate_csr_save(save_to);
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
".option arch, +zve32x\n\t"
"vsetvli %0, x0, e8, m8, ta, ma\n\t"
"vse8.v v0, (%1)\n\t"
"add %1, %1, %0\n\t"
@@ -131,7 +131,7 @@ static inline void __riscv_v_vstate_restore(struct __riscv_v_ext_state *restore_
riscv_v_enable();
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
".option arch, +zve32x\n\t"
"vsetvli %0, x0, e8, m8, ta, ma\n\t"
"vle8.v v0, (%1)\n\t"
"add %1, %1, %0\n\t"
@@ -153,7 +153,7 @@ static inline void __riscv_v_vstate_discard(void)
riscv_v_enable();
asm volatile (
".option push\n\t"
".option arch, +v\n\t"
".option arch, +zve32x\n\t"
"vsetvli %0, x0, e8, m8, ta, ma\n\t"
"vmv.v.i v0, -1\n\t"
"vmv.v.i v8, -1\n\t"
...
@@ -60,6 +60,18 @@ struct riscv_hwprobe {
#define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34)
#define RISCV_HWPROBE_EXT_ZICOND (1ULL << 35)
#define RISCV_HWPROBE_EXT_ZIHINTPAUSE (1ULL << 36)
#define RISCV_HWPROBE_EXT_ZVE32X (1ULL << 37)
#define RISCV_HWPROBE_EXT_ZVE32F (1ULL << 38)
#define RISCV_HWPROBE_EXT_ZVE64X (1ULL << 39)
#define RISCV_HWPROBE_EXT_ZVE64F (1ULL << 40)
#define RISCV_HWPROBE_EXT_ZVE64D (1ULL << 41)
#define RISCV_HWPROBE_EXT_ZIMOP (1ULL << 42)
#define RISCV_HWPROBE_EXT_ZCA (1ULL << 43)
#define RISCV_HWPROBE_EXT_ZCB (1ULL << 44)
#define RISCV_HWPROBE_EXT_ZCD (1ULL << 45)
#define RISCV_HWPROBE_EXT_ZCF (1ULL << 46)
#define RISCV_HWPROBE_EXT_ZCMOP (1ULL << 47)
#define RISCV_HWPROBE_EXT_ZAWRS (1ULL << 48)
#define RISCV_HWPROBE_KEY_CPUPERF_0 5
#define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0)
#define RISCV_HWPROBE_MISALIGNED_EMULATED (1 << 0)
@@ -68,6 +80,7 @@ struct riscv_hwprobe {
#define RISCV_HWPROBE_MISALIGNED_UNSUPPORTED (4 << 0)
#define RISCV_HWPROBE_MISALIGNED_MASK (7 << 0)
#define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE 6
#define RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS 7
/* Increase RISCV_HWPROBE_MAX_KEY when adding items. */

/* Flags */
...
@@ -168,6 +168,13 @@ enum KVM_RISCV_ISA_EXT_ID {
KVM_RISCV_ISA_EXT_ZTSO,
KVM_RISCV_ISA_EXT_ZACAS,
KVM_RISCV_ISA_EXT_SSCOFPMF,
KVM_RISCV_ISA_EXT_ZIMOP,
KVM_RISCV_ISA_EXT_ZCA,
KVM_RISCV_ISA_EXT_ZCB,
KVM_RISCV_ISA_EXT_ZCD,
KVM_RISCV_ISA_EXT_ZCF,
KVM_RISCV_ISA_EXT_ZCMOP,
KVM_RISCV_ISA_EXT_ZAWRS,
KVM_RISCV_ISA_EXT_MAX,
};
...
@@ -72,51 +72,89 @@ bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, unsigned i
}
EXPORT_SYMBOL_GPL(__riscv_isa_extension_available);

static bool riscv_isa_extension_check(int id)
{
switch (id) {
case RISCV_ISA_EXT_ZICBOM:
if (!riscv_cbom_block_size) {
pr_err("Zicbom detected in ISA string, disabling as no cbom-block-size found\n");
return false;
} else if (!is_power_of_2(riscv_cbom_block_size)) {
pr_err("Zicbom disabled as cbom-block-size present, but is not a power-of-2\n");
return false;
}
return true;
case RISCV_ISA_EXT_ZICBOZ:
if (!riscv_cboz_block_size) {
pr_err("Zicboz detected in ISA string, disabling as no cboz-block-size found\n");
return false;
} else if (!is_power_of_2(riscv_cboz_block_size)) {
pr_err("Zicboz disabled as cboz-block-size present, but is not a power-of-2\n");
return false;
}
return true;
case RISCV_ISA_EXT_INVALID:
return false;
}
return true;
}

static int riscv_ext_zicbom_validate(const struct riscv_isa_ext_data *data,
const unsigned long *isa_bitmap)
{
if (!riscv_cbom_block_size) {
pr_err("Zicbom detected in ISA string, disabling as no cbom-block-size found\n");
return -EINVAL;
}
if (!is_power_of_2(riscv_cbom_block_size)) {
pr_err("Zicbom disabled as cbom-block-size present, but is not a power-of-2\n");
return -EINVAL;
}
return 0;
}

static int riscv_ext_zicboz_validate(const struct riscv_isa_ext_data *data,
const unsigned long *isa_bitmap)
{
if (!riscv_cboz_block_size) {
pr_err("Zicboz detected in ISA string, disabling as no cboz-block-size found\n");
return -EINVAL;
}
if (!is_power_of_2(riscv_cboz_block_size)) {
pr_err("Zicboz disabled as cboz-block-size present, but is not a power-of-2\n");
return -EINVAL;
}
return 0;
}

#define _RISCV_ISA_EXT_DATA(_name, _id, _subset_exts, _subset_exts_size) { \
.name = #_name, \
.property = #_name, \
.id = _id, \
.subset_ext_ids = _subset_exts, \
.subset_ext_size = _subset_exts_size \
}
#define _RISCV_ISA_EXT_DATA(_name, _id, _subset_exts, _subset_exts_size, _validate) { \
.name = #_name, \
.property = #_name, \
.id = _id, \
.subset_ext_ids = _subset_exts, \
.subset_ext_size = _subset_exts_size, \
.validate = _validate \
}

#define __RISCV_ISA_EXT_DATA(_name, _id) _RISCV_ISA_EXT_DATA(_name, _id, NULL, 0)
#define __RISCV_ISA_EXT_DATA(_name, _id) _RISCV_ISA_EXT_DATA(_name, _id, NULL, 0, NULL)
#define __RISCV_ISA_EXT_DATA_VALIDATE(_name, _id, _validate) \
_RISCV_ISA_EXT_DATA(_name, _id, NULL, 0, _validate)

/* Used to declare pure "lasso" extension (Zk for instance) */
#define __RISCV_ISA_EXT_BUNDLE(_name, _bundled_exts) \
_RISCV_ISA_EXT_DATA(_name, RISCV_ISA_EXT_INVALID, _bundled_exts, ARRAY_SIZE(_bundled_exts))
#define __RISCV_ISA_EXT_BUNDLE(_name, _bundled_exts) \
_RISCV_ISA_EXT_DATA(_name, RISCV_ISA_EXT_INVALID, _bundled_exts, \
ARRAY_SIZE(_bundled_exts), NULL)

/* Used to declare extensions that are a superset of other extensions (Zvbb for instance) */
#define __RISCV_ISA_EXT_SUPERSET(_name, _id, _sub_exts) \
_RISCV_ISA_EXT_DATA(_name, _id, _sub_exts, ARRAY_SIZE(_sub_exts))
#define __RISCV_ISA_EXT_SUPERSET(_name, _id, _sub_exts) \
_RISCV_ISA_EXT_DATA(_name, _id, _sub_exts, ARRAY_SIZE(_sub_exts), NULL)
#define __RISCV_ISA_EXT_SUPERSET_VALIDATE(_name, _id, _sub_exts, _validate) \
_RISCV_ISA_EXT_DATA(_name, _id, _sub_exts, ARRAY_SIZE(_sub_exts), _validate)
static int riscv_ext_zca_depends(const struct riscv_isa_ext_data *data,
const unsigned long *isa_bitmap)
{
if (__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_ZCA))
return 0;
return -EPROBE_DEFER;
}
static int riscv_ext_zcd_validate(const struct riscv_isa_ext_data *data,
const unsigned long *isa_bitmap)
{
if (__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_ZCA) &&
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_d))
return 0;
return -EPROBE_DEFER;
}
static int riscv_ext_zcf_validate(const struct riscv_isa_ext_data *data,
const unsigned long *isa_bitmap)
{
if (IS_ENABLED(CONFIG_64BIT))
return -EINVAL;
if (__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_ZCA) &&
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_f))
return 0;
return -EPROBE_DEFER;
}
static const unsigned int riscv_zk_bundled_exts[] = {
RISCV_ISA_EXT_ZBKB,
@@ -188,6 +226,40 @@ static const unsigned int riscv_zvbb_exts[] = {
RISCV_ISA_EXT_ZVKB
};
#define RISCV_ISA_EXT_ZVE64F_IMPLY_LIST \
RISCV_ISA_EXT_ZVE64X, \
RISCV_ISA_EXT_ZVE32F, \
RISCV_ISA_EXT_ZVE32X
#define RISCV_ISA_EXT_ZVE64D_IMPLY_LIST \
RISCV_ISA_EXT_ZVE64F, \
RISCV_ISA_EXT_ZVE64F_IMPLY_LIST
#define RISCV_ISA_EXT_V_IMPLY_LIST \
RISCV_ISA_EXT_ZVE64D, \
RISCV_ISA_EXT_ZVE64D_IMPLY_LIST
static const unsigned int riscv_zve32f_exts[] = {
RISCV_ISA_EXT_ZVE32X
};
static const unsigned int riscv_zve64f_exts[] = {
RISCV_ISA_EXT_ZVE64F_IMPLY_LIST
};
static const unsigned int riscv_zve64d_exts[] = {
RISCV_ISA_EXT_ZVE64D_IMPLY_LIST
};
static const unsigned int riscv_v_exts[] = {
RISCV_ISA_EXT_V_IMPLY_LIST
};
static const unsigned int riscv_zve64x_exts[] = {
RISCV_ISA_EXT_ZVE32X,
RISCV_ISA_EXT_ZVE64X
};
/*
* While the [ms]envcfg CSRs were not defined until version 1.12 of the RISC-V
* privileged ISA, the existence of the CSRs is implied by any extension which
@@ -198,6 +270,21 @@ static const unsigned int riscv_xlinuxenvcfg_exts[] = {
RISCV_ISA_EXT_XLINUXENVCFG
};
/*
* Zc* spec states that:
* - C always implies Zca
* - C+F implies Zcf (RV32 only)
* - C+D implies Zcd
*
* These extensions will be enabled and then validated depending on the
* availability of F/D RV32.
*/
static const unsigned int riscv_c_exts[] = {
RISCV_ISA_EXT_ZCA,
RISCV_ISA_EXT_ZCF,
RISCV_ISA_EXT_ZCD,
};
/*
* The canonical order of ISA extension names in the ISA string is defined in
* chapter 27 of the unprivileged specification.
@@ -244,11 +331,13 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
__RISCV_ISA_EXT_DATA(f, RISCV_ISA_EXT_f),
__RISCV_ISA_EXT_DATA(d, RISCV_ISA_EXT_d),
__RISCV_ISA_EXT_DATA(q, RISCV_ISA_EXT_q),
__RISCV_ISA_EXT_DATA(c, RISCV_ISA_EXT_c),
__RISCV_ISA_EXT_SUPERSET(c, RISCV_ISA_EXT_c, riscv_c_exts),
__RISCV_ISA_EXT_DATA(v, RISCV_ISA_EXT_v),
__RISCV_ISA_EXT_SUPERSET(v, RISCV_ISA_EXT_v, riscv_v_exts),
__RISCV_ISA_EXT_DATA(h, RISCV_ISA_EXT_h),
__RISCV_ISA_EXT_SUPERSET(zicbom, RISCV_ISA_EXT_ZICBOM, riscv_xlinuxenvcfg_exts),
__RISCV_ISA_EXT_SUPERSET_VALIDATE(zicbom, RISCV_ISA_EXT_ZICBOM, riscv_xlinuxenvcfg_exts,
riscv_ext_zicbom_validate),
__RISCV_ISA_EXT_SUPERSET(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts),
__RISCV_ISA_EXT_SUPERSET_VALIDATE(zicboz, RISCV_ISA_EXT_ZICBOZ, riscv_xlinuxenvcfg_exts,
riscv_ext_zicboz_validate),
__RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR),
__RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND),
__RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR),
@@ -256,10 +345,17 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
__RISCV_ISA_EXT_DATA(zihintntl, RISCV_ISA_EXT_ZIHINTNTL),
__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
__RISCV_ISA_EXT_DATA(zihpm, RISCV_ISA_EXT_ZIHPM),
__RISCV_ISA_EXT_DATA(zimop, RISCV_ISA_EXT_ZIMOP),
__RISCV_ISA_EXT_DATA(zacas, RISCV_ISA_EXT_ZACAS),
__RISCV_ISA_EXT_DATA(zawrs, RISCV_ISA_EXT_ZAWRS),
__RISCV_ISA_EXT_DATA(zfa, RISCV_ISA_EXT_ZFA),
__RISCV_ISA_EXT_DATA(zfh, RISCV_ISA_EXT_ZFH),
__RISCV_ISA_EXT_DATA(zfhmin, RISCV_ISA_EXT_ZFHMIN),
__RISCV_ISA_EXT_DATA(zca, RISCV_ISA_EXT_ZCA),
__RISCV_ISA_EXT_DATA_VALIDATE(zcb, RISCV_ISA_EXT_ZCB, riscv_ext_zca_depends),
__RISCV_ISA_EXT_DATA_VALIDATE(zcd, RISCV_ISA_EXT_ZCD, riscv_ext_zcd_validate),
__RISCV_ISA_EXT_DATA_VALIDATE(zcf, RISCV_ISA_EXT_ZCF, riscv_ext_zcf_validate),
__RISCV_ISA_EXT_DATA_VALIDATE(zcmop, RISCV_ISA_EXT_ZCMOP, riscv_ext_zca_depends),
__RISCV_ISA_EXT_DATA(zba, RISCV_ISA_EXT_ZBA),
__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
__RISCV_ISA_EXT_DATA(zbc, RISCV_ISA_EXT_ZBC),
@@ -280,6 +376,11 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
__RISCV_ISA_EXT_DATA(ztso, RISCV_ISA_EXT_ZTSO),
__RISCV_ISA_EXT_SUPERSET(zvbb, RISCV_ISA_EXT_ZVBB, riscv_zvbb_exts),
__RISCV_ISA_EXT_DATA(zvbc, RISCV_ISA_EXT_ZVBC),
__RISCV_ISA_EXT_SUPERSET(zve32f, RISCV_ISA_EXT_ZVE32F, riscv_zve32f_exts),
__RISCV_ISA_EXT_DATA(zve32x, RISCV_ISA_EXT_ZVE32X),
__RISCV_ISA_EXT_SUPERSET(zve64d, RISCV_ISA_EXT_ZVE64D, riscv_zve64d_exts),
__RISCV_ISA_EXT_SUPERSET(zve64f, RISCV_ISA_EXT_ZVE64F, riscv_zve64f_exts),
__RISCV_ISA_EXT_SUPERSET(zve64x, RISCV_ISA_EXT_ZVE64X, riscv_zve64x_exts),
__RISCV_ISA_EXT_DATA(zvfh, RISCV_ISA_EXT_ZVFH),
__RISCV_ISA_EXT_DATA(zvfhmin, RISCV_ISA_EXT_ZVFHMIN),
__RISCV_ISA_EXT_DATA(zvkb, RISCV_ISA_EXT_ZVKB),
@@ -309,33 +410,93 @@ const struct riscv_isa_ext_data riscv_isa_ext[] = {
const size_t riscv_isa_ext_count = ARRAY_SIZE(riscv_isa_ext);
static void __init match_isa_ext(const struct riscv_isa_ext_data *ext, const char *name, static void riscv_isa_set_ext(const struct riscv_isa_ext_data *ext, unsigned long *bitmap)
const char *name_end, struct riscv_isainfo *isainfo)
{ {
if ((name_end - name == strlen(ext->name)) && if (ext->id != RISCV_ISA_EXT_INVALID)
!strncasecmp(name, ext->name, name_end - name)) { set_bit(ext->id, bitmap);
/*
* If this is a bundle, enable all the ISA extensions that for (int i = 0; i < ext->subset_ext_size; i++) {
* comprise the bundle. if (ext->subset_ext_ids[i] != RISCV_ISA_EXT_INVALID)
*/ set_bit(ext->subset_ext_ids[i], bitmap);
if (ext->subset_ext_size) { }
for (int i = 0; i < ext->subset_ext_size; i++) { }
if (riscv_isa_extension_check(ext->subset_ext_ids[i]))
set_bit(ext->subset_ext_ids[i], isainfo->isa); static const struct riscv_isa_ext_data *riscv_get_isa_ext_data(unsigned int ext_id)
{
for (int i = 0; i < riscv_isa_ext_count; i++) {
if (riscv_isa_ext[i].id == ext_id)
return &riscv_isa_ext[i];
}
return NULL;
}
/*
* "Resolve" a source ISA bitmap into one that matches kernel configuration as
 * well as correct extension dependencies. Some extensions depend on specific
 * kernel configuration to be usable (V needs CONFIG_RISCV_ISA_V, for instance),
 * so this function validates each extension provided in source_isa via its
 * validate() callback before setting it in resolved_isa.
*/
static void __init riscv_resolve_isa(unsigned long *source_isa,
unsigned long *resolved_isa, unsigned long *this_hwcap,
unsigned long *isa2hwcap)
{
bool loop;
const struct riscv_isa_ext_data *ext;
DECLARE_BITMAP(prev_resolved_isa, RISCV_ISA_EXT_MAX);
int max_loop_count = riscv_isa_ext_count, ret;
unsigned int bit;
do {
loop = false;
if (max_loop_count-- < 0) {
pr_err("Failed to reach a stable ISA state\n");
return;
}
bitmap_copy(prev_resolved_isa, resolved_isa, RISCV_ISA_EXT_MAX);
for_each_set_bit(bit, source_isa, RISCV_ISA_EXT_MAX) {
ext = riscv_get_isa_ext_data(bit);
if (!ext)
continue;
if (ext->validate) {
ret = ext->validate(ext, resolved_isa);
if (ret == -EPROBE_DEFER) {
loop = true;
continue;
} else if (ret) {
/* Disable the extension entirely */
clear_bit(ext->id, source_isa);
continue;
}
} }
set_bit(ext->id, resolved_isa);
/* No need to keep it in source_isa now that it is enabled */
clear_bit(ext->id, source_isa);
/* Single letter extensions get set in hwcap */
if (ext->id < RISCV_ISA_EXT_BASE)
*this_hwcap |= isa2hwcap[ext->id];
} }
} while (loop && memcmp(prev_resolved_isa, resolved_isa, sizeof(prev_resolved_isa)));
}
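riscv_resolve_isa() above keeps re-scanning source_isa until the resolved bitmap stops changing, so an extension whose validate() callback returns -EPROBE_DEFER (its dependency not resolved yet) is simply retried on a later pass, with max_loop_count guarding against unsatisfiable cycles. A minimal userspace model of that fixed-point loop, with hypothetical extension IDs and a toy zcd_validate() standing in for riscv_ext_zcd_validate():

#include <stdio.h>

#define EPROBE_DEFER 517	/* same value the kernel uses */

enum { EXT_D, EXT_ZCA, EXT_ZCD, EXT_MAX };	/* toy IDs, not the kernel's */

/* Zcd is only valid once both D and Zca have been resolved. */
static int zcd_validate(const int *resolved)
{
	return (resolved[EXT_D] && resolved[EXT_ZCA]) ? 0 : -EPROBE_DEFER;
}

int main(void)
{
	/* Deliberately list Zcd first to force one deferral. */
	int order[] = { EXT_ZCD, EXT_D, EXT_ZCA };
	int source[EXT_MAX] = { 1, 1, 1 };
	int resolved[EXT_MAX] = { 0 };
	int loop, max_loops = EXT_MAX;

	do {
		loop = 0;
		for (int i = 0; i < 3; i++) {
			int id = order[i];

			if (!source[id])
				continue;
			if (id == EXT_ZCD && zcd_validate(resolved)) {
				loop = 1;	/* deferred: retry next pass */
				continue;
			}
			resolved[id] = 1;
			source[id] = 0;
		}
	} while (loop && max_loops--);

	printf("Zcd resolved: %d\n", resolved[EXT_ZCD]);	/* prints 1 */
	return 0;
}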
/* static void __init match_isa_ext(const char *name, const char *name_end, unsigned long *bitmap)
* This is valid even for bundle extensions which uses the RISCV_ISA_EXT_INVALID id {
* (rejected by riscv_isa_extension_check()). for (int i = 0; i < riscv_isa_ext_count; i++) {
*/ const struct riscv_isa_ext_data *ext = &riscv_isa_ext[i];
if (riscv_isa_extension_check(ext->id))
set_bit(ext->id, isainfo->isa); if ((name_end - name == strlen(ext->name)) &&
!strncasecmp(name, ext->name, name_end - name)) {
riscv_isa_set_ext(ext, bitmap);
break;
}
} }
} }
static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct riscv_isainfo *isainfo, static void __init riscv_parse_isa_string(const char *isa, unsigned long *bitmap)
unsigned long *isa2hwcap, const char *isa)
{ {
/* /*
* For all possible cpus, we have already validated in * For all possible cpus, we have already validated in
...@@ -348,7 +509,7 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc ...@@ -348,7 +509,7 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc
while (*isa) { while (*isa) {
const char *ext = isa++; const char *ext = isa++;
const char *ext_end = isa; const char *ext_end = isa;
bool ext_long = false, ext_err = false; bool ext_err = false;
switch (*ext) { switch (*ext) {
case 's': case 's':
...@@ -388,7 +549,6 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc ...@@ -388,7 +549,6 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc
* character itself while eliminating the extension's version number. * character itself while eliminating the extension's version number.
* A simple re-increment solves this problem. * A simple re-increment solves this problem.
*/ */
ext_long = true;
for (; *isa && *isa != '_'; ++isa) for (; *isa && *isa != '_'; ++isa)
if (unlikely(!isalnum(*isa))) if (unlikely(!isalnum(*isa)))
ext_err = true; ext_err = true;
...@@ -468,17 +628,8 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc ...@@ -468,17 +628,8 @@ static void __init riscv_parse_isa_string(unsigned long *this_hwcap, struct risc
if (unlikely(ext_err)) if (unlikely(ext_err))
continue; continue;
if (!ext_long) {
int nr = tolower(*ext) - 'a';
if (riscv_isa_extension_check(nr)) { match_isa_ext(ext, ext_end, bitmap);
*this_hwcap |= isa2hwcap[nr];
set_bit(nr, isainfo->isa);
}
} else {
for (int i = 0; i < riscv_isa_ext_count; i++)
match_isa_ext(&riscv_isa_ext[i], ext, ext_end, isainfo);
}
} }
} }
...@@ -505,6 +656,7 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap) ...@@ -505,6 +656,7 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap)
for_each_possible_cpu(cpu) { for_each_possible_cpu(cpu) {
struct riscv_isainfo *isainfo = &hart_isa[cpu]; struct riscv_isainfo *isainfo = &hart_isa[cpu];
unsigned long this_hwcap = 0; unsigned long this_hwcap = 0;
DECLARE_BITMAP(source_isa, RISCV_ISA_EXT_MAX) = { 0 };
if (acpi_disabled) { if (acpi_disabled) {
node = of_cpu_device_node_get(cpu); node = of_cpu_device_node_get(cpu);
...@@ -527,7 +679,7 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap) ...@@ -527,7 +679,7 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap)
} }
} }
riscv_parse_isa_string(&this_hwcap, isainfo, isa2hwcap, isa); riscv_parse_isa_string(isa, source_isa);
/* /*
* These ones were as they were part of the base ISA when the * These ones were as they were part of the base ISA when the
...@@ -535,10 +687,10 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap) ...@@ -535,10 +687,10 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap)
* unconditionally where `i` is in riscv,isa on DT systems. * unconditionally where `i` is in riscv,isa on DT systems.
*/ */
if (acpi_disabled) { if (acpi_disabled) {
set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa); set_bit(RISCV_ISA_EXT_ZICSR, source_isa);
set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa); set_bit(RISCV_ISA_EXT_ZIFENCEI, source_isa);
set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa); set_bit(RISCV_ISA_EXT_ZICNTR, source_isa);
set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa); set_bit(RISCV_ISA_EXT_ZIHPM, source_isa);
} }
/* /*
...@@ -551,9 +703,11 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap) ...@@ -551,9 +703,11 @@ static void __init riscv_fill_hwcap_from_isa_string(unsigned long *isa2hwcap)
*/ */
if (acpi_disabled && boot_vendorid == THEAD_VENDOR_ID && boot_archid == 0x0) { if (acpi_disabled && boot_vendorid == THEAD_VENDOR_ID && boot_archid == 0x0) {
this_hwcap &= ~isa2hwcap[RISCV_ISA_EXT_v]; this_hwcap &= ~isa2hwcap[RISCV_ISA_EXT_v];
clear_bit(RISCV_ISA_EXT_v, isainfo->isa); clear_bit(RISCV_ISA_EXT_v, source_isa);
} }
riscv_resolve_isa(source_isa, isainfo->isa, &this_hwcap, isa2hwcap);
/* /*
* All "okay" hart should have same isa. Set HWCAP based on * All "okay" hart should have same isa. Set HWCAP based on
* common capabilities of every "okay" hart, in case they don't * common capabilities of every "okay" hart, in case they don't
...@@ -582,6 +736,7 @@ static int __init riscv_fill_hwcap_from_ext_list(unsigned long *isa2hwcap) ...@@ -582,6 +736,7 @@ static int __init riscv_fill_hwcap_from_ext_list(unsigned long *isa2hwcap)
unsigned long this_hwcap = 0; unsigned long this_hwcap = 0;
struct device_node *cpu_node; struct device_node *cpu_node;
struct riscv_isainfo *isainfo = &hart_isa[cpu]; struct riscv_isainfo *isainfo = &hart_isa[cpu];
DECLARE_BITMAP(source_isa, RISCV_ISA_EXT_MAX) = { 0 };
cpu_node = of_cpu_device_node_get(cpu); cpu_node = of_cpu_device_node_get(cpu);
if (!cpu_node) { if (!cpu_node) {
...@@ -601,22 +756,11 @@ static int __init riscv_fill_hwcap_from_ext_list(unsigned long *isa2hwcap) ...@@ -601,22 +756,11 @@ static int __init riscv_fill_hwcap_from_ext_list(unsigned long *isa2hwcap)
ext->property) < 0) ext->property) < 0)
continue; continue;
if (ext->subset_ext_size) { riscv_isa_set_ext(ext, source_isa);
for (int j = 0; j < ext->subset_ext_size; j++) {
if (riscv_isa_extension_check(ext->subset_ext_ids[j]))
set_bit(ext->subset_ext_ids[j], isainfo->isa);
}
}
if (riscv_isa_extension_check(ext->id)) {
set_bit(ext->id, isainfo->isa);
/* Only single letter extensions get set in hwcap */
if (strnlen(riscv_isa_ext[i].name, 2) == 1)
this_hwcap |= isa2hwcap[riscv_isa_ext[i].id];
}
} }
riscv_resolve_isa(source_isa, isainfo->isa, &this_hwcap, isa2hwcap);
of_node_put(cpu_node); of_node_put(cpu_node);
/* /*
...@@ -686,8 +830,14 @@ void __init riscv_fill_hwcap(void) ...@@ -686,8 +830,14 @@ void __init riscv_fill_hwcap(void)
elf_hwcap &= ~COMPAT_HWCAP_ISA_F; elf_hwcap &= ~COMPAT_HWCAP_ISA_F;
} }
if (elf_hwcap & COMPAT_HWCAP_ISA_V) { if (__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_ZVE32X)) {
/*
* This cannot fail when called on the boot hart
*/
riscv_v_setup_vsize(); riscv_v_setup_vsize();
}
if (elf_hwcap & COMPAT_HWCAP_ISA_V) {
/* /*
* ISA string in device tree might have 'v' flag, but * ISA string in device tree might have 'v' flag, but
* CONFIG_RISCV_ISA_V is disabled in kernel. * CONFIG_RISCV_ISA_V is disabled in kernel.
......
...@@ -165,9 +165,20 @@ secondary_start_sbi: ...@@ -165,9 +165,20 @@ secondary_start_sbi:
#endif #endif
call .Lsetup_trap_vector call .Lsetup_trap_vector
scs_load_current scs_load_current
tail smp_callin call smp_callin
#endif /* CONFIG_SMP */ #endif /* CONFIG_SMP */
.align 2
.Lsecondary_park:
/*
* Park this hart if we:
* - have too many harts on CONFIG_RISCV_BOOT_SPINWAIT
* - receive an early trap, before setup_trap_vector finishes
* - fail in smp_callin(), as a successful one wouldn't return
*/
wfi
j .Lsecondary_park
.align 2 .align 2
.Lsetup_trap_vector: .Lsetup_trap_vector:
/* Set trap vector to exception handler */ /* Set trap vector to exception handler */
...@@ -181,12 +192,6 @@ secondary_start_sbi: ...@@ -181,12 +192,6 @@ secondary_start_sbi:
csrw CSR_SCRATCH, zero csrw CSR_SCRATCH, zero
ret ret
.align 2
.Lsecondary_park:
/* We lack SMP support or have too many harts, so park this hart */
wfi
j .Lsecondary_park
SYM_CODE_END(_start) SYM_CODE_END(_start)
SYM_CODE_START(_start_kernel) SYM_CODE_START(_start_kernel)
...@@ -300,6 +305,9 @@ SYM_CODE_START(_start_kernel) ...@@ -300,6 +305,9 @@ SYM_CODE_START(_start_kernel)
#else #else
mv a0, a1 mv a0, a1
#endif /* CONFIG_BUILTIN_DTB */ #endif /* CONFIG_BUILTIN_DTB */
/* Set trap vector to spin forever to help debug */
la a3, .Lsecondary_park
csrw CSR_TVEC, a3
call setup_vm call setup_vm
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
la a0, early_pg_dir la a0, early_pg_dir
......
...@@ -9,13 +9,14 @@ ...@@ -9,13 +9,14 @@
#include <linux/memory.h> #include <linux/memory.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <asm/bug.h> #include <asm/bug.h>
#include <asm/cacheflush.h>
#include <asm/patch.h> #include <asm/patch.h>
#define RISCV_INSN_NOP 0x00000013U #define RISCV_INSN_NOP 0x00000013U
#define RISCV_INSN_JAL 0x0000006fU #define RISCV_INSN_JAL 0x0000006fU
void arch_jump_label_transform(struct jump_entry *entry, bool arch_jump_label_transform_queue(struct jump_entry *entry,
enum jump_label_type type) enum jump_label_type type)
{ {
void *addr = (void *)jump_entry_code(entry); void *addr = (void *)jump_entry_code(entry);
u32 insn; u32 insn;
...@@ -24,7 +25,7 @@ void arch_jump_label_transform(struct jump_entry *entry, ...@@ -24,7 +25,7 @@ void arch_jump_label_transform(struct jump_entry *entry,
long offset = jump_entry_target(entry) - jump_entry_code(entry); long offset = jump_entry_target(entry) - jump_entry_code(entry);
if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288)) if (WARN_ON(offset & 1 || offset < -524288 || offset >= 524288))
return; return true;
insn = RISCV_INSN_JAL | insn = RISCV_INSN_JAL |
(((u32)offset & GENMASK(19, 12)) << (12 - 12)) | (((u32)offset & GENMASK(19, 12)) << (12 - 12)) |
...@@ -36,6 +37,13 @@ void arch_jump_label_transform(struct jump_entry *entry, ...@@ -36,6 +37,13 @@ void arch_jump_label_transform(struct jump_entry *entry,
} }
mutex_lock(&text_mutex); mutex_lock(&text_mutex);
patch_text_nosync(addr, &insn, sizeof(insn)); patch_insn_write(addr, &insn, sizeof(insn));
mutex_unlock(&text_mutex); mutex_unlock(&text_mutex);
return true;
}
void arch_jump_label_transform_apply(void)
{
flush_icache_all();
} }
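The queue/apply split turns per-site synchronization into one batch: arch_jump_label_transform_queue() only writes the new instruction via patch_insn_write(), and the single flush_icache_all() in arch_jump_label_transform_apply() is paid once for the whole batch instead of once per patched site. A toy model of the cost difference (a sketch; the counts are illustrative, not measured):

#include <stdio.h>

static int icache_flushes;

static void patch_site(int site)
{
	(void)site;		/* write the new instruction, no sync */
}

static void flush_icache_all(void)
{
	icache_flushes++;	/* on RISC-V this is a remote fence.i to every hart */
}

int main(void)
{
	const int nr_sites = 1000;

	/* Old API: arch_jump_label_transform() synced per site. */
	for (int i = 0; i < nr_sites; i++) {
		patch_site(i);
		flush_icache_all();
	}
	printf("per-site flushes: %d\n", icache_flushes);

	/* New API: queue all sites, then one apply. */
	icache_flushes = 0;
	for (int i = 0; i < nr_sites; i++)
		patch_site(i);	/* arch_jump_label_transform_queue() */
	flush_icache_all();	/* arch_jump_label_transform_apply() */
	printf("batched flushes: %d\n", icache_flushes);
	return 0;
}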
...@@ -19,7 +19,7 @@ ...@@ -19,7 +19,7 @@
struct patch_insn { struct patch_insn {
void *addr; void *addr;
u32 *insns; u32 *insns;
int ninsns; size_t len;
atomic_t cpu_count; atomic_t cpu_count;
}; };
...@@ -54,7 +54,7 @@ static __always_inline void *patch_map(void *addr, const unsigned int fixmap) ...@@ -54,7 +54,7 @@ static __always_inline void *patch_map(void *addr, const unsigned int fixmap)
BUG_ON(!page); BUG_ON(!page);
return (void *)set_fixmap_offset(fixmap, page_to_phys(page) + return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
(uintaddr & ~PAGE_MASK)); offset_in_page(addr));
} }
static void patch_unmap(int fixmap) static void patch_unmap(int fixmap)
...@@ -65,8 +65,8 @@ NOKPROBE_SYMBOL(patch_unmap); ...@@ -65,8 +65,8 @@ NOKPROBE_SYMBOL(patch_unmap);
static int __patch_insn_set(void *addr, u8 c, size_t len) static int __patch_insn_set(void *addr, u8 c, size_t len)
{ {
bool across_pages = (offset_in_page(addr) + len) > PAGE_SIZE;
void *waddr = addr; void *waddr = addr;
bool across_pages = (((uintptr_t)addr & ~PAGE_MASK) + len) > PAGE_SIZE;
/* /*
* Only two pages can be mapped at a time for writing. * Only two pages can be mapped at a time for writing.
...@@ -110,8 +110,8 @@ NOKPROBE_SYMBOL(__patch_insn_set); ...@@ -110,8 +110,8 @@ NOKPROBE_SYMBOL(__patch_insn_set);
static int __patch_insn_write(void *addr, const void *insn, size_t len) static int __patch_insn_write(void *addr, const void *insn, size_t len)
{ {
bool across_pages = (offset_in_page(addr) + len) > PAGE_SIZE;
void *waddr = addr; void *waddr = addr;
bool across_pages = (((uintptr_t) addr & ~PAGE_MASK) + len) > PAGE_SIZE;
int ret; int ret;
/* /*
...@@ -179,31 +179,32 @@ NOKPROBE_SYMBOL(__patch_insn_write); ...@@ -179,31 +179,32 @@ NOKPROBE_SYMBOL(__patch_insn_write);
static int patch_insn_set(void *addr, u8 c, size_t len) static int patch_insn_set(void *addr, u8 c, size_t len)
{ {
size_t patched = 0;
size_t size; size_t size;
int ret = 0; int ret;
/* /*
* __patch_insn_set() can only work on 2 pages at a time so call it in a * __patch_insn_set() can only work on 2 pages at a time so call it in a
* loop with len <= 2 * PAGE_SIZE. * loop with len <= 2 * PAGE_SIZE.
*/ */
while (patched < len && !ret) { while (len) {
size = min_t(size_t, PAGE_SIZE * 2 - offset_in_page(addr + patched), len - patched); size = min(len, PAGE_SIZE * 2 - offset_in_page(addr));
ret = __patch_insn_set(addr + patched, c, size); ret = __patch_insn_set(addr, c, size);
if (ret)
patched += size; return ret;
addr += size;
len -= size;
} }
return ret; return 0;
} }
NOKPROBE_SYMBOL(patch_insn_set); NOKPROBE_SYMBOL(patch_insn_set);
int patch_text_set_nosync(void *addr, u8 c, size_t len) int patch_text_set_nosync(void *addr, u8 c, size_t len)
{ {
u32 *tp = addr;
int ret; int ret;
ret = patch_insn_set(tp, c, len); ret = patch_insn_set(addr, c, len);
return ret; return ret;
} }
...@@ -211,31 +212,33 @@ NOKPROBE_SYMBOL(patch_text_set_nosync); ...@@ -211,31 +212,33 @@ NOKPROBE_SYMBOL(patch_text_set_nosync);
int patch_insn_write(void *addr, const void *insn, size_t len) int patch_insn_write(void *addr, const void *insn, size_t len)
{ {
size_t patched = 0;
size_t size; size_t size;
int ret = 0; int ret;
/* /*
* Copy the instructions to the destination address, two pages at a time * Copy the instructions to the destination address, two pages at a time
* because __patch_insn_write() can only handle len <= 2 * PAGE_SIZE. * because __patch_insn_write() can only handle len <= 2 * PAGE_SIZE.
*/ */
while (patched < len && !ret) { while (len) {
size = min_t(size_t, PAGE_SIZE * 2 - offset_in_page(addr + patched), len - patched); size = min(len, PAGE_SIZE * 2 - offset_in_page(addr));
ret = __patch_insn_write(addr + patched, insn + patched, size); ret = __patch_insn_write(addr, insn, size);
if (ret)
patched += size; return ret;
addr += size;
insn += size;
len -= size;
} }
return ret; return 0;
} }
NOKPROBE_SYMBOL(patch_insn_write); NOKPROBE_SYMBOL(patch_insn_write);
int patch_text_nosync(void *addr, const void *insns, size_t len) int patch_text_nosync(void *addr, const void *insns, size_t len)
{ {
u32 *tp = addr;
int ret; int ret;
ret = patch_insn_write(tp, insns, len); ret = patch_insn_write(addr, insns, len);
return ret; return ret;
} }
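Both loops above were rewritten to the same shape: consume len in place, cap each chunk so it never spans more than the two pages the fixmap can map at once, and bail out on the first error instead of threading ret through the loop condition. A standalone sketch of that pattern, with memcpy standing in for the fixmap-backed __patch_insn_write():

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE	4096UL
#define offset_in_page(p)	((uintptr_t)(p) & (PAGE_SIZE - 1))

/* Stand-in for __patch_insn_write(); only handles len <= 2 * PAGE_SIZE. */
static int write_chunk(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;
}

static int chunked_write(void *addr, const void *insn, size_t len)
{
	while (len) {
		/* Largest chunk that still fits in the two mapped pages. */
		size_t room = PAGE_SIZE * 2 - offset_in_page(addr);
		size_t size = len < room ? len : room;
		int ret = write_chunk(addr, insn, size);

		if (ret)
			return ret;

		addr = (char *)addr + size;
		insn = (const char *)insn + size;
		len -= size;
	}
	return 0;
}

int main(void)
{
	char src[3 * 4096] = { 1 }, dst[3 * 4096];

	return chunked_write(dst + 5, src, sizeof(src) - 5);
}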
...@@ -244,14 +247,10 @@ NOKPROBE_SYMBOL(patch_text_nosync); ...@@ -244,14 +247,10 @@ NOKPROBE_SYMBOL(patch_text_nosync);
static int patch_text_cb(void *data) static int patch_text_cb(void *data)
{ {
struct patch_insn *patch = data; struct patch_insn *patch = data;
unsigned long len; int ret = 0;
int i, ret = 0;
if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) { if (atomic_inc_return(&patch->cpu_count) == num_online_cpus()) {
for (i = 0; ret == 0 && i < patch->ninsns; i++) { ret = patch_insn_write(patch->addr, patch->insns, patch->len);
len = GET_INSN_LENGTH(patch->insns[i]);
ret = patch_insn_write(patch->addr + i * len, &patch->insns[i], len);
}
/* /*
* Make sure the patching store is effective *before* we * Make sure the patching store is effective *before* we
* increment the counter which releases all waiting CPUs * increment the counter which releases all waiting CPUs
...@@ -271,13 +270,13 @@ static int patch_text_cb(void *data) ...@@ -271,13 +270,13 @@ static int patch_text_cb(void *data)
} }
NOKPROBE_SYMBOL(patch_text_cb); NOKPROBE_SYMBOL(patch_text_cb);
int patch_text(void *addr, u32 *insns, int ninsns) int patch_text(void *addr, u32 *insns, size_t len)
{ {
int ret; int ret;
struct patch_insn patch = { struct patch_insn patch = {
.addr = addr, .addr = addr,
.insns = insns, .insns = insns,
.ninsns = ninsns, .len = len,
.cpu_count = ATOMIC_INIT(0), .cpu_count = ATOMIC_INIT(0),
}; };
......
...@@ -24,14 +24,13 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *); ...@@ -24,14 +24,13 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
static void __kprobes arch_prepare_ss_slot(struct kprobe *p) static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
{ {
size_t len = GET_INSN_LENGTH(p->opcode);
u32 insn = __BUG_INSN_32; u32 insn = __BUG_INSN_32;
unsigned long offset = GET_INSN_LENGTH(p->opcode);
p->ainsn.api.restore = (unsigned long)p->addr + offset; p->ainsn.api.restore = (unsigned long)p->addr + len;
patch_text(p->ainsn.api.insn, &p->opcode, 1); patch_text_nosync(p->ainsn.api.insn, &p->opcode, len);
patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset), patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
&insn, 1);
} }
static void __kprobes arch_prepare_simulate(struct kprobe *p) static void __kprobes arch_prepare_simulate(struct kprobe *p)
...@@ -108,16 +107,18 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) ...@@ -108,16 +107,18 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
/* install breakpoint in text */ /* install breakpoint in text */
void __kprobes arch_arm_kprobe(struct kprobe *p) void __kprobes arch_arm_kprobe(struct kprobe *p)
{ {
u32 insn = (p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32 ? size_t len = GET_INSN_LENGTH(p->opcode);
__BUG_INSN_32 : __BUG_INSN_16; u32 insn = len == 4 ? __BUG_INSN_32 : __BUG_INSN_16;
patch_text(p->addr, &insn, 1); patch_text(p->addr, &insn, len);
} }
/* remove breakpoint from text */ /* remove breakpoint from text */
void __kprobes arch_disarm_kprobe(struct kprobe *p) void __kprobes arch_disarm_kprobe(struct kprobe *p)
{ {
patch_text(p->addr, &p->opcode, 1); size_t len = GET_INSN_LENGTH(p->opcode);
patch_text(p->addr, &p->opcode, len);
} }
void __kprobes arch_remove_kprobe(struct kprobe *p) void __kprobes arch_remove_kprobe(struct kprobe *p)
......
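arch_arm_kprobe() now derives both the breakpoint width and the patch length from GET_INSN_LENGTH(): compressed instructions (low two opcode bits != 0b11) are 2 bytes and get c.ebreak, everything else gets the 4-byte ebreak. A quick userspace model of that decode (the constants match the kernel's __BUG_INSN_32 and __BUG_INSN_16):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Model of GET_INSN_LENGTH(): RVC encodings use low bits != 0b11. */
static size_t insn_length(uint32_t insn)
{
	return (insn & 0x3) == 0x3 ? 4 : 2;
}

int main(void)
{
	uint32_t ebreak = 0x00100073;	/* __BUG_INSN_32 */
	uint32_t c_ebreak = 0x9002;	/* __BUG_INSN_16 */

	printf("ebreak:   %zu bytes\n", insn_length(ebreak));	/* 4 */
	printf("c.ebreak: %zu bytes\n", insn_length(c_ebreak));	/* 2 */
	return 0;
}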
...@@ -14,6 +14,9 @@ ...@@ -14,6 +14,9 @@
#include <asm/smp.h> #include <asm/smp.h>
#include <asm/tlbflush.h> #include <asm/tlbflush.h>
#define CREATE_TRACE_POINTS
#include <asm/trace.h>
/* default SBI version is 0.1 */ /* default SBI version is 0.1 */
unsigned long sbi_spec_version __ro_after_init = SBI_SPEC_VERSION_DEFAULT; unsigned long sbi_spec_version __ro_after_init = SBI_SPEC_VERSION_DEFAULT;
EXPORT_SYMBOL(sbi_spec_version); EXPORT_SYMBOL(sbi_spec_version);
...@@ -24,13 +27,15 @@ static int (*__sbi_rfence)(int fid, const struct cpumask *cpu_mask, ...@@ -24,13 +27,15 @@ static int (*__sbi_rfence)(int fid, const struct cpumask *cpu_mask,
unsigned long start, unsigned long size, unsigned long start, unsigned long size,
unsigned long arg4, unsigned long arg5) __ro_after_init; unsigned long arg4, unsigned long arg5) __ro_after_init;
struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0, struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
unsigned long arg1, unsigned long arg2, unsigned long arg2, unsigned long arg3,
unsigned long arg3, unsigned long arg4, unsigned long arg4, unsigned long arg5,
unsigned long arg5) int fid, int ext)
{ {
struct sbiret ret; struct sbiret ret;
trace_sbi_call(ext, fid);
register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0); register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1); register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2); register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
...@@ -46,9 +51,11 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0, ...@@ -46,9 +51,11 @@ struct sbiret sbi_ecall(int ext, int fid, unsigned long arg0,
ret.error = a0; ret.error = a0;
ret.value = a1; ret.value = a1;
trace_sbi_return(ext, ret.error, ret.value);
return ret; return ret;
} }
EXPORT_SYMBOL(sbi_ecall); EXPORT_SYMBOL(__sbi_ecall);
int sbi_err_map_linux_errno(int err) int sbi_err_map_linux_errno(int err)
{ {
......
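The reordering works because the RISC-V calling convention passes the first eight integer arguments in a0-a7, while the SBI ABI wants the six call arguments in a0-a5, the function ID in a6 and the extension ID in a7. With __sbi_ecall(arg0..arg5, fid, ext), every parameter therefore arrives in exactly the register the ecall needs and the compiler emits no shuffling. Callers keep the old argument order through a thin wrapper on the header side; roughly (a sketch of what this series adds, not shown in this hunk):

/* Old-style signature preserved by swapping arguments at compile time. */
#define sbi_ecall(ext, fid, arg0, arg1, arg2, arg3, arg4, arg5)	\
	__sbi_ecall(arg0, arg1, arg2, arg3, arg4, arg5, fid, ext)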
...@@ -214,6 +214,15 @@ asmlinkage __visible void smp_callin(void) ...@@ -214,6 +214,15 @@ asmlinkage __visible void smp_callin(void)
struct mm_struct *mm = &init_mm; struct mm_struct *mm = &init_mm;
unsigned int curr_cpuid = smp_processor_id(); unsigned int curr_cpuid = smp_processor_id();
if (has_vector()) {
/*
* Return as early as possible so the hart with a mismatching
* vlen won't boot.
*/
if (riscv_v_setup_vsize())
return;
}
/* All kernel threads share the same mm context. */ /* All kernel threads share the same mm context. */
mmgrab(mm); mmgrab(mm);
current->active_mm = mm; current->active_mm = mm;
...@@ -226,11 +235,6 @@ asmlinkage __visible void smp_callin(void) ...@@ -226,11 +235,6 @@ asmlinkage __visible void smp_callin(void)
numa_add_cpu(curr_cpuid); numa_add_cpu(curr_cpuid);
set_cpu_online(curr_cpuid, true); set_cpu_online(curr_cpuid, true);
if (has_vector()) {
if (riscv_v_setup_vsize())
elf_hwcap &= ~COMPAT_HWCAP_ISA_V;
}
riscv_user_isa_enable(); riscv_user_isa_enable();
/* /*
......
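Together with the head.S change from `tail smp_callin` to `call smp_callin`, this early return is what actually parks a hart with a mismatching vlen: a tail call has nowhere to return to, whereas a plain call falls through into .Lsecondary_park when smp_callin() comes back. In miniature (a sketch of the control flow, not the real boot path; the real smp_callin() returns void):

#include <stdio.h>

/* Returns nonzero when riscv_v_setup_vsize() rejects this hart's vlen. */
static int smp_callin_model(int vlen_matches)
{
	if (!vlen_matches)
		return -1;	/* early return: this hart must not come online */
	/* ...mmgrab(), set_cpu_online(), idle loop: never returns... */
	return 0;
}

int main(void)
{
	if (smp_callin_model(0))
		puts("hart parks in the wfi loop");	/* .Lsecondary_park */
	return 0;
}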
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#include <asm/hwprobe.h> #include <asm/hwprobe.h>
#include <asm/processor.h>
#include <asm/sbi.h> #include <asm/sbi.h>
#include <asm/switch_to.h> #include <asm/switch_to.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
...@@ -69,7 +70,7 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair, ...@@ -69,7 +70,7 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair,
if (riscv_isa_extension_available(NULL, c)) if (riscv_isa_extension_available(NULL, c))
pair->value |= RISCV_HWPROBE_IMA_C; pair->value |= RISCV_HWPROBE_IMA_C;
if (has_vector()) if (has_vector() && riscv_isa_extension_available(NULL, v))
pair->value |= RISCV_HWPROBE_IMA_V; pair->value |= RISCV_HWPROBE_IMA_V;
/* /*
...@@ -112,8 +113,22 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair, ...@@ -112,8 +113,22 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair,
EXT_KEY(ZACAS); EXT_KEY(ZACAS);
EXT_KEY(ZICOND); EXT_KEY(ZICOND);
EXT_KEY(ZIHINTPAUSE); EXT_KEY(ZIHINTPAUSE);
EXT_KEY(ZIMOP);
EXT_KEY(ZCA);
EXT_KEY(ZCB);
EXT_KEY(ZCMOP);
EXT_KEY(ZAWRS);
/*
* All of the following extensions depend on the kernel's
* support of V, so they are only probed under has_vector().
*/
if (has_vector()) { if (has_vector()) {
EXT_KEY(ZVE32X);
EXT_KEY(ZVE32F);
EXT_KEY(ZVE64X);
EXT_KEY(ZVE64F);
EXT_KEY(ZVE64D);
EXT_KEY(ZVBB); EXT_KEY(ZVBB);
EXT_KEY(ZVBC); EXT_KEY(ZVBC);
EXT_KEY(ZVKB); EXT_KEY(ZVKB);
...@@ -132,6 +147,8 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair, ...@@ -132,6 +147,8 @@ static void hwprobe_isa_ext0(struct riscv_hwprobe *pair,
EXT_KEY(ZFH); EXT_KEY(ZFH);
EXT_KEY(ZFHMIN); EXT_KEY(ZFHMIN);
EXT_KEY(ZFA); EXT_KEY(ZFA);
EXT_KEY(ZCD);
EXT_KEY(ZCF);
} }
#undef EXT_KEY #undef EXT_KEY
} }
...@@ -216,6 +233,9 @@ static void hwprobe_one_pair(struct riscv_hwprobe *pair, ...@@ -216,6 +233,9 @@ static void hwprobe_one_pair(struct riscv_hwprobe *pair,
if (hwprobe_ext0_has(cpus, RISCV_HWPROBE_EXT_ZICBOZ)) if (hwprobe_ext0_has(cpus, RISCV_HWPROBE_EXT_ZICBOZ))
pair->value = riscv_cboz_block_size; pair->value = riscv_cboz_block_size;
break; break;
case RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS:
pair->value = user_max_virt_addr();
break;
/* /*
* For forward compatibility, unknown keys don't fail the whole * For forward compatibility, unknown keys don't fail the whole
......
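Userspace can query the new key with the riscv_hwprobe() syscall; with no cpuset, the kernel reports the value common to all harts. A minimal probe (assuming a riscv target with the 6.11 uapi headers installed):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>	/* struct riscv_hwprobe, key definitions */

int main(void)
{
	struct riscv_hwprobe pair = {
		.key = RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS,
	};

	/* pair_count = 1, no cpuset, no flags. */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
		return 1;

	printf("highest userspace virtual address: 0x%llx\n",
	       (unsigned long long)pair.value);
	return 0;
}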
...@@ -173,8 +173,11 @@ bool riscv_v_first_use_handler(struct pt_regs *regs) ...@@ -173,8 +173,11 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
u32 __user *epc = (u32 __user *)regs->epc; u32 __user *epc = (u32 __user *)regs->epc;
u32 insn = (u32)regs->badaddr; u32 insn = (u32)regs->badaddr;
if (!has_vector())
return false;
/* Do not handle if V is not supported, or disabled */ /* Do not handle if V is not supported, or disabled */
if (!(ELF_HWCAP & COMPAT_HWCAP_ISA_V)) if (!riscv_v_vstate_ctrl_user_allowed())
return false; return false;
/* If V has been enabled then it is not the first-use trap */ /* If V has been enabled then it is not the first-use trap */
......
...@@ -25,6 +25,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = { ...@@ -25,6 +25,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(), KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, ecall_exit_stat), STATS_DESC_COUNTER(VCPU, ecall_exit_stat),
STATS_DESC_COUNTER(VCPU, wfi_exit_stat), STATS_DESC_COUNTER(VCPU, wfi_exit_stat),
STATS_DESC_COUNTER(VCPU, wrs_exit_stat),
STATS_DESC_COUNTER(VCPU, mmio_exit_user), STATS_DESC_COUNTER(VCPU, mmio_exit_user),
STATS_DESC_COUNTER(VCPU, mmio_exit_kernel), STATS_DESC_COUNTER(VCPU, mmio_exit_kernel),
STATS_DESC_COUNTER(VCPU, csr_exit_user), STATS_DESC_COUNTER(VCPU, csr_exit_user),
......
...@@ -16,6 +16,9 @@ ...@@ -16,6 +16,9 @@
#define INSN_MASK_WFI 0xffffffff #define INSN_MASK_WFI 0xffffffff
#define INSN_MATCH_WFI 0x10500073 #define INSN_MATCH_WFI 0x10500073
#define INSN_MASK_WRS 0xffffffff
#define INSN_MATCH_WRS 0x00d00073
#define INSN_MATCH_CSRRW 0x1073 #define INSN_MATCH_CSRRW 0x1073
#define INSN_MASK_CSRRW 0x707f #define INSN_MASK_CSRRW 0x707f
#define INSN_MATCH_CSRRS 0x2073 #define INSN_MATCH_CSRRS 0x2073
...@@ -203,6 +206,13 @@ static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn) ...@@ -203,6 +206,13 @@ static int wfi_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
return KVM_INSN_CONTINUE_NEXT_SEPC; return KVM_INSN_CONTINUE_NEXT_SEPC;
} }
static int wrs_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, ulong insn)
{
vcpu->stat.wrs_exit_stat++;
kvm_vcpu_on_spin(vcpu, vcpu->arch.guest_context.sstatus & SR_SPP);
return KVM_INSN_CONTINUE_NEXT_SEPC;
}
struct csr_func { struct csr_func {
unsigned int base; unsigned int base;
unsigned int count; unsigned int count;
...@@ -378,6 +388,11 @@ static const struct insn_func system_opcode_funcs[] = { ...@@ -378,6 +388,11 @@ static const struct insn_func system_opcode_funcs[] = {
.match = INSN_MATCH_WFI, .match = INSN_MATCH_WFI,
.func = wfi_insn, .func = wfi_insn,
}, },
{
.mask = INSN_MASK_WRS,
.match = INSN_MATCH_WRS,
.func = wrs_insn,
},
}; };
static int system_opcode_insn(struct kvm_vcpu *vcpu, struct kvm_run *run, static int system_opcode_insn(struct kvm_vcpu *vcpu, struct kvm_run *run,
......
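For context, the instruction being trapped here is the heart of the Zawrs polling idiom: arm a reservation with lr, then stall until a remote store (or an implementation-defined timeout) invalidates it. The guest-side loop looks roughly like this (a sketch following the Zawrs spec, assuming RV64 and an assembler that accepts the extension; not the kernel's spinlock code):

/* Wait until *flag becomes nonzero without spinning at full speed. */
static inline void wait_for_flag(volatile unsigned long *flag)
{
	unsigned long v;

	do {
		asm volatile(
			".option push\n\t"
			".option arch, +zawrs\n\t"
			"lr.d	%0, (%1)\n\t"	/* load and arm a reservation */
			"bnez	%0, 1f\n\t"	/* already set: skip the stall */
			"wrs.nto\n\t"		/* stall till the reservation is lost */
			"1:\n\t"
			".option pop"
			: "=&r" (v) : "r" (flag) : "memory");
	} while (!v);
}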
...@@ -42,6 +42,7 @@ static const unsigned long kvm_isa_ext_arr[] = { ...@@ -42,6 +42,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
KVM_ISA_EXT_ARR(SVNAPOT), KVM_ISA_EXT_ARR(SVNAPOT),
KVM_ISA_EXT_ARR(SVPBMT), KVM_ISA_EXT_ARR(SVPBMT),
KVM_ISA_EXT_ARR(ZACAS), KVM_ISA_EXT_ARR(ZACAS),
KVM_ISA_EXT_ARR(ZAWRS),
KVM_ISA_EXT_ARR(ZBA), KVM_ISA_EXT_ARR(ZBA),
KVM_ISA_EXT_ARR(ZBB), KVM_ISA_EXT_ARR(ZBB),
KVM_ISA_EXT_ARR(ZBC), KVM_ISA_EXT_ARR(ZBC),
...@@ -49,6 +50,11 @@ static const unsigned long kvm_isa_ext_arr[] = { ...@@ -49,6 +50,11 @@ static const unsigned long kvm_isa_ext_arr[] = {
KVM_ISA_EXT_ARR(ZBKC), KVM_ISA_EXT_ARR(ZBKC),
KVM_ISA_EXT_ARR(ZBKX), KVM_ISA_EXT_ARR(ZBKX),
KVM_ISA_EXT_ARR(ZBS), KVM_ISA_EXT_ARR(ZBS),
KVM_ISA_EXT_ARR(ZCA),
KVM_ISA_EXT_ARR(ZCB),
KVM_ISA_EXT_ARR(ZCD),
KVM_ISA_EXT_ARR(ZCF),
KVM_ISA_EXT_ARR(ZCMOP),
KVM_ISA_EXT_ARR(ZFA), KVM_ISA_EXT_ARR(ZFA),
KVM_ISA_EXT_ARR(ZFH), KVM_ISA_EXT_ARR(ZFH),
KVM_ISA_EXT_ARR(ZFHMIN), KVM_ISA_EXT_ARR(ZFHMIN),
...@@ -61,6 +67,7 @@ static const unsigned long kvm_isa_ext_arr[] = { ...@@ -61,6 +67,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
KVM_ISA_EXT_ARR(ZIHINTNTL), KVM_ISA_EXT_ARR(ZIHINTNTL),
KVM_ISA_EXT_ARR(ZIHINTPAUSE), KVM_ISA_EXT_ARR(ZIHINTPAUSE),
KVM_ISA_EXT_ARR(ZIHPM), KVM_ISA_EXT_ARR(ZIHPM),
KVM_ISA_EXT_ARR(ZIMOP),
KVM_ISA_EXT_ARR(ZKND), KVM_ISA_EXT_ARR(ZKND),
KVM_ISA_EXT_ARR(ZKNE), KVM_ISA_EXT_ARR(ZKNE),
KVM_ISA_EXT_ARR(ZKNH), KVM_ISA_EXT_ARR(ZKNH),
...@@ -126,6 +133,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) ...@@ -126,6 +133,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_SVINVAL: case KVM_RISCV_ISA_EXT_SVINVAL:
case KVM_RISCV_ISA_EXT_SVNAPOT: case KVM_RISCV_ISA_EXT_SVNAPOT:
case KVM_RISCV_ISA_EXT_ZACAS: case KVM_RISCV_ISA_EXT_ZACAS:
case KVM_RISCV_ISA_EXT_ZAWRS:
case KVM_RISCV_ISA_EXT_ZBA: case KVM_RISCV_ISA_EXT_ZBA:
case KVM_RISCV_ISA_EXT_ZBB: case KVM_RISCV_ISA_EXT_ZBB:
case KVM_RISCV_ISA_EXT_ZBC: case KVM_RISCV_ISA_EXT_ZBC:
...@@ -133,6 +141,11 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) ...@@ -133,6 +141,11 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_ZBKC: case KVM_RISCV_ISA_EXT_ZBKC:
case KVM_RISCV_ISA_EXT_ZBKX: case KVM_RISCV_ISA_EXT_ZBKX:
case KVM_RISCV_ISA_EXT_ZBS: case KVM_RISCV_ISA_EXT_ZBS:
case KVM_RISCV_ISA_EXT_ZCA:
case KVM_RISCV_ISA_EXT_ZCB:
case KVM_RISCV_ISA_EXT_ZCD:
case KVM_RISCV_ISA_EXT_ZCF:
case KVM_RISCV_ISA_EXT_ZCMOP:
case KVM_RISCV_ISA_EXT_ZFA: case KVM_RISCV_ISA_EXT_ZFA:
case KVM_RISCV_ISA_EXT_ZFH: case KVM_RISCV_ISA_EXT_ZFH:
case KVM_RISCV_ISA_EXT_ZFHMIN: case KVM_RISCV_ISA_EXT_ZFHMIN:
...@@ -143,6 +156,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) ...@@ -143,6 +156,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_ZIHINTNTL: case KVM_RISCV_ISA_EXT_ZIHINTNTL:
case KVM_RISCV_ISA_EXT_ZIHINTPAUSE: case KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
case KVM_RISCV_ISA_EXT_ZIHPM: case KVM_RISCV_ISA_EXT_ZIHPM:
case KVM_RISCV_ISA_EXT_ZIMOP:
case KVM_RISCV_ISA_EXT_ZKND: case KVM_RISCV_ISA_EXT_ZKND:
case KVM_RISCV_ISA_EXT_ZKNE: case KVM_RISCV_ISA_EXT_ZKNE:
case KVM_RISCV_ISA_EXT_ZKNH: case KVM_RISCV_ISA_EXT_ZKNH:
......
...@@ -13,6 +13,7 @@ endif ...@@ -13,6 +13,7 @@ endif
lib-$(CONFIG_MMU) += uaccess.o lib-$(CONFIG_MMU) += uaccess.o
lib-$(CONFIG_64BIT) += tishift.o lib-$(CONFIG_64BIT) += tishift.o
lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o
lib-$(CONFIG_RISCV_ISA_ZBC) += crc32.o
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
lib-$(CONFIG_RISCV_ISA_V) += xor.o lib-$(CONFIG_RISCV_ISA_V) += xor.o
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Accelerated CRC32 implementation with Zbc extension.
*
* Copyright (C) 2024 Intel Corporation
*/
#include <asm/hwcap.h>
#include <asm/alternative-macros.h>
#include <asm/byteorder.h>
#include <linux/types.h>
#include <linux/minmax.h>
#include <linux/crc32poly.h>
#include <linux/crc32.h>
#include <linux/byteorder/generic.h>
/*
* Refer to https://www.corsix.org/content/barrett-reduction-polynomials for
* better understanding of how this math works.
*
* let "+" denotes polynomial add (XOR)
* let "-" denotes polynomial sub (XOR)
* let "*" denotes polynomial multiplication
* let "/" denotes polynomial floor division
* let "S" denotes source data, XLEN bit wide
* let "P" denotes CRC32 polynomial
* let "T" denotes 2^(XLEN+32)
* let "QT" denotes quotient of T/P, with the bit for 2^XLEN being implicit
*
* crc32(S, P)
* => S * (2^32) - S * (2^32) / P * P
* => lowest 32 bits of: S * (2^32) / P * P
* => lowest 32 bits of: S * (2^32) * (T / P) / T * P
* => lowest 32 bits of: S * (2^32) * quotient / T * P
* => lowest 32 bits of: S * quotient / 2^XLEN * P
* => lowest 32 bits of: (clmul_high_part(S, QT) + S) * P
* => clmul_low_part(clmul_high_part(S, QT) + S, P)
*
 * In terms of the implementations below, the BE case is more intuitive, since
 * the higher-order bit sits at the more significant position.
*/
#if __riscv_xlen == 64
/* Slide by XLEN bits per iteration */
# define STEP_ORDER 3
/* Each polynomial quotient below has an implicit bit for 2^XLEN */
/* Polynomial quotient of (2^(XLEN+32))/CRC32_POLY, in LE format */
# define CRC32_POLY_QT_LE 0x5a72d812fb808b20
/* Polynomial quotient of (2^(XLEN+32))/CRC32C_POLY, in LE format */
# define CRC32C_POLY_QT_LE 0xa434f61c6f5389f8
/*
 * Polynomial quotient of (2^(XLEN+32))/CRC32_POLY, in BE format; it should be
 * the same as the bit-reversed version of CRC32_POLY_QT_LE.
 */
# define CRC32_POLY_QT_BE 0x04d101df481b4e5a
static inline u64 crc32_le_prep(u32 crc, unsigned long const *ptr)
{
return (u64)crc ^ (__force u64)__cpu_to_le64(*ptr);
}
static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt)
{
u32 crc;
/* We don't have a "clmulrh" insn, so use clmul + slli instead. */
asm volatile (".option push\n"
".option arch,+zbc\n"
"clmul %0, %1, %2\n"
"slli %0, %0, 1\n"
"xor %0, %0, %1\n"
"clmulr %0, %0, %3\n"
"srli %0, %0, 32\n"
".option pop\n"
: "=&r" (crc)
: "r" (s),
"r" (poly_qt),
"r" ((u64)poly << 32)
:);
return crc;
}
static inline u64 crc32_be_prep(u32 crc, unsigned long const *ptr)
{
return ((u64)crc << 32) ^ (__force u64)__cpu_to_be64(*ptr);
}
#elif __riscv_xlen == 32
# define STEP_ORDER 2
/* Each quotient should match the upper half of its analog in RV64 */
# define CRC32_POLY_QT_LE 0xfb808b20
# define CRC32C_POLY_QT_LE 0x6f5389f8
# define CRC32_POLY_QT_BE 0x04d101df
static inline u32 crc32_le_prep(u32 crc, unsigned long const *ptr)
{
return crc ^ (__force u32)__cpu_to_le32(*ptr);
}
static inline u32 crc32_le_zbc(unsigned long s, u32 poly, unsigned long poly_qt)
{
u32 crc;
/* We don't have a "clmulrh" insn, so use clmul + slli instead. */
asm volatile (".option push\n"
".option arch,+zbc\n"
"clmul %0, %1, %2\n"
"slli %0, %0, 1\n"
"xor %0, %0, %1\n"
"clmulr %0, %0, %3\n"
".option pop\n"
: "=&r" (crc)
: "r" (s),
"r" (poly_qt),
"r" (poly)
:);
return crc;
}
static inline u32 crc32_be_prep(u32 crc, unsigned long const *ptr)
{
return crc ^ (__force u32)__cpu_to_be32(*ptr);
}
#else
# error "Unexpected __riscv_xlen"
#endif
static inline u32 crc32_be_zbc(unsigned long s)
{
u32 crc;
asm volatile (".option push\n"
".option arch,+zbc\n"
"clmulh %0, %1, %2\n"
"xor %0, %0, %1\n"
"clmul %0, %0, %3\n"
".option pop\n"
: "=&r" (crc)
: "r" (s),
"r" (CRC32_POLY_QT_BE),
"r" (CRC32_POLY_BE)
:);
return crc;
}
#define STEP (1 << STEP_ORDER)
#define OFFSET_MASK (STEP - 1)
typedef u32 (*fallback)(u32 crc, unsigned char const *p, size_t len);
static inline u32 crc32_le_unaligned(u32 crc, unsigned char const *p,
size_t len, u32 poly,
unsigned long poly_qt)
{
size_t bits = len * 8;
unsigned long s = 0;
u32 crc_low = 0;
for (int i = 0; i < len; i++)
s = ((unsigned long)*p++ << (__riscv_xlen - 8)) | (s >> 8);
s ^= (unsigned long)crc << (__riscv_xlen - bits);
if (__riscv_xlen == 32 || len < sizeof(u32))
crc_low = crc >> bits;
crc = crc32_le_zbc(s, poly, poly_qt);
crc ^= crc_low;
return crc;
}
static inline u32 __pure crc32_le_generic(u32 crc, unsigned char const *p,
size_t len, u32 poly,
unsigned long poly_qt,
fallback crc_fb)
{
size_t offset, head_len, tail_len;
unsigned long const *p_ul;
unsigned long s;
asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
RISCV_ISA_EXT_ZBC, 1)
: : : : legacy);
/* Handle the unaligned head. */
offset = (unsigned long)p & OFFSET_MASK;
if (offset && len) {
head_len = min(STEP - offset, len);
crc = crc32_le_unaligned(crc, p, head_len, poly, poly_qt);
p += head_len;
len -= head_len;
}
tail_len = len & OFFSET_MASK;
len = len >> STEP_ORDER;
p_ul = (unsigned long const *)p;
for (int i = 0; i < len; i++) {
s = crc32_le_prep(crc, p_ul);
crc = crc32_le_zbc(s, poly, poly_qt);
p_ul++;
}
/* Handle the tail bytes. */
p = (unsigned char const *)p_ul;
if (tail_len)
crc = crc32_le_unaligned(crc, p, tail_len, poly, poly_qt);
return crc;
legacy:
return crc_fb(crc, p, len);
}
u32 __pure crc32_le(u32 crc, unsigned char const *p, size_t len)
{
return crc32_le_generic(crc, p, len, CRC32_POLY_LE, CRC32_POLY_QT_LE,
crc32_le_base);
}
u32 __pure __crc32c_le(u32 crc, unsigned char const *p, size_t len)
{
return crc32_le_generic(crc, p, len, CRC32C_POLY_LE,
CRC32C_POLY_QT_LE, __crc32c_le_base);
}
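The exported names are the regular kernel CRC32 entry points, so existing callers transparently get the Zbc version and fall back to the generic *_base implementations via the asm goto above when the extension is absent. The common convention for a one-shot checksum stays the same (kernel context, a sketch):

#include <linux/crc32.h>

/* Standard CRC-32 (as produced by zlib): seed with all ones, then invert. */
u32 buf_crc32(const void *buf, size_t len)
{
	return crc32_le(~0U, buf, len) ^ ~0U;
}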
static inline u32 crc32_be_unaligned(u32 crc, unsigned char const *p,
size_t len)
{
size_t bits = len * 8;
unsigned long s = 0;
u32 crc_low = 0;
for (int i = 0; i < len; i++)
s = *p++ | (s << 8);
if (__riscv_xlen == 32 || len < sizeof(u32)) {
s ^= crc >> (32 - bits);
crc_low = crc << bits;
} else {
s ^= (unsigned long)crc << (bits - 32);
}
crc = crc32_be_zbc(s);
crc ^= crc_low;
return crc;
}
u32 __pure crc32_be(u32 crc, unsigned char const *p, size_t len)
{
size_t offset, head_len, tail_len;
unsigned long const *p_ul;
unsigned long s;
asm goto(ALTERNATIVE("j %l[legacy]", "nop", 0,
RISCV_ISA_EXT_ZBC, 1)
: : : : legacy);
/* Handle the unaligned head. */
offset = (unsigned long)p & OFFSET_MASK;
if (offset && len) {
head_len = min(STEP - offset, len);
crc = crc32_be_unaligned(crc, p, head_len);
p += head_len;
len -= head_len;
}
tail_len = len & OFFSET_MASK;
len = len >> STEP_ORDER;
p_ul = (unsigned long const *)p;
for (int i = 0; i < len; i++) {
s = crc32_be_prep(crc, p_ul);
crc = crc32_be_zbc(s);
p_ul++;
}
/* Handle the tail bytes. */
p = (unsigned char const *)p_ul;
if (tail_len)
crc = crc32_be_unaligned(crc, p, tail_len);
return crc;
legacy:
return crc32_be_base(crc, p, len);
}
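The three Zbc instructions used above (clmul, clmulh, clmulr) are plain carry-less multiplies: partial products are XORed instead of added, which is exactly multiplication of polynomials over GF(2). For readers without Zbc hardware, a bit-serial model of the low half (a sketch; e.g. 0b11 clmul 0b11 = 0b101, i.e. (x+1)^2 = x^2+1 over GF(2)):

#include <stdio.h>
#include <stdint.h>

/* Low 64 bits of the carry-less product, one partial product per set bit. */
static uint64_t clmul_lo(uint64_t a, uint64_t b)
{
	uint64_t r = 0;

	for (int i = 0; i < 64; i++)
		if (b >> i & 1)
			r ^= a << i;
	return r;
}

int main(void)
{
	printf("0x%llx\n", (unsigned long long)clmul_lo(3, 3));	/* 0x5 */
	return 0;
}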
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
SYM_FUNC_START(__asm_copy_to_user) SYM_FUNC_START(__asm_copy_to_user)
#ifdef CONFIG_RISCV_ISA_V #ifdef CONFIG_RISCV_ISA_V
ALTERNATIVE("j fallback_scalar_usercopy", "nop", 0, RISCV_ISA_EXT_v, CONFIG_RISCV_ISA_V) ALTERNATIVE("j fallback_scalar_usercopy", "nop", 0, RISCV_ISA_EXT_ZVE32X, CONFIG_RISCV_ISA_V)
REG_L t0, riscv_v_usercopy_threshold REG_L t0, riscv_v_usercopy_threshold
bltu a2, t0, fallback_scalar_usercopy bltu a2, t0, fallback_scalar_usercopy
tail enter_vector_usercopy tail enter_vector_usercopy
......
...@@ -28,6 +28,7 @@ ...@@ -28,6 +28,7 @@
#include <asm/fixmap.h> #include <asm/fixmap.h>
#include <asm/io.h> #include <asm/io.h>
#include <asm/kasan.h>
#include <asm/numa.h> #include <asm/numa.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/sections.h> #include <asm/sections.h>
...@@ -296,7 +297,7 @@ static void __init setup_bootmem(void) ...@@ -296,7 +297,7 @@ static void __init setup_bootmem(void)
} }
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
struct pt_alloc_ops pt_ops __initdata; struct pt_alloc_ops pt_ops __meminitdata;
pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss; pgd_t swapper_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss; pgd_t trampoline_pg_dir[PTRS_PER_PGD] __page_aligned_bss;
...@@ -358,7 +359,7 @@ static inline pte_t *__init get_pte_virt_fixmap(phys_addr_t pa) ...@@ -358,7 +359,7 @@ static inline pte_t *__init get_pte_virt_fixmap(phys_addr_t pa)
return (pte_t *)set_fixmap_offset(FIX_PTE, pa); return (pte_t *)set_fixmap_offset(FIX_PTE, pa);
} }
static inline pte_t *__init get_pte_virt_late(phys_addr_t pa) static inline pte_t *__meminit get_pte_virt_late(phys_addr_t pa)
{ {
return (pte_t *) __va(pa); return (pte_t *) __va(pa);
} }
...@@ -377,7 +378,7 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va) ...@@ -377,7 +378,7 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va)
return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
} }
static phys_addr_t __init alloc_pte_late(uintptr_t va) static phys_addr_t __meminit alloc_pte_late(uintptr_t va)
{ {
struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
...@@ -385,9 +386,8 @@ static phys_addr_t __init alloc_pte_late(uintptr_t va) ...@@ -385,9 +386,8 @@ static phys_addr_t __init alloc_pte_late(uintptr_t va)
return __pa((pte_t *)ptdesc_address(ptdesc)); return __pa((pte_t *)ptdesc_address(ptdesc));
} }
static void __init create_pte_mapping(pte_t *ptep, static void __meminit create_pte_mapping(pte_t *ptep, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
uintptr_t va, phys_addr_t pa, pgprot_t prot)
phys_addr_t sz, pgprot_t prot)
{ {
uintptr_t pte_idx = pte_index(va); uintptr_t pte_idx = pte_index(va);
...@@ -441,7 +441,7 @@ static pmd_t *__init get_pmd_virt_fixmap(phys_addr_t pa) ...@@ -441,7 +441,7 @@ static pmd_t *__init get_pmd_virt_fixmap(phys_addr_t pa)
return (pmd_t *)set_fixmap_offset(FIX_PMD, pa); return (pmd_t *)set_fixmap_offset(FIX_PMD, pa);
} }
static pmd_t *__init get_pmd_virt_late(phys_addr_t pa) static pmd_t *__meminit get_pmd_virt_late(phys_addr_t pa)
{ {
return (pmd_t *) __va(pa); return (pmd_t *) __va(pa);
} }
...@@ -458,7 +458,7 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va) ...@@ -458,7 +458,7 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va)
return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
} }
static phys_addr_t __init alloc_pmd_late(uintptr_t va) static phys_addr_t __meminit alloc_pmd_late(uintptr_t va)
{ {
struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);
...@@ -466,9 +466,9 @@ static phys_addr_t __init alloc_pmd_late(uintptr_t va) ...@@ -466,9 +466,9 @@ static phys_addr_t __init alloc_pmd_late(uintptr_t va)
return __pa((pmd_t *)ptdesc_address(ptdesc)); return __pa((pmd_t *)ptdesc_address(ptdesc));
} }
static void __init create_pmd_mapping(pmd_t *pmdp, static void __meminit create_pmd_mapping(pmd_t *pmdp,
uintptr_t va, phys_addr_t pa, uintptr_t va, phys_addr_t pa,
phys_addr_t sz, pgprot_t prot) phys_addr_t sz, pgprot_t prot)
{ {
pte_t *ptep; pte_t *ptep;
phys_addr_t pte_phys; phys_addr_t pte_phys;
...@@ -504,7 +504,7 @@ static pud_t *__init get_pud_virt_fixmap(phys_addr_t pa) ...@@ -504,7 +504,7 @@ static pud_t *__init get_pud_virt_fixmap(phys_addr_t pa)
return (pud_t *)set_fixmap_offset(FIX_PUD, pa); return (pud_t *)set_fixmap_offset(FIX_PUD, pa);
} }
static pud_t *__init get_pud_virt_late(phys_addr_t pa) static pud_t *__meminit get_pud_virt_late(phys_addr_t pa)
{ {
return (pud_t *)__va(pa); return (pud_t *)__va(pa);
} }
...@@ -522,7 +522,7 @@ static phys_addr_t __init alloc_pud_fixmap(uintptr_t va) ...@@ -522,7 +522,7 @@ static phys_addr_t __init alloc_pud_fixmap(uintptr_t va)
return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
} }
static phys_addr_t alloc_pud_late(uintptr_t va) static phys_addr_t __meminit alloc_pud_late(uintptr_t va)
{ {
unsigned long vaddr; unsigned long vaddr;
...@@ -542,7 +542,7 @@ static p4d_t *__init get_p4d_virt_fixmap(phys_addr_t pa) ...@@ -542,7 +542,7 @@ static p4d_t *__init get_p4d_virt_fixmap(phys_addr_t pa)
return (p4d_t *)set_fixmap_offset(FIX_P4D, pa); return (p4d_t *)set_fixmap_offset(FIX_P4D, pa);
} }
static p4d_t *__init get_p4d_virt_late(phys_addr_t pa) static p4d_t *__meminit get_p4d_virt_late(phys_addr_t pa)
{ {
return (p4d_t *)__va(pa); return (p4d_t *)__va(pa);
} }
...@@ -560,7 +560,7 @@ static phys_addr_t __init alloc_p4d_fixmap(uintptr_t va) ...@@ -560,7 +560,7 @@ static phys_addr_t __init alloc_p4d_fixmap(uintptr_t va)
return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE); return memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
} }
static phys_addr_t alloc_p4d_late(uintptr_t va) static phys_addr_t __meminit alloc_p4d_late(uintptr_t va)
{ {
unsigned long vaddr; unsigned long vaddr;
...@@ -569,9 +569,8 @@ static phys_addr_t alloc_p4d_late(uintptr_t va) ...@@ -569,9 +569,8 @@ static phys_addr_t alloc_p4d_late(uintptr_t va)
return __pa(vaddr); return __pa(vaddr);
} }
static void __init create_pud_mapping(pud_t *pudp, static void __meminit create_pud_mapping(pud_t *pudp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
uintptr_t va, phys_addr_t pa, pgprot_t prot)
phys_addr_t sz, pgprot_t prot)
{ {
pmd_t *nextp; pmd_t *nextp;
phys_addr_t next_phys; phys_addr_t next_phys;
...@@ -596,9 +595,8 @@ static void __init create_pud_mapping(pud_t *pudp, ...@@ -596,9 +595,8 @@ static void __init create_pud_mapping(pud_t *pudp,
create_pmd_mapping(nextp, va, pa, sz, prot); create_pmd_mapping(nextp, va, pa, sz, prot);
} }
static void __init create_p4d_mapping(p4d_t *p4dp, static void __meminit create_p4d_mapping(p4d_t *p4dp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
uintptr_t va, phys_addr_t pa, pgprot_t prot)
phys_addr_t sz, pgprot_t prot)
{ {
pud_t *nextp; pud_t *nextp;
phys_addr_t next_phys; phys_addr_t next_phys;
...@@ -654,9 +652,8 @@ static void __init create_p4d_mapping(p4d_t *p4dp, ...@@ -654,9 +652,8 @@ static void __init create_p4d_mapping(p4d_t *p4dp,
#define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0) #define create_pmd_mapping(__pmdp, __va, __pa, __sz, __prot) do {} while(0)
#endif /* __PAGETABLE_PMD_FOLDED */ #endif /* __PAGETABLE_PMD_FOLDED */
void __init create_pgd_mapping(pgd_t *pgdp, void __meminit create_pgd_mapping(pgd_t *pgdp, uintptr_t va, phys_addr_t pa, phys_addr_t sz,
uintptr_t va, phys_addr_t pa, pgprot_t prot)
phys_addr_t sz, pgprot_t prot)
{ {
pgd_next_t *nextp; pgd_next_t *nextp;
phys_addr_t next_phys; phys_addr_t next_phys;
...@@ -681,8 +678,7 @@ void __init create_pgd_mapping(pgd_t *pgdp, ...@@ -681,8 +678,7 @@ void __init create_pgd_mapping(pgd_t *pgdp,
create_pgd_next_mapping(nextp, va, pa, sz, prot); create_pgd_next_mapping(nextp, va, pa, sz, prot);
} }
static uintptr_t __init best_map_size(phys_addr_t pa, uintptr_t va, static uintptr_t __meminit best_map_size(phys_addr_t pa, uintptr_t va, phys_addr_t size)
phys_addr_t size)
{ {
if (debug_pagealloc_enabled()) if (debug_pagealloc_enabled())
return PAGE_SIZE; return PAGE_SIZE;
...@@ -718,7 +714,7 @@ asmlinkage void __init __copy_data(void) ...@@ -718,7 +714,7 @@ asmlinkage void __init __copy_data(void)
#endif #endif
#ifdef CONFIG_STRICT_KERNEL_RWX #ifdef CONFIG_STRICT_KERNEL_RWX
static __init pgprot_t pgprot_from_va(uintptr_t va) static __meminit pgprot_t pgprot_from_va(uintptr_t va)
{ {
if (is_va_kernel_text(va)) if (is_va_kernel_text(va))
return PAGE_KERNEL_READ_EXEC; return PAGE_KERNEL_READ_EXEC;
...@@ -743,7 +739,7 @@ void mark_rodata_ro(void) ...@@ -743,7 +739,7 @@ void mark_rodata_ro(void)
set_memory_ro); set_memory_ro);
} }
#else #else
static __init pgprot_t pgprot_from_va(uintptr_t va) static __meminit pgprot_t pgprot_from_va(uintptr_t va)
{ {
if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va)) if (IS_ENABLED(CONFIG_64BIT) && !is_kernel_mapping(va))
return PAGE_KERNEL; return PAGE_KERNEL;
...@@ -1235,9 +1231,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa) ...@@ -1235,9 +1231,8 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
pt_ops_set_fixmap(); pt_ops_set_fixmap();
} }
static void __init create_linear_mapping_range(phys_addr_t start, static void __meminit create_linear_mapping_range(phys_addr_t start, phys_addr_t end,
phys_addr_t end, uintptr_t fixed_map_size, const pgprot_t *pgprot)
uintptr_t fixed_map_size)
{ {
phys_addr_t pa; phys_addr_t pa;
uintptr_t va, map_size; uintptr_t va, map_size;
...@@ -1248,7 +1243,7 @@ static void __init create_linear_mapping_range(phys_addr_t start, ...@@ -1248,7 +1243,7 @@ static void __init create_linear_mapping_range(phys_addr_t start,
best_map_size(pa, va, end - pa); best_map_size(pa, va, end - pa);
create_pgd_mapping(swapper_pg_dir, va, pa, map_size, create_pgd_mapping(swapper_pg_dir, va, pa, map_size,
pgprot_from_va(va)); pgprot ? *pgprot : pgprot_from_va(va));
} }
} }
...@@ -1292,22 +1287,19 @@ static void __init create_linear_mapping_page_table(void) ...@@ -1292,22 +1287,19 @@ static void __init create_linear_mapping_page_table(void)
if (end >= __pa(PAGE_OFFSET) + memory_limit) if (end >= __pa(PAGE_OFFSET) + memory_limit)
end = __pa(PAGE_OFFSET) + memory_limit; end = __pa(PAGE_OFFSET) + memory_limit;
create_linear_mapping_range(start, end, 0); create_linear_mapping_range(start, end, 0, NULL);
} }
#ifdef CONFIG_STRICT_KERNEL_RWX #ifdef CONFIG_STRICT_KERNEL_RWX
create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0); create_linear_mapping_range(ktext_start, ktext_start + ktext_size, 0, NULL);
create_linear_mapping_range(krodata_start, create_linear_mapping_range(krodata_start, krodata_start + krodata_size, 0, NULL);
krodata_start + krodata_size, 0);
memblock_clear_nomap(ktext_start, ktext_size); memblock_clear_nomap(ktext_start, ktext_size);
memblock_clear_nomap(krodata_start, krodata_size); memblock_clear_nomap(krodata_start, krodata_size);
#endif #endif
#ifdef CONFIG_KFENCE #ifdef CONFIG_KFENCE
create_linear_mapping_range(kfence_pool, create_linear_mapping_range(kfence_pool, kfence_pool + KFENCE_POOL_SIZE, PAGE_SIZE, NULL);
kfence_pool + KFENCE_POOL_SIZE,
PAGE_SIZE);
memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE); memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
#endif #endif
...@@ -1439,7 +1431,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node, ...@@ -1439,7 +1431,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
* memory hotplug, we are not able to update all the page tables with * memory hotplug, we are not able to update all the page tables with
* the new PMDs. * the new PMDs.
*/ */
return vmemmap_populate_hugepages(start, end, node, NULL); return vmemmap_populate_hugepages(start, end, node, altmap);
} }
#endif #endif
...@@ -1493,11 +1485,19 @@ static void __init preallocate_pgd_pages_range(unsigned long start, unsigned lon ...@@ -1493,11 +1485,19 @@ static void __init preallocate_pgd_pages_range(unsigned long start, unsigned lon
panic("Failed to pre-allocate %s pages for %s area\n", lvl, area); panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
} }
#define PAGE_END KASAN_SHADOW_START
void __init pgtable_cache_init(void) void __init pgtable_cache_init(void)
{ {
preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc"); preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc");
if (IS_ENABLED(CONFIG_MODULES)) if (IS_ENABLED(CONFIG_MODULES))
preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules"); preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules");
if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) {
preallocate_pgd_pages_range(VMEMMAP_START, VMEMMAP_END, "vmemmap");
preallocate_pgd_pages_range(PAGE_OFFSET, PAGE_END, "direct map");
if (IS_ENABLED(CONFIG_KASAN))
preallocate_pgd_pages_range(KASAN_SHADOW_START, KASAN_SHADOW_END, "kasan");
}
} }
#endif #endif
...@@ -1534,3 +1534,270 @@ struct execmem_info __init *execmem_arch_setup(void) ...@@ -1534,3 +1534,270 @@ struct execmem_info __init *execmem_arch_setup(void)
} }
#endif /* CONFIG_MMU */ #endif /* CONFIG_MMU */
#endif /* CONFIG_EXECMEM */ #endif /* CONFIG_EXECMEM */
#ifdef CONFIG_MEMORY_HOTPLUG
static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
{
struct page *page = pmd_page(*pmd);
struct ptdesc *ptdesc = page_ptdesc(page);
pte_t *pte;
int i;
for (i = 0; i < PTRS_PER_PTE; i++) {
pte = pte_start + i;
if (!pte_none(*pte))
return;
}
pagetable_pte_dtor(ptdesc);
if (PageReserved(page))
free_reserved_page(page);
else
pagetable_free(ptdesc);
pmd_clear(pmd);
}
static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
{
struct page *page = pud_page(*pud);
struct ptdesc *ptdesc = page_ptdesc(page);
pmd_t *pmd;
int i;
for (i = 0; i < PTRS_PER_PMD; i++) {
pmd = pmd_start + i;
if (!pmd_none(*pmd))
return;
}
pagetable_pmd_dtor(ptdesc);
if (PageReserved(page))
free_reserved_page(page);
else
pagetable_free(ptdesc);
pud_clear(pud);
}
static void __meminit free_pud_table(pud_t *pud_start, p4d_t *p4d)
{
struct page *page = p4d_page(*p4d);
pud_t *pud;
int i;
for (i = 0; i < PTRS_PER_PUD; i++) {
pud = pud_start + i;
if (!pud_none(*pud))
return;
}
if (PageReserved(page))
free_reserved_page(page);
else
free_pages((unsigned long)page_address(page), 0);
p4d_clear(p4d);
}
static void __meminit free_vmemmap_storage(struct page *page, size_t size,
struct vmem_altmap *altmap)
{
int order = get_order(size);
if (altmap) {
vmem_altmap_free(altmap, size >> PAGE_SHIFT);
return;
}
if (PageReserved(page)) {
unsigned int nr_pages = 1 << order;
while (nr_pages--)
free_reserved_page(page++);
return;
}
free_pages((unsigned long)page_address(page), order);
}
static void __meminit remove_pte_mapping(pte_t *pte_base, unsigned long addr, unsigned long end,
bool is_vmemmap, struct vmem_altmap *altmap)
{
unsigned long next;
pte_t *ptep, pte;
for (; addr < end; addr = next) {
next = (addr + PAGE_SIZE) & PAGE_MASK;
if (next > end)
next = end;
ptep = pte_base + pte_index(addr);
pte = ptep_get(ptep);
if (!pte_present(pte))
continue;
pte_clear(&init_mm, addr, ptep);
if (is_vmemmap)
free_vmemmap_storage(pte_page(pte), PAGE_SIZE, altmap);
}
}
static void __meminit remove_pmd_mapping(pmd_t *pmd_base, unsigned long addr, unsigned long end,
bool is_vmemmap, struct vmem_altmap *altmap)
{
unsigned long next;
pte_t *pte_base;
pmd_t *pmdp, pmd;
for (; addr < end; addr = next) {
next = pmd_addr_end(addr, end);
pmdp = pmd_base + pmd_index(addr);
pmd = pmdp_get(pmdp);
if (!pmd_present(pmd))
continue;
if (pmd_leaf(pmd)) {
pmd_clear(pmdp);
if (is_vmemmap)
free_vmemmap_storage(pmd_page(pmd), PMD_SIZE, altmap);
continue;
}
pte_base = (pte_t *)pmd_page_vaddr(*pmdp);
remove_pte_mapping(pte_base, addr, next, is_vmemmap, altmap);
free_pte_table(pte_base, pmdp);
}
}
static void __meminit remove_pud_mapping(pud_t *pud_base, unsigned long addr, unsigned long end,
bool is_vmemmap, struct vmem_altmap *altmap)
{
unsigned long next;
pud_t *pudp, pud;
pmd_t *pmd_base;
for (; addr < end; addr = next) {
next = pud_addr_end(addr, end);
pudp = pud_base + pud_index(addr);
pud = pudp_get(pudp);
if (!pud_present(pud))
continue;
if (pud_leaf(pud)) {
if (pgtable_l4_enabled) {
pud_clear(pudp);
if (is_vmemmap)
free_vmemmap_storage(pud_page(pud), PUD_SIZE, altmap);
}
continue;
}
pmd_base = pmd_offset(pudp, 0);
remove_pmd_mapping(pmd_base, addr, next, is_vmemmap, altmap);
if (pgtable_l4_enabled)
free_pmd_table(pmd_base, pudp);
}
}
static void __meminit remove_p4d_mapping(p4d_t *p4d_base, unsigned long addr, unsigned long end,
bool is_vmemmap, struct vmem_altmap *altmap)
{
unsigned long next;
p4d_t *p4dp, p4d;
pud_t *pud_base;
for (; addr < end; addr = next) {
next = p4d_addr_end(addr, end);
p4dp = p4d_base + p4d_index(addr);
p4d = p4dp_get(p4dp);
if (!p4d_present(p4d))
continue;
if (p4d_leaf(p4d)) {
if (pgtable_l5_enabled) {
p4d_clear(p4dp);
if (is_vmemmap)
free_vmemmap_storage(p4d_page(p4d), P4D_SIZE, altmap);
}
continue;
}
pud_base = pud_offset(p4dp, 0);
remove_pud_mapping(pud_base, addr, next, is_vmemmap, altmap);
if (pgtable_l5_enabled)
free_pud_table(pud_base, p4dp);
}
}
static void __meminit remove_pgd_mapping(unsigned long va, unsigned long end, bool is_vmemmap,
struct vmem_altmap *altmap)
{
unsigned long addr, next;
p4d_t *p4d_base;
pgd_t *pgd;
for (addr = va; addr < end; addr = next) {
next = pgd_addr_end(addr, end);
pgd = pgd_offset_k(addr);
if (!pgd_present(*pgd))
continue;
if (pgd_leaf(*pgd))
continue;
p4d_base = p4d_offset(pgd, 0);
remove_p4d_mapping(p4d_base, addr, next, is_vmemmap, altmap);
}
flush_tlb_all();
}
static void __meminit remove_linear_mapping(phys_addr_t start, u64 size)
{
unsigned long va = (unsigned long)__va(start);
unsigned long end = (unsigned long)__va(start + size);
remove_pgd_mapping(va, end, false, NULL);
}
struct range arch_get_mappable_range(void)
{
struct range mhp_range;
mhp_range.start = __pa(PAGE_OFFSET);
mhp_range.end = __pa(PAGE_END - 1);
return mhp_range;
}
int __ref arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
{
int ret = 0;
create_linear_mapping_range(start, start + size, 0, &params->pgprot);
ret = __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, params);
if (ret) {
remove_linear_mapping(start, size);
goto out;
}
max_pfn = PFN_UP(start + size);
max_low_pfn = max_pfn;
out:
flush_tlb_all();
return ret;
}
void __ref arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
{
__remove_pages(start >> PAGE_SHIFT, size >> PAGE_SHIFT, altmap);
remove_linear_mapping(start, size);
flush_tlb_all();
}
void __ref vmemmap_free(unsigned long start, unsigned long end, struct vmem_altmap *altmap)
{
remove_pgd_mapping(start, end, true, altmap);
}
#endif /* CONFIG_MEMORY_HOTPLUG */
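All of the teardown walkers above share one iteration idiom: compute the next level-sized boundary, clamp it to end, and either clear a leaf entry or recurse one level down. A minimal standalone sketch of that clamped walk, in userspace C with illustrative names only (addr_end() and LEVEL_SIZE are not kernel APIs; the kernel's pXd_addr_end() helpers additionally guard against address wraparound, which is omitted here):

#include <stdio.h>

#define LEVEL_SIZE 0x1000UL	/* stand-in for PAGE_SIZE/PMD_SIZE/PUD_SIZE */

/* Next LEVEL_SIZE boundary, clamped to the end of the requested range,
 * mirroring the "next = ...; if (next > end) next = end;" pattern in
 * remove_pte_mapping() above. */
static unsigned long addr_end(unsigned long addr, unsigned long end)
{
	unsigned long boundary = (addr + LEVEL_SIZE) & ~(LEVEL_SIZE - 1);

	return boundary < end ? boundary : end;
}

int main(void)
{
	unsigned long addr, next, end = 0x2800;

	/* Visits [0x900,0x1000), [0x1000,0x2000), [0x2000,0x2800). */
	for (addr = 0x900; addr < end; addr = next) {
		next = addr_end(addr, end);
		printf("visit [%#lx, %#lx)\n", addr, next);
	}
	return 0;
}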
...@@ -6,6 +6,7 @@
 #include <linux/efi.h>
 #include <linux/init.h>
 #include <linux/debugfs.h>
+#include <linux/memory_hotplug.h>
 #include <linux/seq_file.h>
 #include <linux/ptdump.h>
...@@ -370,7 +371,9 @@ bool ptdump_check_wx(void)
 static int ptdump_show(struct seq_file *m, void *v)
 {
+	get_online_mems();
 	ptdump_walk(m, m->private);
+	put_online_mems();
 	return 0;
 }
......
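The get_online_mems()/put_online_mems() pair keeps a concurrent memory hot-remove from freeing page tables while the dump walks them. As a rough userspace analogy (illustrative only; the kernel primitive is the mem_hotplug_lock, not a pthread rwlock):

#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t mems_lock = PTHREAD_RWLOCK_INITIALIZER;

static void dump_page_tables(void)
{
	pthread_rwlock_rdlock(&mems_lock);	/* get_online_mems() */
	puts("walk page tables; nothing can vanish underneath us");
	pthread_rwlock_unlock(&mems_lock);	/* put_online_mems() */
}

static void hot_remove_memory(void)
{
	pthread_rwlock_wrlock(&mems_lock);	/* hotplug write side */
	puts("tear down mappings and free page tables");
	pthread_rwlock_unlock(&mems_lock);
}

int main(void)
{
	dump_page_tables();
	hot_remove_memory();
	return 0;
}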
...@@ -17,6 +17,7 @@
 #define RV_MAX_REG_ARGS 8
 #define RV_FENTRY_NINSNS 2
+#define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4)
 /* imm that allows emit_imm to emit max count insns */
 #define RV_MAX_COUNT_IMM 0x7FFF7FF7FF7FF7FF
...@@ -676,7 +677,7 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
 	if (ret)
 		return ret;
-	if (memcmp(ip, old_insns, RV_FENTRY_NINSNS * 4))
+	if (memcmp(ip, old_insns, RV_FENTRY_NBYTES))
 		return -EFAULT;
 	ret = gen_jump_or_nops(new_addr, ip, new_insns, is_call);
...@@ -685,8 +686,8 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type,
 	cpus_read_lock();
 	mutex_lock(&text_mutex);
-	if (memcmp(ip, new_insns, RV_FENTRY_NINSNS * 4))
-		ret = patch_text(ip, new_insns, RV_FENTRY_NINSNS);
+	if (memcmp(ip, new_insns, RV_FENTRY_NBYTES))
+		ret = patch_text(ip, new_insns, RV_FENTRY_NBYTES);
 	mutex_unlock(&text_mutex);
 	cpus_read_unlock();
......
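The switch from RV_FENTRY_NINSNS * 4 to RV_FENTRY_NBYTES follows the text-patching API now taking lengths in bytes, so a single constant feeds both the memcmp() and the patch_text() call. A compile-and-run sketch of the invariant (the buffers are made-up canonical nops, not real kernel text):

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RV_FENTRY_NINSNS 2
#define RV_FENTRY_NBYTES (RV_FENTRY_NINSNS * 4)	/* base RISC-V insns are 4 bytes */

int main(void)
{
	uint32_t old_insns[RV_FENTRY_NINSNS] = { 0x00000013, 0x00000013 };	/* addi x0,x0,0 */
	uint32_t text[RV_FENTRY_NINSNS]      = { 0x00000013, 0x00000013 };

	/* Same unit (bytes) for comparing and for patching; no stray "* 4". */
	assert(memcmp(text, old_insns, RV_FENTRY_NBYTES) == 0);
	return 0;
}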
...@@ -152,3 +152,16 @@ void arch_efi_call_virt_teardown(void)
 {
 	efi_virtmap_unload();
 }
+
+static int __init riscv_dmi_init(void)
+{
+	/*
+	 * On riscv, DMI depends on UEFI, and dmi_setup() needs to
+	 * be called early because dmi_id_init(), which is an arch_initcall
+	 * itself, depends on dmi_scan_machine() having been called already.
+	 */
+	dmi_setup();
+	return 0;
+}
+core_initcall(riscv_dmi_init);
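For reference, the ordering the comment relies on falls out of the initcall levels in include/linux/init.h: core_initcall registers at a lower level than arch_initcall, so riscv_dmi_init() runs before dmi_id_init():

#define core_initcall(fn)	__define_initcall(fn, 1)
#define arch_initcall(fn)	__define_initcall(fn, 3)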
...@@ -122,7 +122,7 @@ config VIRTIO_BALLOON
 config VIRTIO_MEM
 	tristate "Virtio mem driver"
-	depends on X86_64 || ARM64
+	depends on X86_64 || ARM64 || RISCV
 	depends on VIRTIO
 	depends on MEMORY_HOTPLUG
 	depends on MEMORY_HOTREMOVE
......
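With the architecture list widened, virtio-mem becomes selectable on RISC-V once the hotplug prerequisites are enabled. A hypothetical .config fragment (the symbols are the real Kconfig options; the y/m split is just one reasonable choice):

CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MEM=m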
...@@ -9,7 +9,9 @@
 #include <linux/bitrev.h>
 u32 __pure crc32_le(u32 crc, unsigned char const *p, size_t len);
+u32 __pure crc32_le_base(u32 crc, unsigned char const *p, size_t len);
 u32 __pure crc32_be(u32 crc, unsigned char const *p, size_t len);
+u32 __pure crc32_be_base(u32 crc, unsigned char const *p, size_t len);
 /**
  * crc32_le_combine - Combine two crc32 check values into one. For two
...@@ -37,6 +39,7 @@ static inline u32 crc32_le_combine(u32 crc1, u32 crc2, size_t len2)
 }
 u32 __pure __crc32c_le(u32 crc, unsigned char const *p, size_t len);
+u32 __pure __crc32c_le_base(u32 crc, unsigned char const *p, size_t len);
 /**
  * __crc32c_le_combine - Combine two crc32c check values into one. For two
......
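The new *_base prototypes expose the generic implementations so an architecture can provide an optimized crc32_le() (on RISC-V, using the Zbc carry-less-multiply instructions) yet still fall back when the extension is missing. A self-contained sketch of that dispatch pattern, under the assumption of a simplified userspace model (the kernel wires the arch override up differently, and have_zbc stands in for a real cpufeature check):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Generic bitwise CRC-32 (reflected, polynomial 0xEDB88320), playing the
 * role of the crc32_le_base() declared above. */
static uint32_t crc32_le_base(uint32_t crc, const unsigned char *p, size_t len)
{
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
	}
	return crc;
}

static int have_zbc;	/* a real implementation would probe the CPU */

static uint32_t crc32_le(uint32_t crc, const unsigned char *p, size_t len)
{
	if (!have_zbc)
		return crc32_le_base(crc, p, len);	/* generic fallback */
	/* ... a Zbc clmul-based fast path would live here ... */
	return crc32_le_base(crc, p, len);
}

int main(void)
{
	const unsigned char msg[] = "123456789";

	/* Standard CRC-32 check value: prints cbf43926. */
	printf("%08x\n", crc32_le(0xffffffffu, msg, sizeof(msg) - 1) ^ 0xffffffffu);
	return 0;
}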
...@@ -49,6 +49,7 @@ bool filter_reg(__u64 reg)
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_SVNAPOT:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_SVPBMT:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZACAS:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZAWRS:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBA:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBB:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBC:
...@@ -56,6 +57,11 @@ bool filter_reg(__u64 reg)
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBKC:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBKX:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZBS:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCA:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCB:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCD:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCF:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZCMOP:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFA:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFH:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZFHMIN:
...@@ -68,6 +74,7 @@ bool filter_reg(__u64 reg)
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZIHINTNTL:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZIHINTPAUSE:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZIHPM:
+	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZIMOP:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZKND:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZKNE:
 	case KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_ZKNH:
...@@ -415,6 +422,7 @@ static const char *isa_ext_single_id_to_str(__u64 reg_off)
 	KVM_ISA_EXT_ARR(SVNAPOT),
 	KVM_ISA_EXT_ARR(SVPBMT),
 	KVM_ISA_EXT_ARR(ZACAS),
+	KVM_ISA_EXT_ARR(ZAWRS),
 	KVM_ISA_EXT_ARR(ZBA),
 	KVM_ISA_EXT_ARR(ZBB),
 	KVM_ISA_EXT_ARR(ZBC),
...@@ -422,6 +430,11 @@ static const char *isa_ext_single_id_to_str(__u64 reg_off)
 	KVM_ISA_EXT_ARR(ZBKC),
 	KVM_ISA_EXT_ARR(ZBKX),
 	KVM_ISA_EXT_ARR(ZBS),
+	KVM_ISA_EXT_ARR(ZCA),
+	KVM_ISA_EXT_ARR(ZCB),
+	KVM_ISA_EXT_ARR(ZCD),
+	KVM_ISA_EXT_ARR(ZCF),
+	KVM_ISA_EXT_ARR(ZCMOP),
 	KVM_ISA_EXT_ARR(ZFA),
 	KVM_ISA_EXT_ARR(ZFH),
 	KVM_ISA_EXT_ARR(ZFHMIN),
...@@ -434,6 +447,7 @@ static const char *isa_ext_single_id_to_str(__u64 reg_off)
 	KVM_ISA_EXT_ARR(ZIHINTNTL),
 	KVM_ISA_EXT_ARR(ZIHINTPAUSE),
 	KVM_ISA_EXT_ARR(ZIHPM),
+	KVM_ISA_EXT_ARR(ZIMOP),
 	KVM_ISA_EXT_ARR(ZKND),
 	KVM_ISA_EXT_ARR(ZKNE),
 	KVM_ISA_EXT_ARR(ZKNH),
...@@ -939,6 +953,7 @@ KVM_ISA_EXT_SIMPLE_CONFIG(svinval, SVINVAL);
 KVM_ISA_EXT_SIMPLE_CONFIG(svnapot, SVNAPOT);
 KVM_ISA_EXT_SIMPLE_CONFIG(svpbmt, SVPBMT);
 KVM_ISA_EXT_SIMPLE_CONFIG(zacas, ZACAS);
+KVM_ISA_EXT_SIMPLE_CONFIG(zawrs, ZAWRS);
 KVM_ISA_EXT_SIMPLE_CONFIG(zba, ZBA);
 KVM_ISA_EXT_SIMPLE_CONFIG(zbb, ZBB);
 KVM_ISA_EXT_SIMPLE_CONFIG(zbc, ZBC);
...@@ -946,6 +961,11 @@ KVM_ISA_EXT_SIMPLE_CONFIG(zbkb, ZBKB);
 KVM_ISA_EXT_SIMPLE_CONFIG(zbkc, ZBKC);
 KVM_ISA_EXT_SIMPLE_CONFIG(zbkx, ZBKX);
 KVM_ISA_EXT_SIMPLE_CONFIG(zbs, ZBS);
+KVM_ISA_EXT_SIMPLE_CONFIG(zca, ZCA);
+KVM_ISA_EXT_SIMPLE_CONFIG(zcb, ZCB);
+KVM_ISA_EXT_SIMPLE_CONFIG(zcd, ZCD);
+KVM_ISA_EXT_SIMPLE_CONFIG(zcf, ZCF);
+KVM_ISA_EXT_SIMPLE_CONFIG(zcmop, ZCMOP);
 KVM_ISA_EXT_SIMPLE_CONFIG(zfa, ZFA);
 KVM_ISA_EXT_SIMPLE_CONFIG(zfh, ZFH);
 KVM_ISA_EXT_SIMPLE_CONFIG(zfhmin, ZFHMIN);
...@@ -958,6 +978,7 @@ KVM_ISA_EXT_SIMPLE_CONFIG(zifencei, ZIFENCEI);
 KVM_ISA_EXT_SIMPLE_CONFIG(zihintntl, ZIHINTNTL);
 KVM_ISA_EXT_SIMPLE_CONFIG(zihintpause, ZIHINTPAUSE);
 KVM_ISA_EXT_SIMPLE_CONFIG(zihpm, ZIHPM);
+KVM_ISA_EXT_SIMPLE_CONFIG(zimop, ZIMOP);
 KVM_ISA_EXT_SIMPLE_CONFIG(zknd, ZKND);
 KVM_ISA_EXT_SIMPLE_CONFIG(zkne, ZKNE);
 KVM_ISA_EXT_SIMPLE_CONFIG(zknh, ZKNH);
...@@ -995,6 +1016,7 @@ struct vcpu_reg_list *vcpu_configs[] = {
 	&config_svnapot,
 	&config_svpbmt,
 	&config_zacas,
+	&config_zawrs,
 	&config_zba,
 	&config_zbb,
 	&config_zbc,
...@@ -1002,6 +1024,11 @@ struct vcpu_reg_list *vcpu_configs[] = {
 	&config_zbkc,
 	&config_zbkx,
 	&config_zbs,
+	&config_zca,
+	&config_zcb,
+	&config_zcd,
+	&config_zcf,
+	&config_zcmop,
 	&config_zfa,
 	&config_zfh,
 	&config_zfhmin,
...@@ -1014,6 +1041,7 @@ struct vcpu_reg_list *vcpu_configs[] = {
 	&config_zihintntl,
 	&config_zihintpause,
 	&config_zihpm,
+	&config_zimop,
 	&config_zknd,
 	&config_zkne,
 	&config_zknh,
......
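Each new entry names a single-extension pseudo-register that userspace reads (or toggles) through the one-reg API the selftest exercises. A sketch of the read side, assuming a riscv64 host with current uapi headers (error handling trimmed; vcpu_fd comes from the usual KVM_CREATE_VM/KVM_CREATE_VCPU sequence):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <asm/kvm.h>

/* Returns 1 when the given extension (e.g. KVM_RISCV_ISA_EXT_ZAWRS) is
 * enabled for the vCPU, 0 otherwise. KVM_REG_SIZE_U64 is used since rv64
 * ISA-extension registers are ulong-sized. */
static int vcpu_has_ext(int vcpu_fd, uint64_t ext_id)
{
	uint64_t val = 0;
	struct kvm_one_reg reg = {
		.id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
		      KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | ext_id,
		.addr = (unsigned long)&val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) == 0 && val == 1;
}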
...@@ -88,16 +88,16 @@ int main(void)
 		return -2;
 	}
-	if (!(pair.value & RISCV_HWPROBE_IMA_V)) {
+	if (!(pair.value & RISCV_HWPROBE_EXT_ZVE32X)) {
 		rc = prctl(PR_RISCV_V_GET_CONTROL);
 		if (rc != -1 || errno != EINVAL) {
-			ksft_test_result_fail("GET_CONTROL should fail on kernel/hw without V\n");
+			ksft_test_result_fail("GET_CONTROL should fail on kernel/hw without ZVE32X\n");
 			return -3;
 		}
 		rc = prctl(PR_RISCV_V_SET_CONTROL, PR_RISCV_V_VSTATE_CTRL_ON);
 		if (rc != -1 || errno != EINVAL) {
-			ksft_test_result_fail("GET_CONTROL should fail on kernel/hw without V\n");
+			ksft_test_result_fail("SET_CONTROL should fail on kernel/hw without ZVE32X\n");
 			return -4;
 		}
......
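The test now keys off Zve32x, the smallest vector subset, rather than full V. A sketch of the same hwprobe query outside the selftest harness, assuming a riscv host with recent kernel headers:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/hwprobe.h>
#include <asm/unistd.h>

int main(void)
{
	struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

	/* riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags) */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0)
		return 1;

	printf("Zve32x %ssupported\n",
	       (pair.value & RISCV_HWPROBE_EXT_ZVE32X) ? "" : "not ");
	return 0;
}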