Commit b7b98f86 authored by Jakub Kicinski

Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Alexei Starovoitov says:

====================
pull-request: bpf-next 2021-11-01

We've added 181 non-merge commits during the last 28 day(s) which contain
a total of 280 files changed, 11791 insertions(+), 5879 deletions(-).

The main changes are:

1) Fix bpf verifier propagation of 64-bit bounds, from Alexei.

2) Parallelize bpf test_progs, from Yucong and Andrii.

3) Deprecate various libbpf apis including af_xdp, from Andrii, Hengqi, Magnus.

4) Improve bpf selftests on s390, from Ilya.

5) bloomfilter bpf map type, from Joanne.

6) Big improvements to JIT tests especially on Mips, from Johan.

7) Support kernel module function calls from bpf, from Kumar.

8) Support typeless and weak ksym in light skeleton, from Kumar.

9) Disallow unprivileged bpf by default, from Pawan.

10) BTF_KIND_DECL_TAG support, from Yonghong.

11) Various bpftool cleanups, from Quentin.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (181 commits)
  libbpf: Deprecate AF_XDP support
  kbuild: Unify options for BTF generation for vmlinux and modules
  selftests/bpf: Add a testcase for 64-bit bounds propagation issue.
  bpf: Fix propagation of signed bounds from 64-bit min/max into 32-bit.
  bpf: Fix propagation of bounds from 64-bit min/max into 32-bit and var_off.
  selftests/bpf: Fix also no-alu32 strobemeta selftest
  bpf: Add missing map_delete_elem method to bloom filter map
  selftests/bpf: Add bloom map success test for userspace calls
  bpf: Add alignment padding for "map_extra" + consolidate holes
  bpf: Bloom filter map naming fixups
  selftests/bpf: Add test cases for struct_ops prog
  bpf: Add dummy BPF STRUCT_OPS for test purpose
  bpf: Factor out helpers for ctx access checking
  bpf: Factor out a helper to prepare trampoline for struct_ops prog
  selftests, bpf: Fix broken riscv build
  riscv, libbpf: Add RISC-V (RV64) support to bpf_tracing.h
  tools, build: Add RISC-V to HOSTARCH parsing
  riscv, bpf: Increase the maximum number of iterations
  selftests, bpf: Add one test for sockmap with strparser
  selftests, bpf: Fix test_txmsg_ingress_parser error
  ...
====================

Link: https://lore.kernel.org/r/20211102013123.9005-1-alexei.starovoitov@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 52fa3ee0 0b170456
@@ -85,7 +85,7 @@ sequentially and type id is assigned to each recognized type starting from id
#define BTF_KIND_VAR            14      /* Variable     */
#define BTF_KIND_DATASEC        15      /* Section      */
#define BTF_KIND_FLOAT          16      /* Floating point       */
-#define BTF_KIND_TAG            17      /* Tag */
+#define BTF_KIND_DECL_TAG       17      /* Decl Tag */

Note that the type section encodes debug info, not just pure types.
``BTF_KIND_FUNC`` is not a type, and it represents a defined subprogram.

@@ -107,7 +107,7 @@ Each type contains the following common data::
     * "size" tells the size of the type it is describing.
     *
     * "type" is used by PTR, TYPEDEF, VOLATILE, CONST, RESTRICT,
-    * FUNC, FUNC_PROTO and TAG.
+    * FUNC, FUNC_PROTO and DECL_TAG.
     * "type" is a type_id referring to another type.
     */
    union {

@@ -466,30 +466,30 @@ map definition.
No additional type data follow ``btf_type``.

-2.2.17 BTF_KIND_TAG
-~~~~~~~~~~~~~~~~~~~
+2.2.17 BTF_KIND_DECL_TAG
+~~~~~~~~~~~~~~~~~~~~~~~~

``struct btf_type`` encoding requirement:
 * ``name_off``: offset to a non-empty string
 * ``info.kind_flag``: 0
- * ``info.kind``: BTF_KIND_TAG
+ * ``info.kind``: BTF_KIND_DECL_TAG
 * ``info.vlen``: 0
- * ``type``: ``struct``, ``union``, ``func`` or ``var``
+ * ``type``: ``struct``, ``union``, ``func``, ``var`` or ``typedef``

-``btf_type`` is followed by ``struct btf_tag``.::
+``btf_type`` is followed by ``struct btf_decl_tag``.::

-    struct btf_tag {
+    struct btf_decl_tag {
        __u32   component_idx;
    };

-The ``name_off`` encodes btf_tag attribute string.
-The ``type`` should be ``struct``, ``union``, ``func`` or ``var``.
-For ``var`` type, ``btf_tag.component_idx`` must be ``-1``.
-For the other three types, if the btf_tag attribute is
+The ``name_off`` encodes btf_decl_tag attribute string.
+The ``type`` should be ``struct``, ``union``, ``func``, ``var`` or ``typedef``.
+For ``var`` or ``typedef`` type, ``btf_decl_tag.component_idx`` must be ``-1``.
+For the other three types, if the btf_decl_tag attribute is
 applied to the ``struct``, ``union`` or ``func`` itself,
-``btf_tag.component_idx`` must be ``-1``. Otherwise,
+``btf_decl_tag.component_idx`` must be ``-1``. Otherwise,
 the attribute is applied to a ``struct``/``union`` member or
-a ``func`` argument, and ``btf_tag.component_idx`` should be a
+a ``func`` argument, and ``btf_decl_tag.component_idx`` should be a
 valid index (starting from 0) pointing to a member or an argument.
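Editorial aside, not part of the patch: these BTF_KIND_DECL_TAG records are what a clang new enough to support the `btf_decl_tag` attribute emits for tagged declarations. A minimal sketch (all names here are illustrative only):

    /* Each __attribute__((btf_decl_tag("..."))) below becomes one
     * BTF_KIND_DECL_TAG record whose "type" points at the tagged
     * declaration and whose component_idx follows the rules above.
     */
    #define __tag(x) __attribute__((btf_decl_tag(x)))

    struct map_value {
            int counter __tag("member_tag");   /* component_idx = 0 */
            int flags   __tag("member_tag2");  /* component_idx = 1 */
    };

    int global_cnt __tag("var_tag");           /* VAR target: component_idx = -1 */
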
3. BTF Kernel API
@@ -150,6 +150,46 @@ mirror of the mainline's version of libbpf for a stand-alone build.
However, all changes to libbpf's code base must be upstreamed through
the mainline kernel tree.
API documentation convention
============================
The libbpf API is documented via comments above definitions in
header files. These comments can be rendered by doxygen and sphinx
for well-organized HTML output. This section describes the
convention in which these comments should be formatted.
Here is an example from btf.h:
.. code-block:: c
/**
* @brief **btf__new()** creates a new instance of a BTF object from the raw
* bytes of an ELF's BTF section
* @param data raw bytes
* @param size number of bytes passed in `data`
* @return new BTF object instance which has to be eventually freed with
* **btf__free()**
*
* On error, error-code-encoded-as-pointer is returned, not a NULL. To extract
* error code from such a pointer `libbpf_get_error()` should be used. If
* `libbpf_set_strict_mode(LIBBPF_STRICT_CLEAN_PTRS)` is enabled, NULL is
* returned on error instead. In both cases thread-local `errno` variable is
* always set to error code as well.
*/
The comment must start with a block comment of the form '/\*\*'.
The documentation always starts with a @brief directive. This line is a short
description about this API. It starts with the name of the API, denoted in bold
like so: **api_name**. Please include an open and close parenthesis if this is a
function. Follow with the short description of the API. A longer form description
can be added below the last directive, at the bottom of the comment.
Parameters are denoted with the @param directive; there should be one for each
parameter. If this is a function with a non-void return, use the @return
directive to document it.
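As a further illustration (a hypothetical function, not an actual libbpf API), a comment following this convention for a two-parameter call could look like:

.. code-block:: c

 /**
  * @brief **libbpf_example_open()** opens a BPF object from a file path
  * (hypothetical API, shown only to illustrate the comment layout)
  * @param path file system path of the BPF object file
  * @param flags reserved for future use, must be 0
  * @return new object handle that the caller must eventually free
  *
  * Any longer, free-form description goes here, after the last directive.
  */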
License
-------------------
@@ -3442,6 +3442,7 @@ S: Supported
F: arch/arm64/net/

BPF JIT for MIPS (32-BIT AND 64-BIT)
+M: Johan Almbladh <johan.almbladh@anyfinetworks.com>
M: Paul Burton <paulburton@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
@@ -480,6 +480,8 @@ LZ4 = lz4c
XZ = xz
ZSTD = zstd

+PAHOLE_FLAGS = $(shell PAHOLE=$(PAHOLE) $(srctree)/scripts/pahole-flags.sh)
+
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
              -Wbitwise -Wno-return-void -Wno-unknown-attribute $(CF)
NOSTDINC_FLAGS :=

@@ -534,6 +536,7 @@ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
+export PAHOLE_FLAGS

# Files to ignore in find ... statements
@@ -1882,11 +1882,6 @@ static int validate_code(struct jit_ctx *ctx)
    return 0;
}

-void bpf_jit_compile(struct bpf_prog *prog)
-{
-   /* Nothing to do here. We support Internal BPF. */
-}
-
bool bpf_jit_needs_zext(void)
{
    return true;
@@ -57,7 +57,6 @@ config MIPS
    select HAVE_ARCH_TRACEHOOK
    select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
    select HAVE_ASM_MODVERSIONS
-   select HAVE_CBPF_JIT if !64BIT && !CPU_MICROMIPS
    select HAVE_CONTEXT_TRACKING
    select HAVE_TIF_NOHZ
    select HAVE_C_RECORDMCOUNT
@@ -65,7 +64,10 @@ config MIPS
    select HAVE_DEBUG_STACKOVERFLOW
    select HAVE_DMA_CONTIGUOUS
    select HAVE_DYNAMIC_FTRACE
-   select HAVE_EBPF_JIT if 64BIT && !CPU_MICROMIPS && TARGET_ISA_REV >= 2
+   select HAVE_EBPF_JIT if !CPU_MICROMIPS && \
+               !CPU_DADDI_WORKAROUNDS && \
+               !CPU_R4000_WORKAROUNDS && \
+               !CPU_R4400_WORKAROUNDS
    select HAVE_EXIT_THREAD
    select HAVE_FAST_GUP
    select HAVE_FTRACE_MCOUNT_RECORD
@@ -1212,15 +1214,6 @@ config SYS_SUPPORTS_RELOCATABLE
      The platform must provide plat_get_fdt() if it selects CONFIG_USE_OF
      to allow access to command line and entropy sources.

-config MIPS_CBPF_JIT
-   def_bool y
-   depends on BPF_JIT && HAVE_CBPF_JIT
-
-config MIPS_EBPF_JIT
-   def_bool y
-   depends on BPF_JIT && HAVE_EBPF_JIT
-
#
# Endianness selection. Sufficiently obscure so many users don't know what to
# answer,so we try hard to limit the available choices. Also the use of a
@@ -145,6 +145,7 @@ Ip_u1(_mtlo);
Ip_u3u1u2(_mul);
Ip_u1u2(_multu);
Ip_u3u1u2(_mulu);
+Ip_u3u1u2(_muhu);
Ip_u3u1u2(_nor);
Ip_u3u1u2(_or);
Ip_u2u1u3(_ori);
@@ -248,7 +249,11 @@ static inline void uasm_l##lb(struct uasm_label **lab, u32 *addr) \
#define uasm_i_bnezl(buf, rs, off) uasm_i_bnel(buf, rs, 0, off)
#define uasm_i_ehb(buf) uasm_i_sll(buf, 0, 0, 3)
#define uasm_i_move(buf, a, b) UASM_i_ADDU(buf, a, 0, b)
+#ifdef CONFIG_CPU_NOP_WORKAROUNDS
+#define uasm_i_nop(buf) uasm_i_or(buf, 1, 1, 0)
+#else
#define uasm_i_nop(buf) uasm_i_sll(buf, 0, 0, 0)
+#endif
#define uasm_i_ssnop(buf) uasm_i_sll(buf, 0, 0, 1)

static inline void uasm_i_drotr_safe(u32 **p, unsigned int a1,
@@ -90,7 +90,7 @@ static const struct insn insn_table[insn_invalid] = {
        RS | RT | RD},
    [insn_dmtc0] = {M(cop0_op, dmtc_op, 0, 0, 0, 0), RT | RD | SET},
    [insn_dmultu] = {M(spec_op, 0, 0, 0, 0, dmultu_op), RS | RT},
-   [insn_dmulu] = {M(spec_op, 0, 0, 0, dmult_dmul_op, dmultu_op),
+   [insn_dmulu] = {M(spec_op, 0, 0, 0, dmultu_dmulu_op, dmultu_op),
        RS | RT | RD},
    [insn_drotr] = {M(spec_op, 1, 0, 0, 0, dsrl_op), RT | RD | RE},
    [insn_drotr32] = {M(spec_op, 1, 0, 0, 0, dsrl32_op), RT | RD | RE},
@@ -150,6 +150,8 @@ static const struct insn insn_table[insn_invalid] = {
    [insn_mtlo] = {M(spec_op, 0, 0, 0, 0, mtlo_op), RS},
    [insn_mulu] = {M(spec_op, 0, 0, 0, multu_mulu_op, multu_op),
        RS | RT | RD},
+   [insn_muhu] = {M(spec_op, 0, 0, 0, multu_muhu_op, multu_op),
+       RS | RT | RD},
#ifndef CONFIG_CPU_MIPSR6
    [insn_mul] = {M(spec2_op, 0, 0, 0, 0, mul_op), RS | RT | RD},
#else
@@ -59,7 +59,7 @@ enum opcode {
    insn_lddir, insn_ldpte, insn_ldx, insn_lh, insn_lhu, insn_ll, insn_lld,
    insn_lui, insn_lw, insn_lwu, insn_lwx, insn_mfc0, insn_mfhc0, insn_mfhi,
    insn_mflo, insn_modu, insn_movn, insn_movz, insn_mtc0, insn_mthc0,
-   insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_nor,
+   insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_muhu, insn_nor,
    insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sb, insn_sc,
    insn_scd, insn_seleqz, insn_selnez, insn_sd, insn_sh, insn_sll,
    insn_sllv, insn_slt, insn_slti, insn_sltiu, insn_sltu, insn_sra,
@@ -344,6 +344,7 @@ I_u1(_mtlo)
I_u3u1u2(_mul)
I_u1u2(_multu)
I_u3u1u2(_mulu)
+I_u3u1u2(_muhu)
I_u3u1u2(_nor)
I_u3u1u2(_or)
I_u2u1u3(_ori)
# SPDX-License-Identifier: GPL-2.0-only
# MIPS networking code

-obj-$(CONFIG_MIPS_CBPF_JIT) += bpf_jit.o bpf_jit_asm.o
-obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit.o
+obj-$(CONFIG_BPF_JIT) += bpf_jit_comp.o
+
+ifeq ($(CONFIG_32BIT),y)
+obj-$(CONFIG_BPF_JIT) += bpf_jit_comp32.o
+else
+obj-$(CONFIG_BPF_JIT) += bpf_jit_comp64.o
+endif
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Just-In-Time compiler for BPF filters on MIPS
*
* Copyright (c) 2014 Imagination Technologies Ltd.
* Author: Markos Chandras <markos.chandras@imgtec.com>
*/
#ifndef BPF_JIT_MIPS_OP_H
#define BPF_JIT_MIPS_OP_H
/* Registers used by JIT */
#define MIPS_R_ZERO 0
#define MIPS_R_V0 2
#define MIPS_R_A0 4
#define MIPS_R_A1 5
#define MIPS_R_T4 12
#define MIPS_R_T5 13
#define MIPS_R_T6 14
#define MIPS_R_T7 15
#define MIPS_R_S0 16
#define MIPS_R_S1 17
#define MIPS_R_S2 18
#define MIPS_R_S3 19
#define MIPS_R_S4 20
#define MIPS_R_S5 21
#define MIPS_R_S6 22
#define MIPS_R_S7 23
#define MIPS_R_SP 29
#define MIPS_R_RA 31
/* Conditional codes */
#define MIPS_COND_EQ 0x1
#define MIPS_COND_GE (0x1 << 1)
#define MIPS_COND_GT (0x1 << 2)
#define MIPS_COND_NE (0x1 << 3)
#define MIPS_COND_ALL (0x1 << 4)
/* Conditionals on X register or K immediate */
#define MIPS_COND_X (0x1 << 5)
#define MIPS_COND_K (0x1 << 6)
#define r_ret MIPS_R_V0
/*
* Use 2 scratch registers to avoid pipeline interlocks.
* There is no overhead during epilogue and prologue since
* any of the $s0-$s6 registers will only be preserved if
* they are going to actually be used.
*/
#define r_skb_hl MIPS_R_S0 /* skb header length */
#define r_skb_data MIPS_R_S1 /* skb actual data */
#define r_off MIPS_R_S2
#define r_A MIPS_R_S3
#define r_X MIPS_R_S4
#define r_skb MIPS_R_S5
#define r_M MIPS_R_S6
#define r_skb_len MIPS_R_S7
#define r_s0 MIPS_R_T4 /* scratch reg 1 */
#define r_s1 MIPS_R_T5 /* scratch reg 2 */
#define r_tmp_imm MIPS_R_T6 /* No need to preserve this */
#define r_tmp MIPS_R_T7 /* No need to preserve this */
#define r_zero MIPS_R_ZERO
#define r_sp MIPS_R_SP
#define r_ra MIPS_R_RA
#ifndef __ASSEMBLY__
/* Declare ASM helpers */
#define DECLARE_LOAD_FUNC(func) \
extern u8 func(unsigned long *skb, int offset); \
extern u8 func##_negative(unsigned long *skb, int offset); \
extern u8 func##_positive(unsigned long *skb, int offset)
DECLARE_LOAD_FUNC(sk_load_word);
DECLARE_LOAD_FUNC(sk_load_half);
DECLARE_LOAD_FUNC(sk_load_byte);
#endif
#endif /* BPF_JIT_MIPS_OP_H */
/*
* bpf_jib_asm.S: Packet/header access helper functions for MIPS/MIPS64 BPF
* compiler.
*
* Copyright (C) 2015 Imagination Technologies Ltd.
* Author: Markos Chandras <markos.chandras@imgtec.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; version 2 of the License.
*/
#include <asm/asm.h>
#include <asm/isa-rev.h>
#include <asm/regdef.h>
#include "bpf_jit.h"
/* ABI
*
* r_skb_hl skb header length
* r_skb_data skb data
* r_off(a1) offset register
* r_A BPF register A
* r_X PF register X
* r_skb(a0) *skb
* r_M *scratch memory
* r_skb_le skb length
* r_s0 Scratch register 0
* r_s1 Scratch register 1
*
* On entry:
* a0: *skb
* a1: offset (imm or imm + X)
*
* All non-BPF-ABI registers are free for use. On return, we only
* care about r_ret. The BPF-ABI registers are assumed to remain
* unmodified during the entire filter operation.
*/
#define skb a0
#define offset a1
#define SKF_LL_OFF (-0x200000) /* Can't include linux/filter.h in assembly */
/* We know better :) so prevent assembler reordering etc */
.set noreorder
#define is_offset_negative(TYPE) \
/* If offset is negative we have more work to do */ \
slti t0, offset, 0; \
bgtz t0, bpf_slow_path_##TYPE##_neg; \
/* Be careful what follows in DS. */
#define is_offset_in_header(SIZE, TYPE) \
/* Reading from header? */ \
addiu $r_s0, $r_skb_hl, -SIZE; \
slt t0, $r_s0, offset; \
bgtz t0, bpf_slow_path_##TYPE; \
LEAF(sk_load_word)
is_offset_negative(word)
FEXPORT(sk_load_word_positive)
is_offset_in_header(4, word)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
.set reorder
lw $r_A, 0(t1)
.set noreorder
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh t0, $r_A
rotr $r_A, t0, 16
# else
sll t0, $r_A, 24
srl t1, $r_A, 24
srl t2, $r_A, 8
or t0, t0, t1
andi t2, t2, 0xff00
andi t1, $r_A, 0xff00
or t0, t0, t2
sll t1, t1, 8
or $r_A, t0, t1
# endif
#endif
jr $r_ra
move $r_ret, zero
END(sk_load_word)
LEAF(sk_load_half)
is_offset_negative(half)
FEXPORT(sk_load_half_positive)
is_offset_in_header(2, half)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
lhu $r_A, 0(t1)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh $r_A, $r_A
# else
sll t0, $r_A, 8
srl t1, $r_A, 8
andi t0, t0, 0xff00
or $r_A, t0, t1
# endif
#endif
jr $r_ra
move $r_ret, zero
END(sk_load_half)
LEAF(sk_load_byte)
is_offset_negative(byte)
FEXPORT(sk_load_byte_positive)
is_offset_in_header(1, byte)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
lbu $r_A, 0(t1)
jr $r_ra
move $r_ret, zero
END(sk_load_byte)
/*
* call skb_copy_bits:
* (prototype in linux/skbuff.h)
*
* int skb_copy_bits(sk_buff *skb, int offset, void *to, int len)
*
* o32 mandates we leave 4 spaces for argument registers in case
* the callee needs to use them. Even though we don't care about
* the argument registers ourselves, we need to allocate that space
* to remain ABI compliant since the callee may want to use that space.
* We also allocate 2 more spaces for $r_ra and our return register (*to).
*
* n64 is a bit different. The *caller* will allocate the space to preserve
* the arguments. So in 64-bit kernels, we allocate the 4-arg space for no
* good reason but it does not matter that much really.
*
* (void *to) is returned in r_s0
*
*/
#ifdef CONFIG_CPU_LITTLE_ENDIAN
#define DS_OFFSET(SIZE) (4 * SZREG)
#else
#define DS_OFFSET(SIZE) ((4 * SZREG) + (4 - SIZE))
#endif
#define bpf_slow_path_common(SIZE) \
/* Quick check. Are we within reasonable boundaries? */ \
LONG_ADDIU $r_s1, $r_skb_len, -SIZE; \
sltu $r_s0, offset, $r_s1; \
beqz $r_s0, fault; \
/* Load 4th argument in DS */ \
LONG_ADDIU a3, zero, SIZE; \
PTR_ADDIU $r_sp, $r_sp, -(6 * SZREG); \
PTR_LA t0, skb_copy_bits; \
PTR_S $r_ra, (5 * SZREG)($r_sp); \
/* Assign low slot to a2 */ \
PTR_ADDIU a2, $r_sp, DS_OFFSET(SIZE); \
jalr t0; \
/* Reset our destination slot (DS but it's ok) */ \
INT_S zero, (4 * SZREG)($r_sp); \
/* \
* skb_copy_bits returns 0 on success and -EFAULT \
* on error. Our data live in a2. Do not bother with \
* our data if an error has been returned. \
*/ \
/* Restore our frame */ \
PTR_L $r_ra, (5 * SZREG)($r_sp); \
INT_L $r_s0, (4 * SZREG)($r_sp); \
bltz v0, fault; \
PTR_ADDIU $r_sp, $r_sp, 6 * SZREG; \
move $r_ret, zero; \
NESTED(bpf_slow_path_word, (6 * SZREG), $r_sp)
bpf_slow_path_common(4)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh t0, $r_s0
jr $r_ra
rotr $r_A, t0, 16
# else
sll t0, $r_s0, 24
srl t1, $r_s0, 24
srl t2, $r_s0, 8
or t0, t0, t1
andi t2, t2, 0xff00
andi t1, $r_s0, 0xff00
or t0, t0, t2
sll t1, t1, 8
jr $r_ra
or $r_A, t0, t1
# endif
#else
jr $r_ra
move $r_A, $r_s0
#endif
END(bpf_slow_path_word)
NESTED(bpf_slow_path_half, (6 * SZREG), $r_sp)
bpf_slow_path_common(2)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
jr $r_ra
wsbh $r_A, $r_s0
# else
sll t0, $r_s0, 8
andi t1, $r_s0, 0xff00
andi t0, t0, 0xff00
srl t1, t1, 8
jr $r_ra
or $r_A, t0, t1
# endif
#else
jr $r_ra
move $r_A, $r_s0
#endif
END(bpf_slow_path_half)
NESTED(bpf_slow_path_byte, (6 * SZREG), $r_sp)
bpf_slow_path_common(1)
jr $r_ra
move $r_A, $r_s0
END(bpf_slow_path_byte)
/*
* Negative entry points
*/
.macro bpf_is_end_of_data
li t0, SKF_LL_OFF
/* Reading link layer data? */
slt t1, offset, t0
bgtz t1, fault
/* Be careful what follows in DS. */
.endm
/*
* call skb_copy_bits:
* (prototype in linux/filter.h)
*
* void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb,
* int k, unsigned int size)
*
* see above (bpf_slow_path_common) for ABI restrictions
*/
#define bpf_negative_common(SIZE) \
PTR_ADDIU $r_sp, $r_sp, -(6 * SZREG); \
PTR_LA t0, bpf_internal_load_pointer_neg_helper; \
PTR_S $r_ra, (5 * SZREG)($r_sp); \
jalr t0; \
li a2, SIZE; \
PTR_L $r_ra, (5 * SZREG)($r_sp); \
/* Check return pointer */ \
beqz v0, fault; \
PTR_ADDIU $r_sp, $r_sp, 6 * SZREG; \
/* Preserve our pointer */ \
move $r_s0, v0; \
/* Set return value */ \
move $r_ret, zero; \
bpf_slow_path_word_neg:
bpf_is_end_of_data
NESTED(sk_load_word_negative, (6 * SZREG), $r_sp)
bpf_negative_common(4)
jr $r_ra
lw $r_A, 0($r_s0)
END(sk_load_word_negative)
bpf_slow_path_half_neg:
bpf_is_end_of_data
NESTED(sk_load_half_negative, (6 * SZREG), $r_sp)
bpf_negative_common(2)
jr $r_ra
lhu $r_A, 0($r_s0)
END(sk_load_half_negative)
bpf_slow_path_byte_neg:
bpf_is_end_of_data
NESTED(sk_load_byte_negative, (6 * SZREG), $r_sp)
bpf_negative_common(1)
jr $r_ra
lbu $r_A, 0($r_s0)
END(sk_load_byte_negative)
fault:
jr $r_ra
addiu $r_ret, zero, 1
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Just-In-Time compiler for eBPF bytecode on 32-bit and 64-bit MIPS.
*
* Copyright (c) 2021 Anyfi Networks AB.
* Author: Johan Almbladh <johan.almbladh@gmail.com>
*
* Based on code and ideas from
* Copyright (c) 2017 Cavium, Inc.
* Copyright (c) 2017 Shubham Bansal <illusionist.neo@gmail.com>
* Copyright (c) 2011 Mircea Gherzan <mgherzan@gmail.com>
*/
#ifndef _BPF_JIT_COMP_H
#define _BPF_JIT_COMP_H
/* MIPS registers */
#define MIPS_R_ZERO 0 /* Const zero */
#define MIPS_R_AT 1 /* Asm temp */
#define MIPS_R_V0 2 /* Result */
#define MIPS_R_V1 3 /* Result */
#define MIPS_R_A0 4 /* Argument */
#define MIPS_R_A1 5 /* Argument */
#define MIPS_R_A2 6 /* Argument */
#define MIPS_R_A3 7 /* Argument */
#define MIPS_R_A4 8 /* Arg (n64) */
#define MIPS_R_A5 9 /* Arg (n64) */
#define MIPS_R_A6 10 /* Arg (n64) */
#define MIPS_R_A7 11 /* Arg (n64) */
#define MIPS_R_T0 8 /* Temp (o32) */
#define MIPS_R_T1 9 /* Temp (o32) */
#define MIPS_R_T2 10 /* Temp (o32) */
#define MIPS_R_T3 11 /* Temp (o32) */
#define MIPS_R_T4 12 /* Temporary */
#define MIPS_R_T5 13 /* Temporary */
#define MIPS_R_T6 14 /* Temporary */
#define MIPS_R_T7 15 /* Temporary */
#define MIPS_R_S0 16 /* Saved */
#define MIPS_R_S1 17 /* Saved */
#define MIPS_R_S2 18 /* Saved */
#define MIPS_R_S3 19 /* Saved */
#define MIPS_R_S4 20 /* Saved */
#define MIPS_R_S5 21 /* Saved */
#define MIPS_R_S6 22 /* Saved */
#define MIPS_R_S7 23 /* Saved */
#define MIPS_R_T8 24 /* Temporary */
#define MIPS_R_T9 25 /* Temporary */
/* MIPS_R_K0 26 Reserved */
/* MIPS_R_K1 27 Reserved */
#define MIPS_R_GP 28 /* Global ptr */
#define MIPS_R_SP 29 /* Stack ptr */
#define MIPS_R_FP 30 /* Frame ptr */
#define MIPS_R_RA 31 /* Return */
/*
* Jump address mask for immediate jumps. The four most significant bits
* must be equal to PC.
*/
#define MIPS_JMP_MASK 0x0fffffffUL
/* Maximum number of iterations in offset table computation */
#define JIT_MAX_ITERATIONS 8
/*
* Jump pseudo-instructions used internally
* for branch conversion and branch optimization.
*/
#define JIT_JNSET 0xe0
#define JIT_JNOP 0xf0
/* Descriptor flag for PC-relative branch conversion */
#define JIT_DESC_CONVERT BIT(31)
/* JIT context for an eBPF program */
struct jit_context {
struct bpf_prog *program; /* The eBPF program being JITed */
u32 *descriptors; /* eBPF to JITed CPU insn descriptors */
u32 *target; /* JITed code buffer */
u32 bpf_index; /* Index of current BPF program insn */
u32 jit_index; /* Index of current JIT target insn */
u32 changes; /* Number of PC-relative branch conv */
u32 accessed; /* Bit mask of read eBPF registers */
u32 clobbered; /* Bit mask of modified CPU registers */
u32 stack_size; /* Total allocated stack size in bytes */
u32 saved_size; /* Size of callee-saved registers */
u32 stack_used; /* Stack size used for function calls */
};
/* Emit the instruction if the JIT memory space has been allocated */
#define __emit(ctx, func, ...) \
do { \
if ((ctx)->target != NULL) { \
u32 *p = &(ctx)->target[ctx->jit_index]; \
uasm_i_##func(&p, ##__VA_ARGS__); \
} \
(ctx)->jit_index++; \
} while (0)
#define emit(...) __emit(__VA_ARGS__)
/* Workaround for R10000 ll/sc errata */
#ifdef CONFIG_WAR_R10000
#define LLSC_beqz beqzl
#else
#define LLSC_beqz beqz
#endif
/* Workaround for Loongson-3 ll/sc errata */
#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define LLSC_sync(ctx) emit(ctx, sync, 0)
#define LLSC_offset 4
#else
#define LLSC_sync(ctx)
#define LLSC_offset 0
#endif
/* Workaround for Loongson-2F jump errata */
#ifdef CONFIG_CPU_JUMP_WORKAROUNDS
#define JALR_MASK 0xffffffffcfffffffULL
#else
#define JALR_MASK (~0ULL)
#endif
/*
* Mark a BPF register as accessed, it needs to be
* initialized by the program if expected, e.g. FP.
*/
static inline void access_reg(struct jit_context *ctx, u8 reg)
{
ctx->accessed |= BIT(reg);
}
/*
* Mark a CPU register as clobbered, it needs to be
* saved/restored by the program if callee-saved.
*/
static inline void clobber_reg(struct jit_context *ctx, u8 reg)
{
ctx->clobbered |= BIT(reg);
}
/*
* Push registers on the stack, starting at a given depth from the stack
* pointer and increasing. The next depth to be written is returned.
*/
int push_regs(struct jit_context *ctx, u32 mask, u32 excl, int depth);
/*
* Pop registers from the stack, starting at a given depth from the stack
* pointer and increasing. The next depth to be read is returned.
*/
int pop_regs(struct jit_context *ctx, u32 mask, u32 excl, int depth);
/* Compute the 28-bit jump target address from a BPF program location */
int get_target(struct jit_context *ctx, u32 loc);
/* Compute the PC-relative offset to relative BPF program offset */
int get_offset(const struct jit_context *ctx, int off);
/* dst = imm (32-bit) */
void emit_mov_i(struct jit_context *ctx, u8 dst, s32 imm);
/* dst = src (32-bit) */
void emit_mov_r(struct jit_context *ctx, u8 dst, u8 src);
/* Validate ALU/ALU64 immediate range */
bool valid_alu_i(u8 op, s32 imm);
/* Rewrite ALU/ALU64 immediate operation */
bool rewrite_alu_i(u8 op, s32 imm, u8 *alu, s32 *val);
/* ALU immediate operation (32-bit) */
void emit_alu_i(struct jit_context *ctx, u8 dst, s32 imm, u8 op);
/* ALU register operation (32-bit) */
void emit_alu_r(struct jit_context *ctx, u8 dst, u8 src, u8 op);
/* Atomic read-modify-write (32-bit) */
void emit_atomic_r(struct jit_context *ctx, u8 dst, u8 src, s16 off, u8 code);
/* Atomic compare-and-exchange (32-bit) */
void emit_cmpxchg_r(struct jit_context *ctx, u8 dst, u8 src, u8 res, s16 off);
/* Swap bytes and truncate a register word or half word */
void emit_bswap_r(struct jit_context *ctx, u8 dst, u32 width);
/* Validate JMP/JMP32 immediate range */
bool valid_jmp_i(u8 op, s32 imm);
/* Prepare a PC-relative jump operation with immediate conditional */
void setup_jmp_i(struct jit_context *ctx, s32 imm, u8 width,
u8 bpf_op, s16 bpf_off, u8 *jit_op, s32 *jit_off);
/* Prepare a PC-relative jump operation with register conditional */
void setup_jmp_r(struct jit_context *ctx, bool same_reg,
u8 bpf_op, s16 bpf_off, u8 *jit_op, s32 *jit_off);
/* Finish a PC-relative jump operation */
int finish_jmp(struct jit_context *ctx, u8 jit_op, s16 bpf_off);
/* Conditional JMP/JMP32 immediate */
void emit_jmp_i(struct jit_context *ctx, u8 dst, s32 imm, s32 off, u8 op);
/* Conditional JMP/JMP32 register */
void emit_jmp_r(struct jit_context *ctx, u8 dst, u8 src, s32 off, u8 op);
/* Jump always */
int emit_ja(struct jit_context *ctx, s16 off);
/* Jump to epilogue */
int emit_exit(struct jit_context *ctx);
/*
* Build program prologue to set up the stack and registers.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
void build_prologue(struct jit_context *ctx);
/*
* Build the program epilogue to restore the stack and registers.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
void build_epilogue(struct jit_context *ctx, int dest_reg);
/*
* Convert an eBPF instruction to native instruction, i.e
* JITs an eBPF instruction.
* Returns :
* 0 - Successfully JITed an 8-byte eBPF instruction
* >0 - Successfully JITed a 16-byte eBPF instruction
* <0 - Failed to JIT.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
int build_insn(const struct bpf_insn *insn, struct jit_context *ctx);
#endif /* _BPF_JIT_COMP_H */
@@ -11,14 +11,23 @@
#include <linux/module.h>
#include <linux/uaccess.h>

+#ifdef CONFIG_BPF_JIT
+int rv_bpf_fixup_exception(const struct exception_table_entry *ex, struct pt_regs *regs);
+#endif
+
int fixup_exception(struct pt_regs *regs)
{
    const struct exception_table_entry *fixup;

    fixup = search_exception_tables(regs->epc);
-   if (fixup) {
-       regs->epc = fixup->fixup;
-       return 1;
-   }
-   return 0;
+   if (!fixup)
+       return 0;
+
+#ifdef CONFIG_BPF_JIT
+   if (regs->epc >= BPF_JIT_REGION_START && regs->epc < BPF_JIT_REGION_END)
+       return rv_bpf_fixup_exception(fixup, regs);
+#endif
+
+   regs->epc = fixup->fixup;
+   return 1;
}
@@ -71,6 +71,7 @@ struct rv_jit_context {
    int ninsns;
    int epilogue_offset;
    int *offset;        /* BPF to RV */
+   int nexentries;
    unsigned long flags;
    int stack_size;
};
@@ -5,6 +5,7 @@
 *
 */

+#include <linux/bitfield.h>
#include <linux/bpf.h>
#include <linux/filter.h>
#include "bpf_jit.h"
@@ -27,6 +28,21 @@ static const int regmap[] = {
    [BPF_REG_AX] = RV_REG_T0,
};

static const int pt_regmap[] = {
[RV_REG_A0] = offsetof(struct pt_regs, a0),
[RV_REG_A1] = offsetof(struct pt_regs, a1),
[RV_REG_A2] = offsetof(struct pt_regs, a2),
[RV_REG_A3] = offsetof(struct pt_regs, a3),
[RV_REG_A4] = offsetof(struct pt_regs, a4),
[RV_REG_A5] = offsetof(struct pt_regs, a5),
[RV_REG_S1] = offsetof(struct pt_regs, s1),
[RV_REG_S2] = offsetof(struct pt_regs, s2),
[RV_REG_S3] = offsetof(struct pt_regs, s3),
[RV_REG_S4] = offsetof(struct pt_regs, s4),
[RV_REG_S5] = offsetof(struct pt_regs, s5),
[RV_REG_T0] = offsetof(struct pt_regs, t0),
};
enum {
    RV_CTX_F_SEEN_TAIL_CALL = 0,
    RV_CTX_F_SEEN_CALL = RV_REG_RA,
@@ -440,6 +456,69 @@ static int emit_call(bool fixed, u64 addr, struct rv_jit_context *ctx)
    return 0;
}

#define BPF_FIXUP_OFFSET_MASK GENMASK(26, 0)
#define BPF_FIXUP_REG_MASK GENMASK(31, 27)
int rv_bpf_fixup_exception(const struct exception_table_entry *ex,
struct pt_regs *regs)
{
off_t offset = FIELD_GET(BPF_FIXUP_OFFSET_MASK, ex->fixup);
int regs_offset = FIELD_GET(BPF_FIXUP_REG_MASK, ex->fixup);
*(unsigned long *)((void *)regs + pt_regmap[regs_offset]) = 0;
regs->epc = (unsigned long)&ex->fixup - offset;
return 1;
}
/* For accesses to BTF pointers, add an entry to the exception table */
static int add_exception_handler(const struct bpf_insn *insn,
struct rv_jit_context *ctx,
int dst_reg, int insn_len)
{
struct exception_table_entry *ex;
unsigned long pc;
off_t offset;
if (!ctx->insns || !ctx->prog->aux->extable || BPF_MODE(insn->code) != BPF_PROBE_MEM)
return 0;
if (WARN_ON_ONCE(ctx->nexentries >= ctx->prog->aux->num_exentries))
return -EINVAL;
if (WARN_ON_ONCE(insn_len > ctx->ninsns))
return -EINVAL;
if (WARN_ON_ONCE(!rvc_enabled() && insn_len == 1))
return -EINVAL;
ex = &ctx->prog->aux->extable[ctx->nexentries];
pc = (unsigned long)&ctx->insns[ctx->ninsns - insn_len];
offset = pc - (long)&ex->insn;
if (WARN_ON_ONCE(offset >= 0 || offset < INT_MIN))
return -ERANGE;
ex->insn = pc;
/*
* Since the extable follows the program, the fixup offset is always
* negative and limited to BPF_JIT_REGION_SIZE. Store a positive value
* to keep things simple, and put the destination register in the upper
* bits. We don't need to worry about buildtime or runtime sort
* modifying the upper bits because the table is already sorted, and
* isn't part of the main exception table.
*/
offset = (long)&ex->fixup - (pc + insn_len * sizeof(u16));
if (!FIELD_FIT(BPF_FIXUP_OFFSET_MASK, offset))
return -ERANGE;
ex->fixup = FIELD_PREP(BPF_FIXUP_OFFSET_MASK, offset) |
FIELD_PREP(BPF_FIXUP_REG_MASK, dst_reg);
ctx->nexentries++;
return 0;
}
int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
                      bool extra_pass)
{
@@ -893,52 +972,86 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
    /* LDX: dst = *(size *)(src + off) */
    case BPF_LDX | BPF_MEM | BPF_B:
-       if (is_12b_int(off)) {
-           emit(rv_lbu(rd, off, rs), ctx);
-           break;
-       }
-
-       emit_imm(RV_REG_T1, off, ctx);
-       emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
-       emit(rv_lbu(rd, 0, RV_REG_T1), ctx);
-       if (insn_is_zext(&insn[1]))
-           return 1;
-       break;
-   case BPF_LDX | BPF_MEM | BPF_H:
-       if (is_12b_int(off)) {
-           emit(rv_lhu(rd, off, rs), ctx);
-           break;
-       }
-
-       emit_imm(RV_REG_T1, off, ctx);
-       emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
-       emit(rv_lhu(rd, 0, RV_REG_T1), ctx);
-       if (insn_is_zext(&insn[1]))
-           return 1;
-       break;
-   case BPF_LDX | BPF_MEM | BPF_W:
-       if (is_12b_int(off)) {
-           emit(rv_lwu(rd, off, rs), ctx);
-           break;
-       }
-
-       emit_imm(RV_REG_T1, off, ctx);
-       emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
-       emit(rv_lwu(rd, 0, RV_REG_T1), ctx);
-       if (insn_is_zext(&insn[1]))
-           return 1;
-       break;
-   case BPF_LDX | BPF_MEM | BPF_DW:
-       if (is_12b_int(off)) {
-           emit_ld(rd, off, rs, ctx);
-           break;
-       }
-
-       emit_imm(RV_REG_T1, off, ctx);
-       emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
-       emit_ld(rd, 0, RV_REG_T1, ctx);
-       break;
+   case BPF_LDX | BPF_MEM | BPF_H:
+   case BPF_LDX | BPF_MEM | BPF_W:
+   case BPF_LDX | BPF_MEM | BPF_DW:
+   case BPF_LDX | BPF_PROBE_MEM | BPF_B:
+   case BPF_LDX | BPF_PROBE_MEM | BPF_H:
+   case BPF_LDX | BPF_PROBE_MEM | BPF_W:
+   case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
+   {
+       int insn_len, insns_start;
+
+       switch (BPF_SIZE(code)) {
+       case BPF_B:
+           if (is_12b_int(off)) {
+               insns_start = ctx->ninsns;
+               emit(rv_lbu(rd, off, rs), ctx);
+               insn_len = ctx->ninsns - insns_start;
+               break;
+           }
+
+           emit_imm(RV_REG_T1, off, ctx);
+           emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
+           insns_start = ctx->ninsns;
+           emit(rv_lbu(rd, 0, RV_REG_T1), ctx);
+           insn_len = ctx->ninsns - insns_start;
+           if (insn_is_zext(&insn[1]))
+               return 1;
+           break;
+       case BPF_H:
+           if (is_12b_int(off)) {
+               insns_start = ctx->ninsns;
+               emit(rv_lhu(rd, off, rs), ctx);
+               insn_len = ctx->ninsns - insns_start;
+               break;
+           }
+
+           emit_imm(RV_REG_T1, off, ctx);
+           emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
+           insns_start = ctx->ninsns;
+           emit(rv_lhu(rd, 0, RV_REG_T1), ctx);
+           insn_len = ctx->ninsns - insns_start;
+           if (insn_is_zext(&insn[1]))
+               return 1;
+           break;
+       case BPF_W:
+           if (is_12b_int(off)) {
+               insns_start = ctx->ninsns;
+               emit(rv_lwu(rd, off, rs), ctx);
+               insn_len = ctx->ninsns - insns_start;
+               break;
+           }
+
+           emit_imm(RV_REG_T1, off, ctx);
+           emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
+           insns_start = ctx->ninsns;
+           emit(rv_lwu(rd, 0, RV_REG_T1), ctx);
+           insn_len = ctx->ninsns - insns_start;
+           if (insn_is_zext(&insn[1]))
+               return 1;
+           break;
+       case BPF_DW:
+           if (is_12b_int(off)) {
+               insns_start = ctx->ninsns;
+               emit_ld(rd, off, rs, ctx);
+               insn_len = ctx->ninsns - insns_start;
+               break;
+           }
+
+           emit_imm(RV_REG_T1, off, ctx);
+           emit_add(RV_REG_T1, RV_REG_T1, rs, ctx);
+           insns_start = ctx->ninsns;
+           emit_ld(rd, 0, RV_REG_T1, ctx);
+           insn_len = ctx->ninsns - insns_start;
+           break;
+       }
+
+       ret = add_exception_handler(insn, ctx, rd, insn_len);
+       if (ret)
+           return ret;
+       break;
+   }
    /* speculation barrier */
    case BPF_ST | BPF_NOSPEC:
        break;
@@ -11,7 +11,7 @@
#include "bpf_jit.h"

/* Number of iterations to try until offsets converge. */
-#define NR_JIT_ITERATIONS  16
+#define NR_JIT_ITERATIONS  32

static int build_body(struct rv_jit_context *ctx, bool extra_pass, int *offset)
{
@@ -41,12 +41,12 @@ bool bpf_jit_needs_zext(void)

struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
+   unsigned int prog_size = 0, extable_size = 0;
    bool tmp_blinded = false, extra_pass = false;
    struct bpf_prog *tmp, *orig_prog = prog;
    int pass = 0, prev_ninsns = 0, i;
    struct rv_jit_data *jit_data;
    struct rv_jit_context *ctx;
-   unsigned int image_size = 0;

    if (!prog->jit_requested)
        return orig_prog;
@@ -73,7 +73,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
    if (ctx->offset) {
        extra_pass = true;
-       image_size = sizeof(*ctx->insns) * ctx->ninsns;
+       prog_size = sizeof(*ctx->insns) * ctx->ninsns;
        goto skip_init_ctx;
    }
@@ -102,10 +102,13 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
        if (ctx->ninsns == prev_ninsns) {
            if (jit_data->header)
                break;
+           /* obtain the actual image size */
+           extable_size = prog->aux->num_exentries *
+               sizeof(struct exception_table_entry);
+           prog_size = sizeof(*ctx->insns) * ctx->ninsns;

-           image_size = sizeof(*ctx->insns) * ctx->ninsns;
            jit_data->header =
-               bpf_jit_binary_alloc(image_size,
+               bpf_jit_binary_alloc(prog_size + extable_size,
                                     &jit_data->image,
                                     sizeof(u32),
                                     bpf_fill_ill_insns);
@@ -131,9 +134,13 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
        goto out_offset;
    }

+   if (extable_size)
+       prog->aux->extable = (void *)ctx->insns + prog_size;
+
skip_init_ctx:
    pass++;
    ctx->ninsns = 0;
+   ctx->nexentries = 0;

    bpf_jit_build_prologue(ctx);
    if (build_body(ctx, extra_pass, NULL)) {
@@ -144,11 +151,11 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
    bpf_jit_build_epilogue(ctx);

    if (bpf_jit_enable > 1)
-       bpf_jit_dump(prog->len, image_size, pass, ctx->insns);
+       bpf_jit_dump(prog->len, prog_size, pass, ctx->insns);

    prog->bpf_func = (void *)ctx->insns;
    prog->jited = 1;
-   prog->jited_len = image_size;
+   prog->jited_len = prog_size;

    bpf_flush_icache(jit_data->header, ctx->insns + ctx->ninsns);
@@ -721,6 +721,20 @@ static void maybe_emit_mod(u8 **pprog, u32 dst_reg, u32 src_reg, bool is64)
    *pprog = prog;
}

/*
* Similar version of maybe_emit_mod() for a single register
*/
static void maybe_emit_1mod(u8 **pprog, u32 reg, bool is64)
{
u8 *prog = *pprog;
if (is64)
EMIT1(add_1mod(0x48, reg));
else if (is_ereg(reg))
EMIT1(add_1mod(0x40, reg));
*pprog = prog;
}
/* LDX: dst_reg = *(u8*)(src_reg + off) */
static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
{
@@ -951,10 +965,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
        /* neg dst */
    case BPF_ALU | BPF_NEG:
    case BPF_ALU64 | BPF_NEG:
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_ALU64);
        EMIT2(0xF7, add_1reg(0xD8, dst_reg));
        break;
@@ -968,10 +980,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
    case BPF_ALU64 | BPF_AND | BPF_K:
    case BPF_ALU64 | BPF_OR | BPF_K:
    case BPF_ALU64 | BPF_XOR | BPF_K:
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_ALU64);

        /*
         * b3 holds 'normal' opcode, b2 short form only valid
@@ -1028,19 +1038,30 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
    case BPF_ALU64 | BPF_MOD | BPF_X:
    case BPF_ALU64 | BPF_DIV | BPF_X:
    case BPF_ALU64 | BPF_MOD | BPF_K:
-   case BPF_ALU64 | BPF_DIV | BPF_K:
-       EMIT1(0x50); /* push rax */
-       EMIT1(0x52); /* push rdx */
-
-       if (BPF_SRC(insn->code) == BPF_X)
-           /* mov r11, src_reg */
-           EMIT_mov(AUX_REG, src_reg);
-       else
-           /* mov r11, imm32 */
-           EMIT3_off32(0x49, 0xC7, 0xC3, imm32);
-
-       /* mov rax, dst_reg */
-       EMIT_mov(BPF_REG_0, dst_reg);
+   case BPF_ALU64 | BPF_DIV | BPF_K: {
+       bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+
+       if (dst_reg != BPF_REG_0)
+           EMIT1(0x50); /* push rax */
+       if (dst_reg != BPF_REG_3)
+           EMIT1(0x52); /* push rdx */
+
+       if (BPF_SRC(insn->code) == BPF_X) {
+           if (src_reg == BPF_REG_0 ||
+               src_reg == BPF_REG_3) {
+               /* mov r11, src_reg */
+               EMIT_mov(AUX_REG, src_reg);
+               src_reg = AUX_REG;
+           }
+       } else {
+           /* mov r11, imm32 */
+           EMIT3_off32(0x49, 0xC7, 0xC3, imm32);
+           src_reg = AUX_REG;
+       }
+
+       if (dst_reg != BPF_REG_0)
+           /* mov rax, dst_reg */
+           emit_mov_reg(&prog, is64, BPF_REG_0, dst_reg);

        /*
         * xor edx, edx
@@ -1048,33 +1069,30 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
         */
        EMIT2(0x31, 0xd2);

-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           /* div r11 */
-           EMIT3(0x49, 0xF7, 0xF3);
-       else
-           /* div r11d */
-           EMIT3(0x41, 0xF7, 0xF3);
-
-       if (BPF_OP(insn->code) == BPF_MOD)
-           /* mov r11, rdx */
-           EMIT3(0x49, 0x89, 0xD3);
-       else
-           /* mov r11, rax */
-           EMIT3(0x49, 0x89, 0xC3);
-
-       EMIT1(0x5A); /* pop rdx */
-       EMIT1(0x58); /* pop rax */
-
-       /* mov dst_reg, r11 */
-       EMIT_mov(dst_reg, AUX_REG);
+       /* div src_reg */
+       maybe_emit_1mod(&prog, src_reg, is64);
+       EMIT2(0xF7, add_1reg(0xF0, src_reg));
+
+       if (BPF_OP(insn->code) == BPF_MOD &&
+           dst_reg != BPF_REG_3)
+           /* mov dst_reg, rdx */
+           emit_mov_reg(&prog, is64, dst_reg, BPF_REG_3);
+       else if (BPF_OP(insn->code) == BPF_DIV &&
+                dst_reg != BPF_REG_0)
+           /* mov dst_reg, rax */
+           emit_mov_reg(&prog, is64, dst_reg, BPF_REG_0);
+
+       if (dst_reg != BPF_REG_3)
+           EMIT1(0x5A); /* pop rdx */
+       if (dst_reg != BPF_REG_0)
+           EMIT1(0x58); /* pop rax */
        break;
+   }

    case BPF_ALU | BPF_MUL | BPF_K:
    case BPF_ALU64 | BPF_MUL | BPF_K:
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_2mod(0x48, dst_reg, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_2mod(0x40, dst_reg, dst_reg));
+       maybe_emit_mod(&prog, dst_reg, dst_reg,
+                  BPF_CLASS(insn->code) == BPF_ALU64);

        if (is_imm8(imm32))
            /* imul dst_reg, dst_reg, imm8 */
@@ -1089,10 +1107,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,

    case BPF_ALU | BPF_MUL | BPF_X:
    case BPF_ALU64 | BPF_MUL | BPF_X:
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_2mod(0x48, src_reg, dst_reg));
-       else if (is_ereg(dst_reg) || is_ereg(src_reg))
-           EMIT1(add_2mod(0x40, src_reg, dst_reg));
+       maybe_emit_mod(&prog, src_reg, dst_reg,
+                  BPF_CLASS(insn->code) == BPF_ALU64);

        /* imul dst_reg, src_reg */
        EMIT3(0x0F, 0xAF, add_2reg(0xC0, src_reg, dst_reg));
@@ -1105,10 +1121,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
    case BPF_ALU64 | BPF_LSH | BPF_K:
    case BPF_ALU64 | BPF_RSH | BPF_K:
    case BPF_ALU64 | BPF_ARSH | BPF_K:
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_ALU64);

        b3 = simple_alu_opcodes[BPF_OP(insn->code)];
        if (imm32 == 1)
@@ -1139,10 +1153,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
        }

        /* shl %rax, %cl | shr %rax, %cl | sar %rax, %cl */
-       if (BPF_CLASS(insn->code) == BPF_ALU64)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_ALU64);

        b3 = simple_alu_opcodes[BPF_OP(insn->code)];
        EMIT2(0xD3, add_1reg(b3, dst_reg));
@@ -1452,10 +1464,8 @@ st: if (is_imm8(insn->off))
    case BPF_JMP | BPF_JSET | BPF_K:
    case BPF_JMP32 | BPF_JSET | BPF_K:
        /* test dst_reg, imm32 */
-       if (BPF_CLASS(insn->code) == BPF_JMP)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_JMP);
        EMIT2_off32(0xF7, add_1reg(0xC0, dst_reg), imm32);
        goto emit_cond_jmp;
@@ -1488,10 +1498,8 @@ st: if (is_imm8(insn->off))
        }

        /* cmp dst_reg, imm8/32 */
-       if (BPF_CLASS(insn->code) == BPF_JMP)
-           EMIT1(add_1mod(0x48, dst_reg));
-       else if (is_ereg(dst_reg))
-           EMIT1(add_1mod(0x40, dst_reg));
+       maybe_emit_1mod(&prog, dst_reg,
+               BPF_CLASS(insn->code) == BPF_JMP);

        if (is_imm8(imm32))
            EMIT3(0x83, add_1reg(0xF8, dst_reg), imm32);
@@ -168,6 +168,7 @@ struct bpf_map {
    u32 key_size;
    u32 value_size;
    u32 max_entries;
+   u64 map_extra; /* any per-map-type extra fields */
    u32 map_flags;
    int spin_lock_off; /* >=0 valid offset, <0 error */
    int timer_off; /* >=0 valid offset, <0 error */
@@ -175,15 +176,15 @@ struct bpf_map {
    int numa_node;
    u32 btf_key_type_id;
    u32 btf_value_type_id;
+   u32 btf_vmlinux_value_type_id;
    struct btf *btf;
#ifdef CONFIG_MEMCG_KMEM
    struct mem_cgroup *memcg;
#endif
    char name[BPF_OBJ_NAME_LEN];
-   u32 btf_vmlinux_value_type_id;
    bool bypass_spec_v1;
    bool frozen; /* write-once; write-protected by freeze_mutex */
-   /* 22 bytes hole */
+   /* 14 bytes hole */

    /* The 3rd and 4th cacheline with misc members to avoid false sharing
     * particularly with refcounting.
@@ -513,7 +514,7 @@ struct bpf_verifier_ops {
                    const struct btf_type *t, int off, int size,
                    enum bpf_access_type atype,
                    u32 *next_btf_id);
-   bool (*check_kfunc_call)(u32 kfunc_btf_id);
+   bool (*check_kfunc_call)(u32 kfunc_btf_id, struct module *owner);
};

struct bpf_prog_offload_ops {
@@ -877,6 +878,7 @@ struct bpf_prog_aux {
    void *jit_data; /* JIT specific data. arch dependent */
    struct bpf_jit_poke_descriptor *poke_tab;
    struct bpf_kfunc_desc_tab *kfunc_tab;
+   struct bpf_kfunc_btf_tab *kfunc_btf_tab;
    u32 size_poke_tab;
    struct bpf_ksym ksym;
    const struct bpf_prog_ops *ops;
@@ -886,6 +888,7 @@ struct bpf_prog_aux {
    struct bpf_prog *prog;
    struct user_struct *user;
    u64 load_time; /* ns since boottime */
+   u32 verified_insns;
    struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
    char name[BPF_OBJ_NAME_LEN];
#ifdef CONFIG_SECURITY
@@ -1000,6 +1003,10 @@ bool bpf_struct_ops_get(const void *kdata);
void bpf_struct_ops_put(const void *kdata);
int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
                                       void *value);
int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
struct bpf_prog *prog,
const struct btf_func_model *model,
void *image, void *image_end);
static inline bool bpf_try_module_get(const void *data, struct module *owner)
{
    if (owner == BPF_MODULE_OWNER)
@@ -1014,6 +1021,22 @@ static inline void bpf_module_put(const void *data, struct module *owner)
    else
        module_put(owner);
}
#ifdef CONFIG_NET
/* Define it here to avoid the use of forward declaration */
struct bpf_dummy_ops_state {
int val;
};
struct bpf_dummy_ops {
int (*test_1)(struct bpf_dummy_ops_state *cb);
int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
char a3, unsigned long a4);
};
int bpf_struct_ops_test_run(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
#endif
#else
static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
{
@@ -1642,10 +1665,33 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
                                const union bpf_attr *kattr,
                                union bpf_attr __user *uattr);
-bool bpf_prog_test_check_kfunc_call(u32 kfunc_id);
+bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
bool btf_ctx_access(int off, int size, enum bpf_access_type type,
                    const struct bpf_prog *prog,
                    struct bpf_insn_access_aux *info);
static inline bool bpf_tracing_ctx_access(int off, int size,
enum bpf_access_type type)
{
if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
return false;
if (type != BPF_READ)
return false;
if (off % size != 0)
return false;
return true;
}
static inline bool bpf_tracing_btf_ctx_access(int off, int size,
enum bpf_access_type type,
const struct bpf_prog *prog,
struct bpf_insn_access_aux *info)
{
if (!bpf_tracing_ctx_access(off, size, type))
return false;
return btf_ctx_access(off, size, type, prog, info);
}
int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf, int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
const struct btf_type *t, int off, int size, const struct btf_type *t, int off, int size,
enum bpf_access_type atype, enum bpf_access_type atype,
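For context (not part of this diff): these two helpers factor out the bounds, read-only and alignment checks that the tracing and struct_ops verifier callbacks used to open-code. A verifier ->is_valid_access() callback for a BTF-typed context can then be reduced to roughly the sketch below; the function name is hypothetical.

/* Hypothetical ->is_valid_access() callback built on the new helpers. */
static bool example_tracing_is_valid_access(int off, int size,
                                            enum bpf_access_type type,
                                            const struct bpf_prog *prog,
                                            struct bpf_insn_access_aux *info)
{
	/* range/read-only/alignment checks, then BTF-aware access checking */
	return bpf_tracing_btf_ctx_access(off, size, type, prog, info);
}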
@@ -1863,7 +1909,8 @@ static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
 	return -ENOTSUPP;
 }
 
-static inline bool bpf_prog_test_check_kfunc_call(u32 kfunc_id)
+static inline bool bpf_prog_test_check_kfunc_call(u32 kfunc_id,
+                                                  struct module *owner)
 {
 	return false;
 }
@@ -2094,6 +2141,7 @@ extern const struct bpf_func_proto bpf_skc_to_tcp_sock_proto;
 extern const struct bpf_func_proto bpf_skc_to_tcp_timewait_sock_proto;
 extern const struct bpf_func_proto bpf_skc_to_tcp_request_sock_proto;
 extern const struct bpf_func_proto bpf_skc_to_udp6_sock_proto;
+extern const struct bpf_func_proto bpf_skc_to_unix_sock_proto;
 extern const struct bpf_func_proto bpf_copy_from_user_proto;
 extern const struct bpf_func_proto bpf_snprintf_btf_proto;
 extern const struct bpf_func_proto bpf_snprintf_proto;
@@ -2108,6 +2156,7 @@ extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
 extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
 extern const struct bpf_func_proto bpf_sk_setsockopt_proto;
 extern const struct bpf_func_proto bpf_sk_getsockopt_proto;
+extern const struct bpf_func_proto bpf_kallsyms_lookup_name_proto;
 
 const struct bpf_func_proto *tracing_prog_func_proto(
 	enum bpf_func_id func_id, const struct bpf_prog *prog);
...
@@ -125,6 +125,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_STACK, stack_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
 #endif
 BPF_MAP_TYPE(BPF_MAP_TYPE_RINGBUF, ringbuf_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_BLOOM_FILTER, bloom_filter_map_ops)
 
 BPF_LINK_TYPE(BPF_LINK_TYPE_RAW_TRACEPOINT, raw_tracepoint)
 BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
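For illustration only (not part of this diff): the new bloom filter map is add/query only, with the number of hash functions chosen via the new map_extra attribute. A BPF-side sketch, assuming the standard libbpf map definition macros; the map name, sizes and attach point below are illustrative. bpf_map_push_elem() inserts a value and bpf_map_peek_elem() tests membership (no false negatives, possible false positives).

/* Hypothetical BPF object using BPF_MAP_TYPE_BLOOM_FILTER; sketch only. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_BLOOM_FILTER);
	__uint(value_size, sizeof(__u32));
	__uint(max_entries, 4096);
	__uint(map_extra, 3);		/* number of hash functions */
} seen_tgids SEC(".maps");

SEC("tp/syscalls/sys_enter_openat")
int track_openat(void *ctx)
{
	__u32 tgid = bpf_get_current_pid_tgid() >> 32;

	/* 0 means "possibly present"; non-zero means "definitely absent". */
	if (bpf_map_peek_elem(&seen_tgids, &tgid))
		bpf_map_push_elem(&seen_tgids, &tgid, 0);
	return 0;
}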
...
@@ -527,5 +527,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
                             const struct bpf_prog *tgt_prog,
                             u32 btf_id,
                             struct bpf_attach_target_info *tgt_info);
+
+void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab);
 
 #endif /* _LINUX_BPF_VERIFIER_H */
@@ -3,6 +3,7 @@
 #ifndef _LINUX_BPFPTR_H
 #define _LINUX_BPFPTR_H
 
+#include <linux/mm.h>
 #include <linux/sockptr.h>
 
 typedef sockptr_t bpfptr_t;
...
@@ -5,6 +5,7 @@
 #define _LINUX_BTF_H 1
 
 #include <linux/types.h>
+#include <linux/bpfptr.h>
 #include <uapi/linux/btf.h>
 #include <uapi/linux/bpf.h>
 
@@ -238,4 +239,42 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
 }
 #endif
 
+struct kfunc_btf_id_set {
+	struct list_head list;
+	struct btf_id_set *set;
+	struct module *owner;
+};
+
+struct kfunc_btf_id_list;
+
+#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+                               struct kfunc_btf_id_set *s);
+void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+                                 struct kfunc_btf_id_set *s);
+bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id,
+                              struct module *owner);
+#else
+static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+                                             struct kfunc_btf_id_set *s)
+{
+}
+static inline void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+                                               struct kfunc_btf_id_set *s)
+{
+}
+static inline bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist,
+                                            u32 kfunc_id, struct module *owner)
+{
+	return false;
+}
+#endif
+
+#define DEFINE_KFUNC_BTF_ID_SET(set, name)                                  \
+	struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set), \
+	                                 THIS_MODULE }
+
+extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
+extern struct kfunc_btf_id_list prog_test_kfunc_list;
+
 #endif
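For illustration only (not part of this diff): with this API a kernel module can publish a set of BTF IDs that BPF programs may call as kfuncs, and the verifier consults the list through bpf_check_mod_kfunc_call(). Below is a sketch of the module side, patterned after the bpf_testmod selftest updated in this series; the function and symbol names are hypothetical.

/* Hypothetical module exposing one kfunc to BPF test_run programs. */
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

noinline void example_mod_kfunc(void)
{
	/* callable from BPF once the id set below is registered */
}

BTF_SET_START(example_check_kfunc_ids)
BTF_ID(func, example_mod_kfunc)
BTF_SET_END(example_check_kfunc_ids)

DEFINE_KFUNC_BTF_ID_SET(&example_check_kfunc_ids, example_kfunc_btf_set);

static int __init example_init(void)
{
	register_kfunc_btf_id_set(&prog_test_kfunc_list, &example_kfunc_btf_set);
	return 0;
}

static void __exit example_exit(void)
{
	unregister_kfunc_btf_id_set(&prog_test_kfunc_list,
	                            &example_kfunc_btf_set);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");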
@@ -553,9 +553,9 @@ struct bpf_binary_header {
 };
 
 struct bpf_prog_stats {
-	u64 cnt;
-	u64 nsecs;
-	u64 misses;
+	u64_stats_t cnt;
+	u64_stats_t nsecs;
+	u64_stats_t misses;
 	struct u64_stats_sync syncp;
 } __aligned(2 * sizeof(u64));
 
@@ -612,13 +612,14 @@ static __always_inline u32 __bpf_prog_run(const struct bpf_prog *prog,
 	if (static_branch_unlikely(&bpf_stats_enabled_key)) {
 		struct bpf_prog_stats *stats;
 		u64 start = sched_clock();
+		unsigned long flags;
 
 		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
 		stats = this_cpu_ptr(prog->stats);
-		u64_stats_update_begin(&stats->syncp);
-		stats->cnt++;
-		stats->nsecs += sched_clock() - start;
-		u64_stats_update_end(&stats->syncp);
+		flags = u64_stats_update_begin_irqsave(&stats->syncp);
+		u64_stats_inc(&stats->cnt);
+		u64_stats_add(&stats->nsecs, sched_clock() - start);
+		u64_stats_update_end_irqrestore(&stats->syncp, flags);
 	} else {
 		ret = dfunc(ctx, prog->insnsi, prog->bpf_func);
 	}
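For context (not part of this diff): switching the counters to u64_stats_t and the irqsave update variants means readers pair with the usual fetch/retry sequence. A simplified reader, roughly along the lines of the in-kernel stats aggregation; the function name is hypothetical and the _irq fetch variants are an assumption about the helpers available in this kernel version.

/* Hypothetical reader summing per-CPU bpf_prog_stats; sketch only. */
static void example_sum_prog_stats(const struct bpf_prog *prog,
                                   u64 *nsecs, u64 *cnt, u64 *misses)
{
	unsigned int start, cpu;

	*nsecs = *cnt = *misses = 0;
	for_each_possible_cpu(cpu) {
		const struct bpf_prog_stats *st = per_cpu_ptr(prog->stats, cpu);
		u64 tnsecs, tcnt, tmisses;

		do {
			start = u64_stats_fetch_begin_irq(&st->syncp);
			tnsecs = u64_stats_read(&st->nsecs);
			tcnt = u64_stats_read(&st->cnt);
			tmisses = u64_stats_read(&st->misses);
		} while (u64_stats_fetch_retry_irq(&st->syncp, start));

		*nsecs += tnsecs;
		*cnt += tcnt;
		*misses += tmisses;
	}
}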
...
@@ -64,6 +64,7 @@ config BPF_JIT_DEFAULT_ON
 
 config BPF_UNPRIV_DEFAULT_OFF
 	bool "Disable unprivileged BPF by default"
+	default y
 	depends on BPF_SYSCALL
 	help
 	  Disables unprivileged BPF by default by setting the corresponding
@@ -72,6 +73,12 @@ config BPF_UNPRIV_DEFAULT_OFF
 	  disable it by setting it to 1 (from which no other transition to
 	  0 is possible anymore).
 
+	  Unprivileged BPF could be used to exploit certain potential
+	  speculative execution side-channel vulnerabilities on unmitigated
+	  affected hardware.
+
+	  If you are unsure how to answer this question, answer Y.
source "kernel/bpf/preload/Kconfig" source "kernel/bpf/preload/Kconfig"
config BPF_LSM config BPF_LSM
......
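For context (not part of this diff): regardless of the Kconfig default, the runtime policy lives in the kernel.unprivileged_bpf_disabled sysctl (roughly: 0 = unprivileged BPF allowed, 1 = disabled with no way back until reboot, 2 = disabled but re-enableable by an administrator). A trivial user-space check might look like the sketch below; the program is illustrative only.

/* Hypothetical check of the unprivileged-BPF sysctl; sketch only. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/unprivileged_bpf_disabled", "r");
	int v = -1;

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%d", &v) != 1)
		v = -1;
	fclose(f);

	printf("unprivileged BPF: %s (sysctl value %d)\n",
	       v == 0 ? "enabled" : "disabled", v);
	return 0;
}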