Commit f438ee21 authored by Daniel Borkmann, committed by Andrii Nakryiko

Merge branch 'bpf-mips-jit'

Johan Almbladh says:

====================
This is an implementation of an eBPF JIT for MIPS I-V and MIPS32/64 r1-r6.
The new JIT is written from scratch, but uses the same overall structure
as other eBPF JITs.

Before, the MIPS JIT situation looked like this.

  - 32-bit: MIPS32, cBPF-only, tests fail
  - 64-bit: MIPS64r2-r6, eBPF, tests fail, incomplete eBPF ISA support

The new JIT implementation raises the bar to the following level.

  - 32/64-bit: all MIPS ISA, eBPF, all tests pass, full eBPF ISA support

Overview
--------
The implementation supports all 32-bit and 64-bit eBPF instructions
defined as of this writing, including the recently-added atomics. It is
intended to provide good performance for native word size operations,
while also being complete so the JIT never has to fall back to the
interpreter. The new JIT replaces the current cBPF and eBPF JITs for MIPS.

The implementation is divided into separate files as follows. The source
files contain comments describing internal mechanisms and details on
things like eBPF-to-CPU register mappings, so I won't repeat that here.

  - jit_comp.[ch]    code shared between 32-bit and 64-bit JITs
  - jit_comp32.c     32-bit JIT implementation
  - jit_comp64.c     64-bit JIT implementation

Both the 32-bit and 64-bit versions map all eBPF registers to native MIPS
CPU registers. There are also enough unmapped CPU registers available to
allow all eBPF operations implemented natively by the JIT to use only CPU
registers without having to resort to stack scratch space.
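
For illustration only, one possible register map in this style is sketched
below. The names and assignments are hypothetical; the actual mappings used
by the JIT are documented in the source files and may differ. The MIPS_R_*
constants are those from bpf_jit_comp.h and BPF_REG_* come from linux/bpf.h.

  /* Illustrative only: a possible eBPF-to-MIPS map for a 64-bit (n64) JIT */
  static const u8 bpf2mips64_example[] = {
          [BPF_REG_0]  = MIPS_R_V0,   /* return value */
          [BPF_REG_1]  = MIPS_R_A0,   /* argument registers */
          [BPF_REG_2]  = MIPS_R_A1,
          [BPF_REG_3]  = MIPS_R_A2,
          [BPF_REG_4]  = MIPS_R_A3,
          [BPF_REG_5]  = MIPS_R_A4,
          [BPF_REG_6]  = MIPS_R_S0,   /* callee-saved registers */
          [BPF_REG_7]  = MIPS_R_S1,
          [BPF_REG_8]  = MIPS_R_S2,
          [BPF_REG_9]  = MIPS_R_S3,
          [BPF_REG_FP] = MIPS_R_FP,   /* eBPF frame pointer */
  };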

Some operations are deemed too complex to implement natively in the JIT.
Those are instead implemented as a function call to a helper that performs
the operation. This is done in the following cases.

  - 64-bit div and mod on a 32-bit CPU
  - 64-bit atomics on a 32-bit CPU
  - 32-bit atomics on a 32-bit CPU that lacks ll/sc instructions
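
As a rough sketch of the helper-call approach, a 64-bit division helper on a
32-bit CPU could simply wrap the kernel's generic 64-bit division. The helper
name below is made up for illustration and is not the one used by the JIT.

  #include <linux/math64.h>

  /* Hypothetical helper: called by JITed code instead of open-coded division */
  static u64 jit_helper_div64(u64 dividend, u64 divisor)
  {
          /* eBPF defines division by zero to yield zero */
          return divisor ? div64_u64(dividend, divisor) : 0;
  }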

CPU errata workarounds
----------------------
The JIT implements workarounds for R10000, Loongson-2F and Loongson-3 CPU
errata. For the Loongson workarounds, I have used the public information
available on the matter.

Link: https://sourceware.org/legacy-ml/binutils/2009-11/msg00387.html

Testing
-------
During the development of the JIT, I have added a number of new test cases
to the test_bpf.ko test suite to be able to verify correctness of JIT
implementations in a more systematic way. The new additions increase the
test suite roughly three-fold, with many of the new tests being very
extensive and even exhaustive when feasible.

Link: https://lore.kernel.org/bpf/20211001130348.3670534-1-johan.almbladh@anyfinetworks.com/
Link: https://lore.kernel.org/bpf/20210914091842.4186267-1-johan.almbladh@anyfinetworks.com/
Link: https://lore.kernel.org/bpf/20210809091829.810076-1-johan.almbladh@anyfinetworks.com/

The JIT has been tested by running the test_bpf.ko test suite in QEMU with
the following MIPS ISAs, in both big and little endian mode, with and
without JIT hardening enabled.

  MIPS32r2, MIPS32r6, MIPS64r2, MIPS64r6

For the QEMU r2 targets, the correctness of the emitted pre-r2 code has been
tested by manually overriding each of the following macros with 0.

  cpu_has_llsc, cpu_has_mips_2, cpu_has_mips_r1, cpu_has_mips_r2
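
Such an override could for example be dropped into a local test patch or the
platform's cpu-feature-overrides.h along the lines below. This is only a
sketch of the testing method, not code from the series.

  /* Force the pre-R2 code paths on an R2 QEMU target (test hack only) */
  #undef cpu_has_llsc
  #define cpu_has_llsc    0       /* pretend ll/sc is unavailable */
  #undef cpu_has_mips_r2
  #define cpu_has_mips_r2 0       /* pretend the CPU is pre-R2 */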

Similarly, CPU errata workaround code has been tested by enabling each of
the following configurations for the MIPS64r2 targets.

  CONFIG_WAR_R10000
  CONFIG_CPU_LOONGSON3_WORKAROUNDS
  CONFIG_CPU_NOP_WORKAROUNDS
  CONFIG_CPU_JUMP_WORKAROUNDS

The JIT passes all tests in all configurations. Below is the summary for
MIPS32r2 in little endian mode.

  test_bpf: Summary: 1006 PASSED, 0 FAILED, [994/994 JIT'ed]
  test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [8/8 JIT'ed]
  test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED

According to the MIPS ISA reference documentation, the result of a 32-bit ALU
arithmetic operation on a 64-bit CPU is unpredictable if an operand
register value is not properly sign-extended to 64 bits. To verify the
code emitted by the JIT, the code generation engine in QEMU was modified to
flip all low 32 bits if the above condition was not met. With this
trip-wire installed, the kernel booted properly in qemu-system-mips64el
and all test_bpf.ko tests passed.
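
The property being checked is an architectural one: a 32-bit value must be
kept sign-extended in its 64-bit register before it is used as a 32-bit ALU
operand. A minimal sketch of the kind of fix-up involved, using the emit()
macro from bpf_jit_comp.h (the helper name below is made up for illustration):

  /* Sign-extend the low 32 bits of a register into 64 bits (MIPS64) */
  static void emit_sext_w(struct jit_context *ctx, u8 dst)
  {
          /* Canonical MIPS idiom: a 32-bit shift by zero sign-extends */
          emit(ctx, sll, dst, dst, 0);
  }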

Remaining features
------------------
While the JIT is complete in terms of eBPF ISA support, this series does
not include support for BPF-to-BPF calls and BPF trampolines. Those
features are planned to be added in another patch series.

The BPF_ST | BPF_NOSPEC instruction currently emits nothing. This is
consistent with the behavior of the MIPS interpreter and the existing
eBPF JIT.

Why not build on the existing eBPF JIT?
---------------------------------------
The existing eBPF JIT was originally written for MIPS64. An effort was
made to add MIPS32 support to it in commit 716850ab ("MIPS: eBPF:
Initial eBPF support for MIPS32 architecture."). That turned out to
contain a number of flaws, so eBPF support for MIPS32 was disabled in
commit 36366e36 ("MIPS: BPF: Restore MIPS32 cBPF JIT").

Link: https://lore.kernel.org/bpf/5deaa994.1c69fb81.97561.647e@mx.google.com/

The current eBPF JIT for MIPS64 lacks a lot of functionality regarding
ALU32, JMP32 and atomic operations. It also lacks 32-bit CPU support on a
fundamental level, for example 32-bit CPU register mappings and o32 ABI
calling conventions. For optimization purposes, it tracks register usage
through the program control flow in order to do zero-extension and sign-
extension only when necessary, a static analysis of sorts. In my opinion,
having this kind of complexity in JITs, for which there is no adequate
test coverage, is a problem. Such analysis should be done by the
verifier, if needed at all. Finally, when I run the BPF test suite
test_bpf.ko on the current JIT, there are errors and warnings.

I believe that an eBPF JIT should strive to be correct, complete and
optimized, and in that order. The JIT runs after the verifier has audited
the program and given its approval. If the JIT then emits code that does
something else, it will undermine the eBPF security model. A simple
implementation is easier to get correct than a complex one. Furthermore,
the real performance hit is not an extra CPU instruction here and there,
but when the JIT bails on an unimplemented eBPF instruction and causes the
whole program to fall back to the interpreter. My reasoning here boils
down to the following.

* The JIT should not contain a static analyzer that tracks branches.

* It is acceptable to emit possibly superfluous sign-/zero-extensions for
  ALU32 and JMP32 operations on a 64-bit MIPS to guarantee correctness.

* The JIT should handle all eBPF instructions on all MIPS CPUs.
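
The second point above amounts to emitting, for example, an explicit
zero-extension after a 32-bit ALU result on a 64-bit CPU. A minimal sketch,
again with a hypothetical helper name and the emit() macro from
bpf_jit_comp.h:

  /* Zero-extend the low 32 bits of a register into 64 bits (MIPS64) */
  static void emit_zext_w(struct jit_context *ctx, u8 dst)
  {
          emit(ctx, dsll32, dst, dst, 0); /* dst <<= 32 */
          emit(ctx, dsrl32, dst, dst, 0); /* dst >>= 32 (logical): upper half cleared */
  }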

I conclude that the current eBPF MIPS JIT is complex, incomplete and
incorrect. For the reasons stated above, I decided to not use the existing
JIT implementation.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
parents 9d057872 ebcbacfa
@@ -3422,6 +3422,7 @@ S: Supported
F: arch/arm64/net/
BPF JIT for MIPS (32-BIT AND 64-BIT)
M: Johan Almbladh <johan.almbladh@anyfinetworks.com>
M: Paul Burton <paulburton@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
@@ -57,7 +57,6 @@ config MIPS
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
select HAVE_ASM_MODVERSIONS
select HAVE_CBPF_JIT if !64BIT && !CPU_MICROMIPS
select HAVE_CONTEXT_TRACKING
select HAVE_TIF_NOHZ
select HAVE_C_RECORDMCOUNT
@@ -65,7 +64,10 @@ config MIPS
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_EBPF_JIT if 64BIT && !CPU_MICROMIPS && TARGET_ISA_REV >= 2
select HAVE_EBPF_JIT if !CPU_MICROMIPS && \
!CPU_DADDI_WORKAROUNDS && \
!CPU_R4000_WORKAROUNDS && \
!CPU_R4400_WORKAROUNDS
select HAVE_EXIT_THREAD
select HAVE_FAST_GUP
select HAVE_FTRACE_MCOUNT_RECORD
@@ -145,6 +145,7 @@ Ip_u1(_mtlo);
Ip_u3u1u2(_mul);
Ip_u1u2(_multu);
Ip_u3u1u2(_mulu);
Ip_u3u1u2(_muhu);
Ip_u3u1u2(_nor);
Ip_u3u1u2(_or);
Ip_u2u1u3(_ori);
@@ -248,7 +249,11 @@ static inline void uasm_l##lb(struct uasm_label **lab, u32 *addr) \
#define uasm_i_bnezl(buf, rs, off) uasm_i_bnel(buf, rs, 0, off)
#define uasm_i_ehb(buf) uasm_i_sll(buf, 0, 0, 3)
#define uasm_i_move(buf, a, b) UASM_i_ADDU(buf, a, 0, b)
#ifdef CONFIG_CPU_NOP_WORKAROUNDS
#define uasm_i_nop(buf) uasm_i_or(buf, 1, 1, 0)
#else
#define uasm_i_nop(buf) uasm_i_sll(buf, 0, 0, 0)
#endif
#define uasm_i_ssnop(buf) uasm_i_sll(buf, 0, 0, 1)
static inline void uasm_i_drotr_safe(u32 **p, unsigned int a1,
@@ -90,7 +90,7 @@ static const struct insn insn_table[insn_invalid] = {
RS | RT | RD},
[insn_dmtc0] = {M(cop0_op, dmtc_op, 0, 0, 0, 0), RT | RD | SET},
[insn_dmultu] = {M(spec_op, 0, 0, 0, 0, dmultu_op), RS | RT},
[insn_dmulu] = {M(spec_op, 0, 0, 0, dmult_dmul_op, dmultu_op),
[insn_dmulu] = {M(spec_op, 0, 0, 0, dmultu_dmulu_op, dmultu_op),
RS | RT | RD},
[insn_drotr] = {M(spec_op, 1, 0, 0, 0, dsrl_op), RT | RD | RE},
[insn_drotr32] = {M(spec_op, 1, 0, 0, 0, dsrl32_op), RT | RD | RE},
@@ -150,6 +150,8 @@ static const struct insn insn_table[insn_invalid] = {
[insn_mtlo] = {M(spec_op, 0, 0, 0, 0, mtlo_op), RS},
[insn_mulu] = {M(spec_op, 0, 0, 0, multu_mulu_op, multu_op),
RS | RT | RD},
[insn_muhu] = {M(spec_op, 0, 0, 0, multu_muhu_op, multu_op),
RS | RT | RD},
#ifndef CONFIG_CPU_MIPSR6
[insn_mul] = {M(spec2_op, 0, 0, 0, 0, mul_op), RS | RT | RD},
#else
@@ -59,7 +59,7 @@ enum opcode {
insn_lddir, insn_ldpte, insn_ldx, insn_lh, insn_lhu, insn_ll, insn_lld,
insn_lui, insn_lw, insn_lwu, insn_lwx, insn_mfc0, insn_mfhc0, insn_mfhi,
insn_mflo, insn_modu, insn_movn, insn_movz, insn_mtc0, insn_mthc0,
insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_nor,
insn_mthi, insn_mtlo, insn_mul, insn_multu, insn_mulu, insn_muhu, insn_nor,
insn_or, insn_ori, insn_pref, insn_rfe, insn_rotr, insn_sb, insn_sc,
insn_scd, insn_seleqz, insn_selnez, insn_sd, insn_sh, insn_sll,
insn_sllv, insn_slt, insn_slti, insn_sltiu, insn_sltu, insn_sra,
@@ -344,6 +344,7 @@ I_u1(_mtlo)
I_u3u1u2(_mul)
I_u1u2(_multu)
I_u3u1u2(_mulu)
I_u3u1u2(_muhu)
I_u3u1u2(_nor)
I_u3u1u2(_or)
I_u2u1u3(_ori)
@@ -2,4 +2,10 @@
# MIPS networking code
obj-$(CONFIG_MIPS_CBPF_JIT) += bpf_jit.o bpf_jit_asm.o
obj-$(CONFIG_MIPS_EBPF_JIT) += ebpf_jit.o
obj-$(CONFIG_MIPS_EBPF_JIT) += bpf_jit_comp.o
ifeq ($(CONFIG_32BIT),y)
obj-$(CONFIG_MIPS_EBPF_JIT) += bpf_jit_comp32.o
else
obj-$(CONFIG_MIPS_EBPF_JIT) += bpf_jit_comp64.o
endif
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Just-In-Time compiler for BPF filters on MIPS
*
* Copyright (c) 2014 Imagination Technologies Ltd.
* Author: Markos Chandras <markos.chandras@imgtec.com>
*/
#ifndef BPF_JIT_MIPS_OP_H
#define BPF_JIT_MIPS_OP_H
/* Registers used by JIT */
#define MIPS_R_ZERO 0
#define MIPS_R_V0 2
#define MIPS_R_A0 4
#define MIPS_R_A1 5
#define MIPS_R_T4 12
#define MIPS_R_T5 13
#define MIPS_R_T6 14
#define MIPS_R_T7 15
#define MIPS_R_S0 16
#define MIPS_R_S1 17
#define MIPS_R_S2 18
#define MIPS_R_S3 19
#define MIPS_R_S4 20
#define MIPS_R_S5 21
#define MIPS_R_S6 22
#define MIPS_R_S7 23
#define MIPS_R_SP 29
#define MIPS_R_RA 31
/* Conditional codes */
#define MIPS_COND_EQ 0x1
#define MIPS_COND_GE (0x1 << 1)
#define MIPS_COND_GT (0x1 << 2)
#define MIPS_COND_NE (0x1 << 3)
#define MIPS_COND_ALL (0x1 << 4)
/* Conditionals on X register or K immediate */
#define MIPS_COND_X (0x1 << 5)
#define MIPS_COND_K (0x1 << 6)
#define r_ret MIPS_R_V0
/*
* Use 2 scratch registers to avoid pipeline interlocks.
* There is no overhead during epilogue and prologue since
* any of the $s0-$s6 registers will only be preserved if
* they are going to actually be used.
*/
#define r_skb_hl MIPS_R_S0 /* skb header length */
#define r_skb_data MIPS_R_S1 /* skb actual data */
#define r_off MIPS_R_S2
#define r_A MIPS_R_S3
#define r_X MIPS_R_S4
#define r_skb MIPS_R_S5
#define r_M MIPS_R_S6
#define r_skb_len MIPS_R_S7
#define r_s0 MIPS_R_T4 /* scratch reg 1 */
#define r_s1 MIPS_R_T5 /* scratch reg 2 */
#define r_tmp_imm MIPS_R_T6 /* No need to preserve this */
#define r_tmp MIPS_R_T7 /* No need to preserve this */
#define r_zero MIPS_R_ZERO
#define r_sp MIPS_R_SP
#define r_ra MIPS_R_RA
#ifndef __ASSEMBLY__
/* Declare ASM helpers */
#define DECLARE_LOAD_FUNC(func) \
extern u8 func(unsigned long *skb, int offset); \
extern u8 func##_negative(unsigned long *skb, int offset); \
extern u8 func##_positive(unsigned long *skb, int offset)
DECLARE_LOAD_FUNC(sk_load_word);
DECLARE_LOAD_FUNC(sk_load_half);
DECLARE_LOAD_FUNC(sk_load_byte);
#endif
#endif /* BPF_JIT_MIPS_OP_H */
/*
* bpf_jib_asm.S: Packet/header access helper functions for MIPS/MIPS64 BPF
* compiler.
*
* Copyright (C) 2015 Imagination Technologies Ltd.
* Author: Markos Chandras <markos.chandras@imgtec.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; version 2 of the License.
*/
#include <asm/asm.h>
#include <asm/isa-rev.h>
#include <asm/regdef.h>
#include "bpf_jit.h"
/* ABI
*
* r_skb_hl skb header length
* r_skb_data skb data
* r_off(a1) offset register
* r_A BPF register A
* r_X PF register X
* r_skb(a0) *skb
* r_M *scratch memory
* r_skb_le skb length
* r_s0 Scratch register 0
* r_s1 Scratch register 1
*
* On entry:
* a0: *skb
* a1: offset (imm or imm + X)
*
* All non-BPF-ABI registers are free for use. On return, we only
* care about r_ret. The BPF-ABI registers are assumed to remain
* unmodified during the entire filter operation.
*/
#define skb a0
#define offset a1
#define SKF_LL_OFF (-0x200000) /* Can't include linux/filter.h in assembly */
/* We know better :) so prevent assembler reordering etc */
.set noreorder
#define is_offset_negative(TYPE) \
/* If offset is negative we have more work to do */ \
slti t0, offset, 0; \
bgtz t0, bpf_slow_path_##TYPE##_neg; \
/* Be careful what follows in DS. */
#define is_offset_in_header(SIZE, TYPE) \
/* Reading from header? */ \
addiu $r_s0, $r_skb_hl, -SIZE; \
slt t0, $r_s0, offset; \
bgtz t0, bpf_slow_path_##TYPE; \
LEAF(sk_load_word)
is_offset_negative(word)
FEXPORT(sk_load_word_positive)
is_offset_in_header(4, word)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
.set reorder
lw $r_A, 0(t1)
.set noreorder
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh t0, $r_A
rotr $r_A, t0, 16
# else
sll t0, $r_A, 24
srl t1, $r_A, 24
srl t2, $r_A, 8
or t0, t0, t1
andi t2, t2, 0xff00
andi t1, $r_A, 0xff00
or t0, t0, t2
sll t1, t1, 8
or $r_A, t0, t1
# endif
#endif
jr $r_ra
move $r_ret, zero
END(sk_load_word)
LEAF(sk_load_half)
is_offset_negative(half)
FEXPORT(sk_load_half_positive)
is_offset_in_header(2, half)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
lhu $r_A, 0(t1)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh $r_A, $r_A
# else
sll t0, $r_A, 8
srl t1, $r_A, 8
andi t0, t0, 0xff00
or $r_A, t0, t1
# endif
#endif
jr $r_ra
move $r_ret, zero
END(sk_load_half)
LEAF(sk_load_byte)
is_offset_negative(byte)
FEXPORT(sk_load_byte_positive)
is_offset_in_header(1, byte)
/* Offset within header boundaries */
PTR_ADDU t1, $r_skb_data, offset
lbu $r_A, 0(t1)
jr $r_ra
move $r_ret, zero
END(sk_load_byte)
/*
* call skb_copy_bits:
* (prototype in linux/skbuff.h)
*
* int skb_copy_bits(sk_buff *skb, int offset, void *to, int len)
*
* o32 mandates we leave 4 spaces for argument registers in case
* the callee needs to use them. Even though we don't care about
* the argument registers ourselves, we need to allocate that space
* to remain ABI compliant since the callee may want to use that space.
* We also allocate 2 more spaces for $r_ra and our return register (*to).
*
* n64 is a bit different. The *caller* will allocate the space to preserve
* the arguments. So in 64-bit kernels, we allocate the 4-arg space for no
* good reason but it does not matter that much really.
*
* (void *to) is returned in r_s0
*
*/
#ifdef CONFIG_CPU_LITTLE_ENDIAN
#define DS_OFFSET(SIZE) (4 * SZREG)
#else
#define DS_OFFSET(SIZE) ((4 * SZREG) + (4 - SIZE))
#endif
#define bpf_slow_path_common(SIZE) \
/* Quick check. Are we within reasonable boundaries? */ \
LONG_ADDIU $r_s1, $r_skb_len, -SIZE; \
sltu $r_s0, offset, $r_s1; \
beqz $r_s0, fault; \
/* Load 4th argument in DS */ \
LONG_ADDIU a3, zero, SIZE; \
PTR_ADDIU $r_sp, $r_sp, -(6 * SZREG); \
PTR_LA t0, skb_copy_bits; \
PTR_S $r_ra, (5 * SZREG)($r_sp); \
/* Assign low slot to a2 */ \
PTR_ADDIU a2, $r_sp, DS_OFFSET(SIZE); \
jalr t0; \
/* Reset our destination slot (DS but it's ok) */ \
INT_S zero, (4 * SZREG)($r_sp); \
/* \
* skb_copy_bits returns 0 on success and -EFAULT \
* on error. Our data live in a2. Do not bother with \
* our data if an error has been returned. \
*/ \
/* Restore our frame */ \
PTR_L $r_ra, (5 * SZREG)($r_sp); \
INT_L $r_s0, (4 * SZREG)($r_sp); \
bltz v0, fault; \
PTR_ADDIU $r_sp, $r_sp, 6 * SZREG; \
move $r_ret, zero; \
NESTED(bpf_slow_path_word, (6 * SZREG), $r_sp)
bpf_slow_path_common(4)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
wsbh t0, $r_s0
jr $r_ra
rotr $r_A, t0, 16
# else
sll t0, $r_s0, 24
srl t1, $r_s0, 24
srl t2, $r_s0, 8
or t0, t0, t1
andi t2, t2, 0xff00
andi t1, $r_s0, 0xff00
or t0, t0, t2
sll t1, t1, 8
jr $r_ra
or $r_A, t0, t1
# endif
#else
jr $r_ra
move $r_A, $r_s0
#endif
END(bpf_slow_path_word)
NESTED(bpf_slow_path_half, (6 * SZREG), $r_sp)
bpf_slow_path_common(2)
#ifdef CONFIG_CPU_LITTLE_ENDIAN
# if MIPS_ISA_REV >= 2
jr $r_ra
wsbh $r_A, $r_s0
# else
sll t0, $r_s0, 8
andi t1, $r_s0, 0xff00
andi t0, t0, 0xff00
srl t1, t1, 8
jr $r_ra
or $r_A, t0, t1
# endif
#else
jr $r_ra
move $r_A, $r_s0
#endif
END(bpf_slow_path_half)
NESTED(bpf_slow_path_byte, (6 * SZREG), $r_sp)
bpf_slow_path_common(1)
jr $r_ra
move $r_A, $r_s0
END(bpf_slow_path_byte)
/*
* Negative entry points
*/
.macro bpf_is_end_of_data
li t0, SKF_LL_OFF
/* Reading link layer data? */
slt t1, offset, t0
bgtz t1, fault
/* Be careful what follows in DS. */
.endm
/*
* call skb_copy_bits:
* (prototype in linux/filter.h)
*
* void *bpf_internal_load_pointer_neg_helper(const struct sk_buff *skb,
* int k, unsigned int size)
*
* see above (bpf_slow_path_common) for ABI restrictions
*/
#define bpf_negative_common(SIZE) \
PTR_ADDIU $r_sp, $r_sp, -(6 * SZREG); \
PTR_LA t0, bpf_internal_load_pointer_neg_helper; \
PTR_S $r_ra, (5 * SZREG)($r_sp); \
jalr t0; \
li a2, SIZE; \
PTR_L $r_ra, (5 * SZREG)($r_sp); \
/* Check return pointer */ \
beqz v0, fault; \
PTR_ADDIU $r_sp, $r_sp, 6 * SZREG; \
/* Preserve our pointer */ \
move $r_s0, v0; \
/* Set return value */ \
move $r_ret, zero; \
bpf_slow_path_word_neg:
bpf_is_end_of_data
NESTED(sk_load_word_negative, (6 * SZREG), $r_sp)
bpf_negative_common(4)
jr $r_ra
lw $r_A, 0($r_s0)
END(sk_load_word_negative)
bpf_slow_path_half_neg:
bpf_is_end_of_data
NESTED(sk_load_half_negative, (6 * SZREG), $r_sp)
bpf_negative_common(2)
jr $r_ra
lhu $r_A, 0($r_s0)
END(sk_load_half_negative)
bpf_slow_path_byte_neg:
bpf_is_end_of_data
NESTED(sk_load_byte_negative, (6 * SZREG), $r_sp)
bpf_negative_common(1)
jr $r_ra
lbu $r_A, 0($r_s0)
END(sk_load_byte_negative)
fault:
jr $r_ra
addiu $r_ret, zero, 1
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Just-In-Time compiler for eBPF bytecode on 32-bit and 64-bit MIPS.
*
* Copyright (c) 2021 Anyfi Networks AB.
* Author: Johan Almbladh <johan.almbladh@gmail.com>
*
* Based on code and ideas from
* Copyright (c) 2017 Cavium, Inc.
* Copyright (c) 2017 Shubham Bansal <illusionist.neo@gmail.com>
* Copyright (c) 2011 Mircea Gherzan <mgherzan@gmail.com>
*/
#ifndef _BPF_JIT_COMP_H
#define _BPF_JIT_COMP_H
/* MIPS registers */
#define MIPS_R_ZERO 0 /* Const zero */
#define MIPS_R_AT 1 /* Asm temp */
#define MIPS_R_V0 2 /* Result */
#define MIPS_R_V1 3 /* Result */
#define MIPS_R_A0 4 /* Argument */
#define MIPS_R_A1 5 /* Argument */
#define MIPS_R_A2 6 /* Argument */
#define MIPS_R_A3 7 /* Argument */
#define MIPS_R_A4 8 /* Arg (n64) */
#define MIPS_R_A5 9 /* Arg (n64) */
#define MIPS_R_A6 10 /* Arg (n64) */
#define MIPS_R_A7 11 /* Arg (n64) */
#define MIPS_R_T0 8 /* Temp (o32) */
#define MIPS_R_T1 9 /* Temp (o32) */
#define MIPS_R_T2 10 /* Temp (o32) */
#define MIPS_R_T3 11 /* Temp (o32) */
#define MIPS_R_T4 12 /* Temporary */
#define MIPS_R_T5 13 /* Temporary */
#define MIPS_R_T6 14 /* Temporary */
#define MIPS_R_T7 15 /* Temporary */
#define MIPS_R_S0 16 /* Saved */
#define MIPS_R_S1 17 /* Saved */
#define MIPS_R_S2 18 /* Saved */
#define MIPS_R_S3 19 /* Saved */
#define MIPS_R_S4 20 /* Saved */
#define MIPS_R_S5 21 /* Saved */
#define MIPS_R_S6 22 /* Saved */
#define MIPS_R_S7 23 /* Saved */
#define MIPS_R_T8 24 /* Temporary */
#define MIPS_R_T9 25 /* Temporary */
/* MIPS_R_K0 26 Reserved */
/* MIPS_R_K1 27 Reserved */
#define MIPS_R_GP 28 /* Global ptr */
#define MIPS_R_SP 29 /* Stack ptr */
#define MIPS_R_FP 30 /* Frame ptr */
#define MIPS_R_RA 31 /* Return */
/*
* Jump address mask for immediate jumps. The four most significant bits
* must be equal to PC.
*/
#define MIPS_JMP_MASK 0x0fffffffUL
/* Maximum number of iterations in offset table computation */
#define JIT_MAX_ITERATIONS 8
/*
* Jump pseudo-instructions used internally
* for branch conversion and branch optimization.
*/
#define JIT_JNSET 0xe0
#define JIT_JNOP 0xf0
/* Descriptor flag for PC-relative branch conversion */
#define JIT_DESC_CONVERT BIT(31)
/* JIT context for an eBPF program */
struct jit_context {
struct bpf_prog *program; /* The eBPF program being JITed */
u32 *descriptors; /* eBPF to JITed CPU insn descriptors */
u32 *target; /* JITed code buffer */
u32 bpf_index; /* Index of current BPF program insn */
u32 jit_index; /* Index of current JIT target insn */
u32 changes; /* Number of PC-relative branch conv */
u32 accessed; /* Bit mask of read eBPF registers */
u32 clobbered; /* Bit mask of modified CPU registers */
u32 stack_size; /* Total allocated stack size in bytes */
u32 saved_size; /* Size of callee-saved registers */
u32 stack_used; /* Stack size used for function calls */
};
/* Emit the instruction if the JIT memory space has been allocated */
#define __emit(ctx, func, ...) \
do { \
if ((ctx)->target != NULL) { \
u32 *p = &(ctx)->target[ctx->jit_index]; \
uasm_i_##func(&p, ##__VA_ARGS__); \
} \
(ctx)->jit_index++; \
} while (0)
#define emit(...) __emit(__VA_ARGS__)
/* Workaround for R10000 ll/sc errata */
#ifdef CONFIG_WAR_R10000
#define LLSC_beqz beqzl
#else
#define LLSC_beqz beqz
#endif
/* Workaround for Loongson-3 ll/sc errata */
#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS
#define LLSC_sync(ctx) emit(ctx, sync, 0)
#define LLSC_offset 4
#else
#define LLSC_sync(ctx)
#define LLSC_offset 0
#endif
/* Workaround for Loongson-2F jump errata */
#ifdef CONFIG_CPU_JUMP_WORKAROUNDS
#define JALR_MASK 0xffffffffcfffffffULL
#else
#define JALR_MASK (~0ULL)
#endif
/*
* Mark a BPF register as accessed, it needs to be
* initialized by the program if expected, e.g. FP.
*/
static inline void access_reg(struct jit_context *ctx, u8 reg)
{
ctx->accessed |= BIT(reg);
}
/*
* Mark a CPU register as clobbered, it needs to be
* saved/restored by the program if callee-saved.
*/
static inline void clobber_reg(struct jit_context *ctx, u8 reg)
{
ctx->clobbered |= BIT(reg);
}
/*
* Push registers on the stack, starting at a given depth from the stack
* pointer and increasing. The next depth to be written is returned.
*/
int push_regs(struct jit_context *ctx, u32 mask, u32 excl, int depth);
/*
* Pop registers from the stack, starting at a given depth from the stack
* pointer and increasing. The next depth to be read is returned.
*/
int pop_regs(struct jit_context *ctx, u32 mask, u32 excl, int depth);
/* Compute the 28-bit jump target address from a BPF program location */
int get_target(struct jit_context *ctx, u32 loc);
/* Compute the PC-relative offset to relative BPF program offset */
int get_offset(const struct jit_context *ctx, int off);
/* dst = imm (32-bit) */
void emit_mov_i(struct jit_context *ctx, u8 dst, s32 imm);
/* dst = src (32-bit) */
void emit_mov_r(struct jit_context *ctx, u8 dst, u8 src);
/* Validate ALU/ALU64 immediate range */
bool valid_alu_i(u8 op, s32 imm);
/* Rewrite ALU/ALU64 immediate operation */
bool rewrite_alu_i(u8 op, s32 imm, u8 *alu, s32 *val);
/* ALU immediate operation (32-bit) */
void emit_alu_i(struct jit_context *ctx, u8 dst, s32 imm, u8 op);
/* ALU register operation (32-bit) */
void emit_alu_r(struct jit_context *ctx, u8 dst, u8 src, u8 op);
/* Atomic read-modify-write (32-bit) */
void emit_atomic_r(struct jit_context *ctx, u8 dst, u8 src, s16 off, u8 code);
/* Atomic compare-and-exchange (32-bit) */
void emit_cmpxchg_r(struct jit_context *ctx, u8 dst, u8 src, u8 res, s16 off);
/* Swap bytes and truncate a register word or half word */
void emit_bswap_r(struct jit_context *ctx, u8 dst, u32 width);
/* Validate JMP/JMP32 immediate range */
bool valid_jmp_i(u8 op, s32 imm);
/* Prepare a PC-relative jump operation with immediate conditional */
void setup_jmp_i(struct jit_context *ctx, s32 imm, u8 width,
u8 bpf_op, s16 bpf_off, u8 *jit_op, s32 *jit_off);
/* Prepare a PC-relative jump operation with register conditional */
void setup_jmp_r(struct jit_context *ctx, bool same_reg,
u8 bpf_op, s16 bpf_off, u8 *jit_op, s32 *jit_off);
/* Finish a PC-relative jump operation */
int finish_jmp(struct jit_context *ctx, u8 jit_op, s16 bpf_off);
/* Conditional JMP/JMP32 immediate */
void emit_jmp_i(struct jit_context *ctx, u8 dst, s32 imm, s32 off, u8 op);
/* Conditional JMP/JMP32 register */
void emit_jmp_r(struct jit_context *ctx, u8 dst, u8 src, s32 off, u8 op);
/* Jump always */
int emit_ja(struct jit_context *ctx, s16 off);
/* Jump to epilogue */
int emit_exit(struct jit_context *ctx);
/*
* Build program prologue to set up the stack and registers.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
void build_prologue(struct jit_context *ctx);
/*
* Build the program epilogue to restore the stack and registers.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
void build_epilogue(struct jit_context *ctx, int dest_reg);
/*
* Convert an eBPF instruction to native instruction, i.e
* JITs an eBPF instruction.
* Returns :
* 0 - Successfully JITed an 8-byte eBPF instruction
* >0 - Successfully JITed a 16-byte eBPF instruction
* <0 - Failed to JIT.
* This function is implemented separately for 32-bit and 64-bit JITs.
*/
int build_insn(const struct bpf_insn *insn, struct jit_context *ctx);
#endif /* _BPF_JIT_COMP_H */