Commit bc38fae3 authored by Jakub Kicinski

Merge https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Daniel Borkmann says:

====================
pull-request: bpf 2022-07-02

We've added 7 non-merge commits during the last 14 day(s) which contain
a total of 6 files changed, 193 insertions(+), 86 deletions(-).

The main changes are:

1) Fix clearing of page contiguity when unmapping XSK pool, from Ivan Malov.

2) Two verifier fixes around bounds data propagation, from Daniel Borkmann.

3) Fix fprobe sample module's parameter descriptions, from Masami Hiramatsu.

4) General BPF maintainer entry revamp to better scale patch reviews.

* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  bpf, selftests: Add verifier test case for jmp32's jeq/jne
  bpf, selftests: Add verifier test case for imm=0,umin=0,umax=1 scalar
  bpf: Fix insufficient bounds propagation from adjust_scalar_min_max_vals
  bpf: Fix incorrect verifier simulation around jmp32's jeq/jne
  xsk: Clear page contiguity bit when unmapping pool
  bpf, docs: Better scale maintenance of BPF subsystem
  fprobe, samples: Add module parameter descriptions
====================

Link: https://lore.kernel.org/r/20220701230121.10354-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 8dfeee9d a49b8ce7
@@ -3617,16 +3617,18 @@ S: Maintained
F: Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
F: drivers/iio/accel/bma400*
BPF (Safe dynamic programs and tools)
BPF [GENERAL] (Safe Dynamic Programs and Tools)
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
M: Andrii Nakryiko <andrii@kernel.org>
R: Martin KaFai Lau <kafai@fb.com>
R: Song Liu <songliubraving@fb.com>
R: Martin KaFai Lau <martin.lau@linux.dev>
R: Song Liu <song@kernel.org>
R: Yonghong Song <yhs@fb.com>
R: John Fastabend <john.fastabend@gmail.com>
R: KP Singh <kpsingh@kernel.org>
L: netdev@vger.kernel.org
R: Stanislav Fomichev <sdf@google.com>
R: Hao Luo <haoluo@google.com>
R: Jiri Olsa <jolsa@kernel.org>
L: bpf@vger.kernel.org
S: Supported
W: https://bpf.io/
@@ -3658,12 +3660,9 @@ F: scripts/pahole-version.sh
F: tools/bpf/
F: tools/lib/bpf/
F: tools/testing/selftests/bpf/
N: bpf
K: bpf
BPF JIT for ARM
M: Shubham Bansal <illusionist.neo@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/arm/net/
@@ -3672,7 +3671,6 @@ BPF JIT for ARM64
M: Daniel Borkmann <daniel@iogearbox.net>
M: Alexei Starovoitov <ast@kernel.org>
M: Zi Shen Lim <zlim.lnx@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/arm64/net/
@@ -3680,14 +3678,12 @@ F: arch/arm64/net/
BPF JIT for MIPS (32-BIT AND 64-BIT)
M: Johan Almbladh <johan.almbladh@anyfinetworks.com>
M: Paul Burton <paulburton@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/mips/net/
BPF JIT for NFP NICs
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: drivers/net/ethernet/netronome/nfp/bpf/
@@ -3695,7 +3691,6 @@ F: drivers/net/ethernet/netronome/nfp/bpf/
BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
M: Michael Ellerman <mpe@ellerman.id.au>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/powerpc/net/
@@ -3703,7 +3698,6 @@ F: arch/powerpc/net/
BPF JIT for RISC-V (32-bit)
M: Luke Nelson <luke.r.nels@gmail.com>
M: Xi Wang <xi.wang@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/riscv/net/
@@ -3711,7 +3705,6 @@ X: arch/riscv/net/bpf_jit_comp64.c
BPF JIT for RISC-V (64-bit)
M: Björn Töpel <bjorn@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Maintained
F: arch/riscv/net/
@@ -3721,7 +3714,6 @@ BPF JIT for S390
M: Ilya Leoshkevich <iii@linux.ibm.com>
M: Heiko Carstens <hca@linux.ibm.com>
M: Vasily Gorbik <gor@linux.ibm.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/s390/net/
@@ -3729,14 +3721,12 @@ X: arch/s390/net/pnet.c
BPF JIT for SPARC (32-BIT AND 64-BIT)
M: David S. Miller <davem@davemloft.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/sparc/net/
BPF JIT for X86 32-BIT
M: Wang YanQing <udknight@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Odd Fixes
F: arch/x86/net/bpf_jit_comp32.c
@@ -3744,13 +3734,60 @@ F: arch/x86/net/bpf_jit_comp32.c
BPF JIT for X86 64-BIT
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
S: Supported
F: arch/x86/net/
X: arch/x86/net/bpf_jit_comp32.c
BPF LSM (Security Audit and Enforcement using BPF)
BPF [CORE]
M: Alexei Starovoitov <ast@kernel.org>
M: Daniel Borkmann <daniel@iogearbox.net>
R: John Fastabend <john.fastabend@gmail.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/verifier.c
F: kernel/bpf/tnum.c
F: kernel/bpf/core.c
F: kernel/bpf/syscall.c
F: kernel/bpf/dispatcher.c
F: kernel/bpf/trampoline.c
F: include/linux/bpf*
F: include/linux/filter.h
BPF [BTF]
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/btf.c
F: include/linux/btf*
BPF [TRACING]
M: Song Liu <song@kernel.org>
R: Jiri Olsa <jolsa@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/trace/bpf_trace.c
F: kernel/bpf/stackmap.c
BPF [NETWORKING] (tc BPF, sock_addr)
M: Martin KaFai Lau <martin.lau@linux.dev>
M: Daniel Borkmann <daniel@iogearbox.net>
R: John Fastabend <john.fastabend@gmail.com>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
F: net/core/filter.c
F: net/sched/act_bpf.c
F: net/sched/cls_bpf.c
BPF [NETWORKING] (struct_ops, reuseport)
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
L: netdev@vger.kernel.org
S: Maintained
F: kernel/bpf/bpf_struct*
BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
M: KP Singh <kpsingh@kernel.org>
R: Florent Revest <revest@chromium.org>
R: Brendan Jackman <jackmanb@chromium.org>
@@ -3761,7 +3798,27 @@ F: include/linux/bpf_lsm.h
F: kernel/bpf/bpf_lsm.c
F: security/bpf/
BPF L7 FRAMEWORK
BPF [STORAGE & CGROUPS]
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/cgroup.c
F: kernel/bpf/*storage.c
F: kernel/bpf/bpf_lru*
BPF [RINGBUF]
M: Andrii Nakryiko <andrii@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/ringbuf.c
BPF [ITERATOR]
M: Yonghong Song <yhs@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/*iter.c
BPF [L7 FRAMEWORK] (sockmap)
M: John Fastabend <john.fastabend@gmail.com>
M: Jakub Sitnicki <jakub@cloudflare.com>
L: netdev@vger.kernel.org
@@ -3774,13 +3831,31 @@ F: net/ipv4/tcp_bpf.c
F: net/ipv4/udp_bpf.c
F: net/unix/unix_bpf.c
BPFTOOL
BPF [LIBRARY] (libbpf)
M: Andrii Nakryiko <andrii@kernel.org>
L: bpf@vger.kernel.org
S: Maintained
F: tools/lib/bpf/
BPF [TOOLING] (bpftool)
M: Quentin Monnet <quentin@isovalent.com>
L: bpf@vger.kernel.org
S: Maintained
F: kernel/bpf/disasm.*
F: tools/bpf/bpftool/
BPF [SELFTESTS] (Test Runners & Infrastructure)
M: Andrii Nakryiko <andrii@kernel.org>
R: Mykola Lysenko <mykolal@fb.com>
L: bpf@vger.kernel.org
S: Maintained
F: tools/testing/selftests/bpf/
BPF [MISC]
L: bpf@vger.kernel.org
S: Odd Fixes
K: (?:\b|_)bpf(?:\b|_)
BROADCOM B44 10/100 ETHERNET DRIVER
M: Michael Chan <michael.chan@broadcom.com>
L: netdev@vger.kernel.org
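[Editorial aside, not part of the patch: the K: keyword in the new BPF [MISC] entry deliberately spells out (?:\b|_) because underscore counts as a word character, so a plain \b would miss tokens such as xdp_bpf. A rough userspace approximation of that predicate is sketched below; in reality scripts/get_maintainer.pl applies the pattern as a Perl regex to patch content.]

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Match "bpf" when delimited by a non-word character or an underscore,
 * approximating the MAINTAINERS pattern (?:\b|_)bpf(?:\b|_).
 */
static int mentions_bpf(const char *s)
{
	const char *p = s;

	while ((p = strstr(p, "bpf")) != NULL) {
		int left_ok  = (p == s) ||
			       !isalnum((unsigned char)p[-1]) || p[-1] == '_';
		int right_ok = !isalnum((unsigned char)p[3]) || p[3] == '_';

		if (left_ok && right_ok)
			return 1;
		p += 3;
	}
	return 0;
}

int main(void)
{
	printf("%d %d %d\n",
	       mentions_bpf("add bpf_prog helper"), /* 1: underscore delimiter */
	       mentions_bpf("plain bpf mention"),   /* 1: word boundary */
	       mentions_bpf("superbpfish"));        /* 0: embedded in a word */
	return 0;
}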
@@ -1562,6 +1562,21 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);
}
static void reg_bounds_sync(struct bpf_reg_state *reg)
{
/* We might have learned new bounds from the var_off. */
__update_reg_bounds(reg);
/* We might have learned something about the sign bit. */
__reg_deduce_bounds(reg);
/* We might have learned some bits from the bounds. */
__reg_bound_offset(reg);
/* Intersecting with the old var_off might have improved our bounds
* slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__update_reg_bounds(reg);
}
static bool __reg32_bound_s64(s32 a)
{
return a >= 0 && a <= S32_MAX;
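[Editorial aside, not part of the patch: the block comment in reg_bounds_sync() explains why __update_reg_bounds() runs a second time. A self-contained userspace sketch of the arithmetic in that comment follows, using a toy copy of the kernel's struct tnum; the single mask-and below stands in for the general tnum_intersect()/tnum_range() machinery and is valid just for this example.]

#include <stdint.h>
#include <stdio.h>

/* Toy copy of the kernel's tnum (kernel/bpf/tnum.c): value holds the
 * bits known to be 1, mask holds the bits whose value is unknown.
 */
struct tnum { uint64_t value; uint64_t mask; };

/* Largest value a tnum can represent: all unknown bits assumed 1. */
static uint64_t tnum_umax(struct tnum t)
{
	return t.value | t.mask;
}

int main(void)
{
	/* var_off = (0; 0xf...fc): low two bits known to be 0. */
	struct tnum var_off = { 0, ~(uint64_t)3 };
	uint64_t umax = 0x7fffffffffffffffULL;

	/* Simplified stand-in for intersecting var_off with the range
	 * [0, umax]: bits that umax forces to 0 can be dropped from the
	 * unknown mask, giving (0; 0x7f...fc) as in the comment above.
	 */
	var_off.mask &= umax;

	/* The refined var_off in turn implies a tighter umax. */
	printf("new umax = 0x%llx\n",
	       (unsigned long long)tnum_umax(var_off));
	return 0;
}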
@@ -1603,16 +1618,8 @@ static void __reg_combine_32_into_64(struct bpf_reg_state *reg)
* so they do not impact tnum bounds calculation.
*/
__mark_reg64_unbounded(reg);
__update_reg_bounds(reg);
}
/* Intersecting with the old var_off might have improved our bounds
* slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__reg_deduce_bounds(reg);
__reg_bound_offset(reg);
__update_reg_bounds(reg);
reg_bounds_sync(reg);
}
static bool __reg64_bound_s32(s64 a)
@@ -1628,7 +1635,6 @@ static bool __reg64_bound_u32(u64 a)
static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
{
__mark_reg32_unbounded(reg);
if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) {
reg->s32_min_value = (s32)reg->smin_value;
reg->s32_max_value = (s32)reg->smax_value;
@@ -1637,14 +1643,7 @@ static void __reg_combine_64_into_32(struct bpf_reg_state *reg)
reg->u32_min_value = (u32)reg->umin_value;
reg->u32_max_value = (u32)reg->umax_value;
}
/* Intersecting with the old var_off might have improved our bounds
* slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__reg_deduce_bounds(reg);
__reg_bound_offset(reg);
__update_reg_bounds(reg);
reg_bounds_sync(reg);
}
/* Mark a register as having a completely unknown (scalar) value. */
@@ -6943,9 +6942,7 @@ static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
ret_reg->s32_max_value = meta->msize_max_value;
ret_reg->smin_value = -MAX_ERRNO;
ret_reg->s32_min_value = -MAX_ERRNO;
__reg_deduce_bounds(ret_reg);
__reg_bound_offset(ret_reg);
__update_reg_bounds(ret_reg);
reg_bounds_sync(ret_reg);
}
static int
@@ -8202,11 +8199,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type))
return -EINVAL;
__update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
reg_bounds_sync(dst_reg);
if (sanitize_check_bounds(env, insn, dst_reg) < 0)
return -EACCES;
if (sanitize_needed(opcode)) {
@@ -8944,10 +8937,7 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
/* ALU32 ops are zero extended into 64bit register */
if (alu32)
zext_32_to_64(dst_reg);
__update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
reg_bounds_sync(dst_reg);
return 0;
}
@@ -9136,10 +9126,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
insn->dst_reg);
}
zext_32_to_64(dst_reg);
__update_reg_bounds(dst_reg);
__reg_deduce_bounds(dst_reg);
__reg_bound_offset(dst_reg);
reg_bounds_sync(dst_reg);
}
} else {
/* case: R = imm
@@ -9577,26 +9564,33 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
return;
switch (opcode) {
/* JEQ/JNE comparison doesn't change the register equivalence.
*
* r1 = r2;
* if (r1 == 42) goto label;
* ...
* label: // here both r1 and r2 are known to be 42.
*
* Hence when marking register as known preserve its ID.
*/
case BPF_JEQ:
if (is_jmp32) {
__mark_reg32_known(true_reg, val32);
true_32off = tnum_subreg(true_reg->var_off);
} else {
___mark_reg_known(true_reg, val);
true_64off = true_reg->var_off;
}
break;
case BPF_JNE:
{
struct bpf_reg_state *reg =
opcode == BPF_JEQ ? true_reg : false_reg;
/* JEQ/JNE comparison doesn't change the register equivalence.
* r1 = r2;
* if (r1 == 42) goto label;
* ...
* label: // here both r1 and r2 are known to be 42.
*
* Hence when marking register as known preserve its ID.
*/
if (is_jmp32)
__mark_reg32_known(reg, val32);
else
___mark_reg_known(reg, val);
if (is_jmp32) {
__mark_reg32_known(false_reg, val32);
false_32off = tnum_subreg(false_reg->var_off);
} else {
___mark_reg_known(false_reg, val);
false_64off = false_reg->var_off;
}
break;
}
case BPF_JSET:
if (is_jmp32) {
false_32off = tnum_and(false_32off, tnum_const(~val32));
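[Editorial aside, not part of the patch: the split JEQ/JNE handling above not only preserves register IDs but also records the learned tnum into true_32off/false_32off, so the recombination after the switch does not discard what the compare established. The underlying 32-vs-64-bit subtlety can be seen in a plain C analogy, illustrative only:]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A 64-bit register whose upper 32 bits are unknown to the
	 * verifier but happen to be nonzero at runtime.
	 */
	uint64_t reg = 0xdeadbeef00000000ULL | 8;

	/* A jmp32 JEQ against 8 compares only the low subregister.
	 * In the taken branch the verifier may mark the subreg as the
	 * constant 8, but must not treat the full 64-bit value as 8:
	 * the upper bits survive and have to be carried along, which
	 * is what the tnum_subreg() assignments above ensure.
	 */
	if ((uint32_t)reg == 8)
		printf("low 32 bits == 8, yet reg = 0x%llx\n",
		       (unsigned long long)reg);
	return 0;
}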
@@ -9735,21 +9729,8 @@ static void __reg_combine_min_max(struct bpf_reg_state *src_reg,
dst_reg->smax_value);
src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off,
dst_reg->var_off);
/* We might have learned new bounds from the var_off. */
__update_reg_bounds(src_reg);
__update_reg_bounds(dst_reg);
/* We might have learned something about the sign bit. */
__reg_deduce_bounds(src_reg);
__reg_deduce_bounds(dst_reg);
/* We might have learned some bits from the bounds. */
__reg_bound_offset(src_reg);
__reg_bound_offset(dst_reg);
/* Intersecting with the old var_off might have improved our bounds
* slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
* then new var_off is (0; 0x7f...fc) which improves our umax.
*/
__update_reg_bounds(src_reg);
__update_reg_bounds(dst_reg);
reg_bounds_sync(src_reg);
reg_bounds_sync(dst_reg);
}
static void reg_combine_min_max(struct bpf_reg_state *true_src,
@@ -332,6 +332,7 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
for (i = 0; i < dma_map->dma_pages_cnt; i++) {
dma = &dma_map->dma_pages[i];
if (*dma) {
*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
DMA_BIDIRECTIONAL, attrs);
*dma = 0;
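[Editorial aside, not part of the patch: this is the fix from bullet 1). The XSK pool stores the next-page-contiguity flag inside the saved dma_addr_t itself, so the bit has to be masked out before the address is handed back to dma_unmap_page_attrs(), which would otherwise see an address different from the one originally mapped. A userspace-runnable sketch of the pattern; the mask value mirrors the kernel header's definition at the time of the patch, while the unmap stand-in is hypothetical:]

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* Bit 0 of a page-aligned DMA address is always 0, so xsk borrows it
 * to flag "next page is physically contiguous".
 */
#define XSK_NEXT_PG_CONTIG_SHIFT 0
#define XSK_NEXT_PG_CONTIG_MASK  (1ULL << XSK_NEXT_PG_CONTIG_SHIFT)

/* Stand-in for dma_unmap_page_attrs(): must see the original address. */
static void fake_unmap(dma_addr_t dma)
{
	printf("unmapping 0x%llx\n", (unsigned long long)dma);
}

int main(void)
{
	/* A mapped page whose entry carries the contiguity flag. */
	dma_addr_t dma = 0x100000 | XSK_NEXT_PG_CONTIG_MASK;

	dma &= ~XSK_NEXT_PG_CONTIG_MASK; /* the fix: strip the flag first */
	fake_unmap(dma);
	return 0;
}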
@@ -25,12 +25,19 @@ static unsigned long nhit;
static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";
module_param_string(symbol, symbol, sizeof(symbol), 0644);
MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern.");
static char nosymbol[MAX_SYMBOL_LEN] = "";
module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);
MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern.");
static bool stackdump = true;
module_param(stackdump, bool, 0644);
MODULE_PARM_DESC(stackdump, "Enable stackdump.");
static bool use_trace = false;
module_param(use_trace, bool, 0644);
MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging.");
static void show_backtrace(void)
{
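[Editorial aside, not part of the patch: the hunk above simply pairs each existing module_param*() with a MODULE_PARM_DESC() so the descriptions appear in modinfo output. For reference, a minimal buildable sketch of that pairing; the module and parameter names here are hypothetical, not from the patch:]

#include <linux/module.h>
#include <linux/moduleparam.h>

#define MAX_PATTERN_LEN 64

/* A writable string parameter, exposed under
 * /sys/module/pattern_demo/parameters/pattern.
 */
static char pattern[MAX_PATTERN_LEN] = "kernel_clone";
module_param_string(pattern, pattern, sizeof(pattern), 0644);
MODULE_PARM_DESC(pattern, "Wildcard pattern of symbols to match.");

static int __init pattern_demo_init(void)
{
	pr_info("pattern_demo: pattern=%s\n", pattern);
	return 0;
}

static void __exit pattern_demo_exit(void)
{
}

module_init(pattern_demo_init);
module_exit(pattern_demo_exit);
MODULE_LICENSE("GPL");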
@@ -864,3 +864,24 @@
.result = ACCEPT,
.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
},
{
"jeq32/jne32: bounds checking",
.insns = {
BPF_MOV64_IMM(BPF_REG_6, 563),
BPF_MOV64_IMM(BPF_REG_2, 0),
BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0),
BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0),
BPF_ALU32_REG(BPF_OR, BPF_REG_2, BPF_REG_6),
BPF_JMP32_IMM(BPF_JNE, BPF_REG_2, 8, 5),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_2, 500, 2),
BPF_MOV64_IMM(BPF_REG_0, 2),
BPF_EXIT_INSN(),
BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
BPF_EXIT_INSN(),
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
.retval = 1,
},
@@ -373,3 +373,25 @@
.result = ACCEPT,
.retval = 3,
},
{
"jump & dead code elimination",
.insns = {
BPF_MOV64_IMM(BPF_REG_0, 1),
BPF_MOV64_IMM(BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 32767),
BPF_JMP_IMM(BPF_JSGE, BPF_REG_3, 0, 1),
BPF_EXIT_INSN(),
BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1),
BPF_EXIT_INSN(),
BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32767),
BPF_MOV64_IMM(BPF_REG_0, 2),
BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 0, 1),
BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
BPF_EXIT_INSN(),
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = ACCEPT,
.retval = 2,
},