- 25 Oct, 2022 7 commits
-
-
Quentin Monnet authored
For offloaded BPF programs, instead of failing to create the LLVM disassembler without even looking for a triple, do run the function that attempts to retrieve a valid architecture name for the device. It will still fail for the LLVM disassembler, because currently we have no valid triple to return (NFP disassembly is not supported by LLVM). But failing in that function is more logical than assuming in jit_disasm.c that passing an "arch" name is simply not supported. Suggested-by: Song Liu <song@kernel.org> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/r/20221025150329.97371-8-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Quentin Monnet authored
To disassemble instructions for JIT-ed programs, bpftool has relied on the libbfd library. This has been problematic in the past: libbfd's interface is not meant to be stable and has changed several times. For building bpftool, we have to detect how the libbfd version on the system behaves, which is why we have to handle features disassembler-four-args and disassembler-init-styled in the Makefile. When it comes to shipping bpftool, this has also caused issues with several distribution maintainers unwilling to support the feature (see for example Debian's page for binutils-dev, which ships libbfd: "Note that building Debian packages which depend on the shared libbfd is Not Allowed." [0]).

For these reasons, we add support for LLVM as an alternative to libbfd for disassembling instructions of JIT-ed programs. Thanks to the preparation work in the previous commits, it's easy to add the library by passing the relevant compilation options in the Makefile, and by adding the functions for setting up the LLVM disassembler in file jit_disasm.c.

The LLVM disassembler requires the LLVM development package (usually llvm-dev or llvm-devel). The expectation is that the interface for this disassembler will be more stable. There is a note in LLVM's Developer Policy [1] stating that the stability for the C API is "best effort" and not guaranteed, but at least there is some effort to keep compatibility when possible (which hasn't really been the case for libbfd so far). Furthermore, the Debian page for the related LLVM package does not caution against linking to the lib, as the binutils-dev page does.

Naturally, the display of disassembled instructions comes with a few minor differences. Here is a sample output with libbfd (already supported before this patch):

    # bpftool prog dump jited id 56
    bpf_prog_6deef7357e7b4530:
       0:   nopl   0x0(%rax,%rax,1)
       5:   xchg   %ax,%ax
       7:   push   %rbp
       8:   mov    %rsp,%rbp
       b:   push   %rbx
       c:   push   %r13
       e:   push   %r14
      10:   mov    %rdi,%rbx
      13:   movzwq 0xb4(%rbx),%r13
      1b:   xor    %r14d,%r14d
      1e:   or     $0x2,%r14d
      22:   mov    $0x1,%eax
      27:   cmp    $0x2,%r14
      2b:   jne    0x000000000000002f
      2d:   xor    %eax,%eax
      2f:   pop    %r14
      31:   pop    %r13
      33:   pop    %rbx
      34:   leave
      35:   ret

LLVM supports several variants that we could set when initialising the disassembler, for example with:

    LLVMSetDisasmOptions(*ctx, LLVMDisassembler_Option_AsmPrinterVariant);

but the default printer is used for now. Here is the output with LLVM:

    # bpftool prog dump jited id 56
    bpf_prog_6deef7357e7b4530:
       0:   nopl    (%rax,%rax)
       5:   nop
       7:   pushq   %rbp
       8:   movq    %rsp, %rbp
       b:   pushq   %rbx
       c:   pushq   %r13
       e:   pushq   %r14
      10:   movq    %rdi, %rbx
      13:   movzwq  180(%rbx), %r13
      1b:   xorl    %r14d, %r14d
      1e:   orl     $2, %r14d
      22:   movl    $1, %eax
      27:   cmpq    $2, %r14
      2b:   jne     0x2f
      2d:   xorl    %eax, %eax
      2f:   popq    %r14
      31:   popq    %r13
      33:   popq    %rbx
      34:   leave
      35:   retq

The LLVM disassembler comes as the default choice, with libbfd as a fall-back. Of course, we could replace libbfd entirely and avoid supporting two different libraries. One reason for keeping libbfd is that, right now, it works well, we have all we need in terms of features detection in the Makefile, so it provides a fallback for disassembling JIT-ed programs if libbfd is installed but LLVM is not. The other motivation is that libbfd supports nfp instructions for Netronome's SmartNICs and can be used to disassemble offloaded programs, something that LLVM cannot do. If libbfd's interface breaks again in the future, we might reconsider keeping support for it.
[0] https://packages.debian.org/buster/binutils-dev
[1] https://llvm.org/docs/DeveloperPolicy.html#c-api-changes

Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/20221025150329.97371-7-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
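As a rough illustration of the LLVM-C API mentioned above (not bpftool's actual jit_disasm.c code; triple selection, symbol resolution and output handling are more involved there), a minimal sketch of driving the LLVM disassembler over a buffer of JIT-ed instructions could look like this, assuming the caller supplies a valid target triple:

    #include <stdio.h>
    #include <llvm-c/Disassembler.h>
    #include <llvm-c/Target.h>

    /* Illustrative only: disassemble a raw buffer of machine code. */
    static int dump_insns(unsigned char *image, size_t len, const char *triple)
    {
        LLVMDisasmContextRef ctx;
        char out[128];
        size_t pc = 0;

        LLVMInitializeAllTargetInfos();
        LLVMInitializeAllTargetMCs();
        LLVMInitializeAllDisassemblers();

        ctx = LLVMCreateDisasm(triple, NULL, 0, NULL, NULL);
        if (!ctx)
            return -1;

        while (pc < len) {
            size_t cnt = LLVMDisasmInstruction(ctx, image + pc, len - pc,
                                               pc, out, sizeof(out));
            if (!cnt)
                break;  /* unknown or invalid encoding */
            printf("%4zx:\t%s\n", pc, out);
            pc += cnt;
        }

        LLVMDisasmDispose(ctx);
        return 0;
    }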
-
Quentin Monnet authored
Refactor disasm_print_insn() to extract the code specific to libbfd and move it to dedicated functions. There is no functional change. This is in preparation for supporting an alternative library for disassembling the instructions. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221025150329.97371-6-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Quentin Monnet authored
Bpftool uses libbfd for disassembling JIT-ed programs. But the feature is optional, and the tool can be compiled without libbfd support. The Makefile sets the relevant variables accordingly. It also sets variables related to libbfd's interface, given that it has changed over time. Group all those libbfd-related definitions so that it's easier to understand what we are testing for, and only use variables related to libbfd's interface if we need libbfd in the first place. In addition to making the Makefile clearer, grouping the definitions related to disassembling JIT-ed programs will help support alternatives to libbfd. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221025150329.97371-5-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Quentin Monnet authored
Make FEATURE_TESTS and FEATURE_DISPLAY easier to read and less likely to be subject to conflicts on updates by having one feature per line. Suggested-by: Andres Freund <andres@anarazel.de> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221025150329.97371-4-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Quentin Monnet authored
The JIT disassembler in bpftool is the only component (along with the JSON writer) using asserts to check the return values of functions. But it does not do so in a consistent way, and disasm_print_insn() returns no value, even though the operation sometimes fails. Remove the asserts, and instead check the return values, print messages on errors, and propagate the error to the caller from prog.c. Remove the inclusion of assert.h from jit_disasm.c, and also from map.c where it is unused. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221025150329.97371-3-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Quentin Monnet authored
_GNU_SOURCE is defined in several source files for bpftool, but only one of them takes the precaution of checking whether the value is already defined. Add #ifndef for other occurrences too. This is in preparation for the support of disassembling JIT-ed programs with LLVM, with $(llvm-config --cflags) passing -D_GNU_SOURCE as a compilation argument. Signed-off-by: Quentin Monnet <quentin@isovalent.com> Tested-by: Niklas Söderlund <niklas.soderlund@corigine.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221025150329.97371-2-quentin@isovalent.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
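The guard itself is the usual idiom (a sketch; the exact list of files touched is in the patch):

    /* Avoid a redefinition warning when -D_GNU_SOURCE is already passed
     * on the command line, e.g. via $(llvm-config --cflags). */
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif

    #include <stdio.h>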
-
- 22 Oct, 2022 4 commits
-
-
Dave Marchevsky authored
Modify the iter prog in the existing bpf_iter_bpf_array_map.c, which currently dumps arraymap key/val, to also do a write of (val, key) into a newly-added hashmap. Confirm that the write succeeds as expected by modifying the userspace runner program. Before a change added in an earlier commit - considering PTR_TO_BUF reg a valid input to helpers which expect MAP_{KEY,VAL} - the verifier would've rejected this prog change due to type mismatch. Since using the current iter's key/val to access a separate map is a reasonable use case, let's add support for it. Note that the test prog cannot directly write (val, key) into the hashmap via bpf_map_update_elem when both come from iter context, because key is marked MEM_RDONLY. This is due to bpf_map_update_elem - and other basic map helpers - taking ARG_PTR_TO_MAP_{KEY,VALUE} w/o the MEM_RDONLY type flag. bpf_map_{lookup,update,delete}_elem don't modify their input key/val so it should be possible to tag their args READONLY, but due to the ubiquitous use of these helpers and verifier checks for type == MAP_VALUE, such a change is nontrivial and seems better to address in a followup series. Also fix up some 'goto's in the test runner's map checking loop. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221020160721.4030492-4-davemarchevsky@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
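A rough sketch of the shape of such an iterator program (map layout and names are illustrative, not the exact selftest code):

    /* Illustrative sketch: an iterator program that re-uses the iterated
     * array map's value/key pair to update a separate hashmap. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 64);
        __type(key, __u64);
        __type(value, __u32);
    } hashmap SEC(".maps");

    SEC("iter/bpf_map_elem")
    int dump_bpf_array_map(struct bpf_iter__bpf_map_elem *ctx)
    {
        __u32 *key = ctx->key;    /* PTR_TO_BUF, read-only */
        __u64 *val = ctx->value;  /* PTR_TO_BUF, read-write */
        __u32 key_copy;

        if (!key || !val)
            return 0;

        /* ctx->key is MEM_RDONLY and bpf_map_update_elem()'s args are not
         * tagged read-only yet, so copy the key first. The writable value
         * buffer can be used directly as the map key now that PTR_TO_BUF
         * is accepted for ARG_PTR_TO_MAP_{KEY,VALUE}. */
        key_copy = *key;
        bpf_map_update_elem(&hashmap, val, &key_copy, BPF_ANY);
        return 0;
    }

    char _license[] SEC("license") = "GPL";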
-
Dave Marchevsky authored
Add a test_ringbuf_map_key test prog, borrowing heavily from the extant test_ringbuf.c. The program tries to use the result of bpf_ringbuf_reserve as a map_key, which was not possible before previous commits in this series. The test runner added to prog_tests/ringbuf.c verifies that the program loads and does basic sanity checks to confirm that it runs as expected. Also, refactor test_ringbuf such that runners for the existing test_ringbuf and the newly-added test_ringbuf_map_key are subtests of the 'ringbuf' top-level test. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221020160721.4030492-3-davemarchevsky@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dave Marchevsky authored
After the previous patch, which added PTR_TO_MEM | MEM_ALLOC type map_key_value_types, the only difference between map_key_value_types and mem_types sets is PTR_TO_BUF and PTR_TO_MEM, which are in the latter set but not the former. Helpers which expect ARG_PTR_TO_MAP_KEY or ARG_PTR_TO_MAP_VALUE already effectively expect a valid blob of arbitrary memory that isn't necessarily explicitly associated with a map. When validating a PTR_TO_MAP_{KEY,VALUE} arg, the verifier expects meta->map_ptr to have already been set, either by an earlier ARG_CONST_MAP_PTR arg, or custom logic like that in process_timer_func or process_kptr_func. So let's get rid of map_key_value_types and just use mem_types for those args. This has the effect of adding PTR_TO_BUF and PTR_TO_MEM to the set of compatible types for ARG_PTR_TO_MAP_KEY and ARG_PTR_TO_MAP_VALUE. PTR_TO_BUF is used by various bpf_iter implementations to represent a chunk of valid r/w memory in ctx args for iter prog. PTR_TO_MEM is used by networking, tracing, and ringbuf helpers to represent a chunk of valid memory. The PTR_TO_MEM | MEM_ALLOC type added in previous commit is specific to ringbuf helpers. Presence or absence of MEM_ALLOC doesn't change the validity of using PTR_TO_MEM as a map_{key,val} input. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221020160721.4030492-2-davemarchevsky@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dave Marchevsky authored
This patch adds support for the following pattern:

    struct some_data *data = bpf_ringbuf_reserve(&ringbuf, sizeof(struct some_data), 0);
    if (!data)
        return;
    bpf_map_lookup_elem(&another_map, &data->some_field);
    bpf_ringbuf_submit(data, 0);

Currently the verifier does not consider bpf_ringbuf_reserve's PTR_TO_MEM | MEM_ALLOC ret type a valid key input to bpf_map_lookup_elem. Since PTR_TO_MEM is by definition a valid region of memory, it is safe to use it as a key for lookups. Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221020160721.4030492-1-davemarchevsky@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 21 Oct, 2022 15 commits
-
-
Andrii Nakryiko authored
Manu Bretelle says: ==================== This patchset adds initial support for running BPF's vmtest on the aarch64 architecture. It includes a `config.aarch64` heavily based on `config.s390x`. It makes vmtest.sh handle aarch64 and set QEMU variables to values that work on that arch. Finally, it provides a DENYLIST.aarch64 that takes care of currently broken tests on aarch64 so the vmtest run passes. This was tested by running:

    LLVM_STRIP=llvm-strip-16 CLANG=clang-16 \
      tools/testing/selftests/bpf/vmtest.sh -- \
      ./test_progs -d \
        \"$(cat tools/testing/selftests/bpf/DENYLIST{,.aarch64} \
          | cut -d'#' -f1 \
          | sed -e 's/^[[:space:]]*//' \
                -e 's/[[:space:]]*$//' \
          | tr -s '\n' ',' \
        )\"

on an aarch64 host. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Manu Bretelle authored
Those tests are currently failing on aarch64; ignore them until they are individually addressed. Using this deny list, vmtest.sh ran successfully using:

    LLVM_STRIP=llvm-strip-16 CLANG=clang-16 \
      tools/testing/selftests/bpf/vmtest.sh -- \
      ./test_progs -d \
        \"$(cat tools/testing/selftests/bpf/DENYLIST{,.aarch64} \
          | cut -d'#' -f1 \
          | sed -e 's/^[[:space:]]*//' \
                -e 's/[[:space:]]*$//' \
          | tr -s '\n' ',' \
        )\"

Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221021210701.728135-5-chantr4@gmail.com
-
Manu Bretelle authored
Add handling of aarch64 when setting QEMU options and provide the right path to the aarch64 kernel image. Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221021210701.728135-4-chantr4@gmail.com
-
Manu Bretelle authored
config.aarch64, similarly to config.{s390x,x86_64}, is a config for building an aarch64 kernel to be used in bpf's selftests/kernel-patches CI. Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221021210701.728135-3-chantr4@gmail.com
-
Manu Bretelle authored
`config.s390x` had entries already present in `config`. When generating the config used by vmtest, we concatenate the `config` file with the `config.{arch}` one, making those entries duplicated. This patch removes that duplication.

Before:

    $ comm -1 -2 <(sort tools/testing/selftests/bpf/config.s390x) <(sort tools/testing/selftests/bpf/config)
    CONFIG_MODULE_SIG=y
    CONFIG_MODULES=y
    CONFIG_MODULE_UNLOAD=y
    $

After:

    $ comm -1 -2 <(sort tools/testing/selftests/bpf/config.s390x) <(sort tools/testing/selftests/bpf/config)
    $

Signed-off-by: Manu Bretelle <chantr4@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221021210701.728135-2-chantr4@gmail.com
-
Quentin Monnet authored
Along with the version number, "bpftool version" displays a list of features that were selected at compilation time for bpftool. It would be useful to indicate in that list whether a binary is a bootstrap version of bpftool. Given that an increasing number of components rely on bootstrap versions for generating skeletons, this could help understand what a binary is capable of if it has been copied outside of the usual "bootstrap" directory. To detect a bootstrap version, we simply rely on the absence of implementation for the do_prog() function. To do this, we must move the (unchanged) list of commands before do_version(), which in turn requires renaming this "cmds" array to avoid shadowing it with the "cmds" argument in cmd_select(). Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20221020100332.69563-1-quentin@isovalent.com
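One self-contained way to picture "detect a capability by the absence of its implementation" is the weak-symbol stub pattern below; this is an illustration of the idea only, not bpftool's actual main.c code, and the names are hypothetical:

    /* Illustration: a weak declaration resolves to NULL when the bootstrap
     * build leaves the real do_prog() out, so the binary can report it. */
    #include <stdbool.h>
    #include <stdio.h>

    int do_prog(int argc, char **argv) __attribute__((weak));

    static bool is_bootstrap_binary(void)
    {
        return !do_prog;    /* no implementation linked in */
    }

    int main(void)
    {
        printf("features:%s\n", is_bootstrap_binary() ? " bootstrap" : "");
        return 0;
    }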
-
Quentin Monnet authored
Commands "bpftool help" or "bpftool version" use argv[0] to display the name of the binary. While it is a convenient way to retrieve the string, it does not always produce the most readable output. For example, because of the way bpftool is currently packaged on Ubuntu (using a wrapper script), the command displays the absolute path for the binary: $ bpftool version | head -n 1 /usr/lib/linux-tools/5.15.0-50-generic/bpftool v5.15.60 More generally, there is no apparent reason for keeping the whole path and exact binary name in this output. If the user wants to understand what binary is being called, there are other ways to do so. This commit replaces argv[0] with "bpftool", to simply reflect what the tool is called. This is aligned on what "ip" or "tc" do, for example. As an additional benefit, this seems to help with integration with Meson for packaging [0]. [0] https://github.com/NixOS/nixpkgs/pull/195934Suggested-by: Vladimír Čunát <vladimir.cunat@nic.cz> Signed-off-by: Quentin Monnet <quentin@isovalent.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20221020100300.69328-1-quentin@isovalent.com
-
Xu Kuohai authored
The reg_name in parse_usdt_arg() is used to hold a register name, which is short enough to be held in a 16-byte array, so we could define reg_name as char reg_name[16] to avoid dynamically allocating reg_name with sscanf. Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com> Signed-off-by: Xu Kuohai <xukuohai@huawei.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/bpf/20221018145538.2046842-1-xukuohai@huaweicloud.com
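In essence, the change swaps the allocating sscanf conversion for a bounded one; a simplified stand-in sketch (the real USDT arg-spec format string in libbpf is more elaborate than this):

    #include <stdio.h>
    #include <stdlib.h>

    static void parse_register(const char *arg_str)
    {
        /* Before: the "%m" modifier makes sscanf() malloc() the string,
         * which then has to be freed on every exit path. */
        char *dyn_name = NULL;

        if (sscanf(arg_str, " %%%m[a-z0-9]", &dyn_name) == 1) {
            printf("allocated: %s\n", dyn_name);
            free(dyn_name);
        }

        /* After: a register name always fits in 16 bytes, so a fixed-size
         * buffer with an explicit field width avoids the allocation. */
        char reg_name[16];

        if (sscanf(arg_str, " %%%15[a-z0-9]", reg_name) == 1)
            printf("fixed buffer: %s\n", reg_name);
    }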
-
Delyan Kratunov authored
BPF CI has revealed flakiness in the task_local_storage/exit_creds test. The failure point in CI [1] is that null_ptr_count is equal to 0, which indicates that the program hasn't run yet. This points to the kern_sync_rcu (sys_membarrier -> synchronize_rcu underneath) not waiting sufficiently. Indeed, synchronize_rcu only waits for read-side sections that started before the call. If the program execution starts *during* the synchronize_rcu invocation (due to, say, preemption), the test won't wait long enough. As a speculative fix, call synchronize_rcu in a loop until an explicit run counter has gone up. [1]: https://github.com/kernel-patches/bpf/actions/runs/3268263235/jobs/5374940791 Signed-off-by: Delyan Kratunov <delyank@meta.com> Link: https://lore.kernel.org/r/156d4ef82275a074e8da8f4cffbd01b0c1466493.camel@meta.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
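A sketch of the retry loop shape (the skeleton and counter field names are assumptions, not the test's exact identifiers; kern_sync_rcu() is the selftest helper mentioned above, doing sys_membarrier -> synchronize_rcu):

    /* Sketch only: keep syncing RCU until the program is known to have run,
     * instead of assuming a single grace period is enough. */
    static void wait_for_prog_run(struct task_local_storage_exit_creds *skel)
    {
        int retries = 10000;    /* arbitrary bound for illustration */

        while (retries--) {
            kern_sync_rcu();
            /* assumed skeleton field counting program executions */
            if (skel->bss->run_count > 0)
                break;
        }
    }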
-
Alexei Starovoitov authored
Wang Yufen says: ==================== This patchset adds an "autoattach" option for "bpftool prog load(_all)" to support one-step load-attach-pin_link.

v8 -> v9: fix a link leak, and change pathname_concat() (specify not just the buffer pointer, but also its size)
v7 -> v8: for programs not supporting autoattach, fall back to regular pinning instead of skipping
v6 -> v7: add an info message and update the doc for skipped programs
v5 -> v6: skip the programs not supporting auto-attach, and change the option name from "auto_attach" to "autoattach"
v4 -> v5: some formatting nits of doc
v3 -> v4: rename functions, update doc, bash and do_help()
v2 -> v3: switch to extending the prog load command instead of extending perf
v2: https://patchwork.kernel.org/project/netdevbpf/patch/20220824033837.458197-1-weiyongjun1@huawei.com/
v1: https://patchwork.kernel.org/project/netdevbpf/patch/20220816151725.153343-1-weiyongjun1@huawei.com/
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Wang Yufen authored
Add the autoattach option to prog load|loadall to support one-step load-attach-pin_link. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Link: https://lore.kernel.org/r/1665736275-28143-4-git-send-email-wangyufen@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Wang Yufen authored
Add the autoattach option to prog load|loadall to support one-step load-attach-pin_link. Signed-off-by: Wang Yufen <wangyufen@huawei.com> Link: https://lore.kernel.org/r/1665736275-28143-3-git-send-email-wangyufen@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Wang Yufen authored
Add an autoattach option to support one-step load-attach-pin_link. For example:

    $ bpftool prog loadall test.o /sys/fs/bpf/test autoattach

    $ bpftool prog
    26: tracing  name test1  tag f0da7d0058c00236  gpl
            loaded_at 2022-09-09T21:39:49+0800  uid 0
            xlated 88B  jited 55B  memlock 4096B  map_ids 3
            btf_id 55
    28: kprobe  name test3  tag 002ef1bef0723833  gpl
            loaded_at 2022-09-09T21:39:49+0800  uid 0
            xlated 88B  jited 56B  memlock 4096B  map_ids 3
            btf_id 55
    57: tracepoint  name oncpu  tag 7aa55dfbdcb78941  gpl
            loaded_at 2022-09-09T21:41:32+0800  uid 0
            xlated 456B  jited 265B  memlock 4096B  map_ids 17,13,14,15
            btf_id 82

    $ bpftool link
    1: tracing  prog 26
            prog_type tracing  attach_type trace_fentry
    3: perf_event  prog 28
    10: perf_event  prog 57

The autoattach option supports tracepoints, k(ret)probes and u(ret)probes.

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Wang Yufen <wangyufen@huawei.com> Link: https://lore.kernel.org/r/1665736275-28143-2-git-send-email-wangyufen@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Wang Yufen authored
After commit afef88e6 ("selftests/bpf: Store BPF object files with .bpf.o extension"), we should use *.bpf.o instead of *.o. In addition, use the BPF_FILE variable to hold the BPF object file name, so that it is easier to identify and modify. Fixes: afef88e6 ("selftests/bpf: Store BPF object files with .bpf.o extension") Signed-off-by: Wang Yufen <wangyufen@huawei.com> Acked-by: Stanislav Fomichev <sdf@google.com> Link: https://lore.kernel.org/r/1666235134-562-1-git-send-email-wangyufen@huawei.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Donald Hunter authored
Add a more complete introduction, with links to man pages. Move toctree of map types above usage notes. Format usage notes to improve readability. Signed-off-by: Donald Hunter <donald.hunter@gmail.com> Link: https://lore.kernel.org/r/20221012152715.25073-1-donald.hunter@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 19 Oct, 2022 14 commits
-
-
Alexei Starovoitov authored
Jie Meng says: ==================== With the baseline x64 instruction set, the shift count can only be an immediate or in %cl. The implicit dependency on %cl makes it necessary to shuffle registers around and/or add push/pop operations. BMI2 provides shift instructions that can use any general register as the shift count, saving us instructions and a few bytes in most cases. Suboptimal codegen when %ecx is source and/or destination is also addressed and unnecessary instructions are removed.

    test_progs: Summary: 267/1340 PASSED, 25 SKIPPED, 0 FAILED
    test_progs-no_alu32: Summary: 267/1333 PASSED, 26 SKIPPED, 0 FAILED
    test_verifier: Summary: 1367 PASSED, 636 SKIPPED, 0 FAILED
      (same result with or without BMI2)
    test_maps: OK, 0 SKIPPED
    lib/test_bpf:
      test_bpf: Summary: 1026 PASSED, 0 FAILED, [1014/1014 JIT'ed]
      test_bpf: test_tail_calls: Summary: 10 PASSED, 0 FAILED, [10/10 JIT'ed]
      test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED

---
v4 -> v5:
 - More comments regarding instruction encoding
v3 -> v4:
 - Fixed a regression when BMI2 isn't available
==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jie Meng authored
Current tests cover only shifts with an immediate as the source operand/shift count; add a new test case to cover a register operand. Signed-off-by: Jie Meng <jmeng@fb.com> Link: https://lore.kernel.org/r/20221007202348.1118830-4-jmeng@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jie Meng authored
BMI2 provides 3 shift instructions (shrx, sarx and shlx) that use VEX encoding but target general purpose registers [1]. They allow the shift count in any general purpose register and have the same performance as non-BMI2 shift instructions [2].

Instead of shr/sar/shl, which implicitly use %cl (the lowest 8 bits of %rcx), emit their more flexible alternatives provided in BMI2 when advantageous; keep using the non-BMI2 instructions when the shift count is already in BPF_REG_4/%rcx, as the non-BMI2 instructions are shorter.

To summarize, when BMI2 is available:

    -------------------------------------------------
                |   arbitrary dst
    =================================================
    src == ecx  |   shl dst, cl
    -------------------------------------------------
    src != ecx  |   shlx dst, dst, src
    -------------------------------------------------

And no additional register shuffling is needed.

A concrete example comparing non-BMI2 and BMI2 codegen, shifting %rsi by %rdi:

    Without BMI2:
     ef3:   push   %rcx
            51
     ef4:   mov    %rdi,%rcx
            48 89 f9
     ef7:   shl    %cl,%rsi
            48 d3 e6
     efa:   pop    %rcx
            59

    With BMI2:
     f0b:   shlx   %rdi,%rsi,%rsi
            c4 e2 c1 f7 f6

[1] https://en.wikipedia.org/wiki/X86_Bit_manipulation_instruction_set
[2] https://www.agner.org/optimize/instruction_tables.pdf

Signed-off-by: Jie Meng <jmeng@fb.com> Link: https://lore.kernel.org/r/20221007202348.1118830-3-jmeng@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Jie Meng authored
x64 JIT produces redundant instructions when a shift operation's destination register is BPF_REG_4/ecx and this patch removes them. Specifically, when dest reg is BPF_REG_4 but the src isn't, we needn't push and pop ecx around shift only to get it overwritten by r11 immediately afterwards. In the rare case when both dest and src registers are BPF_REG_4, a single shift instruction is sufficient and we don't need the two MOV instructions around the shift.

To summarize using shift left as an example, without patch:

    -------------------------------------------------
                |   dst == ecx           |  dst != ecx
    =================================================
    src == ecx  |   mov r11, ecx         |  shl dst, cl
                |   shl r11, ecx         |
                |   mov ecx, r11         |
    -------------------------------------------------
    src != ecx  |   mov r11, ecx         |  push ecx
                |   push ecx             |  mov ecx, src
                |   mov ecx, src         |  shl dst, cl
                |   shl r11, cl          |  pop ecx
                |   pop ecx              |
                |   mov ecx, r11         |
    -------------------------------------------------

With patch:

    -------------------------------------------------
                |   dst == ecx           |  dst != ecx
    =================================================
    src == ecx  |   shl ecx, cl          |  shl dst, cl
    -------------------------------------------------
    src != ecx  |   mov r11, ecx         |  push ecx
                |   mov ecx, src         |  mov ecx, src
                |   shl r11, cl          |  shl dst, cl
                |   mov ecx, r11         |  pop ecx
    -------------------------------------------------

Signed-off-by: Jie Meng <jmeng@fb.com> Link: https://lore.kernel.org/r/20221007202348.1118830-2-jmeng@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Make libbpf more conservative in using the BPF_F_MMAPABLE flag with internal BPF array maps that are backing global data sections. See patch #2 for full description and justification. Changes in this patch set support having bpf_spinlock, kptr, rb_tree nodes and other "special" variables as global variables. Combining this with libbpf's existing support for multiple custom .data.* sections allows BPF programs to utilize multiple spinlock/rbtree_node/kptr variables in a pretty natural way, by just putting all such variables into separate data sections (and thus ARRAY maps). v1->v2: - address Stanislav's feedback, add acks. ==================== Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add a non-mmapable data section to the test_skeleton selftest and make sure it really isn't mmapable by trying to mmap() it anyway. Also make sure that libbpf doesn't report the BPF_F_MMAPABLE flag to users. Additionally, some more manual testing was performed to check that this feature works as intended.

Looking at the created map through bpftool shows that the flags passed to the kernel are indeed zero:

    $ bpftool map show
    ...
    1782: array  name .data.non_mmapa  flags 0x0
            key 4B  value 16B  max_entries 1  memlock 4096B
            btf_id 1169
            pids test_progs(8311)
    ...

Checking BTF uploaded to the kernel for this map shows that zero_key and zero_value are indeed marked as static, even though zero_key is actually an original global (but STV_HIDDEN) variable:

    $ bpftool btf dump id 1169
    ...
    [51] VAR 'zero_key' type_id=2, linkage=static
    [52] VAR 'zero_value' type_id=7, linkage=static
    ...
    [62] DATASEC '.data.non_mmapable' size=16 vlen=2
            type_id=51 offset=0 size=4 (VAR 'zero_key')
            type_id=52 offset=4 size=12 (VAR 'zero_value')
    ...

And the original BTF does have zero_key marked as linkage=global:

    $ bpftool btf dump file test_skeleton.bpf.linked3.o
    ...
    [51] VAR 'zero_key' type_id=2, linkage=global
    [52] VAR 'zero_value' type_id=7, linkage=static
    ...
    [62] DATASEC '.data.non_mmapable' size=16 vlen=2
            type_id=51 offset=0 size=4 (VAR 'zero_key')
            type_id=52 offset=4 size=12 (VAR 'zero_value')

Bpftool didn't require any changes at all because it already checks whether an internal map is mmapable, but just to double-check the generated skeleton, we see that .data.non_mmapable neither sets the mmaped pointer nor has a corresponding field in the skeleton:

    $ grep non_mmapable test_skeleton.skel.h
            struct bpf_map *data_non_mmapable;
            s->maps[7].name = ".data.non_mmapable";
            s->maps[7].map = &obj->maps.data_non_mmapable;

But .data.read_mostly has all of those things:

    $ grep read_mostly test_skeleton.skel.h
            struct bpf_map *data_read_mostly;
            struct test_skeleton__data_read_mostly {
                    int read_mostly_var;
            } *data_read_mostly;
            s->maps[6].name = ".data.read_mostly";
            s->maps[6].map = &obj->maps.data_read_mostly;
            s->maps[6].mmaped = (void **)&obj->data_read_mostly;
            _Static_assert(sizeof(s->data_read_mostly->read_mostly_var) == 4, "unexpected size of 'read_mostly_var'");

Acked-by: Stanislav Fomichev <sdf@google.com> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221019002816.359650-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Teach libbpf to not add the BPF_F_MMAPABLE flag unnecessarily for ARRAY maps that are backing data sections, if such data sections don't expose any variables to user-space. Exposed variables are those that have STB_GLOBAL or STB_WEAK ELF binding and correspond to BTF VAR's BTF_VAR_GLOBAL_ALLOCATED linkage.

The overall idea is that if some data section doesn't have any variable that is exposed through the BPF skeleton, then there is no reason to make such a BPF array mmapable. Making a BPF array mmapable is not a free no-op action, because the BPF verifier doesn't allow users to put special objects (such as BPF spin locks, RB tree nodes, linked list nodes, kptrs, etc; anything that has a sensitive internal state that should not be modified arbitrarily from user space) into mmapable arrays, as there is no way to prevent user space from corrupting such sensitive state through direct memory access through the memory-mapped region.

By making sure that libbpf doesn't add the BPF_F_MMAPABLE flag to BPF array maps corresponding to data sections that only have static variables (which are not supposed to be visible to user space according to libbpf and BPF skeleton rules), users now can have spinlocks, kptrs, etc in either the default .bss/.data sections or custom .data.* sections (assuming there are no global variables in such sections).

The only possible hiccup with this approach is the need to use global variables during BPF static linking, even if they are not intended to be shared with user space through the BPF skeleton. To allow such scenarios, extend libbpf's STV_HIDDEN ELF visibility attribute handling to variables. Libbpf already treats global hidden BPF subprograms as static subprograms and adjusts BTF accordingly, to make the BPF verifier verify such subprograms as static subprograms while preserving the entire BPF verifier state between subprog calls. This patch teaches libbpf to treat global hidden variables as static ones and adjust BTF information accordingly as well. This allows sharing variables between multiple object files during static linking, but still keeps them internal to the BPF program and not exposed through the BPF skeleton.

Note that if the user has some advanced scenario where they absolutely need the BPF_F_MMAPABLE flag on a .data/.bss/.rodata BPF array map despite only having static variables, they still can achieve this by forcing it through the explicit bpf_map__set_map_flags() API.

Acked-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Dave Marchevsky <davemarchevsky@fb.com> Link: https://lore.kernel.org/r/20221019002816.359650-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
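On the BPF program side, the intended usage pattern looks roughly like the sketch below; section and variable names are arbitrary, and this is an illustration of the rules described above rather than code from the patch:

    /* Sketch: variables kept internal to the BPF program, so the section
     * backing them can stay non-mmapable and hold "special" objects. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    /* static: never exposed through the skeleton */
    static struct bpf_spin_lock lock SEC(".data.internal");

    /* global but hidden: shareable across objects at static-link time,
     * still treated as internal and not exposed via mmap()/skeleton */
    __attribute__((visibility("hidden")))
    int shared_counter SEC(".data.internal");

    char LICENSE[] SEC("license") = "GPL";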
-
Andrii Nakryiko authored
Refactor libbpf's BTF fixup step during BPF object open phase. The only functional change is that we now ignore BTF_VAR_GLOBAL_EXTERN variables during fix up, not just BTF_VAR_STATIC ones, which shouldn't cause any change in behavior as there shouldn't be any extern variable in data sections for valid BPF object anyways. Otherwise it's just collapsing two functions that have no reason to be separate, and switching find_elf_var_offset() helper to return entire symbol pointer, not just its offset. This will be used by next patch to get ELF symbol visibility. While refactoring, also "normalize" debug messages inside btf_fixup_datasec() to follow general libbpf style and print out data section name consistently, where it's available. Acked-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20221019002816.359650-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Müller authored
This change adds a brief summary of the BPF continuous integration (CI) to the BPF selftest documentation. The summary focuses not so much on actual workings of the CI, as it is maintained outside of the repository, but aims to document the few bits of it that are sourced from this repository and that developers may want to adjust as part of patch submissions: the BPF kernel configuration and the deny list file(s). Changelog: - v1->v2: - use s390x instead of s390 for consistency Signed-off-by: Daniel Müller <deso@posteo.net> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20221018164015.1970862-1-deso@posteo.net Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Daniel Müller authored
This change fixes some typos found in the BPF samples README file. Signed-off-by: Daniel Müller <deso@posteo.net> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/r/20221018163231.1926462-1-deso@posteo.net Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Shaomin Deng authored
Remove the repeated word "by" in comments. Signed-off-by: Shaomin Deng <dengshaomin@cdjrlc.com> Link: https://lore.kernel.org/r/20221017142303.8299-1-dengshaomin@cdjrlc.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Gerhard Engleder authored
xdp2_kern rewrites and forwards packets out on the same interface. Forwarding still works, but the rewrite got broken when xdp multibuffer support was added. With xdp multibuffer, a local copy of the packet has been introduced. The MAC address is now swapped in the local copy, but the local copy is not written back. Fix MAC address swapping by adding a write-back of the modified packet. Fixes: 77225174 ("samples/bpf: fixup some tools to be able to support xdp multibuffer") Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Reviewed-by: Andy Gospodarek <gospo@broadcom.com> Link: https://lore.kernel.org/r/20221015213050.65222-1-gerhard@engleder-embedded.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
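The shape of the fix, as a hedged sketch (the real xdp2_kern sample parses further headers and keeps per-protocol counters; program and label names here are illustrative):

    /* Sketch: with XDP multi-buffer, headers are read into a local copy,
     * so any modification must be written back explicitly. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>

    #define ETH_ALEN 6

    SEC("xdp")
    int xdp_swap_mac(struct xdp_md *ctx)
    {
        struct ethhdr eth;
        unsigned char tmp[ETH_ALEN];

        /* read the Ethernet header into a local copy */
        if (bpf_xdp_load_bytes(ctx, 0, &eth, sizeof(eth)))
            return XDP_DROP;

        __builtin_memcpy(tmp, eth.h_source, ETH_ALEN);
        __builtin_memcpy(eth.h_source, eth.h_dest, ETH_ALEN);
        __builtin_memcpy(eth.h_dest, tmp, ETH_ALEN);

        /* the missing step: write the modified copy back to the packet */
        if (bpf_xdp_store_bytes(ctx, 0, &eth, sizeof(eth)))
            return XDP_DROP;

        return XDP_TX;
    }

    char _license[] SEC("license") = "GPL";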
-
Gerhard Engleder authored
BPF map iteration in xdp1_user results in an endless loop without any output, because the return value of bpf_map_get_next_key() is checked against the wrong value. Other call sites of bpf_map_get_next_key() check for a return value of 0 to continue the iteration; xdp1_user instead checks that the return value is not equal to -1. This is wrong for a function which can return arbitrary negative errno values, because a return value of e.g. -2 results in an endless loop.

With this fix xdp1_user is printing statistics again:

    proto 0:          1 pkt/s
    proto 0:          1 pkt/s
    proto 17:     107383 pkt/s
    proto 17:     881655 pkt/s
    proto 17:     882083 pkt/s
    proto 17:     881758 pkt/s

Fixes: bd054102 ("libbpf: enforce strict libbpf 1.0 behaviors") Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com> Acked-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20221013200922.17167-1-gerhard@engleder-embedded.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
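The corrected iteration pattern looks roughly like this (a sketch against the libbpf API; the per-CPU summing and names are illustrative, not xdp1_user's exact code):

    /* Sketch: treat only a return value of 0 from bpf_map_get_next_key()
     * as "there is a next key"; any non-zero (negative errno) ends it. */
    #include <stdio.h>
    #include <bpf/bpf.h>

    static void dump_counters(int map_fd, unsigned int nr_cpus)
    {
        __u32 key, next_key, *prev_key = NULL;
        __u64 values[nr_cpus];

        while (bpf_map_get_next_key(map_fd, prev_key, &next_key) == 0) {
            if (bpf_map_lookup_elem(map_fd, &next_key, values) == 0) {
                __u64 sum = 0;

                for (unsigned int i = 0; i < nr_cpus; i++)
                    sum += values[i];
                printf("proto %u: %llu pkt/s\n", next_key, sum);
            }
            key = next_key;
            prev_key = &key;
        }
    }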
-
Alexandru Tachici authored
No need to use more than one SPI transfer for reads. Use only one from now on, as the ADIN1110/2111 does not tolerate CS changes during reads. The BCM2711/2708 SPI controllers worked fine, but the NXP IMX8MM could not keep CS lowered during SPI bursts. This change aims to make the ADIN1110/2111 driver compatible with both SPI controllers, without any loss of bandwidth or other capabilities. Fixes: bc93e19d ("net: ethernet: adi: Add ADIN1110 support") Signed-off-by: Alexandru Tachici <alexandru.tachici@analog.com> Signed-off-by: David S. Miller <davem@davemloft.net>
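As a rough sketch of the single-transfer read shape only (header length, buffer handling and frame format here are made up, not the ADIN1110's actual register protocol; a real driver would use DMA-safe buffers from its private data):

    /* Sketch: send the command header and clock in the response within a
     * single struct spi_transfer, so CS never toggles mid-read. */
    #include <linux/errno.h>
    #include <linux/spi/spi.h>
    #include <linux/string.h>

    #define RD_HDR_LEN	2
    #define RD_MAX_LEN	64

    static int single_xfer_read(struct spi_device *spi, const u8 *hdr,
    			    u8 *data, unsigned int len)
    {
    	u8 tx[RD_HDR_LEN + RD_MAX_LEN] = {};
    	u8 rx[RD_HDR_LEN + RD_MAX_LEN];
    	struct spi_transfer xfer = {
    		.tx_buf = tx,
    		.rx_buf = rx,
    		.len = RD_HDR_LEN + len,
    	};
    	int ret;

    	if (len > RD_MAX_LEN)
    		return -EINVAL;

    	memcpy(tx, hdr, RD_HDR_LEN);	/* command, then dummy clocks */

    	/* one transfer: CS stays asserted for header + data */
    	ret = spi_sync_transfer(spi, &xfer, 1);
    	if (ret)
    		return ret;

    	memcpy(data, rx + RD_HDR_LEN, len);	/* payload follows echoed header */
    	return 0;
    }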
-