- 21 Aug, 2020 6 commits
-
-
Lorenz Bauer authored
Initializing psock->sk_proto and other saved callbacks is only done in sk_psock_update_proto, after sk_psock_init has returned. The logic for this is difficult to follow and needlessly complex. Instead, initialize psock->sk_proto whenever we allocate a new psock. Additionally, assert the following invariants: * The SK has no ULP: ULP does its own finagling of sk->sk_prot * sk_user_data is unused: we need it to store sk_psock Protect our access to sk_user_data with sk_callback_lock, which is what other users like reuseport arrays, etc. do. The result is that an sk_psock is always fully initialized, and that psock->sk_proto is always the "original" struct proto. The latter allows us to use psock->sk_proto when initializing IPv6 TCP / UDP callbacks for sockmap. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20200821102948.21918-2-lmb@cloudflare.com
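A minimal sketch of what the resulting allocation path looks like; the helper names (inet_csk_has_ulp(), rcu_assign_sk_user_data_nocopy()) and error codes are assumptions and may differ from the actual patch:

struct sk_psock *sk_psock_init(struct sock *sk, int node)
{
        struct sk_psock *psock;

        write_lock_bh(&sk->sk_callback_lock);

        /* invariant: ULP does its own management of sk->sk_prot */
        if (inet_csk_has_ulp(sk)) {
                psock = ERR_PTR(-EINVAL);
                goto out;
        }

        /* invariant: sk_user_data must be free to hold the psock pointer */
        if (sk->sk_user_data) {
                psock = ERR_PTR(-EBUSY);
                goto out;
        }

        psock = kzalloc_node(sizeof(*psock), GFP_ATOMIC | __GFP_NOWARN, node);
        if (!psock) {
                psock = ERR_PTR(-ENOMEM);
                goto out;
        }

        /* record the original proto at allocation time, not in a later step */
        psock->sk_proto = sk->sk_prot;
        rcu_assign_sk_user_data_nocopy(sk, psock);
        sock_hold(sk);

out:
        write_unlock_bh(&sk->sk_callback_lock);
        return psock;
}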
-
Andrii Nakryiko authored
Add a set of APIs to the perf_buffer manager to allow applications to integrate perf buffer polling into existing epoll-based infrastructure. One example is applications already using libevent that want to plug in perf_buffer polling, instead of relying on perf_buffer__poll() and wasting an extra thread to do it. But perf_buffer is still extremely useful for setting up and consuming perf buffer rings even for such use cases. So to accommodate such new use cases, add three new APIs: - perf_buffer__buffer_cnt() returns the number of per-CPU buffers maintained by a given instance of the perf_buffer manager; - perf_buffer__buffer_fd() returns the FD of the perf_event corresponding to a specified per-CPU buffer; this FD is then polled independently; - perf_buffer__consume_buffer() consumes data from a single per-CPU buffer, identified by its slot index. To support a simpler, but less efficient, way to integrate perf_buffer into external polling logic, also expose the underlying epoll FD through the perf_buffer__epoll_fd() API. It will need to be followed by perf_buffer__poll(), wasting an extra syscall, or perf_buffer__consume(), wasting CPU to iterate buffers with no data. But it could be simpler and more convenient for some cases. These APIs allow for great flexibility, but do not sacrifice general usability of perf_buffer. Also exercise and check the new APIs in the perf_buffer selftest. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Alan Maguire <alan.maguire@oracle.com> Link: https://lore.kernel.org/bpf/20200821165927.849538-1-andriin@fb.com
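A hedged sketch of how an application might wire the per-CPU buffer FDs into its own epoll loop; the perf_buffer__* calls are the new APIs above, while the function names here (attach_perfbuf_to_epoll, on_perfbuf_ready) and the assumption that a perf_buffer instance pb already exists are illustrative:

#include <sys/epoll.h>
#include <bpf/libbpf.h>

/* register every per-CPU perf buffer FD with an application-owned epoll set */
static int attach_perfbuf_to_epoll(struct perf_buffer *pb, int epoll_fd)
{
        size_t i, n = perf_buffer__buffer_cnt(pb);

        for (i = 0; i < n; i++) {
                struct epoll_event ev = {
                        .events = EPOLLIN,
                        .data.u64 = i, /* remember the buffer's slot index */
                };
                int fd = perf_buffer__buffer_fd(pb, i);

                if (fd < 0 || epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev) < 0)
                        return -1;
        }
        return 0;
}

/* in the event loop: drain only the per-CPU buffer that became readable */
static void on_perfbuf_ready(struct perf_buffer *pb, const struct epoll_event *ev)
{
        perf_buffer__consume_buffer(pb, (size_t)ev->data.u64);
}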
-
Alexei Starovoitov authored
Yonghong Song says: ==================== "link" has been an important concept for the bpf ecosystem to connect bpf programs with other properties. Currently, the related information can be queried from user space through the bpf commands BPF_LINK_GET_NEXT_ID, BPF_LINK_GET_FD_BY_ID and BPF_OBJ_GET_INFO_BY_FD. The information is also available by "cat"-ing /proc/<pid>/fdinfo/<link_fd>. Raw_tracepoint, tracing, cgroup, netns and xdp links are already supported in the kernel and bpftool. This patch set adds support for bpf iterators. Patch #1 added generic support for the link querying interface. Patch #2 implemented callback functions for map element bpf iterators. Patch #3 added bpftool support. Changelogs: v3 -> v4: . return target specific link_info even if target_name buffer is empty. (Andrii) v2 -> v3: . remove extra '\t' when fdinfo prints map_id to make parsing consistent. (Andrii) v1 -> v2: . fix checkpatch.pl warnings. (Jakub) ==================== Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
The link query for bpf iterators is implemented. Besides showing the user which bpf iterator the link represents, the target_name is also used to decide what additional information should be printed out, e.g., whether map_id should be shown or not. The following is an example of a bpf_iter link dump, in plain output and pretty output. $ bpftool link show 11: iter prog 59 target_name task pids test_progs(1749) 34: iter prog 173 target_name bpf_map_elem map_id 127 pids test_progs_1(1753) $ bpftool -p link show [{ "id": 11, "type": "iter", "prog_id": 59, "target_name": "task", "pids": [{ "pid": 1749, "comm": "test_progs" } ] },{ "id": 34, "type": "iter", "prog_id": 173, "target_name": "bpf_map_elem", "map_id": 127, "pids": [{ "pid": 1753, "comm": "test_progs_1" } ] } ] Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200821184420.574430-1-yhs@fb.com
-
Yonghong Song authored
For bpf_map_elem and bpf_sk_local_storage bpf iterators, additional map_id should be shown for fdinfo and userspace query. For example, the following is for a bpf_map_elem iterator. $ cat /proc/1753/fdinfo/9 pos: 0 flags: 02000000 mnt_id: 14 link_type: iter link_id: 34 prog_tag: 104be6d3fe45e6aa prog_id: 173 target_name: bpf_map_elem map_id: 127 Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821184419.574240-1-yhs@fb.com
-
Yonghong Song authored
This patch implements the bpf_link callback functions show_fdinfo and fill_link_info to support the link_query interface. The general interface for show_fdinfo and fill_link_info will print/fill the target_name. Each target can register show_fdinfo and fill_link_info callbacks to print/fill more target-specific information. For example, the below is a fdinfo result for a bpf task iterator. $ cat /proc/1749/fdinfo/7 pos: 0 flags: 02000000 mnt_id: 14 link_type: iter link_id: 11 prog_tag: 990e1f8152f7e54f prog_id: 59 target_name: task Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200821184418.574122-1-yhs@fb.com
-
- 20 Aug, 2020 10 commits
-
-
Andrii Nakryiko authored
Record which built-ins are optional and needed for some of the recent BPF CO-RE subtests. Document the Clang diff that fixed a corner-case issue with __builtin_btf_type_id(). Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-4-andriin@fb.com
-
Andrii Nakryiko authored
GCC 4.9 seems to be more strict in some regards. Fix two minor issues it reported. Fixes: 1c1052e0 ("tools/testing/selftests/bpf: Add self-tests for new helper bpf_get_ns_current_pid_tgid.") Fixes: 2d7824ff ("selftests: bpf: Add test for sk_assign") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-3-andriin@fb.com
-
Andrii Nakryiko authored
GCC compilers older than version 5 don't support __builtin_mul_overflow yet. Given that GCC 4.9 is the minimal supported compiler for building the kernel, and that libbpf is a dependency of resolve_btfids, which in turn is a dependency of CONFIG_DEBUG_INFO_BTF=y, this needs to be handled. This patch fixes the issue by falling back to slower detection of integer overflow in such cases. Fixes: 029258d7 ("libbpf: Remove any use of reallocarray() in libbpf") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-2-andriin@fb.com
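A sketch of the shape of the fix, assuming a libbpf-internal wrapper named libbpf_reallocarray() (the exact helper name and its placement in the libbpf headers are assumptions):

#include <stdlib.h>

#ifndef __has_builtin
#define __has_builtin(x) 0
#endif

/* multiply-with-overflow-check + realloc(); falls back to a manual
 * overflow check on compilers without __builtin_mul_overflow (GCC < 5) */
static inline void *libbpf_reallocarray(void *ptr, size_t nmemb, size_t size)
{
        size_t total;

#if __has_builtin(__builtin_mul_overflow)
        if (__builtin_mul_overflow(nmemb, size, &total))
                return NULL;
#else
        if (size && nmemb > (size_t)-1 / size)
                return NULL;
        total = nmemb * size;
#endif
        return realloc(ptr, total);
}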
-
Andrii Nakryiko authored
BPF_CALL | BPF_JMP32 is explicitly not allowed by verifier for BPF helper calls, so don't detect it as a valid call. Also drop the check on func_id pointer, as it's currently always non-null. Fixes: 109cea5a ("libbpf: Sanitize BPF program code for bpf_probe_read_{kernel, user}[_str]") Reported-by: Yonghong Song <yhs@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200820061411.1755905-1-andriin@fb.com
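A hedged sketch of the tightened detection; the function shape follows the commit description, but the exact set of conditions in libbpf may differ:

#include <stdbool.h>
#include <linux/bpf.h>

static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
{
        __u8 class = BPF_CLASS(insn->code);

        /* only BPF_JMP | BPF_CALL with src_reg == 0 is a helper call;
         * BPF_JMP32 | BPF_CALL is rejected by the verifier, and
         * src_reg == BPF_PSEUDO_CALL marks a call to a BPF subprogram
         */
        if (class == BPF_JMP && BPF_OP(insn->code) == BPF_CALL &&
            BPF_SRC(insn->code) == BPF_K &&
            insn->src_reg == 0 && insn->dst_reg == 0) {
                *func_id = insn->imm;
                return true;
        }
        return false;
}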
-
Daniel Borkmann authored
Alexei Starovoitov says: ==================== This patch set is the first real user of the user mode driver facility. The general use case for a user mode driver is to ship vmlinux with preloaded BPF programs. In this particular case the user mode driver populates a bpffs instance with two BPF iterators. In several months the BPF_LSM project will need to preload the kernel with its own set of BPF programs and attach to LSM hooks instead of bpffs. BPF iterators and BPF_LSM are unstable from a uapi perspective. They are tracing-based and peek into arbitrary kernel data structures. One can question why a kernel module cannot embed BPF programs inside. The reason is that libbpf is necessary to load them. First libbpf loads the BPF Type Format, then creates BPF maps and populates them. Then it relocates code sections inside BPF programs, loads BPF programs, and finally attaches them to events. Theoretically libbpf could be rewritten to work in the kernel, but that is a massive undertaking. Maintaining an in-kernel libbpf alongside the user space libbpf would be another challenge. Another obstacle to embedding BPF programs into a kernel module is the sys_bpf api. Loading of programs, BTF and maps goes through the verifier, which validates and optimizes the code. It's possible to provide an in-kernel api for all sys_bpf commands (load progs, create maps, update maps, load BTF, etc), but that is a huge amount of work and a permanent maintenance headache. Hence the decision is to ship vmlinux with user mode drivers that load BPF programs. Just like kernel modules extend vmlinux, BPF programs are safe extensions of the kernel, and some of them need to ship with vmlinux. This patch set adds a kernel module with a user mode driver that populates bpffs with two BPF iterators. $ mount bpffs /my/bpffs/ -t bpf $ ls -la /my/bpffs/ total 4 drwxrwxrwt 2 root root 0 Jul 2 00:27 . drwxr-xr-x 19 root root 4096 Jul 2 00:09 .. -rw------- 1 root root 0 Jul 2 00:27 maps.debug -rw------- 1 root root 0 Jul 2 00:27 progs.debug The user mode driver will load BPF Type Formats, create BPF maps, populate BPF maps, load two BPF programs, attach them to BPF iterators, and finally send two bpf_link IDs back to the kernel. The kernel will pin the two bpf_links into the newly mounted bpffs instance under the names "progs.debug" and "maps.debug". These two files become human-readable. $ cat /my/bpffs/progs.debug id name attached 11 dump_bpf_map bpf_iter_bpf_map 12 dump_bpf_prog bpf_iter_bpf_prog 27 test_pkt_access 32 test_main test_pkt_access test_pkt_access 33 test_subprog1 test_pkt_access_subprog1 test_pkt_access 34 test_subprog2 test_pkt_access_subprog2 test_pkt_access 35 test_subprog3 test_pkt_access_subprog3 test_pkt_access 36 new_get_skb_len get_skb_len test_pkt_access 37 new_get_skb_ifindex get_skb_ifindex test_pkt_access 38 new_get_constant get_constant test_pkt_access The BPF program dump_bpf_prog() in iterators.bpf.c prints this data about all BPF programs currently loaded in the system. This information is unstable and will change from kernel to kernel. In some sense this output is similar to 'bpftool prog show', which uses the stable api to retrieve information about BPF programs. The BPF subsystem grows quickly and there is always demand to show as much info about BPF things as possible. But we cannot expose all that info via the stable uapi of the bpf syscall, since the details change so much. Right now a BPF program can be attached to only one other BPF program. Folks are working on patches to enable multi-attach, but for debugging it's necessary to see the current state.
There is no uapi for that, but the above output shows it: 37 new_get_skb_ifindex get_skb_ifindex test_pkt_access 38 new_get_constant get_constant test_pkt_access [1] [2] [3] [1] is the full name of the BPF prog from BTF. [2] is the name of the function inside the target BPF prog. [3] is the name of the target BPF prog. [2] and [3] are not exposed via uapi, since they will change from single to multi soon. There are many other cases where bpf internals are useful for debugging, but shouldn't be exposed via uapi due to the high rate of change. systemd mounts /sys/fs/bpf at startup, so this kernel module with its user mode driver needs to be available early. BPF_LSM will most likely need to preload BPF programs even earlier. A few interesting observations: - though bpffs comes with two human-readable files "progs.debug" and "maps.debug", they can be removed. 'rm -f /sys/fs/bpf/progs.debug' will remove the bpf_link and the kernel will automatically unload the corresponding BPF progs, maps, BTFs. In the future '-o remount' will be able to restore them. This is not implemented yet. - 'ps aux|grep bpf_preload' shows nothing. The user mode driver loaded the BPF iterators and exited. Nothing is lingering in user space at this point. - We can consider giving 0644 permissions to "progs.debug" and "maps.debug" to allow unprivileged users to see the BPF things loaded in the system. We cannot do so with "bpftool prog show", since it uses cap_sys_admin parts of the bpf syscall. - The functionality split between core kernel, bpf_preload kernel module and user mode driver is very similar to the bpfilter style of interaction. - Similar BPF iterators can be used as unstable extensions to /proc. For example, mounting /proc could prepopulate some subdirectory in there with a BPF iterator that prints QUIC sockets instead of tcp and udp. Changelog: v5->v6: - refactored Makefiles with Andrii's help - switched to explicit $(MAKE) style - switched to userldlibs instead of userldflags - fixed build issue with libbpf Makefile due to invocation from kbuild - fixed menuconfig order as spotted by Daniel - introduced CONFIG_USERMODE_DRIVER bool that is selected by bpfilter and bpf_preload v4->v5: - addressed Song and Andrii feedback. s/pages/max_entries/ v3->v4: - took THIS_MODULE in patch 3 as suggested by Daniel to simplify the code. - converted BPF iterator to use BTF (when available) to print the full BPF program name instead of the 16-byte truncated version. This is something I've been using drgn scripts for. Take a look at get_name() in iterators.bpf.c to see how short it is compared to what user space bpftool would have to do to print the same full name: . get prog info via obj_info_by_fd . do get_fd_by_id from info->btf_id . fetch potentially large BTF of the program from the kernel . parse that BTF in user space to figure out all type boundaries and string section . read info->func_info to get btf_id of func_proto from there . find that btf_id in the parsed BTF That's quite a bit of work for bpftool compared to a few lines in get_name(). It would probably be good to make bpftool do this info extraction anyway. While doing this BTF reading in the kernel I realized that the verifier is not smart enough to follow double pointers (added to my todo list), otherwise get_name() would have been even shorter. v2->v3: - fixed module unload race (Daniel) - added selftest (Daniel) - fixed build bot warning v1->v2: - changed names to 'progs.debug' and 'maps.debug' to hopefully better indicate instability of the text output.
Having a dot in the name also guarantees that these special files will not conflict with normal bpf objects pinned in bpffs, since dots are disallowed for normal pins. - instead of hard coding link_name in the core bpf, moved it into the UMD. - cleaned up error handling. - addressed review comments from Yonghong and Andrii. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
Add a test that mounts two bpffs instances and checks progs.debug and maps.debug for sanity data. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-5-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Add a kernel module with a user mode driver that populates bpffs with BPF iterators. $ mount bpffs /my/bpffs/ -t bpf $ ls -la /my/bpffs/ total 4 drwxrwxrwt 2 root root 0 Jul 2 00:27 . drwxr-xr-x 19 root root 4096 Jul 2 00:09 .. -rw------- 1 root root 0 Jul 2 00:27 maps.debug -rw------- 1 root root 0 Jul 2 00:27 progs.debug The user mode driver will load BPF Type Formats, create BPF maps, populate BPF maps, load two BPF programs, attach them to BPF iterators, and finally send two bpf_link IDs back to the kernel. The kernel will pin the two bpf_links into the newly mounted bpffs instance under the names "progs.debug" and "maps.debug". These two files become human-readable. $ cat /my/bpffs/progs.debug id name attached 11 dump_bpf_map bpf_iter_bpf_map 12 dump_bpf_prog bpf_iter_bpf_prog 27 test_pkt_access 32 test_main test_pkt_access test_pkt_access 33 test_subprog1 test_pkt_access_subprog1 test_pkt_access 34 test_subprog2 test_pkt_access_subprog2 test_pkt_access 35 test_subprog3 test_pkt_access_subprog3 test_pkt_access 36 new_get_skb_len get_skb_len test_pkt_access 37 new_get_skb_ifindex get_skb_ifindex test_pkt_access 38 new_get_constant get_constant test_pkt_access The BPF program dump_bpf_prog() in iterators.bpf.c prints this data about all BPF programs currently loaded in the system. This information is unstable and will change from kernel to kernel, as the ".debug" suffix conveys. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200819042759.51280-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The program and map iterators work similarly to seq_files. Once the program is pinned in bpffs it can be read with the "cat" tool to print human-readable output, in this case about BPF programs and maps. For example: $ cat /sys/fs/bpf/progs.debug id name attached 5 dump_bpf_map bpf_iter_bpf_map 6 dump_bpf_prog bpf_iter_bpf_prog $ cat /sys/fs/bpf/maps.debug id name max_entries 3 iterator.rodata 1 To avoid a kernel build dependency on clang 10, separate the bpf skeleton generation into a manual "make" step and instead check the generated .skel.h into git. Unlike 'bpftool prog show', the in-kernel BTF name is used (when available) to print the full name of a BPF program instead of the 16-byte truncated name. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200819042759.51280-3-alexei.starovoitov@gmail.com
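The BPF side of such an iterator is short; a hedged sketch of a dump_bpf_map-style program (the BPF_SEQ_PRINTF macro and the context field names are assumed to be available from the libbpf/selftests headers, and the exact column format here is illustrative):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("iter/bpf_map")
int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
{
        struct seq_file *seq = ctx->meta->seq;
        struct bpf_map *map = ctx->map;

        if (!map)
                return 0;

        /* print the header exactly once, before the first element */
        if (ctx->meta->seq_num == 0)
                BPF_SEQ_PRINTF(seq, "  id name             max_entries\n");

        BPF_SEQ_PRINTF(seq, "%4u %-16s %10d\n",
                       map->id, map->name, map->max_entries);
        return 0;
}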
-
Alexei Starovoitov authored
Refactor the code a bit to extract a bpf_link_by_id() helper. It's similar to the existing bpf_prog_by_id(). Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200819042759.51280-2-alexei.starovoitov@gmail.com
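A hedged sketch of the extracted helper, mirroring the bpf_prog_by_id() pattern; the idr and lock names (link_idr, link_idr_lock) are assumptions:

struct bpf_link *bpf_link_by_id(u32 id)
{
        struct bpf_link *link;

        if (!id)
                return ERR_PTR(-ENOENT);

        spin_lock_bh(&link_idr_lock);
        link = idr_find(&link_idr, id);
        if (!link) {
                link = ERR_PTR(-ENOENT);
        } else if (link->id) {
                /* take a reference only if the link is fully initialized */
                link = bpf_link_inc_not_zero(link);
        } else {
                /* the id is allocated but not yet published to user space */
                link = ERR_PTR(-EAGAIN);
        }
        spin_unlock_bh(&link_idr_lock);
        return link;
}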
-
Xu Wang authored
Simplify the return expression. Signed-off-by: Xu Wang <vulab@iscas.ac.cn> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819025324.14680-1-vulab@iscas.ac.cn
-
- 19 Aug, 2020 24 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== This patch set adds libbpf support for two new classes of CO-RE relocations: type-based (TYPE_EXISTS/TYPE_SIZE/TYPE_ID_LOCAL/TYPE_ID_TARGET) and enum value-based (ENUMVAL_EXISTS/ENUMVAL_VALUE): - TYPE_EXISTS allows detecting the presence in kernel BTF of a locally-recorded BTF type. Useful for feature detection (new functionality often comes with new internal kernel types), as well as handling type renames and bigger refactorings. - TYPE_SIZE allows getting the real size (in bytes) of a specified kernel type. Useful for dumping internal structures as-is through perfbuf or ringbuf. - TYPE_ID_LOCAL/TYPE_ID_TARGET allow capturing the BTF type ID of a BTF type in the program's BTF or kernel BTF, respectively. These could be used for high-performance and space-efficient generic data dumping/logging by relying on the small and cheap BTF type ID as a data layout descriptor, for post-processing on the user-space side. - ENUMVAL_EXISTS can be used for detecting the presence of an enumerator value in the kernel's enum type. The most direct application is to detect BPF helper support in the kernel. - ENUMVAL_VALUE allows relocating the real integer value of a kernel enumerator value, which is subject to change (e.g., always a potential issue for internal, non-UAPI, kernel enums). I've indicated potential applications for these relocations, but the relocations themselves are generic and unassuming and are designed to work correctly even in unintended applications. Furthermore, relocated values become constants, known to the verifier, and can be used for dead branch code detection and elimination. This makes them ideal for all sorts of feature detection and for guarding functionality that's not available on some older (but still supported by BPF programs) kernels, while having to compile and maintain one unified source code. Selftests are added for all the new features. Selftests utilizing the new Clang built-ins are designed such that they will compile with older Clangs and will be skipped during test runs. So this shouldn't cause any build and test failures on systems with slightly outdated Clang compilers. LLVM patches adding these relocations in Clang: - __builtin_btf_type_id() ([0], [1], [2]); - __builtin_preserve_type_info(), __builtin_preserve_enum_value() ([3], [4]). [0] https://reviews.llvm.org/D74572 [1] https://reviews.llvm.org/D74668 [2] https://reviews.llvm.org/D85174 [3] https://reviews.llvm.org/D83878 [4] https://reviews.llvm.org/D83242 v2->v3: - fix feature detection for __builtin_btf_type_id() test (Yonghong); - fix extra empty lines at the end of files (Yonghong); v1->v2: - selftests detect built-in support and are skipped if not found (Alexei). ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add tests validating existence and value relocations for enum value-based relocations. If __builtin_preserve_enum_value() built-in is not supported, skip tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-6-andriin@fb.com
-
Andrii Nakryiko authored
Implement two relocations of a new enumerator value-based CO-RE relocation kind: ENUMVAL_EXISTS and ENUMVAL_VALUE. The first, ENUMVAL_EXISTS, allows detecting the presence of a named enumerator value in the target (kernel) BTF. This is useful for doing BPF helper/map/program type support detection from the BPF program side. The bpf_core_enum_value_exists() macro helper is provided to simplify built-in usage. The second, ENUMVAL_VALUE, allows capturing the enumerator's integer value and relocating it according to the target BTF, if it changes. This is useful as a guarantee against intentional or accidental re-ordering/re-numbering of some of the internal (non-UAPI) enumerations, where kernel developers don't care about UAPI backwards compatibility concerns. bpf_core_enum_value() allows capturing this succinctly and using correct enum values in code. LLVM uses the ldimm64 instruction to capture enumerator value-based relocations, so add support for ldimm64 instruction patching as well. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-5-andriin@fb.com
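A hedged BPF-side usage sketch of the two macros; the particular kernel enums chosen here (enum bpf_func_id for the existence check, the internal enum cgroup_subsys_id for the value relocation) and the global variable names are illustrative assumptions:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

int have_ringbuf_helper;
long memory_cgrp_idx;

SEC("raw_tp/sys_enter")
int probe_enum_values(void *ctx)
{
        /* ENUMVAL_EXISTS: does this kernel define the ringbuf output helper? */
        have_ringbuf_helper = bpf_core_enum_value_exists(enum bpf_func_id,
                                                         BPF_FUNC_ringbuf_output);

        /* ENUMVAL_VALUE: relocate the actual value of an internal enumerator */
        memory_cgrp_idx = bpf_core_enum_value(enum cgroup_subsys_id,
                                              memory_cgrp_id);
        return 0;
}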
-
Andrii Nakryiko authored
Add tests for BTF type ID relocations. To allow testing this, enhance core_relo.c test runner to allow dynamic initialization of test inputs. If Clang doesn't have necessary support for new functionality, test is skipped. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-4-andriin@fb.com
-
Andrii Nakryiko authored
Add selftests for TYPE_EXISTS and TYPE_SIZE relocations, testing correctness of relocations and handling of type compatibility/incompatibility. If __builtin_preserve_type_info() is not supported by the compiler, skip the tests. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-3-andriin@fb.com
-
Andrii Nakryiko authored
Implement support for TYPE_EXISTS/TYPE_SIZE/TYPE_ID_LOCAL/TYPE_ID_TARGET relocations. These are examples of type-based relocations, as opposed to the field-based relocations already supported. The difference is that they calculate relocation values based on the type itself, not on a field within a struct/union. Type-based relos have slightly different semantics when matching local types to kernel target types; see the comments in bpf_core_types_are_compat() for details. Their behavior on failure to find the target type in kernel BTF also differs. Instead of "poisoning" the relocatable instruction and subsequently failing the load in the kernel, they return 0 (which is rarely a valid return result, so user BPF code can use that to detect success/failure of the relocation and deal with it without extra "guarding" relocations). Also, it's always possible to check the existence of a type in the target kernel with a TYPE_EXISTS relocation, similarly to the field-based FIELD_EXISTS. The TYPE_ID_LOCAL relocation is a bit special in that it always succeeds (barring any libbpf/Clang bugs) and resolves to a BTF ID using the **local** BTF info of the BPF program itself. Tests in subsequent patches demonstrate the usage and semantics of the new relocations. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200819194519.3375898-2-andriin@fb.com
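A hedged sketch of the corresponding BPF-side macros from bpf_core_read.h (bpf_core_type_exists(), bpf_core_type_size(), bpf_core_type_id_local(), bpf_core_type_id_kernel()); struct bpf_ringbuf is just an example of an internal kernel type and the global names are assumptions:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

int have_ringbuf;
long task_struct_size;
int local_type_id, kernel_type_id;

SEC("raw_tp/sys_enter")
int probe_types(void *ctx)
{
        /* TYPE_EXISTS: 1 if the running kernel's BTF has this type, else 0 */
        have_ringbuf = bpf_core_type_exists(struct bpf_ringbuf);

        /* TYPE_SIZE: real size of the type in the running kernel */
        task_struct_size = bpf_core_type_size(struct task_struct);

        /* TYPE_ID_LOCAL vs TYPE_ID_TARGET: type ID in program BTF vs kernel BTF */
        local_type_id = bpf_core_type_id_local(struct task_struct);
        kernel_type_id = bpf_core_type_id_kernel(struct task_struct);
        return 0;
}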
-
Maciej Żenczykowski authored
This reduces the likelihood of incorrect use. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819020027.4072288-1-zenczykowski@gmail.com
-
Maciej Żenczykowski authored
This provides a minor performance boost by virtue of inlining instead of cross-module function calls. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819010710.3959310-2-zenczykowski@gmail.com
-
Maciej Żenczykowski authored
This reduces the likelihood of incorrect use. Test: builds Signed-off-by: Maciej Żenczykowski <maze@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819010710.3959310-1-zenczykowski@gmail.com
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Get rid of two feature detectors: reallocarray and libelf-mmap. Optional feature detections complicate the libbpf Makefile and cause more trouble for various applications that want to integrate libbpf as part of their build. Patch #1 replaces all reallocarray() uses with a libbpf-internal reallocarray() implementation. Patches #2 and #3 make sure we won't re-introduce reallocarray() accidentally. Patch #2 also removes the last use of the libbpf_internal.h header inside bpftool. There is still nlattr.h that's used by both libbpf and bpftool, but that's left for a follow-up patch to split. Patch #4 removes the libelf-mmap feature detector and all its uses, as it's trivial to handle missing mmap support in libbpf, the way objtool has been doing it for a while. v1->v2 and v2->v3: - rebase to latest bpf-next (Alexei). ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
It's trivial to handle missing ELF_C_READ_MMAP support in libelf the way that objtool has solved it in ("774bec3f objtool: Add fallback from ELF_C_READ_MMAP to ELF_C_READ"). So instead of having an entire feature detector for that, just do the same for perf and libbpf, and keep their Makefiles a bit simpler. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-5-andriin@fb.com
-
Andrii Nakryiko authored
Most libbpf source files already include libbpf_internal.h, so it's a good place to centralize identifier poisoning. Move the kernel integer type poisoning there, and also add reallocarray to the poison list to prevent accidental use of it. libbpf_reallocarray() should be used universally instead. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-4-andriin@fb.com
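The poisoning itself is just compiler pragmas in the shared internal header; a short sketch of what the centralized block looks like (exact placement and comments are assumptions):

/* make accidental use of these identifiers a build error anywhere
 * libbpf_internal.h is included */

/* prevent re-import of kernel-only integer typedefs */
#pragma GCC poison u8 u16 u32 u64 s8 s16 s32 s64

/* libbpf_reallocarray() must be used instead of reallocarray() */
#pragma GCC poison reallocarray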
-
Andrii Nakryiko authored
Most netlink-related functions were unique to bpftool usage, so I moved them into net.c. A few functions are still used by both bpftool and libbpf itself internally, so I've copy-pasted them (libbpf_nl_get_link, libbpf_netlink_open). It's a bit of code duplication, but it gives a cleaner separation between libbpf as a library with a public API and bpftool, which previously relied on unexposed libbpf functions. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-3-andriin@fb.com
-
Andrii Nakryiko authored
Re-implement glibc's reallocarray() for libbpf internal-only use. reallocarray(), unfortunately, is not available in all versions of glibc, so it requires extra feature detection and use of the reallocarray() stub from <tools/libc_compat.h> and COMPAT_NEED_REALLOCARRAY. All this complicates the build of libbpf unnecessarily and is just a maintenance burden. Instead, it's trivial to implement a libbpf-specific internal version and use it throughout libbpf, which is what this patch does, along with converting some realloc() uses that should really have been reallocarray() in the first place. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200819013607.3607269-2-andriin@fb.com
-
Andrii Nakryiko authored
Add test simulating ambiguous field size relocation, while fields themselves are at the exact same offset. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-5-andriin@fb.com
-
Andrii Nakryiko authored
Split the instruction patching logic into relocation value calculation and application of relocation to instruction. Using this, evaluate relocation against each matching candidate and validate that all candidates agree on relocated value. If not, report ambiguity and fail load. This logic is necessary to avoid dangerous (however unlikely) accidental match against two incompatible candidate types. Without this change, libbpf will pick a random type as *the* candidate and apply potentially invalid relocation. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-4-andriin@fb.com
-
Andrii Nakryiko authored
Add logging of the local/target type kind (struct/union/typedef/etc). Preserve the unresolved root type ID (for cases of typedef). Improve the CO-RE reloc spec output format to contain only relevant and succinct info. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-3-andriin@fb.com
-
Andrii Nakryiko authored
Instead of printing out integer value of BTF kind, print out a string representation of a kind. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818223921.2911963-2-andriin@fb.com
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== This patch set refactors libbpf feature probing to be done lazily on an as-needed basis, instead of proactively testing all possible features libbpf knows about. This allows such detections and mitigations to scale better, without issuing unnecessary syscalls on each bpf_object__load() call. The result is also now memoized globally, instead of per-bpf_object. Building on that, libbpf will now detect availability of the bpf_probe_read_kernel() helper (which implies the -user and -str variants as well), and will sanitize BPF program code by replacing such references with the generic variants (bpf_probe_read[_str]()). This allows migrating all BPF programs to the proper -kernel/-user probing helpers, without the fear of breaking them on old kernels. With that, update BPF_CORE_READ() and related macros to use bpf_probe_read_kernel(), as it doesn't make much sense to do CO-RE relocations against user-space types. And the only class of cases in which a BPF program might read a kernel type from user space is UAPI data structures, which by definition are fixed in their memory layout and don't need relocating. This is exemplified by the test_vmlinux test, which is fixed as part of this patch set as well. BPF_CORE_READ() is useful for chaining bpf_probe_read_{kernel,user}() calls together even without relocation, so we might add user-space variants, if there is a need. While making libbpf more useful for older kernels, also improve the handling of a complete lack of BTF support in the kernel by not even attempting to load BTF info into the kernel. This eliminates the annoying warning about lack of BTF support in the kernel and the map creation retry without BTF. If the user is using features that require kernel BTF support, it will still fail, of course. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Detect whether a kernel supports any BTF at all, and if not, don't even attempt loading BTF to avoid unnecessary log messages like: libbpf: Error loading BTF: Invalid argument(22) libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-8-andriin@fb.com
-
Andrii Nakryiko authored
Now that libbpf can automatically fallback to bpf_probe_read() on old kernels not yet supporting bpf_probe_read_kernel(), switch libbpf BPF-side helper macros to use appropriate BPF helper for reading kernel data. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Cc: Ilya Leoshkevich <iii@linux.ibm.com> Link: https://lore.kernel.org/bpf/20200818213356.2629020-7-andriin@fb.com
-
Andrii Nakryiko authored
The test is reading UAPI kernel structure from user-space. So it doesn't need CO-RE relocations and has to use bpf_probe_read_user(). Fixes: acbd0620 ("selftests/bpf: Add vmlinux.h selftest exercising tracing of syscalls") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-6-andriin@fb.com
-
Andrii Nakryiko authored
Add BPF program code sanitization pass, replacing calls to BPF bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if libbpf detects that kernel doesn't support new variants. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-5-andriin@fb.com
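A hedged sketch of what such a rewrite pass over a program's instructions can look like; the helper/feature names (kernel_supports(), FEAT_PROBE_READ_KERN, insn_is_helper_call()) follow libbpf's internal conventions but are assumptions here:

static void sanitize_probe_read_calls(struct bpf_insn *insns, size_t insn_cnt)
{
        size_t i;

        if (kernel_supports(FEAT_PROBE_READ_KERN))
                return; /* nothing to rewrite on newer kernels */

        for (i = 0; i < insn_cnt; i++) {
                struct bpf_insn *insn = &insns[i];
                enum bpf_func_id func_id;

                if (!insn_is_helper_call(insn, &func_id))
                        continue;

                /* fall back to the old generic probing helpers */
                switch (func_id) {
                case BPF_FUNC_probe_read_kernel:
                case BPF_FUNC_probe_read_user:
                        insn->imm = BPF_FUNC_probe_read;
                        break;
                case BPF_FUNC_probe_read_kernel_str:
                case BPF_FUNC_probe_read_user_str:
                        insn->imm = BPF_FUNC_probe_read_str;
                        break;
                default:
                        break;
                }
        }
}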
-
Andrii Nakryiko authored
Factor out a common piece of logic that detects support for a feature based on a successfully created FD. Also take care of closing the FD, if it was created. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200818213356.2629020-4-andriin@fb.com
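The factored-out helper is tiny; a sketch (the name probe_fd and its exact return convention are assumptions):

#include <unistd.h>

/* a feature probe succeeded iff the kernel handed back a valid FD */
static int probe_fd(int fd)
{
        if (fd >= 0)
                close(fd);
        return fd >= 0;
}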
-