- 10 Nov, 2022 1 commit
Eduard Zingerman authored
An update for libbpf's hashmap interface from void*->void* to a polymorphic one, allowing both long and void* keys and values. This simplifies many use cases in libbpf, as hashmaps there are mostly integer to integer. Perf copies the hashmap implementation from libbpf and has to be updated as well. Changes to libbpf, selftests/bpf and perf are packed as a single commit to avoid compilation issues with any future bisect.

The polymorphic interface is achieved by hiding the hashmap interface functions behind auxiliary macros that take care of the necessary type casts, for example:

    #define hashmap_cast_ptr(p) \
        ({ \
            _Static_assert((p) == NULL || sizeof(*(p)) == sizeof(long), \
                           #p " pointee should be a long-sized integer or a pointer"); \
            (long *)(p); \
        })

    bool hashmap_find(const struct hashmap *map, long key, long *value);

    #define hashmap__find(map, key, value) \
        hashmap_find((map), (long)(key), hashmap_cast_ptr(value))

- the hashmap__find macro casts the key and value parameters to long and long *, respectively;
- hashmap_cast_ptr ensures that the value pointer points to memory of an appropriate size.

This hack was suggested by Andrii Nakryiko in [1]. This is a follow-up for [2].

[1] https://lore.kernel.org/bpf/CAEf4BzZ8KFneEJxFAaNCCFPGqp20hSpS2aCj76uRk3-qZUH5xg@mail.gmail.com/
[2] https://lore.kernel.org/bpf/af1facf9-7bc8-8a3d-0db4-7b3f333589a2@meta.com/T/#m65b28f1d6d969fcd318b556db6a3ad499a42607d

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20221109142611.879983-2-eddyz87@gmail.com
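
As a quick illustration (not part of the commit), integer-keyed usage after this change could look like the following sketch; the identity hash/equality callbacks are hypothetical helpers:

    #include <stdio.h>
    #include "hashmap.h" /* libbpf's internal hashmap, post-change */

    static size_t id_hash(long key, void *ctx)  { return key; }
    static bool id_equal(long k1, long k2, void *ctx) { return k1 == k2; }

    void example(void)
    {
        struct hashmap *m = hashmap__new(id_hash, id_equal, NULL);
        long val;

        /* keys and values are plain longs, no void * casting required */
        hashmap__set(m, 42, 4242, NULL, NULL);
        if (hashmap__find(m, 42, &val))
            printf("42 -> %ld\n", val);
        hashmap__free(m);
    }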
-
- 30 Sep, 2022 1 commit
Tianyi Liu authored
strerror() expects a positive errno, but the variable err is never positive when an error occurs. This causes bpftool to output too many "unknown error" messages; even a simple "file does not exist" error cannot get an accurate message. This patch fixes all "strerror(err)" patterns in bpftool. Notably, in btf.c#L823, hashmap__append() is an internal function of libbpf and does not change errno, so there is a slight difference there. Some libbpf_get_error() calls are kept for return values.

Changes since v1: https://lore.kernel.org/bpf/SY4P282MB1084B61CD8671DFA395AA8579D539@SY4P282MB1084.AUSP282.PROD.OUTLOOK.COM/
 - Check directly for NULL values instead of calling libbpf_get_error().

Signed-off-by: Tianyi Liu <i.pear@outlook.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/SY4P282MB1084AD9CD84A920F08DF83E29D549@SY4P282MB1084.AUSP282.PROD.OUTLOOK.COM
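
A minimal sketch of the pattern being fixed (p_err() is bpftool's error printer; libbpf-style return values are negative errno codes):

    int err = bpf_map_update_elem(fd, &key, &value, 0); /* -errno on failure */
    if (err) {
        /* before: strerror(err) with a negative err prints "Unknown error" */
        /* after: negate, so strerror() receives a positive errno value */
        p_err("can't update map: %s", strerror(-err));
    }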
-
- 29 Jul, 2022 1 commit
Jörn-Thorben Hinz authored
A skeleton generated by bpftool previously contained a return statement followed by an expression in OBJ_NAME__detach(), which has return type void. This did no harm, since the bpf_object__detach_skeleton() called there returns void itself, but it led to a warning when compiling with e.g. -pedantic.

Signed-off-by: Jörn-Thorben Hinz <jthinz@mailbox.tu-berlin.de>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220726133203.514087-1-jthinz@mailbox.tu-berlin.de
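
The pattern in question, sketched with a hypothetical skeleton name:

    static void my_skel__detach(struct my_skel *obj)
    {
        /* before: `return bpf_object__detach_skeleton(obj->skeleton);`
         * returning a (void) expression from a void function trips -pedantic
         */
        bpf_object__detach_skeleton(obj->skeleton);
    }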
-
- 08 Jul, 2022 1 commit
Daniel Müller authored
This change adjusts bpftool's type marking logic, as used in conjunction with TYPE_EXISTS relocations, to correctly recognize and handle the RESTRICT BTF kind.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220623212205.2805002-1-deso@posteo.net/T/#m4c75205145701762a4b398e0cdb911d5b5305ffc
Link: https://lore.kernel.org/bpf/20220706212855.1700615-2-deso@posteo.net
-
- 06 Jul, 2022 1 commit
Daniel Müller authored
bpftool needs to know about the newly introduced BPF_CORE_TYPE_MATCHES relocation for its 'gen min_core_btf' command to work properly in the presence of this relocation. Specifically, we need to make sure to mark types and fields so that they are present in the minimized BTF for "type match" checks to work out. However, contrary to the existing btfgen_record_field_relo, we need to rely on the BTF -- and not the spec -- to find fields. With this change we handle this new variant correctly. The functionality will be tested with follow-on changes to the BPF selftests, which already run against a minimized BTF created with bpftool.

Signed-off-by: Daniel Müller <deso@posteo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20220628160127.607834-3-deso@posteo.net
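
For reference, a BPF-side sketch of the kind of check that relies on this relocation (type and field names are hypothetical; bpf_core_type_matches() comes from bpf_core_read.h):

    #include <bpf/bpf_core_read.h>

    /* local "candidate" definition; the ___suffix is ignored during matching */
    struct task_struct___with_field {
        int some_field;
    } __attribute__((preserve_access_index));

    static int detect(void)
    {
        /* emits a BPF_CORE_TYPE_MATCHES relocation; min_core_btf must keep
         * enough member info in the stripped BTF for this to be resolved */
        return bpf_core_type_matches(struct task_struct___with_field);
    }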
-
- 07 Jun, 2022 1 commit
Yonghong Song authored
Add BTF_KIND_ENUM64 support. For example, the following enum is defined in uapi bpf.h:

    $ cat core.c
    enum A {
        BPF_F_INDEX_MASK  = 0xffffffffULL,
        BPF_F_CURRENT_CPU = BPF_F_INDEX_MASK,
        BPF_F_CTXLEN_MASK = (0xfffffULL << 32),
    } g;

Compiled with `clang -target bpf -O2 -g -c core.c`.

Using bpftool to dump types and to generate the C-format file:

    $ bpftool btf dump file core.o
    ...
    [1] ENUM64 'A' encoding=UNSIGNED size=8 vlen=3
        'BPF_F_INDEX_MASK' val=4294967295ULL
        'BPF_F_CURRENT_CPU' val=4294967295ULL
        'BPF_F_CTXLEN_MASK' val=4503595332403200ULL

    $ bpftool btf dump file core.o format c
    ...
    enum A {
        BPF_F_INDEX_MASK = 4294967295ULL,
        BPF_F_CURRENT_CPU = 4294967295ULL,
        BPF_F_CTXLEN_MASK = 4503595332403200ULL,
    };
    ...

Note that for raw BTF output, the encoding (UNSIGNED or SIGNED) is printed as well. The 64-bit values are also represented properly in the BTF and C dumps.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/r/20220607062652.3722649-1-yhs@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 02 Jun, 2022 1 commit
Michael Mullin authored
bpf_object__btf() can return a NULL value. If bpf_object__btf returns NULL, do not progress through codegen_asserts(). This avoids a NULL pointer dereference at the btf__type_cnt() call in the function find_type_for_map().

Signed-off-by: Michael Mullin <masmullin@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220523194917.igkgorco42537arb@jup
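
A minimal sketch of the guard (function names from the commit message, surrounding code assumed):

    struct btf *btf = bpf_object__btf(obj);

    if (!btf)
        return; /* no BTF: nothing to assert against, avoid the NULL deref */
    /* only now is it safe to reach btf__type_cnt(btf) in find_type_for_map() */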
-
- 10 May, 2022 2 commits
KP Singh authored
bpf_link_get_from_fd currently returns a NULL fd for LSM programs. LSM programs are similar to tracing programs and can also use skel_raw_tracepoint_open.

Signed-off-by: KP Singh <kpsingh@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220509214905.3754984-1-kpsingh@kernel.org
-
Jason Wang authored
Most code generators declare their name; do the same for bpftool.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220509090247.5457-1-jasowang@redhat.com
-
- 30 Mar, 2022 1 commit
Delyan Kratunov authored
Andrii noticed that since f97b8b9b ("bpftool: Fix a bug in subskeleton code generation") the subskeleton code allows bpf_object__destroy_subskeleton to overwrite the errno that subskeleton__open would return with. While this is not currently an issue, let's make it future-proof.

This patch explicitly tracks err in subskeleton__open and skeleton__create (i.e., calloc failure is explicitly ENOMEM) and ensures that errno is -err on the error return path. The skeleton code had to be changed since the maps and progs codegen is shared with subskeletons.

Fixes: f97b8b9b ("bpftool: Fix a bug in subskeleton code generation")
Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/3b6bfbb770c79ae64d8de26c1c1bd9d53a4b85f8.camel@fb.com
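
The shape of the hardened error path, as a sketch with hypothetical names:

    #include <errno.h>
    #include <stdlib.h>

    struct my_subskel { int dummy; };
    void my_subskel__destroy(struct my_subskel *obj);

    struct my_subskel *my_subskel__open(void)
    {
        struct my_subskel *obj;
        int err;

        obj = calloc(1, sizeof(*obj));
        if (!obj) {
            err = -ENOMEM;  /* track err explicitly, don't rely on errno */
            goto err_out;
        }
        /* ... further fallible setup, each failure setting err = -E... ... */
        return obj;

    err_out:
        my_subskel__destroy(obj); /* free()/close() inside may clobber errno */
        errno = -err;             /* re-establish the intended error last */
        return NULL;
    }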
-
- 29 Mar, 2022 1 commit
Jiri Olsa authored
Arnaldo reported a perf compilation failure with:

    $ make -k BUILD_BPF_SKEL=1 CORESIGHT=1 PYTHON=python3
    ...
    In file included from util/bpf_counter.c:28:
    /tmp/build/perf//util/bpf_skel/bperf_leader.skel.h: In function ‘bperf_leader_bpf__assert’:
    /tmp/build/perf//util/bpf_skel/bperf_leader.skel.h:351:51: error: unused parameter ‘s’ [-Werror=unused-parameter]
      351 | bperf_leader_bpf__assert(struct bperf_leader_bpf *s)
          |                          ~~~~~~~~~~~~~~~~~~~~~~~~~^
    cc1: all warnings being treated as errors

If there is nothing to generate in the new assert function, we get an unused 's' warning/error, so add the 'unused' attribute to it.

Fixes: 08d4dba6 ("bpftool: Bpf skeletons assert type sizes")
Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/bpf/20220328083703.2880079-1-jolsa@kernel.org
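
The fix, in sketch form:

    /* when no asserts are generated, the parameter would otherwise be unused */
    static inline void
    bperf_leader_bpf__assert(struct bperf_leader_bpf *s __attribute__((unused)))
    {
    }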
-
- 21 Mar, 2022 1 commit
Yonghong Song authored
When compiling with clang, by adding LLVM=1 to both the kernel and selftests/bpf builds, I hit the following compilation error:

    In file included from /.../tools/testing/selftests/bpf/prog_tests/subskeleton.c:6:
    ./test_subskeleton_lib.subskel.h:168:6: error: variable 'err' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
            if (!s->progs)
                ^~~~~~~~~
    ./test_subskeleton_lib.subskel.h:181:11: note: uninitialized use occurs here
            errno = -err;
                     ^~~
    ./test_subskeleton_lib.subskel.h:168:2: note: remove the 'if' if its condition is always false
            if (!s->progs)
            ^~~~~~~~~~~~~~

The compilation error is triggered by the following code in test_subskeleton_lib__open():

    ...
    int err;

    obj = (struct test_subskeleton_lib *)calloc(1, sizeof(*obj));
    if (!obj) {
        errno = ENOMEM;
        goto err;
    }
    ...
    err:
        test_subskeleton_lib__destroy(obj);
        errno = -err;
    ...

The variable err is not initialized, yet it is used in 'errno = -err' later on. The fix is to remove 'errno = -err', since errno has already been set properly in all incoming branches.

Fixes: 00389c58 ("bpftool: Add support for subskeletons")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220320032009.3106133-1-yhs@fb.com
-
- 18 Mar, 2022 1 commit
Delyan Kratunov authored
Subskeletons are headers which require an already loaded program to operate. For example, when a BPF library is linked into a larger BPF object file, the library's user space needs a way to access its own global variables without requiring knowledge about the larger program at build time. As a result, subskeletons require a loaded bpf_object to open(). Further, they find their own symbols in the larger program by walking BTF type data at run time. At this time, programs, maps, and globals are supported through non-owning pointers.

Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/ca8a48b4841c72d285ecce82371bef4a899756cb.1647473511.git.delyank@fb.com
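
As an illustration (all names hypothetical): if lib.bpf.o was statically linked into app.bpf.o at build time, user-space code might look roughly like:

    struct app_bpf *app = app_bpf__open_and_load();
    /* the subskeleton opens against the already-created bpf_object */
    struct lib *lib = lib__open(app->obj);

    if (lib)
        lib->data->lib_verbose = 1; /* the library's own global, found via BTF */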
-
- 24 Feb, 2022 1 commit
Delyan Kratunov authored
When emitting type declarations in skeletons, bpftool will now also emit static assertions on the size of the data/bss/rodata/etc fields. This ensures that in situations where userspace and kernel types have the same name but differ in size, we do not silently produce incorrect results but instead break the build. This was reported in [1], and as expected the repro in [2] fails to build on the new size assert after this change.

[1] Closes: https://github.com/libbpf/libbpf/issues/433
[2] https://github.com/fuweid/iovisor-bcc-pr-3777

Signed-off-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Tested-by: Hengqi Chen <hengqi.chen@gmail.com>
Acked-by: Hengqi Chen <hengqi.chen@gmail.com>
Link: https://lore.kernel.org/bpf/f562455d7b3cf338e59a7976f4690ec5a0057f7f.camel@fb.com
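
The generated checks look roughly like this (a sketch, not verbatim bpftool output; the field name is hypothetical):

    static inline void my_skel__assert(struct my_skel *s)
    {
        /* breaks the build if the userspace type picked up at compile time
         * has a different size than what BTF recorded for the BPF side */
        _Static_assert(sizeof(s->bss->packet_cnt) == 8,
                       "unexpected size of 'packet_cnt'");
    }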
-
- 17 Feb, 2022 1 commit
Andrii Nakryiko authored
Mark the C++-specific T::open() and other methods as static inline to avoid symbol redefinition when multiple files use the same skeleton header in an application.

Fixes: bb8ffe61 ("bpftool: Add C++-specific open/load/etc skeleton wrappers")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220216233540.216642-1-andrii@kernel.org
-
- 16 Feb, 2022 3 commits
Mauricio Vásquez authored
The last part of the BTFGen algorithm is to create a new BTF object with all the types that were recorded in the previous steps. This function performs two different steps:

1. Add the types to the new BTF object by using btf__add_type(). Some special logic around structs and unions is implemented to only add the members that are really used in field-based relocations. The type IDs in the new and old BTF objects are stored in a map.
2. Fix up all the type IDs in the new BTF object by using the IDs saved in the previous step.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-6-mauricio@kinvolk.io
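
A rough sketch of step 1 (btf__add_type() copies one type from a source BTF; save_id_mapping() is a hypothetical helper standing in for the ID map):

    /* copy one marked type from the source BTF into the new one */
    static int copy_type(struct btf *dst, const struct btf *src, __u32 old_id)
    {
        int new_id = btf__add_type(dst, src, btf__type_by_id(src, old_id));

        if (new_id < 0)
            return new_id;
        /* remember old_id -> new_id for the later reference fix-up pass */
        return save_id_mapping(old_id, new_id);
    }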
-
Mauricio Vásquez authored
This commit implements the logic for the gen min_core_btf command. Specifically, it implements the following functions:

- minimize_btf(): receives the paths of source and destination BTF files and a list of BPF objects. This function records the relocations for all objects and then generates the BTF file by calling btfgen_get_btf() (implemented in the following commit).
- btfgen_record_obj(): loads the BTF and BTF.ext sections of the BPF objects and loops through all CO-RE relocations. It uses bpf_core_calc_relo_insn() from libbpf and passes the target spec to btfgen_record_reloc(), which calls one of the following functions depending on the relocation kind.
- btfgen_record_field_relo(): uses the target specification to mark all the types that are involved in a field-based CO-RE relocation. In this case types are resolved and marked recursively using btfgen_mark_type(). Only the struct and union members (and their types) involved in the relocation are marked, to optimize the size of the generated BTF file.
- btfgen_record_type_relo(): marks the types involved in a type-based CO-RE relocation. In this case no members of struct and union types are marked, as libbpf doesn't use them while performing this kind of relocation. Pointed-to types are marked, as they are used by libbpf in this case.
- btfgen_record_enumval_relo(): marks the whole enum type for enum-based relocations.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-5-mauricio@kinvolk.io
-
Mauricio Vásquez authored
This command is implemented under the "gen" command in bpftool, and the syntax is the following:

    $ bpftool gen min_core_btf INPUT OUTPUT OBJECT [OBJECT...]

INPUT is the file that contains all the BTF types for a kernel, and OUTPUT is the path of the minimized BTF file that will be created with only the types needed by the objects.

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com>
Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co>
Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220215225856.671072-4-mauricio@kinvolk.io
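
For illustration, a concrete invocation (file paths are hypothetical) could be:

    $ bpftool gen min_core_btf /sys/kernel/btf/vmlinux vmlinux.min.btf app.bpf.o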
-
- 15 Feb, 2022 1 commit
Andrii Nakryiko authored
Add C++-specific static methods to the code-generated BPF skeleton for each skeleton operation: open, open_opts, open_and_load, load, attach, detach, destroy, and elf_bytes. This is to facilitate easier C++ templating on top of the pure C BPF skeleton.

In C, open/load/destroy/etc "methods" are of the form <skeleton_name>__<method>() to avoid name collisions with similar "methods" of other skeletons within the same application. This works well, but is very inconvenient for C++ applications that would like to write generic (templated) wrappers around BPF skeletons to fit in with a C++ code base and take advantage of destructors and other convenient C++ constructs.

This patch makes it easier to build such generic templated wrappers by additionally defining C++ static methods for the skeleton's struct, with fixed names. This makes it possible to refer to, say, the open method as `T::open()` instead of having to somehow generate a `T__open()` function call. The next patch adds an example template to the test_cpp selftest to demonstrate how it's possible to have all the operations wrapped in a generic Skeleton<my_skeleton> type without explicitly passing function references.

An example of a generated declaration section without %1$s placeholders:

    #ifdef __cplusplus
        static struct test_attach_probe *open(const struct bpf_object_open_opts *opts = nullptr);
        static struct test_attach_probe *open_and_load();
        static int load(struct test_attach_probe *skel);
        static int attach(struct test_attach_probe *skel);
        static void detach(struct test_attach_probe *skel);
        static void destroy(struct test_attach_probe *skel);
        static const void *elf_bytes(size_t *sz);
    #endif /* __cplusplus */

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220212055733.539056-2-andrii@kernel.org
-
- 10 Feb, 2022 1 commit
Alexei Starovoitov authored
Generalize the light skeleton by hiding mmap details in skel_internal.h. In this form the generated lskel.h is usable both by user space and by the kernel.

Note that previously #include <bpf/bpf.h> was in the *.lskel.h file. To avoid #ifdef-s in a generated lskel.h, the include of bpf.h is moved to skel_internal.h, but skel_internal.h is also used by gen_loader.c, which is part of libbpf. Therefore skel_internal.h does #include "bpf.h" in the case of user space, so gen_loader.c and lskel.h have the necessary definitions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220209232001.27490-4-alexei.starovoitov@gmail.com
-
- 01 Feb, 2022 2 commits
Alexei Starovoitov authored
Open-code raw_tracepoint_open and link_create as used by the light skeleton, to be able to eventually avoid full libbpf.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com
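
Open-coding the attachment boils down to a direct bpf(2) syscall; a sketch of the raw-tracepoint case using only kernel UAPI definitions:

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int raw_tp_open(int prog_fd, const char *name)
    {
        union bpf_attr attr;

        memset(&attr, 0, sizeof(attr));
        /* name stays 0 for fentry/fexit/LSM-style programs */
        attr.raw_tracepoint.name = (__u64)(unsigned long)name;
        attr.raw_tracepoint.prog_fd = prog_fd;
        return syscall(__NR_bpf, BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
    }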
-
Alexei Starovoitov authored
BPF iterator programs should use bpf_link_create to attach, instead of bpf_raw_tracepoint_open like other tracing programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20220131220528.98088-2-alexei.starovoitov@gmail.com
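
With libbpf's API, the iterator case from this fix is roughly:

    /* iterator programs attach via BPF_LINK_CREATE, not raw_tracepoint_open */
    int link_fd = bpf_link_create(bpf_program__fd(prog), 0 /* target_fd */,
                                  BPF_TRACE_ITER, NULL /* opts */);
    if (link_fd < 0)
        return link_fd;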
-
- 26 Jan, 2022 1 commit
Andrii Nakryiko authored
Use bpf_program__type() instead of the discouraged bpf_program__get_type(). Also switch to bpf_map__set_max_entries() instead of bpf_map__resize().

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20220124194254.2051434-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 13 Jan, 2022 2 commits
Wei Fu authored
After `bpftool gen skeleton`, the ${bpf_app}.skel.h provides a ${bpf_app_name}__open helper to load the BPF program. If there is an error like ENOMEM, ${bpf_app_name}__open will roll back (free) the allocated object, including the `bpf_object_skeleton`.

Since ${bpf_app_name}__create_skeleton sets obj->skeleton first and does not roll it back on error, this causes a double free in ${bpf_app_name}__destroy when called from ${bpf_app_name}__open. Therefore, we should set obj->skeleton only just before returning 0.

Fixes: 5dc7a8b2 ("bpftool, selftests/bpf: Embed object file inside skeleton")
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220108084008.1053111-1-fuweid89@gmail.com
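
In sketch form, with a hypothetical app name:

    static int my_app__create_skeleton(struct my_app *obj)
    {
        struct bpf_object_skeleton *s = calloc(1, sizeof(*s));

        if (!s)
            goto err;
        /* ... fill in s->maps, s->progs, s->data; any step may fail ... */

        obj->skeleton = s; /* assign only once nothing can fail anymore */
        return 0;
    err:
        /* obj->skeleton is still NULL here, so a later my_app__destroy()
         * cannot free s a second time */
        bpf_object__destroy_skeleton(s);
        return -ENOMEM;
    }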
-
Christy Lee authored
The libbpf bpf_map__def() API is being deprecated; replace bpftool's usage with the appropriate getters and setters.

Signed-off-by: Christy Lee <christylee@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220108004218.355761-3-christylee@fb.com
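
The replacement pattern, sketched:

    /* before: const struct bpf_map_def *def = bpf_map__def(map); def->...; */
    __u32 max_entries = bpf_map__max_entries(map);
    __u32 key_size = bpf_map__key_size(map);
    __u32 value_size = bpf_map__value_size(map);
    enum bpf_map_type type = bpf_map__type(map);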
-
- 10 Dec, 2021 1 commit
Andrii Nakryiko authored
Switch all the uses of the to-be-deprecated bpf_object__load_xattr() into simple bpf_object__load() calls, with the optional log_level passed through open_opts.kernel_log_level if the -d option is specified.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211209193840.1248570-13-andrii@kernel.org
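
A simplified sketch of the resulting pattern:

    LIBBPF_OPTS(bpf_object_open_opts, open_opts);

    if (debug_mode) /* i.e., -d was given */
        open_opts.kernel_log_level = 1 + 2 + 4;

    struct bpf_object *obj = bpf_object__open_file(path, &open_opts);
    if (!obj)
        return -errno;
    int err = bpf_object__load(obj); /* instead of bpf_object__load_xattr() */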
-
- 15 Nov, 2021 1 commit
Hengqi Chen authored
Currently, LIBBPF_STRICT_ALL mode is enabled by default for bpftool, which means that in error cases some libbpf APIs return NULL pointers. This makes the IS_ERR check fail to detect such cases, resulting in a segfault. Use libbpf_get_error() instead, as we do in libbpf itself.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211115012436.3143318-1-hengqi.chen@gmail.com
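
The convention in sketch form (btf__parse_split() is just an example callee):

    struct btf *btf = btf__parse_split(path, base_btf);
    int err = libbpf_get_error(btf); /* handles NULL+errno and ERR_PTR alike */

    if (err) {
        p_err("failed to load BTF from %s: %s", path, strerror(-err));
        return err;
    }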
-
- 12 Nov, 2021 1 commit
Andrii Nakryiko authored
Use v1.0-compatible variants of the btf_dump and perf_buffer "constructors". This also demonstrates reusing struct perf_buffer_raw_opts as an OPTS-style option struct for the new perf_buffer__new_raw() API.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20211111053624.190580-10-andrii@kernel.org
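
For instance, the v1.0-compatible btf_dump constructor takes the options struct as a trailing argument (a sketch; the printf callback is user-supplied):

    static void my_printf(void *ctx, const char *fmt, va_list args)
    {
        vfprintf(stdout, fmt, args);
    }

    /* ... */
    struct btf_dump *d = btf_dump__new(btf, my_printf, NULL /* ctx */,
                                       NULL /* opts */);
    if (!d)
        return -errno;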
-
- 22 Oct, 2021 3 commits
Hengqi Chen authored
Replace the call to btf__get_nr_types with the new API btf__type_cnt. The old API will be deprecated in libbpf v0.7+. No functionality change.

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211022130623.1548429-5-hengqi.chen@gmail.com
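
Iteration with the new API, as a sketch (note the count includes the implicit 'void' type at ID 0):

    __u32 n = btf__type_cnt(btf);

    for (__u32 id = 1; id < n; id++) {
        const struct btf_type *t = btf__type_by_id(btf, id);
        /* ... inspect t ... */
    }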
-
Andrii Nakryiko authored
It can happen that some data sections (e.g., .rodata.cst16, containing compiler-populated string constants) won't have a corresponding BTF DATASEC type. Now that libbpf supports .rodata.* and .data.* sections, situations like that would cause an invalid BPF skeleton to be generated that won't compile successfully, as some parts of the skeleton assume memory-mapped struct definitions for each special data section. Fix this by generating empty struct definitions for such data sections.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-7-andrii@kernel.org
-
Andrii Nakryiko authored
Remove the assumption that there is only a single instance of each of the .rodata and .data internal maps. Nothing changes for the '.rodata' and '.data' maps, but a new '.rodata.something' map will get a 'rodata_something' section in the BPF skeleton generated for it (as well as a struct bpf_map * field of the same name in the maps section).

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20211021014404.2635234-6-andrii@kernel.org
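
An illustrative sketch of the naming convention (program and section names hypothetical):

    /* given a custom .data.counters section, the generated skeleton would
     * contain roughly the following, with dots mapped to underscores: */
    struct my_skel {
        struct {
            struct bpf_map *data_counters;
        } maps;
        struct my_skel__data_counters {
            __u64 total_pkts;
        } *data_counters;
        /* ... */
    };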
-
- 08 Oct, 2021 1 commit
Quentin Monnet authored
It seems that the header file was never necessary to compile bpftool, and it is not part of the headers exported from libbpf. Let's remove the includes from prog.c and gen.c.

Fixes: d510296d ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20211007194438.34443-3-quentin@isovalent.com
-
- 28 Sep, 2021 1 commit
Yucong Sun authored
"?:" is a GNU C extension, some environment has warning flags for its use, or even prohibit it directly. This patch avoid triggering these problems by simply expand it to its full form, no functionality change. Signed-off-by:
Yucong Sun <fallentree@fb.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210928184221.1545079-1-fallentree@fb.com
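
The expansion in question (variable names are illustrative):

    /* GNU extension: */
    timeout = arg ?: DEFAULT_TIMEOUT;
    /* portable full form now emitted instead: */
    timeout = arg ? arg : DEFAULT_TIMEOUT;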
-
- 08 Sep, 2021 1 commit
Matt Smith authored
This adds a skeleton method X__elf_bytes(), which returns the binary data of the compiled and embedded BPF object file. It additionally sets the provided size_t pointer argument to the size of the returned data. The assignment to s->data is cast to void * to ensure no warning is issued if compiled with a previous version of libbpf, where the bpf_object_skeleton field is void * instead of const void *.

Signed-off-by: Matt Smith <alastorze@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210901194439.3853238-3-alastorze@fb.com
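
Typical use, with a hypothetical skeleton name (e.g., re-emitting the embedded object file):

    size_t sz;
    const void *data = my_skel__elf_bytes(&sz);
    FILE *f = fopen("embedded.bpf.o", "wb");

    if (f) {
        fwrite(data, 1, sz, f);
        fclose(f);
    }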
-
- 30 Jul, 2021 2 commits
Quentin Monnet authored
The -L|--use-loader option, for using loader programs when loading or when generating a skeleton, did not have any documentation or bash completion. The same goes for -B|--base-btf, used to pass a path to a base BTF object for split BTF, such as BTF for kernel modules. This patch documents and adds bash completion for those options.

Fixes: 75fa1777 ("tools/bpftool: Add bpftool support for split BTF")
Fixes: d510296d ("bpftool: Use syscall/loader program in "prog load" and "gen skeleton" command.")
Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210730215435.7095-7-quentin@isovalent.com
-
Quentin Monnet authored
All bpftool commands support the options for JSON output and debug output from libbpf. In addition, some commands support additional options corresponding to specific use cases. The lists of options described in the man pages for the different commands are not always accurate. The messages for interactive help are mostly limited to HELP_SPEC_OPTIONS, and are even less representative of the actual set of options supported by the commands.

Let's update the lists:

- HELP_SPEC_OPTIONS is modified to contain the "default" options (JSON and debug), and to be extensible (no ending curly bracket).
- All commands use HELP_SPEC_OPTIONS in their help message, and then complete the list with their specific options.
- The lists of options in the man pages are updated.
- The formatting of the list for bpftool.rst is adjusted to match the formatting of the other man pages. This is for consistency, and also because it will be helpful in a future patch to automatically check that the files are synchronised.

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210730215435.7095-5-quentin@isovalent.com
-
- 26 May, 2021 1 commit
Andrii Nakryiko authored
Follow libbpf's error handling conventions and pass through errors and errno properly. The skeleton code always returned NULL on errors (not ERR_PTR(err)), so there are no backwards compatibility concerns. But now we also set errno properly, so it's possible to distinguish different reasons for failure, if necessary.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20210525035935.1461796-6-andrii@kernel.org
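
What callers can now do, in sketch form (skeleton name hypothetical):

    struct my_skel *skel = my_skel__open();

    if (!skel) {
        /* still NULL on failure, but errno is now meaningful */
        fprintf(stderr, "failed to open skeleton: %s\n", strerror(errno));
        return 1;
    }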
-
- 18 May, 2021 1 commit
Alexei Starovoitov authored
Add a -L flag to bpftool to use the libbpf gen_trace facility and syscall/loader program for skeleton generation and program loading.

The "bpftool gen skeleton -L" command will generate a "light skeleton" or "loader skeleton" that is similar to the existing skeleton, but has one major difference:

    $ bpftool gen skeleton lsm.o > lsm.skel.h
    $ bpftool gen skeleton -L lsm.o > lsm.lskel.h
    $ diff lsm.skel.h lsm.lskel.h
    @@ -5,34 +4,34 @@
     #define __LSM_SKEL_H__
     #include <stdlib.h>
    -#include <bpf/libbpf.h>
    +#include <bpf/bpf.h>

The light skeleton does not use the majority of libbpf's infrastructure. It doesn't need libelf. It doesn't parse the .o file. It only needs a few sys_bpf wrappers, all of which are in the bpf/bpf.h file. In the future libbpf/bpf.c can be inlined into bpf.h, so not even libbpf.a would be needed to work with a light skeleton.

The "bpftool prog load -L file.o" command is introduced for debugging of syscall/loader program generation. Just like the same command without -L, it will try to load the programs from file.o into the kernel. It won't even try to pin them.

The "bpftool prog load -L -d file.o" command will provide additional debug messages on how the syscall/loader program was generated. The execution of the syscall/loader program will also use bpf_trace_printk() for each step of loading BTF, creating maps, and loading programs. The user can do "cat /.../trace_pipe" for further debugging.

An example of fexit_sleep.lskel.h generated from progs/fexit_sleep.c:

    struct fexit_sleep {
        struct bpf_loader_ctx ctx;
        struct {
            struct bpf_map_desc bss;
        } maps;
        struct {
            struct bpf_prog_desc nanosleep_fentry;
            struct bpf_prog_desc nanosleep_fexit;
        } progs;
        struct {
            int nanosleep_fentry_fd;
            int nanosleep_fexit_fd;
        } links;
        struct fexit_sleep__bss {
            int pid;
            int fentry_cnt;
            int fexit_cnt;
        } *bss;
    };

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20210514003623.28033-18-alexei.starovoitov@gmail.com
-
- 11 May, 2021 2 commits
Andrii Nakryiko authored
As discussed in [0], stop emitting static variables in BPF skeletons to avoid issues with name-conflicting static variables across multiple statically-linked BPF object files.

Users using static variables to pass data between BPF programs and user space should do a trivial one-time switch according to the following simple rules:

- read-only `static volatile const` variables should be converted to `volatile const`;
- read/write `static volatile` variables should just drop the `static volatile` modifiers to become global variables/symbols.

To better handle older Clang versions, such newly converted global variables should be explicitly initialized with a specific value or `= 0`/`= {}`, whichever is appropriate.

[0] https://lore.kernel.org/bpf/CAEf4BzZo7_r-hsNvJt3w3kyrmmBJj7ghGY8+k4nvKF0KLjma=w@mail.gmail.com/T/#m664d4b0d6b31ac8b2669360e0fc2d6962e9f5ec1

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210507054119.270888-5-andrii@kernel.org
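
The two conversion rules from the message, illustrated (variable names hypothetical):

    /* read-only: was `static volatile const int debug_level;`, becomes: */
    volatile const int debug_level = 0; /* explicit init helps older Clang */

    /* read/write: was `static volatile int packet_count;`, becomes: */
    int packet_count = 0;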
-
Andrii Nakryiko authored
For better future extensibility, add per-file linker options. Currently the set of available options is empty. This changes the bpf_linker__add_file() API, but it's not a breaking change, as the bpf_linker APIs haven't been released yet.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20210507054119.270888-3-andrii@kernel.org
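
Usage after the API change, sketched (file names are illustrative):

    struct bpf_linker *linker = bpf_linker__new("combined.bpf.o", NULL);
    int err = bpf_linker__add_file(linker, "part1.bpf.o",
                                   NULL /* per-file opts, none defined yet */);

    if (!err)
        err = bpf_linker__finalize(linker);
    bpf_linker__free(linker);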
-