- 05 Jan, 2024 4 commits
-
-
Jiri Olsa authored
The following case can cause a crash due to missing attach_btf:

1) load rawtp program
2) load fentry program with rawtp as target_fd
3) create tracing link for fentry program with target_fd = 0
4) repeat 3

In the end we have:
- prog->aux->dst_trampoline == NULL
- tgt_prog == NULL (because we did not provide target_fd to link_create)
- prog->aux->attach_btf == NULL (the program was loaded with attach_prog_fd=X)
- the program was loaded for tgt_prog but we have no way to find out which one

  BUG: kernel NULL pointer dereference, address: 0000000000000058
  Call Trace:
  <TASK>
  ? __die+0x20/0x70
  ? page_fault_oops+0x15b/0x430
  ? fixup_exception+0x22/0x330
  ? exc_page_fault+0x6f/0x170
  ? asm_exc_page_fault+0x22/0x30
  ? bpf_tracing_prog_attach+0x279/0x560
  ? btf_obj_id+0x5/0x10
  bpf_tracing_prog_attach+0x439/0x560
  __sys_bpf+0x1cf4/0x2de0
  __x64_sys_bpf+0x1c/0x30
  do_syscall_64+0x41/0xf0
  entry_SYSCALL_64_after_hwframe+0x6e/0x76

Return -EINVAL in this situation.

Fixes: f3a95075 ("bpf: Allow trampoline re-attach for tracing and lsm programs")
Cc: stable@vger.kernel.org
Signed-off-by: Jiri Olsa <olsajiri@gmail.com>
Acked-by: Jiri Olsa <olsajiri@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
Link: https://lore.kernel.org/r/20240103190559.14750-4-9erthalion6@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
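The offending part boils down to two BPF_LINK_CREATE calls with target_fd = 0 on the re-attachable fentry program. A minimal sketch of steps 3 and 4 using libbpf's low-level API follows; it assumes fentry_fd came from a program load that passed attach_prog_fd pointing at the rawtp program (step 2), and it omits all error handling:

    #include <bpf/bpf.h>

    static void trigger(int fentry_fd)
    {
            /* step 3: the first link consumes prog->aux->dst_trampoline */
            int link1 = bpf_link_create(fentry_fd, 0, BPF_TRACE_FENTRY, NULL);

            /* step 4: second link with target_fd = 0; dst_trampoline, tgt_prog
             * and attach_btf are all NULL now -- NULL deref before the fix,
             * -EINVAL after it
             */
            int link2 = bpf_link_create(fentry_fd, 0, BPF_TRACE_FENTRY, NULL);

            (void)link1; (void)link2;
    }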
-
Dmitrii Dolgov authored
Verify that only one fentry prog can be attached to another fentry, building up an attachment chain of limited size. Use the existing bpf_testmod as the start of the chain. Acked-by: Jiri Olsa <olsajiri@gmail.com> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Link: https://lore.kernel.org/r/20240103190559.14750-3-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Dmitrii Dolgov authored
Currently, it's not allowed to attach an fentry/fexit prog to another fentry/fexit one. At the same time it's not uncommon to see a tracing program with lots of logic in use, and the attachment limitation prevents usage of fentry/fexit for performance analysis (e.g. with the "bpftool prog profile" command) in this case. An example could be the falcosecurity libs project that uses tp_btf tracing programs. Following the corresponding discussion [1], the reason for that is to avoid call cycles between tracing progs without introducing more complex solutions. But currently it seems impossible to load and attach tracing programs in a way that would form such a cycle. The limitation comes from the fact that attach_prog_fd is specified at prog load time (thus making it impossible to attach to a program loaded after it in this way), as well as from tracing progs not implementing link_detach. Replace the "no same type" requirement with verification that no more than one level of attachment nesting is allowed. In this way only one fentry/fexit program can be attached to another fentry/fexit to cover the profiling use case, and still no cycle can be formed. To implement this, add a new field to bpf_prog_aux to track nested attachment for tracing programs. [1]: https://lore.kernel.org/bpf/20191108064039.2041889-16-ast@kernel.org/ Acked-by: Jiri Olsa <olsajiri@gmail.com> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com> Link: https://lore.kernel.org/r/20240103190559.14750-2-9erthalion6@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
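A rough sketch of the attachment check this describes, with illustrative names (not the exact upstream code):

    /* Illustrative sketch only: one new flag in bpf_prog_aux marks a tracing
     * prog that is itself attached to another tracing prog, and a second
     * level of nesting is refused.
     */
    static int tracing_attach_allowed(struct bpf_prog *prog, struct bpf_prog *tgt_prog)
    {
            if (tgt_prog && tgt_prog->type == BPF_PROG_TYPE_TRACING &&
                tgt_prog->aux->attach_tracing_prog)     /* target already nests */
                    return -EINVAL;
            if (tgt_prog && tgt_prog->type == BPF_PROG_TYPE_TRACING)
                    prog->aux->attach_tracing_prog = true;  /* hypothetical field name */
            return 0;
    }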
-
Leon Hwang authored
Move emit_nops() before emit_prologue() and replace memcpy(prog, x86_nops[5], X86_PATCH_SIZE) with emit_nops(&prog, X86_PATCH_SIZE). Signed-off-by: Leon Hwang <hffilwlqm@gmail.com> Link: https://lore.kernel.org/r/20240104142226.87869-2-hffilwlqm@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
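For context, a simplified sketch of what emit_nops() does in the x86 JIT (not a verbatim copy of arch/x86/net/bpf_jit_comp.c): it emits a run of NOPs of the requested total length by chunking into the longest available encodings from x86_nops[], which is why it can replace the open-coded memcpy of x86_nops[5]:

    static void emit_nops(u8 **pprog, int len)
    {
            u8 *prog = *pprog;

            while (len > 0) {
                    /* pick the longest NOP encoding that still fits */
                    int noplen = len > ASM_NOP_MAX ? ASM_NOP_MAX : len;

                    memcpy(prog, x86_nops[noplen], noplen);
                    prog += noplen;
                    len -= noplen;
            }
            *pprog = prog;
    }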
-
- 04 Jan, 2024 31 commits
-
-
Alexei Starovoitov authored
Ilya Leoshkevich says: ==================== s390/bpf: Fix gotol with large offsets Hi, While looking at a pyperf180 failure on s390x (must be related to [1], I'm not done with the investigation yet) I noticed that I have unfortunately messed up the gotol implementation. Patch 1 is the fix, patch 2 is a small test infrastructure tweak, and patch 3 adds a test. [1] https://github.com/llvm/llvm-project/issues/55669 Best regards, Ilya ==================== Link: https://lore.kernel.org/r/20240102193531.3169422-1-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Ilya Leoshkevich authored
Test gotol with offsets that don't fit into a short (i.e., larger than 32k or smaller than -32k). Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/20240102193531.3169422-4-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Ilya Leoshkevich authored
Testing long jumps requires having >32k instructions. That many instructions require a verifier log buffer of 2 megabytes. The regular test_progs run doesn't need an increased buffer, since the gotol test with 40k instructions doesn't request a log, but test_progs -v will set the verifier log level. Hence, to avoid breaking the gotol test with -v, increase the buffer size. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/20240102193531.3169422-3-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Ilya Leoshkevich authored
The gotol implementation uses the wrong data type for the offset: it should be s32, not s16. Fixes: c690191e ("s390/bpf: Implement unconditional jump with 32-bit offset") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/r/20240102193531.3169422-2-iii@linux.ibm.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
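The truncation is easy to see with an offset just outside the s16 range (an illustrative snippet, not the JIT code itself; gotol carries its 32-bit offset in insn->imm):

    int imm = 40000;        /* a gotol jumping ~40k instructions forward */
    s16 off16 = imm;        /* truncates to -25536: the jump now goes backwards */
    s32 off32 = imm;        /* stays 40000, as intended */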
-
Quentin Deslandes authored
bpfilter was supposed to convert iptables filtering rules into BPF programs on the fly, from the kernel, through a usermode helper. The base code for the UMH was introduced in 2018 [1], and a couple of attempts [2, 3] tried to introduce the BPF program generation features but were abandoned. bpfilter now sits in the kernel tree unused and unusable, occasionally causing confusion amongst Linux users [4, 5]. As bpfilter is now developed in a dedicated repository on GitHub [6], it was suggested a couple of times this year (LSFMM/BPF 2023, LPC 2023) to remove the deprecated kernel part of the project. This is the purpose of this patch. [1]: https://lore.kernel.org/lkml/20180522022230.2492505-1-ast@kernel.org/ [2]: https://lore.kernel.org/bpf/20210829183608.2297877-1-me@ubique.spb.ru/#t [3]: https://lore.kernel.org/lkml/20221224000402.476079-1-qde@naccy.de/ [4]: https://dxuuu.xyz/bpfilter.html [5]: https://github.com/linuxkit/linuxkit/pull/3904 [6]: https://github.com/facebook/bpfilter Signed-off-by: Quentin Deslandes <qde@naccy.de> Link: https://lore.kernel.org/r/20231226130745.465988-1-qde@naccy.de Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
After merging the patch set [1] to reduce memory usage for bpf_global_percpu_ma, Alexei found a redundant check (cpu == 0) in function bpf_mem_alloc_percpu_unit_init() ([2]). Indeed, the check is unnecessary since c->unit_size will be all NULL or all non-NULL for all cpus before the for_each_possible_cpu() loop. Removing the check makes the code less confusing. [1] https://lore.kernel.org/all/20231222031729.1287957-1-yonghong.song@linux.dev/ [2] https://lore.kernel.org/all/20231222031745.1289082-1-yonghong.song@linux.dev/ Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20240104165744.702239-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Libbpf-side __arg_ctx fallback support Support __arg_ctx global function argument tag semantics even on older kernels that don't natively support it through btf_decl_tag("arg:ctx"). Patches #2-#6 are preparatory work to allow postponing BTF loading into the kernel until after all the BPF program relocations (including global func appending to main programs) are done. Patch #4 is perhaps the most important and establishes pre-created stable placeholder FDs, so that relocations can embed valid map FDs into ldimm64 instructions. Once BTF is done after relocation, what's left is to adjust BTF information to have each main program's copy of each used global subprog point to its own adjusted FUNC -> FUNC_PROTO type chain (if they use __arg_ctx) in such a way as to satisfy the type expectations of the BPF verifier regarding the PTR_TO_CTX argument definition. See patch #8 for details. Patch #8 adds a few more __arg_ctx use cases (edge cases like multiple arguments having __arg_ctx, etc.) to test_global_func_ctx_args.c, to make it simple to validate that this logic indeed works on old kernels. It does. But just to be 100% sure patch #9 adds a test validating that libbpf uploads func_info with properly modified BTF data.

v2->v3:
- drop renaming patch (Alexei, Eduard);
- use memfd_create() instead of /dev/null for placeholder FD (Eduard);
- add one more test for validating BTF rewrite logic (Eduard);
- fixed wrong -errno usage, reshuffled some BTF rewrite bits (Eduard);

v1->v2:
- do internal functions renaming in patch #1 (Alexei);
- extract cloning of FUNC -> FUNC_PROTO information into separate function (Alexei);
==================== Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20240104013847.3875810-1-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add a test validating that libbpf uploads BTF and func_info with rewritten type information for arguments of global subprogs that are marked with __arg_ctx tag. Suggested-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-10-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add a few extra cases of global funcs with context arguments. This time rely on the "arg:ctx" decl_tag (the __arg_ctx macro), but put it next to "classic" cases where the context argument has to be of the exact type that the BPF verifier expects (e.g., bpf_user_pt_regs_t for kprobe/uprobe). Colocating all these cases separately from other global func args that rely on arg:xxx decl tags (in verifier_global_subprogs.c) allows for simpler backwards compatibility testing on old kernels. All the cases in test_global_func_ctx_args.c are supposed to work on older kernels, which was manually validated during development. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-9-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
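In BPF C the two styles being colocated look roughly like this (a hedged sketch; __noinline and __arg_ctx come from bpf_helpers.h, where __arg_ctx expands to btf_decl_tag("arg:ctx")):

    /* "classic" style: the context argument must have the exact struct type
     * the verifier expects for the calling program type (kprobe here)
     */
    __noinline int subprog_typed_ctx(bpf_user_pt_regs_t *ctx)
    {
            return !!ctx;
    }

    /* decl_tag style: any pointer type is fine once the argument is tagged */
    __noinline int subprog_tagged_ctx(void *ctx __arg_ctx)
    {
            return !!ctx;
    }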
-
Andrii Nakryiko authored
Out of all the special global func arg tag annotations, __arg_ctx is practically the most immediately useful and most critical to have working across a multitude of kernel versions, if possible. This would allow end users to write much simpler code if __arg_ctx semantics worked for older kernels that don't natively understand btf_decl_tag("arg:ctx") in verifier logic. Luckily, it is possible to ensure __arg_ctx works on old kernels through a bit of extra work done by libbpf, at least in a lot of common cases.

To explain the overall idea, we need to go back to how the context argument was supported in global funcs before __arg_ctx support was added. This was done based on special struct name checks in the kernel. E.g., for BPF_PROG_TYPE_PERF_EVENT the expectation is that the argument type `struct bpf_perf_event_data *` marks that argument as PTR_TO_CTX. This is all good as long as the global function is used from a single BPF program type only, which is often not the case. If the same subprog has to be called from, say, kprobe and perf_event program types, there is no single definition that would satisfy the BPF verifier. The subprog will have a context argument either for kprobe (if using the bpf_user_pt_regs_t struct name) or for perf_event (with the bpf_perf_event_data struct name), but not both. This limitation was the reason to add btf_decl_tag("arg:ctx"), making the actual argument type unimportant, so that the user can just define a "generic" signature:

  __noinline int global_subprog(void *ctx __arg_ctx) { ... }

I won't belabor how libbpf is implementing subprograms, see the huge comment next to the bpf_object_relocate_calls() function. The idea is that each main/entry BPF program gets its own copy of global_subprog's code appended. This per-program copy of the global subprog code *and* the associated func_info .BTF.ext information, pointing to a FUNC -> FUNC_PROTO BTF type chain, allows libbpf to simulate __arg_ctx behavior transparently, even if the kernel doesn't yet support the __arg_ctx annotation natively.

The idea is straightforward: each time we append a global subprog's code and func_info information, we adjust its FUNC -> FUNC_PROTO type information, if necessary (that is, libbpf can detect the presence of btf_decl_tag("arg:ctx") just like the BPF verifier would do it). The rest is just mechanical and somewhat painful BTF manipulation code. It's painful because we need to clone FUNC -> FUNC_PROTO, instead of reusing it, as the same FUNC -> FUNC_PROTO chain might be used by another main BPF program within the same BPF object, so we can't just modify it in-place (and cloning BTF types within the same struct btf object is painful due to constant memory invalidation, see comments in the code). The uploaded BPF object's BTF information has to work for all BPF programs at the same time.

Once we have FUNC -> FUNC_PROTO clones, we make sure that instead of using some `void *ctx` parameter definition, we have the expected `struct bpf_perf_event_data *ctx` definition (as far as the BPF verifier and kernel are concerned), which will mark it as context for the BPF verifier. The same global subprog relocated and copied into another main BPF program will get different type information according to that main program's type. It all works out in the end in a completely transparent way for the end user. Libbpf maintains the program type -> expected context struct name mapping internally. Note, not all BPF program types have a named context struct, so this approach won't work for such programs (just like it didn't before __arg_ctx).
So native __arg_ctx support is still important to have in the kernel to get generic context support across all BPF program types. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-8-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
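A condensed sketch of the cloning step using libbpf's public BTF APIs (error handling, decl_tag detection and func_info patching omitted; this illustrates the idea, it is not the actual libbpf internals):

    /* Build a fresh FUNC -> FUNC_PROTO chain for one program's copy of a
     * global subprog, typing the ctx parameter as the struct this program
     * type expects (e.g. "bpf_perf_event_data" for perf_event progs).
     */
    static int clone_subprog_with_ctx(struct btf *btf, const char *subprog_name,
                                      const char *ctx_struct_name, int ret_type_id)
    {
            int struct_id, ptr_id, proto_id;

            struct_id = btf__find_by_name_kind(btf, ctx_struct_name, BTF_KIND_STRUCT);
            if (struct_id < 0)
                    return struct_id;
            ptr_id = btf__add_ptr(btf, struct_id);
            proto_id = btf__add_func_proto(btf, ret_type_id);
            btf__add_func_param(btf, "ctx", ptr_id);  /* replaces the void *ctx param */
            return btf__add_func(btf, subprog_name, BTF_FUNC_GLOBAL, proto_id);
    }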
-
Andrii Nakryiko authored
With all the preparations in previous patches done we are ready to postpone BTF loading and sanitization step until after all the relocations are performed. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-7-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Move the logic of finding and assigning exception callback indices from the BTF sanitization step to the program relocations step, which seems more logical and will unblock moving BTF loading to after the relocation step. Exception callback discovery and assignment has no dependency on BTF being loaded into the kernel, it only uses BTF information. It does need to happen before subprogram relocations happen, though. Which is why the split. No functional changes. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-6-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Move map creation to later during BPF object loading by pre-creating stable placeholder FDs (utilizing memfd_create()). Then use the dup2() syscall to atomically make those placeholder FDs point to real kernel BPF map objects. This change allows delaying BPF map creation until after all the BPF program relocations. That, in turn, allows delaying BTF finalization and loading into the kernel until after all the relocations as well. We'll take advantage of the latter in subsequent patches to allow libbpf to adjust BTF in a way that helps with BPF global function usage. Clean up a few places where we close map->fd, which now shouldn't happen, because map->fd should be a valid FD regardless of whether the map was created or not. Surprisingly and nicely it simplifies a bunch of error handling code. If this change doesn't backfire, I'm tempted to pre-create such stable FDs for other entities (progs, maybe even BTF). We previously did some manipulations to make gen_loader work with fake map FDs; with stable map FDs this hack is not necessary for maps (we still have it for BTF, but I left it as is for now). Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-5-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
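The placeholder-FD trick in isolation looks roughly like the standalone sketch below (map name and parameters are arbitrary, error handling omitted):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <bpf/bpf.h>

    int placeholder_demo(void)
    {
            /* stable FD created at open time; its number can already be
             * embedded into ldimm64 instructions during relocation
             */
            int placeholder_fd = memfd_create("libbpf-placeholder", MFD_CLOEXEC);

            /* ... relocations happen here, referring to placeholder_fd ... */

            /* at load time, create the real map and atomically repoint the
             * placeholder FD number at it
             */
            int real_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "demo_map",
                                         sizeof(int), sizeof(long), 1, NULL);
            dup2(real_fd, placeholder_fd);  /* placeholder_fd now refers to the map */
            close(real_fd);
            return placeholder_fd;
    }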
-
Andrii Nakryiko authored
With the upcoming switch to preallocated placeholder FDs for maps, switch various getters/setters away from checking map->fd. Use the map_is_created() helper that detects whether a BPF map can be modified, based on map->obj->loaded state, with a special provision for maps set up with bpf_map__reuse_fd(). For backwards compatibility, we take map_is_created() into account in the bpf_map__fd() getter as well. This way, before the bpf_object__load() phase, bpf_map__fd() will always return -1, just as before the changes in subsequent patches adding stable map->fd placeholders. We also get rid of all internal uses of the bpf_map__fd() getter, as it's more oriented for uses external to libbpf. The above map_is_created() check actually interferes with some of the internal uses, if the map FD is fetched through bpf_map__fd(). Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
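Based on the description above (and the map->reused flag described in the entry below), the helper boils down to roughly this sketch; field names follow libbpf's internal struct bpf_map:

    /* a map FD is meaningful once the object is loaded, or earlier if the
     * user pointed the map at an existing kernel object via bpf_map__reuse_fd()
     */
    static bool map_is_created(const struct bpf_map *map)
    {
            return map->obj->loaded || map->reused;
    }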
-
Andrii Nakryiko authored
Instead of inferring whether a map already points to a previously created/pinned BPF map (which the user can specify with the bpf_map__reuse_fd() API), use the explicit map->reused flag that is set in such a case. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
It makes future grepping and code analysis a bit easier. Acked-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20240104013847.3875810-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Alexei Starovoitov authored
Yonghong Song says: ==================== bpf: Reduce memory usage for bpf_global_percpu_ma Currently when a bpf program intends to allocate memory for a percpu kptr, the verifier will call bpf_mem_alloc_init() to prefill all supported unit sizes, and this causes very large memory consumption for a large number of cpus. For example, for a 128-cpu system, the total memory consumption with initial prefill is ~175MB. Things will become worse for systems with even more cpus.

Patch 1 avoids unnecessary extra percpu memory allocation. Patch 2 adds objcg to bpf_mem_alloc at init stage so objcg can be associated with the root cgroup and objcg can be passed to the later bpf_mem_alloc_percpu_unit_init(). Patch 3 addresses the memory consumption issue by avoiding prefilling with all unit sizes, i.e., only prefilling with the user-specified size. Patch 4 further reduces memory consumption by limiting the number of prefill entries for percpu memory allocation. Patch 5 uses much smaller low/high watermarks for percpu allocation to reduce memory consumption. Patch 6 rejects percpu memory allocation with bpf_global_percpu_ma when the allocation size is greater than 512 bytes. Patch 7 fixes the test_bpf_ma test, which was affected by Patch 5. Patch 8 adds one test to show the verification failure log message.

Changelogs:
v5 -> v6:
- Change bpf_mem_alloc_percpu_init() to add objcg as one of the parameters. For bpf_global_percpu_ma, the objcg is NULL, corresponding to the root memcg.
v4 -> v5:
- Do not do bpf_global_percpu_ma initialization at init stage, instead doing initialization when the verifier knows it is going to be used by a bpf prog.
- Use much smaller low/high watermarks for percpu allocation.
v3 -> v4:
- Add objcg to bpf_mem_alloc during init stage.
- Initialize objcg at init stage but use it in bpf_mem_alloc_percpu_unit_init().
- Remove check_obj_size() in bpf_mem_alloc_percpu_unit_init().
v2 -> v3:
- Clear the bpf_mem_cache if prefill fails.
- Change test_bpf_ma percpu allocation tests to use bucket_size as the allocation size instead of bucket_size - 8.
- Remove the __GFP_ZERO flag from the __alloc_percpu_gfp() call.
v1 -> v2:
- Avoid unnecessary extra percpu memory allocation.
- Add a separate function to do bpf_global_percpu_ma initialization.
- Promote the function-static 'sizes' array to file static.
- Add comments to explain refilling only one item for percpu alloc.
==================== Link: https://lore.kernel.org/r/20231222031729.1287957-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Add a selftest to capture the verification failure when the allocation size is greater than 512. Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031812.1293190-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
In the previous patch, the maximum data size for bpf_global_percpu_ma was set to 512 bytes. This breaks the selftest test_bpf_ma. The test is adjusted in two aspects: - Since the maximum allowed data size for bpf_global_percpu_ma is 512, remove all tests beyond that, namely sizes 1024, 2048 and 4096. - Previously the percpu data size was bucket_size - 8 in order to avoid the percpu allocation going into the next bucket. This patch removes such data size adjustment thanks to Patch 1. Also, a better way to generate the BTF type is used than adding a member to the value struct. Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031807.1292853-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
For percpu data structure allocation with bpf_global_percpu_ma, the maximum data size is 4K. But for a system with a large number of cpus, a bigger data size (e.g., 2K, 4K) might consume a lot of memory. For example, the percpu memory consumption with unit size 2K and 1024 cpus will be 2K * 1K * 1K = 2GB of memory. We should discourage such usage. Let us limit the maximum data size to 512 bytes for bpf_global_percpu_ma allocation. Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031801.1290841-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
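In BPF C terms, this is roughly what the new limit allows and rejects (a hedged sketch using the bpf_percpu_obj_new() macro from the selftests' bpf_experimental.h; type names are made up):

    struct small_val { int cnt; };          /* well under 512 bytes: fine */
    struct big_val { char buf[1024]; };     /* over 512 bytes: now rejected */

    /* inside a BPF program: */
    struct small_val __percpu_kptr *ok = bpf_percpu_obj_new(struct small_val);
    struct big_val __percpu_kptr *bad = bpf_percpu_obj_new(struct big_val); /* verifier fails the load */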
-
Yonghong Song authored
Currently, refill low/high marks are set with the assumption of normal non-percpu memory allocation. For example, for an allocation size of 256, for non-percpu memory allocation, the low mark is 32 and the high mark is 96, resulting in a batch allocation of 48 elements, and the allocated memory will be 48 * 256 = 12KB for this particular cpu. Assuming a 128-cpu system, the total memory consumption across all cpus will be 12K * 128 = 1.5MB of memory. This might be okay for non-percpu allocation, but may not be good for percpu allocation, which will consume 1.5MB * 128 = 192MB of memory in the worst case if every cpu has a chance of memory allocation. In practice, percpu allocation is very rare compared to non-percpu allocation. So let us have smaller low/high marks which can avoid unnecessary memory consumption. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231222031755.1289671-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Typically for a percpu map element or data structure, once allocated, most operations are lookups or in-place updates. Deletions are really rare. Currently, for percpu data structures, 4 elements will be refilled if the size is <= 256. Let us just refill one element for percpu data. For example, for size 256 and 128 cpus, the potential saving will be 3 * 256 * 128 * 128 = 12MB. Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031750.1289290-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Commit 41a5db8d ("Add support for non-fix-size percpu mem allocation") added support for non-fix-size percpu memory allocation. Such allocation will allocate percpu memory for all buckets on all cpus and the memory consumption is in the order to quadratic. For example, let us say, 4 cpus, unit size 16 bytes, so each cpu has 16 * 4 = 64 bytes, with 4 cpus, total will be 64 * 4 = 256 bytes. Then let us say, 8 cpus with the same unit size, each cpu has 16 * 8 = 128 bytes, with 8 cpus, total will be 128 * 8 = 1024 bytes. So if the number of cpus doubles, the number of memory consumption will be 4 times. So for a system with large number of cpus, the memory consumption goes up quickly with quadratic order. For example, for 4KB percpu allocation, 128 cpus. The total memory consumption will 4KB * 128 * 128 = 64MB. Things will become worse if the number of cpus is bigger (e.g., 512, 1024, etc.) In Commit 41a5db8d, the non-fix-size percpu memory allocation is done in boot time, so for system with large number of cpus, the initial percpu memory consumption is very visible. For example, for 128 cpu system, the total percpu memory allocation will be at least (16 + 32 + 64 + 96 + 128 + 196 + 256 + 512 + 1024 + 2048 + 4096) * 128 * 128 = ~138MB. which is pretty big. It will be even bigger for larger number of cpus. Note that the current prefill also allocates 4 entries if the unit size is less than 256. So on top of 138MB memory consumption, this will add more consumption with 3 * (16 + 32 + 64 + 96 + 128 + 196 + 256) * 128 * 128 = ~38MB. Next patch will try to reduce this memory consumption. Later on, Commit 1fda5bb6 ("bpf: Do not allocate percpu memory at init stage") moved the non-fix-size percpu memory allocation to bpf verificaiton stage. Once a particular bpf_percpu_obj_new() is called by bpf program, the memory allocator will try to fill in the cache with all sizes, causing the same amount of percpu memory consumption as in the boot stage. To reduce the initial percpu memory consumption for non-fix-size percpu memory allocation, instead of filling the cache with all supported allocation sizes, this patch intends to fill the cache only for the requested size. As typically users will not use large percpu data structure, this can save memory significantly. For example, the allocation size is 64 bytes with 128 cpus. Then total percpu memory amount will be 64 * 128 * 128 = 1MB, much less than previous 138MB. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Hou Tao <houtao1@huawei.com> Link: https://lore.kernel.org/r/20231222031745.1289082-1-yonghong.song@linux.devSigned-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
The objcg is a bpf_mem_alloc level property since all bpf_mem_cache's share the same objcg. This patch makes such a property explicit. The next patch will use this property to save and restore the objcg for the percpu unit allocator. Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031739.1288590-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Yonghong Song authored
Currently, for percpu memory allocation, say the user requests an allocation size of 32 bytes; the actually calculated size will be 40 bytes, which further rounds up to 64 bytes, and eventually 64 bytes are allocated, wasting 32 bytes of memory. Change bpf_mem_alloc() to calculate the cache index based on the user-provided allocation size so unnecessary extra memory can be avoided. Suggested-by: Hou Tao <houtao1@huawei.com> Acked-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Link: https://lore.kernel.org/r/20231222031734.1288400-1-yonghong.song@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
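Conceptually, the cache index is now picked from the user-requested size rather than from the size after internal bookkeeping is added. With the unit sizes listed in the prefill discussion above, the selection looks roughly like this illustrative sketch (not the kernel code):

    /* bucket sizes used by bpf_mem_alloc */
    static const int sizes[] = {16, 32, 64, 96, 128, 196, 256, 512, 1024, 2048, 4096};

    static int size_to_bucket(int req)
    {
            for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                    if (req <= sizes[i])
                            return i;
            return -1;      /* larger than any cache bucket */
    }

    /* Before: a 32-byte percpu request grew to 40 bytes internally and landed
     * in the 64-byte bucket. After: indexing by the requested 32 bytes keeps
     * it in the 32-byte bucket.
     */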
-
Martin KaFai Lau authored
John Fastabend says: ==================== There was a memleak when streaming af_unix sockets were inserted into multiple sockmap slots and/or maps. This is because each insert would call a proto update operation, and these must be allowed to be called multiple times. The streaming af_unix implementation recently added a refcnt to handle a use-after-free issue; however, it introduced a memleak when a socket is inserted into multiple maps. This series fixes the memleak and adds a note in the code so we remember that proto updates need to support this. Then we add three tests for each of the slightly different iterations of adding sockets into multiple maps. I kept them as 3 independent test cases here. I have some slight preference for this; they could, however, be a single test, but then you don't get to run them independently, which was sort of useful while debugging. ==================== Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
John Fastabend authored
Add a test that replaces the same socket with itself. This exercises a corner case where the old element and the new element have the same psock. Test protocols: TCP, UDP, stream af_unix and dgram af_unix. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20231221232327.43678-6-john.fastabend@gmail.com
-
John Fastabend authored
Add test with multiple maps where each socket is inserted in multiple maps. Test protocols: TCP, UDP, stream af_unix and dgram af_unix. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20231221232327.43678-5-john.fastabend@gmail.com
-
John Fastabend authored
Add test with a single map where each socket is inserted multiple times. Test protocols: TCP, UDP, stream af_unix and dgram af_unix. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20231221232327.43678-4-john.fastabend@gmail.com
-
John Fastabend authored
Add a comment describing that the psock update proto callback can be called multiple times and this must be safe. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20231221232327.43678-3-john.fastabend@gmail.com
-
John Fastabend authored
When sockets are added to a sockmap or sockhash we allocate and init a psock, then update the proto ops with sock_map_init_proto(). The flow is:

  sock_hash_update_common
    sock_map_link
      psock = sock_map_psock_get_checked()  <- returns existing psock
      sock_map_init_proto(sk, psock)        <- updates sk_proto

If the socket is already in a map this results in sock_map_init_proto() being called multiple times on the same socket. We do this because when a socket is added to multiple maps this might result in a new set of BPF programs being attached to the socket, requiring an updated ops struct. This creates a rule where it must be safe to call psock_update_sk_prot multiple times. When we added a fix for a UAF through unix sockets in patch 4dd9a38a753fc we broke this rule by adding a sock_hold in that path to ensure the sock is not released. The result is that if an af_unix stream sock is placed in multiple maps it results in a memory leak, because we call sock_hold multiple times with only a single sock_put on it. Fixes: 8866730a ("bpf, sockmap: af_unix stream sockets need to hold ref for pair sock") Reported-by: Xingwei Lee <xrivendell7@gmail.com> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com> Link: https://lore.kernel.org/r/20231221232327.43678-2-john.fastabend@gmail.com
-
- 03 Jan, 2024 5 commits
-
-
Andrii Nakryiko authored
Alexei Starovoitov says: ==================== bpf: volatile compare From: Alexei Starovoitov <ast@kernel.org>

v2->v3: Debugged profiler.c regression. It was caused by basic block layout. Introduce bpf_cmp_likely() and bpf_cmp_unlikely() macros. Debugged redundant <<=32, >>=32 with u32 variables. Added cast workaround.

v1->v2: Fixed issues pointed out by Daniel, added more tests, attempted to convert profiler.c, but barrier_var() wins vs bpf_cmp(). To be investigated. Patches 1-4 are good to go, but 5 needs more work.

==================== Link: https://lore.kernel.org/r/20231226191148.48536-1-alexei.starovoitov@gmail.com Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
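For orientation, typical usage of the new macros in BPF C looks like the hedged sketch below (the macros come from the selftests' bpf_experimental.h and take the comparison operator as a literal token; MAX_LEN and the copy helper are made up for the example):

    #define MAX_LEN 64      /* hypothetical bound */

    static long copy_bounded(void *dst, const void *src, __u32 len)
    {
            /* error path expected to be cold; bpf_cmp_likely() would be the
             * fast-path flavor of the same check
             */
            if (bpf_cmp_unlikely(len, >, MAX_LEN))
                    return -1;

            /* the "volatile" compare survives into the generated code, so the
             * verifier sees len <= MAX_LEN here and the helper call is safe
             */
            return bpf_probe_read_kernel(dst, len, src);
    }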
-
Alexei Starovoitov authored
Convert profiler[123].c to "volatile compare" to compare the barrier_var() approach vs bpf_cmp_likely() vs bpf_cmp_unlikely().

bpf_cmp_unlikely() produces correct code, but takes much longer to verify:

./veristat -C -e prog,insns,states before after_with_unlikely
Program                                Insns (A)  Insns (B)  Insns (DIFF)         States (A)  States (B)  States (DIFF)
------------------------------------   ---------  ---------  ------------------   ----------  ----------  -----------------
kprobe__proc_sys_write                      1603      19606   +18003 (+1123.08%)         123        1678   +1555 (+1264.23%)
kprobe__vfs_link                           11815      70305   +58490 (+495.05%)          971        4967   +3996 (+411.53%)
kprobe__vfs_symlink                         5464      42896   +37432 (+685.07%)          434        3126   +2692 (+620.28%)
kprobe_ret__do_filp_open                    5641      44578   +38937 (+690.25%)          446        3162   +2716 (+608.97%)
raw_tracepoint__sched_process_exec          2770      35962   +33192 (+1198.27%)         226        3121   +2895 (+1280.97%)
raw_tracepoint__sched_process_exit          1526       2135   +609 (+39.91%)             133         208   +75 (+56.39%)
raw_tracepoint__sched_process_fork           265        337   +72 (+27.17%)               19          24   +5 (+26.32%)
tracepoint__syscalls__sys_enter_kill       18782     140407   +121625 (+647.56%)        1286       12176   +10890 (+846.81%)

bpf_cmp_likely() is equivalent to barrier_var():

./veristat -C -e prog,insns,states before after_with_likely
Program                                Insns (A)  Insns (B)  Insns (DIFF)     States (A)  States (B)  States (DIFF)
------------------------------------   ---------  ---------  --------------   ----------  ----------  -------------
kprobe__proc_sys_write                      1603       1663   +60 (+3.74%)           123         127   +4 (+3.25%)
kprobe__vfs_link                           11815      12090   +275 (+2.33%)          971         971   +0 (+0.00%)
kprobe__vfs_symlink                         5464       5448   -16 (-0.29%)           434         426   -8 (-1.84%)
kprobe_ret__do_filp_open                    5641       5739   +98 (+1.74%)           446         446   +0 (+0.00%)
raw_tracepoint__sched_process_exec          2770       2608   -162 (-5.85%)          226         216   -10 (-4.42%)
raw_tracepoint__sched_process_exit          1526       1526   +0 (+0.00%)            133         133   +0 (+0.00%)
raw_tracepoint__sched_process_fork           265        265   +0 (+0.00%)             19          19   +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill       18782      18970   +188 (+1.00%)         1286        1286   +0 (+0.00%)
kprobe__proc_sys_write                      2700       2809   +109 (+4.04%)          107         109   +2 (+1.87%)
kprobe__vfs_link                           12238      12366   +128 (+1.05%)          267         269   +2 (+0.75%)
kprobe__vfs_symlink                         7139       7365   +226 (+3.17%)          167         175   +8 (+4.79%)
kprobe_ret__do_filp_open                    7264       7070   -194 (-2.67%)          180         182   +2 (+1.11%)
raw_tracepoint__sched_process_exec          3768       3453   -315 (-8.36%)          211         199   -12 (-5.69%)
raw_tracepoint__sched_process_exit          3138       3138   +0 (+0.00%)             83          83   +0 (+0.00%)
raw_tracepoint__sched_process_fork           265        265   +0 (+0.00%)             19          19   +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill       26679      24327   -2352 (-8.82%)        1067        1037   -30 (-2.81%)
kprobe__proc_sys_write                      1833       1833   +0 (+0.00%)            157         157   +0 (+0.00%)
kprobe__vfs_link                            9995      10127   +132 (+1.32%)          803         803   +0 (+0.00%)
kprobe__vfs_symlink                         5606       5672   +66 (+1.18%)           451         451   +0 (+0.00%)
kprobe_ret__do_filp_open                    5716       5782   +66 (+1.15%)           462         462   +0 (+0.00%)
raw_tracepoint__sched_process_exec          3042       3042   +0 (+0.00%)            278         278   +0 (+0.00%)
raw_tracepoint__sched_process_exit          1680       1680   +0 (+0.00%)            146         146   +0 (+0.00%)
raw_tracepoint__sched_process_fork           299        299   +0 (+0.00%)             25          25   +0 (+0.00%)
tracepoint__syscalls__sys_enter_kill       18372      18372   +0 (+0.00%)           1558        1558   +0 (+0.00%)

default (mcpu=v3), no_alu32, cpuv4 have similar differences.

Note one place where bpf_nop_mov() is used to work around the verifier's lack of a link between the scalar register and its spill to the stack.

Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231226191148.48536-7-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The bpf_nop_mov(var) asm macro emits a nop register move: rX = rX. If 'var' is a scalar and not a fixed constant, the verifier will assign an ID to it. If it's later spilled, the stack slot will carry that ID as well. Hence the range-refining comparison "if rX < const" will update all copies, including the spilled slot. This macro is a temporary workaround until the verifier gets smarter. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20231226191148.48536-6-alexei.starovoitov@gmail.com
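The macro amounts to a do-nothing inline-asm move; a hedged sketch of how such a macro can be written (the exact definition in bpf_experimental.h may differ):

    /* Emits "rX = rX" for whatever register holds 'var'. Nothing changes at
     * run time, but the verifier assigns an ID to the scalar, so a later
     * "if rX < const" also refines spilled copies of it.
     */
    #define bpf_nop_mov(var) \
            asm volatile("%[reg] = %[reg]" : [reg] "+r"(var))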
-
Alexei Starovoitov authored
Since the last user was converted to bpf_cmp, remove bpf_assert_eq/ne/... macros. __bpf_assert_op() macro is kept for experiments, since it's slightly more efficient than bpf_assert(bpf_cmp_unlikely()) until LLVM is fixed. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20231226191148.48536-5-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Convert exceptions_assert.c to the bpf_cmp_unlikely() macro. Since

  bpf_assert(bpf_cmp_unlikely(var, ==, 100));
  other code;

will generate assembly code:

  if r1 == 100 goto L2;
  r0 = 0
  call bpf_throw
L1:
  other code;
  ...
L2: goto L1;

LLVM generates a redundant basic block with an extra goto. LLVM will be fixed eventually. Right now it's less efficient than the __bpf_assert(var, ==, 100) macro that produces:

  if r1 == 100 goto L1;
  r0 = 0
  call bpf_throw
L1:
  other code;

But the extra goto doesn't hurt the verification process. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20231226191148.48536-4-alexei.starovoitov@gmail.com
-