- 04 Mar, 2023 3 commits
-
-
Eduard Zingerman authored
Function verifier.c:convert_ctx_access() applies some rewrites to BPF instructions that read or write the BPF program context. This commit adds machinery to allow test cases that inspect the BPF program after these rewrites are applied.

An example of a test case:

  {
      // Shorthand for field offset and size specification
      N(CGROUP_SOCKOPT, struct bpf_sockopt, retval),
      // Pattern generated for field read
      .read  = "$dst = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
               "$dst = *(u64 *)($dst + task_struct::bpf_ctx);"
               "$dst = *(u32 *)($dst + bpf_cg_run_ctx::retval);",
      // Pattern generated for field write
      .write = "*(u64 *)($ctx + bpf_sockopt_kern::tmp_reg) = r9;"
               "r9 = *(u64 *)($ctx + bpf_sockopt_kern::current_task);"
               "r9 = *(u64 *)(r9 + task_struct::bpf_ctx);"
               "*(u32 *)(r9 + bpf_cg_run_ctx::retval) = $src;"
               "r9 = *(u64 *)($ctx + bpf_sockopt_kern::tmp_reg);",
  },

For each test case, up to three programs are created:
- one that uses BPF_LDX_MEM to read the context field;
- one that uses BPF_STX_MEM to write to the context field;
- one that uses BPF_ST_MEM to write to the context field.

The disassembly of each program is compared with the pattern specified in the test case. Kernel code for disassembly is reused (as is done in bpftool). To keep Makefile changes to a minimum, symbolic links to `kernel/bpf/disasm.c` and `kernel/bpf/disasm.h` are added.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230304011247.566040-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Check that the verifier tracks pointer types for BPF_ST_MEM instructions and reports an error if pointer types do not match across different execution branches.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230304011247.566040-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Eduard Zingerman authored
Lift the verifier restriction on using BPF_ST_MEM instructions to write to context data structures. This requires the following changes:
- verifier.c:do_check() for BPF_ST is updated to:
  - no longer forbid writes to registers of type PTR_TO_CTX;
  - track the dst_reg type in the env->insn_aux_data[...].ptr_type field (the same way it is done for BPF_STX and BPF_LDX instructions).
- verifier.c:convert_ctx_access() and various callbacks invoked by it are updated to handle the BPF_ST instruction alongside BPF_STX.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20230304011247.566040-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
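As a hedged illustration of the kind of store that becomes acceptable (not taken from the patch itself), a raw-instruction sketch using the BPF instruction macros from tools/include/linux/filter.h; the program type and context field are arbitrary choices:

  /* Sketch only: a BPF_ST (store-immediate) into a ctx field, which the
   * verifier previously rejected when the destination register was PTR_TO_CTX.
   */
  struct bpf_insn insns[] = {
          /* r1 = ctx (struct __sk_buff *) on entry to a tc program */
          /* *(u32 *)(r1 + offsetof(struct __sk_buff, mark)) = 42 */
          BPF_ST_MEM(BPF_W, BPF_REG_1, offsetof(struct __sk_buff, mark), 42),
          BPF_MOV64_IMM(BPF_REG_0, 0),
          BPF_EXIT_INSN(),
  };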
-
- 03 Mar, 2023 14 commits
-
-
Kumar Kartikeya Dwivedi authored
Martin suggested that instead of using a byte in the hole (which he has a use for in his future patch) in bpf_local_storage_elem, we can dispatch a different call_rcu callback based on whether we need to free special fields in bpf_local_storage_elem data. The free path, described in commit 9db44fdd ("bpf: Support kptrs in local storage maps"), only waits for call_rcu callbacks when there are special (kptrs, etc.) fields in the map value, hence it is necessary that we only access smap in this case.

Therefore, dispatch different RCU callbacks based on whether the BPF map has a valid btf_record, and dereference and use smap's btf_record only when it is valid.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230303141542.300068-1-memxor@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
-
Daniel Borkmann authored
Alexei Starovoitov says:

====================

v4->v5: fix typos, add acks.

v3->v4:
- patch 3 got much cleaner after BPF_KPTR_RCU was removed, as suggested by David.
- make KF_RCU stronger and require that the bpf program checks for NULL before passing such pointers into a kfunc. The prog has to do that anyway to access fields, and it aligns with the BTF_TYPE_SAFE_RCU allowlist.
- New patch 6: refactor RCU enforcement in the verifier. Patches 2, 3 and 6 are part of one feature. Patches 2 and 3 alone are incomplete, since RCU pointers are barely useful without bpf_rcu_read_lock/unlock in GCC compiled kernels. Even if GCC lands support for btf_type_tag today, it will take time to mandate that version for kernel builds. Hence go with the allowlist approach. See patch 6 for details. This allows starting strict enforcement of TRUSTED | UNTRUSTED in one part of PTR_TO_BTF_ID accesses. One step closer to KF_TRUSTED_ARGS by default.

v2->v3:
- Instead of requiring bpf progs to tag fields with __kptr_rcu, teach the verifier to infer RCU properties based on the type. BPF_KPTR_RCU becomes a kernel-internal type of struct btf_field.
- Add patch 2 to tag cgroups and dfl_cgrp as trusted. That bug was spotted by BPF CI on clang compiled kernels, since patch 3 is doing:

    static bool in_rcu_cs(struct bpf_verifier_env *env)
    {
            return env->cur_state->active_rcu_lock || !env->prog->aux->sleepable;
    }

  which makes all non-sleepable programs behave like they have an implicit rcu_read_lock around them. Which is the case in practice. It was fine on gcc compiled kernels where a task->cgroup dereference was producing PTR_TO_BTF_ID, but on clang compiled kernels the task->cgroup dereference was producing PTR_TO_BTF_ID | MEM_RCU | MAYBE_NULL, which is more correct, but selftests were failing. Patch 2 fixes this discrepancy. With a few more patches like patch 2 we can make KF_TRUSTED_ARGS the default for kfuncs and helpers.
- Add a comment in selftest patch 5 that it's a verifier-only check.

v1->v2:
Instead of aggressively allowing dereferenced kptr_rcu pointers into KF_TRUSTED_ARGS kfuncs, only allow them into KF_RCU funcs. The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. Kfuncs marked with KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees that the objects are valid and there is no use-after-free, but the pointers may be NULL and the pointee object's reference count could have reached zero, hence kfuncs must do a != NULL check and consider the refcnt==0 case when accessing such arguments. No changes in patch 1. Patches 2, 3, 4 adjusted with the above behavior.

v1:
The __kptr_ref turned out to be too limited, since any "trusted" pointer access requires bpf_kptr_xchg(), which is impractical when the same pointer needs to be dereferenced by multiple cpus. The __kptr "untrusted"-only access isn't very useful in practice. Rename __kptr to __kptr_untrusted with the eventual goal to deprecate it, and rename __kptr_ref to __kptr, since that looks to be the more common use of kptrs. Introduce __kptr_rcu that can be directly dereferenced and used similarly to native kernel C code. Once bpf_cpumask and task_struct kfuncs are converted to observe the RCU GP when refcnt goes to zero, both __kptr and __kptr_untrusted can be deprecated and __kptr_rcu can become the only __kptr tag.

====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Alexei Starovoitov authored
bpf_rcu_read_lock/unlock() are only available in clang compiled kernels. The lack of such a key mechanism makes it impossible for sleepable bpf programs to use RCU pointers.

Allow bpf_rcu_read_lock/unlock() in GCC compiled kernels (though GCC doesn't support btf_type_tag yet) and allowlist certain field dereferences in important data structures like task_struct, cgroup and socket that are used by sleepable programs either as RCU pointers or full trusted pointers (which are valid outside of an RCU CS). Use the BTF_TYPE_SAFE_RCU and BTF_TYPE_SAFE_TRUSTED macros for such tagging. They will be removed once GCC supports btf_type_tag.

With that, refactor check_ptr_to_btf_access(). Make it strict in enforcing PTR_TRUSTED and PTR_UNTRUSTED while deprecating old PTR_TO_BTF_ID without modifier flags. There is a chance that this strict enforcement might break existing programs (especially on GCC compiled kernels), but this cleanup has to start sooner rather than later. Note that PTR_TO_CTX access still yields the old deprecated PTR_TO_BTF_ID. Once it's converted to strict PTR_TRUSTED or PTR_UNTRUSTED, the kfuncs and helpers will be able to default to KF_TRUSTED_ARGS. KF_RCU will remain as a weaker version of KF_TRUSTED_ARGS where the object refcnt could be 0.

Adjust the rcu_read_lock selftest to run on gcc and clang compiled kernels.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230303041446.3630-7-alexei.starovoitov@gmail.com
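A minimal sketch of the allowlist shape the commit describes, assuming the usual verifier.c macro style; the struct and field below are illustrative rather than a copy of the actual lists added by the patch:

  /* Illustrative only: BTF_TYPE_SAFE_RCU declares a struct whose listed
   * fields the verifier may dereference as MEM_RCU pointers;
   * BTF_TYPE_SAFE_TRUSTED does the same for PTR_TRUSTED results.
   */
  BTF_TYPE_SAFE_RCU(struct task_struct) {
          const cpumask_t *cpus_ptr;
  };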
-
Alexei Starovoitov authored
Adjust cgroup kfunc test to dereference RCU protected cgroup pointer as PTR_TRUSTED and pass into KF_TRUSTED_ARGS kfunc. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/bpf/20230303041446.3630-6-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
Tweak existing map_kptr test to check kptr_rcu. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/bpf/20230303041446.3630-5-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
The lifetime of certain kernel structures like 'struct cgroup' is protected by RCU. Hence it's safe to dereference them directly from __kptr tagged pointers in bpf maps. The resulting pointer is MEM_RCU and can be passed to kfuncs that expect KF_RCU. Dereference of other kptrs returns PTR_UNTRUSTED.

For example:

  struct map_value {
      struct cgroup __kptr *cgrp;
  };

  SEC("tp_btf/cgroup_mkdir")
  int BPF_PROG(test_cgrp_get_ancestors, struct cgroup *cgrp_arg, const char *path)
  {
      struct cgroup *cg, *cg2;

      cg = bpf_cgroup_acquire(cgrp_arg); // cg is PTR_TRUSTED and ref_obj_id > 0
      bpf_kptr_xchg(&v->cgrp, cg);

      cg2 = v->cgrp; // This is a new feature introduced by this patch.
                     // cg2 is PTR_MAYBE_NULL | MEM_RCU.
                     // When cg2 != NULL, it's a valid cgroup, but its percpu_ref could be zero.
      if (cg2)
          bpf_cgroup_ancestor(cg2, level); // safe to do.
  }

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230303041446.3630-4-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
bpf programs sometimes do:

  bpf_cgrp_storage_get(&map, task->cgroups->dfl_cgrp, ...);

It is safe to do, because the cgroups->dfl_cgrp pointer is set during init and never changes. The task->cgroups pointer is also never NULL. It is also set during init and will change when the task switches cgroups. For any trusted task pointer, dereference of cgroups and dfl_cgrp should yield trusted pointers.

The verifier wasn't aware of this. Hence in gcc compiled kernels a task->cgroups dereference was producing PTR_TO_BTF_ID without modifiers, while in clang compiled kernels the verifier recognizes the __rcu tag in the cgroups field and produces PTR_TO_BTF_ID | MEM_RCU | MAYBE_NULL.

Tag cgroups and dfl_cgrp as trusted to equalize clang and gcc behavior. When GCC supports btf_type_tag, such tagging will be done directly in the type.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David Vernet <void@manifault.com>
Acked-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/bpf/20230303041446.3630-3-alexei.starovoitov@gmail.com
-
Alexei Starovoitov authored
__kptr was meant to store PTR_UNTRUSTED kernel pointers inside bpf maps. The concept felt useful, but didn't get much traction, since bpf_rdonly_cast() was added soon after and bpf programs received a simpler way to access PTR_UNTRUSTED kernel pointers without going through restrictive __kptr usage.

Rename __kptr_ref -> __kptr and __kptr -> __kptr_untrusted to indicate the intended usage. The main goal of __kptr_untrusted was to read/write such pointers directly, while bpf_kptr_xchg was the mechanism to access refcounted kernel pointers. The next patch will allow RCU protected __kptr access with direct read. At that point __kptr_untrusted will be deprecated.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/bpf/20230303041446.3630-2-alexei.starovoitov@gmail.com
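A hedged sketch of what the rename means for a map value definition; the kernel type used here is only a placeholder:

  struct foo;    /* placeholder kernel type */

  struct map_value {
          /* old __kptr: direct read/write of an untrusted pointer, now spelled: */
          struct foo __kptr_untrusted *untrusted_ptr;
          /* old __kptr_ref: refcounted pointer accessed via bpf_kptr_xchg(), now simply: */
          struct foo __kptr *ref_ptr;
  };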
-
Tero Kristo authored
Add a test for the absolute BPF timer under the existing timer tests. This will run the timer two times with a 1us expiration time, and then re-arm the timer at ~35s in the future. At the end, it is verified that the absolute timer expired exactly two times.

Signed-off-by: Tero Kristo <tero.kristo@linux.intel.com>
Link: https://lore.kernel.org/r/20230302114614.2985072-3-tero.kristo@linux.intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Tero Kristo authored
Add a new flag, BPF_F_TIMER_ABS, that can be passed to bpf_timer_start() to start an absolute value timer instead of the default relative value. This makes the timer expire at an exact point in time, instead of a time with latencies induced by both the BPF and timer subsystems.

Suggested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Tero Kristo <tero.kristo@linux.intel.com>
Link: https://lore.kernel.org/r/20230302114614.2985072-2-tero.kristo@linux.intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
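A hedged usage sketch, assuming the standard bpf_timer helper flow; the map layout, clock choice and 35-second value are illustrative:

  #define CLOCK_BOOTTIME 7    /* may need a local define in BPF objects */

  struct elem {
          struct bpf_timer timer;
  };

  /* illustrative map holding the timer */
  struct {
          __uint(type, BPF_MAP_TYPE_ARRAY);
          __uint(max_entries, 1);
          __type(key, int);
          __type(value, struct elem);
  } timer_map SEC(".maps");

  static int timer_cb(void *map, int *key, struct bpf_timer *timer)
  {
          return 0;
  }

  /* inside a program, with val pointing into timer_map: */
  bpf_timer_init(&val->timer, &timer_map, CLOCK_BOOTTIME);
  bpf_timer_set_callback(&val->timer, timer_cb);
  /* expire at an absolute point ~35s in the future on the timer's clock,
   * instead of 35s relative to "now" as with the default flags */
  bpf_timer_start(&val->timer, bpf_ktime_get_boot_ns() + 35ULL * 1000000000ULL,
                  BPF_F_TIMER_ABS);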
-
Dave Marchevsky authored
Per the C99 standard [0], Section 6.7.8, Paragraph 10:

  If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate.

And in the same document, in appendix "J.2 Undefined behavior":

  The behavior is undefined in the following circumstances: [...] The value of an object with automatic storage duration is used while it is indeterminate (6.2.4, 6.7.8, 6.8).

This means that use of an uninitialized stack variable is undefined behavior, and therefore that clang can choose to do a variety of scary things, such as not generating bytecode for "bunch of useful code" in the example below:

  void some_func()
  {
      int i;

      if (!i)
          return;
      // bunch of useful code
  }

To add insult to injury, if some_func above is a helper function for some BPF program, clang can choose to not generate an "exit" insn, causing the verifier to fail with "last insn is not an exit or jmp". Going from that verification failure to the root cause of the uninitialized use is certain to be frustrating.

This patch adds -Wuninitialized to the cflags for selftest BPF progs and fixes up existing instances of uninitialized use.

[0]: https://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Cc: David Vernet <void@manifault.com>
Cc: Tejun Heo <tj@kernel.org>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230303005500.1614874-1-davemarchevsky@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Tejun Heo authored
These helpers are safe to call from any context and there's no reason to restrict access to them. Remove them from the bpf_trace and filter lists and add them to bpf_base_func_proto() under perfmon_capable().

v2: After consulting with Andrii, relocated in bpf_base_func_proto() so that they require bpf_capable() but not perfmon_capable(), as this functionality doesn't read from or affect others on the system.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/ZAD8QyoszMZiTzBY@slm.duckdns.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
maps.rst in the BPF documentation links to the /userspace-api/ebpf/syscall document (Documentation/userspace-api/ebpf/syscall.rst). For some reason, if you try to reference the document with :doc:, the docs build emits the following warning:

  ./Documentation/bpf/maps.rst:13: WARNING: \
      unknown document: '/userspace-api/ebpf/syscall'

It appears that other places in the docs tree also don't support using :doc:. Elsewhere in the BPF documentation, we just reference the kernel docs page directly. Let's do that here to clean up the last remaining noise in the docs build.

Signed-off-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230302183918.54190-2-void@manifault.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
The BPF devel Q&A documentation page makes frequent reference to the netdev-QA page via the netdev-FAQ rst link. This link is currently broken, as is evidenced by the build output when making BPF docs:

  ./Documentation/bpf/bpf_devel_QA.rst:150: WARNING: undefined label: 'netdev-faq'
  ./Documentation/bpf/bpf_devel_QA.rst:206: WARNING: undefined label: 'netdev-faq'
  ./Documentation/bpf/bpf_devel_QA.rst:231: WARNING: undefined label: 'netdev-faq'
  ./Documentation/bpf/bpf_devel_QA.rst:396: WARNING: undefined label: 'netdev-faq'
  ./Documentation/bpf/bpf_devel_QA.rst:412: WARNING: undefined label: 'netdev-faq'

Fix the links to point to the actual netdev-faq page.

Signed-off-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230302183918.54190-1-void@manifault.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 02 Mar, 2023 7 commits
-
-
Joanne Koong authored
Change bpf_dynptr_slice and bpf_dynptr_slice_rdwr to return NULL instead of 0, in accordance with the codebase guidelines. Fixes: 66e3a13e ("bpf: Add bpf_dynptr_slice and bpf_dynptr_slice_rdwr") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Joanne Koong <joannelkoong@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230302053014.1726219-1-joannelkoong@gmail.com
-
Andrii Nakryiko authored
Daniel Müller says:

====================

On Android, APKs (android packages; zip packages with somewhat prescriptive contents) are first class citizens in the system: the shared objects contained in them don't exist in unpacked form on the file system. Rather, they are mmaped directly from within the archive, and the archive is also what the kernel is aware of.

For users, that complicates the process of attaching a uprobe to a function contained in a shared object in one such APK: they'd have to find the byte offset of said function from the beginning of the archive. That is cumbersome to do manually and can be fragile, because various changes could invalidate said offset. That is why for uprobes inside ELF files (not inside an APK), commit d112c9ce249b ("libbpf: Support function name-based attach uprobes") added support for attaching to symbols by name. On Android, that mechanism currently does not work, because this logic is not APK aware.

This patch set introduces first class support for attaching uprobes to functions inside ELF objects contained in APKs via function names. We add support for recognizing the following syntax for a binary path:

  <archive>!/<binary-in-archive>
  (e.g., /system/app/test-app.apk!/lib/arm64-v8a/libc++.so)

This syntax is common in the Android ecosystem and used by tools such as simpleperf. It is also what is being proposed for bcc [0].

If the user provides such a binary path, we find <binary-in-archive> (lib/arm64-v8a/libc++.so in the example) inside of <archive> (/system/app/test-app.apk). We perform the regular ELF offset search inside the binary and add that to the offset within the archive itself, to retrieve the offset at which to attach the uprobe.

[0] https://github.com/iovisor/bcc/pull/4440

Changelog
---------
v3->v4:
- use ERR_PTR instead of libbpf_err_ptr() in zip_archive_open()
- eliminated err variable from elf_find_func_offset_from_archive()

v2->v3:
- adjusted zip_archive_open() to report errno
- fixed provided libbpf_strlcpy() buffer size argument
- adjusted find_cd() to handle errors better
- use fewer local variables in get_entry_at_offset()

v1->v2:
- removed unaligned_* types
- switched to using __u32 and __u16
- switched to using errno constants instead of hard-coded negative values
- added another pr_debug() message
- shortened central_directory_* to cd_*
- inlined cd_file_header_at_offset() function
- bunch of syntactical changes

====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Daniel Müller authored
This change adds support for attaching uprobes to shared objects located in APKs, which is relevant for Android systems where various libraries may reside in APKs. To make that happen, we extend the syntax for the "binary path" argument to attach to with that supported by various Android tools: <archive>!/<binary-in-archive> For example: /system/app/test-app/test-app.apk!/lib/arm64-v8a/libc++_shared.so APKs need to be specified via full path, i.e., we do not attempt to resolve mere file names by searching system directories. We cannot currently test this functionality end-to-end in an automated fashion, because it relies on an Android system being present, but there is no support for that in CI. I have tested the functionality manually, by creating a libbpf program containing a uretprobe, attaching it to a function inside a shared object inside an APK, and verifying the sanity of the returned values. Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230301212308.1839139-4-deso@posteo.net
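A hedged libbpf usage sketch, assuming attachment goes through the existing bpf_program__attach_uprobe_opts() API with a function name; the APK path and symbol below are illustrative:

  LIBBPF_OPTS(bpf_uprobe_opts, opts,
              .func_name = "some_function",   /* symbol inside the .so, illustrative */
              .retprobe = false);

  struct bpf_link *link;

  link = bpf_program__attach_uprobe_opts(prog, -1 /* any pid */,
              "/system/app/test-app/test-app.apk!/lib/arm64-v8a/libc++_shared.so",
              0 /* offset; resolved from func_name */, &opts);
  if (!link) {
          /* handle error; errno is set by libbpf */
  }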
-
Daniel Müller authored
This change splits the elf_find_func_offset() function in two: elf_find_func_offset(), which now accepts an already opened Elf object instead of a path to a file that is to be opened, as well as elf_find_func_offset_from_file(), which opens a binary based on a path and then invokes elf_find_func_offset() on the Elf object. Having this split in responsibilities will allow us to call elf_find_func_offset() from other code paths on Elf objects that did not necessarily come from a file on disk. Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20230301212308.1839139-3-deso@posteo.net
-
Daniel Müller authored
This change implements support for reading zip archives, including opening an archive, finding an entry based on its path and name in it, and closing it. The code was copied from https://github.com/iovisor/bcc/pull/4440, which implements similar functionality for bcc. The author confirmed that he is fine with this usage and the corresponding relicensing. I adjusted it to adhere to libbpf coding standards. Signed-off-by: Daniel Müller <deso@posteo.net> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Michał Gregorczyk <michalgr@meta.com> Link: https://lore.kernel.org/bpf/20230301212308.1839139-2-deso@posteo.net
-
David Vernet authored
In commit d96d937d ("bpf: Add __uninit kfunc annotation"), the __uninit kfunc annotation was documented in kfuncs.rst. You have to fully underline a section in rst, or the build will issue a warning that the title underline is too short:

  ./Documentation/bpf/kfuncs.rst:104: WARNING: Title underline too short.

  2.2.2 __uninit Annotation
  --------------------

This patch fixes that title underline.

Fixes: d96d937d ("bpf: Add __uninit kfunc annotation")
Signed-off-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230301194910.602738-2-void@manifault.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
David Vernet authored
In commit 66e3a13e ("bpf: Add bpf_dynptr_slice and bpf_dynptr_slice_rdwr"), the bpf_dynptr_slice() and bpf_dynptr_slice_rdwr() kfuncs were added to BPF. These kfuncs included doxygen headers, but unfortunately those headers are not properly formatted according to [0], and cause the following warnings during the docs build:

  ./kernel/bpf/helpers.c:2225: warning: \
      Excess function parameter 'returns' description in 'bpf_dynptr_slice'
  ./kernel/bpf/helpers.c:2303: warning: \
      Excess function parameter 'returns' description in 'bpf_dynptr_slice_rdwr'
  ...

This patch fixes those doxygen comments.

[0]: https://docs.kernel.org/doc-guide/kernel-doc.html#function-documentation

Fixes: 66e3a13e ("bpf: Add bpf_dynptr_slice and bpf_dynptr_slice_rdwr")
Signed-off-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20230301194910.602738-1-void@manifault.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 01 Mar, 2023 16 commits
-
-
Andrii Nakryiko authored
Eduard Zingerman says:

====================

This patch allows specifying program flags and multiple verifier log messages for the test_loader kind of tests. For example, in tools/testing/selftests/bpf/progs/foobar.c:

  SEC("tc")
  __success
  __log_level(7)
  __msg("first message")
  __msg("next message")
  __flag(BPF_F_ANY_ALIGNMENT)
  int buz(struct __sk_buff *skb)
  { ... }

It was developed by Andrii Nakryiko ([1]). I reused it in a "test_verifier tests migration to inline assembly" patch series ([2]), but that series is currently stuck on my side. Andrii asked to spin this particular patch out separately ([3]).

[1] https://lore.kernel.org/bpf/CAEf4BzZH0ZxorCi7nPDbRqSK9f+410RooNwNJGwfw8=0a5i1nw@mail.gmail.com/
[2] https://lore.kernel.org/bpf/20230123145148.2791939-1-eddyz87@gmail.com/
[3] https://lore.kernel.org/bpf/20230123145148.2791939-1-eddyz87@gmail.com/T/#m52e806c5a679a2aa8f484d011be7ec105939127a

====================

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Andrii Nakryiko authored
Extend the __flag attribute by allowing it to specify one of the following:
* BPF_F_STRICT_ALIGNMENT
* BPF_F_ANY_ALIGNMENT
* BPF_F_TEST_RND_HI32
* BPF_F_TEST_STATE_FREQ
* BPF_F_SLEEPABLE
* BPF_F_XDP_HAS_FRAGS
* some numeric value

Extend the __msg attribute by allowing it to specify multiple expected messages. All messages are expected to be present in the verifier log in the order of application.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230301175417.3146070-2-eddyz87@gmail.com
[ Eduard: added commit message, formatting, comments ]
-
Andrii Nakryiko authored
Viktor Malik says: ==================== Fixing several issues reported by Coverity and Clang Static Analyzer (scan-build) for libbpf, mostly removing unnecessary symbols and assignments. No functional changes should be introduced. ==================== Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
-
Viktor Malik authored
Clang Static Analyzer (scan-build) reports some unused symbols and dead assignments in the linker_append_elf_relos() function. Clean these up.

Signed-off-by: Viktor Malik <vmalik@redhat.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/c5c8fe9f411b69afada8399d23bb048ef2a70535.1677658777.git.vmalik@redhat.com
-
Viktor Malik authored
Clang Static Analyzer (scan-build) reports several dead assignments in libbpf where the assigned value is unconditionally overridden by another value before it is read. Remove these assignments. Signed-off-by: Viktor Malik <vmalik@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/5503d18966583e55158471ebbb2f67374b11bf5e.1677658777.git.vmalik@redhat.com
-
Viktor Malik authored
Coverity reports that the first check of 'err' in bpf_object__init_maps is always false as 'err' is initialized to 0 at that point. Remove the unnecessary ternary operator. Signed-off-by: Viktor Malik <vmalik@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/78a3702f2ea9f32a84faaae9b674c56269d330a7.1677658777.git.vmalik@redhat.com
-
Tiezhu Yang authored
If the target is bpf, there is no __loongarch__ definition, __BITS_PER_LONG defaults to 32, and __NR_nanosleep is not defined:

  #if defined(__ARCH_WANT_TIME32_SYSCALLS) || __BITS_PER_LONG != 32
  #define __NR_nanosleep 101
  __SC_3264(__NR_nanosleep, sys_nanosleep_time32, sys_nanosleep)
  #endif

Work around this problem by explicitly setting __BITS_PER_LONG to __loongarch_grlen, which is defined by the compiler as 64 for LA64. This is similar to commit 36e70b9b ("selftests, bpf: Fix broken riscv build").

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1677585781-21628-1-git-send-email-yangtiezhu@loongson.cn
-
Alexei Starovoitov authored
Kumar Kartikeya Dwivedi says:

====================

This set adds support for kptrs in percpu hashmaps, percpu LRU hashmaps, and local storage maps (covering sk, cgrp, task, inode). Tests are expanded to test more existing maps at runtime and also test the code path for the local storage maps (which is shared by all implementations).

A question for reviewers is what the position of the BPF runtime should be on dealing with reference cycles that can be created by BPF programs at runtime using this additional support. For instance, one can store the kptr of a task in its own task local storage, creating a cycle which prevents destruction of the task local storage. Cycles can be formed using arbitrarily long kptr ownership chains. Therefore, just preventing storage of such kptrs in some maps is not a sufficient solution, and is more likely to hurt usability.

There is precedent in existing runtimes which promise memory safety, like Rust, where reference cycles and memory leaks are permitted. However, traditionally the safety guarantees of BPF have been stronger. Thus, more discussion and thought is invited on this topic to ensure we cover all usage aspects.

Changelog:
----------
v2 -> v3
v2: https://lore.kernel.org/bpf/20230221200646.2500777-1-memxor@gmail.com/
* Fix a use-after-free bug in local storage patch
* Fix selftest for aarch64 (don't use fentry/fmod_ret)
* Wait for RCU Tasks Trace GP along with RCU GP in selftest

v1 -> v2
v1: https://lore.kernel.org/bpf/20230219155249.1755998-1-memxor@gmail.com
* Simplify selftests, fix a couple of bugs

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kumar Kartikeya Dwivedi authored
Firstly, ensure programs successfully load when using all of the supported maps. Then, extend existing tests to test more cases at runtime. We are currently testing both the synchronous freeing of items and asynchronous destruction when the map is freed, but the code needs to be adjusted a bit to be able to also accommodate support for percpu maps.

We now do a delete on the item (and an update for array maps, which has a similar effect for kptrs) to perform a synchronous free of the kptr, and test destruction both for the synchronous and the asynchronous deletion. The next time the program runs, it should observe the refcount as 1 since all existing references should have been released by then. By running the program after both possible paths freeing kptrs, we establish that they correctly release resources.

Next, we augment the existing test to also test the same code path shared by all local storage maps using a task local storage map.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230225154010.391965-4-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kumar Kartikeya Dwivedi authored
Enable support for kptrs in local storage maps by wiring up the freeing of these kptrs from the map value. Freeing of bpf_local_storage_map is only delayed in case there are special fields, therefore the bpf_selem_free_* path can also only dereference smap safely in that case. This is recorded using a bool utilizing a hole in bpf_local_storage_elem. It could have been tagged in the pointer value smap using the lowest bit (since alignment > 1), but since there was already a hole I went with the simpler option.

Only the map structure freeing is delayed using RCU barriers, as the buckets aren't used when the selem is being freed, so they can be freed once all readers of the bucket lists can no longer access them.

Cc: Martin KaFai Lau <martin.lau@kernel.org>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230225154010.391965-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Kumar Kartikeya Dwivedi authored
Enable support for kptrs in percpu BPF hashmap and percpu BPF LRU hashmap by wiring up the freeing of these kptrs from percpu map elements.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20230225154010.391965-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
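For illustration, a hedged sketch of a percpu hash map whose value embeds a kptr; the kernel type stored here is only a placeholder:

  struct foo;    /* placeholder kernel type stored as a kptr */

  struct map_value {
          struct foo __kptr *ptr;
  };

  struct {
          __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
          __uint(max_entries, 16);
          __type(key, int);
          __type(value, struct map_value);
  } percpu_hash SEC(".maps");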
-
Alexei Starovoitov authored
Joanne Koong says:

====================

This patchset is the 2nd in the dynptr series. The 1st can be found here [0].

This patchset adds skb and xdp type dynptrs, which have two main benefits for packet parsing:
* allowing operations on sizes that are not statically known at compile-time (eg variable-sized accesses).
* more ergonomic and less brittle iteration through data (eg does not need manual if checking for being within bounds of data_end)

When comparing the differences in runtime for packet parsing without dynptrs vs. with dynptrs, there is no noticeable difference. Patch 9 contains more details as well as examples of how to use skb and xdp dynptrs.

[0] https://lore.kernel.org/bpf/20220523210712.3641569-1-joannelkoong@gmail.com/

---
Changelog:

v12 = https://lore.kernel.org/bpf/20230226085120.3907863-1-joannelkoong@gmail.com/
v12 -> v13:
* Fix missing { } for case statement

v11 = https://lore.kernel.org/bpf/20230222060747.2562549-1-joannelkoong@gmail.com/
v11 -> v12:
* Change constant mem size checking to use "__szk" kfunc annotation for slices
* Use autoloading for success selftests

v10 = https://lore.kernel.org/bpf/20230216225524.1192789-1-joannelkoong@gmail.com/
v10 -> v11:
* Reject bpf_dynptr_slice_rdwr() for non-writable progs at load time instead of runtime
* Add additional patch (__uninit kfunc annotation)
* Expand on documentation
* Add bpf_dynptr_write() calls for persisting writes in tests

v9 = https://lore.kernel.org/bpf/20230127191703.3864860-1-joannelkoong@gmail.com/
v9 -> v10:
* Add bpf_dynptr_slice and bpf_dynptr_slice_rdwr interface
* Add some more tests
* Split up patchset into more parts to make it easier to review

v8 = https://lore.kernel.org/bpf/20230126233439.3739120-1-joannelkoong@gmail.com/
v8 -> v9:
* Fix dynptr_get_type() to check non-stack dynptrs

v7 = https://lore.kernel.org/bpf/20221021011510.1890852-1-joannelkoong@gmail.com/
v7 -> v8:
* Change helpers to kfuncs
* Add 2 new patches (1/5 and 2/5)

v6 = https://lore.kernel.org/bpf/20220907183129.745846-1-joannelkoong@gmail.com/
v6 -> v7:
* Change bpf_dynptr_data() to return read-only data slices if the skb prog is read-only (Martin)
* Add test "skb_invalid_write" to test that writes to rd-only data slices are rejected

v5 = https://lore.kernel.org/bpf/20220831183224.3754305-1-joannelkoong@gmail.com/
v5 -> v6:
* Address kernel test robot errors by static inlining

v4 = https://lore.kernel.org/bpf/20220822235649.2218031-1-joannelkoong@gmail.com/
v4 -> v5:
* Address kernel test robot errors for configs w/out CONFIG_NET set
* For data slices, return PTR_TO_MEM instead of PTR_TO_PACKET (Kumar)
* Split selftests into subtests (Andrii)
* Remove insn patching. Use rdonly and rdwr protos for dynptr skb construction (Andrii)
* bpf_dynptr_data() returns NULL for rd-only dynptrs. There will be a separate bpf_dynptr_data_rdonly() added later (Andrii and Kumar)

v3 = https://lore.kernel.org/bpf/20220822193442.657638-1-joannelkoong@gmail.com/
v3 -> v4:
* Forgot to commit --amend the kernel test robot error fixups

v2 = https://lore.kernel.org/bpf/20220811230501.2632393-1-joannelkoong@gmail.com/
v2 -> v3:
* Fix kernel test robot build test errors

v1 = https://lore.kernel.org/bpf/20220726184706.954822-1-joannelkoong@gmail.com/
v1 -> v2:
* Return data slices to rd-only skb dynptrs (Martin)
* bpf_dynptr_write allows writes to frags for skb dynptrs, but always invalidates associated data slices (Martin)
* Use switch casing instead of ifs (Andrii)
* Use 0xFD for experimental kind number in the selftest (Zvi)
* Put selftest conversions w/ dynptrs into new files (Alexei)
* Add new selftest "test_cls_redirect_dynptr.c"

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Joanne Koong authored
Test skb and xdp dynptr functionality in the following ways:

1) progs/test_cls_redirect_dynptr.c
   * Rewrite "progs/test_cls_redirect.c" test to use dynptrs to parse skb data
   * This is a great example of how dynptrs can be used to simplify a lot of the parsing logic for non-statically known values.
     When measuring the user + system time between the original version vs. using dynptrs, and averaging the time for 10 runs (using "time ./test_progs -t cls_redirect"):
       original version: 0.092 sec
       with dynptrs:     0.078 sec

2) progs/test_xdp_dynptr.c
   * Rewrite "progs/test_xdp.c" test to use dynptrs to parse xdp data
     When measuring the user + system time between the original version vs. using dynptrs, and averaging the time for 10 runs (using "time ./test_progs -t xdp_attach"):
       original version: 0.118 sec
       with dynptrs:     0.094 sec

3) progs/test_l4lb_noinline_dynptr.c
   * Rewrite "progs/test_l4lb_noinline.c" test to use dynptrs to parse skb data
     When measuring the user + system time between the original version vs. using dynptrs, and averaging the time for 10 runs (using "time ./test_progs -t l4lb_all"):
       original version: 0.062 sec
       with dynptrs:     0.081 sec
     For number of processed verifier instructions:
       original version: 6268 insns
       with dynptrs:     2588 insns

4) progs/test_parse_tcp_hdr_opt_dynptr.c
   * Add sample code for parsing tcp hdr opt lookup using dynptrs. This logic is lifted from a real-world use case of packet parsing in katran [0], a layer 4 load balancer. The original version "progs/test_parse_tcp_hdr_opt.c" (not using dynptrs) is included here as well, for comparison.
     When measuring the user + system time between the original version vs. using dynptrs, and averaging the time for 10 runs (using "time ./test_progs -t parse_tcp_hdr_opt"):
       original version: 0.031 sec
       with dynptrs:     0.045 sec

5) progs/dynptr_success.c
   * Add test case "test_skb_readonly" for testing attempts at writes on a prog type with read-only skb ctx.
   * Add "test_dynptr_skb_data" for testing that bpf_dynptr_data isn't supported for skb progs.

6) progs/dynptr_fail.c
   * Add test cases "skb_invalid_data_slice{1,2,3,4}" and "xdp_invalid_data_slice{1,2}" for testing that helpers that modify the underlying packet buffer automatically invalidate the associated data slice.
   * Add test cases "skb_invalid_ctx" and "xdp_invalid_ctx" for testing that prog types that do not support bpf_dynptr_from_skb/xdp don't have access to the API.
   * Add test case "dynptr_slice_var_len{1,2}" for testing that variable-sized len can't be passed in to bpf_dynptr_slice
   * Add test case "skb_invalid_slice_write" for testing that writes to a read-only data slice are rejected by the verifier.
   * Add test case "data_slice_out_of_bounds_skb" for testing that writes to an area outside the slice are rejected.
   * Add test case "invalid_slice_rdwr_rdonly" for testing that prog types that don't allow writes to packet data don't accept any calls to bpf_dynptr_slice_rdwr.

[0] https://github.com/facebookincubator/katran/blob/main/katran/lib/bpf/pckt_parsing.h

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230301154953.641654-11-joannelkoong@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Joanne Koong authored
Two new kfuncs are added, bpf_dynptr_slice and bpf_dynptr_slice_rdwr. The user must pass in a buffer to store the contents of the data slice if a direct pointer to the data cannot be obtained.

For skb and xdp type dynptrs, these two APIs are the only way to obtain a data slice. However, for other types of dynptrs, there is no difference between bpf_dynptr_slice(_rdwr) and bpf_dynptr_data.

For skb type dynptrs, the data is copied into the user provided buffer if any of the data is not in the linear portion of the skb. For xdp type dynptrs, the data is copied into the user provided buffer if the data is between xdp frags.

If the skb is cloned and a call to bpf_dynptr_data_rdwr is made, then the skb will be uncloned (see bpf_unclone_prologue()).

Please note that any bpf_dynptr_write() automatically invalidates any prior data slices of the skb dynptr. This is because the skb may be cloned or may need to pull its paged buffer into the head. As such, any bpf_dynptr_write() will automatically have its prior data slices invalidated, even if the write is to data in the skb head of an uncloned skb. Please note as well that any other helper calls that change the underlying packet buffer (eg bpf_skb_pull_data()) invalidate any data slices of the skb dynptr as well, for the same reasons.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Link: https://lore.kernel.org/r/20230301154953.641654-10-joannelkoong@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
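A hedged usage sketch of the slice API on an skb dynptr, assuming the kfunc behavior described above; the usual includes and __ksym kfunc declarations are assumed, and the program type and header choice are illustrative:

  SEC("tc")
  int parse_eth(struct __sk_buff *skb)
  {
          struct bpf_dynptr ptr;
          struct ethhdr *eth, buf;

          if (bpf_dynptr_from_skb(skb, 0, &ptr))
                  return TC_ACT_SHOT;

          /* Returns a direct pointer into the packet when possible; otherwise
           * the bytes are copied into buf and a pointer to buf is returned.
           * NULL means the requested range is out of bounds. */
          eth = bpf_dynptr_slice(&ptr, 0, &buf, sizeof(buf));
          if (!eth)
                  return TC_ACT_SHOT;

          return eth->h_proto == bpf_htons(ETH_P_IP) ? TC_ACT_OK : TC_ACT_SHOT;
  }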
-
Joanne Koong authored
Add xdp dynptrs, which are dynptrs whose underlying pointer points to an xdp_buff. The dynptr acts on xdp data.

xdp dynptrs have two main benefits. One is that they allow operations on sizes that are not statically known at compile-time (eg variable-sized accesses). Another is that parsing the packet data through dynptrs (instead of through direct access of xdp->data and xdp->data_end) can be more ergonomic and less brittle (eg does not need manual if checking for being within bounds of data_end).

For reads and writes on the dynptr, this includes reading/writing from/to and across fragments. Data slices through the bpf_dynptr_data API are not supported; instead bpf_dynptr_slice() and bpf_dynptr_slice_rdwr() should be used.

For examples of how xdp dynptrs can be used, please see the attached selftests.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Link: https://lore.kernel.org/r/20230301154953.641654-9-joannelkoong@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Joanne Koong authored
Add skb dynptrs, which are dynptrs whose underlying pointer points to an skb. The dynptr acts on skb data.

skb dynptrs have two main benefits. One is that they allow operations on sizes that are not statically known at compile-time (eg variable-sized accesses). Another is that parsing the packet data through dynptrs (instead of through direct access of skb->data and skb->data_end) can be more ergonomic and less brittle (eg does not need manual if checking for being within bounds of data_end).

For bpf prog types that don't support writes on skb data, the dynptr is read-only (bpf_dynptr_write() will return an error).

For reads and writes through the bpf_dynptr_read() and bpf_dynptr_write() interfaces, reading and writing from/to data in the head as well as from/to non-linear paged buffers is supported. Data slices through the bpf_dynptr_data API are not supported; instead bpf_dynptr_slice() and bpf_dynptr_slice_rdwr() (added in a subsequent commit) should be used.

For examples of how skb dynptrs can be used, please see the attached selftests.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Link: https://lore.kernel.org/r/20230301154953.641654-8-joannelkoong@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
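And a hedged fragment showing the read/write interface on an skb dynptr (as opposed to slices), assuming the long-standing bpf_dynptr_read()/bpf_dynptr_write() helpers inside a tc program whose ctx is skb; the offset is illustrative:

  struct bpf_dynptr ptr;
  __u8 byte;

  if (bpf_dynptr_from_skb(skb, 0, &ptr))
          return TC_ACT_SHOT;

  /* Reads/writes work on both the linear head and paged (non-linear) data */
  if (bpf_dynptr_read(&byte, sizeof(byte), &ptr, 14 /* illustrative offset */, 0))
          return TC_ACT_SHOT;

  byte ^= 0xff;

  /* Fails with an error on read-only skb contexts */
  if (bpf_dynptr_write(&ptr, 14, &byte, sizeof(byte), 0))
          return TC_ACT_SHOT;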
-