- 20 Dec, 2018 19 commits
-
-
Daniel Borkmann authored
John Fastabend says: ==================== Set of bpf fixes and improvements to make sockmap with kTLS usable with "real" applications. This set came as the fallout of pulling kTLS+sockmap into Cilium[1] and running in a container environment. It is roughly broken into three parts:

Patches 1-3: resolve/improve handling of the size field in sk_msg_md.
Patch 4: it became difficult to use this in Cilium when the SK_PASS verdict was not correctly handled, so handle the case correctly.
Patches 5-8: a set of issues found while running OpenSSL TX kTLS enabled applications. This resolves the most obvious issues and gets applications using kTLS TX up and running with sock{map|hash}.

Other than "sk_msg, zap ingress queue on psock down" (patch 6/8), which can potentially cause a WARNING, the issues fixed in this series do not cause kernel-side warnings, BUGs, etc., but instead cause stalls and other odd behavior in user space applications when using kTLS with BPF policies applied.

Primarily tested with 'curl' compiled with the latest OpenSSL and also 'openssl s_client/s_server' containers using the Cilium network plugin with docker/k8s. Some basic testing with httpd was also done. Cilium CI tests will be added shortly to cover these cases as well. We also have 'wrk' and other test and benchmarking tools we can run now.

We have two more sets of patches currently under testing that will be sent shortly to address a few more issues. First, the OpenSSL RX kTLS side breaks when both sk_msg and sk_skb_verdict programs are used with kTLS: the sk_skb_verdict programs are not enforced. Second, sk_msg needs to call into the TCP stack to indicate consumed data. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
John Fastabend authored
The existing code did not expect that users would initialize the TLS ULP without subsequently calling the TLS TX enabling socket option. If the application tries to send data after the TLS ULP enable op but before the TLS TX enable op, the BPF sk_msg verdict program is skipped. This patch resolves the issue by converting the ipv4 sock ops to be calculated at init time, the same way the ipv6 ops are done. This pulls in any changes to the sock ops structure that have been made after the socket was created, including the changes from adding the socket to a sock{map|hash}. This was discovered by running the OpenSSL master branch, which calls the TLS ULP setsockopt early in the TLS handshake but only enables the TLS TX path once the handshake has completed. As a result the datapath missed the initial handshake messages. Fixes: 02c558b2 ("bpf: sockmap, support for msg_peek in sk_msg with redirect ingress") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
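A heavily condensed sketch of the idea, not the patch itself; build_protos() and the field names are assumptions from the tls_main.c of this era. The point is that the TLS proto ops are derived from whatever sk->sk_prot currently is at ULP init time, so callbacks installed earlier by sockmap are inherited:

    /* Sketch only: capture the socket's current ops at ULP init time
     * instead of a static base saved at module load, so sockmap's
     * earlier modifications to sk->sk_prot sit underneath TLS.
     */
    static int tls_init_sketch(struct sock *sk, struct tls_context *ctx)
    {
            ctx->sk_proto = sk->sk_prot;            /* remember underlying ops */
            build_protos(tls_prots, sk->sk_prot);   /* layer TLS on top        */
            sk->sk_prot = &tls_prots[TLS_BASE][TLS_BASE];
            return 0;
    }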
-
John Fastabend authored
A sockmap program that redirects through a kTLS ULP enabled socket will not work correctly because the ULP layer is skipped. This fixes the behavior to call through the ULP layer on redirect, to ensure any operations required on the data stream at the ULP layer continue to be applied.

To do this we add an internal flag MSG_SENDPAGE_NOPOLICY to avoid calling the BPF layer on a redirected message. This is required to avoid calling the BPF layer multiple times (possibly recursively), which is not the current/expected behavior without ULPs. In the future we may add a redirect flag if users _do_ want the policy applied again, but this would need to work for both ULP and non-ULP sockets and be opt-in to avoid breaking existing programs. Also, to avoid polluting the flag space with an internal flag, we reuse the flag space by overlapping MSG_SENDPAGE_NOPOLICY with MSG_WAITFORONE. Here WAITFORONE is specific to the recv path and SENDPAGE_NOPOLICY is only used for sendpage hooks.

The last thing to verify is that the user space API is masked correctly to ensure the flag can not be set by the user. (Note this needs to be true regardless, because we have internal flags already in use that user space should not be able to set.) But for completeness, we have two UAPI paths into sendpage: sendfile and splice. In the sendfile case the function do_sendfile() zeroes the flags, ./fs/read_write.c:

    static ssize_t do_sendfile(int out_fd, int in_fd, loff_t *ppos,
                               size_t count, loff_t max)
    {
            ...
            fl = 0;
    #if 0
            /*
             * We need to debate whether we can enable this or not. The
             * man page documents EAGAIN return for the output at least,
             * and the application is arguably buggy if it doesn't expect
             * EAGAIN on a non-blocking file descriptor.
             */
            if (in.file->f_flags & O_NONBLOCK)
                    fl = SPLICE_F_NONBLOCK;
    #endif
            file_start_write(out.file);
            retval = do_splice_direct(in.file, &pos, out.file, &out_pos,
                                      count, fl);
    }

In the splice case the pipe_to_sendpage "actor" is used, which masks the flags with SPLICE_F_MORE, ./fs/splice.c:

    static int pipe_to_sendpage(struct pipe_inode_info *pipe,
                                struct pipe_buffer *buf,
                                struct splice_desc *sd)
    {
            ...
            more = (sd->flags & SPLICE_F_MORE) ? MSG_MORE : 0;
            ...
    }

This confirms what we expect: internal flags are in fact internal to the socket side. Fixes: d3b18ad3 ("tls: add bpf support to sk_msg handling") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
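As a reference, a minimal sketch of the deliberate flag overlap described above; the numeric value is assumed from the include/linux/socket.h of this era, not quoted from the patch:

    /* MSG_WAITFORONE is only meaningful to recvmmsg(), so its bit is
     * free to reuse for an internal sendpage-side flag; the two paths
     * never see each other's flags.
     */
    #define MSG_WAITFORONE        0x10000 /* recvmmsg(): block until 1+ packets */
    #define MSG_SENDPAGE_NOPOLICY 0x10000 /* sendpage() internal: skip BPF policy */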
-
John Fastabend authored
In addition to releasing any cork'ed data on a psock when the psock is removed, we should also release any skb's in the ingress work queue. Otherwise the skb's eventually get freed, but late in the tear down process, so we see the WARNING due to a non-zero sk_forward_alloc:

    void sk_stream_kill_queues(struct sock *sk)
    {
            ...
            WARN_ON(sk->sk_forward_alloc);
            ...
    }

Fixes: 604326b4 ("bpf, sockmap: convert to generic sk_msg interface") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
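A sketch of the kind of teardown this implies; the helper name and the ingress_skb field are assumptions for illustration, based on the sk_psock structures of this period:

    /* Sketch: drop skbs still parked on the psock ingress queue so
     * their memory accounting is returned before
     * sk_stream_kill_queues() runs its WARN_ON(sk->sk_forward_alloc).
     */
    static void sk_psock_zap_ingress_sketch(struct sk_psock *psock)
    {
            struct sk_buff *skb;

            while ((skb = __skb_dequeue(&psock->ingress_skb)) != NULL)
                    kfree_skb(skb);
    }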
-
John Fastabend authored
When a skb verdict program is in use and either another BPF program redirects to that socket or the new SK_PASS support is used, the data_ready callback does not wake up the application. Instead, because the stream parser/verdict is using the sk data_ready callback, we wake up the stream parser/verdict block. Fix this by adding a helper to check if the stream parser block is enabled on the sk and, if so, call the saved pointer, which is the upper layer's wake up function. This fixes application stalls observed when an application is waiting for data in a blocking read(). Fixes: d829e9c4 ("tls: convert to generic sk_msg interface") Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
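The shape of such a helper, as a sketch; the parser.enabled and saved_data_ready names are assumptions based on the sk_psock layout of this period:

    /* Sketch: when a strparser owns sk->sk_data_ready, wake the
     * application through the callback saved at attach time instead
     * of (re-)waking the parser itself.
     */
    static inline void sk_psock_data_ready_sketch(struct sock *sk,
                                                  struct sk_psock *psock)
    {
            if (psock->parser.enabled)
                    psock->parser.saved_data_ready(sk);
            else
                    sk->sk_data_ready(sk);
    }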
-
John Fastabend authored
Add SK_PASS verdict support to SK_SKB_VERDICT programs. Now that support for redirects exists, we can implement SK_PASS as a redirect to the same socket. This simplifies the BPF programs and avoids an extra map lookup on the RX path for simple visibility cases. Further, it reduces user (BPF programmer in this context) confusion when their program drops an skb due to lack of support. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
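With this in place, a pure-visibility verdict program can be as small as the following sketch (the section name follows a common libbpf convention and is an assumption here):

    SEC("sk_skb/stream_verdict")
    int visibility_verdict(struct __sk_buff *skb)
    {
            /* Observe skb->len etc. here, then deliver the skb to its
             * own socket: no sockmap lookup needed just to pass it on.
             */
            return SK_PASS;
    }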
-
John Fastabend authored
Enforce comment on structure layout dependency with a BUILD_BUG_ON to ensure the condition is maintained. Suggested-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
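A hypothetical example of the pattern; the fields checked here are illustrative, not the ones from the patch. The idea is to turn a layout comment into a compile-time assertion:

    /* Hypothetical: the comment "data_end must immediately follow
     * data" becomes a build failure, rather than a stale comment,
     * if the layout ever changes.
     */
    BUILD_BUG_ON(offsetof(struct sk_msg_md, data_end) !=
                 offsetof(struct sk_msg_md, data) + sizeof(void *));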
-
John Fastabend authored
The check for the max offset in sk_msg_is_valid_access uses sizeof(), which is incorrect because it would allow accessing possibly past the end of the struct in the padded case. Further, it doesn't preclude accessing any padding that may be added in the middle of a struct. All told, this makes it fragile to rely on. To fix this, explicitly check offsets against fields using the bpf_ctx_range() and bpf_ctx_range_till() macros. For reference, the current structure layout looks as follows (reported by pahole):

    struct sk_msg_md {
            union {
                    void *     data;                 /*           8 */
            };                                       /*     0     8 */
            union {
                    void *     data_end;             /*           8 */
            };                                       /*     8     8 */
            __u32              family;               /*    16     4 */
            __u32              remote_ip4;           /*    20     4 */
            __u32              local_ip4;            /*    24     4 */
            __u32              remote_ip6[4];        /*    28    16 */
            __u32              local_ip6[4];         /*    44    16 */
            __u32              remote_port;          /*    60     4 */
            /* --- cacheline 1 boundary (64 bytes) --- */
            __u32              local_port;           /*    64     4 */
            __u32              size;                 /*    68     4 */

            /* size: 72, cachelines: 2, members: 10 */
            /* last cacheline: 8 bytes */
    };

So there should be no padding at the moment, but fixing this now prevents future errors. Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
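A sketch of what per-field range checking looks like, using the bpf_ctx_range()/bpf_ctx_range_till() case-range macros from linux/filter.h; the access-size rules are simplified relative to the real verifier callback:

    /* Sketch: only offsets that land inside a named field are valid,
     * so padding bytes (now or after future layout changes) are
     * rejected.
     */
    static bool sk_msg_md_off_ok(int off, int size)
    {
            switch (off) {
            case bpf_ctx_range(struct sk_msg_md, data):
            case bpf_ctx_range(struct sk_msg_md, data_end):
                    return size == sizeof(void *);
            case bpf_ctx_range_till(struct sk_msg_md, family, size):
                    return size == sizeof(__u32);
            default:
                    return false;
            }
    }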
-
John Fastabend authored
Currently, the test to ensure that reads past the end of the sk_msg_md data structure fail is incorrectly expecting success. Fix this typo and use the correct expected error. Fixes: 945a47d8 ("bpf: sk_msg, add tests for size field") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jesper Dangaard Brouer authored
The frame_size passed to build_skb must be aligned, else it is possible that the embedded struct skb_shared_info gets unaligned. For correctness, make sure that xdpf->headroom is included in the alignment. No upstream drivers can hit this, as all XDP drivers provide an aligned headroom. This was discovered when playing with implementing XDP support for mvneta, which has a 2 byte DSA header, and this Marvell ARM64 platform didn't like doing atomic operations on an unaligned skb_shinfo(skb)->dataref address. Fixes: 1c601d82 ("bpf: cpumap xdp_buff to skb conversion and allocation") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
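Roughly, the fix this describes amounts to aligning the headroom-plus-data portion before accounting for the shared info; a sketch of the cpumap conversion, with variable names assumed from context rather than quoted:

    /* Align the packet portion (headroom included) so that struct
     * skb_shared_info, which build_skb() places right after it,
     * starts on an aligned address.
     */
    frame_size = SKB_DATA_ALIGN(xdpf->len + xdpf->headroom) +
                 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));

    pkt_data_start = xdpf->data - xdpf->headroom;
    skb = build_skb(pkt_data_start, frame_size);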
-
Daniel Borkmann authored
Jakub Kicinski says: ==================== This is a v2 of the patch set to teach the verifier about the BPF_JSET instruction. There is also a number of tests included, for both basic functioning of the instruction and the verifier logic. The NFP JIT handling of JSET is tweaked. The last patch adds a missing file to gitignore. Reposting part of the previous series without the dead code elimination. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
commit 435f90a3 ("selftests/bpf: add a test case for sock_ops perf-event notification") missed adding the new test to gitignore. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
The top word of the constant can only have bits set if sign extension set it to all-1, therefore we don't really have to mask the top half of the register. We can just OR it into the result as is. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
The verifier will now understand the JSET instruction, so don't mark the dead branch in the JIT as noop. We won't generate any code, anyway. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Reorder the calls to check_max_stack_depth() and sanitize_dead_code() to separate pure checks from functions which can rewrite instructions. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Validate that the verifier reasons correctly about the bounds and removes dead code based on results of JSET instruction. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Some JITs (nfp) try to optimize code on their own. It could make sense in the case of the BPF_JSET instruction, which is currently not interpreted by the verifier, meaning for instance that dead code would not be detected if it was under a BPF_JSET branch. Teach the verifier the basics of BPF_JSET; the JIT optimizations will be removed shortly. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Acked-by: Edward Cree <ecree@solarflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
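For reference, BPF_JSET branches when (dst & imm) != 0. A hand-rolled sketch, in the insn-macro style used by test_verifier, of the kind of dead code the verifier can now prove away:

    /* 7 & 0x4 != 0, so the JSET branch is always taken and the first
     * exit is dead; a verifier that understands JSET can prove this.
     */
    BPF_MOV64_IMM(BPF_REG_0, 7),
    BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 0x4, 1),
    BPF_EXIT_INSN(),                 /* dead: never reached */
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_EXIT_INSN(),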
-
Jakub Kicinski authored
We seem to have no JSET instruction test, and LLVM does not generate it at all, so let's add a simple hand-coded test to make sure JIT implementations are correct. v2: - extend test_verifier to handle multiple inputs and add the sample there (Daniel) - add a sign extension case Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Martin KaFai Lau authored
This patch enables sparc64's bpf_int_jit_compile() to provide bpf_line_info by calling bpf_prog_fill_jited_linfo(). Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 19 Dec, 2018 5 commits
-
-
Alexei Starovoitov authored
Martin KaFai Lau says: ==================== This series ensures the line_info (passed by the userspace during bpf_prog_load) cannot have its line_info.insn_off pointing to a zero bpf insn code. F.e. a broken userspace tool might generate a line_info.insn_off that points to the second 8 bytes of a BPF_LD_IMM64. The first patch is the kernel change. The second patch is a new test case. ==================== Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch adds a BPF_LD_IMM64 case to the line_info test to ensure the kernel rejects a line_info.insn_off pointing to the 2nd 8 bytes of the BPF_LD_IMM64. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Martin KaFai Lau authored
This patch rejects a line_info if the bpf insn code referred to by line_info.insn_off is 0. F.e. a broken userspace tool might generate a line_info.insn_off that points to the second 8 bytes of a BPF_LD_IMM64. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
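The shape of the check, sketched: BPF_LD_IMM64 is a 16-byte pseudo instruction whose second half carries opcode 0, so opcode 0 at insn_off means the offset points mid-instruction. The error message and surrounding loop are assumptions here:

    /* Sketch of the validation: insn code 0 only occurs in the second
     * half of a BPF_LD_IMM64, which is never a valid insn start.
     */
    if (prog->insnsi[linfo[i].insn_off].code == 0) {
            verbose(env, "Invalid insn code at line_info.insn_off\n");
            err = -EINVAL;
            goto err_free;
    }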
-
Ivan Babrou authored
This allows transparent cross-compilation with CROSS_COMPILE by relying on 7ed1c190 ("tools: fix cross-compile var clobbering"). Signed-off-by: Ivan Babrou <ivan@cloudflare.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Björn Töpel authored
Prior to this commit, when the struct socket object was being released, the UMEM did not have its reference count decreased. Instead, this was done in the struct sock sk_destruct function. There is no reason to keep the UMEM reference around when the socket is being orphaned, so in this patch xdp_put_umem is called in the xsk_release function. As a result, the xsk_destruct function can be removed! Note that a struct xsk_sock reference might still linger in the XSKMAP after the UMEM is released, e.g. if a user does not clear the XSKMAP prior to closing the process. This sock will be in a "released" zombie-like state until the XSKMAP entry is removed. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
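A heavily condensed sketch of the release-side ordering this implies; the real xsk_release also tears down RX/TX rings and unbinds from the device, all omitted here, and xdp_sk() is assumed to be the sock-to-xdp_sock accessor:

    /* Sketch only: drop the UMEM reference at socket release time
     * rather than deferring it to an sk_destruct callback.
     */
    static int xsk_release_sketch(struct socket *sock)
    {
            struct xdp_sock *xs = xdp_sk(sock->sk);

            xdp_put_umem(xs->umem);   /* previously done in xsk_destruct() */
            xs->umem = NULL;
            sock_orphan(sock->sk);
            sock->sk = NULL;
            return 0;
    }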
-
- 18 Dec, 2018 16 commits
-
-
Yonghong Song authored
The current BTF internal verbose logger logs a fwd type as

    [2] FWD A type_id=0

where A is the type name. Commit 9d5f9f70 ("bpf: btf: fix struct/union/fwd types with kind_flag") introduced kind_flag, which can be used to distinguish whether a forward type is a struct or union. Also, "type_id=0" does not carry any meaningful information for a fwd type, as btf_type.type = 0 is simply enforced during btf verification and is not used anywhere else. This commit changes the log to

    [2] FWD A struct

if kind_flag = 0, or

    [2] FWD A union

if kind_flag = 1. Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Daniel Borkmann authored
John Fastabend says: ==================== This adds a size field to the sk_msg_md data structure used by SK_MSG programs. Without this, in the zerocopy case and in the copy case where multiple iovs are in use, it's difficult to know how much data can be pulled in. The normal method of reading data and data_end only gives the current contiguous buffer. BPF programs can attempt to pull in extra data but have to guess if it exists. This can result in multiple "guesses"; it's much better if we know the size of the sk_msg upfront. ==================== Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
John Fastabend authored
This adds tests to test_verifier for reading the size field. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
John Fastabend authored
Add the size field to sk_msg_md for tools. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
John Fastabend authored
This adds metadata to sk_msg_md for BPF programs to read the sk_msg size. When the SK_MSG program is running under an application that is using sendfile, the data is not copied into sk_msg buffers by default. Rather, the BPF program uses sk_msg_pull_data to read the bytes in. This avoids doing the costly memcpy instructions when they are not in fact needed. However, if we don't know the size of the sk_msg, we have to guess whether the needed bytes are available by doing a pull request, which may fail. By including the size of the sk_msg, BPF programs can check the size before issuing sk_msg_pull_data requests. Additionally, the same applies for sendmsg calls when the application provides multiple iovs. Here the BPF program needs to pull in data to update the data pointers, but it's not clear where the data ends without a size parameter. In many cases "guessing" is not easy to do and results in multiple calls to pull, and without bounded loops everything gets fairly tricky. Clean this up by including a u32 size field. Note, all writes into sk_msg_md are rejected already from sk_msg_is_valid_access, so nothing additional is needed there. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
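A sketch of how a program can use the new field, in the style of the sockmap selftests; the 16-byte threshold is arbitrary and only for illustration:

    SEC("sk_msg")
    int msg_size_check(struct sk_msg_md *msg)
    {
            /* The total message size is known upfront, so only pull
             * when the bytes actually exist, instead of guessing and
             * retrying on failure.
             */
            if (msg->size < 16)
                    return SK_PASS;
            if (bpf_msg_pull_data(msg, 0, 16, 0))
                    return SK_PASS;
            /* msg->data .. msg->data_end now spans the first 16 bytes */
            return SK_PASS;
    }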
-
Jiong Wang authored
The verifier is supposed to support sharing a stack slot allocated to a ptr with SCALAR_VALUE for privileged programs. However, this doesn't happen for some cases. The reason is the verifier is not clearing the slot_type STACK_SPILL for all bytes; it only clears part of them, while the verifier uses

    slot_type[0] == STACK_SPILL

as a convention to check that a slot is ptr type. So, the consequence of partially clearing slot_type is that the verifier could treat a partially overridden ptr slot, which should now be a SCALAR_VALUE slot, still as a ptr slot, and reject some valid programs. Before this patch, test_xdp_noinline.o under bpf selftests, and bpf_lxc.o and bpf_netdev.o under the Cilium bpf repo, when built with -mattr=+alu32, are rejected due to this issue. After this patch, they are all accepted. There is no processed insn number change before and after this patch on the Cilium bpf programs. Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
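The problematic pattern, sketched with the kernel's insn macros: a spilled pointer is partially overwritten by a narrower scalar store, which should retype the whole slot as SCALAR_VALUE:

    /* r1 holds a pointer; spill it, then clobber half the slot with
     * a 32-bit scalar store. Reading the slot back must then be
     * treated as SCALAR_VALUE for privileged programs, not as a ptr.
     */
    BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),  /* 8-byte ptr spill    */
    BPF_ST_MEM(BPF_W, BPF_REG_10, -8, 0),            /* 4-byte scalar store */
    BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),  /* read back: scalar   */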
-
Matt Mullins authored
Distributions build drivers as modules, including network and filesystem drivers which export numerous tracepoints. This enables bpf(BPF_RAW_TRACEPOINT_OPEN) to attach to those tracepoints. Signed-off-by: Matt Mullins <mmullins@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Daniel Borkmann authored
Quentin Monnet says: ==================== This series focuses on mounting (or not mounting) tracefs with bpftool. The first patch makes bpftool attempt to mount tracefs if tracefs is not found when running "bpftool prog tracelog". The second patch adds an option to bpftool to prevent it from attempting to mount any file system (tracefs or bpffs), in case this behaviour is undesirable for some users. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
In order to make life easier for users, bpftool automatically attempts to mount the BPF virtual file system, if it is not mounted already, before trying to pin objects in it. Similarly, it attempts to mount tracefs if necessary before trying to dump the trace pipe to the console. While mounting file systems on-the-fly can improve user experience, some administrators might prefer to avoid that. Let's add an option to block these mount attempts. Note that it does not prevent automatic mounting of tracefs by debugfs for the "bpftool prog tracelog" command. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Quentin Monnet authored
As a follow-up to commit 30da46b5 ("tools: bpftool: add a command to dump the trace pipe"), attempt to mount the tracefs virtual file system if it is not detected on the system, before trying to dump the content of the tracing pipe on an invocation of "bpftool prog tracelog". Usually, tracefs is automatically mounted by debugfs when the user tries to access it (e.g. "ls /sys/kernel/debug/tracing" mounts the tracefs). So if we failed to find it, it is probably because debugfs is not there either. Therefore, we just attempt a single mount, at a location that does not involve debugfs: /sys/kernel/tracing. Suggested-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yonghong Song authored
The current btf func_info, line_info and jited_line_info are designed to be extensible. The record sizes for {func,line}_info are passed to the kernel, and the record sizes for {func,line,jited_line}_info are returned to userspace during the bpf_prog_info query. In the bpf selftest test_btf.c, when testing whether the kernel returns a legitimate {func,line,jited_line}_info rec_size, the test only compares against the minimum allowed size. If the returned rec_size is smaller than the minimum allowed size, it is considered incorrect. The minimum allowed sizes for these three infos are equal to the current values of sizeof(struct bpf_func_info), sizeof(struct bpf_line_info) and sizeof(__u64). The original thinking was that in the future, when rec_size is increased in the kernel, the same test should still run correctly. But this sacrificed the precision of testing under the very kernel the test is shipped with, and the bpf selftest is typically run with the same-repo kernel. So this patch changes the testing of rec_size such that the kernel-returned value should be equal to the size defined by the tools uapi header bpf.h, which syncs with the kernel uapi header. Martin discovered a bug in one of the rec_size comparisons: instead of comparing to the minimum func_info rec_size 8, it compared to 4. This patch fixes that issue as well. Fixes: 999d82cb ("tools/bpf: enhance test_btf file testing to test func info") Fixes: 05687352 ("bpf: Refactor and bug fix in test_func_type in test_btf.c") Fixes: 4d6304c7 ("bpf: Add unit tests for bpf_line_info") Suggested-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Prashant Bhole authored
This patch fixes a memory leak in libbpf by freeing the line_info member of struct bpf_program while unloading a program. Fixes: 3d650141 ("bpf: libbpf: Add btf_line_info support to libbpf") Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp> Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
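A sketch of the fix's shape; libbpf's zfree() is its free-and-NULL helper, and the surrounding unload logic is omitted:

    /* Sketch: release per-program BTF line info alongside the other
     * program resources when the program is unloaded.
     */
    static void bpf_program__unload_sketch(struct bpf_program *prog)
    {
            zfree(&prog->func_info);
            zfree(&prog->line_info);   /* previously leaked */
    }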
-
Daniel Borkmann authored
Yonghong Song says: ==================== Commit 69b693f0 ("bpf: btf: Introduce BPF Type Format (BTF)") introduced BTF, a debug info format for BPF. The original design has a couple of issues though. First, the bitfield size is only encoded in the int type. If a struct member bitfield type is an enum, pahole ([1]) or llvm is forced to replace the enum with an int type, and as a result the original type information gets lost. Second, the original BTF design does not envision the possibility of BTF=>header_file conversion ([2]), hence it does not encode "struct" or "union" info for a forward type. Such information is necessary to convert BTF to a header file.

This patch set fixes the issues by introducing kind_flag, using one bit in type->info. When kind_flag is set, the struct/union btf_member->offset will encode both bitfield_size and bit_offset, covering both int and enum base types. The kind_flag is also used to indicate whether a forward type is a union (when set) or a struct.

Patch #1 refactors function btf_int_bits_seq_show() so Patch #2 can reuse part of the function. Patch #2 implements kind_flag support for struct/union/fwd types. Patch #3 adds kind_flag support for cgroup local storage map pretty print. Patch #4 syncs the kernel uapi btf.h to the tools directory. Patch #5 adds unit tests for kind_flag. Patch #6 adds tests for kernel bpffs based pretty print with kind_flag. Patch #7 refactors function btf_dumper_int_bits() so Patch #8 can reuse part of the function. Patch #8 adds bpftool support for pretty print with kind_flag set.

[1] https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?id=b18354f64cc215368c3bc0df4a7e5341c55c378c
[2] https://lwn.net/SubscriberLink/773198/fe3074838f5c3f26/

Change logs:
v2 -> v3:
 . Relocated the comments about the bitfield_size/bit_offset interpretation of the "offset" field to right before the "offset" struct member.
 . Added missing byte alignment checking for a non-bitfield enum member of a struct with kind_flag set.
 . Added two test cases to the unit tests for a struct type with kind_flag set and non-bitfield int/enum members at non-byte-aligned bit offsets.
 . Added comments to help understand that there is no overflow of total_bits_offset in the bpftool function btf_dumper_int_bits().
 . Added an explanation of the typedef type dumping fix in the Patch #8 commit message.
v1 -> v2:
 . If kind_flag is set for a structure, ensure an int member, whether it is a bitfield or not, is a regular int type.
 . Added support so cgroup local storage map pretty print works with kind_flag.
==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
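For reference, a sketch of the member-offset encoding this enables. The macro shapes follow the uapi btf.h added by this series (upper 8 bits carry the bitfield size, lower 24 bits the bit offset) but are reproduced from memory, not quoted:

    /* When struct/union kind_flag is set, btf_member->offset packs
     * two values instead of a plain bit offset:
     */
    #define BTF_MEMBER_BITFIELD_SIZE(val)   ((val) >> 24)
    #define BTF_MEMBER_BIT_OFFSET(val)      ((val) & 0xffffff)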
-
Yonghong Song authored
The following example shows map pretty print with structures which include bitfield members:

    enum A { A1, A2, A3, A4, A5 };
    typedef enum A ___A;
    struct tmp_t {
            char a1:4;
            int  a2:4;
            int  :4;
            __u32 a3:4;
            int  b;
            ___A b1:4;
            enum A b2:4;
    };
    struct bpf_map_def SEC("maps") tmpmap = {
            .type = BPF_MAP_TYPE_ARRAY,
            .key_size = sizeof(__u32),
            .value_size = sizeof(struct tmp_t),
            .max_entries = 1,
    };
    BPF_ANNOTATE_KV_PAIR(tmpmap, int, struct tmp_t);

and the following map update in the bpf program:

    key = 0;
    struct tmp_t t = {};
    t.a1 = 2; t.a2 = 4; t.a3 = 6;
    t.b = 7;
    t.b1 = 8; t.b2 = 10;
    bpf_map_update_elem(&tmpmap, &key, &t, 0);

With this patch, I am able to print out the map values correctly:

    bpftool map dump id 187
    [{
            "key": 0,
            "value": {
                    "a1": 0x2,
                    "a2": 0x4,
                    "a3": 0x6,
                    "b": 7,
                    "b1": 0x8,
                    "b2": 0xa
            }
     }
    ]

Previously, if a function prototype argument has a typedef type, the prototype is not printed, since the function __btf_dumper_type_only() bailed out with an error if the type is a typedef. This commit corrects that behavior by printing out the typedef properly. The following example shows that forward types and typedef types can be properly printed in a function prototype, with a modified test_btf_haskv.c:

    struct t;
    union u;

    __attribute__((noinline)) static int
    test_long_fname_1(struct dummy_tracepoint_args *arg,
                      struct t *p1, union u *p2, __u32 unused)
    ...
    int _dummy_tracepoint(struct dummy_tracepoint_args *arg)
    {
            return test_long_fname_1(arg, 0, 0, 0);
    }

    $ bpftool p d xlated id 24
    ...
    int test_long_fname_1(struct dummy_tracepoint_args * arg,
                          struct t * p1, union u * p2, __u32 unused)
    ...

Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yonghong Song authored
The core dump functionality in btf_dumper_int_bits() is refactored into a separate function, btf_dumper_bitfield(), which will be used by the next patch. Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Yonghong Song authored
The new tests are added to test bpffs map pretty print in kernel with kind_flag for structure type.

    $ test_btf -p
    ......
    BTF pretty print array(#1)......OK
    BTF pretty print array(#2)......OK
    PASS:8 SKIP:0 FAIL:0

Acked-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-