- 20 Dec, 2019 10 commits
-
-
Andrey Ignatov authored
__cgroup_bpf_attach has a lot of identical code handling two scenarios: BPF_F_ALLOW_MULTI set and unset. Simplify it by splitting the work into two main steps: * First, decide whether a new bpf_prog_list entry should be allocated or an existing entry should be reused for the new program. This decision is saved in the replace_pl pointer; * Next, use the replace_pl pointer to handle both possible states of the BPF_F_ALLOW_MULTI flag (set / unset) instead of doing similar work for them separately. This splitting, in turn, allows further simplifications: * The check for attaching the same program twice in BPF_F_ALLOW_MULTI mode can be done before allocating cgroup storage, so that if the user tries to attach the same program twice, no allocation/free cycle happens as it did before; * pl_was_allocated becomes redundant, so it's removed. Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/c6193db6fe630797110b0d3ff06c125d093b834c.1576741281.git.rdna@fb.com
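In rough, hedged pseudocode (not the kernel source; the surrounding variables are illustrative), the reshaped decide-then-act flow looks like this:

  struct bpf_prog_list *replace_pl = NULL, *pl;

  /* Step 1: decide whether an existing list entry will be reused. */
  if (flags & BPF_F_ALLOW_MULTI) {
          /* attaching the same program twice is rejected before any allocation */
          list_for_each_entry(pl, progs, node)
                  if (pl->prog == prog)
                          return -EINVAL;
  } else if (!list_empty(progs)) {
          replace_pl = list_first_entry(progs, struct bpf_prog_list, node);
  }

  /* Step 2: act on that decision, identically for both flag states. */
  if (replace_pl) {
          pl = replace_pl;                        /* reuse; the old prog is released later */
  } else {
          pl = kmalloc(sizeof(*pl), GFP_KERNEL);  /* allocate a fresh entry */
          if (!pl)
                  return -ENOMEM;
          list_add_tail(&pl->node, progs);
  }
  pl->prog = prog;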
-
Alexei Starovoitov authored
Björn Töpel says:

====================

This series aims to simplify the XDP maps and xdp_do_redirect_map()/xdp_do_flush_map(), and to crank out some more performance from XDP_REDIRECT scenarios.

The first part of the series simplifies all XDP_REDIRECT-capable maps, so that __XXX_flush_map() does not require the map parameter, by moving the flush list from the map to global scope. As a result, the map_to_flush member, and its corresponding logic, can be removed from struct bpf_redirect_info. Simpler code, and more performance, since per-packet checks/code move to the flush step.

Pre-series performance:

  $ sudo taskset -c 22 ./xdpsock -i enp134s0f0 -q 20 -n 1 -r -z

   sock0@enp134s0f0:20 rxdrop xdp-drv
                  pps         pkts        1.00
  rx              20,797,350  230,942,399
  tx              0           0

  $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0

  Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
  XDP-cpumap      CPU:to  pps       drop-pps  extra-info
  XDP-RX          20      7723038   0         0
  XDP-RX          total   7723038   0
  cpumap_kthread  total   0         0         0
  redirect_err    total   0         0
  xdp_exception   total   0         0

Post-series performance:

  $ sudo taskset -c 22 ./xdpsock -i enp134s0f0 -q 20 -n 1 -r -z

   sock0@enp134s0f0:20 rxdrop xdp-drv
                  pps         pkts        1.00
  rx              21,524,979  86,835,327
  tx              0           0

  $ sudo ./xdp_redirect_cpu --dev enp134s0f0 --cpu 22 xdp_cpu_map0

  Running XDP/eBPF prog_name:xdp_cpu_map5_lb_hash_ip_pairs
  XDP-cpumap      CPU:to  pps       drop-pps  extra-info
  XDP-RX          20      7840124   0         0
  XDP-RX          total   7840124   0
  cpumap_kthread  total   0         0         0
  redirect_err    total   0         0
  xdp_exception   total   0         0

Results: +3.5% and +1.5% for the two microbenchmarks, respectively.

v1->v2 [1]:

* Removed 'unused-variable' compiler warning (Jakub)

[1] https://lore.kernel.org/bpf/20191218105400.2895-1-bjorn.topel@gmail.com/

====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Björn Töpel authored
The explicit error checking is not needed. Simply return the error instead. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-9-bjorn.topel@gmail.com
-
Björn Töpel authored
Now that all XDP maps that can be used with bpf_redirect_map() track entries to be flushed in a global fashion, there is no need to track that the map has changed and flush from xdp_do_generic_map() anymore. All entries will be flushed in xdp_do_flush_map(). This means that map_to_flush, and the corresponding checks, can be removed. Moving the flush logic to one place, xdp_do_flush_map(), gives bulking behavior and a performance boost. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-8-bjorn.topel@gmail.com
-
Björn Töpel authored
The cpumap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all cpumaps, which simplifies __cpu_map_flush() and cpu_map_alloc(). Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-7-bjorn.topel@gmail.com
-
Björn Töpel authored
The devmap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all devmaps, which simplifies __dev_map_flush() and dev_map_init_map(). Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-6-bjorn.topel@gmail.com
-
Björn Töpel authored
The xskmap flush list is used to track entries that need to be flushed via the xdp_do_flush_map() function. This list used to be per-map, but there is really no reason for that. Instead make the flush list global for all xskmaps, which simplifies __xsk_map_flush() and xsk_map_alloc(). Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-5-bjorn.topel@gmail.com
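The resulting shape of the flush path, in a hedged sketch (close to, but not necessarily identical to, the source; the per-CPU list and the list helpers follow the description above):

  static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);

  /* __xsk_map_flush() no longer takes the map: it drains this CPU's global list */
  void __xsk_map_flush(void)
  {
          struct list_head *flush_list = this_cpu_ptr(&xskmap_flush_list);
          struct xdp_sock *xs, *tmp;

          list_for_each_entry_safe(xs, tmp, flush_list, flush_node) {
                  xsk_flush(xs);
                  __list_del_clearprev(&xs->flush_node);
          }
  }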
-
Björn Töpel authored
Simple spelling fix. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-4-bjorn.topel@gmail.com
-
Björn Töpel authored
After the RCU flavor consolidation [1], call_rcu() and synchronize_rcu() wait for preempt-disable regions (NAPI) in addition to the read-side critical sections. As a result, the cleanup code in cpumap can be simplified: * There is no longer a need to flush in __cpu_map_entry_free, since we know that this has been done when the call_rcu() callback is triggered. * When freeing the map, there is no need to explicitly wait for a flush. It's guaranteed to be done after the synchronize_rcu() call in cpu_map_free(). [1] https://lwn.net/Articles/777036/ Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-3-bjorn.topel@gmail.com
-
Björn Töpel authored
After the RCU flavor consolidation [1], call_rcu() and synchronize_rcu() wait for preempt-disable regions (NAPI) in addition to the read-side critical sections. As a result, the cleanup code in devmap can be simplified: * There is no longer a need to flush in __dev_map_entry_free, since we know that this has been done when the call_rcu() callback is triggered. * When freeing the map, there is no need to explicitly wait for a flush. It's guaranteed to be done after the synchronize_rcu() call in dev_map_free(). The rcu_barrier() is still needed, so that the map is not freed prior to the elements. [1] https://lwn.net/Articles/777036/ Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20191219061006.21980-2-bjorn.topel@gmail.com
-
- 19 Dec, 2019 23 commits
-
-
Aditya Pakki authored
The two callers of bpf_prog_realloc - bpf_patch_insn_single and bpf_migrate_filter - dereference fp_old before passing it to the function. Thus the assertion checking fp_old is unnecessary and can be removed. Signed-off-by: Aditya Pakki <pakki001@umn.edu> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191219175735.19231-1-pakki001@umn.edu
-
Andrii Nakryiko authored
Fix yet another printf warning for %llu specifier on ppc64le. This time size_t casting won't work, so cast to verbose `unsigned long long`. Fixes: 166750bc ("libbpf: Support libbpf-provided extern variables") Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191219052103.3515-1-andriin@fb.com
-
Toke Høiland-Jørgensen authored
Naresh pointed out that libbpf builds fail on 32-bit architectures because rlimit.rlim_cur is defined as 'unsigned long long' on those architectures. Fix this by using %zu in printf and casting to size_t. Fixes: dc3a2d25 ("libbpf: Print hint about ulimit when getting permission denied error") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191219090236.905059-1-toke@redhat.com
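A small, self-contained illustration of both this fix and the previous ppc64le one (RLIMIT_MEMLOCK is used here only as an example resource; this is not the libbpf code itself):

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
          struct rlimit rlim;

          if (getrlimit(RLIMIT_MEMLOCK, &rlim))
                  return 1;
          /* rlim_cur's underlying type varies by libc/arch, so cast before printing */
          printf("limit: %zu\n", (size_t)rlim.rlim_cur);               /* this fix: %zu + size_t cast */
          printf("limit: %llu\n", (unsigned long long)rlim.rlim_cur);  /* when size_t is too narrow */
          return 0;
  }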
-
Alexei Starovoitov authored
Fix two issues in test_attach_probe: 1. it was not able to parse /proc/self/maps beyond the first line, since %s parses a string only up to the next whitespace; 2. the mapping's offset has to be accounted for, otherwise the uprobe address is incorrect. Fixes: 1e8611bb ("selftests/bpf: add kprobe/uprobe selftests") Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191219020442.1922617-1-ast@kernel.org
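A hedged sketch of the corrected approach (the selftest's actual format string and helper differ in detail; this only shows scanning the permission column as its own field and adding the mapping's file offset):

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* translate a function address in our own binary into a uprobe file offset */
  static long get_uprobe_offset(const void *addr)
  {
          size_t start, end, off;
          char perms[5];
          FILE *f = fopen("/proc/self/maps", "r");

          if (!f)
                  return -1;
          /* one fscanf() per line; %4s stops at whitespace, so it consumes only the perms column */
          while (fscanf(f, "%zx-%zx %4s %zx %*[^\n]\n", &start, &end, perms, &off) == 4) {
                  if (strchr(perms, 'x') &&
                      (uintptr_t)addr >= start && (uintptr_t)addr < end) {
                          fclose(f);
                          /* file offset = vaddr - mapping start + mapping file offset */
                          return (uintptr_t)addr - start + off;
                  }
          }
          fclose(f);
          return -1;
  }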
-
Toke Høiland-Jørgensen authored
The error log output in the opts validation macro was missing a newline. Fixes: 2ce8450e ("libbpf: add bpf_object__open_{file, mem} w/ extensible opts") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191219120714.928380-1-toke@redhat.com
-
Daniel Borkmann authored
Björn Töpel says: ==================== This series contains one non-critical fix, support for far jumps, and some optimizations for the BPF JIT. Previously, the JIT only supported 12b branch targets for conditional branches, and 21b for unconditional branches. Starting with this series, 32b branching is supported. As part of supporting far jumps, branch relaxation was introduced. The idea is to start with a pessimistic jump (e.g. auipc/jalr) and, on each pass, give the JIT an opportunity to pick a better instruction (e.g. jal) and shrink the image. Instead of two passes, the JIT now requires more; it typically converges after 3 passes. The optimizations mentioned in the subject are for calls and tail calls. In the tail call generation we can save one instruction by using the offset in jalr. Calls are optimized by doing (auipc)/jal(r) relative jumps instead of loading the entire absolute address and doing jalr. This required making the JIT image allocator RISC-V specific, so we can ensure that the JIT image and the kernel text are in range (32b). The last two patches of the series are not critical to it, but fix two UAPI build issues for BPF events. A closer look from the RV folks would be much appreciated. The test_bpf.ko module, selftests/bpf/test_verifier and selftests/seccomp/seccomp_bpf pass all tests. RISC-V is still missing proper kprobe and tracepoint support, so a lot of BPF selftests cannot be run. v1->v2: [1] * Removed unused function parameter from emit_branch() * Added patch to support far branch in tail call emit [1] https://lore.kernel.org/bpf/20191209173136.29615-1-bjorn.topel@gmail.com/ ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Björn Töpel authored
RISC-V was missing a proper perf_arch_bpf_user_pt_regs macro for CONFIG_PERF_EVENT builds. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-10-bjorn.topel@gmail.com
-
Björn Töpel authored
Add the missing uapi header for BPF_PROG_TYPE_PERF_EVENT programs by exporting struct user_regs_struct instead of struct pt_regs, which is in-kernel only. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-9-bjorn.topel@gmail.com
-
Björn Töpel authored
Instead of using emit_imm() and emit_jalr() which can expand to six instructions, start using jal or auipc+jalr. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-8-bjorn.topel@gmail.com
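A hedged sketch of the idea; the helper names (emit, rv_jal, rv_auipc, rv_jalr, is_21b_int) mirror the JIT's style but are illustrative rather than copied from the source:

  static void emit_rel_call(s64 off, struct rv_jit_context *ctx)
  {
          if (is_21b_int(off)) {
                  /* target within +/-1 MiB: a single jal does the call */
                  emit(rv_jal(RV_REG_RA, off >> 1), ctx);
          } else {
                  /* otherwise auipc sets the upper bits, jalr adds the lower 12 */
                  s64 upper = (off + (1 << 11)) >> 12;
                  s64 lower = off & 0xfff;

                  emit(rv_auipc(RV_REG_T1, upper), ctx);
                  emit(rv_jalr(RV_REG_RA, RV_REG_T1, lower), ctx);
          }
  }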
-
Björn Töpel authored
This commit makes sure that the JIT image is kept close to the kernel text, so BPF calls can use relative calling with auipc/jalr or jal instead of loading the full 64-bit address and doing jalr. The BPF JIT image region is 128 MB before the kernel text. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-7-bjorn.topel@gmail.com
-
Björn Töpel authored
Remove one addi, and instead use the offset part of jalr. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-6-bjorn.topel@gmail.com
-
Björn Töpel authored
This commit adds support for far (offset > 21b) jumps and exits. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Luke Nelson <lukenels@cs.washington.edu> Link: https://lore.kernel.org/bpf/20191216091343.23260-5-bjorn.topel@gmail.com
-
Björn Töpel authored
Start using the emit_branch() function in the tail call emitter in order to support far branching. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-4-bjorn.topel@gmail.com
-
Björn Töpel authored
This commit adds branch relaxation to the BPF JIT, and with that support for far (offset greater than 12b) branching. The branch relaxation requires more than two passes to converge. For most programs it is three passes, but for larger programs it can be more. Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Luke Nelson <lukenels@cs.washington.edu> Link: https://lore.kernel.org/bpf/20191216091343.23260-3-bjorn.topel@gmail.com
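The driver loop behind branch relaxation might look roughly like this (a hedged sketch; the names are illustrative, not lifted from the source):

  /* start from a pessimistic image and re-emit until the size stops shrinking */
  prev_ninsns = 0;
  for (pass = 0; pass < MAX_PASSES; pass++) {
          ctx->ninsns = 0;
          if (build_body(ctx))            /* later passes may pick shorter branch encodings */
                  return -EINVAL;
          if (ctx->ninsns == prev_ninsns)
                  break;                  /* converged, typically after three passes */
          prev_ninsns = ctx->ninsns;
  }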
-
Björn Töpel authored
The BPF JIT incorrectly clobbered the a0 register, and did not flag usage of the s5 register when the BPF stack was being used. Fixes: 2353ecc6 ("bpf, riscv: add BPF JIT for RV64G") Signed-off-by: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20191216091343.23260-2-bjorn.topel@gmail.com
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Based on latest feedback and discussions, this patch set implements the following changes: - Kconfig-provided externs have to be in .kconfig section, for which bpf_helpers.h provides convenient __kconfig macro (Daniel); - instead of allowing to override Kconfig file path, switch this to ability to extend and override system Kconfig with user-provided custom values (Alexei); - BTF is required when externs are used. ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
BTF is required to get type information about extern variables. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191219002837.3074619-4-andriin@fb.com
-
Andrii Nakryiko authored
Instead of an all-or-nothing approach of overriding the Kconfig file location, allow extending it with extra values and overriding a chosen subset of values through an optional user-provided extra config, passed as a string via the open options' .kconfig option. If the same config key is present in both the user-supplied config and Kconfig, the user-supplied one wins. This allows applications to more easily test various conditions regardless of the host kernel's real configuration. If all of a BPF object's __kconfig externs are satisfied from the user-supplied config, the system Kconfig won't be read at all. Simplify selftests by not needing to create temporary Kconfig files. Suggested-by: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191219002837.3074619-3-andriin@fb.com
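A hedged usage sketch of the userspace side (the option and macro names follow the libbpf API described here; the object file name and CONFIG_MY_FEATURE are placeholders):

  #include <bpf/libbpf.h>

  int open_with_custom_kconfig(void)
  {
          DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
                  /* extends/overrides the system Kconfig for this object's __kconfig externs */
                  .kconfig = "CONFIG_BPF_SYSCALL=y\nCONFIG_MY_FEATURE=n",
          );
          struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);

          if (libbpf_get_error(obj))
                  return -1;
          return bpf_object__load(obj);
  }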
-
Andrii Nakryiko authored
Move Kconfig-provided externs into custom .kconfig section. Add __kconfig into bpf_helpers.h for user convenience. Update selftests accordingly. Suggested-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191219002837.3074619-2-andriin@fb.com
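On the BPF side, usage would look roughly like this (a hedged sketch; CONFIG_BPF_SYSCALL and the tracepoint are just example choices):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Kconfig-provided externs now live in the .kconfig section via __kconfig */
  extern _Bool CONFIG_BPF_SYSCALL __kconfig;

  SEC("tracepoint/syscalls/sys_enter_nanosleep")
  int handle(void *ctx)
  {
          /* branch is dead-code-eliminated when the config value is false */
          if (!CONFIG_BPF_SYSCALL)
                  return 0;
          /* ... feature-specific logic ... */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";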
-
Andrii Nakryiko authored
There are cases in which a BPF resource (program, map, etc.) has to outlive the userspace program that "installed" it in the system in the first place. When a BPF program is attached, libbpf returns a bpf_link object, which is supposed to be destroyed when no longer necessary through the bpf_link__destroy() API. Currently, bpf_link destruction causes both automatic detachment and freeing of any resources allocated for the bpf_link's in-memory representation. This is inconvenient for the case described above because of the coupling of detachment and resource freeing. This patch introduces the bpf_link__disconnect() API call, which marks a bpf_link as disconnected from its underlying BPF resources. This means that when the bpf_link is destroyed later, all its memory resources will be freed, but the BPF resource itself won't be detached. This design keeps the strict, resource-leak-free behavior by default, while giving user code an easy and straightforward way to opt for keeping the BPF resource attached beyond the lifetime of a bpf_link. For some BPF programs (e.g., FS-based tracepoints, kprobes, raw tracepoints, etc.), the user has to make sure to pin the BPF program to prevent the kernel from automatically detaching it on process exit. This should typically be achieved by pinning the BPF program (or map in some cases) in BPF FS. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191218225039.2668205-1-andriin@fb.com
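A hedged usage sketch of the new call (the kprobe target is only an example; as noted above, the program would additionally need pinning in BPF FS to truly outlive the process):

  #include <bpf/libbpf.h>

  static int attach_and_disconnect(struct bpf_program *prog)
  {
          struct bpf_link *link;

          link = bpf_program__attach_kprobe(prog, false /* not a retprobe */, "do_sys_open");
          if (libbpf_get_error(link))
                  return -1;

          bpf_link__disconnect(link); /* destroy will now skip detachment */
          bpf_link__destroy(link);    /* frees only the in-memory representation */
          return 0;
  }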
-
Nikita V. Shirokov authored
Allow passing the skb's mark field into the bpf_prog_test_run ctx for the BPF_PROG_TYPE_SCHED_CLS prog type. This allows testing BPF programs that make decisions based on this field. Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
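A hedged userspace sketch of exercising this (field names follow the bpf_prog_test_run_xattr API of that era; the packet contents are a placeholder):

  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  /* run a SCHED_CLS program with skb->mark preset to 42 */
  static int test_run_with_mark(int prog_fd)
  {
          struct __sk_buff skb_ctx = { .mark = 42, };
          char pkt[64] = {};                   /* minimal dummy packet */
          struct bpf_prog_test_run_attr attr = {
                  .prog_fd      = prog_fd,
                  .data_in      = pkt,
                  .data_size_in = sizeof(pkt),
                  .ctx_in       = &skb_ctx,    /* the program observes skb->mark == 42 */
                  .ctx_size_in  = sizeof(skb_ctx),
          };

          return bpf_prog_test_run_xattr(&attr);
  }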
-
Andrii Nakryiko authored
Work around what appears to be a bug in the rst2man conversion tool, used to create man pages out of reStructuredText-formatted documents. If a text line starts with a dot, rst2man will put it in the resulting man file verbatim. This seems to cause the man tool to interpret it as a directive/command (e.g., `.bs`) and subsequently not render the entire line because it's unrecognized. Enclose '.xxx' words in extra formatting to work around this. Fixes: cb21ac58 ("bpftool: Add gen subcommand manpage") Reported-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191218221707.2552199-1-andriin@fb.com
-
Andrii Nakryiko authored
Change the format string that referred to just a single argument out of the two available. Some versions of libc can reject such a format string. Reported-by: Nikita Shirokov <tehnerd@tehnerd.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191218214314.2403729-1-andriin@fb.com
-
- 18 Dec, 2019 5 commits
-
-
Alexei Starovoitov authored
Andrii Nakryiko says: ==================== Simplify skeleton usage by embedding the source BPF object file inside the skeleton itself. This allows keeping the skeleton and object file in sync at all times, with no chance of confusion. Also, add the bpftool-gen.rst manpage, explaining the concepts and ideas behind the skeleton. Its examples section includes a complete small BPF application utilizing the skeleton, as a demonstration of the API. Patch #2 also removes BPF_EMBED_OBJ, as there is currently no use of it. v2->v3: - (void) in no-args function (Alexei); - bpftool-gen.rst code block formatting fix (Alexei); - simplified xxx__create_skeleton to fill in obj and return error code; v1->v2: - remove whitespace from empty lines in code blocks (Yonghong). ==================== Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add bpftool-gen.rst, describing the skeleton at a high level. Also include a small, but complete, example BPF app (BPF side, userspace side, generated skeleton) in the examples section to demonstrate the skeleton API and its usage. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191218052552.2915188-4-andriin@fb.com
-
Andrii Nakryiko authored
Drop BPF_EMBED_OBJ and struct bpf_embed_data now that the skeleton automatically embeds the contents of its source object file. While BPF_EMBED_OBJ is useful independently of the skeleton, we currently don't have any use cases utilizing it, so let's remove it until/if we need it. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191218052552.2915188-3-andriin@fb.com
-
Andrii Nakryiko authored
Embed the contents of the BPF object file used for BPF skeleton generation inside the skeleton itself. This allows keeping the BPF object file and its skeleton in sync at all times, and simplifies skeleton instantiation. Also switch existing selftests to not require BPF_EMBED_OBJ anymore. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191218052552.2915188-2-andriin@fb.com
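For illustration, a hedged sketch of the resulting workflow ("example" is a hypothetical object name; the header would come from running 'bpftool gen skeleton example.bpf.o'):

  #include "example.skel.h"

  int main(void)
  {
          struct example *skel;
          int err;

          skel = example__open_and_load();   /* the BPF object itself is embedded in the skeleton */
          if (!skel)
                  return 1;
          err = example__attach(skel);
          if (!err) {
                  /* ... run the application ... */
          }
          example__destroy(skel);
          return err != 0;
  }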
-
Andrii Nakryiko authored
Libbpf tries to recognize the BPF program type based on its section name during the bpf_object__open() phase. This is not strictly enforced, and user code has the ability to specify/override the correct BPF program type after open. But if a BPF program uses a custom section name, libbpf will still emit warnings, which can be quite annoying to users. This patch reduces the log level of the informational messages emitted by libbpf when a section name is not canonical. Users can still get the list of all supported section names as a debug-level message. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191217234228.1739308-1-andriin@fb.com
-
- 17 Dec, 2019 2 commits
-
-
Toke Høiland-Jørgensen authored
This fixes two issues with the newly introduced libbpf_common.h file: - The header failed to include <string.h> for the definition of memset() - The new file was not included in the install_headers rule in the Makefile Both of these issues cause breakage when installing libbpf with 'make install' and trying to use it in applications. Fixes: 544402d4 ("libbpf: Extract common user-facing helpers") Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191217112810.768078-1-toke@redhat.com
-
Andrii Nakryiko authored
Similarly to the bpftool/libbpf output, make selftests/bpf emit a succinct per-item output line. Output is roughly as follows:

  $ make
  ...
  CLANG-LLC [test_maps] pyperf600.o
  CLANG-LLC [test_maps] strobemeta.o
  CLANG-LLC [test_maps] pyperf100.o
  EXTRA-OBJ [test_progs] cgroup_helpers.o
  EXTRA-OBJ [test_progs] trace_helpers.o
  BINARY test_align
  BINARY test_verifier_log
  GEN-SKEL [test_progs] fexit_bpf2bpf.skel.h
  GEN-SKEL [test_progs] test_global_data.skel.h
  GEN-SKEL [test_progs] sendmsg6_prog.skel.h
  ...

To see the actual command invocation, verbose mode can be turned on with the V=1 argument:

  $ make V=1
  ... very verbose output ...

Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20191217061425.2346359-1-andriin@fb.com
-