- 09 Apr, 2021 9 commits
-
-
Yauheni Kaliuta authored
Test map__set_inner_map_fd() interaction with map-in-map initialization. Use a hashmap of maps just to make it different from the existing array of maps. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-9-yauheni.kaliuta@redhat.com
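For context, a minimal sketch of how such an API is typically driven from userspace (map names and error handling are illustrative, not taken from the patch):

    int inner_fd = bpf_create_map(BPF_MAP_TYPE_ARRAY, sizeof(__u32),
                                  sizeof(__u32), 1, 0);
    struct bpf_map *outer = bpf_object__find_map_by_name(obj, "outer_hash");

    /* tell libbpf which inner map definition the hash-of-maps should use,
     * before bpf_object__load() creates the outer map */
    bpf_map__set_inner_map_fd(outer, inner_fd);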
-
Yauheni Kaliuta authored
Set bpf table sizes dynamically according to the runtime page size value. Do not switch to ASSERT macros; keep CHECK for consistency with the rest of the test. That conversion can be a separate cleanup patch. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-8-yauheni.kaliuta@redhat.com
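A hedged sketch of the idea (variable names illustrative): query the page size at runtime and size the map from it before the object is loaded:

    long page_size = sysconf(_SC_PAGESIZE);

    /* resize the map before load so its entries match the runtime page size */
    bpf_map__set_max_entries(map, page_size / sizeof(__u64));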
-
Andrii Nakryiko authored
The API gives access to the inner map for map-in-map types (array or hash of maps). It will be used to dynamically set max_entries in the inner map. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-7-yauheni.kaliuta@redhat.com
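A hedged sketch of how such an accessor is typically used together with dynamic sizing (assuming it is the bpf_map__inner_map() getter; variable names illustrative):

    struct bpf_map *inner = bpf_map__inner_map(outer_map);

    /* resize the inner map before the object is loaded */
    bpf_map__set_max_entries(inner, sysconf(_SC_PAGESIZE) / sizeof(__u64));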
-
Yauheni Kaliuta authored
Replace the hardcoded 4096 with the runtime page size value in the userspace part of the test and set bpf table sizes dynamically according to that value. Do not switch to ASSERT macros; keep CHECK for consistency with the rest of the test. That conversion can be a separate cleanup patch. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-6-yauheni.kaliuta@redhat.com
-
Yauheni Kaliuta authored
Replace the hardcoded 4096 with the runtime page size value in the userspace part of the test and set bpf table sizes dynamically according to that value. Do not switch to ASSERT macros; keep CHECK for consistency with the rest of the test. That conversion can be a separate cleanup patch. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-5-yauheni.kaliuta@redhat.com
-
Yauheni Kaliuta authored
Use ASSERT to check results but keep CHECK where a format string was used to report errors. Use bpf_map__set_max_entries() to set the map size dynamically from userspace according to the page size. Zero-initialize the variable in the bpf prog; otherwise it causes problems with some versions of Clang. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-4-yauheni.kaliuta@redhat.com
-
Yauheni Kaliuta authored
Since there is no convenient way for a bpf program to get PAGE_SIZE from inside the kernel, pass the value from userspace. Zero-initialize the variable in the bpf prog; otherwise it causes problems with some versions of Clang. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-3-yauheni.kaliuta@redhat.com
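On the BPF program side this is typically just a zero-initialized global (a sketch; the variable name is illustrative):

    /* explicitly zero-initialized so it lands in .bss and does not cause
     * problems with some Clang versions; userspace fills in the real
     * page size through the skeleton */
    __u32 page_size = 0;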
-
Yauheni Kaliuta authored
Switch the test to use a BPF skeleton to save some boilerplate and make it easy to access the bpf program's bss segment. The latter will be used to pass PAGE_SIZE from userspace, since there is no convenient way for a bpf program to get it from inside the kernel. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-2-yauheni.kaliuta@redhat.com
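Roughly, the skeleton flow this enables looks like the following sketch (the test_prog skeleton name is hypothetical):

    struct test_prog *skel = test_prog__open_and_load();

    if (!skel)
        return -1;

    /* the skeleton exposes the program's .bss directly */
    skel->bss->page_size = sysconf(_SC_PAGESIZE);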
-
Yauheni Kaliuta authored
As pointed out by Andrii Nakryiko, _version is useless now; remove it. Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210408061310.95877-1-yauheni.kaliuta@redhat.com
-
- 06 Apr, 2021 2 commits
-
-
Muhammad Usama Anjum authored
bpf_preload_lock is already defined with DEFINE_MUTEX(). There is no need to initialize it again. Remove the extraneous initialization. Signed-off-by: Muhammad Usama Anjum <musamaanjum@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210405194904.GA148013@LEGION
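For reference, DEFINE_MUTEX() already produces a fully initialized mutex, so the runtime call added nothing:

    static DEFINE_MUTEX(bpf_preload_lock);   /* statically initialized */

    /* ... which makes a later mutex_init(&bpf_preload_lock) redundant */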
-
Cong Wang authored
These comments in udp_bpf_update_proto() are copied from the original TCP code and apparently do not apply to UDP. Just remove them. Reported-by: Jakub Sitnicki <jakub@cloudflare.com> Signed-off-by: Cong Wang <cong.wang@bytedance.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Link: https://lore.kernel.org/bpf/20210403052715.13854-1-xiyou.wangcong@gmail.com
-
- 05 Apr, 2021 1 commit
-
-
Hengqi Chen authored
Add missing ')' for KERNEL_VERSION macro. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210405040119.802188-1-hengqi.chen@gmail.com
-
- 03 Apr, 2021 1 commit
-
-
Martin KaFai Lau authored
The tracing test and the recent kfunc call test require CONFIG_DYNAMIC_FTRACE. This patch adds it to the config file. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20210403002921.3419721-1-kafai@fb.com
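The addition is presumably a single line in the selftests config fragment (path assumed: tools/testing/selftests/bpf/config):

    CONFIG_DYNAMIC_FTRACE=y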
-
- 02 Apr, 2021 27 commits
-
-
Yang Yingliang authored
Remove a redundant semicolon in finalize_btf_ext(). Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20210402012634.1965453-1-yangyingliang@huawei.com
-
Wan Jiabing authored
struct btf_type is declared twice. One declaration already appears at line 35, so the later one is not needed; remove the duplicate. Signed-off-by: Wan Jiabing <wanjiabing@vivo.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20210401072037.995849-1-wanjiabing@vivo.com
-
Wan Jiabing authored
struct bpf_prog is declared twice. There is one declaration at line 18 that is independent of the macro, so the later one is not needed. Remove the duplicate. Signed-off-by: Wan Jiabing <wanjiabing@vivo.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20210401064637.993327-1-wanjiabing@vivo.com
-
He Fengqing authored
The 'stack' parameter is not used in ___bpf_prog_run() since f696b8f4 ("bpf: split bpf core interpreter"); the base address has already been set in the FP register. Consequently, remove it. Signed-off-by: He Fengqing <hefengqing@huawei.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20210331075135.3850782-1-hefengqing@huawei.com
-
John Fastabend authored
With a relatively recent clang master branch, test_map skips a section: "libbpf: elf: skipping unrecognized data section(5) .rodata.str1.1". The cause is some pointless strings from bpf_printks in the BPF program loaded during testing. After just removing the prints to fix the above error, Daniel pointed out that the program is a bit pointless and could simply be an empty program returning SK_PASS. Here we do just that and return SK_PASS. This program is used with the test_maps selftests to test insert/remove of a program into the sockmap and sockhash maps. It is not testing the actual functionality of the TCP sockmap programs; those are tested from test_sockmap. So we don't lose any test coverage, and the above warning is fixed. This original test was added before test_sockmap existed and has been copied around ever since; clean it up now. Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/161731595664.74613.1603087410166945302.stgit@john-XPS-13-9370
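The resulting program is essentially the trivial verdict program; a sketch of what that looks like:

    SEC("sk_skb")
    int bpf_prog(struct __sk_buff *skb)
    {
        /* accept everything; the test only exercises map insert/remove */
        return SK_PASS;
    }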
-
Eric Dumazet authored
Group all the often used fields in the first cache line, to reduce cache line misses. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Order fields to increase locality for the most used protocols. udplite and icmp are moved to the end, as is proc_net_devsnmp6, which is not used in the fast path. This potentially saves one cache line miss for typical TCP/UDP over IPv4/IPv6. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
If "type_a->nfcid_len" is too large, it leads to memory corruption in pn533_target_found_type_a() when we do: memcpy(nfc_tgt->nfcid1, tgt_type_a->nfcid_data, nfc_tgt->nfcid1_len); Fixes: c3b1e1e8 ("NFC: Export NFCID1 from pn533") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
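The fix is presumably a bounds check of roughly this shape before the copy can happen (placement and surrounding code are assumptions):

    /* nfcid1[] is at most NFC_NFCID1_MAXSIZE bytes, so reject oversized
     * lengths instead of letting the memcpy() overrun the destination */
    if (type_a->nfcid_len > NFC_NFCID1_MAXSIZE)
        return false;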
-
David S. Miller authored
Ioana Ciornei says: ==================== dpaa2-eth: add rx copybreak support DMA unmapping, allocating a new buffer and DMA mapping it back on the refill path is really not that efficient. Proper buffer recycling (page pool, flipping the page and using the other half) cannot be done for DPAA2 since it's not a ring based controller; it rather deals with multiple queues which all get their buffers from the same buffer pool on Rx. To circumvent these limitations, add support for Rx copybreak in dpaa2-eth. Below you can find a summary of the tests that were run to end up with the default rx copybreak value of 512. A bit about the setup: an LS2088A SoC, 8 x Cortex A72 @ 1.8GHz, IPfwd zero loss test @ 20Gbit/s throughput. I tested multiple frame sizes to get an idea where the break-even point is. Here are 2 sets of results, where (1) is the baseline and (2) is just allocating a new skb for all frame sizes received (as if the copybreak were equal to the MTU). All numbers are in Mpps.

frame size       64    128   256   512   640   768   896
(1) baseline     3.23  3.23  3.24  3.21  3.10  2.76  2.71
(2) always copy  3.95  3.88  3.79  3.62  3.30  3.02  2.65

It seems that even for 512 byte frame sizes it's comfortably better to allocate a new skb. After that, we see diminishing returns or even worse. Changes in v2: - properly marked dpaa2_eth_copybreak as static ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
It's useful, especially for debugging purposes, to have the Rx copybreak value changeable at runtime. Export it as an ethtool tunable. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
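A hedged sketch of how a driver typically exposes such a value through the ethtool tunable interface (function and field names are assumptions, not lifted from the patch):

    static int dpaa2_eth_set_tunable(struct net_device *net_dev,
                                     const struct ethtool_tunable *tuna,
                                     const void *data)
    {
        struct dpaa2_eth_priv *priv = netdev_priv(net_dev);

        switch (tuna->id) {
        case ETHTOOL_RX_COPYBREAK:
            priv->rx_copybreak = *(u32 *)data;   /* assumed field */
            return 0;
        default:
            return -EOPNOTSUPP;
        }
    }

From userspace this would then be reachable with something like "ethtool --set-tunable <iface> rx-copybreak <bytes>".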
-
Ioana Ciornei authored
DMA unmapping, allocating a new buffer and DMA mapping it back on the refill path is really not that efficient. Proper buffer recycling (page pool, flipping the page and using the other half) cannot be done for DPAA2 since it's not a ring based controller; it rather deals with multiple queues which all get their buffers from the same buffer pool on Rx. To circumvent these limitations, add support for Rx copybreak. For small packets, instead of creating an skb around the buffer in which the frame was received, allocate a new skb altogether, copy the contents of the frame and release the initial page back into the buffer pool. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
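In rough, hedged pseudocode the Rx path change amounts to the following (names illustrative):

    if (frame_len <= rx_copybreak) {
        /* small frame: copy into a freshly allocated skb ... */
        skb = napi_alloc_skb(napi, frame_len);
        if (skb) {
            skb_put_data(skb, frame_data, frame_len);
            /* ... and hand the original page straight back to the pool */
            dpaa2_eth_recycle_buf(priv, ch, buf_addr);
            return skb;
        }
    }
    /* large frame (or allocation failure): build the skb around the
     * original buffer as before */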
-
Ioana Ciornei authored
Rename the dpaa2_eth_xdp_release_buf function to dpaa2_eth_recycle_buf, since in the next patches we'll be using the same recycle mechanism for the normal stack path besides XDP_DROP. Also, rename the array which holds the buffers to be recycled so that it does not have any reference to XDP. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Mat Martineau says: ==================== MPTCP: Miscellaneous changes Here is a collection of patches from the MPTCP tree: Patches 1 and 2 add some helpful MIB counters for connection information. Patch 3 cleans up some unnecessary checks. Patch 4 is a new feature, support for the MP_TCPRST option. This option is used when resetting one subflow within a MPTCP connection, and provides a reason code that the recipient can use when deciding how to adapt to the lost subflow. Patches 5-7 update the existing MPTCP selftests to improve timeout handling and to share better information when tests fail. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthieu Baerts authored
Very occasionally, MPTCP selftests fail. Yeah, I saw that at least once! Here we provide more details in case of errors with the mptcp_join.sh script, like it was done with mptcp_connect.sh; see commit 767389c8 ("selftests: mptcp: dump more info on errors"). Suggested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthieu Baerts authored
This avoids being impacted by packets sent between sub-tests. Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthieu Baerts authored
'mptcp_connect' already has a timeout for poll(), but in some cases it is not enough. With the "timeout" tool, we force the command to fail if it doesn't finish on time. Thanks to that, the script will continue and display details about the current state before marking the test as failed. Displaying this state is very important to be able to understand the issue. It is better to have our CI report the issue than just "the test hanged". Note that in mptcp_connect.sh, we were using a long timeout to validate the fact that we cannot create a socket if a sysctl is set; we don't need this timeout. In diag.sh, we want to send signals to mptcp_connect instances that have been started in the netns, but we cannot send this signal to 'timeout', otherwise that would stop the timeout and messages telling us SIGUSR1 has been received would be printed. Instead of trying to find the right PIDs and storing them in an array, we can simply use the output of 'ip netns pids', which is all the PIDs we want to send the signal to. Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/160 Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Westphal authored
The MPTCP reset option allows carrying an MPTCP-specific error code that provides more information on the nature of a connection reset. The reset option data received gets stored in the subflow context so it can be sent to userspace via the 'subflow closed' netlink event. When a subflow is closed, the desired error code that should be sent to the peer is also placed in the subflow context structure. If a reset is sent before subflow establishment could complete, e.g. on HMAC failure during an MP_JOIN operation, the mptcp skb extension is used to store the reset information. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
Currently we explicitly check for the first subflow being NULL in a couple of places, even though we don't need any special action in such a scenario. Just drop the unneeded checks to avoid confusion. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
We are not currently tracking the active MPTCP connection attempts. Let's add the related counters. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paolo Abeni authored
If the MPTCP protocol is unable to create a new token, the socket falls back to plain TCP; let's keep track of such events via a specific MIB counter. Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Shannon Nelson says: ==================== ionic: add PTP and hw clock support This patchset adds support for accessing the DSC hardware clock and for offloading PTP timestamping. Tx packet timestamping happens through a separate Tx queue set up with expanded completion descriptors that can report the timestamp. Rx timestamping can happen either on all queues, or on a separate timestamping queue when specific filtering is requested. Again, the timestamps are reported with the expanded completion descriptors. The timestamping offload ability is advertised but not enabled until an OS service asks for it. At that time the driver's queues are reconfigured to use the different completion descriptors and the private processing queues as needed. Reading the raw clock value comes through a new pair of values in the device info registers in BAR0. These high and low values are interpreted with help from new clock mask, mult, and shift values in the device identity information. First we add the ability to detect new queue features, then the handling of the new descriptor sizes. After adding the new interface structures, we start adding the support code, saving the advertising to the stack for last. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
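As a rough sketch, interpreting the raw hi/lo register pair with the identity-provided mask/mult/shift follows the usual fixed-point pattern (function and parameter names are assumptions):

    static u64 dsc_ticks_to_ns(u64 hi, u64 lo, u64 mask, u32 mult, u32 shift)
    {
        u64 ticks = ((hi << 32) | lo) & mask;

        /* standard conversion: ns = (ticks * mult) >> shift */
        return (ticks * mult) >> shift;
    }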
-
Shannon Nelson authored
Let the network stack know we've got support for timestamping the packets. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Add the new hwstamp stats to our ethtool stats output. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Add the get_ts_info() callback for ethtool support of timestamping information. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
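A hedged sketch of what a get_ts_info() callback typically reports (values and names illustrative, not taken from the patch):

    static int ionic_get_ts_info(struct net_device *netdev,
                                 struct ethtool_ts_info *info)
    {
        info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
                                SOF_TIMESTAMPING_RX_HARDWARE |
                                SOF_TIMESTAMPING_RAW_HARDWARE;
        info->phc_index = 0;   /* index of the registered PTP clock (placeholder) */
        info->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON);
        info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL);

        return 0;
    }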
-
Shannon Nelson authored
The Tx and Rx timestamped packets are handled through separate queues. Here we set them up, service them, and tear them down along with the normal Tx and Rx queues. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
We do hardware timestamping through a separate Tx queue, and optionally through a separate Rx queue. These queues are allocated, freed, and tracked separately from the basic queue arrays. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shannon Nelson authored
Add handling of the new Rx packet classification filter type. This simple bit of classification allows for steering packets to a separate Rx queue for processing. Signed-off-by: Allen Hubbe <allenbh@pensando.io> Signed-off-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
-