- 16 Jan, 2018 2 commits
-
-
Wei Yongjun authored
Fixes the following sparse warnings:

kernel/bpf/cpumap.c:146:6: warning: symbol '__cpu_map_queue_destructor' was not declared. Should it be static?
kernel/bpf/cpumap.c:225:16: warning: symbol 'cpu_map_build_skb' was not declared. Should it be static?
kernel/bpf/cpumap.c:340:26: warning: symbol '__cpu_map_entry_alloc' was not declared. Should it be static?
kernel/bpf/cpumap.c:398:6: warning: symbol '__cpu_map_entry_free' was not declared. Should it be static?
kernel/bpf/cpumap.c:441:6: warning: symbol '__cpu_map_entry_replace' was not declared. Should it be static?
kernel/bpf/cpumap.c:454:5: warning: symbol 'cpu_map_delete_elem' was not declared. Should it be static?
kernel/bpf/cpumap.c:467:5: warning: symbol 'cpu_map_update_elem' was not declared. Should it be static?
kernel/bpf/cpumap.c:505:6: warning: symbol 'cpu_map_free' was not declared. Should it be static?

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
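Each of these symbols is used only within cpumap.c, so the fix is to give them internal linkage. A minimal sketch of the pattern (illustrative function, not the actual cpumap code):

```c
/* A function used only within this translation unit should be declared
 * static; it then has internal linkage and sparse no longer warns that
 * no external prototype exists for it.
 */
static int helper_used_only_here(int x)
{
	return x * 2;
}
```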
-
Roman Gushchin authored
Bpftool doesn't recognize BPF_PROG_TYPE_CGROUP_DEVICE programs, so the prog show command prints the numeric type value:

  $ bpftool prog show
  1: type 15  name bpf_prog1  tag ac9f93dbfd6d9b74
          loaded_at Jan 15/07:58  uid 0
          xlated 96B  jited 105B  memlock 4096B

This patch defines the corresponding textual representation:

  $ bpftool prog show
  1: cgroup_device  name bpf_prog1  tag ac9f93dbfd6d9b74
          loaded_at Jan 15/07:58  uid 0
          xlated 96B  jited 105B  memlock 4096B

Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: Quentin Monnet <quentin.monnet@netronome.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
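bpftool resolves these names from a string table indexed by program type; a hedged sketch of that pattern (enum values, names and helpers here are illustrative, not bpftool's exact source):

```c
#include <stdio.h>

enum demo_prog_type {
	DEMO_PROG_TYPE_UNSPEC = 0,
	DEMO_PROG_TYPE_CGROUP_DEVICE = 15,
	DEMO_PROG_TYPE_MAX,
};

/* Designated initializers leave unknown types as NULL entries. */
static const char * const prog_type_name[DEMO_PROG_TYPE_MAX] = {
	[DEMO_PROG_TYPE_UNSPEC]        = "unspec",
	[DEMO_PROG_TYPE_CGROUP_DEVICE] = "cgroup_device",
};

static void print_prog_type(unsigned int type)
{
	if (type < DEMO_PROG_TYPE_MAX && prog_type_name[type])
		printf("%s", prog_type_name[type]);  /* known: print name */
	else
		printf("type %u", type);             /* unknown: fall back */
}
```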
-
- 14 Jan, 2018 16 commits
-
-
Daniel Borkmann authored
Jakub Kicinski says:

====================
This set adds support for creating maps on networking devices. BPF is programs + maps; the pure program offload has been around for quite some time, and this patchset adds the map part of the equation.

Maps are allocated on the target device from the start; there is no host copy when a map is created on the device. Device maps are represented by struct bpf_offloaded_map, regardless of type. Host programs can't access such maps; access is only possible from a program also loaded to the same device and/or via the BPF syscall. Offloaded programs are currently only allowed to perform lookups; the control plane is responsible for populating the maps.

For brevity only infrastructure and basic NFP patches are included. Target device reporting, netdevsim and tests will follow up, as well as some further optimizations to the NFP code.

v2:
 - leave out the array maps, we will add them trivially later to avoid merge conflicts with ongoing spectre & meltdown mitigations.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Plug in to the stack's map offload callbacks for BPF map offload. The get-next call needs some special handling on the FW side: since we can't send a NULL pointer to the FW, there is a separate get-first-entry FW command. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Map memory needs to use 40 bit addressing. Add handling of such accesses. Since 40 bit addresses are formed using both 32 bit operands, we need to pre-calculate the actual address instead of adding in the offset inside the instruction, as we did in 32 bit mode. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Verify that our current constraints on the location of the key are met, and generate the code for calling map lookup on the datapath. New relocation types have to be added for helpers and return addresses. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Immediate loads are used to load the return address of a helper. We need to be able to update those loads for relocations. Immediate loads can be slightly more complex and spread over two instructions in general, but here we only care about simple loads of small (< 65k) constants, so complex cases are not handled. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Parse helper function and supported map FW TLV capabilities. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
Implement calls for FW map communication. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
For map support we will need to send and receive control messages. Add basic support for sending a message to FW, and waiting for a reply. Control messages are tagged with a 16 bit ID. Add a simple ID allocator and make sure we don't allow too many messages in flight, to avoid request <> reply mismatches. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
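A minimal sketch of that idea (not the NFP code): hand out tags from a small bitmap and cap how many requests can be outstanding, so every reply can always be matched back to a request. The names and the in-flight limit are illustrative.

```c
#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define CMSG_MAX_INFLIGHT 64	/* illustrative cap on outstanding requests */

struct cmsg_tags {
	spinlock_t lock;
	DECLARE_BITMAP(used, CMSG_MAX_INFLIGHT);
};

/* Allocate a free tag, or -EAGAIN when too many messages are in flight. */
static int cmsg_tag_alloc(struct cmsg_tags *t)
{
	int tag;

	spin_lock(&t->lock);
	tag = find_first_zero_bit(t->used, CMSG_MAX_INFLIGHT);
	if (tag == CMSG_MAX_INFLIGHT)
		tag = -EAGAIN;
	else
		set_bit(tag, t->used);
	spin_unlock(&t->lock);

	return tag;
}

/* Release a tag once its reply has been received (or timed out). */
static void cmsg_tag_free(struct cmsg_tags *t, u16 tag)
{
	clear_bit(tag, t->used);
}
```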
-
Jakub Kicinski authored
To be able to split code into reasonable chunks we need to add the map data structures already. Later patches will add code piece by piece. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
BPF map offload follows a similar path to program offload. At creation time users may specify the ifindex of the device on which they want to create the map. The map will be validated by the kernel's .map_alloc_check callback and the device driver will be called for the actual allocation. The map will have an empty set of operations associated with it (save for alloc and free callbacks). The real device callbacks are kept in map->offload->dev_ops because they have slightly different signatures. Map operations are called in process context so the driver may communicate with HW freely, msleep(), wait() etc.

Map alloc and free callbacks are muxed via the existing .ndo_bpf, and are always called with the rtnl lock held. Maps and programs are guaranteed to be destroyed before .ndo_uninit (i.e. before unregister_netdev() returns). Map callbacks are invoked with bpf_devs_lock *read* locked; drivers must take care of exclusive locking if necessary.

All offload-specific branches are marked with unlikely() (through bpf_map_is_dev_bound()), given that the branch penalty will be negligible compared to IO anyway, and we don't want to penalize the SW path unnecessarily.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
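From user space the offload request boils down to one extra field in the map-create attributes. A hedged sketch of creating a device-bound map via the raw bpf() syscall, with error handling trimmed and map_ifindex being the attribute this series introduces:

```c
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Create a hash map allocated on the device identified by ifindex.
 * Returns the map fd, or -1 with errno set on failure.
 */
static int create_offloaded_map(unsigned int ifindex)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_type    = BPF_MAP_TYPE_HASH;
	attr.key_size    = 4;
	attr.value_size  = 8;
	attr.max_entries = 64;
	attr.map_ifindex = ifindex;	/* ask for allocation on the device */

	return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
}
```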
-
Jakub Kicinski authored
Add a helper to check whether the netdev could be found and whether it has a .ndo_bpf callback. There is no need to check the callback every time it's invoked; ndos can't reasonably be swapped for a set without .ndo_bpf while a program is loaded. bpf_dev_offload_check() will also be used by map offload. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
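The check itself is small; a sketch of roughly what such a helper looks like, based on the semantics described above rather than quoting the exact upstream body:

```c
#include <linux/errno.h>
#include <linux/netdevice.h>

/* Reject offload requests when the netdev is gone or does not implement
 * the BPF offload ndo at all.
 */
static int bpf_dev_offload_check(struct net_device *netdev)
{
	if (!netdev)
		return -EINVAL;
	if (!netdev->netdev_ops->ndo_bpf)
		return -EOPNOTSUPP;
	return 0;
}
```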
-
Jakub Kicinski authored
With map offload coming, we need to call the program offload structure something less ambiguous. Pure rename, no functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
All map types reimplement the field-by-field copy of union bpf_attr members into struct bpf_map. Add a helper to perform this operation. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
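The shared helper just centralizes those repeated assignments; a hedged sketch of what the common copy amounts to (name and exact field set are illustrative, mirroring the attributes every map type stores):

```c
#include <linux/bpf.h>

/* Copy the user-visible attributes that every map type keeps in
 * struct bpf_map, so individual .map_alloc callbacks no longer have to.
 */
static void demo_map_init_from_attr(struct bpf_map *map,
				    const union bpf_attr *attr)
{
	map->map_type    = attr->map_type;
	map->key_size    = attr->key_size;
	map->value_size  = attr->value_size;
	map->max_entries = attr->max_entries;
	map->map_flags   = attr->map_flags;
}
```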
-
Jakub Kicinski authored
Use the new callback to perform allocation checks for hash maps. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
A number of attribute checks are currently performed after the hashtab is already allocated. Move them so that they can be split out into the check function later on. The checks now have to be performed on the attr union directly instead of on the members of bpf_map, since bpf_map will be allocated later. No functional changes. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Jakub Kicinski authored
.map_alloc callbacks contain a number of checks validating user-provided map attributes against constraints of a particular map type. For offloaded maps we will need to check map attributes without actually allocating any memory on the host. Add a new callback for validating attributes before any memory is allocated. This callback can be selectively implemented by map types for sharing code with offloads, or simply to separate the logical steps of validation and allocation. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
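The new hook lets a map type validate attributes with no host allocation at all. A minimal sketch of such a check under illustrative names (the real hash map moves its existing checks into a callback like this and lists it as .map_alloc_check in its bpf_map_ops, alongside the unchanged .map_alloc):

```c
#include <linux/bpf.h>
#include <linux/errno.h>

/* Validate user-supplied attributes only; nothing is allocated here,
 * which is exactly what offloaded maps need on the host side.
 */
static int demo_map_alloc_check(union bpf_attr *attr)
{
	if (attr->max_entries == 0 ||
	    attr->key_size == 0 || attr->value_size == 0)
		return -EINVAL;
	if (attr->map_flags & ~BPF_F_NO_PREALLOC)	/* illustrative flag mask */
		return -EINVAL;
	return 0;
}
```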
-
- 13 Jan, 2018 6 commits
-
-
Alexei Starovoitov authored
Masami Hiramatsu says:

====================
Here is the 5th version of the patches to move the error injection table out of kprobes. This version fixes a bug and updates fail-function to support multiple-function error injection.

Here is the previous version: https://patchwork.ozlabs.org/cover/858663/

Changes in v5:
 - [3/5] Fix a bug that within_error_injection always returns false.
 - [5/5] Update to support multiple-function error injection.

Thank you,
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Masami Hiramatsu authored
Support the in-kernel fault-injection framework via debugfs. This allows you to inject a conditional error into a specified function using debugfs interfaces. Here is the result of the test script described in Documentation/fault-injection/fault-injection.txt:

===========
# ./test_fail_function.sh
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227404 s, 46.1 MB/s
btrfs-progs v4.4
See http://btrfs.wiki.kernel.org for more information.

Label:              (null)
UUID:               bfa96010-12e9-4360-aed0-42eec7af5798
Node size:          16384
Sector size:        4096
Filesystem size:    1001.00MiB
Block group profiles:
  Data:     single            8.00MiB
  Metadata: DUP              58.00MiB
  System:   DUP              12.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1  1001.00MiB  /dev/loop2

mount: mount /dev/loop2 on /opt/tmpmnt failed: Cannot allocate memory
SUCCESS!
===========

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Masami Hiramatsu authored
Add injectable error types for each error-injectable function.

One motivation of error injection testing is to find software flaws, mistakes or mis-handlings of expectable errors. If we find such flaws by the test, that is a program bug, so we need to fix it. But if the tester mis-inputs the error (e.g. just returns a success code without processing anything), it causes unexpected behavior even if the caller is correctly programmed to handle any errors. That is not what we want to test by error injection.

To clarify what type of errors the caller must expect for each injectable function, this introduces injectable error types:

 - EI_ETYPE_NULL : means the function will return NULL if it fails. No ERR_PTR, just a NULL.
 - EI_ETYPE_ERRNO : means the function will return -ERRNO if it fails.
 - EI_ETYPE_ERRNO_NULL : means the function will return -ERRNO (ERR_PTR) or NULL.

The ALLOW_ERROR_INJECTION() macro is extended to take one of NULL, ERRNO or ERRNO_NULL to record the error type for each function, e.g.

  ALLOW_ERROR_INJECTION(open_ctree, ERRNO)

These error types are shown in debugfs as below.

  ====
  / # cat /sys/kernel/debug/error_injection/list
  open_ctree [btrfs] ERRNO
  io_ctl_init [btrfs] ERRNO
  ====

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
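Annotating a function then looks like the following; a minimal sketch assuming an illustrative function that reports failure with a plain -ERRNO:

```c
#include <linux/errno.h>
#include <linux/error-injection.h>

struct demo_dev {
	bool ready;
};

/* A function whose -ERRNO failure path callers already have to handle. */
int demo_prepare_resource(struct demo_dev *dev)
{
	if (!dev->ready)
		return -EBUSY;
	return 0;
}
/* Record that an injected failure here is a plain -ERRNO value. */
ALLOW_ERROR_INJECTION(demo_prepare_resource, ERRNO);
```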
-
Masami Hiramatsu authored
The error-injection framework is not limited to kprobes or bpf; other kernel subsystems can use it freely for checking the safety of error injection, e.g. livepatch, ftrace etc. So separate the error-injection framework from kprobes.

Some differences have been made:

 - the "kprobe" word is removed from any APIs/structures.
 - BPF_ALLOW_ERROR_INJECTION() is renamed to ALLOW_ERROR_INJECTION() since it is not limited to BPF either.
 - CONFIG_FUNCTION_ERROR_INJECTION is the config item of this feature. It is automatically enabled if the arch supports the error injection feature for kprobes or ftrace etc.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Masami Hiramatsu authored
Compare the instruction pointer with the original one on the stack instead of using the per-cpu bpf_kprobe_override flag. This patch also consolidates the reset_current_kprobe() and preempt_enable_no_resched() blocks; those can be done in one place. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Masami Hiramatsu authored
Check whether an error injectable event is on a function entry or not. Currently it checks whether the event is an ftrace-based kprobe, but that is wrong. It should check whether the event is on the entry of the target function. Since error injection will override a function to just return with a modified return value, that operation must be done before the target function starts building its stack frame. As a side effect, bpf error injection no longer needs to depend on the function tracer. It can work with sw-breakpoint based kprobe events too. Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 12 Jan, 2018 16 commits
-
-
Jesper Dangaard Brouer authored
As pointed out by Daniel Borkmann, using bpf_target_off() is not necessary for xdp_rxq_info when extracting queue_index and ifindex, as these members are u32, like BPF_W. Also fix a trivial spelling mistake introduced in the same commit. Fixes: 02dd3291 ("bpf: finally expose xdp_rxq_info to XDP bpf-programs") Reported-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
David S. Miller authored
Peng Li says:

====================
hns3: add some new features and fix some bugs

This patchset adds 3 ethtool features: get_channels, get_coalesce and set_coalesce, and fixes some bugs.

[patch 1/11] adds ethtool_ops.get_channels (ethtool -l) support for the VF.
[patch 2/11] removes the TSO config command from the VF driver, as only the main PF can configure the TSO MSS length according to the hardware.
[patch 3/11 - 4/11] add ethtool_ops {get|set}_coalesce (ethtool -c/-C) support to the PF.
[patch 5/11 - 9/11] fix some bugs related to {get|set}_coalesce.
[patch 10/11 - 11/11] fix the features handling in hns3_nic_set_features(). The local variable "changed" was defined to indicate which features changed, but was used only for the feature NETIF_F_HW_VLAN_CTAG_RX. Add checking to improve the reliability.

---
Change log:
V1 -> V2:
1, Rewrite the cover letter as requested by David Miller.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
It's necessary to check whether the hook is defined before calling it, to improve reliability. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Local variable "changed" was defined to indicates features changed, but was used only for feature NETIF_F_HW_VLAN_CTAG_RX. Add checking for other features. Fixes: 052ece6d ("net: hns3: add ethtool related offload command") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
If int_gl_idx is not set, the default interrupt coalescing index is 0. The TX queues and the RX queues will then both use GL0 as the interrupt coalescing GL switch, but it should be GL1 for TX queues and GL0 for RX queues. This patch adds the int_gl_idx setup for TX queues and RX queues. Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
Previously, the driver used 2us as the GL unit. The time unit the ethtool "-c" and "-C" commands use is 1us, so the GL unit the driver actually uses now is 1us. This patch changes the unit of the GL value macros from 2us to 1us. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
Since the TX GL and the RX GL need to be set separately, hns3_set_vector_coalesc_gl() has been replaced with hns3_set_vector_coalesce_rx_gl() and hns3_set_vector_coalesce_tx_gl(). This patch removes hns3_set_vector_coalesc_gl(). Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
The GL update function uses the maximum GL value between tx_int_gl and rx_int_gl to set both the new tx_int_gl and the new rx_int_gl. Therefore, the user cannot enable TX GL self-adaptive or RX GL self-adaptive individually. This patch refactors the code to update the TX GL and the RX GL separately, so that the user can enable TX GL self-adaptive or RX GL self-adaptive individually. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
In the hardware, the configurable coalescing registers include GL0, GL1 and GL2. In the driver, the TX queues use register GL1 and the RX queues use register GL0. This function initializes the interrupt coalescing configuration, but does not distinguish between the TX direction and the RX direction, which may cause confusion. This patch refactors the function to initialize the TX GL and the RX GL separately. The initialization of the related variables is also added to this patch. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
This patch adds ethtool_ops.set_coalesce support to PF. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
This patch adds ethtool_ops.get_coalesce support to PF. Whilst our hardware supports per queue values, external interfaces support only a single shared value. As such we use the values for queue 0. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
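A hedged sketch of the shape of such a callback (the driver-side structs and field names are illustrative; the struct ethtool_coalesce fields are the standard ones):

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct demo_ring {
	u32 int_gl;		/* interrupt coalescing GL value, in usecs */
	bool gl_adapt_enable;	/* GL self-adaptive on/off */
};

struct demo_priv {
	struct demo_ring *tx_ring;
	struct demo_ring *rx_ring;
};

/* Report queue 0's settings, since ethtool -c exposes only one value
 * per direction for the whole device.
 */
static int demo_get_coalesce(struct net_device *netdev,
			     struct ethtool_coalesce *cmd)
{
	struct demo_priv *priv = netdev_priv(netdev);

	cmd->use_adaptive_tx_coalesce = priv->tx_ring[0].gl_adapt_enable;
	cmd->use_adaptive_rx_coalesce = priv->rx_ring[0].gl_adapt_enable;
	cmd->tx_coalesce_usecs = priv->tx_ring[0].int_gl;
	cmd->rx_coalesce_usecs = priv->rx_ring[0].int_gl;

	return 0;
}
```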
-
Peng Li authored
Only the main PF can configure the TSO MSS length according to the hardware. This patch removes the TSO config command from the VF driver. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
This patch adds ethtool get_channels() support for the VF. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
David S. Miller authored
BPF alignment tests got a conflict because the registers are output as Rn_w instead of just Rn in net-next, and in net a fixup for a testcase prohibits logical operations on pointers before using them. Also, we should attempt to patch BPF call args if JIT always on is enabled. Instead, if we fail to JIT the subprogs we should pass an error back up and fail immediately. Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://github.com/ceph/ceph-client
Linus Torvalds authored
Pull ceph fixes from Ilya Dryomov:
 "Two rbd fixes for 4.12 and 4.2 issues respectively, marked for stable"

* tag 'ceph-for-4.15-rc8' of git://github.com/ceph/ceph-client:
  rbd: set max_segments to USHRT_MAX
  rbd: reacquire lock should update lock owner client id
-
git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio
Linus Torvalds authored
Pull GPIO fix from Linus Walleij:
 "Fix a raw vs elaborate GPIO descriptor bug introduced by yours truly"

* tag 'gpio-v4.15-4' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpio: Add missing open drain/source handling to gpiod_set_value_cansleep()
-