- 10 Jul, 2019 7 commits
-
-
Santosh Shilimkar authored
Connections with legitimate tos values can get into the usual connection race, which can result in a consumer reject. We don't want the tos value or protocol version to be demoted for such connections, otherwise the peers would end up with different tos values, which can result in no connection being established. For example, a peer-initiated connection with, say, tos 8 can get downgraded to tos 0 by the usual connection racing, which is not desirable. This patch fixes the above issue, introduced by commit d021fabf ("rds: rdma: add consumer reject"). Reported-by: Yanjun Zhu <yanjun.zhu@oracle.com> Tested-by: Yanjun Zhu <yanjun.zhu@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
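To illustrate the rule described above, here is a hypothetical sketch (not the code that was merged): only a connection still carrying the default tos of 0 is allowed to fall back to the compatibility protocol version on a reject; a connection requested with, say, tos 8 keeps its tos and proposed version and simply retries. The c_tos and c_proposed_version fields follow the RDS connection structure referenced elsewhere in this log; the helper itself is made up.

    /* Hypothetical helper, for illustration only: guard the downgrade path
     * so that connections with a non-default tos are never demoted.
     */
    static void rds_reject_fallback_sketch(struct rds_connection *conn)
    {
            if (conn->c_tos == 0) {
                    /* default-tos connections may fall back for compatibility */
                    conn->c_proposed_version = RDS_PROTOCOL_COMPAT_VERSION;
                    rds_conn_drop(conn);
            }
            /* non-default tos: leave c_tos and c_proposed_version untouched */
    }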
-
Gerd Rausch authored
The proper "tos" value needs to be returned to user-space (sockopt RDS_INFO_CONNECTIONS). Fixes: 3eb45036 ("rds: add type of service(tos) infrastructure") Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com> Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
-
Gerd Rausch authored
Prior to commit d021fabf ("rds: rdma: add consumer reject"), function "rds_rdma_cm_event_handler_cmn" would always honor a rejected connection attempt by issuing a "rds_conn_drop". The commit mentioned above added a "break", eliminating the "fallthrough" case and making the "rds_conn_drop" conditional: now it only happens if a "consumer defined" reject (i.e. "rdma_reject") carries an integer value of "1" inside "private_data":

    if (!conn)
            break;
    err = (int *)rdma_consumer_reject_data(cm_id, event, &len);
    if (!err || (err && ((*err) == RDS_RDMA_REJ_INCOMPAT))) {
            pr_warn("RDS/RDMA: conn <%pI6c, %pI6c> rejected, dropping connection\n",
                    &conn->c_laddr, &conn->c_faddr);
            conn->c_proposed_version = RDS_PROTOCOL_COMPAT_VERSION;
            rds_conn_drop(conn);
    }
    rdsdebug("Connection rejected: %s\n",
             rdma_reject_msg(cm_id, event->status));
    break;
    /* FALLTHROUGH */

A number of issues are worth mentioning here:

#1) Previous versions of the RDS code simply rejected a connection by calling "rdma_reject(cm_id, NULL, 0);", so the value of the payload in "private_data" will not be "1", but "0".

#2) The code has become dependent on host byte order and sizing. If one peer is big-endian, the other is little-endian, or there's a difference in sizeof(int) (e.g. ILP64 vs LP64), the *err check does not work as intended.

#3) There is no check of "len" to see if the data behind *err is even valid. Luckily, it appears that "rdma_reject(cm_id, NULL, 0)" will always carry 148 bytes of zeroized payload. But that should probably not be relied upon here.

#4) With the added "break;", we might as well drop the misleading "/* FALLTHROUGH */" comment.

This commit does _not_ address issue #2, as the sender would have to agree on a byte order as well.

Here is the sequence of messages in this observed error scenario:

Host-A is pre-QoS changes (excluding the commit mentioned above)
Host-B is post-QoS changes (including the commit mentioned above)

#1 Host-B issues a connection request via function "rds_conn_path_transition"; the connection state transitions to "RDS_CONN_CONNECTING".

#2 Host-A rejects the incompatible connection request (from #1). It does so by calling "rdma_reject(cm_id, NULL, 0);".

#3 Host-B receives an "RDMA_CM_EVENT_REJECTED" event (from #2). But since the code was changed in the way described above, it won't drop the connection here, simply because "*err == 0".

#4 Host-A issues a connection request.

#5 Host-B receives an "RDMA_CM_EVENT_CONNECT_REQUEST" event and ends up calling "rds_ib_cm_handle_connect". But since the state is already "RDS_CONN_CONNECTING" (as of #1), it ends up issuing a "rdma_reject" without dropping the connection:

    if (rds_conn_state(conn) == RDS_CONN_CONNECTING) {
            /* Wait and see - our connect may still be succeeding */
            rds_ib_stats_inc(s_ib_connect_raced);
    }
    goto out;

#6 Host-A receives an "RDMA_CM_EVENT_REJECTED" event (from #5), drops the connection and tries again (goto #4) until it gives up.

Tested-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
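Issue #3 above (the missing "len" validation) lends itself to a short example. Below is a hedged sketch, not the merged code, of what checking the reject payload length before dereferencing it could look like; the u8 type for "len" and the helper shape are assumptions, while rdma_consumer_reject_data() and RDS_RDMA_REJ_INCOMPAT come from the snippet quoted above.

    /* Sketch only: trust the consumer-reject payload solely when the peer
     * sent enough private_data bytes to hold the value we read from it.
     */
    static bool rds_reject_is_incompat_sketch(struct rdma_cm_id *cm_id,
                                              struct rdma_cm_event *event)
    {
            const int *err;
            u8 len = 0;

            err = rdma_consumer_reject_data(cm_id, event, &len);
            if (!err || len < sizeof(*err))
                    return false;   /* no, or too little, private data */

            return *err == RDS_RDMA_REJ_INCOMPAT;
    }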
-
Gerd Rausch authored
This reverts commit 56012459. RDS kept spinning inside function "rds_ib_post_reg_frmr", waiting for "i_fastreg_wrs" to become incremented:

    while (atomic_dec_return(&ibmr->ic->i_fastreg_wrs) <= 0) {
            atomic_inc(&ibmr->ic->i_fastreg_wrs);
            cpu_relax();
    }

Looking at the original commit 56012459 ("RDS: IB: split the mr registration and invalidation path"), the "rds_ib_mr_cqe_handler" was changed in the following way:

    void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc)

            if (frmr->fr_inv) {
                    frmr->fr_state = FRMR_IS_FREE;
                    frmr->fr_inv = false;
                    atomic_inc(&ic->i_fastreg_wrs);
            } else {
                    atomic_inc(&ic->i_fastunreg_wrs);
            }

It looks like it got it exactly backwards:

Function "rds_ib_post_reg_frmr" keeps track of the outstanding requests via "i_fastreg_wrs".
Function "rds_ib_post_inv" keeps track of the outstanding requests via "i_fastunreg_wrs" (post original commit). It also sets "frmr->fr_inv = true".

However, the completion handler "rds_ib_mr_cqe_handler" adjusts "i_fastreg_wrs" when "fr_inv" had been true, and adjusts "i_fastunreg_wrs" otherwise.

The original commit was done in the name of performance: to remove a performance bottleneck. However, no performance benefit could be observed with a fixed-up version of the original commit, measured between two Oracle X7 servers, both equipped with Mellanox Connect-X5 HCAs.

The prudent course of action is to revert this commit.

Signed-off-by: Gerd Rausch <gerd.rausch@oracle.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
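For reference, a fixed-up completion handler along the lines the analysis above implies would swap the two counters: an invalidate completion (fr_inv set) releases a slot on "i_fastunreg_wrs", and a registration completion releases one on "i_fastreg_wrs". This is illustrative only, mirroring the snippet quoted above; the patch deliberately reverts the split instead of fixing it up.

    void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc)

            if (frmr->fr_inv) {
                    frmr->fr_state = FRMR_IS_FREE;
                    frmr->fr_inv = false;
                    atomic_inc(&ic->i_fastunreg_wrs);  /* was i_fastreg_wrs */
            } else {
                    atomic_inc(&ic->i_fastreg_wrs);    /* was i_fastunreg_wrs */
            }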
-
Santosh Shilimkar authored
RDS composite message (rdma + control) user notification needs to be triggered once the full message is delivered, and such a fix was added as part of commit 941f8d55 ("RDS: RDMA: Fix the composite message user notification"). But rds_send_remove_from_sock is missing the data-part notify check, and hence at times the user doesn't get the notification, which isn't desirable. One way is to fix rds_send_remove_from_sock to check for that case, but considering the ordering complexity with the completion handler, and since rdma + control messages are always dispatched back to back in the same send context, just delaying the signaled completion on the rdma work request also gets the desired behaviour, i.e. notifying the application only after the RDMA + control message sends complete. So this patch updates the earlier fix with this approach. The fix of delaying signaled completion of the rdma op until the control message send completes was done by Venkat Venkatsubra in the downstream kernel. Reviewed-and-tested-by: Zhu Yanjun <yanjun.zhu@oracle.com> Reviewed-by: Gerd Rausch <gerd.rausch@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
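A hypothetical sketch of the "delay the signaled completion" idea follows; the helper and its shape are made up, IB_SEND_SIGNALED and struct ib_rdma_wr are standard RDMA verbs names, and the actual RDS code may arrange this differently.

    /* Illustration only: when the control message is dispatched right after
     * the rdma op in the same send context, do not request a signaled
     * completion for the rdma work request; the control message's completion
     * is the one that triggers the user notification.
     */
    static void rds_set_rdma_signal_sketch(struct ib_rdma_wr *wr,
                                           bool control_msg_follows)
    {
            wr->wr.send_flags = 0;
            if (!control_msg_follows)
                    wr->wr.send_flags |= IB_SEND_SIGNALED;
    }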
-
Nathan Chancellor authored
clang warns:

    drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c:251:2: warning: variable 'rec_seq_sz' is used uninitialized whenever switch default is taken [-Wsometimes-uninitialized]
            default:
            ^~~~~~~
    drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c:255:46: note: uninitialized use occurs here
            skip_static_post = !memcmp(rec_seq, &rn_be, rec_seq_sz);
                                                        ^~~~~~~~~~
    drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_tx.c:239:16: note: initialize the variable 'rec_seq_sz' to silence this warning
            u16 rec_seq_sz;
                          ^
                           = 0
    1 warning generated.

This case statement was clearly designed to be one that should not be hit during runtime because of the WARN_ON statement, so just return early to prevent copying uninitialized memory up into rn_be.

Fixes: d2ead1f3 ("net/mlx5e: Add kTLS TX HW offload support")
Link: https://github.com/ClangBuiltLinux/linux/issues/590
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
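A sketch of the shape of the fix, assuming the switch dispatches on the TLS cipher type (rec_seq, rec_seq_sz and rn_be come from the warning output above; the cipher-specific case and the rest of the function are abbreviated and partly assumed):

    u16 rec_seq_sz;
    char *rec_seq;
    ...
    switch (crypto_info->cipher_type) {
    case TLS_CIPHER_AES_GCM_128: {
            struct tls12_crypto_info_aes_gcm_128 *info =
                    (void *)crypto_info;

            rec_seq    = info->rec_seq;
            rec_seq_sz = sizeof(info->rec_seq);
            break;
    }
    default:
            WARN_ON(1);
            return;         /* was "break;", which left rec_seq_sz unset */
    }
    skip_static_post = !memcmp(rec_seq, &rn_be, rec_seq_sz);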
-
David S. Miller authored
Return value was changed to 'int' from 'void', but this return statement was not updated, or it slipped in via a merge. Fixes: b5d9a834 ("net/tls: don't clear TX resync flag on error") Signed-off-by: David S. Miller <davem@davemloft.net>
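A minimal illustration of this class of slip (not the tls code itself): once a function's return type changes from void to int, any bare "return;" left behind needs a value.

    /* before: a void function could simply "return;" on the error path;
     * after the signature change, the early exit must return a value:
     */
    static int handle_resync_sketch(bool ok)
    {
            if (!ok)
                    return -EINVAL;   /* previously a bare "return;" */
            return 0;
    }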
-
- 09 Jul, 2019 33 commits
-
-
Vivien Didelot authored
This patch adds support for enabling or disabling the flooding of unknown multicast traffic on the CPU ports, depending on the value of the switchdev SWITCHDEV_ATTR_ID_BRIDGE_MROUTER attribute. The current behavior is kept unchanged, but a user can now prevent the CPU conduit from being flooded with a lot of unregistered traffic that the network stack would otherwise need to filter in software, e.g. with: echo 0 > /sys/class/net/br0/multicast_router Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Commit 9903c8dc changed the TC_ETF defines to use _BITUL instead of BIT but did not add the dependency on linux/const.h. As a consequence, importing the uapi headers into iproute2 causes builds to fail. Add the dependency. Fixes: 9903c8dc ("etf: Don't use BIT() in UAPI headers.") Cc: Vedang Patel <vedang.patel@intel.com> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
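The fix boils down to including linux/const.h in the UAPI header so that _BITUL() is defined for out-of-tree consumers such as iproute2. A sketch of the relevant header lines (the TC_ETF_*_ON flag names are the existing ones from include/uapi/linux/pkt_sched.h; treat the exact set shown as illustrative):

    #include <linux/const.h>        /* provides _BITUL() */
    #include <linux/types.h>

    #define TC_ETF_DEADLINE_MODE_ON _BITUL(0)
    #define TC_ETF_OFFLOAD_ON       _BITUL(1)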
-
Ilias Apalodimas authored
In commit ba2b2321 ("net: netsec: add XDP support") a static declaration for netsec_set_tx_de() was added to make the diff easier to read. Now that the patch is merged, let's move the functions around and get rid of that declaration. Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ilias Apalodimas authored
While freeing tx buffers, the memory has to be unmapped with the same arguments whether the packet was an skb or was used for .ndo_xdp_xmit. Get rid of the unneeded extra 'else if' statement. Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Pablo Neira Ayuso says:
====================
netfilter: add hardware offload infrastructure

This patchset adds support for Netfilter hardware offloads. It reuses the existing block infrastructure, the netdev_ops->ndo_setup_tc() interface, the TC_SETUP_CLSFLOWER classifier and the flow rule API.

Patch #1 adds flow_block_cb_setup_simple(): most drivers do the same thing to set up flow blocks, so consolidate that code to reduce the number of changes. Use the _simple() postfix as requested by Jakub Kicinski. This new function resides in net/core/flow_offload.c.

Patch #2 renames TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND.

Patch #3 renames TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_*.

Patch #4 adds the flow_block_cb_alloc() and flow_block_cb_free() helper functions, the first patch of the flow block API.

Patch #5 adds the helpers to deal with list operations in the flow block API. This includes flow_block_cb_lookup(), flow_block_cb_add() and flow_block_cb_remove().

Patch #6 adds flow_block_cb_priv(), flow_block_cb_incref() and flow_block_cb_decref(), which completes the flow block API.

Patch #7 updates the cls_api to use the flow block API from the new tcf_block_setup(). This infrastructure transports these objects via a list (through the tc_block_offload object) back to the core for registration:

       CLS_API                          DRIVER
    TC_SETUP_BLOCK  ---------->   setup flow_block_cb object &
                                  it adds object to
                                  flow_block_offload->cb_list
                             |
       CLS_API      <--------'
    registers list with           flow_block_cb
    flow blocks & travels         calls ->reoffload
    back to the core for
    registration

Drivers allocate and set up (configure) the blocks, then registration happens from the core (cls_api and netfilter).

Patch #8 updates drivers to use the flow block API.

Patch #9 removes the tcf block callback API, which is replaced by the flow block API.

Patch #10 adds the flow_block_cb_is_busy() helper to check if a block is already used by a subsystem. This helper is invoked from drivers. Once drivers are updated to support multiple subsystems, they can remove this check.

Patch #11 renames the tc structure and definitions for the block bind/unbind path.

Patch #12 introduces the basic netfilter hardware offload infrastructure for the ingress chain. This includes 5-tuple exact matching and accept/drop rule actions. Only basechains are supported at this stage, and no .reoffload callback is implemented either. Only the default policy "accept" is supported for now.

    table netdev filter {
        chain ingress {
            type filter hook ingress device eth0 priority 0; flags offload;

            ip daddr 192.168.0.10 tcp dport 22 drop
        }
    }

This patchset reuses the existing tcf block callback API and places it in the flow block callback API in net/core/flow_offload.c. This series aims to address Jakub and Jiri's feedback; please see specific patches in this batch for the changelog in this v4.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch adds hardware offload support for nftables through the existing netdev_ops->ndo_setup_tc() interface, the TC_SETUP_CLSFLOWER classifier and the flow rule API. This hardware offload support is available for the NFPROTO_NETDEV family and the ingress hook. Each nftables expression has a new ->offload interface, which is used to populate the flow rule object that is attached to the transaction object. There is a new per-table NFT_TABLE_F_HW flag, which is set to offload an entire table, including all of its chains. This patch supports basic metadata (layer 3 and 4 protocol numbers), 5-tuple payload matching and the accept/drop actions; it also covers basechain hardware offload only. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
And rename any other existing fields in this structure that refer to tc. Specifically:

* tc_cls_flower_offload_flow_rule() to flow_cls_offload_flow_rule().
* TC_CLSFLOWER_* to FLOW_CLS_*.
* tc_cls_common_offload to flow_cls_common_offload.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch adds a function to check if a flow block callback is already in use. Call this new function from flow_block_cb_setup_simple() and from drivers. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
Unused, now replaced by flow block API. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch updates flow_block_cb_setup_simple() to use the flow block API. Several drivers are also adjusted to use it. This patch introduces the per-driver list of flow blocks to account for blocks that are already in use. Remove tc_block_offload alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch adds tcf_block_setup(), which uses the flow block API. This infrastructure takes the flow block callbacks coming from the driver and registers/unregisters them to/from the cls_api core. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch completes the flow block API to introduce: * flow_block_cb_priv() to access callback private data. * flow_block_cb_incref() to bump reference counter on this flow block. * flow_block_cb_decref() to decrement the reference counter. These functions are taken from the existing tcf_block_cb_priv(), tcf_block_cb_incref() and tcf_block_cb_decref(). Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
This patch adds the list handling functions for the flow block API: * flow_block_cb_lookup() allows drivers to look up for existing flow blocks. * flow_block_cb_add() adds a flow block to the per driver list to be registered by the core. * flow_block_cb_remove() to remove a flow block from the list of existing flow blocks per driver and to request the core to unregister this. The flow block API also annotates the netns this flow block belongs to. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
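To make the intended driver-side usage concrete, here is a hedged sketch of a block bind/unbind handler written against the API as described in this series; the "my_drv_*" names and the private structure are made up, and the exact helper signatures (notably the netns argument and the lookup parameters) changed slightly in later revisions, so treat this as a pattern rather than the in-tree code.

    /* Hedged sketch of driver bind/unbind using the flow block list API. */
    static LIST_HEAD(my_drv_block_cb_list);         /* per-driver list */

    static int my_drv_block_cb(enum tc_setup_type type, void *type_data,
                               void *cb_priv)
    {
            return -EOPNOTSUPP;     /* classifier handling elided */
    }

    static int my_drv_setup_block(struct my_drv_priv *priv,
                                  struct flow_block_offload *f)
    {
            struct flow_block_cb *block_cb;

            switch (f->command) {
            case FLOW_BLOCK_BIND:
                    if (flow_block_cb_is_busy(my_drv_block_cb, priv,
                                              &my_drv_block_cb_list))
                            return -EBUSY;          /* see patch #10 */

                    block_cb = flow_block_cb_alloc(f->net, my_drv_block_cb,
                                                   priv, priv, NULL);
                    if (IS_ERR(block_cb))
                            return PTR_ERR(block_cb);

                    flow_block_cb_add(block_cb, f); /* core registers it */
                    list_add_tail(&block_cb->driver_list,
                                  &my_drv_block_cb_list);
                    return 0;
            case FLOW_BLOCK_UNBIND:
                    block_cb = flow_block_cb_lookup(f, my_drv_block_cb, priv);
                    if (!block_cb)
                            return -ENOENT;

                    flow_block_cb_remove(block_cb, f); /* core unregisters it */
                    list_del(&block_cb->driver_list);
                    return 0;
            default:
                    return -EOPNOTSUPP;
            }
    }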
-
Pablo Neira Ayuso authored
Add a new helper function to allocate flow_block_cb objects. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
Rename from TCF_BLOCK_BINDER_TYPE_* to FLOW_BLOCK_BINDER_TYPE_* and remove temporary tcf_block_binder_type alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
Rename from TC_BLOCK_{UN}BIND to FLOW_BLOCK_{UN}BIND and remove temporary tc_block_command alias. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pablo Neira Ayuso authored
Most drivers do the same thing to set up the flow block callbacks; this patch adds a helper function to do it. This preparation patch reduces the number of changes needed to adapt the existing drivers to the flow block callback API. The new helper function takes a per-driver flow block list, which is set to NULL until that driver list is used. This patch also introduces the flow_block_command and flow_block_binder_type enumerations, which are renamed to use FLOW_BLOCK_* in follow-up patches. There are three definitions (aliases) in order to reduce the number of updates in this patch; they go away once drivers are fully adapted to this flow block API. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
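A hedged sketch of how a driver's ndo_setup_tc() collapses onto the new helper, based on the argument list this series describes (a per-driver block list, the block callback, a callback identity, the callback private data, and an ingress-only flag); the "my_drv_*" names are invented and the real prototype should be checked against net/core/flow_offload.c.

    static LIST_HEAD(my_drv_block_list);    /* blocks already in use */

    static int my_drv_setup_tc_block_cb(enum tc_setup_type type,
                                        void *type_data, void *cb_priv)
    {
            switch (type) {
            case TC_SETUP_CLSFLOWER:
                    return my_drv_flower_rule(cb_priv, type_data); /* hypothetical */
            default:
                    return -EOPNOTSUPP;
            }
    }

    static int my_drv_setup_tc(struct net_device *dev, enum tc_setup_type type,
                               void *type_data)
    {
            struct my_drv_priv *priv = netdev_priv(dev);

            switch (type) {
            case TC_SETUP_BLOCK:
                    /* last argument: only ingress block binds are accepted */
                    return flow_block_cb_setup_simple(type_data,
                                                      &my_drv_block_list,
                                                      my_drv_setup_tc_block_cb,
                                                      priv, priv, true);
            default:
                    return -EOPNOTSUPP;
            }
    }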
-
David S. Miller authored
Jiangfeng Xiao says: ==================== net: hisilicon: Add support for HI13X1 to hip04_eth The main purpose of this patch series is to extend the hip04_eth driver to support HI13X1_GMAC. The offsets and bitmaps of some HI13X1_GMAC registers differ from those of the common hip04_eth SoCs. In addition, the definitions of the send descriptor and the parsing descriptor differ from the common hip04_eth SoCs as well. So the register offset macros are redefined to adapt to HI13X1_GMAC. Clean up the sparse warnings by the way. Changes since v1: * Add a cover letter. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
On HI13X1, the offsets and bitmaps of the tx_desc registers differ from those on other hip04_eth models, even though it is the same peripheral device. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
On HI13X1, the offsets and bitmaps of the rx_desc registers differ from those on other hip04_eth models, even though it is the same peripheral device. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
The buf unit size of HI13X1_GMAC is the cache_line_size, which is 64, so the address we write to the buf register needs to be shifted right by 6 bits. The 31st bit of the PPE_CFG_CPU_ADD_ADDR register of HI13X1_GMAC indicates whether to release the buffer of the message; keeping it low indicates that the buffer is valid. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
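A hypothetical sketch of the encoding those two sentences describe: the buffer address is written in 64-byte (cache-line) units, i.e. shifted right by 6, and bit 31 of the value written to PPE_CFG_CPU_ADD_ADDR is the release flag, kept low while the buffer is valid. Only PPE_CFG_CPU_ADD_ADDR is a real register name here; the macros and helper are made up.

    #define HI13X1_BUF_UNIT_SHIFT   6               /* 64-byte cache lines */
    #define HI13X1_BUF_RELEASE      BIT(31)         /* high = release buffer */

    static u32 hi13x1_buf_reg_val_sketch(dma_addr_t buf_dma, bool release)
    {
            u32 val = lower_32_bits(buf_dma) >> HI13X1_BUF_UNIT_SHIFT;

            if (release)
                    val |= HI13X1_BUF_RELEASE;      /* low means "still valid" */
            return val;
    }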
-
Jiangfeng Xiao authored
In general, group is the same as the port, but some boards specify a special group for better load balancing of each processing unit. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
In general, group is the same as the port, but some boards specify a special group for better load balancing of each processing unit. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
HI13X1_GMAC needs the initial soft reset request deleted; otherwise, the subsequent initialization will not take effect. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
On HI13X1_GMAC, the offsets and bitmaps of the GE_TX_LOCAL_PAGE_REG registers differ from those on other hip04_eth models, even though it is the same peripheral device. With the default configuration, HI13X1_GMAC can also work without any writes to the GE_TX_LOCAL_PAGE_REG register. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiangfeng Xiao authored
This patch fixes the following warning from sparse:

    hip04_eth.c:533:23: warning: cast to restricted __be16
    hip04_eth.c:533:23: warning: cast to restricted __be16
    hip04_eth.c:533:23: warning: cast to restricted __be16
    hip04_eth.c:533:23: warning: cast to restricted __be16
    hip04_eth.c:534:23: warning: cast to restricted __be32
    hip04_eth.c:534:23: warning: cast to restricted __be32
    hip04_eth.c:534:23: warning: cast to restricted __be32
    hip04_eth.c:534:23: warning: cast to restricted __be32
    hip04_eth.c:534:23: warning: cast to restricted __be32
    hip04_eth.c:534:23: warning: cast to restricted __be32

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
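These "cast to restricted" warnings are typically what sparse emits when ntohs()/ntohl() (or be16_to_cpu()/be32_to_cpu()) are applied to fields declared as plain u16/u32. A hedged sketch of the usual remedy, with invented struct and field names (not the hip04_eth layout): annotate the on-wire fields as __be16/__be32 so the conversions type-check.

    /* Illustration only -- names invented. */
    struct rx_desc_sketch {
            __be16  pkt_len;        /* was u16: "cast to restricted __be16" */
            __be32  pkt_err;        /* was u32: "cast to restricted __be32" */
    };

    static void parse_desc_sketch(const struct rx_desc_sketch *desc,
                                  u16 *len, u32 *err)
    {
            *len = be16_to_cpu(desc->pkt_len);      /* now clean under sparse */
            *err = be32_to_cpu(desc->pkt_err);
    }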
-
Jiangfeng Xiao authored
This patch fixes the following warning from sparse:

    hip04_eth.c:468:25: warning: incorrect type in assignment
    hip04_eth.c:468:25:    expected unsigned int [usertype] send_addr
    hip04_eth.c:468:25:    got restricted __be32 [usertype]
    hip04_eth.c:469:25: warning: incorrect type in assignment
    hip04_eth.c:469:25:    expected unsigned int [usertype] send_size
    hip04_eth.c:469:25:    got restricted __be32 [usertype]
    hip04_eth.c:470:19: warning: incorrect type in assignment
    hip04_eth.c:470:19:    expected unsigned int [usertype] cfg
    hip04_eth.c:470:19:    got restricted __be32 [usertype]
    hip04_eth.c:472:23: warning: incorrect type in assignment
    hip04_eth.c:472:23:    expected unsigned int [usertype] wb_addr
    hip04_eth.c:472:23:    got restricted __be32 [usertype]

Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
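Conversely, these warnings come from storing cpu_to_be32() results into fields declared as plain unsigned int. A hedged sketch of the remedy (the field names send_addr, send_size, cfg and wb_addr are taken from the warning text; the struct and function shown are otherwise invented):

    struct tx_desc_sketch {
            __be32  send_addr;      /* was unsigned int */
            __be32  send_size;      /* was unsigned int */
            __be32  cfg;            /* was unsigned int */
            __be32  wb_addr;        /* was unsigned int */
    };

    static void fill_tx_desc_sketch(struct tx_desc_sketch *desc,
                                    u32 send_addr, u32 send_size,
                                    u32 cfg, u32 wb_addr)
    {
            desc->send_addr = cpu_to_be32(send_addr);
            desc->send_size = cpu_to_be32(send_size);
            desc->cfg       = cpu_to_be32(cfg);
            desc->wb_addr   = cpu_to_be32(wb_addr);
    }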
-
Jiangfeng Xiao authored
Extend the hip04_eth driver to support HI13X1_GMAC. Enable it with CONFIG_HI13X1_GMAC option. Signed-off-by: Jiangfeng Xiao <xiaojiangfeng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Biao Huang says: ==================== stmmac: fix out-of-boundary issue and add taller hash table support Fix the mac address out-of-boundary issue in the net-next tree, and resend the patch that was discussed in https://lore.kernel.org/patchwork/patch/1082117 but saw no further progress. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Biao Huang authored
1. Get the hash table size from the hw feature register, and add support for taller hash tables (128/256) in dwmac4.
2. Only clear the GMAC_PACKET_FILTER bits used in this function, to avoid side effects on the functions of other bits.

stmmac selftests output log with flow control on:

    ethtool -t eth0
    The test result is PASS
    The test extra info:
     1. MAC Loopback          0
     2. PHY Loopback        -95
     3. MMC Counters          0
     4. EEE                 -95
     5. Hash Filter MC        0
     6. Perfect Filter UC     0
     7. MC Filter             0
     8. UC Filter             0
     9. Flow Control          0

Signed-off-by: Biao Huang <biao.huang@mediatek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
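A hedged sketch of the general multicast hash-filtering scheme point 1 refers to: the hash table size is read from the HW feature register, and each multicast address is CRC-hashed down to log2(table-size) bits selecting one bit in the 128- or 256-bin table. This mirrors the usual dwmac-style computation but uses made-up names and should not be read as the exact driver code.

    #include <linux/bitrev.h>
    #include <linux/crc32.h>
    #include <linux/etherdevice.h>

    /* mc_filter spans (1 << mcast_bits_log2) bits, split into 32-bit words */
    static void sketch_hash_set(const u8 *addr, unsigned int mcast_bits_log2,
                                u32 *mc_filter)
    {
            /* CRC-32 of the address, bit-reversed, keep the top log2(size) bits */
            u32 crc = bitrev32(~crc32_le(~0, addr, ETH_ALEN));
            u32 bit_nr = crc >> (32 - mcast_bits_log2);

            /* set bit "bit_nr" in the array of 32-bit hash registers */
            mc_filter[bit_nr >> 5] |= BIT(bit_nr & 0x1f);
    }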
-
Biao Huang authored
The mac address array size is GMAC_MAX_PERFECT_ADDRESSES, so 'reg' should be less than it; otherwise other registers will be affected. Signed-off-by: Biao Huang <biao.huang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
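A minimal sketch of the bound the message refers to (GMAC_MAX_PERFECT_ADDRESSES is the only name taken from the driver; 'hw', 'dev' and the write helper are assumed surrounding context): once 'reg' reaches the number of perfect-filter slots, further writes would land in unrelated registers, so the walk over addresses has to stop, or fall back to hash/promiscuous filtering, instead.

    int reg = 1;    /* slot 0 commonly holds the device's own address */
    struct netdev_hw_addr *ha;

    netdev_for_each_uc_addr(ha, dev) {
            if (reg >= GMAC_MAX_PERFECT_ADDRESSES)
                    break;  /* never spill past the perfect-filter registers */
            sketch_set_umac_addr(hw, ha->addr, reg);   /* hypothetical helper */
            reg++;
    }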
-
David S. Miller authored
Lucas Bates says: ==================== tc-testing: Add plugin for simple traffic generation This series supersedes the previous submission that included a patch for test case verification using JSON output. It adds a new tdc plugin, scapyPlugin, as a way to send traffic to test tc filters and actions. The first patch makes a change to the TdcPlugin module that will allow tdc plugins to examine the test case currently being executed, so plugins can play a more active role in testing by accepting information or commands from the test case. This is required for scapyPlugin to work. The second patch adds scapyPlugin itself, and an example test case file to demonstrate how the scapy block works in the test cases. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lucas Bates authored
The scapyPlugin allows for simple traffic generation in tdc to test various tc features. It was tested with scapy v2.4.2, but should work with any successive version.

In order to use the plugin's functionality, scapy must be installed. This can be done with:

    pip3 install scapy

or, to install 2.4.2:

    pip3 install scapy==2.4.2

If the plugin is unable to import the scapy module, it will terminate the tdc run.

The plugin makes use of a new key in the test case data, 'scapy'. This block contains three other elements: 'iface', 'count', and 'packet':

    "scapy": {
        "iface": "$DEV0",
        "count": 1,
        "packet": "Ether(type=0x800)/IP(src='16.61.16.61')/ICMP()"
    },

* iface is the name of the device on the host machine from which the packet(s) will be sent. Values contained within tdc_config.py's NAMES dict can be used here - this is useful if paired with nsPlugin.
* count is the number of copies of this packet to be sent.
* packet is a string detailing the different layers of the packet to be sent. If a property isn't explicitly set, scapy will set default values for you. Layers in the packet info are separated by slashes.

For info about common TCP and IP properties, see:
https://blogs.sans.org/pen-testing/files/2016/04/ScapyCheatSheet_v0.2.pdf

Caution is advised when running tests using the scapy functionality, since the plugin blindly sends the packet as defined in the test case data.

See creating-testcases/scapy-example.json for sample test cases; the first test is intended to pass while the second is intended to fail.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-