- 12 Feb, 2019 40 commits
-
Vlad Buslov authored
All users of block->chain_list rely on the rtnl lock and assume that no new chains are added while traversing the list. Use tcf_get_next_chain() to traverse the chain list without relying on the rtnl mutex. This function iterates over chains by taking a reference to the current iterator chain only and does not assume external synchronization of the chain list. Don't take a reference to all chains in the block when flushing; use tcf_get_next_chain() to safely iterate over the chain list instead. Remove tcf_block_put_all_chains(), which is no longer used. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
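The traversal pattern is easier to see in compact form. Below is a minimal user-space sketch of the idea (invented names, with a pthread mutex standing in for the block lock; this is not the kernel's tcf_get_next_chain() code): the list lock is held only while stepping to the next element, and the iterator pins each element with its own reference.

```c
/* Minimal user-space sketch of the "get next" traversal pattern; not the
 * kernel's tcf_get_next_chain(). The list lock is held only while stepping
 * to the next element, which is pinned by its own reference count before
 * the lock is dropped.
 */
#include <pthread.h>
#include <stddef.h>

struct chain {
	struct chain *next;
	unsigned int refcnt;		/* protected by list_lock */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct chain *chain_list;	/* head of the chain list */

static void chain_put(struct chain *c)
{
	pthread_mutex_lock(&list_lock);
	if (--c->refcnt == 0) {
		/* unlink from chain_list and free; elided in this sketch */
	}
	pthread_mutex_unlock(&list_lock);
}

/* Return the element after @cur (or the first element when @cur is NULL)
 * with a reference held on it, and drop the reference held on @cur.
 */
static struct chain *chain_get_next(struct chain *cur)
{
	struct chain *next;

	pthread_mutex_lock(&list_lock);
	next = cur ? cur->next : chain_list;
	if (next)
		next->refcnt++;		/* pin before releasing the lock */
	pthread_mutex_unlock(&list_lock);

	if (cur)
		chain_put(cur);
	return next;
}

int main(void)
{
	struct chain *c;

	for (c = chain_get_next(NULL); c; c = chain_get_next(c))
		;	/* work on c without holding list_lock */
	return 0;
}
```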
-
Vlad Buslov authored
In order to remove dependency on rtnl lock, use block->lock to protect chain0 struct from concurrent modification. Rearrange code in chain0 callback add and del functions to only access chain0 when block->lock is held. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove dependency on rtnl lock, modify chain API to use block->lock to protect chain from concurrent modification. Rearrange tc_ctl_chain() code to call tcf_chain_hold() while holding block->lock to prevent concurrent chain removal. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove dependency on rtnl lock, protect tcf_chain->explicitly_created flag with block->lock. Consolidate code that checks and resets 'explicitly_created' flag into __tcf_chain_put() to execute it atomically with rest of code that puts chain reference. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Currently, tcf_block doesn't use any synchronization mechanism to protect the critical sections that manage the lifetime of its chains. block->chain_list and multiple variables in tcf_chain that control its lifetime assume external synchronization provided by the global rtnl lock. Converting chain reference counting to atomic reference counters is not possible because the cls API uses multiple counters and flags to control chain lifetime, so all of them must be synchronized in the chain get/put code. Use a single per-block lock to protect block data and manage the lifetime of all chains on the block. Always take block->lock when accessing chain_list. Chain get and put modify chain lifetime-management data and the parent block's chain_list, so take the lock in these functions. Verify block->lock state with assertions in functions that expect to be called with the lock taken and are called from multiple places. Take block->lock when accessing filter_chain_list. In order to allow parallel update of rules on a single block, move all calls to classifiers outside of the critical sections protected by the new block->lock. Rearrange the chain get and put functions to only access protected chain data while holding the block lock:
- Rearrange code to only access the chain reference counter and chain action reference counter while holding the block lock.
- Extract code that requires block->lock from tcf_chain_destroy() into standalone tcf_chain_destroy() function that is called by __tcf_chain_put() in the same critical section that changes the chain reference counters.
Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
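As a rough illustration of the lifetime scheme described above, here is a simplified user-space sketch (invented structures, a pthread mutex in place of block->lock; not the actual cls API code). The key point is that the refcount drop and the unlink from the block's chain list happen in the same critical section, so a concurrent get can never find a chain that is about to be freed.

```c
/* Simplified user-space sketch of the per-block locking scheme; not the
 * actual cls API code. One lock covers the chain list and the lifetime
 * counter, and the final put unlinks the chain in the same critical
 * section in which the reference counter drops to zero.
 */
#include <pthread.h>
#include <stdlib.h>

struct chain;

struct block {
	pthread_mutex_t lock;
	struct chain *chains;		/* singly linked chain list */
};

struct chain {
	struct chain *next;
	struct block *block;
	unsigned int refcnt;		/* protected by block->lock */
	int index;
};

struct chain *chain_get(struct block *b, int index)
{
	struct chain *c;

	pthread_mutex_lock(&b->lock);
	for (c = b->chains; c; c = c->next)
		if (c->index == index)
			break;
	if (c)
		c->refcnt++;		/* still under block->lock */
	pthread_mutex_unlock(&b->lock);
	return c;
}

void chain_put(struct chain *c)
{
	struct block *b = c->block;
	int destroy = 0;

	pthread_mutex_lock(&b->lock);
	if (--c->refcnt == 0) {
		struct chain **pp;

		/* Unlink under the same lock that protected the refcount,
		 * so a concurrent chain_get() can no longer find us.
		 */
		for (pp = &b->chains; *pp; pp = &(*pp)->next)
			if (*pp == c) {
				*pp = c->next;
				break;
			}
		destroy = 1;
	}
	pthread_mutex_unlock(&b->lock);

	if (destroy)
		free(c);	/* nothing can take a new reference now */
}
```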
-
Gustavo A. R. Silva authored
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warnings: drivers/isdn/i4l/isdn_v110.c: In function ‘EncodeMatrix’: drivers/isdn/i4l/isdn_v110.c:353:7: warning: this statement may fall through [-Wimplicit-fallthrough=] if (line >= mlen) { ^ drivers/isdn/i4l/isdn_v110.c:358:3: note: here case 128: ^~~~ Warning level 3 was used: -Wimplicit-fallthrough=3 Notice that, in this particular case, the code comment is modified in accordance with what GCC is expecting to find. This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
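For readers unfamiliar with the annotation, here is a minimal standalone example of the kind of fall-through marking these patches add (generic code, not the isdn_v110.c switch itself); with the comment in place, gcc -Wimplicit-fallthrough=3 no longer warns.

```c
/* Standalone example of an intentional fall-through marked with a comment,
 * as these patches do; not the isdn_v110.c code itself. With the comment in
 * place, gcc -Wimplicit-fallthrough=3 no longer warns about this switch.
 */
#include <stdio.h>

static int classify(int c)
{
	int flags = 0;

	switch (c) {
	case 0x80:
		flags |= 1;
		/* fall through */
	case 0x00:
		flags |= 2;
		break;
	default:
		break;
	}
	return flags;
}

int main(void)
{
	printf("%d %d %d\n", classify(0x80), classify(0x00), classify(1));
	return 0;
}
```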
-
Gustavo A. R. Silva authored
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warnings: drivers/isdn/i4l/isdn_tty.c: In function ‘isdn_tty_edit_at’: drivers/isdn/i4l/isdn_tty.c:3644:18: warning: this statement may fall through [-Wimplicit-fallthrough=] m->mdmcmdl = 0; ~~~~~~~~~~~^~~ drivers/isdn/i4l/isdn_tty.c:3646:5: note: here case 0: ^~~~ Warning level 3 was used: -Wimplicit-fallthrough=3 Notice that, in this particular case, the code comment is modified in accordance with what GCC is expecting to find. This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warning: drivers/isdn/gigaset/ser-gigaset.c: In function ‘gigaset_tty_ioctl’: drivers/isdn/gigaset/ser-gigaset.c:627:3: warning: this statement may fall through [-Wimplicit-fallthrough=] switch (arg) { ^~~~~~ drivers/isdn/gigaset/ser-gigaset.c:638:2: note: here default: ^~~~~~~ Warning level 3 was used: -Wimplicit-fallthrough=3 Notice that, in this particular case, the code comment is modified in accordance with what GCC is expecting to find. This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Acked-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julian Wiedmann says: ==================== s390/qeth: updates 2019-02-12 please apply one more round of qeth patches to net-next. This series targets the driver's control paths. It primarily brings improvements to the error handling for sent cmds and received responses, along with the usual cleanup and consolidation efforts. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
This calls the existing errno translation helpers from the callbacks, adding trivial wrappers where necessary. For cmds that have no sophisticated errno translation, default to -EIO. For IPA cmds with no callback, fall back to a minimal default; this is currently used by qeth_l3_send_setrouting(). Having thus converted all callbacks, remove the legacy path in qeth_send_control_data_cb(). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
By letting the callbacks deal with error translation, we no longer need to pass the raw error codes back to the originator. This allows us to slim down the callback's private data, and nicely simplifies the code. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Error propagation from cmd callbacks currently works in a way where qeth_send_control_data_cb() picks the raw HW code from the response, and the cmd's originator later translates this into an errno. The callback itself only returns 0 ("done") or 1 ("expect more data"). This is
1. limiting, as the only means for the callback to report an internal error is to invent pseudo HW codes (such as IPA_RC_ENOMEM) that the originator then needs to understand. For non-IPA callbacks, we even provide a separate field in the IO buffer metadata (iob->rc) so the callback can pass back a return value.
2. fragile, as the originator must take care not to translate any errno that is returned by qeth's own IO code paths (eg -ENOMEM). Also, any originator that forgets to translate the HW codes potentially passes garbage back to its caller. For instance, see commit 2aa48671 ("s390/qeth: translate SETVLAN/DELVLAN errors").
Introduce a new model where all HW error translation is done within the callback, and the callback returns
> 0, if it expects more data (as before)
== 0, on success
< 0, with an errno
Start off with converting all callbacks to the new model that either a) pass back pseudo HW codes, or b) have a dependency on a specific HW error code. Also convert c) the one callback that uses iob->rc, and d) qeth_setadpparms_change_macaddr_cb(), so that it can pass an error back to qeth_l2_request_initial_mac() even when the cmd itself was successful. The old model remains supported: if the callback returns 0, we still propagate the response's HW error code back to the originator. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
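A compact sketch of the new convention, using invented names rather than the real qeth structures: the dispatcher only interprets the sign of the callback's return value, and the callback itself translates hardware codes into errnos.

```c
/* Illustration of the callback convention described above (invented names,
 * not the qeth data structures): > 0 means "expect more data", 0 means done,
 * < 0 is an errno handed straight back to the originator.
 */
#include <errno.h>
#include <stdio.h>

struct reply {
	int rc;			/* final result seen by the originator */
	int finished;
	/* callback: >0 expect more data, ==0 done, <0 negative errno */
	int (*callback)(struct reply *reply, const void *response);
};

static void handle_response(struct reply *reply, const void *response)
{
	int rc = reply->callback(reply, response);

	if (rc > 0)
		return;		/* wait for the next response fragment */

	reply->rc = rc;		/* 0 or a negative errno, already translated */
	reply->finished = 1;	/* wake up the waiter (elided) */
}

/* Example callback: translates a hardware return code into an errno itself,
 * instead of leaking raw HW codes back to the caller.
 */
static int example_cb(struct reply *reply, const void *response)
{
	int hw_rc = *(const int *)response;

	(void)reply;
	if (hw_rc == 0)
		return 0;
	return hw_rc == 1 ? -ENOENT : -EIO;
}

int main(void)
{
	struct reply r = { .callback = example_cb };
	int hw_rc = 1;

	handle_response(&r, &hw_rc);
	printf("finished=%d rc=%d\n", r.finished, r.rc);
	return 0;
}
```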
-
Julian Wiedmann authored
When sending cmds via qeth_send_control_data(), qeth puts the request on the IO channel and then blocks on the reply object until the response has been received. If the IO completes with error, there will never be a response and we block until the reply-wait hits its timeout. For this case, connect the request buffer to its reply object, so that we can immediately cancel the wait. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Current code enqueues & dequeues a reply object from the waiter list in various places. In particular, the dequeue & enqueue in qeth_send_control_data_cb() looks fragile - this can cause qeth_clear_ipacmd_list() to skip the active object. Add some helpers, and boil the logic down by giving qeth_send_control_data() the sole responsibility to add and remove objects. qeth_send_control_data_cb() and qeth_clear_ipacmd_list() will now only notify the reply object to interrupt its wait cycle. This can cause a slight delay in the removal, but that's no concern. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
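The ownership rule can be illustrated with a small user-space sketch (pthread primitives and invented names, not the qeth code): only the sending path adds and removes the reply object; everyone else merely wakes it.

```c
/* User-space sketch of the ownership rule; not the qeth code. The sender
 * owns list membership of the reply object, while the receive path and the
 * "clear list" cleanup only wake the waiter.
 */
#include <pthread.h>

struct reply_wait {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	int done;
	int rc;
};

/* Called from the receive path or from the cleanup path: just notify. */
void reply_notify(struct reply_wait *r, int rc)
{
	pthread_mutex_lock(&r->lock);
	r->rc = rc;
	r->done = 1;
	pthread_cond_signal(&r->cond);	/* interrupt the wait cycle */
	pthread_mutex_unlock(&r->lock);
}

/* Only this function adds the object to, and removes it from, the waiter
 * list; a slightly delayed removal after the wakeup is harmless.
 */
int send_and_wait(struct reply_wait *r)
{
	/* ... enqueue r on the waiter list and send the cmd (elided) ... */
	pthread_mutex_lock(&r->lock);
	while (!r->done)
		pthread_cond_wait(&r->cond, &r->lock);
	pthread_mutex_unlock(&r->lock);
	/* ... dequeue r from the waiter list, here and only here ... */
	return r->rc;
}
```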
-
Julian Wiedmann authored
'len' specifies how much data we send to the HW; don't dump beyond this boundary. As of today this is no big concern - commands are built in full, zeroed pages. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
csum offload and TSO have similar programming requirements. The TSO code was reworked with commit "s390/qeth: enhance TSO control sequence"; adjust the csum control flow accordingly. Primarily this means replacing custom helpers with more generic infrastructure. Also, change the LP2LP check so that it warns on TX offload (not RX). This is where reduced csum capability actually matters. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Current code attempts to enable all advertised HW csum offload features. Future-proof this by enabling only those features that we actually use. Also, the IPv4 header csum feature is only needed for TX on L3 devices. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
The code to fill the IPA length fields is duplicated three times across the driver:
1. qeth_send_ipa_cmd() sets IPA_CMD_LENGTH, which matches the defaults in the IPA_PDU_HEADER template.
2. For OSN, qeth_osn_send_ipa_cmd() bypasses this logic and inserts the length passed by the caller.
3. SNMP commands (which can outgrow IPA_CMD_LENGTH) have their own way of setting the length fields, via qeth_send_ipa_snmp_cmd().
Consolidate this into qeth_prepare_ipa_cmd(), which all originators of IPA cmds already call during setup of their cmd. Let qeth_send_ipa_cmd() pull the length from the cmd instead of hard-coding IPA_CMD_LENGTH. For now, the SNMP code still needs to fix up its length fields manually. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
qeth_l3_query_arp_cache_info() indicates a data length that's much larger than the actual length of its request (ie. the value passed to qeth_get_setassparms_cmd()). The confusion presumably comes from the fact that the cmd _response_ can be quite large - but that's no concern for the initial request IO. Fixing this up allows us to use the generic qeth_send_ipa_cmd() infrastructure. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Yangbo Lu says: ==================== Add ENETC PTP clock driver The new ENETC Ethernet controller carries the same QorIQ 1588 timer IP block as the eTSEC/DPAA Ethernet controllers. However, it uses a different endianness (little-endian) and is accessed through a PCI driver. To support the ENETC PTP driver, the ptp_qoriq driver needed to be reworked: make its functions global for reuse, add little-endian support, add ENETC memory map support, and add the ENETC dependency to ptp_qoriq. In addition, although the ENETC PTP driver is a PCI driver, a dts node can still be used. The ls1028a dtsi, currently the only platform using ENETC, is not yet complete, so upstreaming of the ENETC PTP node is still pending; this will be done in the near future. The hardware timestamping support for ENETC is done but needs to be reworked with a new method in the internal git tree, and will be sent out soon. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Saeed Mahameed authored
When an ethernet frame is padded to meet the minimum ethernet frame size, the padding octets are not covered by the hardware checksum. Fortunately the padding octets are usually zeros, which don't affect the checksum. However, this is not guaranteed; for example, switches might choose to make other use of these octets. This repeatedly causes kernel hardware checksum faults. Prior to the cited commit below, the skb checksum was forced to CHECKSUM_NONE when padding was detected. After it, we need to keep skb->csum updated. However, fixing up CHECKSUM_COMPLETE requires verifying and parsing IP headers, and it is not worth the effort as the packets are so small that CHECKSUM_COMPLETE has no significant advantage. Future work: when reporting checksum complete is not an option for IP non-TCP/UDP packets, we can actually fall back to reporting checksum unnecessary, by looking at the cqe IPOK bit. Fixes: 88078d98 ("net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends") Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
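The property the message relies on can be demonstrated with a few lines of plain C (a standalone demo, not the mlx5e receive path): zero padding leaves a ones' complement checksum unchanged, while non-zero padding alters it.

```c
/* Standalone demo: zero padding does not change a ones' complement checksum,
 * while non-zero padding does. Plain user-space code, not the mlx5e RX path.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t csum16(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);	/* fold carries */
	return (uint16_t)sum;
}

int main(void)
{
	uint8_t frame[64];
	size_t data_len = 46;	/* short payload, padded to 64 bytes */

	memset(frame, 0xab, data_len);

	memset(frame + data_len, 0x00, sizeof(frame) - data_len);
	printf("data only: %#06x\n", (unsigned)csum16(frame, data_len));
	printf("zero pad : %#06x\n", (unsigned)csum16(frame, sizeof(frame)));

	memset(frame + data_len, 0x5c, sizeof(frame) - data_len);
	printf("junk pad : %#06x\n", (unsigned)csum16(frame, sizeof(frame)));
	return 0;	/* the first two values match, the third does not */
}
```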
-
Yangbo Lu authored
This patch adds the enetc_ptp driver to the QorIQ PTP maintainers list. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
This patch adds a PTP clock driver for ENETC. The driver reuses the QorIQ PTP clock driver. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
This patch adds QorIQ PTP support for ENETC. The ENETC PTP driver, which is a PCI driver for the same 1588 timer IP block, will reuse the QorIQ PTP driver. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
The 1588 timer on the eTSEC Ethernet controller uses a different register memory map than the DPAA Ethernet controller. The new ENETC Ethernet controller uses the same register memory map as DPAA. To support ENETC, use the DPAA/ENETC register memory map by default. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
Specify "little-endian" property if the 1588 timer IP block is little-endian mode. The default endian mode is big-endian. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
The new ENETC Ethernet controller carries the QorIQ 1588 timer IP block, but it operates in little-endian mode, unlike earlier integrations. This patch adds little-endian support to the driver, keyed off the "little-endian" dts node property. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
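To illustrate the mechanism, here is a user-space sketch (invented names, not the ptp_qoriq code): a single little-endian flag, normally derived from the "little-endian" device tree property, decides how raw register bytes are assembled, so the rest of the driver never needs to care which flavour of the timer block it is driving.

```c
/* User-space sketch of endianness selection keyed off a single flag;
 * invented names, not the ptp_qoriq driver.
 */
#include <stdint.h>
#include <stdio.h>

struct timer_regs_view {
	int little_endian;	/* from the "little-endian" DT property */
};

static uint32_t reg_read32(const struct timer_regs_view *v,
			   const uint8_t *reg /* 4 raw bytes from MMIO */)
{
	if (v->little_endian)
		return (uint32_t)reg[0] | (uint32_t)reg[1] << 8 |
		       (uint32_t)reg[2] << 16 | (uint32_t)reg[3] << 24;
	return (uint32_t)reg[0] << 24 | (uint32_t)reg[1] << 16 |
	       (uint32_t)reg[2] << 8 | (uint32_t)reg[3];
}

int main(void)
{
	const uint8_t raw[4] = { 0x12, 0x34, 0x56, 0x78 };
	struct timer_regs_view be = { .little_endian = 0 };
	struct timer_regs_view le = { .little_endian = 1 };

	printf("BE view: %#x, LE view: %#x\n",
	       reg_read32(&be, raw), reg_read32(&le, raw));
	return 0;
}
```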
-
Yangbo Lu authored
Move QorIQ PTP clock initialization/free into the new functions ptp_qoriq_init()/ptp_qoriq_free(). These functions can also be reused by the ENETC PTP driver, which is a PCI driver for the same 1588 timer IP block. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
This patch makes the PTP operation functions global, so that the ENETC PTP driver, which is a PCI driver for the same 1588 timer IP block, can reuse them. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yangbo Lu authored
Strings containing "ptp_qoriq" or "qoriq_ptp" which were used for structure/function names were complained by users. Let's just use the unique "ptp_qoriq" to make these names more consistent. This patch is just to unify the names using "ptp_qoriq". It hasn't changed any functions. Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Brian Norris authored
There are several skb_* functions where the locked and unlocked functions are confusingly documented. For several of them, the kernel-doc for the unlocked version is placed above the locked version, which to the casual reader makes it seem like the locked version "takes no locks and you must therefore hold required locks before calling it." One can see, for example, that this link claims to document skb_queue_head(), while instead describing __skb_queue_head(): https://www.kernel.org/doc/html/latest/networking/kapi.html#c.skb_queue_head The correct documentation for skb_queue_head() is also included further down the page. This diff was tested via: $ scripts/kernel-doc -rst include/linux/skbuff.h net/core/skbuff.c No new warnings were seen, and the output makes a little more sense. Signed-off-by: Brian Norris <briannorris@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
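For illustration, this is the placement the patch aims for, shown with simplified declarations rather than the real skbuff.h content: each kernel-doc comment sits directly above the variant it documents, and the locking expectations are stated per variant.

```c
/* Simplified declarations for illustration only; not the real skbuff.h. */
struct sk_buff;
struct sk_buff_head;

/**
 * __skb_queue_head - queue a buffer at the list head
 * @list: list to use
 * @newsk: buffer to queue
 *
 * Queue a buffer at the start of a list. This function takes no locks
 * and you must therefore hold required locks before calling it.
 */
void __skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk);

/**
 * skb_queue_head - queue a buffer at the list head
 * @list: list to use
 * @newsk: buffer to queue
 *
 * Queue a buffer at the start of the list. This version takes the list
 * lock, so it can be called without external locking.
 */
void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk);
```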
-
David S. Miller authored
Florian Fainelli says: ==================== Remove getting SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS AFAICT there is no code that attempts to get the value of the SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS attribute; it is only used with switchdev_port_attr_set(). The getter is effectively not doing anything, and it can slow down future work that tries to make modifications in these areas, so remove it. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
There is no code that tries to get the attribute SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS, remove support for doing that. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
There is no code that attempts to get the SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS attribute, remove support for that. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
There is no code that will query the SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS attribute, so remove support for that. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Use the new function phy_modify_mmd_changed(); the result speaks for itself. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vakul Garg authored
Addition of tls1.3 support broke the tls1.2 handshake when an async crypto accelerator is used. This is because the record type for non-data records is not propagated to the user application. Also, when async decryption happens, the decryption does not stop when two different types of records get dequeued and submitted for decryption. To address this, we decrypt tls1.2 non-data records synchronously. We check whether the record we just processed has the same type as the previous one before checking for the async condition and jumping to dequeue the next record. Fixes: 130b392c ("net: tls: Add tls 1.3 support") Signed-off-by: Vakul Garg <vakul.garg@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Li RongQing authored
This removes a redundant check. Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Russell King authored
There are several places which make the decision whether to access the XLGMAC vs the GMAC that only check for PHY_INTERFACE_MODE_10GKR and not its XAUI variant. Switch these to use the new helper so that we have consistency throughout the driver. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Russell King authored
Add a mvpp2_is_xlg() helper to identify whether the interface mode should be using the XLGMAC rather than the GMAC. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
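The helper presumably boils down to something like the following sketch (a local enum keeps the example self-contained; the real driver uses phy_interface_t and the PHY_INTERFACE_MODE_* constants from linux/phy.h):

```c
/* Sketch of the kind of helper described above; the real driver uses
 * phy_interface_t and PHY_INTERFACE_MODE_* from linux/phy.h. A local enum
 * is used here only to keep the example self-contained.
 */
#include <stdbool.h>
#include <stdio.h>

enum phy_mode { MODE_SGMII, MODE_10GKR, MODE_XAUI };

static bool is_xlg(enum phy_mode interface)
{
	/* Both 10GBASE-KR and XAUI are served by the XLGMAC, not the GMAC. */
	return interface == MODE_10GKR || interface == MODE_XAUI;
}

int main(void)
{
	printf("%d %d %d\n", is_xlg(MODE_SGMII), is_xlg(MODE_10GKR),
	       is_xlg(MODE_XAUI));
	return 0;
}
```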
-