- 26 Jan, 2019 8 commits
-
-
Julian Wiedmann authored
Re-order the code flow a bit so that all initial HW setup is done before putting the netdevice into play. For a netdevice that hasn't been registered before, we also don't need to re-enable its HW features or check for recovery actions. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
At best this is redundant, at worst it papers over a race in the offline / online code. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
commit 4789a218 ("s390/qeth: fix race when setting MAC address") resolved a race where our initial programming of dev_addr into the HW and a call to ndo_set_mac_address() could run concurrently. In this case, we could end up getting confused about which address was actually set in the HW. The quick fix was to introduce additional locking that blocks any ndo_set_mac_address() while the device is being set online. But the race primarily originated from the fact that we first register the netdevice, and only then program its dev_addr. By re-ordering this sequence, userspace will only be able to change the MAC address _after_ we have finished with setting the initial dev_addr. Still, the same MAC address race can also occur during a subsequent call to qeth_l2_set_online(). So keep around the locking for now, until a follow-up patch fully resolves this. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
The L2 and L3 code for these ops is almost identical, we only need to provide a custom ndo_validate_addr() for L2 that checks whether programming the MAC address succeeded. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
qeth_qdio_cq_handler() doesn't replenish the Output Queue(s), and thus has no reason to wake the txq. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Consolidate the code that marks the current buffer to be flushed, and let qeth_fill_buffer() advance the Output Queue's buffer cursor. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Recent changes to the phylib API removed phy_stop_interrupts(), replaced phy_start_interrupts() with phy_request_interrupt(), and moved some functionality from phy_connect() and phy_disconnect() to phy_start() and phy_stop() respectively. Reflect these changes in the documentation. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5-updates-2019-01-25 This series provides some updates to the mlx5 driver. From Tariq: 1) Make sure the RX packet header does not cross a page boundary. To avoid page boundary crossing, use a stride size that fits the maximum possible header. Stride is increased from 64B to 256B. 2) CQ struct cleanup: take the CQ decompress fields into a separate structure. From Moshe: 3) Expand the XPS cpumask to cover all online cpus. From Jason Gunthorpe and Tariq: 4) Compilation warning cleanup. From Or: 5) Add trace points for flow table create/destroy. From Saeed: 6) Software stats update/folding improvements; this also solves a compilation warning on 32-bit systems that was reported last release cycle by Arnd and Andrew. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Jan, 2019 31 commits
-
-
Saeed Mahameed authored
Representors' software stats are basic; this patch reuses mlx5e_fold_sw_stats() for representors, which sums up the basic stats64 for an mlx5e netdevice. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Tariq Toukan authored
This behavior is already adopted for all other cases in the cited patch. The representor's functions were missed; here we modify them to behave similarly. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Saeed Mahameed authored
mlx5e_grp_sw_update_stats can be called from two threads: 1) ndo_get_stats64 and 2) get_ethtool_stats. For this reason, and to minimize the impact of concurrency issues on 64-bit machines, mlx5e_grp_sw_update_stats folds the software stats into a temporary variable and then copies it to the global driver stats; both the ethtool and ndo statistics callbacks then use the global software stats variable to report whatever stats they need. In fact, ndo_get_stats64 doesn't need to fold the whole software stats (mlx5e_grp_sw_update_stats); all it needs is five counters to fill the relevant fields of the rtnl_link_stats64 stats parameter. Hence this patch introduces a simpler helper function to fold software stats for ndo_get_stats64, which works directly on the rtnl_link_stats64 stats parameter and not on the global or even a temporary mlx5e_sw_stats variable. Since mlx5e_grp_sw_update_stats is no longer called by ndo_get_stats64, we can make it static and remove the temp var. Unlike mlx5e_grp_sw_update_stats, the new fold-stats function doesn't need to zero out the output statistics parameter, since that is already done by the stack in dev_get_stats(). This patch fixes the stack usage of mlx5e_grp_sw_update_stats on x86 with gcc-4.9 and higher; the concurrency issue between mlx5's ndo_get_stats64 and get_ethtool_stats is resolved as well. Fixes: 8bfaf07f ("net/mlx5e: Present SW stats when state is not opened") Reported-by: Arnd Bergmann <arnd@arndb.de> Reported-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
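A minimal sketch of that idea, with illustrative struct and field names rather than the actual mlx5e definitions: a small helper sums only the counters rtnl_link_stats64 needs, directly into the caller-provided (already zeroed) structure.

```c
#include <stdint.h>

/* Illustrative per-channel counters; the real mlx5e structures differ. */
struct ch_stats {
	uint64_t rx_packets, rx_bytes, tx_packets, tx_bytes, tx_dropped;
};

struct link_stats64 {	/* stand-in for rtnl_link_stats64 */
	uint64_t rx_packets, rx_bytes, tx_packets, tx_bytes, tx_dropped;
};

/* Fold only the five counters ndo_get_stats64 needs, straight into the
 * output parameter. The caller (dev_get_stats in the kernel) has already
 * zeroed it, so no memset and no temporary sw-stats struct is required.
 */
void fold_sw_stats64(const struct ch_stats *ch, int nch, struct link_stats64 *s)
{
	for (int i = 0; i < nch; i++) {
		s->rx_packets += ch[i].rx_packets;
		s->rx_bytes   += ch[i].rx_bytes;
		s->tx_packets += ch[i].tx_packets;
		s->tx_bytes   += ch[i].tx_bytes;
		s->tx_dropped += ch[i].tx_dropped;
	}
}
```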
-
Or Gerlitz authored
So far we were not tracking flow table create/destroy; add trace points for that. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Jason Gunthorpe authored
This construction confuses the compiler, which can't see that flow is initialized when !err: drivers/net/ethernet/mellanox/mlx5/core/en_tc.c: In function `mlx5e_configure_flower`: drivers/net/ethernet/mellanox/mlx5/core/en_tc.c:2727:28: warning: `flow` may be used uninitialized in this function [-Wmaybe-uninitialized] There is no reason for two function outputs; just return the pointer directly and use ERR_PTR to encode a failure. Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
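The pattern described above, sketched in plain C with userspace stand-ins for the kernel's <linux/err.h> helpers; the function and struct names here are illustrative, not the actual en_tc.c code.

```c
#include <errno.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel's <linux/err.h> helpers. */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(long)(err))
#define PTR_ERR(ptr)	((long)(ptr))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

struct flow { int id; };

/* Before: two outputs (return code plus **flow), which the compiler cannot
 * always prove leaves *flow initialized on success.
 * After: a single pointer output; failure is encoded in the pointer itself.
 */
struct flow *add_flow(int cfg)
{
	struct flow *flow;

	if (cfg < 0)
		return ERR_PTR(-EINVAL);

	flow = calloc(1, sizeof(*flow));
	if (!flow)
		return ERR_PTR(-ENOMEM);

	flow->id = cfg;
	return flow;
}

int configure(int cfg)
{
	struct flow *flow = add_flow(cfg);

	if (IS_ERR(flow))
		return PTR_ERR(flow);	/* no "maybe-uninitialized" path left */
	/* ... use flow ... */
	free(flow);
	return 0;
}
```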
-
Moshe Shemesh authored
Currently we have one cpu in the XPS cpumask per tx queue; this is good enough for the default configuration where there is a tx queue per cpu. However, once the configuration changes to use fewer tx queues, some of the cpus are not XPS-mapped, so the select-queue decision falls back to hash calculation and balancing is not guaranteed. Expand the XPS cpumask to enable using all cpus even when the number of tx queues is smaller than the number of cpus. Signed-off-by: Moshe Shemesh <moshe@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
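A sketch of how such an expansion can look (illustrative plain C, not the actual mlx5e code, which uses cpumasks and netif_set_xps_queue()): every online CPU is assigned to some TX queue round-robin, so the select-queue decision never has to fall back to hashing.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_CPUS 64

/* Build one XPS mask per TX queue so that *all* online CPUs are covered,
 * even when num_txqs < num_cpus: CPU c is mapped to queue (c % num_txqs).
 */
void build_xps_masks(uint64_t *masks, int num_txqs, int num_cpus)
{
	for (int q = 0; q < num_txqs; q++)
		masks[q] = 0;

	for (int c = 0; c < num_cpus && c < MAX_CPUS; c++)
		masks[c % num_txqs] |= 1ULL << c;
}

int main(void)
{
	uint64_t masks[4];

	build_xps_masks(masks, 4, 16);	/* 16 CPUs spread over 4 queues */
	for (int q = 0; q < 4; q++)
		printf("txq %d: cpus 0x%llx\n", q, (unsigned long long)masks[q]);
	return 0;
}
```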
-
Tariq Toukan authored
Only the Receive CQ makes use of these fields. Take them out into a separate struct and save space in the generic CQ structure. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
Tariq Toukan authored
In the non-linear SKB memory scheme of Striding RQ, a packet header could cross a page boundary. This requires special care in the fast path that costs LoC, additional runtime instructions and branches. It can happen when the header (up to 256B) does not fit in a single stride. Avoid this by working with a stride size that fits the maximum possible header. Stride is increased from 64B to 256B. Performance: Tested packet rate for UDP streams, single ring, on ConnectX-5. Configuration: Striding RQ and LRO ON (to enable the non-linear SKB scheme), GRO OFF, early drop by TC rule. 64B: 4x worse memory utilization, no page-crossing headers - No degradation (5,887,305 pps). - The worse memory utilization is compensated by saving the branch tests. 192B: 1.33x worse memory utilization, avoids page-crossing headers - Before: 5,727,252. After: 5,777,037. ~1% gain. 256B: Same memory utilization, no page-crossers - Before: 5,691,885. After: 5,748,007. ~1% gain. Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
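The sizing argument reduces to a small piece of arithmetic; the sketch below uses the constants from the commit message and a rounding helper that mirrors the kernel's roundup_pow_of_two().

```c
#include <stdio.h>

/* Round up to the next power of two, like the kernel's roundup_pow_of_two(). */
unsigned int roundup_pow2(unsigned int x)
{
	unsigned int r = 1;

	while (r < x)
		r <<= 1;
	return r;
}

int main(void)
{
	const unsigned int max_header = 256;	/* worst-case header, per the commit */
	const unsigned int old_stride = 64;
	unsigned int stride = roundup_pow2(max_header);

	/* With a 256B stride every packet starts stride-aligned inside the
	 * page, so a header of up to 256B can never cross a page boundary;
	 * with a 64B stride it could (e.g. a packet starting at offset 4032).
	 */
	printf("stride: %uB -> %uB (%ux the old stride)\n",
	       old_stride, stride, stride / old_stride);
	return 0;
}
```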
-
David S. Miller authored
This reverts the devlink health changes from 9/17/2019; Jiri wants things to be designed differently, and it was agreed that the easiest way to do this is to start from the beginning again. Commits reverted: cb5ccfbe 880ee82f c7af343b ff253fed 6f9d5613 fcd852c6 8a66704a 12bd0dce aba25279 ce019faa b8c45a03 And the follow-on build fix: 33a0efa4 Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shalom Toledo authored
Signed-off-by: Shalom Toledo <shalomt@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Zhaolong Zhang authored
max_rcvbuf_size is no longer used since commit "414574a0". Signed-off-by: Zhaolong Zhang <zhangzl2013@126.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Priyaranjan Jha says: ==================== tcp_bbr: Improving TCP BBR performance for WiFi and cellular networks ACK aggregation is quite prevalent with wifi, cellular and cable modem link technologies, with ACK decimation in middleboxes, and with common offloading techniques such as TSO and GRO at end hosts. Previously, BBR was often cwnd-limited in the presence of severe ACK aggregation, which resulted in low throughput due to insufficient data in flight. To achieve good throughput for wifi and other paths with aggregation, this patch series implements an ACK aggregation estimator for BBR, which estimates the maximum recent degree of ACK aggregation and adapts cwnd based on it. The algorithm is further described in the following presentation: https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00 (1) A preparatory patch, which refactors bbr_target_cwnd for generic inflight provisioning. (2) Implements the BBR ACK aggregation estimator and adapts cwnd based on the measured degree of ACK aggregation. ==================== Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Priyaranjan Jha authored
Aggregation effects are extremely common with wifi, cellular, and cable modem link technologies, ACK decimation in middleboxes, and LRO and GRO in receiving hosts. The aggregation can happen in either direction, data or ACKs, but in either case the aggregation effect is visible to the sender in the ACK stream. Previously BBR's sending was often limited by cwnd under severe ACK aggregation/decimation because BBR sized the cwnd at 2*BDP. If packets were acked in bursts after long delays (e.g. one ACK acking 5*BDP after 5*RTT), BBR's sending was halted after sending 2*BDP over 2*RTT, leaving the bottleneck idle for potentially long periods. Note that loss-based congestion control does not have this issue because when facing aggregation it continues increasing cwnd after bursts of ACKs, growing cwnd until the buffer is full. To achieve good throughput in the presence of aggregation effects, this algorithm allows the BBR sender to put extra data in flight to keep the bottleneck utilized during silences in the ACK stream that it has evidence to suggest were caused by aggregation. A summary of the algorithm: when a burst of packets are acked by a stretched ACK or a burst of ACKs or both, BBR first estimates the expected amount of data that should have been acked, based on its estimated bandwidth. Then the surplus ("extra_acked") is recorded in a windowed-max filter to estimate the recent level of observed ACK aggregation. Then cwnd is increased by the ACK aggregation estimate. The larger cwnd avoids BBR being cwnd-limited in the face of ACK silences that recent history suggests were caused by aggregation. As a sanity check, the ACK aggregation degree is upper-bounded by the cwnd (at the time of measurement) and a global max of BW * 100ms. The algorithm is further described by the following presentation: https://datatracker.ietf.org/meeting/101/materials/slides-101-iccrg-an-update-on-bbr-work-at-google-00 In our internal testing, we observed a significant increase in BBR throughput (measured using netperf), in a basic wifi setup. - Host1 (sender on ethernet) -> AP -> Host2 (receiver on wifi) - 2.4 GHz -> BBR before: ~73 Mbps; BBR after: ~102 Mbps; CUBIC: ~100 Mbps - 5.0 GHz -> BBR before: ~362 Mbps; BBR after: ~593 Mbps; CUBIC: ~601 Mbps Also, this code is running globally on YouTube TCP connections and produced significant bandwidth increases for YouTube traffic. This is based on Ian Swett's max_ack_height_ algorithm from the QUIC BBR implementation. Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
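The core of the estimator can be written down compactly. The sketch below models the arithmetic described above (bandwidth-based expected delivery, a windowed-max "extra_acked", and the cwnd / bw*100ms caps) in plain C with illustrative names; it is not the actual tcp_bbr.c code, and the epoch-reset logic is only noted in a comment.

```c
#include <stdint.h>

struct aggr_est {
	uint64_t epoch_start_us;	/* start of the current sampling epoch */
	uint64_t acked_in_epoch;	/* bytes ACKed since epoch start */
	uint64_t extra_acked_max;	/* simplified stand-in for the windowed-max filter */
};

/* On every ACK: compare what was actually ACKed in this epoch against what
 * the estimated bandwidth says should have been ACKed, and track the max
 * surplus. That surplus ("extra_acked") is then added on top of BBR's
 * 2*BDP cwnd, bounded by the cwnd at measurement time and by bw * 100ms.
 */
uint64_t update_extra_acked(struct aggr_est *e, uint64_t now_us,
			    uint64_t newly_acked, uint64_t bw_bytes_per_us,
			    uint64_t cwnd_bytes)
{
	uint64_t expected, extra, cap;

	if (!e->epoch_start_us)
		e->epoch_start_us = now_us;

	e->acked_in_epoch += newly_acked;
	expected = bw_bytes_per_us * (now_us - e->epoch_start_us);

	/* Surplus over the bandwidth-based expectation == aggregation. */
	extra = e->acked_in_epoch > expected ? e->acked_in_epoch - expected : 0;

	cap = bw_bytes_per_us * 100000;		/* bw * 100ms */
	if (cap > cwnd_bytes)
		cap = cwnd_bytes;
	if (extra > cap)
		extra = cap;

	if (extra > e->extra_acked_max)
		e->extra_acked_max = extra;

	/* A real implementation also restarts the epoch when delivery falls
	 * behind the bandwidth estimate (slow delivery, not aggregation);
	 * that bookkeeping is omitted here.
	 */
	return e->extra_acked_max;	/* added on top of 2*BDP when sizing cwnd */
}
```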
-
Priyaranjan Jha authored
Because bbr_target_cwnd() is really a general-purpose BBR helper for computing some volume of inflight data as a function of the estimated BDP, refactor it into the following helper functions: - bbr_bdp() - bbr_quantization_budget() - bbr_inflight() Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
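A sketch of the resulting decomposition (signatures and the quantization constant are illustrative; the real helpers operate on struct sock and BBR's fixed-point bandwidth units):

```c
#include <stdint.h>

/* bbr_bdp(): the volume of in-flight data needed to reach full utilization,
 * i.e. estimated bandwidth * propagation RTT, scaled by a gain.
 */
uint64_t bbr_bdp(uint64_t bw_pkts_per_us, uint64_t min_rtt_us, double gain)
{
	return (uint64_t)(gain * bw_pkts_per_us * min_rtt_us);
}

/* bbr_quantization_budget(): slack on top of the BDP to absorb quantization
 * effects from delayed/stretched ACKs, TSO/GSO bursts, etc.
 */
uint64_t bbr_quantization_budget(uint64_t inflight_pkts)
{
	return inflight_pkts + 3;	/* illustrative constant, not BBR's exact rule */
}

/* bbr_inflight(): the general-purpose helper the old bbr_target_cwnd()
 * becomes: some volume of in-flight data as a function of the estimated BDP.
 */
uint64_t bbr_inflight(uint64_t bw_pkts_per_us, uint64_t min_rtt_us, double gain)
{
	return bbr_quantization_budget(bbr_bdp(bw_pkts_per_us, min_rtt_us, gain));
}
```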
-
Heiner Kallweit authored
A few chip versions use the same sequence to adjust 10M and ALDPS, so let's factor it out. This patch also fixes a (most likely) typo in rtl8168g_1_hw_phy_config: there, bit 8 in reg 0x14 on page 0x0bcc was set instead of cleared. According to the vendor driver this bit needs to be cleared in all cases. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Chip versions from RTL8168g onward use the same sequence to disable ALDPS (Advanced Link-Down Power Saving). So let's factor this out. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
I made a dumb mistake when I summed up the slave stats; obviously slaves can come and go, which would make the master stats unreliable. Count and export the master stats separately. Fixes: a258aeac ("bonding: add support for xstats and export 3ad stats") Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says: ==================== net: phy: improve starting PHY This patch series improves a few aspects of starting the PHY. v2: - improve a warning in patch 4 v3: - extend commit message for patch 2 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Now that we enable the interrupts in phy_start(), we don't have to do it beforehand. Therefore remove enabling interrupts from phy_start_interrupts() and rename this function to reflect the changed functionality. v2: - improve warning to clearly state that we fall back to polling Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
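A hedged sketch of the renamed helper's behavior, modeled in standalone C with stub types; the real phy_request_interrupt() in drivers/net/phy/phy.c differs in detail.

```c
#include <stdio.h>

#define PHY_POLL (-1)	/* same meaning as the kernel's PHY_POLL marker */

struct toy_phydev { const char *name; int irq; };

/* Stand-in for request_threaded_irq(); returns 0 on success. */
int request_irq_stub(int irq) { return irq > 0 ? 0 : -1; }

/* Sketch of the renamed helper: it only *requests* the IRQ; enabling PHY
 * interrupts now happens later, in phy_start(). On failure it warns clearly
 * and falls back to polling instead of failing the attach.
 */
void phy_request_interrupt_sketch(struct toy_phydev *phydev)
{
	if (request_irq_stub(phydev->irq)) {
		fprintf(stderr, "%s: error requesting IRQ %d, falling back to polling\n",
			phydev->name, phydev->irq);
		phydev->irq = PHY_POLL;
	}
}
```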
-
Heiner Kallweit authored
Interrupts don't have to be enabled before calling phy_start(). Therefore let's enable them in phy_start(). In a subsequent step we'll remove enabling interrupts from phy_connect_direct(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
phy_start() should be called from states PHY_READY or PHY_HALTED only. Check for this to detect misbehaving drivers. Also, start the state machine only when phy_start() is called from one of the valid states. Some more background: for all invalid states phy_start() was basically a no-op. All it did was trigger a state machine run, but for all "running" states the poll loop was active anyway, and if called from PHY_DOWN, the state machine does nothing. v3: - extended commit message Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
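A standalone model of the entry check being described (the state names follow phylib's enum phy_state; everything else is illustrative, not the actual phy_start() body):

```c
#include <stdio.h>

enum phy_state { PHY_DOWN, PHY_READY, PHY_UP, PHY_RUNNING, PHY_HALTED };

/* phy_start() is only legal from PHY_READY (never started) or PHY_HALTED
 * (stopped). Anything else indicates a misbehaving driver: warn loudly and
 * do not start the state machine.
 */
int phy_start_check(enum phy_state state)
{
	if (state != PHY_READY && state != PHY_HALTED) {
		fprintf(stderr, "phy_start() called from invalid state %d\n",
			state);
		return -1;	/* bail out: do not start the state machine */
	}
	return 0;		/* ok: enable IRQs, set PHY_UP, start the machine */
}
```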
-
Heiner Kallweit authored
The state machine is a no-op before phy_start() has been called. Therefore let's enable it in phy_start() only. In phy_start() let's call phy_start_machine() instead of phy_trigger_machine(). phy_start_machine() is an alias for phy_trigger_machine(), but it makes it clearer that we start the state machine here instead of just triggering a run. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Yongjun authored
In case of error, the function devm_clk_get() returns ERR_PTR() and never returns NULL. The NULL test in the return value check should be replaced with IS_ERR(). Fixes: a7c30e62 ("net: stmmac: Add driver for Qualcomm ethqos") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Vinod Koul <vkoul@kernel.org> Acked-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
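The fix boils down to the standard pattern for devm_clk_get(); below is a kernel-style sketch with an illustrative function and clock name, not the actual ethqos code.

```c
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* devm_clk_get() signals failure with an ERR_PTR(), never with NULL, so the
 * check must be IS_ERR(); a NULL test would let the error pointer through.
 */
static int ethqos_get_rgmii_clk(struct device *dev, struct clk **clk_out)
{
	struct clk *clk = devm_clk_get(dev, "rgmii");	/* clock name illustrative */

	if (IS_ERR(clk)) {
		dev_err(dev, "failed to get the rgmii clock\n");
		return PTR_ERR(clk);
	}

	*clk_out = clk;
	return 0;
}
```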
-
Colin Ian King authored
Two statements are incorrectly indented; fix these by removing a space. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Claudiu Manoil says: ==================== Introduce ENETC ethernet drivers ENETC is a multi-port virtualized Ethernet controller supporting GbE designs and Time-Sensitive Networking (TSN) functionality. ENETC operates as an SR-IOV multi-PF capable Root Complex Integrated Endpoint (RCIE). As such, it contains multiple physical (PF) and virtual (VF) PCIe functions, discoverable via standard PCI Express. The patch series adds basic enablement for these otherwise standard buffer descriptor (BD) ring based ethernet devices (PCIe PFs and VFs), currently included in the LS1028A SoC with dual 64-bit ARMv8 processors. The driver is portable to 32-bit designs and is independent of CPU endianness. Contributors: Alex Marginean <alexandru.marginean@nxp.com> Catalin Horghidan <catalin.horghidan@nxp.com> TODO list: * IEEE 1588 PTP support; * TSN support; * MDIO support and VF link management; * power management support; * flow control support; * TC offloading with h/w MQPRIO; * interrupt coalescing, configurable BD ring sizes, and other usual config options if missing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
A ternary match table is used for RFS. If multiple entries in the table match, the entry with the lowest numerical index is chosen as the matching entry. Entries in the table are identified using an index which takes a value from 0 to PRFSCAPR[NUM_RFS]-1 when accessed by the PSI (PF). Portions of the RFS table can be assigned to each SI by the PSI (PF) driver in PSIaRFSCFGR. Assignments are cumulative: the entries assigned to SIn start after those assigned to SIn-1. The total assignments to all SIs must be equal to or less than the number available to the port as found in PRFSCAPR. For RSS, the Toeplitz hash function used requires two inputs: a 40B random secret key that is supplied through the PRSSKR0-9 registers, as well as the relevant pieces of the packet header (n-tuple). The 6 least-significant bits of the hash result are then used as a pointer to obtain the tag referenced in the 64-entry indirection table. The result provides a winning group which is used to help route the received packet. Signed-off-by: Alex Marginean <alexandru.marginean@nxp.com> Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
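The RSS part of the description maps to a short lookup sketch (plain C, illustrative; the real driver programs the key and indirection table through the PRSSKR0-9 and related registers rather than in-memory arrays):

```c
#include <stdint.h>

#define RSS_KEY_LEN	40	/* 40B Toeplitz secret key (PRSSKR0-9) */
#define IND_TBL_SIZE	64	/* 64-entry indirection table */

/* Given the Toeplitz hash of the packet's n-tuple, the 6 least-significant
 * bits index the 64-entry indirection table; the entry selects the group
 * ("winning group") that steers the received packet.
 */
uint32_t rss_pick_group(uint32_t toeplitz_hash, const uint32_t ind_tbl[IND_TBL_SIZE])
{
	return ind_tbl[toeplitz_hash & (IND_TBL_SIZE - 1)];	/* 6 LSBs */
}

/* A typical default table spreads entries round-robin over the Rx rings. */
void rss_default_table(uint32_t ind_tbl[IND_TBL_SIZE], uint32_t num_rings)
{
	for (uint32_t i = 0; i < IND_TBL_SIZE; i++)
		ind_tbl[i] = i % num_rings;
}
```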
-
Claudiu Manoil authored
VSIs (VFs) may send a message to the PSI (PF) for general notification or to gain access to hardware resources which require host inspection. These messages may vary in size and are handled as a partition copy between two memory regions owned by the respective participants. The PSI will respond with fail or success and a 16-bit message code. The patch implements the VF-to-PF messaging mechanism described above and, as the first application making use of this support, it enables the VF to configure its own primary MAC address. Signed-off-by: Catalin Horghidan <catalin.horghidan@nxp.com> Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
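A rough shape of such a mailbox exchange, purely as an illustration; the real ENETC message format, register interface and return codes live in the driver headers and differ from the names used here.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative VSI->PSI message: a command code plus payload that the HW
 * copies between the two parties' memory regions, answered with a 16-bit
 * status code.
 */
struct vsi_msg_set_mac {
	uint16_t code;		/* message class/command (illustrative) */
	uint8_t  mac[6];	/* new primary MAC requested by the VF */
};

/* PF-side handling sketch: inspect the request and apply it if allowed. */
uint16_t psi_handle_set_mac(const struct vsi_msg_set_mac *msg,
			    uint8_t hw_mac_out[6], int vf_allowed)
{
	if (!vf_allowed)
		return 1;	/* illustrative failure code */

	memcpy(hw_mac_out, msg->mac, 6);
	return 0;		/* success */
}
```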
-
Claudiu Manoil authored
This adds most h/w statistics counters: non-privileged SI counters, as well as privileged Port and MAC counters available only to the PF. Per-ring software stats are also included. Signed-off-by: Alex Marginean <alexandru.marginean@nxp.com> Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Claudiu Manoil authored
ENETC is a multi-port virtualized Ethernet controller supporting GbE designs and Time-Sensitive Networking (TSN) functionality. ENETC operates as an SR-IOV multi-PF capable Root Complex Integrated Endpoint (RCIE). As such, it contains multiple physical (PF) and virtual (VF) PCIe functions, discoverable via standard PCI Express. Introduce basic PF and VF ENETC ethernet drivers. The PF has access to the ENETC Port registers and resources and makes the required privileged configurations for the underlying VF devices. Common functionality is controlled through so-called System Interface (SI) register blocks; PFs and VFs each own an SI. Though SI register blocks are almost identical, there are a few privileged SI-level controls that are accessible only to PFs, so a distinction is made between PF SIs (PSI) and VF SIs (VSI). As such, the bulk of the code, including datapath processing, basic h/w offload support and generic PCI related configuration, is shared between the two drivers and is factored out into common source files (i.e. enetc.c). Major functionalities included (for both drivers): MSI-X support for Rx and Tx processing, assignment of Rx/Tx BD ring pairs to MSI-X entries, multi-queue support, Rx S/G (Rx frame fragmentation) and jumbo frame (up to 9600B) support, Rx paged allocation and reuse, Tx S/G support (NETIF_F_SG), Rx and Tx checksum offload, PF MAC filtering and initial control ring support, VLAN extraction/insertion, PF Rx VLAN CTAG filtering, VF MAC address config support, VF VLAN isolation support, etc. Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tariq Toukan authored
Soften the mb() memory barrier call to a sufficient wmb() in the consumer index update of the event queues. Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Jan, 2019 1 commit
-
-
Heiner Kallweit authored
So far member rtl_fw has three states: - IS_ERR(rtl_fw): firmware not loaded - !rtl_fw: no firmware available - other: firmware loaded This can be made simpler and clearer by adding the firmware name as member fw_name to struct rtl8169_private. Then: - !fw_name: no firmware available - !rtl_fw: firmware not loaded - rtl_fw: firmware loaded This change also makes it easy to merge rtl_request_uncached_firmware into rtl_request_firmware. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
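A compact sketch of the simplified checks (an illustrative subset of the private data, not the actual rtl8169_private layout):

```c
#include <stdbool.h>
#include <stddef.h>

struct rtl_fw;			/* opaque: represents loaded firmware data */

struct rtl_priv_sketch {	/* illustrative subset of rtl8169_private */
	const char *fw_name;	/* NULL: no firmware available for this chip */
	struct rtl_fw *rtl_fw;	/* NULL: firmware not (yet) loaded */
};

/* The three overloaded states of rtl_fw collapse into two plain pointer
 * checks; no IS_ERR() value is stored in rtl_fw anymore.
 */
bool fw_available(const struct rtl_priv_sketch *p) { return p->fw_name != NULL; }
bool fw_loaded(const struct rtl_priv_sketch *p)    { return p->rtl_fw != NULL; }
```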
-