- 07 May, 2020 22 commits
-
-
Devulapally Shiva Krishna authored
This patch fixes two issues observed during self-tests with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled. 1. A gcm(aes) hang that happens during decryption. 2. rfc4106-gcm-aes-chcr encryption unexpectedly succeeded. For gcm(aes) decryption, the authtag is not mapped because sg_nents_for_len() only covers assoclen + cryptlen - authsize; fix this by DMA-mapping the authtag. Also replace sg_nents() with sg_nents_for_len() in aead_dma_unmap(). For rfc4106-gcm-aes-chcr, use crypto_ipsec_check_assoclen() to check the validity of assoclen. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
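A hedged sketch of the two patterns this fix relies on. crypto_ipsec_check_assoclen() and sg_nents_for_len() are real kernel helpers; everything prefixed with example_, and the shape of the unmap helper, is illustrative rather than the actual chcr code.

```c
#include <crypto/aead.h>          /* struct aead_request */
#include <crypto/gcm.h>           /* crypto_ipsec_check_assoclen() */
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>    /* sg_nents_for_len() */

/* rfc4106 carries an 8- or 12-byte IV appended to the AAD, so assoclen
 * must be 16 or 20 bytes; crypto_ipsec_check_assoclen() encodes that rule. */
static int example_rfc4106_assoclen_ok(struct aead_request *req)
{
	return crypto_ipsec_check_assoclen(req->assoclen);
}

/* Unmap only the entries that cover the length that was actually mapped,
 * mirroring how the map side counted entries with sg_nents_for_len(). */
static void example_aead_dma_unmap(struct device *dev,
				   struct scatterlist *sg,
				   unsigned int mapped_len,
				   enum dma_data_direction dir)
{
	int nents = sg_nents_for_len(sg, mapped_len);

	if (nents > 0)
		dma_unmap_sg(dev, sg, nents, dir);
}
```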
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: kill endpoint stop workaround It turns out that a workaround that performs a small DMA operation between retried attempts to stop a GSI channel is not needed for any supported hardware. The hardware quirk that required the extra DMA operation was fixed after IPA v3.1. So this series gets rid of that workaround code, along with some other code that was only present to support it. NOTE: This series depends on (and includes/duplicates) another patch that has already been committed in the net tree: 713b6ebb net: ipa: fix a bug in ipa_endpoint_stop() ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
A recent commit removed the only use of ipa_cmd_dma_task_32b_addr_add(). This function (and the IPA immediate command it implements) is no longer needed, so get rid of it, along with all of the definitions associated with it. Isolate its removal in a commit so it can be easily added back again if needed. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The previous commit made ipa_endpoint_stop() be a trivial wrapper around gsi_channel_stop(). Since it no longer does anything special, just open-code it in the three places it's used. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
The only reason ipa_endpoint_stop() had a retry loop was that the just-removed workaround required an IPA DMA command to occur between attempts. The gsi_channel_stop() call that implements the stop does its own retry loop, to cover a channel's transition from started to stop-in-progress to stopped state. Get rid of the unnecessary retry loop in ipa_endpoint_stop(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
In ipa_endpoint_stop(), a workaround is used for IPA version 3.5.1 where a 1-byte DMA request is issued between GSI channel stop retries. It turns out that this workaround is only required for IPA versions 3.1 and 3.2, and we don't support those. So remove the call to ipa_endpoint_stop_rx_dma() in that function. That leaves that function unused, so get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
In ipa_endpoint_stop(), for TX endpoints we set the number of retries to 0. When we break out of the loop, retries being 0 means we return EIO rather than the value of ret (which should be 0). Fix this by using a non-zero retry count for both RX and TX channels, and just break out of the loop after calling gsi_channel_stop() for TX channels. This way only RX channels will retry, and the retry count will be non-zero at the end for TX channels (so the proper value gets returned). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry picked from commit 713b6ebb) Signed-off-by: David S. Miller <davem@davemloft.net>
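The loop shape this describes, as a rough sketch: the names, retry budget, and error handling below are illustrative, not the actual ipa_endpoint_stop() code.

```c
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_STOP_RETRIES	10	/* illustrative retry budget */

struct example_endpoint {
	u32 channel_id;
	bool toward_ipa;		/* true for TX endpoints */
};

int example_channel_stop(u32 channel_id);	/* returns 0, -EAGAIN, ... */

static int example_endpoint_stop(struct example_endpoint *endpoint)
{
	int retries = EXAMPLE_STOP_RETRIES;	/* non-zero for RX and TX */
	int ret;

	do {
		ret = example_channel_stop(endpoint->channel_id);
		/* Only RX endpoints retry; TX breaks out after the first
		 * attempt, leaving retries non-zero. */
		if (ret != -EAGAIN || endpoint->toward_ipa)
			break;
		msleep(1);
	} while (--retries);

	/* TX (and any successful RX stop) returns ret unchanged; an RX
	 * endpoint that never stopped reports -EIO. */
	return retries ? ret : -EIO;
}
```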
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: kill endpoint delay mode workaround A "delay mode" feature was put in place to work around a problem where packets could be passed to the modem before it was ready to handle them. That problem no longer exists, and we don't need the workaround any more, so get rid of it. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
A "delay mode" feature was put in place to work around a problem that was observed during development of the upstream IPA driver. It used TX endpoint "delay mode" in order to prevent transmitting packets toward the modem before it was ready. A race condition that would explain the problem has long since been fixed, and we have concluded that the "delay mode" feature is no longer required. So get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Create a new helper function that encapsulates enabling or disabling suspend on an RX endpoint. It returns the previous state of the endpoint (true means suspend mode was enabled). Create another function that handles enabling or disabling delay mode on a TX endpoint. Delay mode does not work correctly on IPA version 4.2, so we don't currently use it (and shouldn't). We only set delay mode in one case, and although we don't expect an endpoint to already be in delay mode, it doesn't really matter if it was. So the delay function doesn't return a value. Stop issuing warnings if the previous suspend or delay mode state differs from what is expected. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Change ipa_endpoint_init_ctrl() so it returns the previous state (whether suspend or delay mode was enabled) rather than indicating whether the request caused a change in state. This makes it easier to understand what's happening where called. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
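The "return the previous state" pattern reads roughly like the sketch below; the register offset and mask are placeholders, and the real function operates on an IPA endpoint and picks the suspend or delay field based on direction.

```c
#include <linux/bits.h>
#include <linux/io.h>
#include <linux/types.h>

#define EXAMPLE_CTRL_OFFSET	0x0	/* placeholder register offset */
#define EXAMPLE_CTRL_MASK	BIT(0)	/* placeholder suspend/delay field */

/* Returns the *previous* state of the control bit and writes the register
 * only if the requested state differs from what is already there. */
static bool example_init_ctrl(void __iomem *base, bool enable)
{
	u32 val = ioread32(base + EXAMPLE_CTRL_OFFSET);
	bool state = !!(val & EXAMPLE_CTRL_MASK);

	if (enable != state) {
		val ^= EXAMPLE_CTRL_MASK;
		iowrite32(val, base + EXAMPLE_CTRL_OFFSET);
	}

	return state;	/* the caller decides whether this was expected */
}
```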
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: limit special reset handling Some special handling done during channel reset should only be done for IPA hardware version 3.5.1. This series generalizes the meaning of a flag passed to indicate special behavior, then has the special handling be used only when appropriate. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
In gsi_channel_reset(), RX channels are subjected to two consecutive CHANNEL_RESET commands. This workaround should only be used for IPA version 3.5.1, and for newer hardware "can lead to unwanted behavior." Only issue the second CHANNEL_RESET command for legacy hardware. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
In several places, a Boolean flag is used in the GSI code to indicate whether the "doorbell engine" should be enabled or not when a channel is configured. This is basically done to abstract this property from the IPA version; the GSI code doesn't otherwise "know" what the IPA hardware version is. The doorbell engine is enabled only for IPA v3.5.1, not for IPA v4.0 and later. The next patch makes another change that affects behavior during channel reset (which also involves programming the channel). It also distinguishes IPA v3.5.1 hardware from newer hardware. Rather than creating another flag whose value matches the "db_enable" value, just rename "db_enable" to be "legacy" so it can be used to signal more than just the special doorbell handling. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: minor adjustments for low pacing rates After pacing horizon addition, we have to adjust how we arm rto timer, otherwise we might freeze very low pacing rate flows. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
As hinted in prior change ("tcp: refine tcp_pacing_delay() for very low pacing rates"), it is probably best arming the xmit timer only when all the packets have been scheduled, rather than when the head of rtx queue has been re-sent. This does matter for flows having extremely low pacing rates, since their tp->tcp_wstamp_ns could be far in the future. Note that the regular xmit path has a stronger limit in tcp_small_queue_check(), meaning it is less likely to go beyond the pacing horizon. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
With the addition of horizon feature to sch_fq, we noticed some suboptimal behavior of extremely low pacing rate TCP flows, especially when TCP is not aware of a drop happening in lower stacks. Back in commit 3f80e08f ("tcp: add tcp_reset_xmit_timer() helper"), tcp_pacing_delay() was added to estimate an extra delay to add to standard rto timers. This patch removes the skb argument from this helper and tcp_reset_xmit_timer() because it makes more sense to simply consider the time at which next packet is allowed to be sent, instead of the time of whatever packet has been sent. This avoids arming RTO timer too soon and removes spurious horizon drops. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
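The idea, roughly: the extra delay to fold into the RTO is how far tp->tcp_wstamp_ns (the earliest time the next packet may be sent) lies beyond the current clock. A minimal sketch of that computation, with plain timestamps standing in for the socket fields; this is not the exact upstream helper.

```c
#include <linux/jiffies.h>
#include <linux/types.h>

/* wstamp_ns models tp->tcp_wstamp_ns, clock_ns the cached "now"; the
 * result is the pacing delay in jiffies to add when arming the timer. */
static u32 example_pacing_delay(u64 wstamp_ns, u64 clock_ns)
{
	s64 delay = wstamp_ns - clock_ns;

	return delay > 0 ? nsecs_to_jiffies(delay) : 0;
}
```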
-
Alex Elder authored
Add an "iommus" property to the IPA node in "sdm845.dtsi". It is required because there are two regions of memory the IPA accesses through an SMMU. The next few patches define and map those regions. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says: ==================== timer: add fsleep for flexible sleeping Sleeping for a certain amount of time requires use of different functions, depending on the time period. Documentation/timers/timers-howto.rst explains when to use which function, and checkpatch also checks for some potentially problematic cases. So let's create a helper that automatically chooses the appropriate sleep function: fsleep(), for flexible sleeping. Not sure why such a helper doesn't exist yet, or where the pitfall is, because it's quite an obvious idea. If the delay is a constant, then the compiler should be able to ensure that the new helper doesn't create overhead. If the delay is not constant, then the new helper can save some code. First user is the r8169 network driver. If nothing speaks against it, then this series could go through the netdev tree. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Use new flexible sleep function fsleep() to merge the udelay and msleep polling functions. We can safely do this because no polling function is used in atomic context in this driver. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
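Roughly what merging the two poll helpers looks like, as a sketch with made-up names rather than the r8169 code itself: one loop takes the delay in microseconds and lets fsleep() pick the primitive.

```c
#include <linux/delay.h>
#include <linux/types.h>

/* One poll loop instead of separate udelay()- and msleep()-based ones. */
static bool example_poll(bool (*cond)(void *ctx), void *ctx,
			 unsigned long sleep_us, int tries)
{
	while (tries--) {
		if (cond(ctx))
			return true;
		fsleep(sleep_us);	/* udelay/usleep_range/msleep, as fits */
	}

	return false;
}
```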
-
Heiner Kallweit authored
Sleeping for a certain amount of time requires use of different functions, depending on the time period. Documentation/timers/timers-howto.rst explains when to use which function, and checkpatch also checks for some potentially problematic cases. So let's create a helper that automatically chooses the appropriate sleep function: fsleep(), for flexible sleeping. If the delay is a constant, then the compiler should be able to ensure that the new helper doesn't create overhead. If the delay is not constant, then the new helper can save some code. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
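The shape of such a helper, following the thresholds in timers-howto.rst; treat this as a sketch of the idea, as the merged fsleep() may differ in detail.

```c
#include <linux/delay.h>
#include <linux/kernel.h>	/* DIV_ROUND_UP() */

/* Pick the sleep primitive from the requested delay in microseconds:
 * busy-wait for very short delays, usleep_range() for the middle ground,
 * msleep() for long ones. */
static inline void example_fsleep(unsigned long usecs)
{
	if (usecs <= 10)
		udelay(usecs);
	else if (usecs <= 20000)
		usleep_range(usecs, 2 * usecs);
	else
		msleep(DIV_ROUND_UP(usecs, 1000));
}
```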
-
Fernando Gont authored
Implement the upcoming rev of RFC4941 (IPv6 temporary addresses): https://tools.ietf.org/html/draft-ietf-6man-rfc4941bis-09
* Reduces the default Valid Lifetime to 2 days. The number of extra addresses employed when the Valid Lifetime was 7 days exacerbated the stress caused on network elements/devices. Additionally, the motivation for temporary addresses is indeed privacy and reduced exposure. With a default Valid Lifetime of 7 days, an address that becomes revealed by active communication is reachable and exposed for one whole week. The only use case for a Valid Lifetime of 7 days would be an application that expects to have long-lived connections. But if you want long-lived connections, you shouldn't be using a temporary address in the first place. Additionally, in the era of mobile devices, general applications should nevertheless be prepared for and robust to address changes (e.g. nodes swapping wifi <-> 4G, etc.).
* Employs different IIDs for different prefixes, to avoid network-activity correlation among addresses configured for different prefixes.
* Uses a simpler algorithm for IID generation, with no need to store "history" anywhere.
Signed-off-by: Fernando Gont <fgont@si6networks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
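One way to realize "per-prefix, stateless" IIDs is to hash the prefix together with a per-host secret and the DAD counter. The sketch below uses siphash purely for illustration and makes no claim about the hash, inputs, or bit handling used by the merged code.

```c
#include <linux/in6.h>
#include <linux/siphash.h>
#include <linux/types.h>

/* Illustrative only: derive a temporary IID from the prefix, a per-host
 * secret and a DAD counter, so different prefixes get unrelated IIDs and
 * no per-address "history" needs to be stored. */
static u64 example_temp_iid(const struct in6_addr *prefix,
			    const siphash_key_t *secret, u8 dad_count)
{
	struct {
		struct in6_addr prefix;
		u8 dad_count;
	} __packed in = {
		.prefix = *prefix,
		.dad_count = dad_count,
	};

	return siphash(&in, sizeof(in), secret);
}
```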
-
- 06 May, 2020 18 commits
-
-
David S. Miller authored
Michael Walle says: ==================== add phy shared storage Introduce the concept of a shared PHY storage which can be used by some QSGMII PHYs to ease initialization and access to global per-package registers.
Changes since v2:
- restore page to standard after reading the base address in the mscc driver, thanks Antoine.
Changes since v1:
- fix typos and add a comment, thanks Florian.
- check for "addr < 0" in phy_package_join()
- remove multiple blank lines and make "checkpatch.pl --strict" happy
Changes since RFC:
- check return code of kzalloc()
- fix local variable ordering (reverse christmas tree)
- add priv_size argument to phy_package_join()
- add Tested-by tag, thanks Vladimir.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
Use the new phy_package_shared common storage to ease the package initialization and to access the global registers. Signed-off-by: Michael Walle <michael@walle.cc> Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
Use the new phy_package_shared common storage to ease the package initialization and to access the global registers. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
There are packages which contain multiple PHY devices, eg. a quad PHY transceiver. Provide functions to allocate and free shared storage. Usually, a quad PHY contains global registers, which don't belong to any PHY. Provide convenience functions to access these registers. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
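Roughly how a driver uses the helpers this series adds, as a hedged sketch: the example_* names, the base-address derivation, and the register number are all made up; phy_package_join()/phy_package_leave()/phy_package_read() and phydev->shared are the pieces described above.

```c
#include <linux/phy.h>

/* Illustrative per-package private data for a quad PHY. */
struct example_pkg_priv {
	int package_rev;
};

static int example_phy_probe(struct phy_device *phydev)
{
	/* Assumption for the sketch: four PHYs per package, aligned addresses. */
	int base_addr = phydev->mdio.addr & ~3;

	/* All four PHYs join the same shared storage, keyed by base_addr;
	 * the first caller allocates it, later callers just take a ref. */
	return phy_package_join(phydev, base_addr,
				sizeof(struct example_pkg_priv));
}

static int example_phy_config_init(struct phy_device *phydev)
{
	struct example_pkg_priv *priv = phydev->shared->priv;
	int val;

	/* Global (per-package) registers are accessed through the shared
	 * base address rather than the individual PHY's own address. */
	val = phy_package_read(phydev, 0x1f);	/* placeholder register */
	if (val < 0)
		return val;

	priv->package_rev = val;
	return 0;
}

static void example_phy_remove(struct phy_device *phydev)
{
	phy_package_leave(phydev);	/* storage is freed on the last leave */
}
```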
-
Ayush Sawal authored
This reverts commit 27c6feb0. For ipsec offload, Chelsio's ethernet driver expects a single mtu-sized packet. But when ipsec traffic is run using iperf, most of the packets in that traffic are GSO packets (large skbs), because GSO is enabled by default in TCP since commit 0a6b2a1d ("tcp: switch to GSO being always on"), so chcr_ipsec_offload_ok() receives GSO skbs (with gso_size non-zero). Due to the check in chcr_ipsec_offload_ok(), the function returns false for most of these packets; ipsec offload is then skipped and the skb goes out via the coprocessor path, which reduces the bandwidth for inline ipsec. If the check is removed, then for most packets (large skbs) chcr_ipsec_offload_ok() returns true, and since GSO is on, segmentation happens in the kernel before driver xmit is finally called, which then receives a segmented mtu-sized packet, which is what the driver expects for ipsec offload. So the check is unnecessary here; remove it. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
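The kind of callback being changed looks roughly like this; it is illustrative, not the literal driver code or the exact reverted hunk.

```c
#include <linux/skbuff.h>
#include <net/xfrm.h>

/* xdo_dev_offload_ok-style callback; other driver-specific checks omitted. */
static bool example_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
{
	/* The reverted change effectively added something like:
	 *
	 *	if (skb_is_gso(skb))
	 *		return false;
	 *
	 * Without that bail-out, GSO skbs are reported as offloadable; the
	 * stack segments them first, so the driver's xmit path still only
	 * sees MTU-sized packets.
	 */
	return true;
}
```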
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
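The fix pattern, shown on a made-up driver; the same change applies to the similarly described patches that follow.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Before: the handler was declared to return plain int.  ndo_start_xmit
 * is defined to return netdev_tx_t, so use that type and its values. */
static netdev_tx_t example_start_xmit(struct sk_buff *skb,
				      struct net_device *dev)
{
	/* ... hand the skb to hardware ... */
	dev_kfree_skb_any(skb);		/* placeholder for the real TX work */

	return NETDEV_TX_OK;
}

static const struct net_device_ops example_netdev_ops = {
	.ndo_start_xmit	= example_start_xmit,
};
```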
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
ChenTao authored
Fix the following warning: drivers/net/ethernet/freescale/enetc/enetc_qos.c:427:20: warning: symbol 'enetc_act_fwd' was not declared. Should it be static? drivers/net/ethernet/freescale/enetc/enetc_qos.c:966:20: warning: symbol 'enetc_check_flow_actions' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: ChenTao <chentao107@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunjian Wang authored
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Julian Wiedmann says: ==================== s390/qeth: updates 2020-05-06 please apply the following patch series for qeth to netdev's net-next tree. Same patches as yesterday, except that the ethtool reset has been dropped for now. This primarily adds infrastructure to deal with HW offloads when the packets get forwarded over the adapter's internal switch. Aside from that, just some minor tweaking for the TX code. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Remove a stale doc link. While at it also reword the help text to get rid of an outdated marketing term. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
When starting the reset worker via sysfs is unsuccessful, return an error to the user. Modernize the sysfs input parsing while at it. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
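A hedged sketch of that shape for a sysfs store handler: the example_* names are made up, while kstrtobool() and the device_attribute store signature are the standard kernel interfaces.

```c
#include <linux/device.h>
#include <linux/kernel.h>	/* kstrtobool() on kernels of this vintage */

int example_schedule_recovery(struct device *dev);	/* provided elsewhere */

static ssize_t example_recover_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
{
	bool reset;
	int rc;

	rc = kstrtobool(buf, &reset);
	if (rc)
		return rc;
	if (!reset)
		return count;

	/* Propagate a failure to start the worker instead of ignoring it. */
	rc = example_schedule_recovery(dev);

	return rc ? rc : count;
}
```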
-
Julian Wiedmann authored
When qeth_flush_buffers() gets called for a group of TX buffers (currently up to 2 for OSA-style devices), the code iterates over each buffer for some final processing. During this processing, it sets the TX IRQ marker on the leading buffer rather than the last one. This can result in delayed TX completion of the trailing buffers. So pull the IRQ marker code out of the loop, and apply it to the final buffer. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
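The gist, as a sketch with made-up names and flag values; the real code operates on qeth's qdio buffers.

```c
#include <linux/types.h>

#define EXAMPLE_FLAG_TX_IRQ	0x1	/* placeholder completion-IRQ flag */

struct example_buffer {
	u32 flags;
};

/* Request the TX completion interrupt on the *last* buffer of the flushed
 * group rather than the first, so trailing buffers complete promptly. */
static void example_flush_buffers(struct example_buffer *bufs,
				  unsigned int first, unsigned int count,
				  unsigned int ring_size)
{
	struct example_buffer *buf = NULL;
	unsigned int i;

	for (i = 0; i < count; i++) {
		buf = &bufs[(first + i) % ring_size];
		/* per-buffer finalization would go here */
	}

	/* Previously the flag was applied to bufs[first] during the loop. */
	if (buf)
		buf->flags |= EXAMPLE_FLAG_TX_IRQ;
}
```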
-
Julian Wiedmann authored
The TX path usually maps the full content of a page into a buffer element. But there are specific skb layouts (i.e. linearized TSO skbs) where the HW header (1) requires a separate buffer element, and (2) is page-contiguous with the packet data that's mapped into the next buffer element. Flag such buffer elements accordingly, so that HW can optimize its data access for them. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-