- 23 Sep, 2018 2 commits
-
-
Haiyang Zhang authored
This patch adds the handler for LRO setting changes, so that a user can use the ethtool command to enable/disable the LRO feature. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
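As an illustration of the shape such a handler usually takes, here is a minimal sketch of an ndo_set_features hook reacting to NETIF_F_LRO changes; the function and helper names are hypothetical and not the actual netvsc code:

    static int example_set_features(struct net_device *ndev,
                                    netdev_features_t features)
    {
            netdev_features_t changed = ndev->features ^ features;

            if (!(changed & NETIF_F_LRO))
                    return 0;       /* nothing LRO-related changed */

            /* example_update_rsc() is a placeholder for asking the host/vSwitch
             * to turn receive coalescing on or off.
             */
            return example_update_rsc(ndev, !!(features & NETIF_F_LRO));
    }

A user would then reach this hook with "ethtool -K ethX lro on" (or "off").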
-
Haiyang Zhang authored
LRO/RSC in the vSwitch is a feature available in Windows Server 2019 hosts and later. It reduces the per packet processing overhead by coalescing multiple TCP segments when possible. This patch adds netvsc driver support for this feature. Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Sep, 2018 38 commits
-
-
David S. Miller authored
Florian Fainelli says: ==================== net: dsa: b53: SGMII modes fixes Here are two additional fixes that are required in order for SGMII to work correctly. This was discovered with using a copper SFP which would make us use SGMII mode, we would actually leave the HW configured in its default mode: Fiber. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
In both 802.3z and SGMII modes we need to configure the MAC accordingly to flip between Fiber and SGMII modes, and we need to read the MAC status from the SGMII in-band control word. Fixes: 0e01491d ("net: dsa: b53: Add SerDes support") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
The maths went wrong: to get 0x20, we need to do 0x1e + (x) * 2, not 0x18 + (x) * 2, so fix that offset so we access the correct registers. With the wrong base we would not access the correct SerDes Digital control words (the status registers were fine), and so we would not correctly flip between Fiber and SGMII modes, resulting in incorrect status words being pulled into the SerDes Digital status register. Fixes: 0e01491d ("net: dsa: b53: Add SerDes support") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
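Written out as a macro, the arithmetic looks like this (the macro name is assumed for illustration; only the constants come from the text above):

    /* before: #define SERDES_DIGITAL_CONTROL(x)  (0x18 + (x) * 2) */
    #define SERDES_DIGITAL_CONTROL(x)  (0x1e + (x) * 2)

    /* e.g. for x = 1: 0x1e + 1 * 2 = 0x20, the intended register,
     * whereas the old base gave 0x18 + 1 * 2 = 0x1a, a wrong register.
     */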
-
Florian Fainelli authored
PHYLINK takes care of filling the right information into state->an_enabled, so get rid of the read from the SerDes's BMCR register. Fixes: 0e01491d ("net: dsa: b53: Add SerDes support") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nathan Chancellor authored
Clang warns that the address of a pointer will always be evaluated as true in a boolean context:

    net/decnet/dn_dev.c:1366:10: warning: address of array 'dev->name' will always evaluate to 'true' [-Wpointer-bool-conversion]
                    dev->name ? dev->name : "???",
                    ~~~~~^~~~ ~
    1 warning generated.

Link: https://github.com/ClangBuiltLinux/linux/issues/116 Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
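In simplified form, the pattern and its fix look like this (a standalone example, not the exact decnet code):

    #include <stdio.h>

    struct example_dev {
            char name[16];          /* embedded array: its address is never NULL */
    };

    static void print_name(const struct example_dev *dev)
    {
            /* old: dev->name ? dev->name : "???"  -- the ternary always takes
             * the first branch, which is what clang warns about
             */
            printf("%s\n", dev->name);
    }

    int main(void)
    {
            struct example_dev dev = { .name = "eth0" };

            print_name(&dev);
            return 0;
    }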
-
Peter Oskolkov authored
This patch adds ipv6 defragmentation tests to ip_defrag selftest, to complement existing ipv4 tests. Signed-off-by: Peter Oskolkov <posk@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Oskolkov authored
Currently, ip[6]frag_high_thresh sysctl values in new namespaces are hard-limited to those of the root/init ns. There are at least two use cases when it would be desirable to set the high_thresh values higher in a child namespace vs the global hard limit:
- a security/ddos protection policy may lower the thresholds in the root/init ns but allow for a special exception in a child namespace
- testing: a test running in a namespace may want to set these thresholds higher in its namespace than what is in the root/init ns

The new behavior:

    # ip netns add testns
    # ip netns exec testns bash
    # sysctl -w net.ipv4.ipfrag_high_thresh=9000000
    net.ipv4.ipfrag_high_thresh = 9000000
    # sysctl net.ipv4.ipfrag_high_thresh
    net.ipv4.ipfrag_high_thresh = 9000000
    # sysctl -w net.ipv6.ip6frag_high_thresh=9000000
    net.ipv6.ip6frag_high_thresh = 9000000
    # sysctl net.ipv6.ip6frag_high_thresh
    net.ipv6.ip6frag_high_thresh = 9000000

The old behavior:

    # ip netns add testns
    # ip netns exec testns bash
    # sysctl -w net.ipv4.ipfrag_high_thresh=9000000
    net.ipv4.ipfrag_high_thresh = 9000000
    # sysctl net.ipv4.ipfrag_high_thresh
    net.ipv4.ipfrag_high_thresh = 4194304
    # sysctl -w net.ipv6.ip6frag_high_thresh=9000000
    net.ipv6.ip6frag_high_thresh = 9000000
    # sysctl net.ipv6.ip6frag_high_thresh
    net.ipv6.ip6frag_high_thresh = 4194304

Signed-off-by: Peter Oskolkov <posk@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Oskolkov authored
This is similar to how ipv4 now behaves: commit 0ff89efb ("ip: fail fast on IP defrag errors"). Signed-off-by: Peter Oskolkov <posk@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
net/ipv4/fib_frontend.c: In function 'fib_info_nh_uses_dev':
net/ipv4/fib_frontend.c:322:6: error: unused variable 'ret' [-Werror=unused-variable]
cc1: all warnings being treated as errors

Fixes: 78f2756c ("net/ipv4: Move device validation to helper") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: David Ahern <dsahern@gmail.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Eric Dumazet says: ==================== tcp: switch to Early Departure Time model

In the early days, pacing has been implemented in sch_fq (FQ) in a generic way:
- SO_MAX_PACING_RATE could be used by any socket.
- TCP would vary the effective pacing rate based on CWND*MSS/SRTT.
- FQ would ensure delays between packets based on the current sk->sk_pacing_rate, but with some quantum-based artifacts (inflating RPC tail latencies).
- BBR then tweaked the pacing rate in its various phases (PROBE, DRAIN, ...).

This worked reasonably well, but had the side effect that TCP RTT samples would be inflated by the sojourn time of the packets in FQ. Also note that when FQ is not used and TCP wants pacing, the internal pacing fallback has very different behavior, since TCP emits packets at the time they should be sent (with unreasonable assumptions about scheduling costs).

Van Jacobson gave a talk at Netdev 0x12 in Montreal about letting TCP (or applications, for UDP messages) decide on the Earliest Departure Time, instead of letting packet schedulers derive it from the pacing rate.
https://www.netdevconf.org/0x12/session.html?evolving-from-afap-teaching-nics-about-time
https://www.files.netdevconf.org/d/46def75c2ef345809bbe/files/?p=/Evolving%20from%20AFAP%20%E2%80%93%20Teaching%20NICs%20about%20time.pdf

Recent additions in linux provided SO_TXTIME and a new ETF qdisc supporting the new skb->tstamp role.

This patch series converts TCP and FQ to the same model. This might in the future allow us to relax tight TSQ limits (if FQ is present in the output path), and thus lower the number of callbacks to tcp_write_xmit(), thanks to batching. This will be followed by an FQ change allowing SO_TXTIME support so that QUIC servers can let the pacing be done in FQ (or offloaded if the network device permits).

For example, a TCP flow rated at 24Mbps now shows a more meaningful RTT.

Before:
    ESTAB 0 211408 10.246.7.151:41558 10.246.7.152:33723
    cubic wscale:8,8 rto:203 rtt:2.195/0.084 mss:1448 rcvmss:536 advmss:1448 cwnd:20 ssthresh:20 bytes_acked:36897937 segs_out:25488 segs_in:12454 data_segs_out:25486 send 105.5Mbps lastsnd:1 lastrcv:12851 lastack:1 pacing_rate 24.0Mbps/24.0Mbps delivery_rate 22.9Mbps busy:12851ms unacked:4 rcv_space:29200 notsent:205616 minrtt:0.026

After:
    ESTAB 0 192584 10.246.7.151:61612 10.246.7.152:34375
    cubic wscale:8,8 rto:201 rtt:0.165/0.129 mss:1448 rcvmss:536 advmss:1448 cwnd:20 ssthresh:20 bytes_acked:170755401 segs_out:117931 segs_in:57651 data_segs_out:117929 send 1404.1Mbps lastsnd:1 lastrcv:56915 lastack:1 pacing_rate 24.0Mbps/24.0Mbps delivery_rate 24.2Mbps busy:56915ms unacked:4 rcv_space:29200 notsent:186792 minrtt:0.054

A nice side effect of this patch series is a reduction of max/p99 latencies of RPC workloads, since the FQ quantum no longer adds artifacts. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
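The core idea can be sketched in a few lines: instead of a qdisc spacing packets out of a rate, the sender stamps each skb with the earliest time it may leave and advances that time by the packet's serialization delay. This is only a conceptual sketch with simplified state, not the actual tcp_output.c code:

    /* Conceptual sketch of EDT stamping; 'flow_wstamp_ns' stands in for the
     * real per-socket tp->tcp_wstamp_ns state.
     */
    static void edt_stamp_skb(struct sk_buff *skb, u64 *flow_wstamp_ns,
                              u64 now_ns, unsigned int len,
                              unsigned long pacing_rate)
    {
            u64 departure = max(*flow_wstamp_ns, now_ns);

            skb->tstamp = departure;                 /* earliest departure time */

            /* advance the flow clock by len / pacing_rate, in nanoseconds */
            *flow_wstamp_ns = departure +
                              div64_ul((u64)len * NSEC_PER_SEC, pacing_rate);
    }

A qdisc such as FQ (or a NIC with departure-time offload) then only has to release each packet no earlier than its skb->tstamp.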
-
Eric Dumazet authored
With the earliest departure time model, we no longer plan to special-case TCP retransmits. We therefore remove dead code (most compilers already understood that skb_is_retransmit() was false). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Now that TCP keeps track of tcp_wstamp_ns, recording the earliest departure time of the next packet, we can remove duplicate code from tcp_internal_pacing(). This removes one ktime_get_tai_ns() call and a divide. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
TCP keeps track of tcp_wstamp_ns by itself, meaning sch_fq no longer has to do it. Thanks to this model, TCP can get more accurate RTT samples, since pacing no longer inflates them. This has the nice effect of removing some delays caused by the FQ quantum mechanism, which inflated max/P99 latencies. Also, we might relax TCP Small Queues tight limits in the future, since this new model allows TCP to build bigger batches, since sch_fq (or a device with earliest departure time offload) ensures these packets will be delivered on time. Note that other protocols are not converted (they probably never will be), so sch_fq still has support for SO_MAX_PACING_RATE.

Tested: Test showing the FQ pacing quantum artifact for low-rate flows, adding unexpected throttles for RPC flows, inflating max and P99 latencies. The parameters chosen here show what happens typically when a TCP flow has a reduced pacing rate (this can be caused by a reduced cwnd after a few losses, and/or an rtt above a few ms).

    MIBS="MIN_LATENCY,MEAN_LATENCY,MAX_LATENCY,P99_LATENCY,STDDEV_LATENCY"

Before:
    $ netperf -H 10.246.7.133 -t TCP_RR -Cc -T6,6 -- -q 2000000 -r 100,100 -o $MIBS
    MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.246.7.133 () port 0 AF_INET : first burst 0 : cpu bind
    Minimum Latency Microseconds,Mean Latency Microseconds,Maximum Latency Microseconds,99th Percentile Latency Microseconds,Stddev Latency Microseconds
    19,82.78,5279,3825,482.02

After:
    $ netperf -H 10.246.7.133 -t TCP_RR -Cc -T6,6 -- -q 2000000 -r 100,100 -o $MIBS
    MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.246.7.133 () port 0 AF_INET : first burst 0 : cpu bind
    Minimum Latency Microseconds,Mean Latency Microseconds,Maximum Latency Microseconds,99th Percentile Latency Microseconds,Stddev Latency Microseconds
    20,49.94,128,63,3.18

Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Next patch will use tcp_wstamp_ns to feed internal TCP pacing timer, so switch to CLOCK_TAI to share same base. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Switch internal TCP skb->skb_mstamp to skb->skb_mstamp_ns, from usec units to nsec units. Do not clear skb->tstamp before entering the IP stacks on TX, so that qdiscs or devices can implement pacing based on the earliest departure time instead of socket sk->sk_pacing_rate. Packets are fed with tcp_wstamp_ns, and a following patch will update tcp_wstamp_ns when both TCP and sch_fq switch to the earliest departure time mechanism. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
TCP will soon provide earliest departure time on TX skbs. It needs to track this in a new variable. tcp_mstamp_refresh() needs to update this variable, and became too big to stay an inline. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
TCP will soon provide a per-skb earliest departure time in skb->tstamp, so that sch_fq does not have to determine the departure time by looking at socket sk_pacing_rate. In linux-4.19 we chose CLOCK_TAI as the clock base for transports, qdiscs, and NIC offloads. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
There are a few places where TCP reads skb->skb_mstamp expecting a value in usec units. skb->tstamp (aka skb->skb_mstamp) will soon store a CLOCK_TAI nsec value. Add tcp_skb_timestamp_us() to provide the proper conversion when needed. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
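A sketch of what such a helper boils down to, assuming the nanosecond timestamp field described above (the real definition in tcp.h may differ slightly):

    static inline u32 tcp_skb_timestamp_us_sketch(const struct sk_buff *skb)
    {
            /* skb_mstamp_ns holds nanoseconds; these callers expect microseconds */
            return div_u64(skb->skb_mstamp_ns, NSEC_PER_USEC);
    }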
-
Eric Dumazet authored
TCP pacing is either implemented in sch_fq or internally. We have the goal of being able to offload pacing to the NICs. TCP will soon provide a per-skb skb->tstamp as the earliest departure time. Like ETF in commit 25db26a9 ("net/sched: Introduce the ETF Qdisc"), we chose CLOCK_TAI as the clock base, so that TCP and pacers can share a common clock and get better RTT samples (without pacing artificially inflating those samples). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Salil Mehta says: ==================== Bug fixes, small modifications & cleanup for HNS3 driver This patch set presents some bug fixes, small modifications and cleanups for the HNS3 VF and PF drivers. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
This patch removes hclge_get_port_type which is redundant. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fuyun Liang authored
Our VF has not implemented the get_port_type ops, so when executing an ethtool command on a VF interface, hns3_get_link_ksettings() returns directly and we cannot query anything. To support get_link_ksettings for the VF, this patch replaces get_port_type with get_media_type. If the media type is HNAE3_MEDIA_TYPE_NONE, hns3_get_link_ksettings() will return the link information of the VF. Fixes: 12f46bc1 ("net: hns3: Refine hns3_get_link_ksettings()") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
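An illustrative sketch of the resulting flow; the names follow the commit text and common hns3 conventions but are not verified against the exact sources, and example_get_handle() is a placeholder:

    static int example_get_link_ksettings(struct net_device *netdev,
                                          struct ethtool_link_ksettings *cmd)
    {
            struct hnae3_handle *h = example_get_handle(netdev); /* hypothetical */
            u8 media_type = HNAE3_MEDIA_TYPE_NONE;

            if (!h->ae_algo->ops->get_media_type)
                    return -EOPNOTSUPP;

            h->ae_algo->ops->get_media_type(h, &media_type);

            if (media_type == HNAE3_MEDIA_TYPE_NONE) {
                    /* VF: no physical port, report link speed/duplex only */
                    cmd->base.port = PORT_NONE;
                    return 0;
            }

            /* fibre/copper handling for the PF omitted from this sketch */
            return 0;
    }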
-
Peng Li authored
This patch adds the ops of get_media_type support for VF. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
There are already statistics for multiple types of error packets, so it is unnecessary to print them as well; printing too many messages may hurt RX performance. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Since a DMA mapping error rarely happens, this patch adds unlikely() to the dma_mapping_error() check. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
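In other words, the mapping check takes the form below (an illustrative fragment wrapped in a helper, not the driver's actual function):

    static int example_map_buffer(struct device *dev, void *data, size_t size,
                                  dma_addr_t *dma)
    {
            *dma = dma_map_single(dev, data, size, DMA_TO_DEVICE);
            if (unlikely(dma_mapping_error(dev, *dma)))
                    return -ENOMEM;          /* rarely taken error path */
            return 0;
    }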
-
Jian Shen authored
When the NIC is brought down, it first calls netif_tx_stop_all_queues(), then calls napi_disable(). But napi_disable() waits for the current napi_poll to finish, and that poll may call netif_tx_wake_queue(). This patch fixes it by adding a NIC state check. Fixes: 424eb834 ("net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
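A sketch of the kind of check being added on the wake path; the state bit and private structure names here are made up for illustration:

    struct example_priv {
            unsigned long state;            /* e.g. bit EXAMPLE_STATE_DOWN */
    };

    #define EXAMPLE_STATE_DOWN  0

    static void example_tx_clean_done(struct example_priv *priv,
                                      struct netdev_queue *txq)
    {
            /* do not wake a queue that the stack has already brought down */
            if (netif_tx_queue_stopped(txq) &&
                !test_bit(EXAMPLE_STATE_DOWN, &priv->state))
                    netif_tx_wake_queue(txq);
    }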
-
Jian Shen authored
A few "switch-case" blocks miss handling for the default case. For such abnormal cases, they should return an error code instead of 0. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
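That is, the switch statements are being tightened from silently returning 0 into something of this shape (a simplified example, not the driver code):

    static int example_parse_mode(int mode)
    {
            switch (mode) {
            case 0:
            case 1:
                    return 0;               /* recognised modes */
            default:
                    return -EINVAL;         /* was: fell through and returned 0 */
            }
    }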
-
Jian Shen authored
Most VF functions use the hclgevf prefix. This patch renames the functions that have an inconsistent prefix. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
There are two tqp_num variables used in the VF: "hdev->tqp_num" and "kinfo->tqp_num". "hdev->tqp_num" is the total tqp number allocated to the VF, and "kinfo->tqp_num" indicates the tqp number being used by the VF. Usually the two variables are equal. But when hdev->tqp_num is larger than rss_size_max and num_tc is 1, "kinfo->tqp_num" will be less than "hdev->tqp_num". In the original code, "hdev->tqp_num" is always used to traverse the tqp array of kinfo, which may cause a null pointer dereference when "hdev->tqp_num" is larger than "kinfo->tqp_num". Fixes: e2cb1dec ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Some prefixes of the tx/rx statistic names are redundant; this patch modifies these names. The new prefixes look like below:

    rxq#1_     -> rxq1_
    txq#1_     -> txq1_
    tx_dropped -> dropped
    tx_wake    -> wake
    tx_busy    -> busy
    rx_dropped -> dropped

Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Since desc.data already points to the address of struct member "data[6]", it is unnecessary to use '&' to get its address. This patch unifies all the type conversions for desc.data, using "req = (struct name *)desc.data". Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
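The unified cast therefore reads as below; "desc" is the command descriptor described above and the command-structure name is a placeholder:

    struct example_cmd *req;

    /* desc.data is an array member and already decays to a pointer ... */
    req = (struct example_cmd *)desc.data;

    /* ... so the previous form, taking its address again, was redundant:
     * req = (struct example_cmd *)&desc.data;
     */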
-
Jian Shen authored
There is a defect in hclge_ets_validate(): if no member of tc_tsa is IEEE_8021QAZ_TSA_ETS, the variable total_ets_bw is never updated, and in that case the check of total_ets_bw's value fails. This patch fixes it by checking total_ets_bw only after it has been updated. Fixes: cacde272 ("net: hns3: Add hclge_dcb module for the support of DCB feature") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
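One possible shape of the corrected check, simplified and not the exact hclge code:

    static int example_ets_bw_check(const struct ieee_ets *ets)
    {
            u32 total_ets_bw = 0;
            int i;

            for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
                    if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS)
                            total_ets_bw += ets->tc_tx_bw[i];

            /* only meaningful if at least one TC actually uses ETS */
            if (total_ets_bw && total_ets_bw != 100)
                    return -EINVAL;

            return 0;
    }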
-
Biju Das authored
Document RZ/G1N (R8A7744) SoC bindings. Signed-off-by: Biju Das <biju.das@bp.renesas.com> Reviewed-by: Fabrizio Castro <fabrizio.castro@bp.renesas.com> Reviewed-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
The previous commit to ravb had the side effect of making the PHY advertise Pause and Asym Pause, which previously did not happen. By default, phydev->supported has both forms of pause enabled, but phydev->advertising does not. The new phy_remove_link_mode() copies phydev->supported to phydev->advertising after removing the requested link mode. These Pause configuration bits appear to stop the PHY from completing auto-negotiation, and the link remains down. Be explicit and remove the Pause and Asym Pause modes, thus restoring the old behavior. Fixes: 41124fa6 ("net: ethernet: Add helper to remove a supported link mode") Reported-by: Simon Horman <horms@verge.net.au> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
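Concretely, the restored behaviour amounts to also stripping the two pause modes once the helper has run, along these lines (a sketch, not the verbatim ravb hunk):

    /* keep the previous behaviour: never advertise pause frames here */
    phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_Pause_BIT);
    phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_Asym_Pause_BIT);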
-
David S. Miller authored
Håkon Bugge says: ==================== net: if_arp: use define instead of hard-coded value Struct arpreq contains the name of the device. Everywhere else in the kernel, the define IFNAMSIZ is used to designate its size. But in if_arp.h, a literal constant is used. While there can be good reasons to use literal constants instead of defines in include files under uapi, it seems OK to use the define here without opening a can of worms in user-land, because if_arp.h includes netdevice.h, which also uses IFNAMSIZ. For the distros I have checked, this also holds true on the user-land side. The series also fixes some incorrect indents. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Håkon Bugge authored
uapi/linux/if_arp.h includes linux/netdevice.h, which uses IFNAMSIZ. Hence, use it instead of the hard-coded value. Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
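With the define in place, the device-name member of struct arpreq reads along these lines:

    struct arpreq {
            struct sockaddr arp_pa;                 /* protocol address        */
            struct sockaddr arp_ha;                 /* hardware address        */
            int             arp_flags;              /* flags                   */
            struct sockaddr arp_netmask;            /* netmask (for proxy arp) */
            char            arp_dev[IFNAMSIZ];      /* was: char arp_dev[16]   */
    };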
-
Håkon Bugge authored
Fix incorrect indents and align the comments. Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vakul Garg authored
In the current implementation, TLS records are encrypted and transmitted serially. The implementation waits until the previously submitted user data is encrypted and only then starts transmitting the record. This encrypt-one-record-at-a-time approach is inefficient when asynchronous crypto accelerators are used: for each record there is the overhead of interrupts, driver softIRQ scheduling, etc., and the crypto accelerator sits idle most of the time while an encrypted record's pages are handed over to the TCP stack for transmission.

This patch enables encryption of multiple records in parallel when an async-capable crypto accelerator is present in the system. This is achieved by allowing the user space application to send more data using sendmsg() even while previously issued data is being processed by the crypto accelerator. This requires returning control back to the user space application after submitting the encryption request to the accelerator. It also means that the zero-copy mode of encryption cannot be used with an async accelerator, as we must be done with the user space application buffer before returning from sendmsg().

There can be multiple records in flight to/from the accelerator. Each record is represented by 'struct tls_rec', which stores the memory pages for the record. After the records are encrypted, they are added to a linked list called tx_ready_list, which contains encrypted TLS records sorted by TLS sequence number. The records from tx_ready_list are transmitted using a newly introduced function called tls_tx_records(). The tx_ready_list is polled for any record ready to be transmitted in sendmsg() and sendpage() after initiating encryption of new TLS records. This achieves parallel encryption and transmission of records when an async accelerator is present.

There could be situations where the crypto accelerator completes encryption later than the polling of tx_ready_list by sendmsg()/sendpage(). Therefore we need a deferred work context to be able to transmit records from tx_ready_list. The deferred work context gets scheduled if applications are not sending much data through the socket. If the applications issue sendmsg()/sendpage() in quick succession, then the scheduling of tx_work_handler gets cancelled, as the tx_ready_list would be polled from the application's context itself. This saves the scheduling overhead of deferred work.

The patch also brings a side benefit: we are able to get rid of the concept of a CLOSED record. This is because records, once closed, are either encrypted and then placed into tx_ready_list, or, if encryption fails, the socket error is set. This simplifies the kernel TLS send path. However, since tls_device.c still uses the macros, the accessory functions for CLOSED records have been retained.

Signed-off-by: Vakul Garg <vakul.garg@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
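A very rough sketch of the bookkeeping described above; the real struct tls_rec and tls_tx_records() in net/tls differ in detail, and the names below are simplified:

    struct example_tls_rec {
            struct list_head list;          /* linked into tx_ready_list      */
            u64 seq;                        /* TLS record sequence number     */
            bool encrypted;                 /* async encryption completed?    */
            /* plus the pages holding the record payload */
    };

    /* Transmit, in sequence order, every record whose encryption has finished;
     * stop at the first record still in flight so records go out in order.
     */
    static void example_tx_records(struct list_head *tx_ready_list)
    {
            struct example_tls_rec *rec, *tmp;

            list_for_each_entry_safe(rec, tmp, tx_ready_list, list) {
                    if (!rec->encrypted)
                            break;
                    list_del(&rec->list);
                    /* hand the record's pages to the TCP stack, then free rec */
            }
    }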
-