- 13 Jan, 2014 25 commits
-
-
Andy Fleming authored
10G PHYs don't currently support running the state machine, which is implicitly set up via of_phy_connect(). Therefore, it is necessary to implement an OF version of phy_attach(), which does everything except start the state machine. Signed-off-by: Andy Fleming <afleming@gmail.com> Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
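A minimal sketch of what such an OF attach helper could look like, assuming the existing of_phy_find_device() and phy_attach_direct() helpers, with error handling trimmed; names and exact signatures here are illustrative rather than the upstream code:

/* Resolve the PHY from its device-tree node and attach it without
 * starting the PHY state machine (unlike of_phy_connect()). */
struct phy_device *of_phy_attach(struct net_device *dev,
                                 struct device_node *phy_np,
                                 u32 flags, phy_interface_t iface)
{
        struct phy_device *phy = of_phy_find_device(phy_np);

        if (!phy)
                return NULL;

        /* No phy_start_machine() call here. */
        return phy_attach_direct(dev, phy, flags, iface) ? NULL : phy;
}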
-
Andy Fleming authored
phy_attach_direct() may now attach to a generic 10G driver. It can also be used exactly as phy_connect_direct(), which will be useful when using of_mdio, as phy_connect() (and therefore of_phy_connect()) starts the PHY state machine, which is currently irrelevant for 10G PHYs. Signed-off-by: Andy Fleming <afleming@gmail.com> Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Fleming authored
Very incomplete, but will allow for binding an ethernet controller to it. Signed-off-by: Andy Fleming <afleming@gmail.com> Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shaohui Xie authored
This allows other generic PHY drivers, such as the generic 10G PHY driver, to join it. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Fleming authored
Signed-off-by: Andy Fleming <afleming@gmail.com> Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andy Fleming authored
Reading or writing Clause 45 PHYs requires an extra parameter, so a different API with that extra parameter is needed. Signed-off-by: Andy Fleming <afleming@gmail.com> Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
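One common way to carry the extra device address (devad/MMD) is to fold it into the register argument of the existing mdiobus accessors; this is a hedged illustration of that encoding, not necessarily the API this patch adds:

#define MII_ADDR_C45    (1 << 30)   /* marks a Clause 45 access */

static int c45_read(struct mii_bus *bus, int prtad, int devad, u16 regnum)
{
        /* Clause 45 reads need the port address, the MMD (device) address
         * and the 16-bit register number. */
        return mdiobus_read(bus, prtad, MII_ADDR_C45 | (devad << 16) | regnum);
}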
-
stephen hemminger authored
Avoid needless export of local functions Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: James Chapman <jchapman@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
stephen hemminger authored
Fix a whole lot of namespace issues with the Broadcom bnx2x driver found by running 'make namespacecheck':
* global variables must be prefixed with bnx2x_; naming a variable int_mode or num_queue is an invitation to disaster
* make local functions static
* move some inlines used in only one file out of the header (this driver has a bad case of inline-itis)
* remove the resulting dead-code fallout: bnx2x_pfc_statistic, bnx2x_emac_get_pfc_stat, bnx2x_init_vlan_mac_obj. It looks like vlan mac support in this driver was a botch from day one: it either never worked, was not implemented, or is missing support functions.
Compile tested only. Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
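Illustrative only, with hypothetical symbol names rather than the driver's actual ones: file-local helpers become static and remaining globals get a bnx2x_ prefix so they cannot clash with identically named symbols elsewhere in the kernel.

int bnx2x_int_mode;                              /* was: int int_mode;  (too generic) */

static int bnx2x_calc_num_queues(int num_cpus)   /* was a non-static helper */
{
        return num_cpus > 8 ? 8 : num_cpus;
}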
-
Pankaj Dubey authored
When building with a 64-bit compiler, GCC warns: drivers/net/ethernet/smsc/smc91x.c:1897:7: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] This patch fixes the warning by changing the typecast from "unsigned int" to "unsigned long". CC: "David S. Miller" <davem@davemloft.net> CC: Jingoo Han <jg1.han@samsung.com> CC: netdev@vger.kernel.org Signed-off-by: Pankaj Dubey <pankaj.dubey@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
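A standalone illustration of the warning and the fix (not the driver code): on 64-bit targets a pointer is 64 bits wide while unsigned int is 32, so the narrowing cast warns, whereas unsigned long matches the pointer width there.

#include <stdio.h>

int main(void)
{
        int x = 0;
        /* unsigned int v = (unsigned int)&x;    warns on 64-bit builds */
        unsigned long v = (unsigned long)&x;  /* same width as a pointer */

        printf("%lx\n", v);
        return 0;
}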
-
Neal Cardwell authored
Simplify the GRE header length calculation in gre_gso_segment(). Switch to an approach that is simpler, faster, and more general. The new approach will continue to be correct even if we add support for the optional variable-length routing info that may be present in a GRE header. Signed-off-by: Neal Cardwell <ncardwell@google.com> Cc: Eric Dumazet <edumazet@google.com> Cc: H.K. Jerry Chu <hkchu@google.com> Cc: Pravin B Shelar <pshelar@nicira.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
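A hypothetical standalone sketch of the general idea (constants defined locally for illustration): the GRE header is a fixed 4 bytes plus 4 bytes for each optional checksum, key and sequence field, so its length can be derived directly from the flag bits.

#include <stdint.h>

#define GRE_CSUM 0x8000
#define GRE_KEY  0x2000
#define GRE_SEQ  0x1000

static unsigned int gre_hdr_len(uint16_t flags)
{
        unsigned int len = 4;           /* flags + protocol */

        if (flags & GRE_CSUM)
                len += 4;               /* checksum + reserved */
        if (flags & GRE_KEY)
                len += 4;
        if (flags & GRE_SEQ)
                len += 4;
        return len;
}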
-
WANG Cong authored
It is not necessary at all. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
WANG Cong authored
tp->root is a void* pointer, no need to cast it. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
WANG Cong authored
tcf_match_indev() is called in the fast path, so it is not wise to search for a netdev by ifindex and then compare by name; just compare the ifindex. Also, dev->name can be changed from user-space, in which case the match would always fail, whereas dev->ifindex stays consistent. BTW, this also saves some bytes in the core struct of u32. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
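A simplified sketch of the fast-path comparison after the change (function shape and parameter names are assumptions based on the description): store the ifindex at configuration time and compare integers per packet.

static inline bool tcf_match_indev(const struct sk_buff *skb, int indev_ifindex)
{
        if (!indev_ifindex)
                return true;                    /* no indev restriction set */
        if (!skb->skb_iif)
                return false;
        return skb->skb_iif == indev_ifindex;   /* O(1) and rename-proof */
}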
-
WANG Cong authored
It will be needed by the next patch. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
WANG Cong authored
Refactor tcf_add_notify() and factor out tcf_del_notify(). Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
WANG Cong authored
There is no need to store the index separately since tcf_hashinfo is allocated statically too. Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The GRO layer has a limit of 8 flows being held in the GRO list, for performance reasons. When a packet comes in for a flow not yet in the list, and the list is full, we immediately give it to the upper stacks, lowering aggregation performance. With TSO auto sizing and the FQ packet scheduler, this situation happens more often. This patch changes the strategy to simply evict the oldest flow in the list. This works better because of the nature of the packet trains for which GRO is efficient. It also has the effect of lowering GRO latency if many flows are competing. Tested: used a 40Gbps NIC with 4 RX queues and 200 concurrent TCP_STREAM netperf instances. Before the patch, the aggregate rate is 11Gbps (while a single flow can reach 30Gbps). After the patch, line rate is reached. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jerry Chu <hkchu@google.com> Cc: Neal Cardwell <ncardwell@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
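A hedged sketch of the eviction strategy, not the exact dev_gro_receive() code; gro_list_is_full is a placeholder condition, and the list shown is the era's singly linked napi->gro_list with the newest flows at the head:

if (gro_list_is_full) {
        struct sk_buff *oldest = napi->gro_list, **pp = &napi->gro_list;

        while (oldest->next) {          /* walk to the tail = oldest flow */
                pp = &oldest->next;
                oldest = oldest->next;
        }
        *pp = NULL;                     /* unlink the oldest flow ...     */
        napi_gro_complete(oldest);      /* ... and hand it up the stack   */
}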
-
Eric Dumazet authored
In order to use the native GRO handling of encapsulated protocols on mlx4, we need to call napi_gro_receive() instead of netif_receive_skb(), unless busy polling is in action. While we are at it, rename mlx4_en_cq_ll_polling() to mlx4_en_cq_busy_polling(). Tested with a GRE tunnel: GRO aggregation is now performed on the ethernet device instead of being done later on the gre device. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Amir Vadai <amirv@mellanox.com> Cc: Jerry Chu <hkchu@google.com> Cc: Or Gerlitz <ogerlitz@mellanox.com> Acked-By: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
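A minimal sketch of the receive-path change, assuming the renamed mlx4_en_cq_busy_polling() helper and the cq's embedded NAPI context:

if (mlx4_en_cq_busy_polling(cq))
        netif_receive_skb(skb);           /* busy-poll context: skip GRO  */
else
        napi_gro_receive(&cq->napi, skb); /* NAPI path: let GRO aggregate */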
-
Wei Yongjun authored
Fixes the following sparse warning: net/ipv4/gre_offload.c:253:5: warning: symbol 'gre_gro_complete' was not declared. Should it be static? Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Hannes Frederic Sowa says:
====================
path mtu hardening patches

After a lot of back and forth I want to propose these changes regarding path MTU hardening and give an outline of why I think this is the best way to proceed. This set contains the following patches:
* ipv4: introduce ip_dst_mtu_maybe_forward and protect forwarding path against pmtu spoofing
* ipv6: introduce ip6_dst_mtu_forward and protect forwarding path with it
* ipv4: introduce hardened ip_no_pmtu_disc mode

The first one switches the forwarding path of IPv4 to use the interface MTU by default and ignore a possibly discovered path MTU. It provides a sysctl to switch back to the original behavior (see discussion below). The second patch does the same thing unconditionally for IPv6; I don't provide a knob for IPv6 to switch back to the original behavior (please see below). The third patch introduces a hardened pmtu mode, where path MTU information is only accepted when the protocol is able to do more stringent checks on the ICMP piggyback payload (please see the patch commit message for further details).

Why is this change necessary? First of all, RFC 1191, section 4 (Router specification) says: "When a router is unable to forward a datagram because it exceeds the MTU of the next-hop network and its Don't Fragment bit is set, the router is required to return an ICMP Destination Unreachable message to the source of the datagram, with the Code indicating "fragmentation needed and DF set". ..."

For some time now fragmentation has been considered problematic, e.g.:
* http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-3.pdf
* http://tools.ietf.org/search/rfc4963
Most of them seem to agree that fragmentation should be avoided because of efficiency, data corruption or security concerns. Recently it was shown that correctly guessing IP IDs could lead to data injection into DNS packets: <https://sites.google.com/site/hayashulman/files/fragmentation-poisoning.pdf>

While we can try to completely stop fragmentation on the end host (this is e.g. implemented via IP_PMTUDISC_INTERFACE), we cannot stop fragmentation completely on the forwarding path. On the end host the application has to deal with MTUs and has to choose fallback methods if fragmentation could be an attack vector. This is already the case for most DNS software, where a maximum UDP packet size can be configured, but until recently they had no control over local fragmentation and could thus emit fragmented packets.

On the forwarding path we can just try to delay the fragmentation to the last hop where it is really necessary. Current kernels already do that, but only because routers don't receive feedback of path MTUs; these are only sent back to the end host system. However, it is possible to maliciously insert path MTU information via ICMP packets which carry an icmp echo_reply payload, because we cannot validate those notifications against local sockets. DHCP clients which establish an any-bound RAW socket could also start processing unwanted fragmentation-needed packets.

Why does IPv4 have a knob to revert to the old behavior while IPv6 doesn't? IPv4 does fragmentation on the path while IPv6 always responds with packet-too-big errors. The interface MTU will always be greater than the path MTU information, so we would discard packets we could actually forward because of malicious information. After this change we let the hop which really could not forward the packet notify the host of the problem.

IPv4 allows fragmentation mid-path. The knob exists in case someone uses software which tries to discover such paths and assumes that the kernel handles the discovered pmtu information automatically. This should be an extremely rare case, but because I could not exclude the possibility, the knob is provided. Also, such software could insert non-locked mtu information into the kernel, and we currently cannot distinguish that from path mtu information. Premature fragmentation could solve some problems in wrongly configured networks, thus this switch is provided. One frag-needed packet could reduce the path MTU down to 552 bytes (route/min_pmtu).

Misc: IPv6 neighbor discovery can advertise MTU information for an interface. This information updates the IPv6-specific interface MTU and thus gets used by the forwarding path. Tunnel and xfrm output paths will still honour path MTU and also respond with Packet-too-Big or fragmentation-needed errors if needed.

Changelog for all patches:
v2) * enabled ip_forward_use_pmtu by default * reworded
v3) * disabled ip_forward_use_pmtu by default * reworded
v4) * renamed ip_dst_mtu_secure to ip_dst_mtu_maybe_forward * updated changelog accordingly * removed unneeded !!(... & ...) double negations

v2) * by default we honour pmtu information
v3) * only honor interface mtu * rewritten and simplified * no knob to fall back to old mode any more

v2) * reworded Documentation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Hannes Frederic Sowa authored
This new ip_no_pmtu_disc mode only allows fragmentation-needed errors to be honored by protocols which do more stringent validation on the ICMP packet's payload. This knob is useful for people who e.g. want to run an unmodified DNS server in a namespace where they need to use pmtu for TCP connections (as these are used for zone transfers or as a fallback for requests) but don't want to use possibly spoofed UDP pmtu information. Currently the whitelisted protocols are TCP, SCTP and DCCP, as they check whether the returned packet is in the window or the association is valid. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: John Heffner <johnwheffner@gmail.com> Suggested-by: Florian Weimer <fweimer@redhat.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
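A hedged sketch of the whitelist idea only; the sysctl field, the hardened mode value and the helper name are assumptions, not the patch's actual code:

static bool accept_frag_needed(const struct net *net, int protocol)
{
        if (net->ipv4.sysctl_ip_no_pmtu_disc < 3)    /* assumed: 3 = hardened mode */
                return true;

        /* Only protocols that can validate the quoted ICMP payload against
         * existing connection/association state are trusted. */
        return protocol == IPPROTO_TCP ||
               protocol == IPPROTO_SCTP ||
               protocol == IPPROTO_DCCP;
}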
-
Hannes Frederic Sowa authored
In the IPv6 forwarding path we are only concerned about the outgoing interface MTU, but we also respect locked MTUs on routes. Tunnel providers or IPsec already have to recheck and, if needed, send PtB notifications to the sending host in case the data does not fit into the packet with the added headers (we only know the final header sizes there, while also using path MTU information). The reason for this change is that path MTU information can be injected into the kernel via e.g. the icmp_err protocol handler without verification against local sockets. As such, this could cause the IPv6 forwarding path to wrongfully emit Packet-too-Big errors and drop IPv6 packets. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: John Heffner <johnwheffner@gmail.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
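An approximation of the described helper (not the exact upstream implementation): when forwarding, use the outgoing device MTU unless the route carries a locked MTU metric.

static inline unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
{
        unsigned int mtu = 0;

        if (dst_metric_locked(dst, RTAX_MTU))
                mtu = dst_metric_raw(dst, RTAX_MTU);  /* honour locked MTU  */

        return mtu ? mtu : dst->dev->mtu;             /* else interface MTU */
}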
-
Hannes Frederic Sowa authored
While forwarding we should not use the protocol path MTU to calculate the MTU for a forwarded packet, but instead use the interface MTU. We mark forwarded skbs in ip_forward with IPSKB_FORWARDED, which was introduced for multicast forwarding, but as it does not conflict with our usage in the unicast code path it is perfect for reuse. I moved the functions ip_sk_accept_pmtu, ip_sk_use_pmtu and ip_skb_dst_mtu along with the new ip_dst_mtu_maybe_forward to net/ip.h to fix circular dependencies because of IPSKB_FORWARDED. Because someone might have written software which probes destinations manually and expects the kernel to honour those path MTUs, I introduced a new per-namespace "ip_forward_use_pmtu" knob so this new behaviour can be disabled. We also still use MTUs which are locked on a route for forwarding. The reason for this change is that path MTU information can be injected into the kernel via e.g. the icmp_err protocol handler without verification against local sockets. As such, this could cause the IPv4 forwarding path to wrongfully emit fragmentation-needed notifications or start to fragment packets along a path. Tunnel and ipsec output paths clear the IPCB again, thus IPSKB_FORWARDED won't be set and further fragmentation logic will use the path MTU to determine the fragmentation size. They also recheck packet size with the help of path MTU discovery and report appropriate errors. Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: David Miller <davem@davemloft.net> Cc: John Heffner <johnwheffner@gmail.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
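A minimal sketch of the described behaviour; field names such as sysctl_ip_fwd_use_pmtu are assumptions and the real helper has more corner cases:

static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
                                                    bool forwarding)
{
        struct net *net = dev_net(dst->dev);

        if (!forwarding ||
            net->ipv4.sysctl_ip_fwd_use_pmtu ||       /* opt back in to PMTU */
            dst_metric_locked(dst, RTAX_MTU))         /* locked route MTU    */
                return dst_mtu(dst);

        return dst->dev->mtu;                         /* forwarding default  */
}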
-
Terry Lam authored
This is to be compatible with the use of "get_time" (i.e. the default time unit in us) in the iproute2 patch for HHF, as requested by Stephen. Signed-off-by: Terry Lam <vtlam@google.com> Acked-by: Nandita Dukkipati <nanditad@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
vmalloc is a limited resource; don't use it unnecessarily. It seems this allocation should work with kcalloc. Remove the now-unnecessary memset(,0,) of buf, as it is completely overwritten: the previously only unset field in struct qlcnic_pci_func_cfg is now set to 0. Use kfree instead of vfree. Use ETH_ALEN instead of 6. Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
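An illustrative before/after fragment with hypothetical local names, not the driver's actual code: kcalloc suits a small fixed-size array, already zeroes the memory (making the memset redundant), and pairs with kfree.

buf = kcalloc(num_funcs, sizeof(struct qlcnic_pci_func_cfg), GFP_KERNEL);
if (!buf)
        return -ENOMEM;
/* ... fill buf and copy it to user space ... */
kfree(buf);     /* kfree pairs with kcalloc; vfree pairs with vmalloc */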
-
- 12 Jan, 2014 10 commits
-
-
Veaceslav Falico authored
That code has been around for ages without being used. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Veaceslav Falico authored
It's currently a huge mess that is really hard to read. This cleanup doesn't touch the logic at all; it only breaks easy-to-fix long lines and updates comment styles. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Sabrina Dubroca says:
====================
alx: add statistics

Currently, the alx driver doesn't support statistics [1,2]. The original alx driver [3] that Johannes Berg modified provided statistics. This patch is an adaptation of the statistics code from the original driver to the alx driver included in the kernel.

v4:
- modified the assignments of hw stats to netstats (Ben Hutchings)
- added comments to describe the stats fields (copied from atlx)
v3:
- renamed __alx_update_hw_stats to alx_update_hw_stats (Stephen Hemminger)
v2:
- use u64 instead of unsigned long (Ben Hutchings)
- implement ndo_get_stats64 instead of ndo_get_stats (Ben Hutchings)
- use EINVAL instead of ENOTSUPP (Ben Hutchings)
- add BUILD_BUG_ON to check the size of the stats (Johannes Berg, Ben Hutchings)
- add a comment regarding persistence of the stats (Stephen Hemminger)
- align assignments in __alx_update_hw_stats

[1] https://bugzilla.kernel.org/show_bug.cgi?id=63401
[2] http://www.spinics.net/lists/netdev/msg245544.html
[3] https://github.com/mcgrof/alx
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
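A rough sketch of an ndo_get_stats64 implementation in this style, assuming the 3.13-era prototype that returns the storage pointer; the driver-internal names (stats_lock, alx_update_hw_stats, the hw.stats fields) are illustrative, not alx's actual code:

static struct rtnl_link_stats64 *alx_get_stats64(struct net_device *dev,
                                                 struct rtnl_link_stats64 *stats)
{
        struct alx_priv *alx = netdev_priv(dev);

        spin_lock(&alx->stats_lock);         /* assumed lock guarding hw stats  */
        alx_update_hw_stats(&alx->hw);       /* refresh counters from registers */
        stats->rx_packets = alx->hw.stats.rx_ok;
        stats->tx_packets = alx->hw.stats.tx_ok;
        spin_unlock(&alx->stats_lock);

        return stats;
}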
-
Sabrina Dubroca authored
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sabrina Dubroca authored
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Reviewed-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sabrina Dubroca authored
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sabrina Dubroca authored
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sabrina Dubroca authored
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates

This series contains updates to i40e and now i40evf. Most notable is Jacob's patch to add PTP support to i40e.

Mitch cleans up additional memcpy's and uses struct assignment instead, then fixes long lines to appease checkpatch.pl. Mitch also provides a fix to keep us from spamming the log with confusing errors: if you use ip to change the MAC address of a VF while the VF driver is loaded, closing the VF interface or unloading the VF driver will cause the VF driver to remove the MAC filter for its original (now invalid) MAC address.

Jesse cleans up macros which are no longer needed or used. I (Jeff) clean up function header comments to ensure Doxygen/kdoc works correctly and generates documentation without warnings.

Anjali fixes a bug where ethtool set-channels would return failure when configuring only one Rx queue, and fixes a bug where the driver was erroneously exiting the driver unload path if one part of the unload failed.

Shannon fixes an issue where, if the IPV6EXADD bit is set in the Rx descriptor status, an optional extension header with an alternate IP address was detected and the hardware checksum was not handling the alternate IP address correctly. He then adjusts the ITR max and min values to match the hardware max value and the recommended min value, and makes sure to clear PXE mode after the AdminQ is initialized.

v2:
- fix patch 14 "i40e: enable PTP" to address Richard Cochran's spelling catch and Ben Hutchings' Kconfig, SIOCGHWTSTAMP and sizeof() suggestions
- added Paul Gortmaker's i40evf fix patch
v3:
- fix patch 14 "i40e: enable PTP" to address Ben Hutchings' concerns about a race with PTP init and cleanup and i40e_get_ts_info().
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
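For context, a generic sketch of the ethtool get_ts_info operation a PTP-enabled driver provides (an illustration of the interface, not i40e's actual implementation; the advertised filter set and clock index handling are assumptions):

static int example_get_ts_info(struct net_device *netdev,
                               struct ethtool_ts_info *info)
{
        info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
                                SOF_TIMESTAMPING_RX_HARDWARE |
                                SOF_TIMESTAMPING_RAW_HARDWARE;
        info->phc_index = -1;   /* would report the registered PTP clock index */
        info->tx_types   = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON);
        info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) | BIT(HWTSTAMP_FILTER_ALL);
        return 0;
}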
-
- 11 Jan, 2014 5 commits
-
-
Paul Gortmaker authored
As of commit 7f12ad74 ("i40evf: transmit and receive functionality") the s390 builds (allyesconfig) fail with: drivers/net/ethernet/intel/i40evf/i40e_txrx.c: In function 'i40e_clean_rx_irq': drivers/net/ethernet/intel/i40evf/i40e_txrx.c:818:3: error: implicit declaration of function 'prefetch' make[5]: *** [drivers/net/ethernet/intel/i40evf/i40e_txrx.o] Error 1 due to an implicit assumption that the prototype from linux/prefetch.h will be present. Cc: Mitch Williams <mitch.a.williams@intel.com> Cc: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
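The fix itself is simply to include the header that declares prefetch() instead of relying on it being pulled in indirectly (which s390 configs do not do):

#include <linux/prefetch.h>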
-
Catherine Sullivan authored
Update the driver version to 0.3.28-k. Signed-off-by: Catherine Sullivan <catherine.sullivan@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Shannon Nelson authored
Change the redundant "vsi VSI" to VSI. Change-ID: Ic16ea5820a99abc7831713cde39e7d032a7ba4d3 Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jacob Keller authored
New feature: Enable PTP support in the i40e driver. Change-ID: I6a8e799f582705191f9583afb1b9231a8db96cc8 Cc: Richard Cochran <richardcochran@gmail.com> Cc: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Matthew Vick <matthew.vick@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Shannon Nelson authored
In the latest firmware the clear_pxe_mode function will use the AdminQ request, so call this after AdminQ is set up rather than relying on i40e_pf_reset() to clear the PXE mode. Change-ID: Ice8cba2e9cbc3c7bde0a0bcf8eaf5009abef040b Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Kavindya Deegala <kavindya.s.deegala@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
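Illustrative ordering only, based on the message (not the exact probe code): clear PXE mode once the AdminQ is up, since newer firmware services the request through an AdminQ command.

err = i40e_init_adminq(hw);
if (!err)
        i40e_clear_pxe_mode(hw);   /* previously relied on i40e_pf_reset() */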
-