- 02 Nov, 2013 13 commits
-
-
Bjørn Mork authored
This makes it a lot easier to test modified versions. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bjørn Mork authored
We can avoid the costly division for the common case where we pad the frame to tx_max size, as long as we ensure that tx_max is either the device-specified dwNtbOutMaxSize or not a multiple of wMaxPacketSize. Using the preconverted 'maxpacket' field avoids converting wMaxPacketSize to CPU endianness for every transmitted frame. And since we will only hit the one-byte padding rule for short frames, we can drop testing the skb for tailroom. The change means that tx_max now represents the real maximum skb size, enabling us to allocate the correct size instead of always making room for one extra byte. Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
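A minimal standalone sketch of the arithmetic being avoided; the function, names and values here are illustrative, not the driver's actual code:

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: if tx_max is chosen so that it is never a multiple of
 * the endpoint's wMaxPacketSize, a frame padded to tx_max can never end on
 * a packet boundary, so the hot path needs no per-frame modulo or ZLP. */
static uint32_t choose_tx_max(uint32_t requested, uint32_t maxpacket)
{
        uint32_t tx_max = requested;

        if (tx_max % maxpacket == 0)
                tx_max--;       /* shave one byte off a boundary-aligned size */
        return tx_max;
}

int main(void)
{
        uint32_t maxpacket = 512;       /* typical high-speed bulk endpoint */
        uint32_t tx_max = choose_tx_max(16384, maxpacket);

        printf("tx_max=%u, tx_max %% maxpacket=%u\n", tx_max, tx_max % maxpacket);
        return 0;
}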
-
Bjørn Mork authored
A number of devices in the wild have turned out to require ZLPs. Even if this is a spec violation, our priority is to make any device work as well as possible. Devices needing ZLPs will fail to receive any full-sized frame we send. On the other hand, devices which do not need ZLPs will still work if we send them. This leaves us no option but to send ZLPs by default. This will prevent devices conforming to the spec from making the optimizations which are possible without ZLPs. Add such known devices to a whitelist to avoid the possible negative impact of the new spec-violating default. Cc: Greg Suarez <gsuarez@smithmicro.com> Cc: Alexey Orishko <alexey.orishko@gmail.com> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
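A hedged sketch of how such a whitelist can be expressed as a per-device quirk flag consulted at transmit time; the flag name and helper are hypothetical, not the driver's actual interface:

#include <stdio.h>

/* Hypothetical quirk flag: set for devices known to follow the spec and
 * therefore safe to drive without ZLPs; everyone else gets the safe default. */
#define QUIRK_NO_ZLP_NEEDED     0x0001

/* Does this transfer need to be finished with a zero-length packet? */
static int needs_zlp(unsigned int quirks, unsigned int len, unsigned int maxpacket)
{
        if (quirks & QUIRK_NO_ZLP_NEEDED)
                return 0;
        return len % maxpacket == 0;
}

int main(void)
{
        printf("default device, 1024/512:     %d\n", needs_zlp(0, 1024, 512));
        printf("whitelisted device, 1024/512: %d\n", needs_zlp(QUIRK_NO_ZLP_NEEDED, 1024, 512));
        return 0;
}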
-
Bjørn Mork authored
MBIM is a point-to-point protocol transporting raw IP packets with no L2 headers. Only IPv4 and IPv6 are supported. ARP in particular is not, which is quite logical given the lack of L2 headers. The driver still emulates an ethernet interface, dropping all unsupported protocols and avoiding neighbour resolution by setting the IFF_NOARP flag. The MBIM specification does not explicitly forbid IPv6 Neighbor Discovery, and it seems other OS implementations will respond to Neighbor Solicitations on MBIM links. There are therefore buggy devices out there which, despite the pointlessness, still require Neighbor Discovery for IPv6 over MBIM. This is incompatible with the IFF_NOARP flag, which disables both ARP and ND. We cannot support ARP in any case, so we have to keep that flag. This patch implements a workaround for the buggy devices, letting the driver respond directly to Neighbor Solicitations from the device. This is not optimal, but will have minimal effect on any sane device. Cc: Greg Suarez <gsuarez@smithmicro.com> Reported-and-tested-by: Thomas Schäfer <tschaefer@t-online.de> Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
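A hedged, kernel-style fragment of the kind of check such a workaround needs on the transmit path: recognising an IPv6 Neighbor Solicitation so the driver can answer it itself rather than hand it to the device. The helper name is hypothetical and the sketch assumes the headers sit in the linear area with no IPv6 extension headers:

#include <linux/ipv6.h>
#include <linux/icmpv6.h>
#include <linux/skbuff.h>
#include <net/ndisc.h>

/* Hypothetical helper: true if skb carries an IPv6 Neighbor Solicitation.
 * Assumes the network header is set, the headers are linear, and no
 * extension headers sit between the IPv6 and ICMPv6 headers. */
static bool skb_is_ndisc_solicit(const struct sk_buff *skb)
{
        const struct ipv6hdr *ip6 = ipv6_hdr(skb);
        const struct icmp6hdr *icmp6;

        if (ip6->version != 6 || ip6->nexthdr != IPPROTO_ICMPV6)
                return false;

        icmp6 = (const struct icmp6hdr *)(ip6 + 1);
        return icmp6->icmp6_type == NDISC_NEIGHBOUR_SOLICITATION;
}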
-
Ben Boeckel authored
Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some trailing whitespace. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also fixes an incorrect function comment (probably copy/paste). Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Boeckel authored
Also snipes some whitespace errors. Signed-off-by: Ben Boeckel <mathstuf@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says: ==================== This series contains updates to e1000, igb, ixgbe and ixgbevf. Hong Zhiguo provides a fix for e1000 where tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *", so there is no need to divide by the e1000_tx_ring size in the idx calculation. Emil provides a fix for ixgbevf to remove a redundant workaround related to header split, and a fix for ixgbe to resolve an issue where the MTA table can be cleared when the interface is reset while in promisc mode. Todd provides a fix for igb to prevent ethtool from writing to the iNVM in i210/i211 devices. This issue was reported by Marek Vasut <marex@denx.de>. Anton Blanchard provides a fix for ixgbe to reduce memory consumption with larger page sizes, as seen on PPC. Don provides a cleanup in ixgbevf to replace the IXGBE_DESC_UNUSED macro with the inline function ixgbevf_desc_unused() to make the logic a bit more readable. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/bwh/sfc-next
David S. Miller authored
Ben Hutchings says: ==================== A single fix by Alexandre Rames for the recent changes to TSO. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Nov, 2013 6 commits
-
-
Emil Tantilov authored
This patch resolves an issue where the MTA table can be cleared when the interface is reset while in promisc mode. As a result, IPv6 traffic between VFs will be interrupted. This patch makes the update of the MTA table unconditional to avoid the inconsistent clearing on reset. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Don Skidmore authored
This patch just replaces the IXGBE_DESC_UNUSED macro with a like-named inline function, ixgbevf_desc_unused. The inline function makes the logic a bit more readable. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
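Such a helper usually boils down to a ring-buffer free-slot count; a hedged sketch of the typical shape (structure and field names are generic, not copied from the driver):

#include <stdio.h>

struct ring {
        unsigned short count;           /* total descriptors in the ring */
        unsigned short next_to_use;     /* producer index */
        unsigned short next_to_clean;   /* consumer index */
};

/* Unused descriptors, keeping one slot empty to tell "full" from "empty". */
static inline unsigned short ring_desc_unused(const struct ring *r)
{
        unsigned short ntc = r->next_to_clean;
        unsigned short ntu = r->next_to_use;

        return ((ntc > ntu) ? 0 : r->count) + ntc - ntu - 1;
}

int main(void)
{
        struct ring r = { .count = 256, .next_to_use = 10, .next_to_clean = 4 };

        printf("%u unused of %u\n", ring_desc_unused(&r), r.count);    /* 249 */
        return 0;
}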
-
Anton Blanchard authored
The ixgbe driver allocates pages for its receive rings. It currently uses 512 pages, regardless of page size. During receive handling it adds the unused part of the page back into the rx ring, avoiding the need for a new allocation. On a ppc64 box with 64 threads and 64kB pages, we end up with 512 entries * 64 rx queues * 64kB = 2GB memory used. Even more of a concern is that we use up 2GB of IOMMU space in order to map all this memory. The driver makes a number of decisions based on if PAGE_SIZE is less than 8kB, so use this as the breakpoint and only allocate 128 entries on 8kB or larger page sizes. Signed-off-by: Anton Blanchard <anton@samba.org> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
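A quick standalone check of the figures quoted above, plus the shape of the page-size-based choice (the constants mirror the commit message; this is not driver code):

#include <stdio.h>

int main(void)
{
        unsigned long long entries = 512, queues = 64, page = 64 * 1024;

        /* 512 entries * 64 rx queues * 64kB pages */
        printf("%llu MiB mapped\n", entries * queues * page >> 20);     /* 2048 MiB */

        /* The fix in essence: fewer ring entries once pages are large enough. */
        for (unsigned long page_size = 4096; page_size <= 65536; page_size <<= 2) {
                unsigned int ring_entries = page_size < 8192 ? 512 : 128;
                printf("PAGE_SIZE %6lu -> %u rx entries\n", page_size, ring_entries);
        }
        return 0;
}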
-
Fujinaka, Todd authored
Don't let ethtool try to write to iNVM in i210/i211. This fixes an issue seen by Marek Vasut. Reported-by: Marek Vasut <marex@denx.de> Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Emil Tantilov authored
This patch removes a workaround related to header split, which is redundant because the driver does not support splitting packet headers on Rx. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Hong Zhiguo authored
tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *" Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
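A standalone illustration of the bug class (not the driver code): subtracting two typed pointers already yields an element index, so dividing by the element size again gives a wrong, usually zero, result.

#include <stdio.h>
#include <stddef.h>

struct tx_ring { char pad[192]; };      /* stand-in for struct e1000_tx_ring */

int main(void)
{
        struct tx_ring rings[4];
        struct tx_ring *txr = &rings[2];

        ptrdiff_t idx   = txr - rings;                                  /* 2: correct */
        ptrdiff_t wrong = (txr - rings) / sizeof(struct tx_ring);       /* 0: the bug */

        printf("idx=%td wrong=%td\n", idx, wrong);
        return 0;
}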
-
- 31 Oct, 2013 1 commit
-
-
Alexandre Rames authored
When using firmware assisted TSO, we use a single DMA mapping for the linear area of a TSO skb. We still have to segment the super-packet and insert a descriptor containing the original headers before each segment of payload, so we can unmap the linear area only after the last segment is completed. The unmapping information for the linear area is therefore associated with the last header descriptor. We calculate the DMA address to unmap from using the map length and the invariant that the end of the DMA mapping matches the end of the data referenced by the last descriptor. But this invariant is broken when there is TCP payload in the linear area. Fix this by adding and using an explicit dma_offset field. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 30 Oct, 2013 13 commits
-
-
David S. Miller authored
Alexander Aring says: ==================== This patch series cleans up the 6LoWPAN header creation and extends the use of the skb_*_header functions. Patch 2/4 fixes issues with parsing the mac header. The ieee802.15.4 header has a dynamic size which depends on frame control bits. This patch replaces the static mac header length calculation with a dynamic one. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
This patch drops the direct memcpy on the skb and uses the proper skb memcpy functions. Also remove an unnecessary check for plen being non-zero. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Reviewed-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
This is necessary to access the network header with the skb_network_header function instead of calculating the position from mac_len, etc. Do the same for the transport header when we replace the IPv6 header with the 6LoWPAN header. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Acked-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
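A hedged fragment of the pattern being moved to: record the header offsets once while building the frame, then use the accessors rather than re-deriving positions by hand (function names here are illustrative):

#include <linux/skbuff.h>

/* Illustrative only: mark where the headers start, then read them back
 * through the accessors instead of recomputing offsets from mac_len. */
static void example_mark_headers(struct sk_buff *skb, unsigned int mac_len)
{
        skb_reset_mac_header(skb);              /* mac header at current data */
        skb_set_network_header(skb, mac_len);   /* network header follows it */
}

static unsigned char *example_network_hdr(struct sk_buff *skb)
{
        return skb_network_header(skb);
}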
-
Alexander Aring authored
Set the mac header length while creating the 802.15.4 mac header. Drop the function for recalculating the mac header length in upper layers, which was static and worked for intra-PAN communication only. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Reviewed-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
On the receiving side we don't need to set any headers in the skb because the 6LoWPAN layer does not access them. Currently these values are set twice after calling netif_rx. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Reviewed-by: Werner Almesberger <werner@almesberger.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Duan Jiong authored
After reading the function rt6_check_neigh(), we can see that RT6_NUD_FAIL_SOFT can be returned only when IS_ENABLED(CONFIG_IPV6_ROUTER_PREF) is false, so in find_match() there is no need to evaluate the expression !IS_ENABLED(CONFIG_IPV6_ROUTER_PREF). Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Chen Weilong authored
This change is inspired by checkpatch. Signed-off-by: Weilong Chen <chenweilong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rafał Miłecki authored
Copying whole packets with skb_copy_from_linear_data_offset is a pretty bad idea: the CPU was spending time in __copy_user_common and network performance was lower. With the new solution the iperf-measured speed increased from 116Mb/s to 134Mb/s. Signed-off-by: Rafał Miłecki <zajec5@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
The message dispatching part of tipc_recv_msg() is wrapped in layers of while/if/if/switch, causing out-of-control indentation, and does not look very good. We reduce two indentation levels by separating the message dispatching from the blocks that check link state and sequence numbers, allowing longer function and argument names to be consistently indented without wrapping. Additionally we rename the "cont" label to "discard" and add a new label called "unlock_discard" to make the code clearer. In all, these are cosmetic changes that do not alter the operation of TIPC in any way. Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Erik Hugne <erik.hugne@ericsson.com> Cc: David Laight <david.laight@aculab.com> Cc: Andreas Bofjäll <andreas.bofjall@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luka Perkov authored
Checking whether the MAC address is valid using is_valid_ether_addr() is already done in of_get_mac_address(). While at it, reorganize the checking so it matches the checks in other drivers. Signed-off-by: Luka Perkov <luka@openwrt.org> CC: Alexey Brodkin <Alexey.Brodkin@synopsys.com> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luka Perkov authored
Checking whether the MAC address is valid using is_valid_ether_addr() is already done in of_get_mac_address(). Signed-off-by: Luka Perkov <luka@openwrt.org> Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luka Perkov authored
Checking whether the MAC address is valid using is_valid_ether_addr() is already done in of_get_mac_address(). Signed-off-by: Luka Perkov <luka@openwrt.org> Acked-by: David Daney <david.daney@cavium.com> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
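The three of_get_mac_address() patches above remove the same redundant check; a hedged before/after fragment, assuming the era where of_get_mac_address() returns NULL rather than an invalid address (function and variable names are illustrative):

#include <linux/etherdevice.h>
#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/string.h>

/* Before: the inner is_valid_ether_addr() repeats a check that
 * of_get_mac_address() has already performed. */
static void set_mac_before(struct net_device *ndev, struct device_node *np)
{
        const void *mac = of_get_mac_address(np);

        if (mac && is_valid_ether_addr(mac))
                memcpy(ndev->dev_addr, mac, ETH_ALEN);
}

/* After: a non-NULL return is already known to be a valid address. */
static void set_mac_after(struct net_device *ndev, struct device_node *np)
{
        const void *mac = of_get_mac_address(np);

        if (mac)
                memcpy(ndev->dev_addr, mac, ETH_ALEN);
}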
-
Yuchung Cheng authored
Fast Open currently has a fallback feature to address SYN-data being dropped, but it requires the middle-box to pass on the regular SYN retry after the SYN-data. This is implemented in commit aab48743 ("net-tcp: Fast Open client - detecting SYN-data drops"). However some NAT boxes will drop all subsequent packets after the first SYN-data, blackholing the entire connection. An example is in commit 356d7d88 ("netfilter: nf_conntrack: fix tcp_in_window for Fast Open"). The sender should note such incidents and temporarily fall back to the regular TCP handshake on subsequent attempts as well: after the second SYN times out, the original Fast Open SYN is most likely lost. When such an event recurs, Fast Open is disabled for a period that grows exponentially with the number of recurrences. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
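A minimal standalone sketch of an exponentially growing disable period as described above; the base period, cap and function name are illustrative, not the values or code the patch uses.

#include <stdio.h>

/* Illustrative only: each further suspected-blackhole incident doubles the
 * time during which Fast Open is skipped for new connection attempts. */
static unsigned long fastopen_pause_secs(unsigned int incidents)
{
        const unsigned long base = 60 * 60;             /* hypothetical 1 hour base */
        unsigned int shift = incidents < 6 ? incidents : 6;     /* cap the growth */

        return incidents ? base << (shift - 1) : 0;
}

int main(void)
{
        for (unsigned int i = 0; i <= 4; i++)
                printf("incidents=%u -> pause %lus\n", i, fastopen_pause_secs(i));
        return 0;
}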
-
- 29 Oct, 2013 7 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next
David S. Miller authored
Jeff Kirsher says: ==================== This series contains updates to vxlan, net, ixgbe, ixgbevf, and i40e. Joseph provides a single patch against vxlan which removes from the NIC drivers the burden of checking whether the vxlan driver is enabled in the kernel, and also makes the vxlan headrooms available to the drivers. Jacob provides the majority of the patches, with patches against net, ixgbe and ixgbevf. His net patch adds a might_sleep() call to napi_disable so that every use of napi_disable in atomic context will be visible. Jacob then provides a patch to fix the qv_lock_napi call in ixgbe_napi_disable_all. The other ixgbe patches clean up the ixgbe_check_minimum_link function to correctly show that there is some minor loss of encoding, even though we don't calculate it, and remove unnecessary duplication of the PCIe bandwidth display. Lastly, Jacob provides 4 patches against ixgbevf to add ixgbevf_rx_skb, in line with how ixgbe handles the variations on how packets can be received, and to add support for tracking how many packets were cleaned during busy poll as part of the extended statistics. Wei Yongjun provides a fix for i40e to return -ENOMEM in the memory allocation error handling case instead of returning 0, as done elsewhere in this function. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Leigh Brown authored
Amend the documentation in the mvmdio driver to note the fact that it is now used by both the mvneta and mv643xx_eth drivers. Signed-off-by: Leigh Brown <leigh@solinno.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Leigh Brown authored
Make only a single call to mutex_unlock in orion_mdio_write. Signed-off-by: Leigh Brown <leigh@solinno.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
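The usual kernel shape for "only one mutex_unlock call" is a single exit label that every error path jumps to; a hedged fragment with hypothetical names, not the orion_mdio_write code itself:

#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);

static int example_wait_ready(void)
{
        return 0;       /* stand-in for the real readiness poll */
}

/* Hypothetical write path: every early error takes the same exit, so the
 * lock is released in exactly one place. */
static int example_write(int reg, int val)
{
        int ret;

        mutex_lock(&example_lock);

        ret = example_wait_ready();
        if (ret < 0)
                goto out;

        /* ... perform the register write for (reg, val) here ... */
        ret = 0;
out:
        mutex_unlock(&example_lock);
        return ret;
}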
-
Leigh Brown authored
Replace manual poll of MVMDIO_SMI_READ_VALID with a call to orion_mdio_wait_ready. This ensures a consistent timeout, eliminates a busy loop, and allows for use of interrupts on systems that support them. Signed-off-by: Leigh Brown <leigh@solinno.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Leigh Brown authored
Amend orion_mdio_wait_ready so that the same timeout is used when polling or using wait_event_timeout. Set the timeout to 1ms. Replace udelay with usleep_range to avoid a busy loop, and set the polling interval range as 45us to 55us, so that the first sleep will be enough in almost all cases. Generate the same log message at timeout when polling or using wait_event_timeout. Signed-off-by: Leigh Brown <leigh@solinno.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
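A hedged, kernel-style fragment of the polling arm as described: roughly a 1ms total budget, sleeping 45-55us between reads. The register offset, ready bit and helper name are hypothetical, not the driver's:

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>

#define EXAMPLE_SMI_REG         0x0
#define EXAMPLE_SMI_READY       BIT(27)         /* hypothetical ready bit */

#define POLL_INTERVAL_MIN_US    45
#define POLL_INTERVAL_MAX_US    55
#define POLL_TIMEOUT_US         1000

/* Hypothetical helper mirroring the described behaviour: poll until the
 * ready bit is set, sleeping between reads, and give up after ~1ms. */
static int example_wait_ready(void __iomem *regs)
{
        int tries = POLL_TIMEOUT_US / POLL_INTERVAL_MAX_US;

        while (tries--) {
                if (readl(regs + EXAMPLE_SMI_REG) & EXAMPLE_SMI_READY)
                        return 0;
                usleep_range(POLL_INTERVAL_MIN_US, POLL_INTERVAL_MAX_US);
        }
        return -ETIMEDOUT;
}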
-
Gavin Shan authored
The function needn't be public, so make it static. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gavin Shan authored
The interface type, which is tracked by struct be_adapter::if_type, isn't currently used, so we can safely remove it, according to Sathya's comments. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-