- 10 Jul, 2014 4 commits
-
-
Stefan Assmann authored
To properly re-initialize SR-IOV it is necessary to reset the device even if it is already down. Not doing this may result in Tx unit hangs. Cc: stable <stable@vger.kernel.org> Signed-off-by: Stefan Assmann <sassmann@kpanic.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Todd Fujinaka authored
On some devices, the internal PLL circuit occasionally provides the wrong clock frequency after power up. The probability of failure is less than one failure per 1000 power cycles. When the failure occurs, the internal clock frequency is around 1/20 of the correct frequency. Cc: stable <stable@vger.kernel.org> Signed-off-by: Todd Fujinaka <todd.fujinaka@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joonyoung Shim authored
The smsc95xx needs to resume with a reset operation. Otherwise it causes a system hang after resume, with network errors like those below. This case appears on the Odroid U3 board. [ 9.727600] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 9.727648] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 9.727689] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 9.727728] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 9.729486] PM: resume of devices complete after 2011.219 msecs [ 10.117609] Restarting tasks ... done. [ 11.725099] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 13.480846] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped [ 13.481361] smsc95xx 1-2:1.0 eth0: kevent 2 may have been dropped ... Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stefan Sørensen authored
Currently status frames are only handled when packet timestamping is enabled, but status frames are also needed for pin event timestamping. Fix by moving packet timestamping check to after status frame decode. Signed-off-by: Stefan Sørensen <stefan.sorensen@spectralink.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Jul, 2014 8 commits
-
-
hayeswang authored
For RTL8411, RTL8111G, RTL8402, RTL8105, and RTL8106, disable the feature of entering the PCIe L2/L3 link state. When the NIC starts the process of entering the L2/L3 link state and a PCI reset occurs before the work is finished, the work is queued and continues after the next PCI reset occurs. This leaves the device stuck in the L2/L3 link state, and the system cannot find the device. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Acked-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Pfaff authored
netlink_dump() returns a negative errno value on error. Until now, netlink_recvmsg() directly recorded that negative value in sk->sk_err, but that's wrong since sk_err takes positive errno values. (This manifests as userspace receiving a positive return value from the recv() system call, falsely indicating success.) This bug was introduced in the commit that started checking the netlink_dump() return value, commit b44d211e (netlink: handle errors from netlink_dump()). Multithreaded Netlink dumps are one way to trigger this behavior in practice, as described in the commit message for the userspace workaround posted here: http://openvswitch.org/pipermail/dev/2014-June/042339.html This commit also fixes the same bug in netlink_poll(), introduced in commit cd1df525 (netlink: add flow control for memory mapped I/O). Signed-off-by: Ben Pfaff <blp@nicira.com> Signed-off-by: David S. Miller <davem@davemloft.net>
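As a minimal userspace sketch of why the sign convention matters (this is not the kernel code; fake_dump is a made-up name): an sk_err-style field holds positive errno values and a recv()-style path reports the negated field to the caller, so storing a negative errno flips the eventual result into a positive, success-looking value.

/*
 * Userspace illustration only: a dump routine returns a negative errno
 * (kernel convention), so it must be negated before being stored in an
 * sk_err-style field, or the "-sk_err" eventually reported to the caller
 * comes out positive and looks like a successful recv().
 */
#include <errno.h>
#include <stdio.h>

static int fake_dump(void)
{
    return -ENOBUFS;    /* kernel convention: negative errno on error */
}

int main(void)
{
    int sk_err_wrong = fake_dump();     /* buggy: stores -ENOBUFS */
    int sk_err_right = -fake_dump();    /* fixed: stores +ENOBUFS */

    /* a recv()-style path reports -sk_err to the caller */
    printf("buggy recv() result: %d (positive, looks like success)\n",
           -sk_err_wrong);
    printf("fixed recv() result: %d (negative errno as expected)\n",
           -sk_err_right);
    return 0;
}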
-
Thomas Fitzsimmons authored
This commit fixes the command value generated for CSUM calculation when running in big endian mode. The Ethernet protocol ID for IP was being unconditionally byte-swapped in the layer 3 protocol check (with swab16), which caused the mvneta driver to not function correctly in big endian mode. This patch byte-swaps the ID conditionally with htons. Cc: <stable@vger.kernel.org> # v3.13+ Signed-off-by: Thomas Fitzsimmons <fitzsim@fitzsim.org> Signed-off-by: David S. Miller <davem@davemloft.net>
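A standalone illustration (not the driver code) of why the unconditional swap is wrong: htons() is a no-op on big-endian hosts, while a hard-coded 16-bit byte swap is not, so only htons() yields the on-wire protocol ID on both endiannesses.

/*
 * Compile and run on both endiannesses: on little-endian both values come
 * out as 0x0008, but on big-endian htons() keeps 0x0800 while the
 * unconditional swap still mangles it to 0x0008.
 */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

#define ETH_P_IP 0x0800        /* Ethernet protocol ID for IPv4 */

static uint16_t swap16(uint16_t v)    /* stand-in for an unconditional swab16() */
{
    return (uint16_t)((v << 8) | (v >> 8));
}

int main(void)
{
    printf("host value      : 0x%04x\n", ETH_P_IP);
    printf("htons(ETH_P_IP) : 0x%04x\n", htons(ETH_P_IP));
    printf("swap16(ETH_P_IP): 0x%04x\n", swap16(ETH_P_IP));
    return 0;
}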
-
Thomas Petazzoni authored
As reported by Maggie Mae Roxas, the mvneta driver doesn't behave properly in 10 Mbit/s mode. This is due to a misconfiguration of the MVNETA_GMAC_AUTONEG_CONFIG register: bit MVNETA_GMAC_CONFIG_MII_SPEED must be set for a 100 Mbit/s speed, but cleared for a 10 Mbit/s speed, which the driver was not properly doing. This commit adjusts that by setting the MVNETA_GMAC_CONFIG_MII_SPEED bit only in 100 Mbit/s mode, and relying on the fact that all the speed related bits of this register are cleared at the beginning of the mvneta_adjust_link() function. This problem exists since c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit") which is the commit that introduced the mvneta driver in the kernel. Cc: <stable@vger.kernel.org> # v3.8+ Fixes: c5aff182 ("net: mvneta: driver for Marvell Armada 370/XP network unit") Reported-by: Maggie Mae Roxas <maggie.mae.roxas@gmail.com> Cc: Maggie Mae Roxas <maggie.mae.roxas@gmail.com> Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amir Vadai authored
It is recommended that TX work not count against the quota. The cost of TX packet liberation is a minute percentage of what it costs to process an RX frame. Furthermore, that SKB freeing makes memory available for other paths in the stack. Give the TX a larger budget and be more aggressive about cleaning up the Tx descriptors. This budget can be changed using ethtool: $ ethtool -C eth1 tx-frames-irq <budget> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
hayeswang authored
The device should be woken up from runtime suspend before dumping the hw counter. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrey Utkin authored
Setting just skb->sk without taking its reference and setting a destructor is invalid. However, in the places where this was done, skb is used in a way not requiring skb->sk setting. So drop the setting of skb->sk. Thanks to Eric Dumazet <eric.dumazet@gmail.com> for the correct solution. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=79441 Reported-by: Ed Martin <edman007@edman007.com> Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dmitry Popov authored
This patch fixes 3 similar bugs where incoming packets might be routed into wrong non-wildcard tunnels: 1) Consider the following setup: ip address add 1.1.1.1/24 dev eth0 ip address add 1.1.1.2/24 dev eth0 ip tunnel add ipip1 remote 2.2.2.2 local 1.1.1.1 mode ipip dev eth0 ip link set ipip1 up Incoming ipip packets from 2.2.2.2 were routed into ipip1 even if it has dst = 1.1.1.2. Moreover even if there was wildcard tunnel like ip tunnel add ipip0 remote 2.2.2.2 local any mode ipip dev eth0 but it was created before explicit one (with local 1.1.1.1), incoming ipip packets with src = 2.2.2.2 and dst = 1.1.1.2 were still routed into ipip1. Same issue existed with all tunnels that use ip_tunnel_lookup (gre, vti) 2) ip address add 1.1.1.1/24 dev eth0 ip tunnel add ipip1 remote 2.2.146.85 local 1.1.1.1 mode ipip dev eth0 ip link set ipip1 up Incoming ipip packets with dst = 1.1.1.1 were routed into ipip1, no matter what src address is. Any remote ip address which has ip_tunnel_hash = 0 raised this issue, 2.2.146.85 is just an example, there are more than 4 million of them. And again, wildcard tunnel like ip tunnel add ipip0 remote any local 1.1.1.1 mode ipip dev eth0 wouldn't be ever matched if it was created before explicit tunnel like above. Gre & vti tunnels had the same issue. 3) ip address add 1.1.1.1/24 dev eth0 ip tunnel add gre1 remote 2.2.146.84 local 1.1.1.1 key 1 mode gre dev eth0 ip link set gre1 up Any incoming gre packet with key = 1 were routed into gre1, no matter what src/dst addresses are. Any remote ip address which has ip_tunnel_hash = 0 raised the issue, 2.2.146.84 is just an example, there are more than 4 million of them. Wildcard tunnel like ip tunnel add gre2 remote any local any key 1 mode gre dev eth0 wouldn't be ever matched if it was created before explicit tunnel like above. All this stuff happened because while looking for a wildcard tunnel we didn't check that matched tunnel is a wildcard one. Fixed. Signed-off-by: Dmitry Popov <ixaphire@qrator.net> Signed-off-by: David S. Miller <davem@davemloft.net>
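A simplified, self-contained sketch of the lookup idea (this is not the kernel's ip_tunnel_lookup(); the struct and addresses are illustrative): a candidate that matched only on the remote address may serve as a fallback only if its local address really is the wildcard, and vice versa. The missing wildcard check is what let non-wildcard tunnels swallow these packets.

/*
 * Toy lookup over a tunnel list: exact (remote, local) matches win,
 * and partial matches are accepted as fallback only when the unmatched
 * field of the candidate is actually the wildcard (0).
 */
#include <stdint.h>
#include <stdio.h>

struct tun {
    const char *name;
    uint32_t remote;    /* 0 means "any" */
    uint32_t local;     /* 0 means "any" */
};

static const struct tun *lookup(const struct tun *t, int n,
                                uint32_t src, uint32_t dst)
{
    const struct tun *fallback = NULL;
    int i;

    for (i = 0; i < n; i++) {
        if (t[i].remote == src && t[i].local == dst)
            return &t[i];                       /* exact match wins */
        if (t[i].remote == src && t[i].local == 0 && !fallback)
            fallback = &t[i];                   /* wildcard-local candidate */
        if (t[i].local == dst && t[i].remote == 0 && !fallback)
            fallback = &t[i];                   /* wildcard-remote candidate */
    }
    return fallback;
}

int main(void)
{
    const struct tun tuns[] = {
        { "ipip1", 0x02020202, 0x01010101 },    /* remote 2.2.2.2, local 1.1.1.1 */
    };
    /* packet from 2.2.2.2 to 1.1.1.2: no tunnel should match */
    const struct tun *hit = lookup(tuns, 1, 0x02020202, 0x01010102);

    printf("%s\n", hit ? hit->name : "no match (correct)");
    return 0;
}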
-
- 08 Jul, 2014 16 commits
-
-
Rickard Strandqvist authored
Otherwise there is a risk of a null pointer dereference. This was largely found by using the static code analysis program cppcheck. Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
-
Jon Paul Maloy authored
Since commit 37e22164 ("tipc: rename and move message reassembly function") reassembly of long broadcast messages has been broken. This is because we test for a non-NULL return value of the *buf parameter as the criterion for successful reassembly. However, this parameter is left defined even after reception of the first fragment, when reassembly is still incomplete. This leads to a kernel crash as soon as the first fragment of a long broadcast message is received. We fix this with this commit, by implementing a stricter behavior of the function and its return values. This commit should be applied to both net and net-next. Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
This patch changes the IEEE 802.15.4 subsystem maintainer to Alexander Aring. We discussed this change before via e-mail and I collected the acks from the current maintainers. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Acked-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Acked-by: Alexander Smirnov <alex.bluesman.sminov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Vrabel says: ==================== xen-netfront: multi-queue related locking fixes Two fixes to the per-queue locking bugs in xen-netfront that were introduced in 3.16-rc1 with the multi-queue support. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Vrabel authored
In xennet_disconnect_backend(), netif_carrier_off() was called once per queue when it needs to only be called once. The queue locking around the netif_carrier_off() call looked very odd. I think they were supposed to synchronize any NAPI instances with the expectation that no further NAPI instances would be scheduled because of the carrier being off (see the check in xennet_rx_interrupt()). But I can't easily tell if this works correctly. Instead, add a napi_synchronize() call after disabling the interrupts. This is obviously correct as with no Rx interrupts, no further NAPI instances will be scheduled. Signed-off-by: David Vrabel <david.vrabel@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Vrabel authored
The nesting of the per-queue rx_lock and tx_lock in xennet_connect() is confusing to both humans and lockdep. The locking is safe because this is the only place where the locks are nested in this way but lockdep still warns. Instead of adding the missing lockdep annotations, refactor the locking to avoid the confusing nesting. This is still safe, because the xenbus connection state changes are all serialized by the xenwatch thread. Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reported-by: Sander Eikelenboom <linux@eikelenboom.it> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yuchung Cheng authored
The undo code assumes that, upon entering loss recovery, TCP 1) always retransmits something and 2) the retransmission never fails locally (e.g., qdisc drop), so undo_marker is set in tcp_enter_recovery() and undo_retrans is incremented only when tcp_retransmit_skb() is successful. When the assumption is broken, because TCP's cwnd is too small to retransmit or the retransmit fails locally, the next (DUP)ACK would incorrectly revert the cwnd and the congestion state in tcp_try_undo_dsack() or tcp_may_undo(). Subsequent (DUP)ACKs may make the sender enter the recovery state again. The sender repeatedly enters and (incorrectly) exits recovery states if the retransmits continue to fail locally while (DUP)ACKs are being received. The fix is to initialize undo_retrans to -1 and start counting on the first retransmission. Always increment undo_retrans even if the retransmissions fail locally because they couldn't cause DSACKs to undo the cwnd reduction. Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
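A conceptual sketch of the counting scheme (the helper names are hypothetical, not the kernel functions): starting undo_retrans at -1 distinguishes "nothing retransmitted yet" from "all retransmissions were DSACKed", and the counter is bumped even when the local retransmit fails, so spurious undos can no longer trigger.

/*
 * Standalone illustration of the undo_retrans bookkeeping described above.
 */
#include <stdio.h>

struct recovery_state {
    int undo_retrans;    /* -1: nothing retransmitted yet */
};

static void enter_recovery(struct recovery_state *s)
{
    s->undo_retrans = -1;        /* the fix: start at -1 instead of 0 */
}

static void on_retransmit_attempt(struct recovery_state *s, int segs)
{
    if (s->undo_retrans < 0)
        s->undo_retrans = 0;     /* start counting on the first attempt */
    s->undo_retrans += segs;     /* count even if the xmit fails locally */
}

static void on_dsack(struct recovery_state *s)
{
    if (s->undo_retrans > 0)
        s->undo_retrans--;
}

static int may_undo(const struct recovery_state *s)
{
    /* only when every counted retransmission was reported spurious */
    return s->undo_retrans == 0;
}

int main(void)
{
    struct recovery_state s;

    enter_recovery(&s);
    printf("undo allowed before any retransmit? %s\n",
           may_undo(&s) ? "yes (old bug)" : "no (fixed)");

    on_retransmit_attempt(&s, 1);
    on_dsack(&s);
    printf("undo allowed after DSACKed retransmit? %s\n",
           may_undo(&s) ? "yes" : "no");
    return 0;
}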
-
Or Gerlitz authored
The add_vxlan_port ndo driver code was wrongly testing whether HW vxlan offloads are supported by the device instead of checking if they are currently enabled. This causes the driver to configure the HW parser to conduct matching for vxlan packets but since no steering rules were set, vxlan packets are dropped on RX. Fix that by doing the right test, as done in the del_vxlan_port ndo handler. Fixes: 1b136de1 ('net/mlx4: Implement vxlan ndo calls') Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
dingtianhong authored
The problem was triggered by these steps: 1) create a socket, bind it, and then setsockopt to add an mc group. mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37"); mreq.imr_interface.s_addr = inet_addr("192.168.1.2"); setsockopt(sockfd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)); 2) drop the mc group for this socket. mreq.imr_multiaddr.s_addr = inet_addr("255.0.0.37"); mreq.imr_interface.s_addr = inet_addr("0.0.0.0"); setsockopt(sockfd, IPPROTO_IP, IP_DROP_MEMBERSHIP, &mreq, sizeof(mreq)); 3) then close the socket; the mc group is still held by the dev: netstat -g Interface RefCnt Group --------------- ------ --------------------- eth2 1 255.0.0.37 Normally, even though IP_DROP_MEMBERSHIP returns an error, the mc group still needs to be released on the netdev when the socket is dropped, but this process breaks when the default route is NULL. The reason: ip_mc_leave_group() chooses the in_dev by imr_interface.s_addr; if the input address is 0.0.0.0, the default route's dev is chosen, the ifindex is taken from that dev, inet->mc_list is walked, and -ENODEV is returned. But if the default route dev is NULL, the in_dev and ifindex are both NULL; when walking inet->mc_list the mc group is removed from the list, but the dev's refcnt for this mc group is never decremented. So when the socket is dropped, the mc_list is NULL and the dev still keeps this group. v1->v2: Following Hideaki's suggestion, we should align with IPv6 (RFC 3493) and the BSDs, so add a check for the in_dev before walking the mc_list, making sure that when we remove the mc group we decrement the refcnt on the real dev that was using the mc address. The problem can then no longer happen. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Loic Prylli authored
A bug was introduced in the NETDEV_CHANGE notifier sequence causing the arp table to sometimes be spuriously cleared (including manual arp entries marked permanent) upon network link carrier changes. The changed argument for the notifier was applied only to a single caller of NETDEV_CHANGE, missing among others netdev_state_change(). So upon carrier events induced by the network, which trigger a call to netdev_state_change(), arp_netdev_event() would decide whether or not to clear the arp cache based on random/junk stack values (a kind of read buffer overflow). Fixes: be9efd36 ("net: pass changed flags along with NETDEV_CHANGE event") Fixes: 6c8b4e3f ("arp: flush arp cache on IFF_NOARP change") Signed-off-by: Loic Prylli <loicp@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bernd Wachter authored
There's a new version of the Telewell 4G modem that works with, but is not recognized by, this driver. Signed-off-by: Bernd Wachter <bernd.wachter@jolla.com> Acked-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This reverts commit 0acf1676. Breaks the build due to missing reference to phy_resume in the resulting dwmac-socfpga.o object. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Zhao Qiang authored
Deal with a compile warning: comparison between 'enum qe_fltr_largest_external_tbl_lookup_key_size' and 'enum qe_fltr_tbl_lookup_key_size'. The code "if (ug_info->largestexternallookupkeysize == QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES)" triggers the warning because the two operands have different enum types, so modify the comparison. "enum qe_fltr_largest_external_tbl_lookup_key_size largestexternallookupkeysize; enum qe_fltr_tbl_lookup_key_size { QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES = 0x3f, /* LookupKey parsed by the Generate LookupKey CMD is truncated to 8 bytes */ QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES = 0x5f, /* LookupKey parsed by the Generate LookupKey CMD is truncated to 16 bytes */ }; /* QE FLTR extended filtering Largest External Table Lookup Key Size */ enum qe_fltr_largest_external_tbl_lookup_key_size { QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_NONE = 0x0,/* not used */ QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES = QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES, /* 8 bytes */ QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES = QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES, /* 16 bytes */ };" Signed-off-by: Zhao Qiang <B45475@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
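A standalone illustration of the situation; the enums are copied from the quoted header, but the warning-free comparison shown is only an assumption about how the driver could be adjusted, not the actual patch. Comparing against the constant of the matching enum type keeps the same numeric value without mixing enum types.

/*
 * Compiles cleanly: the comparison uses a constant of the same enum type
 * as 'key'. A mixed-enum comparison such as
 *   key == QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES
 * is what -Wenum-compare complains about, even though the values match.
 */
enum qe_fltr_tbl_lookup_key_size {
    QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES = 0x3f,
    QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES = 0x5f,
};

enum qe_fltr_largest_external_tbl_lookup_key_size {
    QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_NONE = 0x0,
    QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES =
        QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES,
    QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES =
        QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES,
};

int main(void)
{
    enum qe_fltr_largest_external_tbl_lookup_key_size key =
        QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES;

    /* same numeric value, same enum type, no warning */
    return key == QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES ? 0 : 1;
}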
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/pshelar/openvswitch
David S. Miller authored
Pravin B Shelar says: ==================== Open vSwitch A set of fixes for net. The first bug is related to flow-table management. The second one is in the sample action. The third is related to flow stats, and the last one adds a gre-err handler for OVS. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tom Herbert authored
In process_backlog the input_pkt_queue is only checked once for new packets and the quota is artificially reduced to reflect precisely the number of packets on the input_pkt_queue so that the loop exits appropriately. This patch changes the behavior to be more straightforward and less convoluted. Packets are processed until either the quota is met or there are no more packets to process. This patch seems to provide a small but noticeable performance improvement. The performance improvement is a result of staying in the process_backlog loop longer, which can reduce the number of IPIs. Performance data using super_netperf TCP_RR with 200 flows: Before fix: 88.06% CPU utilization 125/190/309 90/95/99% latencies 1.46808e+06 tps 1145382 intrs./sec. With fix: 87.73% CPU utilization 122/183/296 90/95/99% latencies 1.4921e+06 tps 1021674.30 intrs./sec. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
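A toy model of the changed loop (not the kernel's process_backlog()): keep pulling from the backlog until either the NAPI quota is spent or the queue is empty, instead of sampling the queue length once and shrinking the quota to match it.

/*
 * Standalone sketch: the loop condition alone expresses the new behavior.
 */
#include <stdio.h>

static int backlog_len = 7;    /* pretend packets queued by other CPUs */

static int dequeue_one(void)
{
    if (backlog_len == 0)
        return 0;
    backlog_len--;
    return 1;
}

static int process_backlog(int quota)
{
    int work = 0;

    while (work < quota && dequeue_one())
        work++;    /* stay in the loop as long as there is work and quota */
    return work;
}

int main(void)
{
    printf("processed %d packets, %d left\n", process_backlog(64), backlog_len);
    return 0;
}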
-
Edward Allcutt authored
Some older router implementations still send Fragmentation Needed errors with the Next-Hop MTU field set to zero. This is explicitly described as an eventuality that hosts must deal with by the standard (RFC 1191) since older standards specified that those bits must be zero. Linux had a generic (for all of IPv4) implementation of the algorithm described in the RFC for searching a list of MTU plateaus for a good value. Commit 46517008 ("ipv4: Kill ip_rt_frag_needed().") removed this as part of the changes to remove the routing cache. Subsequently any Fragmentation Needed packet with a zero Next-Hop MTU has been discarded without being passed to the per-protocol handlers or notifying userspace for raw sockets. When there is a router which does not implement RFC 1191 on an MTU limited path then this results in stalled connections since large packets are discarded and the local protocols are not notified so they never attempt to lower the pMTU. One example I have seen is an OpenBSD router terminating IPSec tunnels. It's worth pointing out that this case is distinct from the BSD 4.2 bug which incorrectly calculated the Next-Hop MTU since the commit in question dismissed that as a valid concern. All of the per-protocols handlers implement the simple approach from RFC 1191 of immediately falling back to the minimum value. Although this is sub-optimal it is vastly preferable to connections hanging indefinitely. Remove the Next-Hop MTU != 0 check and allow such packets to follow the normal path. Fixes: 46517008 ("ipv4: Kill ip_rt_frag_needed().") Signed-off-by: Edward Allcutt <edward.allcutt@openmarket.com> Signed-off-by: David S. Miller <davem@davemloft.net>
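For context, a self-contained sketch of the RFC 1191 plateau search that the removed ip_rt_frag_needed() code is described as having implemented (plateau values as listed in RFC 1191). This is illustration only: the fix in this commit simply lets zero-MTU reports reach the per-protocol handlers, which fall back to the minimum value.

/*
 * Given the size of the packet that was too big, guess the next-lower
 * "likely" MTU when the router reported a Next-Hop MTU of zero.
 */
#include <stdio.h>

static const unsigned short mtu_plateaus[] =
    { 32000, 17914, 8166, 4352, 2002, 1492, 1006, 508, 296, 68 };

static unsigned short guess_mtu(unsigned short old_mtu)
{
    unsigned int i;

    for (i = 0; i < sizeof(mtu_plateaus) / sizeof(mtu_plateaus[0]); i++)
        if (old_mtu > mtu_plateaus[i])
            return mtu_plateaus[i];
    return 68;    /* minimum IPv4 MTU */
}

int main(void)
{
    /* e.g. a 1500-byte packet was dropped with Next-Hop MTU == 0 */
    printf("next guess below 1500: %u\n", guess_mtu(1500));    /* 1492 */
    return 0;
}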
-
- 03 Jul, 2014 12 commits
-
-
David S. Miller authored
Vince Bridgers says: ==================== net: stmmac: Correct socfpga init/exit and This patch series adds platform specific init/exit code so that socfpga suspend/resume works as expected, and corrects a minor issue detected by cppcheck. V2: Address review comments by adding a line break at end of function and structure declaration. Add another trivial cppcheck patch. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vince Bridgers authored
Cppcheck found a case where a local variable was being assigned a value but not used. There seems to be no reason to read this register before assigning a new value, so this patch addresses the issue. cppcheck --force --enable=all --inline-suppr . shows ... Variable 'value' is reassigned a value before the old one has been used. Signed-off-by: Vince Bridgers <vbridgers2013@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vince Bridgers authored
Cppcheck found a duplicate if/then/else case where a receive descriptor was being processed. This patch corrects that issue. cppcheck --force --enable=all --inline-suppr . ... Checking enh_desc.c... [enh_desc.c:148] -> [enh_desc.c:144]: (style) Found duplicate if expressions. ... Signed-off-by: Vince Bridgers <vbridgers2013@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vince Bridgers authored
This patch adds platform init/exit functions and modifications to support suspend/resume for the Altera Cyclone 5 SOC Ethernet controller. The platform exit function puts the controller into reset using the socfpga reset controller driver. The platform init function sets up the Synopsys mac by first making sure the Ethernet controller is held in reset, programming the phy mode through external support logic, then deasserts reset through the socfpga reset manager driver. Signed-off-by: Vince Bridgers <vbridgers2013@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Aring authored
The max_dsize attribute in the ctl_table for lowpan_frags_ns_ctl_table is configured with integer accessor methods. This patch changes the max_dsize attribute to int to avoid a possible buffer overflow. Signed-off-by: Alexander Aring <alex.aring@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Amir Vadai says: ==================== Mellanox EN driver fixes 2014-06-23 Below are some fixes to patches submitted to 3.16. The first patch, following discussions with Ben [1] and Thomas [2], is to not use the affinity notifier, since it breaks RFS. Instead, detect changes in the IRQ affinity map by checking whether the current CPU is set in the affinity map on NAPI poll. The two other patches fix some bugs introduced in commit [3]. Patches were applied and tested over commit dba63115: ('powerpc: bpf: Fix the broken LD_VLAN_TAG_PRESENT test') ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amir Vadai authored
Need to remove affinity hint at mlx4_en_deactivate_cq() and not at mlx4_en_destroy_cq() - since affinity_mask might be free'd while still being used by procfs. Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amir Vadai authored
When the device is not NUMA-aware (numa_node == -1), use all online CPUs. Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amir Vadai authored
IRQ affinity notifier can only have a single notifier - cpu_rmap notifier. Can't use it to track changes in IRQ affinity map. Detect IRQ affinity changes by comparing CPU to current IRQ affinity map during NAPI poll thread. CC: Thomas Gleixner <tglx@linutronix.de> CC: Ben Hutchings <ben@decadent.org.uk> Fixes: 2eacc23c ("net/mlx4_core: Enforce irq affinity changes immediatly") Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maciej W. Rozycki authored
This fixes compilation warnings: drivers/net/fddi/defxx.c:294: warning: 'dfx_rcv_flush' declared inline after being called drivers/net/fddi/defxx.c:294: warning: previous declaration of 'dfx_rcv_flush' was here drivers/net/fddi/defxx.c:2854: warning: 'my_skb_align' defined but not used triggered when the driver is built with DYNAMIC_BUFFERS undefined. Code tested to work just fine with these changes and a few DEFPA and DEFTA boards. Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maciej W. Rozycki authored
The RX handler of the driver switches between two paths depending on the size of the frame received, as determined by SKBUFF_RX_COPYBREAK. When a small frame is received, the newly allocated skb has data space large enough to hold the incoming frame only, and data is copied there from the original skb whose buffer is returned to the DMA RX ring; in that case `rx_in_place' is 0. When a large frame is received, the newly allocated skb has data space large enough to hold the largest frame possible, including the overhead for alignment, the receive status and padding, over 4.5kiB overall, and its buffer is placed on the DMA RX ring while the original buffer is passed up to the network stack, avoiding the need to copy data; in that case `rx_in_place' is 1. However, the latter scenario is only possible when dynamic buffers are used, as determined by DYNAMIC_BUFFERS, because otherwise the buffers used for the DMA RX ring are fixed at the time the interface is brought up. That leads to the observation that the preprocessor conditional around the `rx_in_place' check is inverted: the check only really matters when dynamic buffers are in use. It has gone unnoticed for many years since support for using dynamic buffers on the DMA RX ring was introduced in 2.1.40, because the only problem that results is that, in the case where `rx_in_place' is 1, received frame data is unnecessarily copied to the newly-allocated buffer before that buffer is placed on the DMA RX ring and its contents are ignored. Therefore the only symptom is some performance loss. Rather than flipping the condition, though, I decided to discard the conditional altogether -- in the case of static buffers `rx_in_place' is always 0, so GCC will optimise the C conditional away instead. Tested on a few DEFPA and DEFTA boards successfully using both small and large frames, both with DYNAMIC_BUFFERS defined and with the macro undefined. Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christoph Paasch authored
When in repair-mode and TCP_RECV_QUEUE is set, we end up calling tcp_push with mss_now being 0. If data is in the send-queue and tcp_set_skb_tso_segs gets called, we crash because it will divide by mss_now: [ 347.151939] divide error: 0000 [#1] SMP [ 347.152907] Modules linked in: [ 347.152907] CPU: 1 PID: 1123 Comm: packetdrill Not tainted 3.16.0-rc2 #4 [ 347.152907] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007 [ 347.152907] task: f5b88540 ti: f3c82000 task.ti: f3c82000 [ 347.152907] EIP: 0060:[<c1601359>] EFLAGS: 00210246 CPU: 1 [ 347.152907] EIP is at tcp_set_skb_tso_segs+0x49/0xa0 [ 347.152907] EAX: 00000b67 EBX: f5acd080 ECX: 00000000 EDX: 00000000 [ 347.152907] ESI: f5a28f40 EDI: f3c88f00 EBP: f3c83d10 ESP: f3c83d00 [ 347.152907] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 [ 347.152907] CR0: 80050033 CR2: 083158b0 CR3: 35146000 CR4: 000006b0 [ 347.152907] Stack: [ 347.152907] c167f9d9 f5acd080 000005b4 00000002 f3c83d20 c16013e6 f3c88f00 f5acd080 [ 347.152907] f3c83da0 c1603b5a f3c83d38 c10a0188 00000000 00000000 f3c83d84 c10acc85 [ 347.152907] c1ad5ec0 00000000 00000000 c1ad679c 010003e0 00000000 00000000 f3c88fc8 [ 347.152907] Call Trace: [ 347.152907] [<c167f9d9>] ? apic_timer_interrupt+0x2d/0x34 [ 347.152907] [<c16013e6>] tcp_init_tso_segs+0x36/0x50 [ 347.152907] [<c1603b5a>] tcp_write_xmit+0x7a/0xbf0 [ 347.152907] [<c10a0188>] ? up+0x28/0x40 [ 347.152907] [<c10acc85>] ? console_unlock+0x295/0x480 [ 347.152907] [<c10ad24f>] ? vprintk_emit+0x1ef/0x4b0 [ 347.152907] [<c1605716>] __tcp_push_pending_frames+0x36/0xd0 [ 347.152907] [<c15f4860>] tcp_push+0xf0/0x120 [ 347.152907] [<c15f7641>] tcp_sendmsg+0xf1/0xbf0 [ 347.152907] [<c116d920>] ? kmem_cache_free+0xf0/0x120 [ 347.152907] [<c106a682>] ? __sigqueue_free+0x32/0x40 [ 347.152907] [<c106a682>] ? __sigqueue_free+0x32/0x40 [ 347.152907] [<c114f0f0>] ? do_wp_page+0x3e0/0x850 [ 347.152907] [<c161c36a>] inet_sendmsg+0x4a/0xb0 [ 347.152907] [<c1150269>] ? handle_mm_fault+0x709/0xfb0 [ 347.152907] [<c15a006b>] sock_aio_write+0xbb/0xd0 [ 347.152907] [<c1180b79>] do_sync_write+0x69/0xa0 [ 347.152907] [<c1181023>] vfs_write+0x123/0x160 [ 347.152907] [<c1181d55>] SyS_write+0x55/0xb0 [ 347.152907] [<c167f0d8>] sysenter_do_call+0x12/0x28 This can easily be reproduced with the following packetdrill-script (the "magic" with netem, sk_pacing and limit_output_bytes is done to prevent the kernel from pushing all segments, because hitting the limit without doing this is not so easy with packetdrill): 0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 +0 bind(3, ..., ...) = 0 +0 listen(3, 1) = 0 +0 < S 0:0(0) win 32792 <mss 1460> +0 > S. 0:0(0) ack 1 <mss 1460> +0.1 < . 1:1(0) ack 1 win 65000 +0 accept(3, ..., ...) = 4 // This forces that not all segments of the snd-queue will be pushed +0 `tc qdisc add dev tun0 root netem delay 10ms` +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=2` +0 setsockopt(4, SOL_SOCKET, 47, [2], 4) = 0 +0 write(4,...,10000) = 10000 +0 write(4,...,10000) = 10000 // Set tcp-repair stuff, particularly TCP_RECV_QUEUE +0 setsockopt(4, SOL_TCP, 19, [1], 4) = 0 +0 setsockopt(4, SOL_TCP, 20, [1], 4) = 0 // This now will make the write push the remaining segments +0 setsockopt(4, SOL_SOCKET, 47, [20000], 4) = 0 +0 `sysctl -w net.ipv4.tcp_limit_output_bytes=130000` // Now we will crash +0 write(4,...,1000) = 1000 This happens since ec342325 (tcp: fix retransmission in repair mode). Prior to that, the call to tcp_push was prevented by a check for tp->repair. 
The patch fixes it, by adding the new goto-label out_nopush. When exiting tcp_sendmsg and a push is not required, which is the case for tp->repair, we go to this label. When repairing and calling send() with TCP_RECV_QUEUE, the data is actually put in the receive-queue. So, no push is required because no data has been added to the send-queue. Cc: Andrew Vagin <avagin@openvz.org> Cc: Pavel Emelyanov <xemul@parallels.com> Fixes: ec342325 (tcp: fix retransmission in repair mode) Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be> Acked-by: Andrew Vagin <avagin@openvz.org> Acked-by: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-