- 27 Apr, 2010 40 commits
-
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
This patch makes it so that igb now uses the DMA API functions instead of the PCI API functions. To do this the pci_dev pointer that was in the rings has been replaced with a device pointer, and as a result all references to [tr]x_ring->pdev have been replaced with [tr]x_ring->dev. This patch is based on work originally done by Nicholas Nunley. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
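As a rough illustration of the kind of mechanical change this implies (buffer_info and bufsz below are schematic names, not the exact igb code), a mapping call moves from the PCI wrappers to the generic DMA API:

    /* Before: PCI API, keyed off the pci_dev kept in the ring */
    buffer_info->dma = pci_map_single(rx_ring->pdev, skb->data,
                                      bufsz, PCI_DMA_FROMDEVICE);

    /* After: DMA API, keyed off a generic struct device pointer */
    buffer_info->dma = dma_map_single(rx_ring->dev, skb->data,
                                      bufsz, DMA_FROM_DEVICE);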
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nick Nunley authored
Signed-off-by: Nicholas Nunley <nicholasx.d.nunley@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Noticed by Michał Mirosław. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dimitris Michailidis authored
Implement the ->set_flags ethtool method to control NETIF_F_RXHASH and set skb->rxhash to the HW calculated hash accordingly. Follow Eric Dumazet's suggestion and use the hash value raw. Signed-off-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
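A minimal sketch of the pattern described (driver-neutral, with hypothetical mydrv_/hw_hash_value names; not the cxgb4 implementation): an ethtool set_flags handler that toggles NETIF_F_RXHASH, and an rx path that stores the hardware hash raw.

    static int mydrv_set_flags(struct net_device *dev, u32 data)
    {
            /* only the RX hash flag is handled in this sketch */
            if (data & ~ETH_FLAG_RXHASH)
                    return -EOPNOTSUPP;

            if (data & ETH_FLAG_RXHASH)
                    dev->features |= NETIF_F_RXHASH;
            else
                    dev->features &= ~NETIF_F_RXHASH;
            return 0;
    }

    /* rx path: use the NIC-computed hash value as-is */
    if (dev->features & NETIF_F_RXHASH)
            skb->rxhash = hw_hash_value;    /* hw_hash_value: taken from the rx descriptor */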
-
Sebastian Andrzej Siewior authored
Doing it within the driver does not look good, and surely isn't how platform devices were meant to be used. Acked-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sebastian Andrzej Siewior authored
CONFIG_SIBYTE_STANDALONE has been gone since v2.6.31-rc1 ("MIPS: Sibyte: Remove standalone kernel support"); this is a missing piece. Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Now there's no need to use this function directly because it's handled by register_pernet_device. So, to keep this simple and easy to understand, make it static so as not to tempt potential users. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
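For context, a sketch of the pernet pattern being relied on here (the foo_* names are purely illustrative): the init/exit handlers are invoked by the core through register_pernet_device(), so nothing outside the file needs to call them directly and they can be static.

    static int __net_init foo_net_init(struct net *net)
    {
            return 0;       /* per-namespace setup would go here */
    }

    static void __net_exit foo_net_exit(struct net *net)
    {
            /* per-namespace teardown */
    }

    static struct pernet_operations foo_net_ops = {
            .init = foo_net_init,
            .exit = foo_net_exit,
    };

    /* in module init: register_pernet_device(&foo_net_ops); */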
-
Dimitris Michailidis authored
Some boards have longer serial numbers in their VPD, up to 24 bytes. Signed-off-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dimitris Michailidis authored
Some boards' VPDs contain additional keywords or have longer serial numbers, meaning the keyword locations are variable. Ditch the static layout and use the pci_vpd_* family of functions to parse the VPD instead. Signed-off-by: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
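A hedged sketch of the approach (deliberately naive: it ignores the resource-data-tag framing that the real pci_vpd_find_tag()/pci_vpd_find_info_keyword() helpers handle, and is only meant to show why keyword positions are not fixed): read the VPD through the PCI core and search for the serial-number keyword instead of assuming a fixed offset.

    u8 vpd[512];
    ssize_t len = pci_read_vpd(pdev, 0, sizeof(vpd), vpd);

    if (len > 0) {
            ssize_t i = 0;

            /* each VPD info field is keyword[2], length[1], data[length] */
            while (i + 3 < len) {
                    u8 fld_len = vpd[i + 2];

                    if (vpd[i] == 'S' && vpd[i + 1] == 'N') {
                            /* serial number: fld_len bytes at &vpd[i + 3] */
                            break;
                    }
                    i += 3 + fld_len;
            }
    }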
-
Eric Dumazet authored
The current socket backlog limit is not enough to really stop DDOS attacks, because the user thread spends a lot of time processing a full backlog each round, and the user might spin madly on the socket lock. We should add the backlog size and the receive_queue size (aka rmem_alloc) to pace writers, and let the user run without being slowed down too much. Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock in stress situations. Under huge stress from a multiqueue/RPS enabled NIC, a single-flow udp receiver can now process ~200.000 pps (instead of ~100 pps before the patch) on an 8 core machine. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
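A minimal sketch of the helper's idea, based on the description above (illustrative rather than the exact in-tree code): the socket is considered full when the backlog plus already-charged receive memory would exceed sk_rcvbuf.

    static inline bool sk_rcvqueues_full(const struct sock *sk,
                                         const struct sk_buff *skb)
    {
            unsigned int qsize = sk->sk_backlog.len +
                                 atomic_read(&sk->sk_rmem_alloc);

            /* no socket lock needed: this is only a pacing heuristic */
            return qsize + skb->truesize > sk->sk_rcvbuf;
    }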
-
Changli Gao authored
Batch skb dequeueing from the softnet input_pkt_queue to reduce potential lock contention when RPS is enabled. Note: in the worst case, the number of packets in a softnet_data may be double netdev_max_backlog. Signed-off-by: Changli Gao <xiaosuo@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
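An illustrative sketch of the batching idea (simplified, not the exact net/core/dev.c change; sd is assumed to be the per-cpu softnet_data and __netif_receive_skb() is dev.c's internal delivery routine): splice whatever is queued onto a private list under the queue lock, then deliver without holding it.

    struct sk_buff_head process_queue;
    struct sk_buff *skb;

    __skb_queue_head_init(&process_queue);

    spin_lock_irq(&sd->input_pkt_queue.lock);
    skb_queue_splice_tail_init(&sd->input_pkt_queue, &process_queue);
    spin_unlock_irq(&sd->input_pkt_queue.lock);

    /* the lock is dropped: RPS enqueuers can refill input_pkt_queue
     * while we work through the batch we grabbed */
    while ((skb = __skb_dequeue(&process_queue)) != NULL)
            __netif_receive_skb(skb);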
-
David S. Miller authored
Idea from Eric Dumazet. As for placement inside of struct sock, I tried to choose a place that otherwise has a 32-bit hole on 64-bit systems. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
-
Anjali Singhai authored
When Intel SmartSpeed is in use, display a warning when the link downshifts to 1 Gig. Signed-off-by: Anjali Singhai <anjali.singhai@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Don Skidmore authored
The way we were setting autoneg via ethtool was inconsistent with that of our other drivers. It will change the following. If autoneg is off:
>ethtool -a eth0
Pause parameters for eth0:
Autonegotiate: off
RX: off
TX: off
Before:
>ethtool -A eth0 autoneg on
>ethtool -a eth0
Pause parameters for eth0:
Autonegotiate: off
RX: off
TX: off
Now:
>ethtool -A eth0 autoneg on
>ethtool -a eth0
Pause parameters for eth0:
Autonegotiate: on
RX: on
TX: on
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Greg Rose authored
The ixgbevf driver would always report 10Gig speeds even when the link speed is downshifted to 1Gig. This patch fixes that problem. Signed-off-by: Greg Rose <gregory.v.rose@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Changli Gao authored
Reimplement softnet_data.output_queue as a FIFO queue to maintain fairness among rescheduled qdiscs. Signed-off-by: Changli Gao <xiaosuo@gmail.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
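Schematically, the FIFO version appends a rescheduled qdisc at the tail through a pointer-to-last-link instead of pushing it on the head (the output_queue_tailp field name is an assumption here; qdiscs are chained via ->next_sched):

    /* enqueue a rescheduled qdisc at the tail (FIFO), not the head (LIFO) */
    q->next_sched = NULL;
    *sd->output_queue_tailp = q;
    sd->output_queue_tailp = &q->next_sched;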
-
-
Joe Perches authored
Convert DEBUGOUTx to pr_debug
Convert DEBUGFUNC to more commonly used ENTER
Convert mac address output to %pM
Use #define pr_fmt
Convert a few printks to pr_<level>
Improve ixgb_mc_addr_list_update: use a temporary for current mc address
Use etherdevice.h functions for mac address testing
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Fastabend authored
There is a small race between when the tx queues are stopped and when netif_carrier_off() is called in ixgbe_down. If the dev_watchdog() timer fires during this time it is possible for a false tx timeout to occur. This patch moves the netif_carrier_off() call so that it is made before the tx queues are stopped, preventing the dev_watchdog timer from detecting false tx timeouts. The race is seen occasionally when FCoE or DCB settings are being configured or changed. Testing note: running ifconfig up/down will not reproduce this issue because dev_open/dev_close call dev_deactivate() and then dev_activate(). Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
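A sketch of the reordering being described (illustrative, not the literal ixgbe_down() code):

    /* announce loss of carrier first, so dev_watchdog() stops checking
     * the tx queues before they are actually stopped below */
    netif_carrier_off(netdev);
    netif_tx_stop_all_queues(netdev);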
-
Alexander Duyck authored
This change corrects the fact that we were not reporting Gen2 link speeds when we were in fact connected at Gen2 rates. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jesse Brandeburg authored
Writebacks can be held indefinitely by the hardware when EITR=0 is combined with TXDCTL.WTHRESH=8, so when EITR=0, WTHRESH should be set back to zero. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jesse Brandeburg authored
82598/82599 can support EITR == 0, which allows for the absolute lowest latency setting in the hardware. This disables writeback batching and anything else that relies upon a delayed interrupt. This patch adds an "override": when a user sets rx-usecs to zero, the driver respects that setting over RSC and automatically disables RSC. If rx-usecs is used to set the EITR value to 0, the driver disables LRO (aka RSC) internally until EITR is set to non-zero again. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
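Put together, the rule from these two entries looks roughly like this (the field names are schematic, not the real ixgbe structures):

    if (rx_usecs == 0) {
            itr_setting = 0;        /* EITR = 0: interrupt per event, lowest latency */
            wthresh     = 0;        /* writeback batching must be off with EITR = 0  */
            rsc_enabled = false;    /* RSC/LRO relies on delayed interrupts          */
    }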
-
Koki Sanagi authored
There is no need to increment nr_frags because skb_fill_page_desc increments it. Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
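In other words, the helper already does the bookkeeping (schematic example; page/length stand in for the driver's locals):

    skb_fill_page_desc(skb, i, page, 0, length);
    /* skb_shinfo(skb)->nr_frags has already been set to i + 1 by the
     * helper; incrementing it again here would double-count the frag */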
-
Koki Sanagi authored
There is no need to increment nr_frags because skb_fill_page_desc increments it. Signed-off-by: Koki Sanagi <sanagi.koki@jp.fujitsu.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
__sk_dst_set() might be called while no state can be integrated in a rcu_dereference_check() condition, so use rcu_dereference_raw() to shut up lockdep warnings (if CONFIG_PROVE_RCU is set). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
The const qualifier on the sock argument is misleading, since we can modify rxhash. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Flavio Leitner authored
RFC 1122 says the following: ... Keep-alive packets MUST only be sent when no data or acknowledgement packets have been received for the connection within an interval. ... The acknowledgement packet resets the keepalive timer but the data packet doesn't. This patch fixes it by also checking the timestamp of the last received data packet when the keepalive timer expires. Signed-off-by: Flavio Leitner <fleitner@redhat.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
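A sketch of the check this implies (helper name and exact fields are assumptions based on the TCP code of that era, not necessarily the patch verbatim): take the elapsed time as the smaller of "since last data" and "since last ACK" before deciding whether to send a keepalive probe.

    static inline u32 keepalive_time_elapsed(const struct tcp_sock *tp)
    {
            const struct inet_connection_sock *icsk = &tp->inet_conn;

            /* lrcvtime: last data received; rcv_tstamp: last ACK received */
            return min_t(u32, tcp_time_stamp - icsk->icsk_ack.lrcvtime,
                              tcp_time_stamp - tp->rcv_tstamp);
    }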
-
stephen hemminger authored
I prefer that the hlist be only accessed through the hlist macro objects. Explicit twiddling of links (especially with RCU) exposes the code to future bugs. Compile tested only. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
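For example, prefer the list helpers over open-coded pointer updates (node and new_head are placeholders):

    /* instead of manually rewiring ->next / ->pprev with RCU assignments: */
    hlist_del_rcu(&node->hlist);
    hlist_add_head_rcu(&node->hlist, new_head);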
-
stephen hemminger authored
Use existing inline function. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Conflicts: drivers/net/e100.c drivers/net/e1000e/netdev.c
-
Paul E. McKenney authored
Calls to twsk_net() are in some cases protected by reference counting as an alternative to RCU protection. Cases covered by reference counts include __inet_twsk_kill(), inet_twsk_free(), inet_twdr_do_twkill_work(), inet_twdr_twcal_tick(), and tcp_timewait_state_process(). RCU is used by inet_twsk_purge(). Locking is used by established_get_first() and established_get_next(). Finally, __inet_twsk_hashdance() is an initialization case. It appears to be non-trivial to locate the appropriate locks and reference counts from within twsk_net(), so rcu_dereference_raw() is used. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andre Detsch authored
In some Power7 platforms, when using VIOS (Virtual I/O Server), we need to wait longer for control packets to finish transfer during initialization. Without this change, initialization may fail prematurely. Signed-off-by: Wen Xiong <wenxiong@us.ibm.com> Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Acked-by: Divy Le Ray <divy@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bruce Allan authored
Prompted by a previous patch submitted by Matthew Garret <mjg@redhat.com>, further digging into errata documentation reveals that the current enabling or disabling of ASPM L0s and L1 states for certain parts supported by this driver is incorrect. 82571 and 82572 should always disable L1. For standard frames, 82573/82574/82583 can enable L1 but L0s must be disabled, and for jumbo frames 82573/82574 must disable L1. This allows some parts to enable L1 in certain configurations, leading to better power savings. Also, according to the same errata, Early Receive (ERT) should be disabled on 82573 when using jumbo frames. Cc: Matthew Garret <mjg@redhat.com> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peter Waskiewicz authored
The PHY laser is still on during driver init. It's allowing garbage to hit our FIFO, which eventually can cause the entire device to die. Power down the laser while setting up the device, and re-enable the laser before getting link. Signed-off-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Based upon a report from Stephen Rothwell: -------------------- net/bridge/br_multicast.c: In function 'br_ip6_multicast_alloc_query': net/bridge/br_multicast.c:469: error: implicit declaration of function 'csum_ipv6_magic' Introduced by commit 08b202b6 ("bridge br_multicast: IPv6 MLD support") from the net tree. csum_ipv6_magic is declared in net/ip6_checksum.h ... -------------------- Signed-off-by: David S. Miller <davem@davemloft.net>
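Given the report above, the fix is presumably just pulling in the header that declares csum_ipv6_magic() in net/bridge/br_multicast.c:

    #include <net/ip6_checksum.h>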
-