- 17 Nov, 2009 10 commits
-
-
Wolfram Sang authored
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
The upper layer does not support it yet. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
- remove whitespace - use ! and ?: when appropriate - make braces consistent Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Conflicts: drivers/net/can/Kconfig
-
zeal authored
ks8695_rx() calls refill_buffers() for every incoming packet. That is not necessary; it only needs to be done once, after receiving has finished. The 'RX dma engine' is in the same situation. This blocks our user space application. This patch fixes it. Signed-off-by: zeal <zealcook@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
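A rough sketch of the shape of the fix, in kernel C. The helper names and the private struct here are illustrative stand-ins built around the description above, not the actual driver source:

    /* Illustrative only: move buffer refilling out of the per-packet path. */
    static void ks8695_rx(struct ks8695_priv *ksp)
    {
            while (rx_descriptor_ready(ksp)) {      /* hypothetical helper */
                    deliver_one_packet(ksp);        /* hand one frame to the stack */
                    /* before: refill_buffers(ksp) was called here, once per packet */
            }
            /* after: refill the RX ring (and kick the RX DMA engine) once,
             * when the whole burst has been processed */
            refill_buffers(ksp);
    }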
-
zeal authored
The ks8695 RX IRQ is edge-triggered. By the time the handler runs, the corresponding status bit has already been cleared (the IRQ ack), so we should not check it again there. Signed-off-by: zeal <zealcook@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben McKeegan authored
PPP does not correctly call pskb_may_pull() on all necessary receive paths before reading the PPP protocol, thus causing PPP to report seemingly random 'unsupported protocols' and eventually trigger BUG_ON(skb->len < skb->data_len) in skb_pull_rcsum() when receiving the multilink protocol in non-linear skbs. ppp_receive_nonmp_frame() does not call pskb_may_pull() before reading the protocol number. For the non-mp receive path this is not a problem, as the check is done in ppp_receive_frame(). For the mp receive path, ppp_mp_reconstruct() usually copies the data into a new linear skb. However, when the frame is made up of a single mp fragment, the mp header is pulled and the existing skb is reused. That skb is then passed to ppp_receive_nonmp_frame() without checking whether the encapsulated protocol header can safely be read. Signed-off-by: Ben McKeegan <ben@netservers.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
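A simplified sketch of the kind of check this adds before the protocol number is first read on the receive path. pskb_may_pull() is the standard skb helper and PPP_PROTO() is used as in ppp_generic.c; the surrounding function is heavily reduced:

    /* Make sure the 2-byte protocol field is in the linear skb area
     * before it is dereferenced; drop the frame otherwise. */
    static void ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
    {
            int proto;

            if (!pskb_may_pull(skb, 2)) {   /* header may sit in a page fragment */
                    kfree_skb(skb);
                    return;
            }
            proto = PPP_PROTO(skb);         /* now safe to read skb->data[0..1] */
            /* ... protocol dispatch continues as before ... */
    }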
-
Breno Leitao authored
After commit 4b77b0a2, EEH breaks after the second error: pci_restore_state() is called but returns 0, because pci->state_saved is false. This patch simply calls pci_save_state() after pci_restore_state(). Signed-off-by: Breno Leitao <leitao@linux.vnet.ibm.com> Acked-by: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
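A minimal sketch of the ordering described above, shown in a generic error-recovery callback (the callback name is illustrative, not the driver's actual function):

    /* After commit 4b77b0a2, pci_restore_state() consumes the saved state
     * (state_saved becomes false), so save it again right away to keep a
     * valid copy around for the next EEH-triggered recovery. */
    static pci_ers_result_t example_io_slot_reset(struct pci_dev *pdev)
    {
            pci_restore_state(pdev);
            pci_save_state(pdev);           /* re-arm for the next error */
            return PCI_ERS_RESULT_RECOVERED;
    }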
-
- 16 Nov, 2009 30 commits
-
-
Eric Dumazet authored
net: Fix the rollback test in dev_change_name() In dev_change_name() an err variable is used for storing the original call_netdevice_notifiers() errno (negative) and for testing for a rollback error later, but the test for non-zero is wrong, because err might hold a positive value as well - from dev_alloc_name(). As a result, the rollback for a netdevice with a unit number > 0 would never happen. (The err test is also reordered to make it more readable.) Signed-off-by: Jarek Poplawski <jarkao2@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
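A paraphrased, trimmed-down fragment of what the corrected test looks like; variable names follow dev_change_name(), but the snippet is a sketch rather than the exact diff:

    /* err may legitimately hold a positive unit number returned by
     * dev_alloc_name(), so testing err != 0 would wrongly treat that as an
     * earlier failure; err >= 0 means this is still the first pass. */
    ret = call_netdevice_notifiers(NETDEV_CHANGENAME, dev);
    ret = notifier_to_errno(ret);
    if (ret) {
            if (err >= 0) {         /* first failure: remember it and roll back */
                    err = ret;
                    goto rollback;
            }
            /* the rollback notification itself failed: warn and give up */
            printk(KERN_ERR "%s: name change rollback failed: %d.\n",
                   dev->name, ret);
    }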
-
Frank Blaschka authored
Technically there is no need to set the card offline to change RX checksumming. Get rid of this stupid limitation. Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Frank Blaschka authored
The maximum TSO size OSA can handle is 15 * PAGE_SIZE. This patch reduces gso_max_size to this value and adds some sanity checks and statistics to the TSO implementation. Since only layer 3 is able to do TSO, all TSO-related functions are moved to the qeth_l3 module. Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
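A one-line sketch of the cap being described; netif_set_gso_max_size() is the standard helper, while card->dev and the literal 15 * PAGE_SIZE limit are taken from the text above rather than the qeth source:

    /* Tell the stack not to build GSO packets larger than OSA can take. */
    netif_set_gso_max_size(card->dev, 15 * PAGE_SIZE);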
-
Ursula Braun authored
Setting a qeth device online requires calling ccw_device_set_online() for the read, write, and data subchannels. Failures should be detected immediately, without attempting the follow-on activity qeth_qdio_clear_card(). In addition, the ccw_device_set_online() calls are consolidated in qeth_core_main.c only. Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
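A hedged sketch of the consolidated error handling; ccw_device_set_online() is the real s390 CCW interface, while the CARD_RDEV/WDEV/DDEV accessors and the error label are assumed qeth-style names:

    /* Bring the read, write and data subchannels online and stop at the
     * first failure instead of pressing on into qeth_qdio_clear_card(). */
    rc = ccw_device_set_online(CARD_RDEV(card));
    if (rc)
            goto err_out;
    rc = ccw_device_set_online(CARD_WDEV(card));
    if (rc)
            goto err_out;
    rc = ccw_device_set_online(CARD_DDEV(card));
    if (rc)
            goto err_out;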
-
Ursula Braun authored
The EDDP code was removed from qeth in 2009. This patch removes two useless remaining EDDP references. Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Einar Lueck authored
Isolate data connection to a shared OSA card against other data connections to the same OSA card. Connectivity between isolated data connections sharing the same OSA card is therefore possible only through external network gear (e.g. a router). Signed-off-by: Einar Lueck <elelueck@de.ibm.com> Signed-off-by: Frank Blaschka <frank.blaschka@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This reverts commit 38783e67. It causes kernel bugzilla #14594 Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marin Mitov authored
The function print_mac in net/ethernet/eth.c is marked __deprecated and not used. Remove it. Signed-off-by: Marin Mitov <mitov@issp.bas.bg> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jay Vosburgh authored
The language of 802.3ad 43.4.9 requires the "recordPDU" function to, in part, compare the Partner parameter values in a received LACPDU to the stored Actor values. If those match, then the Partner's synchronization state is set to true. The current 802.3ad implementation performs these steps out of order: first the synchronization check is done, and only then are the parameters checked to see if they match (so the sync check is done against a match check of a prior LACPDU). This causes delays in establishing aggregators in some circumstances. This patch modifies the 802.3ad code to call __choose_matched, the function that does the "match" comparisons, as the first step of __record_pdu, instead of immediately afterwards. This new behavior is in compliance with the language of the standard. Some additional commentary relating the code to the standard is also added. Reported by Martin Patterson <martin@gear6.com>, who also supplied the logic of the fix and verified the patch. Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
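A sketch of the reordering; __record_pdu() and __choose_matched() are the bonding 802.3ad functions named above, with their bodies reduced to comments:

    /* The "matched" comparison now runs first, so the synchronization state
     * recorded below is based on this LACPDU, not on the previous one. */
    static void __record_pdu(struct lacpdu *lacpdu, struct port *port)
    {
            if (lacpdu && port) {
                    __choose_matched(lacpdu, port); /* was previously called afterwards */
                    /* ... copy the partner parameters out of the LACPDU and set
                     *     the partner sync bit from the fresh matched result ... */
            }
    }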
-
Eric Dumazet authored
Commit 05423b24 (vlan: allow null VLAN ID to be used) forgot to update __vlan_hwaccel_rx() and vlan_gro_common(). We need to set the VLAN_TAG_PRESENT flag in skb->vlan_tci. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
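A minimal sketch of the missing piece, with the surrounding code of __vlan_hwaccel_rx()/vlan_gro_common() elided:

    /* Record the tag with the "present" bit set, so a VLAN ID of 0 is
     * still treated as a tagged frame rather than as "no tag". */
    skb->vlan_tci = vlan_tci | VLAN_TAG_PRESENT;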
-
Sriram authored
In the NAPI poll function (emac_poll), the check for netif_running() is unnecessary. In addition to the associated runtime overhead, it also results in a continuous softirq loop when the interface is brought down under heavy traffic (tested with a traffic generator). Once the interface is disabled, the poll function always returns zero (because of the netif_running check) and napi_complete() never gets called, resulting in a softirq loop. Signed-off-by: Sriramakrishnan <srk@ti.com> Acked-by: Chaithrika U S <chaithrika@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
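A generic sketch of the pattern this and the following poll fix describe; the handler and helper names are illustrative, not the actual driver code:

    /* With the netif_running() test removed, a poll that runs after the
     * interface is taken down simply finds no work, stays under budget and
     * reaches napi_complete(), instead of returning 0 while still scheduled
     * and being re-run in an endless softirq loop. */
    static int example_poll(struct napi_struct *napi, int budget)
    {
            int work_done;

            /* before: if (!netif_running(dev)) return 0;  -- never completed */
            work_done = process_rx(napi, budget);   /* hypothetical RX handler */

            if (work_done < budget)
                    napi_complete(napi);            /* done; re-enable interrupts */
            return work_done;
    }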
-
Sriram authored
In the NAPI poll function, the check for netif_running() is unnecessary. In addition to the associated runtime overhead, it also results in a continuous softirq loop when the interface is brought down under heavy traffic (tested with a traffic generator). Once the interface is disabled, the poll function always returns zero (because of the netif_running check) and napi_complete() would never get called, resulting in a softirq loop. Signed-off-by: Sriramakrishnan <srk@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch updates the tg3 version to 3.104. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch fixes the 5717 variant device ID enumerations and adds those DIDs to the PCI ID table. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch adds code to funnel each MSI-X vector's rx packet buffers into a single set of producer rings which will then be submitted to the hardware. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
The rx producer mailbox registers are used in several spots in the code. The addition of TG3_64BIT_REG_LOW makes register references uncomfortably long. This patch creates an alias for the standard and jumbo ring producer index registers to make the code cleaner. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
The patch increases the number of producer rings available and implements the constructor and destructor code that deals with them. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch changes how the code uses the rx_std_prod_idx member. In the following patch, the code will be changed so that it will act just like a hardware mailbox. This patch prepares the code so that memory barriers can be more easily inserted. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
A later patch is going to add consumer indices for the producer rings. To keep things readable, this patch renames rx_[std|jmb]_ptr to rx_[std|jmb]_prod_idx. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch converts the tnapi argument of tg3_alloc_rx_skb() to tp. The level of indirection is unnecessary. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch changes the tg3_alloc_rx_skb() implementation to accept the destination producer ring set pointer as a parameter rather than assuming the source and destination producer rings are the same. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch removes the source index parameter of tg3_alloc_rx_skb(). A later patch will make it possible for the source and destination producer rings to be different. This patch opts to make tg3_alloc_rx_skb() a destination-only implementation and move the code sensitive to the difference elsewhere. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
tg3_get_invariants(), among other things, discovers whether or not the device is MSI-X capable and how many interrupts it supports. This discovery needs to happen before registering NAPI instances with netdev. This patch moves the code block that calls napi_add later in tg3_init_one() so that tg3_get_invariants() has a chance to run first. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
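A reduced sketch of the new ordering in tg3_init_one(); netif_napi_add() and tg3_get_invariants() are real interfaces, while the loop bound and the per-vector struct layout are simplified assumptions:

    /* Discover MSI-X capability and the supported vector count first ... */
    err = tg3_get_invariants(tp);
    if (err)
            goto err_out;

    /* ... then register one NAPI context per usable interrupt vector. */
    for (i = 0; i < tp->irq_cnt; i++)       /* field name assumed */
            netif_napi_add(dev, &tp->napi[i].napi, tg3_poll, 64);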
-
Matt Carlson authored
This patch gives all non-zero MSIX vectors their own NAPI handler. This will make NAPI handling for those vectors slightly more efficient. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
By default, the 5717 (and future chips) break up PCIe DMA packets across cacheline boundaries. This isn't necessary on x86. This patch selectively loosens the restriction. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
The A0 revision of the 5717 has problems with short packet fragments. It needs to use the tg3_start_xmit_dma_bug() routine. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
The 5717 sets up TSO slightly differently in the transmit path. It looks like this method will be the new way of doing things. This patch defines a flag to indicate this. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch consolidates the TSO capability discovery code into its own code block. The code that decides whether or not to allow TSO is then cleaned up. Finally, the patch consolidates all MSI and MSIX capability code into a single code block. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
We need room for another TSO flag and it would be most efficient if it resided in tg3_flags2. This patch moves the TG3_FLG2_PROTECTED_NVRAM to tg3_flags3 to make room. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
This patch converts tg3_start_xmit_dma_bug() to accommodate multiple NAPI instances. This is prep work for a later patch in this series. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Reviewed-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-