- 08 May, 2012 13 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
David S. Miller authored
Michael S. Tsirkin says: -------------------- There are mostly bugfixes here. I hope to merge some more patches by 3.5; in particular, vlan support fixes are waiting for Eric's ack, and a version of the tracepoint patch might be ready in time, but let's merge what's ready so it's testable. This includes a ton of zerocopy fixes by Jason - good stuff but too intrusive for 3.4, and zerocopy is experimental anyway. virtio has supported delayed interrupts for a while now, so adding support to the virtio tool made sense. -------------------- Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Until now, struct mreq was not recognized and was treated as if it were struct in_addr, which meant imr_multiaddr was copied to imr_address. Recognize struct mreq here and copy the correct field. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
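The distinction matters because the interface address sits in a different field in each of the possible argument layouts. A hedged sketch of the idea, assuming an IP_MULTICAST_IF-style setsockopt path; get_mcast_if() is an illustrative helper name, not the patch's code:

    /* Pick the interface address by option length instead of blindly
     * copying the first four bytes of the user buffer. */
    static int get_mcast_if(struct ip_mreqn *mreq,
                            char __user *optval, unsigned int optlen)
    {
            memset(mreq, 0, sizeof(*mreq));

            if (optlen >= sizeof(struct ip_mreqn)) {
                    /* full form: imr_multiaddr, imr_address, imr_ifindex */
                    if (copy_from_user(mreq, optval, sizeof(*mreq)))
                            return -EFAULT;
            } else if (optlen >= sizeof(struct ip_mreq)) {
                    /* old form: the interface address is imr_interface,
                     * the second field, not imr_multiaddr */
                    struct ip_mreq omreq;

                    if (copy_from_user(&omreq, optval, sizeof(omreq)))
                            return -EFAULT;
                    mreq->imr_address = omreq.imr_interface;
            } else if (optlen >= sizeof(struct in_addr)) {
                    /* bare interface address */
                    if (copy_from_user(&mreq->imr_address, optval,
                                       sizeof(struct in_addr)))
                            return -EFAULT;
            } else {
                    return -EINVAL;
            }
            return 0;
    }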
-
David Daney authored
The GPIO pins select which sub bus is connected to the master. Initially tested with an sn74cbtlv3253 switch device wired into the MDIO bus. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Daney authored
This patch adds a somewhat generic framework for MDIO bus multiplexers. It is modeled on the I2C multiplexer. The multiplexer is needed if there are multiple PHYs with the same address connected to the same MDIO bus adapter, or if there is insufficient electrical drive capability for all the connected PHY devices. Conceptually it could look something like this:

                       ------------------
                       | Control Signal |
                       --------+---------
                               |
     ---------------   --------+------
     | MDIO MASTER |---| Multiplexer |
     ---------------   --+-------+----
                         |       |
                         C       C
                         h       h
                         i       i
                         l       l
                         d       d
                         |       |
          ---------  A   |       |   B  ---------
          |       |      |       |      |       |
          | PHY@1 +------+       +------+ PHY@1 |
          |       |      |       |      |       |
          ---------      |       |      ---------
          ---------      |       |      ---------
          |       |      |       |      |       |
          | PHY@2 +------+       +------+ PHY@2 |
          |       |                     |       |
          ---------                     ---------

This framework configures the bus topology from device tree data. The mechanics of switching the multiplexer are left to device specific drivers. The follow-on patch contains a multiplexer driven by GPIO lines. Signed-off-by: David Daney <david.daney@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
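To make the division of labor concrete, here is a hedged sketch of what a device-specific driver plugs into such a framework: a callback that steers the master onto the desired child bus before an MDIO access is forwarded. The names my_mux and my_mux_switch are illustrative, and the callback shape is an assumption based on the description above; the GPIO driver in the follow-on patch is the real example.

    struct my_mux {
            void __iomem *ctrl_reg;     /* device-specific select register */
            int current_child;
    };

    /* Called by the mux framework before it forwards a read/write to a
     * PHY that lives on child bus "desired_child". */
    static int my_mux_switch(int current_child, int desired_child, void *data)
    {
            struct my_mux *mux = data;

            if (current_child == desired_child)
                    return 0;               /* already routed correctly */

            /* drive whatever control signal selects the child bus */
            writel(desired_child, mux->ctrl_reg);
            mux->current_child = desired_child;
            return 0;
    }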
-
David Daney authored
Add of_mdio_find_bus(), which allows an mii_bus to be located given its associated device tree node. This is needed by the follow-on patch to add a driver for MDIO bus multiplexers. The of_mdiobus_register() function is modified so that the device tree node is recorded in the mii_bus. Then we can find it again by iterating over all mdio_bus_class devices. Because the OF device tree has now become an integral part of the kernel, this can live in mdio_bus.c (which contains the needed mdio_bus_class structure) instead of of_mdio.c. Signed-off-by: David Daney <david.daney@cavium.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
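A hedged sketch of the lookup described here, assuming the mii_bus device's of_node is set at registration time; helper names and exact signatures in mdio_bus.c may differ:

    static int of_mdio_bus_match(struct device *dev, void *mdio_bus_np)
    {
            return dev->of_node == mdio_bus_np;
    }

    struct mii_bus *of_mdio_find_bus(struct device_node *mdio_bus_np)
    {
            struct device *d;

            if (!mdio_bus_np)
                    return NULL;

            /* walk every registered MDIO bus and compare its OF node */
            d = class_find_device(&mdio_bus_class, NULL, mdio_bus_np,
                                  of_mdio_bus_match);
            return d ? to_mii_bus(d) : NULL;
    }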
-
Tilman Schmidt authored
If Kernel CAPI is compiled without CONFIG_ISDN_CAPI_MIDDLEWARE, the structure retrieved via capincci_find() is never actually used, so don't compile that function in that case. Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Fix up some of the readability deterioration caused by the recent whitespace coding style cleanup. Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Various functions in the Gigaset driver were using different conventions for the meaning of their int return values. Align them with the usual negative error number convention. Inspired-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Functions clear_at_state and free_strings did the same thing; drop one of them, keeping the more descriptive name. Drop a redundant call. Rename function dealloc_at_states to dealloc_temp_at_states to clarify its purpose. Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Fix up some of the readability deterioration caused by the recent whitespace coding style cleanup. Signed-off-by: Tilman Schmidt <tilman@imap.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
An out-of-place "OK" response to the "AT+GMR" (get firmware version) command turns out to be, more often than not, a delayed response to a previous command rather than an actual error, so continue waiting for the version number in that case. Signed-off-by: Tilman Schmidt <tilman@imap.cc> CC: stable <stable@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
If DISCONNECT_B3_IND was synthesized because of a DISCONNECT_REQ with existing logical connections, the connection state wasn't updated accordingly. Also the emitted DISCONNECT_B3_IND message wasn't included in the debug log as requested. This patch fixes both of these issues. Signed-off-by: Tilman Schmidt <tilman@imap.cc> CC: stable <stable@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tilman Schmidt authored
Introduce a global ratelimit for CAPI message dumps to protect against possible log flood. Drop the ratelimit for ignored messages which is now covered by the global one. Signed-off-by: Tilman Schmidt <tilman@imap.cc> CC: stable <stable@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
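As an illustration of the approach (not the driver's code), a global ratelimit can be built from the kernel's standard helpers; the names gig_dump_ratelimit and dump_capi_msg and the interval/burst values are illustrative:

    #include <linux/ratelimit.h>

    /* allow at most 10 dumps per 5-second window, shared by all callers */
    static DEFINE_RATELIMIT_STATE(gig_dump_ratelimit, 5 * HZ, 10);

    static void dump_capi_msg(const u8 *msg, size_t len)
    {
            if (!__ratelimit(&gig_dump_ratelimit))
                    return;         /* too many dumps recently, skip this one */

            print_hex_dump_bytes("CAPI msg: ", DUMP_PREFIX_OFFSET, msg, len);
    }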
-
- 07 May, 2012 1 commit
-
- 06 May, 2012 3 commits
-
-
Alexander Duyck authored
With the recent changes for how we compute the skb truesize, it occurs to me we are probably going to have a lot of calls to skb_end_pointer(skb) - skb->head. Instead of running all over the place doing that, it would make more sense to just make it a separate inline, skb_end_offset(skb). That way we can return the correct value without gcc having to do all the optimization to cancel out skb->head - skb->head. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
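A sketch of what such an inline boils down to (with NET_SKBUFF_DATA_USES_OFFSET, skb->end is already stored as an offset, so the subtraction disappears entirely):

    #ifdef NET_SKBUFF_DATA_USES_OFFSET
    static inline unsigned int skb_end_offset(const struct sk_buff *skb)
    {
            return skb->end;                /* already an offset from head */
    }
    #else
    static inline unsigned int skb_end_offset(const struct sk_buff *skb)
    {
            return skb->end - skb->head;    /* pointers, compute the offset */
    }
    #endif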
-
Alexander Duyck authored
Since there is now only one spot that actually uses "fastpath" there isn't much point in carrying it. Instead we can just use a check for skb_cloned to verify if we can perform the fast-path free for the head or not. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
The fast-path for pskb_expand_head contains a check where the size plus the unaligned size of skb_shared_info is compared against the size of the data buffer. This code path has two issues. First is the fact that after the recent changes by Eric Dumazet to __alloc_skb and build_skb the shared info is always placed in the optimal spot for a buffer size making this check unnecessary. The second issue is the fact that the check doesn't take into account the aligned size of shared info. As a result the code burns cycles doing a memcpy with nothing actually being shifted. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 May, 2012 4 commits
-
-
John Fastabend authored
PFC stats are only tabulated when PFC is enabled. However, in IEEE mode the ieee_pfc pfc_tc bits were not checked and the calculation was aborted. This results in statistics not being reported through ethtool and possibly a false Tx hang occurring when receiving pause frames. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Richard Alpe authored
Clear the REQ and GNT bits in the eeprom control register (EECD). This is required if the eeprom is to be accessed with the EERD auto-read register. After a cold reset this doesn't matter, but if a PBIST MAC test was executed before booting, the register was left in a dirty state (the 2 bits were set), which caused the read operation to time out and return 0. Reference (page 312): http://download.intel.com/design/network/manuals/316080.pdf Reported-by: Aleksandar Igic <aleksandar.igic@dektech.com.au> Signed-off-by: Richard Alpe <richard.alpe@ericsson.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
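A hedged sketch of the workaround; clear_eeprom_arbitration() is an illustrative name, and er32/ew32 are e1000e's register accessors, which expect a local hw pointer:

    static void clear_eeprom_arbitration(struct e1000_hw *hw)
    {
            u32 eecd = er32(EECD);

            /* A PBIST MAC test run before boot can leave the EEPROM
             * request/grant bits set, making EERD auto-reads time out
             * and return 0, so clear them before the first access. */
            eecd &= ~(E1000_EECD_REQ | E1000_EECD_GNT);
            ew32(EECD, eecd);
    }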
-
Bruce Allan authored
Like other supported (igp) PHYs, the driver needs to be able to force the master/slave mode on 82577. Since the code is the same as what already exists in the code flow for igp PHYs, move it to a new function to be called for both flows. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 May, 2012 16 commits
-
-
Eric Dumazet authored
It appears some networks play bad games with the two bits reserved for ECN. This can trigger false congestion notifications and very slow transfers. Since RFC 3168 (6.1.1) forbids SYN packets to carry CT bits, we can disable TCP ECN negotiation if we happen to receive mangled CT bits in the SYN packet. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Perry Lorier <perryl@google.com> Cc: Matt Mathis <mattmathis@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Wilmer van der Gaast <wilmer@google.com> Cc: Ankur Jain <jankur@google.com> Cc: Tom Herbert <therbert@google.com> Cc: Dave Täht <dave.taht@bufferbloat.net> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
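A hedged, IPv4-only sketch of the rule being enforced (ecn_create_request() is an illustrative name, not the exact in-tree helper):

    /* Only accept the peer's ECN offer (ECE+CWR set in the SYN) if the
     * SYN itself arrived with a non-ECT codepoint, as RFC 3168 6.1.1
     * requires; a mangled codepoint means the path cannot be trusted. */
    static void ecn_create_request(struct request_sock *req,
                                   const struct sk_buff *skb)
    {
            const struct tcphdr *th = tcp_hdr(skb);

            if (sysctl_tcp_ecn && th->ece && th->cwr &&
                INET_ECN_is_not_ect(ip_hdr(skb)->tos))
                    inet_rsk(req)->ecn_ok = 1;
    }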
-
Karsten Keil authored
With multiple cards it is hard to figure out which port caused trouble in the layer2 routines (e.g. got a timeout). Now we have this information in the log output. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Keil authored
Timer3 and the activation delay timer need to be independent. If timer3 fires, we must not request power up; we have to send only INFO 0. Now layer1 passes TBR3 again. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Keil authored
For certification testing it is very useful to change the layer1 timer3 value at runtime. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Keil authored
To be fully preemption safe, we cannot handle an L2 timeout in the timer context itself; we should do all actions via the D-channel thread. Signed-off-by: Karsten Keil <kkeil@linux-pingi.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Keil authored
Under some configs it was still not possible to unload the driver, because the module use count was screwed up. Signed-off-by: Karsten Keil <keil@b1-systems.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andreas Eversberg authored
The TEI manager reports the current layer 1 state on creation. On state change it reports the new state to the socket interface. Signed-off-by: Andreas Eversberg <andreas@eversberg.eu> Signed-off-by: Karsten Keil <keil@b1-systems.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Use qdisc_drop() helper where possible. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
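For reference, the helper being substituted is essentially this (2012-era form, sketched from memory):

    static inline int qdisc_drop(struct sk_buff *skb, struct Qdisc *sch)
    {
            kfree_skb(skb);                 /* free the rejected packet */
            sch->qstats.drops++;            /* account the drop on the qdisc */

            return NET_XMIT_DROP;
    }

so open-coded enqueue failure paths that free the skb, bump qstats.drops and return NET_XMIT_DROP collapse into a single return qdisc_drop(skb, sch).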
-
Alexander Duyck authored
This change updates the link flow control configuration so that we correctly set the link flow control settings for DCB. Previously we would have to call fc_enable 8 times, once for each packet buffer. If we move that logic into the fc_enable call itself we can avoid multiple unnecessary register writes. This change also corrects an issue in which we were only shifting the water marks for 82599 parts by 6 instead of 10. This was resulting in us only using 1/16 of the packet buffer when flow control was enabled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
We can avoid many of the forward declarations found in ixgbe_common.c by just reordering things, so this patch does that to help clean up the code. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change replaces the calls to put_page with calls to __free_page. Since the FCoE code is able to access order 1 pages I thought it would be a good idea to change things over to using __free_pages since that is the preferred approach for freeing pages. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change makes it so that ixgbe_fc_autoneg is a void and always sets the current_mode. Previously if the link was down we would return an error, however there is no harm in simply treating a link down case as a case in which autoneg simply failed. This allows us to rely on the return value of the ixgbe_fc_enable call now since there should be no cases where it returns an error that would normally be ignored. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change reorders the mapping of rings to q_vectors in the case that the number of rings exceeds the number of q_vectors. Previously we would allocate the first R/N queues to the first q_vector, where R is the number of rings and N is the number of q_vectors. Instead of doing this we can do a better job of interleaving the rings to the CPUs by assigning every Nth ring to the q_vector. The tables below illustrate this change for the R = 16, N = 4 case.

                Before patch       After patch
    q_vector:   0  1  2  3         0  1  2  3
    Rings:      0  4  8 12         0  1  2  3
                1  5  9 13         4  5  6  7
                2  6 10 14         8  9 10 11
                3  7 11 15        12 13 14 15

This should improve the performance for both DCB and ATR when the number of rings exceeds the number of q_vectors allocated by the adapter. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
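A hedged sketch of the new policy (not the driver's exact code); for R = 16 and N = 4 it produces the "after patch" assignment in the table above:

    /* Ring i goes to q_vector i % N instead of q_vector i / (R / N),
     * so consecutive rings are spread across CPUs rather than packed
     * onto the same one. */
    static void map_rings_interleaved(u8 *ring_to_qv, int num_rings,
                                      int num_q_vectors)
    {
            int i;

            for (i = 0; i < num_rings; i++)
                    ring_to_qv[i] = i % num_q_vectors;  /* interleaved */
    }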
-
Alexander Duyck authored
This change makes it so that we can track instances of where a packet was dropped due to a packet being received when there are no DMA buffers available in the ring. For some reason this was only being enabled with RSC; however, it makes more sense to always have this feature on so that we can track any cases where we might drop a buffer due to an Rx ring being full. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
i217 is the next-generation LOM that will be available on systems with the Lynx Point Platform Controller Hub (PCH) chipset from Intel. This patch provides the initial support for the device. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Matthew Vick authored
Version bump to 1.11.3-k. Signed-off-by: Matthew Vick <matthew.vick@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 03 May, 2012 3 commits
-
-
Sebastian Andrzej Siewior authored
The idea here seems to be to get a 44-bit DMA mask working and, if this fails, to fall back to a 32-bit DMA mask. The dma_mask variable is assigned once to 44 bits and never updated. pci_set_dma_mask() and pci_set_consistent_dma_mask() are both implemented as functions, so there is no evil macro which might update dma_mask. Looking at the assembly, I see a call to dma_set_mask() followed by dma_supported() and then a jump past the second dma_set_mask() call. The only way to get to the second dma_set_mask() call is by an error code from the first one. So I hereby remove the check since it looks superfluous. Please ignore the patch if there is black magic involved. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
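A hedged sketch of the usual fallback pattern being discussed; since the only way to reach the 32-bit branch is failure of the 44-bit call, an intermediate "did the mask change?" check buys nothing:

    static int setup_dma_mask(struct pci_dev *pdev)
    {
            if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(44)) &&
                !pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(44)))
                    return 0;       /* 44-bit addressing works */

            /* fall back to 32-bit addressing */
            if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) ||
                pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
                    dev_err(&pdev->dev, "no usable DMA configuration\n");
                    return -EIO;
            }
            return 0;
    }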
-
Alexander Duyck authored
This patch adds support for a skb_head_is_locked helper function. It is meant to be used any time we are considering transferring the head from skb->head to a paged frag. If the head is locked it means we cannot remove the head from the skb so it must be copied or we must take the skb as a whole. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
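The helper is short enough to sketch in essence (skb->head_frag marks heads that were allocated as page fragments, e.g. via build_skb):

    /* The head cannot be handed off as a page fragment if it was not
     * allocated as one, or if a clone still references it. */
    static inline bool skb_head_is_locked(const struct sk_buff *skb)
    {
            return !skb->head_frag || skb_cloned(skb);
    }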
-