- 29 Jun, 2011 1 commit
-
-
Stephen Rothwell authored
Fixes these build errors:

    drivers/net/can/sja1000/sja1000_of_platform.c: In function 'sja1000_ofp_read_reg':
    drivers/net/can/sja1000/sja1000_of_platform.c:61:2: error: implicit declaration of function 'in_8'
    drivers/net/can/sja1000/sja1000_of_platform.c: In function 'sja1000_ofp_write_reg':
    drivers/net/can/sja1000/sja1000_of_platform.c:67:2: error: implicit declaration of function 'out_8'
    drivers/net/can/sja1000/sja1000_of_platform.c: In function 'sja1000_ofp_remove':
    drivers/net/can/sja1000/sja1000_of_platform.c:81:2: error: implicit declaration of function 'iounmap'
    drivers/net/can/sja1000/sja1000_of_platform.c: In function 'sja1000_ofp_probe':
    drivers/net/can/sja1000/sja1000_of_platform.c:113:2: error: implicit declaration of function 'ioremap_nocache'
    drivers/net/can/sja1000/sja1000_of_platform.c:113:7: warning: assignment makes pointer from integer without a cast

Caused by commit b7f080cf ("net: remove mm.h inclusion from netdevice.h").
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Jun, 2011 17 commits
-
-
Jesse Gross authored
When receiving packets from another guest on the same hypervisor, it's generally possible to receive large packets because no segmentation is necessary and these packets are handled by LRO. However, when doing routing or bridging we must disable LRO and lose this benefit. In these cases GRO can still be used and it is very effective because the packets which are segmented in the hypervisor are received very close together and can easily be merged. CC: Shreyas Bhatewara <sbhatewara@vmware.com> CC: Scott Goldman <scottjg@vmware.com> CC: VMware PV-Drivers <pv-drivers@vmware.com> Signed-off-by: Jesse Gross <jesse@nicira.com> Signed-off-by: Scott J. Goldman <scottjg@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
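A minimal sketch of the kind of receive-path change this implies, assuming a generic NAPI driver rather than the vmxnet3 code itself (the function name is a placeholder): hand completed skbs to napi_gro_receive() instead of netif_receive_skb() so GRO can merge back-to-back segments from a co-located guest.

    #include <linux/netdevice.h>
    #include <linux/etherdevice.h>
    #include <linux/skbuff.h>

    /* Illustrative only: in a NAPI poll handler, receive via GRO so the
     * stack gets a chance to coalesce consecutive segments of one flow. */
    static void example_rx_skb(struct napi_struct *napi,
                               struct net_device *netdev,
                               struct sk_buff *skb)
    {
            skb->protocol = eth_type_trans(skb, netdev);

            /* was: netif_receive_skb(skb); */
            napi_gro_receive(napi, skb);
    }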
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Also, pci_is_pcie is a better way of determining if the device is PCIE or not (as it uses the same saved PCIE capability offset). Signed-off-by: Jon Mason <jdmason@kudzu.us> Acked-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
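A hedged sketch of the pattern this series of patches applies, not any one driver's code (the function name and the link-status read are illustrative): reuse the PCIe capability offset and PCIe-ness check that the PCI core cached while walking the bus instead of searching config space again.

    #include <linux/pci.h>
    #include <linux/errno.h>

    static int example_read_link_status(struct pci_dev *pdev, u16 *lnksta)
    {
            int pos;

            /* was: pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); */
            if (!pci_is_pcie(pdev))
                    return -ENODEV;
            pos = pci_pcie_cap(pdev);  /* offset saved at enumeration time */

            return pci_read_config_word(pdev, pos + PCI_EXP_LNKSTA, lnksta);
    }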
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. Use the value from pci_dev instead of checking in the driver and saving it in the driver-specific structure. Also, it will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Also, pci_is_pcie is a better way of determining if the device is PCIE or not (as it uses the same saved PCIE capability offset). Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Also, pci_is_pcie is a better way of determining if the device is PCIE or not (as it uses the same saved PCIE capability offset). Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Mason authored
The PCIE capability offset is saved during PCI bus walking. Use the value from pci_dev instead of checking in the driver and saving it in the driver-specific structure. It will remove an unnecessary search in the PCI configuration space if this value is referenced instead of reacquiring it. v2 of the patch re-adds the PCI_EXPRESS flag and adds comments describing why it is necessary. [ pdev->pcie_cap --> pci_pcie_cap(pdev) -DaveM ] Signed-off-by: Jon Mason <jdmason@kudzu.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kuninori Morimoto authored
This patch tidies up the warning below: ${LINUX}/drivers/net/sh_eth.c:1773: warning: 'mdp' may be used uninitialized in this function Cc: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jesse Gross authored
This converts the vmxnet3 driver to use the new vlan model. In doing so it fixes missing tags in tcpdump and failure to do checksum offload when tx vlan offload is disabled. CC: Shreyas Bhatewara <sbhatewara@vmware.com> CC: VMware PV-Drivers <pv-drivers@vmware.com> Signed-off-by: Jesse Gross <jesse@nicira.com> Signed-off-by: Scott J. Goldman <scottjg@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Mike Frysinger authored
Since we have a struct that defines the sizes of the registers, we don't need to explicitly use the 16bit read/write helpers. Let the code figure out which size access to make based on the size of the C type. There should be no functional changes here. Signed-off-by: Mike Frysinger <vapier@gentoo.org> Acked-by: Wolfgang Grandegger <wg@grandegger.com> Signed-off-by: David S. Miller <davem@davemloft.net>
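A hedged, generic sketch of the idea (not the Blackfin/CAN code itself; the macro name is made up): when the register layout is described by a C struct, the access width can be derived from the member's type instead of hard-coding a 16-bit helper. sizeof() is a compile-time constant, so the untaken branch is dropped by the compiler.

    #include <linux/io.h>

    #define EXAMPLE_REG_WRITE(regp, val)                                   \
            do {                                                           \
                    if (sizeof(*(regp)) == 2)                              \
                            writew((val), (void __iomem *)(regp));         \
                    else                                                   \
                            writel((val), (void __iomem *)(regp));         \
            } while (0)

    /* usage: EXAMPLE_REG_WRITE(&priv->regs->mode, 0x1);
     * the width follows the declared type of the 'mode' member. */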
-
Mike Frysinger authored
If we look closely, the 4 writes to TRANSMIT_CHL.id1 can be collapsed down into much simpler code. So do just that. This also fixes a build failure due to the I/O macros no longer getting pulled in. Their minor (and accidental) usage here gets dropped as part of the unification. Signed-off-by: Mike Frysinger <vapier@gentoo.org> Acked-by: Wolfgang Grandegger <wg@grandegger.com> Acked-by: Kurt Van Dijck <kurt.van.dijck@eia.be> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sergei Shtylyov authored
Commit 725c8999 (mlx4_en: Reporting HW revision in ethtool -i) added code to read the revision ID from the PCI configuration register, while it's already stored by the PCI subsystem in the 'revision' field of 'struct pci_dev'. While at it, the code being changed is moved a bit in order not to break the initialization sequence. Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
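A hedged sketch of the substitution described above, not the mlx4_en code itself (the helper name is illustrative):

    #include <linux/pci.h>

    static u8 example_hw_revision(struct pci_dev *pdev)
    {
            /* was: pci_read_config_byte(pdev, PCI_REVISION_ID, &rev); */
            return pdev->revision;  /* already cached by the PCI core at probe */
    }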
-
- 27 Jun, 2011 13 commits
-
-
jamal authored
Results on the dummy device can be seen in my netconf 2011 slides. The results below are for a 10GigE IXGBE Intel NIC on another i5 machine with very similar specs to the one used in the netconf 2011 results. It turns out this is a whole lot worse than dummy, and so this patch is even more beneficial for 10G.

Test setup:
-----------
System under test sending packets out. Additional box connected directly, dropping packets. Installed prio qdisc on the eth device; the netdev default queue length of 1000 was used as is. The 3 prio bands were each set to 100 (this didn't factor into the results). 5 packet runs were made and the middle 3 picked.

Results:
--------
The "cpu" column indicates which cpu the sample was taken on. The "Pkt runX" columns carry the number of packets a cpu dequeued when forced to be in the "dequeuer" role. The "avg" for each run is the number of times each cpu should be a "dequeuer" if the system were fair.

3.0-rc4 (plain)
cpu       Pkt run1      Pkt run2      Pkt run3
================================================
cpu0      21853354      21598183      22199900
cpu1        431058        473476        393159
cpu2        481975        477529        458466
cpu3      23261406      23412299      22894315
avg       11506948      11490372      11486460

3.0-rc4 with patch and default weight 64
cpu       Pkt run1      Pkt run2      Pkt run3
================================================
cpu0      13205312      13109359      13132333
cpu1      10189914      10159127      10122270
cpu2      10213871      10124367      10168722
cpu3      13165760      13164767      13096705
avg       11693714      11639405      11630008

As you can see, the system is still not perfect but is a lot better than it was before. At the moment we use the old backlog weight, weight_p, which is 64 packets. It seems to be reasonably fine with that value. The system could be made more fair if we reduce weight_p (as per my presentation), but we would then be affecting the shared backlog weight. Unless deemed necessary, I think the default value is fine. If not, we could add yet another knob.

Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
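An outline of the kind of change this describes, in the style of net/sched/sch_generic.c and hedged rather than a verbatim copy of the patch: bound each dequeue run by a packet quota (reusing weight_p, the 64-packet softirq backlog weight) so one CPU does not stay in the "dequeuer" role while the others only enqueue.

    void __qdisc_run(struct Qdisc *q)
    {
            int quota = weight_p;

            while (qdisc_restart(q)) {
                    /* Postpone processing if we have exceeded the packet
                     * quota or another process needs the CPU. */
                    if (--quota <= 0 || need_resched()) {
                            __netif_schedule(q);
                            break;
                    }
            }

            qdisc_run_end(q);
    }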
-
David S. Miller authored
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
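A hedged, generic illustration of the logging conversion this group of patches performs, not any one driver verbatim (driver and message strings are made up): pr_fmt must be defined before the first #include so every pr_*() call in the file picks up the module-name prefix, and netdev_<level>() additionally prefixes messages with the device name.

    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    #include <linux/module.h>
    #include <linux/netdevice.h>

    static int example_open(struct net_device *dev)
    {
            /* was: printk(KERN_ERR "mydrv: %s: link is down\n", dev->name); */
            netdev_err(dev, "link is down\n");

            /* was: printk(KERN_INFO "mydrv: resetting\n"); */
            pr_info("resetting\n");

            return 0;
    }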
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Use pr_fmt, pr_<level> and netdev_<level> as appropriate. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
Initialization of this field to "all priorities" must be done before MCC queue creation. As soon as the MCC queue is created, an event modifying this value may be received. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
Problem initially reported and fixed by Eric Dumazet <eric.dumazet@gmail.com>. netdev_stats_update() resets netdev->stats and then accumulates stats from various rings. This is wrong, as stats readers can sometimes catch zero values. Use temporary variables instead for accumulating the per-ring values. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
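A hedged sketch of the fix described above (struct and function names are illustrative, not the be2net code): accumulate the per-ring counters in local variables and publish each total with a single assignment, instead of zeroing netdev->stats first and letting a concurrent reader observe the transient zeros.

    #include <linux/netdevice.h>

    struct example_ring_stats {
            unsigned long rx_packets;
            unsigned long tx_packets;
    };

    static void example_stats_update(struct net_device *netdev,
                                     const struct example_ring_stats *rings,
                                     int nrings)
    {
            unsigned long rx_packets = 0, tx_packets = 0;
            int i;

            for (i = 0; i < nrings; i++) {
                    rx_packets += rings[i].rx_packets;
                    tx_packets += rings[i].tx_packets;
            }

            /* one store per counter; no intermediate zero is ever visible */
            netdev->stats.rx_packets = rx_packets;
            netdev->stats.tx_packets = tx_packets;
    }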
-
- 25 Jun, 2011 9 commits
-
-
John Fastabend authored
Implement DCB ops dcb_ieee_del() and set FCoE to the default priority when no priority exists. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
The fcoe.tc field is no longer used so remove it. After the field is removed there is no need to keep fcoe_setapp() around so remove it as well. And finally we can get rid of some DCB #ifdef's in the fcoe code. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
John Fastabend authored
Commit c8ca76eb ("ixgbe: DCB, further cleanups to app configuration") by John Fastabend <john.r.fastabend@intel.com>, dated Sat Mar 12 03:50:53 2011 +0000, removed the getapp() routines from ixgbe because they are no longer needed. It also allowed the set-hardware routines to use both IEEE 802.1Qaz app types and CEE app types, which added code to do bit shifting in the IEEE case. This patch reverts those checks and handles the IEEE case from the setapp entry point. I prefer this because it keeps the two paths from having to be aware of the DCB mode. This resolves a bug where I missed setting the selector bit in the IEEE spec value and left it in the CEE value. Now that they are separate routines, these types of errors should not occur. Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Ross Brattain <ross.b.brattain@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Lior Levy authored
There is a need to configure MMW_SIZE in the RTTBCNRM register with the correct value (0x4 without jumbo frame support and 0x14 with jumbo frame support). For 82599 the value is 0x4 and for X540 the value is 0x14. Signed-off-by: Lior Levy <lior.levy@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This patch updates the current methods used for determining if we have enough space to transmit a given skb. The current method is quite wasteful as it has us go through and determine how each page is going to be broken up. That only needs to be done if pages are larger than our maximum data per TXD. As such I have wrapped that in a page size check. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
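A hedged sketch of the descriptor-count check described above; the limit, the use-count macro, and the function name are illustrative rather than the ixgbe ones, and the fragment field follows the 2011-era skb_frag_t layout. Only when PAGE_SIZE can exceed the per-descriptor data limit is it worth walking the fragments to see how each one splits; otherwise every fragment costs exactly one descriptor.

    #include <linux/kernel.h>
    #include <linux/skbuff.h>
    #include <linux/types.h>

    #define EXAMPLE_MAX_DATA_PER_TXD    (1 << 14)
    #define EXAMPLE_TXD_USE_COUNT(s)    DIV_ROUND_UP((s), EXAMPLE_MAX_DATA_PER_TXD)

    static u16 example_txd_count(const struct sk_buff *skb)
    {
            u16 count = EXAMPLE_TXD_USE_COUNT(skb_headlen(skb));

    #if PAGE_SIZE > EXAMPLE_MAX_DATA_PER_TXD
            {
                    unsigned int f;

                    for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
                            count += EXAMPLE_TXD_USE_COUNT(
                                            skb_shinfo(skb)->frags[f].size);
            }
    #else
            count += skb_shinfo(skb)->nr_frags;  /* one descriptor per fragment */
    #endif
            /* caller adds context descriptor and head/tail gap before queueing */
            return count;
    }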
-
Alexander Duyck authored
There is a significant amount of shared functionality between the checksum and TSO offload configuration that is shared in regards to how they setup the context descriptors. Since so much of the functionality is shared it makes sense to move the shared functionality into a single function and just call that function from the two context descriptor specific routines. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alexander Duyck authored
This change updates all values dealing with count, next_to_use, and next_to_clean so that they stay u16 values. The advantage of this is that there is no re-casting of type during the propagation through the stack. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
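A hedged sketch of why consistent u16 bookkeeping helps (the struct and helper names are illustrative, not the ixgbe structures): the values wrap naturally within the ring bound and move between helpers without being re-cast or silently widened on the way.

    #include <linux/types.h>

    struct example_tx_ring {
            u16 count;          /* number of descriptors in the ring */
            u16 next_to_use;    /* next descriptor the driver will fill */
            u16 next_to_clean;  /* next descriptor the driver will reclaim */
    };

    static inline u16 example_next_index(const struct example_tx_ring *ring, u16 i)
    {
            return (i + 1 < ring->count) ? i + 1 : 0;  /* wrap within the ring */
    }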
-
Alexander Duyck authored
This change is a minor cleanup that converts the IXGBE_DESC_UNUSED macro into a static inline function just for the case of the code being a bit cleaner. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
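A hedged before/after sketch of the macro-to-inline conversion, reusing the example_tx_ring layout sketched above; identifiers are illustrative, not the ixgbe ones. The inline form gets real type checking on its argument and reads like ordinary code at the call sites.

    /* Before (macro):
     *   #define EXAMPLE_DESC_UNUSED(R) \
     *           ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \
     *            (R)->next_to_clean - (R)->next_to_use - 1)
     *
     * After (static inline): */
    static inline u16 example_desc_unused(const struct example_tx_ring *ring)
    {
            u16 ntc = ring->next_to_clean;
            u16 ntu = ring->next_to_use;

            return ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
    }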
-
Alexander Duyck authored
This change makes it so that we pass the adapter struct instead of the netdev for most of the basic interrupts that are not associated with q_vectors. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-