- 02 Jun, 2010 30 commits
-
-
Eric Dumazet authored
Use read_pnet() and write_pnet() to reduce the number of #ifdef CONFIG_NET_NS blocks. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
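As a hedged illustration of the pattern (the container and accessor names below are invented for this sketch, not the commit's actual call sites), the helpers let a single code path work with or without CONFIG_NET_NS:

```c
#include <net/net_namespace.h>

/* Illustrative container only; the real patch touches existing networking structs. */
struct example_obj {
	struct net *net;
};

static void example_obj_assign_net(struct example_obj *obj, struct net *net)
{
	/* write_pnet() stores the pointer when CONFIG_NET_NS=y and is a no-op otherwise */
	write_pnet(&obj->net, net);
}

static struct net *example_obj_net(const struct example_obj *obj)
{
	/* read_pnet() returns the stored pointer, or &init_net without CONFIG_NET_NS */
	return read_pnet(&obj->net);
}
```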
-
stephen hemminger authored
Sparse complains about a shadowed declaration of skb, so use another name. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
Use the request_irq() error code as the return value for mac8390_open(). EAGAIN doesn't make sense for Nubus slot IRQs. Only this driver can claim this IRQ (until the NIC is removed, which means everything is powered down). Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
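A minimal sketch of the change described (generic names, not the mac8390 code itself): return whatever request_irq() reports instead of a hard-coded -EAGAIN.

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Placeholder interrupt handler for this illustration. */
static irqreturn_t example_interrupt(int irq, void *dev_id);

static int example_open(struct net_device *dev)
{
	int err;

	err = request_irq(dev->irq, example_interrupt, 0, dev->name, dev);
	if (err)
		return err;	/* propagate the real error instead of -EAGAIN */

	netif_start_queue(dev);
	return 0;
}
```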
-
Dan Carpenter authored
I added newlines after the declarations in caif_serial.c. This is normal kernel style, although I can't see anywhere it's documented. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dan Carpenter authored
We don't use the "ser" variable so I've removed it. Signed-off-by: Dan Carpenter <error27@gmail.com> Acked-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
When many cpus compete for sending frames on a given qdisc, the qdisc spinlock suffers from very high contention. The cpu owning the __QDISC_STATE_RUNNING bit has the same priority to acquire the lock and cannot dequeue packets fast enough, since it must wait for this lock for each dequeued packet. One solution to this problem is to force all cpus to spin on a second lock before trying to get the main lock, when/if they see __QDISC_STATE_RUNNING already set. The owning cpu then competes with at most one other cpu for the main lock, allowing a higher dequeue rate. Based on a previous patch from Alexander Duyck. I added the heuristic to avoid the atomic in the fast path, and put the new lock far away from the cache line used by the dequeue worker. Also try to release the busylock as late as possible. Tests with the following script gave a boost from ~50,000 pps to ~600,000 pps on a dual quad core machine (E5450 @3.00GHz), tg3 driver. (A single netperf flow can reach ~800,000 pps on this platform.) for j in `seq 0 3`; do for i in `seq 0 7`; do netperf -H 192.168.0.1 -t UDP_STREAM -l 60 -N -T $i -- -m 6 & done done Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
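A hedged sketch of the busylock idea (not the exact __dev_xmit_skb() change; the enqueue call is a placeholder): CPUs that observe the qdisc already running serialize on a secondary lock first, so the running CPU contends with at most one other CPU for the root lock.

```c
#include <net/sch_generic.h>

/* Placeholder for "enqueue skb and run the qdisc" in this illustration. */
static int example_enqueue_and_run(struct sk_buff *skb, struct Qdisc *q);

static int xmit_with_busylock(struct sk_buff *skb, struct Qdisc *q)
{
	spinlock_t *root_lock = qdisc_lock(q);
	bool contended = qdisc_is_running(q);	/* heuristic: avoid the atomic in the fast path */
	int rc;

	if (unlikely(contended))
		spin_lock(&q->busylock);	/* at most one waiter reaches root_lock */

	spin_lock(root_lock);
	rc = example_enqueue_and_run(skb, q);
	spin_unlock(root_lock);

	if (unlikely(contended))
		spin_unlock(&q->busylock);	/* released as late as possible */

	return rc;
}
```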
-
Jiri Pirko authored
In the worst case, when the first loop breaks at the end of the slave list, the slave list is iterated through twice. This patch reduces the function to a single loop and also makes it simpler. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
This value is stored but never restored, so remove it as it is useless. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Move the code that copies the slave's mac address, in case it is the first slave, into bond_enslave. The ifenslave app does this as well, but that's not a problem. This is something that should be done in bond_enslave, and it should not matter from where it is called. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
- don't free bus->irq (obsoleted by ca816d98) - don't dispose irqs (should be done in of_mdiobus_register()) - use fec-pointer consistently in transfer() - use resource_size() - cosmetic fixes Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
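One of the listed cleanups, resource_size(), in a hedged generic form (the mapping helper is illustrative, not the driver's exact code):

```c
#include <linux/io.h>
#include <linux/ioport.h>

static void __iomem *example_map_regs(struct resource *res)
{
	/* was typically open-coded as: ioremap(res->start, res->end - res->start + 1) */
	return ioremap(res->start, resource_size(res));
}
```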
-
Jiri Pirko authored
This patch makes bonding_store_slaves function nicer and easier to understand. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
(it's actually the same as v1) Remove checks that duplicate similar checks in bond_enslave. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
V1->V2: corrected res/ret use. For some reason, MTU handling (storing and restoring) is taking place in bond_sysfs. The correct place for this code is in bond_enslave and bond_release, so move it there. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
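A rough sketch of where the MTU bookkeeping lands after the move; the helper names are invented for illustration and error handling is omitted.

```c
#include <linux/netdevice.h>

/* On enslave: remember the slave's own MTU and follow the master's. */
static int example_enslave_mtu(struct net_device *bond_dev,
			       struct net_device *slave_dev, int *saved_mtu)
{
	*saved_mtu = slave_dev->mtu;
	return dev_set_mtu(slave_dev, bond_dev->mtu);
}

/* On release: restore the MTU the device had before it joined the bond. */
static void example_release_mtu(struct net_device *slave_dev, int saved_mtu)
{
	dev_set_mtu(slave_dev, saved_mtu);
}
```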
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
__QDISC_STATE_RUNNING is always changed while the qdisc lock is held. We can avoid two atomic operations in the xmit path if we move this bit into a new __state container. The location of this __state container is carefully chosen so that the fast path only dirties one qdisc cache line. The THROTTLED bit could later be moved into this __state location too, to avoid dirtying the first qdisc cache line. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
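A hedged sketch of the relocation (simplified struct, not the real struct Qdisc layout): because the bit is only modified under the qdisc lock, plain read-modify-write on a separate __state word replaces the two atomics.

```c
/* Illustrative layout; the real field placement targets the dequeue-hot cache line. */
struct qdisc_sketch {
	unsigned long	state;		/* atomic bits (SCHED, DEACTIVATED, ...) */
	unsigned int	__state;	/* bits changed only under the qdisc lock */
	/* ... remaining qdisc fields ... */
};

#define SKETCH_STATE_RUNNING	0x1U	/* illustrative flag value */

static inline bool sketch_run_begin(struct qdisc_sketch *q)
{
	if (q->__state & SKETCH_STATE_RUNNING)
		return false;
	q->__state |= SKETCH_STATE_RUNNING;	/* no atomic needed under the qdisc lock */
	return true;
}

static inline void sketch_run_end(struct qdisc_sketch *q)
{
	q->__state &= ~SKETCH_STATE_RUNNING;
}
```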
-
Eric Dumazet authored
Define three helpers to manipulate the __QDISC_STATE_RUNNING flag, which a second patch will move to another location. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
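A sketch of how the three helpers could look while the flag still lives in the atomic state bitmap (the follow-up patch above relocates it); an illustration rather than the verbatim patch.

```c
#include <net/sch_generic.h>

static inline bool example_qdisc_is_running(struct Qdisc *qdisc)
{
	return test_bit(__QDISC_STATE_RUNNING, &qdisc->state);
}

static inline bool example_qdisc_run_begin(struct Qdisc *qdisc)
{
	/* true means this CPU just became the dequeue owner */
	return !test_and_set_bit(__QDISC_STATE_RUNNING, &qdisc->state);
}

static inline void example_qdisc_run_end(struct Qdisc *qdisc)
{
	clear_bit(__QDISC_STATE_RUNNING, &qdisc->state);
}
```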
-
Anirban Chakraborty authored
Added support for NIC functions that work in non-privileged mode, where these functions are privileged to do IO only; control operations are handled via privileged functions. Bumped up the version number to 5.0.3. Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anirban Chakraborty authored
The following changes have been added to enable the adapter to work in NIC partitioning mode, where multiple PCI functions of an adapter port can be configured to work as NIC functions. The first function that is enumerated on the PCI bus assumes the role of management function which, besides being able to do all the NIC functionality, can configure other NIC partitions. Other NIC functions can be configured as privileged or non-privileged functions. A privileged function cannot configure other NIC functions but can do all the NIC functionality, including any firmware initialization, chip reset, etc. Non-privileged functions can do only basic IO; for chip reset and similar operations they depend on the privileged or management function. 1. Added code to determine the PCI function number independent of kernel API. 2. Added driver-firmware version 2.0 support. 3. Changed producer and consumer register offset calculation. 4. Added management and privileged operation modes for npar functions. A module parameter has been added to control it. 5. Added support for configuring the eswitch in the adapter. Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
A single shared memory region used to communicate with firmware is mapped into both PCI PFs of the SFC9020 and SFL9021. Drivers must be able to identify which port they are addressing in order to use the correct sub-region. Currently we use the PCI function number, but the PCI address may be virtualised. Use the CS_PORT_NUM register field defined for just this purpose. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
rx_errors is defined as 'bad packets received', but we are currently including various overflow errors as well. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Insert a structure at the start of the shared page that tracks the dma mapping refcnt. DMA into the next cache line of the (shared) page (plus EFX_PAGE_IP_ALIGN). When recycling a page, check the page refcnt. If the page is otherwise unused, then resurrect the other receive buffer that previously referenced the page. Be careful not to overflow the receive ring, since we can now resurrect n receive buffers in a row. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
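A hedged sketch of the layout described (names and the alignment offset are illustrative, not the driver's exact definitions): the state header occupies the start of the shared page, and receive data is DMA'd starting at the next cache line plus the IP alignment offset.

```c
#include <linux/cache.h>
#include <linux/types.h>

struct example_rx_page_state {
	unsigned int	refcnt;		/* rx buffers still carved out of this page */
	dma_addr_t	dma_addr;	/* mapping to release once refcnt drops to zero */
};

/* Offset of the first receive buffer within the shared page; the 2-byte IP
 * alignment is assumed here purely for illustration. */
#define EXAMPLE_RX_DATA_OFFSET \
	(L1_CACHE_ALIGN(sizeof(struct example_rx_page_state)) + 2)
```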
-
Steve Hodgson authored
The cut-through design of the receive path means that packets that fail to match the appropriate MAC filter are not discarded at the MAC but are flagged in the completion event as 'to be discarded'. On networks with heavy multicast traffic, this can account for a significant proportion of received packets, so it is worthwhile to recycle the buffer immediately in this case rather than freeing it and then reallocating it shortly after. The only complication here is dealing with a page shared between two receive buffers. In that case, we need to be careful to free the DMA mapping only when both buffers have been freed by the kernel. This means that we can only recycle such a page if both receive buffers are discarded. Unfortunately, in an environment with a 1500-byte MTU, rx_alloc_method=PAGE, and a mixture of discarded and not-discarded frames hitting the same receive queue, buffer recycling won't always be possible. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
- Pull the loop handling into efx_init_rx_buffers_(skb|page) - Remove rx_queue->buf_page and associated cleanup code - Remove unmap_addr, since unmap_addr is trivially calculable This will allow us to recycle discarded buffers directly from efx_rx_packet(), since we will never be in the middle of splitting a page. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Ensure that efx_fast_push_rx_descriptors() only runs from efx_process_channel() [NAPI], or when napi_disable() has been executed. Reimplement the slow fill by sending an event to the channel, so that NAPI runs, and hanging the subsequent fast fill off the event handler. Replace the sfc_refill workqueue and delayed work items with a timer. We do not need to stop this timer in efx_flush_all() because it's safe to send the event always; receiving it will be delayed until NAPI is restarted. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Formerly, efx_test_eventq_irq() assumed it was the only user of driver-generated events. Allow it to interoperate with other users. We can create more than 16 channels, so align event codes to a multiple of 256, not 16. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
It's been observed that some PHYs (such as the qt2025c) can do down-up-down-up transitions, presumably as PCS block lock settles down. The loopback selftest will start sending data immediately after the link comes up. Work around this by waiting for the link state to stay up for two consecutive polls, rather than one. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
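A sketch of the debounce described above (helper name and poll interval are illustrative): only report the link as settled once two consecutive polls see it up.

```c
#include <linux/delay.h>
#include <linux/netdevice.h>

static bool example_wait_for_stable_link(struct net_device *dev, int max_polls)
{
	int consecutive_up = 0;
	int i;

	for (i = 0; i < max_polls; i++) {
		if (netif_carrier_ok(dev))
			consecutive_up++;
		else
			consecutive_up = 0;	/* a down-up-down glitch resets the count */

		if (consecutive_up >= 2)
			return true;		/* link has stayed up across two polls */
		msleep(100);			/* illustrative poll interval */
	}
	return false;
}
```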
-
Steve Hodgson authored
All of the ethtool code paths keep them in sync, but we need to ensure they are sync'd at start of day. Matches the sft9001 driver. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Under certain conditions a PHY may backpressure Falcon B0 in such a way that flushes timeout. In normal circumstances the phy poller would fix the PHY, and the flush could complete. But efx_nic_flush_queues() is always called after efx_stop_all(), so the poller has been stopped. Even if this weren't the case, how long would we have to wait for the poller to fix this? And several callers of efx_nic_flush_queues() are about to reset the device anyway - so we don't need to do anything. Work around this bug by scheduling a reset. Ensure that the MAC is never rewired back into the datapath before the reset runs (we already ignore all rx events anyway). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
efx_pm_freeze() sets efx->state = STATE_FINI, which means efx_reset_work() will abort any scheduled resets. efx_pm_thaw() should reschedule efx_reset_work() again, since a freeze/thaw will not have reset the hardware. This bug was spotted by inspection - there is no real world example of this happening. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
Most of its members are constant capabilities, not configuration. The new name is also consistent with the name of the pointer to it in struct efx_nic and the names of structures used by other PHY drivers. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Jun, 2010 10 commits
-
-
Jie Yang authored
Add AR8151 v2.0 Gigabit 1000 support. Change the jumbo frame size to 6K. Update the L0s/L1 routine: when the link speed is 100M or 1G, set the L1 link timer to 4 for l1d_2 and l2c_b2, to 7 for l2c_b, and to 0xF for the others. Update the atl1c_suspend routine (just refactoring the function) and add an atl1c_phy_power_saving routine; when Wake-on-LAN is enabled, this function is called to save power by re-autonegotiating the PHY to 10/100M speed, depending on the link partner's capability. Update atl1c_configure_des_ring to not use the l2c_b default SRAM configuration. Signed-off-by: Jie Yang <Jie.Yang@atheros.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
As noticed by Julia Lawall, ipip6_tunnel_add_prl() incorrectly calls kzalloc(..., GFP_KERNEL) while a spinlock is held. She provided a patch to use GFP_ATOMIC instead. One possibility would be to convert this spinlock to a mutex, or to preallocate the entry before taking the lock. After the RCU conversion, it appears we don't need this lock at all, since the caller already holds the RTNL. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
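As a generic reminder of the constraint being fixed (this is not the sit.c code itself): GFP_KERNEL can sleep and therefore must not be used under a spinlock; either allocate before taking the lock, as sketched below, or fall back to GFP_ATOMIC.

```c
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative entry type for this sketch. */
struct example_entry {
	struct list_head node;
	int value;
};

static int example_add_entry(struct list_head *list, spinlock_t *lock, int value)
{
	struct example_entry *e;

	e = kzalloc(sizeof(*e), GFP_KERNEL);	/* sleeping allocation, no lock held yet */
	if (!e)
		return -ENOMEM;
	e->value = value;

	spin_lock(lock);
	list_add(&e->node, list);		/* only non-sleeping work under the lock */
	spin_unlock(lock);
	return 0;
}
```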
-
Richard Cochran authored
This patch adds support for the IFF_ALLMULTI flag. Previously only the IFF_PROMISC flag was supported. Signed-off-by: Richard Cochran <richard.cochran@omicron.at> Acked-By: Krzysztof Halasa <khc@pm.waw.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
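A generic sketch of the distinction (the hardware hooks are hypothetical, not this driver's code): IFF_PROMISC accepts every frame, while IFF_ALLMULTI only widens reception to all multicast frames.

```c
#include <linux/netdevice.h>

/* Hypothetical hardware hooks for this illustration. */
static void example_hw_accept_all(struct net_device *dev);
static void example_hw_accept_all_multicast(struct net_device *dev);
static void example_hw_set_multicast_filter(struct net_device *dev);

static void example_set_rx_mode(struct net_device *dev)
{
	if (dev->flags & IFF_PROMISC)
		example_hw_accept_all(dev);
	else if (dev->flags & IFF_ALLMULTI)
		example_hw_accept_all_multicast(dev);
	else
		example_hw_set_multicast_filter(dev);
}
```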
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
As mbox polling is done only in process context, it is better to use schedule_timeout() instead of udelay(). Signed-off-by: Sathya Perla <sathyap@serverengines.com> Signed-off-by: David S. Miller <davem@davemloft.net>
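A hedged sketch of the polling pattern (register layout, completion bit, and timing are invented for illustration): sleep between polls in process context rather than busy-waiting with udelay().

```c
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/sched.h>

static int example_poll_mbox_done(void __iomem *mbox_reg, int timeout_ms)
{
	int waited = 0;

	while (!(readl(mbox_reg) & 0x1)) {	/* hypothetical "command done" bit */
		if (waited++ >= timeout_ms)
			return -ETIMEDOUT;
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(msecs_to_jiffies(1));	/* was: udelay() busy-wait */
	}
	return 0;
}
```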
-
Sathya Perla authored
This patch adds cleanup code (things like unregistering the irq, disabling napi, etc.) to be_open() when an error occurs inside the routine. Signed-off-by: Sathya Perla <sathyap@serverengines.com> Signed-off-by: David S. Miller <davem@davemloft.net>
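A generic sketch of the unwind pattern being added (helpers and labels are placeholders, not be_open()'s actual steps): each failure after a successful step undoes everything done so far, in reverse order.

```c
#include <linux/netdevice.h>

/* Placeholder helpers for this illustration. */
static int example_register_irqs(struct net_device *dev);
static void example_unregister_irqs(struct net_device *dev);
static void example_napi_enable(struct net_device *dev);
static void example_napi_disable(struct net_device *dev);
static int example_start_queues(struct net_device *dev);

static int example_open(struct net_device *dev)
{
	int err;

	err = example_register_irqs(dev);
	if (err)
		return err;

	example_napi_enable(dev);

	err = example_start_queues(dev);
	if (err)
		goto err_napi;

	return 0;

err_napi:
	example_napi_disable(dev);
	example_unregister_irqs(dev);
	return err;
}
```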
-
Steven Walter authored
Based on a patch from http://simon.baatz.info/wol-support-for-an983b/ Tested to resume from suspend by magic packet. Signed-off-by: Steven Walter <stevenrwalter@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-