- 03 Jun, 2010 9 commits
-
Eric Dumazet authored
Cleanup patch: use the new __packed annotation in net/ and include/ (except netfilter). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
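As a side note, here is a minimal sketch of what the annotation looks like in use (the struct below is made up for illustration; in the kernel, __packed expands to __attribute__((packed)) via linux/compiler.h, and the patch is essentially a mechanical substitution of the open-coded attribute for the shorter macro):

    #include <linux/compiler.h>    /* __packed */
    #include <linux/types.h>

    /* Hypothetical on-the-wire header: without __packed the compiler may
     * insert padding between fields; with it, the in-memory layout matches
     * the wire format byte for byte. */
    struct example_wire_hdr {
            u8      type;
            u8      flags;
            __be16  length;
            __be32  seq;
    } __packed;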
-
Eric Dumazet authored
Cleanup patch: use the new __packed annotation in drivers/net/. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Cleanup PHY probing: use helpers from phylib Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Denis Kirjanov authored
Convert TX hook return value to netdev_tx_t Signed-off-by: Denis Kirjanov <dkirjanov@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
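For illustration, the shape of a converted TX hook looks roughly like this (generic names, not the driver actually touched by this patch):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *dev)
    {
            bool ring_full = false;         /* stand-in for a real ring-full check */

            if (ring_full) {
                    netif_stop_queue(dev);
                    return NETDEV_TX_BUSY;  /* the stack will requeue the skb */
            }

            /* ... hand the skb to the hardware here ... */
            dev_kfree_skb_any(skb);         /* placeholder for real TX handling */
            return NETDEV_TX_OK;
    }

    /* The hook is wired up through net_device_ops: */
    static const struct net_device_ops example_netdev_ops = {
            .ndo_start_xmit = example_start_xmit,
    };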
-
Geert Uytterhoeven authored
commit 5c7fffd0 ("drivers/net/mac8390.c: Remove useless memcpy casting") removed too many casts, introducing the following warnings:
| drivers/net/mac8390.c:248: warning: passing argument 1 of '__builtin_memcpy' makes pointer from integer without a cast
| drivers/net/mac8390.c:253: warning: passing argument 1 of 'word_memcpy_tocard' makes pointer from integer without a cast
| drivers/net/mac8390.c:255: warning: passing argument 2 of 'word_memcpy_fromcard' makes pointer from integer without a cast
Instead of just re-adding the casts:
- move all casts inside word_memcpy_{to,from}card(),
- replace an incorrect memcpy() by memcpy_toio(),
- add memcmp_withio() as a wrapper around memcmp(),
- replace an incorrect memcpy_toio() by memcpy_fromio().
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Tested-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Roland Dreier authored
CONFIG_CHELSIO_T1_COUGAR cannot be set (it appears nowhere in any Kconfig files), and the code it protects could never build (cspi.h was never added to the kernel tree). Therefore it's pretty safe to remove all vestiges of this dead code. Signed-off-by: Roland Dreier <rolandd@cisco.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Avoid two atomic ops on the struct in_device refcount per incoming packet when the slow path is taken (or the route cache is disabled). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Christoph Lameter mentioned that packets could be dropped in the input path because of rp_filter settings, without any SNMP counter being incremented. System administrators can have a hard time tracking down the problem. This patch introduces a new counter, LINUX_MIB_IPRPFILTER, incremented each time we drop a packet because the Reverse Path Filter triggers (we receive an IPv4 datagram on a given interface, and find that the route to send an answer would use another interface).
netstat -s | grep IPReversePathFilter
IPReversePathFilter: 21714
Reported-by: Christoph Lameter <cl@linux-foundation.org> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wu Fengguang authored
Now it's possible to update the DNS record for $HOST_NAME with ip=::::$HOST_NAME::dhcp CC: Andi Kleen <ak@linux.intel.com> Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 Jun, 2010 31 commits
-
Jiri Pirko authored
What this patch does is remove the two receive frame hooks (for bridge and for macvlan) from __netif_receive_skb and replace them with a single hook for both. Only one hook per device is supported, because it makes no sense to do bridging and macvlan on the same device. A network driver (of a virtual netdev like macvlan or bridge) can then register an rx_handler for the net device it needs. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
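A rough sketch of how a virtual-device driver could use the new hook (the register/handler prototypes have changed across kernel versions, so the two-argument form and the skb-returning handler below are assumptions; returning the skb lets the core continue normal delivery, returning NULL means the handler consumed the frame):

    #include <linux/netdevice.h>
    #include <linux/rtnetlink.h>
    #include <linux/skbuff.h>

    static struct sk_buff *example_handle_frame(struct sk_buff *skb)
    {
            /* ... steal the frame for the upper (bridge/macvlan) device ... */
            return skb;     /* or NULL once the frame has been consumed */
    }

    static int example_attach(struct net_device *lower_dev)
    {
            int err;

            rtnl_lock();
            err = netdev_rx_handler_register(lower_dev, example_handle_frame);
            rtnl_unlock();
            return err;
    }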
-
Arnaud Ebalard authored
There are more than a dozen occurrences of the following code in the IPv6 stack:
if (opt && opt->srcrt) {
        struct rt0_hdr *rt0 = (struct rt0_hdr *) opt->srcrt;
        ipv6_addr_copy(&final, &fl.fl6_dst);
        ipv6_addr_copy(&fl.fl6_dst, rt0->addr);
        final_p = &final;
}
Replace those with a helper. Note that the helper overrides final_p in all cases. This is OK, as final_p was previously initialized to NULL when declared. Signed-off-by: Arnaud Ebalard <arno@natisbad.org> Signed-off-by: David S. Miller <davem@davemloft.net>
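A sketch of what such a consolidating helper could look like (the helper name and exact parameter types are assumptions; only the body is taken from the snippet above):

    #include <net/flow.h>
    #include <net/ipv6.h>

    /* Hypothetical helper: save the original destination in *final and point
     * the flow at the first routing-header hop; return NULL (so callers can
     * unconditionally assign final_p) when there is no srcrt option. */
    static struct in6_addr *example_srcrt_update_dst(struct flowi *fl,
                                                     const struct ipv6_txoptions *opt,
                                                     struct in6_addr *final)
    {
            struct rt0_hdr *rt0;

            if (!opt || !opt->srcrt)
                    return NULL;

            rt0 = (struct rt0_hdr *)opt->srcrt;
            ipv6_addr_copy(final, &fl->fl6_dst);
            ipv6_addr_copy(&fl->fl6_dst, rt0->addr);
            return final;
    }

Call sites then become final_p = example_srcrt_update_dst(&fl, opt, &final);, which overrides final_p in all cases, matching the note above.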
-
Finn Thain authored
Log error conditions using KERN_ERR priority. Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wu Fengguang authored
Normally dhclient can be configured to send the "host-name" option in DHCP requests, to update the client's DNS record. However, for an NFSROOT system dhclient should never be called, because it may change the IP address and therefore lose the root NFS mount. So enable updating the DNS record with the kernel parameter ip=::::$HOST_NAME::dhcp Signed-off-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Scott McMillan authored
This patch adds a setting, PACKET_TIMESTAMP, to specify the packet timestamp source that is exported to capture utilities like tcpdump by packet_mmap. PACKET_TIMESTAMP accepts the same integer bit field as SO_TIMESTAMPING. However, only the SOF_TIMESTAMPING_SYS_HARDWARE and SOF_TIMESTAMPING_RAW_HARDWARE values are currently recognized by PACKET_TIMESTAMP. SOF_TIMESTAMPING_SYS_HARDWARE takes precedence over SOF_TIMESTAMPING_RAW_HARDWARE if both bits are set. If PACKET_TIMESTAMP is not set, a software timestamp generated inside the networking stack is used (the behavior before this setting was added). Signed-off-by: Scott McMillan <scott.a.mcmillan@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
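A minimal userspace sketch of enabling the new option on a packet socket (error handling trimmed; whether hardware timestamps are actually delivered depends on the NIC and driver, and the socket needs CAP_NET_RAW):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <linux/net_tstamp.h>

    int main(void)
    {
            int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
            int req = SOF_TIMESTAMPING_SYS_HARDWARE |
                      SOF_TIMESTAMPING_RAW_HARDWARE;

            if (fd < 0) {
                    perror("socket");
                    return 1;
            }

            /* Ask packet_mmap to export a hardware timestamp source instead
             * of the default software timestamp. */
            if (setsockopt(fd, SOL_PACKET, PACKET_TIMESTAMP, &req, sizeof(req)) < 0)
                    perror("setsockopt(PACKET_TIMESTAMP)");

            /* ... set up the PACKET_RX_RING mmap and read tp_sec/tp_nsec ... */
            return 0;
    }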
-
Prarit Bhargava authored
Linux 2.6.33 reports this checkstack warning:
drivers/net/vxge/vxge-main.c: In function 'vxge_probe':
drivers/net/vxge/vxge-main.c:4409: warning: the frame size of 1028 bytes is larger than 1024 bytes
This warning does not occur in the latest linux-2.6 or linux-next; however, when I do a 'make -j32 CONFIG_FRAME_WARN=512' instead of 1024 I see:
drivers/net/vxge/vxge-main.c: In function ‘vxge_probe’:
drivers/net/vxge/vxge-main.c:4423: warning: the frame size of 1024 bytes is larger than 512 bytes
This patch moves the large vxge_config struct off the stack. Signed-off-by: Prarit Bhargava <prarit@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
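The usual pattern for this kind of fix, sketched with generic names (whether the driver ends up using kzalloc() or another allocator isn't stated in the log, so treat this purely as an illustration):

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct big_config { char blob[1024]; };     /* stand-in for vxge_config */

    static int example_probe(void)
    {
            struct big_config *cfg;
            int ret = 0;

            /* Allocate the ~1 KB configuration from the heap instead of
             * letting it blow up the probe function's stack frame. */
            cfg = kzalloc(sizeof(*cfg), GFP_KERNEL);
            if (!cfg)
                    return -ENOMEM;

            /* ... fill in *cfg and hand it to device initialization ... */

            kfree(cfg);
            return ret;
    }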
-
Eric Dumazet authored
Use read_pnet() and write_pnet() to reduce the number of ifdef CONFIG_NET_NS blocks. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
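For context, these helpers are roughly of the following shape (a sketch based on the description; the exact definitions live in include/net/net_namespace.h and may differ in detail):

    #include <net/net_namespace.h>

    /* The helpers hide the CONFIG_NET_NS conditional so callers can store
     * and load a netns pointer without open-coded #ifdefs. */
    static inline void example_write_pnet(struct net **pnet, struct net *net)
    {
    #ifdef CONFIG_NET_NS
            *pnet = net;
    #endif
    }

    static inline struct net *example_read_pnet(struct net * const *pnet)
    {
    #ifdef CONFIG_NET_NS
            return *pnet;
    #else
            return &init_net;
    #endif
    }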
-
stephen hemminger authored
Sparse complains about a shadowed declaration of skb, so use another name. Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Finn Thain authored
Use the request_irq() error code as the return value for mac8390_open(). EAGAIN doesn't make sense for Nubus slot IRQs. Only this driver can claim this IRQ (until the NIC is removed, which means everything is powered down). Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: David S. Miller <davem@davemloft.net>
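A sketch of the pattern being described (the handler name and flags are placeholders, not the actual mac8390 code):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    static irqreturn_t example_interrupt(int irq, void *dev_id)
    {
            /* ... service the 8390 ... */
            return IRQ_HANDLED;
    }

    static int example_open(struct net_device *dev)
    {
            int err;

            /* Propagate request_irq()'s own error code rather than
             * returning a fixed -EAGAIN. */
            err = request_irq(dev->irq, example_interrupt, 0, dev->name, dev);
            if (err)
                    return err;

            /* ... bring the NIC up ... */
            return 0;
    }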
-
Dan Carpenter authored
I added newlines after the declarations in caif_serial.c. This is normal kernel style, although I can't see it documented anywhere. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
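For the avoidance of doubt, the style in question is simply a blank line between the local variable declarations and the first statement, e.g. (illustrative only, not the actual caif_serial.c code):

    static int example(int a, int b)
    {
            int sum = a + b;
            int ret;

            /* blank line above separates declarations from statements */
            ret = sum * 2;
            return ret;
    }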
-
Dan Carpenter authored
We don't use the "ser" variable so I've removed it. Signed-off-by: Dan Carpenter <error27@gmail.com> Acked-by: Sjur Braendeland <sjur.brandeland@stericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
When many cpus compete for sending frames on a given qdisc, the qdisc spinlock suffers from very high contention. The cpu owning the __QDISC_STATE_RUNNING bit has the same priority as the others when acquiring the lock, and cannot dequeue packets fast enough, since it must wait for this lock for each dequeued packet. One solution to this problem is to force all cpus to spin on a second lock before trying to take the main lock, when/if they see __QDISC_STATE_RUNNING already set. The owning cpu then competes with at most one other cpu for the main lock, allowing a higher dequeueing rate. Based on a previous patch from Alexander Duyck. I added the heuristic to avoid the atomic in the fast path, and put the new lock far away from the cache line used by the dequeue worker. Also try to release the busylock as late as possible. Tests with the following script gave a boost from ~50.000 pps to ~600.000 pps on a dual quad core machine (E5450 @3.00GHz), tg3 driver. (A single netperf flow can reach ~800.000 pps on this platform.)
for j in `seq 0 3`; do
  for i in `seq 0 7`; do
    netperf -H 192.168.0.1 -t UDP_STREAM -l 60 -N -T $i -- -m 6 &
  done
done
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
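A simplified sketch of the idea, following the description above (the surrounding enqueue/dequeue logic is elided, and the exact placement of the release points differs from the real transmit path):

    #include <linux/netdevice.h>
    #include <net/sch_generic.h>

    /* Spin on q->busylock first if another cpu already owns
     * __QDISC_STATE_RUNNING, so that at most two cpus ever fight over the
     * qdisc's main lock. */
    static int example_xmit_skb(struct sk_buff *skb, struct Qdisc *q)
    {
            spinlock_t *root_lock = qdisc_lock(q);
            bool contended = qdisc_is_running(q);  /* heuristic, no atomic op */
            int rc = NET_XMIT_SUCCESS;

            if (unlikely(contended))
                    spin_lock(&q->busylock);

            spin_lock(root_lock);
            /* ... enqueue skb and/or run sch_direct_xmit() as before ... */
            spin_unlock(root_lock);

            /* Release the busylock as late as possible. */
            if (unlikely(contended))
                    spin_unlock(&q->busylock);

            return rc;
    }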
-
Jiri Pirko authored
In the worst case, when the first loop breaks at the end of the slave list, the slave list is iterated through twice. This patch reduces the function to a single loop, and also makes it simpler. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
This is stored but never restored, so remove it as it is useless. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Move the code that copies the slave's MAC address (in case it is the first slave) into bond_enslave. The ifenslave app does this as well, but that's not a problem. This is something that should be done in bond_enslave, and it should not matter from where it is called. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wolfram Sang authored
- don't free bus->irq (obsoleted by ca816d98)
- don't dispose irqs (should be done in of_mdiobus_register())
- use fec-pointer consistently in transfer()
- use resource_size()
- cosmetic fixes
Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
This patch makes bonding_store_slaves function nicer and easier to understand. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
(It's actually the same as v1.) Remove checks that duplicate similar checks in bond_enslave. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
V1->V2: corrected res/ret use. For some reason, MTU handling (storing and restoring) is taking place in bond_sysfs. The correct place for this code is in bond_enslave and bond_release, so move it there. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
__QDISC_STATE_RUNNING is always changed while the qdisc lock is held. We can avoid two atomic operations in the xmit path if we move this bit into a new __state container. The location of this __state container is carefully chosen so that the fast path only dirties one qdisc cache line. The THROTTLED bit could later be moved into this __state location too, to avoid dirtying the first qdisc cache line. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Define three helpers to manipulate the QDISC_STATE_RUNNING flag, which a second patch will move to another location. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
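Together with the previous entry, the end result is roughly the following shape (a self-contained sketch, not the in-tree code; the real struct, field, and flag names differ):

    /* Sketch: once the RUNNING flag lives in a plain integer that is only
     * modified with the qdisc lock held, no atomic bitops are needed. */
    struct example_qdisc {
            unsigned int __state;
    #define EXAMPLE_STATE_RUNNING  1
    };

    static inline bool example_qdisc_is_running(const struct example_qdisc *q)
    {
            return q->__state & EXAMPLE_STATE_RUNNING;
    }

    static inline bool example_qdisc_run_begin(struct example_qdisc *q)
    {
            /* must be called with the qdisc lock held */
            if (example_qdisc_is_running(q))
                    return false;
            q->__state |= EXAMPLE_STATE_RUNNING;
            return true;
    }

    static inline void example_qdisc_run_end(struct example_qdisc *q)
    {
            q->__state &= ~EXAMPLE_STATE_RUNNING;
    }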
-
Anirban Chakraborty authored
Added support for NIC functions that work in non-privileged mode, where such functions may only do IO; control operations are handled via privileged functions. Bumped up the version number to 5.0.3. Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Anirban Chakraborty authored
The following changes have been added to enable the adapter to work in NIC partitioning mode, where multiple PCI functions of an adapter port can be configured to work as NIC functions. The first function that is enumerated on the PCI bus assumes the role of the management function, which, besides being able to do all the NIC functionality, can configure other NIC partitions. Other NIC functions can be configured as privileged or non-privileged functions. A privileged function cannot configure other NIC functions, but can do all the NIC functionality, including any firmware initialization, chip reset, etc. Non-privileged functions can do only basic IO; for chip reset etc. they depend on the privileged or management function.
1. Added code to determine the PCI function number independent of the kernel API.
2. Added Driver - FW version 2.0 support.
3. Changed producer and consumer register offset calculation.
4. Added management and privileged operation modes for npar functions. A module parameter has been added to control it.
5. Added support for configuring the eswitch in the adapter.
Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
A single shared memory region used to communicate with firmware is mapped into both PCI PFs of the SFC9020 and SFL9021. Drivers must be able to identify which port they are addressing in order to use the correct sub-region. Currently we use the PCI function number, but the PCI address may be virtualised. Use the CS_PORT_NUM register field defined for just this purpose. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ben Hutchings authored
rx_errors is defined as 'bad packets received', but we are currently including various overflow errors as well. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Insert a structure at the start of the shared page that tracks the dma mapping refcnt. DMA into the next cache line of the (shared) page (plus EFX_PAGE_IP_ALIGN). When recycling a page, check the page refcnt. If the page is otherwise unused, then resurrect the other receive buffer that previously referenced the page. Be careful not to overflow the receive ring, since we can now resurrect n receive buffers in a row. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
The cut-through design of the receive path means that packets that fail to match the appropriate MAC filter are not discarded at the MAC but are flagged in the completion event as 'to be discarded'. On networks with heavy multicast traffic, this can account for a significant proportion of received packets, so it is worthwhile to recycle the buffer immediately in this case rather than freeing it and then reallocating it shortly after. The only complication here is dealing with a page shared between two receive buffers. In that case, we need to be careful to free the DMA mapping only when both buffers have been freed by the kernel. This means that we can only recycle such a page if both receive buffers are discarded. Unfortunately, in an environment with a 1500 MTU, rx_alloc_method=PAGE, and a mixture of discarded and not-discarded frames hitting the same receive queue, buffer recycling won't always be possible. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
- Pull the loop handling into efx_init_rx_buffers_(skb|page)
- Remove rx_queue->buf_page and the associated clean-up code
- Remove unmap_addr, since unmap_addr is trivially calculable
This will allow us to recycle discarded buffers directly from efx_rx_packet(), since we will never be in the middle of splitting a page. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Steve Hodgson authored
Ensure that efx_fast_push_rx_descriptors() only runs from efx_process_channel() [NAPI], or when napi_disable() has been executed. Reimplement the slow fill by sending an event to the channel, so that NAPI runs, and hang the subsequent fast fill off the event handler. Replace the sfc_refill workqueue and delayed work items with a timer. We do not need to stop this timer in efx_flush_all() because it is always safe to send the event; receiving it will be delayed until NAPI is restarted. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-