- 13 Oct, 2010 3 commits
-
-
Kumar Sanghvi authored
Based on a suggestion by Rémi Denis-Courmont, this patch implements the 'connect' socket call for the Pipe controller logic. The patch does the following:
- Removes the setsockopts for PNPIPE_CREATE and PNPIPE_DESTROY
- Adds a setsockopt for setting the Pipe handle value
- Implements the connect socket call
- Updates the Pipe controller logic

User-space should now follow this sequence with the Pipe controller:
- socket
- bind
- setsockopt for PNPIPE_PIPE_HANDLE
- connect
- setsockopt for PNPIPE_ENCAP_IP
- setsockopt for PNPIPE_ENABLE

GPRS/3G data has been tested and works fine with this.

Signed-off-by: Kumar Sanghvi <kumar.sanghvi@stericsson.com>
Acked-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
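A rough user-space sketch of that sequence follows, purely for illustration: the SOL_PNPIPE option names are taken from the commit description of this experimental interface, while the socket constants, addresses and handle value are assumptions supplied by the caller, not part of the patch.

    #include <sys/socket.h>
    #include <linux/phonet.h>

    /* Illustrative sketch only: option names follow the commit text; the
     * bind/connect addresses and the pipe handle value are placeholders. */
    static int pipe_ctrl_setup(const struct sockaddr_pn *local,
                               const struct sockaddr_pn *remote, int handle)
    {
        int one = 1;
        int fd = socket(AF_PHONET, SOCK_SEQPACKET, PN_PROTO_PIPE);

        if (fd < 0)
            return -1;
        if (bind(fd, (const struct sockaddr *)local, sizeof(*local)) ||
            setsockopt(fd, SOL_PNPIPE, PNPIPE_PIPE_HANDLE, &handle, sizeof(handle)) ||
            connect(fd, (const struct sockaddr *)remote, sizeof(*remote)) ||
            setsockopt(fd, SOL_PNPIPE, PNPIPE_ENCAP_IP, &one, sizeof(one)) ||
            setsockopt(fd, SOL_PNPIPE, PNPIPE_ENABLE, &one, sizeof(one)))
            return -1;
        return fd;
    }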
-
Paul Gortmaker authored
Remove all instances of legacy, or as yet to be implemented, code that is currently living within an #if 0 ... #endif block. In the rare instance that some of it is needed in the future, it can still be dragged out of history, but there is no need for it to sit in mainline. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Reported-by: Sachin Sant <sachinp@in.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Oct, 2010 9 commits
-
-
Eric Dumazet authored
We tried very hard to remove all possible dev_hold()/dev_put() pairs in the network stack, using RCU conversions. There is still an unavoidable device refcount change for every dst we create/destroy, and this can slow down some workloads (routers or some app servers, mmap af_packet). We can switch to a percpu refcount implementation, now that the dynamic per_cpu infrastructure is mature. On a 64-cpu machine, this consumes 256 bytes per device.

On x86, the dev_hold(dev) code:

before:
    lock incl 0x280(%ebx)
after:
    movl 0x260(%ebx),%eax
    incl fs:(%eax)

Stress bench (sending 160,000,000 UDP frames, IP route cache disabled, dual E5540 @ 2.53GHz, 32bit kernel, FIB_TRIE):

Before:
    real 1m1.662s
    user 0m14.373s
    sys  12m55.960s
After:
    real 0m51.179s
    user 0m15.329s
    sys  10m15.942s

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
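Purely as an illustration of the per-cpu refcount technique named above (not the actual netdevice code; the struct and helper names are hypothetical): the hot hold/put path touches only a per-cpu slot, avoiding the contended locked increment, and the true count is summed across CPUs only on the slow path.

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    struct dev_sketch {
        int __percpu *pcpu_refcnt;      /* allocated with alloc_percpu(int) */
    };

    static inline void sketch_dev_hold(struct dev_sketch *dev)
    {
        this_cpu_inc(*dev->pcpu_refcnt);        /* no lock prefix, no shared cache line */
    }

    static inline void sketch_dev_put(struct dev_sketch *dev)
    {
        this_cpu_dec(*dev->pcpu_refcnt);
    }

    static int sketch_refcnt_read(struct dev_sketch *dev)
    {
        int refcnt = 0, cpu;

        for_each_possible_cpu(cpu)              /* slow path: sum the per-cpu slots */
            refcnt += *per_cpu_ptr(dev->pcpu_refcnt, cpu);
        return refcnt;
    }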
-
Dmitry Kravkov authored
Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: Vladislav Zolotarov <vladz@broadcom.com> Tested-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
-
Gerrit Renker authored
This omits the redundant "DCCP:" in warning messages, since DCCP_WARN() already echoes the function name, avoiding messages like:

    kernel: [10988.766503] dccp_close: DCCP: ABORT -- 209 bytes unread

Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
Gerrit Renker authored
This schedules an Ack when receiving a timestamp, exploiting the existing inet_csk_schedule_ack() function, saving one case in the `dccp_ack_pending()' function. Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
Ivo Calado authored
This patch generalises the task of determining data loss from RFC 4340, 7.7.1.

Let S_A, S_B be sequence numbers such that S_B is "after" S_A, and let N_B be the NDP count of packet S_B. Then, using modulo-2^48 arithmetic,

    D = S_B - S_A - 1   is an upper bound on the number of lost data packets,
    D - N_B             is an approximation of the number of lost data packets
                        (there are cases where this is not exact).

The patch implements this as

    dccp_loss_count(S_A, S_B, N_B) := max(S_B - S_A - 1 - N_B, 0)

Signed-off-by: Ivo Calado <ivocalado@embedded.ufcg.edu.br>
Signed-off-by: Erivaldo Xavier <desadoc@gmail.com>
Signed-off-by: Leandro Sales <leandroal@gmail.com>
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
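In code, the rule amounts to something like the following sketch, assuming 48-bit sequence numbers carried in u64 and a modulo-2^48 delta helper (the helper and names are illustrative, not the exact dccp_loss_count() implementation):

    #include <linux/types.h>

    #define SEQ48_MASK ((1ULL << 48) - 1)

    /* S_B - S_A, modulo 2^48 */
    static u64 seq48_delta(u64 s_a, u64 s_b)
    {
        return (s_b - s_a) & SEQ48_MASK;
    }

    /* max(S_B - S_A - 1 - N_B, 0), with S_B strictly "after" S_A */
    static u64 loss_count_sketch(u64 s_a, u64 s_b, u64 ndp_b)
    {
        u64 d = seq48_delta(s_a, s_b);

        if (d == 0)
            return 0;
        d -= 1;                         /* D = S_B - S_A - 1 */
        return d > ndp_b ? d - ndp_b : 0;
    }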
-
Gerrit Renker authored
This removes the argument `more' from ccid_hc_tx_packet_sent, since it was nowhere used in the entire code. (Btw, this argument was not even used in the original KAME code where the function initially came from; compare the variable moreToSend in the freebsd61-dccp-kame-28.08.2006.patch kept by Emmanuel Lochin.) Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
Gerrit Renker authored
After moving the assignment of GAR/ISS from dccp_connect_init() to dccp_transmit_skb(), the former function becomes very small, so that a merger with dccp_connect() suggests itself. Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
Gerrit Renker authored
This fixes a problem and a potential loophole with regard to seqno/ackno validity: currently the initial adjustments to AWL/SWL are only performed once, at the beginning of the connection, during the handshake. Since the Sequence Window feature is always greater than Wmin=32 (7.5.2), it is however necessary to perform these adjustments at least for the first W/W' (variables as per 7.5.1) packets in the lifetime of a connection. This requirement is complicated by the fact that W/W' can change at any time during the lifetime of a connection. It is therefore better to perform that safety check each time SWL/AWL are updated, as implemented by this patch. A second problem solved by this patch is that the remote/local Sequence Window feature values (which set the bounds for AWL/SWL/SWH) are undefined until feature negotiation has completed. During the initial handshake we have more stringent sequence number protection; the changes added by this patch ensure that {A,S}W{L,H} are within the correct bounds at the instant that feature negotiation completes (since the SeqWin feature activation handlers call dccp_update_gsr/gss()). Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
-
- 11 Oct, 2010 19 commits
-
-
Michael Chan authored
To prevent an unnecessary error message. pci_save_state() is also moved to the end of ->probe() so that all PCI config, including the AER state, is saved. Update version to 2.0.18. Signed-off-by: Michael Chan <mchan@broadcom.com> Reviewed-by: Benjamin Li <benli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Chan authored
- Improved flow control and simplified interface
- Use the hardware RSS indirection table instead of the slower firmware-based table
- Lower latency interrupt on the 5709

Signed-off-by: Michael Chan <mchan@broadcom.com>
Reviewed-by: Benjamin Li <benli@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
On Tuesday, 12 October 2010 at 00:02 +0200, Eric Dumazet wrote:
> Here is the followup patch.
>
> Thanks!

Oops, this was an old version; the up-to-date one also takes care of the "used" field. I guess it's time for some sleep, sorry again.

[PATCH net-next V2] neigh: reorder struct neighbour fields

refcnt and (ha_lock, ha, used, dev, output, ops, primary_key) should be placed on separate cache lines. refcnt can be written often, while the other fields are mostly read. This gave me good results on the stress test:

Before:
    real 0m45.570s
    user 0m15.525s
    sys  9m56.669s
After:
    real 0m41.841s
    user 0m15.261s
    sys  8m45.949s

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
struct dst_ops tracks the number of allocated dst entries in an atomic_t field, which is subject to high cache line contention in stress workloads. Switch to a percpu_counter, to reduce the number of times we need to dirty a central location. Place it on a separate cache line to avoid dirtying read-only fields.

Stress test (sending 160,000,000 UDP frames, IP route cache disabled, dual E5540 @ 2.53GHz, 32bit kernel, FIB_TRIE, SLUB/NUMA):

Before:
    real 0m51.179s
    user 0m15.329s
    sys  10m15.942s
After:
    real 0m45.570s
    user 0m15.525s
    sys  9m56.669s

With a small reordering of struct neighbour fields (the subject of a following patch, to separate refcnt from the other read-mostly fields):
    real 0m41.841s
    user 0m15.261s
    sys  8m45.949s

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
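As a rough illustration of the percpu_counter switch (the struct and function names here are hypothetical, not the actual dst_ops layout): the counter sits on its own cache line, and the common add path only dirties a per-cpu slot.

    #include <linux/percpu_counter.h>
    #include <linux/cache.h>

    struct dst_ops_sketch {
        /* ... read-mostly fields (family, protocol, gc, check, ...) ... */

        /* entry count on its own cache line; initialised elsewhere with
         * percpu_counter_init() */
        struct percpu_counter pcpuc_entries ____cacheline_aligned_in_smp;
    };

    static inline void sketch_dst_entries_add(struct dst_ops_sketch *ops, int val)
    {
        percpu_counter_add(&ops->pcpuc_entries, val);   /* per-cpu fast path */
    }

    static inline int sketch_dst_entries_get(struct dst_ops_sketch *ops)
    {
        /* approximate, non-negative snapshot, e.g. for gc thresholds */
        return percpu_counter_read_positive(&ops->pcpuc_entries);
    }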
-
Eric Dumazet authored
Add a seqlock in struct neighbour to protect neigh->ha[], and avoid dirtying the neighbour in stress situations (many different flows / dsts). Dirtying takes place because of read_lock(&n->lock) and writes to n->used. Switching to a seqlock, and writing n->used only when jiffies changes, permits less dirtying. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
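A generic sketch of the seqlock pattern described here, with hypothetical names rather than the real struct neighbour layout: the writer bumps the sequence around updates to ha[], and lockless readers retry if the sequence changed under them.

    #include <linux/seqlock.h>
    #include <linux/string.h>

    struct neigh_sketch {
        seqlock_t ha_lock;              /* initialised with seqlock_init() */
        unsigned char ha[32];
    };

    static void ha_update(struct neigh_sketch *n, const unsigned char *addr, size_t len)
    {
        write_seqlock_bh(&n->ha_lock);  /* writer: bump sequence around the update */
        memcpy(n->ha, addr, len);
        write_sequnlock_bh(&n->ha_lock);
    }

    static void ha_copy(struct neigh_sketch *n, unsigned char *dst, size_t len)
    {
        unsigned int seq;

        do {                            /* reader: no lock taken, retry on change */
            seq = read_seqbegin(&n->ha_lock);
            memcpy(dst, n->ha, len);
        } while (read_seqretry(&n->ha_lock, seq));
    }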
-
David S. Miller authored
Conflicts: net/core/ethtool.c
-
Kees Cook authored
Several other ethtool functions leave heap memory (potentially) uncleared by drivers. Some interfaces appear safe (eeprom, etc.), in that the sizes are well controlled. In some situations (e.g. unchecked error conditions), areas of the heap buffer will remain unchanged before being copied back to userspace. Note that these are less of an issue since they all require CAP_NET_ADMIN. Cc: stable@kernel.org Signed-off-by: Kees Cook <kees.cook@canonical.com> Acked-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
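A sketch of the general hardening pattern (a hypothetical helper, not the actual net/core/ethtool.c code): allocate the reply buffer zeroed, so whatever the driver fails to fill cannot leak stale heap contents back to user space.

    #include <linux/slab.h>
    #include <linux/errno.h>
    #include <linux/uaccess.h>

    static int ethtool_reply_sketch(void __user *useraddr, size_t len,
                                    int (*fill)(void *buf, size_t len))
    {
        void *buf = kzalloc(len, GFP_USER);     /* zeroed, not plain kmalloc() */
        int ret;

        if (!buf)
            return -ENOMEM;
        ret = fill(buf, len);                   /* driver may fill only part of buf */
        if (!ret && copy_to_user(useraddr, buf, len))
            ret = -EFAULT;
        kfree(buf);
        return ret;
    }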
-
Jiri Slaby authored
Stanse found that pch_gbe_xmit_frame uses skb after it is freed. Fix that. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: Masayuki Ohtake <masa-korg@dsn.okisemi.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Slaby authored
Stanse found that i2400m_rx frees the skb but still uses skb->len, even though it has skb_len defined. So use skb_len properly in the code. Also define it as unsigned int rather than size_t, to fix compilation warnings. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> Cc: linux-wimax@intel.com Acked-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Slaby authored
Stanse found that ia_init_one locks a spinlock and, inside of that, calls ia_start, which calls:
- request_irq
- tx_init, which does kmalloc(GFP_KERNEL)
Both of these can sleep and thus result in a deadlock. I don't see a reason to have a per-device spinlock that is used only there and initialised right before the locking site, so remove it completely. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: Chas Williams <chas@cmf.nrl.navy.mil> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Slaby authored
Stanse found that mpc_push frees skb and then dereferences it. It is a typo; new_skb should be dereferenced there. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Slaby authored
Stanse found that in console_show we do:

    kfree_skb(skb);
    return skb->len;

which is not good. Fix that by remembering the length and using it instead.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Chas Williams <chas@cmf.nrl.navy.mil>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
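The general fix pattern, as a tiny hypothetical sketch:

    #include <linux/skbuff.h>

    static int console_show_len_sketch(struct sk_buff *skb)
    {
        unsigned int len = skb->len;    /* remember the length first */

        kfree_skb(skb);                 /* skb must not be touched after this */
        return len;
    }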
-
Eric Dumazet authored
When a new dst is used to send a frame, neigh_resolve_output() tries to associate a struct hh_cache with this dst, calling neigh_hh_init() with the neigh rwlock write-locked. Most of the time, the hh_cache is already known and linked into the neighbour, so we find it and increment its refcount. This patch changes the logic so that we call neigh_hh_init() with the neighbour lock only read-locked, so that the fast path can run in parallel on concurrent cpus. This brings part of the speedup we got with commit c7d4426a (introduce DST_NOCACHE flag) for non-cached dsts to cached ones as well, removing one of the contention points that routers hit on multiqueue-enabled machines. Further improvements would need a seqlock instead of an rwlock to protect neigh->ha[], to not dirty neigh too often and to remove two atomic ops. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Oskar Schirmer authored
With hardware that is slow in negotiation, the system would freeze while trying to mount root over NFS at boot time. The link state had not been initialised, so the network stack tried to start transmission right away. This caused instant retries, as the driver only reported itself busy upon link down, rendering the system unusable. Notify carrier off initially, to prevent transmission until phylib reports link up. Signed-off-by: Oskar Schirmer <oskar@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
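A hedged sketch of the idea (struct, helper and callback names are hypothetical, not the actual driver code): mark the carrier off before the device can transmit, and let the phylib adjust_link callback turn it on only once the PHY reports link up.

    #include <linux/netdevice.h>
    #include <linux/phy.h>

    struct example_priv {
        struct phy_device *phydev;      /* attached via phy_connect() elsewhere */
    };

    static int example_probe_register(struct net_device *dev)
    {
        netif_carrier_off(dev);         /* block TX until the PHY reports link up */
        return register_netdev(dev);
    }

    static void example_adjust_link(struct net_device *dev)
    {
        struct example_priv *priv = netdev_priv(dev);

        if (priv->phydev->link)
            netif_carrier_on(dev);
        else
            netif_carrier_off(dev);
    }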
-
Samuel Ortiz authored
While parsing the GetValuebyClass command frame, we could potentially write past the skb->data pointer. Cc: stable@kernel.org Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com> Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
-
Samuel Ortiz authored
Cc: stable@kernel.org Reported-by: Ilja Van Sprundel <ivansprundel@ioactive.com> Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
-
Roel Kluin authored
Test whether the index exceeds fw->size before reading the element. Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
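The bounds check amounts to something like this hypothetical helper:

    #include <linux/types.h>
    #include <linux/errno.h>
    #include <linux/firmware.h>

    static int fw_byte_at(const struct firmware *fw, size_t index, u8 *out)
    {
        if (index >= fw->size)          /* test before reading the element */
            return -EINVAL;
        *out = fw->data[index];
        return 0;
    }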
-
Samuel Ortiz authored
The code intends to lock the irnet_socket, so adding a mutex to it allows for a complete BKL removal. Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
-
Samuel Ortiz authored
Most of the time, lock_kernel() was pointless or could simply be replaced by lock_sock(). Signed-off-by: Samuel Ortiz <samuel@sortiz.org>
-
- 09 Oct, 2010 6 commits
-
-
Eric Dumazet authored
sundance get_stats() should not be run concurrently; add a lock to avoid potential losses. Note: also remove the unused rx_lock field. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
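A rough sketch of the serialization (the private struct and field names are hypothetical, not the actual sundance.c code): one spinlock around the hardware counter readout so concurrent get_stats calls cannot interleave.

    #include <linux/spinlock.h>
    #include <linux/netdevice.h>

    struct sundance_priv_sketch {
        spinlock_t statlock;            /* initialised with spin_lock_init() */
        struct net_device_stats stats;
    };

    static struct net_device_stats *get_stats_sketch(struct net_device *dev)
    {
        struct sundance_priv_sketch *np = netdev_priv(dev);
        unsigned long flags;

        spin_lock_irqsave(&np->statlock, flags);
        /* ... read the hardware counters into np->stats ... */
        spin_unlock_irqrestore(&np->statlock, flags);
        return &np->stats;
    }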
-
Nicolas Kaiser authored
Simplify: ((a && !b) || (!a && b)) => (a != b) Signed-off-by: Nicolas Kaiser <nikai@nikai.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nicolas Kaiser authored
Simplify: ((a && b) || (!a && !b)) => (a == b) Signed-off-by: Nicolas Kaiser <nikai@nikai.net> Acked-by: Breno Leitao <leitao@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
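For reference, the two Boolean identities behind this and the previous cleanup, checked exhaustively in a tiny user-space C program:

    #include <assert.h>

    int main(void)
    {
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++) {
                assert(((a && !b) || (!a && b)) == (a != b));   /* XOR form  */
                assert(((a &&  b) || (!a && !b)) == (a == b));  /* XNOR form */
            }
        return 0;
    }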
-
Changli Gao authored
Signed-off-by: Changli Gao <xiaosuo@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stanislaw Gruszka authored
Use the DMA API, as the PCI equivalents will be deprecated. This change also allows allocating with GFP_KERNEL where possible. Tested-by: Neal Becker <ndbecker2@gmail.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
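An illustrative before/after of this kind of conversion (the wrapper functions are hypothetical; the calls are the generic DMA API routines that replace the pci_* wrappers):

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static void *alloc_ring(struct pci_dev *pdev, size_t size, dma_addr_t *dma)
    {
        /* was: pci_alloc_consistent(pdev, size, dma), which implies GFP_ATOMIC */
        return dma_alloc_coherent(&pdev->dev, size, dma, GFP_KERNEL);
    }

    static dma_addr_t map_tx_buf(struct pci_dev *pdev, void *buf, size_t len)
    {
        /* was: pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE) */
        return dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
    }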
-
Stanislaw Gruszka authored
We have a Fedora bug report where the driver fails to initialize after suspend/resume because of memory allocation errors: https://bugzilla.redhat.com/show_bug.cgi?id=629158 To fix this, use GFP_KERNEL allocations where possible. Tested-by: Neal Becker <ndbecker2@gmail.com> Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 Oct, 2010 3 commits
-
-
Tom Herbert authored
The rx->count reference is used to track the number of rx-queue kobjects created for the device. This patch eliminates the initialization of the counter in netif_alloc_rx_queues and instead increments the counter each time a kobject is created. This is now symmetric with the decrement that is done when an object is released. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rémi Denis-Courmont authored
There are a bunch of issues that need to be fixed, including:
- GFP_KERNEL allocations from atomic context (and GFP_ATOMIC in process context),
- abuse of the setsockopt() call convention,
- unprotected/unlocked static variables...

IMHO, we will need to alter the userspace ABI when we fix it. So mark the configuration option as EXPERIMENTAL for the time being (or should it be BROKEN instead?).

Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Rémi Denis-Courmont authored
The current code works like this:

    int garbage, status;
    socklen_t len = sizeof(status);

    /* enable pipe */
    setsockopt(fd, SOL_PNPIPE, PNPIPE_ENABLE, &garbage, sizeof(garbage));
    /* disable pipe */
    setsockopt(fd, SOL_PNPIPE, PNPIPE_DISABLE, &garbage, sizeof(garbage));
    /* get status */
    getsockopt(fd, SOL_PNPIPE, PNPIPE_INQ, &status, &len);

...which does not follow the usual socket option pattern. This patch merges all three "options" into a single gettable and settable option, before Linux 2.6.37 gets out:

    int status;
    socklen_t len = sizeof(status);

    /* enable pipe */
    status = 1;
    setsockopt(fd, SOL_PNPIPE, PNPIPE_ENABLE, &status, sizeof(status));
    /* disable pipe */
    status = 0;
    setsockopt(fd, SOL_PNPIPE, PNPIPE_ENABLE, &status, sizeof(status));
    /* get status */
    getsockopt(fd, SOL_PNPIPE, PNPIPE_ENABLE, &status, &len);

This also fixes the error code from EFAULT to ENOTCONN.

Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
Cc: Kumar Sanghvi <kumar.sanghvi@stericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-