- 07 Mar, 2011 6 commits
Padmanabh Ratnakar authored
L4 checksum field is valid only for TCP/UDP packets in Lancer Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@emulex.com> Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: Subramanian Seetharaman <subbu.seetharaman@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
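As a minimal sketch of the resulting receive-path logic (the structure and field names below, rx_compl, tcpf, udpf and friends, are illustrative stand-ins, not the actual be2net definitions):

```c
#include <linux/types.h>

/* Illustrative stand-in for the RX completion flags; the real
 * be2net completion structure differs. */
struct rx_compl {
	bool tcpf;        /* frame is TCP */
	bool udpf;        /* frame is UDP */
	bool ip_csum_ok;  /* L3 checksum validated by hardware */
	bool l4_csum_ok;  /* L4 checksum bit from hardware */
};

/* Only trust the L4 checksum bit when the frame is TCP or UDP;
 * on Lancer the bit is undefined for other protocols. */
static bool rx_csum_passed(const struct rx_compl *rxcp)
{
	return rxcp->ip_csum_ok && (rxcp->tcpf || rxcp->udpf) &&
	       rxcp->l4_csum_ok;
}
```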
Padmanabh Ratnakar authored
The workaround added for Lancer to handle an RX ERR completion received when no RX buffers are posted is not needed. Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@emulex.com> Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: Subramanian Seetharaman <subbu.seetharaman@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
This eliminates a lot of pure overhead due to parameter passing. Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
fib_semantic_match() requires that if the type doesn't signal an automatic error, it must be of type RTN_UNICAST, RTN_LOCAL, RTN_BROADCAST, RTN_ANYCAST, or RTN_MULTICAST. Checking this on every route lookup is pointless work. Instead, validate it during route insertion, via fib_create_info(). Also, there was nothing making sure the type value was less than RTN_MAX, so add that missing check while we're here. Signed-off-by: David S. Miller <davem@davemloft.net>
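A hedged sketch of the idea, moving the check to insertion time (simplified; the real check lives in fib_create_info(), and the function name and exact error mapping here are illustrative):

```c
#include <linux/errno.h>
#include <linux/rtnetlink.h>	/* RTN_* route types, RTN_MAX */

/* Classify the route type once at insertion time. Types that signal
 * an automatic error get their error code recorded; only the five
 * types semantic match understands may pass with *error == 0. */
static int fib_check_type(unsigned char type, int *error)
{
	if (type > RTN_MAX)		/* previously never validated */
		return -EINVAL;

	switch (type) {
	case RTN_UNREACHABLE:
		*error = -EHOSTUNREACH;
		return 0;
	case RTN_PROHIBIT:
		*error = -EACCES;
		return 0;
	case RTN_BLACKHOLE:
		*error = -EINVAL;
		return 0;
	case RTN_UNICAST:
	case RTN_LOCAL:
	case RTN_BROADCAST:
	case RTN_ANYCAST:
	case RTN_MULTICAST:
		*error = 0;		/* usable by semantic match */
		return 0;
	default:
		return -EINVAL;		/* anything else is rejected */
	}
}
```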
Joe Perches authored
This allows any caller to be prefaced by any specific pr_fmt to better identify which device driver is using this function inappropriately. Add terminating newline. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
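For reference, the usual kernel idiom this enables: a driver defines pr_fmt() before its includes so every pr_*() call in the file carries an identifying prefix.

```c
/* Must be defined before the first include so printk.h picks it up. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/printk.h>

static void report_misuse(void)
{
	/* With pr_fmt above, this prints e.g. "mymodule: invalid use". */
	pr_warn("invalid use\n");
}
```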
- 05 Mar, 2011 32 commits
Sven Eckelmann authored
When trying to associate a net_device with another net_device which already exists, batman-adv assumes that this interface is a fully initialized batman mesh interface without checking it. The behaviour when accessing data behind netdev_priv of a random net_device is undefined and potentially dangerous. Reported-by: Linus Lüssing <linus.luessing@ascom.ch> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Linus Lüssing authored
Signed-off-by: Linus Lüssing <linus.luessing@ascom.ch> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Batman-adv works with "hard interfaces" as well as "soft interfaces". The new name makes clearer which kind of interface this list stores. Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
It might be possible that 2 threads access the same data in the same rcu grace period. The first thread calls call_rcu() to decrement the refcount and free the data while the second thread increases the refcount to use the data. To avoid this race condition all refcount operations have to be atomic. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
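A minimal sketch of the safe pattern (structure and function names are illustrative, not the actual batman-adv code): the reader must take its reference atomically, and give up if the count already reached zero.

```c
#include <linux/atomic.h>
#include <linux/rcupdate.h>

/* Illustrative object with an atomic refcount and RCU bookkeeping. */
struct entry {
	atomic_t refcount;
	struct rcu_head rcu;
};

/* Called under rcu_read_lock(). Fails if another thread already
 * dropped the last reference and handed the object to call_rcu();
 * in that case the data must not be used. */
static struct entry *entry_get(struct entry *e)
{
	if (!e || !atomic_inc_not_zero(&e->refcount))
		return NULL;
	return e;
}
```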
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Note: The function compare_ether_addr() provided by the Linux kernel requires aligned memory. Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
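A sketch of the alignment-safe alternative this implies (the helper name is illustrative): compare_ether_addr() reads the addresses as three 16-bit words and therefore needs 2-byte alignment, while memcmp() has no such requirement.

```c
#include <linux/string.h>
#include <linux/if_ether.h>	/* ETH_ALEN */

/* Works on arbitrarily aligned buffers, e.g. MAC addresses embedded
 * in packed packet structures. */
static int addr_equal(const void *a, const void *b)
{
	return memcmp(a, b, ETH_ALEN) == 0;
}
```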
Linus Lüssing authored
When printing the soft interface table, the number of entries in the softif neigh list is first counted and a fitting buffer allocated. After that the softif neigh list gets locked again and the buffer printed - which has the following two issues: for one thing, the softif neigh list might have grown when reacquiring the rcu lock, which results in writing outside of the allocated buffer. Furthermore, 31 bytes are not enough for printing an entry with a vid of more than 2 digits. The manual buffering is unnecessary; we can safely print to the seq directly during the rcu_read_lock(). Signed-off-by: Linus Lüssing <linus.luessing@ascom.ch> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
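A sketch of the resulting approach (list and field names are illustrative): print each entry straight to the seq_file inside the RCU read-side critical section, so no intermediate buffer can be outgrown.

```c
#include <linux/types.h>
#include <linux/rculist.h>
#include <linux/seq_file.h>

struct softif_neigh {		/* illustrative, not the real layout */
	struct list_head list;
	u8 addr[6];
	short vid;
};

static void softif_neigh_seq_print(struct seq_file *seq,
				   struct list_head *neigh_list)
{
	struct softif_neigh *neigh;

	rcu_read_lock();
	list_for_each_entry_rcu(neigh, neigh_list, list)
		/* No fixed-size buffer: the seq_file grows as needed. */
		seq_printf(seq, "%pM (vid: %d)\n", neigh->addr, neigh->vid);
	rcu_read_unlock();
}
```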
Linus Lüssing authored
When unicast_send_skb() is increasing the orig_node's refcount another thread might have been freeing this orig_node already. We need to increase the refcount in the rcu read lock protected area to avoid that. Signed-off-by: Linus Lüssing <linus.luessing@ascom.ch> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Linus Lüssing authored
The RCU macros rcu_dereference() and rcu_assign_pointer() need to be used for bat_priv->curr_gw, together with the appropriate spin/RCU locking. Otherwise we might end up using a curr_gw pointer that points to already freed memory. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Linus Lüssing <linus.luessing@ascom.ch> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
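A sketch of the reader side (structure and field names are illustrative): dereference the pointer only under rcu_read_lock() and take a reference before leaving the critical section.

```c
#include <linux/atomic.h>
#include <linux/rcupdate.h>

struct gw_node {			/* illustrative stand-ins */
	atomic_t refcount;
};

struct bat_priv {
	struct gw_node __rcu *curr_gw;
};

static struct gw_node *gw_get_selected(struct bat_priv *bat_priv)
{
	struct gw_node *curr_gw;

	rcu_read_lock();
	curr_gw = rcu_dereference(bat_priv->curr_gw);
	/* Grab a reference before dropping the RCU lock, otherwise
	 * the gateway could be freed right under us. */
	if (curr_gw && !atomic_inc_not_zero(&curr_gw->refcount))
		curr_gw = NULL;
	rcu_read_unlock();

	return curr_gw;
}
```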
Marek Lindner authored
Batman-adv could receive several payload broadcasts at the same time that would trigger access to the broadcast seqno sliding window to determine whether this is a new broadcast or not. If these incoming broadcasts are accessing the sliding window simultaneously it could be left in an inconsistent state. Therefore it is necessary to make sure this access is atomic. Reported-by: Linus Lüssing <linus.luessing@web.de> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
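A sketch of the serialized access (lock name, fields, and the window helper are illustrative stand-ins): every check-and-mark of the sliding window happens inside one critical section, so two concurrent broadcasts cannot both pass as "new".

```c
#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct orig_node {			/* illustrative stand-ins */
	spinlock_t bcast_seqno_lock;
	unsigned long bcast_bits[2];	/* 128-bit sliding window */
	u32 last_bcast_seqno;
};

/* Minimal stand-in for the real bit-window helper: marks seq_diff in
 * the window and reports whether it was previously unseen. Window
 * shifting for out-of-range seqnos is omitted for brevity. */
static bool window_check_and_mark(unsigned long *bits, s32 seq_diff)
{
	if (seq_diff < 0 || seq_diff >= 128)
		return false;		/* outside the window: drop */
	if (test_bit(seq_diff, bits))
		return false;		/* duplicate */
	__set_bit(seq_diff, bits);
	return true;
}

static bool bcast_seqno_is_new(struct orig_node *orig, u32 seqno)
{
	bool is_new;

	spin_lock_bh(&orig->bcast_seqno_lock);
	is_new = window_check_and_mark(orig->bcast_bits,
				       (s32)(seqno - orig->last_bcast_seqno));
	spin_unlock_bh(&orig->bcast_seqno_lock);

	return is_new;
}
```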
Marek Lindner authored
Reported-by: Linus Lüssing <linus.luessing@saxnet.de> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
It might be possible that 2 threads access the same data in the same rcu grace period. The first thread calls call_rcu() to decrement the refcount and free the data while the second thread increases the refcount to use the data. To avoid this race condition all refcount operations have to be atomic. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
It might be possible that 2 threads access the same data in the same rcu grace period. The first thread calls call_rcu() to decrement the refcount and free the data while the second thread increases the refcount to use the data. To avoid this race condition all refcount operations have to be atomic. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
It might be possible that 2 threads access the same data in the same rcu grace period. The first thread calls call_rcu() to decrement the refcount and free the data while the second thread increases the refcount to use the data. To avoid this race condition all refcount operations have to be atomic. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
It might be possible that 2 threads access the same data in the same rcu grace period. The first thread calls call_rcu() to decrement the refcount and free the data while the second thread increases the refcount to use the data. To avoid this race condition all refcount operations have to be atomic. Reported-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Simon Wunderlich authored
bonding / alternating candidates need to be secured by rcu locks as well. This patch therefore converts the bonding list from a plain pointer list to a rcu securable lists and references the bonding candidates. Signed-off-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de> Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
hardif_disable_interface() calls purge_orig_ref() to immediately free all neighbors associated with the interface that is going down. However, purge_orig_neighbors() checked for the interface status IF_INACTIVE, while the status is set to IF_NOT_IN_USE shortly before purge_orig_ref() is called, so the check could never match. Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
Marek Lindner authored
Signed-off-by: Marek Lindner <lindner_marek@yahoo.de>
David S. Miller authored
The only necessary parts are the src/dst addresses, the interface indexes, the TOS, and the mark. The rest is unnecessary bloat, which amounts to nearly 50 bytes on 64-bit. Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
rt->rt_iif is only ever inspected on input routes, for example DCCP uses this to populate a route lookup flow key when generating replies to another packet. Therefore, setting it to anything other than zero on output routes makes no sense. Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
We know this is a new route object, so doing atomics and stuff makes no sense at all. Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller authored
We burn a lot of useless cycles, cpu store buffer traffic, and memory operations memset()'ing the on-stack flow used to perform output route lookups in __ip_route_output_key(). Only the first half of the flow object members even matter for output route lookups in this context, specifically:

- FIB rules matching cares about: dst, src, tos, iif, oif, mark
- FIB trie lookup cares about: dst
- FIB semantic match cares about: tos, scope, oif

Therefore only initialize these specific members and elide the memset entirely. On Niagara2 this kills about ~300 cycles from the output route lookup path. Likely, we can take things further, since all callers of output route lookups essentially throw away the on-stack flow they use. So they don't care if we use it as a scratch-pad to compute the final flow key. Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
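A sketch of what the initialization reduces to (field names follow the 2.6.x struct flowi and its fl4_* aliases; the helper itself is illustrative):

```c
#include <linux/rtnetlink.h>	/* RT_SCOPE_UNIVERSE */
#include <linux/types.h>
#include <net/flow.h>

/* Write only the members the output lookup actually reads; the rest
 * of the on-stack flowi is deliberately left uninitialized instead
 * of being memset() to zero. */
static void flow_init_output(struct flowi *fl, __be32 dst, __be32 src,
			     u8 tos, int oif, u32 mark)
{
	fl->oif = oif;
	fl->iif = 0;			/* output route: no input ifindex */
	fl->mark = mark;
	fl->fl4_dst = dst;
	fl->fl4_src = src;
	fl->fl4_tos = tos;
	fl->fl4_scope = RT_SCOPE_UNIVERSE;
}
```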
- 04 Mar, 2011 2 commits
Eric Dumazet authored
David noticed:

------------------
Eric, I was profiling the non-routing-cache case and something that stuck out is the case of calling inet_getpeer() with create==0. If an entry is not found, we have to redo the lookup under a spinlock to make certain that a concurrent writer rebalancing the tree does not "hide" an existing entry from us. This makes the case of a create==0 lookup for a not-present entry really expensive. It is on the order of 600 cpu cycles on my Niagara2. I added a hack to not do the relookup under the lock when create==0 and it now costs less than 300 cycles. This is now a pretty common operation with the way we handle COW'd metrics, so I think it's definitely worth optimizing.
------------------

One solution is to use a seqlock instead of a spinlock to protect struct inet_peer_base. After a failed avl tree lookup, we can easily detect if a writer did some changes during our lookup. Taking the lock and redoing the lookup is only necessary in this case.

Note: Add one private rcu_deref_locked() macro to put in one spot the access to the spinlock included in the seqlock.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
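A sketch of the resulting lookup shape (lookup_rcu() and lookup_locked() are illustrative stand-ins for the AVL walks): the unlocked walk is trusted unless it found nothing and read_seqretry() reports that a writer was active, in which case the lookup is redone under the lock.

```c
#include <linux/seqlock.h>
#include <linux/types.h>

struct inet_peer;			/* opaque here */
struct inet_peer_base {
	seqlock_t lock;			/* was: spinlock_t */
	/* ... AVL tree root, etc. ... */
};

/* Illustrative stand-ins for the tree walks. */
struct inet_peer *lookup_rcu(struct inet_peer_base *base, __be32 addr);
struct inet_peer *lookup_locked(struct inet_peer_base *base, __be32 addr);

static struct inet_peer *peer_lookup(struct inet_peer_base *base,
				     __be32 addr)
{
	struct inet_peer *p;
	unsigned int seq;

	seq = read_seqbegin(&base->lock);
	p = lookup_rcu(base, addr);	/* no lock taken */

	/* A miss is only conclusive if no writer rebalanced the tree
	 * during our walk; otherwise redo the lookup under the lock. */
	if (!p && read_seqretry(&base->lock, seq)) {
		write_seqlock_bh(&base->lock);
		p = lookup_locked(base, addr);
		write_sequnlock_bh(&base->lock);
	}
	return p;
}
```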
David S. Miller authored
Merge branch 'for-davem' of ssh://master.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next-2.6