Commit 09d3e84d authored by Herbert Xu, committed by Thomas Graf

[NET]: Add missing memory barrier to kfree_skb().

Also kill kfree_skb_fast(); it is a relic of fast switching,
which was killed off years ago.

The bug is that in the case where we take the atomic_read()
optimization, we need to make sure that reads of skb state
later in __kfree_skb() processing (particularly the skb->list
BUG check) are not reordered by the CPU to occur before the
read of the reference counter.
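
Concretely, atomic_read() is a plain load and not a barrier, so the
loads that __kfree_skb() performs afterwards may be issued before the
counter load. A minimal sketch of the hazardous interleaving on the
pre-patch fast path (hypothetical illustration, not code from the
patch; the skb->list test stands in for the BUG check inside
__kfree_skb()):

/*
 *  CPU 0 (drops its reference)      CPU 1 (runs kfree_skb())
 *  ---------------------------      ------------------------
 *                                   loads skb->list early, seeing the
 *                                   stale, still-linked value
 *  __skb_unlink(skb, list);
 *  atomic_dec(&skb->users);         atomic_read(&skb->users) == 1
 *                                   __kfree_skb(skb)
 *                                     skb->list check fires BUG()
 *
 *  The smp_rmb() added after the counter read forces the loads of skb
 *  state to be performed only after CPU 1 has observed the counter
 *  drop to 1.
 */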

Thanks to Olaf Kirch and Anton Blanchard for discovering
and helping fix this bug.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 97d52752
@@ -353,17 +353,13 @@ static inline struct sk_buff *skb_get(struct sk_buff *skb)
  */
 static inline void kfree_skb(struct sk_buff *skb)
 {
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		__kfree_skb(skb);
+	if (likely(atomic_read(&skb->users) == 1))
+		smp_rmb();
+	else if (likely(!atomic_dec_and_test(&skb->users)))
+		return;
+	__kfree_skb(skb);
 }
 
-/* Use this if you didn't touch the skb state [for fast switching] */
-static inline void kfree_skb_fast(struct sk_buff *skb)
-{
-	if (atomic_read(&skb->users) == 1 || atomic_dec_and_test(&skb->users))
-		kfree_skbmem(skb);
-}
-
 /**
  *	skb_cloned - is the buffer a clone
  *	@skb: buffer to check
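
For reference, a commented sketch of the resulting inline (the comments are
illustrative and not part of the patch). The slow path needs no explicit
barrier because atomic operations that return a value, such as
atomic_dec_and_test(), already imply full memory-barrier semantics in the
kernel's atomic API:

static inline void kfree_skb(struct sk_buff *skb)
{
	if (likely(atomic_read(&skb->users) == 1))
		/* Last reference: atomic_read() is only a load, so order it
		 * against the later reads of skb state in __kfree_skb(). */
		smp_rmb();
	else if (likely(!atomic_dec_and_test(&skb->users)))
		/* Other references remain; atomic_dec_and_test() itself acts
		 * as a full barrier, so no extra smp_rmb() is needed here. */
		return;
	__kfree_skb(skb);
}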