Commit 90c1d870 authored by Greg Kroah-Hartman

tcp: make challenge acks faster

When backporting upstream commit 75ff39cc ("tcp: make challenge acks
less predictable") I neglected to use the correct ACCESS_ONCE() macros.
This fixes that up to hopefully speed things up a bit more.

Thanks to Chas Williams for the 3.10 backport which reminded me of this.

Cc: Yue Cao <ycao009@ucr.edu>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chas Williams <ciwillia@brocade.com>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent c0e754d6
@@ -3299,12 +3299,12 @@ static void tcp_send_challenge_ack(struct sock *sk)
 		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;

 		challenge_timestamp = now;
-		challenge_count = half +
+		ACCESS_ONCE(challenge_count) = half +
 				  prandom_u32_max(sysctl_tcp_challenge_ack_limit);
 	}
-	count = challenge_count;
+	count = ACCESS_ONCE(challenge_count);
 	if (count > 0) {
-		challenge_count = count - 1;
+		ACCESS_ONCE(challenge_count) = count - 1;
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}