Commit 6d4634d1 authored by Cambda Zhu, committed by Jakub Kicinski

net: Limit logical shift left of TCP probe0 timeout

For each TCP zero window probe, icsk_backoff is increased by one, and its
maximum value is tcp_retries2. If tcp_retries2 is greater than 63, the
shift count for the probe0 timeout can exceed the width of the shifted
type. On x86_64/ARMv8/MIPS the shift count is masked to the range 0 to
63, and on ARMv7 the result is zero. If the shift count is masked, only
a few probes are sent with a timeout shorter than TCP_RTO_MAX. But if
the timeout is zero, it takes tcp_retries2 probes to end this false
timeout. Besides, a bitwise shift by an amount greater than or equal to
the width of the type is undefined behavior.

This patch adds a limit to the backoff. The maximum value of max_when is
TCP_RTO_MAX and the minimum timeout base is TCP_RTO_MIN, so the backoff
is capped at the number of doublings needed to go from TCP_RTO_MIN to
TCP_RTO_MAX.
Signed-off-by: Cambda Zhu <cambda@linux.alibaba.com>
Link: https://lore.kernel.org/r/20201208091910.37618-1-cambda@linux.alibaba.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parent ebf32282
@@ -1326,7 +1326,9 @@ static inline unsigned long tcp_probe0_base(const struct sock *sk)
 static inline unsigned long tcp_probe0_when(const struct sock *sk,
 					    unsigned long max_when)
 {
-	u64 when = (u64)tcp_probe0_base(sk) << inet_csk(sk)->icsk_backoff;
+	u8 backoff = min_t(u8, ilog2(TCP_RTO_MAX / TCP_RTO_MIN) + 1,
+			   inet_csk(sk)->icsk_backoff);
+	u64 when = (u64)tcp_probe0_base(sk) << backoff;
 
 	return (unsigned long)min_t(u64, when, max_when);
 }