Commit 5a45f008 authored by Neal Cardwell, committed by David S. Miller

tcp: fix undo after RTO for CUBIC

This patch fixes CUBIC so that cwnd reductions made during RTOs can be
undone (just as they already can be undone when using the default/Reno
behavior).

When undoing cwnd reductions, BIC-derived congestion control modules
were restoring the cwnd from last_max_cwnd. There were two problems
with using last_max_cwnd to restore a cwnd during undo:

(a) last_max_cwnd was set to 0 on state transitions into TCP_CA_Loss
(by calling the module's reset() functions), so cwnd reductions from
RTOs could not be undone.

(b) when fast_convergence is enabled (which it is by default)
last_max_cwnd does not actually hold the value of snd_cwnd before the
loss; instead, it holds a scaled-down version of snd_cwnd.
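
For context (not part of this patch), below is a paraphrased sketch of the
pre-existing loss handling in net/ipv4/tcp_cubic.c that motivates (a) and (b):
with fast_convergence enabled, last_max_cwnd stores a scaled-down snd_cwnd,
while loss_cwnd records the unscaled congestion window at the loss. Field and
constant names are as in tcp_cubic.c; the body is an approximation, not a
verbatim copy.

/* Paraphrased sketch of bictcp_recalc_ssthresh() as it existed before
 * this patch; illustrative only, not a verbatim copy of the kernel code. */
static u32 bictcp_recalc_ssthresh(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);
	struct bictcp *ca = inet_csk_ca(sk);

	ca->epoch_start = 0;	/* end of the current epoch */

	/* With fast_convergence, last_max_cwnd holds a *scaled-down*
	 * snd_cwnd when the loss happens below the previous maximum,
	 * which is why it is unsuitable for undo (problem (b)). */
	if (tp->snd_cwnd < ca->last_max_cwnd && fast_convergence)
		ca->last_max_cwnd = (tp->snd_cwnd * (BICTCP_BETA_SCALE + beta))
			/ (2 * BICTCP_BETA_SCALE);
	else
		ca->last_max_cwnd = tp->snd_cwnd;

	/* loss_cwnd is the unscaled congestion window at last loss,
	 * which is what undo should restore (change (1) below). */
	ca->loss_cwnd = tp->snd_cwnd;

	return max((tp->snd_cwnd * beta) / BICTCP_BETA_SCALE, 2U);
}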

This patch makes the following changes:

(1) upon undo, revert snd_cwnd to ca->loss_cwnd, which is already, as
the existing comment notes, the "congestion window at last loss"

(2) stop forgetting ca->loss_cwnd on TCP_CA_Loss events

(3) use ca->last_max_cwnd to check if we're in slow start
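
For reference, here is an approximate sketch (not part of this patch; details
vary by kernel version) of how the undo hook is consumed by the TCP core in
net/ipv4/tcp_input.c: when a cwnd reduction turns out to have been spurious,
the core asks the congestion control module which window to restore, so
returning ca->loss_cwnd from bictcp_undo_cwnd() restores the pre-loss window.

/* Approximate caller-side sketch (tcp_input.c); the hook and field names
 * are real, but the body is simplified and may differ across versions. */
static void tcp_undo_cwr(struct sock *sk, const bool undo_ssthresh)
{
	struct tcp_sock *tp = tcp_sk(sk);
	const struct inet_connection_sock *icsk = inet_csk(sk);

	if (tp->prior_ssthresh) {
		/* Ask the congestion control module (e.g. CUBIC's
		 * bictcp_undo_cwnd()) which window to restore. */
		if (icsk->icsk_ca_ops->undo_cwnd)
			tp->snd_cwnd = icsk->icsk_ca_ops->undo_cwnd(sk);
		else
			tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh << 1);

		if (undo_ssthresh && tp->prior_ssthresh > tp->snd_ssthresh)
			tp->snd_ssthresh = tp->prior_ssthresh;
	} else {
		tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh);
	}
	tp->snd_cwnd_stamp = tcp_time_stamp;
}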
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Sangtae Ha <sangtae.ha@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent fc16dcd8
@@ -107,7 +107,6 @@ static inline void bictcp_reset(struct bictcp *ca)
 {
 	ca->cnt = 0;
 	ca->last_max_cwnd = 0;
-	ca->loss_cwnd = 0;
 	ca->last_cwnd = 0;
 	ca->last_time = 0;
 	ca->bic_origin_point = 0;
@@ -142,7 +141,10 @@ static inline void bictcp_hystart_reset(struct sock *sk)
 
 static void bictcp_init(struct sock *sk)
 {
-	bictcp_reset(inet_csk_ca(sk));
+	struct bictcp *ca = inet_csk_ca(sk);
+
+	bictcp_reset(ca);
+	ca->loss_cwnd = 0;
 
 	if (hystart)
 		bictcp_hystart_reset(sk);
@@ -275,7 +277,7 @@ static inline void bictcp_update(struct bictcp *ca, u32 cwnd)
 	 * The initial growth of cubic function may be too conservative
 	 * when the available bandwidth is still unknown.
 	 */
-	if (ca->loss_cwnd == 0 && ca->cnt > 20)
+	if (ca->last_max_cwnd == 0 && ca->cnt > 20)
 		ca->cnt = 20;	/* increase cwnd 5% per RTT */
 
 	/* TCP Friendly */
@@ -342,7 +344,7 @@ static u32 bictcp_undo_cwnd(struct sock *sk)
 {
 	struct bictcp *ca = inet_csk_ca(sk);
 
-	return max(tcp_sk(sk)->snd_cwnd, ca->last_max_cwnd);
+	return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
 }
 
 static void bictcp_state(struct sock *sk, u8 new_state)