Commit e317f6f6 authored by Ilpo Järvinen, committed by David S. Miller

[TCP]: FRTO undo response falls back to ratehalving one if ECEd

In fastretrans_alert, undoing of ssthresh is disabled whenever
FLAG_ECE is set, by clearing prior_ssthresh. That clearing does
not protect FRTO, because FRTO operates before fastretrans_alert.
Moving the clearing of prior_ssthresh earlier seems to be a
suboptimal solution to the FRTO case because then FLAG_ECE will
cause a second ssthresh reduction in try_to_open (the first
occurred when FRTO was entered). So instead, FRTO falls back
immediately to the rate halving response, which switches TCP to
CA_CWR state preventing the latter reduction of ssthresh.
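
The fallback described above can be sketched in plain C outside the kernel. The flag value, the enum, and struct sock_sketch below are hypothetical stand-ins for the kernel's types, not the real definitions:

```c
#include <assert.h>

/* Sketch of the patched FRTO undo response. FLAG_ECE's value and
 * all types here are stand-ins, not the kernel's definitions. */
#define FLAG_ECE 0x40

enum ca_state { CA_OPEN, CA_CWR };

struct sock_sketch {
	enum ca_state state;	/* stands in for icsk_ca_state */
	int cwnd_undone;	/* set when the cwnd/ssthresh undo ran */
};

/* Rate-halving response: switch to CA_CWR (stands in for tcp_enter_cwr) */
static void ratehalving_response(struct sock_sketch *sk)
{
	sk->state = CA_CWR;
}

/* The undo response falls back to rate halving when ECE is set, so a
 * later FLAG_ECE sees CA_CWR and no second ssthresh reduction follows. */
static void undo_response(struct sock_sketch *sk, int flag)
{
	if (flag & FLAG_ECE)
		ratehalving_response(sk);
	else
		sk->cwnd_undone = 1;	/* stands in for tcp_undo_cwr(sk, 1) */
}
```

With ECE set the sketch enters CWR without undoing; without it, the undo runs as before.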

If the first ECE arrived before the ACK after which FRTO is able
to decide RTO as spurious, prior_ssthresh is already cleared.
Thus no undoing of ssthresh occurs. Besides, FLAG_ECE should also
be set in the following ACKs, resulting in the rate-halving
response; it sees that TCP is already in CA_CWR, which again
prevents an extra ssthresh reduction on that round-trip.

If the first ECE arrived before RTO, ssthresh has already been
adapted and prior_ssthresh remains cleared on entry because TCP
is in CA_CWR (the same applies also to a case where FRTO is
entered more than once and ECE comes in the middle).

After tcp_enter_cwr, high_seq must not be touched because the CWR
round-trip calculation depends on it.

I believe that after this patch, FRTO should be ECN-safe and
even able to take advantage of synergy benefits.
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent e01f9d77
@@ -2587,13 +2587,14 @@ static void tcp_conservative_spur_to_response(struct tcp_sock *tp)
  */
 static void tcp_ratehalving_spur_to_response(struct sock *sk)
 {
-	struct tcp_sock *tp = tcp_sk(sk);
 	tcp_enter_cwr(sk, 0);
-	tp->high_seq = tp->frto_highmark; /* Smoother w/o this? - ij */
 }
 
-static void tcp_undo_spur_to_response(struct sock *sk)
+static void tcp_undo_spur_to_response(struct sock *sk, int flag)
 {
-	tcp_undo_cwr(sk, 1);
+	if (flag&FLAG_ECE)
+		tcp_ratehalving_spur_to_response(sk);
+	else
+		tcp_undo_cwr(sk, 1);
 }
 
@@ -2681,7 +2682,7 @@ static int tcp_process_frto(struct sock *sk, u32 prior_snd_una, int flag)
 	} else /* frto_counter == 2 */ {
 		switch (sysctl_tcp_frto_response) {
 		case 2:
-			tcp_undo_spur_to_response(sk);
+			tcp_undo_spur_to_response(sk, flag);
 			break;
 		case 1:
 			tcp_conservative_spur_to_response(tp);
...