1. 14 Apr, 2008 4 commits
  2. 13 Apr, 2008 10 commits
  3. 12 Apr, 2008 10 commits
  4. 11 Apr, 2008 1 commit
  5. 10 Apr, 2008 1 commit
    • [IPV4]: Fix byte value boundary check in do_ip_getsockopt(). · 951e07c9
      David S. Miller authored
      This fixes kernel bugzilla 10371.
      
      As reported by M.Piechaczek@osmosys.tv, if we try to grab a
      char-sized socket option value, as in:

        unsigned char ttl = 255;
        socklen_t     len = sizeof(ttl);
        setsockopt(socket, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, len);

        getsockopt(socket, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, &len);
      
      The ttl returned will be wrong on big-endian, and on both little-
      endian and big-endian the next three bytes in userspace are written
      with garbage.
      
      It's because of this test in do_ip_getsockopt():
      
      	if (len < sizeof(int) && len > 0 && val>=0 && val<255) {
      
      It should allow a 'val' of 255 to pass here, but it doesn't, so
      it copies a full 'int' back to userspace.

      On little-endian that writes the correct value into the location
      but spams the next three bytes in userspace.  On big-endian it
      writes the wrong value into the location and spams the next
      three bytes.
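
      A minimal sketch of the corrected byte-sized path follows; the
      surrounding put_user()/copy_to_user() handling is paraphrased
      from the existing do_ip_getsockopt() code and may not match the
      actual patch exactly:

      	if (len < sizeof(int) && len > 0 && val >= 0 && val <= 255) {
      		unsigned char ucval = (unsigned char)val;
      		len = 1;
      		if (put_user(len, optlen))
      			return -EFAULT;
      		if (copy_to_user(optval, &ucval, 1))
      			return -EFAULT;
      	}
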
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 09 Apr, 2008 9 commits
  7. 08 Apr, 2008 5 commits
    • [NET]: Undo code bloat in hot paths due to print_mac(). · 21f644f3
      David S. Miller authored
      If print_mac() is used inside of a pr_debug() the compiler
      can't see that the call is redundant, so it still performs it
      even if pr_debug() ends up being a nop.
      
      So don't use print_mac() in such cases in hot code paths;
      use MAC_FMT et al. instead.
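
      Illustratively (the dev and addr names below are just
      placeholders), the difference is between:

      	DECLARE_MAC_BUF(mac);
      	pr_debug("%s: mac %s\n", dev->name, print_mac(mac, addr));

      and:

      	pr_debug("%s: mac " MAC_FMT "\n", dev->name,
      		 addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);

      In the first form the print_mac() call survives even when
      pr_debug() compiles to nothing, because the compiler can't prove
      it has no side effects; in the second form the arguments are
      plain loads that the compiler can simply drop.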
      
      As noted by Joe Perches, pr_debug() could be modified to
      handle this better, but that is a change to an interface
      used by the entire kernel and thus needs to be validated
      carefully.  This is therefore the less risky fix for 2.6.25.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Don't allow FRTO to take place while MTU is being probed · 6adb4f73
      Ilpo Järvinen authored
      An MTU probe can cause trouble for FRTO because the normal
      packet ordering may be violated, allowing FRTO to make a wrong
      decision (it might not be that serious a threat to anything
      though). Thus it's safer not to run FRTO while an MTU probe is
      underway.
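
      A sketch of the check this implies; the field name follows the
      MTU probing state in inet_connection_sock as I understand it,
      and the actual patch may differ:

      	/* In tcp_use_frto(): don't start FRTO while an MTU probe
      	 * is still in flight.
      	 */
      	if (icsk->icsk_mtup.probe_size)
      		return 0;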
      
      It seems that the basic FRTO variant should also look for an
      skb at probe_seq.start to check whether that one is a
      retransmission, but I didn't implement that now (a plain
      seqno-in-window check isn't robust against wraparounds).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: tcp_simple_retransmit can cause S+L · 882bebaa
      Ilpo Järvinen authored
      This fixes Bugzilla #10384
      
      tcp_simple_retransmit increments L without any check whatsoever
      against overflowing S+L when Reno is in use.
      
      The simplest scenario I can currently think of is rather
      complex in practice (there might be some more straightforward
      cases though). I.e., if the MSS is reduced during MTU probing,
      it may end up marking everything lost, and if some duplicate
      ACKs arrived prior to that, sacked_out will be non-zero as
      well, leading to S+L > packets_out; tcp_clean_rtx_queue on the
      next cumulative ACK or tcp_fastretrans_alert on the next
      duplicate ACK will then fix the S counter.
      
      A more straightforward (but questionable) solution would be to
      just call tcp_reset_reno_sack() in tcp_simple_retransmit, but
      it would negatively impact the probe's retransmission, i.e.,
      the retransmission would not occur if some duplicate ACKs had
      arrived.
      
      So I had to add reno sacked_out resetting to the CA_Loss state
      when the first cumulative ACK arrives (this stale sacked_out
      might actually be the explanation for the reports of left_out
      overflows in kernels prior to 2.6.23 and the S+L overflow
      reports for 2.6.24). However, this alone won't be enough to fix
      kernels before 2.6.24 because it builds on top of commit
      1b6d427b ([TCP]: Reduce sacked_out with reno when purging
      write_queue) to keep sacked_out from overflowing.
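
      A rough sketch of the idea, reusing the existing tcp_is_reno()
      and tcp_reset_reno_sack() helpers (the exact placement in the
      real patch may differ):

      	/* On a cumulative ACK while in CA_Loss with a non-SACK peer,
      	 * any remembered sacked_out is stale, so clear it before it
      	 * can push S+L above packets_out.
      	 */
      	if (icsk->icsk_ca_state == TCP_CA_Loss && tcp_is_reno(tp))
      		tcp_reset_reno_sack(tp);
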
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Reported-by: Alessandro Suardi <alessandro.suardi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Fix NewReno's fast rexmit/recovery problems with GSOed skb · c137f3dd
      Ilpo Järvinen authored
      Fixes a long-standing bug which left NewReno recovery crippled.
      With GSO, the whole head skb was marked as LOST, which violates
      the NewReno procedure of marking only one packet, and it ended
      up breaking our TCP code by causing counter overflow, because
      our code was built on the assumption of a valid NewReno
      procedure. This manifested as triggering a WARN_ON for the
      overflow in a number of places.
      
      It seems a relatively safe alternative to just do nothing if
      tcp_fragment fails due to OOM, because another duplicate ACK is
      likely to be received soon and the fragmentation will be retried.
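
      A rough sketch of the approach for the non-SACK case, using the
      existing tcp_fragment() and tcp_skb_pcount() helpers (the exact
      code in the patch may differ):

      	/* Split a GSOed head skb so that only one MSS-sized segment
      	 * gets the LOST mark; if tcp_fragment() fails (e.g. OOM),
      	 * just give up and wait for the next duplicate ACK.
      	 */
      	if (tcp_skb_pcount(skb) > 1 &&
      	    tcp_fragment(sk, skb, tp->mss_cache, tp->mss_cache))
      		return;
      	TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
      	tp->lost_out += tcp_skb_pcount(skb);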
      
      Special thanks go to Soeren Sonnenburg <kernel@nn7.de> who was
      lucky enough to be able to reproduce this so that the warning
      for the overflow was hit. That's not as easy a task as it seems,
      even though this bug happens quite often, because the amount of
      outstanding data needed for the mismarkings to lead to an
      overflow is pretty significant.
      
      Because it's very late in the 2.6.25-rc cycle (if this even
      makes it in time), I didn't want to touch anything with SACK
      enabled here. Fragmenting might be useful for it as well, but
      that's more of a policy decision than a mandatory fix. Thus
      there's no need to rush, and we can postpone considering
      tcp_fragment with SACK to 2.6.26.
      
      In 2.6.24 and earlier, this very same bug existed, but the
      effect is slightly different because of small changes in the if
      conditions within the patch's context. With them, nothing got
      the LOST marker and thus no retransmissions happened.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [TCP]: Restore 2.6.24 mark_head_lost behavior for newreno/fack · 1b69d745
      Ilpo Järvinen authored
      The fast retransmission can be forced locally in the rfc3517
      branch of tcp_update_scoreboard instead of building such
      fragile constructs deeper down in tcp_mark_head_lost.
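
      Roughly the shape this gives tcp_update_scoreboard (a sketch
      only; the fast_rexmit argument and the branch bodies are
      paraphrased and may not match the patch exactly):

      	if (tcp_is_reno(tp)) {
      		tcp_mark_head_lost(sk, 1);
      	} else if (tcp_is_fack(tp)) {
      		int lost = tp->fackets_out - tp->reordering;
      		if (lost <= 0)
      			lost = 1;
      		tcp_mark_head_lost(sk, lost);
      	} else {
      		/* rfc3517 branch: force the fast-rexmit marking here */
      		int sacked_upto = tp->sacked_out - tp->reordering;
      		if (sacked_upto < fast_rexmit)
      			sacked_upto = fast_rexmit;
      		tcp_mark_head_lost(sk, sacked_upto);
      	}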
      
      This is necessary for the next patch, which must not have
      loopholes in the cnt > packets check. As one can notice,
      readability got some improvement out of this too :-).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>