1. 04 Jun, 2008 7 commits
    • tcp: fix skb vs fack_count out-of-sync condition · a6604471
      Ilpo Järvinen authored
      This bug is able to corrupt fackets_out in very rare cases.
      In order for this to cause corruption:
        1) DSACK in the middle of previous SACK block must be generated.
        2) In order to take that particular branch, part or all of the
           DSACKed segment must already be SACKed so that we have that
           in cache in the first place.
        3) The new info must reach high enough ("top" enough) that
           fackets_out will be updated on this iteration.
      ...then fack_count is updated while skb isn't; we then walk that
      particular segment again, updating fack_count twice for a single
      skb, and finally that value is assigned to fackets_out by
      tcp_sacktag_one.
      
      It is safe to call tcp_sacktag_one just once for a segment (at
      DSACK), no need to call again for plain SACK.
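      
      A rough sketch of the fix's shape (illustrative only; the
      parameter list is abbreviated from the 2.6.26-era tcp_input.c and
      may not match exactly): tcp_sacktag_walk() advances fack_count
      through an output parameter but reports the new walk position via
      its return value, so that return value must not be discarded.
      
          /* The walker bumps fack_count for every skb it tags and
           * hands back the position where it stopped. */
          skb = tcp_sacktag_walk(skb, sk, NULL,
                                 next_dup->start_seq, next_dup->end_seq,
                                 1, &fack_count, &reord, &flag);
          /* Without the "skb =", fack_count has already counted the
           * DSACKed segment while skb still points at it, so the outer
           * walk tags the same skb again and its count is added twice
           * before being copied into tp->fackets_out. */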
      
      The potential problems from the miscount are limited to premature
      entry into recovery and an inflated reordering metric (which could
      even cancel each other out in the luckiest scenarios :-)). Both
      are quite insignificant even in the worst case, and code also
      exists to reset them (fackets_out once sacked_out becomes zero,
      and the reordering metric on RTO).
      
      This has been reported by a number of people; because it occurred
      quite rarely, it has been very evasive. Andy Furniss was able to
      get it to occur a couple of times, so a bit more info about the
      problem was collected with a debug patch, though it still required
      a lot of checking around. Thanks also to the others who tried to
      help here.
      
      This is listed as Bugzilla #10346. The bug was introduced by me in
      commit 68f8353b ([TCP]: Rewrite SACK block processing &
      sack_recv_cache use); I probably thought back then that the entry
      needed to be scanned twice, or didn't dare to make it go through
      just once there. Going through twice would have required restoring
      fack_count after the walk, but as noted above, I chose to drop the
      additional walk step altogether here.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sunhme: Clean up deprecated save_and_cli and restore_flags calls. · c03e05d8
      Mark Asselstine authored
      Use local_irq_save and local_irq_restore rather than the
      deprecated save_and_cli and restore_flags calls.
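      
      The conversion is mechanical; a representative before/after (not
      the exact sunhme.c hunks):
      
          unsigned long flags;
      
          /* Deprecated: save flags and disable interrupts. */
          save_and_cli(flags);
          /* ... critical section ... */
          restore_flags(flags);
      
          /* Modern replacement: save flags and disable IRQs on the
           * local CPU only. */
          local_irq_save(flags);
          /* ... critical section ... */
          local_irq_restore(flags);
      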
      Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xfrm: xfrm_algo: correct usage of RIPEMD-160 · a13366c6
      Adrian-Ken Rueegsegger authored
      This patch fixes the usage of RIPEMD-160 in xfrm_algo, which in
      turn allows hmac(rmd160) to be used as an authentication mechanism
      in IPsec ESP and AH (see RFC 2857).
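      
      The underlying issue was a naming mismatch: the kernel crypto API
      registers RIPEMD-160 as "rmd160", so an authentication-algorithm
      entry keyed on "ripemd160" could never be instantiated. A sketch
      of the corrected aalg_list entry in net/xfrm/xfrm_algo.c (the ICV
      sizes shown follow RFC 2857's HMAC-RIPEMD-160-96 and are for
      illustration):
      
          {
                  .name = "hmac(rmd160)", /* crypto API's name for RIPEMD-160 */
      
                  .uinfo = {
                          .auth = {
                                  .icv_truncbits = 96,  /* HMAC-RIPEMD-160-96 */
                                  .icv_fullbits = 160,
                          }
                  },
                  ...                     /* remaining fields unchanged */
          },
      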
      Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: Fix inconsistency source (CA_Open only when !tcp_left_out(tp)) · 8aca6cb1
      Ilpo Järvinen authored
      It is possible that this skip path leaves TCP in an invalid state
      where ca_state remains CA_Open while some segments have already
      come into sacked_out. If the next valid ACK doesn't contain new
      SACK information, TCP fails to enter tcp_fastretrans_alert().
      Thus at least high_seq is set incorrectly to a too-high seqno,
      because some new data segments could be sent in between (and also,
      limited transmit is not being correctly invoked there). Reordering
      in both directions can easily cause this situation to occur.
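      
      For reference, tcp_left_out() from include/net/tcp.h counts the
      segments the sender considers to have left the network; the
      subject line's invariant is that ca_state may stay CA_Open only
      while this is zero:
      
          /* SACKed or presumed-lost segments no longer in flight. */
          static inline unsigned int tcp_left_out(const struct tcp_sock *tp)
          {
                  return tp->sacked_out + tp->lost_out;
          }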
      
      I guess we would want to use tcp_moderate_cwnd(tp) there as well,
      since it may be possible to use this to trigger an oversized burst
      to the network by sending an old ACK with a huge amount of SACK
      info, but I'm a bit unsure about its effects (mainly on
      FlightSize), so to be on the safe side I just fixed it minimally
      for now to keep TCP's state consistent (obviously, such nasty ACKs
      have been possible all along). It seems that FlightSize is already
      underestimated by some amount, though, so in the long term we
      might want to trigger recovery there too, if appropriate, to make
      the FlightSize calculation resemble reality at the time the losses
      were discovered (but such a change scares me too much now, and it
      requires some more thinking anyway about how to do it, as it
      likely involves some code shuffling).
      
      This bug was found by Brian Vowell while running my TCP debug
      patch to find the cause of another TCP issue (the fackets_out
      miscount).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netfilter: nf_conntrack_ipv6: fix inconsistent lock state in nf_ct_frag6_gather() · b9c69896
      Jarek Poplawski authored
      [   63.531438] =================================
      [   63.531520] [ INFO: inconsistent lock state ]
      [   63.531520] 2.6.26-rc4 #7
      [   63.531520] ---------------------------------
      [   63.531520] inconsistent {softirq-on-W} -> {in-softirq-W} usage.
      [   63.531520] tcpsic6/3864 [HC0[0]:SC1[1]:HE1:SE0] takes:
      [   63.531520]  (&q->lock#2){-+..}, at: [<c07175b0>] ipv6_frag_rcv+0xd0/0xbd0
      [   63.531520] {softirq-on-W} state was registered at:
      [   63.531520]   [<c0143bba>] __lock_acquire+0x3aa/0x1080
      [   63.531520]   [<c0144906>] lock_acquire+0x76/0xa0
      [   63.531520]   [<c07a8f0b>] _spin_lock+0x2b/0x40
      [   63.531520]   [<c0727636>] nf_ct_frag6_gather+0x3f6/0x910
       ...
      
      According to this and another similar lockdep report, inet_fragment
      locks are taken from nf_ct_frag6_gather() with softirqs enabled;
      since these locks are mainly used in softirq context, disabling
      BHs is necessary here.
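      
      A sketch of the kind of change this implies in nf_ct_frag6_gather()
      (illustrative; the lock name follows the trace above):
      
          /* Take the fragment-queue lock with BHs disabled, because the
           * same lock is taken from softirq context (ipv6_frag_rcv). */
          spin_lock_bh(&fq->q.lock);          /* was: spin_lock() */
          /* ... queue this fragment ... */
          spin_unlock_bh(&fq->q.lock);
      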
      Reported-and-tested-by: Eric Sesterhenn <snakebyte@gmx.de>
      Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netfilter: xt_connlimit: fix accounting when an RST packet is received in ESTABLISHED state · d2ee3f2c
      Dong Wei authored
      In the xt_connlimit match module, the counter for an IP is
      decreased when a TCP packet goes through the chain with
      ip_conntrack state TW. It is natural for the server and client to
      close the socket with a FIN packet, but when the client/server
      closes the socket with an RST packet (using SO_LINGER), the
      counter for that connection still exists. The following patch,
      based on linux-2.6.25.4, fixes this.
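      
      A hedged sketch of the idea: xt_connlimit's test for a finished
      connection must treat the CLOSE state produced by an RST the same
      as the TIME_WAIT state produced by a FIN teardown (helper shape is
      illustrative, not the exact hunk):
      
          static inline bool already_closed(const struct nf_conn *conn)
          {
                  if (nf_ct_protonum(conn) == IPPROTO_TCP)
                          /* A FIN teardown ends in TIME_WAIT, an RST
                           * teardown in CLOSE; release the counted
                           * slot either way. */
                          return conn->proto.tcp.state == TCP_CONNTRACK_TIME_WAIT ||
                                 conn->proto.tcp.state == TCP_CONNTRACK_CLOSE;
                  return false;
          }
      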
      Signed-off-by: Dong Wei <dwei.zh@gmail.com>
      Acked-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 03 Jun, 2008 10 commits
  3. 02 Jun, 2008 7 commits
  4. 31 May, 2008 16 commits