1. 25 Nov, 2008 25 commits
    • Revert "hso: Fix free of mutexes still in use." · ab153d84
      David S. Miller authored
      This reverts commit 52429eb2.

      On request from Alan Cox.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Revert "hso: Add TIOCM ioctl handling." · cd90ee17
      David S. Miller authored
      This reverts commit 7ea3a9ad.

      On request from Alan Cox.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • fb7e0674
      Alexey Dobriyan authored
    • ah4/ah6: remove useless NULL assignments · 6daad372
      Alexey Dobriyan authored
      struct will be kfreed in a moment, so...
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
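      The pattern being removed is easy to illustrate; a minimal, self-contained sketch
      (plain malloc/free and made-up field names, not the actual ah4/ah6 code): clearing
      members of a structure that is freed on the very next line has no effect.

        #include <stdlib.h>

        struct ah_data {
            void *work_icv;
            void *tfm;
        };

        static void ah_destroy(struct ah_data *ahp)
        {
            free(ahp->work_icv);
            ahp->work_icv = NULL;   /* useless: the whole struct is freed below */
            ahp->tfm = NULL;        /* likewise */
            free(ahp);              /* ahp is gone, the NULL stores bought nothing */
        }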
    • igb: loopback bits not correctly cleared from RCTL register · 69d728ba
      Alexander Duyck authored
      This change forces the bits to 0 by using an &= operation with an inverted
      mask of all options instead of using an |= with a value of 0.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
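      The difference between the two operations is easy to demonstrate in isolation;
      a minimal sketch (the register value and mask name are illustrative, not the
      real igb defines):

        #include <stdint.h>
        #include <stdio.h>

        #define RCTL_LBM_MASK 0x000000C0u       /* hypothetical loopback-mode bits */

        int main(void)
        {
            uint32_t rctl = 0x000000C0u;        /* loopback bits currently set */

            rctl |= 0;                          /* no-op: OR with 0 clears nothing */
            printf("after |= 0    : 0x%08x\n", rctl);

            rctl &= ~RCTL_LBM_MASK;             /* AND with inverted mask forces them to 0 */
            printf("after &= ~mask: 0x%08x\n", rctl);
            return 0;
        }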
    • igb: remove unneeded bit reference when enabling jumbo frames · 9b07f3d3
      Alexander Duyck authored
      There is a reference to a Buffer Size extension bit that is not needed by
      82575/82576 hardware.  Since it is not needed, it should be removed from the
      code.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • DCB: fix kconfig option · 7a6b6f51
      Jeff Kirsher authored
      Since the netlink option is necessary for DCB to actually be useful,
      simplify the Kconfig option.  In addition, add useful help text for the
      Kconfig option.
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • aoe: remove private mac address format function · 411c41ee
      Harvey Harrison authored
      Add %pm to omit the colons when printing a MAC address.
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
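      A sketch of the kind of call a driver can make once the extension exists
      (kernel-style snippet for illustration; the helper name is made up): %pm prints
      the six bytes as bare hex digits, %pM keeps the colons.

        #include <linux/kernel.h>       /* printk */

        static void report_mac(const unsigned char *mac)    /* 6-byte address */
        {
            /* %pm: 12 hex digits with no separators, %pM: colon-separated form */
            printk(KERN_INFO "aoe: mac %pm (%pM)\n", mac, mac);
        }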
    • hso: Hook up ->reset_resume · 9c8f92ae
      Denis Joseph Barrow authored
      Made the usb_driver's reset_resume function point to hso_resume.  This
      fixes problems when a USB reset is done while the network interface
      is left idle for a few minutes.  Possibly reset_resume should
      initialise the hardware more, but this works in the common case.
      Signed-off-by: Denis Joseph Barrow <D.Barow@option.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
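      The hookup itself is a one-field change in the driver's usb_driver definition;
      a sketch with the field list abbreviated (callback names follow the description
      above, not necessarily the exact hso.c layout):

        static struct usb_driver hso_driver = {
            .name         = "hso",
            .probe        = hso_probe,
            .disconnect   = hso_disconnect,
            .suspend      = hso_suspend,
            .resume       = hso_resume,
            .reset_resume = hso_resume,   /* after a USB reset, reuse the normal resume path */
        };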
    • hso: Add TIOCM ioctl handling. · 7ea3a9ad
      Denis Joseph Barrow authored
      Makes the TIOCM ioctls for Data Carrier Detect and related functions
      work like drivers/serial/serial_core.c does; potentially needed
      for pppd and similar user programs.
      Signed-off-by: Denis Joseph Barrow <D.Barow@option.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
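      For reference, this is the kind of user-space TIOCM usage (pppd-style) the
      ioctls serve; a minimal, self-contained sketch (the device node name is
      illustrative):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("/dev/ttyHS0", O_RDWR | O_NOCTTY);
            int bits;

            if (fd < 0 || ioctl(fd, TIOCMGET, &bits) < 0) {
                perror("TIOCMGET");
                return 1;
            }
            printf("DCD %s, CTS %s\n",
                   (bits & TIOCM_CAR) ? "on" : "off",
                   (bits & TIOCM_CTS) ? "on" : "off");
            close(fd);
            return 0;
        }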
    • hso: Fix free of mutexes still in use. · 52429eb2
      Denis Joseph Barrow authored
      A new structure, hso_mutex_table, had to be declared statically and
      used, because the hso_device mutex (mutex_lock(&serial->parent->mutex)
      etc.) is freed in hso_serial_open and hso_serial_close by kref_put
      while the mutex is still in use.

      This is a substantial change but should make the driver much more stable.
      Signed-off-by: Denis Joseph Barrow <D.Barow@option.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • hso: Fix URB submission -EINVAL. · 89930b7b
      Denis Joseph Barrow authored
      Added a check for IFF_UP in hso_resume.  This should eliminate -EINVAL (-22)
      errors caused by URBs being submitted twice, once by hso_resume
      and once in hso_net_open, when USB suspend/resume power saving is enabled.
      Signed-off-by: Denis Joseph Barrow <D.Barow@option.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
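      The guard amounts to skipping URB resubmission for interfaces that are not up,
      leaving hso_net_open as the only submitter in that case; a sketch (structure and
      helper names follow the description, not the exact hso.c code):

        static int hso_resume_netdev(struct hso_net *odev)
        {
            /* If the interface is down, hso_net_open will submit the RX URBs
             * itself when it runs; submitting them here too yields -EINVAL. */
            if (!(odev->net->flags & IFF_UP))
                return 0;

            return hso_submit_rx_urbs(odev);    /* hypothetical helper */
        }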
    • hso: Fix crashes on close. · 4a3e8181
      Denis Joseph Barrow authored
      Moved serial_open_count in hso_serial_open to prevent crashes owing to
      the serial structure being made NULL when hso_serial_close is called
      even though hso_serial_open returned -ENODEV; Alan Cox pointed out
      that this happens.  Also put a sanity check into hso_serial_close
      for a valid serial structure, which should prevent the most
      reproducible crash in the driver: the hso device being disconnected
      while in use.
      Signed-off-by: Denis Joseph Barrow <D.Barow@option.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
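      The close-path sanity check is essentially a NULL test before touching the
      serial state; a simplified sketch (the real hso_serial_close does considerably
      more):

        static void hso_serial_close(struct tty_struct *tty, struct file *filp)
        {
            struct hso_serial *serial = tty->driver_data;

            /* The device may have been disconnected while the tty was open,
             * or open may have failed with -ENODEV before setting things up. */
            if (!serial) {
                printk(KERN_ERR "hso: serial == NULL in %s\n", __func__);
                return;
            }

            /* ... normal close handling ... */
        }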
    • bab04c3a
      Denis Joseph Barrow authored
    • netdev: add HAVE_NET_DEVICE_OPS · 47fd5b83
      Stephen Hemminger authored
      As a concession to vendors who have to deal with one source for different
      kernel versions, add HAVE_NET_DEVICE_OPS so they don't end up hard
      coding ifdefs against the kernel version.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
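      In an out-of-tree driver the feature test can then replace a kernel-version
      compare; a sketch (the foo_* names are placeholders):

        #include <linux/netdevice.h>

        #ifdef HAVE_NET_DEVICE_OPS
        static const struct net_device_ops foo_netdev_ops = {
            .ndo_open       = foo_open,
            .ndo_stop       = foo_stop,
            .ndo_start_xmit = foo_start_xmit,
        };
        #endif

        static void foo_setup(struct net_device *dev)
        {
        #ifdef HAVE_NET_DEVICE_OPS
            dev->netdev_ops = &foo_netdev_ops;
        #else
            /* older kernels: the callbacks live directly in net_device */
            dev->open            = foo_open;
            dev->stop            = foo_stop;
            dev->hard_start_xmit = foo_start_xmit;
        #endif
        }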
    • tcp: handle shift/merge of cloned skbs too · 0ace2856
      Ilpo Järvinen authored
      This caused me to repeatedly get:
      
        tcpdump: pcap_loop: recvfrom: Bad address
      
      This happens occasionally when I tcpdump my for-looped test xfers:
        while [ : ]; do echo -n "$(date '+%s.%N') "; ./sendfile; sleep 20; done
      
      Rest of the relevant commands:
        ethtool -K eth0 tso off
        tc qdisc add dev eth0 root netem drop 4%
        tcpdump -n -s0 -i eth0 -w sacklog.all
      
      Running net-next under kvm, the connection goes to the same host
      (basically just out of kvm).  The connection itself works OK and data
      gets sent without corruption even with a large number of tests, while
      tcpdump usually fails within fewer than 5 tests.
      
      Whether it only happens because of this change or not, I don't
      know for sure, but it's the only thing with which I've seen that
      error.  The non-cloned variant works without it for a much longer
      time.  I have yet to debug where the error actually comes from.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • 111cc8b9
      Ilpo Järvinen authored
    • tcp: Make shifting not clear the hints · 92ee76b6
      Ilpo Järvinen authored
      The earlier version was just a very basic one which was "playing
      it safe" by always clearing the hints.  However, clearing a hint is
      an extremely costly operation with large windows, so it must be
      avoided at all cost whenever possible; with shifting there is also
      a way to achieve not clearing them.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: Try to restore large SKBs while SACK processing · 832d11c5
      Ilpo Järvinen authored
      During SACK processing, most of the benefits of TSO are eaten by
      the SACK blocks that one-by-one fragment SKBs to MSS-sized chunks.
      Then we're in trouble when the cleanup work for them has to be done
      as a large cumulative ACK arrives.  Try to return to the pre-split
      state already while more and more SACK info gets discovered, by
      combining newly discovered SACK areas with the previous skb if
      that is SACKed as well.
      
      This approach has a number of benefits:
      
      1) The processing overhead is spread more equally over the RTT
      2) The write queue has fewer skbs to process (affects everything
         which has to walk the queue past the sacked areas)
      3) The write queue is consistent the whole time, so no other part
         of TCP has to be aware of this (this was not the case with
         some other approach that was, well, quite intrusive all
         around).
      4) clean_rtx_queue can release most of the pages using a single
         put_page instead of the previous PAGE_SIZE/mss+1 calls
      
      In case a hole is fully filled by the new SACK block, we attempt
      to combine the next skb too, which allows construction of skbs
      that are even larger than what TSO split them to, and it handles
      the hole-per-every-nth-segment patterns that often occur during
      slow start overshoot pretty nicely.  Though for this to be really
      useful, a retransmission would also have to get lost, since
      cumulative ACKs advance one hole at a time in the most typical case.

      TODO: handle upwards-only merging.  That should be rather easy
      when the segment is fully SACKed, but I'm leaving it as a future
      work item (it won't make a very large difference anyway since
      the current approach already covers quite a lot of normal cases).
      
      I was earlier thinking of some sophisticated way of tracking
      timestamps of the first and the last segment, but later on
      realized that it isn't really necessary at all to store the
      timestamp of the last segment.  The cases that can occur are
      basically either:
        1) ambiguous => no sensible measurement can be taken anyway
        2) non-ambiguous due to reordering => having the timestamp
           of the last segment there just skews things further off
           rather than doing any good, since the ack got triggered by
           one of the holes (besides some subtle issues that would make
           determining the right hole/skb an even harder problem).
           Anyway, it has nothing to do with this change then.
      
      I chose to route some abnormal-looking cases with goto noop;
      some could be handled differently (e.g., by stopping the walk
      at that skb).  In general, they either shouldn't happen at all
      or are rare enough to make no difference in practice.
      
      In theory this change (as a whole) could cause some macroscale
      regression (globally) because of cache misses that are taken over
      the round-trip time, but it very likely comes out ahead because of
      far fewer (local) cache misses for the other write queue walkers
      and for the big recovery-clearing cumulative ack.

      Worth noting is that these benefits would be very easy to get also
      without TSO/GSO being on, as long as the data is in pages so that
      we can merge them.  Currently I won't let that happen, because
      DSACK splitting at a fragment would mess up pcounts due to
      sk_can_gso in tcp_set_skb_tso_segs.  Once DSACK fragmentation is
      avoided, some of these conditions can be made less strict.
      
      TODO: I will probably have to convert the excessive pointer
      passing to struct sacktag_state... :-)
      
      My testing revealed that a considerable number of skbs couldn't
      be shifted because they were cloned (most likely still awaiting
      tx reclaim)...
      
      [The rest is left as future work instead, since I repeatedly got
      EFAULT from tcpdump's recvfrom when I added pskb_expand_head to
      deal with the clones, so I separated that into another, later
      patch]
      
      ...To counter that, I gave up on the fifth advantage:

      5) When growing the previous SACK block, fewer allocs for new skbs
         are done; basically a new alloc is needed only when a new hole
         is detected and when the previous skb runs out of frag space

      ...which now only happens if reclaim is fast enough to dispose of
      the clone before the SACK block comes in (the window is RTT long),
      otherwise we'll have to alloc some.
      
      With clones being handled I got these numbers (they will be somewhat
      worse without that), taken with fine-grained mibs:
      
                        TCPSackShifted 398
                         TCPSackMerged 877
                  TCPSackShiftFallback 320
            TCPSACKCOLLAPSEFALLBACKGSO 0
        TCPSACKCOLLAPSEFALLBACKSKBBITS 0
        TCPSACKCOLLAPSEFALLBACKSKBDATA 0
          TCPSACKCOLLAPSEFALLBACKBELOW 0
          TCPSACKCOLLAPSEFALLBACKFIRST 1
       TCPSACKCOLLAPSEFALLBACKPREVBITS 318
            TCPSACKCOLLAPSEFALLBACKMSS 1
         TCPSACKCOLLAPSEFALLBACKNOHEAD 0
          TCPSACKCOLLAPSEFALLBACKSHIFT 0
                TCPSACKCOLLAPSENOOPSEQ 0
        TCPSACKCOLLAPSENOOPSMALLPCOUNT 0
           TCPSACKCOLLAPSENOOPSMALLLEN 0
                   TCPSACKCOLLAPSEHOLE 12
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: make tcp_sacktag_one able to handle partial skb too · f58b22fd
      Ilpo Järvinen authored
      This is preparatory work for the SACK combiner patch, which may
      have to count TCP state changes for only a part of the skb,
      because it intentionally avoids splitting the skb into SACKed
      and not-SACKed parts.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: Make SACK code to split only at mss boundaries · adb92db8
      Ilpo Järvinen authored
      Sadly enough, this adds a possible divide, though we try to avoid
      it by checking for the one-mss common case.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
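      The divide-avoidance idea in isolation, as a minimal sketch (not the actual
      tcp_input.c code): round a split length down to an mss boundary, but handle the
      common single-mss case without a division.

        static unsigned int round_down_to_mss(unsigned int len, unsigned int mss)
        {
            if (len < mss)
                return 0;                   /* too small for a full segment */
            if (len < 2 * mss)
                return mss;                 /* common case: one mss, no divide */
            return (len / mss) * mss;       /* general case pays for the divide */
        }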
    • tcp: more aggressive skipping · e8bae275
      Ilpo Järvinen authored
      I already knew when rewriting the sacktag code that this condition
      was too conservative; change it now, since it prevents a lot of
      useless work (especially in the SACK shifter decision code
      that is being added by a later patch).  This shouldn't change
      anything really, just saves some processing regardless of the
      shifter.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: collapse more than two on retransmission · 4a17fc3a
      Ilpo Järvinen authored
      I had always thought that collapsing up to two at a time was an
      intentional decision to avoid excessive processing if 1-byte
      sized skbs are to be combined for a full MTU, and that consecutive
      retransmissions would make the size of the retransmittee
      double each round anyway, but some recent discussion made me
      understand that this was not the case.  Thus, make collapse work
      more and wait less.
      
      It would be possible to take advantage of the shifting
      machinery (added in a later patch) in the case of paged
      data, but that can be implemented on top of this change.

      The tcp_skb_is_last check is now provided by the loop.
      
      I tested a bit (ss-after-idle off, fill 4096x4096B xfer,
      10s sleep + 4096 x 1-byte writes while dropping them for
      a while with netem):
      
      . 16774097:16775545(1448) ack 1 win 46
      . 16775545:16776993(1448) ack 1 win 46
      . ack 16759617 win 2399
      P 16776993:16777217(224) ack 1 win 46
      . ack 16762513 win 2399
      . ack 16765409 win 2399
      . ack 16768305 win 2399
      . ack 16771201 win 2399
      . ack 16774097 win 2399
      . ack 16776993 win 2399
      . ack 16777217 win 2399
      P 16777217:16777257(40) ack 1 win 46
      . ack 16777257 win 2399
      P 16777257:16778705(1448) ack 1 win 46
      P 16778705:16780153(1448) ack 1 win 46
      FP 16780153:16781313(1160) ack 1 win 46
      . ack 16778705 win 2399
      . ack 16780153 win 2399
      F 1:1(0) ack 16781314 win 2399
      
      While without the drop-all period I get this:
      
      . 16773585:16775033(1448) ack 1 win 46
      . ack 16764897 win 9367
      . ack 16767793 win 9367
      . ack 16770689 win 9367
      . ack 16773585 win 9367
      . 16775033:16776481(1448) ack 1 win 46
      P 16776481:16777217(736) ack 1 win 46
      . ack 16776481 win 9367
      . ack 16777217 win 9367
      P 16777217:16777218(1) ack 1 win 46
      P 16777218:16777219(1) ack 1 win 46
      P 16777219:16777220(1) ack 1 win 46
        ...
      P 16777247:16777248(1) ack 1 win 46
      . ack 16777218 win 9367
      . ack 16777219 win 9367
        ...
      . ack 16777233 win 9367
      . ack 16777248 win 9367
      P 16777248:16778696(1448) ack 1 win 46
      P 16778696:16780144(1448) ack 1 win 46
      FP 16780144:16781313(1169) ack 1 win 46
      . ack 16780144 win 9367
      F 1:1(0) ack 16781314 win 9367
      
      The window seems to be 30-40 segments, which were successfully
      combined into: P 16777217:16777257(40) ack 1 win 46
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: avoid a pair of dst_hold()/dst_release() in ip_push_pending_frames() · a21bba94
      Eric Dumazet authored
      We can reduce pressure on the dst entry refcount that slows down the UDP
      transmit path on SMP machines.  This pressure is visible on RTP servers
      when delivering content to media gateways, especially big ones handling
      thousands of streams.  Several cpus send UDP frames to the same
      destination, hence use the same dst entry.

      This patch makes ip_push_pending_frames() steal the refcount its
      callers had to take when filling inet->cork.dst.

      This doesn't avoid all refcounting, but still gives speedups on SMP,
      on the UDP/RAW transmit path.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
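      The refcount-stealing pattern itself is generic; a toy, self-contained sketch
      with made-up names and a plain user-space atomic (not the kernel's dst code):
      the callee consumes the reference the caller already holds instead of taking an
      extra one that must be released later.

        #include <stdatomic.h>
        #include <stdlib.h>

        struct ref_obj {
            atomic_int refcnt;
        };

        static void obj_hold(struct ref_obj *o) { atomic_fetch_add(&o->refcnt, 1); }

        static void obj_release(struct ref_obj *o)
        {
            if (atomic_fetch_sub(&o->refcnt, 1) == 1)
                free(o);
        }

        /* Copy semantics: an extra hold here plus an extra release by the caller. */
        static struct ref_obj *attach_copy(struct ref_obj *cork_ref)
        {
            obj_hold(cork_ref);
            return cork_ref;
        }

        /* Steal semantics: the caller's reference is handed over, no hold/release pair. */
        static struct ref_obj *attach_steal(struct ref_obj **cork_ref)
        {
            struct ref_obj *o = *cork_ref;
            *cork_ref = NULL;               /* caller no longer owns a reference */
            return o;
        }

        int main(void)
        {
            struct ref_obj *cork = calloc(1, sizeof(*cork));
            struct ref_obj *attached;

            if (!cork)
                return 1;
            cork->refcnt = 1;               /* reference taken when the cork was filled */

            attached = attach_copy(cork);   /* refcnt 2: costs two extra atomic ops... */
            obj_release(attached);          /* ...once the extra reference is dropped */

            attached = attach_steal(&cork); /* refcnt stays 1: ownership transferred */
            obj_release(attached);          /* last reference: object is freed */
            return 0;
        }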
  2. 24 Nov, 2008 15 commits