1. 06 Jul, 2016 1 commit
    • rxrpc: Fix some sparse errors · 88b99d0b
      David Howells authored
      Fix the following sparse errors:
      
      ../net/rxrpc/conn_object.c:77:17: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:77:17:    expected restricted __be32 [usertype] call_id
      ../net/rxrpc/conn_object.c:77:17:    got unsigned int [unsigned] [usertype] call_id
      ../net/rxrpc/conn_object.c:84:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:86:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:357:15: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:357:15:    expected restricted __be32 [usertype] epoch
      ../net/rxrpc/conn_object.c:357:15:    got unsigned int [unsigned] [usertype] epoch
      ../net/rxrpc/conn_object.c:369:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:371:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:411:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:413:26: warning: restricted __be32 degrades to integer
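      The pattern sparse is flagging can be shown in a small userspace sketch (this is illustrative, not the actual kernel patch): sparse's `__be32` annotation marks a value as big-endian wire data, and mixing it with host-order integers requires an explicit conversion at each boundary.

      ```c
      #include <arpa/inet.h>
      #include <assert.h>
      #include <stdint.h>

      /* Userspace sketch: sparse treats __be32 as a distinct "restricted"
       * type; assigning a host-order integer to it, or comparing the two
       * directly, produces the warnings above.  The fix is an explicit
       * htonl()/ntohl() at each conversion point. */
      typedef uint32_t be32;               /* stand-in for the kernel's __be32 */

      int main(void)
      {
          uint32_t call_id = 0x12345678;       /* host order */
          be32 wire_call_id = htonl(call_id);  /* explicit conversion */

          /* comparisons must happen in a single byte-order domain */
          assert(ntohl(wire_call_id) == call_id);
          return 0;
      }
      ```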
      Signed-off-by: David Howells <dhowells@redhat.com>
  2. 01 Jul, 2016 1 commit
    • rxrpc: Fix processing of authenticated/encrypted jumbo packets · ac5d2683
      David Howells authored
      When a jumbo packet is being split up and processed, the crypto checksum
      for each split-out packet is in the jumbo header and needs placing in the
      reconstructed packet header.
      
      When the code was changed to keep the stored copy of the packet header in
      host byte order, this reconstruction was missed.
      
      Found with sparse with CF=-D__CHECK_ENDIAN__:
      
          ../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
          ../net/rxrpc/input.c:479:33:    expected unsigned short [unsigned] [usertype] _rsvd
          ../net/rxrpc/input.c:479:33:    got restricted __be16 [addressable] [usertype] _rsvd
      
      Fixes: 0d12f8a4 ("rxrpc: Keep the skb private record of the Rx header in host byte order")
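      The missed step amounts to converting the sub-packet checksum carried in the on-wire jumbo header back to host order when rebuilding each header. A hedged userspace sketch of that shape (struct and field names are illustrative, not the exact kernel ones):

      ```c
      #include <arpa/inet.h>
      #include <assert.h>
      #include <stdint.h>

      /* The stored copy of the packet header is kept in host byte order,
       * so a 16-bit field taken from the big-endian jumbo header must go
       * through ntohs() during reconstruction. */
      struct jumbo_hdr { uint16_t _rsvd; };   /* wire format: big-endian */
      struct host_hdr  { uint16_t cksum; };   /* host byte order */

      int main(void)
      {
          struct jumbo_hdr jhdr = { ._rsvd = htons(0xabcd) }; /* as received */
          struct host_hdr hdr;

          hdr.cksum = ntohs(jhdr._rsvd);  /* the conversion the old code missed */
          assert(hdr.cksum == 0xabcd);
          return 0;
      }
      ```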
      Signed-off-by: David Howells <dhowells@redhat.com>
  3. 27 Jun, 2016 30 commits
  4. 26 Jun, 2016 2 commits
    • Merge tag 'rxrpc-rewrite-20160622-2' of... · 2b7c4f7a
      David S. Miller authored
      Merge tag 'rxrpc-rewrite-20160622-2' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
      
      David Howells says:
      
      ====================
      rxrpc: Get rid of conn bundle and transport structs
      
      Here's the next part of the AF_RXRPC rewrite.  The primary purpose of this
      set is to get rid of the rxrpc_conn_bundle and rxrpc_transport structs.
      This simplifies things for future development of the connection handling.
      
      To this end, the following significant changes are made:
      
       (1) The rxrpc_connection struct is given pointers to the local and peer
           endpoints, inside the rxrpc_conn_parameters struct.  Pointers to the
           transport's copy of these pointers are then redirected to the
           connection struct.
      
       (2) Exclusive connection handling is fixed.  Exclusive connections should
           do just one call and then be retired.  They are used in security
           negotiations and, I believe, the idea is to avoid reuse of negotiated
           security contexts.
      
           The current code uses a single connection per socket and makes all
           the calls over it.  With this change, a new connection is obtained
           for each call made.
      
       (3) A new sendmsg() control message marker is added to make individual
           calls operate over exclusive connections.  This should be used in
           future in preference to the sockopt that marks a socket as "exclusive
           connection".
      
       (4) IDs for client connections initiated by a machine are now allocated
           from a global pool using the IDR facility and are unique across all
           client connections, no matter their destination.  The IDR facility is
           then used to look up a connection on the connection ID alone.  Other
           parameters are then verified afterwards.
      
           Note that the IDR facility may use a lot of memory if the IDs it holds
           are widely scattered.  Given this, in a future commit, client
           connections will be retired if they are more than a certain distance
           from the last ID allocated.
      
           The client epoch is advanced by 1 each time the client ID counter
           wraps.  Connections outside the current epoch will also be retired in
           a future commit.
      
       (5) The connection bundle concept is removed and the client connection
           tree is moved into the local endpoint.  The queue for waiting for a
           call channel is moved to the rxrpc_connection struct as there can only
           be one connection for any particular key going to any particular peer
           now.
      
       (6) The rxrpc_transport struct is removed and the service connection tree
           is moved into the peer struct.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
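      The scheme in point (4) of the merge description above can be sketched as a userspace analogue, with a flat table standing in for the kernel's IDR facility (all names here are illustrative):

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Client connection IDs come from one global pool, so a connection
       * can be looked up by ID alone; the other parameters (peer, key,
       * ...) are verified only after the lookup succeeds. */
      struct conn { uint32_t id; uint32_t peer; };

      #define POOL_SIZE 64
      static struct conn *pool[POOL_SIZE];
      static uint32_t next_id;

      static uint32_t conn_id_alloc(struct conn *c)
      {
          uint32_t id = next_id++;        /* unique across all destinations */
          c->id = id;
          pool[id % POOL_SIZE] = c;
          return id;
      }

      static struct conn *conn_lookup(uint32_t id, uint32_t peer)
      {
          struct conn *c = pool[id % POOL_SIZE];
          if (!c || c->id != id)
              return NULL;
          return (c->peer == peer) ? c : NULL; /* verify params afterwards */
      }

      int main(void)
      {
          struct conn a = { .peer = 7 }, b = { .peer = 9 };
          uint32_t ia = conn_id_alloc(&a), ib = conn_id_alloc(&b);

          assert(conn_lookup(ia, 7) == &a);
          assert(conn_lookup(ib, 9) == &b);
          assert(conn_lookup(ia, 9) == NULL); /* wrong peer is rejected */
          return 0;
      }
      ```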
    • net: stmmac: dwmac-rk: add rk3228-specific data · e7ffd812
      Xing Zheng authored
      Add constants and callback functions for the dwmac on rk3228/rk3229 SoCs.
      As can be seen, the base structure is the same; only the registers and
      the bits in them moved slightly.
      Signed-off-by: Xing Zheng <zhengxing@rock-chips.com>
      Reviewed-by: Heiko Stuebner <heiko@sntech.de>
      Acked-by: Rob Herring <robh@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e7ffd812
  5. 25 Jun, 2016 6 commits
    • Merge branch 'net-sched-bulk-dequeue' · e83e5bb1
      David S. Miller authored
      Eric Dumazet says:
      
      ====================
      net_sched: bulk dequeue and deferred drops
      
      The first patch adds an additional parameter to the qdisc ->enqueue()
      method so that drops can be done outside of the critical section
      (after locks are released).
      
      Then fq_codel gets a small optimization to reduce the number of cache
      line misses during a drop event
      (possibly accumulating hundreds of packets to be freed).
      
      A small htb change exports the backlog in class dumps.
      
      Final patch adds bulk dequeue to qdiscs that were lacking this feature.
      
      This series brings a nice qdisc performance increase (more than 80 %
      in some cases).
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: generalize bulk dequeue · 4d202a0d
      Eric Dumazet authored
      When qdisc bulk dequeue was added in linux-3.18 (commit
      5772e9a3 "qdisc: bulk dequeue support for qdiscs
      with TCQ_F_ONETXQUEUE"), it was constrained to some
      specific qdiscs.
      
      With some extra care, we can extend this to all qdiscs,
      so that typical traffic shaping solutions can benefit from
      small batches (8 packets in this patch).
      
      For example, HTB is often used on multi-queue devices, and
      bonding/team devices are multi-queue as well.
      
      Idea is to bulk-dequeue packets mapping to the same transmit queue.
      
      This brings a 35 to 80 % performance increase in an HTB setup
      under pressure on a bonding setup:
      
      1) NUMA node contention :   610,000 pps -> 1,110,000 pps
      2) No node contention   : 1,380,000 pps -> 1,930,000 pps
      
      Now we should work to add batches on the enqueue() side ;)
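      The bulk-dequeue idea can be sketched in userspace as follows (constants and names are illustrative): keep dequeuing, but only batch packets that map to the same transmit queue, stopping at the first packet that maps elsewhere or at the batch limit.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Sketch: collect a run of consecutive packets that all map to the
       * head packet's txq, up to a small limit (8 in the patch), and
       * detach that run from the qdisc's queue as one batch. */
      #define BATCH 8

      struct pkt { int txq; struct pkt *next; };

      static struct pkt *bulk_dequeue(struct pkt **queue)
      {
          struct pkt *head = *queue, *tail = NULL;
          int n = 0;

          if (!head)
              return NULL;
          while (*queue && (*queue)->txq == head->txq && n < BATCH) {
              tail = *queue;
              *queue = tail->next;
              n++;
          }
          if (tail)
              tail->next = NULL;   /* detach the batch from the queue */
          return head;
      }

      int main(void)
      {
          struct pkt c = { .txq = 1, .next = NULL };
          struct pkt b = { .txq = 0, .next = &c };
          struct pkt a = { .txq = 0, .next = &b };
          struct pkt *q = &a;

          struct pkt *batch = bulk_dequeue(&q);
          assert(batch == &a && batch->next == &b && b.next == NULL);
          assert(q == &c);   /* packet for another txq stays queued */
          return 0;
      }
      ```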
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: sch_htb: export class backlog in dumps · 338ed9b4
      Eric Dumazet authored
      We already get the child qdisc qlen; we can also get its backlog
      so that class dumps can report it.
      
      Also replace qstats by a single drop counter, but move it to
      a separate cache line so that drops do not dirty useful cache lines.
      
      Tested:
      
      $ tc -s cl sh dev eth0
      class htb 1:1 root leaf 3: prio 0 rate 1Gbit ceil 1Gbit burst 500000b cburst 500000b
       Sent 2183346912 bytes 9021815 pkt (dropped 2340774, overlimits 0 requeues 0)
       rate 1001Mbit 517543pps backlog 120758b 499p requeues 0
       lended: 9021770 borrowed: 0 giants: 0
       tokens: 9 ctokens: 9
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: fq_codel: cache skb->truesize into skb->cb · 008830bc
      Eric Dumazet authored
      Now that we defer skb drops, it makes sense to keep a copy
      of skb->truesize in struct codel_skb_cb to avoid one
      cache line miss per dropped skb in fq_codel_drop(),
      to reduce latencies a bit further.
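      A minimal userspace sketch of the caching idea (the struct layout here is illustrative, not the kernel's): the value is written into the per-skb control block at enqueue time, so the drop path only touches the cb's cache line.

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Simplified stand-ins for sk_buff and the codel control block. */
      struct sk_buff_lite { unsigned int truesize; char cb[48]; };
      struct codel_skb_cb { unsigned int mem_usage; }; /* cached truesize */

      static struct codel_skb_cb *get_codel_cb(struct sk_buff_lite *skb)
      {
          return (struct codel_skb_cb *)skb->cb;
      }

      int main(void)
      {
          struct sk_buff_lite skb = { .truesize = 768 };

          get_codel_cb(&skb)->mem_usage = skb.truesize; /* at enqueue time */
          /* at drop time, only the cb copy is read */
          assert(get_codel_cb(&skb)->mem_usage == 768);
          return 0;
      }
      ```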
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: drop packets after root qdisc lock is released · 520ac30f
      Eric Dumazet authored
      Qdisc performance suffers when packets are dropped at enqueue()
      time because drops (kfree_skb()) are done while qdisc lock is held,
      delaying a dequeue() draining the queue.
      
      Nominal throughput can be reduced by 50 % when this happens,
      at a time when we would like the dequeue() to proceed as fast as possible.
      
      Even FQ is vulnerable to this problem, while one of FQ goals was
      to provide some flow isolation.
      
      This patch adds a 'struct sk_buff **to_free' parameter to all
      qdisc->enqueue() methods and to the qdisc_drop() helper.
      
      I measured a performance increase of up to 12 %, but this patch
      is mainly a prerequisite so that future batches in enqueue() can fly.
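      The 'to_free' pattern can be sketched in userspace as follows (names are illustrative): enqueue() chains dropped packets onto a caller-provided list instead of freeing them under the qdisc lock, and the caller frees the chain after releasing the lock.

      ```c
      #include <assert.h>
      #include <stddef.h>

      struct pkt { struct pkt *next; int freed; };

      /* Under the lock: chain the drop instead of freeing immediately. */
      static void drop_defer(struct pkt *p, struct pkt **to_free)
      {
          p->next = *to_free;
          *to_free = p;
      }

      /* After the lock is released: do the expensive freeing. */
      static void free_deferred(struct pkt *to_free)
      {
          while (to_free) {
              struct pkt *next = to_free->next;
              to_free->freed = 1;   /* stands in for kfree_skb() */
              to_free = next;
          }
      }

      int main(void)
      {
          struct pkt a = { 0 }, b = { 0 };
          struct pkt *to_free = NULL;

          /* lock held: queue full, both packets "dropped" cheaply */
          drop_defer(&a, &to_free);
          drop_defer(&b, &to_free);
          /* lock released: actual freeing happens here */
          free_deferred(to_free);
          assert(a.freed && b.freed);
          return 0;
      }
      ```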
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'liquidio-next' · 36195d86
      David S. Miller authored
      Raghu Vatsavayi says:
      
      ====================
      liquidio: updates and bug fixes
      
      Please consider the following patch series of liquidio bug fixes
      and updates on top of net-next.  The patches should be applied
      in order, as some of them depend on earlier patches in the series.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>