Commit c2bf5ec2 authored by David S. Miller

Merge branch 'qdisc_bulk_dequeue'

Jesper Dangaard Brouer says:

====================
qdisc: bulk dequeue support

This patchset builds on DaveM's recent API changes to dev_hard_start_xmit()
to implement dequeue bulking in the qdisc layer.

Patch01: "qdisc: bulk dequeue support for qdiscs with TCQ_F_ONETXQUEUE"
 - Implement basic qdisc dequeue bulking
 - This time relying 100% on BQL limits; no magic safeguard constants

Patch02: "qdisc: dequeue bulking also pickup GSO/TSO packets"
 - Extend bulking to bulk several GSO/TSO packets
 - Separate patch, as it introduces a small regression; see the testing section.

We do have a patch03, which exports a userspace tunable as a BQL
tunable that can byte-cap or disable the bulking/bursting, but we
could not agree on it internally, so it is not being sent now.  We
basically strive to avoid adding any new userspace tunables.

Testing patch01:
================
 Demonstrating the performance improvement of qdisc dequeue bulking is
tricky, because the effect only "kicks in" once the qdisc system has a
backlog. Thus, for a backlog to form, we need to either 1) exceed the
wirespeed of the link or 2) exceed the capability of the device driver.

For practical use-cases, the measurable effect of this will be a
reduction in CPU usage.

01-TCP_STREAM:
--------------
Testing the effect for TCP involves disabling TSO and GSO, because TCP
already benefits from bulking via TSO, and especially via GSO segmented
packets.  This patch views TSO/GSO as a separate kind of bulking and
avoids further bulking of these packet types.
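
For reference, a hedged sketch of disabling TSO and GSO with ethtool
for such a test (eth0 is a placeholder interface name):

  # placeholder interface; turn off TSO and GSO offloads for the test
  ethtool -K eth0 tso off gso off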

The measured perf diff benefit (at 10Gbit/s) for a single netperf
TCP_STREAM was 9.24% less CPU used on calls to _raw_spin_lock()
(mostly from sch_direct_xmit).

If my E5-2695v2(ES) CPU is tuned according to:
 http://netoptimizer.blogspot.dk/2014/04/basic-tuning-for-network-overload.html
then it is possible for a single netperf TCP_STREAM, with GSO and TSO
disabled, to utilize all bandwidth on a 10Gbit/s link.  This will
then cause a standing backlog queue at the qdisc layer.

To pressure the system some more, CPU-utilization wise, I start
24x TCP_STREAMs and monitor the overall CPU utilization.  This
confirms that bulking saves CPU cycles when it "kicks in".

Tool mpstat, while stressing the system with 24x netperf TCP_STREAMs, shows:
 * Disabled bulking: sys:2.58%  soft:8.50%  idle:88.78%
 * Enabled  bulking: sys:2.43%  soft:7.66%  idle:89.79%
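
A hedged sketch of the kind of invocation behind these numbers (host
address and duration are placeholders, not taken from the original setup):

  # start 24 parallel TCP_STREAM tests against a placeholder host
  for i in $(seq 24); do netperf -H 192.168.1.2 -t TCP_STREAM -l 60 & done
  # sample overall CPU utilization once per second for 60 seconds
  mpstat 1 60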

02-UDP_STREAM
-------------
The measured perf diff benefit for UDP_STREAM was 6.41% less CPU used
on calls to _raw_spin_lock(), using 24x UDP_STREAM with packet size
-m 1472 (to avoid sending UDP/IP fragments).
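
A hedged example of one such invocation (host address and duration are
placeholders); the test-specific -m 1472 option keeps each UDP datagram
within a single 1500-byte MTU frame:

  netperf -H 192.168.1.2 -t UDP_STREAM -l 60 -- -m 1472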

03-trafgen driver test
----------------------
The performance of the 10Gbit/s ixgbe driver is limited by updating
the HW ring-queue tail-pointer on every packet, as previously
demonstrated with pktgen.

Using trafgen to send RAW frames from userspace (via AF_PACKET),
forcing them through the qdisc path (with options --qdisc-path and
-t0), and sending with 12 CPUs, I can demonstrate this driver-layer
limitation:
 * 12.8 Mpps with no qdisc bulking
 * 14.8 Mpps with qdisc bulking (full 10G-wirespeed)
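
A hedged sketch of such a trafgen invocation (device name and packet
config file are placeholders):

  # send RAW frames via AF_PACKET through the qdisc path, zero inter-packet gap
  trafgen --dev eth10 --conf frame.cfg --qdisc-path -t0 --cpus 12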

Testing patch02:
================
Testing bulking of several GSO/TSO packets:

Measuring HoL (Head-of-Line) blocking for TSO and GSO with
netperf-wrapper: bulking several TSO packets shows no performance
regression (requeues were in the area of 32 requeues/sec at 10G while
transmitting approx 813Kpps).

Bulking several GSO packets does show a small regression, or a very
small improvement (requeues were in the area of 8000 requeues/sec at
10G while transmitting approx 813Kpps).
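
A hedged sketch of a netperf-wrapper run for this kind of
latency-under-load measurement (host address and the rrul test name
are assumptions; the exact test used is not stated):

  netperf-wrapper -H 192.168.1.2 -l 60 rrul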

 Using ixgbe at 10Gbit/s with GSO bulking, we can measure some additional
latency. The base-case, which is "normal" GSO bulking, sees a varying
high-prio queue delay between 0.38ms and 0.47ms.  Bulking several GSOs
together results in a stable high-prio queue delay of 0.50ms.

Corresponding to (delay increase * link rate / 8 bits per byte = extra
in-flight bytes):
 (10000*10^6)*((0.50-0.47)/10^3)/8 = 37500 bytes
 (10000*10^6)*((0.50-0.38)/10^3)/8 = 150000 bytes
 37500/1500  = 25 pkts
 150000/1500 = 100 pkts

 Using igb at 100Mbit/s with GSO bulking shows an improvement.
The base-case sees a varying high-prio queue delay between 2.23ms and
2.35ms, a diff of 0.12ms corresponding to 1500 bytes at 100Mbit/s.
Bulking several GSOs together results in a stable high-prio queue
delay of 2.23ms.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 38df6492 808e7ac0
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -7,6 +7,7 @@
 #include <linux/pkt_sched.h>
 #include <linux/pkt_cls.h>
 #include <linux/percpu.h>
+#include <linux/dynamic_queue_limits.h>
 #include <net/gen_stats.h>
 #include <net/rtnetlink.h>
 
@@ -119,6 +120,21 @@ static inline void qdisc_run_end(struct Qdisc *qdisc)
         qdisc->__state &= ~__QDISC___STATE_RUNNING;
 }
 
+static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
+{
+        return qdisc->flags & TCQ_F_ONETXQUEUE;
+}
+
+static inline int qdisc_avail_bulklimit(const struct netdev_queue *txq)
+{
+#ifdef CONFIG_BQL
+        /* Non-BQL migrated drivers will return 0, too. */
+        return dql_avail(&txq->dql);
+#else
+        return 0;
+#endif
+}
+
 static inline bool qdisc_is_throttled(const struct Qdisc *qdisc)
 {
         return test_bit(__QDISC_STATE_THROTTLED, &qdisc->state) ? true : false;
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -56,6 +56,35 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
         return 0;
 }
 
+static struct sk_buff *try_bulk_dequeue_skb(struct Qdisc *q,
+                                            struct sk_buff *head_skb,
+                                            int bytelimit)
+{
+        struct sk_buff *skb, *tail_skb = head_skb;
+
+        while (bytelimit > 0) {
+                skb = q->dequeue(q);
+                if (!skb)
+                        break;
+
+                bytelimit -= skb->len; /* covers GSO len */
+                skb = validate_xmit_skb(skb, qdisc_dev(q));
+                if (!skb)
+                        break;
+
+                while (tail_skb->next) /* GSO list goto tail */
+                        tail_skb = tail_skb->next;
+
+                tail_skb->next = skb;
+                tail_skb = skb;
+        }
+
+        return head_skb;
+}
+
+/* Note that dequeue_skb can possibly return a SKB list (via skb->next).
+ * A requeued skb (via q->gso_skb) can also be a SKB list.
+ */
 static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
 {
         struct sk_buff *skb = q->gso_skb;
@@ -70,11 +99,18 @@ static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
                 } else
                         skb = NULL;
         } else {
-                if (!(q->flags & TCQ_F_ONETXQUEUE) || !netif_xmit_frozen_or_stopped(txq)) {
+                if (!(q->flags & TCQ_F_ONETXQUEUE) ||
+                    !netif_xmit_frozen_or_stopped(txq)) {
+                        int bytelimit = qdisc_avail_bulklimit(txq);
+
                         skb = q->dequeue(q);
-                        if (skb)
+                        if (skb) {
+                                bytelimit -= skb->len;
                                 skb = validate_xmit_skb(skb, qdisc_dev(q));
+                        }
+                        if (skb && qdisc_may_bulk(q))
+                                skb = try_bulk_dequeue_skb(q, skb, bytelimit);
                 }
         }
 
         return skb;