Commit d5b8aa1d authored by jamal, committed by David S. Miller

net_sched: fix dequeuer fairness

Results on the dummy device can be seen in my netconf 2011
slides. The results below are for a 10GbE Intel IXGBE NIC
on another i5 machine, with very similar specs to the one
used in the netconf 2011 results.
It turns out the unfairness is a whole lot worse at 10G than
on dummy (the old jiffies-based break means whichever cpu
grabs the dequeuer role can keep draining the qdisc until the
next tick), so this patch is even more beneficial for 10G.

Test setup:
----------

The system under test sends packets out; an additional
box connected directly to it drops the packets.
A prio qdisc was installed on the eth device, and the
default netdev tx queue length of 1000 was used as is.
The 3 prio bands were each set to 100 (this didn't
factor into the results).
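
The qdisc side of the setup can be reproduced with something
like the following (a hypothetical reconstruction: the
interface name, and reading "set to 100" as a per-band pfifo
limit, are my assumptions rather than details from the commit):

  tc qdisc add dev eth0 root handle 1: prio
  tc qdisc add dev eth0 parent 1:1 pfifo limit 100
  tc qdisc add dev eth0 parent 1:2 pfifo limit 100
  tc qdisc add dev eth0 parent 1:3 pfifo limit 100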

5 packet runs were made and the middle 3 were picked.

results
-------

The "cpu" column indicates the which cpu the sample
was taken on,
The "Pkt runx" carries the number of packets a cpu
dequeued when forced to be in the "dequeuer" role.
The "avg" for each run is the number of times each
cpu should be a "dequeuer" if the system was fair.

3.0-rc4      (plain)
cpu         Pkt run1        Pkt run2        Pkt run3
================================================
cpu0        21853354        21598183        22199900
cpu1          431058          473476          393159
cpu2          481975          477529          458466
cpu3        23261406        23412299        22894315
avg         11506948        11490372        11486460

3.0-rc4 with patch and default weight 64
cpu         Pkt run1        Pkt run2        Pkt run3
================================================
cpu0        13205312        13109359        13132333
cpu1        10189914        10159127        10122270
cpu2        10213871        10124367        10168722
cpu3        13165760        13164767        13096705
avg         11693714        11639405        11630008

As you can see the system is still not perfect, but it is
a lot better than before: in run1 the spread between the
busiest and idlest cpu drops from roughly 54x (23261406 vs
431058 packets) to about 1.3x (13205312 vs 10189914).

At the moment we reuse the old backlog weight, weight_p,
which is 64 packets. It seems to be reasonably fine at
that value.
The system could be made more fair if we reduced weight_p
(as per my presentation), but that would also affect the
shared backlog weight. Unless deemed necessary, I think
the default value is fine; if not, we could add yet
another knob.
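
For illustration: weight_p is the same value that is already
exposed as the sysctl net.core.dev_weight, and it also sizes
the per-cpu receive backlog budget, which is why lowering it
is not free (my example below, not part of this patch):

  sysctl net.core.dev_weight          # 64 by default
  sysctl -w net.core.dev_weight=16    # fairer tx dequeueing, but also a
                                      # smaller rx backlog budget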
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 85a43a9e
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -189,15 +189,17 @@ static inline int qdisc_restart(struct Qdisc *q)
 
 void __qdisc_run(struct Qdisc *q)
 {
-	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}