Commit d705939b authored by Jason Madden

Rework bench_spawn to be perf based for more reliable numbers
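
perf drives the timing loop itself: the benchmark registers a "time
function" that receives a loop count and returns the total elapsed
seconds, and perf handles calibration, worker processes, and the
mean +- std dev reporting shown below. A minimal sketch of that
pattern (bench_noop is a hypothetical name, not part of this change;
assumes the perf module, since renamed pyperf, is installed):

    import perf

    def bench_noop(loops):
        # perf chooses 'loops'; return the total elapsed time for all of them.
        t0 = perf.perf_counter()
        for _ in range(loops):
            pass
        return perf.perf_counter() - t0

    runner = perf.Runner()
    runner.bench_time_func('noop', bench_noop)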

I get this on 3.6.4:

.....................
eventlet spawn: Mean +- std dev: 12.2 us +- 1.2 us
.....................
eventlet sleep: Mean +- std dev: 16.3 us +- 0.8 us
.....................
gevent spawn: Mean +- std dev: 13.1 us +- 1.1 us
.....................
gevent sleep: Mean +- std dev: 10.4 us +- 0.8 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (2.73 us) is 17% of the mean (16.1 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool spawn: Mean +- std dev: 16.1 us +- 2.7 us
.....................
geventpool sleep: Mean +- std dev: 11.2 us +- 0.5 us
.....................
geventraw spawn: Mean +- std dev: 4.95 us +- 0.42 us
.....................
geventraw sleep: Mean +- std dev: 7.34 us +- 0.28 us
.....................
none spawn: Mean +- std dev: 1.98 us +- 0.05 us
.....................
geventpool join: Mean +- std dev: 6.59 us +- 0.25 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (1.28 us) is 10% of the mean (12.7 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

eventlet spawn kwarg: Mean +- std dev: 12.7 us +- 1.3 us
.....................
gevent spawn kwarg: Mean +- std dev: 14.6 us +- 1.2 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (2.81 us) is 17% of the mean (17.0 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool spawn kwarg: Mean +- std dev: 17.0 us +- 2.8 us
.....................
geventraw spawn kwarg: Mean +- std dev: 6.11 us +- 0.45 us
.....................
none spawn kwarg: Mean +- std dev: 2.22 us +- 0.07 us

And this on 2.7.14:

.....................
WARNING: the benchmark result may be unstable
* the standard deviation (2.10 us) is 11% of the mean (18.4 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

eventlet spawn: Mean +- std dev: 18.4 us +- 2.1 us
.....................
eventlet sleep: Mean +- std dev: 23.1 us +- 0.8 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (4.39 us) is 25% of the mean (17.3 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

gevent spawn: Mean +- std dev: 17.3 us +- 4.4 us
.....................
gevent sleep: Mean +- std dev: 10.3 us +- 0.5 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (3.92 us) is 16% of the mean (24.7 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool spawn: Mean +- std dev: 24.7 us +- 3.9 us
.....................
geventpool sleep: Mean +- std dev: 13.5 us +- 0.9 us
.....................
geventraw spawn: Mean +- std dev: 6.91 us +- 0.49 us
.....................
geventraw sleep: Mean +- std dev: 8.95 us +- 0.30 us
.....................
none spawn: Mean +- std dev: 2.21 us +- 0.04 us
.....................
geventpool join: Mean +- std dev: 7.93 us +- 0.28 us
.....................
eventlet spawn kwarg: Mean +- std dev: 17.4 us +- 1.3 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (4.11 us) is 27% of the mean (15.1 us)
* the maximum (24.2 us) is 60% greater than the mean (15.1 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

gevent spawn kwarg: Mean +- std dev: 15.1 us +- 4.1 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (4.74 us) is 18% of the mean (26.8 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool spawn kwarg: Mean +- std dev: 26.8 us +- 4.7 us
.....................
WARNING: the benchmark result may be unstable
* the standard deviation (959 ns) is 12% of the mean (8.00 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventraw spawn kwarg: Mean +- std dev: 8.00 us +- 0.96 us
.....................
none spawn kwarg: Mean +- std dev: 2.48 us +- 0.06 us

Partial PyPy results:

.........
WARNING: the benchmark result may be unstable
* the standard deviation (5.77 us) is 18% of the mean (32.5 us)
* the maximum (52.2 us) is 61% greater than the mean (32.5 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

eventlet spawn: Mean +- std dev: 32.5 us +- 5.8 us
.........
eventlet sleep: Mean +- std dev: 39.9 us +- 2.4 us
.........
WARNING: the benchmark result may be unstable
* the standard deviation (8.90 us) is 43% of the mean (20.6 us)
* the minimum (8.50 us) is 59% smaller than the mean (20.6 us)
* the maximum (41.5 us) is 102% greater than the mean (20.6 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

gevent spawn: Mean +- std dev: 20.6 us +- 8.9 us
.........
gevent sleep: Mean +- std dev: 4.20 us +- 0.21 us
.........
WARNING: the benchmark result may be unstable
* the standard deviation (8.52 us) is 50% of the mean (17.2 us)
* the minimum (7.74 us) is 55% smaller than the mean (17.2 us)
* the maximum (58.1 us) is 238% greater than the mean (17.2 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool spawn: Mean +- std dev: 17.2 us +- 8.5 us
.........
WARNING: the benchmark result may be unstable
* the standard deviation (968 ns) is 18% of the mean (5.26 us)
* the maximum (10.5 us) is 99% greater than the mean (5.26 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventpool sleep: Mean +- std dev: 5.26 us +- 0.97 us
..........
WARNING: the benchmark result may be unstable
* the standard deviation (1.45 us) is 52% of the mean (2.80 us)
* the maximum (5.50 us) is 96% greater than the mean (2.80 us)

Try to rerun the benchmark with more runs, values and/or loops.
Run 'python -m perf system tune' command to reduce the system jitter.
Use perf stats, perf dump and perf hist to analyze results.
Use --quiet option to hide these warnings.

geventraw spawn: Mean +- std dev: 2.80 us +- 1.45 us
.........
geventraw sleep: Mean +- std dev: 4.24 us +- 0.27 us
.........
none spawn: Mean +- std dev: 1.10 us +- 0.06 us
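
The CPython and PyPy numbers above come from separate runs; to compare
two runs directly, one can save each to JSON and diff them (assuming
perf's standard -o/--output option and its compare_to subcommand):

    python3.6 bench_spawn.py gevent -o cpython36.json
    pypy bench_spawn.py gevent -o pypy.json
    python -m perf compare_to cpython36.json pypy.json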
parent b84e1bd9
"""Benchmarking spawn() performance.
"""
from __future__ import print_function, absolute_import, division
import perf

try:
    xrange
except NameError:
    xrange = range

N = 1000
counter = 0

def incr(sleep, **_kwargs):
    # Body collapsed in the diff view; reconstructed from the surrounding
    # asserts: each spawned task simply bumps the global counter.
    global counter
    counter += 1

def noop(_p):
    pass

class Options(object):
    # TODO: Add back a command-line argument for this.
    eventlet_hub = None
    loops = None
    # Assumed default (the old --ignore-import-errors flag was dropped
    # in this rework); bench_eventlet still reads this attribute.
    ignore_import_errors = True

    def __init__(self, sleep, join, **kwargs):
        self.kwargs = kwargs
        self.sleep = sleep
        self.join = join

class Times(object):

    def __init__(self,
                 spawn_duration,
                 sleep_duration=-1,
                 join_duration=-1):
        self.spawn_duration = spawn_duration
        self.sleep_duration = sleep_duration
        self.join_duration = join_duration

def _test(spawn, sleep, options):
    global counter
    counter = 0

    before_spawn = perf.perf_counter()
    for _ in xrange(N):
        spawn(incr, sleep, **options.kwargs)

    before_sleep = perf.perf_counter()
    if options.sleep:
        # Nothing has run yet; sleep(0) yields to the spawned tasks.
        assert counter == 0, counter
        sleep(0)
        after_sleep = perf.perf_counter()
        assert counter == N, (counter, N)
    else:
        after_sleep = before_sleep

    if options.join:
        before_join = perf.perf_counter()
        options.join()
        after_join = perf.perf_counter()
        join_duration = after_join - before_join
    else:
        join_duration = -1

    return Times(before_sleep - before_spawn,
                 after_sleep - before_sleep,
                 join_duration)

def test(spawn, sleep, options):
    all_times = [_test(spawn, sleep, options)
                 for _ in xrange(options.loops)]
    spawn_duration = sum(x.spawn_duration for x in all_times)
    sleep_duration = sum(x.sleep_duration for x in all_times)
    join_duration = sum(x.join_duration for x in all_times
                        if x.join_duration != -1)
    return Times(spawn_duration, sleep_duration, join_duration)

def bench_none(options):
    options.sleep = False

    def spawn(f, sleep, **kwargs):
        return f(sleep, **kwargs)

    from time import sleep
    return test(spawn,
                sleep,
                options)

def bench_gevent(options):
    import gevent
    print('using gevent from %s' % gevent.__file__)
    from gevent import spawn, sleep
    return test(spawn, sleep, options)

def bench_geventraw(options):
    import gevent
    print('using gevent from %s' % gevent.__file__)
    from gevent import sleep, spawn_raw
    return test(spawn_raw, sleep, options)

def bench_geventpool(options):
    import gevent
    print('using gevent from %s' % gevent.__file__)
    from gevent import sleep
    from gevent.pool import Pool
    p = Pool()
    if options.join:
        options.join = p.join
    times = test(p.spawn, sleep, options)
    return times

def bench_eventlet(options):
    try:
        import eventlet
    except ImportError:
        if options.ignore_import_errors:
            return
        raise
    print('using eventlet from %s' % eventlet.__file__)
    from eventlet import spawn, sleep
    from eventlet.hubs import use_hub
    if options.eventlet_hub is not None:
        use_hub(options.eventlet_hub)
    return test(spawn, sleep, options)

def all():
    result = [x for x in globals() if x.startswith('bench_') and x != 'bench_all']
    result.sort()
    result = [x.replace('bench_', '') for x in result]
    return result


def all_functions():
    return [globals()['bench_%s' % x] for x in all()]

def main():

    def worker_cmd(cmd, args):
        cmd.extend(args.benchmark)

    runner = perf.Runner(add_cmdline_args=worker_cmd)
    runner.argparser.add_argument('benchmark',
                                  nargs='*',
                                  default='all',
                                  choices=all() + ['all'])

    def spawn_time(loops, func, options):
        options.loops = loops
        times = func(options)
        return times.spawn_duration

    def sleep_time(loops, func, options):
        options.loops = loops
        times = func(options)
        return times.sleep_duration

    def join_time(loops, func, options):
        options.loops = loops
        times = func(options)
        return times.join_duration

    args = runner.parse_args()
    if 'all' in args.benchmark or args.benchmark == 'all':
        args.benchmark = ['all']
        names = all()
    else:
        names = args.benchmark

    names = sorted(set(names))

    for name in names:
        runner.bench_time_func(name + ' spawn',
                               spawn_time,
                               globals()['bench_' + name],
                               Options(False, False),
                               inner_loops=N)

        if name != 'none':
            runner.bench_time_func(name + ' sleep',
                                   sleep_time,
                                   globals()['bench_' + name],
                                   Options(True, False),
                                   inner_loops=N)

    if 'geventpool' in names:
        runner.bench_time_func('geventpool join',
                               join_time,
                               bench_geventpool,
                               Options(True, True),
                               inner_loops=N)

    for name in names:
        runner.bench_time_func(name + ' spawn kwarg',
                               spawn_time,
                               globals()['bench_' + name],
                               Options(False, False, foo=1, bar='hello'),
                               inner_loops=N)


if __name__ == '__main__':
    main()
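
Typical invocations under the reworked script (a sketch; the
positional argument accepts any of the bench_* names, or 'all',
which is also the default):

    python -m perf system tune   # optional: reduce system jitter first
    python bench_spawn.py        # run every benchmark
    python bench_spawn.py gevent geventpool

Note that inner_loops=N tells perf that a single call of the time
function performs N spawns, so the reported times are per spawned task.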