- 19 Oct, 2017 1 commit
Kirill Smelkov authored
The problem with `getent hosts ...` is that /etc/hosts has to be manually prepared for it to work; that does not scale and is error-prone. So instead extract the machine's global IP addresses at runtime, as configured on the interfaces.
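A minimal sketch of how such runtime extraction could look (an illustration, not neotest's actual code; `ip` from iproute2 is assumed to be available):

```shell
# Illustration only -- not neotest's actual implementation.
# List this machine's global-scope IPv4 addresses by parsing
# `ip -o addr` output at runtime, instead of relying on
# hand-maintained /etc/hosts entries.

# addr_of: extract the bare address from one `ip -o -4 addr` line, e.g.
# "2: eth0    inet 10.0.0.7/24 brd ... scope global eth0" -> 10.0.0.7
addr_of() {
    awk '{print $4}' | cut -d/ -f1
}

# global_ips: all global-scope IPv4 addresses currently configured
global_ips() {
    ip -o -4 addr show scope global | addr_of
}
```

The same approach extends to IPv6 via `ip -o -6 addr show scope global`.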
- 17 Oct, 2017 4 commits
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
- 16 Oct, 2017 8 commits
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
- 15 Oct, 2017 2 commits
Kirill Smelkov authored
neotest part pending.
Kirill Smelkov authored
- 13 Oct, 2017 13 commits
Test authored
Kirill Smelkov authored
Test authored
Kirill Smelkov authored
Test authored
Test authored
Test authored
Test authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
- 12 Oct, 2017 4 commits
Test authored
Compared to tgro5:

    tgro0   py -> noise (ZEO, NEO/pylite, NEO/pysql, partly neo/go)
    tgro10  py = no-noise, tpy ~= tgro5; tgo serial ~same, +prefetch more noise (?) 40|65
    tgro15  lat_tcp 1B 49->45 sometimes (consistently), otherwise more-or-less same
    tgro20  lat_tcp 4K 170µs->167µs (c)
    tgro25  lat_tcp 4K 170µs->167µs (c), lat_tcp 4K 175->171µs (go); ZEOv a bit;
            go-go + prefetch !sha1 -> noise + higher (~65µs -> 70-110µs)

txc40µs4f seems to be worse than txc200µs4f.
Test authored
Link and TCP* latencies are ~ the same and stable.

Compared to b288e62f, py timings stabilize:

    ZEO         ~580-1045µs -> 585µs
    NEO/pylite  ~600-700µs  -> ~550-570µs  (Cpy)
    NEO/pylite  ~525-580µs  -> ~450µs      (Cgo)
    NEO/pysql   ~820-930µs  -> ~840µs      (Cpy)
    NEO/pysql   ~740-800µs  -> ~740µs      (Cgo)

Go timings get a bit worse:

    NEO/go         ~360µs -> ~380µs      (Cpy)
    NEO/go         ~160µs -> ~165-170µs  (Cgo)
    NEO/go-nosha1  ~140µs -> ~150µs

Compared to 15a9ccef, go+prefetch128 timings stabilize:

    go-go+prefetch128          ~65-160µs -> ~40-45µs, 60µs(x1)
    go-go+prefetch128 (!sha1)  ~60-150µs -> ~60µs, 40µs(x1)
Kirill Smelkov authored
Kirill Smelkov authored
- 11 Oct, 2017 1 commit
Test authored
Compared to the run from Oct 10, txc settings were now changed: they are 200µs/4f/0µs-irq/0f-irq on both machines, instead of 200µs/4f/0µs-irq/0f-irq on neo1 and 200µs/0f/0µs-irq/0f-irq on neo2. The noise in py runs remains. Go-go serial time stays the same, ~155µs without much noise, but go-go time with +prefetch128 gets worse:

    NEO/go Cgo +prefetch128  ~45µs -> 65-160µs

Probably it makes sense to tune txc too...
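Assuming txc here refers to NIC transmit interrupt coalescing, settings like 200µs/4f/0µs-irq/0f-irq map onto ethtool roughly as follows (a sketch, not the exact commands used; eth0 is a placeholder interface name):

```shell
# Sketch: apply 200µs/4f/0µs-irq/0f-irq transmit coalescing (requires root).
# eth0 is a placeholder -- substitute the actual benchmark interface.
ethtool -C eth0 tx-usecs 200 tx-frames 4 tx-usecs-irq 0 tx-frames-irq 0

# Inspect the resulting coalescing settings:
ethtool -c eth0
```

Lower tx-usecs/tx-frames means earlier interrupts (lower latency, more CPU); the note above suggests 40µs/4f behaved worse than 200µs/4f here.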
- 10 Oct, 2017 7 commits
Test authored
See previous commit for details about why latency is bad for TCP payload > MSS without TSO.

Compared to the latest neo1-neo2 timings from Oct 05 (C-states disabled, no rx-delay) it improves:

    TCP1     ~45µs  -> ~45µs
    TCP1472  ~430µs              # TCP lat. anomaly
    TCP1400         -> ~120µs    # finally
    TCP1500         -> ~130µs    # fixed
    TCP4096  ~285µs -> ~170µs    # !

    ZEO            ~670µs -> ~580-1045µs  (?)
    NEO/pylite     ~605µs -> ~600-700µs   (?) (Cpy)
    NEO/pylite     ~505µs -> ~525-580µs   (?) (Cgo)
    NEO/pysql      ~900µs -> ~820-930µs   (?) (Cpy)
    NEO/pysql      ~780µs -> ~740-800µs   (?) (Cgo)
    NEO/go         ~430µs -> ~360µs       (Cpy)  # <-- NOTE
    NEO/go         ~210µs -> ~160µs       (Cgo)  # <-- NOTE
    NEO/go-nosha1  ~190µs -> ~140µs             # <-- NOTE

Not sure about the noise in pure py runs, but given that raw TCP latency improves in absolute terms, this should be a good change to make.
Kirill Smelkov authored
On neo1-neo2 without TSO, lat_tcp latency becomes very poor when the payload size becomes greater than the TCP MSS (lat_tcp -m 1448 ~ 130µs; lat_tcp -m 1449 ~ 500µs and more).
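Assuming TSO state is toggled with ethtool and latency is measured with lmbench's lat_tcp (whose -m flag sets the message size, as used above), an A/B check might look like this (eth0 and neo2 are placeholders; a `lat_tcp -s` server must run on the remote side):

```shell
# Sketch of the A/B measurement (requires root for ethtool).
ethtool -K eth0 tso off          # disable TCP segmentation offload
lat_tcp -m 1448 neo2             # payload fits one segment: ~130µs here
lat_tcp -m 1449 neo2             # payload exceeds MSS: ~500µs and more

ethtool -K eth0 tso on           # re-enable TSO
lat_tcp -m 1449 neo2             # the >MSS latency anomaly goes away
```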
Kirill Smelkov authored
e.g. neotest cpustat ls
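As a rough, hypothetical sketch of what a cpustat-style prefix command can do (neotest's real implementation may differ), one can snapshot /proc/stat around the wrapped command:

```shell
# Hypothetical sketch -- not neotest's actual cpustat implementation.
# Run a command and print the aggregate /proc/stat cpu line before
# and after it, so the consumed cpu time can be eyeballed as a delta.
cpustat() {
    before=$(head -1 /proc/stat)
    "$@"                         # run the wrapped command, e.g. ls
    rc=$?
    after=$(head -1 /proc/stat)
    echo "cpu/before: $before"
    echo "cpu/after:  $after"
    return $rc
}
```

E.g. `cpustat ls` prints the directory listing followed by the two cpu counter snapshots.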
Kirill Smelkov authored
Kirill Smelkov authored
Same as before with added information. In particular z6001-z600 shows there is no step-wise ~400µs TCP RR increase when going 1400B -> 1500B, as is currently the case on RTL.
Kirill Smelkov authored
Test authored
Same as before, but with more information and adjusted ping and TCP RR. In particular TCP RR shows:

    TCPRR1400 ~ 120µs
    TCPRR1500 ~ 430µs

The step-wise increase happens when the TCP packet no longer fits into one ethernet frame. It does not happen on z600* with their current settings.
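The 1400B/1500B boundary is consistent with standard ethernet framing: with a 1500-byte MTU, a 20-byte IPv4 header, and a 32-byte TCP header (20 bytes plus the 12-byte timestamps option typical on Linux), at most 1448 bytes of payload fit in one frame, matching the lat_tcp -m 1448/1449 boundary seen earlier. A small sanity check (the header sizes are assumptions for a typical Linux TCP connection):

```python
# Why TCP RR time jumps step-wise between 1400B and 1500B payloads:
# the request stops fitting into a single ethernet frame.
MTU = 1500               # standard ethernet MTU
IP_HDR = 20              # IPv4 header without options
TCP_HDR = 20 + 12        # TCP header + timestamps option (typical on Linux)

MAX_ONE_FRAME = MTU - IP_HDR - TCP_HDR     # payload bytes per frame

def frames_needed(payload: int) -> int:
    """Number of ethernet frames a TCP payload of this size occupies."""
    return -(-payload // MAX_ONE_FRAME)    # ceiling division

print(MAX_ONE_FRAME)        # -> 1448
print(frames_needed(1400))  # -> 1 (fast case: ~120µs above)
print(frames_needed(1500))  # -> 2 (slow case: ~430µs above)
```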