- 19 Mar, 2018 (40 commits)
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
Doesn't need to go via defer() since it's always running on the broker thread.
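For context, the distinction works roughly like this; the Broker class and run_on_broker() helper below are a toy sketch, not Mitogen's real internals:

```python
import queue
import threading

class Broker:
    """Toy stand-in: a dedicated thread draining a queue of callbacks."""
    def __init__(self):
        self._calls = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            func, args = self._calls.get()
            func(*args)

    def defer(self, func, *args):
        """Schedule func to run on the broker thread; safe from any thread."""
        self._calls.put((func, args))

def run_on_broker(broker, func, *args):
    if threading.current_thread() is broker._thread:
        func(*args)                    # already on the broker thread: call directly
    else:
        broker.defer(func, *args)      # cross-thread: must go via defer()
```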
- David Wilson authored
- David Wilson authored
Using the same test as in 7af97c0365d86ec851c56d2d3736be2cc281af29, transmitted wire bytes drop from 135,531 to 133,071 (-1.81%), while received bytes drop from 21,073 to 14,775 (-30%). Combined, both changes shave 13,914 bytes (-8.6%) off aggregate bandwidth usage. Make it configurable, since compression hurts in some scenarios.
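As a rough sketch of what switchable compression of a pickled payload can look like (the COMPRESS knob and one-byte framing below are invented for illustration, not Mitogen's actual wire format):

```python
import pickle
import zlib

COMPRESS = True          # hypothetical knob: compression hurts small or
                         # already-compressed payloads, so allow disabling it
FLAG_COMPRESSED = 0x01   # illustrative header bit marking a compressed body

def pack(obj):
    data = pickle.dumps(obj, protocol=2)
    flags = 0
    if COMPRESS:
        squeezed = zlib.compress(data, 9)
        if len(squeezed) < len(data):       # keep it only if it actually helped
            data, flags = squeezed, FLAG_COMPRESSED
    return bytes([flags]) + data

def unpack(buf):
    flags, body = buf[0], buf[1:]
    if flags & FLAG_COMPRESSED:
        body = zlib.decompress(body)
    return pickle.loads(body)
```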
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
For the 52 submodules of ansible.modules.system, this produced a 1,602-byte pkg_present list. After stripping, it becomes 406 bytes, and the entire LOAD_MODULE size drops from 1,988 bytes to 792 bytes (-60%). For the 68 submodules of ansible.module_utils, the 1,902-byte pkg_present list becomes 474 bytes (-75%), and the LOAD_MODULE size drops from 2,867 bytes to 1,439 bytes (-49%). In a simple test running Ansible's "setup" module followed by its "apt" module, wire bytes sent drop from 140,357 to 135,531 (-3.4%).
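The idea behind the stripping, in miniature (not the actual LOAD_MODULE encoding): every entry in pkg_present repeats the parent package name, so the common prefix can be dropped on the wire and re-added by the receiver.

```python
def strip_prefix(fullname, present):
    """Drop the redundant 'fullname.' prefix from each child module name."""
    prefix = fullname + '.'
    return [name[len(prefix):] for name in present]

def restore_prefix(fullname, stripped):
    """Reverse the transformation on the receiving side."""
    prefix = fullname + '.'
    return [prefix + name for name in stripped]

pkg = 'ansible.module_utils'
present = [pkg + '.basic', pkg + '.facts', pkg + '.json_utils']
wire = strip_prefix(pkg, present)          # ['basic', 'facts', 'json_utils']
assert restore_prefix(pkg, wire) == present
```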
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
Farewell, pointless roundtrips, we hardly knew ye.
- David Wilson authored
- David Wilson authored
Hoped to avoid it, but it's the obvious solution for Ansible.
- David Wilson authored
This isn't nearly enough, but it catches the most common victim of EINTR.
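The usual shape of such a fix is a retry wrapper like the one below (a generic sketch, not the helper this commit actually added; Python 3.5+ already retries EINTR automatically per PEP 475, so this mainly matters on older interpreters):

```python
import errno
import os

def retry_on_eintr(func, *args):
    """Retry an interruptible system call until it completes or truly fails."""
    while True:
        try:
            return func(*args)
        except (OSError, IOError) as e:
            if e.errno != errno.EINTR:
                raise
            # Interrupted by a signal before completing: just try again.

# Usage: data = retry_on_eintr(os.read, fd, 4096)
```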
- David Wilson authored
It looks ugly as sin, but this nets about a 20% drop in user CPU time and close to a 15% increase in throughput. The average log call is around 10 opcodes; prefixing it with '_v and' costs an extra 2, but both are simple operations, and the remaining 10 are skipped entirely when _v or _vv is False.
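The pattern being referred to looks roughly like this (logger names and flag setup simplified):

```python
import logging

LOG = logging.getLogger('mitogen')
IOLOG = logging.getLogger('mitogen.io')

# Evaluated once at setup: testing a module-level bool costs ~2 bytecode ops,
# versus ~10 to build the arguments for a logging call that is disabled anyway.
_v = LOG.isEnabledFor(logging.DEBUG)
_vv = IOLOG.isEnabledFor(logging.DEBUG)

def on_receive(msg):
    _vv and IOLOG.debug('on_receive(%r)', msg)   # skipped entirely when _vv is False
```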
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
- Alex Willmer authored
- Alex Willmer authored
This matches their respective functions under test, which now reside in mitogen.parent.
- Alex Willmer authored
Tests should now match changes in 4d31300dd09ae7aaa394e53a48102167521f3f91.
- Alex Willmer authored
Since f44356af3275912f7df78fa58e860f107a71e7e5, mitogen.core.Importer() takes a Router instance.
- Alex Willmer authored
As of adc8fe3aed470de55a6e5ea3d7c6a31f7c6d1ff1, Receiver objects no longer have a get_data() method, and Receiver.get() does not unpickle the message.
- Alex Willmer authored
As of adc8fe3aed470de55a6e5ea3d7c6a31f7c6d1ff1, Receiver objects no longer have a get_data() method, and Receiver.get() does not unpickle the message.
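In test code the change amounts to roughly the following (a sketch; the helper name is invented):

```python
def get_and_unpickle(recv):
    """recv: a mitogen.core.Receiver bound to some handle."""
    msg = recv.get()        # now returns a Message rather than unpickled data
    return msg.unpickle()   # deserialization is an explicit, separate step
```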
- Alex Willmer authored
- David Wilson authored
- David Wilson authored
- David Wilson authored
Turns out it is far too easy to burn through available file descriptors, so try something else: self-pipes are now per-thread, and only temporarily associated with a Latch that wishes to sleep. Reduce pointless locking by giving Latch its own queue and removing Queue.Queue() use in some places. Temporarily undo the merging of Waker and Latch; let's do this one step at a time.
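A rough sketch of the shape described, not Mitogen's actual Latch: each thread lazily creates one pipe, lends its write end to whichever Latch it is about to sleep on, and the Latch keeps a plain list guarded by its own lock instead of a Queue.Queue.

```python
import os
import select
import threading

_tls = threading.local()        # one (read_fd, write_fd) pair per thread

def _thread_pipe():
    if not hasattr(_tls, 'pipe'):
        _tls.pipe = os.pipe()
    return _tls.pipe

class Latch:
    def __init__(self):
        self._lock = threading.Lock()
        self._queue = []
        self._sleeping = []     # write fds of threads currently blocked in get()

    def put(self, obj):
        with self._lock:
            self._queue.append(obj)
            if self._sleeping:
                os.write(self._sleeping.pop(0), b'\x00')   # wake one sleeper

    def get(self):
        while True:
            with self._lock:
                if self._queue:
                    return self._queue.pop(0)
                rfd, wfd = _thread_pipe()
                self._sleeping.append(wfd)
            select.select([rfd], [], [])    # sleep until put() writes a byte
            os.read(rfd, 1)                 # drain the wake-up byte
```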
- David Wilson authored
On Python 2.x, operations on pthread objects with a timeout set actually cause internal polling. When polling fails to yield a positive result, it quickly backs off to a 50ms loop, which results in a huge amount of latency throughout. Instead, give up using Queue.Queue.get(timeout=...) and replace it with the UNIX self-pipe trick. Knocks another 45% off my.yml in the Ansible examples directory against a local VM. This has the potential to burn a *lot* of file descriptors, but hell, it's not the 1940s any more, RAM is all but infinite. I can live with that. This gets things down to around 75ms per playbook step, still hunting for additional sources of latency.
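The underlying trick in isolation (a sketch, not the actual change): give each waiter a pipe and sleep in select(), so a wake-up is delivered immediately instead of being noticed by a timed polling loop. The file descriptor cost flagged here is what the later per-thread-pipe commit above addresses.

```python
import os
import select

class PipeEvent:
    """Self-pipe trick: block in select() rather than a timed polling loop."""
    def __init__(self):
        self._rfd, self._wfd = os.pipe()    # one pipe per instance: FD-hungry

    def set(self):
        os.write(self._wfd, b'\x00')        # wake any thread blocked in wait()

    def wait(self, timeout=None):
        rlist, _, _ = select.select([self._rfd], [], [], timeout)
        if rlist:
            os.read(self._rfd, 1)
            return True
        return False                         # timed out without a wake-up
```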