- 14 Oct, 2019 4 commits
-
-
Kirill Smelkov authored
For example:

    - def send(pych, obj):
    + def send(pychan pych, obj):

Even though Cython allows leaving self unannotated, annotating it with the type makes the code more clear.
-
Kirill Smelkov authored
This prepares the pyselect codebase for future logic where all channel element types will be sent via inplace _selcase data. For PyObject we could previously go with "wiring ptx through pycase[1]", but for an arbitrary type, which first has to be converted from a Python object to a C-level object, we would have to store the result somewhere, and that would mean an extra allocation and increased pyselect code complexity, even for cases that use nothing but chan[object]. So do the preparation and switch pyselect to using inplace tx.
-
Kirill Smelkov authored
Currently select, via _selcase, requires users to provide pointers to tx and rx buffers. However if the element type itself fits into a pointer word, we can put the element directly into _selcase and mark the case with a flag indicating that it carries inplace data instead of referring to external storage. This will be helpful in an upcoming patch where we'll teach pychan to work with several element types, not only pyobject elements.

This patch carefully introduces _selcase.flags in such a way that, by using bitfields, the size of _selcase stays the same as before. The .ptxrx pointer is unioned with the newly introduced inplace uint64 .itxrx data, which select uses instead of .ptxrx if the flag is set. The usage of uint64 should not increase _selcase size on 64-bit platforms. The _selcase.ptx() and .prx() accessors are adapted accordingly, and the rest of the changes are the corresponding test and _chanselect2<onstack=false> adaptations.

This functionality is kind of low-level and is not exposed via any _selsend() or chan.sends() API changes. Whenever inplace tx should be used, the case should be prepared either completely manually, or by e.g. first calling _selsend() and then manually changing .flags and .itxrx. The added test serves as an example of how to do it.

Inplace rx is currently forbidden, because supporting it would require dropping const from the casev select argument. However, in the future, for symmetry, we might want to support that as well.

P.S. Writing to selcase.itxrx requires casting pointers, e.g. like this:

    *(int *)&sel[0].itxrx = 12345;

which breaks C99 strict aliasing, and by default the compiler can generate bad code for such a pattern. To address the problem we adapt the build system to default the compiler to -fno-strict-aliasing (as many other projects do, e.g. the Linux kernel), with the idea that in many cases where strict aliasing was intended to help it actually does not - because e.g. the pointer types are the same - while explicitly marking pointers with the `restrict` keyword does help indeed. Nothing new for Python2 here, as it already uses -fno-strict-aliasing itself. However Python3 compiles without -fno-strict-aliasing: https://python.org/dev/peps/pep-3123 .
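To make the "prepare the case manually" recipe concrete, here is a hedged sketch at the C level (the flag constant name _INPLACE_DATA is an assumption - the added test in the patch is the authoritative example; the pattern also relies on the -fno-strict-aliasing default discussed above):

    #include <golang/libgolang.h>

    void send_12345_inplace() {
        _chan *ch = _makechan(sizeof(int), 0);  // channel carrying int-sized elements
        _selcase sel[2];
        sel[0]       = _selsend(ch, NULL);      // start from a regular send case ...
        sel[0].flags = _INPLACE_DATA;           // ... then mark its data as inplace (assumed name)
        *(int *)&sel[0].itxrx = 12345;          // and put the value directly into the case
        sel[1]       = _default;                // don't block if there is no receiver
        _chanselect(sel, 2);
        _chanxdecref(ch);
    }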
-
Kirill Smelkov authored
In the next patch we are going to teach _selcase to support both external and inplace data. Before that let's do a couple of preparatory things. This patch introduces .ptx() and .prx() accessors to get to the corresponding data buffer associated with a _selcase, and converts _selcase users to use .ptx() and .prx() instead of direct .ptxrx access. This way, when we add inplace support to _selcase, we will only need to adapt the accessors, not the clients. The only place left using .ptxrx directly is one tricky spot in _chanselect2<onstack=false> where gevent-related code needs to carefully deal with proxying tx/rx buffers due to STACK_DEAD_WHILE_PARKED. NOTE even though the new accessors may panic, libgolang.cpp always calls them after checking that the conditions for ptx/prx calls are valid.
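For illustration, a minimal sketch of what such accessors boil down to (hedged - the exact checks and opcode names in libgolang.cpp may differ):

    // .ptx()/.prx() return the external tx/rx buffer, verifying that the case is
    // of the matching kind; clients no longer touch .ptxrx directly.
    void *_selcase::ptx() const {
        if (op != _CHANSEND)
            panic("_selcase: ptx: op is not send");
        return ptxrx;
    }

    void *_selcase::prx() const {
        if (op != _CHANRECV)
            panic("_selcase: prx: op is not recv");
        return ptxrx;
    }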
-
- 13 Oct, 2019 3 commits
-
-
Kirill Smelkov authored
In the next patches we are going to teach _selcase to support both external and inplace data. Before that let's do a couple of preparatory things. This patch: rename .data -> .ptxrx . The new name is more clear:

- "p" prefix aligns with other libgolang style, e.g. ptx or prx;
- "txrx" suffix says that this is used for both send and recv.

Just plain renaming, nothing else in this patch.
-
Kirill Smelkov authored
Before passing objects to _chanselect for send, pyselect increfs them - just like send does - to indicate that one object reference is passed into the channel buffer. On exit, since only one case is actually executed by select, pyselect needs to decref the increfed objects from the not-executed cases. Pyselect already implements this cleanup, but currently the cleanup is executed only if control flow reaches _chanselect at all. That is a bug, since pyselect can panic or raise an exception in the middle of the preparation phase. -> Fix it by associating the finally-decref cleanup with the whole prepare+_chanselect code.

Without the fix, the second part of the added test (abnormal exit) fails e.g. like:

    @mark.skipif(not hasattr(sys, 'getrefcount'),   # skipped e.g. on PyPy
                 reason="needs sys.getrefcount")
    def test_select_refleak():
        ch1  = chan()
        ch2  = chan()
        obj1 = object()
        obj2 = object()
        tx1  = (ch1.send, obj1)
        tx2  = (ch2.send, obj2)

        # normal exit
        gc.collect()
        nref1 = sys.getrefcount(obj1)
        nref2 = sys.getrefcount(obj2)
        _, _rx = select(
            tx1,        # 0
            tx2,        # 1
            default,    # 2
        )
        assert (_, _rx) == (2, None)
        gc.collect()
        assert sys.getrefcount(obj1) == nref1
        gc.collect()
        assert sys.getrefcount(obj1) == nref2

        # abnormal exit
        with raises(AttributeError) as exc:
            select(
                tx1,    # 0
                tx2,    # 1
                'zzz',  # 2  causes pyselect to panic
            )
        assert exc.value.args == ("'str' object has no attribute '__self__'",)

        gc.collect()
    >   assert sys.getrefcount(obj1) == nref1
    E   assert 4 == 3
    E     -4
    E     +3

    golang/golang_test.py:690: AssertionError

The bug was introduced in 3b241983 (Port/move channels to C/C++/Pyx).
-
Kirill Smelkov authored
We will soon rework pychan to be a Python wrapper not only for chan<object>, but also for channels of various other C types - e.g. chan<structZ>, chan<int>, etc. To prepare for this, let's first rework pychan from using chan[PyObject*] to using raw _chan* functions. This will allow us to use the same functions over raw channels while dispatching dynamically on the channel element type.
-
- 05 Oct, 2019 3 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
e.g. go -> pygo, select -> pyselect, etc. This will make the followup transition to non-py mode more clear.
-
Kirill Smelkov authored
- cdef their attributes (else accessing any of them raises AttributeError); in particular use pychan and sync.Mutex in cimport mode.
- change `with mu` into explicit lock/unlock (since pyx sync.Mutex does not support `with`).
- use INFINITY instead of None for empty dt.
-
- 04 Oct, 2019 4 commits
-
-
Kirill Smelkov authored
In preparation to start migrating, at least partly, time functionality to nogil mode, move the code from time.py to _time.pyx . This is straight code movement except for the now -> pynow and sleep -> pysleep renames, since in _time.pyx now and sleep were already referring to the nogil versions. We don't move to just time.pyx (note: no _ prefix), since we will need to keep distinguishing pyx/nogil objects/functions from py ones - e.g. pyx time.second is a C constant, while pyx time.pysecond is a Python object exported to the Python world as time.second.
-
Kirill Smelkov authored
One place in _time.pyx was overlooked in ce8152a2 (pyx api: Provide sleep).
-
Kirill Smelkov authored
Similarly to 78d85cdc (sync: threading.Event -> chan), replace threading.Lock usage with sync.Mutex everywhere. This brings two goods:

- sync.Mutex becomes better tested;
- we untie ourselves from the threading Python module (threading.Lock was the last thing we used from it).
-
Kirill Smelkov authored
Until now libgolang had semaphore and mutex functionality, but it was implemented only internally and not exposed to users. Since for its pinner thread wendelin.core v2 needs not only nogil channels, but also nogil mutexes, timers and so on, it is now time to start providing that. We start with mutexes:

- Expose Mutex from the insides of libgolang.cpp to the public API in libgolang.h . We actually expose both Mutex and Sema, because semaphores are sometimes also useful for low-level synchronization, and because it is easier to export Mutex and Sema at the same time.
- Provide a pyx/nogil API to those as sync.Mutex and sync.Sema.
- Provide corresponding Python wrappers.
- Add a Pyx-level test for semaphore wakeup when the wakeup is done by a thread other than the one which did the original acquire. This is the test that was promised in 5142460d (libgolang/thread: Add links to upstream PyThread_release_lock bug), and it used to corrupt memory and deadlock on macOS due to CPython & PyPy runtime bugs:

      https://bugs.python.org/issue38106 -> https://github.com/python/cpython/pull/16047
      https://bitbucket.org/pypy/pypy/issues/3072

Note about the Python wrappers: at present, due to https://github.com/cython/cython/issues/3165, a C-level panic is not properly translated into a Py-level exception in the (Py)Sema/Mutex constructors. This should not pose a real problem in practice though.
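A small usage sketch of the newly exposed primitives at the C++ level (hedged: method names follow the description above - Mutex.lock/unlock and Sema.acquire/release; whether a Sema starts in the released state is an assumption):

    #include <golang/libgolang.h>
    using namespace golang;

    static int   counter = 0;
    static Mutex counter_mu;

    void bump() {
        counter_mu.lock();      // serialize access to counter
        counter++;
        counter_mu.unlock();
    }

    void sema_demo() {
        Sema sema;              // assumed to start released
        sema.acquire();
        // ... exclusive section ...
        sema.release();
    }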
-
- 03 Oct, 2019 4 commits
-
-
Kirill Smelkov authored
- Fix references to time::sleep and _tasknanosleep in the library overview.
- Add a golang::time:: comment before opening the corresponding namespace.
-
Kirill Smelkov authored
We will soon need this functionality for sync_test.py to import tests from _sync_test.pyx . In the future many modules will need to import X_test.pyx from X_test.py as well.
-
Kirill Smelkov authored
- It is in Go style to panic on memory allocation failure.
- _makechan can already panic, instead of returning NULL, on e.g. ch._mu initialization failure.
- We were already not checking for NULL after _makechan() calls in several places (e.g. in golang/runtime/libgolang_test_c.c).

This switches _makechan to panic on memory allocation failure, and settles the style for future C-level API calls: panic instead of returning NULL on failure.
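A hedged sketch of the settled style (illustrative only - the real _makechan does more than shown; panic here is libgolang's panic):

    #include <golang/libgolang.h>
    #include <stdlib.h>
    using namespace golang;

    static void *xalloc(size_t size) {
        void *mem = calloc(1, size);
        if (mem == NULL)
            panic("alloc failed");  // panic on failure instead of returning NULL to the caller
        return mem;
    }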
-
Kirill Smelkov authored
sys is imported at the beginning of main, so there is no need to import it a second time in the middle of main. The bug has been there since gpython day 1 - since 32a21d5b (gpython: Python interpreter with support for lightweight threads).
-
- 17 Sep, 2019 11 commits
-
-
Kirill Smelkov authored
This release is bugfix-only. Compared to pygolang v0.0.3 (4ca65816) the change in speed is likely within noise:

(on i7@2.6GHz)

thread runtime:

    name             old time/op  new time/op  delta
    go               18.3µs ± 1%  18.3µs ± 1%    ~     (p=0.218 n=10+10)
    chan             2.97µs ± 5%  2.97µs ± 8%    ~     (p=0.781 n=10+10)
    select           3.59µs ± 2%  3.55µs ± 5%    ~     (p=0.447 n=9+10)
    def              56.0ns ± 0%  55.0ns ± 0%  -1.79%  (p=0.000 n=10+10)
    func_def         43.7µs ± 1%  43.8µs ± 1%  +0.35%  (p=0.029 n=10+10)
    call             65.0ns ± 0%  62.3ns ± 1%  -4.15%  (p=0.000 n=10+10)
    func_call        1.06µs ± 1%  1.04µs ± 0%  -1.26%  (p=0.000 n=10+8)
    try_finally       137ns ± 1%   137ns ± 0%    ~     (p=1.000 n=10+10)
    defer            2.32µs ± 0%  2.33µs ± 1%  +0.43%  (p=0.000 n=9+10)
    workgroup_empty  37.6µs ± 1%  37.1µs ± 2%  -1.29%  (p=0.003 n=10+10)
    workgroup_raise  47.9µs ± 1%  47.6µs ± 0%  -0.63%  (p=0.001 n=9+9)

gevent runtime:

    name             old time/op  new time/op  delta
    go               15.8µs ± 0%  16.1µs ± 1%  +2.18%  (p=0.000 n=9+10)
    chan             7.36µs ± 0%  7.21µs ± 0%  -1.97%  (p=0.000 n=8+10)
    select           10.4µs ± 0%  10.5µs ± 0%  +0.71%  (p=0.000 n=10+10)
    def              57.0ns ± 0%  55.0ns ± 0%  -3.51%  (p=0.000 n=10+10)
    func_def         43.3µs ± 1%  44.1µs ± 2%  +1.81%  (p=0.000 n=10+10)
    call             66.0ns ± 0%  65.0ns ± 0%  -1.52%  (p=0.000 n=10+10)
    func_call        1.04µs ± 1%  1.06µs ± 1%  +1.48%  (p=0.000 n=10+10)
    try_finally       137ns ± 1%   136ns ± 0%  -1.31%  (p=0.000 n=10+10)
    defer            2.32µs ± 0%  2.31µs ± 1%    ~     (p=0.472 n=8+10)
    workgroup_empty  56.0µs ± 0%  55.7µs ± 0%  -0.49%  (p=0.000 n=10+10)
    workgroup_raise  71.3µs ± 1%  71.7µs ± 1%  +0.62%  (p=0.001 n=10+10)
-
Kirill Smelkov authored
Like we already do for e.g. _chan: to increase the probability that, if something is used after free, we get a NULL-dereference crash instead of a more hard-to-understand segfault.
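The general pattern is roughly the following (a hedged illustration of the idea; the struct and its fields are hypothetical, not the ones touched by this patch):

    #include <stdlib.h>

    struct waiter {             // hypothetical object, for illustration only
        struct waiter *next;
        void          *data;
    };

    static void waiter_free(struct waiter *w) {
        w->next = NULL;         // poison pointers before freeing, so that a later
        w->data = NULL;         // use-after-free dereferences NULL and crashes loudly
        free(w);
    }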
-
Kirill Smelkov authored
Similarly to the situation described in dcf4ebd1 (libgolang: Fix chan.close to dequeue subscribers atomically), select can also be accessing a channel object at the time of wakeup, when that channel could be already destroyed: select queues waiters onto channels' recv/send queues and upon wakeup needs to dequeue them. That requires locking the channels, not to mention that destroying a channel with a non-empty subscribers queue triggers a bug panic. Contrary to the fix for recv, we cannot rework select not to access channel objects after wakeup, because for select, upon wakeup, all queued channels - not only the selected one - could be already destroyed. Thus the fix here is to incref/decref the channels for the duration where we need to access them.

The bug was not caught by existing tests and was noted while reviewing libgolang.cpp for concurrency issues. With the added test (hereby fix is exercised by a slightly amended _test_close_wakeup_all) the bug, if not fixed, renders itself e.g. as the following under TSAN:

    WARNING: ThreadSanitizer: data race (pid=4421)
      Write of size 8 at 0x7b1400000650 by thread T9:
        #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a)
        #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a)
        #2 golang::_chan::decref() golang/runtime/libgolang.cpp:479 (liblibgolang.so.0.1+0x4822)
        #3 _chanxdecref golang/runtime/libgolang.cpp:461 (liblibgolang.so.0.1+0x487a)
        #4 golang::chan<int>::operator=(decltype(nullptr)) golang/libgolang.h:296 (_golang_test.so+0x14cf9)
        #5 operator() golang/runtime/libgolang_test.cpp:304 (_golang_test.so+0x14cf9)
        #6 __invoke_impl<void, __test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 (_golang_test.so+0x14cf9)
        #7 __invoke<__test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95 (_golang_test.so+0x14cf9)
        #8 __call<void> /usr/include/c++/8/functional:400 (_golang_test.so+0x14cf9)
        #9 operator()<> /usr/include/c++/8/functional:484 (_golang_test.so+0x14cf9)
        #10 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (_golang_test.so+0x14cf9)
        #11 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (_golang_test.so+0x1850c)
        #12 operator() golang/libgolang.h:273 (_golang_test.so+0x1843a)
        #13 _FUN golang/libgolang.h:271 (_golang_test.so+0x1843a)
        #14 <null> <null> (python2.7+0x1929e3)

      Previous read of size 8 at 0x7b1400000650 by thread T10:
        #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:168 (liblibgolang.so.0.1+0x413a)
        #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:179 (liblibgolang.so.0.1+0x424a)
        #2 operator() golang/runtime/libgolang.cpp:1044 (liblibgolang.so.0.1+0x424a)
        #3 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (liblibgolang.so.0.1+0x424a)
        #4 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (liblibgolang.so.0.1+0x5f07)
        #5 golang::_deferred::~_deferred() golang/runtime/libgolang.cpp:215 (liblibgolang.so.0.1+0x5f07)
        #6 __chanselect2 golang/runtime/libgolang.cpp:1044 (liblibgolang.so.0.1+0x5f07)
        #7 _chanselect2<true> golang/runtime/libgolang.cpp:968 (liblibgolang.so.0.1+0x6665)
        #8 _chanselect golang/runtime/libgolang.cpp:963 (liblibgolang.so.0.1+0x6665)
        #9 select golang/libgolang.h:386 (_golang_test.so+0x14fc1)
        #10 operator() golang/runtime/libgolang_test.cpp:320 (_golang_test.so+0x14fc1)
        #11 __invoke_impl<void, __test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 (_golang_test.so+0x14fc1)
        #12 __invoke<__test_close_wakeup_all(bool)::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95 (_golang_test.so+0x14fc1)
        #13 __call<void> /usr/include/c++/8/functional:400 (_golang_test.so+0x14fc1)
        #14 operator()<> /usr/include/c++/8/functional:484 (_golang_test.so+0x14fc1)
        #15 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (_golang_test.so+0x14fc1)
        #16 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (_golang_test.so+0x1850c)
        #17 operator() golang/libgolang.h:273 (_golang_test.so+0x183da)
        #18 _FUN golang/libgolang.h:271 (_golang_test.so+0x183da)
        #19 <null> <null> (python2.7+0x1929e3)

      Thread T9 (tid=4661, running) created by main thread at:
        #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b)
        #1 PyThread_start_new_thread <null> (python2.7+0x19299f)
        #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98)
        #3 go<__test_close_wakeup_all(bool)::<lambda()> > golang/libgolang.h:271 (_golang_test.so+0x16c94)
        #4 __test_close_wakeup_all(bool) golang/runtime/libgolang_test.cpp:298 (_golang_test.so+0x16c94)
        #5 _test_close_wakeup_all_vsselect() golang/runtime/libgolang_test.cpp:342 (_golang_test.so+0x16f64)
        #6 __pyx_pf_6golang_12_golang_test_24test_close_wakeup_all_vsselect golang/_golang_test.cpp:4013 (_golang_test.so+0xd92a)
        #7 __pyx_pw_6golang_12_golang_test_25test_close_wakeup_all_vsselect golang/_golang_test.cpp:3978 (_golang_test.so+0xd92a)
        #8 PyEval_EvalFrameEx <null> (python2.7+0xf68b4)

      Thread T10 (tid=4662, running) created by main thread at:
        #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b)
        #1 PyThread_start_new_thread <null> (python2.7+0x19299f)
        #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98)
        #3 go<__test_close_wakeup_all(bool)::<lambda()> > golang/libgolang.h:271 (_golang_test.so+0x16d96)
        #4 __test_close_wakeup_all(bool) golang/runtime/libgolang_test.cpp:315 (_golang_test.so+0x16d96)
        #5 _test_close_wakeup_all_vsselect() golang/runtime/libgolang_test.cpp:342 (_golang_test.so+0x16f64)
        #6 __pyx_pf_6golang_12_golang_test_24test_close_wakeup_all_vsselect golang/_golang_test.cpp:4013 (_golang_test.so+0xd92a)
        #7 __pyx_pw_6golang_12_golang_test_25test_close_wakeup_all_vsselect golang/_golang_test.cpp:3978 (_golang_test.so+0xd92a)
        #8 PyEval_EvalFrameEx <null> (python2.7+0xf68b4)

and reliably crashes under regular builds.
-
Kirill Smelkov authored
During the second phase of select - after all cases were polled and noone was found to be ready - cases are queued to corresponding channels recv and send queues. While that queueing is in progress, a case, that was already queued, could win (see f0b592b4 "select: Don't let both a queued and a tried cases win at the same time" for details). If such won case is detected, we break out of queuing loop, but currently don't wait for that case to become ready. This is a bug, because when a case is marked as won, its data is not yet copied - for example for won recv case if we don't wait for that case to become ready, we will be returning from select while corresponding *prx and *rxok for recv waiter is still being copied in progress. An example TSAN reports for this bug are as follows: (1) WARNING: ThreadSanitizer: data race (pid=8223) Read of size 1 at 0x7b1800000a48 by main thread: #0 __chanselect2 golang/runtime/libgolang.cpp:1112 (liblibgolang.so.0.1+0x5fd6) #1 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #2 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #3 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #4 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #5 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #6 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) Previous write of size 1 at 0x7b1800000a48 by thread T57: #0 golang::_RecvSendWaiting::wakeup(bool) golang/runtime/libgolang.cpp:346 (liblibgolang.so.0.1+0x459d) #1 golang::_chan::_tryrecv(void*, bool*) golang/runtime/libgolang.cpp:730 (liblibgolang.so.0.1+0x511d) #2 __chanselect2 golang/runtime/libgolang.cpp:1074 (liblibgolang.so.0.1+0x5d4b) #3 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #4 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #5 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #6 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #7 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #8 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) #9 <null> <null> (python2.7+0x1929e3) Location is heap block of size 96 at 0x7b1800000a20 allocated by main thread: #0 calloc ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:623 (libtsan.so.0+0x2b323) #1 calloc ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:618 (libtsan.so.0+0x2b323) #2 __chanselect2 golang/runtime/libgolang.cpp:1018 (liblibgolang.so.0.1+0x5b8c) #3 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6665) #4 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6665) #5 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.so+0x1deac) #6 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.so+0x1deac) #7 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.so+0x1deac) #8 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) Thread T57 (tid=13758, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x3f98) #3 __pyx_f_6golang_7_golang__taskgo_pyexc golang/_golang.cpp:5926 (_golang.so+0x16f7e) #4 __pyx_pf_6golang_7_golang_2pygo golang/_golang.cpp:2399 
(_golang.so+0x16f7e) #5 __pyx_pw_6golang_7_golang_3pygo golang/_golang.cpp:2324 (_golang.so+0x16f7e) #6 PyEval_EvalFrameEx <null> (python2.7+0xf0e49) (2) WARNING: ThreadSanitizer: data race (pid=14185) Write of size 8 at 0x7b1800003000 by thread T95: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 operator() golang/runtime/libgolang.cpp:1023 (liblibgolang.so.0.1+0x44bd) #3 _M_invoke /usr/include/c++/8/bits/std_function.h:297 (liblibgolang.so.0.1+0x44bd) #4 std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 (liblibgolang.so.0.1+0x5fd8) #5 golang::_deferred::~_deferred() golang/runtime/libgolang.cpp:215 (liblibgolang.so.0.1+0x5fd8) #6 __chanselect2 golang/runtime/libgolang.cpp:1023 (liblibgolang.so.0.1+0x5fd8) #7 _chanselect2<true> golang/runtime/libgolang.cpp:949 (liblibgolang.so.0.1+0x6736) #8 _chanselect golang/runtime/libgolang.cpp:944 (liblibgolang.so.0.1+0x6736) #9 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #10 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #11 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #12 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) #13 pythread_wrapper Python/thread_pthread.h:205 (python3.6+0x6c5d6) Previous read of size 8 at 0x7b1800003000 by main thread: #0 golang::_RecvSendWaiting::wakeup(bool) golang/runtime/libgolang.cpp:347 (liblibgolang.so.0.1+0x4769) #1 golang::_chan::_trysend(void const*) golang/runtime/libgolang.cpp:661 (liblibgolang.so.0.1+0x5781) #2 _chanselect golang/runtime/libgolang.cpp:901 (liblibgolang.so.0.1+0x64d9) #3 __pyx_f_6golang_7_golang__chanselect_pyexc golang/_golang.cpp:5896 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #4 __pyx_pf_6golang_7_golang_4pyselect golang/_golang.cpp:4935 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #5 __pyx_pw_6golang_7_golang_5pyselect golang/_golang.cpp:4355 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1e562) #6 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) Thread T95 (tid=16547, running) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread Python/thread_pthread.h:252 (python3.6+0x6c67e) #2 _taskgo golang/runtime/libgolang.cpp:123 (liblibgolang.so.0.1+0x4158) #3 __pyx_f_6golang_7_golang__taskgo_pyexc golang/_golang.cpp:5926 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #4 __pyx_pf_6golang_7_golang_2pygo golang/_golang.cpp:2399 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #5 __pyx_pw_6golang_7_golang_3pygo golang/_golang.cpp:2324 (_golang.cpython-36m-x86_64-linux-gnu.so+0x1a9b5) #6 _PyCFunction_FastCallDict Objects/methodobject.c:231 (python3.6+0xd4db9) -> Fix it by always waiting for WaitGroup's won case to become ready. The bug was introduced in 3b241983 (Port/move channels to C/C++/Pyx). Before that - when channels were implemented at Python level, we were always waiting on select's group. Added test catches the bug on all - even not under TSAN - builds.
-
Kirill Smelkov authored
Currently chan.close iterates over all send/recv subscribers, unlocking/relocking the channel for each and notifying the dequeued subscriber with the channel unlocked. This leads to the situation where, even if the channel had only one subscriber, chan.close accesses chan._mu again - after notifying that subscriber. That in turn means that an idiom where e.g. a done channel is passed to a worker, which the worker closes at the end, while the main task waits on done and destroys it right after wakeup, cannot work - because close, internally, accesses the already destroyed channel, as the following TSAN report shows for _test_go_c:

    WARNING: ThreadSanitizer: data race (pid=7143)
      Write of size 8 at 0x7b1400000650 by main thread:
        #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a)
        #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a)
        #2 golang::_chan::decref() golang/runtime/libgolang.cpp:470 (liblibgolang.so.0.1+0x47f2)
        #3 _chanxdecref golang/runtime/libgolang.cpp:452 (liblibgolang.so.0.1+0x484a)
        #4 _test_go_c golang/runtime/libgolang_test_c.c:86 (_golang_test.so+0x13a2e)
        #5 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa)
        #6 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa)
        #7 PyEval_EvalFrameEx <null> (python2.7+0xf68b4)

      Previous read of size 8 at 0x7b1400000650 by thread T8:
        #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:164 (liblibgolang.so.0.1+0x410a)
        #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:175 (liblibgolang.so.0.1+0x4c82)
        #2 golang::_chan::close() golang/runtime/libgolang.cpp:754 (liblibgolang.so.0.1+0x4c82)
        #3 _chanclose golang/runtime/libgolang.cpp:732 (liblibgolang.so.0.1+0x4d1a)
        #4 _work golang/runtime/libgolang_test_c.c:92 (_golang_test.so+0x136cc)
        #5 <null> <null> (python2.7+0x1929e3)

      Thread T8 (tid=7311, finished) created by main thread at:
        #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b)
        #1 PyThread_start_new_thread <null> (python2.7+0x19299f)
        #2 _taskgo golang/runtime/libgolang.cpp:119 (liblibgolang.so.0.1+0x3f68)
        #3 _test_go_c golang/runtime/libgolang_test_c.c:84 (_golang_test.so+0x13a1c)
        #4 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa)
        #5 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa)
        #6 PyEval_EvalFrameEx <null> (python2.7+0xf68b4)

-> Fix close to dequeue all of the channel's subscribers atomically, and notify them all after the channel is unlocked and _no_ longer accessed. Close was already working this way when channels were implemented at the Python level, but in 3b241983 (Port/move channels to C/C++/Pyx) I introduced this bug while trying to avoid an additional memory allocation in close. The added test catches the bug on all builds - even not under TSAN.

----

The added test also reveals another bug: recv<onstack=false> uses the channel after wakeup and, as at the time of wakeup the channel could be already destroyed, that segfaults. Fix it by pre-reading in recv everything needed from the _chan object before going to sleep. This fix cannot go separately from the close fix, as the fixed close is required for the recv-uses-chan-after-wakeup testcase.
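At the C level the idiom in question looks roughly like this (a sketch modelled on _test_go_c; error handling omitted):

    #include <golang/libgolang.h>

    static void _work(void *arg) {
        _chan *done = (_chan *)arg;
        // ... do the work ...
        _chanclose(done);               // notify the main task
    }

    void example() {
        _chan *done = _makechan(0, 0);  // elemsize=0: pure "signal" channel
        _taskgo(_work, done);
        _chanrecv(done, NULL);          // wait until done is closed
        _chanxdecref(done);             // destroy it right away - close must not
                                        // touch the channel after waking us up
    }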
-
Kirill Smelkov authored
This will be needed in the next patch.
-
Kirill Smelkov authored
When a lambda captures a stack-resided variable by reference, it actually remembers the address of that variable, and inside the lambda code dereferences that address on variable use. The lifetime of all goroutines spawned in libgolang_test is a subset of the test driver function's lifetime, so at first glance capturing by reference should be safe in that situation. However, for that to work, the stacks of both goroutines - the main goroutine and the spawned goroutine - must be live at the same time, so that the spawned goroutine can safely retrieve a reference-captured variable located on the main goroutine's stack. This works for the thread runtime, but is known not to work for the gevent runtime, where an inactive goroutine's stack is swapped onto the heap and is generally considered "dead" while that goroutine is parked (see the "Implementation note" in 3b241983 "Port/move channels to C/C++/Pyx" for details about this).

-> Fix the test by capturing by value in lambdas. What we capture is usually a chan<T> object, which itself is a pointer, so it should not make a big difference in efficiency. It is also safer to capture channels by value, since that automatically increfs/decrefs them and adds extra protection wrt lifetime management bugs.

NOTE sending/receiving via channels from/to stack-based variables is always safe - for both thread and gevent runtimes - as the channels implementation explicitly cares for this to work. Once again, the "Implementation note" in 3b241983 has the details.
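A hedged before/after sketch of the change (illustrative, not an excerpt from libgolang_test.cpp):

    #include <golang/libgolang.h>
    using namespace golang;

    void example() {
        chan<int> ch = makechan<int>();

        // before: capture by reference - the lambda remembers the address of ch on
        // this goroutine's stack; unsafe under gevent, where a parked goroutine's
        // stack is swapped out to the heap
        go([&ch]() { ch.send(1); });
        ch.recv();

        // after: capture by value - the lambda holds its own chan<int> handle
        // (automatically increfed/decrefed); safe under both runtimes
        go([ch]() { ch.send(2); });
        ch.recv();
    }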
-
Kirill Smelkov authored
Give what() to PanicError so that uncaught panics give proper message on std::terminate_handler crash instead of printing just "std::exception", for example

    terminate called after throwing an instance of 'golang::PanicError'
      what():  chan: decref: free: recvq not empty

instead of

    terminate called after throwing an instance of 'golang::PanicError'
      what():  std::exception
-
Kirill Smelkov authored
- ThreadSanitizer helps to detect races and some memory errors, - AddressSanitizer helps to detect memory errors, - Python debug builds help to detect e.g reference counting errors. Adding all those tools to testing coverage discovers e.g. the following bugs (not a full list): ---- 8< ---- py27-thread-tsan: WARNING: ThreadSanitizer: data race (pid=7143) Write of size 8 at 0x7b1400000650 by main thread: #0 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:649 (libtsan.so.0+0x2b46a) #1 free ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:643 (libtsan.so.0+0x2b46a) #2 golang::_chan::decref() golang/runtime/libgolang.cpp:470 (liblibgolang.so.0.1+0x47f2) #3 _chanxdecref golang/runtime/libgolang.cpp:452 (liblibgolang.so.0.1+0x484a) #4 _test_go_c golang/runtime/libgolang_test_c.c:86 (_golang_test.so+0x13a2e) #5 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #6 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #7 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) Previous read of size 8 at 0x7b1400000650 by thread T8: #0 golang::Sema::acquire() golang/runtime/libgolang.cpp:164 (liblibgolang.so.0.1+0x410a) #1 golang::Mutex::lock() golang/runtime/libgolang.cpp:175 (liblibgolang.so.0.1+0x4c82) #2 golang::_chan::close() golang/runtime/libgolang.cpp:754 (liblibgolang.so.0.1+0x4c82) #3 _chanclose golang/runtime/libgolang.cpp:732 (liblibgolang.so.0.1+0x4d1a) #4 _work golang/runtime/libgolang_test_c.c:92 (_golang_test.so+0x136cc) #5 <null> <null> (python2.7+0x1929e3) Thread T8 (tid=7311, finished) created by main thread at: #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors.cc:915 (libtsan.so.0+0x2be1b) #1 PyThread_start_new_thread <null> (python2.7+0x19299f) #2 _taskgo golang/runtime/libgolang.cpp:119 (liblibgolang.so.0.1+0x3f68) #3 _test_go_c golang/runtime/libgolang_test_c.c:84 (_golang_test.so+0x13a1c) #4 __pyx_pf_6golang_12_golang_test_12test_go_c golang/_golang_test.cpp:3340 (_golang_test.so+0xcbaa) #5 __pyx_pw_6golang_12_golang_test_13test_go_c golang/_golang_test.cpp:3305 (_golang_test.so+0xcbaa) #6 PyEval_EvalFrameEx <null> (python2.7+0xf68b4) py37-thread-asan: ==22205==ERROR: AddressSanitizer: heap-use-after-free on address 0x607000002cd0 at pc 0x7fd3732a7679 bp 0x7fd3723c8c50 sp 0x7fd3723c8c48 READ of size 8 at 0x607000002cd0 thread T7 #0 0x7fd3732a7678 in golang::Sema::acquire() golang/runtime/libgolang.cpp:164 #1 0x7fd3732a8644 in golang::Mutex::lock() golang/runtime/libgolang.cpp:175 #2 0x7fd3732a8644 in golang::_chan::close() golang/runtime/libgolang.cpp:754 #3 0x7fd3724004b2 in golang::chan<golang::structZ>::close() const golang/libgolang.h:323 #4 0x7fd3724004b2 in operator() golang/runtime/libgolang_test.cpp:262 #5 0x7fd3724004b2 in __invoke_impl<void, _test_chan_vs_stackdeadwhileparked()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60 #6 0x7fd3724004b2 in __invoke<_test_chan_vs_stackdeadwhileparked()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95 #7 0x7fd3724004b2 in __call<void> /usr/include/c++/8/functional:400 #8 0x7fd3724004b2 in operator()<> /usr/include/c++/8/functional:484 #9 0x7fd3724004b2 in _M_invoke /usr/include/c++/8/bits/std_function.h:297 #10 0x7fd3723fdc6e in std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687 #11 0x7fd3723fdc6e in operator() golang/libgolang.h:273 #12 0x7fd3723fdc6e in _FUN golang/libgolang.h:271 #13 0x62ddf3 
(/home/kirr/src/tools/go/pygolang-master/.tox/py37-thread-asan/bin/python3+0x62ddf3) #14 0x7fd377393fa2 in start_thread /build/glibc-vjB4T1/glibc-2.28/nptl/pthread_create.c:486 #15 0x7fd376eda4ce in clone (/lib/x86_64-linux-gnu/libc.so.6+0xf94ce) 0x607000002cd0 is located 16 bytes inside of 72-byte region [0x607000002cc0,0x607000002d08) freed by thread T0 here: #0 0x7fd377519fb0 in __interceptor_free (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe8fb0) #1 0x7fd372401335 in golang::chan<golang::structZ>::~chan() golang/libgolang.h:292 #2 0x7fd372401335 in _test_chan_vs_stackdeadwhileparked() golang/runtime/libgolang_test.cpp:222 previously allocated by thread T0 here: #0 0x7fd37751a518 in calloc (/usr/lib/x86_64-linux-gnu/libasan.so.5+0xe9518) #1 0x7fd3732a7d0b in zalloc golang/runtime/libgolang.cpp:1185 #2 0x7fd3732a7d0b in _makechan golang/runtime/libgolang.cpp:413 Thread T7 created by T0 here: #0 0x7fd377481db0 in __interceptor_pthread_create (/usr/lib/x86_64-linux-gnu/libasan.so.5+0x50db0) #1 0x62df39 in PyThread_start_new_thread (/home/kirr/src/tools/go/pygolang-master/.tox/py37-thread-asan/bin/python3+0x62df39) SUMMARY: AddressSanitizer: heap-use-after-free golang/runtime/libgolang.cpp:164 in golang::Sema::acquire() Shadow bytes around the buggy address: 0x0c0e7fff8540: fa fa fa fa fd fd fd fd fd fd fd fd fd fa fa fa 0x0c0e7fff8550: fa fa fd fd fd fd fd fd fd fd fd fa fa fa fa fa 0x0c0e7fff8560: fd fd fd fd fd fd fd fd fd fd fa fa fa fa fd fd 0x0c0e7fff8570: fd fd fd fd fd fd fd fa fa fa fa fa fd fd fd fd 0x0c0e7fff8580: fd fd fd fd fd fa fa fa fa fa fd fd fd fd fd fd =>0x0c0e7fff8590: fd fd fd fa fa fa fa fa fd fd[fd]fd fd fd fd fd 0x0c0e7fff85a0: fd fa fa fa fa fa 00 00 00 00 00 00 00 00 00 fa 0x0c0e7fff85b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0e7fff85e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ---- 8< ---- The bugs will be addressed in the followup patches.
-
Kirill Smelkov authored
Make it clear which scope is covered by with_lock(mu): instead of implicitly having it from with_lock till the end of the current block

    with_lock(mu);
    ...
    } // end of current block

make it be the statement covered by with_lock, as if with_lock was e.g. an `if`, for example:

    with_lock(mu)
        do_something();
    // mu released here

or

    with_lock(mu) {
        do_smth1();
        do_smth2();
    }
    // mu released here

This makes the intent in __chanselect2 more clear: it was

    ch->_mu.lock();
    with_lock(g->_mu);
    ...
    ch->_mu.unlock();

and semantically a human expects g->_mu to be released _before_ ch->_mu.unlock(). However with the current with_lock implementation, g->_mu will be released _after_ ch->_mu.unlock(), which goes against intuition.

-> By reworking the with_lock implementation to cover only the next statement or block of code, we make sure that g->_mu will be released _before_ ch->_mu - the same way as it was until 3b241983 (Port/move channels to C/C++/Pyx):

    ch->_mu.lock();
    with_lock(g->_mu) {
        ...
    }
    ch->_mu.unlock();
-
Kirill Smelkov authored
Be on the safe side: both aspects - copy and move - are forbidden for all those internal classes.
-
- 16 Sep, 2019 2 commits
-
-
Kirill Smelkov authored
We already clear the whole chan memory (struct _chan + the channel buffer that comes after it) on channel release. However, probably due to a thinko, on the creation path only struct _chan, without the buffer, was cleared. -> Be on the safe side and clear everything. Use a zalloc helper to automatically avoid bugs where the size of the allocation != the size of the clear.
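A minimal sketch of a zalloc-style helper (illustrative; the real helper lives in libgolang.cpp and may differ in details):

    #include <stdlib.h>

    // allocate size bytes and clear exactly that many, so that the allocation
    // size and the cleared size can never get out of sync at call sites
    static void *zalloc(size_t size) {
        return calloc(1, size);
    }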
-
Kirill Smelkov authored
I reported the bug (see 69db91bf "libgolang: Add internal semaphores") to the CPython and PyPy upstreams. PyPy people already fixed it and the fix, if I understood correctly, should be available as part of PyPy 7.2 . Let's hope that CPython 2.7 will also be fixed.

- https://bugs.python.org/issue38106 -> https://github.com/python/cpython/pull/16047
- https://bitbucket.org/pypy/pypy/issues/3072

The added Python-level test triggers the bug with high probability. A dedicated C-level Semaphore-only test is pending.
-
- 11 Sep, 2019 4 commits
-
-
Kirill Smelkov authored
With the WAIT_LOCK flag and correct usage, PyThread_acquire_lock should never fail, but let's be cautious and verify that it indeed succeeds in acquiring the lock. The rest of the external functions called by the thread runtime either return void, or have their return value checked.
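A hedged sketch of the kind of check meant here (the surrounding code is illustrative; PyThread_acquire_lock called with WAIT_LOCK blocks until the lock is taken and returns nonzero on success):

    #include <Python.h>
    #include <pythread.h>

    static void sema_acquire(PyThread_type_lock sema) {
        int ok = PyThread_acquire_lock(sema, WAIT_LOCK);
        if (!ok)
            Py_FatalError("sema_acquire: PyThread_acquire_lock failed");    // be loud instead of continuing silently
    }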
-
Kirill Smelkov authored
- pytest cache
- pip wheel metadata
- tags (from ctags)
- vim swap files
-
Kirill Smelkov authored
In Python, %-formatting needs % operator, not ",".
-
Kirill Smelkov authored
chan<T> is a pointer type, and e.g. send does not change the pointer - it only "modifies" the channel buffer. Marking the appropriate methods as const is needed so that a `const chan<T>` can be used to send/receive the same way a `chan<T>` is used.
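The point in code form (a small sketch; chan<T>::send/recv as provided by libgolang.h):

    #include <golang/libgolang.h>
    using namespace golang;

    // send/recv only touch the channel object the handle points to, not the handle
    // itself, so a const reference to the handle is enough for both directions.
    void produce(const chan<int>& ch) { ch.send(1); }
    int  consume(const chan<int>& ch) { return ch.recv(); }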
-
- 01 Sep, 2019 1 commit
-
-
Kirill Smelkov authored
ch->_mu was initialized with the Sema - not Mutex - constructor. It appeared to work because currently Mutex is a trivial wrapper around Sema.
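For illustration, the kind of mix-up being fixed (a hedged, hypothetical sketch - the real initialization happens inside _makechan):

    #include <new>
    #include <golang/libgolang.h>
    using namespace golang;

    struct chanlike { Mutex _mu; };     // hypothetical stand-in for struct _chan

    void init(chanlike *ch) {
        // the bug: the Mutex field constructed via the Sema constructor, e.g.
        //     new (&ch->_mu) Sema();   // only "works" while Mutex trivially wraps Sema
        // the fix: construct it with its proper type
        new (&ch->_mu) Mutex();
    }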
-
- 29 Aug, 2019 4 commits
-
-
Kirill Smelkov authored
As suggested by https://github.com/jayvdb run the manifest checker and oops, it finds that we forgot to include pyproject.toml for testprog/golang_pyx_user:

    missing from sdist:
      golang/pyx/testprog/golang_pyx_user/pyproject.toml
    suggested MANIFEST.in rules:
      recursive-include golang *.toml

-> Fix MANIFEST.in in a generic way to include golang/*/*.toml

The bug was not affecting pygolang usage - only golang.pyx.build tests were failing for an sdist-installed pygolang.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
- Move channels implementation to be done in C++ inside libgolang. The code and logic is based on the previous Python-level channels implementation, but the new code is just C++ and does not depend on Python nor the GIL at all, and so works without the GIL if the libgolang runtime works without the GIL(*).

  (*) for example the "thread" runtime works without the GIL, while the "gevent" runtime acquires the GIL on every semaphore acquire.

  The new channels implementation is located in δ(libgolang.cpp).

- Provide a low-level C channels API to the implementation. The low-level C API was inspired by Libtask[1] and Plan9/Libthread[2].

  [1] Libtask: a Coroutine Library for C and Unix. https://swtch.com/libtask.
  [2] http://9p.io/magic/man2html/2/thread.

- Provide a high-level C++ channels API that provides type-safety and automatic channel lifetime management. Overviews of the C and C++ APIs are in δ(libgolang.h).

- Expose the C++ channels API at Pyx level as a Cython/nogil API so that Cython programs can use channels with ease and without the need to care about lifetime management and low-level details. An overview of the Cython/nogil channels API is in δ(README.rst) and δ(_golang.pxd).

- Turn Python channels into a tiny wrapper around chan<PyObject>.

Implementation note:

- The gevent case needs special care because greenlet, which gevent uses, swaps the coroutine stack from the C stack to the heap on coroutine park, and replaces that space on the C stack with the stack of the activated coroutine copied back from the heap. This way, if an object on g's stack is accessed while g is parked, it would be memory of another g's stack. The channels implementation explicitly cares about this issue so that

      stack -> * channel send, or
      * -> stack channel receive

  work correctly. It should be noted that the greenlet approach, which it inherits from stackless, is not only a bit tricky, but also comes with overhead (stack <-> heap copy), and prevents a coroutine from migrating from one OS thread to another OS thread, as that would change the addresses of on-stack things for that coroutine. As the latter property prevents using multiple CPUs even if the program / runtime are prepared to work without the GIL, it would be more logical to change gevent/greenlet to use a separate stack for each coroutine. That would remove the stack <-> heap copy and the need for special care in the channels implementation for stack - stack sends. Such an approach should be possible to implement with e.g. swapcontext or a similar mechanism, and a proof of concept of such work wrapped into a greenlet-compatible API exists[3]. It would be good if at some point there would be a chance to explore such an approach in the Pygolang context.

  [3] https://github.com/python-greenlet/greenlet/issues/113#issuecomment-264529838 and below

Just this patch brings in the following speedup at Python level: (on i7@2.6GHz)

thread runtime:

    name             old time/op  new time/op  delta
    go               20.0µs ± 1%  15.6µs ± 1%  -21.84%  (p=0.000 n=10+10)
    chan             9.37µs ± 4%  2.89µs ± 6%  -69.12%  (p=0.000 n=10+10)
    select           20.2µs ± 4%   3.4µs ± 5%  -83.20%  (p=0.000 n=8+10)
    def              58.0ns ± 0%  60.0ns ± 0%   +3.45%  (p=0.000 n=8+10)
    func_def         43.8µs ± 1%  43.9µs ± 1%     ~     (p=0.796 n=10+10)
    call             62.4ns ± 1%  63.5ns ± 1%   +1.76%  (p=0.001 n=10+10)
    func_call        1.06µs ± 1%  1.05µs ± 1%   -0.63%  (p=0.002 n=10+10)
    try_finally       136ns ± 0%   137ns ± 0%   +0.74%  (p=0.000 n=9+10)
    defer            2.28µs ± 1%  2.33µs ± 1%   +2.34%  (p=0.000 n=10+10)
    workgroup_empty  48.2µs ± 1%  34.1µs ± 2%  -29.18%  (p=0.000 n=9+10)
    workgroup_raise  58.9µs ± 1%  45.5µs ± 1%  -22.74%  (p=0.000 n=10+10)

gevent runtime:

    name             old time/op  new time/op  delta
    go               24.7µs ± 1%  15.9µs ± 1%  -35.72%  (p=0.000 n=9+9)
    chan             11.6µs ± 1%   7.3µs ± 1%  -36.74%  (p=0.000 n=10+10)
    select           22.5µs ± 1%  10.4µs ± 1%  -53.73%  (p=0.000 n=10+10)
    def              55.0ns ± 0%  55.0ns ± 0%     ~     (all equal)
    func_def         43.6µs ± 1%  43.6µs ± 1%     ~     (p=0.684 n=10+10)
    call             63.0ns ± 0%  64.0ns ± 0%   +1.59%  (p=0.000 n=10+10)
    func_call        1.06µs ± 1%  1.07µs ± 1%   +0.45%  (p=0.045 n=10+9)
    try_finally       135ns ± 0%   137ns ± 0%   +1.48%  (p=0.000 n=10+10)
    defer            2.31µs ± 1%  2.33µs ± 1%   +0.89%  (p=0.000 n=10+10)
    workgroup_empty  70.2µs ± 0%  55.8µs ± 0%  -20.63%  (p=0.000 n=10+10)
    workgroup_raise  90.3µs ± 0%  70.9µs ± 1%  -21.51%  (p=0.000 n=9+10)

The whole Cython/nogil work - starting from 8fa3c15b (Start using Cython and providing Cython/nogil API) to this patch - brings in the following speedup at Python level: (on i7@2.6GHz)

thread runtime:

    name             old time/op  new time/op  delta
    go               92.9µs ± 1%  15.6µs ± 1%  -83.16%  (p=0.000 n=10+10)
    chan             13.9µs ± 1%   2.9µs ± 6%  -79.14%  (p=0.000 n=10+10)
    select           29.7µs ± 6%   3.4µs ± 5%  -88.55%  (p=0.000 n=10+10)
    def              57.0ns ± 0%  60.0ns ± 0%   +5.26%  (p=0.000 n=10+10)
    func_def         44.0µs ± 1%  43.9µs ± 1%     ~     (p=0.055 n=10+10)
    call             63.5ns ± 1%  63.5ns ± 1%     ~     (p=1.000 n=10+10)
    func_call        1.06µs ± 0%  1.05µs ± 1%   -1.31%  (p=0.000 n=10+10)
    try_finally       139ns ± 0%   137ns ± 0%   -1.44%  (p=0.000 n=10+10)
    defer            2.36µs ± 1%  2.33µs ± 1%   -1.26%  (p=0.000 n=10+10)
    workgroup_empty  98.4µs ± 1%  34.1µs ± 2%  -65.32%  (p=0.000 n=10+10)
    workgroup_raise   135µs ± 1%    46µs ± 1%  -66.35%  (p=0.000 n=10+10)

gevent runtime:

    name             old time/op  new time/op  delta
    go               68.8µs ± 1%  15.9µs ± 1%  -76.91%  (p=0.000 n=10+9)
    chan             14.8µs ± 1%   7.3µs ± 1%  -50.67%  (p=0.000 n=10+10)
    select           32.0µs ± 0%  10.4µs ± 1%  -67.57%  (p=0.000 n=10+10)
    def              58.0ns ± 0%  55.0ns ± 0%   -5.17%  (p=0.000 n=10+10)
    func_def         43.9µs ± 1%  43.6µs ± 1%   -0.53%  (p=0.035 n=10+10)
    call             63.5ns ± 1%  64.0ns ± 0%   +0.79%  (p=0.033 n=10+10)
    func_call        1.08µs ± 1%  1.07µs ± 1%   -1.74%  (p=0.000 n=10+9)
    try_finally       142ns ± 0%   137ns ± 0%   -3.52%  (p=0.000 n=10+10)
    defer            2.32µs ± 1%  2.33µs ± 1%   +0.71%  (p=0.005 n=10+10)
    workgroup_empty  90.3µs ± 0%  55.8µs ± 0%  -38.26%  (p=0.000 n=10+10)
    workgroup_raise   108µs ± 1%    71µs ± 1%  -34.64%  (p=0.000 n=10+10)

This patch is the final patch in the series to reach the goal of providing channels that can be used in Cython/nogil code.

Cython/nogil channels work is dedicated to the memory of Вера Павловна Супрун[4].

[4] https://navytux.spb.ru/memory/%D0%A2%D1%91%D1%82%D1%8F%20%D0%92%D0%B5%D1%80%D0%B0.pdf#page=3
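For orientation, a short usage sketch of the resulting C++-level API (based on the overview described above; assumes makechan/go/send/recv/close spellings as given in libgolang.h):

    #include <golang/libgolang.h>
    using namespace golang;

    void example() {
        chan<int> ch = makechan<int>();     // unbuffered channel carrying int
        go([ch]() {                         // spawn a goroutine; capture the handle by value
            ch.send(1);
            ch.close();
        });
        int v = ch.recv();                  // v = 1
        v = ch.recv();                      // channel closed -> zero value (0)
    }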
-
Kirill Smelkov authored
Copy linux/list.h from wendelin.core, which copied it from util-linux.git, which took it from linux.git in its LGPL license state. Here are the corresponding links for wendelin.core and util-linux:

    wendelin.core@a0f940ad
    https://git.kernel.org/cgit/utils/util-linux/util-linux.git/tree/include/list.h?id=v2.25-165-g9138d6f

Linked lists will be used in the channels implementation in the next patch.
-