- 03 Oct, 2019 1 commit

Kirill Smelkov authored
Prepares the code to be moved into pyx/nogil, as sync.Mutex - contrary to threading.Lock - is also present in the pyx/nogil world.
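A minimal sketch of the substitution, assuming pygolang is installed; the class and call sites below are illustrative, not the actual wendelin.core code:

    from golang import sync

    class FileTab(object):
        def __init__(self):
            self._mu  = sync.Mutex()   # was: threading.Lock()
            self._tab = {}

        def add(self, k, v):
            self._mu.lock()            # was: self._mu.acquire()
            try:
                self._tab[k] = v
            finally:
                self._mu.unlock()      # was: self._mu.release()

Unlike threading.Lock, sync.Mutex is also usable from Cython nogil code, which is what makes the later move into pyx/nogil possible.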
- 01 Oct, 2019 2 commits

Kirill Smelkov authored
Kirill Smelkov authored
- 29 Sep, 2019 8 commits

Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
* master: readme: wendelin.io moved -> wendelin.nexedi.com
Kirill Smelkov authored
Kirill Smelkov authored
The limitation that only one simultaneous watch request over one wlink is possible was added in April (b4857f66), when both Watch and WatchLink locking was not yet there, and was marked as XXX. That locking was added in July (85d86a32) though, so there should no longer be a real need to forbid simultaneous watch requests over one wlink.

Also, the limit implementation had a bug: it was setting handlingWatch back to 0, but _after_ sending the reply to the client. This way a situation was possible where the client wakes up first and sends another watch request before wcfs gets scheduled again: handlingWatch is still _1_, and the new watch request is rejected. This bug is very likely to trigger when running wcfs tests on a 2-CPU machine, or with just GOMAXPROCS=2:

    C: setup watch f<0000000000000043> @at1 (03d2c23f46d04dcc)
    # pinok: {2: @at1 (03d2c23f46d04dcc), 3: @at0 (03d2c23f46c44300), 5: @at0 (03d2c23f46c44300)}
    S: wlink 6: rx: "1 watch 0000000000000043 @03d2c23f46d04dcc\n"
    S: wlink 6: tx: "2 pin 0000000000000043 #3 @03d2c23f46c44300\n"
    C: watch : rx: '2 pin 0000000000000043 #3 @03d2c23f46c44300\n'
    S: wlink 6: tx: "4 pin 0000000000000043 #2 @03d2c23f46d04dcc\n"
    S: wlink 6: tx: "6 pin 0000000000000043 #5 @03d2c23f46c44300\n"
    C: watch : rx: '4 pin 0000000000000043 #2 @03d2c23f46d04dcc\n'
    C: watch : rx: '6 pin 0000000000000043 #5 @03d2c23f46c44300\n'
    S: wlink 6: rx: "2 ack\n"
    S: wlink 6: rx: "4 ack\n"
    S: wlink 6: rx: "6 ack\n"
    S: wlink 6: tx: "1 ok\n"
    C: watch : rx: '1 ok\n'

    C: setup watch f<0000000000000043> (@at1 (03d2c23f46d04dcc) ->) @at2 (03d2c23f46e91daa)
    # pin@old: {2: @at1 (03d2c23f46d04dcc), 3: @at0 (03d2c23f46c44300), 5: @at0 (03d2c23f46c44300)}
    # pin@new: {2: @at2 (03d2c23f46e91daa), 3: @at2 (03d2c23f46e91daa), 5: @at2 (03d2c23f46e91daa)}
    # pinok:   {2: @at2 (03d2c23f46e91daa), 3: @at2 (03d2c23f46e91daa), 5: @at2 (03d2c23f46e91daa)}
    S: wlink 6: rx: "3 watch 0000000000000043 @03d2c23f46e91daa\n"
    S: wlink 6: tx: "0 error: 3: another watch request is already in progress\n"
    C: watch : rx: '0 error: 3: another watch request is already in progress\n'
    C: watch : rx fatal: 'error: 3: another watch request is already in progress'
    C: watch : rx: ''

If we needed to keep the limit, we would have to move setting handlingWatch=0 to just before sending the final reply to the client; but since the need for the limit is not there anymore, fix the bug by removing the limit altogether.
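For illustration, a hedged Python sketch of the flag ordering described above (the real wcfs is a Go program; the names and structure here are not the actual code):

    import threading

    busy = False   # plays the role of handlingWatch (0/1 in wcfs)
    mu   = threading.Lock()

    def serve_watch(reply):
        global busy
        with mu:
            if busy:
                reply("error: another watch request is already in progress")
                return
            busy = True
        # ... handle the watch: send pin messages, wait for acks ...
        reply("ok")       # <- the client can wake up here and immediately
                          #    send its next watch request ...
        with mu:
            busy = False  # <- ... while busy is still True: spurious reject

Clearing busy before sending the final reply would close the window; removing the limit altogether, as done here, avoids the ordering question entirely.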
Kirill Smelkov authored
Kirill Smelkov authored
- 27 Sep, 2019 3 commits

Kirill Smelkov authored
Kirill Smelkov authored
which has glibc 2.23, while mlock2 was added to glibc only in 2.27. The kernel there (4.4.x) does have mlock2, though.
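When glibc lacks the wrapper but the kernel provides the syscall, the call can be made directly by number. A hedged sketch in Python via ctypes (the syscall number 325 is x86-64 specific, and this is not necessarily how the repository implements its fallback):

    import ctypes, os

    libc = ctypes.CDLL(None, use_errno=True)

    SYS_mlock2    = 325    # x86-64 only; other architectures use other numbers
    MLOCK_ONFAULT = 0x01

    def mlock2(addr, length, flags=MLOCK_ONFAULT):
        # bypass the (possibly missing) glibc wrapper and ask the kernel
        # directly; works on kernels >= 4.4, where mlock2 was introduced
        r = libc.syscall(SYS_mlock2, ctypes.c_void_p(addr),
                         ctypes.c_size_t(length), ctypes.c_int(flags))
        if r != 0:
            e = ctypes.get_errno()
            raise OSError(e, os.strerror(e))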
Kirill Smelkov authored

- 20 Sep, 2019 1 commit

Kirill Smelkov authored
- 18 Sep, 2019 1 commit

Kirill Smelkov authored

- 17 Sep, 2019 1 commit

Kirill Smelkov authored

- 07 Aug, 2019 1 commit

Kirill Smelkov authored
X Use `with gil` + regular py code instead of PyGILState_Ensure/PyGILState_Release/PyRun_SimpleString
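A hedged Cython sketch contrasting the two approaches (illustrative only, not the actual code from the repository):

    # before: manual GIL management + running Python code from a C string
    cdef extern from "Python.h":
        ctypedef int PyGILState_STATE
        PyGILState_STATE PyGILState_Ensure() nogil
        void PyGILState_Release(PyGILState_STATE) nogil
        int  PyRun_SimpleString(const char *) nogil

    cdef void hello_before() nogil:
        cdef PyGILState_STATE g = PyGILState_Ensure()
        PyRun_SimpleString("print('hello')")
        PyGILState_Release(g)

    # after: `with gil` takes the GIL and the body is regular Python code
    cdef void hello_after() nogil:
        with gil:
            print('hello')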
- 19 Jul, 2019 1 commit

Kirill Smelkov authored

- 17 Jul, 2019 21 commits

Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored
Kirill Smelkov authored