- 16 Aug, 2021 11 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
X wcfs/xbtree: Fix update not to add duplicate extra point if rebuild - called by Update - already added it

It was failing e.g. as

    === RUN   TestΔBTail/rebuild/T4/T1,3-T/T-T-T-T/B0:b-B1:c,2:j-T-B4:d/B3:h→T/T2,3/T-T-T/B1:d-B2:c-B3:i/_T{3};R/_→T2/B1:g-B2:c,3:i
    δbtail_test.go:917: after Update(@at1→@at2):
        vδT:
        have:
            @at1: map[0:{b ø} 1:{c d} 2:{j c} 3:{h i} 4:{d ø}]
            @at2: map[1:{d g}]
        vδb/root:
            @at1
            @at2
            @at2        <-- HERE
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
xbtree tests, in normal mode, run ~130s on my laptop. On a testnode, however, they run ~500s and sometimes more than 10 minutes, probably depending on surrounding load. -> Increase the default `go test` timeout to avoid sporadic "test timed out" failures.
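For reference, a minimal illustration of the knob involved; `go test`'s standard -timeout flag defaults to 10m, and the concrete value and package path below are assumptions, not the project's actual settings:

    $ go test -timeout 20m ./wcfs/...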
-
Kirill Smelkov authored
This is what we are primarily interested in. For full details, -vv will show the *.py logs verbosely.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This makes sure those programs are always built afresh instead of being stuck at an outdated build. This is needed because the corresponding test .c file includes many other .c files and we do not implement dependency tracking.
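A minimal sketch of why this breaks incremental builds (all file names here are hypothetical): the test driver textually includes its helpers, so editing a helper leaves the driver's own mtime unchanged, and a build system that tracks only the driver file would keep the stale object:

    /* t_driver.c -- hypothetical driver; the single translation unit
       textually pulls in other .c files */
    #include "helper_a.c"   /* editing helper_a.c or helper_b.c does not */
    #include "helper_b.c"   /* change t_driver.c itself                  */

    int main(void) {
        /* hypothetical entry point defined by the included helpers */
        return run_all_tests();
    }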
-
Kirill Smelkov authored
There is no TransactionMetaData on ZODB4.
-
- 12 Aug, 2021 1 commit
-
-
Kirill Smelkov authored
See added comments to wcfs_test.py for details on how that can happen.

Fixes test segmentation faults like

    $ WENDELIN_CORE_TEST_DB="<zeo>" python -m pytest -vs -k test_wcfs_watch_vs_access

    wcfs_test.py::test_wcfs_watch_vs_access
    ------------------------------- live log setup --------------------------------
    INFO ZEO.ClientStorage:ClientStorage.py:263 ('localhost', 20106) ClientStorage (pid=36942) created RW/normal for storage: '1'
    INFO ZEO.cache:cache.py:217 created temporary cache file '<fdopen>'
    INFO ZEO.ClientStorage:ClientStorage.py:574 ('localhost', 20106) Testing connection <ManagedClientConnection ('127.0.0.1', 20106)>
    INFO ZEO.zrpc.Connection('C'):connection.py:365 (127.0.0.1:20106) received handshake 'Z4'
    INFO ZEO.ClientStorage:ClientStorage.py:580 ('localhost', 20106) Server authentication protocol None
    INFO ZEO.ClientStorage:ClientStorage.py:640 ('localhost', 20106) Connected to storage: ('localhost', 20106)
    INFO ZEO.ClientStorage:ClientStorage.py:1326 ('localhost', 20106) No verification necessary -- empty cache
    INFO ZEO.ClientStorage:ClientStorage.py:728 ('localhost', 20106) Disconnected from storage: "('localhost', 20106)"
    -------------------------------- live log call --------------------------------
    INFO ZEO.ClientStorage:ClientStorage.py:263 ('localhost', 20106) ClientStorage (pid=36942) created RW/normal for storage: '1'
    INFO ZEO.cache:cache.py:217 created temporary cache file '<fdopen>'
    INFO ZEO.ClientStorage:ClientStorage.py:574 ('localhost', 20106) Testing connection <ManagedClientConnection ('127.0.0.1', 20106)>
    INFO ZEO.zrpc.Connection('C'):connection.py:365 (127.0.0.1:20106) received handshake 'Z4'
    INFO ZEO.ClientStorage:ClientStorage.py:580 ('localhost', 20106) Server authentication protocol None
    INFO ZEO.ClientStorage:ClientStorage.py:640 ('localhost', 20106) Connected to storage: ('localhost', 20106)
    INFO ZEO.ClientStorage:ClientStorage.py:1326 ('localhost', 20106) No verification necessary -- empty cache
    INFO root:__init__.py:294 wcfs: starting for zeo://localhost:20106 ...
    wcfs: 2021/08/13 02:27:40 zodb: FIXME: open zeo://localhost:20106: raw cache is not ready for invalidations -> NoCache forced
    INFO root:__init__.py:335 wcfs: started pid37431 @ /dev/shm/wcfs/e7630c831aeed36692d06459de5a25a745eb9d76
    M: commit -> @at0 (03e2107fabf002ee)
    M: commit -> @at1 (03e2107fac097466)
    M:      f<0000000000000002>     [2]
    M: commit -> @at2 (03e2107fac3df2aa)
    M:      f<0000000000000002>     [2, 3, 5]
    M: commit -> @at3 (03e2107fac5ef011)
    M:      f<0000000000000002>     [2, 5]
    C: setup watch f<0000000000000002> @at3 (03e2107fac5ef011)  # pinok: {}
    C: setup watch f<0000000000000002> @at3 (03e2107fac5ef011)  # pinok: {}
    C: setup watch f<0000000000000002> @at2 (03e2107fac3df2aa)  # pinok: {2: @at2 (03e2107fac3df2aa)}
    M: commit -> @at4 (03e2107face33c77)
    M:      f<0000000000000002>     [2, 5, 6]

    >>> Change history by file:

    f<0000000000000002>:
                                0 1 2 3 4 5 6 7 a b c d e f g h
        @at0 (03e2107fabf002ee)
        @at1 (03e2107fac097466)     2
        @at2 (03e2107fac3df2aa)     2 3   5
        @at3 (03e2107fac5ef011)     2     5
        @at4 (03e2107face33c77)     2     5 6

    INFO ZEO.ClientStorage:ClientStorage.py:728 ('localhost', 20106) Disconnected from storage: "('localhost', 20106)"
    INFO root:__init__.py:401 wcfs: unmount/stop wcfs pid37431 @ /dev/shm/wcfs/e7630c831aeed36692d06459de5a25a745eb9d76
    WARNING root:__init__.py:548 fuse_unmount /dev/shm/wcfs/e7630c831aeed36692d06459de5a25a745eb9d76: failed: fusermount: failed to unmount /dev/shm/wcfs/e7630c831aeed36692d06459de5a25a745eb9d76: Device or resource busy
    WARNING root:__init__.py:533 # lsof /dev/shm/wcfs/e7630c831aeed36692d06459de5a25a745eb9d76
    WARNING root:__init__.py:541
    WARNING root:__init__.py:543 (lsof failed)
    WARNING root:__init__.py:461 -> kill -TERM wcfs.go ...
    WARNING root:__init__.py:464 -> abort FUSE connection ...
    Segmentation fault: read @00007f6e36bfe000
    /srv/slapgrid/slappart91/srv/runner/software/3335682bae677c2d474f9244e578f64b/parts/wendelin.core/wcfs/client/./../../bigfile/liblibvirtmem.so(dump_traceback+0x1b)[0x7f6f80844e4b]
    /srv/slapgrid/slappart91/srv/runner/software/3335682bae677c2d474f9244e578f64b/parts/wendelin.core/wcfs/client/./../../bigfile/liblibvirtmem.so(+0x3956)[0x7f6f80841956]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730)[0x7f6f83117730]
    /srv/slapgrid/slappart91/srv/runner/software/3335682bae677c2d474f9244e578f64b/parts/wendelin.core/wcfs/internal/wcfs_test.so(+0x10860)[0x7f6e3e2eb860]
    /srv/slapgrid/slappart91/srv//runner//shared/python2.7/93d57ff089fd75f374514794469a0538/bin/python2.7(PyEval_EvalFrameEx+0x7b5)[0x4d2dc5]
    /srv/slapgrid/slappart91/srv//runner//shared/python2.7/93d57ff089fd75f374514794469a0538/bin/python2.7(PyEval_EvalCodeEx+0x2cc)[0x4d1abc]
    /srv/slapgrid/slappart91/srv//runner//shared/python2.7/93d57ff089fd75f374514794469a0538/bin/python2.7[0x51b92e]
    /srv/slapgrid/slappart91/srv/runner/software/3335682bae677c2d474f9244e578f64b/develop-eggs/pygolang-0.0.8-py2.7-linux-x86_64.egg/golang/_golang.so(+0xc8b0)[0x7f6f8182b8b0]
    /srv/slapgrid/slappart91/srv/runner/software/3335682bae677c2d474f9244e578f64b/develop-eggs/pygolang-0.0.8-py2.7-linux-x86_64.egg/golang/_golang.so(+0x14ab4)[0x7f6f81833ab4]
    /srv/slapgrid/slappart91/srv//runner//shared/python2.7/93d57ff089fd75f374514794469a0538/bin/python2.7[0x54bbb4]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7f6f8310cfa3]
    /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f6f82eae4cf]
    Segmentation fault (core dumped)

Which looks under gdb as

    #0  on_pagefault (sig=<optimized out>, si=0x7f6dde7fb570, _uc=<optimized out>) at bigfile/pagefault.c:171
    #1  <signal handler called>
    #2  __pyx_pf_8wendelin_4wcfs_8internal_9wcfs_test_read_nogil (__pyx_self=<optimized out>, __pyx_v_mem=...) at wcfs/internal/wcfs_test.cpp:3103
    #3  __pyx_pw_8wendelin_4wcfs_8internal_9wcfs_test_1read_nogil (__pyx_self=<optimized out>, __pyx_arg_mem=<optimized out>) at wcfs/internal/wcfs_test.cpp:3029
    #4  0x00000000004d2dc5 in call_function (oparg=<optimized out>, pp_stack=0x7f6dde7fbc88) at Python/ceval.c:4364
    #5  PyEval_EvalFrameEx (f=<optimized out>, throwflag=<optimized out>) at Python/ceval.c:3013
    #6  0x00000000004d1abc in PyEval_EvalCodeEx (co=0x7f6f8094cbb0, globals=<optimized out>, locals=locals@entry=0x0, args=args@entry=0x7f6f82d72068, argcount=<optimized out>, kws=kws@entry=0x7f6f82d72068, kwcount=0, defs=0x0, defcount=0, closure=0x7f6e3c6e7110) at Python/ceval.c:3608
    #7  0x000000000051b92e in function_call (func=0x7f6e3c711150, arg=0x7f6f82d72050, kw=0x7f6e3c710b90) at Objects/funcobject.c:523
    #8  0x00007f6f8182b8b0 in __Pyx_PyObject_Call (func=0x7f6e3c711150, arg=<optimized out>, kw=<optimized out>) at golang/_golang.cpp:15660
    #9  0x00007f6f81833ab4 in __pyx_f_6golang_7_golang___goviac (__pyx_v_arg=0x7f6e3c70f5f0) at golang/_golang.cpp:3466
    #10 __pyx_f_6golang_7_golang__goviac (__pyx_v_arg=__pyx_v_arg@entry=0x7f6e3c70f5f0) at golang/_golang.cpp:3350
    #11 0x000000000054bbb4 in pythread_wrapper (arg=<optimized out>) at Python/thread_pthread.h:178
    #12 0x00007f6f8310cfa3 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
    #13 0x00007f6f82eae4cf in clone () from /lib/x86_64-linux-gnu/libc.so.6
-
- 11 Aug, 2021 4 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
This reverts commit 30740602. libbacktrace is not always automatically installed.
-
Kirill Smelkov authored
-
Kirill Smelkov authored
Do what we can do without gdb and then fall through to the regular segmentation fault. With a core dump gdb can still be used, but this way we already get the traceback of the crash in the log automatically.
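A minimal sketch of this pattern, assuming glibc's backtrace(3) facilities instead of the project's own dump_traceback (the real handler lives in bigfile/pagefault.c and differs in detail):

    #include <execinfo.h>
    #include <signal.h>
    #include <string.h>

    static void on_segv(int sig) {
        void *frames[64];
        int  n = backtrace(frames, 64);
        /* backtrace_symbols_fd is async-signal-safe, unlike backtrace_symbols */
        backtrace_symbols_fd(frames, n, 2 /*stderr*/);

        /* fall through to the regular segmentation fault: restore the
         * default disposition and re-raise, so the process still dies
         * with SIGSEGV and a core dump remains available for gdb */
        signal(sig, SIG_DFL);
        raise(sig);
    }

    static void segv_traceback_setup(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_segv;
        sigaction(SIGSEGV, &sa, NULL);
        /* backtrace() may allocate on its first call; warming it up here,
         * outside of any signal handler, is the usual precaution */
        void *warmup[1];
        backtrace(warmup, 1);
    }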
-
- 09 Aug, 2021 4 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
t(TestΔBTailRandom) ~320s -> ~100s.
-
Kirill Smelkov authored
* kirr/t:
  X Fix mlock2 build on Debian 8
-
Kirill Smelkov authored
* t2:
  X ΔFtail.SliceByFileRev: Fix untracked entries to be present uniformly in result
  .
  .
  .
  .
  .
  X test that shows problem of SliceByRootRev where untracked blocks are not added uniformly into whole history
  .
  .
  .
  .
  .
  .
  .
  .
  X Size no longer tracks [0,∞) since we start tracking when zfile is non-empty
  X ΔFtail: `go test -failfast -short -v -run Random -randseed=1626793016249041295` discovered problems
-
- 06 Aug, 2021 6 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 05 Aug, 2021 4 commits
-
-
Kirill Smelkov authored
X test that shows problem of SliceByRootRev where untracked blocks are not added uniformly into whole history

    === RUN   TestΔFtailSliceXXX
    2021/08/05 18:07:35 zodb: FIXME: open /tmp/TestΔFtailSliceXXX2265944622/001/1.fs: raw cache is not ready for invalidations -> NoCache forced
    δftail_test.go:689: slice (@at0,@at2]:
        have: [@at1·{0 1}S @at2·{1}S]
        want: [@at1·{0 1}S @at2·{0 1}S]
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 04 Aug, 2021 5 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 03 Aug, 2021 1 commit
-
-
Kirill Smelkov authored
-> Always track all blocks in blkTab.
-
- 30 Jul, 2021 1 commit
-
-
Kirill Smelkov authored
@rporchetto reports build failure on Debian 8 / Linux 3.16:

    [2021-07-30 15:40:35,677] INFO gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/srv/slapgrid/slappart19/srv/runner/shared/python2.7/3b7a074d1ded44046871b13502341482/include/python2.7 -c wcfs/internal/mm.c -o build/temp.linux-x86_64-2.7/wcfs/internal/mm.o
    [2021-07-30 15:40:35,677] INFO wcfs/internal/mm.c: In function 'mlock2':
    [2021-07-30 15:40:35,677] INFO wcfs/internal/mm.c:618:28: error: 'SYS_mlock2' undeclared (first use in this function); did you mean 'SYS_mlock'?
    [2021-07-30 15:40:35,677] INFO      long err = syscall(SYS_mlock2, addr, len, flags);
    [2021-07-30 15:40:35,677] INFO                                 ^~~~~~~~~~
    [2021-07-30 15:40:35,677] INFO                                 SYS_mlock

Fix the build. NOTE mlock2 was added in Linux 4.3.

Similarly, MCL_ONFAULT is not provided on that old glibc 2.19:

    [2021-07-30 15:40:35,677] INFO wcfs/internal/mm.c:986:55: error: 'MCL_ONFAULT' undeclared here (not in a function); did you mean 'MLOCK_ONFAULT'?
    [2021-07-30 15:40:35,677] INFO  __pyx_e_8wendelin_4wcfs_8internal_2mm_MCL_ONFAULT = MCL_ONFAULT,
    [2021-07-30 15:40:35,678] INFO                                                      ^~~~~~~~~~~
    [2021-07-30 15:40:35,678] INFO                                                      MLOCK_ONFAULT

-> Comment out MCL_ONFAULT for now since we do not actually use it anywhere yet.
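For illustration, one common shape of such a fix (hedged: xmlock2 is a hypothetical wrapper name, not the code in mm.c, and only the x86_64 syscall number is shown):

    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* SYS_mlock2 appeared with Linux 4.3 headers; define it ourselves when
     * the headers predate it (325 is the x86_64 syscall number) */
    #if !defined(SYS_mlock2) && defined(__x86_64__)
    # define SYS_mlock2 325
    #endif

    static long xmlock2(const void *addr, size_t len, int flags) {
    #ifdef SYS_mlock2
        /* invoke the raw syscall: old glibc has no mlock2() wrapper */
        return syscall(SYS_mlock2, addr, len, flags);
    #else
        errno = ENOSYS;   /* headers too old on this arch: report unsupported */
        return -1;
    #endif
    }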
-
- 23 Jul, 2021 1 commit
-
-
Kirill Smelkov authored
X ΔFtail: `go test -failfast -short -v -run Random -randseed=1626793016249041295` discovered problems

    === RUN   TestΔFtailRandom
    δftail_test.go:141: # n=10 seed=1626793016249041295
    2021/07/23 12:26:01 zodb: FIXME: open /tmp/TestΔFtailRandom1363232041/001/1.fs: raw cache is not ready for invalidations -> NoCache forced
    δftail_test.go:191: # @at0 (03e19cb6064c58dd)
    δftail_test.go:203: # → @at1 (03e19cb6064ddd99)  t0:a  Da:a,b:b,c:c,d:d,e:e,f:f,g:g,h:h,i:i,j:j ; not-yet-tracked
    δftail_test.go:375: # → @at2 (03e19cb6064fc922)  δT2:i,3:c,5:d,9:c  δD{a b c d e f g h i} ; t0:a,2:i,3:c,5:d,9:c  Da:a2,b:b2,c:c2,d:d2,e:e2,f:f2,g:g2,h:h2,i:i2,j:j  δ{0 2 3 5 9}
    δftail_test.go:472: δf:
        have: &{03e19cb6064fc922 false {2 3 5 9} true}
        want: &{03e19cb6064fc922 false {0 2 3 5 9} true}
    δftail_test.go:499: .trackSetZBlk:
        ~have: map[c:{3 9} d:{5} i:{2}]
         want: map[a:{0} c:{3 9} d:{5} i:{2}]
    ...
-
- 20 Jul, 2021 2 commits
-
-
Kirill Smelkov authored
- Reimplement ΔFtail queries via gluing ΔBtail and ΔZtail data on the fly.
  This helps to avoid implementing complex rebuild logic in ΔFtail. The only
  place that needs to have that complexity is now ΔBtail, and there it
  already works in draft form.
- Add ΔFtail tests.
- Add notion of epochs to ΔFtail. Epochs correspond to ZBigFile object
  changes (creation and deletion). Unfortunately, handling ZBigFile object
  changes turned out to be necessary to keep wcfs tests in a passing state.
- Move common testing infrastructure - that is used by both ΔBtail and
  ΔFtail - to the xbtreetest package.
- Add tests for ΔBtail.SliceByRootRev aliasing.
- Lazy rebuild is now on.
- ΔBtail.GetAt reworked.

...

* t2: (112 commits)
  X wcfs: v↑ NEO/go  (checkpoint)
  .
  .
  .
  .
  .
  .
  .
  .
  .
  .
  X ΔFtail: Rebuild vδE after first track
  .
  .
  .
  .
  .
  .
  .
  .
  ...
-
Kirill Smelkov authored
To pick up neo@bc3b5ec3
-