- 29 Oct, 2018 5 commits
Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored
* nxd/master:
  bigarray: RAMArray
  bigarray/tests: Factor out a way to specify on which BigFile/BigFileH an array is tested into fixture parameter
Kirill Smelkov authored
- 26 Oct, 2018 1 commit
Kirill Smelkov authored
Currently fails with:

    /home/kirr/src/wendelin/wendelin.core/lib/xnumpy.py in restructure(arr, dtype)
         82     print 'stridev:', stridev
         83     #return np.ndarray.__new__(type(arr), shape, dtype, buffer(arr), 0, stridev)
    ---> 84     return np.ndarray(shape, dtype, buffer(arr), 0, stridev)

    TypeError: expected a single-segment buffer object
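For context, the failing line builds an ndarray view over an existing buffer with explicit strides. A minimal sketch of that constructor usage (with hypothetical example data whose base buffer is contiguous, i.e. single-segment, so the call succeeds):

```python
import numpy as np

# Sketch of the np.ndarray(shape, dtype, buffer, offset, strides) call
# from the traceback above.  The base array here is illustrative; it is
# contiguous, i.e. a single-segment buffer, which that call requires.
a = np.arange(6, dtype=np.int64)

# View every other element by doubling the stride (8-byte items -> 16).
v = np.ndarray((3,), np.int64, a.data, 0, (16,))
print(v.tolist())  # -> [0, 2, 4]
```

The error in the traceback arises when the buffer passed in is not a single contiguous segment.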
- 21 Oct, 2018 2 commits

Kirill Smelkov authored

Kirill Smelkov authored
The kernel sends SIGSTOP to interrupt the tracee, but the signal will be processed only when the process returns from kernel space, e.g. here:

    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/entry/common.c?id=v4.19-rc8-151-g23469de647c4#n160

This way the tracer won't receive the obligatory notification that the tracee stopped (via wait...), and even though ptrace(ATTACH) succeeds, all other ptrace commands will fail:

    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/ptrace.c?id=v4.19-rc8-151-g23469de647c4#n1140
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/ptrace.c?id=v4.19-rc8-151-g23469de647c4#n207

My original idea was to use ptrace to run code in the process to change its memory mappings while the triggering process is under pagefault/read to wcfs, and the above shows it won't work: trying to ptrace the client from under wcfs will just block forever (the kernel will be waiting for the read operation to finish for ptrace, and the read will first be waiting on the ptrace stop to complete = deadlock).
- 19 Oct, 2018 5 commits

Kirill Smelkov authored
go-fuse@f822c9db
Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

- 18 Oct, 2018 2 commits

Kirill Smelkov authored

Kirill Smelkov authored
X test that ZBlk objects can be actually removed from ZODB Connection cache and cause invalidation to be missed

    ____________________________________ test_bigfile_filezodb_vs_cache_invalidation ____________________________________

    def test_bigfile_filezodb_vs_cache_invalidation():
        root = dbopen()
        conn = root._p_jar
        db   = conn.db()
        conn.close()
        del root, conn

        tm1 = TransactionManager()
        tm2 = TransactionManager()

        conn1 = db.open(transaction_manager=tm1)
        root1 = conn1.root()

        # setup zfile with fileh view to it
        root1['zfile3'] = f1 = ZBigFile(blksize)
        tm1.commit()

        fh1 = f1.fileh_open()
        tm1.commit()

        # set zfile initial data
        vma1 = fh1.mmap(0, 1)
        Blk(vma1, 0)[0] = 1
        tm1.commit()

        # read zfile and setup fileh for it in conn2
        conn2 = db.open(transaction_manager=tm2)
        root2 = conn2.root()

        f2   = root2['zfile3']
        fh2  = f2.fileh_open()
        vma2 = fh2.mmap(0, 1)

        assert Blk(vma2, 0)[0] == 1     # read data in conn2 + make sure read correctly

        # now zfile content is both in ZODB.Connection cache and in _ZBigFileH
        # cache for each conn1 and conn2. Modify data in conn1 and make sure it
        # fully propagate to conn2.
        Blk(vma1, 0)[0] = 2
        tm1.commit()

        # still should be read as old value in conn2
        assert Blk(vma2, 0)[0] == 1
        # and even after virtmem pages reclaim
        # ( verifies that _p_invalidate() in ZBlk.loadblkdata() does not lead to
        #   reloading data as updated )
        ram_reclaim_all()
        assert Blk(vma2, 0)[0] == 1

        # FIXME: this simulates ZODB Connection cache pressure and currently
        # removes ZBlk corresponding to blk #0 from conn2 cache.
        # In turn this leads to conn2 missing that block invalidation on follow-up
        # transaction boundary.
        #
        # See FIXME notes on ZBlkBase._p_invalidate() for detailed description.
        conn2._cache.minimize()

        tm2.commit()    # transaction boundary for t2

        # data from tm1 should propagate -> ZODB -> ram pages for _ZBigFileH in conn2
    >   assert Blk(vma2, 0)[0] == 2
    E   assert 1 == 2

    tests/test_filezodb.py:615: AssertionError
- 17 Oct, 2018 1 commit

Kirill Smelkov authored
This one is less exotic compared to the format-changes rewrite.
- 16 Oct, 2018 3 commits

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored
- 15 Oct, 2018 5 commits

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored
- 12 Oct, 2018 3 commits
Kirill Smelkov authored

Kirill Smelkov authored
RAMArray is compatible with ZBigArray in API and semantics, but stores its data in RAM only. It is useful in situations where a ZBigArray-compatible data type is needed, but the amount of data is small and the data itself is needed only temporarily, e.g. in a simulation.

Please see details in individual patches.

Original merge request by @klaus (nexedi/wendelin.core!8).

/cc @Tyagov
/reviewed-on nexedi/wendelin.core!9
Kirill Smelkov authored
RAMArray is compatible with ZBigArray in API and semantics, but stores its data in RAM only. It is useful in situations where a ZBigArray-compatible data type is needed, but the amount of data is small and the data itself is needed only temporarily, e.g. in a simulation.

The implementation is based on mmapping temporary files from /dev/shm/... and passing them as file handles, similarly to how ZBigArray works, to BigArray. We don't use just numpy.ndarray because of append: for ZBigArray append works in O(1), but more importantly it does not copy data. This way mmappings previously created for ZBigArray views continue to correctly alias array data. If we were using ndarray directly, since ndarray.resize copies data, that property would not be preserved.

Original patch by Klaus Wölfel <klaus@nexedi.com> (nexedi/wendelin.core!8)
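The no-copy-on-append property described above can be sketched with plain mmap over a RAM-backed temporary file. This is only an illustration of the mechanism, not the RAMArray code itself; sizes are arbitrary and /dev/shm is used only when it exists:

```python
import mmap
import os
import tempfile

# A RAM-backed temp file when /dev/shm exists, a regular temp file otherwise.
shmdir = '/dev/shm' if os.path.isdir('/dev/shm') else None
f = tempfile.TemporaryFile(dir=shmdir)

f.truncate(4096)                        # initial storage for the "array"
view1 = mmap.mmap(f.fileno(), 4096)     # a view created before "append"
view1[0:5] = b'hello'

f.truncate(8192)                        # "append": grow the file in place
view2 = mmap.mmap(f.fileno(), 8192)     # a view over the grown file

# No data was copied by the grow: both views alias the same pages,
# so a write through one view is visible through the other.
view2[5:7] = b' w'
print(bytes(view1[0:7]))  # -> b'hello w'
```

Growing the file with ftruncate (rather than copying into a bigger buffer, as ndarray.resize does) is what lets previously created mappings keep aliasing the data.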
- 11 Oct, 2018 6 commits

Kirill Smelkov authored
Kirill Smelkov authored

Kirill Smelkov authored
We were checking for `loading.err != nil` as the indication for success, and it should have been `err == nil`. The symptoms of the bug were that \0 instead of data was sometimes read:

    wcfs: 2018/10/11 19:18:12 < 22: i7.READ {Fh 0 [2097152 +131072) L 0 RDONLY,0x8000}      <-- NOTE
    I1011 19:18:12.556125    6330 wcfs.go:538] readBlk #1 dest[0:+2097152]
    I1011 19:18:12.556361    6330 wcfs.go:538] readBlk #1 dest[0:+2097152]
    wcfs: 2018/10/11 19:18:12 ZBlk0.PySetState #11
    wcfs: 2018/10/11 19:18:12 ZBigFile.loadblk(1) -> 2097152B
    wcfs: 2018/10/11 19:18:12 > 22: OK, 131072B data "\x00\x00\x00\x00\x00\x00\x00\x00"...  <-- XXX not "hello world"
    wcfs: 2018/10/11 19:18:12 < 24: i7.READ {Fh 0 [2359296 +131072) L 0 RDONLY,0x8000}
    wcfs: 2018/10/11 19:18:12 > 23: OK, 131072B data "\x00\x00\x00\x00\x00\x00\x00\x00"...
    wcfs: 2018/10/11 19:18:12 > 0: NOTIFY_STORE_CACHE, {i7 [2097152 +2097152)} 2097152B data "hello wo"...  <-- NOTE
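The shape of the bug can be illustrated with a Python stand-in (the actual fix is in the Go code of wcfs; all names here are hypothetical):

```python
# Hypothetical stand-in for a per-block loading entry: a load is
# successful only when err is None.  The bug amounted to using
# "err is not None" as the success test, so successfully loaded
# blocks were served as zeroes.
class Loading:
    def __init__(self, blkdata=None, err=None):
        self.blkdata = blkdata  # loaded block data
        self.err = err          # load error, if any

def blk_ok(loading):
    # correct success check: no error means the data is valid
    return loading.err is None

print(blk_ok(Loading(blkdata=b'hello world')))  # -> True
print(blk_ok(Loading(err='load failed')))       # -> False
```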
Kirill Smelkov authored
Else:

    wcfs: 2018/10/10 17:52:15 < 40: i7.READ {Fh 0 [4063232 +131072) L 0 RDONLY,0x8000}
    wcfs: 2018/10/10 17:52:15 > 39: OK, 131072B data
    wcfs: 2018/10/10 17:52:15 > 40: OK, 131072B data
    wcfs: 2018/10/10 17:52:15 < 41: i7.GETATTR {Fh 0}
    wcfs: 2018/10/10 17:52:15 Response: INODE_NOTIFY_STORE_CACHE: OK
    wcfs: 2018/10/10 17:52:15 > 41: OK, {tA=1s {M0100444 SZ=4194304 L=1 1000:1000 B0*0 i0:7 A 0.000000 M 1539183135.261177 C 1539183135.261177}}

    # XXX vvv why we store 2M after read @4M even though read gives len=0 ?
    wcfs: 2018/10/10 17:52:15 > 0: NOTIFY_STORE_CACHE, {i7 [4194304 +2097152)} 2097152B data
    wcfs: 2018/10/10 17:52:15 < 42: i7.READ {Fh 0 [4194304 +4096) L 0 RDONLY,0x8000}
    wcfs: 2018/10/10 17:52:15 > 42: OK,
    wcfs: 2018/10/10 17:52:15 < 43: i7.GETATTR {Fh 0}
    wcfs: 2018/10/10 17:52:15 > 43: OK, {tA=1s {M0100444 SZ=4194304 L=1 1000:1000 B0*0 i0:7 A 0.000000 M 1539183135.261177 C 1539183135.261177}}
    wcfs: 2018/10/10 17:52:15 Response: INODE_NOTIFY_STORE_CACHE: OK
    wcfs: 2018/10/10 17:52:15 < 44: i7.READ {Fh 0 [4198400 +4096) L 0 RDONLY,0x8000}
    wcfs: 2018/10/10 17:52:15 > 44: OK,

        data = readfile(fpath + "/head/data")
    >   assert len(data) == fsize
    E   AssertionError: assert 4198400 == 4194304
Kirill Smelkov authored
bigarray/tests: Factor out a way to specify on which BigFile/BigFileH an array is tested into fixture parameter

Currently we have only one BigFile and its BigFileH handle. However in the next patch, for RAMArray, we'll be adding handles for opened RAM files, and it would be good to test the whole BigArray functionality on data served by those handles too.

Prepare for this and first factor out into the testbig fixture the way to open such handles.
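The fixture-parameter approach can be sketched with pytest. This is only the general pattern, not the actual testbig implementation; backend names are illustrative:

```python
import pytest

# Parametrized fixture: every test taking `testbig` runs once per param.
# In the real suite each param would open a BigFile/BigFileH (or RAM-file)
# pair; here a plain string stands in for the opened handle.
@pytest.fixture(params=['bigfile', 'ramfile'])
def testbig(request):
    return request.param

def test_bigarray_basic(testbig):
    # the same BigArray checks would run against each backend
    assert testbig in ('bigfile', 'ramfile')
```

With this shape, adding a new backend later means adding one entry to `params` rather than duplicating tests.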
Kirill Smelkov authored

- 10 Oct, 2018 2 commits

Kirill Smelkov authored

Kirill Smelkov authored

- 09 Oct, 2018 5 commits

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored

Kirill Smelkov authored