1. 29 Oct, 2018 6 commits
  2. 26 Oct, 2018 1 commit
    • X xnumpy.restructure · 2569b175
      Kirill Smelkov authored
      Currently fails with:
      
      /home/kirr/src/wendelin/wendelin.core/lib/xnumpy.py in restructure(arr, dtype)
           82     print 'stridev:', stridev
           83     #return np.ndarray.__new__(type(arr), shape, dtype, buffer(arr), 0, stridev)
      ---> 84     return np.ndarray(shape, dtype, buffer(arr), 0, stridev)
      
      TypeError: expected a single-segment buffer object
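
      The np.ndarray constructor requires the object passed as `buffer` to expose
      its memory as one contiguous segment over which shape/strides are laid, which
      is what the TypeError above complains about. For illustration only, a rough
      sketch of the no-copy re-view idea behind restructure(), as visible from the
      traceback (Python 3 syntax; the example array, dtype and strides are made up):

          import numpy as np

          arr = np.zeros(4, dtype=[('x', np.int32), ('y', np.int32)])

          # view only the 'x' field: same buffer, int32 dtype, stride = itemsize
          # of the original records (8 bytes), so no data is copied
          x = np.ndarray(shape=(4,), dtype=np.int32, buffer=arr, offset=0, strides=(8,))
          x[0] = 1
          assert arr['x'][0] == 1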
  3. 21 Oct, 2018 2 commits
  4. 19 Oct, 2018 5 commits
  5. 18 Oct, 2018 2 commits
    • X invalidation design settled in draft · 9b4a42a3
      Kirill Smelkov authored
    • X test that ZBlk objects can actually be removed from ZODB Connection cache and cause invalidation to be missed · 69c94fbc
      Kirill Smelkov authored
      
      ____________________________________ test_bigfile_filezodb_vs_cache_invalidation ____________________________________
      
          def test_bigfile_filezodb_vs_cache_invalidation():
              root = dbopen()
              conn = root._p_jar
              db   = conn.db()
              conn.close()
              del root, conn
      
              tm1 = TransactionManager()
              tm2 = TransactionManager()
      
              conn1 = db.open(transaction_manager=tm1)
              root1 = conn1.root()
      
              # setup zfile with fileh view to it
              root1['zfile3'] = f1 = ZBigFile(blksize)
              tm1.commit()
      
              fh1 = f1.fileh_open()
              tm1.commit()
      
              # set zfile initial data
              vma1 = fh1.mmap(0, 1)
              Blk(vma1, 0)[0] = 1
              tm1.commit()
      
              # read zfile and setup fileh for it in conn2
              conn2 = db.open(transaction_manager=tm2)
              root2 = conn2.root()
      
              f2 = root2['zfile3']
              fh2 = f2.fileh_open()
              vma2 = fh2.mmap(0, 1)
      
              assert Blk(vma2, 0)[0] == 1 # read data in conn2 + make sure read correctly
      
              # now zfile content is both in ZODB.Connection cache and in _ZBigFileH
              # cache for each conn1 and conn2. Modify data in conn1 and make sure it
              # fully propagate to conn2.
      
              Blk(vma1, 0)[0] = 2
              tm1.commit()
      
              # still should be read as old value in conn2
              assert Blk(vma2, 0)[0] == 1
              # and even after virtmem pages reclaim
              # ( verifies that _p_invalidate() in ZBlk.loadblkdata() does not lead to
              #   reloading data as updated )
              ram_reclaim_all()
              assert Blk(vma2, 0)[0] == 1
      
              # FIXME: this simulates ZODB Connection cache pressure and currently
              # removes ZBlk corresponding to blk #0 from conn2 cache.
              # In turn this leads to conn2 missing that block invalidation on follow-up
              # transaction boundary.
              #
              # See FIXME notes on ZBlkBase._p_invalidate() for detailed description.
              conn2._cache.minimize()
      
              tm2.commit()                # transaction boundary for t2
      
              # data from tm1 should propagate -> ZODB -> ram pages for _ZBigFileH in conn2
      >       assert Blk(vma2, 0)[0] == 2
      E       assert 1 == 2
      
      tests/test_filezodb.py:615: AssertionError
  6. 17 Oct, 2018 1 commit
  7. 16 Oct, 2018 3 commits
  8. 15 Oct, 2018 5 commits
  9. 12 Oct, 2018 3 commits
    • . · 15123fbf
      Kirill Smelkov authored
    • RAMArray · 99b91c84
      Kirill Smelkov authored
      RAMArray is compatible with ZBigArray in API and semantics, but stores its
      data in RAM only. It is useful in situations where a ZBigArray-compatible
      data type is needed, but the amount of data is small and the data itself
      is needed only temporarily - e.g. in a simulation.
      
      Please see details in individual patches.
      
      Original merge request by @klaus (!8).
      
      /cc @Tyagov
      /reviewed-on !9
    • bigarray: RAMArray · fc9b69d8
      Kirill Smelkov authored
      RAMArray is compatible with ZBigArray in API and semantics, but stores its
      data in RAM only. It is useful in situations where a ZBigArray-compatible
      data type is needed, but the amount of data is small and the data itself
      is needed only temporarily - e.g. in a simulation.
      
      The implementation is based on mmapping temporary files from /dev/shm/... and
      passing them as file handles to BigArray, similarly to how ZBigArray works.
      We don't use a plain numpy.ndarray because of append: for ZBigArray, append
      works in O(1) and, more importantly, does not copy data. This way mmappings
      previously created for ZBigArray views continue to correctly alias the array
      data. If we used ndarray directly, that property would not be preserved,
      since ndarray.resize copies data.
      
      Original patch by Klaus Wölfel <klaus@nexedi.com>
      (!8)
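
      For illustration, a minimal usage sketch (hedged: the import path and the
      constructor are assumed to mirror the ZBigArray conventions referred to
      above, and are not verified here):

          from wendelin.bigarray.array_ram import RAMArray   # assumed module path
          import numpy as np

          a = RAMArray((4,), np.int32)            # small, temporary, RAM-only array
          A = a[:]                                # ndarray view onto the array data
          A[0] = 1

          a.append(np.arange(3, dtype=np.int32))  # O(1) append; existing data is not copied

          # because append does not copy or move the data, the earlier view still
          # correctly aliases the array's first elements
          assert A[0] == 1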
  10. 11 Oct, 2018 6 commits
    • . · 100995d6
      Kirill Smelkov authored
    • . · 899b6102
      Kirill Smelkov authored
    • X readBlk: Fix thinko in already case · 29c9f13d
      Kirill Smelkov authored
      We were checking for `loading.err != nil` as the indication of success,
      while it should have been `err == nil`. The symptom of the bug was that
      \0 bytes were sometimes read instead of data:
      
      	wcfs: 2018/10/11 19:18:12 < 22: i7.READ {Fh 0 [2097152 +131072)  L 0 RDONLY,0x8000}                             <-- NOTE
      
      	I1011 19:18:12.556125    6330 wcfs.go:538] readBlk #1 dest[0:+2097152]
      	I1011 19:18:12.556361    6330 wcfs.go:538] readBlk #1 dest[0:+2097152]
      	wcfs: 2018/10/11 19:18:12 ZBlk0.PySetState #11
      	wcfs: 2018/10/11 19:18:12 ZBigFile.loadblk(1) -> 2097152B
      
      	wcfs: 2018/10/11 19:18:12 > 22:     OK,  131072B data "\x00\x00\x00\x00\x00\x00\x00\x00"...                     <-- XXX not "hello world"
      
      	wcfs: 2018/10/11 19:18:12 < 24: i7.READ {Fh 0 [2359296 +131072)  L 0 RDONLY,0x8000}
      	wcfs: 2018/10/11 19:18:12 > 23:     OK,  131072B data "\x00\x00\x00\x00\x00\x00\x00\x00"...
      	wcfs: 2018/10/11 19:18:12 > 0:     NOTIFY_STORE_CACHE, {i7 [2097152 +2097152)} 2097152B data "hello wo"...      <-- NOTE
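
      For illustration, a minimal Go sketch of the inverted check (the struct and
      function are simplified stand-ins, not the actual wcfs code):

      	// simplified stand-in for wcfs's per-block loading state
      	type blkLoadState struct {
      		blkdata []byte
      		err     error
      	}

      	// copyIfLoaded copies loaded block data to dest only when the load
      	// succeeded. The bug was using `loading.err != nil` as the condition,
      	// i.e. treating an error as the success indication, so successful loads
      	// copied nothing and readers saw \0 instead of data.
      	func copyIfLoaded(dest []byte, loading *blkLoadState) {
      		if loading.err == nil {
      			copy(dest, loading.blkdata)
      		}
      	}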
    • X don't overalign end by 1 blksize if end is already aligned · d58c71e8
      Kirill Smelkov authored
      Else:
      
      	wcfs: 2018/10/10 17:52:15 < 40: i7.READ {Fh 0 [4063232 +131072)  L 0 RDONLY,0x8000}
      	wcfs: 2018/10/10 17:52:15 > 39:     OK,  131072B data
      	wcfs: 2018/10/10 17:52:15 > 40:     OK,  131072B data
      	wcfs: 2018/10/10 17:52:15 < 41: i7.GETATTR {Fh 0}
      	wcfs: 2018/10/10 17:52:15 Response: INODE_NOTIFY_STORE_CACHE: OK
      	wcfs: 2018/10/10 17:52:15 > 41:     OK, {tA=1s {M0100444 SZ=4194304 L=1 1000:1000 B0*0 i0:7 A 0.000000 M 1539183135.261177 C 1539183135.261177}}
      
      	# XXX vvv why we store 2M after read @4M even though read gives len=0 ?
      	wcfs: 2018/10/10 17:52:15 > 0:     NOTIFY_STORE_CACHE, {i7 [4194304 +2097152)} 2097152B data
      	wcfs: 2018/10/10 17:52:15 < 42: i7.READ {Fh 0 [4194304 +4096)  L 0 RDONLY,0x8000}
      	wcfs: 2018/10/10 17:52:15 > 42:     OK,
      
      	wcfs: 2018/10/10 17:52:15 < 43: i7.GETATTR {Fh 0}
      	wcfs: 2018/10/10 17:52:15 > 43:     OK, {tA=1s {M0100444 SZ=4194304 L=1 1000:1000 B0*0 i0:7 A 0.000000 M 1539183135.261177 C 1539183135.261177}}
      	wcfs: 2018/10/10 17:52:15 Response: INODE_NOTIFY_STORE_CACHE: OK
      	wcfs: 2018/10/10 17:52:15 < 44: i7.READ {Fh 0 [4198400 +4096)  L 0 RDONLY,0x8000}
      	wcfs: 2018/10/10 17:52:15 > 44:     OK,
      
              data = readfile(fpath + "/head/data")
      >       assert len(data) == fsize
      E       AssertionError: assert 4198400 == 4194304
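
      For illustration, a hedged Go sketch of the alignment arithmetic (not the
      actual wcfs code):

      	// ceilBlk rounds end up to a blksize boundary without overshooting
      	// when end is already aligned.
      	func ceilBlk(end, blksize int64) int64 {
      		// buggy variant: end/blksize*blksize + blksize
      		// adds one whole extra block when end is already a multiple of blksize.
      		return (end + blksize - 1) / blksize * blksize
      	}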
    • bigarray/tests: Factor out a way to specify on which BigFile/BigFileH an array is tested into a fixture parameter · 7365979b
      Kirill Smelkov authored
      
      Currently we have only one BigFile and its BigFileH handle. However, in
      the next patch, for RAMArray, we'll be adding handles for opened RAM
      files, and it would be good to test the whole BigArray functionality on
      data served by those handles too.

      Prepare for this by first factoring the way to open such handles out
      into the testbig fixture.
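
      For illustration, a pytest sketch of the approach with stand-in objects
      (not the real wendelin.core test helpers): a single parametrized fixture
      decides which kind of BigFile/BigFileH handle every BigArray test runs
      against.

          import pytest

          class FakeBigFileH(object):
              """Stand-in for an opened BigFileH; the real fixture opens actual handles."""
              def __init__(self, kind):
                  self.kind = kind

          @pytest.fixture(params=['bigfile', 'ramfile'])
          def testbig(request):
              # tests only receive the opened handle; how it is opened stays in the fixture
              return FakeBigFileH(request.param)

          def test_bigarray_works_on_any_handle(testbig):
              assert testbig.kind in ('bigfile', 'ramfile')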
    • . · 5a793aa3
      Kirill Smelkov authored
  11. 10 Oct, 2018 2 commits
  12. 09 Oct, 2018 4 commits