  1. 27 Nov, 2018 3 commits
  2. 26 Nov, 2018 1 commit
  3. 23 Nov, 2018 1 commit
  4. 22 Nov, 2018 3 commits
  5. 20 Nov, 2018 1 commit
  6. 30 Oct, 2018 1 commit
    • Merge branch 'master' into t · e1f05973
      Kirill Smelkov authored
      * master:
        lib.xnumpy.structured: New utility to create structured view of an array
        bigarray: Factor-out our custom numpy.lib.stride_tricks.as_strided-alike into lib/xnumpy.py
  7. 29 Oct, 2018 9 commits
    • lib.xnumpy.structured: New utility to create structured view of an array · 32ca80e2
      Kirill Smelkov authored
      Structured creates a view of the array, interpreting its minor axis as fully covered by a dtype.
      
      It is similar to arr.view(dtype) + a corresponding reshape, but does not
      have the limitations of ndarray.view(). For example:
      
        In [1]: a = np.arange(3*3, dtype=np.int32).reshape((3,3))
      
        In [2]: a
        Out[2]:
        array([[0, 1, 2],
               [3, 4, 5],
               [6, 7, 8]], dtype=int32)
      
        In [3]: b = a[:2,:2]
      
        In [4]: b
        Out[4]:
        array([[0, 1],
               [3, 4]], dtype=int32)
      
        In [5]: dtxy = np.dtype([('x', np.int32), ('y', np.int32)])
      
        In [6]: dtxy
        Out[6]: dtype([('x', '<i4'), ('y', '<i4')])
      
        In [7]: b.view(dtxy)
        ---------------------------------------------------------------------------
        ValueError                                Traceback (most recent call last)
        <ipython-input-66-af98529aa150> in <module>()
        ----> 1 b.view(dtxy)
      
        ValueError: To change to a dtype of a different size, the array must be C-contiguous
      
        In [8]: structured(b, dtxy)
        Out[8]: array([(0, 1), (3, 4)], dtype=[('x', '<i4'), ('y', '<i4')])
      
      Structured always creates a view and never copies data.
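
      As an illustration of the idea (not the actual lib.xnumpy code, and it
      assumes the minor axis of the input is contiguous), such a copy-free
      reinterpretation can be sketched on top of numpy's __array_interface__
      protocol:

        import numpy as np

        class _ArrayProxy(object):
            # helper exposing a raw __array_interface__; `base` keeps the
            # original array alive while the view exists
            def __init__(self, interface, base):
                self.__array_interface__ = interface
                self.base = base

        def structured_sketch(arr, dtype):
            # reinterpret the (contiguous) minor axis of arr as one item of
            # dtype, producing a view - no data is copied
            dtype = np.dtype(dtype)
            if arr.strides[-1] != arr.dtype.itemsize:
                raise ValueError("minor axis is not contiguous")
            if arr.shape[-1] * arr.dtype.itemsize != dtype.itemsize:
                raise ValueError("dtype does not exactly cover the minor axis")

            iface = dict(arr.__array_interface__)
            iface['shape']   = arr.shape[:-1]      # drop the minor axis ...
            iface['strides'] = arr.strides[:-1]    # ... keep remaining strides
            iface['typestr'] = dtype.str           # reinterpret items as dtype
            iface['descr']   = dtype.descr
            return np.asarray(_ArrayProxy(iface, arr))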
      
      Here is the original context where separately playing with .shape and .dtype
      was not enough, since it was creating an array copy and OOM'ing the machine:
      
      klaus/wendelin@cbe4938b
    • bigarray: Factor-out our custom numpy.lib.stride_tricks.as_strided-alike into lib/xnumpy.py · 6a5dfefa
      Kirill Smelkov authored
      We are going to use this code in another place, so move it out to a
      common place first, as a preparatory step.
      
      On a related note: since ArrayRef is generic and quite independent from
      BigArray (it not only supports BigArray, but equally supports other kinds
      of arrays - e.g. plain ndarrays), the proper place for it might also be
      lib/xnumpy.py. We might get to this topic a bit later.
    • . · 33ff0c80
      Kirill Smelkov authored
      33ff0c80
    • Kirill Smelkov's avatar
      . · b4a3a0bd
      Kirill Smelkov authored
      b4a3a0bd
    • Kirill Smelkov's avatar
      . · c4b66f7d
      Kirill Smelkov authored
      c4b66f7d
    • Kirill Smelkov's avatar
      . · fd647f2b
      Kirill Smelkov authored
      fd647f2b
    • Kirill Smelkov's avatar
      . · 2ad300e8
      Kirill Smelkov authored
      2ad300e8
    • Kirill Smelkov's avatar
      Merge remote-tracking branch 'nxd/master' into t · 009b9fb9
      Kirill Smelkov authored
      * nxd/master:
        bigarray: RAMArray
        bigarray/tests: Factor out a way to specify on which BigFile/BigFileH an array is tested into fixture parameter
  8. 26 Oct, 2018 1 commit
    • X xnumpy.restructure · 2569b175
      Kirill Smelkov authored
      Currently fails with:
      
      /home/kirr/src/wendelin/wendelin.core/lib/xnumpy.py in restructure(arr, dtype)
           82     print 'stridev:', stridev
           83     #return np.ndarray.__new__(type(arr), shape, dtype, buffer(arr), 0, stridev)
      ---> 84     return np.ndarray(shape, dtype, buffer(arr), 0, stridev)
      
      TypeError: expected a single-segment buffer object
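
      The underlying limitation can be reproduced standalone roughly as follows
      (illustrative sketch only; the exact exception class and wording vary
      between Python and numpy versions):

        import numpy as np

        a = np.arange(3*3, dtype=np.int32).reshape((3, 3))
        b = a[:2, :2]                             # non-contiguous slice
        dtxy = np.dtype([('x', np.int32), ('y', np.int32)])

        try:
            # np.ndarray wants its buffer argument to be one contiguous memory
            # segment, which a sliced view is not - hence the failure above.
            np.ndarray((2,), dtxy, b, 0, (b.strides[0],))
        except Exception as exc:                  # exact exception class varies
            print('rejected: %r' % exc)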
  9. 21 Oct, 2018 2 commits
  10. 19 Oct, 2018 5 commits
  11. 18 Oct, 2018 2 commits
    • X invalidation design roughly settled · 9b4a42a3
      Kirill Smelkov authored
    • X test that ZBlk objects can be actually removed from ZODB Connection cache... · 69c94fbc
      Kirill Smelkov authored
      X test that ZBlk objects can be actually removed from ZODB Connection cache and cause invalidation to be missed
      
      ____________________________________ test_bigfile_filezodb_vs_cache_invalidation ____________________________________
      
          def test_bigfile_filezodb_vs_cache_invalidation():
              root = dbopen()
              conn = root._p_jar
              db   = conn.db()
              conn.close()
              del root, conn
      
              tm1 = TransactionManager()
              tm2 = TransactionManager()
      
              conn1 = db.open(transaction_manager=tm1)
              root1 = conn1.root()
      
              # setup zfile with fileh view to it
              root1['zfile3'] = f1 = ZBigFile(blksize)
              tm1.commit()
      
              fh1 = f1.fileh_open()
              tm1.commit()
      
              # set zfile initial data
              vma1 = fh1.mmap(0, 1)
              Blk(vma1, 0)[0] = 1
              tm1.commit()
      
              # read zfile and setup fileh for it in conn2
              conn2 = db.open(transaction_manager=tm2)
              root2 = conn2.root()
      
              f2 = root2['zfile3']
              fh2 = f2.fileh_open()
              vma2 = fh2.mmap(0, 1)
      
              assert Blk(vma2, 0)[0] == 1 # read data in conn2 + make sure read correctly
      
              # now zfile content is both in ZODB.Connection cache and in _ZBigFileH
              # cache for each conn1 and conn2. Modify data in conn1 and make sure it
              # fully propagate to conn2.
      
              Blk(vma1, 0)[0] = 2
              tm1.commit()
      
              # still should be read as old value in conn2
              assert Blk(vma2, 0)[0] == 1
              # and even after virtmem pages reclaim
              # ( verifies that _p_invalidate() in ZBlk.loadblkdata() does not lead to
              #   reloading data as updated )
              ram_reclaim_all()
              assert Blk(vma2, 0)[0] == 1
      
              # FIXME: this simulates ZODB Connection cache pressure and currently
              # removes ZBlk corresponding to blk #0 from conn2 cache.
              # In turn this leads to conn2 missing that block invalidation on follow-up
              # transaction boundary.
              #
              # See FIXME notes on ZBlkBase._p_invalidate() for detailed description.
              conn2._cache.minimize()
      
              tm2.commit()                # transaction boundary for t2
      
              # data from tm1 should propagate -> ZODB -> ram pages for _ZBigFileH in conn2
      >       assert Blk(vma2, 0)[0] == 2
      E       assert 1 == 2
      
      tests/test_filezodb.py:615: AssertionError
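
      For contrast, here is a minimal sketch (assuming stock ZODB with
      MappingStorage; not part of this patch) of the same two-connection
      scenario with a plain persistent object: there the data is reloaded
      correctly after cache minimization, which suggests the problem is
      specific to the ZBlkBase._p_invalidate() machinery the FIXME refers to.

        from ZODB import DB
        from ZODB.MappingStorage import MappingStorage
        from persistent.mapping import PersistentMapping
        import transaction

        db  = DB(MappingStorage())
        tm1 = transaction.TransactionManager()
        tm2 = transaction.TransactionManager()

        # conn1: create and commit initial data
        conn1 = db.open(transaction_manager=tm1)
        conn1.root()['obj'] = obj1 = PersistentMapping({'blk0': 1})
        tm1.commit()

        # conn2: read the same object through a second connection
        conn2 = db.open(transaction_manager=tm2)
        obj2 = conn2.root()['obj']
        assert obj2['blk0'] == 1

        # modify in conn1 and commit
        obj1['blk0'] = 2
        tm1.commit()

        # simulate cache pressure in conn2, then cross a transaction boundary
        conn2._cache.minimize()
        tm2.commit()

        # a plain persistent object is simply reloaded with fresh data
        assert obj2['blk0'] == 2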
  12. 17 Oct, 2018 1 commit
  13. 16 Oct, 2018 3 commits
  14. 15 Oct, 2018 5 commits
  15. 12 Oct, 2018 2 commits
    • . · 15123fbf
      Kirill Smelkov authored
    • RAMArray · 99b91c84
      Kirill Smelkov authored
      RAMArray is compatible with ZBigArray in API and semantics, but stores its
      data in RAM only. It is useful in situations where a ZBigArray-compatible
      data type is needed, but the amount of data is small and the data itself
      is needed only temporarily - e.g. in a simulation.
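
      A hedged usage sketch (the import location and constructor signature
      below are assumptions based on this description, not verified against
      the actual code):

        import numpy as np
        # assumed import path - adjust to where RAMArray actually lives
        from wendelin.bigarray.array_ram import RAMArray

        # ZBigArray-like array kept entirely in RAM; no ZODB, no persistence
        A = RAMArray((1024,), np.float64)       # assumed (shape, dtype) signature
        A[0:4] = [1, 2, 3, 4]                   # BigArray-style slice assignment
        assert list(A[0:4]) == [1, 2, 3, 4]     # and slice reads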
      
      Please see details in individual patches.
      
      Original merge request by @klaus (nexedi/wendelin.core!8).
      
      /cc @Tyagov
      /reviewed-on nexedi/wendelin.core!9