- 12 Aug, 2015 1 commit
-
-
Kirill Smelkov authored
( without dbclose, the next test will not be able to open the database - it will time out on open, waiting for the FileStorage lock )
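A hedged sketch of the pattern this implies for the tests, using plain ZODB APIs (the project's own dbopen/dbclose helpers appear in a later entry; this stand-alone version is only an illustration):

    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    def test_something(tmpdir):
        db = DB(FileStorage('%s/1.fs' % tmpdir))
        try:
            pass            # ... exercise the database ...
        finally:
            db.close()      # releases the FileStorage lock, so the next test can open the db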
-
- 27 Jul, 2015 1 commit
-
-
Kirill Smelkov authored
ca064f75 (bigarray: Support resizing in-place) added O(1) in-place BigArray.resize(), which makes it possible for users to append data to a BigArray in O(δ) time. But it is easy for people to make off-by-one mistakes when calculating indices for the append.

So provide a convenient BigArray.append(), which simplifies the following

    A                               # ZBigArray e.g. of shape   (N, 3)
    values                          # ndarray to append of shape (δ, 3)
    n, δ = len(A), len(values)      # length of A's major index  =N
    A.resize((n+δ,) + A.shape[1:])  # add δ new entries ; now len(A) =N+δ
    A[-δ:] = values                 # set data for last new δ entries

into

    A.append(values)

/cc @klaus
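A minimal sketch of how append() can be built on top of the O(1) resize(), following the expansion shown above (the internals of the real BigArray.append() are assumed, not copied):

    import numpy as np

    def append(self, values):
        values = np.asarray(values)
        n, delta = len(self), len(values)           # current length and number of new entries
        self.resize((n + delta,) + self.shape[1:])  # O(1) in-place grow
        self[-delta:] = values                      # write the appended data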
-
- 24 Jul, 2015 1 commit
-
-
Kirill Smelkov authored
We stopped using numpy.multiply in 73926487 (*: It is not safe to use multiply.reduce() - it overflows).
-
- 26 Jun, 2015 2 commits
-
-
Kirill Smelkov authored
We compare A_[10*PS-1] (which is A_[-1]) to 0, but A_ = ndarray((10*PS,), uint8), which means the array memory is not initialized. So the comparison sometimes works and sometimes does not. Initialize the compared element explicitly.

NOTE: the corresponding A (without _) element does not need to be initialized, because not-yet-initialized BigArray parts read as zeros.
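A minimal sketch of the fix described above; PS here is just a stand-in page size, the real test's constants and arrays are assumed:

    import numpy as np

    PS = 4096                                    # stand-in for the test's page size
    A_ = np.ndarray((10 * PS,), dtype=np.uint8)  # memory is NOT zero-initialized
    A_[10 * PS - 1] = 0                          # initialize the compared element explicitly
    assert A_[10 * PS - 1] == 0                  # now the comparison is deterministic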
-
Kirill Smelkov authored
Previously we were always testing with DBs backed by FileStorage. Now we provide a way to run the testsuite with a user-selected storage backend:

    $ WENDELIN_CORE_TEST_DB="<fs>"  make test.py   # test with temporary db with FileStorage
    $ WENDELIN_CORE_TEST_DB="<zeo>" make test.py   # ----------//---------- with ZEO
    $ WENDELIN_CORE_TEST_DB="<neo>" make test.py   # ----------//---------- with NEO

    $ WENDELIN_CORE_TEST_DB=neo://db@master make test.py   # test with externally provided DB

The default is still to run tests with FileStorage.

/cc @jm
-
- 25 Jun, 2015 1 commit
-
-
Kirill Smelkov authored
Factor out the routines that open a ZODB database into a common place. The reason for doing so is that we will soon teach dbopen to automatically recognize several protocols, e.g. neo:// and zeo://, and this way clients which use dbopen() will automatically get access to storages besides FileStorage.
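A hedged sketch of the shape such a dbopen() could take; only the FileStorage path below uses real ZODB APIs, the neo:// / zeo:// branches are placeholders, and what exactly is returned is an assumption:

    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    def dbopen(uri):
        # to be taught to recognize several protocols, e.g. neo:// and zeo://
        if uri.startswith('neo://') or uri.startswith('zeo://'):
            raise NotImplementedError('%s: scheme support to be added' % uri)
        stor = FileStorage(uri)     # plain path -> FileStorage
        db   = DB(stor)
        conn = db.open()
        return conn.root()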
-
- 02 Jun, 2015 3 commits
-
-
Kirill Smelkov authored
BigArrays can be big - up to 2^64 bytes - and thus, in general, it is not possible to represent the whole BigArray as an ndarray view, because the address space is usually smaller on 64-bit architectures. However, users often try to pass BigArrays to numpy functions as-is, and numpy finds a way to convert, or start converting, the BigArray to an ndarray - by detecting it as a sequence and extracting elements one by one. Which is slooooow.

Because of the above, we provide users a well-defined service:

- if virtual address space is available - we succeed at creating an ndarray view for the whole BigArray, without delay and without copying;
- if not - we properly report the error and give a hint that BigArrays have to be processed in chunks.

Verifying that big BigArrays cannot be converted to ndarray also tests for behaviour and issues fixed in the last 5 patches.

/cc @Tyagov
/cc @klaus
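A hedged usage sketch of that service from a client's point of view - try to get a single ndarray view, and fall back to chunked processing if the address space is not enough (assuming the "not possible" case is reported as MemoryError, and with an arbitrary chunk size):

    def total(A, chunk=1 << 20):
        try:
            a = A[:]        # ndarray view of the whole BigArray, no copy
        except MemoryError:
            # not enough address space - process the array chunk by chunk
            return sum(A[i:i + chunk].sum() for i in range(0, len(A), chunk))
        return a.sum()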
-
Kirill Smelkov authored
e.g.

    In [1]: multiply.reduce((1<<30, 1<<30, 1<<30))
    Out[1]: 0

instead of

    In [2]: (1<<30) * (1<<30) * (1<<30)
    Out[2]: 1237940039285380274899124224

    In [3]: 1<<90
    Out[3]: 1237940039285380274899124224

also multiply.reduce returns int64, instead of a python int:

    In [4]: type( multiply.reduce([1,2,3]) )
    Out[4]: numpy.int64

which also leads to overflow-related problems if we further compute with this value and other integers and the result exceeds int64 - it becomes float:

    In [5]: idx0_stop = 18446744073709551615

    In [6]: stride0 = numpy.int64(1)

    In [7]: byte0_stop = idx0_stop * stride0

    In [8]: byte0_stop
    Out[8]: 1.8446744073709552e+19

and then it becomes a real problem for BigArray.__getitem__()

    wendelin.core/bigarray/__init__.py:326: RuntimeWarning: overflow encountered in long_scalars
      page0_min = min(byte0_start, byte0_stop+byte0_stride) // pagesize  # TODO -> fileh.pagesize

and then

    >       vma0 = self._fileh.mmap(page0_min, page0_max-page0_min+1)
    E       TypeError: integer argument expected, got float

So just avoid multiply.reduce() and implement our own mul() properly, the same way sum() is built into python, and this way we avoid the overflow-related problems.
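A minimal sketch of such a mul(), built on python's reduce over plain python integers, whose arbitrary precision avoids the overflow (the actual helper in the codebase may differ):

    from functools import reduce

    def mul(seq, initial=1):
        # plain python ints never overflow, unlike numpy.multiply.reduce's int64
        return reduce(lambda x, y: x * y, seq, initial)

    assert mul((1<<30, 1<<30, 1<<30)) == 1 << 90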
-
Kirill Smelkov authored
OverflowError when computing slice indices practically means we will not be able to allocate that much address space at the next step:

    In [1]: s = slice(None)

    In [2]: s.indices(1<<62)
    Out[2]: (0, 4611686018427387904, 1)

    In [3]: s.indices(1<<63)
    ---------------------------------------------------------------------------
    OverflowError                             Traceback (most recent call last)
    <ipython-input-4-5aa549641bc6> in <module>()
    ----> 1 s.indices(1<<63)

    OverflowError: cannot fit 'long' into an index-sized integer

So translate this OverflowError into MemoryError (preserving the message details), because we will need such "not enough address space" cases to show up as MemoryError in an upcoming patch.
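A hedged sketch of that translation (the surrounding slice-handling code is assumed, not copied from the implementation):

    def slice_indices(slice_, length):
        # report "cannot fit into an index-sized integer" as MemoryError,
        # preserving the message details
        try:
            return slice_.indices(length)
        except OverflowError as e:
            raise MemoryError(str(e))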
-
- 28 May, 2015 3 commits
-
-
Kirill Smelkov authored
It was hanging with NumPy 1.9 before 425dc5d1 (bigarray: Raise IndexError for out-of-bound element access), because of the following correct NumPy commit:

    https://github.com/numpy/numpy/commit/d36f8227

and in particular

    https://github.com/numpy/numpy/commit/d36f8227#diff-6d326badc0872de91e025cbfb0be1aafR522

That PySequence_Fast(obj) (with obj being a BigArray) creates an iterator on top of obj, and before our previous IndexError fix in 425dc5d1 this was looping forever. Test explicitly with both NumPy 1.8 and NumPy 1.9 that this construct does not hang.

/cc @Tyagov
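A hedged illustration of why raising IndexError at the end makes such iteration terminate - numpy's sequence handling relies on the old-style sequence protocol, which stops exactly on IndexError (plain numpy below, BigArray itself is not involved):

    import numpy as np

    class Seq:
        # minimal stand-in: __getitem__ only, like a sequence numpy has to iterate
        def __getitem__(self, i):
            if i >= 3:
                raise IndexError(i)     # without this, iteration would never stop
            return np.zeros(2, dtype=np.uint8)

    a = np.array(list(Seq()))           # terminates thanks to the IndexError
    assert a.shape == (3, 2)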
-
Kirill Smelkov authored
The way BigArray.__getitem__ works for element access is that, for e.g. A[i], it translates the request to A[i:i+1] and remembers to lower the dimensionality at scalar index

    dim_adjust = (0,)

so, in full, A[i] is computed this way:

    A[i] -> A[i:i+1](0,)

( it is done this way to unify the code for scalar / slice access in __getitem__ - see 0c826d5c "BigArray: An ndarray-like on top of BigFile memory mappings" )

The code for slice access also has a shortcut - if it sees that the slice results in an empty array (e.g. for an out-of-bounds slice), it avoids spending time on creating a file vma mapping only to build an empty view on top of it.

In 0c826d5c, however, that optimization forgot to apply the "lower the dimensionality" step on top of the resulting empty view, and so IndexError was not raised for out-of-bounds scalar access:

    A = BigArray((10,), uint8)

    In [1]: A[0]
    Out[1]: 0

    In [2]: A[1]
    Out[2]: 0

    In [3]: A[2]
    Out[3]: 0

    In [4]: A[9]
    Out[4]: 0

    In [5]: A[10]
    Out[5]: array([], dtype=uint8)

NOTE that A[10] returns an empty array instead of raising IndexError.

So do not forget to apply the "reduce dimensionality" step for empty views too, and this way we get a proper IndexError (because for an empty view, scalar access results in IndexError).

NOTE: this bug was also preventing e.g. list(A) from working, because list(A) internally works this way:

    l = []
    i = iter(A)
    for _ in i:
        l.append(_)

but iterating would not stop after 10 elements - after the array end, _ would always be array([], dtype=uint8), so the loop never finished and memory usage grew to infinity.

/cc @Tyagov
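A hedged sketch of the scalar-access scheme and the fix, with a plain ndarray standing in for the view (dim_adjust follows the text above; the real __getitem__ is more involved):

    import numpy as np

    def getitem_scalar(a, i):
        view = a[i:i+1]         # slice access; empty for out-of-bounds i
        dim_adjust = (0,)       # lower the dimensionality back to a scalar
        # applying dim_adjust to an *empty* view raises IndexError -
        # exactly what out-of-bounds scalar access should do
        return view[dim_adjust]

    a = np.arange(10, dtype=np.uint8)
    assert getitem_scalar(a, 9) == 9
    try:
        getitem_scalar(a, 10)
    except IndexError:
        pass                    # out-of-bounds access properly raises IndexError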
-
Kirill Smelkov authored
In NumPy speak, advanced indexing is picking up arbitrarily requested elements, e.g.

    a = arange(10)
    a[[0,3,2]]  -> array([0, 3, 2])

The way this indexing scheme works is: it creates a new array with len = len(key), and picks up the requested elements sequentially into the new area. So it is very much not the same as creating a _view_ into the original array data via basic indexing [1].

BigArray does not support advanced indexing, because its main job is to organize an ndarray _view_ backed by BigFile data and give that view to clients, and then it is up to clients how they use that view, with the full numpy API available on it.

So be explicit, and reject advanced indexing in __getitem__ right at the beginning.

[1] http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
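A hedged sketch of such an early check (the real rejection in __getitem__ may differ in details):

    import numpy as np

    def _reject_advanced_indexing(key):
        # a list or ndarray of indices means advanced indexing - numpy would
        # have to copy elements one by one instead of returning a view
        if isinstance(key, (list, np.ndarray)):
            raise TypeError('BigArray does not support advanced indexing; '
                            'use basic indexing to get an ndarray view')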
-
- 20 May, 2015 3 commits
-
-
Kirill Smelkov authored
In NumPy, ndarray has .resize(), but it actually does a whole-array copy into a newly allocated larger segment, which makes e.g. appending O(n).

For BigArray we don't have the internal constraint NumPy has - to keep the array itself contiguously _stored_ (compare to contiguously _presented_ in memory). So we can have O(1) resize for big arrays.

NOTE: having O(1) resize, here is how O(δ) append can be done:

    A                               # ZBigArray e.g. of shape   (N, 3)
    n = len(A)                      # length of A's major index  =N
    A.resize((n+δ,) + A.shape[1:])  # add δ new entries ; now len(A) =N+δ
    A[-δ:] = <new-data>             # set data for last new δ entries

/cc @klaus
-
Kirill Smelkov authored
test_bigarray_indexing_Nd() contains a useful class that provides a BigFile connected to ndarray storage. Factor it out so that all tests can use it. BigFile_Data.storeblk() is newly introduced and is currently unused, but will be convenient to have later.
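A hedged sketch of the idea - blocks loaded from / stored to a plain ndarray - written against a stand-in base class, since the real BigFile_Data subclasses the C-level BigFile with its own constructor conventions:

    import numpy as np

    class NDArrayBackedFile:
        """blksize-sized blocks live in a flat uint8 ndarray."""
        def __init__(self, data, blksize):
            self.data    = data
            self.blksize = blksize

        def loadblk(self, blk, buf):
            x = blk * self.blksize
            np.copyto(np.asarray(buf), self.data[x : x + self.blksize])

        def storeblk(self, blk, buf):
            x = blk * self.blksize
            np.copyto(self.data[x : x + self.blksize], np.asarray(buf))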
-
Kirill Smelkov authored
-
- 03 Apr, 2015 2 commits
-
-
Kirill Smelkov authored
This is to BigArray what ZBigFile is to BigFile (4174b84a "bigfile: BigFile backend to store data in ZODB").
-
Kirill Smelkov authored
I.e. something like numpy.memmap for numpy.ndarray and OS files. The whole bigarray cannot be used as a drop-in replacement for numpy arrays, but BigArray _slices_ are real ndarrays and can be used everywhere an ndarray can be used, including in C/Fortran code. Slice size is limited by the mapping-size (= address-space size) limit, i.e. to a max of ~127TB on Linux/amd64.

Changes to BigArray memory are changes to the BigFile memory mapping, and as such they can be discarded or saved back to the BigFile using the mapping's (= BigFileH's) dirty discard/writeout interface. For the same reason, the whole amount of changes to memory is limited by the amount of physical RAM.
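For comparison, here is what the analogous numpy.memmap pattern looks like for a plain OS file (standard numpy only, shown to illustrate the analogy drawn above, not BigArray's own API):

    import numpy as np

    m = np.memmap('data.bin', dtype=np.uint8, mode='w+', shape=(1 << 20,))
    a = m[:4096]        # a slice is a real ndarray view into the mapping
    a[:] = 1            # changes go into the mapping ...
    m.flush()           # ... and can be written back to the file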
-