- 28 Sep, 2015 4 commits
-
-
Kirill Smelkov authored
Our current workloads are mostly a lot of small data changes, and this is what ZBlk1 was created for. Yes, it has larger overhead for accessing data, but we have already outlined the way to handle this in 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) -> move data deduplication/management to the server side. So let ZBlk1 be the default for now.

/cc @Tyagov, @klaus
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
Kirill Smelkov authored
We check all pairs of possible formats, and for every pair we write some data to file block 0 and check that it is stored in the source format; then we change the default format, write other data, and check that file.blktab[0] changed its type. I.e. with this test we verify that we can read data in the old format and change it incrementally to the new one. And the other way around: if data is in the new format and we decide to migrate back to the old one, that also works.

/cc @Tyagov, @klaus
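A sketch of the test's shape, with hypothetical helpers passed in as parameters (the real test lives in bigfile/tests/ and drives the write format differently):

    from itertools import permutations

    def test_zblk_fmt_change(f, fmt_registry, set_write_format, writeblk):
        # fmt_registry: format name -> ZBlk class, e.g. {'ZBlk0': ZBlk0, 'ZBlk1': ZBlk1}
        for src_fmt, dst_fmt in permutations(fmt_registry, 2):
            set_write_format(src_fmt)
            writeblk(f, 0, b'data-src')
            assert type(f.blktab[0]) is fmt_registry[src_fmt]

            # switch the default write format and write again - block 0 must
            # change to the new format on this write
            set_write_format(dst_fmt)
            writeblk(f, 0, b'data-dst')
            assert type(f.blktab[0]) is fmt_registry[dst_fmt]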
-
- 24 Sep, 2015 9 commits
-
-
Kirill Smelkov authored
Our current approach is that each file block is represented by 1 ZODB object, with block size being 2M. Even with trailing-\0 trimming, which halves the overhead on average, DB size grows very fast if we do a lot of small appends or changes. So another format needs to be introduced, which has lower overhead for storing small changes.

In general, to represent a BigFile as ZODB objects, each file block can be represented separately either as

1) one ZODB object (ZBlk0 - this is what we already have), or
2) a group of ZODB objects (ZBlk1 - this is what we introduce), with a top-level BTree directory #blk -> objects representing the block.

For "1" we have

- low-overhead access time (only 1 object loaded from the DB), but
- high overhead in terms of ZODB size (with FileStorage / ZEO, every change to a block causes it to be written into the DB in full again).

For "2" we have

- low overhead in terms of ZODB size (only part of a block is overwritten in the DB on a single change), but
- high-overhead access time (several objects need to be loaded for 1 block).

In general it is not possible to have low overhead for both i) access time and ii) DB size with an approach where block object representation / management is done on the *client* side. On the other hand, if object management is moved to the DB *server* side, it is possible to deduplicate objects there and this way have low overhead for both access time and DB size, with the client storing just 1 object per file block. This will be our future approach, after we teach NEO about object deduplication.

~~~~

As shown in the last paragraph above, it is not possible to perform optimally on the client side. Thus ZBlk1 should be only an intermediate solution until we move data management to the DB server side, with the main criterion for ZBlk1 being to keep it simple.

In this patch a simple scheme is used, where every block is divided into chunks organized via a BTree. When a block part changes, only the corresponding chunk is updated. Chunk size is chosen to be 4K, which gives ~512 fanout for a 2M block.

DB size after the tests changes as follows:

            bigfile     bigarray

    ZBlk0   24K         6200K
    ZBlk1   36K         36K

(the slight size increase for bigfile tests is because of BTree structures overhead)

Time to run the tests stays approximately the same.

/cc @Tyagov, @klaus
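A minimal sketch of the chunked scheme under the names used above (ZData as the per-chunk object is an assumption; the real implementation in bigfile/file_zodb.py differs in details):

    from persistent import Persistent
    from BTrees.LOBTree import LOBTree

    CHUNKSIZE = 4 * 1024            # 4K chunks -> ~512 fanout for a 2M block

    class ZData(Persistent):        # one chunk of a block
        def __init__(self, data):
            self.data = data

    class ZBlk1(Persistent):
        def __init__(self):
            self.chunktab = LOBTree()   # chunk-start offset -> ZData

        def setblkdata(self, buf):
            buf = bytes(buf)
            for start in range(0, len(buf), CHUNKSIZE):
                chunk = buf[start:start + CHUNKSIZE].rstrip(b'\0')
                zchunk = self.chunktab.get(start)
                if not chunk:
                    if zchunk is not None:
                        del self.chunktab[start]    # all-zero chunks are not stored
                elif zchunk is None:
                    self.chunktab[start] = ZData(chunk)
                elif zchunk.data != chunk:
                    zchunk.data = chunk             # only this chunk is rewritten in the DB

        def loadblkdata(self, blksize):
            blk = bytearray(blksize)                # pages start zero-filled
            for start, zchunk in self.chunktab.items():
                blk[start:start + len(zchunk.data)] = zchunk.data
            return bytes(blk)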
-
Kirill Smelkov authored
- current ZBlk becomes format 0
- the write format can be selected via the WENDELIN_CORE_ZBLK_FMT env var
- upon writing a block we always make sure we write it in the current write format - so if a block was previously written in one format, it can be changed on the next write
- tox is prepared to test all write formats (so far only ZBlk0 is there)

The reason: in the next patch we'll introduce another format for blocks, which is optimized for small changes.
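A sketch of how the format dispatch could be wired; only the WENDELIN_CORE_ZBLK_FMT variable name comes from the patch itself, the registry and placeholder classes are illustrative:

    import os
    from persistent import Persistent

    class ZBlk0(Persistent):                    # placeholder: format 0
        def setblkdata(self, buf):
            self.blkdata = bytes(buf)

    class ZBlk1(Persistent):                    # placeholder: format 1 (next patch)
        def setblkdata(self, buf):
            self.blkdata = bytes(buf)

    ZBlk_fmt_registry = {'ZBlk0': ZBlk0, 'ZBlk1': ZBlk1}
    ZBlk_fmt_write = os.environ.get('WENDELIN_CORE_ZBLK_FMT', 'ZBlk0')

    def storeblk(zfile, blk, buf):
        fmt = ZBlk_fmt_registry[ZBlk_fmt_write]
        zblk = zfile.blktab.get(blk)
        if type(zblk) is not fmt:
            zblk = zfile.blktab[blk] = fmt()    # rewrite block in the current write format
        zblk.setblkdata(buf)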
-
Kirill Smelkov authored
If we aim to have several kinds of ZBlk, the functionality to invalidate a block and bind it to a zfile is common and should thus be shared. If we introduce a base class, it also makes sense to document there - in one place - what .loadblkdata() and .setblkdata() should do.
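A sketch of such a base class, assuming the name ZBlkBase (the real base in bigfile/file_zodb.py may be organized differently):

    from persistent import Persistent

    class ZBlkBase(Persistent):
        _v_zfile = None     # ZBigFile we belong to
        _v_blk   = None     # which block of that file we are

        def bindzfile(self, zfile, blk):
            # bind to zfile, so that invalidations can be propagated to it
            self._v_zfile = zfile
            self._v_blk   = blk

        def loadblkdata(self):
            # DB -> ._v_blkdata (-> memory page); format-specific
            raise NotImplementedError

        def setblkdata(self, buf):
            # (memory page ->) ._v_blkdata -> DB; format-specific
            raise NotImplementedError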
-
Kirill Smelkov authored
- we have logic to init ._v_zfile and ._v_blk there
- the same for ._v_blkdata - the logic to init it goes there too
- it is better to set variables right at instance creation, not hoping "it will be set from outside by master"

NOTE ._v_blkdata = None means the block was not yet loaded; this generally fits into the logic of how ZBlk operates, and thus the change is ok.
-
Kirill Smelkov authored
A lot of times data in blocks comes shorter than the block size, and the rest of the memory page is zeros (because it was pre-filled with zeros by the OS when the page was allocated). Use a simple heuristic: trim those trailing zeros and do not store them into the DB.

With this change the size of the DB created by running the bigfile and bigarray tests changes as follows:

            bigfile     bigarray

    old     145M        35M
    new     24K         6M

Trimming trailing zeros is currently done with str.rstrip('\0'), which creates a copy. When/if needed, this could be optimized to work in-place.
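A minimal sketch of the trimming, using the same str.rstrip('\0') approach (shown with bytes; the explicit re-padding on load is only for illustration, since freshly allocated pages are already zero-filled):

    def store(blkdata):
        return blkdata.rstrip(b'\0')    # don't store trailing zeros

    def load(stored, blksize):
        return stored + b'\0' * (blksize - len(stored))

    data = b'hello' + b'\0' * 100
    assert load(store(data), len(data)) == data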
-
Kirill Smelkov authored
- to keep things uniform with its counterpart .loadblkdata()
- so that master does not mess with ZBlk internals and works only through the interface - this way it will be possible to use several kinds of ZBlk
-
Kirill Smelkov authored
All those functions move data between the DB and ._v_blkdata; only master then connects that data to a memory page. Make this fact explicit.
-
Kirill Smelkov authored
Again, leftover from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
Kirill Smelkov authored
The comment was a leftover from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 23 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
i.e. it is ok to copy smaller data into a larger buffer.
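Usage sketch, assuming (per this patch) that memcpy copies len(src) bytes and only requires len(dst) >= len(src):

    from wendelin.lib.mem import memcpy

    src = bytearray(b'hello')
    dst = bytearray(16)         # larger than src - now accepted
    memcpy(dst, src)            # fills dst[:5]; the tail stays zero
    assert bytes(dst[:5]) == b'hello'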
-
Kirill Smelkov authored
- not only a multiple of 8. We can do it by using uint8-typed arrays, and it does not hurt performance:

    In [1]: from wendelin.lib.mem import bzero, memset, memcpy
    In [2]: A = bytearray(2*1024*1024)
    In [3]: B = bytearray(2*1024*1024)

            memcpy(B, A)    bzero(A)        memset(A, 0xff)

    old:    718 µs          227 µs / 1116   228 µs / 1055   (*)
    new:    718 µs          176 µs / 1080   175 µs / 1048   (*)

(*) the second number comes from e.g.

    In [8]: timeit bzero(A)
    The slowest run took 4.63 times longer than the fastest. This could mean
    that an intermediate result is being cached
    10000 loops, best of 3: 228 µs per loop

so the second number is more realistic: it says the performance stays approximately the same and even slightly improves.
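A sketch of the idea, assuming a numpy-based implementation similar to wendelin.lib.mem's: viewing a buffer as a uint8 array removes the length-multiple-of-8 restriction:

    import numpy as np

    def memset(dst, c):
        # works on any writable buffer of any length
        np.frombuffer(dst, dtype=np.uint8)[:] = c

    def bzero(dst):
        memset(dst, 0)

    def memcpy(dst, src):
        n = len(src)
        np.frombuffer(dst, dtype=np.uint8)[:n] = np.frombuffer(src, dtype=np.uint8)

    A = bytearray(b'\xff' * 10)     # length not a multiple of 8
    bzero(A)
    assert A == bytearray(10)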
-
- 21 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
When we serve an indexing request, we first compute the page range in the backing file which contains the result, based on the major index range; then we mmap that file range and pick up the result from there. The page range math was, however, not correct: e.g. for positive strides, the last element's last byte is (byte0_stop-1), NOT (byte0_stop - byte0_stride), which, for cases where byte0_stop is just a bit after a page boundary, can make a difference - page_max will be 1 page less than it should be, and then whole ndarray view creation breaks:

    ...
    Module wendelin.bigarray, line 381, in __getitem__
      view0 = ndarray(view0_shape, self._dtype, vma0, view0_offset, view0_stridev)
    ValueError: strides is incompatible with shape of requested array and size of buffer

(because vma0 was created smaller in size than what is needed to create a view0_shape-shaped array starting from view0_offset in vma0)

A similar story applies to the negative strides math - it was not correct either.

Fix it.

/reported-by @Camata
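A worked example of the corrected math, with hypothetical concrete numbers (4K pages, itemsize 4, stride 8, and byte0_stop just a bit past a page boundary):

    PS = 4096                       # pagesize

    byte0_start, byte0_stop, byte0_stride = 0, PS + 4, 8

    page_min         = byte0_start // PS                    # 0
    page_max_correct = (byte0_stop - 1) // PS               # 1 -> mmap covers 2 pages
    page_max_wrong   = (byte0_stop - byte0_stride) // PS    # 0 -> vma0 is 1 page short

    assert (page_max_correct, page_max_wrong) == (1, 0)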
-
Kirill Smelkov authored
We'll need this class in tests in the next patch.
-
- 11 Sep, 2015 1 commit
-
-
Kirill Smelkov authored
It was already done from the beginning in 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 02 Sep, 2015 2 commits
-
-
Kirill Smelkov authored
bigfile/zodb/tests: Make sure _p_invalidate() in ZBlk.loadblk() does not lead to reloading updated data

Thanks to ZODB being MVCC this does not happen, but we'd better test it explicitly.
-
Kirill Smelkov authored
We'll need it in other places in the next patch.
-
- 19 Aug, 2015 1 commit
-
-
Kirill Smelkov authored
-
- 18 Aug, 2015 4 commits
-
-
Kirill Smelkov authored
e.g. on .shape
-
Kirill Smelkov authored
When there is a conflict (on any object, but on a ZBlk in particular) ZODB machinery calls its ._p_invalidate() twice:

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
      tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
      return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
      self._commitResources()
    File ".../transaction/_transaction.py", line 414, in _commitResources
      self._cleanup(L)
    File ".../transaction/_transaction.py", line 426, in _cleanup
      rm.abort(self)
    File ".../ZODB/Connection.py", line 436, in abort
      self._abort()
    File ".../ZODB/Connection.py", line 479, in _abort
      self._cache.invalidate(oid)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
      traceback.print_stack()

and

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
      tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
      return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
      self._commitResources()
    File ".../transaction/_transaction.py", line 416, in _commitResources
      self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../transaction/weakset.py", line 59, in map
      f(elt)
    File ".../transaction/_transaction.py", line 416, in <lambda>
      self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../ZODB/Connection.py", line 831, in _storage_sync
      self._flush_invalidations()
    File ".../ZODB/Connection.py", line 539, in _flush_invalidations
      self._cache.invalidate(invalidated)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
      traceback.print_stack()

i.e. the first invalidation is done by commit cleanup:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L414
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L479

and then Connection.afterCompletion() flushes invalidations again:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L416
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L833
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L539

If there is no conflict, no ConflictError is raised, and thus no Transaction._cleanup() is done in ._commitResources() -> invalidation is called only once. But with a ConflictError it is called twice.

Adjust ZBlk._p_invalidate() not to delve into real invalidation more than once - else we would fail, as ZBlk._v_zfile becomes unbound after the invalidation is done the first time.
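A sketch of the adjustment, assuming persistent's GHOST state constant and an invalidateblk() propagation hook as in the earlier invalidation patch; the real code is in bigfile/file_zodb.py:

    from persistent import Persistent
    from persistent.interfaces import GHOST

    class ZBlk(Persistent):
        _v_zfile = None
        _v_blk   = None

        def _p_invalidate(self):
            # on ConflictError ZODB invalidates us twice (see tracebacks above);
            # once we are ghost, ._v_zfile is already unbound - don't redo
            if self._p_state == GHOST:
                return
            if self._v_zfile is not None:
                self._v_zfile.invalidateblk(self._v_blk)    # propagate to loaded pages
            Persistent._p_invalidate(self)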
-
Kirill Smelkov authored
All of this is currently handled correctly, but an observation is made: upon such an invalidation we throw away ._v_fileh, i.e. we throw away the whole data cache, just because an array was resized.
-
Kirill Smelkov authored
LivePersistent can go to ghost state, because invalidations cannot be ignored: they indicate the object has been changed externally.

This does not break our logic for ZBigFile and ZBigArray, as invalidations can happen only at transaction boundaries; during the course of a transaction those classes are thus guaranteed to stay uptodate and not lose ._v_file and ._v_fileh (which is the reason they inherit from LivePersistent). It is ok to lose ._v_file and ._v_fileh at a transaction boundary and become ghost - those objects will be recreated upon going back uptodate, and will stay alive again during the whole transaction window.

We only care not to lose e.g. ._v_fileh *inside* a transaction, because losing that data manager, and thus the data it manages, inside a transaction can break synchronization logic and forget changed-through-mmap data.
-
- 17 Aug, 2015 5 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
If we do, ZBigFileH objects just don't get garbage collected, and sooner or later this leaks enough file descriptors that the main zope loop breaks:

    Traceback (most recent call last):
      File ".../bin/runzope", line 194, in <module>
        sys.exit(Zope2.Startup.run.run())
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 26, in run
        starter.run()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 105, in run
        Lifetime.loop()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 43, in loop
        lifetime_loop()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 53, in lifetime_loop
        asyncore.poll(timeout, map)
      File ".../parts/python2.7/lib/python2.7/asyncore.py", line 145, in poll
        r, w, e = select.select(r, w, e, timeout)
    ValueError: filedescriptor out of range in select()

    $ lsof -p <runzope-pid> | grep ramh | wc -l
    950

So, continuing 64d1f40b (bigfile/zodb: Monkey-patch for ZODB.Connection to support callback on .open()), let's change the implementation to use a WeakSet for the callbacks list.

Yes, because weakrefs to bound methods are released immediately, we give up the flexibility to subscribe arbitrary callbacks. If that becomes an issue, there is something like WeakMethod from py3, or recipes from the net for how to do it.
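A sketch of the WeakSet-based registry and of the bound-method caveat (names illustrative):

    from weakref import WeakSet

    class Connection:
        def __init__(self):
            self.on_open_callbacks = WeakSet()

        def onOpenCallback(self, f):
            # held weakly: when the subscriber dies, its entry vanishes - no leak
            self.on_open_callbacks.add(f)

        def open(self):
            for f in list(self.on_open_callbacks):
                f()

    # CAVEAT a weakref to a bound method dies immediately, so subscribers must
    # be callable *objects* kept alive elsewhere, not obj.method:
    class Resync:
        def __call__(self):
            print('connection reopened')

    conn = Connection()
    r = Resync()                # we must keep a strong reference to r ourselves
    conn.onOpenCallback(r)
    conn.open()                 # -> 'connection reopened'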
-
Kirill Smelkov authored
ZODB 3.10.4 was released almost 4 years ago, and contains a significant change in how ghost objects coming from the DB are initially set up.
-
Kirill Smelkov authored
Continuing the theme from the previous patch, here is propagation of invalidation messages from ZODB to BigFileH memory. The use case: one fileh mapping was created in one connection, another in a second connection; after doing changes in the second connection and committing there, the first fileh has to invalidate the appropriate already-loaded pages, so that its next transaction won't work with stale data.

To do this, we hook into ZBlk._p_invalidate() and propagate the invalidation message to the ZBigFile, which then notifies all ZBigFileH opened through it to invalidate a page.

The ZBlk -> ZBigFile lookup is done without storing a backpointer in ZODB - instead, every time ZBigFile touches a ZBlk object (and thus potentially transitions it GHOST -> Live), we (re-)bind it back to the ZBigFile. Since ZBigFile is the only class that works with ZBlk objects, it is safe to do so.

For ZBigFile to notify "all ZBigFileH opened through it", a weakset is introduced to track them.

Otherwise, the real page invalidation work is done by virtmem (see the previous patch).
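Schematically, the propagation chain ZBlk._p_invalidate() -> ZBigFile -> ZBigFileH (method names illustrative; the real page invalidation is done by virtmem):

    from weakref import WeakSet

    class ZBigFileH:
        def __init__(self, zfile):
            zfile._v_filehset.add(self)     # register: opened through zfile

        def invalidate_page(self, pgoffset):
            pass    # -> fileh_invalidate_page() in virtmem (previous patch)

    class ZBigFile:
        def __init__(self):
            self._v_filehset = WeakSet()    # all ZBigFileH opened through us

        def invalidateblk(self, blk):
            # called from ._p_invalidate() of a ZBlk that is bound to us
            for fileh in list(self._v_filehset):
                fileh.invalidate_page(blk)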
-
Kirill Smelkov authored
FileH is a handle representing a snapshot of a file. If, for a pgoffset, a fileh already has a loaded page, but we know the content of the file has changed externally after the loading was done, we need to propagate to the fileh that such-and-such page should be invalidated (and reloaded on next access).

This patch introduces fileh_invalidate_page(fileh, pgoffset) to do just that. In the next patch we'll use this facility to propagate invalidations of ZBlk ZODB objects to the virtmem subsystem.

NOTE Since invalidation removes "dirtiness" from a page state, several subsequent invalidations can make a fileh completely non-dirty (by invalidating all dirty pages). Previously fileh->dirty was just one bit, so we needed to improve how we track dirtiness.

One way would be to have a dirty list of fileh pages and operate on that. This has the advantage of also optimizing dirty-page processing, e.g. fileh_dirty_writeout(), where we currently scan through all fileh pages just to write only the PAGE_DIRTY ones. Another, simpler way is to make fileh->dirty a counter and maintain that.

Since we are going to move the virtmem subsystem back into the kernel, the simpler, less intrusive approach is used here; a schematic sketch follows.
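An illustrative counter-based version of the tracking (the real code is C in bigfile/virtmem.c; states and names here are schematic):

    class Page:
        def __init__(self):
            self.state = 'CLEAN'            # CLEAN | DIRTY | INVALID (schematic)

    class FileH:
        def __init__(self):
            self.dirty = 0                  # count of PAGE_DIRTY pages, not a 1-bit flag

        def page_mkdirty(self, page):
            if page.state != 'DIRTY':
                page.state = 'DIRTY'
                self.dirty += 1

        def page_invalidate(self, page):
            if page.state == 'DIRTY':
                self.dirty -= 1             # several invalidations can bring us back to fully non-dirty
            page.state = 'INVALID'          # reloaded from DB on next access

        def isdirty(self):
            return self.dirty > 0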
-
- 12 Aug, 2015 4 commits
-
-
Kirill Smelkov authored
Intro
-----

ZODB maintains a pool of opened-to-DB connections. For each request Zope opens 1 connection and, after the request handling is done, returns the connection back to the ZODB pool (via Connection.close()). The same connection will be opened again for handling some future next request at some future time. This next open can happen in a worker thread different from the first one.

TransactionManager (as accessed by transaction.{get,commit,abort,...}) is thread-local, i.e. transaction.get() returns different transactions for threads T1 and T2.

When _ZBigFileH hooks into txn_manager, to get a chance to run its .beforeCompletion() when transaction.commit() is run, it hooks into the _current_ _thread's_ transaction manager. Without unhooking on connection close, under circumstances where the connection migrates to a different thread, this can lead to dissynchronization between the ZBigFileH managing fileh pages and the Connection with ZODB objects. And even to data corruption, e.g.

    T1                      T2

    open
    zarray[0] = 11
    commit
    close

                            open        # opens the connection closed in T1
    open
    zarray[0] = 21
    commit
                            abort
    close
                            close

Here zarray[0]=21 _will_ be committed by T1 as part of the T1 transaction - because when T1 commits, .beforeCompletion() for zarray is invoked, sees there is dirty data, propagates the changes to the ZODB objects in T2's connection, joins T2's connection into T1's txn, and then T1's txn, when doing its two-phase commit, stores the modified objects to the DB -> oops.

----------------------------------------

To prevent such dissynchronization, _ZBigFileH needs to be a DataManager which works in sync with the connection it was initially created under: on connection close, unregister from the transaction_manager; and on connection open, register to the transaction manager in the current - possibly different - thread context. Then there won't be incorrect beforeCompletion() notifications and corruption.

This issue, besides possible data corruption, was probably also exposing itself via the following ways we've seen in production (in all cases the connection was migrated from T1 to T2):

1. Exception

    ZODB.POSException.ConnectionStateError: ConnectionStateError('Cannot close a connection joined to a transaction',)
    in <bound method Cleanup.__del__ of <App.ZApplication.Cleanup instance at 0x7f10f4bab050>> ignored

    T1                      T2

    modify zarray
                            commit/abort    # does not join zarray to T2.txn, because
                                            # .beforeCompletion() is registered in T1.txn_manager

    commit                  # T1 invokes .beforeCompletion()
    ...                     # beforeCompletion() joins ZBigFileH and zarray._p_jar (= T2.conn) to T1.txn
    ...                     # commit is going on in progress
    ...
                            close           # T2 thinks request handling is done and
    ...                                     # closes the connection. But T2.conn is
    ...                                     # still joined to T1.txn

2. Traceback

    Traceback (most recent call last):
      File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
        def storeblk(self, blk, buf):   return self.zself.storeblk(blk, buf)
      File ".../wendelin/bigfile/file_zodb.py", line 220, in storeblk
        zblk._v_blkdata = bytes(buf)                # FIXME does memcpy
      File ".../ZODB/Connection.py", line 857, in setstate
        raise ConnectionStateError(msg)
    ZODB.POSException.ConnectionStateError: Shouldn't load state for 0x1f23a5 when the connection is closed

Similar to "1", but the close in T2 happens sooner, so that when T1 does the commit and tries to store an object into the database, the Connection refuses to do the store:

    T1                      T2

    modify zarray
                            commit/abort

    commit
    ...                     close
    ...
    . obj.store()
    ...
    ...

3. Traceback

    Traceback (most recent call last):
      File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
        def storeblk(self, blk, buf):   return self.zself.storeblk(blk, buf)
      File ".../wendelin/bigfile/file_zodb.py", line 221, in storeblk
        zblk._p_changed = True                      # if zblk was already in DB: _p_state -> CHANGED
      File ".../ZODB/Connection.py", line 979, in register
        self._register(obj)
      File ".../ZODB/Connection.py", line 989, in _register
        self.transaction_manager.get().join(self)
      File ".../transaction/_transaction.py", line 220, in join
        Status.ACTIVE, Status.DOOMED, self.status))
    ValueError: expected txn status 'Active' or 'Doomed', but it's 'Committing'

(storeblk() does zblk._p_changed=True -> Connection.register(zblk) -> txn.join(), but the txn is already committing. IOW storeblk() was invoked with txn.state already being 'Committing'.)

    T1                      T2

                            modify obj      # this way T2.conn joins T2.txn
    modify zarray

    commit                  # T1 invokes .beforeCompletion()
    ...                     # beforeCompletion() joins only _ZBigFileH to T1.txn
    ...                     # (because T2.conn is already marked as joined)
    ...
                            commit/abort    # T2 does commit/abort - this touches only T2.conn, not ZBigFileH
    ...                                     # in particular T2.conn is now reset to be not joined
    ...
    . tpc_begin             # the actual active commit phase of T1 was somehow delayed a bit
    . tpc_commit            # when changes from RAM propagate to ZODB objects, the associated
    . storeblk              # connection (= T2.conn !) is notified again and
    . zblk = ...            # wants to join a txn for what it thinks is its transaction_manager,
                            # which, when called from under T1, returns the *T1* transaction
                            # manager, for which T1.txn is already in state='Committing'

4. Empty transaction committed to NEO

(different from doing just transaction.commit() without changing any data - a connection was joined to the txn, but the set of modified objects turned out to be empty)

This is probably a race in Connection._register(), when both T1 and T2 go into it at the same time:

https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L988

    def _register(self, obj=None):
        if self._needs_to_join:
            self.transaction_manager.get().join(self)
            self._needs_to_join = False

    T1                              T2

    modify zarray
    commit
    ...
    .beforeCompletion
                                    modify obj
    . if T2.conn.needs_join         if T2.conn.needs_join       # race here
    . T2.conn.join(T1.txn)          T2.conn.join(T2.txn)        # as a result T2.conn joins both T1.txn and T2.txn
    . commit finishes                                           # T2.conn's registered-for-commit object list is now empty

                                    commit
                                    tpc_begin
                                    storage.tpc_begin
                                    tpc_commit                  # no object stored, because the for-commit list is empty

/cc @jm, @klaus, @Tyagov, @vpelletier
-
Kirill Smelkov authored
ZODB.Connection has support for calling callbacks on .close(), but not on .open(). We'll need to hook into both the Connection open and close process in the next patch (for _ZBigFileH to stay in sync with Connection state).

NOTE On-open callbacks are set up once and fire many times, on every open; on-close callbacks are set up once and fire only once, on the next close. The reason for this is that on-close callbacks are useful for scheduling cleanup of the current connection, after its processing is done, whereas an on-open callback is for future connection usage, which is generally not related to the current connection.

/cc @jm, @vpelletier
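A sketch of the monkey-patch shape, assuming the hook name onOpenCallback and ZODB 3.10's Connection.open(transaction_manager=None, delegate=True) signature; the real patch may differ:

    from ZODB.Connection import Connection

    Connection._onOpenCallbacks = None

    def onOpenCallback(self, f):
        if self._onOpenCallbacks is None:
            self._onOpenCallbacks = []
        self._onOpenCallbacks.append(f)         # fires on *every* subsequent open

    Connection_open_orig = Connection.open

    def open(self, transaction_manager=None, delegate=True):
        Connection_open_orig(self, transaction_manager, delegate)
        if self._onOpenCallbacks:
            for f in self._onOpenCallbacks[:]:  # copy: f() may (un)subscribe
                f()

    Connection.onOpenCallback = onOpenCallback
    Connection.open = open

In contrast, the stock Connection.onCloseCallback(f) queue is emptied when it fires, so close callbacks run only once.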
-
Kirill Smelkov authored
-
Kirill Smelkov authored
(without dbclose, the next test would not be able to open the database - it would time out waiting for the FileStorage lock)
-
- 09 Aug, 2015 1 commit
-
-
Kirill Smelkov authored
Previously we were limited to printing the traceback starting down from just storeblk(), via an explicit PyErr_PrintEx() - because pybuf was attached to memory which could go away right after the return from the C function - so we had to destroy that object for sure, not letting any traceback hold a reference to it. This turned out to be too limiting and did not show the full context where errors happen.

So do the following trick: before returning, reattach pybuf to an empty region at NULL; this way we don't need to worry about pybuf pointing to memory which can go away -> thus, instead of printing the exception locally, just return it the usual way it is done with the C API in Python.

NOTE In contrast to PyMemoryViewObject, the PyBufferObject definition is not public, so to support Python 2 we had to copy its definition into the PY2 compat header.

NOTE2 loadblk() is not touched - the loading is done from sighandler context, which simulates working in a separate python thread, so it is left as is for now.
-
- 06 Aug, 2015 5 commits
-
-
Kirill Smelkov authored
At present several running threads can corrupt internal virtmem datastructures (e.g. ram->lru_list, fileh->pagemap, etc).

This can happen even if we have zope instances with only 1 worker thread - because there are other "system" threads, and python garbage collection can trigger at any thread, so if a virtmem object, e.g. a VMA or a FileH, was sitting in the GC queue waiting to be collected, its collection, and thus e.g. vma_unmap() and fileh_close(), will be called from a different-from-worker thread.

Because of that, virtmem just has to be aware of threads, so as not to allow internal datastructure corruption.

On the other hand, the idea of introducing a userspace virtual memory manager turned out to be not so good from the performance and complexity points of view, and thus the plan is to try to move it back into the kernel. This way it does not make sense to do a well-optimised locking implementation for the userspace version.

So we do just a simple single "protect-all" big lock for virtmem.

Of particular note is the interaction with Python's GIL - any long-lived lock has to be taken with the GIL released, because otherwise it can deadlock:

    t1              t2

    G
                    V
    V               G

(t1 holds the GIL and waits for the virtmem lock, while t2 holds the virtmem lock and waits for the GIL)

So we introduce helpers to make sure the GIL is not taken, and to retake it back if we were holding it initially. Those helpers (py_gil_ensure_unlocked / py_gil_retake_if_waslocked) are symmetrical opposites to what Python provides to make sure the GIL is locked (via PyGILState_Ensure / PyGILState_Release).

Otherwise, the patch is a more-or-less straightforward application of the one-big-lock-to-protect-everything idea; a Python-level illustration of the deadlock follows.
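The deadlock, modeled with plain Python locks (G stands for the GIL, V for the big virtmem lock; actually starting both threads hangs them forever):

    import threading, time

    G = threading.Lock()
    V = threading.Lock()

    def t1():
        with G:             # t1 runs Python code holding the GIL
            time.sleep(0.1)
            with V:         # blocks: t2 holds V and waits for G
                pass

    def t2():
        with V:             # t2 is inside virtmem
            time.sleep(0.1)
            with G:         # blocks: t1 holds G and waits for V
                pass

    # the fix mirrors py_gil_ensure_unlocked / py_gil_retake_if_waslocked:
    # release G before taking the long-lived V, retake G afterwards if held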
-
Kirill Smelkov authored
Mutex lock/unlock should not fail if mutex was correctly initialized/used.
-
Kirill Smelkov authored
And specifically that a GC'ed object's __del__ calls into virtmem (vma_dealloc and fileh_dealloc) again.

NOTE not sure it is a good idea to do GC from under a sighandler, but currently it happens in practice, because we did not care to protect against it.
-
Kirill Smelkov authored
We factored out SIGSEGV block/restore from fileh_dirty_writeout() to all functions in cb7a7055 (bigfile/virtmem: Block/restore SIGSEGV in non-pagefault-handling function). The restoration, however, just sets the whole thread sigmask. It is possible that, between the block and restore calls, the procmask for other signals is changed, and this way - by setting the procmask directly - we would overwrite those changes.

So be careful, and when restoring the SIGSEGV mask, touch the mask bit for only that signal.

(we need an xsigismember helper to get this done, which is also introduced in this patch)
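The same idea expressed with Python's signal.pthread_sigmask, purely as an illustration (the real code is C and uses sigprocmask plus the new xsigismember helper):

    import signal

    # block SIGSEGV, remembering whether it was blocked before
    old = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGSEGV})

    # ... run non-pagefault-handling code with SIGSEGV blocked ...

    # restore by touching only SIGSEGV's bit; SIG_SETMASK with `old` would also
    # overwrite any changes made to other signals' mask bits in between
    if signal.SIGSEGV not in old:
        signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGSEGV})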
-
Kirill Smelkov authored
The mistake was there from the beginning - from 3e5e78cd (lib/utils: Small C utilities we'll use).
-