- 15 Apr, 2020 1 commit
-
-
Kirill Smelkov authored
- mention in comments that _ZBigFileH not only proxies changes from virtmem -> ZODB, but also the other way: virtmem <- ZODB.
- refresh comments; fix a typo.
-
- 01 Apr, 2020 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
-
- 18 Dec, 2019 4 commits
-
-
Kirill Smelkov authored
It was long ago marked as "XXX move to common place".
-
Kirill Smelkov authored
I noticed this while working on WCFS: if the file's block topology changes, the invalidation process does not work correctly. It is also not correct with respect to live cache pressure. Add a FIXME in the code and a test for live cache pressure.

kirr/wendelin.core@5a4562fc
kirr/wendelin.core@48eb692f
kirr/wendelin.core@d1a579b2
kirr/wendelin.core@69c94fbc
-
Kirill Smelkov authored
For ZBlk0 this is trivial, but for ZBlk1 it may seem that we could avoid changing the ZBlk object itself and mark only the pointed-to ZData object as changed. However, that would not be correct if we consider invalidations. Noticed while working on WCFS.
-
Kirill Smelkov authored
Add package-level documentation to

- bigfile/file_zodb.py,
- bigarray/array_zodb.py, and
- lib/zodb.py

The most interesting read is file_zodb.py. Slightly improve documentation for functions in a couple of places. Improving documentation was long overdue, and this commit improves it only slightly.
-
- 23 May, 2019 1 commit
-
-
Kirill Smelkov authored
This continues c7c01ce4 (bigfile/zodb: ZODB.Connection can migrate between threads on close/open and we have to care): until now we were retrieving zconn.transaction_manager on _ZBigFileH init, and further using that transaction manager for every connection reopen. However that is not correct, because on every reopen the connection can be given a new transaction manager.

We were not practically hitting the bug, because until recently ZODB was, by default, using the same ThreadTransactionManager instance as Connection.transaction_manager for all connections, and not doing all the steps needed to keep _ZBigFileH.transaction_manager in sync with the Connection was forgiven - the particular transaction manager that was used was the TransactionManager instance implicitly associated with the current thread by the global thread-local transaction.manager. However, starting from ZODB 5.5.0, Connection code was changed to remember as .transaction_manager the particular TransactionManager instance, without any thread-local games:

    https://github.com/zopefoundation/ZODB/commit/b6ac40f153
    https://github.com/zopefoundation/ZODB/issues/208
    https://github.com/zopefoundation/ZODB/pull/226

Given that we were not syncing properly, that broke wendelin.core tests, for example:

    bigfile/tests/test_filezodb.py::test_bigfile_filezodb_vs_conn_migration
    Exception in thread Thread-1:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/usr/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "/home/kirr/src/wendelin/wendelin.core/bigfile/tests/test_filezodb.py", line 401, in T11
        transaction.commit()    # should be nothing
      File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/transaction/_manager.py", line 252, in commit
        return self.manager.commit()
      File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/transaction/_manager.py", line 131, in commit
        return self.get().commit()
      File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/transaction/_transaction.py", line 298, in commit
        self._synchronizers.map(lambda s: s.beforeCompletion(self))
      File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/transaction/weakset.py", line 61, in map
        f(elt)
      File "/home/kirr/src/wendelin/venv/z-dev/local/lib/python2.7/site-packages/transaction/_transaction.py", line 298, in <lambda>
        self._synchronizers.map(lambda s: s.beforeCompletion(self))
      File "/home/kirr/src/wendelin/wendelin.core/bigfile/file_zodb.py", line 671, in beforeCompletion
        assert txn is zconn.transaction_manager.get()
    AssertionError

What is happening here is that one thread used the connection and the ZBigFile/_ZBigFileH associated with it, then the connection was closed and released to the DB pool. Then the connection was reopened, but by another thread and thus with a different TransactionManager instance - and oops, _ZBigFileH.transaction_manager is different, because it is the TransactionManager instance that was used by the first thread.

Fix it by resyncing _ZBigFileH.transaction_manager on every connection reopen.

No new test, as existing tests already cover the problem when run with ZODB >= 5.5.0.
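A minimal sketch of the fix's idea, with hypothetical structure and method names (on_connection_open is illustrative, not the actual wendelin.core hook); registerSynch/unregisterSynch are real transaction-package API:

    class _ZBigFileH(object):
        # sketch: keep .transaction_manager in sync with the Connection
        def __init__(self, zconn):
            self.zconn = zconn
            self.transaction_manager = None
            self.on_connection_open()          # initial sync

        def on_connection_open(self):
            # on every reopen the connection may come with a new
            # transaction manager -> re-register our synchronizer there
            if self.transaction_manager is not None:
                self.transaction_manager.unregisterSynch(self)
            self.transaction_manager = self.zconn.transaction_manager
            self.transaction_manager.registerSynch(self)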
-
- 24 Oct, 2017 1 commit
-
-
Kirill Smelkov authored
Relicense to GPLv3+ with wide exception for all Free Software / Open Source projects + Business options.

Nexedi stack is licensed under Free Software licenses with various exceptions that cover three business cases:

- Free Software
- Proprietary Software
- Rebranding

As long as one intends to develop Free Software based on Nexedi stack, no license cost is involved. Developing proprietary software based on Nexedi stack may require a proprietary exception license. Rebranding Nexedi stack is prohibited unless a rebranding license is acquired.

Through this licensing approach, Nexedi expects to encourage Free Software development without restrictions and at the same time create a framework for proprietary software to contribute to the long-term sustainability of the Nexedi stack.

Please see https://www.nexedi.com/licensing for details, rationale and options.
-
- 24 Mar, 2017 1 commit
-
-
Kirill Smelkov authored
This reverts commit 9ae42085.

When working with big arrays and accessing/changing them not in tiny bits, ZBlk1 is much slower compared to ZBlk0. See details here:

    https://www.nexedi.com/blog/NXD-Document.Blog.Wendelin.Core.Release.0.5.Performance.Tests

and in 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes).

Until we can rely on the database handling both cases automatically, projects which care about changing arrays in small parts can manually set WENDELIN_CORE_ZBLK_FMT=ZBlk1, or under ERP5/SlapOS use this setting: slapos@2558aadd

And let's have it performant in the "big data" case by default.

/cc @yusei, @klaus, @Tyagov
/reviewed-on !5
-
- 14 Aug, 2016 1 commit
-
-
Kirill Smelkov authored
13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) used a BTree to organize ZBlk1 block's chunks, and for loadblkdata() added "TODO we are missing to free internal BTree structures on data load".

#3, besides other things, showed that even when we deactivate ZData objects, we are still keeping them as ghosts occupying memory, and the same for IOBucket objects.

This all happens because there is no proper way to deactivate a whole btree, including its internal bucket objects. And since internal buckets are not deactivated, they stay in the pickle cache and thus hold a reference to ZData objects, and ZData objects in turn, even if explicitly deactivated, stay in memory.

We can fix this all by implementing a whole-btree deactivation procedure. To do so we need to iterate over all btree buckets recursively, but unfortunately there is no BTree API to access/iterate a btree's buckets. We can however still get a reference to the first top-level buckets via gc.get_referents(btree), and then scan the buckets further without hacks.

gc.get_referents(btree) is a hack, but:

- it works in O(1) (we only get pointers from the btree, not scanning all gc-able objects and deducing them);
- it works reliably if we filter out non-interesting objects.

So in the end it works.

Before the patch, loading more and more ZBlk1 data with objgraph instrumentation was showing itself like:

                                            # Nobj       δ
    wendelin.bigfile.file_zodb.ZData         7168    +512
    BTrees.IOBTree.IOBucket                   238     +17
    BTrees.IOBTree.IOBTree                     14      +1

and after this patch we now have:

    BTrees.IOBTree.IOBTree                     14      +1

We cannot remove that "IOBTree +1", since ZBlk1 holds a direct reference to it (via .chunktab) and we have to keep ZBlk1 live with ._v_zfile and ._v_zblk set for invalidation to work. The "+1 IOBTree" is however small - 144 bytes per 2M (= 0.006%) - so we can neglect it the same way we neglect keeping ZBlk1 live for each block.
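A hedged sketch of such a whole-btree deactivation, simplified from what the commit describes (the real helper filters objects more carefully):

    import gc
    from BTrees.IOBTree import IOBTree, IOBucket
    from persistent import Persistent

    def deactivate_btree(btree):
        # gc.get_referents gives us the pointers held directly by the
        # btree/bucket, in O(1) - including internal buckets which the
        # BTree API does not expose.
        for obj in gc.get_referents(btree):
            if isinstance(obj, (IOBTree, IOBucket)):
                deactivate_btree(obj)       # recurse into internal structure
            elif isinstance(obj, Persistent):
                obj._p_deactivate()         # leaf value (e.g. ZData) -> ghost
        btree._p_deactivate()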
-
- 20 Apr, 2016 1 commit
-
-
Kirill Smelkov authored
For ZBlk1 we already compare ZData content to the data already stored in the DB, and do not store it twice if the data is the same. However, ZBlk itself is always marked as changed if the corresponding memory page was dirtied. This results in transactions like:

    Trans #33915309 tid=03b6944919befeee time=2016-04-17 22:01:06.034237 offset=140320105842
      status=' ' user='...' description='...'

      # ... other parts, but no ZData here

      data #2 oid=000000000026fc4c size=79 class=wendelin.bigfile.file_zodb.ZBlk1

where ZBlk1 is committed the same without necessity.

NOTE: we cannot avoid committing ZBlk in all cases, because it is used to signal other DB clients that a ZBlk needs to be invalidated, and this way the associated fileh pages are invalidated too. This cannot work via ZData, because ZData does not have a back-pointer to ZBlk1 or to the corresponding zfile.
-
- 30 Sep, 2015 1 commit
-
-
Kirill Smelkov authored
ZBlk* objects are intermediate ZODB objects between data stored in ZODB and memory pages managed by virtmem. As such, after they do their job - either loading data from the DB to memory, or storing from memory to the DB - there is no need to keep them alive with duplicate content, thus only wasting memory.

ZBlk0 cares about this detail by "deactivating" ._v_blkdata in the loadblkdata() and __getstate__() prologues. ZBlk1 did the same for the load path in its loadblkdata() prologue, but for .__getstate__() it was not directly possible, because for ZBlk1 the state is an IOBTree, not one non-persistent object, and thus it first needs to be processed by ZODB together with its subobjects on its way to storage, and only then can they all be deactivated. So 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) only put a TODO for the memory-page -> DB path about not wasting memory this way.

But the problem is relatively easy to solve:

- we can deactivate ZData objects (leaf objects in the ZBlk1.chunktab btree) by hooking into the ZData.__getstate__() prologue;
- we also need to take care to deactivate right away those chunks which setblkdata() loaded to compare .data and found to be unchanged (see the sketch below).

This way we do not waste memory keeping intermediate ZData objects alive with the same content as the memory page after commit.

/cc @Tyagov
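A sketch of the second bullet's idea, under simplified, hypothetical names (chunktab maps chunk offset -> ZData; the real code differs in details):

    from persistent import Persistent

    class ZData(Persistent):            # sketch: one chunk's payload
        def __init__(self, data):
            self.data = data

    def store_chunk(chunktab, offset, new_data):
        # store one chunk while avoiding both a redundant DB write and a
        # redundant in-RAM copy
        chunk = chunktab.get(offset)
        if chunk is not None and chunk.data == new_data:
            # we loaded the old chunk only to compare - it is unchanged,
            # so ghostify it right away instead of leaving it in RAM
            chunk._p_deactivate()
        else:
            chunktab[offset] = ZData(new_data)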
-
- 28 Sep, 2015 3 commits
-
-
Kirill Smelkov authored
ZBlk1.setblkdata() has logic to detect a CHUNKSIZE change, and if so, recreate the whole chunktab from scratch for simplicity. There was a thinko however: len(chunk.data) == CHUNKSIZE is ok, and actually happens very often when the data does not contain zeros. Because of this off-by-1 comparison mistake, ZData objects were constantly created and thrown out instead of being reused, which led to fast ZODB growth.

Fix it.

/reported-by @Tyagov
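The shape of the bug, in a hedged sketch (the condition is illustrative, not the literal source): chunks written with the current CHUNKSIZE may be exactly CHUNKSIZE bytes long, so the recreate-check must use strict inequality:

    # buggy: a full chunk (very common for data without zeros) falsely
    # triggers "CHUNKSIZE changed" and the whole chunktab is rebuilt
    if len(chunk.data) >= CHUNKSIZE:
        recreate_chunktab()

    # fixed: only chunks larger than the current CHUNKSIZE indicate the
    # table was written with a different chunk size
    if len(chunk.data) > CHUNKSIZE:
        recreate_chunktab()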
-
Kirill Smelkov authored
Our current workloads are mostly a lot of small data changes, and this is what ZBlk1 was created for. Yes, it has larger overhead for accessing data, but we have already painted the way to handle this in 13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) -> move data deduplication/management to the server side. So be it: ZBlk1 is the default for now.

/cc @Tyagov, @klaus
-
Kirill Smelkov authored
-
- 24 Sep, 2015 9 commits
-
-
Kirill Smelkov authored
Our current approach is that each file block is represented by 1 ZODB object, with block size being 2M. Even with trailing-\0 trimming, which halves the overhead on average, DB size grows very fast if we do a lot of small appends or changes. So another format needs to be introduced which has lower overhead for storing small changes.

In general, to represent a BigFile as ZODB objects, each file block could be represented separately either as

1) one ZODB object            (ZBlk0 - this is what we already have), or
2) a group of ZODB objects    (ZBlk1 - this is what we introduce)

with a top-level BTree directory #blk -> object(s) representing the block.

For "1" we have:

- low-overhead access time (only 1 object loaded from the DB), but
- high overhead in terms of ZODB size (with FileStorage / ZEO, every change to a block causes it to be written into the DB in full again).

For "2" we have:

- low overhead in terms of ZODB size (only part of a block is overwritten in the DB on a single change), but
- high overhead in terms of access time (several objects need to be loaded for 1 block).

In general it is not possible to have low overhead for both i) access time and ii) DB size with an approach where block object representation / management is done on the *client* side. On the other hand, if object management is moved to the DB *server* side, it is possible to deduplicate objects there, and this way have low overhead for both access time and DB size, with the client storing just 1 object per file block. This will be our future approach after we teach NEO about object deduplication.

~~~~

As shown above, it is not possible to perform optimally on the client side. Thus ZBlk1 should be only an intermediate solution until we move data management to the DB server side, with the main criterion for ZBlk1 being to keep it simple.

In this patch a simple scheme is used, where every block is divided into chunks organized via a BTree. When a block part changes, only the corresponding chunk is updated. Chunk size is chosen to be 4K, which creates ~512 fanout for a 2M block. (A sketch of the scheme follows below.)

DB size after tests changes as follows:

              bigfile     bigarray
    ZBlk0       24K        6200K
    ZBlk1       36K          36K

(the slight size increase for bigfile tests is because of btree structure overhead)

Time to run tests stays approximately the same.

/cc @Tyagov, @klaus
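A hedged sketch of such a chunked scheme (simplified, hypothetical helper; the real ZBlk1 additionally skips all-zero chunks and handles deactivation):

    from BTrees.IOBTree import IOBTree
    from persistent import Persistent

    CHUNKSIZE = 4096          # 4K chunks -> ~512 fanout for a 2M block

    class ZData(Persistent):  # sketch: one chunk's payload
        def __init__(self, data):
            self.data = data

    def setblk_chunked(chunktab, blkdata):
        # write a block into chunktab (IOBTree: offset -> ZData),
        # touching only chunks whose content actually changed
        for start in range(0, len(blkdata), CHUNKSIZE):
            piece = blkdata[start:start+CHUNKSIZE]
            chunk = chunktab.get(start)
            if chunk is not None and chunk.data == piece:
                continue                       # unchanged -> nothing stored anew
            chunktab[start] = ZData(piece)     # only this ~4K object is rewritten

With this, a 1-byte change inside a 2M block rewrites one ~4K ZData object plus a little btree bookkeeping, instead of the whole 2M block.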
-
Kirill Smelkov authored
- the current ZBlk becomes format 0;
- the write format can be selected via the WENDELIN_CORE_ZBLK_FMT env variable;
- upon writing a block we always make sure we write it in the current write format - so if a block was previously written in one format, it can be changed on the next write;
- tox is prepared to test all write formats (so far only ZBlk0 is there).

The reason is: in the next patch we'll introduce another format for blocks which is optimized for small changes.
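A hedged sketch of how such format selection can look (the registry names are hypothetical; only the WENDELIN_CORE_ZBLK_FMT variable itself comes from the commit):

    import os

    # sketch: registry of supported block formats
    ZBlk_fmt_registry = {
        'ZBlk0': ZBlk0,     # one object per block
        # 'ZBlk1' is added by the next patch
    }

    def current_write_format():
        fmt = os.environ.get('WENDELIN_CORE_ZBLK_FMT', 'ZBlk0')
        if fmt not in ZBlk_fmt_registry:
            raise RuntimeError('unknown ZBlk format %r' % fmt)
        return ZBlk_fmt_registry[fmt]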
-
Kirill Smelkov authored
If we aim to have several kinds of ZBlk, the functionality to invalidate a block and bind it to a zfile is common and thus should be shared. If we introduce a base class, it also makes sense to document there - in one place - what .loadblkdata() and .setblkdata() should do.
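A hedged sketch of the shape of such a base class (simplified; the docstrings paraphrase the commit's intent, and the actual interface may differ):

    from persistent import Persistent

    class ZBlkBase(Persistent):
        # sketch: common base for all ZBlk formats
        _v_zfile = None   # ZBigFile we belong to
        _v_blk   = None   # block number inside the file

        def bindzfile(self, zfile, blk):
            # shared: (re)bind this block to its zfile, so that on
            # invalidation we can find which fileh pages to invalidate
            self._v_zfile = zfile
            self._v_blk   = blk

        def loadblkdata(self):
            """Load block data from the DB and return it, leaving no
            duplicate content cached on the object afterwards."""
            raise NotImplementedError

        def setblkdata(self, buf):
            """Set block data to be stored into the DB on commit."""
            raise NotImplementedError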
-
Kirill Smelkov authored
- we have logic to init ._v_zfile and ._v_blk there;
- the same for ._v_blkdata - the logic to init it belongs there too;
+ it is better to set variables right at instance creation, not hoping "it will be set outside by master".

NOTE ._v_blkdata = None means the block was not yet loaded; this generally fits into the logic of how ZBlk operates, and thus the change is ok.
-
Kirill Smelkov authored
A lot of times, data in blocks comes shorter than the block size and the rest of the memory page is zeros (because the page was pre-filled with zeros by the OS when it was allocated). Do a simple heuristic: trim those trailing zeros and do not store them into the DB.

With this change, the size of the DB created by running the bigfile and bigarray tests changes as follows:

            bigfile     bigarray
    old       145M         35M
    new        24K          6M

Trimming trailing zeros is currently done with str.rstrip('\0'), which creates a copy. When/if needed, this could be optimized to work in-place.
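A minimal sketch of the store/load symmetry this implies (the rstrip call is from the commit; the re-padding helper and names are illustrative):

    BLKSIZE = 2*1024*1024   # 2M file blocks

    def trim_for_store(blkdata):
        # trailing zeros come from OS-zeroed pages - no need to store them
        return blkdata.rstrip(b'\0')            # NOTE makes a copy

    def pad_on_load(stored, blksize=BLKSIZE):
        # a loaded block must be whole again - re-append the trimmed zeros
        return stored + b'\0' * (blksize - len(stored))

    data = b'abc' + b'\0' * 100
    assert pad_on_load(trim_for_store(data), blksize=len(data)) == data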
-
Kirill Smelkov authored
- to keep things uniform with the counterpart .loadblkdata();
- so that master does not mess with ZBlk internals and works only through the interface - this way it will be possible to use several kinds of ZBlk.
-
Kirill Smelkov authored
All those functions move data between the DB and ._v_blkdata, and only master then connects the data to the memory page. Make that fact explicit.
-
Kirill Smelkov authored
Again, a leftover from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
Kirill Smelkov authored
The comment was a leftover from 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 11 Sep, 2015 1 commit
-
-
Kirill Smelkov authored
It was already done from the beginning in 4174b84a (bigfile: BigFile backend to store data in ZODB).
-
- 18 Aug, 2015 2 commits
-
-
Kirill Smelkov authored
When there is a conflict (on any object, but on ZBlk in particular), ZODB machinery calls its ._p_invalidate() twice:

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
      tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
      return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
      self._commitResources()
    File ".../transaction/_transaction.py", line 414, in _commitResources
      self._cleanup(L)
    File ".../transaction/_transaction.py", line 426, in _cleanup
      rm.abort(self)
    File ".../ZODB/Connection.py", line 436, in abort
      self._abort()
    File ".../ZODB/Connection.py", line 479, in _abort
      self._cache.invalidate(oid)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
      traceback.print_stack()

and

    File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
      tm2.commit()    # this should raise ConflictError and stay at 11 state
    File ".../transaction/_manager.py", line 111, in commit
      return self.get().commit()
    File ".../transaction/_transaction.py", line 271, in commit
      self._commitResources()
    File ".../transaction/_transaction.py", line 416, in _commitResources
      self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../transaction/weakset.py", line 59, in map
      f(elt)
    File ".../transaction/_transaction.py", line 416, in <lambda>
      self._synchronizers.map(lambda s: s.afterCompletion(self))
    File ".../ZODB/Connection.py", line 831, in _storage_sync
      self._flush_invalidations()
    File ".../ZODB/Connection.py", line 539, in _flush_invalidations
      self._cache.invalidate(invalidated)
    File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
      traceback.print_stack()

i.e. the first invalidation is done by commit cleanup:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L414
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L479

and then Connection.afterCompletion() flushes invalidations again:

    https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L416
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L833
    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L539

If there is no conflict, no ConflictError is raised and thus no Transaction._cleanup() is done in its ._commitResources() -> invalidation is called only once. But with a ConflictError it is called twice.

Adjust ZBlk._p_invalidate() not to delve into real invalidation more than once - else we would fail, as ZBlk._v_zfile becomes unbound after the invalidation is done the first time.
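A hedged sketch of such an idempotency guard (the actual wendelin.core code may structure it differently; `_p_changed is None` is the documented way to detect ghost state):

    from persistent import Persistent

    class ZBlk(Persistent):
        _v_zfile = None
        _v_blk   = None

        def _p_invalidate(self):
            # if we are already a ghost, the first invalidation already ran
            # and ._v_zfile was unbound - do not redo the real work
            if self._p_changed is None:          # None <=> ghost state
                return
            # capture before ghostifying: volatile attrs are cleared then
            zfile, blk = self._v_zfile, self._v_blk
            Persistent._p_invalidate(self)
            if zfile is not None:
                zfile.invalidateblk(blk)         # hypothetical propagation hook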
-
Kirill Smelkov authored
LivePersistent can go to ghost state, because invalidations cannot be ignored, i.e. they indicate the object has been changed externally.

This does not break our logic for ZBigFile and ZBigArray, as invalidations can happen only at transaction boundaries, so during the course of a transaction those classes are guaranteed to stay up-to-date and thus not lose ._v_file and ._v_fileh (which is the reason they inherit from LivePersistent).

It is ok to lose ._v_file and ._v_fileh at a transaction boundary and become a ghost - those objects will be recreated upon going back up-to-date, and will stay alive again during the whole transaction window. We care only not to lose e.g. ._v_fileh inside a transaction, because losing that data manager - and thus the data it manages - inside a transaction can break synchronization logic and forget changed-through-mmap data.
-
- 17 Aug, 2015 4 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
If we do, ZBigFileH objects just don't get garbage collected, and sooner or later this leaks enough file descriptors so that the main zope loop breaks:

    Traceback (most recent call last):
      File ".../bin/runzope", line 194, in <module>
        sys.exit(Zope2.Startup.run.run())
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 26, in run
        starter.run()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 105, in run
        Lifetime.loop()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 43, in loop
        lifetime_loop()
      File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 53, in lifetime_loop
        asyncore.poll(timeout, map)
      File ".../parts/python2.7/lib/python2.7/asyncore.py", line 145, in poll
        r, w, e = select.select(r, w, e, timeout)
    ValueError: filedescriptor out of range in select()

    $ lsof -p <runzope-pid> | grep ramh | wc -l
    950

So, continuing 64d1f40b (bigfile/zodb: Monkey-patch for ZODB.Connection to support callback on .open()), let's change the implementation to use a WeakSet for the callbacks list.

Yes, because weakrefs to bound methods die immediately, we give up the flexibility to subscribe arbitrary callbacks. If that becomes an issue, something like WeakMethod from py3, or recipes from the net for how to do it, are there.
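A hedged sketch of the WeakSet-based registry (names are hypothetical; note the bound-method caveat the commit mentions):

    import weakref

    class ConnectionOpenCallbacks(object):
        # sketch: subscribers (e.g. _ZBigFileH) are held weakly, so they
        # can be garbage collected freely
        def __init__(self):
            self._subscribers = weakref.WeakSet()

        def subscribe(self, obj):
            # NOTE we keep the *object* weakly and call a well-known
            # method on it, because a weakref to a bound method dies
            # immediately
            self._subscribers.add(obj)

        def fire(self):
            for obj in self._subscribers:
                obj.on_connection_open()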
-
Kirill Smelkov authored
ZODB 3.10.4 was released almost 4 years ago, and contains a significant change to how ghost objects coming from the DB are initially set up.
-
Kirill Smelkov authored
Continuing the theme from the previous patch, here is propagation of invalidation messages from ZODB to BigFileH memory.

The use case here is that e.g. one fileh mapping was created in one connection, another in another connection, and after doing changes in the second connection and committing there, the first fileh has to invalidate the appropriate already-loaded pages, so that its next transaction won't work with stale data.

To do it, we hook into ZBlk._p_invalidate() and propagate the invalidation message to ZBigFile, which then notifies all ZBigFileH opened through it to invalidate a page.

ZBlk -> ZBigFile lookup is done without storing a back-pointer in ZODB - instead, every time ZBigFile touches a ZBlk object (and thus potentially does a GHOST -> Live transition on it), we (re-)bind it back to the ZBigFile. Since ZBigFile is the only class that works with ZBlk objects, it is safe to do so.

For ZBigFile to notify all ZBigFileH opened through it, a weakset is introduced to track them.

Otherwise, the real page invalidation work is done by virtmem (see the previous patch).
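A hedged sketch of the fan-out described above (simplified, hypothetical names; weakref.WeakSet stands in for the commit's weakset):

    import weakref
    from persistent import Persistent

    class ZBigFile(Persistent):
        # sketch: _v_filehs is volatile - recreated when the object loads
        def fileh_open(self):
            if not hasattr(self, '_v_filehs'):
                self._v_filehs = weakref.WeakSet()
            fileh = BigFileH(self)                 # hypothetical handle class
            self._v_filehs.add(fileh)
            return fileh

        def invalidateblk(self, blk):
            # called from ZBlk._p_invalidate(): fan out to every opened
            # fileh; the real page invalidation is then done by virtmem
            for fileh in getattr(self, '_v_filehs', ()):
                fileh.invalidate_page(blk)         # hypothetical entry point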
-
- 12 Aug, 2015 3 commits
-
-
Kirill Smelkov authored
Intro
-----

ZODB maintains a pool of opened-to-DB connections. For each request Zope opens 1 connection and, after request handling is done, returns the connection back to the ZODB pool (via Connection.close()). The same connection will be opened again for handling some future next request at some future time. This next open can happen in a different-from-first request worker thread.

TransactionManager (as accessed by transaction.{get,commit,abort,...}) is thread-local, that is, e.g. transaction.get() returns different transactions for threads T1 and T2.

When _ZBigFileH hooks into txn_manager to get a chance to run its .beforeCompletion() when transaction.commit() is run, it hooks into the _current_ _thread_ transaction manager.

Without unhooking on connection close, and in circumstances where the connection migrates to a different thread, this can lead to dissynchronization between the ZBigFileH managing fileh pages and the Connection with ZODB objects. And even to data corruption, e.g.

    T1                  T2

    open
    zarray[0] = 11
    commit
    close

                        open    # opens connection as closed in T1
    open
                        zarray[0] = 21
    commit
                        abort
    close
                        close

Here zarray[0]=21 _will_ be committed by T1 as part of the T1 transaction - because when T1 does commit, .beforeCompletion() for zarray is invoked, sees there is dirty data, propagates the changes to ZODB objects in the connection for T2, joins the connection for T2 into the txn for T1, and then the txn for T1, when doing two-phase commit, stores the modified objects to the DB -> oops.

----------------------------------------

To prevent such dissynchronization, _ZBigFileH needs to be a DataManager which works in sync with the connection it was initially created under: on connection close, unregister from transaction_manager, and on connection open, register to the transaction manager in the current, possibly different, thread context. Then there won't be incorrect beforeCompletion() notification and corruption.

This issue, besides possible data corruption, was probably also exposing itself via the following ways we've seen in production (everywhere the connection was migrated from T1 to T2):

1. Exception ZODB.POSException.ConnectionStateError: ConnectionStateError('Cannot close a connection joined to a transaction',) in <bound method Cleanup.__del__ of <App.ZApplication.Cleanup instance at 0x7f10f4bab050>> ignored

    T1                  T2

    modify zarray
    commit/abort        # does not join zarray to T2.txn,
                        # because .beforeCompletion() is
                        # registered in T1.txn_manager

    commit              # T1 invokes .beforeCompletion()
    ...                 # beforeCompletion() joins ZBigFileH and zarray._p_jar (=T2.conn) to T1.txn
    ...                 # commit is going on in progress
    ...
                close   # T2 thinks request handling is done and
    ...                 # closes connection. But T2.conn is
    ...                 # still joined to T1.txn

2.  Traceback (most recent call last):
      File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
        def storeblk(self, blk, buf):   return self.zself.storeblk(blk, buf)
      File ".../wendelin/bigfile/file_zodb.py", line 220, in storeblk
        zblk._v_blkdata = bytes(buf)    # FIXME does memcpy
      File ".../ZODB/Connection.py", line 857, in setstate
        raise ConnectionStateError(msg)
    ZODB.POSException.ConnectionStateError: Shouldn't load state for 0x1f23a5 when the connection is closed

   Similar to "1", but close in T2 happens sooner, so that when T1 does the commit and tries to store an object to the database, the Connection refuses to do the store:

    T1                  T2

    modify zarray
    commit/abort

    commit
    ...
                        close
    ...
    ...
    . obj.store()
    ...
    ...

3.  Traceback (most recent call last):
      File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
        def storeblk(self, blk, buf):   return self.zself.storeblk(blk, buf)
      File ".../wendelin/bigfile/file_zodb.py", line 221, in storeblk
        zblk._p_changed = True          # if zblk was already in DB: _p_state -> CHANGED
      File ".../ZODB/Connection.py", line 979, in register
        self._register(obj)
      File ".../ZODB/Connection.py", line 989, in _register
        self.transaction_manager.get().join(self)
      File ".../transaction/_transaction.py", line 220, in join
        Status.ACTIVE, Status.DOOMED, self.status))
    ValueError: expected txn status 'Active' or 'Doomed', but it's 'Committing'

   ( storeblk() does zblk._p_changed -> Connection.register(zblk) -> txn.join(), but txn is already committing. IOW storeblk() was invoked with txn.state being already 'Committing' )

    T1                  T2

                        modify obj      # this way T2.conn joins T2.txn
    modify zarray

    commit              # T1 invokes .beforeCompletion()
    ...                 # beforeCompletion() joins only _ZBigFileH to T1.txn
    ...                 # (because T2.conn is already marked as joined)
    ...
    ...                 commit/abort    # T2 does commit/abort - this touches only T2.conn, not ZBigFileH
    ...                 # in particular T2.conn is now reset to be not joined
    ...
    . tpc_begin         # actual active commit phase of T1 was somehow delayed a bit
    . tpc_commit        # when changes from RAM propagate to ZODB objects, the associated
    . storeblk          # connection (= T2.conn !) is notified again,
    . zblk = ...        # wants to join the txn for what it thinks is its transaction_manager,
                        # which, when called from under T1, returns the *T1* transaction manager, for
                        # which T1.txn is already in state='Committing'

4. Empty transaction committed to NEO

   ( different from doing just transaction.commit() without changing any data - a connection was joined to the txn, but the set of modified objects turned out to be empty )

   This is probably a race in Connection._register() when both T1 and T2 go to it at the same time:

    https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L988

    def _register(self, obj=None):
        if self._needs_to_join:
            self.transaction_manager.get().join(self)
            self._needs_to_join = False

    T1                  T2

    modify zarray
    commit
    ...
    .beforeCompletion
    .                   modify obj
    .                   if T2.conn.needs_join
    if T2.conn.needs_join               # race here
    .                   T2.conn.join(T2.txn)
    T2.conn.join(T1.txn)                # as a result T2.conn joins both T1.txn and T2.txn
    .
    commit finishes     # T2.conn registered-for-commit object list is now empty

                        commit
                        tpc_begin
                        storage.tpc_begin
                        tpc_commit      # no object stored, because for-commit-list is empty

/cc @jm, @klaus, @Tyagov, @vpelletier
-
Kirill Smelkov authored
ZODB.Connection has support for calling callbacks on .close(), but not on .open(). We'll need to hook into both Connection open/close processes in the next patch (for _ZBigFileH to stay in sync with Connection state).

NOTE: on-open callbacks are set up once and fire many times, on every open; on-close callbacks are set up once and fire only once, on the next close. The reason for this is that on-close callbacks are useful for scheduling current connection cleanup, after its processing is done. But an on-open callback is for future connection usage, which is generally not related to the current connection.

/cc @jm, @vpelletier
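A hedged sketch of such a monkey-patch (the callback-list attribute and onOpenCallback name are illustrative; Connection.open with this signature is real ZODB 3.x API, and onCloseCallback is its existing counterpart):

    from ZODB.Connection import Connection

    _orig_open = Connection.open

    def _open(self, transaction_manager=None, delegate=True):
        _orig_open(self, transaction_manager, delegate)
        # fire subscribers on *every* open (unlike on-close callbacks,
        # which ZODB clears after each close)
        for cb in getattr(self, '_onOpenCallbacks', ()):
            cb()

    def onOpenCallback(self, f):
        if not hasattr(self, '_onOpenCallbacks'):
            self._onOpenCallbacks = []
        self._onOpenCallbacks.append(f)

    Connection.open = _open
    Connection.onOpenCallback = onOpenCallback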
-
Kirill Smelkov authored
-
- 03 Apr, 2015 1 commit
-
-
Kirill Smelkov authored
This adds transactionality, and with e.g. NEO [1] allows to distribute objects to nodes in a cluster.

We hook into the ZODB two-phase commit process as a separate data manager, and synchronize changes from memory to changes to objects only at that time.

The alternative would be to get notified on every page change, and mark the appropriate object as dirty right at that moment. But I wanted to stay close to the filesystem design (we don't get a notification for every file change from the kernel) - that's why it is done the first way.

[1] http://www.neoppod.org/
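A hedged sketch of how a separate data manager plugs into two-phase commit (the method set is the transaction package's IDataManager; the dirty-page iteration and helper names are illustrative, not the actual wendelin.core code):

    import transaction

    class BigFileDataManager(object):
        # sketch: pushes dirty memory pages into ZODB objects only at
        # commit time, instead of on every page change
        def __init__(self, fileh):
            self.fileh = fileh
            self.transaction_manager = transaction.manager

        # --- IDataManager (two-phase commit) ---
        def tpc_begin(self, txn):
            # propagate dirtied pages -> ZBlk objects so ZODB commits them
            for blk, page in self.fileh.dirty_pages():     # hypothetical
                self.fileh.zfile.storeblk(blk, page)

        def commit(self, txn):      pass
        def tpc_vote(self, txn):    pass
        def tpc_finish(self, txn):  self.fileh.mark_clean()     # hypothetical
        def tpc_abort(self, txn):   pass
        def abort(self, txn):       self.fileh.discard_dirty()  # hypothetical
        def sortKey(self):          return 'wendelin.core:%d' % id(self)

    # usage: join the current transaction when the fileh first gets dirty
    # transaction.get().join(BigFileDataManager(fileh))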
-