- 26 Jun, 2014 1 commit
-
Julien Muchembled authored
-
- 24 Jun, 2014 2 commits
-
Julien Muchembled authored
-
Julien Muchembled authored
- Fix the case of empty values: there's no reason to do anything special for them.
- Do not warn about multiple levels of indirection to get a data serial. With the current structure of tables, this does not cause a significant performance issue as it did before.
-
- 20 Jun, 2014 3 commits
-
Julien Muchembled authored
Export:
- Remove leftover warning about a bug that was fixed in commit e76af297.
- In neomigrate script, open NEO storage read-only.
- IStorageIteration is already implemented.

Import:
- Review comments.
- In neomigrate script, warn that IStorageRestoreable is not implemented.
- Do not call the 'close' method on the source iterator. BaseStorage does not do it and this is not part of the ZODB API. In the case of FileStorage, resources are freed automatically during garbage collection.
-
Julien Muchembled authored
This is more realistic than testing with a single partition, in particular when there are more storage nodes than replicas.
-
Julien Muchembled authored
-
- 19 Jun, 2014 4 commits
-
Julien Muchembled authored
There is simply no way to guess data serials, so instead of producing random values, the only solution is to retrieve them from storages.

There are still differences for data serials between FileStorage and NEO:
- NEO always resolves to the original serial, to avoid any indirection (which slightly speeds up undo at the expense of more complex pack code).
- NEO does not make any difference between object deletion and creation undone (data serial always null in storage).

It has to be decided whether the NEO implementation should be changed in these respects. Apart from that, converting a database back from NEO should be fixed.

testExportFileStorageBug passes and there was in fact no FileStorage bug.

Another change is that the iterator does not trash the client cache anymore.
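As a hedged illustration of the indirection being avoided (function and column names here are hypothetical, not NEO's actual schema or API):

    # Sketch: resolving an undo's data serial to the original record.
    # FileStorage keeps an indirection (the undo record points at the
    # undone record, which may itself point further back); NEO resolves
    # the chain once, at write time, so readers never have to follow it.
    def resolve_original_serial(get_data_tid, serial):
        while True:
            previous = get_data_tid(serial)  # None if 'serial' holds the data
            if previous is None:
                return serial
            serial = previous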
-
Julien Muchembled authored
-
Julien Muchembled authored
- The _[gs]etPackTID accessors' implementation is not backend-specific, so move them to the superclass.
- The _getObjectLength method is useless since data_tid always contains the wanted information, regardless of the contents of the value_tid column.
-
Julien Muchembled authored
-
- 05 Jun, 2014 1 commit
-
Julien Muchembled authored
Sometimes, the tested cluster reacts so quickly that a new primary master arises before we can test that, at some point, there is no primary master.
-
- 04 Jun, 2014 2 commits
-
Julien Muchembled authored
This fixes:

Traceback (most recent call last):
  File "neo/tests/functional/testMaster.py", line 50, in testStoppingSecondaryMaster
    self.neo.expectDead(master)
  File "neo/tests/functional/__init__.py", line 615, in expectDead
    self.expectCondition(callback, *args, **kw)
  File "neo/tests/functional/__init__.py", line 509, in expectCondition
    'History: %s' % opaque_history)
AssertionError: Timeout while expecting condition. History: [False, False, False, False, False, False, False, False, False, False, False]
-
Julien Muchembled authored
See commit d9ab77b8
-
- 03 Jun, 2014 3 commits
-
Julien Muchembled authored
One entry should have been removed before v1.1
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 29 May, 2014 1 commit
-
Julien Muchembled authored
-
- 08 Jan, 2014 1 commit
-
Julien Muchembled authored
-
- 07 Jan, 2014 10 commits
-
Julien Muchembled authored
-
Julien Muchembled authored
If anything goes wrong after a transaction is locked and before the end of onTransactionCommitted, the recovery phase should be run again, so that the master gets the correct last tid.

Following patch by Vincent is an attempt to fix this:

--- a/neo/master/app.py
+++ b/neo/master/app.py
@@ -329,8 +329,8 @@ def playPrimaryRole(self):
         # recover the cluster status at startup
         try:
-            self.runManager(RecoveryManager)
             while True:
+                self.runManager(RecoveryManager)
                 self.runManager(VerificationManager)
                 try:
                     if self.backup_tid:
@@ -338,10 +338,6 @@ def playPrimaryRole(self):
                             raise RuntimeError("No upstream cluster to backup"
                                 " defined in configuration")
                         self.backup_app.provideService()
-                        # Reset connection with storages (and go through a
-                        # recovery phase) when leaving backup mode in order
-                        # to get correct last oid/tid.
-                        self.runManager(RecoveryManager)
                         continue
                     self.provideService()
                 except OperationFailure:
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This should fix the following random errors:

>   File "neo/lib/event.py", line 77, in unregister
>     self.epoll.unregister(fd)
> IOError: [Errno 2] No such file or directory

>   File "neo/tests/threaded/test.py", line 670, in testClientReconnection
>     c, = cluster.storage.nm.getClientList()
> ValueError: need more than 0 values to unpack
-
Julien Muchembled authored
-
Julien Muchembled authored
This implements proper cache invalidation. The connection to the master is also made optional for loading from storage nodes, as long as the partition table contains up-to-date data (which is anyway unlikely to change when there is no master).
-
Julien Muchembled authored
This is enough because on disconnection, the master already aborts all transactions on its side.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 04 Jan, 2014 3 commits
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 17 Dec, 2013 9 commits
-
Julien Muchembled authored
-
Julien Muchembled authored
This may help the client to recover after an assertion failure. For example, this should fix the following bug:

ERROR ZODB.Connection Couldn't load state for 0x13b6
Traceback (most recent call last):
  File "ZODB/Connection.py", line 851, in setstate
    self._setstate(obj)
  File "ZODB/Connection.py", line 901, in _setstate
    p, serial = self._storage.load(obj._p_oid, '')
  File "neo/client/Storage.py", line 85, in load
    return self.app.load(oid)[:2]
  File "neo/client/app.py", line 435, in load
    result = self._loadFromStorage(oid, tid, before_tid)
  File "neo/client/app.py", line 450, in _loadFromStorage
    for node, conn in self.cp.iterateForObject(oid, readable=True):
  File "neo/client/pool.py", line 130, in iterateForObject
    conn = getConnForNode(node)
  File "neo/client/pool.py", line 155, in getConnForNode
    conn = self._initNodeConnection(node)
  File "neo/client/pool.py", line 61, in _initNodeConnection
    connector=app.connector_handler(), dispatcher=app.dispatcher)
  File "neo/lib/connection.py", line 749, in __init__
    super(MTClientConnection, self).__init__(*args, **kwargs)
  File "neo/lib/connection.py", line 685, in __init__
    node.setConnection(self)
  File "neo/lib/node.py", line 119, in setConnection
    attributeTracker.whoSet(self, '_connection')
AssertionError
-
Julien Muchembled authored
-
Vincent Pelletier authored
Also saves one local variable assignment in the "hit" code path.
-
Vincent Pelletier authored
Use a non-intrusive code profiling tool with NEO instead, like pprofile.
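As a minimal sketch of what that can look like (written from memory of pprofile's documented API; the workload function is a placeholder, so double-check against pprofile's README):

    import pprofile

    prof = pprofile.Profile()  # deterministic, line-granularity profiler
    with prof():
        run_workload()  # placeholder for the code being profiled
    prof.print_stats()  # annotated per-line timings on stdout

pprofile can also be used without touching the code at all, by running the target script through the 'pprofile' command.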
-
Vincent Pelletier authored
-
Vincent Pelletier authored
Also, drop extraneous parentheses in another set's creation.
-
Vincent Pelletier authored
Nodes are likely to be running, so filtering before sorting is unlikely to save time. The caller is likely to stop iterating after the first yielded connection (the "load" case), so move filtering inside the loop. Also, document non-straightforward code.
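A hedged sketch of the idea (illustrative only; names do not match NEO's actual ConnectionPool code):

    # Sort eagerly, but filter lazily inside the loop: most callers
    # stop after the first yielded connection, so later nodes are
    # often never even looked at.
    def iterateForObject(nodes, getConnForNode):
        nodes.sort(key=lambda node: node.getUUID())
        for node in nodes:
            if not node.isRunning():
                continue  # filter as late as possible
            conn = getConnForNode(node)
            if conn is not None:
                yield node, conn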
-
Vincent Pelletier authored
The connection is more often established than not, so do a first lookup without locking, and only acquire the lock on a miss. Then do a second lookup in case another thread established the connection in the meantime, and connect if it still misses.
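A minimal sketch of this double-checked pattern (hypothetical names, not NEO's actual pool code):

    import threading

    class ConnectionPool(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._connections = {}

        def getConnForNode(self, node):
            # Fast path: lock-free lookup, since the connection is
            # more often established than not.
            conn = self._connections.get(node)
            if conn is None:
                with self._lock:
                    # Second lookup: another thread may have connected
                    # while we were waiting for the lock.
                    conn = self._connections.get(node)
                    if conn is None:
                        conn = self._initNodeConnection(node)
                        self._connections[node] = conn
            return conn

        def _initNodeConnection(self, node):
            # Placeholder for the actual connection setup.
            raise NotImplementedError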
-