1. 26 Apr, 2019 1 commit
  2. 16 Apr, 2019 5 commits
  3. 05 Apr, 2019 3 commits
  4. 01 Apr, 2019 1 commit
  5. 21 Mar, 2019 2 commits
  6. 16 Mar, 2019 1 commit
    • importer: fix possible data loss on writeback · e387ad59
      If the source DB is lost during the import and then restored from a backup,
      all new transactions have to be written back again on resume. This is the
      most common case in which the writeback hits the maximum number of
      transactions to process per partition at each iteration; the previous code
      was buggy in that it could skip transactions (see the sketch after this
      entry).
      Julien Muchembled committed
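      A minimal sketch of the resume rule this fix implies (all names and the
      tiny driver below are hypothetical, not NEO's importer code): when a
      writeback pass is capped per iteration, the resume point must be the tid
      of the last transaction actually written, never a precomputed upper bound.

        source = {1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e'}  # tid -> data
        dest = {}
        MAX_TXN = 2  # assumed per-iteration cap, tiny for the demo

        def writeback_step(last_tid):
            """Copy at most MAX_TXN transactions with tid > last_tid and
            return the tid of the last one actually written: resuming from
            that value can never skip a transaction."""
            for n, tid in enumerate(sorted(t for t in source if t > last_tid)):
                if n == MAX_TXN:
                    break
                dest[tid] = source[tid]
                last_tid = tid
            return last_tid

        resume = 0
        while resume < max(source):  # keep iterating until caught up
            resume = writeback_step(resume)
        assert dest == source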
  7. 11 Mar, 2019 3 commits
  8. 26 Feb, 2019 2 commits
    • qa: new tool to stress-test NEO · 38e98a12
      Example output:
      
          stress: yes (toggle with F1)
          cluster state: RUNNING
          last oid: 0x44c0
          last tid: 0x3cdee272ef19355 (2019-02-26 15:35:11.002419)
          clients: 2308, 2311, 2302, 2173, 2226, 2215, 2306, 2255, 2314, 2356 (+48)
                  8m53.988s (42.633861/s)
          pt id: 4107
              RRRDDRRR
           0: OU......
           1: ..UO....
           2: ....OU..
           3: ......UU
           4: OU......
           5: ..UO....
           6: ....OU..
           7: ......UU
           8: OU......
           9: ..UO....
          10: ....OU..
          11: ......UU
          12: OU......
          13: ..UO....
          14: ....OU..
          15: ......UU
          16: OU......
          17: ..UO....
          18: ....OU..
          19: ......UU
          20: OU......
          21: ..UO....
          22: ....OU..
          23: ......UU
      Julien Muchembled committed
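      A hedged reading of the partition table dump above, assuming NEO's usual
      cell-state letters (U = up-to-date, O = out-of-date, '.' = no cell
      assigned), with one column per storage node and the header row taken to
      be per-node states. A quick health check one can run on such a dump:

        # 24 partitions over 8 nodes, repeating the 4-row pattern above
        rows = ["OU......", "..UO....", "....OU..", "......UU"] * 6

        def unreadable_partitions(rows):
            """Partitions without any up-to-date replica cannot serve reads."""
            return [i for i, row in enumerate(rows) if 'U' not in row]

        assert unreadable_partitions(rows) == []  # every partition readable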
    • master: fix typo in comment · ce25e429
      Julien Muchembled committed
  9. 25 Feb, 2019 1 commit
  10. 31 Dec, 2018 7 commits
  11. 05 Dec, 2018 1 commit
  12. 21 Nov, 2018 4 commits
    • fixup! client: discard late answers to lockless writes · 8ef1ddba
      Since commit 50e7fe52, some code can be simplified.
      Julien Muchembled committed
    • client: fix race condition between Storage.load() and invalidations · a2e278d5
      This fixes a bug that could manifest as follows:
      
        Traceback (most recent call last):
          File "neo/client/app.py", line 432, in load
            self._cache.store(oid, data, tid, next_tid)
          File "neo/client/cache.py", line 223, in store
            assert item.tid == tid, (item, tid)
        AssertionError: (<CacheItem oid='\x00\x00\x00\x00\x00\x00\x00\x01' tid='\x03\xcb\xc6\xca\xfd\xc7\xda\xee' next_tid='\x03\xcb\xc6\xca\xfd\xd8\t\x88' data='...' counter=1 level=1 expire=10000 prev=<...> next=<...>>, '\x03\xcb\xc6\xca\xfd\xd8\t\x88')
      
      The big changes in the threaded test framework are required because we
      need to reproduce a race condition between client threads, which conflicts
      with the serialization of epoll events (deadlock). A sketch of the race
      follows this entry.
      Julien Muchembled committed
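      A toy sketch of the window involved (hypothetical classes, not NEO's
      cache, and not necessarily how the commit resolves it): the answer
      carrying (data, tid, next_tid) can cross an invalidation for the same
      oid, so by the time load() stores into the cache, the entry there is
      already newer and a strict tid assertion fires. The invariant any fix
      must maintain is that store() tolerates having been overtaken:

        import threading

        class Cache(object):
            """Each entry is (tid, next_tid, data); next_tid=None if current."""
            def __init__(self):
                self._lock = threading.Lock()
                self._entries = {}

            def invalidate(self, oid, tid):
                # a newer transaction modified oid: close the open record
                with self._lock:
                    e = self._entries.get(oid)
                    if e and e[1] is None:
                        self._entries[oid] = (e[0], tid, e[2])

            def store(self, oid, data, tid, next_tid):
                with self._lock:
                    e = self._entries.get(oid)
                    # keep the newer record instead of asserting equality
                    if e is None or e[0] <= tid:
                        self._entries[oid] = (tid, next_tid, data)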
    • client: fix race condition in refcounting dispatched answer packets · 743026d5
      This was found when stress-testing a big cluster. One client node was stuck:
      
        (Pdb) pp app.dispatcher.__dict__
        {'lock_acquire': <built-in method acquire of thread.lock object at 0x7f788c6e4250>,
        'lock_release': <built-in method release of thread.lock object at 0x7f788c6e4250>,
        'message_table': {140155667614608: {},
                          140155668875280: {},
                          140155671145872: {},
                          140155672381008: {},
                          140155672381136: {},
                          140155672381456: {},
                          140155673002448: {},
                          140155673449680: {},
                          140155676093648: {170: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                          140155677536464: {},
                          140155679224336: {},
                          140155679876496: {},
                          140155680702992: {},
                          140155681851920: {},
                          140155681852624: {},
                          140155682773584: {},
                          140155685988880: {},
                          140155693061328: {},
                          140155693062224: {},
                          140155693074960: {},
                          140155696334736: {278: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                          140155696411408: {},
                          140155696414160: {},
                          140155696576208: {},
                          140155722373904: {}},
        'queue_dict': {140155673622936: 1, 140155689147480: 2}}
      
      140155673622936 should not be in queue_dict.
      Julien Muchembled committed
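      A simplified model of the invariant the stuck node above had lost (the
      field names are taken from the dump, the rest is hypothetical):
      queue_dict counts pending answers per queue and must always match the
      number of message_table entries routed to that queue; here,
      140155673622936 keeps a count of 1 although no message still points to
      it. Mutating both structures under a single lock keeps them consistent:

        import threading

        class Dispatcher(object):
            def __init__(self):
                self._lock = threading.Lock()
                self.message_table = {}  # id(conn) -> {msg_id: queue}
                self.queue_dict = {}     # id(queue) -> pending answer count

            def register(self, conn, msg_id, queue):
                with self._lock:
                    self.message_table.setdefault(id(conn), {})[msg_id] = queue
                    self.queue_dict[id(queue)] = \
                        self.queue_dict.get(id(queue), 0) + 1

            def forget(self, conn, msg_id):
                with self._lock:
                    queue = self.message_table[id(conn)].pop(msg_id)
                    n = self.queue_dict[id(queue)] - 1
                    if n:
                        self.queue_dict[id(queue)] = n
                    else:
                        del self.queue_dict[id(queue)]
                    return queue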
    • More RTMIN+2 (log) information for clients and connections · 7e456329
      Julien Muchembled committed
  13. 15 Nov, 2018 3 commits
  14. 08 Nov, 2018 6 commits
    • client: merge ConnectionPool inside Application · 7494de84
      Julien Muchembled committed
    • client: prepare merge of ConnectionPool inside Application · 693aaf79
      Julien Muchembled committed
    • client: fix AssertionError when trying to reconnect too quickly after an error · 305dda86
      When ConnectionPool._initNodeConnection first fails with:
      
        StorageError: protocol error: already connected
      
      the following assertion failure happens when trying to reconnect before the
      previous connection is actually closed (currently, only the node sending an
      error message closes the connection, as commented in EventHandler):
      
        Traceback (most recent call last):
          File "neo/client/Storage.py", line 82, in load
            return self.app.load(oid)[:2]
          File "neo/client/app.py", line 367, in load
            data, tid, next_tid, _ = self._loadFromStorage(oid, tid, before_tid)
          File "neo/client/app.py", line 399, in _loadFromStorage
            askStorage)
          File "neo/client/app.py", line 293, in _askStorageForRead
            conn = cp.getConnForNode(node)
          File "neo/client/pool.py", line 98, in getConnForNode
            conn = self._initNodeConnection(node)
          File "neo/client/pool.py", line 48, in _initNodeConnection
            dispatcher=app.dispatcher)
          File "neo/lib/connection.py", line 704, in __init__
            super(MTClientConnection, self).__init__(*args, **kwargs)
          File "neo/lib/connection.py", line 602, in __init__
            node.setConnection(self)
          File "neo/lib/node.py", line 122, in setConnection
            attributeTracker.whoSet(self, '_connection'))
        AssertionError
      Julien Muchembled committed
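      A hedged sketch of the guard this failure calls for (accessor names are
      hypothetical): since only the peer that sent the error closes the
      connection, a client retrying immediately may find the previous
      connection still registered on the node, and node.setConnection() then
      asserts. Waiting for the close to be processed avoids that:

        def reconnect(node, app):
            # let the event loop process the pending close first
            while node.getConnection() is not None:  # hypothetical accessor
                app.poll()
            return app.newConnection(node)  # hypothetical constructor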
    • qa: fix attributeTracker · 163858ed
      Julien Muchembled committed
    • client: discard late answers to lockless writes · 50e7fe52
      This fixes:
      
        Traceback (most recent call last):
          File "neo/client/Storage.py", line 108, in tpc_vote
            return self.app.tpc_vote(transaction)
          File "neo/client/app.py", line 546, in tpc_vote
            self.waitStoreResponses(txn_context)
          File "neo/client/app.py", line 539, in waitStoreResponses
            _waitAnyTransactionMessage(txn_context)
          File "neo/client/app.py", line 160, in _waitAnyTransactionMessage
            self._handleConflicts(txn_context)
          File "neo/client/app.py", line 514, in _handleConflicts
            self._store(txn_context, oid, serial, data)
          File "neo/client/app.py", line 452, in _store
            self._waitAnyTransactionMessage(txn_context, False)
          File "neo/client/app.py", line 155, in _waitAnyTransactionMessage
            self._waitAnyMessage(queue, block=block)
          File "neo/client/app.py", line 142, in _waitAnyMessage
            _handlePacket(conn, packet, kw)
          File "neo/lib/threaded_app.py", line 133, in _handlePacket
            handler.dispatch(conn, packet, kw)
          File "neo/lib/handler.py", line 72, in dispatch
            method(conn, *args, **kw)
          File "neo/client/handlers/storage.py", line 143, in answerRebaseObject
            assert cached == data
        AssertionError
      Julien Muchembled committed
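      A toy sketch of the discard rule (hypothetical names): an answer is
      applied only if it matches the oid's current outstanding request, so
      answers to earlier, superseded lockless writes are dropped instead of
      being checked against state that has moved on, which is what the
      `assert cached == data` above was doing.

        def on_answer(txn_context, msg_id, oid, answer):
            if txn_context.pending.get(oid) != msg_id:
                return  # late answer to a superseded write: ignore it
            del txn_context.pending[oid]
            txn_context.apply(oid, answer)  # hypothetical application hook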