1. 27 Apr, 2019 7 commits
    • 3839d224
    • Better error reporting from the master to neoctl for denied requests · c2c9e99d
      Julien Muchembled authored
      This stops abusing ProtocolError, which needlessly disconnects the admin node.
      
      The many 'if ... raise RuntimeError' statements in neo/neoctl/neoctl.py
      could be turned into assertions.
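
      A minimal sketch of the intent, with invented names that do not match
      the actual NEO classes:

        # Invented names; only the shape of the change is illustrated.
        class DeniedError(Exception):
            """Carried back to neoctl in the answer; the connection stays open."""

        def master_handle(request):
            if not request.allowed():
                # A ProtocolError here would disconnect the admin node
                # needlessly; the denial travels in the answer packet instead.
                raise DeniedError('request denied: %r' % (request,))
            return request.apply()

        def neoctl_check(node_list):
            # neo/neoctl/neoctl.py: such values are produced locally, so a
            # failed check is a programming error and an assertion is enough,
            # instead of 'if ...: raise RuntimeError(...)'.
            assert len(node_list) == 1, node_list
            return node_list[0]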
    • 21190ee7
    • Make the number of replicas modifiable when the cluster is running · ef5fc508
      Julien Muchembled authored
      neoctl gets a new command to change the number of replicas.
      
      The number of replicas becomes a new partition table attribute
      and, like the PT id, it is stored in the config table. Conversely,
      the configuration value for the number of partitions is dropped,
      since it can be computed from the partition table, which is
      always stored in full.
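
      For illustration only, a sketch of what this means for the stored
      configuration, assuming an SQLite backend with a key/value 'config'
      table and a 'pt' table that has a 'partition' column (the real
      schema and key names may differ):

        import sqlite3

        con = sqlite3.connect('/srv/neo/storage.sqlite')   # hypothetical path
        # The number of replicas is a partition table attribute, persisted
        # in the config table like the PT id.
        num_replicas = int(con.execute(
            "SELECT value FROM config WHERE name='replicas'").fetchone()[0])
        # The number of partitions is no longer a configuration value: it is
        # recomputed from the partition table, which is always stored in full.
        num_partitions, = con.execute(
            "SELECT COUNT(DISTINCT partition) FROM pt").fetchone()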
      
      The -p/-r master options now only apply at database creation.
      
      Some implementation notes:
      
      - The protocol is slightly optimized in that the master now
        automatically sends the whole partition table to the admin &
        client nodes upon connection, as it already does for storage
        nodes. This makes the protocol more consistent, and the master
        is the only remaining node that requests partition tables
        (during recovery).
      
      - Some parts become tricky because app.pt can be None in more
        cases. For example, this is the reason why the extra condition
        in NodeManager.update (before app.pt.dropNode) was added.
        Likewise, the 'loadPartitionTable' method (storage) is not
        inlined because of unit tests.
        Overall, this commit simplifies more than it complicates.
      
      - In the master handlers, we stop hijacking the
        'connectionCompleted' method for tasks to be performed on
        handler switches (often, sending the full partition table);
        see the sketch after these notes.
      
      - The admin's 'bootstrapped' flag could have been removed earlier:
        race conditions can't happen since the AskNodeInformation packet
        was removed (commit d048a52d).
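
      A rough sketch of the handler-switch change mentioned above, with
      invented names (the real packet and handler classes differ):

        # Before: the task was hidden in an overridden connectionCompleted(),
        # which ran implicitly whenever the handler was switched.
        class OldHandler(object):
            def connectionCompleted(self, conn):
                conn.send(('full partition table', conn.app.pt))

        # After: connectionCompleted keeps its real meaning and the task is
        # performed explicitly at the point where the handler is switched.
        def switch_to_service_handler(app, conn, handler):
            conn.setHandler(handler)
            conn.send(('full partition table', app.pt))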
    • New --new-nid storage option for fast cloning · 27e3f620
      Julien Muchembled authored
      It is often faster to set up replicas by stopping a node (and any
      underlying database server like MariaDB) and doing a raw copy of
      the database (e.g. with rsync). So far, this required stopping
      the whole cluster and using tools like 'mysql' or 'sqlite3' to
      edit:
      - the 'pt' table in the databases,
      - the 'config.nid' values of the new nodes.
      
      With this new option, if you already have 1 replica, you can set
      up new replicas with such a fast raw copy, and without
      interruption of service. Obviously, this implies less redundancy
      during the operation.
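
      For reference, a sketch of the kind of manual edit that was needed
      on the copy before this option existed (SQLite case, assuming a
      key/value 'config' table; the real schema may differ):

        import sqlite3

        new_nid = 5                                           # example value
        con = sqlite3.connect('/srv/neo-clone/data.sqlite')   # hypothetical path
        # Give the cloned node its own identity so it does not collide with
        # the node it was copied from; the 'pt' table had to be adjusted
        # similarly so that the partition table includes the new node.
        con.execute("UPDATE config SET value=? WHERE name='nid'", (new_nid,))
        con.commit()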
    • qa: fix 2 tests with ZODB5 · 64e02391
      Julien Muchembled authored
  2. 26 Apr, 2019 4 commits
  3. 16 Apr, 2019 5 commits
  4. 05 Apr, 2019 3 commits
  5. 01 Apr, 2019 1 commit
  6. 21 Mar, 2019 2 commits
  7. 16 Mar, 2019 1 commit
    • importer: fix possible data loss on writeback · e387ad59
      Julien Muchembled authored
      If the source DB is lost during the import and then restored from a backup,
      all new transactions have to be written back again on resume. This is the
      most common case in which the writeback hits the maximum number of
      transactions per partition to process at each iteration; the previous code
      was buggy in that it could skip transactions.
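
      The pitfall is generic pagination. A sketch (not the actual importer
      code, and with a hypothetical fetch API) of resuming from the last
      transaction actually written back, so that hitting the per-iteration
      limit never skips anything:

        MAX_TXN_PER_ITERATION = 1024   # illustrative limit

        def writeback(fetch_after, write, last_written_tid):
            # fetch_after(tid, limit): at most 'limit' transactions with a
            # tid strictly greater than the given one, in ascending order
            # (hypothetical API).
            while True:
                txn_list = fetch_after(last_written_tid, MAX_TXN_PER_ITERATION)
                if not txn_list:
                    return last_written_tid
                for txn in txn_list:
                    write(txn)
                    # Remember the exact resume point: if the source DB is
                    # restored from a backup and the import resumes, nothing
                    # after this tid has been written back yet, and nothing
                    # before it is skipped.
                    last_written_tid = txn.tid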
  8. 11 Mar, 2019 3 commits
  9. 26 Feb, 2019 2 commits
    • qa: new tool to stress-test NEO · 38e98a12
      Julien Muchembled authored
      Example output:
      
          stress: yes (toggle with F1)
          cluster state: RUNNING
          last oid: 0x44c0
          last tid: 0x3cdee272ef19355 (2019-02-26 15:35:11.002419)
          clients: 2308, 2311, 2302, 2173, 2226, 2215, 2306, 2255, 2314, 2356 (+48)
                  8m53.988s (42.633861/s)
          pt id: 4107
              RRRDDRRR
           0: OU......
           1: ..UO....
           2: ....OU..
           3: ......UU
           4: OU......
           5: ..UO....
           6: ....OU..
           7: ......UU
           8: OU......
           9: ..UO....
          10: ....OU..
          11: ......UU
          12: OU......
          13: ..UO....
          14: ....OU..
          15: ......UU
          16: OU......
          17: ..UO....
          18: ....OU..
          19: ......UU
          20: OU......
          21: ..UO....
          22: ....OU..
          23: ......UU
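
      For reference, a minimal sketch of how such a grid can be rendered
      from a partition table: one line per partition offset, one column per
      storage node, one state letter per assigned cell and '.' for
      unassigned cells (the letters above are the tool's own abbreviations):

        def format_pt(node_states, cell_states):
            # node_states: one state letter per storage node (the header line)
            # cell_states: {offset: {node_index: state_letter}}
            width = len(node_states)
            lines = ['    ' + ''.join(node_states)]
            for offset in sorted(cell_states):
                row = (cell_states[offset].get(i, '.') for i in range(width))
                lines.append('%2i: %s' % (offset, ''.join(row)))
            return '\n'.join(lines)

        print(format_pt('RRRDDRRR', {0: {0: 'O', 1: 'U'}, 1: {2: 'U', 3: 'O'}}))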
    • master: fix typo in comment · ce25e429
      Julien Muchembled authored
  10. 25 Feb, 2019 1 commit
  11. 31 Dec, 2018 7 commits
  12. 05 Dec, 2018 1 commit
  13. 21 Nov, 2018 3 commits
    • fixup! client: discard late answers to lockless writes · 8ef1ddba
      Julien Muchembled authored
      Since commit 50e7fe52,
      some code can be simplified.
    • client: fix race condition between Storage.load() and invalidations · a2e278d5
      Julien Muchembled authored
      This fixes a bug that could manifest as follows:
      
        Traceback (most recent call last):
          File "neo/client/app.py", line 432, in load
            self._cache.store(oid, data, tid, next_tid)
          File "neo/client/cache.py", line 223, in store
            assert item.tid == tid, (item, tid)
        AssertionError: (<CacheItem oid='\x00\x00\x00\x00\x00\x00\x00\x01' tid='\x03\xcb\xc6\xca\xfd\xc7\xda\xee' next_tid='\x03\xcb\xc6\xca\xfd\xd8\t\x88' data='...' counter=1 level=1 expire=10000 prev=<...> next=<...>>, '\x03\xcb\xc6\xca\xfd\xd8\t\x88')
      
      The big changes in the threaded test framework are required because we need
      to reproduce a race condition between client threads, and this conflicts
      with the serialization of epoll events (deadlock).
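
      A rough sketch of the kind of guard such a race needs, with invented
      attribute names (this is not the actual fix): the cache is only fed
      with the loaded data if no invalidation was processed while the answer
      was in flight.

        def load(app, oid):
            # Remember how many invalidations had been processed before
            # asking the storage node (hypothetical counter bumped by the
            # invalidation handler, under the same cache lock).
            with app._cache_lock:
                before = app._invalidation_count
            data, tid, next_tid = app._ask_load(oid)
            with app._cache_lock:
                if app._invalidation_count == before:
                    # No invalidation was processed while the answer was in
                    # flight, so the cache can be fed safely with next_tid.
                    app._cache.store(oid, data, tid, next_tid)
            return data, tid, next_tid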
    • client: fix race condition in refcounting dispatched answer packets · 743026d5
      Julien Muchembled authored
      This was found when stress-testing a big cluster. 1 client node was stuck:
      
        (Pdb) pp app.dispatcher.__dict__
        {'lock_acquire': <built-in method acquire of thread.lock object at 0x7f788c6e4250>,
        'lock_release': <built-in method release of thread.lock object at 0x7f788c6e4250>,
        'message_table': {140155667614608: {},
                          140155668875280: {},
                          140155671145872: {},
                          140155672381008: {},
                          140155672381136: {},
                          140155672381456: {},
                          140155673002448: {},
                          140155673449680: {},
                          140155676093648: {170: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                          140155677536464: {},
                          140155679224336: {},
                          140155679876496: {},
                          140155680702992: {},
                          140155681851920: {},
                          140155681852624: {},
                          140155682773584: {},
                          140155685988880: {},
                          140155693061328: {},
                          140155693062224: {},
                          140155693074960: {},
                          140155696334736: {278: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                          140155696411408: {},
                          140155696414160: {},
                          140155696576208: {},
                          140155722373904: {}},
        'queue_dict': {140155673622936: 1, 140155689147480: 2}}
      
      140155673622936 should not be in queue_dict
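
      For context, a condensed sketch of the structure shown above: the
      dispatcher maps each connection to its pending message ids and keeps,
      in queue_dict, the number of answers each queue is still waiting for.
      Registering and forgetting must update both under the same lock,
      otherwise a stale counter is left behind and the client blocks forever
      (names simplified from the real class):

        import threading
        from collections import defaultdict

        class Dispatcher(object):
            def __init__(self):
                self._lock = threading.Lock()
                self.message_table = defaultdict(dict)  # conn id -> {msg id: queue}
                self.queue_dict = {}                    # queue id -> pending answers

            def register(self, conn, msg_id, queue):
                with self._lock:
                    self.message_table[id(conn)][msg_id] = queue
                    key = id(queue)
                    self.queue_dict[key] = self.queue_dict.get(key, 0) + 1

            def forget(self, conn, msg_id):
                with self._lock:
                    queue = self.message_table[id(conn)].pop(msg_id)
                    key = id(queue)
                    if self.queue_dict[key] == 1:
                        del self.queue_dict[key]
                    else:
                        self.queue_dict[key] -= 1
                    return queue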