1. 10 Sep, 2020 2 commits
  2. 04 Sep, 2020 1 commit
  3. 21 Aug, 2020 1 commit
  4. 25 Jun, 2020 1 commit
  5. 24 Jun, 2020 1 commit
  6. 12 Jun, 2020 1 commit
      qa: skip broken ZODB test · f4cb59d2
      Julien Muchembled authored
      ======================================================================
      FAIL: check_tid_ordering_w_commit (neo.tests.zodb.testBasic.BasicTests)
      ----------------------------------------------------------------------
      Traceback (most recent call last):
        File "ZODB/tests/BasicStorage.py", line 397, in check_tid_ordering_w_commit
          self.assertEqual(results.pop('lastTransaction'), tids[1])
        File "neo/tests/__init__.py", line 301, in assertEqual
          return super(NeoTestBase, self).assertEqual(first, second, msg=msg)
      failureException: '\x03\xd8\x85H\xbffp\xbb' != '\x03\xd8\x85H\xbfs\x0b\xdd'
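      A minimal sketch of one way to skip an inherited test in a
      unittest-style suite (the base class below is a stand-in for
      ZODB's BasicStorage tests, whose suite collects 'check*' methods
      via a custom loader; the actual commit may skip it differently):

        import unittest

        class UpstreamBasicTests(unittest.TestCase):
            # Stand-in for ZODB/tests/BasicStorage.py in this sketch.
            def check_tid_ordering_w_commit(self):
                self.fail("broken upstream check")

        class BasicTests(UpstreamBasicTests):
            # Redefine the inherited method as a skipped stub so the
            # rest of the upstream suite still runs.
            @unittest.skip("broken ZODB test")
            def check_tid_ordering_w_commit(self):
                pass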
  7. 11 Jun, 2020 1 commit
  8. 29 May, 2020 3 commits
  9. 18 May, 2020 1 commit
      admin: fix monitoring timer after 2 identical consecutive checks · c611c48f
      Julien Muchembled authored
      This fixes a bug where, with only email notification enabled,
      monitoring stopped checking whether backup clusters are lagging
      once the status was unchanged since the previous check (for
      lagging, what is compared is the set of lagging backups), until
      another event woke monitoring up.
      
      The code is also simplified: for the moment, there is no need for
      different timeouts between the normal case and an SMTP failure.
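
      A minimal sketch of the timer logic this implies, with
      hypothetical names (not NEO's actual API): re-arm the timer after
      every check, even when the computed status is identical to the
      previous one:

        import threading

        CHECK_INTERVAL = 600  # seconds between lagging checks

        class Monitor(object):
            def __init__(self):
                self._last_lagging = None  # set of lagging backups

            def _check(self):
                lagging = self._lagging_backups()
                if lagging != self._last_lagging:
                    self._last_lagging = lagging
                    self._notify_by_email(lagging)
                # Re-arm unconditionally: bailing out early when the
                # status is unchanged is what stopped the monitoring.
                threading.Timer(CHECK_INTERVAL, self._check).start()

            def _lagging_backups(self):
                return frozenset()  # placeholder for the real check

            def _notify_by_email(self, lagging):
                pass  # placeholder for the real notification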
  10. 20 Mar, 2020 1 commit
  11. 16 Mar, 2020 2 commits
  12. 14 Feb, 2020 1 commit
      master: fix tpc_finish possibly trying to kill too many nodes after client-storage failures · 82eea0cd
      Julien Muchembled authored
      When concurrent transactions fail with different storage nodes (e.g. only
      network issues between C1-S2 and C2-S1), in such a way that each transaction
      can be committed but not both (or the cluster would become non-operational),
      and the first transaction is aborted (between tpc_vote and tpc_finish), then
      the second one wrongly failed with INCOMPLETE_TRANSACTION.
      
      And if both transactions could be committed (e.g. more than 1 replica),
      some nodes would be disconnected for nothing.
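
      A toy model (not NEO's code) of the constraint described above:
      failed storage nodes may only be disconnected if every partition
      keeps at least one remaining replica, i.e. the cluster stays
      operational:

        def still_operational(partitions, failed):
            # partitions: iterable of sets of storage node ids
            # (the replicas of each partition); failed: node ids
            # that would be disconnected.
            return all(cells - failed for cells in partitions)

        # 2 partitions, each replicated on S1 and S2: losing one
        # node is fine, losing both is not.
        partitions = [{'S1', 'S2'}, {'S1', 'S2'}]
        assert still_operational(partitions, {'S2'})
        assert not still_operational(partitions, {'S1', 'S2'})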
  13. 21 Jan, 2020 1 commit
      admin: fix possible crash when monitoring a backup cluster that has just switched to BACKINGUP state · 5ee0b0a3
      Julien Muchembled authored
      This fixes:
      
        Traceback (most recent call last):
          ...
          File "neo/admin/handler.py", line 200, in answerLastTransaction
            app.maybeNotify(name)
          File "neo/admin/app.py", line 380, in maybeNotify
            self._notify(False)
          File "neo/admin/app.py", line 302, in _notify
            body += '', name, '    ' + backup.formatSummary(upstream)[1]
          File "neo/admin/app.py", line 74, in formatSummary
            tid = self.backup_tid if backup else self.ltid
        AttributeError: 'Backup' object has no attribute 'backup_tid'
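
      A minimal defensive sketch of the failing line, reusing names from
      the traceback (the actual fix may instead initialize the attribute
      when entering BACKINGUP):

        class Backup(object):
            ltid = None

            def formatSummary(self, upstream, backup=True):
                # 'backup_tid' may not be set yet when the cluster has
                # just switched to BACKINGUP: default to None instead
                # of raising AttributeError.
                tid = (getattr(self, 'backup_tid', None) if backup
                       else self.ltid)
                return tid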
  14. 10 Jan, 2020 1 commit
      master: fix crash of backup master when disconnected from upstream while serving clients · 7e8ca9ec
      Julien Muchembled authored
      This fixes:
      
        Traceback (most recent call last):
          File "neo/master/app.py", line 172, in run
            self._run()
          File "neo/master/app.py", line 182, in _run
            self.playPrimaryRole()
          File "neo/master/app.py", line 314, in playPrimaryRole
            self.backup_app.provideService())
          File "neo/master/backup_app.py", line 101, in provideService
            app.changeClusterState(ClusterStates.STARTING_BACKUP)
          File "neo/master/app.py", line 474, in changeClusterState
            ) or not node.isClient(), (state, node)
        AssertionError: (<EnumItem STARTING_BACKUP (4)>, <ClientNode(uuid=C1, state=RUNNING, connection=<ServerConnection(nid=C1, address=127.0.0.1:52430, handler=ClientReadOnlyServiceHandler, fd=59, on_close=onConnectionClosed, server) at 7f38f5628390>) at 7f38f5628ad0>)
  15. 07 Jan, 2020 1 commit
      admin: fix handling of immediate connection failure to upstream admin · e2b11d54
      Julien Muchembled authored
      In such a case, it did not reconnect but believed it was
      connected, which eventually led to crashes like:
      
        Traceback (most recent call last):
          ...
          File "neo/admin/handler.py", line 130, in answerClusterState
            self.app.updateMonitorInformation(None, cluster_state=state)
          File "neo/admin/app.py", line 274, in updateMonitorInformation
            self.upstream_admin_conn.send(Packets.NotifyMonitorInformation(kw))
          File "neo/lib/connection.py", line 565, in send
            raise ConnectionClosed
        neo.lib.connection.ConnectionClosed
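
      A minimal sketch of the state-tracking pitfall, with hypothetical
      names (not NEO's connection API): on an immediate failure, the
      link must be left marked as disconnected so that a later attempt
      retries:

        class UpstreamAdminLink(object):
            def __init__(self, connect):
                self._connect = connect  # callable that may raise
                self.conn = None         # None while disconnected

            def ensure_connected(self):
                if self.conn is None:
                    try:
                        self.conn = self._connect()
                    except ConnectionError:
                        # Immediate failure: keep self.conn as None
                        # instead of pretending we are connected,
                        # otherwise the next send() crashes with
                        # ConnectionClosed as above.
                        self.conn = None
                return self.conn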
  16. 26 Dec, 2019 2 commits
  17. 13 Nov, 2019 1 commit
      admin: fix possible crash when connecting to upstream admin · d4603189
      Julien Muchembled authored
      This fixes:
      
        Traceback (most recent call last):
          File "neo/scripts/neoadmin.py", line 31, in main
            app.run()
          File "neo/admin/app.py", line 179, in run
            self._run()
          File "neo/admin/app.py", line 199, in _run
            self.em.poll(1)
          File "neo/lib/event.py", line 155, in poll
            self._poll(blocking)
          File "neo/lib/event.py", line 220, in _poll
            if conn.readable():
          File "neo/lib/connection.py", line 487, in readable
            self._closure()
          File "neo/lib/connection.py", line 545, in _closure
            self.close()
          File "neo/lib/connection.py", line 534, in close
            handler.connectionFailed(self)
          File "neo/admin/handler.py", line 210, in connectionClosed
            app.connectToUpstreamAdmin()
          File "neo/admin/app.py", line 230, in connectToUpstreamAdmin
            None, None, self.name, None, {}))
          File "neo/lib/connection.py", line 574, in ask
            raise ConnectionClosed
        neo.lib.connection.ConnectionClosed
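
      A minimal sketch of the kind of guard that avoids this, reusing
      only names from the traceback (the actual fix may defer the
      reconnection differently):

        from neo.lib.connection import ConnectionClosed

        def connectionClosed(self, conn):
            try:
                self.app.connectToUpstreamAdmin()
            except ConnectionClosed:
                # The new connection to the upstream admin died
                # immediately; swallow the exception so it does not
                # escape into the polling loop, and let a later
                # event trigger the next attempt.
                pass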
  18. 22 Oct, 2019 3 commits
  19. 17 Oct, 2019 3 commits
  20. 14 Oct, 2019 5 commits
  21. 16 Aug, 2019 3 commits
      Bump protocol version · c681f666
      Julien Muchembled authored
      protocol: small cleanup in packet registration · c156f11a
      Julien Muchembled authored
      Same as commit a00ab78b.
      
      It was mistakenly reverted when switching to msgpack.
      New feature: monitoring · e434c253
      Julien Muchembled authored
      This task is done by the admin node, in 2 possible ways:
      - email notifications, as soon as some state changes;
      - a new 'neoctl print summary' command that can be used
        periodically to check the health of the database (see the
        sketch after this list).
      Both report the same information.
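
      A minimal sketch of the periodic variant, assuming only that the
      'neoctl print summary' command named above is on PATH (the
      wrapper itself is hypothetical):

        import subprocess

        def print_summary():
            # Ask the admin node for the same information as the
            # email notifications; options such as the admin address
            # are deployment-specific and omitted here.
            out = subprocess.run(['neoctl', 'print', 'summary'],
                                 capture_output=True, text=True,
                                 check=True)
            return out.stdout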
      
      About backup clusters:
      
      The admin of the main cluster also monitors selected backup clusters,
      with the help of their admin nodes.
      
      Internally, when a backup master node connects to the upstream master node,
      it receives the address of the upstream admin node and forwards it to its
      admin node, which is therefore able to connect to the upstream admin node.
      So the 2 admin nodes remain connected and communicate in 2 ways:
      - the backup node notifies upstream about the health of the backup cluster;
      - the upstream node queries the backup node periodically to check
        that replication is not lagging too far behind.
      
      TODO:
      
      A few things are hard-coded and we may want to make them configurable:
      - backup lateness is checked every 10 min;
      - a backup is expected never to be late.
      
      There's also no delay to prevent 2 consecutive emails from having the
      same Date: header (unfortunately, RFC 5322 does not allow sub-second
      precision), in which case the MUA may display them in random order.
      This is especially confusing when one notification is OK and the other
      is not, because one may wonder whether there's a new problem.
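
      For reference, RFC 5322 Date headers really are second-granular,
      as a quick check with the Python standard library shows:

        from email.utils import formatdate

        # Emits e.g. 'Fri, 16 Aug 2019 12:34:56 +0200': there is no
        # sub-second field, so 2 emails sent within the same second
        # carry identical Date: headers and the MUA may order them
        # arbitrarily.
        print(formatdate(localtime=True))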
  22. 05 Jun, 2019 2 commits
      Introduce extra node properties · 82c142c4
      Julien Muchembled authored
      Explicit fields in RequestIdentification are only suitable for the actual
      identification or for properties that most nodes have.
      
      But some current (and future) features require passing values (always and
      as soon as possible) for tasks that are unrelated to identification.
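
      A minimal sketch of the idea in modern Python (field names are
      hypothetical, not the actual packet layout): keep identification
      proper as explicit fields and carry feature-specific values in an
      open-ended mapping:

        from dataclasses import dataclass, field

        @dataclass
        class RequestIdentification:
            node_type: str
            uuid: bytes
            address: tuple
            name: str
            # Open-ended properties for tasks unrelated to
            # identification, passed as early as possible without
            # adding an explicit field per feature.
            extra: dict = field(default_factory=dict)

        req = RequestIdentification('CLIENT', b'C1',
                                    ('127.0.0.1', 2050), 'cluster',
                                    extra={'backup': True})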
      admin: fix misuse of Packet.setId · 2b9e14e8
      Julien Muchembled authored
      Whatever Packet.setId set was overridden by Connection.answer,
      which would have broken concurrent queries to the admin node
      (something we currently don't do).
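
      A toy model (not NEO's code) of why presetting the id is futile:
      the answering connection must stamp the response with the id of
      the request it answers, so it overrides whatever setId stored:

        class Packet(object):
            def __init__(self):
                self._id = None

            def setId(self, id):
                self._id = id

        class Connection(object):
            def answer(self, packet, request_id):
                # Always stamp the response with the request's id;
                # any earlier setId call is overridden. Relying on
                # setId would break once queries to the admin node
                # are made concurrently.
                packet.setId(request_id)
                # ... serialization and sending elided ...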
  23. 29 May, 2019 1 commit
  24. 28 May, 2019 1 commit