- 22 Oct, 2019 3 commits

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored

- 17 Oct, 2019 3 commits

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored

- 14 Oct, 2019 5 commits

Julien Muchembled authored

Julien Muchembled authored
- make the stress process log to stress.log
- log decisions to firewall/kill nodes
- new --backlog option
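
A minimal sketch of the logging part only, not the actual stress code; the --backlog name comes from the message above, but its semantics and default are assumptions:

    # Sketch only, not NEO's stress code: send everything to stress.log
    # and accept a --backlog option (semantics assumed).
    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument('--backlog', type=int, default=16)
    args = parser.parse_args()

    log = logging.getLogger('stress')
    log.addHandler(logging.FileHandler('stress.log'))
    log.setLevel(logging.INFO)
    # decisions would be logged like this:
    log.info('firewalling node %s', '127.0.0.1:2001')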

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored
Stress code reuses the admin application class and the latter was changed in commit e434c253.

- 16 Aug, 2019 3 commits

Julien Muchembled authored

Julien Muchembled authored
Same as commit a00ab78b. It was mistakenly reverted when switching to msgpack.

Julien Muchembled authored
This task is done by the admin node, in 2 possible ways:
- email notifications, sent as soon as some state changes;
- a new 'neoctl print summary' command that can be used periodically to check the health of the database.
They report the same information.

About backup clusters: the admin of the main cluster also monitors selected backup clusters, with the help of their admin nodes. Internally, when a backup master node connects to the upstream master node, it receives the address of the upstream admin node and forwards it to its own admin node, which is therefore able to connect to the upstream admin node. So the 2 admin nodes remain connected and communicate in 2 ways:
- the backup node notifies upstream about the health of the backup cluster;
- the upstream node queries the backup node periodically to check that replication is not too late.

TODO: A few things are hard-coded and we may want to make them configurable:
- backup lateness is checked every 10 min;
- the backup is expected to never be late.

There is also no delay to prevent 2 consecutive emails from having the same Date: header (unfortunately, RFC 5322 does not allow sub-second precision), in which case the MUA may display them in random order. This is mostly confusing when one notification is OK and the other is not, because one may wonder whether there is a new problem.
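
As an illustration of how the new command could be used periodically, a hypothetical watch loop; only 'neoctl print summary' and the 10 min period come from the message above, the admin address and the subprocess wrapper are assumptions:

    # Hypothetical watch loop around 'neoctl print summary'.
    import subprocess
    import time

    while True:
        summary = subprocess.check_output(
            ['neoctl', '-a', '127.0.0.1:5555', 'print', 'summary'])  # address assumed
        print(summary.decode())   # or feed an alerting system
        time.sleep(600)           # matches the hard-coded 10 min check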

- 05 Jun, 2019 2 commits

Julien Muchembled authored
Explicit fields in RequestIdentification are only suitable for the actual identification, or for properties that most nodes have. But some current (and future) features require passing values (always, and as soon as possible) for tasks that are unrelated to identification.
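
One way to picture the idea, with illustrative field names rather than the actual NEO protocol: keep the fixed identification fields, plus one free-form mapping that any feature can fill at connection time:

    # Illustrative only: a free-form mapping travels with the request,
    # so features unrelated to identification can pass values as soon
    # as a node connects. Field names are not the actual protocol.
    request_identification = {
        # explicit fields, suitable for identification proper
        'node_type': 'storage',
        'uuid': 42,
        'address': ('127.0.0.1', 2001),
        # extensible part, unrelated to identification
        'extra': {'devpath': ['row1', 'rack2'], 'read_only': False},
    }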

Julien Muchembled authored
What Packet.setId does was overridden by Connection.answer, which would have broken concurrent queries to the admin node (something we don't currently do).

- 29 May, 2019 1 commit

Julien Muchembled authored

- 28 May, 2019 1 commit

Julien Muchembled authored

- 24 May, 2019 1 commit

Julien Muchembled authored

- 20 May, 2019 1 commit

Julien Muchembled authored

- 09 May, 2019 3 commits

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored
... rather than logging when the backend does not override.

- 30 Apr, 2019 3 commits

Julien Muchembled authored
Unlike FileStorage, NEO remembers uses of readCurrent().
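
For context, readCurrent() is the standard ZODB way for a transaction to declare that an object it read, but did not modify, must still be current at commit time. A minimal usage sketch with made-up application objects:

    # Standard ZODB usage; 'limits' and 'accounts' are made-up data.
    import transaction

    def withdraw(conn, account_id, amount):
        root = conn.root()
        limits = root['limits']    # read but not modified
        conn.readCurrent(limits)   # commit must fail if it changed meanwhile
        account = root['accounts'][account_id]
        if amount <= limits.max_withdrawal:
            account.balance -= amount
        transaction.commit()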

Julien Muchembled authored

Julien Muchembled authored
This fixes the following assertion:

Traceback (most recent call last):
  File "neo/master/app.py", line 172, in run
    self._run()
  File "neo/master/app.py", line 182, in _run
    self.playPrimaryRole()
  File "neo/master/app.py", line 302, in playPrimaryRole
    self.backup_app.provideService())
  File "neo/master/backup_app.py", line 114, in provideService
    node, conn = bootstrap.getPrimaryConnection()
  File "neo/lib/bootstrap.py", line 74, in getPrimaryConnection
    poll(1)
  File "neo/lib/event.py", line 160, in poll
    to_process.process()
  File "neo/lib/connection.py", line 504, in process
    self._handlers.handle(self, self._queue.pop(0))
  File "neo/lib/connection.py", line 92, in handle
    self._handle(connection, packet)
  File "neo/lib/connection.py", line 107, in _handle
    pending[0][1].packetReceived(connection, packet)
  File "neo/lib/handler.py", line 125, in packetReceived
    self.dispatch(*args)
  File "neo/lib/handler.py", line 75, in dispatch
    method(conn, *args, **kw)
  File "neo/lib/handler.py", line 159, in notPrimaryMaster
    assert primary != self.app.server
AttributeError: 'BackupApplication' object has no attribute 'server'

- 28 Apr, 2019 3 commits

Julien Muchembled authored
With the switch to msgpack, there was no schema anymore, whereas one was sometimes used both for automatic conversion (e.g. the last argument of AskStoreTransaction must now be explicitly cast to list) and for type checking. This somewhat reintroduces a kind of schema that:
- is used by the test suite for type checking;
- can be generated automatically from the test suite when one changes the protocol.
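
A toy version of such a schema, only to illustrate the idea; the real one lives in the test suite, and the field types listed here are invented:

    # Toy schema: expected argument types per packet, checked against
    # msgpack-decoded payloads. The entry below is invented.
    SCHEMA = {
        'AskStoreTransaction': (bytes, bytes, bytes, bytes, list),
    }

    def check(name, args):
        expected = SCHEMA[name]
        assert len(args) == len(expected), (name, args)
        for arg, expected_type in zip(args, expected):
            assert isinstance(arg, expected_type), (name, arg, expected_type)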

Julien Muchembled authored
Not only for performance reasons (at least 3% faster), but also because of several ugly things in the way packets were defined:
- packet field names, which are only documentary; for root fields, they even just duplicate the packet names;
- a lot of repetition in packet names, and even confusion between the name of the packet definition and the name of the actual notify/request packet;
- the need to implement field types for anything, like PByte to support new compression formats, since PBoolean is not enough.
neo/lib/protocol.py is now much smaller.

Julien Muchembled authored

- 27 Apr, 2019 11 commits

Julien Muchembled authored
The following 2 operations can be onerous and they should not be directly usable without some kind of confirmation by the user:
- Dropping a node now requires stopping it first.
- Tweaking no longer automatically excludes DOWN nodes, because a node could go DOWN between the moment the user sends the tweak command and the actual tweak by the master.

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored
Initially, I wanted to do the simulation inside neoctl, but it has no knowledge of the topology (the master doesn't send the devpath values of storage nodes). Therefore, the work is delegated to the master node, which implies a protocol change.

Julien Muchembled authored

Julien Muchembled authored

Julien Muchembled authored
This stops abusing ProtocolError, which disconnects the admin node needlessly. The many 'if ... raise RuntimeError' in neo/neoctl/neoctl.py could be turned into assertions.

Julien Muchembled authored

Julien Muchembled authored
neoctl gets a new command to change the number of replicas. The number of replicas becomes a new partition table attribute and, like the PT id, it is stored in the config table. On the other side, the configuration value for the number of partitions is dropped, since it can be computed from the partition table, which is always stored in full. The -p/-r master options now only apply at database creation.

Some implementation notes:
- The protocol is slightly optimized in that the master now automatically sends the whole partition table to the admin & client nodes upon connection, like for storage nodes. This makes the protocol more consistent, and the master is the only remaining node requesting partition tables, during recovery.
- Some parts become tricky because app.pt can be None in more cases. For example, the extra condition in NodeManager.update (before app.pt.dropNode) was added for this reason. Another example is the 'loadPartitionTable' method (storage), which is not inlined because of unit tests. Overall, this commit simplifies more than it complicates.
- In the master handlers, we stop hijacking the 'connectionCompleted' method for tasks to be performed (often sending the full partition table) on handler switches.
- The admin's 'bootstrapped' flag could have been removed earlier: race conditions can't happen since the AskNodeInformation packet was removed (commit d048a52d).
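
The point that makes the 'partitions' configuration redundant fits in one function; this sketch assumes the partition table is a list with one list of cells per partition:

    # With the partition table always stored in full, the number of
    # partitions is just its length, so it no longer needs its own
    # configuration value; the number of replicas, by contrast, is now
    # an explicit attribute stored in the config table.
    def partition_count(pt):
        return len(pt)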

Julien Muchembled authored
It is often faster to set up replicas by stopping a node (and any underlying database server like MariaDB) and doing a raw copy of the database (e.g. with rsync). So far, this required stopping the whole cluster and using tools like 'mysql' or 'sqlite3' to edit:
- the 'pt' table in the databases,
- the 'config.nid' values of the new nodes.
With this new option, if you already have 1 replica, you can set up new replicas with such a fast raw copy, and without interruption of service. Obviously, this implies less redundancy during the operation.
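
An outline of the procedure this enables, as a sketch only: service names, paths and hosts are all assumptions, not NEO documentation:

    # Every name below is an assumption.
    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    # stop one existing replica (and its database server), not the cluster
    run('ssh', 'node1', 'systemctl', 'stop', 'neostorage', 'mariadb')
    # raw copy of its database to the machine hosting the new node
    run('rsync', '-a', 'node1:/var/lib/mysql/', '/var/lib/mysql/')
    run('ssh', 'node1', 'systemctl', 'start', 'mariadb', 'neostorage')
    # the new node is then started with the option introduced here, so
    # that the 'pt' table and 'config.nid' need not be edited by hand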