- 05 Dec, 2020 1 commit
-
-
Kirill Smelkov authored
This is the 2020 edition of my original patch from 2016 ( dd3bb8b4 ). It was described in my NEO/go article ( https://navytux.spb.ru/~kirr/neo.html ) in the text quoted below:

Then comes the link layer, which provides a service to exchange messages over the network. In current NEO/py every message has a `msg_id` field that, similarly to ZEO/py, marks a request with a serial number, with the requester then waiting for the corresponding answer to come back with the same message id. Even though there might be several reply messages coming back to a single request, as e.g. in the NEO/py asynchronous replication code[0], this approach is still similar to the ZEO/py remote procedure call (RPC) model because of its single-request semantic.

One of the places where this limitation shows is the same replicator code, where transaction metadata is fetched first with a first series of RPC calls, and only then object data is fetched with a second series of RPC calls. This can be not very good, e.g. when there is a lot of transactions/data to synchronize, because 1) it puts assumptions on, and so constrains, the storage backend model of how data is stored (separate SQL tables for metadata and data), and 2) no data will be synchronized at all until all transactions are synchronized first. The second point prevents, for example, the syncing storage from in turn providing service, even if read-only, for the already fetched data. What would maybe be more useful is for the requester to send a request saying it wants to fetch ZODB data in the `tid_min..tid_max` range, and then the sender sending an intermixed stream of metadata/data in zodbdump-like format.

Keeping this and other examples in mind, NEO/go shifts from thinking about protocol logic as RPC to thinking of it as a more general network protocol, and settles on providing a general connection-oriented message exchange service[1]: whenever a message with a new `msg_id` is sent, a new connection is established, multiplexed on top of a single node-node TCP link. It is then possible to send/receive arbitrary messages back and forth until the so-established connection is closed. This works transparently to NEO/py, which still thinks it operates in simple RPC mode, because of the way messages are put on the wire and because simple RPC is a subset of a general exchange. The `neonet` module also provides `DialLink` and `ListenLink` primitives[2] that work similarly to standard Go `net.Dial` and `net.Listen` but wrap the so-created link into the multiplexing layer. What is actually done this way is very similar to HTTP/2, which also provides multiple general streams multiplexed on top of a single TCP connection ([3], [4]).

However, if connection ids (sent in place of `msg_id` on the wire) were assigned arbitrarily, two nodes could try to initiate two new different connections to each other with the same connection id. To prevent this kind of conflict, a simple rule can be used: allocate connection ids either even or odd, depending on the role the peer played while establishing the link. HTTP/2 takes a similar approach[5], where "Streams initiated by a client MUST use odd-numbered stream identifiers; those initiated by the server MUST use even-numbered stream identifiers.", with NEO/go doing the same, corresponding to who was originally the dialer and who was the listener. However, it requires a small patch to be applied on the NEO/py side to increment `msg_id` by 2 instead of 1.
NEO/py currently explicitly specifies `msg_id` for an answer in only a limited set of cases, by default assuming a reply comes to the last received message, whose `msg_id` it remembers globally per TCP link. This approach is error-prone and cannot generally work in cases where several simultaneous requests are received over a single link. NEO/go thus does not maintain any such global per-link knowledge and handles every request by always explicitly using the corresponding connection object created at request reception time.

[0] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/neo/storage/replicator.py
[1] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/go/neo/neonet/connection.go
[2] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/go/neo/neonet/newlink.go
[3] https://tools.ietf.org/html/rfc7540#section-5
[4] https://http2.github.io/faq/#why-is-http2-multiplexed
[5] https://tools.ietf.org/html/rfc7540#section-5.1.1

It can be criticized, but the fact is:

- it does no harm to NEO/py and is backward-compatible: a NEO/py node without this patch can still successfully connect to and interoperate with another NEO/py node with this patch.
- it is required for NEO/go to be able to interoperate with NEO/py. Both client and server parts of NEO/go use the same neonet module to exchange messages.
- the NEO/go client is used by wendelin.core 2, which organizes access to on-ZODB ZBigFile data via the WCFS filesystem implemented in Go.

So on one side this patch is small, simple and does not do any harm to NEO/py. On the other side it is required for NEO/go and wendelin.core 2. To me this clearly indicates that there should be no good reason to reject inclusion of this patch into NEO/py.

--------

My original patch from 2016 came with corresponding adjustments to neo/tests/testConnection.py ( dd3bb8b4 ), but commit f6eb02b4 (Remove packet timeouts; 2017-05-04) removed testConnection.py completely and, if I understand correctly, did not add any other test to compensate. For this reason I am not trying to restore my tests for Connection either. Anyway, with this patch there is no regression in any of the other existing NEO/py tests.

--------

My original patch description from 2016 follows:

- even for server-initiated streams
- odd for client-initiated streams

This way I will be able to use Pkt.msg_id as a real stream_id in Go's Conn, because with the even/odd scheme there is no possibility of id conflicts between the two peers.

/cc @romain, @tomo, @rafael, @arnau, @vpelletier, @klaus, @Tyagov
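To illustrate the even/odd scheme, here is a minimal Python sketch with hypothetical names (not the actual NEO code; the real allocation lives in NEO/go's neonet and, with this patch, in NEO/py's connection code). The point is that the two id spaces never intersect, so both peers can open new connections concurrently without conflicts:

    class ConnIdAllocator:
        """Allocate connection ids with a fixed parity per link role."""

        def __init__(self, dialer):
            # Following the HTTP/2 convention quoted above: the peer that
            # dialed the link uses odd ids, the listener uses even ids.
            self.next_id = 1 if dialer else 2

        def allocate(self):
            conn_id = self.next_id
            self.next_id += 2  # incrementing by 2 preserves the parity
            return conn_id

    dialer = ConnIdAllocator(dialer=True)
    listener = ConnIdAllocator(dialer=False)
    assert [dialer.allocate() for _ in range(3)] == [1, 3, 5]
    assert [listener.allocate() for _ in range(3)] == [2, 4, 6]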
-
- 19 Aug, 2020 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
======================================================================
FAIL: check_tid_ordering_w_commit (neo.tests.zodb.testBasic.BasicTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "ZODB/tests/BasicStorage.py", line 397, in check_tid_ordering_w_commit
    self.assertEqual(results.pop('lastTransaction'), tids[1])
  File "neo/tests/__init__.py", line 301, in assertEqual
    return super(NeoTestBase, self).assertEqual(first, second, msg=msg)
failureException: '\x03\xd8\x85H\xbffp\xbb' != '\x03\xd8\x85H\xbfs\x0b\xdd'

(cherry picked from commit f4cb59d2)
-
Julien Muchembled authored
This requires ZODB >= 5.6.0

(cherry picked from commit a7d101ec)
-
Julien Muchembled authored
(cherry picked from commit 43029be2)
-
- 22 May, 2020 2 commits
-
-
Julien Muchembled authored
This fixes the following assertion:

Traceback (most recent call last):
  File "neo/master/app.py", line 172, in run
    self._run()
  File "neo/master/app.py", line 182, in _run
    self.playPrimaryRole()
  File "neo/master/app.py", line 302, in playPrimaryRole
    self.backup_app.provideService())
  File "neo/master/backup_app.py", line 114, in provideService
    node, conn = bootstrap.getPrimaryConnection()
  File "neo/lib/bootstrap.py", line 74, in getPrimaryConnection
    poll(1)
  File "neo/lib/event.py", line 160, in poll
    to_process.process()
  File "neo/lib/connection.py", line 504, in process
    self._handlers.handle(self, self._queue.pop(0))
  File "neo/lib/connection.py", line 92, in handle
    self._handle(connection, packet)
  File "neo/lib/connection.py", line 107, in _handle
    pending[0][1].packetReceived(connection, packet)
  File "neo/lib/handler.py", line 125, in packetReceived
    self.dispatch(*args)
  File "neo/lib/handler.py", line 75, in dispatch
    method(conn, *args, **kw)
  File "neo/lib/handler.py", line 159, in notPrimaryMaster
    assert primary != self.app.server
AttributeError: 'BackupApplication' object has no attribute 'server'

(cherry picked from commit dba07e72)
-
Julien Muchembled authored
-
- 07 Jan, 2020 1 commit
-
-
Julien Muchembled authored
-
- 28 Apr, 2019 1 commit
-
-
Julien Muchembled authored
-
- 27 Apr, 2019 12 commits
-
-
Julien Muchembled authored
The following two operations can be onerous, and they should not be directly usable without some kind of confirmation by the user:

- Dropping a node now requires stopping it first.
- Tweaking no longer automatically excludes DOWN nodes, because a node could go DOWN between the moment the user sends the tweak command and the actual tweak by the master.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
Initially, I wanted to do the simulation inside neoctl, but it has no knowledge of the topology (the master does not send devpath values of storage nodes). Therefore, the work is delegated to the master node, which implies a change of the protocol.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This stops abusing ProtocolError, which disconnects the admin node needlessly. The many 'if ... raise RuntimeError' in neo/neoctl/neoctl.py could be turned into assertions.
-
Julien Muchembled authored
-
Julien Muchembled authored
neoctl gets a new command to change the number of replicas.

The number of replicas becomes a new partition table attribute and, like the PT id, it is stored in the config table. On the other side, the configuration value for the number of partitions is dropped, since it can be computed from the partition table, which is always stored in full (see the sketch below). The -p/-r master options now only apply at database creation.

Some implementation notes:

- The protocol is slightly optimized in that the master now automatically sends the whole partition table to the admin & client nodes upon connection, like for storage nodes. This makes the protocol more consistent, and the master is the only remaining node requesting partition tables, during recovery.
- Some parts become tricky because app.pt can be None in more cases. For example, the extra condition in NodeManager.update (before app.pt.dropNode) was added for this reason, and the 'loadPartitionTable' method (storage) is not inlined because of unit tests. Overall, this commit simplifies more than it complicates.
- In the master handlers, we stop hijacking the 'connectionCompleted' method for tasks to be performed on handler switches (often sending the full partition table).
- The admin's 'bootstrapped' flag could have been removed earlier: race conditions can't happen since the AskNodeInformation packet was removed (commit d048a52d).
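As an illustration of why the configuration value for the number of partitions becomes redundant, here is a minimal sketch with hypothetical names (not the actual NEO/py code), deriving it from a fully stored partition table:

    # The partition table is assumed to be readable back as rows of
    # (partition_offset, node_id, cell_state), stored in full.
    def num_partitions(pt_rows):
        """Derive the number of partitions from the stored partition table."""
        # Every partition offset appears at least once because the table
        # is always stored in full, so counting distinct offsets suffices.
        return len({offset for offset, _nid, _state in pt_rows})

    # Example: 4 partitions with 2 cells each.
    rows = [(p, nid, 'UP_TO_DATE') for p in range(4) for nid in (1, 2)]
    assert num_partitions(rows) == 4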
-
Julien Muchembled authored
It is often faster to set up replicas by stopping a node (and any underlying database server like MariaDB) and doing a raw copy of the database (e.g. with rsync). So far, this required stopping the whole cluster and using tools like 'mysql' or 'sqlite3' to edit:

- the 'pt' table in databases,
- the 'config.nid' values of the new nodes.

With this new option, if you already have 1 replica, you can set up new replicas with such a fast raw copy, and without interruption of service. Obviously, this implies less redundancy during the operation.
-
Julien Muchembled authored
-
- 26 Apr, 2019 4 commits
-
-
Julien Muchembled authored
--kill-mysqld should be combined with something like -f .3 -r .1 to give storage nodes enough time to recover, and also with -D 0 to focus testing on the storage backend rather than on NEO.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 16 Apr, 2019 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This also reverts commit 442bb43a.
-
- 05 Apr, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This fixes up commit be839e92.
-
Julien Muchembled authored
-
- 01 Apr, 2019 1 commit
-
-
Julien Muchembled authored
-
- 21 Mar, 2019 2 commits
-
-
Julien Muchembled authored
This is not used currently.
-
Julien Muchembled authored
This breaks compatibility, but it was mentioned from the beginning that these options are only there for testing purposes.

TODO: rename all remaining occurrences of UUID to NID in the code
-
- 16 Mar, 2019 1 commit
-
-
Julien Muchembled authored
If the source DB is lost during the import and then restored from a backup, all new transactions have to be written back again on resume. This is the most common case in which the writeback hits the maximum number of transactions per partition to process at each iteration; the previous code was buggy in that it could skip transactions.
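As an illustration of the pagination pattern at stake, here is a minimal sketch with hypothetical names (not the actual NEO/py importer code) of a chunked writeback that resumes from the last processed tid instead of skipping ahead:

    # fetch_tids(min_tid, limit) is assumed to return up to `limit`
    # transaction ids strictly greater than min_tid, in ascending order.
    MAX_TXN_PER_PARTITION = 1024  # cap on transactions per iteration

    def write_back(fetch_tids, write_back_txn, min_tid):
        """Write back all transactions newer than min_tid, in bounded chunks."""
        while True:
            chunk = fetch_tids(min_tid, MAX_TXN_PER_PARTITION)
            if not chunk:
                return
            for tid in chunk:
                write_back_txn(tid)
            # Resume strictly after the last tid actually processed.
            # Advancing the cursor any other way (e.g. by a fixed step)
            # is the kind of mistake that can silently skip transactions.
            min_tid = chunk[-1]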
-
- 13 Mar, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-