- 22 Feb, 2024 5 commits
-
-
Julien Muchembled authored
Otherwise it leads to DB corruption and a crash of the master.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This fixes commit 0e43dd1f ("Fix signals not always being processed as soon as possible").
-
Julien Muchembled authored
See commit b6f821a2.
-
- 18 Dec, 2023 4 commits
-
-
Kirill Smelkov authored
Julien notes this is very likely unneeded: nexedi/neoppod!21 (diffs, comment 195929). We have had it like this since 01a01c8c ("client: Add support for zodburi"), but I rechecked the zodburi codebase now and it does not do any similar lowering anywhere. So drop support for case normalization in zurl options.

/cc @levin.zimmermann
/reviewed-by @jm
/reviewed-on nexedi/neoppod!21
-
Kirill Smelkov authored
Unfortunately, after creating an SSL context it is not possible (or at least I could not find how) to retrieve the original credentials with which the context was created. However wendelin.core needs to be able to take a client storage, reconstruct the zurl that refers to that particular storage, and pass that zurl to wcfs, so that wcfs, in turn, can access the same ZODB database.

Given a NEO client instance, it is already possible to retrieve master_nodes, the cluster name, and to detect whether SSL is in use. However, without being able to retrieve the original SSL credentials, the reconstructed zurl will not be complete and wcfs won't be able to use exactly the same secrets as the Python part does.

-> Help wendelin.core by remembering which ca/cert/key were used to build the SSL context.

This information is used by zstor_2zurl in wendelin.core here:
https://lab.nexedi.com/nexedi/wendelin.core/blob/885b3556/lib/zodb.py#L390-418

/cc @levin.zimmermann
/reviewed-by @jm
/reviewed-on nexedi/neoppod!21
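A minimal sketch of the idea in Python, with hypothetical names (ClientApp, ssl_credentials); the actual NEO code paths differ. Since ssl.SSLContext cannot be asked which files it was built from, the client simply keeps them next to the context:

    import ssl

    def make_ssl_context(ca, cert, key):
        # standard TLS setup; nothing here can be queried back afterwards
        ctx = ssl.create_default_context(cafile=ca)
        ctx.load_cert_chain(certfile=cert, keyfile=key)
        return ctx

    class ClientApp(object):  # hypothetical stand-in for the NEO client app
        def __init__(self, ca, cert, key):
            self.ssl = make_ssl_context(ca, cert, key)
            # remembered so that e.g. wendelin.core's zstor_2zurl can re-emit
            # ca/cert/key when reconstructing the zurl of this storage
            self.ssl_credentials = ca, cert, key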
-
Kirill Smelkov authored
Similarly to how it is done with e.g. http:// and https://: if neos:// is given, TLS usage is forced and ca/cert/key must be provided either in the URI itself, or via the $NEO_CA, $NEO_CERT and $NEO_KEY environment variables, mimicking the way TLS credentials for e.g. https:// are taken from the host environment rather than from the URI. The latter might be a usability convenience, but it is also useful for WCFS, which needs to be able to remove secrets from the URI on zurl normalization.

Please see the discussion at nexedi/neoppod!18 (comment 184439) for details.

/cc @levin.zimmermann
/reviewed-by @jm
/reviewed-on nexedi/neoppod!21
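A rough sketch of the intended resolution logic (the function name and parsing details are illustrative, not the actual NEO zodburi resolver; only the $NEO_CA/$NEO_CERT/$NEO_KEY names come from the commit message):

    import os
    from urllib.parse import urlsplit, parse_qsl

    def tls_from_uri(uri):
        # neos:// forces TLS: ca/cert/key must come from the URI query or,
        # failing that, from $NEO_CA / $NEO_CERT / $NEO_KEY.
        scheme, _, _, query, _ = urlsplit(uri)
        if scheme != 'neos':
            return None                      # plain neo://, TLS not forced
        opts = dict(parse_qsl(query))
        creds = tuple(opts.get(k) or os.environ.get('NEO_' + k.upper())
                      for k in ('ca', 'cert', 'key'))
        if not all(creds):
            raise ValueError("neos:// requires ca, cert and key, either in"
                             " the URI or in $NEO_CA/$NEO_CERT/$NEO_KEY")
        return creds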
-
Kirill Smelkov authored
Because the list of masters and the cluster name must already be present in netloc and path. Previously, e.g. neo://db@α,β,γ?master_nodes=a,b,c would mean to use master nodes {a,b,c}, not {α,β,γ}. Now such a URI is treated as invalid to remove the ambiguity. Same for the cluster name.

/cc @levin.zimmermann
/reviewed-by @jm
/reviewed-on nexedi/neoppod!21
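An illustrative sketch of the stricter parsing, following the URI shape shown in the example above (neo://cluster@master1,master2,...); the function and error messages are not the actual NEO code:

    from urllib.parse import urlsplit, parse_qsl

    def parse_neo_uri(uri):
        scheme, netloc, path, query, _ = urlsplit(uri)
        cluster, _, masters = netloc.partition('@')
        opts = dict(parse_qsl(query))
        # repeating these as query options used to silently override the
        # netloc; now it is rejected as ambiguous
        for key in 'master_nodes', 'cluster':
            if key in opts:
                raise ValueError("%s is invalid in the query part of a %s://"
                                 " URI" % (key, scheme))
        return cluster, masters.split(','), opts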
-
- 08 Nov, 2023 1 commit
-
-
Julien Muchembled authored
Pre-mortem data:
Traceback (most recent call last):
  File "neo/master/app.py", line 172, in run
    self._run()
  File "neo/master/app.py", line 180, in _run
    self.listening_conn = ListeningConnection(self, None, self.server)
  File "neo/lib/connection.py", line 298, in __init__
    connector.makeListeningConnection()
  File "neo/lib/connector.py", line 133, in makeListeningConnection
    self._error('listen', e)
  File "neo/lib/connector.py", line 93, in _error
    raise ConnectorException
ConnectorException

Traceback (most recent call last):
  File "neomaster", line 50, in <module>
    sys.exit(neo.scripts.neomaster.main())
  File "neo/scripts/neomaster.py", line 31, in main
    app.run()
  File "neo/master/app.py", line 175, in run
    self.log()
  File "neo/master/app.py", line 167, in log
    if self.pt is not None:
AttributeError: 'Application' object has no attribute 'pt'
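For context, a defensive guard of the kind that avoids the secondary AttributeError could look like the sketch below (attribute names are taken from the traceback; the actual fix may instead initialize the attribute earlier):

    class Application(object):
        # hypothetical excerpt of the pre-mortem logging path
        def log(self):
            # 'pt' may not exist yet if the master dies while still starting
            # up, e.g. when the listening socket could not be created
            pt = getattr(self, 'pt', None)
            if pt is not None:
                pt.log()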
-
- 16 Oct, 2023 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This is still pack without garbage collection, and without deleting any transaction metadata ('trans' table).

Partial pack means that the client can take a list of oids: only these oids will be packed. No API is defined yet at IStorage level.

Storage nodes pack in background, independently from other storage nodes, partition by partition, and calling IStorage.pack() returns immediately (though internally, NEO does have a mechanism to wait until it's done, which can be required for some ZODB unit tests).

This new implementation also introduces the concept of signing pack orders. The idea is that calling IStorage.pack() only records a pack order in the database, which can be reviewed/approved/rejected using a UI that is left to be done. For the moment, pack orders are automatically approved (by the master). Internally, pack orders are stored as extra metadata of a transaction. IOW, IStorage.pack() implies the commit of an (empty) transaction.

IStorage.pack() can be called without waiting for the previous one to be completed. Pack orders are processed in the same order as they are requested:
- an unsigned pack order blocks the processing of any newer pack order;
- rejected pack orders are ignored.

Approving a pack order also triggers pack on backup clusters. That's the simplest way to have everything consistent. Maybe later we could identify scenarios where it would be ok to unsign pack orders during asynchronous replication.

The feature to check replicas is marked as experimental because it is not aware of differences that can happen during pack operations.
_______________________________________________________________________

About concurrency within the storage node, a first implementation extended what was done to delete partitions in background (see previous commit). But here, the job can't be easily split in slices that are never too big:
- it's simpler to never split the processing of an oid, but this can freeze the application for a long time when packing an oid that was modified many times (e.g. 30 min for an oid with 20 million historical records);
- then an attempt to let an oid be processed in several steps was inefficient, maybe due to a limit in RocksDB (packing the oid in the above example would take days, during which NEO is significantly slower).

So background database jobs were moved to a separate thread, using a separate connection to the underlying database. This is obviously only useful for the MySQL backend. In order to share as much code as possible between backends, SQLite also does the work in a separate thread, but sharing the main connection instead of opening a separate one (so such a backend would not be suited for the above example). But deleting raw data with a secondary connection is not possible without fsyncing too often (or transaction isolation issues...): these deletions are deferred by recording them in a new table, which is processed later with the main connection. This is not so bad because the actual deletion of raw data is usually more efficient this way (more sequential IO).

Here are a few numbers:
- without load: 10h45 (12h for the first reimplementation)
- with a load that normally takes 6h58:
  - load: 7h33 (so 8.4% slower)
  - pack: 15h36 (+4h51)

As explained above, the pack of a partition is split in 2 steps:
- the longest one (here 78% without load) should have negligible performance impact on the application, because the work is done in a separate thread with a secondary connection, and also with something to minimize GIL impact by prioritizing the main thread;
- the shortest one (22%), to process the deferred deletions, with even lower priority than replication: it tries to split the work in tasks that take ~10ms.
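A small sketch of the deferred-deletion pattern described above, with SQLite syntax and made-up table/column names (the real NEO schema differs): the pack thread only records what must be deleted, and the main connection later drains that table in small batches so that each step stays around 10ms.

    import time

    def defer_deletion(background_conn, data_ids):
        # called from the pack thread, with its own DB connection:
        # record the ids instead of deleting the raw data directly
        background_conn.executemany(
            "INSERT INTO todel (data_id) VALUES (?)",
            [(i,) for i in data_ids])

    def process_deferred(main_conn, batch=100, budget=0.01):
        # called from the main thread, between network requests
        deadline = time.time() + budget
        while time.time() < deadline:
            ids = [r[0] for r in main_conn.execute(
                "SELECT data_id FROM todel LIMIT ?", (batch,))]
            if not ids:
                break
            qs = ",".join("?" * len(ids))
            main_conn.execute("DELETE FROM data WHERE id IN (%s)" % qs, ids)
            main_conn.execute("DELETE FROM todel WHERE data_id IN (%s)" % qs,
                              ids)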
-
- 11 Oct, 2023 1 commit
-
-
Julien Muchembled authored
This is implemented using the same concurrency mechanism as for the replication: the work is split in slices that should be small enough to avoid slowing down network requests significantly.
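A sketch of the slice-based scheduling this refers to (names and the MySQL-flavoured SQL are illustrative, not the actual NEO backend code): delete a bounded number of rows, then yield back to the event loop so pending network requests are served between slices.

    def drop_partition_in_slices(db, partition, limit=1000):
        # generator driven by the node's event loop: each step is one slice
        while True:
            n = db.query("DELETE FROM obj WHERE `partition`=%s LIMIT %s"
                         % (partition, limit))
            yield           # let the event loop handle pending requests
            if n < limit:   # fewer rows than the limit: partition is empty
                return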
-
- 04 Apr, 2023 9 commits
-
-
Julien Muchembled authored
undone_data_tid can't be equal to a TTID.
-
Julien Muchembled authored
It has never been enabled, and the code to drop partitions will be changed in a way that only 'trans' may still benefit from partitioning. We'll see in the future whether we have cases where 'trans' is too big to delete all rows (of a given partition) in a single query.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
- When undoing current record, fix:
  - crash of storage nodes that don't have the undo data (non-readable cells);
  - and conflict resolution.
- Fix undo deduplication in replication when NEO deduplication is disabled.
- client: minor fixes in undo() about concurrent storage disconnections and PT updates.
-
Julien Muchembled authored
-
Julien Muchembled authored
Found by running testPruneOrphan many times. Once I even got:

SystemError: NULL result without error in PyObject_Call
-
- 09 Mar, 2023 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This reverts a wrong change from commit 30a02bdc ("importer: new option to write back new transactions to the source database").
-
- 19 Feb, 2023 1 commit
-
-
Julien Muchembled authored
-
- 16 Feb, 2023 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 14 Feb, 2023 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
It's been many years since we stopped getting 'array' objects; no idea when exactly.
-
Julien Muchembled authored
-
- 10 Feb, 2023 1 commit
-
-
Julien Muchembled authored
Like commit 243c1a0f ("sqlite: optimize storage of metadata"), the fake changes in test data are because we don't force upgrade for this optimization.
-
- 02 Feb, 2022 1 commit
-
-
Kirill Smelkov authored
Starting from zodbpickle 2, its binary class does not allow users to set arbitrary attributes, and so

    binary._pack = bytes.__str__

fails with

    TypeError: can't set attributes of built-in/extension type 'zodbpickle.binary'

-> Fix it by explicitly checking for the binary type on encoding instead of setting binary._pack.

See nexedi/slapos@27f574bc for pre-history.

/cc @jerome
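A sketch of the approach, assuming a msgpack-style default() hook (the actual NEO encoder differs): handle zodbpickle.binary explicitly while packing instead of monkey-patching the class.

    from zodbpickle import binary

    def default(obj):
        # called by the serializer for types it does not handle natively
        if isinstance(obj, binary):
            return bytes(obj)          # encode it as plain bytes
        raise TypeError("cannot encode %r" % (obj,))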
-
- 04 Jun, 2021 1 commit
-
-
Julien Muchembled authored
Traceback (most recent call last):
  ...
  File ".../neo/lib/handler.py", line 75, in dispatch
    method(conn, *args, **kw)
  File ".../neo/admin/handler.py", line 174, in wrapper
    return func(self, name, *args, **kw)
  File ".../neo/admin/handler.py", line 190, in notifyMonitorInformation
    self.app.updateMonitorInformation(name, **info)
  File ".../neo/admin/app.py", line 290, in updateMonitorInformation
    self._notify(self.operational)
  File ".../neo/admin/app.py", line 315, in _notify
    body += '', name, ' ' + backup.formatSummary(upstream)[1]
  File ".../neo/admin/app.py", line 83, in formatSummary
    tid = self.ltid
AttributeError: 'Backup' object has no attribute 'ltid'
-
- 11 May, 2021 1 commit
-
-
Julien Muchembled authored
-
- 02 Apr, 2021 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-