1. 17 May, 2024 1 commit
    • SQUASH Apply part of first review · 61fa401a
      Vincent Pelletier authored
      Mark a line to be folded back once migrated to Python 3.
      Make storage.database.manager's getFirstTID return the TID packed,
      and update its docstring to stop saying the result is unpacked.
      Also, fix None handling in storage.database.manager and in
      master.transaction.
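      For context, a minimal sketch of the packed vs. unpacked
      distinction, using struct directly instead of NEO's own helpers;
      getFirstTID below is a simplified stand-in, not the actual method:

        from struct import pack, unpack

        def p64(n):
            # Pack an integer TID into its 8-byte big-endian form.
            return pack('!Q', n)

        def u64(s):
            # Unpack an 8-byte TID back into an integer.
            return unpack('!Q', s)[0]

        def getFirstTID(first_tid):
            # Hypothetical stand-in: after this commit, the method
            # returns the packed form, and None for an empty database.
            return None if first_tid is None else p64(first_tid)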
  2. 16 May, 2024 2 commits
    • master: Forbid truncation before the database's first transaction · 6dffb894
      Vincent Pelletier authored
      This is intended as a sanity check, so that simple typos in the
      neoctl truncate command do not easily lead to the entire database
      being wiped.
    • neoctl: Change the expected tid-or-timestamp format · f70a688c
      Vincent Pelletier authored
      Before this change, the only distinction between a timestamp and a
      TID was the presence of the decimal separator, ".". As a result, a
      timestamp mistakenly provided without a decimal separator would be
      interpreted as a TID falling somewhere in January 1900 (as TIDs are
      64-bit values with much finer resolution than timestamps). When used
      to truncate a database, in the absence of sanity checks, this would
      simply wipe the database.

      So, instead of relying only on a decimal separator, require a longer
      string. Make it a prefix for readability. Also, as TIDs are more
      niche than timestamps, require them to carry the marker, and require
      nothing from timestamps.
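      A minimal sketch of such parsing; the exact marker chosen by the
      commit is not shown in this log, so the 'tid:' prefix and the hex
      encoding below are assumptions (the TimeStamp layout also explains
      the "January 1900" behaviour: a small integer decodes to minute 0
      of year 1900):

        from struct import pack
        from time import gmtime

        def tidFromTime(t):
            # Unix timestamp -> 8-byte TID (ZODB TimeStamp layout).
            y, mo, d, h, mi, s = gmtime(t)[:6]
            minutes = ((((y - 1900) * 12 + mo - 1) * 31 + d - 1) * 24
                       + h) * 60 + mi
            return pack('!II', minutes, int((s + t % 1) * (1 << 32) / 60))

        def parseTidOrTimestamp(value):
            # TIDs are niche, so they must carry an explicit marker;
            # 'tid:' is an assumption, not the commit's actual choice.
            if value.startswith('tid:'):
                return pack('!Q', int(value[4:], 16))
            # Timestamps need no marker, with or without a decimal
            # separator, so "1715817600" is no longer misread as a TID.
            return tidFromTime(float(value))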
  3. 09 May, 2024 4 commits
  4. 16 Apr, 2024 1 commit
  5. 22 Mar, 2024 6 commits
  6. 22 Feb, 2024 8 commits
  7. 18 Dec, 2023 4 commits
  8. 08 Nov, 2023 1 commit
    • master: fix crash when aborting early e.g. when failing to open listening socket · 9a3898e4
      Julien Muchembled authored
      Pre-mortem data:
      Traceback (most recent call last):
        File "neo/master/app.py", line 172, in run
          self._run()
        File "neo/master/app.py", line 180, in _run
          self.listening_conn = ListeningConnection(self, None, self.server)
        File "neo/lib/connection.py", line 298, in __init__
          connector.makeListeningConnection()
        File "neo/lib/connector.py", line 133, in makeListeningConnection
          self._error('listen', e)
        File "neo/lib/connector.py", line 93, in _error
          raise ConnectorException
      ConnectorException
      Traceback (most recent call last):
        File "neomaster", line 50, in <module>
          sys.exit(neo.scripts.neomaster.main())
        File "neo/scripts/neomaster.py", line 31, in main
          app.run()
        File "neo/master/app.py", line 175, in run
          self.log()
        File "neo/master/app.py", line 167, in log
          if self.pt is not None:
      AttributeError: 'Application' object has no attribute 'pt'
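      The actual fix is not shown in this log; one way to avoid the
      secondary AttributeError would be to guard the attribute access
      in log(), as in this sketch:

        class Application(object):
            # ... (rest of the class elided)

            def log(self):
                # self.pt may not exist yet if run() aborts before
                # _run() finishes initialization.
                pt = getattr(self, 'pt', None)
                if pt is not None:
                    pt.log()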
  9. 16 Oct, 2023 5 commits
    • b6f821a2
    • d112bfbd
    • Bump protocol version · 0fc95175
      Julien Muchembled authored
    • Reimplement pack in a scalable way, partial pack & approval/reject of pack orders · 4c3b6c4d
      Julien Muchembled authored
      This is still pack without garbage collection, and without deleting
      any transaction metadata ('trans' table).
      
      Partial pack means that the client can pass a list of oids: only
      these oids will be packed. No API is defined yet at the IStorage
      level.
      
      Storage nodes pack in the background, independently of other storage
      nodes, partition by partition, and calling IStorage.pack() returns
      immediately (though internally, NEO does have a mechanism to wait
      until it's done, which some ZODB unit tests require).
      
      This new implementation also introduces the concept of signing pack
      orders. The idea is that calling IStorage.pack() only records a pack
      order in the database, which can then be reviewed/approved/rejected
      using a UI that is left to be done. For the moment, pack orders are
      automatically approved (by the master).
      
      Internally, pack orders are stored as extra metadata of a transaction.
      IOW, IStorage.pack() implies the commit of an (empty) transaction.
      
      IStorage.pack() can be called without waiting for the previous call
      to complete. Pack orders are processed in the same order as they are
      requested (see the sketch below):
      - an unsigned pack order blocks the processing of any newer pack order;
      - rejected pack orders are ignored.
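      A minimal sketch of these ordering rules; the names and the plain
      deque are illustrative, not NEO's actual data structures:

        from collections import deque

        # Oldest first; each order has a state among
        # 'unsigned', 'approved' and 'rejected'.
        orders = deque()

        def processPackOrders(do_pack):
            while orders:
                order = orders[0]
                if order.state == 'unsigned':
                    return  # an unsigned order blocks all newer ones
                orders.popleft()
                if order.state == 'approved':
                    do_pack(order)  # rejected orders are simply skipped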
      
      Approving a pack order also triggers pack on backup clusters.
      That's the simplest way to have everything consistent.
      Maybe later we could identify scenarios where it would be ok
      to unsign pack orders during asynchronous replication.
      
      The feature to check replicas is marked as experimental because it is
      not aware of differences that can happen during pack operations.
      _______________________________________________________________________
      
      About concurrency within the storage node, a first implementation
      extended what was done to delete partitions in background (see the
      previous commit). But here, the job can't easily be split into
      slices that are never too big:
      - it's simpler to never split the processing of an oid, but this can
        freeze the application for a long time when packing an oid that was
        modified many times (e.g. 30 min for an oid with 20 million
        historical records);
      - a later attempt to process an oid in several passes turned out to
        be inefficient, maybe due to a limit in RocksDB (packing the oid in
        the above example would take days, during which NEO is
        significantly slower).
      
      So background database jobs were moved to a separate thread, using a
      separate connection to the underlying database. This is obviously
      only useful for the MySQL backend. In order to share as much code as
      possible between backends, SQLite also does the work in a separate
      thread, but sharing the main connection instead of opening a
      separate one (so such a backend would not be suited to the above
      example).
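      A minimal sketch of that pattern in generic Python (SQLite is used
      only to make it self-contained; none of these names are NEO's):

        import queue
        import sqlite3
        import threading

        class BackgroundJobs(object):
            # Run database jobs in a dedicated thread that owns its
            # own connection (the MySQL case described above).

            def __init__(self, path):
                self._jobs = queue.Queue()
                threading.Thread(target=self._run, args=(path,),
                                 daemon=True).start()

            def _run(self, path):
                # Secondary connection, used by this thread only.
                conn = sqlite3.connect(path)
                while True:
                    self._jobs.get()(conn)  # run the next job

            def defer(self, job):
                self._jobs.put(job)  # called from the main thread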
      
      But deleting raw data with a secondary connection is not possible
      without fsyncing too often (or running into transaction isolation
      issues): these deletions are deferred by recording them in a new
      table, which is processed later with the main connection. This is
      not so bad, because the actual deletion of raw data is usually more
      efficient this way (more sequential IO).
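      A sketch of that deferral, with made-up table and column names:

        def deferDeletion(secondary_conn, oid, serial):
            # Background thread: record what to delete instead of
            # deleting it now, avoiding extra fsyncs on this connection.
            secondary_conn.execute(
                "INSERT INTO deferred_deletions (oid, serial)"
                " VALUES (?, ?)", (oid, serial))

        def flushDeferredDeletions(main_conn):
            # Later, with the main connection: bulk deletion is mostly
            # sequential IO, hence usually more efficient.
            main_conn.execute(
                "DELETE FROM data WHERE (oid, serial) IN"
                " (SELECT oid, serial FROM deferred_deletions)")
            main_conn.execute("DELETE FROM deferred_deletions")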
      
      Here are a few numbers:
      - without load: 10h45 (12h for the first reimplementation)
      - with a load that normally takes 6h58:
        - load: 7h33 (so 8.4% slower)
        - pack: 15h36 (+4h51)
      
      As explained above, the pack of a partition is split into 2 steps:
      - the longest one (here 78% without load) should have negligible
        performance impact on the application, because the work is done in
        a separate thread with a secondary connection, and also with
        something to minimize GIL impact by prioritizing the main thread;
      - the shortest one (22%) processes the deferred deletions, with even
        lower priority than replication: it tries to split the work into
        tasks that take ~10ms (see the sketch below).
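      The ~10ms budget comes from the message above; the scheduling
      sketch is otherwise illustrative:

        import time

        def runInSlices(step, budget=0.010):
            # Call step() repeatedly within a ~10ms budget; step()
            # returns False once there is nothing left to do.
            deadline = time.time() + budget
            while step():
                if time.time() >= deadline:
                    return False  # out of budget: reschedule later,
                                  # with low priority
            return True  # all deferred work done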
  10. 11 Oct, 2023 1 commit
  11. 04 Apr, 2023 7 commits