    Reimplement pack in a scalable way, partial pack & approval/reject of pack orders · 4c3b6c4d
    Julien Muchembled authored
    This is still pack without garbage collection, and without deleting
    any transaction metadata ('trans' table).
    
    Partial pack means that the client can pass a list of oids: only these
    oids will be packed. No API is defined yet at IStorage level.
    
    Storage nodes pack in the background, independently of other storage
    nodes, partition by partition, and calling IStorage.pack() returns
    immediately (though internally, NEO does have a mechanism to wait
    until it's done, which can be required for some ZODB unit tests).
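
    A hedged usage sketch of this behaviour, as seen from a client. The NEO
    import path and the Storage() arguments are assumptions and may differ
    from the real API; the commented wait call is purely hypothetical (the
    commit only says such a mechanism exists internally for ZODB unit tests).

        import time
        from ZODB.serialize import referencesf  # standard ZODB reference extractor
        from neo.client.Storage import Storage  # assumed import path

        storage = Storage(master_nodes='127.0.0.1:10000', name='test')  # assumed args

        # Records a pack order and returns immediately; storage nodes then
        # pack in the background, partition by partition.
        storage.pack(time.time(), referencesf)

        # Hypothetical: block until all storage nodes processed the order
        # (NEO only exposes such a mechanism internally).
        # storage.app.wait_for_pack()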
    
    This new implementation also introduces the concept of signing pack
    orders. The idea is that calling IStorage.pack() only records a pack
    order in the database, which can be reviewed/approved/rejected using
    a UI that is left to be done. For the moment, pack orders are
    automatically approved (by the master).
    
    Internally, pack orders are stored as extra metadata of a transaction.
    IOW, IStorage.pack() implies the commit of an (empty) transaction.
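
    A hypothetical sketch of the information a pack order could carry when
    stored as extra metadata of such an empty transaction; the field names
    below are illustrative, not the actual NEO schema.

        from dataclasses import dataclass
        from typing import Optional, Sequence

        @dataclass
        class PackOrder:
            tid: bytes                       # tid of the empty transaction recording the order
            pack_tid: bytes                  # pack history strictly older than this tid
            oids: Optional[Sequence[bytes]]  # None for a full pack, else the partial-pack oid list
            approved: Optional[bool]         # None = unsigned, True = approved, False = rejected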
    
    IStorage.pack() can be called without waiting for the previous one
    to be completed. Pack orders are processed in the same order as they
    are requested:
    - an unsigned pack order blocks the processing of any newer pack order;
    - rejected pack orders are ignored.
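
    A minimal sketch (not NEO code) of these ordering rules, reusing the
    hypothetical PackOrder above:

        def processable_pack_orders(orders):
            """Yield the pack orders that may be processed now.

            `orders` lists the recorded pack orders, oldest first.
            """
            for order in orders:
                if order.approved is None:   # unsigned: wait for review,
                    break                    # which blocks all newer orders
                if not order.approved:       # rejected: ignore
                    continue
                yield order                  # approved: process in request order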
    
    Approving a pack order also triggers pack on backup clusters.
    That's the simplest way to have everything consistent.
    Maybe later we could identify scenarios where it would be ok
    to unsign pack orders during asynchronous replication.
    
    The feature to check replicas is marked as experimental because it is
    not aware of differences that can happen during pack operations.
    _______________________________________________________________________
    
    About concurrency within the storage node, a first implementation
    extended what was done to delete partitions in the background (see
    previous commit). But here, the job can't easily be split into slices
    that are never too big:
    - it's simpler to never split the processing of an oid, but this can
      freeze the application for a long time when packing an oid that was
      modified many times (e.g. 30 min for an oid with 20 million
      historical records);
    - a later attempt to process an oid in several passes was inefficient,
      maybe due to a limitation in RocksDB (packing the oid in the above
      example would take days, during which NEO is significantly slower).
    
    So background database jobs were moved to a separate thread, using a
    separate connection to the underlying database. This is obviously
    only useful for the MySQL backend. In order to share as much code as
    possible between backends, SQLite also does the work in a separate
    thread, but it shares the main connection instead of opening a separate
    one (so this backend would not be suited to the above example).
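
    A rough sketch, under assumptions, of this threading pattern (the names
    and the sqlite3 stand-in are made up; a MySQL-like backend would open
    its own connection in the thread, SQLite would reuse the main one):

        import queue
        import sqlite3
        import threading

        class BackgroundDB(threading.Thread):
            """Runs database background jobs (e.g. packing a partition)."""

            def __init__(self, db_path):
                super().__init__(daemon=True)
                self._db_path = db_path
                self._jobs = queue.Queue()

            def run(self):
                # Secondary connection, used only by this thread.
                conn = sqlite3.connect(self._db_path)
                while True:
                    job = self._jobs.get()
                    if job is None:
                        break
                    job(conn)
                conn.close()

            def submit(self, job):
                self._jobs.put(job)

            def stop(self):
                self._jobs.put(None)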
    
    But deleting raw data with a secondary connection is not possible
    without fsyncing too often (or running into transaction isolation
    issues...): these deletions are deferred by recording them in a new
    table, which is processed later with the main connection. This is not
    so bad because the actual deletion of raw data is usually more
    efficient this way (more sequential IO).
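
    An illustrative sketch of the deferral: the background thread only
    records what should be deleted, and the main connection later deletes
    the raw data in bulk. Table and column names, and the sqlite3-style
    placeholders, are assumptions for the example.

        def defer_deletion(bg_conn, data_ids):
            # Background thread, secondary connection: cheap inserts only.
            bg_conn.executemany(
                "INSERT INTO deferred_deletion (data_id) VALUES (?)",
                [(i,) for i in data_ids])
            bg_conn.commit()

        def flush_deferred_deletions(main_conn, limit=1000):
            # Main thread, main connection: delete the raw data in batches.
            ids = [(i,) for i, in main_conn.execute(
                "SELECT data_id FROM deferred_deletion LIMIT ?", (limit,))]
            if ids:
                main_conn.executemany("DELETE FROM data WHERE id = ?", ids)
                main_conn.executemany(
                    "DELETE FROM deferred_deletion WHERE data_id = ?", ids)
                main_conn.commit()
            return len(ids)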
    
    Here are a few numbers:
    - without load: 10h45 (12h for the first reimplementation)
    - with a load that normally takes 6h58:
      - load: 7h33 (so 8.4% slower)
      - pack: 15h36 (+4h51)
    
    As explained above, the pack of a partition is split into 2 steps:
    - the longest one (here 78% without load) should have negligible
      performance impact on the application, because the work is done in a
      separate thread with a secondary connection, and also with something
      to minimize GIL impact by prioritizing the main thread;
    - the shortest one (22%) processes the deferred deletions, with even
      lower priority than replication: it tries to split the work into
      tasks that take ~10ms.
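
    A rough sketch of such a ~10ms task, reusing the hypothetical
    flush_deferred_deletions() above (the budget and batch size are
    illustrative):

        import time

        def deferred_deletion_task(main_conn, budget=.01, batch=100):
            # Returns True if work remains, so the scheduler can requeue the
            # task at a priority even lower than replication.
            deadline = time.time() + budget
            while time.time() < deadline:
                if not flush_deferred_deletions(main_conn, limit=batch):
                    return False  # nothing left to delete
            return True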