  1. 30 May, 2018 2 commits
    • Optimize resumption of replication by starting from a greater TID · b3dd6973
      Julien Muchembled authored
      Although data that have already been transferred are not transferred again,
      checking that the data are there for a whole partition can still be a lot of
      work for big databases. This commit is a major performance improvement: a
      storage node that gets disconnected for a short time now becomes fully
      operational almost instantaneously, because it only has to replicate the new
      data. Before, the recovery time depended on the size of the DB.
      
      For OUT_OF_DATE cells, the difficult part was that they are writable and can
      therefore contain holes, so we can't just take the last TID in trans/obj
      (we wrongly did that at the beginning, and then committed
      6b1f198f as a workaround). We solve this by storing the TID up to which the
      cell is known to be up-to-date: this value is initialized from the last TIDs
      in trans/obj when the state switches from UP_TO_DATE/FEEDING.
      
      There's actually one such OUT_OF_DATE TID per assigned cell (backends store
      these values in the 'pt' table). Otherwise, a cell that still has a lot to
      replicate would cause all other cells to resume from a very small TID, or
      even ZERO_TID; the worst case is when a new cell is assigned to a node
      (as a result of a tweak).
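
      A minimal sketch of the idea, with hypothetical class and attribute names
      (NEO's actual code differs): each assigned cell carries its own resume TID,
      persisted by the backend, instead of sharing a single global one.

        ZERO_TID = '\0' * 8  # TIDs are 8-byte strings

        class OutOfDateCell(object):
            def __init__(self, last_tid=ZERO_TID):
                # Initialized from the last TIDs in trans/obj when the cell
                # leaves UP_TO_DATE/FEEDING (it has no holes until then),
                # then advanced as replication progresses.
                self.replicated_up_to = last_tid

        def resume_tid(cell):
            # Replication of this cell restarts from here instead of ZERO_TID,
            # so one late cell does not slow down the others.
            return cell.replicated_up_to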
      
      For UP_TO_DATE cells of a backup cluster, replication used to resume from the
      maximum TID at which all assigned cells were known to be fully replicated.
      As with OUT_OF_DATE cells, the presence of a late cell could cause a lot of
      extra work for the others, the worst case being when setting up a backup
      cluster (it always restarted from ZERO_TID as long as at least one cell was
      still empty). Because UP_TO_DATE cells are guaranteed to have no holes,
      there's no need to store extra information: we simply look at the last TIDs
      in trans/obj. We even handle trans and obj independently, to minimize the
      work in one table (i.e. trans, since it is processed first) when the other
      (obj) is late.
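
      A hedged sketch of that last point, assuming a MySQL-like backend and using
      hypothetical column names: since UP_TO_DATE cells have no holes, the resume
      point is simply the last TID present, and trans and obj are queried
      independently so that a late obj does not hold back trans.

        def last_replicated_tids(cursor, partition):
            tids = {}
            for table in 'trans', 'obj':
                cursor.execute("SELECT MAX(tid) FROM %s WHERE `partition`=%%s"
                               % table, (partition,))
                tids[table] = cursor.fetchone()[0]  # None if the table is empty
            return tids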
      
      There's a small change in the protocol so that the OUT_OF_DATE enum value
      equals 0. This way, backends can store the OUT_OF_DATE TID (verbatim) in the
      same column as the cell state.
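
      A purely illustrative encoding, not necessarily the one NEO's backends use,
      just to show why OUT_OF_DATE == 0 is convenient: a single integer column can
      hold either the (large, positive) replicated-up-to TID of an OUT_OF_DATE cell
      or a marker for the other states, and a freshly outdated, empty cell is
      stored as plain 0.

        OUT_OF_DATE, UP_TO_DATE, FEEDING = 0, 1, 2

        def encode(state, out_of_date_tid=0):
            # hypothetical scheme: other states are stored as negative numbers
            return out_of_date_tid if state == OUT_OF_DATE else -state

        def decode(value):
            if value >= 0:
                return OUT_OF_DATE, value  # value is the replicated-up-to TID
            return -value, None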
      
      Note about the MySQL changes in commit ca58ccd7:
      what was done there as a workaround is no longer just a workaround. We now do
      so much on the Python side that it's unlikely we could reduce the number of
      queries by using GROUP BY. We even stopped doing that for SQLite.
  2. 25 May, 2018 1 commit
  3. 24 May, 2018 9 commits
  4. 17 May, 2018 1 commit
    • fixup! importer: fetch and process the data to import in a separate process · dc220d04
      Julien Muchembled authored
      - for FileStorage DBs, make sure a transaction index is built at most once
      - for other DB types, reopen the DB in the subprocess (see the sketch below)
      
      Now that we have specific code for FileStorage, the generic case is not tested
      anymore. We should add a test using ZEO, or better (and somewhat crazy) one
      with NEO, but that would require fixing a special case in getObject.
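
      A generic sketch of the second point, with hypothetical helpers (this is not
      the Importer's actual code): a database handle inherited through fork() must
      not be used by both processes, so the child opens its own connection before
      fetching.

        import multiprocessing

        def fetch(config, queue):
            db = open_source_db(config)  # hypothetical: fresh connection in the child
            try:
                for record in db.iterator():
                    queue.put(record)
            finally:
                queue.put(None)          # sentinel: nothing more to import
                db.close()

        def start_fetcher(config):
            queue = multiprocessing.Queue(maxsize=100)
            multiprocessing.Process(target=fetch, args=(config, queue)).start()
            return queue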
  5. 16 May, 2018 5 commits
    • Serialize empty transaction extension with an empty string · a6d4c4e9
      Julien Muchembled authored
      The protocol version is increased to ensure that client nodes are able to
      handle an empty 'extension' field in AnswerTransactionInformation.
      
      It also means that once new transactions are written, going back to a previous
      revision is not possible.
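
      A minimal sketch of the convention, with hypothetical helper names: an empty
      extension is transmitted as an empty string instead of a pickled empty
      dictionary, and readers accept both forms.

        from cPickle import dumps, loads

        def dump_extension(ext):
            return dumps(ext) if ext else ''

        def load_extension(data):
            return loads(data) if data else {}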
    • client: fix partial import from a source storage · 346c9d00
      Julien Muchembled authored
      The correct way to specify a start/stop TID is when constructing the 'source'
      object, hence the removal of the start/stop args. In fact, source.iterator()
      does not always take such args.
      
      On the other hand, when resuming an import, Application.importFrom must cope
      with an incomplete preindex.
    • qa: give a title to subprocesses of functional tests · b648904b
      Julien Muchembled authored
      Same as the previous commit: purely cosmetic, and therefore optional.
    • importer: give a title to the 'import' and 'writeback' subprocesses · 461df152
      Julien Muchembled authored
      'title' means both the process name and the command line.
      
      This is purely cosmetic, so it won't fail if the 'setproctitle' module
      is not available.
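
      A minimal sketch of how such an optional dependency is typically handled
      (not necessarily the exact code of this commit):

        try:
            from setproctitle import setproctitle
        except ImportError:  # purely cosmetic, so degrade silently
            def setproctitle(title):
                pass

        def set_subprocess_title(title):
            # Changes what ps/top display for the subprocess.
            setproctitle(title)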
    • importer: fetch and process the data to import in a separate process · 05bf48de
      Julien Muchembled authored
      A new subprocess is used to:
      - fetch data from the source DB
      - repickle to change oids (when merging several DBs)
      - compress
      - checksum
      
      This is mostly useful for the second step, which is much slower than any other
      step and does not release the GIL.
      
      By using a second CPU core, it is also often possible to use a better
      compression algorithm for free (e.g. zlib=9). Actually, smaller data can speed
      up the writing process.
      
      In addition to greatly speeding up the import by parallelizing fetch+process
      with write, this also makes the main process more responsive to queries from
      client nodes.
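
      A hedged sketch of the pipeline, with hypothetical helpers for the source
      iterator, the oid-remapping repickler and the writer; only the overall shape
      (child does fetch/repickle/compress/checksum, parent only writes) reflects
      this commit.

        import multiprocessing, zlib
        from hashlib import sha1

        def producer(queue):
            for oid, data in iter_source():    # hypothetical source iterator
                data = repickle(oid, data)     # hypothetical oid remapping
                data = zlib.compress(data, 9)  # a better level is ~free on a 2nd core
                queue.put((oid, sha1(data).digest(), data))
            queue.put(None)                    # sentinel: end of stream

        def import_all():
            queue = multiprocessing.Queue(maxsize=1024)
            multiprocessing.Process(target=producer, args=(queue,)).start()
            for oid, checksum, data in iter(queue.get, None):
                write_record(oid, checksum, data)  # hypothetical writer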
  6. 15 May, 2018 1 commit
    • importer: new option to write back new transactions to the source database · 30a02bdc
      Julien Muchembled authored
      By doing the work asynchronously and in a separate process, with secondary
      connections to the underlying databases, this should have minimal impact on
      the performance of the storage node. Extra complexity comes from backends
      that may lose their connection to the database (here MySQL): this commit
      fully implements reconnection.
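
      A minimal sketch of the reconnection idea, with a hypothetical connection
      API (the real backend code is more involved):

        import time

        def execute_with_reconnect(conn, query, args=(), delay=1):
            while True:
                try:
                    return conn.query(query, args)
                except conn.OperationalError:  # e.g. "MySQL server has gone away"
                    conn.close()
                    time.sleep(delay)
                    conn.connect()             # reopen the secondary connection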
  7. 11 May, 2018 3 commits
  8. 07 May, 2018 4 commits
  9. 18 Apr, 2018 3 commits
  10. 16 Apr, 2018 3 commits
    • Fix a few issues with ZODB5 · 1316c225
      Julien Muchembled authored
      In the Importer storage backend, the repickler code never really worked with
      ZODB 5 (which uses pickle protocols > 1), and now the test does not pass
      anymore.
      
      The other issues, caused by ZODB commit 12ee41c47310156027a674932df34b60de86ba36,
      are fixed:
      
        TypeError: list indices must be integers, not binary
      
        ValueError: unsupported pickle protocol: 3
      
      Although not necessary as long as we don't support Python 3,
      this commit also replaces `str` with `bytes` in a few places.
    • importer: do not trigger speedupFileStorageTxnLookup uselessly · 3bcac6d3
      Julien Muchembled authored
      When importing a FileStorage DB without interruption and without having to
      serve client nodes, the index built by speedupFileStorageTxnLookup is useless.
      This is the case when running simulation tests, and on a DB with many oids,
      building this index can take a lot of time and memory for nothing.
  11. 13 Apr, 2018 2 commits
  12. 12 Apr, 2018 2 commits
  13. 10 Apr, 2018 1 commit
  14. 29 Mar, 2018 2 commits
    • master: automatically discard feeding cells that get out-of-date · 3efbbfe3
      Julien Muchembled authored
      This is a follow-up of commit 2ca7c335,
      which changed 'tweak' not to discard readable cells too quickly.
      
      The scenario of a storage being lost while it has feeding cells was forgotten.
      These must be discarded immediately, otherwise we end up with more up-to-date
      cells than wanted. Without the change in outdate(), testSafeTweak would end
      with: UU.|U.U|UUU
      
      Once replication is optimized not to always restart checking cells from the
      beginning:
      - Remembering that an out-of-date cell was feeding could be a safer
        option, but it may not be worth the extra complexity.
      - Another possibility may be to replace the FEEDING state with an automatic
        partial tweak that discards surplus up-to-date cells whenever a cell
        becomes up-to-date.
    • 3443d483
  15. 20 Mar, 2018 1 commit