1. 20 Apr, 2016 1 commit
  2. 25 Jan, 2016 1 commit
  3. 01 Dec, 2015 1 commit
    • Safer DB truncation, new 'truncate' ctl command · d3c8b76d
      Julien Muchembled authored
      With the previous commit, the request to truncate the DB was not stored
      persistently, which means that this operation was still vulnerable to the case
      where the master is restarted after some nodes, but not all, have already
      truncated. The master didn't have the information to fix this, and the result
      was a partially truncated DB.
      
      -> On a Truncate packet, a storage node only stores the tid somewhere, to send
         it back to the master, which stays in RECOVERING state as long as any node
         has a different value from that of the node with the latest partition table.
      
      We also want to make sure that there is no unfinished data, because a user may
      truncate at a tid higher than a locked one.
      
      -> Truncation is now effective at the end of the VERIFYING phase, just before
         returning the last ids to the master.
      
      Finally, all nodes should be truncated, to avoid an offline node coming back
      with a different history. Currently this would not be an issue, since
      replication always restarts from the beginning, but later we'd like nodes to
      remember where they stopped replicating.
      
      -> If a truncation is requested, the master waits for all nodes to be pending,
         even if the cluster was previously started (the user can still force the
         cluster to start with neoctl). Any node lost during verification also
         causes the master to go back to recovery.
      
      Obviously, the protocol has been changed to split the LastIDs packet and
      introduce a new Recovery packet, since it no longer makes sense to ask for
      last ids during recovery.
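The two-phase flow described above (record the requested tid on Truncate, report it during recovery, apply it only at the end of VERIFYING) can be sketched roughly as follows. This is a minimal illustration, not NEO's actual API: the class, method names, and tid handling are all hypothetical.

```python
# Hypothetical sketch of the storage-side truncation flow; NEO's real
# implementation persists the marker in the database, not in memory.

class StorageNode:
    def __init__(self):
        self.truncate_tid = None   # persisted marker, survives restarts
        self.last_tid = 100        # illustrative last committed tid

    def on_truncate(self, tid):
        # On a Truncate packet, only record the requested tid; do not
        # truncate anything yet.
        self.truncate_tid = tid

    def answer_recovery(self):
        # Reported to the master, which stays in RECOVERING as long as any
        # node disagrees with the node holding the latest partition table.
        return self.truncate_tid

    def finish_verification(self):
        # Truncation becomes effective at the end of the VERIFYING phase,
        # just before returning the last ids to the master.
        if self.truncate_tid is not None:
            self.last_tid = min(self.last_tid, self.truncate_tid)
            self.truncate_tid = None
        return self.last_tid

node = StorageNode()
node.on_truncate(42)
assert node.answer_recovery() == 42      # master sees the pending truncation
assert node.finish_verification() == 42  # applied at end of VERIFYING
assert node.answer_recovery() is None    # marker cleared once applied
```

Keeping the marker persistent is what closes the window the commit message describes: a restart between Truncate and VERIFYING no longer loses the request.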
  4. 05 Oct, 2015 1 commit
  5. 24 Sep, 2015 2 commits
  6. 14 Aug, 2015 1 commit
    • Do not reconnect too quickly to a node after an error · d898a83d
      Julien Muchembled authored
      For example, a backup storage node that was rejected because the upstream
      cluster was not ready could reconnect in a loop without delay, using 100% CPU
      and flooding the logs.
      
      A new 'setReconnectionNoDelay' method on Connection can be used for cases where
      it's legitimate to quickly reconnect.
      
      With this new delayed reconnection, it's possible to remove the remaining
      time.sleep().
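The mechanism can be sketched as a connection that backs off after each failure unless explicitly told the next reconnection is legitimate. Only the method name `setReconnectionNoDelay` comes from the commit; the delay values and surrounding logic are assumptions for illustration.

```python
import time

class Connection:
    """Illustrative sketch of delayed reconnection, not NEO's real class."""
    BASE_DELAY = 1.0   # assumed initial delay after an error
    MAX_DELAY = 30.0   # assumed cap on the backoff

    def __init__(self):
        self._next_attempt = 0.0
        self._delay = self.BASE_DELAY
        self._no_delay = False

    def setReconnectionNoDelay(self):
        # For the cases where it's legitimate to reconnect quickly.
        self._no_delay = True

    def connection_failed(self):
        if self._no_delay:
            self._no_delay = False
            return
        # Back off so a rejected node does not spin at 100% CPU.
        self._next_attempt = time.time() + self._delay
        self._delay = min(self._delay * 2, self.MAX_DELAY)

    def may_reconnect(self):
        return time.time() >= self._next_attempt

conn = Connection()
conn.connection_failed()
assert not conn.may_reconnect()  # must wait out the delay

fast = Connection()
fast.setReconnectionNoDelay()
fast.connection_failed()
assert fast.may_reconnect()      # no-delay flag skips the backoff once
```

With such a gate driven by the event loop, the polling `time.sleep()` calls the message mentions become unnecessary.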
  7. 12 Aug, 2015 2 commits
  8. 21 May, 2015 1 commit
  9. 07 Jan, 2014 1 commit
  10. 20 Aug, 2012 1 commit
  11. 16 Aug, 2012 1 commit
  12. 01 Aug, 2012 1 commit
  13. 13 Mar, 2012 1 commit
  14. 12 Mar, 2012 1 commit
    • New feature to check that partitions are replicated properly · 04f72a4c
      Julien Muchembled authored
      This includes an API change of Node.isIdentified, which now tells whether
      identification packets have been exchanged or not.
      All handlers must be updated to implement '_acceptIdentification' instead of
      overriding EventHandler.acceptIdentification; this patch only does it for
      StorageOperationHandler.
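The handler change is a template-method split: the base class keeps the common identification bookkeeping and subclasses override a protected hook. The names `acceptIdentification`, `_acceptIdentification`, and `StorageOperationHandler` are from the commit; the bodies below are a hypothetical sketch, not NEO's code.

```python
class Node:
    # Minimal stand-in for NEO's Node, for illustration only.
    identified = False
    operational = False

class EventHandler:
    def acceptIdentification(self, conn, node):
        # Common bookkeeping runs unconditionally, so Node.isIdentified can
        # reliably mean "identification packets have been exchanged".
        node.identified = True
        self._acceptIdentification(conn, node)

    def _acceptIdentification(self, conn, node):
        # Subclasses override this hook instead of acceptIdentification.
        pass

class StorageOperationHandler(EventHandler):
    def _acceptIdentification(self, conn, node):
        # Handler-specific reaction; illustrative flag only.
        node.operational = True

peer = Node()
StorageOperationHandler().acceptIdentification(None, peer)
assert peer.identified and peer.operational
```

The design choice is that a subclass can no longer accidentally skip the shared bookkeeping by overriding the public method.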
  15. 17 Jan, 2012 1 commit
  16. 06 Jan, 2012 1 commit
  17. 26 Oct, 2011 1 commit
  18. 05 Sep, 2011 1 commit
  19. 02 May, 2011 1 commit
  20. 25 Feb, 2011 1 commit
    • Implementing ipv6 on neo · 0cdbf0ea
      Olivier Cros authored
      In order to synchronise neo with slapos, it has to work with both IPv4
      and IPv6. This allows neo to be integrated into erp5 and prepares different
      buildout installations of neo.
      The protocol and connectors are no longer generic: they can now support IPv4
      and IPv6 connections. We adopted a development approach that allows new
      protocols to be easily added in the future.
      
      git-svn-id: https://svn.erp5.org/repos/neo/trunk@2654 71dcc9de-d417-0410-9af5-da40c76e7ee4
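Connector code that must handle both families typically resolves with `AF_UNSPEC` and tries each returned address. This is a generic sketch of that pattern, assuming a hypothetical helper name; it is not NEO's connector API.

```python
import socket

def connect_any(host, port):
    """Connect over whichever family (IPv4 or IPv6) resolution yields first."""
    last_err = None
    # AF_UNSPEC lets getaddrinfo return both IPv4 and IPv6 candidates.
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as exc:
            last_err = exc
    raise last_err if last_err else OSError("no address found")
```

Writing connectors against `getaddrinfo` results rather than hard-coded `AF_INET` is what makes the same code path work for both families.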
  21. 17 Jan, 2011 1 commit
  22. 01 Nov, 2010 2 commits
  23. 16 Sep, 2010 1 commit
  24. 16 Mar, 2010 1 commit
  25. 08 Mar, 2010 1 commit
  26. 01 Mar, 2010 1 commit
  27. 08 Feb, 2010 1 commit
  28. 01 Feb, 2010 1 commit
  29. 28 Jan, 2010 1 commit
  30. 20 Jan, 2010 1 commit
  31. 07 Oct, 2009 1 commit
  32. 05 Oct, 2009 1 commit
  33. 01 Oct, 2009 3 commits
  34. 30 Sep, 2009 1 commit
  35. 07 Aug, 2009 1 commit