1. 21 Feb, 2019 2 commits
    • CMFActivity: new activate() parameter to prefer executing on the same node · 301962ad
      Julien Muchembled authored
      The goal is to make better use of the ZODB Storage cache. It is common to
      process a data set in several sequential transactions: in such cases, by
      continuing execution of these messages on the same node, data is loaded from
      the ZODB only once. Without this, and if there are many other messages to
      process, processing always continues on a random node, causing much more load
      on the ZODB.
      To prevent nodes from having too much work to do, or too little compared to
      other nodes, this new parameter is only a hint for CMFActivity. It remains
      possible for a node to execute a message that was intended for another node.
      Before this commit, a processing node selects the first message(s) according to
      the following ordering:
        priority, date
      and now:
        priority, node_preference, date
      where node_preference is:
        -1 -> same node
         0 -> no preferred node
         1 -> another node
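      The new ordering can be modelled with a small Python sketch (illustrative only; the real selection is done in SQL by CMFActivity):

```python
# Toy model of the new selection order: priority first, then node
# preference (-1: same node, 0: no preference, 1: another node), then date.
def node_preference(message_node, current_node):
    if message_node == current_node:
        return -1
    if message_node is None:
        return 0
    return 1

def selection_key(message, current_node):
    priority, node, date = message
    return (priority, node_preference(node, current_node), date)

# Messages as (priority, preferred_node, date); 'node-a' is the processing node.
messages = [
    (1, 'node-b', 1),  # prefers another node, oldest
    (1, None,     2),  # no preferred node
    (1, 'node-a', 3),  # prefers this node, newest
]
messages.sort(key=lambda m: selection_key(m, 'node-a'))
# despite being the newest, the message preferring 'node-a' sorts first
```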
      The implementation is tricky for 2 reasons:
      - MariaDB can't order this way in a single simple query, so we have 1
        subquery for each case, potentially getting 3 times the wanted maximum of
        messages, then order/filter on the resulting union.
      - MariaDB also can't efficiently filter messages for other nodes, so the 3rd
        subquery returns messages for any node, potentially duplicating results from
        the first 2 subqueries. This works because they'll be ordered last.
        Unfortunately, this requires extra indices.
      In any case, message reservation must be very efficient, or MariaDB deadlocks
      happen quickly, and locking an activity table during reservation reduces
      parallelism too much.
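      The union of the 3 subqueries can be modelled as follows (a hypothetical Python sketch of the SQL strategy; names are illustrative, not the actual schema):

```python
# Each subquery may return up to `limit` rows; the any-node subquery can
# duplicate rows already returned for this node, so deduplicate by uid,
# then re-sort on (priority, node_preference, date) and truncate.
def select_messages(same_node, no_preference, any_node, limit):
    seen, merged = set(), []
    for rows in (same_node, no_preference, any_node):
        for row in rows:  # row = (uid, priority, node_preference, date)
            if row[0] not in seen:
                seen.add(row[0])
                merged.append(row)
    merged.sort(key=lambda row: row[1:])
    return merged[:limit]

rows = select_messages(
    [(1, 0, -1, 10)],                 # messages preferring this node
    [(2, 0, 0, 5)],                   # messages with no preferred node
    [(1, 0, -1, 10), (3, 0, 1, 1)],   # any node: duplicates uid 1
    limit=2)
```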
      In addition to better cache efficiency, this new feature can be used as a
      workaround for a bug affecting serialization_tag, causing IntegrityError when
      reindexing many new objects. If you have 2 recursive reindexations for both a
      document and one of its lines, and if you have so many messages that grouping
      is split between these 2 messages, then you end up with 2 nodes indexing the
      same line in parallel: for some tables, the DELETE+INSERT pattern conflicts,
      since InnoDB does not take any lock when deleting a non-existent row.
      If you have many activities creating such documents, you can combine this with
      grouping and an appropriate priority to make sure that such a pair of messages
      won't be executed on different nodes, except maybe at the end (when there's no
      document left to create; then activity reexecution may be enough).
      For example:
        from Products.CMFActivity.ActivityTool import getCurrentNode
          activate_kw={'node': 'same', 'priority': priority},
      where `priority` is the same as that of the activity containing the above code,
      which can also use grouping without increasing the probability of IntegrityError.
  2. 13 Feb, 2019 2 commits
  3. 05 Feb, 2019 1 commit
  4. 18 Jan, 2019 3 commits
  5. 08 Jan, 2019 1 commit
  6. 26 Apr, 2018 1 commit
  7. 26 Mar, 2018 1 commit
    • CMFActivity: Stop deleting duplicates during SQLDict.distribute · d0472bc2
      Vincent Pelletier authored
      Duplicate message detection is not good enough: different messages with
      the same unicity value may bear different serialization_tags. This code
      does not take this into account, which can lead to deleting such a tagged
      message and validating an untagged one, breaking the serialization_tag
      contract of preventing any further activity validation until execution
      of all such-tagged validated activities succeeds.
      Also, it is not the validation node's job to deduplicate: deduplication can
      happen during message execution without slowing down this crucial
      (performance-wise) activity node.
      As a result, distribute methods of SQLDict and SQLQueue can be factorised.
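      A toy model of the flaw (hypothetical Python, not the actual SQLDict code): deduplicating on the unicity value alone can keep the untagged message and drop the tagged one.

```python
# Messages sharing a unicity value look like "duplicates", but they may carry
# different serialization_tags. Keying only on unicity drops the tagged one,
# which breaks the serialization_tag contract described above.
def dedup_by_unicity(messages):
    kept = {}
    for unicity, tag in messages:
        kept.setdefault(unicity, (unicity, tag))  # first one wins
    return sorted(kept.values())

def dedup_by_unicity_and_tag(messages):
    kept = {}
    for unicity, tag in messages:
        kept.setdefault((unicity, tag), (unicity, tag))
    return sorted(kept.values())

# one untagged and one tagged message with the same unicity value
msgs = [('reindex:/doc/line', ''), ('reindex:/doc/line', 'tag-1')]
```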
  8. 09 Mar, 2018 1 commit
    • testCMFActivity: Ignore "processing" column value. · 9ac52814
      Vincent Pelletier authored
      This column is not a significant condition for this test. It is an
      unreliable transient piece of information intended for a human observer
      monitoring activity execution. processing_node is the reliable piece of
      information this test should care about.
  9. 06 Mar, 2018 2 commits
  10. 20 Oct, 2017 2 commits
  11. 27 Jul, 2017 2 commits
  12. 19 May, 2015 2 commits
  13. 13 May, 2015 2 commits
  14. 06 May, 2015 1 commit
    • CMFActivity: slightly delay non-executed grouped messages · c85a840f
      Julien Muchembled authored
      When grouped messages fail, ActivityTool must distinguish 3 groups,
      in order to reexecute them separately, as follows:
      - first, those that succeeded
      - then, those that were skipped
      - finally, those that failed
      Grouping methods are updated to handle partial failures, and stop doing
      anything when something goes wrong.
      Without this, we would have the following pathological cases.
      1. Let's suppose first that skipped messages are marked as succeeded.
      The problem is that each skipped message that will fail causes the reexecution
      of those that didn't fail.
      Example: A:ok B:ok C:err D:err E:err F:err
        1: A:ok, B:ok, C:err, D:skipped, E:skipped, F:skipped
        2: A:ok, B:ok, D:err, E:skipped, F:skipped
        3: A:ok, B:ok, E:err, F:skipped
        4: A:ok, B:ok, F:err
        5: A:ok, B:ok -> commit
      Worse, the first failed message (C) may become processable again before step 5,
      entering a failing loop if it is executed again in the same group as A & B.
      2. Another implementation is to mark all skipped as failed.
        1: A:ok, B:ok, C:err, D:skipped, E:skipped, F:skipped
        2: A:ok, B:ok -> commit
        3: C:err, D:skipped, E:skipped, F:skipped
       >3: same as 3
      => D, E or F are never tried.
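      The first pathological case can be reproduced with a toy simulation (illustrative Python; message names follow the example above):

```python
# Case 1 simulation: skipped messages are marked as succeeded, so after each
# failure the whole remaining group (including the already-succeeded A and B)
# is executed again, one extra pass per failing message.
def simulate_skip_as_success(messages, failing):
    passes, pending = [], list(messages)
    while True:
        executed, failed = [], None
        for m in pending:
            if failed is None:
                if m in failing:
                    failed = m  # the group stops at the first error
                else:
                    executed.append(m)
            # messages after the failure are skipped (counted as succeeded)
        passes.append((executed, failed))
        if failed is None:
            return passes  # everything succeeded: commit
        pending.remove(failed)

passes = simulate_skip_as_success('ABCDEF', set('CDEF'))
# 5 passes, re-executing A and B four extra times before the commit
```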
  15. 30 Mar, 2015 1 commit
  16. 27 Mar, 2015 1 commit
    • CMFActivity: automatic migration of queues and removal of button to recreate tables · 3d644bde
      Julien Muchembled authored
      The action to recreate activity tables while preserving existing messages
      was unsafe for 2 reasons:
      - if any error happened, messages could be lost
      - it relied on Message.reactivate
      With this patch, any instance created after commit d881edd1 (Aug 2010) will
      upgrade successfully. For older instances, make sure you have no activity left.
      For cases where 'ALTER TABLE' would not work, a better way to implement repair
      functionality would be:
      - one action to backup all messages in ZODB
      - and another to restore them
      And maybe a safeguard so that, during the backup-clear-restore sequence,
      activities can neither be created nor processed.
      If any column is added in the future, it would still be possible to write code
      that fills them by inspecting messages.
  17. 10 Mar, 2015 1 commit
  18. 16 Oct, 2014 1 commit
  19. 04 Sep, 2014 1 commit
  20. 30 Jan, 2014 1 commit
  21. 06 Aug, 2013 1 commit
  22. 11 Jun, 2013 1 commit
    • Move some work out of Message.__init__. · fda3f093
      Vincent Pelletier authored
      So that creating an ActiveWrapper (or Method) once and reusing it to spawn
      several activities gets a larger speed-up.
      The Message class API is not supposed to be used outside this module, so the
      failing test is dropped rather than fixed.
  23. 21 May, 2013 2 commits
  24. 22 Apr, 2013 2 commits
    • CMFActivity: remove non-executable message state (-3) · e47f2923
      Julien Muchembled authored
      When an object is deleted, higher-level code used to flush its messages
      (without invoking them). However, a concurrent and very long transaction may
      be about to activate such an object, without conflict. We already experienced
      false -3 errors that could prevent other messages from being validated.
      Because there is no efficient and reliable way to flush absolutely all messages,
      messages on deleted objects are now ignored and deleted without any email
      notification. There's only a WARNING in logs. But for performance reasons,
      there's still a flush on object deletion.
      To simplify code, messages that went to -3 for other reasons, like a
      non-existing method, now go to -2. In fact, this was already the case for
      grouped messages.
      If a path is recycled, it may still be possible for a message to be executed
      on a wrong object (the new one), instead of being ignored (because the
      activated object was deleted). So in such a scenario, the developer should
      make sure not to delete an object that may be activated in a concurrent
      transaction. If the original object has an OID at the moment it is activated,
      an assertion will make sure the message is not executed on another object.
    • testCMFActivity: clean up · fcce7b97
      Julien Muchembled authored
  25. 18 Apr, 2013 1 commit
  26. 21 Feb, 2013 1 commit
  27. 15 Feb, 2013 1 commit
    • Fix commit order of CMFActivity SQL connection on nodes with several zserver threads · 2c11b76a
      Julien Muchembled authored
      When a ZODB connection is closed, it usually returns to a ZODB pool and may be
      reused by another thread. If the SQL connection was open and is still in ZODB
      cache, the _v_database_connection attribute is still there:
      ActivityConnection.connect() is not called and a new instance of ZMySQLDA.db.DB
      is created for the new thread without initializing its sort key.
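      For context, the transaction machinery orders joined resource managers by their sortKey() at commit time; a connection reused with an uninitialized sort key breaks that ordering. A minimal sketch of the mechanism (simplified; not the ZMySQLDA code, and the keys are made up):

```python
# The transaction package sorts resource managers by sortKey() so every
# thread commits resources in the same, well-defined order, which is what
# keeps the SQL connection committing at a predictable point relative to ZODB.
class FakeDataManager:
    def __init__(self, key):
        self._key = key

    def sortKey(self):
        return self._key

managers = [FakeDataManager('2:zodb'), FakeDataManager('1:mysql')]
commit_order = [m.sortKey() for m in sorted(managers, key=lambda m: m.sortKey())]
```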
  28. 08 Jan, 2013 1 commit
  29. 26 Nov, 2012 1 commit