1. 20 Apr, 2021 3 commits
    • doc/requirements: Require pygments < 2.6 on py2 · 5b8e2dc1
      Kirill Smelkov authored
      Commit 75c703fc (Security fix documentation dependencies (#342)) changed
      the requirements for pygments from
      
          # pygments 2.6 stops the support for python2
          pygments<2.6
      
      to
      
          # Silence dependabot claiming a security issue in older versions:
          pygments >= 2.7.4
      
      However this broke CI for python2, as sphinx now fails to install:
      
          Collecting Sphinx
            Using cached Sphinx-1.8.5-py2.py3-none-any.whl (3.1 MB)
          ERROR: Could not find a version that satisfies the requirement pygments>=2.7.4 (from -r doc/requirements.txt (line 3)) (from versions: 0.5, 0.5.1, 0.6, 0.7, 0.7.1, 0.8, 0.8.1, 0.9, 0.10, 0.11, 0.11.1, 1.0, 1.1, 1.1.1, 1.2, 1.2.1, 1.2.2, 1.3, 1.3.1, 1.4, 1.5, 1.6rc1, 1.6, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.2.0, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 2.4.2, 2.5.1, 2.5.2)
          ERROR: No matching distribution found for pygments>=2.7.4 (from -r doc/requirements.txt (line 3))
          The command "if [[ $TRAVIS_PYTHON_VERSION != pypy3* ]]; then pip install --upgrade --requirement doc/requirements.txt; fi" exited with 1.
      
      (see e.g. https://travis-ci.com/github/zopefoundation/ZODB/jobs/499836548)
      
      Try to fix that by bringing back the `pygments < 2.6` requirement on
      python2.
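
      One way to express both constraints in a single doc/requirements.txt is a
      PEP 508 environment marker; a sketch of what such an entry could look like
      (the actual file layout is an assumption, and this relies on the CI pip
      being new enough to understand markers):

          # keep the security-driven pin on py3, but stay on the last
          # py2-compatible pygments when building docs under python2
          pygments >= 2.7.4 ; python_version >= '3'
          pygments < 2.6    ; python_version < '3'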
    • fixup! buildout: Fix Sphinx install on Python2 · 82bee274
      Kirill Smelkov authored
      It is not only py2: the build on py3 is also affected, e.g.
      
      https://travis-ci.com/github/zopefoundation/ZODB/jobs/499826281
      
      And even though Sphinx is going to be ready for newer docutils,
      sphinx_rtd_theme requires docutils<0.17 for now:
      
      https://github.com/readthedocs/sphinx_rtd_theme/blob/2026bb61ddf85a672141127b162d6f9181603725/setup.py#L119-L122
    • buildout: Fix Sphinx install on Python2 · 3e3c321d
      Kirill Smelkov authored
      Otherwise it fails as:
      
          Got j1m.sphinxautozconfig 0.1.0.
          Getting distribution for 'sphinx_rtd_theme'.
          Got sphinx-rtd-theme 0.5.2.
          Version and requirements information containing docutils:
            Base installation request: 'Sphinx', 'docutils', 'ZODB', 'sphinxcontrib_zopeext', 'j1m.sphinxautozconfig', 'sphinx_rtd_theme'
            Requirement of sphinx_rtd_theme: docutils<0.17
            Requirement of j1m.sphinxautozconfig: docutils
            Requirement of Sphinx: docutils<0.17,>=0.12
          While:
            Installing sphinx.
          Error: There is a version conflict.
          We already have: docutils 0.17.1
          but sphinx-rtd-theme 0.5.2 requires 'docutils<0.17'.
      
      (see e.g. https://travis-ci.com/github/zopefoundation/ZODB/jobs/499760337)
      
      Suggested by @d-maurer.
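
      A conflict like this is commonly resolved in zc.buildout by pinning an
      exact docutils release that satisfies both constraints in the [versions]
      section; a minimal sketch (the section layout and chosen version are
      assumptions, not necessarily what the commit does):

          [versions]
          # satisfies both Sphinx (docutils<0.17,>=0.12) and
          # sphinx_rtd_theme (docutils<0.17)
          docutils = 0.16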
  2. 01 Apr, 2021 1 commit
    • Update README.rst · dad77801
      Claudius Ellsel authored
      Fix a tiny issue with a remaining colon that was probably not deleted during updates of the README.
  3. 31 Mar, 2021 1 commit
  4. 29 Mar, 2021 2 commits
    • changes: Correct link to UnboundLocalError fsoids.py fix · 2798502e
      Kirill Smelkov authored
      Commit fc4c86e6 (Fix unbound local error when using the fsoids.py script
      (#295)) wanted to refer to "issue 285", but put "issue 268" into the
      visible text.
    • fsrefs: Optimize IO (take 2) (#340) · 79078049
      Kirill Smelkov authored
      * fsrefs: Optimize IO  (take 2)
      
      Access objects in the order of their position in the file instead of in the
      order of their OID. This should give a dramatic speedup when the data is on
      an HDD.
      
      For example @perrinjerome reports that on a 73GB database it takes
      almost 8h to run fsrefs (whereas on the same database, fstest takes 15
      minutes) [1,2]. After the patch fsrefs took ~80 minutes to run on the same
      database. In other words this is a ~6x improvement.
      
      Fsrefs has no tests. I tested it only lightly by generating a slightly
      corrupt database with a deleted referred object(*), and it gives the same
      output as unmodified fsrefs:
      
          oid 0x0 __main__.Object
          last updated: 1979-01-03 21:00:42.900001, tid=0x285cbacb70a3db3
          refers to invalid objects:
                  oid 0x07 missing: '<unknown>'
                  oid 0x07 object creation was undone: '<unknown>'
      
      This "take 2" version is derived from https://github.com/zopefoundation/ZODB/pull/338
      and only iterates objects in the order of their in-file position without
      building complete references graph in-RAM, because that in-RAM graph would
      consume ~12GB of memory.
      
      The added pos2oid in-RAM index also consumes memory: for the 73GB database
      in question fs._index takes ~700MB, while pos2oid takes ~2GB. In theory it
      could be less, because we only need an array of oids sorted by
      key(oid)=fs._index[oid]. However array.array does not support sorting, and
      if we use a plain list to keep just []oid, the memory consumption for that
      list alone is ~5GB. Also, because list.sort(key=...) internally allocates
      memory for the key array (and list.sort(cmp=...) was removed in Python 3),
      the total memory consumption just to produce a list of []oid ordered by pos
      is ~10GB. So without delving into C/Cython and/or manually sorting the
      array in Python (= slow), using a QQBTree seems to be the best
      out-of-the-box option for the oid-by-pos index.
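
      For illustration, a minimal sketch of what ordering visits by file
      position with a QQBTree could look like (the `index` mapping, the helper
      name and the per-object work are assumptions for the example, not the
      actual fsrefs code):

          # Sketch: visit objects ordered by file position instead of by oid.
          # `index` is assumed to map oid (8 bytes) -> file position (int),
          # similar to FileStorage's fs._index.
          from BTrees.QQBTree import QQBTree   # unsigned 64-bit keys and values
          from ZODB.utils import p64, u64

          def iter_oids_by_pos(index):
              pos2oid = QQBTree()              # pos -> oid, kept sorted by position
              for oid, pos in index.items():
                  pos2oid[pos] = u64(oid)      # store oid as int to fit the value type
              for pos, oid_int in pos2oid.items():
                  yield p64(oid_int)           # oids come out in file order

          # hypothetical usage:
          #   for oid in iter_oids_by_pos(fs._index):
          #       check_object_refs(fs, oid)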
      
      [1] zodbtools!19 (comment 129480)
      [2] zodbtools!19 (comment 129551)
      
      (*) the test database was generated via a slightly modified
      gen_testdata.py from zodbtools:
      
      https://lab.nexedi.com/nexedi/zodbtools/blob/v0.0.0.dev8-28-g129afa6/zodbtools/test/gen_testdata.py
      
      +
      
      ```diff
      --- a/zodbtools/test/gen_testdata.py
      +++ b/zodbtools/test/gen_testdata.py
      @@ -229,7 +229,7 @@ def ext(subj): return {}
               # delete an object
               name = random.choice(list(root.keys()))
               obj = root[name]
      -        root[name] = Object("%s%i*" % (name, i))
      +#       root[name] = Object("%s%i*" % (name, i))
               # NOTE user/ext are kept empty on purpose - to also test this case
               commit(u"", u"predelete %s" % unpack64(obj._p_oid), {})
      ```
      
      /cc @tim-one, @jeremyhylton, @jamadden
      /reviewed-by @jamadden, @perrinjerome 
      /reviewed-on https://github.com/zopefoundation/ZODB/pull/340
  5. 19 Feb, 2021 1 commit
  6. 28 Oct, 2020 2 commits
  7. 23 Sep, 2020 1 commit
  8. 04 Sep, 2020 2 commits
  9. 31 Aug, 2020 2 commits
    • interface: Require invalidations to be called with full set of objects and not to skip transactions · c1e08052
      Kirill Smelkov authored
      Currently the invalidate documentation is not clear on whether it should be
      called for every transaction and whether it should include the full set of
      objects created/modified by that transaction. Until now this was working
      relatively well for the sole purpose of invalidating the client ZEO cache,
      because for that particular task it is relatively OK not to include
      just-created objects in invalidation messages, and even to completely skip
      sending an invalidation if a transaction only creates - not modifies -
      objects. Due to this, the workings of the client cache were indifferent to
      the ambiguity of the interface.
      
      In 2016 skipping transactions with only created objects was reconsidered
      as a bug and fixed in ZEO5, because ZODB5 relies more heavily on MVCC
      semantics and needs to be notified about every transaction committed to
      the storage to be able to properly update the ZODB.Connection view:
      
      https://github.com/zopefoundation/ZEO/commit/02943acd#diff-52fb76aaf08a1643cdb8fdaf69e37802L889-R834
      https://github.com/zopefoundation/ZEO/commit/9613f09b
      
      However, just-created objects were not included in invalidation
      messages until, hopefully, recently:
      
      https://github.com/zopefoundation/ZEO/pull/160
      
      As ZODB starts to be used more widely in areas where it was not
      traditionally used before, the ambiguity in the invalidate interface and
      the lack of a guarantee - for any storage - to be notified with the full
      set of information create at least the following problems:
      
      - a ZODB client (not necessarily a native ZODB/py client) can maintain a
        raw cache for the storage. If such a client tries to load an oid at a
        database view where that object did not exist yet, it gets a "no object"
        reply and stores that information into the raw cache; to properly
        invalidate the cache it then needs an invalidation message from the ZODB
        server that *includes* the created object.
      
      - tools like `zodb watch` [1,2,3] don't work properly (give incorrect output)
        if not all objects modified/created by a transaction are included in
        invalidation messages.

      - similarly to `zodb watch`, a monitoring tool that wants to be notified
        of all created/modified objects won't see the full database-change
        picture, and so won't work properly without knowing which objects were
        created.
      
      - wendelin.core 2 - which builds data from ZODB BTrees and data objects
        into a virtual filesystem - needs to get invalidation messages with both
        modified and created objects to properly implement its own lazy
        invalidation and isolation protocol for file blocks in the OS cache:
        when a block of a file is accessed, all clients that have this block
        mmaped need to be notified and asked to remmap that block into a
        particular revision of the file, depending on the client's view of the
        filesystem and database [4,5].

        To compute where a client needs to remmap the block, the WCFS server
        (which in turn acts as a ZODB client wrt the ZEO/NEO server) needs to be
        able to see whether the client's view of the filesystem is before object
        creation (and then ask that client to pin that block to a hole), or
        after creation (and then ask the client to pin that block to the
        corresponding revision).

        This computation needs the ZODB server to send invalidation messages in
        full: with both modified and just-created objects.
      
      Also:
      
      - the property that all objects - both modified and just created -
        are included in invalidation messages is required and can help to
        remove `next_serial` from the `loadBefore` return value in the future.
        This, in turn, can help to issue 2x fewer SQL queries in loadBefore for
        NEO and RelStorage (and maybe other storages too):
        https://github.com/zopefoundation/ZODB/issues/318#issuecomment-657685745
      
      Current state of storages with respect to new requirements:
      
      - ZEO: does not skip transactions, but includes only modified - not
        created - objects. This is fixed by https://github.com/zopefoundation/ZEO/pull/160
      
      - NEO: already implements the requirements in full
      
      - RelStorage: already implements the requirements in full, if I
        understand correctly:
      
        https://github.com/zodb/relstorage/blob/3.1.2-1-gaf57d6c/src/relstorage/adapters/poller.py#L28-L145
      
      While editing the invalidate documentation, use the occasion to also
      document the recently added property that invalidate(tid) is always called
      before the storage starts to report its lastTransaction() ≥ tid - see
      4a6b0283 (mvccadapter: check if the last TID changed without invalidation).
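
      Schematically, the required behaviour on the storage side looks like this
      (a sketch only; `self._db` stands for whatever object was registered via
      registerDB(), the invalidate(tid, oids) callback name follows
      IStorageWrapper, and the notification source is made up for illustration):

          # Sketch: forward every committed transaction, with the full set of
          # modified *and* created oids, and never skip a transaction.
          def _notification_loop(self):
              for tid, modified_oids, created_oids in self._commit_events:
                  oids = set(modified_oids) | set(created_oids)
                  self._db.invalidate(tid, oids)
                  # only after the invalidate() call above may
                  # lastTransaction() start to report a value >= tid
                  self._ltid = tid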
      
      /cc @jimfulton, @jamadden, @jmuchemb, @vpelletier, @arnaud-fontaine, @gidzit, @klawlf82, @jwolf083
      /reviewed-on https://github.com/zopefoundation/ZODB/pull/319
      /reviewed-by @dataflake
      /reviewed-by @jmuchemb
      
      [1] https://lab.nexedi.com/kirr/neo/blob/049cb9a0/go/zodb/zodbtools/watch.go
      [2] neo@e0d59f5d
      [3] neo@c41c2907
      
      [4] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/wcfs.go#L94-182
      [5] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/client/wcfs.h#L20-71
    • Merge pull request #327 from perrinjerome/doc-sphinx-python3 · b4f233cd
      Jérome Perrin authored
      Fix requirements for sphinx on python2
  10. 26 Aug, 2020 2 commits
  11. 19 Aug, 2020 1 commit
    • Relax assertion in check_tid_ordering_w_commit test · 4aa62185
      Julien Muchembled authored
      It is pointless for lastTransaction() to block until it is allowed to
      return the TID of a transaction that has just been committed, because
      it may still not be the real last TID (e.g. for some storage
      implementations, invalidations are received from a shared server
      via the network). While invalidations are still being processed,
      it's fine to return immediately with the previous last TID.
      
      This was clarified in commit 4a6b0283
      ("mvccadapter: check if the last TID changed without invalidation").
      
      See pull request #316
  12. 31 Jul, 2020 1 commit
    • Kill leftovers of pre-MVCC read conflicts · 3a493b01
      Kirill Smelkov authored
      In the early days, before MVCC was introduced, ZODB used to raise
      ReadConflictError on access to an object that was simultaneously changed
      by another client in a concurrent transaction. However, as
      doc/articles/ZODB-overview.rst says
      
      	Since Zope 2.8 ZODB has implemented **Multi Version Concurrency Control**.
      	This means no more ReadConflictErrors, each transaction is guaranteed to be
      	able to load any object as it was when the transaction begun.
      
      So today the only ways to get a ReadConflictError should be

        1) at commit time, for an object that was requested to stay unchanged
           via checkCurrentSerialInTransaction, and

        2) at plain access time, if a pack running simultaneously with the
           current transaction removes the object revision that we try to load.
      
      The second point is a bit unfortunate, since when load discovers that an
      object was deleted or not yet created, it would be logically cleaner to
      raise POSKeyError. However, for backward compatibility we still want to
      raise ReadConflictError in this case - please see the comments added to
      MVCCAdapter for details.
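
      For the first case, a schematic example of how the conflict is requested
      at the application level (Connection.readCurrent() is the documented entry
      point that ends up using checkCurrentSerialInTransaction; `db` and the
      object name are placeholders):

          # Sketch: the remaining, commit-time way to get a ReadConflictError.
          import transaction

          conn = db.open()                    # `db` is an opened ZODB.DB
          obj = conn.root()['some_object']    # placeholder object
          conn.readCurrent(obj)               # "I depend on obj staying unchanged"
          # ... modify other objects ...
          transaction.commit()                # raises ReadConflictError if another
                                              # transaction changed obj concurrently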
      
      Anyway, let's remove the leftovers of handling regular read conflicts from
      the pre-MVCC era:

      Adjust the docstring of ReadConflictError to explicitly describe that this
      error can only happen at commit time for objects requested to be current,
      or at plain access time if a pack is running simultaneously under the
      connection's feet.
      
      There were also leftover code, comment and test bits in Connection,
      interfaces, testmvcc and testZODB, which are corrected/removed
      correspondingly. testZODB actually had ReadConflictTests that were
      completely deactivated: commit b0f992fd ("Removed the mvcc option..."; 2007)
      moved the read-conflict-on-access related tests out of ZODBTests, but did
      not activate the moved parts at all, because, as that commit says, when
      MVCC is always on unconditionally, there are no on-access conflicts:

          Removed the mvcc option.  Everybody wants mvcc and removing us lets us
          simplify the code a little. (We'll be able to simplify more when we
          stop supporting versions.)
      
      Today, if I try to manually activate those ReadConflictTests via
      
          @@ -637,6 +637,7 @@ def __init__(self, poisonedjar):
           def test_suite():
               return unittest.TestSuite((
                   unittest.makeSuite(ZODBTests, 'check'),
          +        unittest.makeSuite(ReadConflictTests, 'check'),
                   ))
      
           if __name__ == "__main__":
      
      it fails in a dumb way, showing that these tests were unmaintained for ages:
      
          Error in test checkReadConflict (ZODB.tests.testZODB.ReadConflictTests)
          Traceback (most recent call last):
            File "/usr/lib/python2.7/unittest/case.py", line 320, in run
              self.setUp()
            File "/home/kirr/src/wendelin/z/ZODB/src/ZODB/tests/testZODB.py", line 451, in setUp
              ZODB.tests.utils.TestCase.setUp(self)
          AttributeError: 'module' object has no attribute 'utils'
      
      Since today ZODB always uses MVCC and there is no way to get a
      ReadConflictError on concurrent plain read/write access, those tests
      should also be gone, together with the old pre-MVCC way of handling
      concurrency.
      
      /cc @jimfulton
      /reviewed-on https://github.com/zopefoundation/ZODB/pull/320
      /reviewed-by @jamadden
  13. 12 Jun, 2020 2 commits
  14. 11 Jun, 2020 2 commits
  15. 10 Jun, 2020 1 commit
  16. 09 Jun, 2020 1 commit
    • mvccadapter: check if the last TID changed without invalidation · 4a6b0283
      Julien Muchembled authored
      Since commit b5895a5c ("mvccadapter:
      fix race with invalidations when starting a new transaction"),
      a ZEO test fails as follows:
      
          File "src/ZEO/tests/drop_cache_rather_than_verify.txt", line 114, in drop_cache_rather_than_verify.txt
          Failed example:
              conn.root()[1].x
          Expected:
              6
          Got:
              1
      
      Earlier in the test, the ZEO server is restarted and then another
      client commits. When disconnected, the first client does not receive
      invalidations anymore and the connection gets stuck in the past until
      there's a new commit after it reconnected. It was possible to make the
      test pass with the following patch:
      
      --- a/src/ZEO/ClientStorage.py
      +++ b/src/ZEO/ClientStorage.py
      @@ -357,6 +357,7 @@ def notify_connected(self, conn, info):
      
               # invalidate our db cache
               if self._db is not None:
      +            self._db.invalidate(self.lastTransaction(), ())
                   self._db.invalidateCache()
      
               logger.info("%s %s to storage: %s",
      
      Other implementations like NEO are probably affected the same way.
      
      Rather than changing interfaces in a backward-incompatible way,
      this commit reverts to the original behaviour, and all the changes
      that were made to existing tests are reverted.
      
      However, the interfaces are clarified about the fact that storage
      implementations must update the value returned by lastTransaction() at a
      precise moment: just after the invalidate() or tpc_finish callback.
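
      Schematically, the clarified contract on the commit side could look like
      the following (a sketch, not the actual mvccadapter code; `self._storage`
      and `self._ltid` are placeholders):

          # Sketch: lastTransaction() may only start reporting the new tid
          # from within the tpc_finish callback (or just after invalidate()).
          def tpc_finish(self, transaction, func=lambda tid: None):
              def set_ltid(tid):
                  func(tid)
                  self._ltid = tid    # updated inside the callback, before
                                      # tpc_finish() returns to the committer
              return self._storage.tpc_finish(transaction, set_ltid)

          def lastTransaction(self):
              return self._ltid       # never ahead of invalidate()/tpc_finish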
  17. 02 Jun, 2020 1 commit
  18. 20 May, 2020 4 commits
  19. 31 Mar, 2020 1 commit
    • FileStorage: Save committed transaction to disk even if changed data is empty · bb9bf539
      Kirill Smelkov authored
      ZODB tries to avoid saving empty transactions to storage on
      `transaction.commit()`. The way it works is: if no objects were changed
      during the ongoing transaction, ZODB.Connection does not join the current
      TransactionManager, and transaction.commit() performs the two-phase commit
      protocol only on joined DataManagers. In other words, if no objects were
      changed, no tpc_*() methods are called at all on ZODB.Connection at
      transaction.commit() time.
      
      This way application servers like Zope/ZServer/ERP5/... can have
      something like
      
          try:
              # process incoming request
              transaction.commit()    # processed ok
          except:
              transaction.abort()
              # problem: log + reraise
      
      in top-level code to process requests without creating many on-disk
      transactions with empty data changes just because read-only requests
      were served.
      
      Everything is working as intended.
      
      However, at the storage level, FileStorage currently also checks whether
      the transaction being committed comes with empty data changes, and _skips_
      saving the transaction to disk *at all* in such cases, even if it has been
      explicitly told to commit the transaction via two-phase commit protocol
      calls made at the storage level.
      
      This creates a situation where, contrary to the promise in
      ZODB/interfaces.py(*), after successful tpc_begin/tpc_vote/tpc_finish()
      calls made at the storage level, the transaction is _not_ made permanent,
      despite the tid of the "committed" transaction being returned to the
      caller. In other words FileStorage, when asked to commit a transaction,
      even one with empty data changes, reports "ok" and gives a transaction ID
      to the caller, without creating a corresponding transaction record on disk.
      
      This behaviour

      a) is redundant to the application-level avoidance of creating empty
         transactions on the storage, described in the beginning, and

      b) creates problems:
      
      The first problem is that an application that works at the storage level
      might be interested in persisting a transaction, even with empty changes
      to data, just because it wants to save the metadata, similarly to e.g.
      `git commit --allow-empty`.
      
      The other problem is that the application's view and the data in the
      database become inconsistent: the application is told that a transaction
      was created with a corresponding transaction ID, but if the storage is
      actually inspected, e.g. by iteration, the transaction is not there. This,
      in particular, can create problems if the TID of the committed transaction
      is reported elsewhere and a second database client does not find the
      transaction it was told should exist.
      
      I hit this particular problem with wendelin.core. In wendelin.core, there
      is a custom virtual memory layer that keeps memory in sync with data in
      ZODB. At commit time, the memory is inspected for being dirtied, and if a
      page was changed, the virtual memory layer joins the current transaction
      _and_ forces the corresponding ZODB.Connection - via which it will be
      saving data into ZODB objects - to join the transaction too, because it
      would be too late to join the ZODB.Connection after the 2PC process has
      begun(+). One of the formats in which data is saved tries to optimize disk
      space usage, and it might actually happen that, even if the data in RAM
      was dirtied, the data itself stayed the same and so nothing should be
      saved into ZODB. However, the ZODB.Connection is already joined into the
      transaction, and it is hard not to join it, because joining a DataManager
      when the 2PC is already ongoing does not work.
      
      This used to work OK with wendelin.core 1, but with wendelin.core 2 -
      where a separate virtual filesystem is also connected to the database to
      provide the base layer for array mappings - this creates a problem:
      when wcfs (the filesystem) is told to synchronize to view the database
      @tid of the committed transaction, it can wait forever for that, or a
      later, transaction to appear on disk in the database, creating an
      application-level deadlock.
      
      I agree that some more effort might be made on the wendelin.core side to
      avoid committing transactions with empty data at the storage level.

      However, the cleanest way to fix this problem, in my view, is to fix
      FileStorage itself, because if at the storage level it was asked to commit
      something, it should not silently skip doing so, dropping even non-empty
      metadata while returning ok and a committed transaction ID to the caller.
      
      As described in the beginning, this should not create problems for
      application-level ZODB users, while at the storage level the
      implementation now consistently matches the interface and common sense.
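
      For concreteness, this is the kind of storage-level, metadata-only commit
      (in the spirit of `git commit --allow-empty`) that now becomes durable; a
      sketch, with the exact TransactionMetaData arguments and the returned tid
      treated as assumptions:

          # Sketch: commit a transaction with metadata but no data records,
          # directly at the storage level.
          from ZODB.FileStorage import FileStorage
          from ZODB.Connection import TransactionMetaData

          stor = FileStorage('data.fs')              # path is a placeholder
          txn = TransactionMetaData(
              user=u'test',
              description=u'empty data change, metadata only')

          stor.tpc_begin(txn)
          stor.tpc_vote(txn)
          tid = stor.tpc_finish(txn)   # the transaction record is now on disk
          stor.close()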
      
      ----
      
      (*) tpc_finish: Finish the transaction, making any transaction changes permanent.
          Changes must be made permanent at this point.
          ...
      
          https://github.com/zopefoundation/ZODB/blob/5.5.1-35-gb5895a5c2/src/ZODB/interfaces.py#L828-L831
      
      (+) https://lab.nexedi.com/kirr/wendelin.core/blob/9ff5ed32/bigfile/file_zodb.py#L788-822
  20. 27 Mar, 2020 1 commit
  21. 26 Mar, 2020 1 commit
  22. 20 Mar, 2020 1 commit
  23. 17 Mar, 2020 6 commits