Commit 133ebeb3 authored by Kirill Smelkov's avatar Kirill Smelkov

Merge remote-tracking branch 'origin/master' into y/loadAt.7

to resolve trivial conflict on CHANGES.rst

* origin/master: (22 commits)
  Fix TypeError for fsoids (#351)
  Fix deprecation warnings occurring on Python 3.10.
  fix more PY3 incompatibilities in `fsstats`
  fix Python 3 incompatibility for `fsstats`
  add `fsdump/fsstats` test
  fsdump/fsstats improvements
  - add coverage combine step
  - first cut moving tests from Travis CI to GitHub Actions
  - ignore virtualenv artifacts [ci skip]
  tests: Run race-related tests with high frequency of switches between threads
  tests: Add test for load vs external invalidation race
  tests: Add test for open vs invalidation race
  fixup! doc/requirements: Require pygments < 2.6 on py2
  doc/requirements: Require pygments < 2.6 on py2
  fixup! buildout: Fix Sphinx install on Python2
  buildout: Fix Sphinx install on Python2
  Update README.rst
  Security fix documentation dependencies (#342)
  changes: Correct link to UnboundLocalError fsoids.py fix
  fsrefs: Optimize IO  (take 2) (#340)
  ...
parents 5ae48fe1 1f4c6429
name: tests
on:
  push:
  pull_request:
  schedule:
    - cron: '0 12 * * 0'  # run once a week on Sunday
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        config:
          # [Python version, tox env]
          - ["2.7", "py27"]
          - ["3.5", "py35"]
          - ["3.6", "py36"]
          - ["3.7", "py37"]
          - ["3.8", "py38"]
          - ["3.8", "py38-pure"]
          - ["pypy2", "pypy"]
          - ["pypy3", "pypy3"]
          - ["3.7", "docs"]
          - ["3.7", "coverage"]
    runs-on: ubuntu-latest
    name: ${{ matrix.config[1] }}
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.config[0] }}
      - name: Pip cache
        uses: actions/cache@v2
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ matrix.config[0] }}-${{ hashFiles('setup.*', 'tox.ini') }}
          restore-keys: |
            ${{ runner.os }}-pip-${{ matrix.config[0] }}-
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install tox
      - name: Test
        run: tox -e ${{ matrix.config[1] }}
      - name: Coverage
        if: matrix.config[1] == 'coverage'
        run: |
          pip install coveralls coverage-python-version
          coveralls --service=github
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -22,3 +22,7 @@ testing.log
htmlcov
tmp
*~
.*.swp
lib/
lib64
pyvenv.cfg
language: python

env:
  global:
    ZOPE_INTERFACE_STRICT_IRO: 1

python:
  - 2.7
  - 3.5
  - 3.6
  - 3.7
  - 3.8
  - pypy
  - pypy3

jobs:
  include:
    # Special Linux builds
    - name: "Python: 3.8, pure (no C extensions)"
      python: 3.8
      env: PURE_PYTHON=1

install:
  - pip install -U pip
  # BBB use older setuptools on pypy, setuptools 50.0.0 is not compatible with pypy 7.1.1-beta0
  - if [[ $TRAVIS_PYTHON_VERSION == pypy3* ]]; then pip install -U setuptools==49.6.0 zc.buildout; fi
  - if [[ $TRAVIS_PYTHON_VERSION != pypy3* ]]; then pip install -U setuptools zc.buildout; fi
  - buildout $BUILDOUT_OPTIONS

script:
  - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then bin/coverage run bin/coverage-test -v; fi
  - if [[ $TRAVIS_PYTHON_VERSION == pypy* ]]; then bin/test -v; fi
  - if [[ $TRAVIS_PYTHON_VERSION != pypy3* ]]; then pip install --upgrade --requirement doc/requirements.txt; fi
  - if [[ $TRAVIS_PYTHON_VERSION != pypy3* ]]; then make -C doc html; fi
  - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then pip install coveralls; fi  # install early enough to get into the cache

after_success:
  - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then bin/coverage combine; fi
  - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then coveralls; fi

notifications:
  email: false

cache:
  directories:
    - $HOME/.cache/pip
    - eggs

before_cache:
  - rm -f $HOME/.cache/pip/log/debug.log
@@ -14,8 +14,24 @@
  and `PR 323 <https://github.com/zopefoundation/ZODB/pull/323>`_
  for details.

- Fix ``TypeError: can't concat str to bytes`` when running fsoids.py script with Python 3.
  See `issue 350 <https://github.com/zopefoundation/ZODB/issues/350>`_.

- Readd transaction size information to ``fsdump`` output;
  adapt ``fsstats`` to ``fsdump``'s exchanged order for ``size`` and ``class``
  information in data records
  (fixes `#354 <https://github.com/zopefoundation/ZODB/issues/354>`_).
  Make ``fsdump`` callable via Python's ``-m`` command line option.

- Fix UnboundLocalError when running fsoids.py script.
  See `issue 285 <https://github.com/zopefoundation/ZODB/issues/285>`_.

- Rework ``fsrefs`` script to work significantly faster by optimizing how it does
  IO. See `PR 340 <https://github.com/zopefoundation/ZODB/pull/340>`_.

- Require Python 3 to build the documentation.

- Fix deprecation warnings occurring on Python 3.10.

5.6.0 (2020-06-11)
...
@@ -41,7 +41,7 @@ ZODB is an ACID Transactional database.

To learn more, visit: https://zodb-docs.readthedocs.io

-The github repository is: at https://github.com/zopefoundation/zodb
+The github repository is at https://github.com/zopefoundation/zodb

If you're interested in contributing to ZODB itself, see the
`developer notes
...
@@ -9,6 +9,8 @@ parts =

[versions]
# Avoid breakage in 4.4.5:
zope.testrunner = >= 4.4.6
+# sphinx_rtd_theme depends on docutils<0.17
+docutils = < 0.17

[versions:python2]
Sphinx = < 2
...
@@ -44,6 +44,19 @@ Learning more

* `The ZODB Book (in progress) <http://zodb.readthedocs.org/en/latest/>`_

What is the expansion of "ZODB"?
================================

The expansion of "ZODB" is the Z Object Database. But, of course, we
usually just use "ZODB".

In the past, it was the Zope Object Database, because it was
developed as part of the Zope project. But ZODB doesn't depend on
Zope in any way and is used in many projects that have nothing to do
with Zope.

Downloads
=========
...
Sphinx
-# pygments 2.6 stops the support for python2
-pygments<2.6
+# Silence dependabot claiming a security issue in older versions:
+pygments >= 2.7.4 ; python_version >= '3'
+# pygments 2.6 stopped supporting py2
+pygments < 2.6 ; python_version < '3'
docutils
ZODB
sphinxcontrib_zopeext
...
@@ -2194,7 +2194,7 @@ class FilePool(object):
            self.writing = False
            if self.writers > 0:
                self.writers -= 1
-           self._cond.notifyAll()
+           self._cond.notify_all()

    @contextlib.contextmanager
    def get(self):
@@ -2219,7 +2219,7 @@ class FilePool(object):
        if not self._out:
            with self._cond:
                if self.writers and not self._out:
-                   self._cond.notifyAll()
+                   self._cond.notify_all()

    def empty(self):
        while self._files:
...
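The `notifyAll` → `notify_all` change above replaces the camelCase `threading` aliases that Python 3.10 deprecates. A minimal sketch of the wake-all pattern the pool relies on (the names here are illustrative, not the FilePool API):

```python
import threading

ready = False
cond = threading.Condition()
results = []

def waiter():
    with cond:
        # wait_for re-checks the predicate after every wakeup
        cond.wait_for(lambda: ready, timeout=5)
        results.append(ready)

threads = [threading.Thread(target=waiter) for _ in range(3)]
for t in threads:
    t.start()

with cond:
    ready = True
    cond.notify_all()  # snake_case spelling; notifyAll() is a deprecated alias

for t in threads:
    t.join()
print(results)  # → [True, True, True]
```

`notify_all` must be called while holding the condition's lock, which is why both call sites in the diff sit inside a `with self._cond:` block.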
@@ -23,12 +23,14 @@ from ZODB.utils import u64, get_pickle_metadata

def fsdump(path, file=None, with_offset=1):
    iter = FileIterator(path)
    for i, trans in enumerate(iter):
+       size = trans._tend - trans._tpos
        if with_offset:
-           print(("Trans #%05d tid=%016x time=%s offset=%d" %
-                  (i, u64(trans.tid), TimeStamp(trans.tid), trans._pos)), file=file)
+           print(("Trans #%05d tid=%016x size=%d time=%s offset=%d" %
+                  (i, u64(trans.tid), size,
+                   TimeStamp(trans.tid), trans._pos)), file=file)
        else:
-           print(("Trans #%05d tid=%016x time=%s" %
-                  (i, u64(trans.tid), TimeStamp(trans.tid))), file=file)
+           print(("Trans #%05d tid=%016x size=%d time=%s" %
+                  (i, u64(trans.tid), size, TimeStamp(trans.tid))), file=file)
        print(("  status=%r user=%r description=%r" %
               (trans.status, trans.user, trans.description)), file=file)
@@ -122,3 +124,7 @@ class Dumper(object):

def main():
    import sys
    fsdump(sys.argv[1])

+if __name__ == "__main__":
+    main()
@@ -29,7 +29,11 @@ def shorten(s, size=50):
    navail = size - 5
    nleading = navail // 2
    ntrailing = size - nleading
-   return s[:nleading] + " ... " + s[-ntrailing:]
+   if isinstance(s, bytes):
+       sep = b" ... "
+   else:
+       sep = " ... "
+   return s[:nleading] + sep + s[-ntrailing:]

class Tracer(object):
    """Trace all occurrences of a set of oids in a FileStorage.
...
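The change above makes the separator match the type of the input, since `str` and `bytes` cannot be concatenated on Python 3. A self-contained sketch of the idea (the rounding here keeps the result at `size` characters, which may differ slightly from the script's arithmetic):

```python
def shorten(s, size=50):
    """Elide the middle of s (str or bytes) so the result is ~size long.

    Sketch of the patched fsoids helper: the " ... " separator must be
    bytes when s is bytes, because mixing str and bytes raises TypeError
    on Python 3.
    """
    if len(s) <= size:
        return s
    navail = size - 5             # room left after the 5-char separator
    nleading = navail // 2
    ntrailing = navail - nleading
    sep = b" ... " if isinstance(s, bytes) else " ... "
    return s[:nleading] + sep + s[-ntrailing:]

print(shorten("x" * 100, size=9))   # → xx ... xx
print(shorten(b"y" * 100, size=9))  # → b'yy ... yy'
```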
@@ -66,9 +66,10 @@ import traceback

from ZODB.FileStorage import FileStorage
from ZODB.TimeStamp import TimeStamp
-from ZODB.utils import u64, oid_repr, get_pickle_metadata, load_current
+from ZODB.utils import u64, p64, oid_repr, get_pickle_metadata, load_current
from ZODB.serialize import get_refs
from ZODB.POSException import POSKeyError
+from BTrees.QQBTree import QQBTree

# There's a problem with oid. 'data' is its pickle, and 'serial' its
# serial number. 'missing' is a list of (oid, class, reason) triples,
@@ -118,7 +119,18 @@ def main(path=None):
    # This does not include oids in undone.
    noload = {}

-   for oid in fs._index.keys():
+   # build {pos -> oid} index that is reverse to {oid -> pos} fs._index
+   # we'll need this to iterate objects in order of ascending file position to
+   # optimize disk IO.
+   pos2oid = QQBTree()  # pos -> u64(oid)
+   for oid, pos in fs._index.iteritems():
+       pos2oid[pos] = u64(oid)
+
+   # pass 1: load all objects listed in the index and remember those objects
+   # that are deleted or load with an error. Iterate objects in order of
+   # ascending file position to optimize disk IO.
+   for oid64 in pos2oid.itervalues():
+       oid = p64(oid64)
        try:
            data, serial = load_current(fs, oid)
        except (KeyboardInterrupt, SystemExit):
@@ -130,9 +142,13 @@ def main(path=None):
            traceback.print_exc()
            noload[oid] = 1

+   # pass 2: go through all objects again and verify that their references do
+   # not point to problematic object set. Iterate objects in order of ascending
+   # file position to optimize disk IO.
    inactive = noload.copy()
    inactive.update(undone)
-   for oid in fs._index.keys():
+   for oid64 in pos2oid.itervalues():
+       oid = p64(oid64)
        if oid in inactive:
            continue
        data, serial = load_current(fs, oid)
...
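The fsrefs optimization inverts the `{oid -> file position}` index and walks objects by ascending offset, so the random reads of pass 1 and pass 2 become a mostly sequential scan. A hedged sketch of the idea with a plain dict and `sorted()` standing in for `FileStorage._index` and `BTrees.QQBTree` (names here are illustrative, not the real FileStorage API):

```python
def iter_in_file_order(index):
    """Yield oids ordered by their position in the data file.

    Loading current records in ascending file offset lets the OS and the
    disk do readahead instead of seeking for every object, which is the
    whole point of the pos2oid index built in the patch.
    """
    # reverse {oid -> pos} into (pos, oid) pairs and sort by pos
    for pos, oid in sorted((pos, oid) for oid, pos in index.items()):
        yield oid

# made-up toy index: three 8-byte oids at scattered file positions
index = {b"\x00" * 7 + b"\x02": 4096,
         b"\x00" * 7 + b"\x00": 52,
         b"\x00" * 7 + b"\x01": 1024}
print([oid[-1] for oid in iter_in_file_order(index)])  # → [0, 1, 2]
```

The real script uses a `QQBTree` (64-bit int keys and values) rather than a Python list so the reverse index stays compact even for millions of objects.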
@@ -6,8 +6,8 @@ import sys

import six
from six.moves import filter

-rx_txn = re.compile("tid=([0-9a-f]+).*size=(\d+)")
-rx_data = re.compile("oid=([0-9a-f]+) class=(\S+) size=(\d+)")
+rx_txn = re.compile(r"tid=([0-9a-f]+).*size=(\d+)")
+rx_data = re.compile(r"oid=([0-9a-f]+) size=(\d+) class=(\S+)")

def sort_byhsize(seq, reverse=False):
    L = [(v.size(), k, v) for k, v in seq]
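Two things change here: raw strings silence the Python 3 invalid-escape-sequence warnings for `\d`/`\S`, and `rx_data` now expects `size=` before `class=`, matching `fsdump`'s reordered data records. A quick check against made-up sample lines in the new format:

```python
import re

# raw strings avoid DeprecationWarning for \d and \S on Python 3
rx_txn = re.compile(r"tid=([0-9a-f]+).*size=(\d+)")
rx_data = re.compile(r"oid=([0-9a-f]+) size=(\d+) class=(\S+)")

# illustrative sample lines mirroring fsdump's (new) output format
txn_line = "Trans #00000 tid=03d167e919af1b66 size=192 time=2021-04-16"
data_line = ("  data #00000 oid=0000000000000000 size=104 "
             "class=persistent.mapping.PersistentMapping")

tid, tsize = rx_txn.search(txn_line).groups()
oid, osize, klass = rx_data.search(data_line).groups()
print(tid, tsize)         # → 03d167e919af1b66 192
print(oid, osize, klass)  # → 0000000000000000 104 persistent.mapping.PersistentMapping
```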
@@ -31,8 +31,7 @@ class Histogram(dict):
    def median(self):
        # close enough?
        n = self.size() / 2
-       L = self.keys()
-       L.sort()
+       L = sorted(self.keys())
        L.reverse()
        while 1:
            k = L.pop()
...
@@ -50,11 +49,14 @@ class Histogram(dict):
        return mode

    def make_bins(self, binsize):
-       maxkey = max(six.iterkeys(self))
+       try:
+           maxkey = max(six.iterkeys(self))
+       except ValueError:
+           maxkey = 0
        self.binsize = binsize
-       self.bins = [0] * (1 + maxkey / binsize)
+       self.bins = [0] * (1 + maxkey // binsize)
        for k, v in six.iteritems(self):
-           b = k / binsize
+           b = k // binsize
            self.bins[b] += v

    def report(self, name, binsize=50, usebins=False, gaps=True, skip=True):
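Both fixes in `make_bins` are Python 3 incompatibilities: `/` now returns a float (so it can no longer size a list or index into one), and `max()` over an empty histogram raises `ValueError`. A standalone sketch of the corrected binning:

```python
def make_bins(hist, binsize):
    """Bucket a {key -> count} histogram into fixed-width bins.

    Mirrors the patched fsstats logic: // keeps the bin index an int on
    Python 3, and an empty histogram falls back to maxkey = 0 instead of
    letting max() raise ValueError.
    """
    try:
        maxkey = max(hist)
    except ValueError:   # empty histogram
        maxkey = 0
    bins = [0] * (1 + maxkey // binsize)
    for k, v in hist.items():
        bins[k // binsize] += v
    return bins

print(make_bins({10: 2, 60: 1, 99: 4}, 50))  # → [2, 5]
print(make_bins({}, 50))                     # → [0]
```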
@@ -88,7 +90,7 @@ class Histogram(dict):
            cum += n
            pc = 100 * cum / tot
            print("%6d %6d %3d%% %3d%% %s" % (
-               i * binsize, n, p, pc, "*" * (n / dot)))
+               i * binsize, n, p, pc, "*" * (n // dot)))
        print()

def class_detail(class_size):
@@ -104,7 +106,7 @@ def class_detail(class_size):
    # per class details
    for klass, h in sort_byhsize(six.iteritems(class_size), reverse=True):
        h.make_bins(50)
-       if len(filter(None, h.bins)) == 1:
+       if len(tuple(filter(None, h.bins))) == 1:
            continue
        h.report("Object size for %s" % klass, usebins=True)
@@ -138,7 +140,7 @@ def main(path=None):
    objects = 0
    tid = None

-   f = open(path, "rb")
+   f = open(path, "r")
    for i, line in enumerate(f):
        if MAX and i > MAX:
            break
...
@@ -146,7 +148,7 @@ def main(path=None):
        m = rx_data.search(line)
        if not m:
            continue
-       oid, klass, size = m.groups()
+       oid, size, klass = m.groups()
        size = int(size)
        obj_size.add(size)
@@ -178,6 +180,8 @@ def main(path=None):
            objects = 0
        txn_bytes.add(size)
+   if objects:
+       txn_objects.add(objects)
    f.close()
    print("Summary: %d txns, %d objects, %d revisions" % (
...
##############################################################################
#
# Copyright (c) 2021 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
from ZODB import DB
from ZODB.scripts.fsstats import rx_data
from ZODB.scripts.fsstats import rx_txn
from ZODB.tests.util import TestCase
from ZODB.tests.util import run_module_as_script


class FsdumpFsstatsTests(TestCase):
    def setUp(self):
        super(FsdumpFsstatsTests, self).setUp()
        # create (empty) storage ``data.fs``
        DB("data.fs").close()

    def test_fsdump(self):
        run_module_as_script("ZODB.FileStorage.fsdump", ["data.fs"])
        # verify that ``fsstats`` will understand the output
        with open("stdout") as f:
            tno = obno = 0
            for li in f:
                if li.startswith("  data"):
                    m = rx_data.search(li)
                    if m is None:
                        continue
                    oid, size, klass = m.groups()
                    int(size)
                    obno += 1
                elif li.startswith("Trans"):
                    m = rx_txn.search(li)
                    if not m:
                        continue
                    tid, size = m.groups()
                    size = int(size)
                    tno += 1
        self.assertEqual(tno, 1)
        self.assertEqual(obno, 1)

    def test_fsstats(self):
        # The ``fsstats`` output is complex;
        # currently, we just check the first (summary) line
        run_module_as_script("ZODB.FileStorage.fsdump", ["data.fs"],
                             "data.dmp")
        run_module_as_script("ZODB.scripts.fsstats", ["data.dmp"])
        with open("stdout") as f:
            self.assertEqual(f.readline().strip(),
                             "Summary: 1 txns, 1 objects, 1 revisions")
@@ -18,16 +18,19 @@
http://www.zope.org/Documentation/Developer/Models/ZODB/ZODB_Architecture_Storage_Interface_Info.html

All storages should be able to pass these tests.
"""
-from ZODB import POSException
+import transaction
+
+from ZODB import DB, POSException
from ZODB.Connection import TransactionMetaData
from ZODB.tests.MinPO import MinPO
from ZODB.tests.StorageTestBase import zodb_unpickle, zodb_pickle
from ZODB.tests.StorageTestBase import ZERO
+from ZODB.tests.util import with_high_concurrency

import threading
import time
import zope.interface
import zope.interface.verify

+from random import randint

from .. import utils
@@ -206,7 +209,7 @@ class BasicStorage(object):
        # We'll run the competing trans in a separate thread:
        thread = threading.Thread(name='T2',
            target=self._dostore, args=(oid,), kwargs=dict(revid=revid))
-       thread.setDaemon(True)
+       thread.daemon = True
        thread.start()
        thread.join(.1)
        return thread
@@ -321,7 +324,7 @@ class BasicStorage(object):
        to_join = []
        def run_in_thread(func):
            t = threading.Thread(target=func)
-           t.setDaemon(True)
+           t.daemon = True
            t.start()
            to_join.append(t)
@@ -344,7 +347,7 @@ class BasicStorage(object):
        def update_attempts():
            with attempts_cond:
                attempts.append(1)
-               attempts_cond.notifyAll()
+               attempts_cond.notify_all()

        @run_in_thread
@@ -385,3 +388,226 @@ class BasicStorage(object):
        self.assertEqual(results.pop('lastTransaction'), tids[1])
        for m, tid in results.items():
            self.assertEqual(tid, tids[1])

    # verify storage/Connection for race in between load/open and local
    # invalidations.
    # https://github.com/zopefoundation/ZEO/issues/166
    # https://github.com/zopefoundation/ZODB/issues/290
    @with_high_concurrency
    def check_race_loadopen_vs_local_invalidate(self):
        db = DB(self._storage)

        # init initializes the database with two integer objects -
        # obj1/obj2 that are set to 0.
        def init():
            transaction.begin()
            zconn = db.open()

            root = zconn.root()
            root['obj1'] = MinPO(0)
            root['obj2'] = MinPO(0)

            transaction.commit()
            zconn.close()

        # verify accesses obj1/obj2 and verifies that
        # obj1.value == obj2.value
        #
        # access to obj1 is organized to always trigger loading from zstor.
        # access to obj2 goes through zconn cache and so verifies whether
        # the cache is not stale.
        failed = threading.Event()
        failure = [None]

        def verify():
            transaction.begin()
            zconn = db.open()

            root = zconn.root()
            obj1 = root['obj1']
            obj2 = root['obj2']

            # obj1 - reload it from zstor
            # obj2 - get it from zconn cache
            obj1._p_invalidate()

            # both objects must have the same values
            v1 = obj1.value
            v2 = obj2.value
            if v1 != v2:
                failure[0] = (
                    "verify: obj1.value (%d) != obj2.value (%d)" % (v1, v2))
                failed.set()

            # we did not change anything; also fails with commit
            transaction.abort()
            zconn.close()

        # modify changes obj1/obj2 by doing `objX.value += 1`.
        #
        # Since both objects start from 0, the invariant that
        # `obj1.value == obj2.value` is always preserved.
        def modify():
            transaction.begin()
            zconn = db.open()

            root = zconn.root()
            obj1 = root['obj1']
            obj2 = root['obj2']
            obj1.value += 1
            obj2.value += 1
            assert obj1.value == obj2.value

            transaction.commit()
            zconn.close()

        # xrun runs f in a loop until either N iterations, or until
        # failed is set.
        def xrun(f, N):
            try:
                for i in range(N):
                    # print('%s.%d' % (f.__name__, i))
                    f()
                    if failed.is_set():
                        break
            except:  # noqa: E722 do not use bare 'except'
                failed.set()
                raise

        # loop verify and modify concurrently.
        init()

        N = 500
        tverify = threading.Thread(
            name='Tverify', target=xrun, args=(verify, N))
        tmodify = threading.Thread(
            name='Tmodify', target=xrun, args=(modify, N))
        tverify.start()
        tmodify.start()
        tverify.join(60)
        tmodify.join(60)

        if failed.is_set():
            self.fail(failure[0])

    # client-server storages like ZEO, NEO and RelStorage allow several
    # storage clients to be connected to single storage server.
    #
    # For client-server storages test subclasses should implement
    # _new_storage_client to return new storage client that is connected
    # to the same storage server self._storage is connected to.
    def _new_storage_client(self):
        raise NotImplementedError

    # verify storage for race in between load and external invalidations.
    # https://github.com/zopefoundation/ZEO/issues/155
    #
    # This test is similar to check_race_loadopen_vs_local_invalidate but
    # does not reuse its code because the probability to reproduce the
    # external invalidation bug with only 1 mutator + 1 verifier is low.
    @with_high_concurrency
    def check_race_load_vs_external_invalidate(self):
        # dbopen creates new client storage connection and wraps it
        # with DB.
        def dbopen():
            try:
                zstor = self._new_storage_client()
            except NotImplementedError:
                # the test will be skipped from main thread because dbopen
                # is first used in init on the main thread before any
                # other thread is spawned.
                self.skipTest(
                    "%s does not implement _new_storage_client"
                    % type(self))
            return DB(zstor)

        # init initializes the database with two integer objects -
        # obj1/obj2 that are set to 0.
        def init():
            db = dbopen()
            transaction.begin()
            zconn = db.open()

            root = zconn.root()
            root['obj1'] = MinPO(0)
            root['obj2'] = MinPO(0)

            transaction.commit()
            zconn.close()
            db.close()

        # we'll run 8 T workers concurrently. As of 20210416, due to race
        # conditions in ZEO, it triggers the bug where T sees stale obj2
        # with obj1.value != obj2.value
        #
        # The probability to reproduce the bug is significantly reduced
        # with decreasing n(workers): almost never with nwork=2 and
        # sometimes with nwork=4.
        nwork = 8

        # T is a worker that accesses obj1/obj2 in a loop and verifies
        # the `obj1.value == obj2.value` invariant.
        #
        # access to obj1 is organized to always trigger loading from
        # zstor. access to obj2 goes through zconn cache and so verifies
        # whether the cache is not stale.
        #
        # Once in a while T tries to modify obj{1,2}.value maintaining
        # the invariant as test source of changes for other workers.
        failed = threading.Event()
        failure = [None] * nwork  # [tx] is failure from T(tx)

        def T(tx, N):
            db = dbopen()

            def t_():
                transaction.begin()
                zconn = db.open()

                root = zconn.root()
                obj1 = root['obj1']
                obj2 = root['obj2']

                # obj1 - reload it from zstor
                # obj2 - get it from zconn cache
                obj1._p_invalidate()

                # both objects must have the same values
                i1 = obj1.value
                i2 = obj2.value
                if i1 != i2:
                    # print('FAIL')
                    failure[tx] = (
                        "T%s: obj1.value (%d) != obj2.value (%d)"
                        % (tx, i1, i2))
                    failed.set()

                # change objects once in a while
                if randint(0, 4) == 0:
                    # print("T%s: modify" % tx)
                    obj1.value += 1
                    obj2.value += 1

                try:
                    transaction.commit()
                except POSException.ConflictError:
                    # print('conflict -> ignore')
                    transaction.abort()
                zconn.close()

            try:
                for i in range(N):
                    # print('T%s.%d' % (tx, i))
                    t_()
                    if failed.is_set():
                        break
            except:  # noqa: E722 do not use bare 'except'
                failed.set()
                raise
            finally:
                db.close()

        # run the workers concurrently.
        init()

        N = 100
        tg = []
        for x in range(nwork):
            t = threading.Thread(name='T%d' % x, target=T, args=(x, N))
            t.start()
            tg.append(t)

        for t in tg:
            t.join(60)

        if failed.is_set():
            self.fail([_ for _ in failure if _])
@@ -50,7 +50,7 @@ class ZODBClientThread(TestThread):

    def __init__(self, db, test, commits=10, delay=SHORT_DELAY):
        self.__super_init()
-       self.setDaemon(1)
+       self.daemon = True
        self.db = db
        self.test = test
        self.commits = commits
@@ -76,7 +76,7 @@ class ZODBClientThread(TestThread):
        time.sleep(self.delay)

    # Return a new PersistentMapping, and store it on the root object under
-   # the name (.getName()) of the current thread.
+   # the name of the current thread.
    def get_thread_dict(self, root):
        # This is vicious: multiple threads are slamming changes into the
        # root object, then trying to read the root object, simultaneously
@@ -86,7 +86,7 @@ class ZODBClientThread(TestThread):
        # around (at most) 1000 times was enough so that a 100-thread test
        # reliably passed on Tim's hyperthreaded WinXP box (but at the
        # original 10 retries, the same test reliably failed with 15 threads).
-       name = self.getName()
+       name = self.name
        MAXRETRIES = 1000

        for i in range(MAXRETRIES):
@@ -129,7 +129,7 @@ class StorageClientThread(TestThread):
        data, serial = load_current(self.storage, oid)
        self.test.assertEqual(serial, revid)
        obj = zodb_unpickle(data)
-       self.test.assertEqual(obj.value[0], self.getName())
+       self.test.assertEqual(obj.value[0], self.name)

    def pause(self):
        time.sleep(self.delay)
@@ -140,7 +140,7 @@ class StorageClientThread(TestThread):
        return oid

    def dostore(self, i):
-       data = zodb_pickle(MinPO((self.getName(), i)))
+       data = zodb_pickle(MinPO((self.name, i)))
        t = TransactionMetaData()
        oid = self.oid()
        self.pause()
...
@@ -113,11 +113,13 @@ class MVCCMappingStorage(MappingStorage):

    def tpc_finish(self, transaction, func = lambda tid: None):
        self._data_snapshot = None
-       return MappingStorage.tpc_finish(self, transaction, func)
+       with self._main_lock:
+           return MappingStorage.tpc_finish(self, transaction, func)

    def tpc_abort(self, transaction):
        self._data_snapshot = None
-       MappingStorage.tpc_abort(self, transaction)
+       with self._main_lock:
+           MappingStorage.tpc_abort(self, transaction)

    def pack(self, t, referencesf, gc=True):
        # prevent all concurrent commits during packing
...
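The change above takes the storage's main lock around `tpc_finish`/`tpc_abort` so they serialize against `pack`, which holds the same lock. A toy sketch of the pattern (class and attribute names here are illustrative, not the ZODB storage API):

```python
import threading

class TinyStorage:
    """Toy illustration of the locking pattern: operations that complete
    commits and operations that rewrite committed state (pack) take the
    same lock, so neither can observe the other half-done.
    """
    def __init__(self):
        self._main_lock = threading.RLock()
        self.committed = []

    def tpc_finish(self, data):
        with self._main_lock:       # pack() cannot run concurrently
            self.committed.append(data)
            return len(self.committed)

    def pack(self):
        with self._main_lock:       # no commit can complete mid-pack
            # keep only the most recent record, as a stand-in for GC
            self.committed = self.committed[-1:]

s = TinyStorage()
s.tpc_finish("a")
s.tpc_finish("b")
s.pack()
print(s.committed)  # → ['b']
```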
@@ -12,7 +12,7 @@ class ZODBClientThread(threading.Thread):

    def __init__(self, db, test):
        threading.Thread.__init__(self)
        self._exc_info = None
-       self.setDaemon(True)
+       self.daemon = True
        self.db = db
        self.test = test
        self.event = threading.Event()
...
@@ -16,9 +16,13 @@
from ZODB.MappingStorage import DB

import atexit
import doctest
+import os
+import pdb
import persistent
import re
+import runpy
import sys
+import tempfile
import time
import transaction
@@ -338,3 +342,66 @@ class MonotonicallyIncreasingTimeMinimalTestLayer(MininalTestLayer):
    def testTearDown(self):
        self.time_manager.close()
        reset_monotonic_time()


def with_high_concurrency(f):
    """
    with_high_concurrency decorates f to run with high frequency of
    thread context switches.

    It is useful for tests that try to probabilistically reproduce race
    condition scenarios.
    """
    @functools.wraps(f)
    def _(*argv, **kw):
        if six.PY3:
            # Python3, by default, switches every 5ms, which turns threads
            # in intended "high concurrency" scenarios to execute almost
            # serially. Raise the frequency of context switches in order
            # to increase the probability to reproduce interesting/tricky
            # overlapping of threads.
            #
            # See https://github.com/zopefoundation/ZODB/pull/345#issuecomment-822188305
            # and https://github.com/zopefoundation/ZEO/issues/168#issuecomment-821829116
            # for details.
            _ = sys.getswitchinterval()

            def restore():
                sys.setswitchinterval(_)

            # ~ 100 simple instructions on modern hardware
            sys.setswitchinterval(5e-6)
        else:
            # Python2, by default, switches threads every "100
            # instructions". Just make sure we run f with that default.
            _ = sys.getcheckinterval()

            def restore():
                sys.setcheckinterval(_)

            sys.setcheckinterval(100)

        try:
            return f(*argv, **kw)
        finally:
            restore()

    return _
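On Python 3 the decorator works by shrinking `sys.setswitchinterval`, so the interpreter offers the GIL to other threads every few microseconds instead of every 5 ms. A minimal self-contained sketch of that branch (without the `six` dispatch):

```python
import functools
import sys

def with_high_concurrency(f):
    """Run f under a ~5 microsecond thread switch interval.

    Sketch of the Python 3 branch of the test helper: a tiny interval
    forces frequent GIL handoffs, so thread interleavings that would
    normally be rare become likely. The previous interval is always
    restored, even if f raises.
    """
    @functools.wraps(f)
    def wrapper(*args, **kw):
        old = sys.getswitchinterval()
        sys.setswitchinterval(5e-6)  # ~100 simple instructions of work
        try:
            return f(*args, **kw)
        finally:
            sys.setswitchinterval(old)
    return wrapper

@with_high_concurrency
def probe():
    # inside the decorated function the tiny interval is in effect
    return sys.getswitchinterval()

before = sys.getswitchinterval()
inside = probe()
assert abs(inside - 5e-6) < 1e-9          # tiny interval while f runs
assert sys.getswitchinterval() == before  # restored afterwards
```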

def run_module_as_script(mod, args, stdout="stdout", stderr="stderr"):
    """run module *mod* as script with arguments *args*.

    stdout and stderr are redirected to files given by the
    corresponding parameters.

    The function is usually called in a ``setUp/tearDown`` frame
    which will remove the created files.
    """
    sargv, sout, serr = sys.argv, sys.stdout, sys.stderr
    s_set_trace = pdb.set_trace
    try:
        sys.argv = [sargv[0]] + args
        sys.stdout = open(stdout, "w")
        sys.stderr = open(stderr, "w")
        # to allow debugging
        pdb.set_trace = doctest._OutputRedirectingPdb(sout)
        runpy.run_module(mod, run_name="__main__", alter_sys=True)
    finally:
        sys.stdout.close()
        sys.stderr.close()
        pdb.set_trace = s_set_trace
        sys.argv, sys.stdout, sys.stderr = sargv, sout, serr
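The helper's core idea is `runpy.run_module(..., run_name="__main__", alter_sys=True)` with `sys.argv` and `sys.stdout` swapped out and restored afterwards. A stripped-down, self-contained sketch (the `hello_mod` module and file names are made up for illustration):

```python
import os
import runpy
import sys
import tempfile

# write a tiny throwaway module that prints its command-line arguments
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "hello_mod.py"), "w") as f:
    f.write("import sys\n"
            "if __name__ == '__main__':\n"
            "    print('args: %s' % sys.argv[1:])\n")

def run_module_as_script(mod, args, stdout_path):
    """Run mod as if `python -m mod args...`, capturing stdout to a file."""
    saved_argv, saved_stdout = sys.argv, sys.stdout
    try:
        sys.argv = [saved_argv[0]] + args
        sys.stdout = open(stdout_path, "w")
        # run_name="__main__" triggers the module's __main__ guard;
        # alter_sys=True makes sys.argv[0] point at the module file
        runpy.run_module(mod, run_name="__main__", alter_sys=True)
    finally:
        sys.stdout.close()
        sys.argv, sys.stdout = saved_argv, saved_stdout

sys.path.insert(0, tmpdir)
out = os.path.join(tmpdir, "stdout")
run_module_as_script("hello_mod", ["a", "b"], out)
with open(out) as f:
    print(f.read().strip())  # → args: ['a', 'b']
```

The real helper additionally redirects `sys.stderr` and patches `pdb.set_trace` so a debugger dropped into the module under test still talks to the real terminal.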
@@ -19,7 +19,8 @@ setenv =

[testenv:coverage]
basepython = python3.7
commands =
-    coverage run --source=ZODB -m zope.testrunner --test-path=src []
+    coverage run {envdir}/bin/zope-testrunner --all --test-path=src []
+    coverage combine
    coverage report
deps =
    {[testenv]deps}
...