wendelin.core - commits (all commits by Kirill Smelkov <kirr@nexedi.com>)
https://lab.nexedi.com/kirr/wendelin.core/-/commits/afdba2829e93403a2787380ba8e612f0436ef8e3

2015-09-02  bigfile/zodb/tests: Factor out code to reclaim all pages
https://lab.nexedi.com/kirr/wendelin.core/-/commit/afdba2829e93403a2787380ba8e612f0436ef8e3
We'll need it in other places in the next patch.

2015-08-19  wendelin.core v0.4
https://lab.nexedi.com/kirr/wendelin.core/-/commit/1eeb03244fbccbffccff6b0caf1b5e6bad89c7e6

2015-08-18  bigarray: Test for correctly handling conflict on array metadata
https://lab.nexedi.com/kirr/wendelin.core/-/commit/9752178c77d07ec04851e9dc1fc47413a49ad34a
e.g. on .shape

2015-08-18  bigfile/zodb: ZBlk._p_invalidate() can be called more than once, in particula...
https://lab.nexedi.com/kirr/wendelin.core/-/commit/800c14a9758ee8730334412ff85de857b6b3c9e3
When there is a conflict (on any object, but on ZBlk in particular) ZODB
machinery calls its ._p_invalidate() twice:
File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
tm2.commit() # this should raise ConflictError and stay at 11 state
File ".../transaction/_manager.py", line 111, in commit
return self.get().commit()
File ".../transaction/_transaction.py", line 271, in commit
self._commitResources()
File ".../transaction/_transaction.py", line 414, in _commitResources
self._cleanup(L)
File ".../transaction/_transaction.py", line 426, in _cleanup
rm.abort(self)
File ".../ZODB/Connection.py", line 436, in abort
self._abort()
File ".../ZODB/Connection.py", line 479, in _abort
self._cache.invalidate(oid)
File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
traceback.print_stack()
and
File ".../wendelin.core/bigfile/tests/test_filezodb.py", line 661, in test_bigfile_filezodb_vs_conflicts
tm2.commit() # this should raise ConflictError and stay at 11 state
File ".../transaction/_manager.py", line 111, in commit
return self.get().commit()
File ".../transaction/_transaction.py", line 271, in commit
self._commitResources()
File ".../transaction/_transaction.py", line 416, in _commitResources
self._synchronizers.map(lambda s: s.afterCompletion(self))
File ".../transaction/weakset.py", line 59, in map
f(elt)
File ".../transaction/_transaction.py", line 416, in <lambda>
self._synchronizers.map(lambda s: s.afterCompletion(self))
File ".../ZODB/Connection.py", line 831, in _storage_sync
self._flush_invalidations()
File ".../ZODB/Connection.py", line 539, in _flush_invalidations
self._cache.invalidate(invalidated)
File ".../wendelin.core/bigfile/file_zodb.py", line 148, in _p_invalidate
traceback.print_stack()
i.e. the first invalidation is done by commit cleanup:
<a href="https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L414" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L414</a>
<a href="https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L479" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L479</a>
and then Connection.afterCompletion() flushes invalidation again:
<a href="https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L416" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/transaction/blob/1.4.4/transaction/_transaction.py#L416</a>
<a href="https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L833" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L833</a>
<a href="https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L539" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L539</a>
If there is no conflict, no ConflictError is raised and thus no
Transaction._cleanup() is done in ._commitResources(), so the
invalidation is called only once. But with a ConflictError it is called twice.
Adjust ZBlk._p_invalidate() not to delve into real invalidation more
than once - else we would fail, as ZBlk._v_zfile becomes unbound after
the invalidation is done the first time.
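For illustration, a minimal sketch of such a guard - ._v_zfile is from the
text above, but everything else here (._v_blk, invalidateblk()) is
hypothetical, not the actual code:

    from persistent import Persistent

    class ZBlk(Persistent):
        def _p_invalidate(self):
            # on ConflictError ZODB calls us twice (cleanup + afterCompletion);
            # ._v_zfile is unbound once real invalidation ran -> use it as guard
            zfile = getattr(self, '_v_zfile', None)
            if zfile is None:
                return                        # second call - nothing left to do
            zfile.invalidateblk(self._v_blk)  # hypothetical notification hook
            Persistent._p_invalidate(self)    # ghostify; drops _v_ attributes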
2015-08-18  bigarray: Test for ZBigArray invalidation via usual attribute, e.g. .shape
https://lab.nexedi.com/kirr/wendelin.core/-/commit/e6bea85f3fccef1d742a13ffec6d0bbf1b1c3f0a
All is currently handled correctly, but an observation is made: upon
such invalidation we throw away ._v_fileh, i.e. we throw away the whole
data cache, just because an array was resized.

2015-08-18  bigfile/zodb: Note that even LivePersistent goes to GHOST state on invalidation
https://lab.nexedi.com/kirr/wendelin.core/-/commit/48b2bb74b542a5c3b65eda10dce868a4f6183c24
LivePersistent can go to ghost state, because invalidations cannot be
ignored - they indicate the object has been changed externally.
This does not break our logic for ZBigFile and ZBigArray, as
invalidations can happen only at a transaction boundary, so during the
course of a transaction those classes are guaranteed to stay uptodate and
thus not lose ._v_file and ._v_fileh (which is the reason they inherit
from LivePersistent).
It is ok to lose ._v_file and ._v_fileh at a transaction boundary and
become ghost - those objects will be recreated upon going back uptodate
and will stay alive again during the whole transaction window.
We only care not to lose e.g. ._v_fileh inside a transaction, because
losing that data manager, and thus the data it manages, inside a transaction
can break synchronization logic and forget changed-through-mmap data.

2015-08-17  bigfile/zodb: Fix typos
https://lab.nexedi.com/kirr/wendelin.core/-/commit/26d5b35eafd09c68eefe469c773c035e1cb440a9

2015-08-17  bigfile/zodb: Do not hold reference to ZBigFileH indefinitely in Connection.o...
https://lab.nexedi.com/kirr/wendelin.core/-/commit/059c71e12a4f2c7c84dfb5d18a9bc8cfd21cc8c0
If we do, ZBigFileH objects just don't get garbage collected, and
sooner or later this leaks enough file descriptors for the main
zope loop to break:
Traceback (most recent call last):
File ".../bin/runzope", line 194, in <module>
sys.exit(Zope2.Startup.run.run())
File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 26, in run
starter.run()
File ".../eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 105, in run
Lifetime.loop()
File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 43, in loop
lifetime_loop()
File ".../eggs/Zope2-2.13.22-py2.7.egg/Lifetime/__init__.py", line 53, in lifetime_loop
asyncore.poll(timeout, map)
File ".../parts/python2.7/lib/python2.7/asyncore.py", line 145, in poll
r, w, e = select.select(r, w, e, timeout)
ValueError: filedescriptor out of range in select()
$ lsof -p <runzope-pid> |grep ramh | wc -l
950
So, continuing 64d1f40b (bigfile/zodb: Monkey-patch for ZODB.Connection
to support callback on .open()), let's change the implementation to use
a WeakSet for the callbacks list.
Yes, because weakrefs to bound methods are released immediately, we give up
the flexibility to subscribe arbitrary callbacks. If that becomes an issue,
there is WeakMethod from py3, or well-known recipes for how to do the same.
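To see the bound-method problem concretely (this is plain Python behaviour,
not wendelin.core code):

    import weakref

    class A(object):
        def cb(self):
            pass

    a = A()
    r = weakref.ref(a.cb)   # `a.cb` creates a temporary bound-method object
    print(r())              # nothing else references it -> already dead: None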
2015-08-17  Drop support for ZODB < 3.10
https://lab.nexedi.com/kirr/wendelin.core/-/commit/105ab1c71ec42eb21df2ad21e807f32e69ebc39f
ZODB 3.10.4 was released almost 4 years ago, and contains a significant
change in how ghost objects coming from the DB are initially set up.

2015-08-17  bigfile: ZODB -> BigFileH invalidate propagation
https://lab.nexedi.com/kirr/wendelin.core/-/commit/92bfd03e178aeb7597529df7461c2b812a094e77
Continuing the theme from the previous patch, here is propagation of
invalidation messages from ZODB to BigFileH memory.
The use-case is that e.g. one fileh mapping was created in one
connection, another in a second connection, and after changes are made
and committed in the second connection, the first fileh has to invalidate
the appropriate already-loaded pages, so its next transaction won't work
with stale data.
To do it, we hook into ZBlk._p_invalidate() and propagate the
invalidation message to ZBigFile which then notifies all
opened-through-it ZBigFileH to invalidate a page.
ZBlk -> ZBigFile lookup is done without storing a backpointer in ZODB -
instead, every time ZBigFile touches a ZBlk object (and thus potentially
does a GHOST -> Live transition on it), we (re-)bind it back to the ZBigFile.
Since ZBigFile is the only class that works with ZBlk objects, it is safe
to do so.
For ZBigFile to notify "all-opened-through-it" ZBigFileH, a weakset is
introduced to track them, e.g. as sketched below.
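A minimal sketch of the fan-out - class internals and the method names
invalidateblk()/invalidate_page() are illustrative, not the actual
implementation:

    import weakref

    class ZBigFileH(object):            # stand-in for the real file handle
        def __init__(self, zfile):
            self.zfile = zfile
        def invalidate_page(self, pgoffset):
            # would call virtmem's fileh_invalidate_page() here
            print('invalidate page %d' % pgoffset)

    class ZBigFile(object):
        def __init__(self, blksize):
            self.blksize = blksize
            self._v_filehset = weakref.WeakSet()  # all handles opened via us

        def fileh_open(self):
            fileh = ZBigFileH(self)
            self._v_filehset.add(fileh)           # weak -> does not pin handles
            return fileh

        def invalidateblk(self, blk):
            # called when a ZBlk changed externally; fan out to every handle
            for fileh in self._v_filehset:
                fileh.invalidate_page(blk)        # assumes blksize == pagesize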
Otherwise, the real page-invalidation work is done by virtmem (see
previous patch).

2015-08-17  bigfile/virtmem: Client API to invalidate a fileh page
https://lab.nexedi.com/kirr/wendelin.core/-/commit/cb779c7b13e7fafdd8a4c559fbbddd32dd8498d7
FileH is a handle representing a snapshot of a file. If, for a pgoffset,
the fileh already has a loaded page, but we know the content of the file has
changed externally after the loading was done, we need to propagate to the
fileh that such-and-such page should be invalidated (and reloaded on
next access).
This patch introduces
fileh_invalidate_page(fileh, pgoffset)
to do just that.
In the next patch we'll use this facility to propagate invalidations of
ZBlk ZODB objects to virtmem subsystem.
NOTE
Since invalidation removes "dirtiness" from a page state, several
subsequent invalidations can make a fileh completely non-dirty
(by invalidating all dirty pages). Previously fileh->dirty was just one
bit, so we needed to improve how we track dirtiness.
One way would be to have a dirty list for fileh pages and operate on
that. This has the advantage of also optimizing dirty-page processing,
e.g. fileh_dirty_writeout(), where we currently scan through all fileh
pages just to write out only the PAGE_DIRTY ones.
Another simpler way is to make fileh->dirty a counter and maintain that.
Since we are going to move the virtmem subsystem back into the kernel,
the simpler, less-intrusive counter approach is used here.
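A sketch of the counter idea - struct layouts and names are simplified,
not the actual virtmem code:

    /* simplified sketch; the real FileH/Page structs carry much more state */
    enum PageState { PAGE_EMPTY, PAGE_LOADED, PAGE_DIRTY };

    struct Page  { enum PageState state; };
    struct FileH { int dirty; };  /* count of PAGE_DIRTY pages in this fileh */

    /* route all state changes through one helper, so that fileh->dirty stays
     * correct and several fileh_invalidate_page() calls can bring it back
     * to 0, which again means "completely clean" */
    static void page_set_state(struct FileH *fileh, struct Page *page,
                               enum PageState newstate)
    {
        fileh->dirty += (newstate == PAGE_DIRTY) - (page->state == PAGE_DIRTY);
        page->state = newstate;
    }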
2015-08-12  bigfile/zodb: ZODB.Connection can migrate between threads on close/open and w...
https://lab.nexedi.com/kirr/wendelin.core/-/commit/c7c01ce4771b2915a08c6b38068365053e620d70
Intro
-----
ZODB maintains a pool of opened-to-DB connections. For each request Zope
opens 1 connection and, after request handling is done, returns the
connection back to the ZODB pool (via Connection.close()). The same
connection will be opened again to handle some future request, and this
next open can happen in a worker thread different from the first one.
TransactionManager (as accessed by transaction.{get,commit,abort,...})
is thread-local, that is, e.g. transaction.get() returns different
transactions for threads T1 and T2.
When _ZBigFileH hooks into the txn_manager to get a chance to run its
.beforeCompletion() when transaction.commit() is run, it hooks into the
_current_ _thread's_ transaction manager.
Without unhooking on connection close, in circumstances where the
connection migrates to a different thread, this can lead to
desynchronization between the _ZBigFileH managing fileh pages and the
Connection with ZODB objects. And even to data corruption, e.g.
    T1                  T2

    open
    zarray[0] = 11
    commit
    close

                        open    # opens the connection closed in T1
    open
                        zarray[0] = 21
    commit
                        abort
    close               close
Here zarray[0]=21 _will_ be committed by T1 as part of the T1 transaction -
because when T1 commits, .beforeCompletion() for zarray is invoked,
sees there is dirty data, propagates the changes to zodb objects in
T2's connection, joins T2's connection into T1's txn, and then T1's txn,
when doing its two-phase commit, stores the modified objects to the DB ->
oops.
----------------------------------------
To prevent such desynchronization _ZBigFileH needs to be a DataManager
which works in sync with the connection it was initially created under -
on connection close, unregister from the transaction manager, and on
connection open, register to the transaction manager in the current,
possibly different, thread context. Then there won't be incorrect
beforeCompletion() notification and corruption.
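A hedged sketch of that lifecycle - the hook names follow the on-open/
on-close connection callbacks from 64d1f40b, but the details here are
illustrative, not the actual code:

    class _ZBigFileH(object):
        def __init__(self, zconn):
            self.transaction_manager = None
            self.on_connection_open(zconn)    # register in creator's thread

        def on_connection_open(self, zconn):
            # (re-)register as synchronizer with the transaction manager of
            # whichever thread has (re)opened the connection
            self.transaction_manager = zconn.transaction_manager
            self.transaction_manager.registerSynch(self)

        def on_connection_close(self, zconn):
            # unhook, so a commit in another thread no longer notifies us
            self.transaction_manager.unregisterSynch(self)
            self.transaction_manager = None

        # synchronizer interface
        def beforeCompletion(self, txn): pass
        def afterCompletion(self, txn):  pass
        def newTransaction(self, txn):   pass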
This issue, besides possible data corruption, was probably also exposing
itself via the following failure modes we've seen in production (in all of
them the connection was migrated from T1 to T2):
1. Exception ZODB.POSException.ConnectionStateError:
ConnectionStateError('Cannot close a connection joined to a transaction',)
in <bound method Cleanup.__del__ of <App.ZApplication.Cleanup instance at 0x7f10f4bab050>> ignored
    T1                        T2

                              modify zarray
                              commit/abort    # does not join zarray to T2.txn,
                                              # because .beforeCompletion() is
                                              # registered in T1.txn_manager

    commit        # T1 invokes .beforeCompletion()
    ...           # beforeCompletion() joins ZBigFileH and zarray._p_jar (=T2.conn) to T1.txn
    ...           # commit is going on in progress
    ...
    ...                       close           # T2 thinks request handling is done
    ...                                       # and closes the connection. But
    ...                                       # T2.conn is still joined to T1.txn
2. Traceback (most recent call last):
File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
def storeblk(self, blk, buf): return self.zself.storeblk(blk, buf)
File ".../wendelin/bigfile/file_zodb.py", line 220, in storeblk
zblk._v_blkdata = bytes(buf) # FIXME does memcpy
File ".../ZODB/Connection.py", line 857, in setstate
raise ConnectionStateError(msg)
ZODB.POSException.ConnectionStateError: Shouldn't load state for 0x1f23a5 when the connection is closed
Similar to "1", but close in T2 happens sooner, so that when T1 does
the commit and tries to store object to database, Connection refuses to
do the store:
    T1                        T2

                              modify zarray
                              commit/abort

    commit
    ...                       close
    ...
    ...
    .  obj.store()
    ...
    ...
3. Traceback (most recent call last):
File ".../wendelin/bigfile/file_zodb.py", line 121, in storeblk
def storeblk(self, blk, buf): return self.zself.storeblk(blk, buf)
File ".../wendelin/bigfile/file_zodb.py", line 221, in storeblk
zblk._p_changed = True # if zblk was already in DB: _p_state -> CHANGED
File ".../ZODB/Connection.py", line 979, in register
self._register(obj)
File ".../ZODB/Connection.py", line 989, in _register
self.transaction_manager.get().join(self)
File ".../transaction/_transaction.py", line 220, in join
Status.ACTIVE, Status.DOOMED, self.status))
ValueError: expected txn status 'Active' or 'Doomed', but it's 'Committing'
( storeblk() does zblk._p_changed -> Connection.register(zblk) ->
txn.join() but txn is already committing
IOW storeblk() was invoked with txn.state being already 'Committing' )
    T1                        T2

                              modify obj      # this way T2.conn joins T2.txn
                              modify zarray

    commit            # T1 invokes .beforeCompletion()
    ...               # beforeCompletion() joins only _ZBigFileH to T1.txn
    ...               # (because T2.conn is already marked as joined)
    ...
    ...                       commit/abort    # T2 does commit/abort - this
    ...                                       # touches only T2.conn, not
    ...                                       # ZBigFileH; in particular T2.conn
    ...                                       # is now reset to be not joined
    ...
    .  tpc_begin      # the actual active commit phase of T1 was somehow delayed a bit
    .  tpc_commit     # when changes from RAM propagate to ZODB objects, the associated
    .    storeblk     # connection (= T2.conn !) is notified again and
    .      zblk = ... # wants to join what it thinks is its transaction_manager,
                      # which, when called from under T1, returns the *T1* transaction
                      # manager, for which T1.txn is already in state='Committing'
4. Empty transaction committed to NEO
( different from doing just transaction.commit() without changing
any data - a connection was joined to the txn, but the set of modified
objects turned out to be empty )
This is probably a race in Connection._register when both T1 and T2
go to it at the same time:
<a href="https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L988" rel="nofollow noreferrer noopener" target="_blank">https://github.com/zopefoundation/ZODB/blob/3.10/src/ZODB/Connection.py#L988</a>
    def _register(self, obj=None):
        if self._needs_to_join:
            self.transaction_manager.get().join(self)
            self._needs_to_join = False
    T1                              T2

    modify zarray
    commit
    ...
    .beforeCompletion               modify obj
    .  if T2.conn.needs_join        if T2.conn.needs_join     # race here
    .    T2.conn.join(T1.txn)         T2.conn.join(T2.txn)    # as a result T2.conn
    .                                                         # joins both T1.txn and T2.txn
    commit finishes                 # T2.conn registered-for-commit object list is now empty

                                    commit
                                    tpc_begin
                                    storage.tpc_begin
                                    tpc_commit                # no object stored, because
                                                              # the for-commit-list is empty
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>, <a href="/klaus" data-user="7" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Klaus Wölfel">@klaus</a>, <a href="/Tyagov" data-user="15" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Ivan Tyagov">@Tyagov</a>, <a href="/vpelletier" data-user="23" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Vincent Pelletier">@vpelletier</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/64d1f40bd5260630e2953511ef943102aeffa7bfbigfile/zodb: Monkey-patch for ZODB.Connection to support callback on .open()2015-08-12T21:19:11+03:00Kirill Smelkovkirr@nexedi.com
ZODB.Connection has support for calling callbacks on .close(), but not on
.open(). We'll need to hook into both the Connection open and close process
in the next patch (for _ZBigFileH to stay in sync with Connection state).
NOTE
on-open callbacks are set up once and fire many times, on every open;
on-close callbacks are set up once and fire only once, on the next close.
The reason for this is that on-close callbacks are useful for scheduling
cleanup of the current connection after its processing is done, whereas
an on-open callback is for future connection usage, which is generally
not related to the current connection.
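A hedged sketch of such a monkey-patch - the names onOpenCallback and
_onOpenCallbacks are illustrative, not the actual patch:

    from ZODB.Connection import Connection

    _orig_open = Connection.open

    def _open(self, *args, **kw):
        ret = _orig_open(self, *args, **kw)
        # unlike on-close callbacks, on-open ones stay subscribed and
        # fire again on every open
        for f in getattr(self, '_onOpenCallbacks', ()):
            f()
        return ret

    Connection.open = _open

    def onOpenCallback(conn, f):
        # subscribe f to be called whenever conn is (re)opened
        if not hasattr(conn, '_onOpenCallbacks'):
            conn._onOpenCallbacks = []  # becomes a WeakSet in a later patch
        conn._onOpenCallbacks.append(f)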
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>, <a href="/vpelletier" data-user="23" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Vincent Pelletier">@vpelletier</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/b92f82c821c0ffa85ac689a331c011f12dcdd77abigfile/zodb: Clarify comments - .beforeCompletion() is called before both co...2015-08-12T18:59:14+03:00Kirill Smelkovkirr@nexedi.comhttps://lab.nexedi.com/kirr/wendelin.core/-/commit/070aeaa9d5583e7ff7be696240750aa4da8b4109bigarray/zodb: Forgot to close DB in tests2015-08-12T18:59:12+03:00Kirill Smelkovkirr@nexedi.com
( without dbclose, the next test will not be able to open the database -
it will time out on open, waiting for the FileStorage lock )

2015-08-09  bigfile/py: Teach storeblk() how to correctly propagate traceback on error
https://lab.nexedi.com/kirr/wendelin.core/-/commit/6da5172e3c087324b0fffdb1e373407e1066c4f8
Previously we were limited to printing a traceback starting down from just
storeblk() via explicit PyErr_PrintEx() - because pybuf was attached to
memory which could go away right after return from the C function - so we
had to destroy that object for sure, not letting any traceback hold a
reference to it.
This turned out to be too limiting and did not show the full context where
errors happen.
So do the following trick: before returning, reattach pybuf to an empty
region at NULL; this way we don't need to worry about pybuf pointing to
memory which can go away, and thus, instead of printing the exception
locally, we just return it the usual way it is done with the C API in Python.
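A hedged sketch of the reattach trick for PY2 - the struct is the copied
definition mentioned in the NOTE below, and the helper name is illustrative:

    #include <Python.h>

    /* PY2-only: copy of the (non-public) PyBufferObject layout */
    typedef struct {
        PyObject_HEAD
        PyObject   *b_base;
        void       *b_ptr;
        Py_ssize_t  b_size;
        Py_ssize_t  b_offset;
        int         b_readonly;
        long        b_hash;
    } PyBufferObject;

    /* point pybuf at an empty region at NULL, so frames captured in a
     * traceback can keep referencing it after the vma memory is gone */
    static void pybuf_detach(PyObject *pybuf)
    {
        PyBufferObject *b = (PyBufferObject *)pybuf;
        Py_CLEAR(b->b_base);    /* drop reference to the underlying object */
        b->b_ptr    = NULL;
        b->b_size   = 0;
        b->b_offset = 0;
    }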
NOTE In contrast to PyMemoryViewObject, the PyBufferObject definition is
not public, so to support Python 2 we had to copy its definition into the
PY2 compat header.
NOTE2 loadblk() is not touched - the loading is done from sighandler
context, which simulates work as if in a separate python thread, so it
is left as is for now.

2015-08-06  bigfile/virtmem: Big Virtmem lock
https://lab.nexedi.com/kirr/wendelin.core/-/commit/d53271b9230a925ac3d76725968cebbd3b02840e
At present several running threads can corrupt internal virtmem
datastructures (e.g. ram->lru_list, fileh->pagemap, etc).
This can happen even with zope instances that have only 1 worker thread -
because there are other "system" threads, and python garbage collection
can trigger in any thread; so if a virtmem object, e.g. a VMA or FileH,
was sitting in the GC queue to be collected, its collection, and thus
e.g. vma_unmap() and fileh_close(), will be called from a
different-from-worker thread.
Because of that, virtmem just has to be aware of threads, so as not to
allow internal datastructure corruption.
On the other hand, the idea of introducing a userspace virtual memory
manager turned out to be not so good from the performance and complexity
points of view, and thus the plan is to try to move it back into the
kernel. Given that, it does not make sense to do a well-optimised locking
implementation for the userspace version.
So we do just a simple single "protect-all" big lock for virtmem.
Of particular note is the interaction with Python's GIL - any long-lived
lock has to be taken with the GIL released, because otherwise it can deadlock:
    t1          t2

    G
    V           G
    !G          V
    G
so we introduce helpers to make sure the GIL is not taken, and to retake
it if we were holding it initially.
Those helpers (py_gil_ensure_unlocked / py_gil_retake_if_waslocked) are
symmetrical opposites to what Python provides to make sure the GIL is
locked (via PyGILState_Ensure / PyGILState_Release).
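A hedged sketch of those helpers - the real implementation predates
PyGILState_Check() (added in Python 3.4) and tracks GIL state itself;
this only shows the intended semantics:

    #include <Python.h>
    #include <pthread.h>

    static pthread_mutex_t virtmem_lock = PTHREAD_MUTEX_INITIALIZER;

    /* release the GIL if we hold it; return state needed to retake it */
    static PyThreadState *py_gil_ensure_unlocked(void)
    {
        if (PyGILState_Check())
            return PyEval_SaveThread();     /* G -> !G */
        return NULL;
    }

    /* retake the GIL iff it was held before py_gil_ensure_unlocked() */
    static void py_gil_retake_if_waslocked(PyThreadState *ts)
    {
        if (ts)
            PyEval_RestoreThread(ts);       /* !G -> G */
    }

    static void virt_lock(void)
    {
        /* always take V with G released - this breaks the deadlock cycle */
        PyThreadState *ts = py_gil_ensure_unlocked();
        pthread_mutex_lock(&virtmem_lock);
        py_gil_retake_if_waslocked(ts);
    }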
Otherwise, the patch is a more-or-less straightforward application of the
one-big-lock-to-protect-everything idea.

2015-08-06  lib/utils: X- versions for pthread_mutex_{lock,unlock}
https://lab.nexedi.com/kirr/wendelin.core/-/commit/78cbf2a08a547ac8cbab1f3cc937ee76feee9002
Mutex lock/unlock should not fail if the mutex was correctly initialized
and is used correctly.

2015-08-06  bigfile: Simple test that we can handle GC from-under sighandler
https://lab.nexedi.com/kirr/wendelin.core/-/commit/786d418d27ce2077d2a84bb2be9f852a055f9940
And specifically that a GC'ed object's __del__ calls into virtmem
(vma_dealloc and fileh_dealloc) again.
NOTE it is not clear whether it is a good idea to do GC from under a
sighandler, but currently it happens in practice, because we did not
care to protect against it.

2015-08-06  bigfile/virtmem: When restoring SIGSEGV, don't change procmask for other signals
https://lab.nexedi.com/kirr/wendelin.core/-/commit/d7c33cd78098fa6c636c520a1ee29263aa6f005d
We factored out SIGSEGV block/restore from fileh_dirty_writeout() to all
functions in cb7a7055 (bigfile/virtmem: Block/restore SIGSEGV in
non-pagefault-handling function). The restoration, however, just sets the
whole thread sigmask.
It could be that between the block and restore calls the procmask for
other signals was changed; by setting the procmask directly, we would
overwrite those changes.
So be careful, and when restoring the SIGSEGV mask, touch the mask bit
for only that signal.
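For example, a sketch using plain sigismember (the patch itself wraps it
as the xsigismember helper mentioned just below):

    #include <signal.h>
    #include <pthread.h>

    /* restore only SIGSEGV's mask bit from `save`, leaving the rest of the
     * current procmask - possibly changed in between - untouched */
    static void sigsegv_restore(const sigset_t *save)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGSEGV);
        pthread_sigmask(sigismember(save, SIGSEGV) ? SIG_BLOCK : SIG_UNBLOCK,
                        &set, NULL);
    }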
( we need an xsigismember helper to get this done, which is also
introduced in this patch )

2015-08-06  lib/utils: pthread_sigmask() returns error directly, not in errno
https://lab.nexedi.com/kirr/wendelin.core/-/commit/8fa9af7f94c485f07f1393e3d4f6b1abcbe170cf
The mistake was there from the beginning - from 3e5e78cd (lib/utils:
Small C utilities we'll use).

2015-08-06  lib/bug: BUGerr(err) - like BUGe() but takes error code explicitly
https://lab.nexedi.com/kirr/wendelin.core/-/commit/ec6ecd4e102041d854982a57084d2ff267c0be6c
We'll need this for functions which return the error code directly, not
in errno - e.g. pthread_sigmask().

2015-08-06  tox: Bump NEO to 1.4
https://lab.nexedi.com/kirr/wendelin.core/-/commit/8213a9e851c266598c4dd6e23bcdb5d940ce262c
<a href="http://mail.tiolive.com/pipermail/neo-users/20150713/000027.html" rel="nofollow noreferrer noopener" target="_blank">http://mail.tiolive.com/pipermail/neo-users/20150713/000027.html</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/cb7a70551ffb5a0392d253aa3b194f7001b159e2bigfile/virtmem: Block/restore SIGSEGV in non-pagefault-handling function2015-08-06T18:21:28+03:00Kirill Smelkovkirr@nexedi.com
Code not handling pagefaults should not access any not-mmapped memory.
Here we just refactor the code we already had to block/restore SIGSEGV
from fileh_dirty_writeout(), and use it in all functions called from
non-pagefaulting context, as promised.
This way, if there is an error in the virtmem implementation which
incorrectly accesses memory prepared for BigFile maps, we'll just die
with a coredump instead of trying to incorrectly handle the pagefault.

2015-07-27  bigarray: In-place .append()
https://lab.nexedi.com/kirr/wendelin.core/-/commit/1245acc9309c5a592ed07a3edd4b380c8b028b38
<a href="/nexedi/wendelin.core/-/commit/ca064f75b0919ac8984a9e5b7511f4ac5dd23db7" data-original="ca064f75" data-link="false" data-link-reference="false" data-project="21" data-commit="ca064f75b0919ac8984a9e5b7511f4ac5dd23db7" data-reference-type="commit" data-container="body" data-placement="top" data-html="true" title="bigarray: Support resizing in-place" class="gfm gfm-commit has-tooltip">ca064f75</a> (bigarray: Support resizing in-place) added O(1) in-place
BigArray.resize() which makes possible for users to append data to BigArray in
O(δ) time.
But it is easy for people to make off-by-one mistakes when calculating
indices for append.
So provide a convenient BigArray.append() which simplifies the following

A                               # ZBigArray e.g. of shape   (N, 3)
values                          # ndarray to append of shape (δ, 3)
n, δ = len(A), len(values)      # length of A's major index  =N
A.resize((n+δ,) + A.shape[1:])  # add δ new entries; now len(A) =N+δ
A[-δ:] = values                 # set data for the last δ new entries
into
A.append(values)
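Under the hood this is roughly (a sketch of the method body, not the
exact implementation):

    def append(self, values):
        # in-place O(δ): grow the major axis, then fill the new tail
        n, delta = len(self), len(values)
        self.resize((n + delta,) + self.shape[1:])
        self[-delta:] = values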
/cc <a href="/klaus" data-user="7" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Klaus Wölfel">@klaus</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/605a2a907e31451def64139674d47c47439ce924bigarray: multiply imported but unused2015-07-24T17:46:11+03:00Kirill Smelkovkirr@nexedi.com
We stopped using numpy.multiply in 73926487 (*: It is not safe to use
multiply.reduce() - it overflows).

2015-06-27  t/shm-punch-hole: fallocate() for hugetlbfs patch v4 & v5
https://lab.nexedi.com/kirr/wendelin.core/-/commit/da4617c729f88d5f7255691dc491604886f54a43

2015-06-27  bigarray: Fix flaky test in test_bigarray_indexing_1d
https://lab.nexedi.com/kirr/wendelin.core/-/commit/9357bac85260e936383f2841d7897234ba5a036c
We compare A_[10*PS-1] (which is A_[-1]) to 0, but

    A_ = ndarray((10*PS,), uint8)

and that means the array memory is not initialized. So the comparison
sometimes works and sometimes does not.
Initialize the compared element explicitly.
NOTE: an A (without _) element does not need to be initialized,
because not-initialized BigArray parts read as zeros.

2015-06-27  tox: Automatically test with all FS, ZEO and NEO backends
https://lab.nexedi.com/kirr/wendelin.core/-/commit/010eeb35187e1dec97784b0c5ab3df964e09bb44
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/7fc4ec6697201e04fe392de51e96cb73380866d8tests: Allow to test with ZEO & NEO ZODB storages2015-06-27T00:02:41+03:00Kirill Smelkovkirr@nexedi.com
Previously we were always testing with DBs backed by FileStorage. Now
we provide a way to run the testsuite with a user-selected storage
backend:
$ WENDELIN_CORE_TEST_DB="<fs>" make test.py # test with temporary db with FileStorage
$ WENDELIN_CORE_TEST_DB="<zeo>" make test.py # ----------//---------- with ZEO
$ WENDELIN_CORE_TEST_DB="<neo>" make test.py # ----------//---------- with NEO
$ WENDELIN_CORE_TEST_DB=<a href="neo://db@master">neo://db@master</a> make test.py # test with externally provided DB
Default is still to run tests with FileStorage.
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/92c3bbfac23fad9b27a06b44e50624c87552331ademo_zbigarray: Switch to dbopen() for opening database2015-06-25T13:19:12+03:00Kirill Smelkovkirr@nexedi.com
And this way, because dbopen() supports opening various kinds of
databases (see the previous commit), we can now specify the type of
database on the command line, e.g.

/path/to/db
neo://db@master
zeo://host:port
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/0e8dd91b718e5f2977c5989ee6a199ef1cccb6cdlib/zodb: Add support for opening neo:// and zeo:// databases2015-06-25T13:14:52+03:00Kirill Smelkovkirr@nexedi.com
Done in a manual, hacky way for now. The clean solution would be to reuse
e.g. repoze.zodbconn[1] or zodburi[2] and teach them to support NEO.
But for now we can't -- those eggs depend on ZODB, and we still use
ZODB3 for maintaining compatibility with both ZODB3.10 and ZODB4.
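A hedged sketch of what such scheme dispatch could look like - the URI
parsing and NEO/ZEO constructor details are assumptions, not the actual
code:

    from ZODB import DB

    def dbopen(uri):
        # select storage backend by URI scheme; plain path -> FileStorage
        if uri.startswith('neo://'):
            from neo.client.Storage import Storage as NEOStorage
            name, master = uri[len('neo://'):].split('@', 1)
            stor = NEOStorage(master_nodes=master, name=name)
        elif uri.startswith('zeo://'):
            from ZEO.ClientStorage import ClientStorage
            host, port = uri[len('zeo://'):].split(':', 1)
            stor = ClientStorage((host, int(port)))
        else:
            from ZODB.FileStorage import FileStorage
            stor = FileStorage(uri)
        return DB(stor)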
/cc <a href="/jm" data-user="30" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Julien Muchembled">@jm</a>
[1] https://pypi.python.org/pypi/repoze.zodbconn
[2] https://pypi.python.org/pypi/zodburi

2015-06-25  Move dbopen(), dbclose() to wendelin.lib.zodb
https://lab.nexedi.com/kirr/wendelin.core/-/commit/726853069c3a895870f1ec9e98d39f4fdbc5a5ab
Factor out those routines to open a ZODB database to a common place.
The reason for doing so is that we'll soon teach dbopen to automatically
recognize several protocols, e.g. neo:// and zeo://, and this way clients
who use dbopen() could automatically access storages besides FileStorage.

2015-06-25  Add forgotten copyright & license in a couple of places
https://lab.nexedi.com/kirr/wendelin.core/-/commit/ab5bb80b9af7983e91af8878e98b583ffbffddeb

2015-06-12  wendelin.core v0.3
https://lab.nexedi.com/kirr/wendelin.core/-/commit/de3fdb85004b91caa84ad19ba719a1a3c3d95598

2015-06-02  bigfile/py: We cannot use memoryview for py2 even on 2.7.10
https://lab.nexedi.com/kirr/wendelin.core/-/commit/a5511edf0773103c2c9764b80c24bbebac60361a
Because numpy.ndarray does not accept it as a buffer= argument
(https://github.com/numpy/numpy/issues/5935)
and our memcpy crashes.
NOTE if we ever need to use memoryview, we can adapt our memcpy to use
array() directly, which works with memoryview, as outlined in the above
numpy issue.

2015-06-02  bigarray: Teach it how to automatically convert to ndarray (if enough addres...
https://lab.nexedi.com/kirr/wendelin.core/-/commit/00db08d6de6e793a262e05723314f82804fef6a4
BigArrays can be big - up to 2^64 bytes - and thus in general it is not
possible to represent a whole BigArray as an ndarray view, because the
virtual address space is smaller than that even on 64-bit architectures.
However, users often try to pass BigArrays to numpy functions as-is, and
numpy finds a way to convert, or start converting, the BigArray to an
ndarray - by detecting it as a sequence and extracting elements one-by-one.
Which is slooooow.
Because of the above, we provide users a well-defined service:
- if virtual address space is available - we succeed at creating an ndarray
  view for the whole BigArray, without delay and copying.
- if not - we properly report the error and give a hint that BigArrays have
  to be processed in chunks.
Verifying that big BigArrays cannot be converted to ndarray also tests
for the behaviour and issues fixed in the last 5 patches.
/cc <a href="/Tyagov" data-user="15" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Ivan Tyagov">@Tyagov</a>
/cc <a href="/klaus" data-user="7" data-reference-type="user" data-container="body" data-placement="top" data-html="true" class="gfm gfm-project_member" title="Klaus Wölfel">@klaus</a>https://lab.nexedi.com/kirr/wendelin.core/-/commit/73926487ee01406c8f8df522f4e2b03b21ece293*: It is not safe to use multiply.reduce() - it overflows2015-06-02T17:37:14+03:00Kirill Smelkovkirr@nexedi.com
e.g.
In [1]: multiply.reduce((1<<30, 1<<30, 1<<30))
Out[1]: 0
instead of
In [2]: (1<<30) * (1<<30) * (1<<30)
Out[2]: 1237940039285380274899124224
In [3]: 1<<90
Out[3]: 1237940039285380274899124224
also multiply.reduce returns int64, instead of python int:
In [4]: type( multiply.reduce([1,2,3]) )
Out[4]: numpy.int64
which also leads to overflow-related problems if we further compute with
this value and other integers, and the result exceeds int64 - it becomes
float:
In [5]: idx0_stop = 18446744073709551615
In [6]: stride0 = numpy.int64(1)
In [7]: byte0_stop = idx0_stop * stride0
In [8]: byte0_stop
Out[8]: 1.8446744073709552e+19
and then it becomes a real problem for BigArray.__getitem__()
wendelin.core/bigarray/__init__.py:326: RuntimeWarning: overflow encountered in long_scalars
page0_min = min(byte0_start, byte0_stop+byte0_stride) // pagesize # TODO -> fileh.pagesize
and then
> vma0 = self._fileh.mmap(page0_min, page0_max-page0_min+1)
E TypeError: integer argument expected, got float
~~~~
So just avoid multiply.reduce() and do our own mul() properly, the same
way sum() is built into python, and we avoid the overflow-related
problems.
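For example, a sketch of such a mul():

    def mul(seq, initial=1):
        # like builtin sum(), but for *; result stays a Python int,
        # which has arbitrary precision -> no overflow
        res = initial
        for x in seq:
            res *= x
        return res

    assert mul((1<<30, 1<<30, 1<<30)) == 1<<90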
2015-06-02  3rdparty/ccan: Update for bitmap_alloc0() segfault fix
https://lab.nexedi.com/kirr/wendelin.core/-/commit/d59b15a3962c496b354e8b4efb72278a1acbd149
We need this commit:
http://git.ozlabs.org/?p=ccan;a=commitdiff;h=c38e11b508e52fb2921e67d1123b05d9bef90fd2
or else we segfault on really big array allocations instead of getting
ENOMEM and reporting it as MemoryError to python.

2015-06-02  bigfile/py: Fix crash in {pyvma,pyfileh}_dealloc() if deallocated object was ...
https://lab.nexedi.com/kirr/wendelin.core/-/commit/7e6829c77d0560a9bd62813f30bf09c9feddc15d
Consider e.g. this scenario for pyvma:
1. in pyfileh_mmap() pyvma is created
2. the next call, fileh_mmap(pyvma, pyfileh, ...), fails
3. we need to deallocate pyvma, which was never mapped
4. in pyvma_dealloc() we unmap pyvma unconditionally -> boom.
The same story goes for pyfileh dealloc vs not fully constructing it in
pyfileh_open().
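A hedged sketch of the fix for the pyvma side - the VMA layout and the
.file_h field used as the "was it mapped" marker are illustrative, not the
actual types:

    #include <Python.h>

    typedef struct { void *file_h; } VMA;  /* simplified: set by fileh_mmap() */
    void vma_unmap(VMA *vma);              /* virtmem, declared elsewhere */

    typedef struct {
        PyObject_HEAD
        VMA vma;                           /* embedded virtmem vma */
    } PyVMA;

    static void pyvma_dealloc(PyVMA *pyvma)
    {
        /* only unmap if fileh_mmap() actually attached the vma - a failed
         * or never-run mmap leaves it unattached (here: .file_h == NULL) */
        if (pyvma->vma.file_h != NULL)
            vma_unmap(&pyvma->vma);
        Py_TYPE(pyvma)->tp_free((PyObject *)pyvma);
    }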