Commit 0261593b (slapos)
Authored Sep 14, 2016 by Julien Muchembled; committed by Kazuhiko Shiozaki, Sep 16, 2016
version up: ZODB3 3.10.7
parent 237ed0c0
Showing 4 changed files with 4 additions and 260 deletions (+4, -260)
component/egg-patch/ZODB3-3.10.5.patch (+0, -51)
component/egg-patch/ZODB3-persistent-ghostify-slots.patch (+0, -201)
software/neoppod/software-common.cfg (+1, -7)
software/neoppod/software.cfg (+3, -1)
component/egg-patch/ZODB3-3.10.5.patch (deleted, 100644 → 0)
# https://mail.zope.org/pipermail/zodb-dev/2014-February/015182.html
diff --git a/src/ZODB/FileStorage/FileStorage.py b/src/ZODB/FileStorage/FileStorage.py
index d45cbbf..d662bf4 100644
--- a/src/ZODB/FileStorage/FileStorage.py
+++ b/src/ZODB/FileStorage/FileStorage.py
@@ -683,6 +683,7 @@ def tpc_vote(self, transaction):
                 # Hm, an error occurred writing out the data. Maybe the
                 # disk is full. We don't want any turd at the end.
                 self._file.truncate(self._pos)
+                self._files.flush()
                 raise
             self._nextpos = self._pos + (tl + 8)

@@ -737,6 +738,7 @@ def _finish_finish(self, tid):
     def _abort(self):
         if self._nextpos:
             self._file.truncate(self._pos)
+            self._files.flush()
             self._nextpos=0
             self._blob_tpc_abort()

@@ -1996,6 +1998,15 @@ def __init__(self, file_name):
         self._out = []
         self._cond = threading.Condition()

+    def flush(self):
+        """Empty read buffers.
+
+        This is required if they may contain data of rolled back transactions.
+        """
+        with self.write_lock():
+            for f in self._files:
+                f.flush()
+
     @contextlib.contextmanager
     def write_lock(self):
         with self._cond:

# https://github.com/zopefoundation/ZODB/pull/15/files
diff -ur a/src/ZODB/FileStorage/FileStorage.py b/src/ZODB/FileStorage/FileStorage.py
--- a/src/ZODB/FileStorage/FileStorage.py
+++ b/src/ZODB/FileStorage/FileStorage.py
@@ -430,7 +430,7 @@
                 if h.tid == serial:
                     break
                 pos = h.prev
-                if not pos:
+                if h.tid < serial or not pos:
                     raise POSKeyError(oid)
             if h.plen:
                 return self._file.read(h.plen)
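The flush() method added above exists to discard read buffers that may still hold data from a transaction whose writes were just truncated away. Below is a small standalone illustration of that failure mode, using plain Python file objects rather than ZODB code (the buffer size comes from the default io layer; this only demonstrates the stale-buffer effect, not the exact fix used in the patch):

import io
import os
import tempfile

# Write more than one io buffer worth of data, then read a little of it
# through a second handle so that handle's buffer fills up with "old" bytes.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (io.DEFAULT_BUFFER_SIZE * 2))
os.close(fd)

reader = io.open(path, "rb")
reader.read(1)                      # fills the reader's buffer with soon-to-be-stale data

with io.open(path, "r+b") as writer:
    writer.truncate(10)             # "roll back": only 10 bytes remain on disk

stale = reader.read()               # still returns bytes past the truncation point
assert len(stale) > 9               # the buffered, rolled-back data survived

reader.seek(0)                      # seeking discards the stale buffer
assert len(reader.read()) == 10     # now only the 10 remaining bytes are visible
reader.close()
os.unlink(path)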
component/egg-patch/ZODB3-persistent-ghostify-slots.patch (deleted, 100644 → 0)
From d387a425941b37b99355077657edf7a2f117cf47 Mon Sep 17 00:00:00 2001
From: Kirill Smelkov <kirr@nexedi.com>
Date: Thu, 21 Jul 2016 22:34:55 +0300
Subject: [PATCH] persistent: On deactivate release in-slots objects too
( This is backport of https://github.com/zopefoundation/persistent/pull/44
to ZODB-3.10 )
On ._p_deactivate() and ._p_invalidate(), when an object goes to ghost
state, objects referenced by all its attributes, except related to
persistence machinery, are released, this way freeing memory (if they
were referenced only from going-to-ghost object).
That's the idea - an object in ghost state is simply a stub, which loads
its content on first access (via hooking into get/set attr) while
occupying minimal memory in not-yet-loaded state.
However the above is not completely true right now, as currently on
ghostification only object's .__dict__ is released, while in-slots objects
are retained attached to ghost object staying in RAM:
---- 8< ----
from ZODB import DB
from persistent import Persistent
import gc

db = DB(None)
jar = db.open()

class C:
    def __init__(self, v):
        self.v = v
    def __del__(self):
        print 'released (%s)' % self.v

class P1(Persistent):
    pass

class P2(Persistent):
    __slots__ = ('aaa')

p1 = P1()
jar.add(p1)
p1.aaa = C(1)

p2 = P2()
jar.add(p2)
p2.aaa = C(2)

p1._p_invalidate()
# "released (1)" is printed

p2._p_invalidate()
gc.collect()
# "released (2)" is NOT printed <--
---- 8< ----
So teach ghostify() & friends to release objects in slots to free-up
memory when an object goes to ghost state.
NOTE PyErr_Occurred() added after ghostify() calls because
pickle_slotnames() can raise an error, but we do not want to change
ghostify() prototype for backward compatibility reason - as it is used
in cPersistenceCAPIstruct.
( I hit this bug with wendelin.core which uses proxies to load
data from DB to virtual memory manager and then deactivate proxy right
after load has been completed:
https://lab.nexedi.com/nexedi/wendelin.core/blob/f7803634/bigfile/file_zodb.py#L239
https://lab.nexedi.com/nexedi/wendelin.core/blob/f7803634/bigfile/file_zodb.py#L295 )
---
src/persistent/cPersistence.c | 41 +++++++++++++++++++++++++++++++++-
src/persistent/tests/testPersistent.py | 24 ++++++++++++++++++++
2 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/src/persistent/cPersistence.c b/src/persistent/cPersistence.c
index b4a185c..28d1f9a 100644
--- a/src/persistent/cPersistence.c
+++ b/src/persistent/cPersistence.c
@@ -75,6 +75,7 @@ fatal_1350(cPersistentObject *self, const char *caller, const char *detail)
 #endif

 static void ghostify(cPersistentObject*);
+static PyObject * pickle_slotnames(PyTypeObject *cls);

 /* Load the state of the object, unghostifying it. Upon success, return 1.
  * If an error occurred, re-ghostify the object and return -1.
@@ -141,7 +142,7 @@ accessed(cPersistentObject *self)
 static void
 ghostify(cPersistentObject *self)
 {
-    PyObject **dictptr;
+    PyObject **dictptr, *slotnames;

     /* are we already a ghost? */
     if (self->state == cPersistent_GHOST_STATE)
@@ -171,6 +172,8 @@ ghostify(cPersistentObject *self)
         _estimated_size_in_bytes(self->estimated_size);
     ring_del(&self->ring);
     self->state = cPersistent_GHOST_STATE;
+
+    /* clear __dict__ */
     dictptr = _PyObject_GetDictPtr((PyObject *)self);
     if (dictptr && *dictptr)
     {
@@ -178,6 +181,38 @@ ghostify(cPersistentObject *self)
         *dictptr = NULL;
     }

+    /* clear all slots besides _p_* */
+    slotnames = pickle_slotnames(Py_TYPE(self));
+    if (slotnames && slotnames != Py_None)
+    {
+        int i;
+
+        for (i = 0; i < PyList_GET_SIZE(slotnames); i++)
+        {
+            PyObject *name;
+            char *cname;
+            int is_special;
+
+            name = PyList_GET_ITEM(slotnames, i);
+            if (PyBytes_Check(name))
+            {
+                cname = PyBytes_AS_STRING(name);
+                is_special = !strncmp(cname, "_p_", 3);
+                if (is_special) /* skip persistent */
+                {
+                    continue;
+                }
+            }
+
+            /* NOTE: this skips our delattr hook */
+            if (PyObject_GenericSetAttr((PyObject *)self, name, NULL) < 0)
+                /* delattr of non-set slot will raise AttributeError - we
+                 * simply ignore. */
+                PyErr_Clear();
+        }
+    }
+    Py_XDECREF(slotnames);
+
     /* We remove the reference to the just ghosted object that the ring
      * holds. Note that the dictionary of oids->objects has an uncounted
      * reference, so if the ring's reference was the only one, this frees
@@ -261,6 +296,8 @@ Per__p_deactivate(cPersistentObject *self)
            called directly. Methods that override this need to
            do the same! */
         ghostify(self);
+        if (PyErr_Occurred())
+            return NULL;
     }

     Py_INCREF(Py_None);
@@ -289,6 +326,8 @@ Per__p_invalidate(cPersistentObject *self)
         if (Per_set_changed(self, NULL) < 0)
             return NULL;
         ghostify(self);
+        if (PyErr_Occurred())
+            return NULL;
     }
     Py_INCREF(Py_None);
     return Py_None;
diff --git a/src/persistent/tests/testPersistent.py b/src/persistent/tests/testPersistent.py
index 51e0382..fdb8b67 100644
--- a/src/persistent/tests/testPersistent.py
+++ b/src/persistent/tests/testPersistent.py
@@ -180,6 +180,30 @@ class PersistenceTest(unittest.TestCase):
         self.assertEqual(obj._p_changed, None)
         self.assertEqual(obj._p_state, GHOST)

+    def test__p_invalidate_from_changed_w_slots(self):
+        from persistent import Persistent
+        class Derived(Persistent):
+            __slots__ = ('myattr1', 'myattr2')
+            def __init__(self):
+                self.myattr1 = 'value1'
+                self.myattr2 = 'value2'
+        obj = Derived()
+        jar = self._makeJar()
+        jar.add(obj)
+        obj._p_activate()
+        obj._p_changed = True
+        jar._loaded = []
+        jar._registered = []
+        self.assertEqual(Derived.myattr1.__get__(obj), 'value1')
+        self.assertEqual(Derived.myattr2.__get__(obj), 'value2')
+        obj._p_invalidate()
+        self.assertIs(obj._p_changed, None)
+        self.assertEqual(list(jar._loaded), [])
+        self.assertRaises(AttributeError, lambda: Derived.myattr1.__get__(obj))
+        self.assertRaises(AttributeError, lambda: Derived.myattr2.__get__(obj))
+        self.assertEqual(list(jar._loaded), [])
+        self.assertEqual(list(jar._registered), [])
+
     def test_initial_serial(self):
         NOSERIAL = "\000" * 8
         obj = self._makeOne()
--
2.9.2.701.gf965a18.dirty
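For readers who do not want to trace the C, here is a rough pure-Python equivalent of the slot-clearing step the patch above adds to ghostify(), for illustration only (the real change is in cPersistence.c; copy_reg._slotnames stands in for its pickle_slotnames() helper):

try:
    import copy_reg as copyreg      # Python 2, matching the patch's era
except ImportError:
    import copyreg                  # Python 3

def clear_slots_on_ghostify(obj):
    """Drop the values held in __slots__, except persistence bookkeeping.

    Mirrors the patched ghostify(): enumerate slot names the way the pickle
    machinery does, skip anything starting with "_p_", and ignore slots that
    were never filled (deleting those raises AttributeError).
    """
    slotnames = copyreg._slotnames(type(obj)) or []
    for name in slotnames:
        if name.startswith("_p_"):  # leave the persistence machinery alone
            continue
        try:
            delattr(obj, name)      # the C code bypasses custom delattr hooks here
        except AttributeError:
            pass                    # slot was never set

With that behaviour in place, the load-then-deactivate pattern mentioned in the commit message (as used by the wendelin.core proxies) actually frees the loaded payload: when the object returns to ghost state, values kept in __slots__ are released along with __dict__.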
software/neoppod/software-common.cfg
...
@@ -42,11 +42,6 @@ recipe = zc.recipe.egg
 eggs = neoppod[admin, ctl, master, storage-importer, storage-mysqldb, tests]
   ${python-mysqlclient:egg}
   ZODB3
-patch-binary = ${patch:location}/bin/patch
-ZODB3-patches =
-  ${:_profile_base_location_}/../../component/egg-patch/ZODB3-3.10.5.patch#c5fe331b1e3a930446f93ab4f6e97c6e
-  ${:_profile_base_location_}/../../component/egg-patch/ZODB3-persistent-ghostify-slots.patch#3a66e9c018d7269bd522d5b0a746f510
-ZODB3-patch-options = -p1

 [slapos-deps-eggs]
 recipe = zc.recipe.egg
...
@@ -105,8 +100,7 @@ md5sum = 81ab5e842ecf8385b12d735585497cc8
 [versions]
 slapos.recipe.template = 2.9
-# patched egg
-ZODB3 = 3.10.5+SlapOSPatched002
+ZODB3 = 3.10.7
 # Required by slapos.toolbox = 0.58
 slapos.toolbox = 0.58
 PyRSS2Gen = 1.1
...
software/neoppod/software.cfg
...
@@ -34,11 +34,13 @@ eggs = erp5.util
 interpreter = ${:_buildout_section_name_}

 [neoppod]
+patch-binary = ${patch:location}/bin/patch
+ZODB3-patch-options = -p1
 ZODB3-patches +=
   ${neoppod-repository:location}/ZODB3.patch

 [versions]
-ZODB3 = 3.10.5+SlapOSPatched003
+ZODB3 = 3.10.7+SlapOSPatched001
 erp5.util = 0.4.45
 # To match ERP5
 transaction = 1.1.1
...
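Once a software release is rebuilt against these profiles, the effective ZODB3 pin can be checked from its Python environment; a minimal sketch, assuming pkg_resources is importable there:

import pkg_resources

# software-common.cfg now pins the upstream release, software.cfg the patched one
print(pkg_resources.get_distribution("ZODB3").version)
# expected: "3.10.7", or "3.10.7+SlapOSPatched001" when built from software.cfg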