nexedi / ZODB / Commits

Commit 10e1326a
authored Sep 08, 2016 by Jim Fulton, committed by GitHub on Sep 08, 2016

Merge pull request #101 from zopefoundation/reference-docs

Reference docs

parents 8a4d763a bed11cba
Changes: showing 10 changed files with 386 additions and 70 deletions (+386 -70)
src/ZODB/Connection.py               +6   -2
src/ZODB/DB.py                       +143 -47
src/ZODB/DemoStorage.py              +51  -0
src/ZODB/FileStorage/FileStorage.py  +78  -1
src/ZODB/FileStorage/interfaces.py   +14  -5
src/ZODB/MappingStorage.py           +14  -0
src/ZODB/component.xml               +37  -14
src/ZODB/config.py                   +27  -0
src/ZODB/interfaces.py               +3   -1
src/ZODB/valuedoc.py                 +13  -0
src/ZODB/Connection.py

@@ -78,9 +78,13 @@ def className(obj):
     IPersistentDataManager,
     ISynchronizer)


 class Connection(ExportImport, object):
-    """Connection to ZODB for loading and storing objects."""
+    """Connection to ZODB for loading and storing objects.
+
+    Connections manage object state in collaboration with transaction
+    managers.  They're created by calling the
+    :meth:`~ZODB.DB.open` method on :py:class:`database
+    <ZODB.DB>` objects.
+    """

     _code_timestamp = 0
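The new Connection docstring points at :meth:`ZODB.DB.open`. A minimal sketch of that workflow (an in-memory database and a made-up root key, purely for illustration):

import ZODB, transaction

db = ZODB.DB(None)            # None constructs an in-memory MappingStorage
conn = db.open()              # returns a ZODB.Connection.Connection
root = conn.root()            # objects are loaded and stored through it
root['greeting'] = 'hello'
transaction.commit()          # committed via the default transaction manager
conn.close()
db.close()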
src/ZODB/DB.py

@@ -39,7 +39,7 @@ import transaction
 from persistent.TimeStamp import TimeStamp
 import six

-from . import POSException
+from . import POSException, valuedoc

 logger = logging.getLogger('ZODB.DB')
...
@@ -323,11 +323,9 @@ def getTID(at, before):
             before = TimeStamp(before).raw()
     return before


 @implementer(IDatabase)
 class DB(object):
     """The Object Database
-    -------------------

     The DB class coordinates the activities of multiple database
     Connection instances.  Most of the work is done by the
...
@@ -340,36 +338,19 @@ class DB(object):
     connections are opened, a warning is logged, and if more than twice
     that many, a critical problem is logged.

-    The class variable 'klass' is used by open() to create database
-    connections.  It is set to Connection, but a subclass could override
-    it to provide a different connection implementation.
-
     The database provides a few methods intended for application code
     -- open, close, undo, and pack -- and a large collection of
     methods for inspecting the database and its connections' caches.
-
-    :Cvariables:
-      - `klass`: Class used by L{open} to create database connections
-
-    :Groups:
-      - `User Methods`: __init__, open, close, undo, pack, classFactory
-      - `Inspection Methods`: getName, getSize, objectCount,
-        getActivityMonitor, setActivityMonitor
-      - `Connection Pool Methods`: getPoolSize, getHistoricalPoolSize,
-        setPoolSize, setHistoricalPoolSize, getHistoricalTimeout,
-        setHistoricalTimeout
-      - `Transaction Methods`: invalidate
-      - `Other Methods`: lastTransaction, connectionDebugInfo
-      - `Cache Inspection Methods`: cacheDetail, cacheExtremeDetail,
-        cacheFullSweep, cacheLastGCTime, cacheMinimize, cacheSize,
-        cacheDetailSize, getCacheSize, getHistoricalCacheSize, setCacheSize,
-        setHistoricalCacheSize
     """

     klass = Connection  # Class to use for connections

     _activity_monitor = next = previous = None

+    #: Database storage, implementing :interface:`~ZODB.interfaces.IStorage`
+    storage = valuedoc.ValueDoc('storage object')
+
     def __init__(self, storage,
                  pool_size=7,
                  pool_timeout=1<<31,
                  cache_size=400,
@@ -385,23 +366,53 @@ class DB(object):
                  **storage_args):
         """Create an object database.

-        :Parameters:
-          - `storage`: the storage used by the database, e.g. FileStorage
-          - `pool_size`: expected maximum number of open connections
-          - `cache_size`: target size of Connection object cache
-          - `cache_size_bytes`: target size measured in total estimated size
-            of objects in the Connection object cache.
-            "0" means unlimited.
-          - `historical_pool_size`: expected maximum number of total
-            historical connections
-          - `historical_cache_size`: target size of Connection object cache for
-            historical (`at` or `before`) connections
-          - `historical_cache_size_bytes` -- similar to `cache_size_bytes` for
-            the historical connection.
-          - `historical_timeout`: minimum number of seconds that
-            an unused historical connection will be kept, or None.
-          - `xrefs` - Boolian flag indicating whether implicit cross-database
-            references are allowed
+        :param storage: the storage used by the database, such as a
+            :class:`~ZODB.FileStorage.FileStorage.FileStorage`.
+            This can be a string path name to use a constructed
+            :class:`~ZODB.FileStorage.FileStorage.FileStorage`
+            storage or ``None`` to use a constructed
+            :class:`~ZODB.MappingStorage.MappingStorage`.
+        :param int pool_size: expected maximum number of open connections.
+            Warnings are logged when this is exceeded and critical
+            messages are logged if twice the pool size is exceeded.
+        :param seconds pool_timeout: Maximum age of inactive connections.
+            When a connection has remained unused in a connection
+            pool for more than pool_timeout seconds, it will be
+            discarded and its resources released.
+        :param objects cache_size: target maximum number of non-ghost
+            objects in each connection object cache.
+        :param int cache_size_bytes: target total memory usage of non-ghost
+            objects in each connection object cache.
+        :param int historical_pool_size: expected maximum number of total
+            historical connections
+        :param objects historical_cache_size: target maximum number
+            of non-ghost objects in each historical connection object
+            cache.
+        :param int historical_cache_size_bytes: target total memory
+            usage of non-ghost objects in each historical connection
+            object cache.
+        :param seconds historical_timeout: Maximum age of inactive
+            historical connections.  When a connection has remained
+            unused in a historical connection pool for more than
+            historical_timeout seconds, it will be discarded and its
+            resources released.
+        :param str database_name: The name of this database in a
+            multi-database configuration.  The name is used when
+            constructing cross-database references and when accessing
+            database connections from other databases.
+        :param dict databases: dictionary of database name to
+            databases in a multi-database configuration.  The new
+            database will add itself to this dictionary.  The
+            dictionary is used when getting connections in other databases.
+        :param boolean xrefs: Flag indicating whether cross-database
+            references are allowed from this database to other
+            databases in a multi-database configuration.
+        :param int large_record_size: When object records are saved
+            that are larger than this, a warning is issued,
+            suggesting that blobs should be used instead.
+        :param storage_args: Extra keyword arguments passed to a
+            storage constructor if a path name or None is passed as
+            the storage argument.
         """

         # Allocate lock.
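The rewritten parameter documentation describes three ways of supplying the storage argument. A short sketch of all three (the file and blob-directory paths are illustrative only):

import ZODB
from ZODB.MappingStorage import MappingStorage

# None: an in-memory MappingStorage is constructed for you.
memory_db = ZODB.DB(None)

# A string path name: a FileStorage is constructed at that path, and
# extra keyword arguments are forwarded to the storage constructor.
file_db = ZODB.DB('/tmp/example-data.fs', blob_dir='/tmp/example-blobs')

# An already constructed storage object is used as-is.
storage_db = ZODB.DB(MappingStorage('explicit'))

for db in (memory_db, file_db, storage_db):
    db.close()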
@@ -494,9 +505,7 @@ class DB(object):
             self.historical_pool.map(f)

     def cacheDetail(self):
-        """Return information on objects in the various caches
-
-        Organized by class.
+        """Return object counts by class across all connections.
         """
         detail = {}
...
@@ -514,6 +523,12 @@ class DB(object):
         return sorted(detail.items())

     def cacheExtremeDetail(self):
+        """Return information about all of the objects in the object caches.
+
+        Information includes a connection number, class, object id,
+        reference count and state.  The reference count returned
+        excludes references held by ZODB itself.
+        """
         detail = []
         conn_no = [0]  # A mutable reference to a counter
         # sys.getrefcount is a CPython implementation detail
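The cache-inspection methods documented above are mostly useful when investigating memory use. A rough illustration of how they might be called; the exact contents of the returned values will vary:

import ZODB, transaction
from persistent.mapping import PersistentMapping

db = ZODB.DB(None)
conn = db.open()
conn.root()['data'] = PersistentMapping()
transaction.commit()

print(db.cacheSize())       # total non-ghost objects across all connections
print(db.cacheDetail())     # e.g. [('persistent.mapping.PersistentMapping', 2)]
for entry in db.cacheExtremeDetail():
    # one dict per cached object: connection number, oid, class,
    # reference count and state
    print(entry)

conn.close()
db.close()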
@@ -563,7 +578,7 @@ class DB(object):
         self._connectionMap(f)
         return detail

-    def cacheFullSweep(self):
+    def cacheFullSweep(self): # XXX this is the same as cacheMinimize
         self._connectionMap(lambda c: c._cache.full_sweep())

     def cacheLastGCTime(self):
...
@@ -577,9 +592,13 @@ class DB(object):
         return m[0]

     def cacheMinimize(self):
+        """Minimize cache sizes for all connections
+        """
         self._connectionMap(lambda c: c._cache.minimize())

     def cacheSize(self):
+        """Return the total count of non-ghost objects in all object caches
+        """
         m = [0]
         def f(con, m=m):
             m[0] += con._cache.cache_non_ghost_count
...
@@ -588,6 +607,8 @@ class DB(object):
         return m[0]

     def cacheDetailSize(self):
+        """Return non-ghost counts sizes for all connections.
+        """
         m = []
         def f(con, m=m):
             m.append({'connection': repr(con),
...
@@ -625,38 +646,60 @@ class DB(object):
         del self._mvcc_storage

     def getCacheSize(self):
+        """Get the configured cache size (objects).
+        """
         return self._cache_size

     def getCacheSizeBytes(self):
+        """Get the configured cache size in bytes.
+        """
         return self._cache_size_bytes

     def lastTransaction(self):
+        """Get the storage last transaction id.
+        """
         return self.storage.lastTransaction()

     def getName(self):
+        """Get the storage name
+        """
         return self.storage.getName()

     def getPoolSize(self):
+        """Get the configured pool size
+        """
         return self.pool.size

     def getSize(self):
+        """Get the approximate database size, in bytes
+        """
         return self.storage.getSize()

     def getHistoricalCacheSize(self):
+        """Get the configured historical cache size (objects).
+        """
         return self._historical_cache_size

     def getHistoricalCacheSizeBytes(self):
+        """Get the configured historical cache size in bytes.
+        """
         return self._historical_cache_size_bytes

     def getHistoricalPoolSize(self):
+        """Get the configured historical pool size
+        """
         return self.historical_pool.size

     def getHistoricalTimeout(self):
+        """Get the configured historical pool timeout
+        """
         return self.historical_pool.timeout

     transform_record_data = untransform_record_data = lambda self, data: data

     def objectCount(self):
+        """Get the approximate object count
+        """
         return len(self.storage)

     def open(self, transaction_manager=None, at=None, before=None):
@@ -729,6 +772,15 @@ class DB(object):
             return result

     def connectionDebugInfo(self):
+        """Get debugging information about connections
+
+        This is especially useful to debug connections that seem to be
+        leaking or open too long.  Information includes connection
+        info, the connection before setting, and, if a connection is
+        open, the time it was opened.  The info is the result of
+        calling :meth:`~ZODB.Connection.Connection.getDebugInfo` on
+        the connection, and the connection's cache size.
+        """
         result = []
         t = time.time()
@@ -790,6 +842,8 @@ class DB(object):
         return find_global(modulename, globalname)

     def setCacheSize(self, size):
+        """Reconfigure the cache size (non-ghost object count)
+        """
         with self._lock:
             self._cache_size = size
             def setsize(c):
...
@@ -797,6 +851,8 @@ class DB(object):
             self.pool.map(setsize)

     def setCacheSizeBytes(self, size):
+        """Reconfigure the cache total size in bytes
+        """
         with self._lock:
             self._cache_size_bytes = size
             def setsize(c):
...
@@ -804,6 +860,8 @@ class DB(object):
             self.pool.map(setsize)

     def setHistoricalCacheSize(self, size):
+        """Reconfigure the historical cache size (non-ghost object count)
+        """
         with self._lock:
             self._historical_cache_size = size
             def setsize(c):
...
@@ -811,6 +869,8 @@ class DB(object):
             self.historical_pool.map(setsize)

     def setHistoricalCacheSizeBytes(self, size):
+        """Reconfigure the historical cache total size in bytes
+        """
         with self._lock:
             self._historical_cache_size_bytes = size
             def setsize(c):
@@ -818,21 +878,33 @@ class DB(object):
             self.historical_pool.map(setsize)

     def setPoolSize(self, size):
+        """Reconfigure the connection pool size
+        """
         with self._lock:
             self.pool.size = size

     def setHistoricalPoolSize(self, size):
+        """Reconfigure the connection historical pool size
+        """
         with self._lock:
             self.historical_pool.size = size

     def setHistoricalTimeout(self, timeout):
+        """Reconfigure the connection historical pool timeout
+        """
         with self._lock:
             self.historical_pool.timeout = timeout

-    def history(self, *args, **kw):
-        return self.storage.history(*args, **kw)
+    def history(self, oid, size=1):
+        """Get revision history information for an object.
+
+        See :meth:`ZODB.interfaces.IStorage.history`.
+        """
+        return self.storage.history(oid, size)

     def supportsUndo(self):
+        """Return whether the database supports undo.
+        """
         try:
             f = self.storage.supportsUndo
         except AttributeError:
...
@@ -840,11 +912,20 @@ class DB(object):
             return f()

     def undoLog(self, *args, **kw):
+        """Return a sequence of descriptions for transactions.
+
+        See :meth:`ZODB.interfaces.IStorageUndoable.undoLog`.
+        """
         if not self.supportsUndo():
             return ()
         return self.storage.undoLog(*args, **kw)

     def undoInfo(self, *args, **kw):
+        """Return a sequence of descriptions for transactions.
+
+        See :meth:`ZODB.interfaces.IStorageUndoable.undoInfo`.
+        """
         if not self.supportsUndo():
             return ()
         return self.storage.undoInfo(*args, **kw)
...
@@ -894,6 +975,14 @@ class DB(object):
         self.undoMultiple([id], txn)

     def transaction(self, note=None):
+        """Execute a block of code as a transaction.
+
+        If a note is given, it will be added to the transaction's
+        description.
+
+        The ``transaction`` method returns a context manager that can
+        be used with the ``with`` statement.
+        """
         return ContextManager(self, note)

     def new_oid(self):
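The new ``transaction`` docstring describes the context-manager style of committing work. A brief sketch of how it reads in practice (the note text and root keys are illustrative):

import ZODB

db = ZODB.DB(None)

# Changes made inside the block are committed on normal exit and
# aborted if an exception escapes; the note is added to the
# transaction's description.
with db.transaction(u"add a counter") as conn:
    conn.root()['counter'] = 0

with db.transaction() as conn:
    conn.root()['counter'] += 1

db.close()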
@@ -966,4 +1055,11 @@ class TransactionalUndo(object):
         return "%s:%s" % (self._storage.sortKey(), id(self))

 def connection(*args, **kw):
+    """Create a database :class:`connection <ZODB.Connection.Connection>`.
+
+    A database is created using the given arguments and opened to
+    create the returned connection.  The database will be closed when
+    the connection is closed.  This is a convenience function to avoid
+    managing a separate database object.
+    """
     return DB(*args, **kw).open_then_close_db_when_connection_closes()
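The module-level ``connection`` helper (exposed as ``ZODB.connection``) ties the database lifetime to the returned connection. A small sketch:

import ZODB, transaction

# Same arguments as ZODB.DB(); None gives an in-memory MappingStorage.
conn = ZODB.connection(None)
conn.root()['answer'] = 42
transaction.commit()
conn.close()   # also closes the implicitly created database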
src/ZODB/DemoStorage.py

@@ -40,10 +40,52 @@ from .utils import load_current, maxtid
     ZODB.interfaces.IStorageIteration,
     )
 class DemoStorage(ConflictResolvingStorage):
+    """A storage that stores changes against a read-only base database
+
+    This storage was originally meant to support distribution of
+    application demonstrations with populated read-only databases (on
+    CDROM) and writable in-memory databases.
+
+    Demo storages are extremely convenient for testing where setup of a
+    base database can be shared by many tests.
+
+    Demo storages are also handy for staging applications where a
+    read-only snapshot of a production database (often accomplished
+    using a `beforestorage
+    <https://pypi.python.org/pypi/zc.beforestorage>`_) is combined
+    with a changes database implemented with a
+    :class:`~ZODB.FileStorage.FileStorage.FileStorage`.
+    """

     def __init__(self, name=None, base=None, changes=None,
                  close_base_on_close=None, close_changes_on_close=None):
+        """Create a demo storage
+
+        :param str name: The storage name used by the
+            :meth:`~ZODB.interfaces.IStorage.getName` and
+            :meth:`~ZODB.interfaces.IStorage.sortKey` methods.
+        :param object base: base storage
+        :param object changes: changes storage
+        :param bool close_base_on_close: A flag indicating whether the base
+            database should be closed when the demo storage is closed.
+        :param bool close_changes_on_close: A flag indicating whether the
+            changes database should be closed when the demo storage is closed.
+
+        If a base database isn't provided, a
+        :class:`~ZODB.MappingStorage.MappingStorage` will be
+        constructed and used.
+
+        If ``close_base_on_close`` isn't specified, it will be ``True`` if
+        a base database was provided and ``False`` otherwise.
+
+        If a changes database isn't provided, a
+        :class:`~ZODB.MappingStorage.MappingStorage` will be
+        constructed and used and blob support will be provided using a
+        temporary blob directory.
+
+        If ``close_changes_on_close`` isn't specified, it will be ``True`` if
+        a changes database was provided and ``False`` otherwise.
+        """
         if close_base_on_close is None:
             if base is None:
...
@@ -51,6 +93,8 @@ class DemoStorage(ConflictResolvingStorage):
                 close_base_on_close = False
             else:
                 close_base_on_close = True
+        elif base is None:
+            base = ZODB.MappingStorage.MappingStorage()

         self.base = base
         self.close_base_on_close = close_base_on_close
...
@@ -285,10 +329,17 @@ class DemoStorage(ConflictResolvingStorage):
             raise

     def pop(self):
+        """Close the changes database and return the base.
+        """
         self.changes.close()
         return self.base

     def push(self, changes=None):
+        """Create a new demo storage using the storage as a base.
+
+        The given changes are used as the changes for the returned
+        storage and ``False`` is passed as ``close_base_on_close``.
+        """
         return self.__class__(base=self, changes=changes,
                               close_base_on_close=False)
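The testing use case described in the DemoStorage docstring often looks roughly like the following sketch: a shared read-only base plus a throwaway in-memory changes layer per test. The snapshot path and key names are made up for illustration:

import ZODB
from ZODB.DemoStorage import DemoStorage
from ZODB.FileStorage import FileStorage

# A read-only snapshot shared by many tests (the path is illustrative).
base = FileStorage('/path/to/snapshot.fs', read_only=True)

# Each test gets its own writable, in-memory changes layer on top of it.
def make_test_db():
    return ZODB.DB(DemoStorage('test', base=base, close_base_on_close=False))

db = make_test_db()
with db.transaction() as conn:
    conn.root()['scratch'] = 'per-test data'   # never written to the base
db.close()                                     # the base stays open for the next test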
src/ZODB/FileStorage/FileStorage.py

@@ -140,7 +140,8 @@ class FileStorage(
     ConflictResolvingStorage,
     BaseStorage,
     ):
+    """Storage that saves data in a file
+    """

     # Set True while a pack is in progress; undo is blocked for the duration.
     _pack_is_in_progress = False
...
@@ -148,6 +149,82 @@ class FileStorage(
     def __init__(self, file_name, create=False, read_only=False, stop=None,
                  quota=None, pack_gc=True, pack_keep_old=True, packer=None,
                  blob_dir=None):
+        """Create a file storage
+
+        :param str file_name: Path to store data file
+        :param bool create: Flag indicating whether a file should be
+            created even if it already exists.
+        :param bool read_only: Flag indicating whether the file is
+            read only.  Only one process is able to open the file
+            non-read-only.
+        :param bytes stop: Time-travel transaction id
+            When the file is opened, data will be read up to the given
+            transaction id.  Transaction ids correspond to times and
+            you can compute transaction ids for a given time using
+            :class:`~ZODB.TimeStamp.TimeStamp`.
+        :param int quota: File-size quota
+        :param bool pack_gc: Flag indicating whether garbage
+            collection should be performed when packing.
+        :param bool pack_keep_old: flag indicating whether old data
+            files should be retained after packing as a ``.old`` file.
+        :param callable packer: An alternative
+            :interface:`packer <ZODB.FileStorage.interfaces.IFileStoragePacker>`.
+        :param str blob_dir: A blob-directory path name.
+            Blobs will be supported if this option is provided.
+
+        A file storage stores data in a single file that behaves like
+        a traditional transaction log.  New data records are appended
+        to the end of the file.  Periodically, the file is packed to
+        free up space.  When this is done, current records as of the
+        pack time or later are copied to a new file, which replaces
+        the old file.
+
+        FileStorages keep in-memory indexes mapping object oids to the
+        location of their current records in the file.  Back pointers to
+        previous records allow access to non-current records from the
+        current records.
+
+        In addition to the data file, some ancillary files are
+        created.  These can be lost without affecting data
+        integrity, however losing the index file may cause an extremely
+        slow startup.  Each has a name that's a concatenation of the
+        original file and a suffix.  The files are listed below by
+        suffix:
+
+        .index
+            Snapshot of the in-memory index.  These are created on
+            shutdown, packing, and after rebuilding an index when one
+            was not found.  For large databases, creating a
+            file-storage object without an index file can take a very
+            long time because it's necessary to scan the data file to
+            build the index.
+
+        .lock
+            A lock file preventing multiple processes from opening a
+            file storage in non-read-only mode.
+
+        .tmp
+            A file used to store data being committed in the first phase
+            of 2-phase commit
+
+        .index_tmp
+            A temporary file used when saving the in-memory index to
+            avoid overwriting an existing index until a new index has
+            been fully saved.
+
+        .pack
+            A temporary file written while packing containing current
+            records as of and after the pack time.
+
+        .old
+            The previous database file after a pack.
+
+        When the database is packed, current records as of the pack
+        time and later are written to the ``.pack`` file.  At the end
+        of packing, the ``.old`` file is removed, if it exists, and
+        the data file is renamed to the ``.old`` file and finally the
+        ``.pack`` file is renamed to the data file.
+        """
         if read_only:
             self._is_read_only = True
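A short sketch of the constructor described above, including the ancillary files it leaves behind (file names here are just the defaults people commonly use, not anything mandated by the API):

import ZODB
from ZODB.FileStorage import FileStorage

# Creates Data.fs along with the .lock and .tmp ancillary files, and
# writes Data.fs.index when the storage is closed.
storage = FileStorage('Data.fs', blob_dir='blobs')   # blob support enabled
db = ZODB.DB(storage)

with db.transaction() as conn:
    conn.root()['x'] = 1

db.pack()     # current records go to Data.fs.pack; the old file is kept as Data.fs.old
db.close()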
src/ZODB/FileStorage/interfaces.py

@@ -16,18 +16,27 @@ import zope.interface
 class IFileStoragePacker(zope.interface.Interface):

     def __call__(storage, referencesf, stop, gc):
-        """Pack the file storage into a new file
+        r"""Pack the file storage into a new file
+
+        :param FileStorage storage: The storage object to be packed
+        :param callable referencesf: A function that extracts object
+            references from a pickle bytes string.  This is usually
+            ``ZODB.serialize.referencesf``.
+        :param bytes stop: A transaction id representing the time at
+            which to stop packing.
+        :param bool gc: A flag indicating whether garbage collection
+            should be performed.

         The new file will have the same name as the old file with
-        '.pack' appended. (The packer can get the old file name via
+        ``.pack`` appended. (The packer can get the old file name via
         storage._file.name.) If blobs are supported, if the storages
         blob_dir attribute is not None or empty, then a .removed file
-        most be created in the blob directory. This file contains of
-        the form:
+        must be created in the blob directory. This file contains records of
+        the form::

         (oid+serial).encode('hex')+'\n'

-        or, of the form:
+        or, of the form::

         oid.encode('hex')+'\n'
src/ZODB/MappingStorage.py

@@ -32,8 +32,22 @@ import zope.interface
     ZODB.interfaces.IStorageIteration,
     )
 class MappingStorage(object):
+    """In-memory storage implementation
+
+    Note that this implementation is somewhat naive and inefficient
+    with regard to locking.  Its implementation is primarily meant to
+    be a simple illustration of storage implementation.  It's also
+    useful for testing and exploration where scalability and efficiency
+    are unimportant.
+    """

     def __init__(self, name='MappingStorage'):
+        """Create a mapping storage
+
+        The name parameter is used by the
+        :meth:`~ZODB.interfaces.IStorage.getName` and
+        :meth:`~ZODB.interfaces.IStorage.sortKey` methods.
+        """
         self.__name__ = name
         self._data = {}                               # {oid->{tid->pickle}}
         self._transactions = BTrees.OOBTree.OOBTree() # {tid->TransactionRecord}
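A tiny sketch of the name parameter documented above; everything lives in process memory, so this is mainly useful for tests and exploration:

import ZODB, transaction
from ZODB.MappingStorage import MappingStorage

db = ZODB.DB(MappingStorage('scratch'))
conn = db.open()
conn.root()['note'] = 'gone when the process exits'
transaction.commit()
print(db.getName())   # 'scratch', via the storage's getName()
conn.close()
db.close()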
src/ZODB/component.xml

@@ -18,7 +18,7 @@
     <description>
       If supplied, the file storage will provide blob support and this
       is the name of a directory to hold blob data.  The directory will
-      be created if it doeesn't exist.  If no value (or an empty value)
+      be created if it doesn't exist.  If no value (or an empty value)
       is provided, then no blob support will be provided. (You can still
       use a BlobStorage to provide blob support.)
     </description>
...
@@ -69,7 +69,13 @@
   <sectiontype name="mappingstorage" datatype=".MappingStorage"
                implements="ZODB.storage">
-    <key name="name" default="Mapping Storage"/>
+    <key name="name" default="Mapping Storage">
+      <description>
+        The storage name, used by the
+        :meth:`~ZODB.interfaces.IStorage.getName` and
+        :meth:`~ZODB.interfaces.IStorage.sortKey` methods.
+      </description>
+    </key>
   </sectiontype>

   <!-- The BDB storages probably need to be revised somewhat still.
...
@@ -187,14 +193,14 @@
     <key name="read-only-fallback" datatype="boolean" default="off">
       <description>
         A flag indicating whether a read-only remote storage should be
-        acceptable as a fallback when no writable storages are
+        acceptable as a fall-back when no writable storages are
         available.  Defaults to false.  At most one of read_only and
         read_only_fallback should be true.
       </description>
     </key>
     <key name="username" required="no">
       <description>
-        The authentication username of the server.
+        The authentication user name of the server.
       </description>
     </key>
     <key name="password" required="no">
...
@@ -205,7 +211,7 @@
     <key name="realm" required="no">
       <description>
         The authentication realm of the server.  Some authentication
-        schemes use a realm to identify the logic set of usernames
+        schemes use a realm to identify the logic set of user names
        that are accepted by this server.
       </description>
     </key>
...
@@ -225,7 +231,13 @@
   <sectiontype name="demostorage" datatype=".DemoStorage"
                implements="ZODB.storage">
-    <key name="name"/>
+    <key name="name">
+      <description>
+        The storage name, used by the
+        :meth:`~ZODB.interfaces.IStorage.getName` and
+        :meth:`~ZODB.interfaces.IStorage.sortKey` methods.
+      </description>
+    </key>
     <multisection type="ZODB.storage" name="*" attribute="factories"/>
   </sectiontype>
...
@@ -233,11 +245,12 @@
   <sectiontype name="zodb" datatype=".ZODBDatabase"
                implements="ZODB.database">
     <section type="ZODB.storage" name="*" attribute="storage"/>
-    <key name="cache-size" datatype="integer" default="5000"/>
+    <key name="cache-size" datatype="integer" default="5000">
       <description>
        Target size, in number of objects, of each connection's
        object cache.
      </description>
+    </key>
     <key name="cache-size-bytes" datatype="byte-size" default="0">
      <description>
        Target size, in total estimated size for objects, of each connection's
...
@@ -245,8 +258,14 @@
        "0" means no limit.
      </description>
    </key>
-    <key name="large-record-size" datatype="byte-size"/>
-    <key name="pool-size" datatype="integer" default="7"/>
+    <key name="large-record-size" datatype="byte-size" default="16MB">
+      <description>
+        When object records are saved
+        that are larger than this, a warning is issued,
+        suggesting that blobs should be used instead.
+      </description>
+    </key>
+    <key name="pool-size" datatype="integer" default="7">
      <description>
        The expected maximum number of simultaneously open connections.
        There is no hard limit (as many connections as are requested
...
@@ -255,21 +274,25 @@
        and exceeding twice pool-size connections causes a critical
        message to be logged.
      </description>
-    <key name="pool-timeout" datatype="time-interval"/>
+    </key>
+    <key name="pool-timeout" datatype="time-interval">
      <description>
        The minimum interval that an unused (non-historical)
        connection should be kept.
      </description>
-    <key name="historical-pool-size" datatype="integer" default="3"/>
+    </key>
+    <key name="historical-pool-size" datatype="integer" default="3">
      <description>
        The expected maximum total number of historical connections
        simultaneously open.
      </description>
-    <key name="historical-cache-size" datatype="integer" default="1000"/>
+    </key>
+    <key name="historical-cache-size" datatype="integer" default="1000">
      <description>
        Target size, in number of objects, of each historical connection's
        object cache.
      </description>
+    </key>
     <key name="historical-cache-size-bytes" datatype="byte-size" default="0">
      <description>
        Target size, in total estimated size of objects, of each historical connection's
...
@@ -285,12 +308,12 @@
     </key>
     <key name="database-name">
      <description>
-        When multidatabases are in use, this is the name given to this
+        When multi-databases are in use, this is the name given to this
        database in the collection. The name must be unique across all
        databases in the collection. The collection must also be given
        a mapping from its databases' names to their databases, but that
        cannot be specified in a ZODB config file. Applications using
-        multidatabases typical supply a way to configure the mapping in
+        multi-databases typically supply a way to configure the mapping in
        their own config files, using the "databases" parameter of a DB
        constructor.
      </description>
src/ZODB/config.py

@@ -42,13 +42,33 @@ def getStorageSchema():
     return _s_schema

 def databaseFromString(s):
+    """Create a database from a database-configuration string.
+
+    The string must contain one or more :ref:`zodb
+    <database-text-configuration>` sections.
+
+    The database defined by the first section is returned.
+
+    If :ref:`more than one zodb section is provided
+    <multidatabase-text-configuration>`, a multi-database
+    configuration will be created and all of the databases will be
+    available in the returned database's ``databases`` attribute.
+    """
     return databaseFromFile(StringIO(s))

 def databaseFromFile(f):
+    """Create a database from a file object that provides configuration.
+
+    See :func:`databaseFromString`.
+    """
     config, handle = ZConfig.loadConfigFile(getDbSchema(), f)
     return databaseFromConfig(config.database)

 def databaseFromURL(url):
+    """Load a database from URL (or file name) that provides configuration.
+
+    See :func:`databaseFromString`.
+    """
     config, handler = ZConfig.loadConfig(getDbSchema(), url)
     return databaseFromConfig(config.database)
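A small sketch of databaseFromString, using a text configuration that matches the ``zodb`` and ``mappingstorage`` section types touched in component.xml above (the storage name and printed value are illustrative):

from ZODB.config import databaseFromString

# One <zodb> section; the nested storage section selects the storage type.
db = databaseFromString("""
<zodb>
  cache-size 1000
  <mappingstorage>
    name config-example
  </mappingstorage>
</zodb>
""")

with db.transaction() as conn:
    conn.root()['configured'] = True

print(db.getName())   # the storage name from the config, e.g. 'config-example'
db.close()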
@@ -63,13 +83,20 @@ def databaseFromConfig(database_factories):
     return first

 def storageFromString(s):
+    """Create a storage from a storage-configuration string.
+    """
     return storageFromFile(StringIO(s))

 def storageFromFile(f):
+    """Create a storage from a file object providing storage-configuration.
+    """
     config, handle = ZConfig.loadConfigFile(getStorageSchema(), f)
     return storageFromConfig(config.storage)

 def storageFromURL(url):
+    """\
+    Create a storage from a URL (or file name) providing storage-configuration.
+    """
     config, handler = ZConfig.loadConfig(getStorageSchema(), url)
     return storageFromConfig(config.storage)
src/ZODB/interfaces.py

@@ -437,7 +437,6 @@ class IStorage(Interface):
     """A storage is responsible for storing and retrieving data of objects.

     Consistency and locking
-    -----------------------

     When transactions are committed, a storage assigns monotonically
     increasing transaction identifiers (tids) to the transactions and
...
@@ -472,6 +471,9 @@ class IStorage(Interface):
         Finalize the storage, releasing any external resources.  The
         storage should not be used after this method is called.
+
+        Note that databases close their storages when they're closed, so
+        this method isn't generally called from application code.
         """

     def getName():
src/ZODB/valuedoc.py  (new file, mode 100644)

"""Work around an issue with defining class attribute documentation.

See http://stackoverflow.com/questions/9153473/sphinx-values-for-attributes-reported-as-none/39276413
"""

class ValueDoc:

    def __init__(self, text):
        self.text = text

    def __repr__(self):
        return self.text
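ValueDoc exists so that Sphinx can show a meaningful value for a documented class attribute; DB.py uses it for the ``storage`` attribute added earlier in this commit. A minimal sketch of the idea (the Example class is made up):

from ZODB.valuedoc import ValueDoc

class Example(object):
    #: Documented attribute; instances replace this placeholder with a
    #: real storage object, but Sphinx renders the repr text below.
    storage = ValueDoc('storage object')

print(repr(Example.storage))   # 'storage object'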