neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/.gitignore
*.pyc
*.pyo
*.swp
*~
/build/
/dist/
/mock.py
/neoppod.egg-info/
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/CHANGES
Change History
==============
0.9.1 (unreleased)
------------------
- client: the method to retrieve the history of persistent objects was
incompatible with recent ZODB and needlessly queried all storage nodes
systematically.
- neoctl: 'print node' command (to get list of all nodes) raised an
AssertionError.
- 'neomigrate' raised a TypeError when converting NEO DB back to FileStorage.
0.9 (2011-09-12)
----------------
Initial release.
NEO is considered stable enough to replace existing ZEO setups, except that:
- there's no backup mechanism (aka efficient snapshotting): there's only
replication and the underlying MySQL tools
- MySQL tables format may change in the future
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/MANIFEST.in
graft tools
include neo.conf CHANGES TODO TESTS.txt ZODB3.patch
include neo/client/component.xml # required for Python < 2.7
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/README
NEO is a distributed, redundant and scalable implementation of the ZODB API.
NEO stands for Nexedi Enterprise Object.
Overview
========
A NEO cluster is composed of the following types of nodes:
- "master" nodes (mandatory, 1 or more)
Takes care of transactionality. Only one master node is really active
(the active master node is called the "primary master") at any given time;
extra masters are spares (they are called "secondary masters").
- "storage" nodes (mandatory, 1 or more)
Stores data in a MySQL database, preserving history. All available storage
nodes are in use simultaneously, which offers redundancy and data distribution.
Storage backends other than MySQL are being considered for a future release.
- "admin" nodes (mandatory for startup, optional afterwards)
Accepts commands from the neoctl tool, transmits them to the
primary master, and monitors the cluster state.
- "client" nodes
Well... Something needing to store/load data in a NEO cluster.
ZODB API is fully implemented except:
- pack: only old revisions of objects are removed for the moment
(full implementation is considered)
- blobs: not implemented (not considered yet)
There is a simple way to convert FileStorage to NEO and back again.
See also http://www.neoppod.org/links for more detailed information about
features related to scalability.
Disclaimer
==========
In addition to the disclaimer contained in the licence this code is
released under, please consider the following.
NEO does not implement any authentication mechanism between its nodes, and
does not encrypt data exchanged between nodes either.
If you want to protect your cluster from malicious nodes, or your data from
being snooped, please consider encrypted tunnelling (such as OpenVPN).
Requirements
============
- Linux 2.6 or later
- Python 2.4 or later
- For Python 2.4: `ctypes `_
(bundled with later Python versions)
Note that setup.py does not declare any dependency on 'ctypes', so you will
have to install it explicitly.
- For storage nodes:
- MySQLdb: http://sourceforge.net/projects/mysql-python
- For client nodes: ZODB 3.10.x but it should work with ZODB >= 3.4
Installation
============
a. Make the neo directory importable by Python (for example, by
adding its parent directory to the PYTHONPATH environment variable).
b. Choose a cluster name and set up a MySQL database.
c. Start all required nodes::
neomaster --cluster=<cluster name>
neostorage --cluster=<cluster name> --database=user:passwd@db
neoadmin --cluster=<cluster name>
d. Tell the cluster it can provide service::
neoctl start
How to use
==========
First make sure Python can import 'neo.client' package.
In zope
-------
a. Edit your zope.conf: add a neo import and edit the `zodb_db` section,
replacing its filestorage subsection with a NEOStorage one.
It should look like::

%import neo.client
<zodb_db main>
    # Main FileStorage database
    <NEOStorage>
        master_nodes 127.0.0.1:10000
        name <cluster name>
    </NEOStorage>
    mount-point /
</zodb_db>
b. Start zope
In a Python script
------------------
Just create the storage object and play with it::
from neo.client.Storage import Storage
s = Storage(master_nodes="127.0.0.1:10010", name="main")
...
The "name" and "master_nodes" parameters have the same meaning as in the
configuration file.
Shutting down
-------------
There is no administration command yet to properly stop a running cluster,
so the following manual steps should be taken:
a. Make sure all clients such as Zope instances are stopped, so that the
cluster becomes idle.
b. Stop all master nodes first with a SIGINT or SIGTERM, so that storage nodes
don't end up in the OUT_OF_DATE state.
c. Finally, stop the remaining nodes with a SIGINT or SIGTERM.
Deployment
==========
NEO has no built-in deployment features such as process daemonization. We use
`supervisor `_ with a configuration like the one below::
[group:neo]
programs=master_01,storage_01,admin
[program:master_01]
priority=1
command=neomaster -c neo -s master_01 -f /neo/neo.conf
user=neo
[program:storage_01]
priority=2
command=neostorage -c neo -s storage_01 -f /neo/neo.conf
user=neo
[program:admin]
priority=3
command=neoadmin -c neo -s admin -f /neo/neo.conf
user=neo
Developers
==========
Developers interested in NEO may refer to the
`NEO Web site `_ and subscribe to the following mailing
lists:
- `neo-users `_:
users discussion
- `neo-dev `_:
developers discussion
- `neo-report `_:
automated test results (read-only list)
Commercial Support
==================
Nexedi provides commercial support for NEO: http://www.nexedi.com/
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/TESTS.txt
In order to check the test coverage of NEO, we use the figleaf tool. The usage
(for a complete NEO test suite) is:
Download and install figleaf: http://darcs.idyll.org/~t/projects/figleaf/doc/
$ figleaf neotestrunner -u    (generates a .figleaf file)
$ figleaf2html .figleaf       (converts the .figleaf file into HTML pages)
$ firefox html/               (to read the results)
Each page contains NEO code, annotated with the following colours:
Green: code executed during the test suite
Red: code not executed
Black: comments and "unused" lines
To restrict the check to the relevant NEO files, you can use the following
options:
figleaf -i : ignore Python libraries
figleaf2html -f : specify the list of files to check
For statistics, you can also look at the index.html page, which indicates the
test-coverage percentage.
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/TODO
RC = Release Critical (for next release)
Documentation
- Clarify the meaning of node states, and consider renaming them in the code.
Ideas:
TEMPORARILY_DOWN becomes UNAVAILABLE
BROKEN is removed ?
- Clarify the use of each error code:
- NOT_READY removed (connection kept opened until ready)
- Split PROTOCOL_ERROR (BAD IDENTIFICATION, ...)
RC - Clarify the meaning of cell states
- Add docstrings (think of doctests)
Code
Code changes often impact more than just one node. They are categorised by
node where the most important changes are needed.
General
RC - Review XXX in the code (CODE)
RC - Review TODO in the code (CODE)
RC - Review output of pylint (CODE)
- Keep-alive (HIGH AVAILABILITY) (implemented, to be reviewed and tested)
Consider the need to implement a keep-alive system (packets sent
automatically when there is no activity on the connection for a period
of time).
- Factorise packet data when sending partition table cells (BANDWIDTH)
Currently, each cell in a partition table update contains the UUIDs of all
involved nodes.
This must be changed to a correspondence table using shorter keys (sent
in the packet) to avoid repeating the same UUIDs many times.
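The correspondence-table idea above can be sketched as follows. This is a hypothetical illustration only (invented function names, plain tuples rather than NEO's actual packet format): the key table is sent once per packet, and cells refer to nodes by small integer keys.

```python
def encode_cells(cells):
    """cells: list of (partition_id, uuid, state) tuples.

    Returns (key_table, packed) where packed references each UUID
    through a small integer key instead of repeating it per cell.
    """
    key_table = []   # index -> uuid, sent once in the packet
    key_of = {}      # uuid -> index, local helper
    packed = []
    for partition_id, uuid, state in cells:
        if uuid not in key_of:
            key_of[uuid] = len(key_table)
            key_table.append(uuid)
        packed.append((partition_id, key_of[uuid], state))
    return key_table, packed

def decode_cells(key_table, packed):
    """Inverse of encode_cells: expand keys back into UUIDs."""
    return [(pid, key_table[key], state) for pid, key, state in packed]
```

With many cells per node, the packet carries each UUID once instead of once per cell.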
- Consider using multicast for cluster-wide notifications. (BANDWIDTH)
Currently, multi-receivers notifications are sent in unicast to each
receiver. Multicast should be used.
- Remove sleeps (LATENCY, CPU WASTE)
Code still contains many delays (explicit sleeps or polling timeouts).
They must be removed to be either infinite (sleep until some condition
becomes true, without waking up needlessly in the meantime) or null
(don't wait at all).
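The polling-vs-blocking distinction above can be illustrated with generic Python (not NEO code; the `Ready` class is invented): a `threading.Condition` sleeps until the condition becomes true, without waking up needlessly in the meantime, unlike a sleep-and-poll loop.

```python
import threading

class Ready(object):
    """Block waiters until set() is called, with no periodic wake-ups."""

    def __init__(self):
        self._cond = threading.Condition()
        self._ready = False

    def set(self):
        with self._cond:
            self._ready = True
            self._cond.notify_all()   # wake every blocked waiter at once

    def wait(self, timeout=None):
        # Sleeps inside the condition; the thread is only scheduled
        # again when notified (or when the optional timeout elapses).
        with self._cond:
            while not self._ready:
                self._cond.wait(timeout)
        return self._ready
```

The anti-pattern being removed would instead loop on `time.sleep(delay)` and re-check the flag, wasting CPU and adding up to `delay` of latency per event.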
- Implement delayed connection acceptance.
Currently, any node that connects too early to another that is busy for
some reason is immediately rejected with the 'not ready' error code. This
should be replaced by a queue in the listening node that keeps a pool of
nodes to be accepted later, when the conditions are satisfied.
This is mainly the case for:
- Clients rejected before the cluster is operational
- Empty storages rejected during the recovery process
Masters involved in the election process should still reject any connection
as long as the primary master is unknown.
- Connections must support 2 simultaneous handlers (CODE)
Connections currently define only one handler, which is enough for
monothreaded code. But when using multithreaded code, there are 2
possible handlers involved in a packet reception:
- The first one handles notifications only (nothing special to do
regarding multithreading)
- The second one handles expected messages (such message must be
directed to the right thread)
It must be possible to set the second handler on the connection when that
connection is thread-safe (MT version of the connection classes).
Also, the code that detects whether a response is expected or not must be
genericised and moved out of handlers.
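A minimal sketch of the answer-routing half of this split, with invented names (NEO's actual dispatcher and connection classes differ): notifications are handled inline, while an expected answer is directed, via its message id, to a queue owned by the thread that asked.

```python
import queue

class MTDispatcher(object):
    """Route expected answers to the asking thread; pass the rest on."""

    def __init__(self):
        self._waiting = {}   # msg_id -> Queue of the asking thread

    def expect(self, msg_id):
        # Called by the asking thread before sending its request;
        # it then blocks on the returned queue for the answer.
        q = queue.Queue()
        self._waiting[msg_id] = q
        return q

    def dispatch(self, msg_id, packet, notification_handler):
        q = self._waiting.pop(msg_id, None)
        if q is None:
            # Not an expected answer: plain notification handling,
            # nothing special to do regarding multithreading.
            notification_handler(packet)
        else:
            # Expected answer: hand it to the right thread.
            q.put(packet)
```

The first (notification) handler stays on the connection; `dispatch` plays the role of the second, thread-aware one.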
- Implement transaction garbage collection API (FEATURE)
The NEO packing implementation does not update transaction metadata when
deleting object revisions. It must be possible to clean up this
inconsistency from a client application, much in the same way the garbage
collection part of packing is done.
- Factorise node initialisation for admin, client and storage (CODE)
The same code to ask/receive node list and partition table exists in too
many places.
- Clarify which handler methods to call when a connection is accepted from a
listening connection and when the remote node is identified
(cf. neo/lib/bootstrap.py).
- Choose how to handle storage integrity verification when a storage comes
back. Run the replication process or the verification stage, with or without
unfinished transactions? Do cells have to be set as outdated, and if so,
should the partition table changes be broadcast? (BANDWIDTH, SPEED)
- Implement proper shutdown (ClusterStates.STOPPING)
- Review PENDING/HIDDEN/SHUTDOWN states, don't use notifyNodeInformation()
to do a state-switch, use an exception-based mechanism ? (CODE)
- Split protocol.py in a 'protocol' module ?
- Review handler split (CODE)
The current handler split is the result of small incremental changes. A
global review is required to make them square.
- Make handler instances singletons (SPEED, MEMORY)
In some places handlers are instantiated outside of App.__init__ . As a
handler is completely re-entrant (no modifiable properties) it can and
should be made a singleton (saving the CPU time needed to instantiate all
the copies - often when a connection is established - and the memory
used by each copy).
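One possible way to realise the singleton suggestion, as a generic sketch (the `get_handler` helper and `PingHandler` class are invented, not NEO code): cache one instance per handler class so every connection shares the same re-entrant handler object.

```python
_handler_cache = {}

def get_handler(cls, *args):
    """Return the shared instance of a (stateless) handler class."""
    try:
        return _handler_cache[cls]
    except KeyError:
        # setdefault keeps the first instance if two callers race here.
        return _handler_cache.setdefault(cls, cls(*args))

class PingHandler(object):
    # Re-entrant: no mutable state, so one instance can safely serve
    # every connection.
    def handle(self, packet):
        return 'pong:' + packet
```

Call sites then use `get_handler(PingHandler)` instead of `PingHandler()`, so establishing a connection no longer allocates a fresh copy.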
- Consider replacing the setNodeState admin packet with one packet per
action (like dropNode), to reduce packet processing complexity and prevent
harmful actions such as setting a node to the TEMPORARILY_DOWN state.
- Review node notifications. E.g. a storage does not have to be notified
of new clients, but only when one is lost.
- Review transactional isolation of various methods
Some methods might not implement proper transaction isolation when they
should. An example is object history (undoLog), which can see data
committed by future transactions.
Storage
- Use HailDB instead of a stand-alone MySQL server.
- Notify master when storage becomes available for clients (LATENCY)
Currently, storage presence is broadcasted to client nodes too early, as
the storage node would refuse them until it has only up-to-date data (not
only up-to-date cells, but also a partition table and node states).
- Create a specialized PartitionTable that knows the database and replicator,
to remove duplicates and move logic out of handlers (CODE)
- Consider inserting multiple objects at a time in the database, taking care
of the maximum allowed SQL request size. (SPEED)
- Prevent SQL injection: escape() from the MySQLdb API is not sufficient;
consider using query(request, args) instead of query(request % args)
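The parameterised-query pattern recommended above, demonstrated with the stdlib `sqlite3` module as a self-contained stand-in (MySQLdb follows the same DB-API `cursor.execute(request, args)` pattern, but with `%s` placeholders instead of `?`):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE obj (oid TEXT, data TEXT)')
conn.execute("INSERT INTO obj VALUES ('1', 'secret')")

# Malicious input that breaks out of a naively interpolated string:
oid = "1' OR '1'='1"

# Safe - query(request, args): the driver binds the value, which is
# never parsed as SQL, so the hostile oid matches nothing.
safe = conn.execute('SELECT data FROM obj WHERE oid = ?', (oid,)).fetchall()
assert safe == []

# Unsafe - query(request % args): the condition always matches and the
# whole table leaks.
unsafe = conn.execute(
    "SELECT data FROM obj WHERE oid = '%s'" % oid).fetchall()
assert unsafe == [('secret',)]
```

The same contrast holds for MySQLdb: `cursor.execute("... WHERE oid = %s", (oid,))` binds safely, while `"... WHERE oid = '%s'" % oid` is injectable even after escape().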
- Make the listening address and port optional; if they are not provided,
listen on all interfaces on any available port.
- Replication throttling (HIGH AVAILABILITY)
In its current implementation, replication runs at full speed, which
degrades performance for client nodes. Replication should allow
throttling, and that throttling should be configurable.
See "Replication pipelining".
- Pack segmentation & throttling (HIGH AVAILABILITY)
In its current implementation, pack runs in one call on all storage nodes
at the same time, which locks down the whole cluster. This task should
be split in chunks and processed in "background" on storage nodes.
Packing throttling should probably be at the lowest possible priority
(below interactive use and below replication).
- Replication pipelining (SPEED)
Replication currently involves too many exchanges between the replicating
storage and the reference storage, so network latency can become a
significant limit.
This should be changed to have just one initial request from
replicating storage, and multiple packets from reference storage with
database range checksums. When receiving these checksums, replicating
storage must compare with what it has, and ask row lists (might not even
be required) and data when there are differences. Quick fetching from
network with asynchronous checking (=queueing) + congestion control
(asking the reference storage to pause its packet flow) will probably be
required.
This should make it easier to throttle replication workload on reference
storage node, as it can decide to postpone replication-related packets on
its own.
- Partial replication (SPEED)
In its current implementation, replication always happens on a whole
partition. In typical use, only the last few transactions will have been
missed, so replicating only data past a given TID would be much faster.
To achieve this, storage nodes must store 2 values:
- a pack identifier, which must be different each time a pack occurs
(increasing number sequence, TID-ish, etc) to trigger a
whole-partition replication when a pack happened (this could be
improved too, later)
- the latest (-ish) transaction committed locally, to use as a lower
replication boundary
- tpc_finish failures propagation to master (FUNCTIONALITY)
When asked to lock transaction data, if something goes wrong the master
node must be informed.
- Verify data checksum on reception (FUNCTIONALITY)
In the current implementation, the client generates a checksum before
storing, which is only checked upon load. This doesn't prevent storing
altered data, which misses the point of having a checksum, and creates
weird dilemmas (e.g. if checksum verification fails on load, what should
be done? Hope to find a storage with a valid checksum? Assume that the
data is correct in storage but was altered while travelling through the
network as we loaded it?).
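A sketch of checking the checksum on reception rather than on load, under the assumptions stated here (invented names, SHA-1 as an arbitrary digest; NEO's actual checksum and wire format may differ): the storage recomputes the digest when data arrives and refuses to persist a mismatch.

```python
import hashlib

def checksum(data):
    return hashlib.sha1(data).hexdigest()

def store(db, oid, data, claimed_checksum):
    # Verify on reception: data altered in transit (or by a buggy
    # client) is rejected instead of being persisted silently.
    if checksum(data) != claimed_checksum:
        raise ValueError('checksum mismatch for oid %r' % oid)
    db[oid] = (data, claimed_checksum)

db = {}
data = b'some pickled object state'
store(db, 1, data, checksum(data))              # accepted
try:
    store(db, 2, b'corrupted', checksum(data))  # refused on reception
except ValueError:
    pass
else:
    raise AssertionError('corruption not detected')
```

With this in place, a load-time mismatch can only mean on-disk or load-path corruption, which removes the ambiguity described above.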
Master
- Master node data redundancy (HIGH AVAILABILITY)
Secondary master nodes should replicate primary master data (ie, primary
master should inform them of such changes).
This data takes too long to extract from storage nodes, and losing it
increases the risk of starting from underestimated values.
This risk is (currently) unavoidable when all nodes stop running, but this
case must be avoided.
- Differential partition table updates (BANDWIDTH)
When a storage asks for current partition table (when it connects to a
cluster in service state), it must update its knowledge of the partition
table. Currently it's done by fetching the entire table. If the master
keeps a history of a few last changes to partition table, it would be able
to only send a differential update (via the incremental update mechanism)
- During the recovery phase, store multiple partition tables (ADMINISTRATION)
When storage nodes know different versions of the partition table, the
master should be able to present them to the admin, to allow choosing one
when moving on to the next phase.
- Optimize operational status check by recording which rows are ready
instead of parsing the whole partition table. (SPEED)
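The ready-row bookkeeping suggested here could look like the following sketch (invented names, not NEO's PartitionTable API): maintain a set of rows that currently have an up-to-date cell, so the operational check is a constant-time comparison instead of a full table scan.

```python
class ReadyRows(object):
    """Track which partition rows are currently serviceable."""

    def __init__(self, num_partitions):
        self.num_partitions = num_partitions
        self._ready = set()

    def cell_changed(self, row, row_has_uptodate_cell):
        # Called on every cell state change affecting that row.
        if row_has_uptodate_cell:
            self._ready.add(row)
        else:
            self._ready.discard(row)

    def operational(self):
        # The cluster can serve data iff every row has at least one
        # up-to-date cell; O(1) instead of scanning the whole table.
        return len(self._ready) == self.num_partitions
```

The set is updated incrementally wherever cell states already change, so the check itself never has to walk the partition table.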
- Improve partition table tweaking algorithm to reduce differences between
frequently and rarely used nodes (SCALABILITY)
- tpc_finish failures propagation to client (FUNCTIONALITY)
When a storage node notifies a problem during lock/unlock phase, an error
must be propagated to client.
Client
- Implement C version of mq.py (LOAD LATENCY)
- Use generic bootstrap module (CODE)
- Find a way to call ask() from the polling thread, to allow sending the
initial packet (requestNodeIdentification) from the connectionCompleted()
event instead of from the app. This requires knowing which thread will
wait for the answer.
- Discuss dead storage notification. If a client fails to connect to
a storage node supposedly in the running state, it should notify the master
so it can check whether that node is really up or not.
- Implement restore() ZODB API method to bypass consistency checks during
imports.
- tpc_finish failures (FUNCTIONALITY)
New failure cases during tpc_finish must be handled.
- Fix and reenable deadlock avoidance (SPEED). This is required for
neo.tests.zodb.testBasic.BasicTests.check_checkCurrentSerialInTransaction
Admin
- Make admin node able to monitor multiple clusters simultaneously
- Send notifications (ie: mail) when a storage node is lost
Tests
- Use another mock library that is eggified and maintained.
See http://garybernhardt.github.com/python-mock-comparison/
for a comparison of available mocking libraries/frameworks.
- Fix epoll descriptor leak.
Later
- Consider auto-generating the cluster name upon initial startup (it might
actually be a partition property).
- Consider ways to centralise the configuration file, or make the
configuration updatable automatically on all nodes.
- Consider storing some metadata on master nodes (partition table [version],
...). This data should be treated non-authoritatively, as a way to lower
the probability to use an outdated partition table.
- Decentralize primary master tasks as much as possible (consider
distributed lock mechanisms, ...)
- Choose how to compute the storage size
- Make storage check if the OID matches its partitions during a store
- Investigate delta compression for stored data
Idea would be to have a few most recent revisions being stored fully, and
older revision delta-compressed, in order to save space.
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/ZODB3.patch
Patch to ZODB3 for ZODB unit tests.
See also monkey-patch to Connection.tpc_finish in neo/client/__init__.py
Index: src/ZODB/tests/BasicStorage.py
===================================================================
--- src/ZODB/tests/BasicStorage.py (revision 122777)
+++ src/ZODB/tests/BasicStorage.py (working copy)
@@ -72,8 +72,10 @@
r1 = self._storage.store(oid, None, zodb_pickle(MinPO(11)),
'', txn)
r2 = self._storage.tpc_vote(txn)
- self._storage.tpc_finish(txn)
+ serial = self._storage.tpc_finish(txn)
newrevid = handle_serials(oid, r1, r2)
+ if newrevid is None and serial is not None:
+ newrevid = serial
data, revid = self._storage.load(oid, '')
value = zodb_unpickle(data)
eq(value, MinPO(11))
Index: src/ZODB/tests/TransactionalUndoStorage.py
===================================================================
--- src/ZODB/tests/TransactionalUndoStorage.py (revision 122777)
+++ src/ZODB/tests/TransactionalUndoStorage.py (working copy)
@@ -76,6 +76,12 @@
def _transaction_newserial(self, oid):
return self.__serials[oid]
+ def _transaction_finish(self, t, oid_list):
+ tid = self._storage.tpc_finish(t)
+ if tid is not None:
+ for oid in oid_list:
+ self.__serials[oid] = tid
+
def _multi_obj_transaction(self, objs):
newrevs = {}
t = Transaction()
@@ -85,7 +91,7 @@
self._transaction_store(oid, rev, data, '', t)
newrevs[oid] = None
self._transaction_vote(t)
- self._storage.tpc_finish(t)
+ self._transaction_finish(t, [x[0] for x in objs])
for oid in newrevs.keys():
newrevs[oid] = self._transaction_newserial(oid)
return newrevs
@@ -218,9 +224,9 @@
self._transaction_store(oid2, revid2, p51, '', t)
# Finish the transaction
self._transaction_vote(t)
+ self._transaction_finish(t, [oid1, oid2])
revid1 = self._transaction_newserial(oid1)
revid2 = self._transaction_newserial(oid2)
- self._storage.tpc_finish(t)
eq(revid1, revid2)
# Update those same two objects
t = Transaction()
@@ -230,9 +236,9 @@
self._transaction_store(oid2, revid2, p52, '', t)
# Finish the transaction
self._transaction_vote(t)
+ self._transaction_finish(t, [oid1, oid2])
revid1 = self._transaction_newserial(oid1)
revid2 = self._transaction_newserial(oid2)
- self._storage.tpc_finish(t)
eq(revid1, revid2)
# Make sure the objects have the current value
data, revid1 = self._storage.load(oid1, '')
@@ -288,11 +294,12 @@
tid1 = info[1]['id']
t = Transaction()
oids = self._begin_undos_vote(t, tid, tid1)
- self._storage.tpc_finish(t)
+ serial = self._storage.tpc_finish(t)
# We get the finalization stuff called an extra time:
- eq(len(oids), 4)
- unless(oid1 in oids)
- unless(oid2 in oids)
+ if serial is None:
+ eq(len(oids), 4)
+ unless(oid1 in oids)
+ unless(oid2 in oids)
data, revid1 = self._storage.load(oid1, '')
eq(zodb_unpickle(data), MinPO(30))
data, revid2 = self._storage.load(oid2, '')
@@ -326,7 +333,7 @@
self._transaction_store(oid2, revid2, p52, '', t)
# Finish the transaction
self._transaction_vote(t)
- self._storage.tpc_finish(t)
+ self._transaction_finish(t, [oid1, oid2])
revid1 = self._transaction_newserial(oid1)
revid2 = self._transaction_newserial(oid2)
eq(revid1, revid2)
@@ -346,7 +353,7 @@
self._transaction_store(oid2, revid2, p53, '', t)
# Finish the transaction
self._transaction_vote(t)
- self._storage.tpc_finish(t)
+ self._transaction_finish(t, [oid1, oid2])
revid1 = self._transaction_newserial(oid1)
revid2 = self._transaction_newserial(oid2)
eq(revid1, revid2)
@@ -358,10 +365,11 @@
tid = info[1]['id']
t = Transaction()
oids = self._begin_undos_vote(t, tid)
- self._storage.tpc_finish(t)
- eq(len(oids), 1)
- self.failUnless(oid1 in oids)
- self.failUnless(not oid2 in oids)
+ serial = self._storage.tpc_finish(t)
+ if serial is None:
+ eq(len(oids), 1)
+ self.failUnless(oid1 in oids)
+ self.failUnless(not oid2 in oids)
data, revid1 = self._storage.load(oid1, '')
eq(zodb_unpickle(data), MinPO(33))
data, revid2 = self._storage.load(oid2, '')
@@ -397,7 +405,7 @@
self._transaction_store(oid1, revid1, p81, '', t)
self._transaction_store(oid2, revid2, p91, '', t)
self._transaction_vote(t)
- self._storage.tpc_finish(t)
+ self._transaction_finish(t, [oid1, oid2])
revid1 = self._transaction_newserial(oid1)
revid2 = self._transaction_newserial(oid2)
eq(revid1, revid2)
Index: src/ZODB/tests/StorageTestBase.py
===================================================================
--- src/ZODB/tests/StorageTestBase.py (revision 122777)
+++ src/ZODB/tests/StorageTestBase.py (working copy)
@@ -134,7 +134,7 @@
A helper for function _handle_all_serials().
"""
- return handle_all_serials(oid, *args)[oid]
+ return handle_all_serials(oid, *args).get(oid)
def import_helper(name):
__import__(name)
@@ -191,7 +191,9 @@
# Finish the transaction
r2 = self._storage.tpc_vote(t)
revid = handle_serials(oid, r1, r2)
- self._storage.tpc_finish(t)
+ serial = self._storage.tpc_finish(t)
+ if serial is not None and revid is None:
+ revid = serial
except:
self._storage.tpc_abort(t)
raise
@@ -211,8 +213,8 @@
self._storage.tpc_begin(t)
undo_result = self._storage.undo(tid, t)
vote_result = self._storage.tpc_vote(t)
- self._storage.tpc_finish(t)
- if expected_oids is not None:
+ serial = self._storage.tpc_finish(t)
+ if expected_oids is not None and serial is None:
oids = undo_result and undo_result[1] or []
oids.extend(oid for (oid, _) in vote_result or ())
self.assertEqual(len(oids), len(expected_oids), repr(oids))
Index: src/ZODB/tests/MTStorage.py
===================================================================
--- src/ZODB/tests/MTStorage.py (revision 122777)
+++ src/ZODB/tests/MTStorage.py (working copy)
@@ -155,10 +155,12 @@
r2 = self.storage.tpc_vote(t)
self.pause()
- self.storage.tpc_finish(t)
+ serial = self.storage.tpc_finish(t)
self.pause()
revid = handle_serials(oid, r1, r2)
+ if serial is not None and revid is None:
+ revid = serial
self.oids[oid] = revid
class ExtStorageClientThread(StorageClientThread):
Index: src/ZODB/tests/RevisionStorage.py
===================================================================
--- src/ZODB/tests/RevisionStorage.py (revision 122777)
+++ src/ZODB/tests/RevisionStorage.py (working copy)
@@ -150,10 +150,12 @@
# Finish the transaction
r2 = self._storage.tpc_vote(t)
newrevid = handle_serials(oid, r1, r2)
- self._storage.tpc_finish(t)
+ serial = self._storage.tpc_finish(t)
except:
self._storage.tpc_abort(t)
raise
+ if serial is not None and newrevid is None:
+ newrevid = serial
return newrevid
revid1 = helper(1, None, 1)
revid2 = helper(2, revid1, 2)
Index: src/ZODB/interfaces.py
===================================================================
--- src/ZODB/interfaces.py (revision 122777)
+++ src/ZODB/interfaces.py (working copy)
@@ -776,6 +776,10 @@
called while the storage transaction lock is held. It takes
the new transaction id generated by the transaction.
+ The return value can be either None or a serial giving new
+ serial for objects whose ids were passed to previous store calls
+ in the same transaction, and for which no serial was returned
+ from either store or tpc_vote for objects passed to store.
"""
def tpc_vote(transaction):
@@ -794,8 +798,6 @@
The return value can be either None or a sequence of object-id
and serial pairs giving new serials for objects who's ids were
passed to previous store calls in the same transaction.
- After the tpc_vote call, new serials must have been returned,
- either from tpc_vote or store for objects passed to store.
A serial returned in a sequence of oid/serial pairs, may be
the special value ZODB.ConflictResolution.ResolvedSerial to
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo.conf
# Note: Unless otherwise noted, all parameters in this configuration file
# must be identical for all nodes in a given cluster.
# Default parameters.
[DEFAULT]
# The cluster name
# This must be set.
# It must be a name unique to a given cluster, to prevent foreign
# misconfigured nodes from interfering.
cluster:
# The list of master nodes
# Master nodes not in this list will be rejected by the cluster.
# This list should be identical for all nodes in a given cluster for
# maximum availability.
masters: 127.0.0.1:10000
# Partition table configuration
# Data in the cluster is distributed among nodes using a partition table, which
# has the following parameters.
# Replicas: How many copies of a partition should exist at a time.
# 0 means no redundancy
# 1 means there is a spare copy of all partitions
replicas: 1
# Partitions: How data spreads among storage nodes. This number must be at
# least equal to the number of storage nodes the cluster contains.
# IMPORTANT: This must not be changed once the cluster contains data.
partitions: 20
# Individual nodes parameters
# Some parameters make no sense in the [DEFAULT] section.
# They are:
# bind: The ip:port the node will listen on.
# database: Storage nodes only. The MySQL database credentials to use
# (username:password@database).
# These databases must be created manually.
# Admin node
[admin]
bind: 127.0.0.1:9999
# Master nodes
[master]
bind: 127.0.0.1:10000
# Storage nodes
[storage]
database: neo:neo@neo1
bind: 127.0.0.1:20000
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/__init__.py
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/admin/__init__.py
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/admin/app.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.node import NodeManager
from neo.lib.event import EventManager
from neo.lib.connection import ListeningConnection
from neo.lib.exception import PrimaryFailure
from neo.admin.handler import AdminEventHandler, MasterEventHandler, \
    MasterRequestEventHandler
from neo.lib.connector import getConnectorHandler
from neo.lib.bootstrap import BootstrapManager
from neo.lib.pt import PartitionTable
from neo.lib.protocol import NodeTypes, NodeStates, Packets, Errors
from neo.lib.debug import register as registerLiveDebugger
class Dispatcher:
    """Dispatcher used to redirect master requests to their handlers"""

    def __init__(self):
        # associate conn/message_id to dispatch
        # message to connection
        self.message_table = {}

    def register(self, msg_id, conn, kw=None):
        self.message_table[msg_id] = conn, kw

    def pop(self, msg_id):
        return self.message_table.pop(msg_id)

    def registered(self, msg_id):
        return msg_id in self.message_table

    def clear(self):
        """
        Unregister packets expected for a given connection
        """
        self.message_table.clear()

class Application(object):
    """The admin node application."""

    def __init__(self, config):
        # Internal attributes.
        self.em = EventManager()
        self.nm = NodeManager()
        self.name = config.getCluster()
        self.server = config.getBind()
        self.master_addresses, connector_name = config.getMasters()
        self.connector_handler = getConnectorHandler(connector_name)
        neo.lib.logging.debug('IP address is %s, port is %d', *(self.server))
        # The partition table is initialized after getting the number of
        # partitions.
        self.pt = None
        self.uuid = config.getUUID()
        self.primary_master_node = None
        self.request_handler = MasterRequestEventHandler(self)
        self.master_event_handler = MasterEventHandler(self)
        self.dispatcher = Dispatcher()
        self.cluster_state = None
        self.reset()
        registerLiveDebugger(on_log=self.log)

    def close(self):
        self.listening_conn = None
        self.nm.close()
        self.em.close()
        del self.__dict__

    def reset(self):
        self.bootstrapped = False
        self.master_conn = None
        self.master_node = None

    def log(self):
        self.em.log()
        self.nm.log()
        if self.pt is not None:
            self.pt.log()

    def run(self):
        """Make sure that the status is sane and start a loop."""
        if len(self.name) == 0:
            raise RuntimeError, 'cluster name must be non-empty'
        # Make a listening port.
        handler = AdminEventHandler(self)
        self.listening_conn = ListeningConnection(self.em, handler,
            addr=self.server, connector=self.connector_handler())
        while True:
            self.connectToPrimary()
            try:
                while True:
                    self.em.poll(1)
            except PrimaryFailure:
                neo.lib.logging.error('primary master is down')

    def connectToPrimary(self):
        """Find a primary master node, and connect to it.

        If a primary master node is not elected or ready, repeat
        the attempt of a connection periodically.

        Note that I do not accept any connection from non-master nodes
        at this stage."""
        nm = self.nm
        nm.init()
        self.cluster_state = None
        for address in self.master_addresses:
            self.nm.createMaster(address=address)

        # search, find, connect and identify to the primary master
        bootstrap = BootstrapManager(self, self.name, NodeTypes.ADMIN,
                self.uuid, self.server)
        data = bootstrap.getPrimaryConnection(self.connector_handler)
        (node, conn, uuid, num_partitions, num_replicas) = data
        nm.update([(node.getType(), node.getAddress(), node.getUUID(),
                    NodeStates.RUNNING)])
        self.master_node = node
        self.master_conn = conn
        self.uuid = uuid

        if self.pt is None:
            self.pt = PartitionTable(num_partitions, num_replicas)
        elif self.pt.getPartitions() != num_partitions:
            # XXX: shouldn't we recover instead of raising ?
            raise RuntimeError('the number of partitions is inconsistent')
        elif self.pt.getReplicas() != num_replicas:
            # XXX: shouldn't we recover instead of raising ?
            raise RuntimeError('the number of replicas is inconsistent')

        # passive handler
        self.master_conn.setHandler(self.master_event_handler)
        self.master_conn.ask(Packets.AskNodeInformation())
        self.master_conn.ask(Packets.AskPartitionTable())
def sendPartitionTable(self, conn, min_offset, max_offset, uuid):
# we have a pt
self.pt.log()
row_list = []
if max_offset == 0:
max_offset = self.pt.getPartitions()
try:
for offset in xrange(min_offset, max_offset):
row = []
try:
for cell in self.pt.getCellList(offset):
if uuid is not None and cell.getUUID() != uuid:
continue
else:
row.append((cell.getUUID(), cell.getState()))
except TypeError:
pass
row_list.append((offset, row))
except IndexError:
p = Errors.ProtocolError('invalid partition table offset')
conn.notify(p)
return
p = Packets.AnswerPartitionList(self.pt.getID(), row_list)
conn.answer(p)
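sendPartitionTable serializes the rows in [min_offset, max_offset) and optionally filters cells by storage uuid. The row-building step on its own, over a plain dict standing in for the partition table (illustrative only, not the NEO data structures):

```python
def build_row_list(pt, min_offset, max_offset, uuid=None):
    # pt maps offset -> list of (cell_uuid, cell_state); keep only the
    # cells of the requested storage when a uuid filter is given.
    row_list = []
    for offset in range(min_offset, max_offset):
        row = [(u, s) for u, s in pt.get(offset, ())
               if uuid is None or u == uuid]
        row_list.append((offset, row))
    return row_list

pt = {0: [('S1', 'UP'), ('S2', 'UP')], 1: [('S2', 'DOWN')]}
rows = build_row_list(pt, 0, 2, uuid='S2')
```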
# neo/admin/handler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.lib.handler import EventHandler
from neo.lib import protocol
from neo.lib.protocol import Packets, Errors
from neo.lib.exception import PrimaryFailure
from neo.lib.util import dump
def forward_ask(klass):
def wrapper(self, conn, *args, **kw):
app = self.app
if app.master_conn is None:
raise protocol.NotReadyError('Not connected to a primary master.')
msg_id = app.master_conn.ask(klass(*args, **kw))
app.dispatcher.register(msg_id, conn, {'msg_id': conn.getPeerId()})
return wrapper
def forward_answer(klass):
def wrapper(self, conn, *args, **kw):
packet = klass(*args, **kw)
self._answerNeoCTL(conn, packet)
return wrapper
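forward_ask and forward_answer above generate handler methods from a packet class, avoiding one near-identical method per packet type. The factory-returns-a-closure pattern in isolation (toy names, not the NEO API):

```python
def forwarder(packet_factory):
    # Build a handler method that constructs a packet and records where
    # it was sent; a stand-in for forwarding it over a real connection.
    def wrapper(self, conn, *args, **kw):
        self.sent.append((conn, packet_factory(*args, **kw)))
    return wrapper

class Handler(object):
    def __init__(self):
        self.sent = []
    # One line per forwarded packet type, instead of a full method each.
    ping = forwarder(lambda value: ('Ping', value))
    shutdown = forwarder(lambda: ('Shutdown',))

h = Handler()
h.ping('conn-1', 7)
h.shutdown('conn-2')
```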
class AdminEventHandler(EventHandler):
"""This class deals with events for administrating cluster."""
def askPartitionList(self, conn, min_offset, max_offset, uuid):
neo.lib.logging.info("ask partition list from %s to %s for %s" %
(min_offset, max_offset, dump(uuid)))
app = self.app
# check we have one pt otherwise ask it to PMN
if app.pt is None:
if self.app.master_conn is None:
raise protocol.NotReadyError('Not connected to a primary ' \
'master.')
msg_id = self.app.master_conn.ask(Packets.AskPartitionTable())
app.dispatcher.register(msg_id, conn,
{'min_offset' : min_offset,
'max_offset' : max_offset,
'uuid' : uuid,
'msg_id' : conn.getPeerId()})
else:
app.sendPartitionTable(conn, min_offset, max_offset, uuid)
def askNodeList(self, conn, node_type):
if node_type is None:
node_type = 'all'
node_filter = None
else:
node_filter = lambda n: n.getType() is node_type
neo.lib.logging.info("ask list of %s nodes", node_type)
node_list = self.app.nm.getList(node_filter)
        node_information_list = [node.asTuple() for node in node_list]
p = Packets.AnswerNodeList(node_information_list)
conn.answer(p)
def setNodeState(self, conn, uuid, state, modify_partition_table):
neo.lib.logging.info("set node state for %s-%s" %(dump(uuid), state))
node = self.app.nm.getByUUID(uuid)
if node is None:
raise protocol.ProtocolError('invalid uuid')
if node.getState() == state and modify_partition_table is False:
# no change
p = Errors.Ack('no change')
conn.answer(p)
return
# forward to primary master node
if self.app.master_conn is None:
raise protocol.NotReadyError('Not connected to a primary master.')
p = Packets.SetNodeState(uuid, state, modify_partition_table)
msg_id = self.app.master_conn.ask(p)
self.app.dispatcher.register(msg_id, conn, {'msg_id' : conn.getPeerId()})
def askClusterState(self, conn):
if self.app.cluster_state is None:
if self.app.master_conn is None:
raise protocol.NotReadyError('Not connected to a primary ' \
'master.')
            # request it from the PMN first
msg_id = self.app.master_conn.ask(Packets.AskClusterState())
self.app.dispatcher.register(msg_id, conn,
{'msg_id' : conn.getPeerId()})
else:
conn.answer(Packets.AnswerClusterState(self.app.cluster_state))
def askPrimary(self, conn):
if self.app.master_conn is None:
raise protocol.NotReadyError('Not connected to a primary master.')
master_node = self.app.master_node
conn.answer(Packets.AnswerPrimary(master_node.getUUID(), []))
addPendingNodes = forward_ask(Packets.AddPendingNodes)
setClusterState = forward_ask(Packets.SetClusterState)
class MasterEventHandler(EventHandler):
""" This class is just used to dispacth message to right handler"""
def _connectionLost(self, conn):
app = self.app
if app.listening_conn: # if running
assert app.master_conn in (conn, None)
app.dispatcher.clear()
app.reset()
app.uuid = None
raise PrimaryFailure
def connectionFailed(self, conn):
self._connectionLost(conn)
def connectionClosed(self, conn):
self._connectionLost(conn)
def dispatch(self, conn, packet):
if packet.isResponse() and \
self.app.dispatcher.registered(packet.getId()):
# expected answer
self.app.request_handler.dispatch(conn, packet)
else:
            # unexpected answers and notifications
super(MasterEventHandler, self).dispatch(conn, packet)
def answerNodeInformation(self, conn):
        # XXX: This will no longer exist once an initialization module is
        # implemented to factorize code (as done for bootstrap)
neo.lib.logging.debug("answerNodeInformation")
def notifyPartitionChanges(self, conn, ptid, cell_list):
self.app.pt.update(ptid, cell_list, self.app.nm)
def answerPartitionTable(self, conn, ptid, row_list):
self.app.pt.load(ptid, row_list, self.app.nm)
self.app.bootstrapped = True
def sendPartitionTable(self, conn, ptid, row_list):
if self.app.bootstrapped:
self.app.pt.load(ptid, row_list, self.app.nm)
def notifyClusterInformation(self, conn, cluster_state):
self.app.cluster_state = cluster_state
def notifyNodeInformation(self, conn, node_list):
app = self.app
app.nm.update(node_list)
class MasterRequestEventHandler(EventHandler):
""" This class handle all answer from primary master node"""
def _answerNeoCTL(self, conn, packet):
msg_id = conn.getPeerId()
client_conn, kw = self.app.dispatcher.pop(msg_id)
client_conn.answer(packet)
def answerClusterState(self, conn, state):
neo.lib.logging.info("answerClusterState for a conn")
self.app.cluster_state = state
self._answerNeoCTL(conn, Packets.AnswerClusterState(state))
def answerPartitionTable(self, conn, ptid, row_list):
neo.lib.logging.info("answerPartitionTable for a conn")
        client_conn, kw = self.app.dispatcher.pop(conn.getPeerId())
        # send the partition table back to the client
        self.app.sendPartitionTable(client_conn, kw['min_offset'],
            kw['max_offset'], kw['uuid'])
ack = forward_answer(Errors.Ack)
protocolError = forward_answer(Errors.ProtocolError)
# neo/client/Storage.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ZODB import BaseStorage, ConflictResolution, POSException
from zope.interface import implements
import ZODB.interfaces
import neo.lib
from functools import wraps
from neo.lib import setupLog
from neo.lib.util import add64
from neo.lib.protocol import ZERO_TID
from neo.client.app import Application
from neo.client.exception import NEOStorageNotFoundError
from neo.client.exception import NEOStorageDoesNotExistError
def check_read_only(func):
def wrapped(self, *args, **kw):
if self._is_read_only:
raise POSException.ReadOnlyError()
return func(self, *args, **kw)
return wraps(func)(wrapped)
def old_history_api(func):
try:
if ZODB.interfaces.IStorage['history'].positional[1] != 'version':
return func # ZODB >= 3.9
except KeyError: # ZODB < 3.8
pass
def history(self, oid, version=None, *args, **kw):
if version is None:
return func(self, oid, *args, **kw)
raise ValueError('Versions are not supported')
return wraps(func)(history)
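old_history_api adapts the method signature at import time, depending on which ZODB is installed. A standalone sketch of that version-dependent wrapping (the ZODB interface probe is replaced by a plain flag here, so this is not the real detection logic):

```python
from functools import wraps

def adapt_history(func, legacy_signature):
    # With a modern signature, use the function untouched; otherwise wrap
    # it to swallow the obsolete 'version' positional argument.
    if not legacy_signature:
        return func
    @wraps(func)
    def history(self, oid, version=None, *args, **kw):
        if version is None:
            return func(self, oid, *args, **kw)
        raise ValueError('Versions are not supported')
    return history

class Demo(object):
    def _history(self, oid, size=1):
        return (oid, size)
    # Pretend we detected a legacy (versioned) history() signature.
    history = adapt_history(_history, legacy_signature=True)

d = Demo()
```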
class Storage(BaseStorage.BaseStorage,
ConflictResolution.ConflictResolvingStorage):
"""Wrapper class for neoclient."""
# Stores the highest TID visible for current transaction.
# First call sets this snapshot by asking master node most recent
# committed TID.
# As a (positive) side-effect, this forces us to handle all pending
# invalidations, so we get a very recent view of the database (which is
# good when multiple databases are used in the same program with some
# amount of referential integrity).
# Should remain None when not bound to a connection,
    # so that it always reads the last revision.
_snapshot_tid = None
implements(*filter(None, (
ZODB.interfaces.IStorage,
# "restore" missing for the moment, but "store" implements this
# interface.
# ZODB.interfaces.IStorageRestoreable,
# XXX: imperfect iterator implementation:
# - start & stop are not handled (raises if either is not None)
# - transaction isolation is not done
# ZODB.interfaces.IStorageIteration,
ZODB.interfaces.IStorageUndoable,
getattr(ZODB.interfaces, 'IExternalGC', None), # XXX ZODB < 3.9
getattr(ZODB.interfaces, 'ReadVerifyingStorage', None), # XXX ZODB 3.9
)))
def __init__(self, master_nodes, name, read_only=False,
compress=None, logfile=None, verbose=False, _app=None, **kw):
"""
Do not pass those parameters (used internally):
_app
_cache
"""
if compress is None:
compress = True
setupLog('CLIENT', filename=logfile, verbose=verbose)
BaseStorage.BaseStorage.__init__(self, 'NEOStorage(%s)' % (name, ))
# Warning: _is_read_only is used in BaseStorage, do not rename it.
self._is_read_only = read_only
if _app is None:
_app = Application(master_nodes, name, compress=compress)
self.app = _app
# Used to clone self (see new_instance & IMVCCStorage definition).
self._init_args = (master_nodes, name)
self._init_kw = {
'read_only': read_only,
'compress': compress,
'logfile': logfile,
'verbose': verbose,
'_app': _app,
}
@property
def _cache(self):
return self.app._cache
def load(self, oid, version=''):
# In order to know if it was safe to get the last revision of an object
# instead of using loadBefore(), ZODB.Connection._setstate relies on
# the fact that retrieving data from a remote storage forces incoming
# invalidations to be received.
        # But in NEO, invalidations are not received from the same network
        # connection as the one used to retrieve data.
# So we must implement load() like a loadBefore().
# XXX: interface definition states that version parameter is
# mandatory, while some ZODB tests do not provide it. For now, make
# it optional.
assert version == '', 'Versions are not supported'
try:
return self.app.load(oid, None, self._snapshot_tid)[:2]
except NEOStorageNotFoundError:
raise POSException.POSKeyError(oid)
@check_read_only
def new_oid(self):
return self.app.new_oid()
@check_read_only
def tpc_begin(self, transaction, tid=None, status=' '):
"""
Note: never blocks in NEO.
"""
return self.app.tpc_begin(transaction=transaction, tid=tid,
status=status)
@check_read_only
def tpc_vote(self, transaction):
return self.app.tpc_vote(transaction=transaction,
tryToResolveConflict=self.tryToResolveConflict)
@check_read_only
def tpc_abort(self, transaction):
return self.app.tpc_abort(transaction=transaction)
def tpc_finish(self, transaction, f=None):
tid = self.app.tpc_finish(transaction=transaction,
tryToResolveConflict=self.tryToResolveConflict, f=f)
# XXX: Note that when undoing changes, the following is useless because
# a temporary Storage object is used to commit.
# See also testZODB.NEOZODBTests.checkMultipleUndoInOneTransaction
if self._snapshot_tid:
self._snapshot_tid = add64(tid, 1)
return tid
@check_read_only
def store(self, oid, serial, data, version, transaction):
assert version == '', 'Versions are not supported'
return self.app.store(oid=oid, serial=serial,
data=data, version=version, transaction=transaction)
@check_read_only
def deleteObject(self, oid, serial, transaction):
self.app.store(oid=oid, serial=serial, data='', version=None,
transaction=transaction)
    # multiple revisions
def loadSerial(self, oid, serial):
try:
return self.app.load(oid, serial)[0]
except NEOStorageNotFoundError:
raise POSException.POSKeyError(oid)
def loadBefore(self, oid, tid):
try:
return self.app.load(oid, None, tid)
except NEOStorageDoesNotExistError:
raise POSException.POSKeyError(oid)
except NEOStorageNotFoundError:
return None
def iterator(self, start=None, stop=None):
# Iterator lives in its own transaction, so get a fresh snapshot.
snapshot_tid = self.lastTransaction()
if stop is None:
stop = snapshot_tid
else:
stop = min(snapshot_tid, stop)
return self.app.iterator(start, stop)
# undo
@check_read_only
def undo(self, transaction_id, txn):
return self.app.undo(self._snapshot_tid, undone_tid=transaction_id,
txn=txn, tryToResolveConflict=self.tryToResolveConflict)
@check_read_only
def undoLog(self, first=0, last=-20, filter=None):
return self.app.undoLog(first, last, filter)
def supportsUndo(self):
return True
def supportsTransactionalUndo(self):
return True
@check_read_only
def abortVersion(self, src, transaction):
return self.app.abortVersion(src, transaction)
@check_read_only
def commitVersion(self, src, dest, transaction):
return self.app.commitVersion(src, dest, transaction)
def loadEx(self, oid, version):
try:
data, serial, _ = self.app.load(oid, None, self._snapshot_tid)
except NEOStorageNotFoundError:
raise POSException.POSKeyError(oid)
return data, serial, ''
def __len__(self):
return self.app.getStorageSize()
def registerDB(self, db, limit=None):
self.app.registerDB(db, limit)
@old_history_api
def history(self, oid, *args, **kw):
try:
return self.app.history(oid, *args, **kw)
except NEOStorageNotFoundError:
raise POSException.POSKeyError(oid)
def sync(self, force=True):
# Increment by one, as we will use this as an excluded upper
# bound (loadBefore).
self._snapshot_tid = add64(self.lastTransaction(), 1)
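sync() pins the snapshot to lastTransaction() + 1 because loadBefore-style reads use an exclusive upper bound. add64 comes from neo.lib.util and adds an integer to an 8-byte TID; a sketch of that arithmetic, assuming big-endian 64-bit packed TIDs as NEO uses (the wraparound mask is this sketch's choice, not necessarily the library's behavior):

```python
import struct

def add64(tid, n):
    # TIDs are 8-byte big-endian unsigned integers; add n, masked to
    # 64 bits so the result still packs.
    value = (struct.unpack('>Q', tid)[0] + n) & 0xffffffffffffffff
    return struct.pack('>Q', value)

ZERO_TID = b'\x00' * 8
next_tid = add64(ZERO_TID, 1)
```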
def copyTransactionsFrom(self, source, verbose=False):
""" Zope compliant API """
return self.app.importFrom(source, None, None,
self.tryToResolveConflict)
def importFrom(self, source, start=None, stop=None):
""" Allow import only a part of the source storage """
return self.app.importFrom(source, start, stop,
self.tryToResolveConflict)
def restore(self, oid, serial, data, version, prev_txn, transaction):
raise NotImplementedError
def pack(self, t, referencesf, gc=False):
if gc:
neo.lib.logging.warning(
'Garbage Collection is not available in NEO, '
'please use an external tool. Packing without GC.')
self.app.pack(t)
def lastSerial(self):
# seems unused
raise NotImplementedError
def lastTransaction(self):
# Used in ZODB unit tests
return self.app.lastTransaction()
def _clear_temp(self):
raise NotImplementedError
def set_max_oid(self, possible_new_max_oid):
# seems used only by FileStorage
raise NotImplementedError
def cleanup(self):
# Used in unit tests to remove local database files.
# We have no such thing, so make this method a no-op.
pass
def close(self):
self.app.close()
def getTid(self, oid):
try:
return self.app.getLastTID(oid)
except NEOStorageNotFoundError:
raise KeyError
def checkCurrentSerialInTransaction(self, oid, serial, transaction):
self.app.checkCurrentSerialInTransaction(oid, serial, transaction)
def new_instance(self):
return Storage(*self._init_args, **self._init_kw)
# neo/client/__init__.py
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
# NEO requires ZODB to allow TID to be returned as late as tpc_finish.
# At the moment, no ZODB release includes this patch.
# Later, this must be replaced by some detection mechanism.
needs_patch = True
if needs_patch:
from ZODB.Connection import Connection
    import thread  # needed below for thread.error in _flush_invalidations
def tpc_finish(self, transaction):
"""Indicate confirmation that the transaction is done."""
def callback(tid):
# BBB: _mvcc_storage not supported on older ZODB
if getattr(self, '_mvcc_storage', False):
# Inter-connection invalidation is not needed when the
# storage provides MVCC.
return
d = dict.fromkeys(self._modified)
self._db.invalidate(tid, d, self)
# It's important that the storage calls the passed function
# while it still has its lock. We don't want another thread
# to be able to read any updated data until we've had a chance
# to send an invalidation message to all of the other
# connections!
serial = self._storage.tpc_finish(transaction, callback)
if serial is not None:
assert isinstance(serial, str), repr(serial)
for oid_iterator in (self._modified, self._creating):
for oid in oid_iterator:
obj = self._cache.get(oid, None)
# Ignore missing objects and don't update ghosts.
if obj is not None and obj._p_changed is not None:
obj._p_changed = 0
obj._p_serial = serial
self._tpc_cleanup()
Connection.tpc_finish = tpc_finish
try:
if Connection._nexedi_fix != 3:
raise Exception("A different ZODB fix is already applied")
except AttributeError:
Connection._nexedi_fix = 3
    # Whenever a connection is opened (and there's usually an existing one
    # in the DB pool that can be reused) while the transaction is already
# started, we must make sure that proper storage setup is done by
# calling Connection.newTransaction.
# For example, there's no open transaction when a ZPublisher/Publish
# transaction begins.
def open(self, *args, **kw):
def _flush_invalidations():
acquire = self._db._a
try:
self._db._r()
except thread.error:
acquire = lambda: None
try:
del self._flush_invalidations
self.newTransaction()
finally:
acquire()
self._flush_invalidations = _flush_invalidations
self._flush_invalidations = _flush_invalidations
try:
Connection_open(self, *args, **kw)
finally:
del self._flush_invalidations
try:
Connection_open = Connection._setDB
Connection._setDB = open
except AttributeError: # recent ZODB
Connection_open = Connection.open
Connection.open = open
# Storage.sync usually implements a "network barrier" (at least
# in NEO, but ZEO should be fixed to do the same), which is quite
# slow so we prefer to not call it where it's not useful.
# I don't know any legitimate use of DB access outside a transaction.
# But old versions of ERP5 (before 2010-10-29 17:15:34) and maybe other
# applications do not always call 'transaction.begin()' when they should
    # so this patch is disabled as a precaution, at least as long as we support
# old software. This should also be discussed on zodb-dev ML first.
def afterCompletion(self, *ignored):
try:
self._readCurrent.clear()
except AttributeError: # old ZODB (e.g. ZODB 3.4)
pass
self._flush_invalidations()
#Connection.afterCompletion = afterCompletion
class _DB(object):
"""
Wrapper to DB instance that properly initialize Connection objects
with NEO storages.
It forces the connection to always create a new instance of the
storage, for compatibility with ZODB 3.4, and because we don't
implement IMVCCStorage completely.
"""
def __new__(cls, db, connection):
if db._storage.__class__.__module__ != 'neo.client.Storage':
return db
self = object.__new__(cls)
self._db = db
self._connection = connection
return self
def __getattr__(self, attr):
result = getattr(self._db, attr)
if attr in ('storage', '_storage'):
result = result.new_instance()
self._connection._db = self._db
setattr(self, attr, result)
return result
try:
Connection_setDB = Connection._setDB
except AttributeError: # recent ZODB
Connection_init = Connection.__init__
Connection.__init__ = lambda self, db, *args, **kw: \
Connection_init(self, _DB(db, self), *args, **kw)
else: # old ZODB (e.g. ZODB 3.4)
Connection._setDB = lambda self, odb, *args, **kw: \
Connection_setDB(self, _DB(odb, self), *args, **kw)
from ZODB.DB import DB
DB_invalidate = DB.invalidate
DB.invalidate = lambda self, tid, oids, *args, **kw: \
DB_invalidate(self, tid, dict.fromkeys(oids, None), *args, **kw)
# neo/client/app.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from cPickle import dumps, loads
from zlib import compress as real_compress, decompress
from neo.lib.locking import Empty
from random import shuffle
import time
import os
from ZODB.POSException import UndoError, StorageTransactionError, ConflictError
from ZODB.POSException import ReadConflictError
from ZODB.ConflictResolution import ResolvedSerial
from persistent.TimeStamp import TimeStamp
import neo.lib
from neo.lib.protocol import NodeTypes, Packets, INVALID_PARTITION, ZERO_TID
from neo.lib.event import EventManager
from neo.lib.util import makeChecksum as real_makeChecksum, dump
from neo.lib.locking import Lock
from neo.lib.connection import MTClientConnection, OnTimeout, ConnectionClosed
from neo.lib.node import NodeManager
from neo.lib.connector import getConnectorHandler
from neo.client.exception import NEOStorageError, NEOStorageCreationUndoneError
from neo.client.exception import NEOStorageNotFoundError
from neo.lib.exception import NeoException
from neo.client.handlers import storage, master
from neo.lib.dispatcher import Dispatcher, ForgottenPacket
from neo.client.poll import ThreadedPoll, psThreadedPoll
from neo.client.iterator import Iterator
from neo.client.cache import ClientCache
from neo.client.pool import ConnectionPool
from neo.lib.util import u64, parseMasterList
from neo.lib.profiling import profiler_decorator, PROFILING_ENABLED
from neo.lib.debug import register as registerLiveDebugger
from neo.client.container import ThreadContainer, TransactionContainer
if PROFILING_ENABLED:
# Those functions require a "real" python function wrapper before they can
# be decorated.
@profiler_decorator
def compress(data):
return real_compress(data)
@profiler_decorator
def makeChecksum(data):
return real_makeChecksum(data)
else:
# If profiling is disabled, directly use original functions.
compress = real_compress
makeChecksum = real_makeChecksum
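The block above swaps in decorated wrappers only when PROFILING_ENABLED is set, so the fast path pays no wrapper cost when profiling is off. The conditional-decoration pattern in isolation (with a stand-in decorator, not NEO's real profiler):

```python
PROFILING_ENABLED = True
calls = []

def profiler_decorator(func):
    # Stand-in for a real profiler: record each call, then delegate.
    def wrapper(*args, **kw):
        calls.append(func.__name__)
        return func(*args, **kw)
    return wrapper

def real_compress(data):
    return data[::-1]  # placeholder transformation

if PROFILING_ENABLED:
    # A "real" function wrapper is needed before decoration can apply.
    @profiler_decorator
    def compress(data):
        return real_compress(data)
else:
    # Profiling disabled: use the original function directly, no wrapper.
    compress = real_compress

result = compress(b'abc')
```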
class Application(object):
"""The client node application."""
def __init__(self, master_nodes, name, compress=True, **kw):
# Start polling thread
self.em = EventManager()
self.poll_thread = ThreadedPoll(self.em, name=name)
psThreadedPoll()
        # Internal attributes common to all threads
self._db = None
self.name = name
master_addresses, connector_name = parseMasterList(master_nodes)
self.connector_handler = getConnectorHandler(connector_name)
self.dispatcher = Dispatcher(self.poll_thread)
self.nm = NodeManager()
self.cp = ConnectionPool(self)
self.pt = None
self.master_conn = None
self.primary_master_node = None
self.trying_master_node = None
# load master node list
for address in master_addresses:
self.nm.createMaster(address=address)
# no self-assigned UUID, primary master will supply us one
self.uuid = None
self._cache = ClientCache()
self.new_oid_list = []
self.last_oid = '\0' * 8
self.storage_event_handler = storage.StorageEventHandler(self)
self.storage_bootstrap_handler = storage.StorageBootstrapHandler(self)
self.storage_handler = storage.StorageAnswersHandler(self)
self.primary_handler = master.PrimaryAnswersHandler(self)
self.primary_bootstrap_handler = master.PrimaryBootstrapHandler(self)
self.notifications_handler = master.PrimaryNotificationsHandler( self)
        # Internal attributes distinct between threads
self._thread_container = ThreadContainer()
self._txn_container = TransactionContainer()
        # Lock definitions:
        # _load_lock is used to make loading and storing atomic
lock = Lock()
self._load_lock_acquire = lock.acquire
self._load_lock_release = lock.release
# _oid_lock is used in order to not call multiple oid
# generation at the same time
lock = Lock()
self._oid_lock_acquire = lock.acquire
self._oid_lock_release = lock.release
lock = Lock()
# _cache_lock is used for the client cache
self._cache_lock_acquire = lock.acquire
self._cache_lock_release = lock.release
lock = Lock()
        # _connecting_to_master_node is used to prevent simultaneous master
        # node connection attempts
self._connecting_to_master_node_acquire = lock.acquire
self._connecting_to_master_node_release = lock.release
        # _nm ensures exclusive access to the node manager
lock = Lock()
self._nm_acquire = lock.acquire
self._nm_release = lock.release
self.compress = compress
registerLiveDebugger(on_log=self.log)
def getHandlerData(self):
return self._thread_container.get()['answer']
def setHandlerData(self, data):
self._thread_container.get()['answer'] = data
def _getThreadQueue(self):
return self._thread_container.get()['queue']
def log(self):
self.em.log()
self.nm.log()
if self.pt is not None:
self.pt.log()
@profiler_decorator
def _handlePacket(self, conn, packet, handler=None):
"""
conn
The connection which received the packet (forwarded to handler).
packet
The packet to handle.
handler
The handler to use to handle packet.
            If not given, it will be guessed from the connection's node type.
"""
if handler is None:
# Guess the handler to use based on the type of node on the
# connection
node = self.nm.getByAddress(conn.getAddress())
if node is None:
                raise ValueError, 'Expecting an answer from a node ' \
                    'whose type is not known... Is this right ?'
if node.isStorage():
handler = self.storage_handler
elif node.isMaster():
handler = self.primary_handler
else:
raise ValueError, 'Unknown node type: %r' % (node.__class__, )
conn.lock()
try:
handler.dispatch(conn, packet)
finally:
conn.unlock()
@profiler_decorator
def _waitAnyMessage(self, queue, block=True):
"""
Handle all pending packets.
block
If True (default), will block until at least one packet was
received.
"""
pending = self.dispatcher.pending
get = queue.get
_handlePacket = self._handlePacket
while pending(queue):
try:
conn, packet = get(block)
except Empty:
break
if packet is None or isinstance(packet, ForgottenPacket):
# connection was closed or some packet was forgotten
continue
block = False
try:
_handlePacket(conn, packet)
except ConnectionClosed:
pass
def _waitAnyTransactionMessage(self, txn_context, block=True):
"""
Just like _waitAnyMessage, but for per-transaction exchanges, rather
than per-thread.
"""
queue = txn_context['queue']
self.setHandlerData(txn_context)
try:
self._waitAnyMessage(queue, block=block)
finally:
# Don't leave access to thread context, even if a raise happens.
self.setHandlerData(None)
@profiler_decorator
def _ask(self, conn, packet, handler=None):
self.setHandlerData(None)
queue = self._getThreadQueue()
msg_id = conn.ask(packet, queue=queue)
get = queue.get
_handlePacket = self._handlePacket
while True:
qconn, qpacket = get(True)
is_forgotten = isinstance(qpacket, ForgottenPacket)
if conn is qconn:
# check fake packet
if qpacket is None:
raise ConnectionClosed
if msg_id == qpacket.getId():
if is_forgotten:
                        raise ValueError, 'ForgottenPacket for an ' \
                            'explicitly expected packet.'
_handlePacket(qconn, qpacket, handler=handler)
break
if not is_forgotten and qpacket is not None:
_handlePacket(qconn, qpacket)
return self.getHandlerData()
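The loop in _ask drains the thread queue, handling unrelated packets as they arrive and stopping once the answer matching (conn, msg_id) shows up. The matching logic alone, against plain tuples (simplified: no blocking, no closed-connection or ForgottenPacket cases):

```python
def wait_answer(queue, conn, msg_id, handle):
    # Process queued (connection, packet) pairs until the expected
    # answer arrives; unrelated packets are handled along the way.
    for qconn, qpacket in queue:
        handle(qpacket)
        if qconn is conn and qpacket['id'] == msg_id:
            return qpacket
    raise RuntimeError('queue exhausted before the answer arrived')

handled = []
a = object()
queue = [(a, {'id': 1, 'body': 'notification'}),
         (a, {'id': 2, 'body': 'answer'})]
answer = wait_answer(queue, a, 2, handled.append)
```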
@profiler_decorator
def _askStorage(self, conn, packet):
""" Send a request to a storage node and process its answer """
return self._ask(conn, packet, handler=self.storage_handler)
@profiler_decorator
def _askPrimary(self, packet):
""" Send a request to the primary master and process its answer """
return self._ask(self._getMasterConnection(), packet,
handler=self.primary_handler)
@profiler_decorator
def _getMasterConnection(self):
""" Connect to the primary master node on demand """
# acquire the lock to allow only one thread to connect to the primary
result = self.master_conn
if result is None:
self._connecting_to_master_node_acquire()
try:
self.new_oid_list = []
result = self._connectToPrimaryNode()
self.master_conn = result
finally:
self._connecting_to_master_node_release()
return result
def getPartitionTable(self):
""" Return the partition table manager, reconnect the PMN if needed """
# this ensure the master connection is established and the partition
# table is up to date.
self._getMasterConnection()
return self.pt
@profiler_decorator
def _connectToPrimaryNode(self):
"""
Look up the current primary master node
"""
neo.lib.logging.debug('connecting to primary master...')
ready = False
nm = self.nm
packet = Packets.AskPrimary()
while not ready:
# Get network connection to primary master
index = 0
connected = False
while not connected:
if self.primary_master_node is not None:
# If I know a primary master node, pinpoint it.
self.trying_master_node = self.primary_master_node
self.primary_master_node = None
else:
# Otherwise, check one by one.
master_list = nm.getMasterList()
try:
self.trying_master_node = master_list[index]
except IndexError:
time.sleep(1)
index = 0
self.trying_master_node = master_list[0]
index += 1
# Connect to master
conn = MTClientConnection(self.em,
self.notifications_handler,
addr=self.trying_master_node.getAddress(),
connector=self.connector_handler(),
dispatcher=self.dispatcher)
# Query for primary master node
if conn.getConnector() is None:
# This happens if a connection could not be established.
neo.lib.logging.error(
'Connection to master node %s failed',
self.trying_master_node)
continue
try:
self._ask(conn, packet,
handler=self.primary_bootstrap_handler)
except ConnectionClosed:
continue
# If we reached the primary master node, mark as connected
connected = self.primary_master_node is not None and \
self.primary_master_node is self.trying_master_node
neo.lib.logging.info(
'Connected to %s', self.primary_master_node)
try:
ready = self.identifyToPrimaryNode(conn)
except ConnectionClosed:
neo.lib.logging.error('Connection to %s lost',
self.trying_master_node)
self.primary_master_node = None
continue
neo.lib.logging.info("Connected and ready")
return conn
def identifyToPrimaryNode(self, conn):
"""
Request identification and the information required to be operational.
Might raise ConnectionClosed so that the new primary can be
looked-up again.
"""
neo.lib.logging.info('Initializing from master')
ask = self._ask
handler = self.primary_bootstrap_handler
# Identify to primary master and request initial data
p = Packets.RequestIdentification(NodeTypes.CLIENT, self.uuid, None,
self.name)
while conn.getUUID() is None:
ask(conn, p, handler=handler)
if conn.getUUID() is None:
# Node identification was refused by the master, which is still
# considered the primary as long as we are connected to it.
time.sleep(1)
ask(conn, Packets.AskNodeInformation(), handler=handler)
ask(conn, Packets.AskPartitionTable(), handler=handler)
return self.pt.operational()
def registerDB(self, db, limit):
self._db = db
def getDB(self):
return self._db
@profiler_decorator
def new_oid(self):
"""Get a new OID."""
self._oid_lock_acquire()
try:
if len(self.new_oid_list) == 0:
# Get a new oid list from the master node.
# We keep a pool of oids here to avoid asking
# the master for new oids one by one too often.
self._askPrimary(Packets.AskNewOIDs(100))
if len(self.new_oid_list) <= 0:
raise NEOStorageError('new_oid failed')
self.last_oid = self.new_oid_list.pop(0)
return self.last_oid
finally:
self._oid_lock_release()
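The `new_oid` method above amortizes master round-trips by fetching OIDs in blocks of 100 and handing them out locally. A minimal thread-safe sketch of the same batching pattern (the class and parameter names here are illustrative, not NEO's API):

```python
import threading

class OidAllocator(object):
    """Hand out IDs from a locally cached batch, refilling in blocks."""

    def __init__(self, fetch_batch, batch_size=100):
        # fetch_batch(n) must return a list of n new IDs (one round-trip).
        self._fetch_batch = fetch_batch
        self._batch_size = batch_size
        self._lock = threading.Lock()
        self._pool = []

    def new_oid(self):
        with self._lock:
            if not self._pool:
                # Pool exhausted: fetch a whole batch in one request.
                self._pool = list(self._fetch_batch(self._batch_size))
                if not self._pool:
                    raise RuntimeError('oid allocation failed')
            return self._pool.pop(0)
```

As in the original code, the lock makes concurrent `new_oid` calls safe while keeping the common case (pool non-empty) a cheap local pop.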
def getStorageSize(self):
# return the last OID used; this is inaccurate
return int(u64(self.last_oid))
@profiler_decorator
def load(self, oid, tid=None, before_tid=None):
"""
Internal method which manages load, loadSerial and loadBefore.
OID and TID (serial) parameters are expected packed.
oid
OID of object to get.
tid
If given, the exact serial at which OID is desired.
before_tid should be None.
before_tid
If given, the excluded upper bound serial at which OID is desired.
tid should be None.
Return value: (3-tuple)
- Object data (None if object creation was undone).
- Serial of given data.
- Next serial at which object exists, or None. Only set when tid
parameter is not None.
Exceptions:
NEOStorageError
technical problem
NEOStorageNotFoundError
object exists but no data satisfies given parameters
NEOStorageDoesNotExistError
object doesn't exist
NEOStorageCreationUndoneError
object existed, but its creation was undone
Note that loadSerial is used during conflict resolution to load
object's current version, which is not visible to us normally (it was
committed after our snapshot was taken).
"""
# TODO:
# - rename parameters (here? and in handlers & packet definitions)
self._load_lock_acquire()
try:
result = self._loadFromCache(oid, tid, before_tid)
if not result:
result = self._loadFromStorage(oid, tid, before_tid)
self._cache_lock_acquire()
try:
self._cache.store(oid, *result)
finally:
self._cache_lock_release()
if result[0] == '':
raise NEOStorageCreationUndoneError(dump(oid))
return result
finally:
self._load_lock_release()
@profiler_decorator
def _loadFromStorage(self, oid, at_tid, before_tid):
data = None
packet = Packets.AskObject(oid, at_tid, before_tid)
for node, conn in self.cp.iterateForObject(oid, readable=True):
try:
noid, tid, next_tid, compression, checksum, data \
= self._askStorage(conn, packet)
except ConnectionClosed:
continue
if checksum != makeChecksum(data):
# Warning: see TODO file.
# Check checksum.
neo.lib.logging.error('wrong checksum from %s for oid %s',
conn, dump(oid))
data = None
continue
break
if data is None:
# We could not get the object from any storage node because of
# connection errors
raise NEOStorageError('connection failure')
# Uncompress data
if compression:
data = decompress(data)
return data, tid, next_tid
@profiler_decorator
def _loadFromCache(self, oid, at_tid=None, before_tid=None):
"""
Load from local cache, return None if not found.
"""
self._cache_lock_acquire()
try:
if at_tid:
result = self._cache.load(oid, at_tid + '*')
assert not result or result[1] == at_tid
return result
return self._cache.load(oid, before_tid)
finally:
self._cache_lock_release()
@profiler_decorator
def tpc_begin(self, transaction, tid=None, status=' '):
"""Begin a new transaction."""
txn_container = self._txn_container
# First get a transaction, only one is allowed at a time
if txn_container.get(transaction) is not None:
# We already began this transaction
raise StorageTransactionError('Duplicate tpc_begin calls')
txn_context = txn_container.new(transaction)
# use the given TID or request a new one to the master
answer_ttid = self._askPrimary(Packets.AskBeginTransaction(tid))
if answer_ttid is None:
raise NEOStorageError('tpc_begin failed')
assert tid in (None, answer_ttid), (tid, answer_ttid)
txn_context['txn'] = transaction
txn_context['ttid'] = answer_ttid
@profiler_decorator
def store(self, oid, serial, data, version, transaction):
"""Store object."""
txn_context = self._txn_container.get(transaction)
if txn_context is None:
raise StorageTransactionError(self, transaction)
neo.lib.logging.debug(
'storing oid %s serial %s', dump(oid), dump(serial))
self._store(txn_context, oid, serial, data)
return None
def _store(self, txn_context, oid, serial, data, data_serial=None,
unlock=False):
ttid = txn_context['ttid']
if data is None:
# This is some undo: either a no-data object (undoing object
# creation) or a back-pointer to an earlier revision (going back to
# an older object revision).
data = compressed_data = ''
compression = 0
else:
assert data_serial is None
compression = self.compress
compressed_data = data
if self.compress:
compressed_data = compress(data)
if len(compressed_data) > len(data):
compressed_data = data
compression = 0
else:
compression = 1
checksum = makeChecksum(compressed_data)
on_timeout = OnTimeout(self.onStoreTimeout, txn_context, oid)
# Store object in tmp cache
data_dict = txn_context['data_dict']
if oid not in data_dict:
txn_context['data_list'].append(oid)
data_dict[oid] = data
# Store data on each node
txn_context['object_stored_counter_dict'][oid] = {}
object_base_serial_dict = txn_context['object_base_serial_dict']
if oid not in object_base_serial_dict:
object_base_serial_dict[oid] = serial
txn_context['object_serial_dict'][oid] = serial
queue = txn_context['queue']
involved_nodes = txn_context['involved_nodes']
add_involved_nodes = involved_nodes.add
packet = Packets.AskStoreObject(oid, serial, compression,
checksum, compressed_data, data_serial, ttid, unlock)
for node, conn in self.cp.iterateForObject(oid, writable=True):
try:
conn.ask(packet, on_timeout=on_timeout, queue=queue)
add_involved_nodes(node)
except ConnectionClosed:
continue
if not involved_nodes:
raise NEOStorageError("Store failed")
self._waitAnyTransactionMessage(txn_context, False)
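`_store` above only keeps the compressed form when it is actually smaller than the raw data, and checksums whatever payload it ends up sending. A standalone sketch of that decision, using `zlib.compress` and `zlib.adler32` as stand-ins for NEO's `compress`/`makeChecksum` helpers (whose exact implementations may differ):

```python
import zlib

def prepare_data(data, want_compression=True):
    """Return (compression_flag, payload, checksum) for a data blob.

    Compression is only kept when it actually shrinks the payload,
    mirroring the fallback in _store.
    """
    compression = 0
    payload = data
    if want_compression:
        compressed = zlib.compress(data)
        if len(compressed) < len(data):
            payload = compressed
            compression = 1
    # adler32 stands in for makeChecksum; NEO's real checksum may differ.
    return compression, payload, zlib.adler32(payload) & 0xffffffff
```

Highly redundant data comes back compressed with flag 1; tiny or incompressible blobs are sent raw with flag 0, so the reader never pays a decompression cost for data that did not shrink.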
def onStoreTimeout(self, conn, msg_id, txn_context, oid):
# NOTE: this method is called from poll thread, don't use
# thread-specific value !
txn_context.setdefault('timeout_dict', {})[oid] = msg_id
# Ask the storage if someone locks the object.
# By sending a message with a smaller timeout,
# the connection will be kept open.
conn.ask(Packets.AskHasLock(txn_context['ttid'], oid),
timeout=5, queue=txn_context['queue'])
@profiler_decorator
def _handleConflicts(self, txn_context, tryToResolveConflict):
result = []
append = result.append
# Check for conflicts
data_dict = txn_context['data_dict']
object_base_serial_dict = txn_context['object_base_serial_dict']
object_serial_dict = txn_context['object_serial_dict']
conflict_serial_dict = txn_context['conflict_serial_dict'].copy()
txn_context['conflict_serial_dict'].clear()
resolved_conflict_serial_dict = txn_context[
'resolved_conflict_serial_dict']
for oid, conflict_serial_set in conflict_serial_dict.iteritems():
conflict_serial = max(conflict_serial_set)
serial = object_serial_dict[oid]
data = data_dict[oid]
if ZERO_TID in conflict_serial_set:
if 1:
# XXX: disable deadlock avoidance code until it is fixed
neo.lib.logging.info('Deadlock avoidance on %r:%r',
dump(oid), dump(serial))
else:
# Storage refused us from taking object lock, to avoid a
# possible deadlock. TID is actually used for some kind of
# "locking priority": when a higher value has the lock,
# this means we stored objects "too late", and we would
# otherwise cause a deadlock.
# To recover, we must ask storages to release locks we
# hold (to let possibly-competing transactions acquire
# them), and requeue our already-sent store requests.
# XXX: currently, brute-force is implemented: we send
# object data again.
neo.lib.logging.info('Deadlock avoidance triggered on %r:%r',
dump(oid), dump(serial))
for store_oid, store_data in data_dict.iteritems():
store_serial = object_serial_dict[store_oid]
if store_data is None:
self._checkCurrentSerialInTransaction(txn_context,
store_oid, store_serial)
else:
if store_data == '':
# Some undo
neo.lib.logging.warning('Deadlock avoidance cannot'
' reliably work with undo, this must be '
'implemented.')
conflict_serial = ZERO_TID
break
self._store(txn_context, store_oid, store_serial,
store_data, unlock=True)
else:
continue
elif data is not None:
resolved_serial_set = resolved_conflict_serial_dict.setdefault(
oid, set())
if resolved_serial_set and conflict_serial <= max(
resolved_serial_set):
# A later serial has already been resolved, skip.
resolved_serial_set.update(conflict_serial_set)
continue
new_data = tryToResolveConflict(oid, conflict_serial,
serial, data)
if new_data is not None:
neo.lib.logging.info('Conflict resolution succeeded for ' \
'%r:%r with %r', dump(oid), dump(serial),
dump(conflict_serial))
# Mark this conflict as resolved
resolved_serial_set.update(conflict_serial_set)
# Base serial changes too, as we resolved a conflict
object_base_serial_dict[oid] = conflict_serial
# Try to store again
self._store(txn_context, oid, conflict_serial, new_data)
append(oid)
continue
else:
neo.lib.logging.info('Conflict resolution failed for ' \
'%r:%r with %r', dump(oid), dump(serial),
dump(conflict_serial))
# XXX: Is it really required to remove from data_dict ?
del data_dict[oid]
txn_context['data_list'].remove(oid)
if data is None:
raise ReadConflictError(oid=oid, serials=(conflict_serial,
serial))
raise ConflictError(oid=oid, serials=(txn_context['ttid'],
serial), data=data)
return result
@profiler_decorator
def waitResponses(self, queue, handler_data):
"""Wait for all requests to be answered (or their connection to be
detected as closed)"""
pending = self.dispatcher.pending
_waitAnyMessage = self._waitAnyMessage
self.setHandlerData(handler_data)
while pending(queue):
_waitAnyMessage(queue)
@profiler_decorator
def waitStoreResponses(self, txn_context, tryToResolveConflict):
result = []
append = result.append
resolved_oid_set = set()
update = resolved_oid_set.update
ttid = txn_context['ttid']
_handleConflicts = self._handleConflicts
queue = txn_context['queue']
conflict_serial_dict = txn_context['conflict_serial_dict']
pending = self.dispatcher.pending
_waitAnyTransactionMessage = self._waitAnyTransactionMessage
while pending(queue) or conflict_serial_dict:
# Note: handler data can be overwritten by _handleConflicts
# so we must set it for each iteration.
_waitAnyTransactionMessage(txn_context)
if conflict_serial_dict:
conflicts = _handleConflicts(txn_context,
tryToResolveConflict)
if conflicts:
update(conflicts)
# Check for never-stored objects, and update result for all others
for oid, store_dict in \
txn_context['object_stored_counter_dict'].iteritems():
if not store_dict:
neo.lib.logging.error('tpc_store failed')
raise NEOStorageError('tpc_store failed')
elif oid in resolved_oid_set:
append((oid, ResolvedSerial))
return result
@profiler_decorator
def tpc_vote(self, transaction, tryToResolveConflict):
"""Store current transaction."""
txn_context = self._txn_container.get(transaction)
if txn_context is None or transaction is not txn_context['txn']:
raise StorageTransactionError(self, transaction)
result = self.waitStoreResponses(txn_context, tryToResolveConflict)
ttid = txn_context['ttid']
# Store data on each node
txn_stored_counter = 0
packet = Packets.AskStoreTransaction(ttid, str(transaction.user),
str(transaction.description), dumps(transaction._extension),
txn_context['data_list'])
add_involved_nodes = txn_context['involved_nodes'].add
for node, conn in self.cp.iterateForObject(ttid, writable=True):
neo.lib.logging.debug("voting object %s on %s", dump(ttid),
dump(conn.getUUID()))
try:
self._askStorage(conn, packet)
except ConnectionClosed:
continue
add_involved_nodes(node)
txn_stored_counter += 1
# check at least one storage node accepted
if txn_stored_counter == 0:
neo.lib.logging.error('tpc_vote failed')
raise NEOStorageError('tpc_vote failed')
# Check if master connection is still alive.
# This is just here to lower the probability of detecting a problem
# in tpc_finish, as we should do our best to detect problem before
# tpc_finish.
self._getMasterConnection()
txn_context['txn_voted'] = True
return result
@profiler_decorator
def tpc_abort(self, transaction):
"""Abort current transaction."""
txn_container = self._txn_container
txn_context = txn_container.get(transaction)
if txn_context is None:
return
ttid = txn_context['ttid']
p = Packets.AbortTransaction(ttid)
getConnForNode = self.cp.getConnForNode
# cancel the transaction on all those nodes
for node in txn_context['involved_nodes']:
conn = getConnForNode(node)
if conn is None:
continue
try:
conn.notify(p)
except:
neo.lib.logging.error(
'Exception in tpc_abort while notifying ' \
'storage node %r of abortion, ignoring.',
conn, exc_info=1)
self._getMasterConnection().notify(p)
queue = txn_context['queue']
# We don't need to flush queue, as it won't be reused by future
# transactions (deleted on next line & indexed by transaction object
# instance).
self.dispatcher.forget_queue(queue, flush_queue=False)
txn_container.delete(transaction)
@profiler_decorator
def tpc_finish(self, transaction, tryToResolveConflict, f=None):
"""Finish current transaction."""
txn_container = self._txn_container
txn_context = txn_container.get(transaction)
if txn_context is None:
raise StorageTransactionError('tpc_finish called for wrong '
'transaction')
if not txn_context['txn_voted']:
self.tpc_vote(transaction, tryToResolveConflict)
self._load_lock_acquire()
try:
# Call finish on master
oid_list = txn_context['data_list']
p = Packets.AskFinishTransaction(txn_context['ttid'], oid_list)
tid = self._askPrimary(p)
# Call function given by ZODB
if f is not None:
f(tid)
# Update cache
self._cache_lock_acquire()
try:
cache = self._cache
for oid, data in txn_context['data_dict'].iteritems():
if data is None:
# this is just a remnant of a
# checkCurrentSerialInTransaction call; ignore (no data
# was modified).
continue
# Update ex-latest value in cache
cache.invalidate(oid, tid)
if data:
# Store in cache with no next_tid
cache.store(oid, data, tid, None)
finally:
self._cache_lock_release()
txn_container.delete(transaction)
return tid
finally:
self._load_lock_release()
def undo(self, snapshot_tid, undone_tid, txn, tryToResolveConflict):
txn_context = self._txn_container.get(txn)
if txn_context is None:
raise StorageTransactionError(self, undone_tid)
txn_info, txn_ext = self._getTransactionInformation(undone_tid)
txn_oid_list = txn_info['oids']
# Regroup objects per partition, to ask a minimum set of storage.
partition_oid_dict = {}
pt = self.getPartitionTable()
for oid in txn_oid_list:
partition = pt.getPartition(oid)
try:
oid_list = partition_oid_dict[partition]
except KeyError:
oid_list = partition_oid_dict[partition] = []
oid_list.append(oid)
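The regrouping loop above is the classic "bucket by key" pattern; the try/except form avoids a double hash lookup on Python 2. With `collections.defaultdict` the same regrouping can be written more compactly (sketch with a stand-in `get_partition` callable):

```python
from collections import defaultdict

def group_by_partition(oid_list, get_partition):
    """Regroup OIDs per partition so each storage is asked only once."""
    partition_oid_dict = defaultdict(list)
    for oid in oid_list:
        # All OIDs mapping to the same partition share one request.
        partition_oid_dict[get_partition(oid)].append(oid)
    return dict(partition_oid_dict)
```

Grouping first means the number of `AskObjectUndoSerial` requests is bounded by the number of partitions touched, not by the number of objects in the undone transaction.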
# Ask storage the undo serial (serial at which object's previous data
# is)
getCellList = pt.getCellList
getCellSortKey = self.cp.getCellSortKey
getConnForCell = self.cp.getConnForCell
queue = self._getThreadQueue()
ttid = txn_context['ttid']
for partition, oid_list in partition_oid_dict.iteritems():
cell_list = getCellList(partition, readable=True)
# We do want to shuffle before getting one with the smallest
# key, so that all cells with the same (smallest) key have
# an identical chance to be chosen.
shuffle(cell_list)
# BBB: min(..., key=...) requires Python >= 2.5
cell_list.sort(key=getCellSortKey)
storage_conn = getConnForCell(cell_list[0])
storage_conn.ask(Packets.AskObjectUndoSerial(ttid,
snapshot_tid, undone_tid, oid_list), queue=queue)
# Wait for all AnswerObjectUndoSerial. We might get OidNotFoundError,
# meaning that objects in transaction's oid_list do not exist any
# longer. This is the symptom of a pack, so forbid undoing transaction
# when it happens.
undo_object_tid_dict = {}
try:
self.waitResponses(queue, undo_object_tid_dict)
except NEOStorageNotFoundError:
self.dispatcher.forget_queue(queue)
raise UndoError('non-undoable transaction')
# Send undo data to all storage nodes.
for oid in txn_oid_list:
current_serial, undo_serial, is_current = undo_object_tid_dict[oid]
if is_current:
data = None
else:
# Serial being undone is not the latest version for this
# object. This is an undo conflict, try to resolve it.
try:
# Load the latest version we are supposed to see
data = self.load(oid, current_serial)[0]
# Load the version we were undoing to
undo_data = self.load(oid, undo_serial)[0]
except NEOStorageNotFoundError:
raise UndoError('Object not found while resolving undo '
'conflict')
# Resolve conflict
try:
data = tryToResolveConflict(oid, current_serial,
undone_tid, undo_data, data)
except ConflictError:
data = None
if data is None:
raise UndoError('Some data were modified by a later ' \
'transaction', oid)
undo_serial = None
self._store(txn_context, oid, current_serial, data, undo_serial)
return None, txn_oid_list
def _insertMetadata(self, txn_info, extension):
for k, v in loads(extension).items():
txn_info[k] = v
def _getTransactionInformation(self, tid):
packet = Packets.AskTransactionInformation(tid)
for node, conn in self.cp.iterateForObject(tid, readable=True):
try:
txn_info, txn_ext = self._askStorage(conn, packet)
except ConnectionClosed:
continue
except NEOStorageNotFoundError:
# TID not found
continue
break
else:
raise NEOStorageError('Transaction %r not found' % (tid, ))
return (txn_info, txn_ext)
def undoLog(self, first, last, filter=None, block=0):
# XXX: undoLog is broken
if last < 0:
# See FileStorage.py for explanation
last = first - last
# First get a list of transactions from all storage nodes.
# Each storage node will return TIDs only for UP_TO_DATE state and
# FEEDING state cells
pt = self.getPartitionTable()
storage_node_list = pt.getNodeList()
queue = self._getThreadQueue()
packet = Packets.AskTIDs(first, last, INVALID_PARTITION)
for storage_node in storage_node_list:
conn = self.cp.getConnForNode(storage_node)
if conn is None:
continue
conn.ask(packet, queue=queue)
# Wait for answers from all storages.
tid_set = set()
self.waitResponses(queue, tid_set)
# Reorder tids
ordered_tids = sorted(tid_set, reverse=True)
neo.lib.logging.debug(
"UndoLog tids %s", [dump(x) for x in ordered_tids])
# For each transaction, get info
undo_info = []
append = undo_info.append
for tid in ordered_tids:
(txn_info, txn_ext) = self._getTransactionInformation(tid)
if filter is None or filter(txn_info):
txn_info.pop('packed')
txn_info.pop("oids")
self._insertMetadata(txn_info, txn_ext)
append(txn_info)
if len(undo_info) >= last - first:
break
# Check that we return at least one element, otherwise call
# again with an extended range
if len(undo_info) == 0 and not block:
undo_info = self.undoLog(first=first, last=last*5, filter=filter,
block=1)
return undo_info
def transactionLog(self, start, stop, limit):
node_map = self.pt.getNodeMap()
node_list = node_map.keys()
node_list.sort(key=self.cp.getCellSortKey)
partition_set = set(range(self.pt.getPartitions()))
queue = self._getThreadQueue()
# request a tid list for each partition
for node in node_list:
conn = self.cp.getConnForNode(node)
request_set = set(node_map[node]) & partition_set
if conn is None or not request_set:
continue
partition_set -= set(request_set)
packet = Packets.AskTIDsFrom(start, stop, limit, request_set)
conn.ask(packet, queue=queue)
if not partition_set:
break
assert not partition_set
tid_set = set()
self.waitResponses(queue, tid_set)
# request transaction information
txn_list = []
append = txn_list.append
tid = None
for tid in sorted(tid_set):
(txn_info, txn_ext) = self._getTransactionInformation(tid)
txn_info['ext'] = loads(txn_ext)
append(txn_info)
return (tid, txn_list)
def history(self, oid, size=1, filter=None):
# Get history information for the object first
packet = Packets.AskObjectHistory(oid, 0, size)
for node, conn in self.cp.iterateForObject(oid, readable=True):
try:
history_list = self._askStorage(conn, packet)
except ConnectionClosed:
continue
# Now that we have the object information, get the txn information
result = []
# history_list is already sorted descending (by the storage)
for serial, size in history_list:
txn_info, txn_ext = self._getTransactionInformation(serial)
# create history dict
txn_info.pop('id')
txn_info.pop('oids')
txn_info.pop('packed')
txn_info['tid'] = serial
txn_info['version'] = ''
txn_info['size'] = size
if filter is None or filter(txn_info):
result.append(txn_info)
self._insertMetadata(txn_info, txn_ext)
return result
@profiler_decorator
def importFrom(self, source, start, stop, tryToResolveConflict):
serials = {}
transaction_iter = source.iterator(start, stop)
for transaction in transaction_iter:
tid = transaction.tid
self.tpc_begin(transaction, tid, transaction.status)
for r in transaction:
oid = r.oid
pre = serials.get(oid, None)
# TODO: bypass conflict resolution, locks...
self.store(oid, pre, r.data, r.version, transaction)
serials[oid] = tid
conflicted = self.tpc_vote(transaction, tryToResolveConflict)
assert not conflicted, conflicted
real_tid = self.tpc_finish(transaction, tryToResolveConflict)
assert real_tid == tid, (real_tid, tid)
transaction_iter.close()
def iterator(self, start, stop):
if start is None:
start = ZERO_TID
return Iterator(self, start, stop)
def lastTransaction(self):
return self._askPrimary(Packets.AskLastTransaction())
def abortVersion(self, src, transaction):
if self._txn_container.get(transaction) is None:
raise StorageTransactionError(self, transaction)
return '', []
def commitVersion(self, src, dest, transaction):
if self._txn_container.get(transaction) is None:
raise StorageTransactionError(self, transaction)
return '', []
def __del__(self):
"""Clear all connection."""
# Due to bug in ZODB, close is not always called when shutting
# down zope, so use __del__ to close connections
for conn in self.em.getConnectionList():
conn.close()
self.cp.flush()
self.master_conn = None
# Stop polling thread
neo.lib.logging.debug('Stopping %s', self.poll_thread)
self.poll_thread.stop()
psThreadedPoll()
close = __del__
def invalidationBarrier(self):
self._askPrimary(Packets.AskBarrier())
def pack(self, t):
tid = repr(TimeStamp(*time.gmtime(t)[:5] + (t % 60, )))
if tid == ZERO_TID:
raise NEOStorageError('Invalid pack time')
self._askPrimary(Packets.AskPack(tid))
# XXX: this is only needed to make ZODB unit tests pass.
# It should not be otherwise required (clients should be free to load
# old data as long as it is available in cache, even if it was pruned
# by a pack), so don't bother invalidating on other clients.
self._cache_lock_acquire()
try:
self._cache.clear()
finally:
self._cache_lock_release()
def getLastTID(self, oid):
return self.load(oid)[1]
def checkCurrentSerialInTransaction(self, oid, serial, transaction):
txn_context = self._txn_container.get(transaction)
if txn_context is None:
raise StorageTransactionError(self, transaction)
self._checkCurrentSerialInTransaction(txn_context, oid, serial)
def _checkCurrentSerialInTransaction(self, txn_context, oid, serial):
ttid = txn_context['ttid']
txn_context['object_serial_dict'][oid] = serial
# Placeholders
queue = txn_context['queue']
txn_context['object_stored_counter_dict'][oid] = {}
data_dict = txn_context['data_dict']
if oid not in data_dict:
# Marker value so we don't try to resolve conflicts.
data_dict[oid] = None
txn_context['data_list'].append(oid)
packet = Packets.AskCheckCurrentSerial(ttid, serial, oid)
for node, conn in self.cp.iterateForObject(oid, writable=True):
try:
conn.ask(packet, queue=queue)
except ConnectionClosed:
continue
self._waitAnyTransactionMessage(txn_context, False)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/cache.py 0000664 0000000 0000000 00000021622 11634614701 0024066 0 ustar 00root root 0000000 0000000 ##############################################################################
#
# Copyright (c) 2011 Nexedi SARL and Contributors. All Rights Reserved.
# Julien Muchembled
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
##############################################################################
import math
class CacheItem(object):
__slots__ = ('oid', 'tid', 'next_tid', 'data',
'counter', 'level', 'expire',
'prev', 'next')
def __repr__(self):
s = ''
for attr in self.__slots__:
try:
value = getattr(self, attr)
if value:
if attr in ('prev', 'next'):
s += ' %s=<...>' % attr
continue
elif attr == 'data':
value = '...'
s += ' %s=%r' % (attr, value)
except AttributeError:
pass
return '<%s%s>' % (self.__class__.__name__, s)
class ClientCache(object):
"""In-memory pickle cache based on Multi-Queue cache algorithm
Multi-Queue algorithm for Second Level Buffer Caches:
http://www.usenix.org/event/usenix01/full_papers/zhou/zhou_html/index.html
Quick description:
- There are multiple "regular" queues, plus a history queue
- The queue to store an object in depends on its access frequency
- The queue an object is in defines its lifespan (a higher-index queue
means a longer lifespan)
-> The more often an object is accessed, the longer its lifespan
will be
- Upon cache or history hit, object frequency is increased and object
might get moved to longer-lived queue
- Each access "ages" objects in cache, and an aging object is moved to
shorter-lived queue as it ages without being accessed, or in the
history queue if it's really too old.
"""
__slots__ = ('_life_time', '_max_history_size', '_max_size',
'_queue_list', '_oid_dict', '_time', '_size', '_history_size')
def __init__(self, life_time=10000, max_history_size=100000,
max_size=20*1024*1024):
self._life_time = life_time
self._max_history_size = max_history_size
self._max_size = max_size
self.clear()
def clear(self):
"""Reset cache"""
self._queue_list = [None] # first is history
self._oid_dict = {}
self._time = 0
self._size = 0
self._history_size = 0
def _iterQueue(self, level):
"""for debugging purpose"""
if level < len(self._queue_list):
item = head = self._queue_list[level]
if item:
while 1:
yield item
item = item.next
if item is head:
break
def _add(self, item):
level = item.level
try:
head = self._queue_list[level]
except IndexError:
assert len(self._queue_list) == level
self._queue_list.append(item)
item.prev = item.next = item
else:
if head:
item.prev = tail = head.prev
tail.next = head.prev = item
item.next = head
else:
self._queue_list[level] = item
item.prev = item.next = item
if level:
item.expire = self._time + self._life_time
else:
self._size -= len(item.data)
item.data = None
if self._history_size < self._max_history_size:
self._history_size += 1
else:
self._remove(head)
item_list = self._oid_dict[head.oid]
item_list.remove(head)
if not item_list:
del self._oid_dict[head.oid]
def _remove(self, item):
level = item.level
if level is not None:
item.level = level - 1
next = item.next
if next is item:
self._queue_list[level] = next = None
else:
item.prev.next = next
next.prev = item.prev
if self._queue_list[level] is item:
self._queue_list[level] = next
return next
def _fetched(self, item, _log=math.log):
self._remove(item)
item.counter = counter = item.counter + 1
# XXX It might be better to adjust the level according to the object
# size. See commented factor for example.
item.level = 1 + int(_log(counter, 2)
# * (1.01 - float(len(item.data)) / self._max_size)
)
self._add(item)
self._time = time = self._time + 1
for head in self._queue_list[1:]:
if head and head.expire < time:
self._remove(head)
self._add(head)
break
def _load(self, oid, before_tid=None):
item_list = self._oid_dict.get(oid)
if item_list:
if before_tid:
for item in reversed(item_list):
if item.tid < before_tid:
next_tid = item.next_tid
if next_tid and next_tid < before_tid:
break
return item
else:
item = item_list[-1]
if not item.next_tid:
return item
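`_load` scans the per-oid revision list newest-to-oldest for the first record created before `before_tid`; if that record was already superseded before `before_tid` (i.e. `next_tid < before_tid`), the revision the caller needs is simply not cached and the lookup is a miss. The same selection logic over plain `(tid, next_tid)` pairs, as a sketch:

```python
def pick_revision(records, before_tid):
    """Pick the cached revision valid just before before_tid.

    records: list of (tid, next_tid) pairs sorted by tid ascending,
    with next_tid=None meaning "still current". Returns the record
    whose validity interval [tid, next_tid) covers before_tid - 1,
    or None on a cache miss (mirrors ClientCache._load).
    """
    for tid, next_tid in reversed(records):
        if tid < before_tid:
            if next_tid is not None and next_tid < before_tid:
                # A newer revision should exist but is not cached.
                return None
            return (tid, next_tid)
    return None
```

Note that `next_tid` is an exclusive upper bound, which is why `next_tid == before_tid` is still a hit on the older record.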
def load(self, oid, before_tid=None):
"""Return a revision of oid that was current before given tid"""
item = self._load(oid, before_tid)
if item:
data = item.data
if data is not None:
self._fetched(item)
return data, item.tid, item.next_tid
def store(self, oid, data, tid, next_tid):
"""Store a new data record in the cache"""
size = len(data)
max_size = self._max_size
if size < max_size:
item = self._load(oid, next_tid)
if item:
assert not (item.data or item.level)
assert item.tid == tid and item.next_tid == next_tid
self._history_size -= 1
else:
item = CacheItem()
item.oid = oid
item.tid = tid
item.next_tid = next_tid
item.counter = 0
item.level = None
try:
item_list = self._oid_dict[oid]
except KeyError:
self._oid_dict[oid] = [item]
else:
if next_tid:
for i, x in enumerate(item_list):
if tid < x.tid:
break
item_list.insert(i, item)
else:
if item_list:
prev = item_list[-1]
item.counter = prev.counter
prev.counter = 0
if prev.level > 1:
self._fetched(prev)
item_list.append(item)
item.data = data
self._fetched(item)
self._size += size
if max_size < self._size:
for head in self._queue_list[1:]:
while head:
next = self._remove(head)
head.level = 0
self._add(head)
if self._size <= max_size:
return
head = next
def invalidate(self, oid, tid):
"""Mark data record as being valid only up to given tid"""
try:
item = self._oid_dict[oid][-1]
except KeyError:
pass
else:
if item.next_tid is None:
item.next_tid = tid
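# Hypothetical sketch (not part of NEO): the level an item reaches after
# a given number of fetches, mirroring the formula used in _fetched()
# above. Frequently fetched items climb levels only logarithmically.
import math

def _example_cache_level(counter):
    # counter=1 -> level 1, counter=2..3 -> 2, counter=4..7 -> 3, ...
    return 1 + int(math.log(counter, 2))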
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/component.xml
A scalable storage for Zope
Give the list of master nodes, like ip:port ip:port...
Give the name of the cluster
If true, enable automatic data compression (compression is only used
when compressed size is smaller).
If true, only reads may be executed against the storage. Note
that the "pack" operation is not considered a write operation
and is still allowed on a read-only neostorage.
Log debugging information
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/config.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ZODB.config import BaseConfig
class NeoStorage(BaseConfig):
def open(self):
from neo.client.Storage import Storage
config = self.config
return Storage(**dict((k, getattr(config, k))
for k in config.getSectionAttributes()))
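# A matching configuration section for zope.conf might look like the
# following sketch (key names here are illustrative, not authoritative;
# the actual keys and their descriptions are declared in component.xml):
#
#   %import neo.client
#   <NEOStorage>
#       master_nodes 127.0.0.1:10000 127.0.0.1:10001
#       name         my_cluster
#   </NEOStorage>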
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/container.py
#
# Copyright (C) 2011 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from thread import get_ident
from neo.lib.locking import Queue
class ContainerBase(object):
def __init__(self):
self._context_dict = {}
def _getID(self, *args, **kw):
raise NotImplementedError
def _new(self, *args, **kw):
raise NotImplementedError
def delete(self, *args, **kw):
del self._context_dict[self._getID(*args, **kw)]
def get(self, *args, **kw):
return self._context_dict.get(self._getID(*args, **kw))
def new(self, *args, **kw):
result = self._context_dict[self._getID(*args, **kw)] = self._new(
*args, **kw)
return result
class ThreadContainer(ContainerBase):
def _getID(self):
return get_ident()
def _new(self):
return {
'queue': Queue(0),
'answer': None,
}
def get(self):
"""
Implicitly create a thread context if it doesn't exist.
"""
my_id = self._getID()
try:
result = self._context_dict[my_id]
except KeyError:
result = self._context_dict[my_id] = self._new()
return result
class TransactionContainer(ContainerBase):
def _getID(self, txn):
return id(txn)
def _new(self, txn):
return {
'queue': Queue(0),
'txn': txn,
'ttid': None,
'data_dict': {},
'data_list': [],
'object_base_serial_dict': {},
'object_serial_dict': {},
'object_stored_counter_dict': {},
'conflict_serial_dict': {},
'resolved_conflict_serial_dict': {},
'txn_voted': False,
'involved_nodes': set(),
}
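# Hypothetical sketch (not part of NEO): the create-on-first-access
# pattern used by ThreadContainer.get() above, with any hashable key
# standing in for the thread identifier.
def _example_context_for(registry, key, factory):
    try:
        return registry[key]
    except KeyError:
        # First access for this key: build and remember a new context.
        context = registry[key] = factory()
        return context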
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/exception.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ZODB import POSException
class NEOStorageError(POSException.StorageError):
pass
class NEOStorageNotFoundError(NEOStorageError):
pass
class NEOStorageDoesNotExistError(NEOStorageNotFoundError):
"""
This error is a refinement of NEOStorageNotFoundError: this means
that some object was not found, but also that it does not exist at all.
"""
pass
class NEOStorageCreationUndoneError(NEOStorageDoesNotExistError):
"""
This error is a refinement of NEOStorageDoesNotExistError: this means that
some object existed at some point, but its creation was undone.
"""
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/handlers/__init__.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.lib.handler import EventHandler
from neo.lib.protocol import ProtocolError, Packets
class BaseHandler(EventHandler):
"""Base class for client-side EventHandler implementations."""
def __init__(self, app):
super(BaseHandler, self).__init__(app)
self.dispatcher = app.dispatcher
def dispatch(self, conn, packet):
# Before calling superclass's dispatch method, lock the connection.
# This covers the case where the handler sends a response to the
# received packet.
conn.lock()
try:
super(BaseHandler, self).dispatch(conn, packet)
finally:
conn.release()
def packetReceived(self, conn, packet):
"""Redirect all received packet to dispatcher thread."""
if packet.isResponse() and type(packet) is not Packets.Pong:
if not self.dispatcher.dispatch(conn, packet.getId(), packet):
raise ProtocolError('Unexpected response packet from %r: %r'
% (conn, packet))
else:
self.dispatch(conn, packet)
def connectionLost(self, conn, new_state):
self.app.dispatcher.unregister(conn)
def connectionFailed(self, conn):
self.app.dispatcher.unregister(conn)
def unexpectedInAnswerHandler(*args, **kw):
raise Exception('Unexpected event in an answer handler')
class AnswerBaseHandler(EventHandler):
connectionStarted = unexpectedInAnswerHandler
connectionCompleted = unexpectedInAnswerHandler
connectionFailed = unexpectedInAnswerHandler
connectionAccepted = unexpectedInAnswerHandler
timeoutExpired = unexpectedInAnswerHandler
connectionClosed = unexpectedInAnswerHandler
packetReceived = unexpectedInAnswerHandler
peerBroken = unexpectedInAnswerHandler
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/handlers/master.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.client.handlers import BaseHandler, AnswerBaseHandler
from neo.lib.pt import MTPartitionTable as PartitionTable
from neo.lib.protocol import NodeTypes, NodeStates, ProtocolError
from neo.lib.util import dump
from neo.client.exception import NEOStorageError
class PrimaryBootstrapHandler(AnswerBaseHandler):
""" Bootstrap handler used when looking for the primary master """
def notReady(self, conn, message):
app = self.app
app.trying_master_node = None
def acceptIdentification(self, conn, node_type,
uuid, num_partitions, num_replicas, your_uuid):
app = self.app
# this must be a master node
if node_type != NodeTypes.MASTER:
conn.close()
return
# the master must give a UUID
if your_uuid is None:
raise ProtocolError('No UUID supplied')
app.uuid = your_uuid
neo.lib.logging.info('Got a UUID: %s', dump(app.uuid))
node = app.nm.getByAddress(conn.getAddress())
conn.setUUID(uuid)
node.setUUID(uuid)
# Always create partition table
app.pt = PartitionTable(num_partitions, num_replicas)
def answerPrimary(self, conn, primary_uuid,
known_master_list):
app = self.app
# Register new master nodes.
for address, uuid in known_master_list:
n = app.nm.getByAddress(address)
if uuid is not None and n.getUUID() != uuid:
n.setUUID(uuid)
if primary_uuid is not None:
primary_node = app.nm.getByUUID(primary_uuid)
if primary_node is None:
# I don't know such a node. Probably this information
# is old. So ignore it.
neo.lib.logging.warning('Unknown primary master UUID: %s. '
'Ignoring.', dump(primary_uuid))
else:
if app.trying_master_node is not primary_node:
app.trying_master_node = None
conn.close()
app.primary_master_node = primary_node
else:
if app.primary_master_node is not None:
# The node we thought was the primary master is no
# longer the primary master.
app.primary_master_node = None
app.trying_master_node = None
conn.close()
def answerPartitionTable(self, conn, ptid, row_list):
assert row_list
self.app.pt.load(ptid, row_list, self.app.nm)
def answerNodeInformation(self, conn):
pass
class PrimaryNotificationsHandler(BaseHandler):
""" Handler that process the notifications from the primary master """
def connectionClosed(self, conn):
app = self.app
if app.master_conn is not None:
neo.lib.logging.critical("connection to primary master node closed")
app.master_conn = None
app.primary_master_node = None
super(PrimaryNotificationsHandler, self).connectionClosed(conn)
def stopOperation(self, conn):
neo.lib.logging.critical("master node ask to stop operation")
def invalidateObjects(self, conn, tid, oid_list):
app = self.app
app._cache_lock_acquire()
try:
invalidate = app._cache.invalidate
for oid in oid_list:
invalidate(oid, tid)
db = app.getDB()
if db is not None:
db.invalidate(tid, oid_list)
finally:
app._cache_lock_release()
# For the two methods below, we must not use app._getPartitionTable()
# to avoid a deadlock. It is safe not to check the master connection
# because it's in the master handler, so the connection is already
# established.
def notifyPartitionChanges(self, conn, ptid, cell_list):
if self.app.pt.filled():
self.app.pt.update(ptid, cell_list, self.app.nm)
def notifyNodeInformation(self, conn, node_list):
nm = self.app.nm
nm.update(node_list)
# XXX: 'update' automatically closes DOWN nodes. Do we really want
# to do the same thing for nodes in other non-running states ?
for node_type, addr, uuid, state in node_list:
if state != NodeStates.RUNNING:
node = nm.getByUUID(uuid)
if node and node.isConnected():
node.getConnection().close()
class PrimaryAnswersHandler(AnswerBaseHandler):
""" Handle that process expected packets from the primary master """
def answerBeginTransaction(self, conn, ttid):
self.app.setHandlerData(ttid)
def answerNewOIDs(self, conn, oid_list):
self.app.new_oid_list = list(oid_list)
def answerTransactionFinished(self, conn, _, tid):
self.app.setHandlerData(tid)
def answerPack(self, conn, status):
if not status:
raise NEOStorageError('Already packing')
def answerLastTransaction(self, conn, ltid):
self.app.setHandlerData(ltid)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/handlers/storage.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ZODB.TimeStamp import TimeStamp
from ZODB.POSException import ConflictError
import neo.lib
from neo.client.handlers import BaseHandler, AnswerBaseHandler
from neo.lib.protocol import NodeTypes, ProtocolError, LockState, ZERO_TID
from neo.lib.util import dump
from neo.client.exception import NEOStorageError, NEOStorageNotFoundError
from neo.client.exception import NEOStorageDoesNotExistError
from neo.lib.exception import NodeNotReady
class StorageEventHandler(BaseHandler):
def connectionLost(self, conn, new_state):
node = self.app.nm.getByAddress(conn.getAddress())
assert node is not None
self.app.cp.removeConnection(node)
self.app.dispatcher.unregister(conn)
def connectionFailed(self, conn):
# Connection to a storage node failed
node = self.app.nm.getByAddress(conn.getAddress())
assert node is not None
self.app.cp.removeConnection(node)
super(StorageEventHandler, self).connectionFailed(conn)
class StorageBootstrapHandler(AnswerBaseHandler):
""" Handler used when connecting to a storage node """
def notReady(self, conn, message):
raise NodeNotReady(message)
def acceptIdentification(self, conn, node_type,
uuid, num_partitions, num_replicas, your_uuid):
# this must be a storage node
if node_type != NodeTypes.STORAGE:
conn.close()
return
node = self.app.nm.getByAddress(conn.getAddress())
assert node is not None, conn.getAddress()
conn.setUUID(uuid)
node.setUUID(uuid)
node.setConnection(conn)
class StorageAnswersHandler(AnswerBaseHandler):
""" Handle all messages related to ZODB operations """
def answerObject(self, conn, oid, start_serial, end_serial,
compression, checksum, data, data_serial):
if data_serial is not None:
raise NEOStorageError('Storage should never send non-None '
'data_serial to clients, got %s' % (dump(data_serial), ))
self.app.setHandlerData((oid, start_serial, end_serial,
compression, checksum, data))
def answerStoreObject(self, conn, conflicting, oid, serial):
txn_context = self.app.getHandlerData()
object_stored_counter_dict = txn_context[
'object_stored_counter_dict'][oid]
if conflicting:
# Warning: if a storage (S1) is much faster than another (S2), then
# we may process entirely a conflict with S1 (i.e. we received the
# answer to the store of the resolved object on S1) before we
# receive the conflict answer from the first store on S2.
neo.lib.logging.info('%r reports a conflict for %r with %r', conn,
dump(oid), dump(serial))
# If this conflict is not already resolved, mark it for
# resolution.
if serial not in txn_context[
'resolved_conflict_serial_dict'].get(oid, ()):
if serial in object_stored_counter_dict and serial != ZERO_TID:
raise NEOStorageError('Storages %s accepted object %s'
' for serial %s but %s reports a conflict for it.' % (
map(dump, object_stored_counter_dict[serial]),
dump(oid), dump(serial), dump(conn.getUUID())))
conflict_serial_dict = txn_context['conflict_serial_dict']
conflict_serial_dict.setdefault(oid, set()).add(serial)
else:
uuid_set = object_stored_counter_dict.setdefault(serial, set())
uuid_set.add(conn.getUUID())
answerCheckCurrentSerial = answerStoreObject
def answerStoreTransaction(self, conn, _):
pass
def answerTIDsFrom(self, conn, tid_list):
neo.lib.logging.debug('Get %d TIDs from %r', len(tid_list), conn)
tids_from = self.app.getHandlerData()
assert not tids_from.intersection(set(tid_list))
tids_from.update(tid_list)
def answerTransactionInformation(self, conn, tid,
user, desc, ext, packed, oid_list):
self.app.setHandlerData(({
'time': TimeStamp(tid).timeTime(),
'user_name': user,
'description': desc,
'id': tid,
'oids': oid_list,
'packed': packed,
}, ext))
def answerObjectHistory(self, conn, _, history_list):
# history_list is a list of tuple (serial, size)
self.app.setHandlerData(history_list)
def oidNotFound(self, conn, message):
# This can happen either when :
# - loading an object
# - asking for history
raise NEOStorageNotFoundError(message)
def oidDoesNotExist(self, conn, message):
raise NEOStorageDoesNotExistError(message)
def tidNotFound(self, conn, message):
# This can happen when requesting transaction information
raise NEOStorageNotFoundError(message)
def answerTIDs(self, conn, tid_list):
self.app.getHandlerData().update(tid_list)
def answerObjectUndoSerial(self, conn, object_tid_dict):
self.app.getHandlerData().update(object_tid_dict)
def answerHasLock(self, conn, oid, status):
store_msg_id = self.app.getHandlerData()['timeout_dict'].pop(oid)
if status == LockState.GRANTED_TO_OTHER:
# Stop expecting the timed-out store request.
self.app.dispatcher.forget(conn, store_msg_id)
# Object is locked by another transaction, and we have waited until
# timeout. To avoid a deadlock, abort current transaction (we might
# be locking objects the other transaction is waiting for).
raise ConflictError('Lock wait timeout for oid %s on %r' % (
dump(oid), conn))
# The HasLock design requires the storage to be multi-threaded, so
# that it can answer AskHasLock while processing store requests.
# This means that the 2 cases (granted to us or to nobody) are
# legitimate: either the storage gave us the lock but is/was slow to
# store our data, or it took a lot of time processing a previous
# store (and did not even consider our lock request).
# XXX: But storage nodes are still mono-threaded, so they should
# only answer with GRANTED_TO_OTHER (if they reply!), except
# maybe in very rare race conditions. Only log for now.
# This also means that most of the time, if the storage is slow
# to process some store requests, HasLock will time out in turn
# and the connector will be closed.
# Anyway, it's not clear that HasLock requests are useful.
# Are store requests potentially long to process? If not,
# we should simply raise a ConflictError on store timeout.
neo.lib.logging.info('Store of oid %s delayed (storage overload ?)',
dump(oid))
def alreadyPendingError(self, conn, message):
pass
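# Hypothetical sketch (not part of NEO): the bookkeeping performed by
# answerStoreObject() above - either record which storage node accepted
# the serial, or mark the serial as a conflict still to be resolved.
def _example_record_store_answer(oid, serial, uuid, conflicting,
        stored_by_serial, conflict_serials):
    if conflicting:
        # Remember the serial that must go through conflict resolution.
        conflict_serials.setdefault(oid, set()).add(serial)
    else:
        # Count which storage nodes accepted this serial.
        stored_by_serial.setdefault(serial, set()).add(uuid)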
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/iterator.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ZODB import BaseStorage
from zope.interface import implements
import ZODB.interfaces
from neo.lib.util import u64, add64
from neo.client.exception import NEOStorageCreationUndoneError
from neo.client.exception import NEOStorageNotFoundError
CHUNK_LENGTH = 100
class Record(BaseStorage.DataRecord):
""" BaseStorage Transaction record yielded by the Transaction object """
def __init__(self, oid, tid, data, prev):
BaseStorage.DataRecord.__init__(self, oid, tid, data, prev)
def __str__(self):
oid = u64(self.oid)
tid = u64(self.tid)
args = (oid, tid, len(self.data), self.data_txn)
return 'Record %s:%s: %s (%s)' % args
class Transaction(BaseStorage.TransactionRecord):
""" Transaction object yielded by the NEO iterator """
def __init__(self, app, tid, status, user, desc, ext, oid_list,
prev_serial_dict):
BaseStorage.TransactionRecord.__init__(self, tid, status, user, desc,
ext)
self.app = app
self.oid_list = oid_list
self.oid_index = 0
self.history = []
self.prev_serial_dict = prev_serial_dict
def __iter__(self):
return self
def next(self):
""" Iterate over the transaction records """
app = self.app
oid_list = self.oid_list
oid_index = self.oid_index
oid_len = len(oid_list)
# load an object
while oid_index < oid_len:
oid = oid_list[oid_index]
try:
data, _, next_tid = app.load(oid, self.tid)
except NEOStorageCreationUndoneError:
data = next_tid = None
except NEOStorageNotFoundError:
# Transactions are not updated after a pack, so their object
# will not be found in the database. Skip them.
oid_list.pop(oid_index)
oid_len -= 1
continue
oid_index += 1
break
else:
# no more records for this transaction
self.oid_index = 0
raise StopIteration
self.oid_index = oid_index
record = Record(oid, self.tid, data,
self.prev_serial_dict.get(oid))
if next_tid is None:
self.prev_serial_dict.pop(oid, None)
else:
self.prev_serial_dict[oid] = self.tid
return record
def __str__(self):
tid = u64(self.tid)
args = (tid, self.user, self.status)
return 'Transaction #%s: %s %s' % args
class Iterator(object):
""" An iterator for the NEO storage """
def __init__(self, app, start, stop):
self.app = app
self._txn_list = []
assert None not in (start, stop)
self._start = start
self._stop = stop
# index of current iteration
self._index = 0
self._closed = False
# OID -> previous TID mapping
# TODO: prune old entries while walking ?
self._prev_serial_dict = {}
def __iter__(self):
return self
def __getitem__(self, index):
""" Simple index-based iterator """
if index != self._index:
raise IndexError(index)
return self.next()
def next(self):
""" Return an iterator for the next transaction"""
if self._closed:
raise IOError('iterator closed')
if not self._txn_list:
(max_tid, chunk) = self.app.transactionLog(self._start, self._stop,
CHUNK_LENGTH)
if not chunk:
# nothing more
raise StopIteration
self._start = add64(max_tid, 1)
self._txn_list = chunk
txn = self._txn_list.pop(0)
self._index += 1
tid = txn['id']
user = txn['user_name']
desc = txn['description']
oid_list = txn['oids']
extension = txn['ext']
txn = Transaction(self.app, tid, ' ', user, desc, extension, oid_list,
self._prev_serial_dict)
return txn
def __str__(self):
return 'NEO transactions iterator'
def close(self):
self._closed = True
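# Hypothetical sketch (not part of NEO): the chunked pagination pattern
# used by Iterator.next() above - fetch up to `chunk_length` transactions
# at a time, then restart just after the highest TID received. The real
# code advances 8-byte TIDs with add64(); plain integers are used here.
def _example_paginate(fetch, start, stop, chunk_length=100):
    # `fetch(start, stop, n)` is assumed to return (max_tid, chunk).
    while True:
        max_tid, chunk = fetch(start, stop, chunk_length)
        if not chunk:
            return
        for txn in chunk:
            yield txn
        start = max_tid + 1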
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/poll.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from logging import DEBUG, ERROR
from threading import Thread, Event, enumerate as thread_enum
from neo.lib.locking import Lock
import neo.lib
class _ThreadedPoll(Thread):
"""Polling thread."""
def __init__(self, em, **kw):
Thread.__init__(self, **kw)
self.em = em
self.setDaemon(True)
self._stop = Event()
def run(self):
_log = neo.lib.logging.log
def log(*args, **kw):
# Ignore errors due to garbage collection on exit
try:
_log(*args, **kw)
except:
if not self.stopping():
raise
log(DEBUG, 'Started %s', self)
while not self.stopping():
try:
# XXX: Delay cannot be infinite here, because we need
# to check connection timeout and thread shutdown.
self.em.poll(1)
except:
log(ERROR, 'poll raised, retrying', exc_info=1)
log(DEBUG, 'Threaded poll stopped')
self._stop.clear()
def stop(self):
self._stop.set()
def stopping(self):
return self._stop.isSet()
class ThreadedPoll(object):
"""
Wrapper for the polling thread, so that it can be started again
after it has stopped.
"""
_thread = None
_started = False
def __init__(self, *args, **kw):
lock = Lock()
self._status_lock_acquire = lock.acquire
self._status_lock_release = lock.release
self._args = args
self._kw = kw
self.newThread()
def newThread(self):
self._thread = _ThreadedPoll(*self._args, **self._kw)
def start(self):
"""
Start thread if not started or restart it if it's shutting down.
"""
# TODO: a refcount-based approach would be better, but more intrusive.
self._status_lock_acquire()
try:
thread = self._thread
if thread.stopping():
# XXX: ideally, we should wake thread up here, to be sure not
# to wait forever.
thread.join()
if not thread.isAlive():
if self._started:
self.newThread()
else:
self._started = True
self._thread.start()
finally:
self._status_lock_release()
def stop(self):
self._status_lock_acquire()
try:
self._thread.stop()
finally:
self._status_lock_release()
def __getattr__(self, key):
return getattr(self._thread, key)
def __repr__(self):
return repr(self._thread)
def psThreadedPoll(log=None):
"""
Logs alive ThreadedPoll threads.
"""
if log is None:
log = neo.lib.logging.debug
for thread in thread_enum():
if not isinstance(thread, ThreadedPoll):
continue
log('Thread %s at 0x%x, %s', thread.getName(), id(thread),
thread._stop.isSet() and 'stopping' or 'running')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/client/pool.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import time
from random import shuffle
import neo.lib
from neo.lib.locking import RLock
from neo.lib.protocol import NodeTypes, Packets
from neo.lib.connection import MTClientConnection, ConnectionClosed
from neo.client.exception import NEOStorageError
from neo.lib.profiling import profiler_decorator
from neo.lib.exception import NodeNotReady
# How long before we might retry a connection to a node to which connection
# failed in the past.
MAX_FAILURE_AGE = 600
# Cell list sort keys
# We are connected to storage node hosting cell, high priority
CELL_CONNECTED = -1
# normal priority
CELL_GOOD = 0
# Storage node hosting cell failed recently, low priority
CELL_FAILED = 1
class ConnectionPool(object):
"""This class manages a pool of connections to storage nodes."""
def __init__(self, app, max_pool_size = 25):
self.app = app
self.max_pool_size = max_pool_size
self.connection_dict = {}
# Define a lock in order to create one connection to
# a storage node at a time to avoid multiple connections
# to the same node.
l = RLock()
self.connection_lock_acquire = l.acquire
self.connection_lock_release = l.release
self.node_failure_dict = {}
@profiler_decorator
def _initNodeConnection(self, node):
"""Init a connection to a given storage node."""
addr = node.getAddress()
assert addr is not None
app = self.app
neo.lib.logging.debug('trying to connect to %s - %s', node,
node.getState())
conn = MTClientConnection(app.em, app.storage_event_handler, addr,
connector=app.connector_handler(), dispatcher=app.dispatcher)
p = Packets.RequestIdentification(NodeTypes.CLIENT,
app.uuid, None, app.name)
try:
app._ask(conn, p, handler=app.storage_bootstrap_handler)
except ConnectionClosed:
neo.lib.logging.error('Connection to %r failed', node)
self.notifyFailure(node)
conn = None
except NodeNotReady:
neo.lib.logging.info('%r not ready', node)
self.notifyFailure(node)
conn = None
else:
neo.lib.logging.info('Connected %r', node)
return conn
@profiler_decorator
def _dropConnections(self):
"""Drop connections."""
for node_uuid, conn in self.connection_dict.items():
# Drop connections that look unused.
conn.lock()
try:
if not conn.pending() and \
not self.app.dispatcher.registered(conn):
del self.connection_dict[conn.getUUID()]
conn.close()
neo.lib.logging.debug('_dropConnections : connection to ' \
'storage node %s:%d closed', *(conn.getAddress()))
if len(self.connection_dict) <= self.max_pool_size:
break
finally:
conn.unlock()
@profiler_decorator
def notifyFailure(self, node):
self._notifyFailure(node.getUUID(), time.time() + MAX_FAILURE_AGE)
def _notifyFailure(self, uuid, at):
self.node_failure_dict[uuid] = at
@profiler_decorator
def getCellSortKey(self, cell):
return self._getCellSortKey(cell.getUUID(), time.time())
def _getCellSortKey(self, uuid, now):
if uuid in self.connection_dict:
result = CELL_CONNECTED
else:
failure = self.node_failure_dict.get(uuid)
if failure is None or failure < now:
result = CELL_GOOD
else:
result = CELL_FAILED
return result
@profiler_decorator
def getConnForCell(self, cell):
return self.getConnForNode(cell.getNode())
def iterateForObject(self, object_id, readable=False, writable=False):
""" Iterate over nodes managing an object """
pt = self.app.getPartitionTable()
cell_list = pt.getCellListForOID(object_id, readable, writable)
if not cell_list:
raise NEOStorageError('no storage available')
getConnForNode = self.getConnForNode
while cell_list:
new_cell_list = []
cell_list = [c for c in cell_list if c.getNode().isRunning()]
shuffle(cell_list)
cell_list.sort(key=self.getCellSortKey)
for cell in cell_list:
node = cell.getNode()
conn = getConnForNode(node)
if conn is not None:
yield (node, conn)
elif node.isRunning():
new_cell_list.append(cell)
cell_list = new_cell_list
if new_cell_list:
# wait a bit to avoid a busy loop
time.sleep(1)
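The retry loop above can be modeled in isolation. This is a simplified sketch, not the real API: connection acquisition, liveness checks and the pause are injected as plain callables, and the shuffling/sorting steps are omitted.

```python
# Injected callables replace the real partition table, connection
# pool and sleep; all names here are illustrative only.
def iterate_for_object(cells, get_conn, is_running, pause):
    while cells:
        remaining = []
        for cell in cells:
            if not is_running(cell):
                continue        # drop nodes that stopped running
            conn = get_conn(cell)
            if conn is not None:
                yield cell, conn
            else:
                remaining.append(cell)  # still running: retry later
        cells = remaining
        if remaining:
            pause()             # avoid a busy loop (time.sleep(1) above)

# 'a' fails once and succeeds on the second round; 'b' succeeds at once.
attempts = {'a': [None, 'conn-a'], 'b': ['conn-b']}
result = list(iterate_for_object(
    ['a', 'b'],
    get_conn=lambda c: attempts[c].pop(0),
    is_running=lambda c: True,
    pause=lambda: None))
# result == [('b', 'conn-b'), ('a', 'conn-a')]
```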
@profiler_decorator
def getConnForNode(self, node):
"""Return a locked connection object to a given node
If no connection exists, create a new one"""
if not node.isRunning():
return None
uuid = node.getUUID()
self.connection_lock_acquire()
try:
try:
# Already connected to node
return self.connection_dict[uuid]
except KeyError:
if len(self.connection_dict) > self.max_pool_size:
# must drop some unused connections
self._dropConnections()
                # Create a new connection to the node
                conn = self._initNodeConnection(node)
                if conn is not None:
                    self.connection_dict[uuid] = conn
                return conn
finally:
self.connection_lock_release()
@profiler_decorator
def removeConnection(self, node):
"""Explicitly remove connection when a node is broken."""
self.connection_dict.pop(node.getUUID(), None)
def flush(self):
"""Remove all connections"""
self.connection_dict.clear()
# neo/lib/__init__.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib.python
import logging as logging_std
FMT = ('%(asctime)s %(levelname)-9s %(name)-10s'
' [%(module)14s:%(lineno)3d] \n%(message)s')
class Formatter(logging_std.Formatter):
def formatTime(self, record, datefmt=None):
return logging_std.Formatter.formatTime(self, record,
'%Y-%m-%d %H:%M:%S') + '.%04d' % (record.msecs * 10)
def format(self, record):
lines = iter(logging_std.Formatter.format(self, record).splitlines())
prefix = lines.next()
return '\n'.join(prefix + line for line in lines)
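The `format` override above repeats the first formatted line (the record header produced by FMT, which ends with a newline before the message) in front of every message line. A tiny standalone sketch of that trick, written with the portable `next()` builtin:

```python
def prefix_lines(formatted):
    # The first line is the record header; prepend it to every
    # following line of the (possibly multi-line) message.
    lines = iter(formatted.splitlines())
    prefix = next(lines)
    return '\n'.join(prefix + line for line in lines)

out = prefix_lines('HDR \nline1\nline2')
# out == 'HDR line1\nHDR line2'
```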
def setupLog(name='NEO', filename=None, verbose=False):
global logging
if verbose:
level = logging_std.DEBUG
else:
level = logging_std.INFO
if logging is not None:
for handler in logging.handlers:
handler.acquire()
try:
handler.close()
finally:
handler.release()
del logging.manager.loggerDict[logging.name]
logging = logging_std.getLogger(name)
for handler in logging.handlers[:]:
logging.removeHandler(handler)
if filename is None:
handler = logging_std.StreamHandler()
else:
handler = logging_std.FileHandler(filename)
handler.setFormatter(Formatter(FMT))
logging.setLevel(level)
logging.addHandler(handler)
logging.propagate = 0
# Create default logger
logging = None
setupLog()
# neo/lib/attributeTracker.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
ATTRIBUTE_TRACKER_ENABLED = False
from neo.lib.locking import LockUser
"""
Usage example:
from neo import attributeTracker
class Foo(object):
...
def assertBar(self, expected_value):
if self.bar_attr != expected_value:
attributeTracker.whoSet(self, 'bar_attr')
attributeTracker.track(Foo)
"""
MODIFICATION_CONTAINER_ID = '_attribute_tracker_dict'
def tracker_setattr(self, attr, value, setattr):
modification_container = getattr(self, MODIFICATION_CONTAINER_ID, None)
if modification_container is None:
modification_container = {}
setattr(self, MODIFICATION_CONTAINER_ID, modification_container)
modification_container[attr] = LockUser()
setattr(self, attr, value)
if ATTRIBUTE_TRACKER_ENABLED:
def track(klass):
original_setattr = klass.__setattr__
def klass_tracker_setattr(self, attr, value):
tracker_setattr(self, attr, value, original_setattr)
klass.__setattr__ = klass_tracker_setattr
else:
def track(klass):
pass
def whoSet(instance, attr):
result = getattr(instance, MODIFICATION_CONTAINER_ID, None)
if result is not None:
result = result.get(attr)
if result is not None:
result = result.formatStack()
return result
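A minimal, runnable sketch of the tracking technique used in this module, with a traceback snapshot standing in for the real `LockUser` object (the `TRACKER` constant mirrors `MODIFICATION_CONTAINER_ID` above):

```python
import traceback

# Stand-in for MODIFICATION_CONTAINER_ID; a stack snapshot replaces
# the LockUser object stored by the real module.
TRACKER = '_attribute_tracker_dict'

def track(klass):
    original_setattr = klass.__setattr__
    def tracking_setattr(self, attr, value):
        # Record a stack snapshot for every assignment; go through
        # __dict__ directly to avoid re-entering this wrapper.
        container = self.__dict__.setdefault(TRACKER, {})
        container[attr] = traceback.format_stack()
        original_setattr(self, attr, value)
    klass.__setattr__ = tracking_setattr

class Foo(object):
    pass

track(Foo)
foo = Foo()
foo.bar = 1
# getattr(foo, TRACKER) now maps 'bar' to the stack that set it
```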
# neo/lib/bootstrap.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from time import sleep
from neo.lib.handler import EventHandler
from neo.lib.protocol import Packets
from neo.lib.util import dump
from neo.lib.connection import ClientConnection
NO_SERVER = ('0.0.0.0', 0)
class BootstrapManager(EventHandler):
"""
    Manage the bootstrap stage: look up the primary master, then connect to it
"""
def __init__(self, app, name, node_type, uuid=None, server=NO_SERVER):
"""
        Manage the bootstrap stage of a non-master node: look up the
        primary master node, connect to it, then return when the master
        node is ready.
"""
EventHandler.__init__(self, app)
self.primary = None
self.server = server
self.node_type = node_type
self.uuid = uuid
self.name = name
self.num_replicas = None
self.num_partitions = None
self.current = None
def connectionCompleted(self, conn):
"""
Triggered when the network connection is successful.
Now ask who's the primary.
"""
EventHandler.connectionCompleted(self, conn)
self.current.setRunning()
conn.ask(Packets.AskPrimary())
def connectionFailed(self, conn):
"""
Triggered when the network connection failed.
Restart bootstrap.
"""
EventHandler.connectionFailed(self, conn)
self.current = None
def connectionLost(self, conn, new_state):
"""
Triggered when an established network connection is lost.
Restart bootstrap.
"""
self.current.setTemporarilyDown()
self.current = None
def notReady(self, conn, message):
"""
        The primary master sends this message when it is not yet ready to
handle the client node.
Close connection and restart.
"""
conn.close()
def answerPrimary(self, conn, primary_uuid, known_master_list):
"""
        A master answered who the primary is. If it is another node, connect
        to it. If the answering node is itself the primary, the lookup
        succeeded: ask for identification.
"""
nm = self.app.nm
# Register new master nodes.
for address, uuid in known_master_list:
node = nm.getByAddress(address)
if node is None:
node = nm.createMaster(address=address)
node.setUUID(uuid)
self.primary = nm.getByUUID(primary_uuid)
if self.primary is None or self.current is not self.primary:
            # three cases here:
            # - something went wrong (unknown UUID)
            # - this master doesn't know who the primary is
            # - we got the primary's uuid, so close and reconnect to it
conn.close()
return
neo.lib.logging.info('connected to a primary master node')
conn.ask(Packets.RequestIdentification(self.node_type,
self.uuid, self.server, self.name))
def acceptIdentification(self, conn, node_type,
uuid, num_partitions, num_replicas, your_uuid):
"""
The primary master has accepted the node.
"""
self.num_partitions = num_partitions
self.num_replicas = num_replicas
if self.uuid != your_uuid:
            # got a new uuid from the primary master
            self.uuid = your_uuid
            neo.lib.logging.info('Got a new UUID: %s', dump(self.uuid))
conn.setUUID(uuid)
def getPrimaryConnection(self, connector_handler):
"""
Primary lookup/connection process.
Returns when the connection is made.
"""
neo.lib.logging.info('connecting to a primary master node')
em, nm = self.app.em, self.app.nm
index = 0
self.current = nm.getMasterList()[0]
conn = None
# retry until identified to the primary
while self.primary is None or conn.getUUID() != self.primary.getUUID():
if self.current is None:
# conn closed
conn = None
# select a master
master_list = nm.getMasterList()
index = (index + 1) % len(master_list)
self.current = master_list[index]
if index == 0:
# tried all known masters, sleep a bit
sleep(1)
if conn is None:
# open the connection
addr = self.current.getAddress()
conn = ClientConnection(em, self, addr, connector_handler())
# still processing
em.poll(1)
node = nm.getByUUID(conn.getUUID())
return (node, conn, self.uuid, self.num_partitions, self.num_replicas)
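The master-rotation step inside the loop above can be isolated as follows. This is a sketch, not the real code: node objects are replaced by plain strings, and the function only computes the next candidate and whether a full round has been tried.

```python
def next_master(master_list, index):
    # Advance to the next master, wrapping around; the caller should
    # sleep one round when the whole list has been tried (index == 0).
    index = (index + 1) % len(master_list)
    return master_list[index], index, index == 0

masters = ['m0', 'm1', 'm2']
assert next_master(masters, 0) == ('m1', 1, False)
assert next_master(masters, 2) == ('m0', 0, True)   # wrapped: sleep
```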
# neo/lib/config.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from ConfigParser import SafeConfigParser
from neo.lib import util
from neo.lib.util import parseNodeAddress
class ConfigurationManager(object):
"""
Configuration manager that load options from a configuration file and
command line arguments
"""
def __init__(self, defaults, config_file, section, argument_list):
self.defaults = defaults
self.argument_list = argument_list
self.parser = None
if config_file is not None:
self.parser = SafeConfigParser(defaults)
self.parser.read(config_file)
self.section = section
def __get(self, key, optional=False):
value = self.argument_list.get(key)
if value is None:
if self.parser is None:
value = self.defaults.get(key)
else:
value = self.parser.get(self.section, key)
if value is None and not optional:
            raise RuntimeError("Option '%s' is undefined" % (key, ))
return value
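The lookup precedence of `__get` (command-line argument first, then the configuration file if one was parsed, otherwise the hard-coded defaults) can be sketched as a standalone function. Names are illustrative, and a plain dict stands in for the real `SafeConfigParser` section lookup:

```python
def get_option(key, argument_list, parser_options, defaults,
               optional=False):
    value = argument_list.get(key)          # command line wins
    if value is None:
        if parser_options is None:          # no config file given
            value = defaults.get(key)
        else:                               # else ask the file section
            value = parser_options.get(key)
    if value is None and not optional:
        raise RuntimeError("Option '%s' is undefined" % key)
    return value

assert get_option('bind', {'bind': 'cli'}, {'bind': 'file'}, {}) == 'cli'
assert get_option('bind', {}, {'bind': 'file'}, {}) == 'file'
assert get_option('bind', {}, None, {'bind': 'default'}) == 'default'
```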
def getMasters(self):
""" Get the master node list except itself """
masters = self.__get('masters')
# lod master node list except itself
return util.parseMasterList(masters, except_node=self.getBind())
def getBind(self):
""" Get the address to bind to """
bind = self.__get('bind')
return parseNodeAddress(bind, 0)
def getDatabase(self):
return self.__get('database')
def getAdapter(self):
return self.__get('adapter')
def getCluster(self):
cluster = self.__get('cluster')
assert cluster != '', "Cluster name must be non-empty"
return cluster
def getName(self):
return self.__get('name')
def getReplicas(self):
return int(self.__get('replicas'))
def getPartitions(self):
return int(self.__get('partitions'))
def getReset(self):
# only from command line
return self.argument_list.get('reset', False)
def getUUID(self):
# only from command line
return util.bin(self.argument_list.get('uuid', None))
# neo/lib/connection.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from functools import wraps
from time import time
import neo.lib
from neo.lib.locking import RLock
from neo.lib.protocol import PacketMalformedError, Packets, ParserState
from neo.lib.connector import ConnectorException, ConnectorTryAgainException, \
ConnectorInProgressException, ConnectorConnectionRefusedException, \
ConnectorConnectionClosedException
from neo.lib.util import dump
from neo.lib.logger import PACKET_LOGGER
from neo.lib import attributeTracker
from neo.lib.util import ReadBuffer
from neo.lib.profiling import profiler_decorator
CRITICAL_TIMEOUT = 30
class ConnectionClosed(Exception):
pass
def not_closed(func):
def decorator(self, *args, **kw):
if self.connector is None:
raise ConnectorConnectionClosedException
return func(self, *args, **kw)
return wraps(func)(decorator)
def lockCheckWrapper(func):
"""
This function is to be used as a wrapper around
MT(Client|Server)Connection class methods.
    It uses a private RLock method ("_is_owned"), so it might stop working
    without notice (sadly, RLock does not offer any public "acquired"
    method, but this one will do, as it checks that the current thread
    holds the lock).
    It requires the monitored class to have an RLock instance in its
    self._lock property.
"""
def wrapper(self, *args, **kw):
if not self._lock._is_owned():
import traceback
neo.lib.logging.warning('%s called on %s instance without being ' \
'locked. Stack:\n%s', func.func_code.co_name,
self.__class__.__name__, ''.join(traceback.format_stack()))
# Call anyway
return func(self, *args, **kw)
return wraps(func)(wrapper)
class OnTimeout(object):
"""
Simple helper class for on_timeout parameter used in HandlerSwitcher
class.
"""
def __init__(self, func, *args, **kw):
self.func = func
self.args = args
self.kw = kw
def __call__(self, conn, msg_id):
return self.func(conn, msg_id, *self.args, **self.kw)
class HandlerSwitcher(object):
_next_timeout = None
_next_timeout_msg_id = None
_next_on_timeout = None
def __init__(self, handler):
# pending handlers and related requests
self._pending = [[{}, handler]]
self._is_handling = False
def clear(self):
handler = self._pending[0][1]
self._pending = [[{}, handler]]
def isPending(self):
return bool(self._pending[0][0])
def getHandler(self):
return self._pending[0][1]
def getLastHandler(self):
""" Return the last (may be unapplied) handler registered """
return self._pending[-1][1]
@profiler_decorator
def emit(self, request, timeout, on_timeout):
# register the request in the current handler
_pending = self._pending
if self._is_handling:
# If this is called while handling a packet, the response is to
            # be expected for the current handler...
(request_dict, _) = _pending[0]
else:
            # ...otherwise, queue it for the latest handler
assert len(_pending) == 1 or _pending[0][0]
(request_dict, _) = _pending[-1]
msg_id = request.getId()
answer_class = request.getAnswerClass()
assert answer_class is not None, "Not a request"
assert msg_id not in request_dict, "Packet id already expected"
next_timeout = self._next_timeout
if next_timeout is None or timeout < next_timeout:
self._next_timeout = timeout
self._next_timeout_msg_id = msg_id
self._next_on_timeout = on_timeout
request_dict[msg_id] = (answer_class, timeout, on_timeout)
def getNextTimeout(self):
return self._next_timeout
def timeout(self, connection):
msg_id = self._next_timeout_msg_id
if self._next_on_timeout is not None:
self._next_on_timeout(connection, msg_id)
if self._next_timeout_msg_id != msg_id:
# on_timeout sent a packet with a smaller timeout
# so keep the connection open
return
        # Notify that a timeout occurred
return msg_id
def handle(self, connection, packet):
assert not self._is_handling
self._is_handling = True
try:
self._handle(connection, packet)
finally:
self._is_handling = False
@profiler_decorator
def _handle(self, connection, packet):
assert len(self._pending) == 1 or self._pending[0][0]
PACKET_LOGGER.dispatch(connection, packet, False)
if connection.isClosed() and packet.ignoreOnClosedConnection():
neo.lib.logging.debug('Ignoring packet %r on closed connection %r',
packet, connection)
return
msg_id = packet.getId()
(request_dict, handler) = self._pending[0]
# notifications are not expected
if not packet.isResponse():
handler.packetReceived(connection, packet)
return
        # check the expected answer class
(klass, timeout, _) = request_dict.pop(msg_id, (None, None, None))
if klass and isinstance(packet, klass) or packet.isError():
handler.packetReceived(connection, packet)
else:
neo.lib.logging.error(
'Unexpected answer %r in %r', packet, connection)
if not connection.isClosed():
notification = Packets.Notify('Unexpected answer: %r' % packet)
connection.notify(notification)
connection.abort()
# handler.peerBroken(connection)
# apply a pending handler if no more answers are pending
while len(self._pending) > 1 and not self._pending[0][0]:
del self._pending[0]
neo.lib.logging.debug(
'Apply handler %r on %r', self._pending[0][1],
connection)
if msg_id == self._next_timeout_msg_id:
self._updateNextTimeout()
def _updateNextTimeout(self):
# Find next timeout and its msg_id
next_timeout = None
for pending in self._pending:
for msg_id, (_, timeout, on_timeout) in pending[0].iteritems():
if not next_timeout or timeout < next_timeout[0]:
next_timeout = timeout, msg_id, on_timeout
self._next_timeout, self._next_timeout_msg_id, self._next_on_timeout = \
next_timeout or (None, None, None)
@profiler_decorator
def setHandler(self, handler):
can_apply = len(self._pending) == 1 and not self._pending[0][0]
if can_apply:
# nothing is pending, change immediately
self._pending[0][1] = handler
else:
# put the next handler in queue
self._pending.append([{}, handler])
return can_apply
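The interplay between `emit`, `_handle` and `setHandler` boils down to a small state machine: a new handler is applied immediately only when no answer is pending, otherwise it is queued and takes over once the current handler's requests are all answered. A toy model of that rule (strings stand in for handler objects, and the answer-class/timeout bookkeeping is omitted):

```python
class ToySwitcher(object):
    def __init__(self, handler):
        self._pending = [[{}, handler]]     # [request_dict, handler]

    def emit(self, msg_id):
        # Register an expected answer under the latest handler.
        self._pending[-1][0][msg_id] = True

    def set_handler(self, handler):
        if len(self._pending) == 1 and not self._pending[0][0]:
            self._pending[0][1] = handler   # nothing pending: apply now
            return True
        self._pending.append([{}, handler]) # queue for later
        return False

    def answer(self, msg_id):
        self._pending[0][0].pop(msg_id)
        # Apply queued handlers once no answers remain pending.
        while len(self._pending) > 1 and not self._pending[0][0]:
            del self._pending[0]

    def handler(self):
        return self._pending[0][1]

s = ToySwitcher('h1')
s.emit(1)                        # an answer is now expected under h1
assert not s.set_handler('h2')   # delayed
s.answer(1)                      # last pending answer arrives
assert s.handler() == 'h2'       # h2 applied
```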
class BaseConnection(object):
"""A base connection
About timeouts:
Timeout are mainly per-connection instead of per-packet.
The idea is that most of time, packets are received and processed
    sequentially, so if a peer takes a long time to process a packet,
following packets would just be enqueued.
What really matters is that the peer makes progress in its work.
As long as we receive an answer, we consider it's still alive and
it may just have started to process the following request. So we reset
timeouts.
There is anyway nothing more we could do, because processing of a packet
may be delayed in a very unpredictable way depending of previously
received packets on peer side.
    Even we may be slow to receive a packet. We must not time out on
an answer that is already in our incoming buffer (read_buf or _queue).
Timeouts in HandlerSwitcher are only there to prioritize some packets.
"""
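The timeout rule described in the docstring above can be sketched as a pure function. The queue check and the elapsed-time comparison mirror `updateTimeout`/`checkTimeout` below, but the function and its arguments are illustrative only:

```python
KEEP_ALIVE = 60  # same default as the class attribute below

def is_timed_out(now, base_timeout, pending_timeout, queue):
    # No remote activity recorded yet, or unprocessed packets are
    # already waiting locally: never time out.
    if base_timeout is None or queue:
        return False
    return (pending_timeout or KEEP_ALIVE) <= now - base_timeout

assert not is_timed_out(100, 90, 30, queue=['packet'])  # data queued
assert not is_timed_out(100, 90, 30, queue=[])          # only 10s elapsed
assert is_timed_out(130, 90, 30, queue=[])              # 40s >= 30s
```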
KEEP_ALIVE = 60
_base_timeout = None
def __init__(self, event_manager, handler, connector, addr=None):
assert connector is not None, "Need a low-level connector"
self.em = event_manager
self.connector = connector
self.addr = addr
self._handlers = HandlerSwitcher(handler)
event_manager.register(self)
def isPending(self):
return self._handlers.isPending()
def updateTimeout(self, t=None):
if not self._queue:
if t:
self._base_timeout = t
self._timeout = self._handlers.getNextTimeout() or self.KEEP_ALIVE
def checkTimeout(self, t):
# first make sure we don't timeout on answers we already received
if self._base_timeout and not self._queue:
timeout = t - self._base_timeout
if self._timeout <= timeout:
handlers = self._handlers
if handlers.isPending():
msg_id = handlers.timeout(self)
if msg_id is None:
self._base_timeout = t
else:
neo.lib.logging.info('timeout for #0x%08x with %r',
msg_id, self)
self.close()
else:
self.idle()
def lock(self):
return 1
def unlock(self):
return None
def getConnector(self):
return self.connector
def getAddress(self):
return self.addr
def readable(self):
raise NotImplementedError
def writable(self):
raise NotImplementedError
def close(self):
"""Close the connection."""
if self.connector is not None:
em = self.em
em.removeReader(self)
em.removeWriter(self)
em.unregister(self)
self.connector.shutdown()
self.connector.close()
self.connector = None
def __repr__(self):
address = self.addr and '%s:%d' % self.addr or '?'
return '<%s(uuid=%s, address=%s, closed=%s, handler=%s) at %x>' % (
self.__class__.__name__,
dump(self.getUUID()),
address,
int(self.isClosed()),
self.getHandler(),
id(self),
)
__del__ = close
def getHandler(self):
return self._handlers.getHandler()
def setHandler(self, handler):
if self._handlers.setHandler(handler):
neo.lib.logging.debug('Set handler %r on %r', handler, self)
else:
neo.lib.logging.debug('Delay handler %r on %r', handler, self)
def getEventManager(self):
return self.em
def getUUID(self):
return None
def isClosed(self):
return self.connector is None
def isAborted(self):
return False
def isListening(self):
return False
def isServer(self):
return False
def isClient(self):
return False
def hasPendingMessages(self):
return False
def whoSetConnector(self):
"""
Debugging method: call this method to know who set the current
connector value.
"""
return attributeTracker.whoSet(self, 'connector')
def idle(self):
pass
attributeTracker.track(BaseConnection)
class ListeningConnection(BaseConnection):
"""A listen connection."""
def __init__(self, event_manager, handler, addr, connector, **kw):
neo.lib.logging.debug('listening to %s:%d', *addr)
BaseConnection.__init__(self, event_manager, handler,
addr=addr, connector=connector)
self.connector.makeListeningConnection(addr)
self.em.addReader(self)
def readable(self):
try:
new_s, addr = self.connector.getNewConnection()
neo.lib.logging.debug('accepted a connection from %s:%d', *addr)
handler = self.getHandler()
new_conn = ServerConnection(self.getEventManager(), handler,
connector=new_s, addr=addr)
handler.connectionAccepted(new_conn)
except ConnectorTryAgainException:
pass
def getAddress(self):
return self.connector.getAddress()
def writable(self):
return False
def isListening(self):
return True
class Connection(BaseConnection):
"""A connection."""
connecting = False
def __init__(self, event_manager, *args, **kw):
BaseConnection.__init__(self, event_manager, *args, **kw)
self.read_buf = ReadBuffer()
self.write_buf = []
self.cur_id = 0
self.peer_id = 0
self.aborted = False
self.uuid = None
self._queue = []
self._on_close = None
self._parser_state = ParserState()
event_manager.addReader(self)
def setOnClose(self, callback):
assert self._on_close is None
self._on_close = callback
def isAborted(self):
return self.aborted
def getUUID(self):
return self.uuid
def setUUID(self, uuid):
self.uuid = uuid
def setPeerId(self, peer_id):
self.peer_id = peer_id
def getPeerId(self):
return self.peer_id
@profiler_decorator
def _getNextId(self):
next_id = self.cur_id
self.cur_id = (next_id + 1) & 0xffffffff
return next_id
def abort(self):
"""Abort dealing with this connection."""
neo.lib.logging.debug('aborting a connector for %r', self)
self.aborted = True
assert self.write_buf
def writable(self):
"""Called when self is writable."""
self._send()
if not self.write_buf and self.connector is not None:
if self.aborted:
self.close()
else:
self.em.removeWriter(self)
def readable(self):
"""Called when self is readable."""
self._recv()
self.analyse()
if self.aborted:
self.em.removeReader(self)
def analyse(self):
"""Analyse received data."""
while True:
# parse a packet
try:
packet = Packets.parse(self.read_buf, self._parser_state)
if packet is None:
break
except PacketMalformedError, msg:
self.getHandler()._packetMalformed(self, msg)
return
self._queue.append(packet)
def hasPendingMessages(self):
"""
Returns True if there are messages queued and awaiting processing.
"""
return len(self._queue) != 0
def process(self):
"""
Process a pending packet.
"""
# check out packet and process it with current handler
packet = self._queue.pop(0)
self._handlers.handle(self, packet)
self.updateTimeout()
def pending(self):
return self.connector is not None and self.write_buf
def close(self):
if self.connector is None:
assert self._on_close is None
assert not self.read_buf
assert not self.write_buf
assert not self.isPending()
return
        # process the network events with the last registered handler to
        # avoid issues where a node is lost with pending handlers, which
        # would create unexpected side effects.
neo.lib.logging.debug('closing a connector for %r', self)
handler = self._handlers.getLastHandler()
super(Connection, self).close()
if self._on_close is not None:
self._on_close()
self._on_close = None
del self.write_buf[:]
self.read_buf.clear()
self._handlers.clear()
if self.connecting:
handler.connectionFailed(self)
else:
handler.connectionClosed(self)
def _closure(self):
assert self.connector is not None, self.whoSetConnector()
self.close()
@profiler_decorator
def _recv(self):
"""Receive data from a connector."""
try:
data = self.connector.receive()
except ConnectorTryAgainException:
pass
except ConnectorConnectionRefusedException:
assert self.connecting
self._closure()
except ConnectorConnectionClosedException:
            # connection reset by peer; according to the man page, this
            # error should not occur, but in practice it does
neo.lib.logging.debug(
'Connection reset by peer: %r', self.connector)
self._closure()
except:
neo.lib.logging.debug(
'Unknown connection error: %r', self.connector)
self._closure()
# unhandled connector exception
raise
else:
if not data:
neo.lib.logging.debug(
'Connection %r closed in recv', self.connector)
self._closure()
return
self._base_timeout = time() # last known remote activity
self.read_buf.append(data)
@profiler_decorator
def _send(self):
"""Send data to a connector."""
if not self.write_buf:
return
msg = ''.join(self.write_buf)
try:
n = self.connector.send(msg)
except ConnectorTryAgainException:
pass
except ConnectorConnectionClosedException:
            # connection reset by peer
neo.lib.logging.debug(
'Connection reset by peer: %r', self.connector)
self._closure()
except:
neo.lib.logging.debug(
'Unknown connection error: %r', self.connector)
# unhandled connector exception
self._closure()
raise
else:
if not n:
neo.lib.logging.debug('Connection %r closed in send',
self.connector)
self._closure()
return
if n == len(msg):
del self.write_buf[:]
else:
self.write_buf = [msg[n:]]
@profiler_decorator
def _addPacket(self, packet):
"""Add a packet into the write buffer."""
if self.connector is None:
return
was_empty = not self.write_buf
self.write_buf.extend(packet.encode())
if was_empty:
# enable polling for writing.
self.em.addWriter(self)
PACKET_LOGGER.dispatch(self, packet, True)
@not_closed
def notify(self, packet):
        """ Send a packet with a new ID """
msg_id = self._getNextId()
packet.setId(msg_id)
self._addPacket(packet)
return msg_id
@profiler_decorator
@not_closed
def ask(self, packet, timeout=CRITICAL_TIMEOUT, on_timeout=None):
"""
Send a packet with a new ID and register the expectation of an answer
"""
msg_id = self._getNextId()
packet.setId(msg_id)
self._addPacket(packet)
handlers = self._handlers
t = not handlers.isPending() and time() or None
handlers.emit(packet, timeout, on_timeout)
self.updateTimeout(t)
return msg_id
@not_closed
def answer(self, packet, msg_id=None):
""" Answer to a packet by re-using its ID for the packet answer """
if msg_id is None:
msg_id = self.getPeerId()
packet.setId(msg_id)
assert packet.isResponse(), packet
self._addPacket(packet)
def idle(self):
self.ask(Packets.Ping())
class ClientConnection(Connection):
"""A connection from this node to a remote node."""
connecting = True
def __init__(self, event_manager, handler, addr, connector):
Connection.__init__(self, event_manager, handler, connector, addr)
handler.connectionStarted(self)
try:
try:
self.connector.makeClientConnection(addr)
except ConnectorInProgressException:
event_manager.addWriter(self)
else:
self.connecting = False
self.updateTimeout(time())
self.getHandler().connectionCompleted(self)
except ConnectorConnectionRefusedException:
self._closure()
except ConnectorException:
# unhandled connector exception
self._closure()
raise
def writable(self):
"""Called when self is writable."""
if self.connecting:
err = self.connector.getError()
if err:
self._closure()
return
else:
self.connecting = False
self.updateTimeout(time())
self.getHandler().connectionCompleted(self)
self.em.addReader(self)
else:
Connection.writable(self)
def isClient(self):
return True
class ServerConnection(Connection):
"""A connection from a remote node to this node."""
# Both server and client must check the connection, in case:
# - the remote crashed brutally (i.e. without closing TCP connections)
# - or packets sent by the remote are dropped (network failure)
    # Use a different timeout so that in normal conditions, the server never
    # has to ping the client. Otherwise, it would do so about half the time.
KEEP_ALIVE = Connection.KEEP_ALIVE + 5
def __init__(self, *args, **kw):
Connection.__init__(self, *args, **kw)
self.updateTimeout(time())
def isServer(self):
return True
class MTClientConnection(ClientConnection):
"""A Multithread-safe version of ClientConnection."""
def __init__(self, *args, **kwargs):
# _lock is only here for lock debugging purposes. Do not use.
self._lock = lock = RLock()
self.acquire = lock.acquire
self.release = lock.release
self.dispatcher = kwargs.pop('dispatcher')
self.dispatcher.needPollThread()
self.lock()
try:
super(MTClientConnection, self).__init__(*args, **kwargs)
finally:
self.unlock()
def lock(self, blocking = 1):
return self.acquire(blocking = blocking)
def unlock(self):
self.release()
@lockCheckWrapper
def writable(self, *args, **kw):
return super(MTClientConnection, self).writable(*args, **kw)
@lockCheckWrapper
def readable(self, *args, **kw):
return super(MTClientConnection, self).readable(*args, **kw)
@lockCheckWrapper
def analyse(self, *args, **kw):
return super(MTClientConnection, self).analyse(*args, **kw)
def notify(self, *args, **kw):
self.lock()
try:
return super(MTClientConnection, self).notify(*args, **kw)
finally:
self.unlock()
@profiler_decorator
def ask(self, packet, timeout=CRITICAL_TIMEOUT, on_timeout=None,
queue=None):
self.lock()
try:
if self.isClosed():
raise ConnectionClosed
# XXX: Here, we duplicate Connection.ask because we need to call
# self.dispatcher.register after setId is called and before
# _addPacket is called.
msg_id = self._getNextId()
packet.setId(msg_id)
if queue is None:
if type(packet) is not Packets.Ping:
raise TypeError, 'Only Ping packet can be asked ' \
'without a queue, got a %r.' % (packet, )
else:
self.dispatcher.register(self, msg_id, queue)
self._addPacket(packet)
handlers = self._handlers
t = not handlers.isPending() and time() or None
handlers.emit(packet, timeout, on_timeout)
self.updateTimeout(t)
return msg_id
finally:
self.unlock()
@lockCheckWrapper
def answer(self, *args, **kw):
return super(MTClientConnection, self).answer(*args, **kw)
@lockCheckWrapper
def checkTimeout(self, *args, **kw):
return super(MTClientConnection, self).checkTimeout(*args, **kw)
def close(self):
self.lock()
try:
super(MTClientConnection, self).close()
finally:
self.release()
@lockCheckWrapper
def process(self, *args, **kw):
return super(MTClientConnection, self).process(*args, **kw)
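The class above serializes every public method behind one reentrant lock so a locked method may safely call another locked method of the same object. A minimal standalone sketch of that pattern (class and method names here are illustrative, not part of NEO):

```python
import threading

# Minimal sketch of the MTClientConnection locking pattern: a reentrant
# lock serializes every public method, so a method may call another locked
# method of the same object without deadlocking.
class LockedCounter(object):
    def __init__(self):
        self._lock = threading.RLock()
        self.value = 0

    def increment(self):
        self._lock.acquire()
        try:
            self.value += 1
            return self.value
        finally:
            self._lock.release()

    def increment_twice(self):
        # Reentrant: acquiring the lock we already hold is allowed.
        self._lock.acquire()
        try:
            self.increment()
            return self.increment()
        finally:
            self._lock.release()
```

A plain `threading.Lock` would deadlock in `increment_twice`, which is why the original uses `RLock`.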
# neo/lib/connector.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import socket
import errno
# Global connector registry.
# Filled by calling registerConnectorHandler.
# Read by calling getConnectorHandler.
connector_registry = {}
DEFAULT_CONNECTOR = 'SocketConnectorIPv4'
def registerConnectorHandler(connector_handler):
connector_registry[connector_handler.__name__] = connector_handler
def getConnectorHandler(connector=None):
if connector is None:
connector = DEFAULT_CONNECTOR
if isinstance(connector, basestring):
connector_handler = connector_registry.get(connector)
else:
        # Allow a handler class to be provided directly, without requiring
        # it to be registered first.
connector_handler = connector
return connector_handler
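The registry above is a plain name-to-class dict. A standalone sketch of the same lookup rules (names here are hypothetical, not part of NEO's API): strings are resolved through the registry, classes pass through unchanged, and `None` falls back to a default.

```python
# Hypothetical miniature of the connector registry pattern above.
registry = {}

def register_handler(handler_class):
    # Key by class name, as registerConnectorHandler does.
    registry[handler_class.__name__] = handler_class

def get_handler(handler=None, default='TCPHandler'):
    if handler is None:
        handler = default
    if isinstance(handler, str):
        # Name lookup; returns None for unknown names.
        return registry.get(handler)
    # A class was passed directly: use it as-is, no registration needed.
    return handler

class TCPHandler(object):
    pass

register_handler(TCPHandler)
```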
class SocketConnector:
""" This class is a wrapper for a socket """
is_listening = False
remote_addr = None
is_closed = None
def __init__(self, s=None, accepted_from=None):
self.accepted_from = accepted_from
if accepted_from is not None:
self.remote_addr = accepted_from
self.is_listening = False
self.is_closed = False
if s is None:
self.socket = socket.socket(self.af_type, socket.SOCK_STREAM)
else:
self.socket = s
self.socket_fd = self.socket.fileno()
# always use non-blocking sockets
self.socket.setblocking(0)
# disable Nagle algorithm to reduce latency
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
def makeClientConnection(self, addr):
self.is_closed = False
self.remote_addr = addr
try:
self.socket.connect(addr)
except socket.error, (err, errmsg):
if err == errno.EINPROGRESS:
raise ConnectorInProgressException
if err == errno.ECONNREFUSED:
raise ConnectorConnectionRefusedException
raise ConnectorException, 'makeClientConnection to %s failed:' \
' %s:%s' % (addr, err, errmsg)
def makeListeningConnection(self, addr):
self.is_closed = False
self.is_listening = True
try:
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.socket.bind(addr)
self.socket.listen(5)
except socket.error, (err, errmsg):
self.socket.close()
raise ConnectorException, 'makeListeningConnection on %s failed:' \
' %s:%s' % (addr, err, errmsg)
def getError(self):
return self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
def getAddress(self):
raise NotImplementedError
def getDescriptor(self):
        # This descriptor must only be used by the event manager, which
        # guarantees its uniqueness only while the connector is open and
        # registered in epoll.
return self.socket_fd
def getNewConnection(self):
try:
(new_s, addr) = self._accept()
new_s = self.__class__(new_s, accepted_from=addr)
return (new_s, addr)
except socket.error, (err, errmsg):
if err == errno.EAGAIN:
raise ConnectorTryAgainException
raise ConnectorException, 'getNewConnection failed: %s:%s' % \
(err, errmsg)
def shutdown(self):
# This may fail if the socket is not connected.
try:
self.socket.shutdown(socket.SHUT_RDWR)
except socket.error:
pass
def receive(self):
try:
return self.socket.recv(4096)
except socket.error, (err, errmsg):
if err == errno.EAGAIN:
raise ConnectorTryAgainException
if err in (errno.ECONNREFUSED, errno.EHOSTUNREACH):
raise ConnectorConnectionRefusedException
if err in (errno.ECONNRESET, errno.ETIMEDOUT):
raise ConnectorConnectionClosedException
raise ConnectorException, 'receive failed: %s:%s' % (err, errmsg)
def send(self, msg):
try:
return self.socket.send(msg)
except socket.error, (err, errmsg):
if err == errno.EAGAIN:
raise ConnectorTryAgainException
if err in (errno.ECONNRESET, errno.ETIMEDOUT, errno.EPIPE):
raise ConnectorConnectionClosedException
raise ConnectorException, 'send failed: %s:%s' % (err, errmsg)
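The `receive`/`send` methods above translate OS-level `errno` values into the connector exception hierarchy. A self-contained sketch of that translation for the send path (exception names here are simplified stand-ins for the `Connector*Exception` classes below):

```python
import errno

# Simplified stand-ins for the connector exception hierarchy.
class ConnectorError(Exception): pass
class TryAgain(ConnectorError): pass
class ConnectionClosed(ConnectorError): pass

def translate_send_error(err):
    # Mirror of the errno dispatch in SocketConnector.send():
    # EAGAIN means "retry later", a reset/timeout/broken pipe means the
    # peer is gone, anything else is a generic connector failure.
    if err == errno.EAGAIN:
        return TryAgain
    if err in (errno.ECONNRESET, errno.ETIMEDOUT, errno.EPIPE):
        return ConnectionClosed
    return ConnectorError
```

Centralizing this mapping keeps the event loop free of raw `socket.error` handling.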
def close(self):
self.is_closed = True
return self.socket.close()
def __repr__(self):
if self.is_closed:
fileno = '?'
else:
fileno = self.socket_fd
result = '<%s at 0x%x fileno %s %s, ' % (self.__class__.__name__,
id(self), fileno, self.socket.getsockname())
if self.is_closed is None:
result += 'never opened'
else:
if self.is_closed:
result += 'closed '
else:
result += 'opened '
if self.is_listening:
result += 'listening'
else:
if self.accepted_from is None:
result += 'to'
else:
result += 'from'
result += ' %s' % (self.remote_addr, )
return result + '>'
def _accept(self):
raise NotImplementedError
class SocketConnectorIPv4(SocketConnector):
" Wrapper for IPv4 sockets"
af_type = socket.AF_INET
def _accept(self):
return self.socket.accept()
def getAddress(self):
return self.socket.getsockname()
class SocketConnectorIPv6(SocketConnector):
" Wrapper for IPv6 sockets"
af_type = socket.AF_INET6
def _accept(self):
new_s, addr = self.socket.accept()
addr = (addr[0], addr[1])
return (new_s, addr)
def getAddress(self):
addr = self.socket.getsockname()
addr = (addr[0], addr[1])
return addr
registerConnectorHandler(SocketConnectorIPv4)
registerConnectorHandler(SocketConnectorIPv6)
class ConnectorException(Exception):
pass
class ConnectorTryAgainException(ConnectorException):
pass
class ConnectorInProgressException(ConnectorException):
pass
class ConnectorConnectionClosedException(ConnectorException):
pass
class ConnectorConnectionRefusedException(ConnectorException):
pass
# neo/lib/debug.py
#
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import traceback
import signal
import ctypes
import imp
import os
import sys
from functools import wraps
import neo
# WARNING: This module should only be used for live application debugging.
# It deliberately allows code injection in a running NEO process.
# You don't want to enable it in a production environment. Really.
ENABLED = False
# How to include in python code:
# from neo.debug import register
# register()
#
# How to trigger it:
# Kill python process with:
# SIGUSR1:
# Loads (or reloads) neo.debug module.
# The content is up to you (it's only imported).
# SIGUSR2:
# Triggers a pdb prompt on process' controlling TTY.
libc = ctypes.cdll.LoadLibrary('libc.so.6')
errno = ctypes.c_int.in_dll(libc, 'errno')
def decorate(func):
def decorator(sig, frame):
# Save errno value, to restore it after sig handler returns
old_errno = errno.value
try:
func(sig, frame)
except:
# Prevent exception from exiting signal handler, so mistakes in
# "debug" module don't kill process.
traceback.print_exc()
errno.value = old_errno
return wraps(func)(decorator)
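Besides saving and restoring `errno` via ctypes, the decorator above makes signal handlers swallow their own exceptions, since an exception escaping a handler would propagate into whatever code the signal interrupted. A standalone sketch of that part alone (the errno juggling is omitted, and the handler name is illustrative):

```python
import functools
import traceback

def swallow_exceptions(func):
    # Wrap a signal handler so mistakes in debug code print a traceback
    # instead of killing the interrupted process.
    @functools.wraps(func)
    def wrapper(*args, **kw):
        try:
            return func(*args, **kw)
        except Exception:
            traceback.print_exc()
    return wrapper

@swallow_exceptions
def faulty_handler(sig, frame):
    raise ValueError('bug in debug code')
```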
@decorate
def debugHandler(sig, frame):
file, filename, (suffix, mode, type) = imp.find_module('debug',
neo.__path__)
imp.load_module('neo.debug', file, filename, (suffix, mode, type))
def getPdb():
try: # try ipython if available
import IPython
IPython.Shell.IPShell(argv=[])
return IPython.Debugger.Tracer().debugger
except ImportError:
import pdb
return pdb.Pdb()
_debugger = None
def winpdb(depth=0):
import rpdb2
depth += 1
if rpdb2.g_debugger is not None:
return rpdb2.setbreak(depth)
script = rpdb2.calc_frame_path(sys._getframe(depth))
pwd = str(os.getpid()) + os.getcwd().replace('/', '_').replace('-', '_')
pid = os.fork()
if pid:
try:
rpdb2.start_embedded_debugger(pwd, depth=depth)
finally:
os.waitpid(pid, 0)
else:
try:
os.execlp('python', 'python', '-c', """import os\nif not os.fork():
import rpdb2, winpdb
rpdb2_raw_input = rpdb2._raw_input
rpdb2._raw_input = lambda s: \
s == rpdb2.STR_PASSWORD_INPUT and %r or rpdb2_raw_input(s)
winpdb.g_ignored_warnings[winpdb.STR_EMBEDDED_WARNING] = True
winpdb.main()
""" % pwd, '-a', script)
finally:
os.abort()
@decorate
def pdbHandler(sig, frame):
try:
winpdb(2) # depth is 2, because of the decorator
except ImportError:
global _debugger
if _debugger is None:
_debugger = getPdb()
        _debugger.set_trace(frame)
def register(on_log=None):
if ENABLED:
signal.signal(signal.SIGUSR1, debugHandler)
signal.signal(signal.SIGUSR2, pdbHandler)
if on_log is not None:
            # Use 'kill -RTMIN <pid>' to trigger the on_log callback.
@decorate
def on_log_signal(signum, signal):
on_log()
signal.signal(signal.SIGRTMIN, on_log_signal)
# neo/lib/dispatcher.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from functools import wraps
from neo.lib.locking import Lock, Empty
from neo.lib.profiling import profiler_decorator
EMPTY = {}
NOBODY = []
class ForgottenPacket(object):
"""
Instances of this class will be pushed to queue when an expected answer
is being forgotten. Its purpose is similar to pushing "None" when
connection is closed, but the meaning is different.
"""
def __init__(self, msg_id):
self.msg_id = msg_id
def getId(self):
return self.msg_id
def giant_lock(func):
def wrapped(self, *args, **kw):
self.lock_acquire()
try:
return func(self, *args, **kw)
finally:
self.lock_release()
return wraps(func)(wrapped)
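The `giant_lock` decorator only assumes the instance exposes `lock_acquire`/`lock_release` attributes, bound in `__init__` below. A minimal usage sketch under that assumption (the `Table` class is illustrative, not part of NEO):

```python
import threading
from functools import wraps

def giant_lock(func):
    # Same shape as the decorator above: hold the instance lock for the
    # whole duration of the decorated method.
    def wrapped(self, *args, **kw):
        self.lock_acquire()
        try:
            return func(self, *args, **kw)
        finally:
            self.lock_release()
    return wraps(func)(wrapped)

class Table(object):
    def __init__(self):
        # Bind the bound methods once, as Dispatcher.__init__ does.
        lock = threading.Lock()
        self.lock_acquire = lock.acquire
        self.lock_release = lock.release
        self.data = {}

    @giant_lock
    def put(self, key, value):
        self.data[key] = value
        return len(self.data)
```

Binding `lock.acquire`/`lock.release` directly avoids an attribute lookup on every call, a micro-optimization this codebase uses consistently.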
class Dispatcher:
"""Register a packet, connection pair as expecting a response packet."""
def __init__(self, poll_thread=None):
self.message_table = {}
self.queue_dict = {}
lock = Lock()
self.lock_acquire = lock.acquire
self.lock_release = lock.release
self.poll_thread = poll_thread
@giant_lock
@profiler_decorator
def dispatch(self, conn, msg_id, packet):
"""
Retrieve register-time provided queue, and put conn and packet in it.
"""
queue = self.message_table.get(id(conn), EMPTY).pop(msg_id, None)
if queue is None:
return False
elif queue is NOBODY:
return True
self._decrefQueue(queue)
queue.put((conn, packet))
return True
def _decrefQueue(self, queue):
queue_id = id(queue)
queue_dict = self.queue_dict
if queue_dict[queue_id] == 1:
queue_dict.pop(queue_id)
else:
queue_dict[queue_id] -= 1
def _increfQueue(self, queue):
queue_id = id(queue)
queue_dict = self.queue_dict
try:
queue_dict[queue_id] += 1
except KeyError:
queue_dict[queue_id] = 1
def needPollThread(self):
self.poll_thread.start()
@giant_lock
@profiler_decorator
def register(self, conn, msg_id, queue):
"""Register an expectation for a reply."""
if self.poll_thread is not None:
self.needPollThread()
self.message_table.setdefault(id(conn), {})[msg_id] = queue
self._increfQueue(queue)
@profiler_decorator
def unregister(self, conn):
""" Unregister a connection and put fake packet in queues to unlock
threads excepting responses from that connection """
self.lock_acquire()
try:
message_table = self.message_table.pop(id(conn), EMPTY)
finally:
self.lock_release()
notified_set = set()
_decrefQueue = self._decrefQueue
for queue in message_table.itervalues():
if queue is NOBODY:
continue
queue_id = id(queue)
if queue_id not in notified_set:
queue.put((conn, None))
notified_set.add(queue_id)
_decrefQueue(queue)
@giant_lock
@profiler_decorator
def forget(self, conn, msg_id):
""" Forget about a specific message for a specific connection.
Actually makes it "expected by nobody", so we know we can ignore it,
and not detect it as an error. """
message_table = self.message_table[id(conn)]
queue = message_table[msg_id]
if queue is NOBODY:
raise KeyError, 'Already expected by NOBODY: %r, %r' % (
conn, msg_id)
queue.put((conn, ForgottenPacket(msg_id)))
self.queue_dict[id(queue)] -= 1
message_table[msg_id] = NOBODY
return queue
@giant_lock
@profiler_decorator
def forget_queue(self, queue, flush_queue=True):
"""
Forget all pending messages for given queue.
Actually makes them "expected by nobody", so we know we can ignore
        them, and not detect them as errors.
flush_queue (boolean, default=True)
All packets in queue get flushed.
"""
# XXX: expensive lookup: we iterate over the whole dict
found = 0
for message_table in self.message_table.itervalues():
for msg_id, t_queue in message_table.iteritems():
if queue is t_queue:
found += 1
message_table[msg_id] = NOBODY
refcount = self.queue_dict.pop(id(queue), 0)
if refcount != found:
            raise ValueError('We hit a refcount bug: %s uses of queue ' \
                'expected, %s found' % (refcount, found))
if flush_queue:
get = queue.get
while True:
try:
get(block=False)
except Empty:
break
@profiler_decorator
def registered(self, conn):
"""Check if a connection is registered into message table."""
return len(self.message_table.get(id(conn), EMPTY)) != 0
@giant_lock
@profiler_decorator
def pending(self, queue):
return not queue.empty() or self.queue_dict.get(id(queue), 0) > 0
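The core of the class above is the `register`/`dispatch` pair: a caller registers a queue under `(connection id, message id)` before sending, and the poll thread routes the matching answer back into that queue. A condensed standalone sketch of that flow (the class is illustrative and uses the Python 3 `queue` module; it omits locking, refcounting, and the `NOBODY` sentinel):

```python
from queue import Queue

class MiniDispatcher(object):
    def __init__(self):
        # {conn_id: {msg_id: queue}}, keyed like the original's id(conn).
        self.message_table = {}

    def register(self, conn_id, msg_id, queue):
        self.message_table.setdefault(conn_id, {})[msg_id] = queue

    def dispatch(self, conn_id, msg_id, packet):
        queue = self.message_table.get(conn_id, {}).pop(msg_id, None)
        if queue is None:
            return False  # nobody expects this answer
        queue.put((conn_id, packet))
        return True
```

One queue may wait on answers from several connections at once, which is why the original also refcounts queues in `queue_dict`.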
# neo/lib/epoll.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
r"""This is an epoll(4) interface available in Linux 2.6. This requires
ctypes ."""
from ctypes import cdll, Union, Structure, \
c_void_p, c_int, byref
try:
from ctypes import c_uint32, c_uint64
except ImportError:
from ctypes import c_uint, c_ulonglong
c_uint32 = c_uint
c_uint64 = c_ulonglong
from os import close
from errno import EINTR, EAGAIN
libc = cdll.LoadLibrary('libc.so.6')
epoll_create = libc.epoll_create
epoll_wait = libc.epoll_wait
epoll_ctl = libc.epoll_ctl
errno = c_int.in_dll(libc, 'errno')
EPOLLIN = 0x001
EPOLLPRI = 0x002
EPOLLOUT = 0x004
EPOLLRDNORM = 0x040
EPOLLRDBAND = 0x080
EPOLLWRNORM = 0x100
EPOLLWRBAND = 0x200
EPOLLMSG = 0x400
EPOLLERR = 0x008
EPOLLHUP = 0x010
EPOLLONESHOT = (1 << 30)
EPOLLET = (1 << 31)
EPOLL_CTL_ADD = 1
EPOLL_CTL_DEL = 2
EPOLL_CTL_MOD = 3
class EpollData(Union):
_fields_ = [("ptr", c_void_p),
("fd", c_int),
("u32", c_uint32),
("u64", c_uint64)]
class EpollEvent(Structure):
_fields_ = [("events", c_uint32),
("data", EpollData)]
_pack_ = 1
class Epoll(object):
efd = -1
def __init__(self):
self.efd = epoll_create(10)
if self.efd == -1:
raise OSError(errno.value, 'epoll_create failed')
self.maxevents = 1024 # XXX arbitrary
epoll_event_array = EpollEvent * self.maxevents
self.events = epoll_event_array()
def poll(self, timeout=1):
if timeout is None:
timeout = -1
else:
timeout *= 1000
timeout = int(timeout)
while True:
n = epoll_wait(self.efd, byref(self.events), self.maxevents,
timeout)
if n == -1:
e = errno.value
# XXX: Why 0 ? Maybe due to partial workaround in neo.lib.debug.
if e in (0, EINTR, EAGAIN):
continue
else:
raise OSError(e, 'epoll_wait failed')
else:
readable_fd_list = []
writable_fd_list = []
error_fd_list = []
for i in xrange(n):
ev = self.events[i]
fd = int(ev.data.fd)
if ev.events & EPOLLIN:
readable_fd_list.append(fd)
if ev.events & EPOLLOUT:
writable_fd_list.append(fd)
if ev.events & (EPOLLERR | EPOLLHUP):
error_fd_list.append(fd)
return readable_fd_list, writable_fd_list, error_fd_list
def register(self, fd):
ev = EpollEvent()
ev.data.fd = fd
ret = epoll_ctl(self.efd, EPOLL_CTL_ADD, fd, byref(ev))
if ret == -1:
raise OSError(errno.value, 'epoll_ctl failed')
def modify(self, fd, readable, writable):
ev = EpollEvent()
ev.data.fd = fd
events = 0
if readable:
events |= EPOLLIN
if writable:
events |= EPOLLOUT
ev.events = events
ret = epoll_ctl(self.efd, EPOLL_CTL_MOD, fd, byref(ev))
if ret == -1:
raise OSError(errno.value, 'epoll_ctl failed')
def unregister(self, fd):
ev = EpollEvent()
ret = epoll_ctl(self.efd, EPOLL_CTL_DEL, fd, byref(ev))
if ret == -1:
raise OSError(errno.value, 'epoll_ctl failed')
def __del__(self):
efd = self.efd
if efd >= 0:
del self.efd
close(efd)
close = __del__
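The heart of `Epoll.poll()` above is demultiplexing each event mask into readable, writable, and error descriptor lists. The same logic on plain integers, so it needs no epoll file descriptor (the function name is illustrative):

```python
# Event flag values as defined in the module above (and in <sys/epoll.h>).
EPOLLIN, EPOLLOUT, EPOLLERR, EPOLLHUP = 0x001, 0x004, 0x008, 0x010

def demultiplex(events):
    # events: iterable of (fd, event_mask) pairs, as epoll_wait reports.
    readable, writable, errors = [], [], []
    for fd, mask in events:
        # A single fd can appear in several lists, e.g. readable+writable.
        if mask & EPOLLIN:
            readable.append(fd)
        if mask & EPOLLOUT:
            writable.append(fd)
        if mask & (EPOLLERR | EPOLLHUP):
            errors.append(fd)
    return readable, writable, errors
```

Note that modern Python ships this whole wrapper as `select.epoll`; the ctypes version above predates its availability.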
# neo/lib/event.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from time import time
import neo.lib
from neo.lib.epoll import Epoll
from neo.lib.profiling import profiler_decorator
class EpollEventManager(object):
"""This class manages connections and events based on epoll(5)."""
def __init__(self):
self.connection_dict = {}
self.reader_set = set([])
self.writer_set = set([])
self.epoll = Epoll()
self._pending_processing = []
def close(self):
for c in self.connection_dict.values():
c.close()
del self.__dict__
def getConnectionList(self):
# XXX: use index
return [x for x in self.connection_dict.values() if not x.isAborted()]
def getClientList(self):
# XXX: use index
return [c for c in self.getConnectionList() if c.isClient()]
def getServerList(self):
# XXX: use index
return [c for c in self.getConnectionList() if c.isServer()]
def getConnectionListByUUID(self, uuid):
""" Return the connection associated to the UUID, None if the UUID is
None, invalid or not found"""
# XXX: use index
# XXX: consider remove UUID from connection and thus this method
if uuid is None:
return None
result = []
append = result.append
for conn in self.getConnectionList():
if conn.getUUID() == uuid:
append(conn)
return result
def register(self, conn):
fd = conn.getConnector().getDescriptor()
self.connection_dict[fd] = conn
self.epoll.register(fd)
def unregister(self, conn):
new_pending_processing = [x for x in self._pending_processing
if x is not conn]
        # Check that we removed at most one entry from
        # self._pending_processing.
assert len(new_pending_processing) > len(self._pending_processing) - 2
self._pending_processing = new_pending_processing
fd = conn.getConnector().getDescriptor()
self.epoll.unregister(fd)
del self.connection_dict[fd]
def _getPendingConnection(self):
if len(self._pending_processing):
result = self._pending_processing.pop(0)
else:
result = None
return result
def _addPendingConnection(self, conn):
pending_processing = self._pending_processing
if conn not in pending_processing:
pending_processing.append(conn)
def poll(self, timeout=1):
to_process = self._getPendingConnection()
if to_process is None:
# Fetch messages from polled file descriptors
self._poll(timeout=timeout)
# See if there is anything to process
to_process = self._getPendingConnection()
if to_process is not None:
to_process.lock()
try:
try:
# Process
to_process.process()
finally:
# ...and requeue if there are pending messages
if to_process.hasPendingMessages():
self._addPendingConnection(to_process)
finally:
to_process.unlock()
# Non-blocking call: as we handled a packet, we should just offer
# poll a chance to fetch & send already-available data, but it must
# not delay us.
self._poll(timeout=0)
def _poll(self, timeout=1):
rlist, wlist, elist = self.epoll.poll(timeout)
for fd in frozenset(rlist):
conn = self.connection_dict[fd]
conn.lock()
try:
conn.readable()
finally:
conn.unlock()
if conn.hasPendingMessages():
self._addPendingConnection(conn)
for fd in frozenset(wlist):
# This can fail, if a connection is closed in readable().
try:
conn = self.connection_dict[fd]
except KeyError:
pass
else:
conn.lock()
try:
conn.writable()
finally:
conn.unlock()
for fd in frozenset(elist):
# This can fail, if a connection is closed in previous calls to
# readable() or writable().
try:
conn = self.connection_dict[fd]
except KeyError:
pass
else:
conn.lock()
try:
conn.readable()
finally:
conn.unlock()
if conn.hasPendingMessages():
self._addPendingConnection(conn)
t = time()
for conn in self.connection_dict.values():
conn.lock()
try:
conn.checkTimeout(t)
finally:
conn.unlock()
def addReader(self, conn):
connector = conn.getConnector()
assert connector is not None, conn.whoSetConnector()
fd = connector.getDescriptor()
if fd not in self.reader_set:
self.reader_set.add(fd)
self.epoll.modify(fd, 1, fd in self.writer_set)
def removeReader(self, conn):
connector = conn.getConnector()
assert connector is not None, conn.whoSetConnector()
fd = connector.getDescriptor()
if fd in self.reader_set:
self.reader_set.remove(fd)
self.epoll.modify(fd, 0, fd in self.writer_set)
@profiler_decorator
def addWriter(self, conn):
connector = conn.getConnector()
assert connector is not None, conn.whoSetConnector()
fd = connector.getDescriptor()
if fd not in self.writer_set:
self.writer_set.add(fd)
self.epoll.modify(fd, fd in self.reader_set, 1)
def removeWriter(self, conn):
connector = conn.getConnector()
assert connector is not None, conn.whoSetConnector()
fd = connector.getDescriptor()
if fd in self.writer_set:
self.writer_set.remove(fd)
self.epoll.modify(fd, fd in self.reader_set, 0)
def log(self):
neo.lib.logging.info('Event Manager:')
neo.lib.logging.info(' Readers: %r', [x for x in self.reader_set])
neo.lib.logging.info(' Writers: %r', [x for x in self.writer_set])
neo.lib.logging.info(' Connections:')
pending_set = set(self._pending_processing)
for fd, conn in self.connection_dict.items():
neo.lib.logging.info(' %r: %r (pending=%r)', fd, conn,
conn in pending_set)
# Default to EpollEventManager.
EventManager = EpollEventManager
# neo/lib/exception.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
class NeoException(Exception):
pass
class ElectionFailure(NeoException):
pass
class PrimaryFailure(NeoException):
pass
class OperationFailure(NeoException):
pass
class DatabaseFailure(NeoException):
pass
class NodeNotReady(NeoException):
pass
# neo/lib/handler.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.protocol import NodeStates, ErrorCodes, Packets, Errors
from neo.lib.protocol import PacketMalformedError, UnexpectedPacketError, \
BrokenNodeDisallowedError, NotReadyError, ProtocolError
class EventHandler(object):
"""This class handles events."""
def __init__(self, app):
self.app = app
def __repr__(self):
return self.__class__.__name__
def __unexpectedPacket(self, conn, packet, message=None):
"""Handle an unexpected packet."""
if message is None:
message = 'unexpected packet type %s in %s' % (type(packet),
self.__class__.__name__)
else:
message = 'unexpected packet: %s in %s' % (message,
self.__class__.__name__)
neo.lib.logging.error(message)
conn.answer(Errors.ProtocolError(message))
conn.abort()
# self.peerBroken(conn)
def dispatch(self, conn, packet):
"""This is a helper method to handle various packet types."""
try:
try:
method = getattr(self, packet.handler_method_name)
except AttributeError:
raise UnexpectedPacketError('no handler found')
args = packet.decode() or ()
conn.setPeerId(packet.getId())
method(conn, *args)
except UnexpectedPacketError, e:
self.__unexpectedPacket(conn, packet, *e.args)
except PacketMalformedError:
neo.lib.logging.error('malformed packet from %r', conn)
conn.notify(Packets.Notify('Malformed packet: %r' % (packet, )))
conn.abort()
# self.peerBroken(conn)
except BrokenNodeDisallowedError:
conn.answer(Errors.BrokenNode('go away'))
conn.abort()
except NotReadyError, message:
if not message.args:
message = 'Retry Later'
message = str(message)
conn.answer(Errors.NotReady(message))
conn.abort()
except ProtocolError, message:
message = str(message)
conn.answer(Errors.ProtocolError(message))
conn.abort()
def checkClusterName(self, name):
        # Raise an exception if the given name does not match the current
        # cluster name.
if self.app.name != name:
neo.lib.logging.error('reject an alien cluster')
raise ProtocolError('invalid cluster name')
# Network level handlers
def packetReceived(self, conn, packet):
"""Called when a packet is received."""
self.dispatch(conn, packet)
def connectionStarted(self, conn):
"""Called when a connection is started."""
neo.lib.logging.debug('connection started for %r', conn)
def connectionCompleted(self, conn):
"""Called when a connection is completed."""
neo.lib.logging.debug('connection completed for %r (from %s:%u)',
conn, *conn.getConnector().getAddress())
def connectionFailed(self, conn):
"""Called when a connection failed."""
neo.lib.logging.debug('connection failed for %r', conn)
def connectionAccepted(self, conn):
"""Called when a connection is accepted."""
def connectionClosed(self, conn):
"""Called when a connection is closed by the peer."""
neo.lib.logging.debug('connection closed for %r', conn)
self.connectionLost(conn, NodeStates.TEMPORARILY_DOWN)
#def peerBroken(self, conn):
# """Called when a peer is broken."""
# neo.lib.logging.error('%r is broken', conn)
# # NodeStates.BROKEN
def connectionLost(self, conn, new_state):
""" this is a method to override in sub-handlers when there is no need
to make distinction from the kind event that closed the connection """
pass
# Packet handlers.
def ping(self, conn):
if not conn.isAborted():
conn.answer(Packets.Pong())
def pong(self, conn):
# Ignore PONG packets. The only purpose of ping/pong packets is
# to test/maintain underlying connection.
pass
def notify(self, conn, message):
neo.lib.logging.info('notification from %r: %s', conn, message)
def askBarrier(self, conn):
conn.answer(Packets.AnswerBarrier())
def answerBarrier(self, conn):
pass
# Error packet handlers.
def error(self, conn, code, message):
try:
getattr(self, Errors[code])(conn, message)
except (AttributeError, ValueError):
raise UnexpectedPacketError(message)
def protocolError(self, conn, message):
# the connection should have been closed by the remote peer
neo.lib.logging.error('protocol error: %s' % (message,))
def timeoutError(self, conn, message):
neo.lib.logging.error('timeout error: %s' % (message,))
def brokenNodeDisallowedError(self, conn, message):
raise RuntimeError, 'broken node disallowed error: %s' % (message,)
def alreadyPendingError(self, conn, message):
neo.lib.logging.error('already pending error: %s' % (message, ))
def ack(self, conn, message):
neo.lib.logging.debug("no error message : %s" % (message))
# neo/lib/locking.py
from threading import Lock as threading_Lock
from threading import RLock as threading_RLock
from threading import currentThread
from Queue import Queue as Queue_Queue
from Queue import Empty
"""
Verbose locking classes.
Python's threading module contains a simple logging mechanism, but:
- It is limited to the RLock class
- It is enabled instance by instance
- The choice to log or not is made at instantiation
- It does not emit any log before trying to acquire a lock
This file defines a VerboseLock class implementing the basic lock API and
logging in appropriate places with extensive details.
It can be globally toggled by changing the VERBOSE_LOCKING value.
There is no overhead at all when disabled (passthrough to the threading
classes).
"""
__all__ = ['Lock', 'RLock', 'Queue', 'Empty']
VERBOSE_LOCKING = False
import traceback
import sys
import os
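The "no overhead when disabled" promise from the docstring above boils down to a factory choosing between the plain threading class and a logging wrapper. A minimal sketch of that toggle (class and flag names are illustrative; the wrapper here just records events instead of writing to stderr):

```python
import threading

VERBOSE = False

class RecordingLock(object):
    # Toy stand-in for VerboseLock: wraps a real lock and records calls.
    def __init__(self):
        self.lock = threading.Lock()
        self.events = []

    def acquire(self, blocking=True):
        self.events.append('acquire')
        return self.lock.acquire(blocking)

    def release(self):
        self.events.append('release')
        return self.lock.release()

def make_lock(verbose=VERBOSE):
    # When verbose is off, callers get the bare threading.Lock:
    # zero wrapper overhead, exactly as the module docstring claims.
    return RecordingLock() if verbose else threading.Lock()
```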
class LockUser(object):
def __init__(self, level=0):
self.ident = currentThread().getName()
        # This class is instantiated from a place desiring to know what
        # called it.
# limit=1 would return execution position in this method
# limit=2 would return execution position in caller
# limit=3 returns execution position in caller's caller
        # An additional level value (should be positive only) can be used
        # when more intermediate calls are involved
self.stack = stack = traceback.extract_stack()[:-(2 + level)]
path, line_number, func_name, line = stack[-1]
        # Simplify the path: keep only the last 3 path elements. This is
        # enough for the current NEO directory structure.
path = os.path.join('...', *path.split(os.path.sep)[-3:])
self.caller = (path, line_number, func_name, line)
def __eq__(self, other):
return isinstance(other, self.__class__) and self.ident == other.ident
def __repr__(self):
return '%s@%s:%s %s' % (self.ident, self.caller[0], self.caller[1],
self.caller[3])
def formatStack(self):
return ''.join(traceback.format_list(self.stack))
class VerboseLockBase(object):
def __init__(self, reentrant=False, debug_lock=False):
self.reentrant = reentrant
self.debug_lock = debug_lock
self.owner = None
self.waiting = []
self._note('%s@%X created by %r', self.__class__.__name__, id(self),
LockUser(1))
def _note(self, fmt, *args):
sys.stderr.write(fmt % args + '\n')
sys.stderr.flush()
def _getOwner(self):
if self._locked():
owner = self.owner
else:
owner = None
return owner
def acquire(self, blocking=1):
me = LockUser()
owner = self._getOwner()
self._note('[%r]%s.acquire(%s) Waiting for lock. Owned by:%r ' \
'Waiting:%r', me, self, blocking, owner, self.waiting)
if (self.debug_lock and owner is not None) or \
(not self.reentrant and blocking and me == owner):
if me == owner:
self._note('[%r]%s.acquire(%s): Deadlock detected: ' \
' I already own this lock:%r', me, self, blocking, owner)
else:
self._note('[%r]%s.acquire(%s): debug lock triggered: %r',
me, self, blocking, owner)
self._note('Owner traceback:\n%s', owner.formatStack())
self._note('My traceback:\n%s', me.formatStack())
self.waiting.append(me)
try:
return self.lock.acquire(blocking)
finally:
self.owner = me
self.waiting.remove(me)
self._note('[%r]%s.acquire(%s) Lock granted. Waiting: %r',
me, self, blocking, self.waiting)
def release(self):
me = LockUser()
self._note('[%r]%s.release() Waiting: %r', me, self, self.waiting)
return self.lock.release()
def _locked(self):
raise NotImplementedError
def __repr__(self):
return '<%s@%X>' % (self.__class__.__name__, id(self))
class VerboseRLock(VerboseLockBase):
def __init__(self, verbose=None, debug_lock=False):
super(VerboseRLock, self).__init__(reentrant=True,
debug_lock=debug_lock)
self.lock = threading_RLock()
def _locked(self):
return self.lock._RLock__block.locked()
def _is_owned(self):
return self.lock._is_owned()
class VerboseLock(VerboseLockBase):
def __init__(self, verbose=None, debug_lock=False):
super(VerboseLock, self).__init__(debug_lock=debug_lock)
self.lock = threading_Lock()
def locked(self):
return self.lock.locked()
_locked = locked
class VerboseQueue(Queue_Queue):
def __init__(self, maxsize=0):
if maxsize <= 0:
self.put = self._verbose_put
Queue_Queue.__init__(self, maxsize=maxsize)
def _verbose_note(self, fmt, *args):
sys.stderr.write(fmt % args + '\n')
sys.stderr.flush()
def get(self, block=True, timeout=None):
note = self._verbose_note
me = '[%r]%s.get(block=%r, timeout=%r)' % (LockUser(), self, block, timeout)
note('%s waiting', me)
try:
result = Queue_Queue.get(self, block=block, timeout=timeout)
except Exception, exc:
note('%s got exception %r', me, exc)
raise
note('%s got item', me)
return result
def _verbose_put(self, item, block=True, timeout=None):
note = self._verbose_note
me = '[%r]%s.put(..., block=%r, timeout=%r)' % (LockUser(), self, block, timeout)
try:
Queue_Queue.put(self, item, block=block, timeout=timeout)
except Exception, exc:
note('%s got exception %r', me, exc)
raise
note('%s put item', me)
def __repr__(self):
return '<%s@%X>' % (self.__class__.__name__, id(self))
if VERBOSE_LOCKING:
Lock = VerboseLock
RLock = VerboseRLock
Queue = VerboseQueue
else:
Lock = threading_Lock
RLock = threading_RLock
Queue = Queue_Queue
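For illustration only, here is a minimal, self-contained sketch of the same pattern the module above uses: a thin wrapper that records lock traffic, plus a module-level toggle that falls back to the plain `threading` class when disabled, so the disabled case has zero overhead. `MiniVerboseLock` and its names are hypothetical, not part of NEO.

```python
import threading

VERBOSE_LOCKING = False  # module-level toggle, mirroring neo.lib.locking

class MiniVerboseLock:
    """Tiny sketch of the wrapper pattern: log around a real threading.Lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self.events = []  # stands in for the stderr notes of VerboseLockBase

    def acquire(self, blocking=True):
        self.events.append('acquire')
        return self._lock.acquire(blocking)

    def release(self):
        self.events.append('release')
        return self._lock.release()

# Passthrough when disabled: callers get the raw threading class directly.
Lock = MiniVerboseLock if VERBOSE_LOCKING else threading.Lock
```

The key design point is that the choice is made once at import time, so enabled and disabled code paths pay nothing for each other.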
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/lib/logger.py 0000664 0000000 0000000 00000005222 11634614701 0023570 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from base64 import b64encode
import neo
from neo.lib.protocol import PacketMalformedError
from neo.lib.util import dump
from neo.lib.handler import EventHandler
from neo.lib.profiling import profiler_decorator
LOGGER_ENABLED = False
class PacketLogger(object):
""" Logger at packet level (for debugging purpose) """
def __init__(self):
self.enable(LOGGER_ENABLED)
def enable(self, enabled):
self.dispatch = enabled and self._dispatch or (lambda *args, **kw: None)
def _dispatch(self, conn, packet, outgoing):
"""This is a helper method to handle various packet types."""
# default log message
uuid = dump(conn.getUUID())
ip, port = conn.getAddress()
packet_name = packet.__class__.__name__
neo.lib.logging.debug('#0x%04x %-30s %s %s (%s:%d) %s', packet.getId(),
packet_name, outgoing and '>' or '<', uuid, ip, port,
b64encode(packet._body[:96]))
# look for custom packet logger
logger = getattr(self, packet.handler_method_name, None)
if logger is None:
return
# enhanced log
try:
args = packet.decode() or ()
except PacketMalformedError:
neo.lib.logging.warning("Can't decode packet for logging")
return
log_message = logger(conn, *args)
if log_message is not None:
neo.lib.logging.debug('#0x%04x %s', packet.getId(), log_message)
def error(self, conn, code, message):
return "%s (%s)" % (code, message)
def notifyNodeInformation(self, conn, node_list):
for node_type, address, uuid, state in node_list:
if address is not None:
address = '%s:%d' % address
else:
address = '?'
node = (dump(uuid), node_type, address, state)
neo.lib.logging.debug(' ! %s | %8s | %22s | %s' % node)
PACKET_LOGGER = PacketLogger()
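The enable/disable mechanism above, which rebinds `dispatch` to a do-nothing lambda when logging is off, can be sketched with a self-contained toy class. `ToggleLogger` and its names are hypothetical, not NEO code.

```python
class ToggleLogger:
    """Sketch of PacketLogger's dispatch swap: no per-call 'if enabled' test."""
    def __init__(self, enabled=False):
        self.lines = []
        self.enable(enabled)

    def enable(self, enabled):
        # Rebind the public entry point instead of branching on every call.
        self.dispatch = self._dispatch if enabled else (lambda *a, **kw: None)

    def _dispatch(self, msg):
        self.lines.append(msg)
```

Disabled instances route every call through a no-op lambda, which is why the real logger can stay in the hot packet path.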
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/lib/node.py 0000664 0000000 0000000 00000037134 11634614701 0023245 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from time import time
import neo.lib
from neo.lib.util import dump
from neo.lib.protocol import NodeTypes, NodeStates
from neo.lib import attributeTracker
class Node(object):
"""This class represents a node."""
_connection = None
def __init__(self, manager, address=None, uuid=None,
state=NodeStates.UNKNOWN):
self._state = state
self._address = address
self._uuid = uuid
self._manager = manager
self._last_state_change = time()
manager.add(self)
def notify(self, packet):
assert self.isConnected(), 'Not connected'
self._connection.notify(packet)
def ask(self, packet, *args, **kw):
assert self.isConnected(), 'Not connected'
self._connection.ask(packet, *args, **kw)
def answer(self, packet, msg_id=None):
assert self.isConnected(), 'Not connected'
self._connection.answer(packet, msg_id)
def getLastStateChange(self):
return self._last_state_change
def getState(self):
return self._state
def setState(self, new_state):
if self._state == new_state:
return
old_state = self._state
self._state = new_state
self._last_state_change = time()
self._manager._updateState(self, old_state)
def setAddress(self, address):
if self._address == address:
return
old_address = self._address
self._address = address
self._manager._updateAddress(self, old_address)
def getAddress(self):
return self._address
def setUUID(self, uuid):
if self._uuid == uuid:
return
old_uuid = self._uuid
self._uuid = uuid
self._manager._updateUUID(self, old_uuid)
self._manager._updateIdentified(self)
def getUUID(self):
return self._uuid
def onConnectionClosed(self):
"""
Callback from node's connection when closed
"""
assert self._connection is not None
del self._connection
self._manager._updateIdentified(self)
def setConnection(self, connection):
"""
Define the connection that is currently available to this node.
"""
assert connection is not None
assert self._connection is None
self._connection = connection
connection.setOnClose(self.onConnectionClosed)
self._manager._updateIdentified(self)
def getConnection(self):
"""
Returns the connection to the node if available
"""
assert self._connection is not None
return self._connection
def isConnected(self):
"""
Returns True if a connection is established with the node
"""
return self._connection is not None
def isIdentified(self):
"""
Returns True if the node is connected and identified
"""
return self._connection is not None and self._uuid is not None
def __repr__(self):
return '<%s(uuid=%s, address=%s, state=%s) at %x>' % (
self.__class__.__name__,
dump(self._uuid),
self._address,
self._state,
id(self),
)
def isMaster(self):
return False
def isStorage(self):
return False
def isClient(self):
return False
def isAdmin(self):
return False
def isRunning(self):
return self._state == NodeStates.RUNNING
def isUnknown(self):
return self._state == NodeStates.UNKNOWN
def isTemporarilyDown(self):
return self._state == NodeStates.TEMPORARILY_DOWN
def isDown(self):
return self._state == NodeStates.DOWN
def isBroken(self):
return self._state == NodeStates.BROKEN
def isHidden(self):
return self._state == NodeStates.HIDDEN
def isPending(self):
return self._state == NodeStates.PENDING
def setRunning(self):
self.setState(NodeStates.RUNNING)
def setUnknown(self):
self.setState(NodeStates.UNKNOWN)
def setTemporarilyDown(self):
self.setState(NodeStates.TEMPORARILY_DOWN)
def setDown(self):
self.setState(NodeStates.DOWN)
def setBroken(self):
self.setState(NodeStates.BROKEN)
def setHidden(self):
self.setState(NodeStates.HIDDEN)
def setPending(self):
self.setState(NodeStates.PENDING)
def asTuple(self):
""" The returned tuple is intended to be used in protocol encoders """
return (self.getType(), self._address, self._uuid, self._state)
def __gt__(self, node):
# sort per UUID if defined
if self._uuid is not None:
return self._uuid > node._uuid
return self._address > node._address
def getType(self):
try:
return NODE_CLASS_MAPPING[self.__class__]
except KeyError:
raise NotImplementedError
def whoSetState(self):
"""
Debugging method: call this method to know who set the current
state value.
"""
return attributeTracker.whoSet(self, '_state')
attributeTracker.track(Node)
class MasterNode(Node):
"""This class represents a master node."""
def isMaster(self):
return True
class StorageNode(Node):
"""This class represents a storage node."""
def isStorage(self):
return True
class ClientNode(Node):
"""This class represents a client node."""
def isClient(self):
return True
class AdminNode(Node):
"""This class represents an admin node."""
def isAdmin(self):
return True
NODE_TYPE_MAPPING = {
NodeTypes.MASTER: MasterNode,
NodeTypes.STORAGE: StorageNode,
NodeTypes.CLIENT: ClientNode,
NodeTypes.ADMIN: AdminNode,
}
NODE_CLASS_MAPPING = {
StorageNode: NodeTypes.STORAGE,
MasterNode: NodeTypes.MASTER,
ClientNode: NodeTypes.CLIENT,
AdminNode: NodeTypes.ADMIN,
}
class NodeManager(object):
"""This class manages node status."""
# TODO: rework getXXXList() methods, filter first by node type
# - getStorageList(identified=True, connected=True, )
# - getList(...)
def __init__(self):
self._node_set = set()
self._address_dict = {}
self._uuid_dict = {}
self._type_dict = {}
self._state_dict = {}
self._identified_dict = {}
close = __init__
def add(self, node):
if node in self._node_set:
neo.lib.logging.warning('adding a known node %r, ignoring', node)
return
self._node_set.add(node)
self._updateAddress(node, None)
self._updateUUID(node, None)
self.__updateSet(self._type_dict, None, node.__class__, node)
self.__updateSet(self._state_dict, None, node.getState(), node)
self._updateIdentified(node)
def remove(self, node):
if node not in self._node_set:
neo.lib.logging.warning('removing unknown node %r, ignoring', node)
return
self._node_set.remove(node)
self.__drop(self._address_dict, node.getAddress())
self.__drop(self._uuid_dict, node.getUUID())
self.__dropSet(self._state_dict, node.getState(), node)
self.__dropSet(self._type_dict, node.__class__, node)
uuid = node.getUUID()
if uuid in self._identified_dict:
del self._identified_dict[uuid]
def __drop(self, index_dict, key):
try:
del index_dict[key]
except KeyError:
# a node may not have been indexed by uuid or address, e.g.:
# - a master known by address but without UUID
# - a client or admin node that doesn't have a listening address
pass
def __update(self, index_dict, old_key, new_key, node):
""" Update an index from old to new key """
if old_key is not None:
assert index_dict[old_key] is node, '%r is stored as %s, ' \
'moving %r to %s' % (index_dict[old_key], old_key, node,
new_key)
del index_dict[old_key]
if new_key is not None:
index_dict[new_key] = node
def _updateIdentified(self, node):
uuid = node.getUUID()
if node.isIdentified():
self._identified_dict[uuid] = node
else:
self._identified_dict.pop(uuid, None)
def _updateAddress(self, node, old_address):
self.__update(self._address_dict, old_address, node.getAddress(), node)
def _updateUUID(self, node, old_uuid):
self.__update(self._uuid_dict, old_uuid, node.getUUID(), node)
def __dropSet(self, set_dict, key, node):
if key in set_dict and node in set_dict[key]:
set_dict[key].remove(node)
def __updateSet(self, set_dict, old_key, new_key, node):
""" Update a set index from old to new key """
if old_key in set_dict:
set_dict[old_key].remove(node)
if new_key is not None:
set_dict.setdefault(new_key, set()).add(node)
def _updateState(self, node, old_state):
self.__updateSet(self._state_dict, old_state, node.getState(), node)
def getList(self, node_filter=None):
if node_filter is None:
return list(self._node_set)
return filter(node_filter, self._node_set)
def getIdentifiedList(self, pool_set=None):
"""
Returns a list of identified nodes.
pool_set is an iterable of allowed UUIDs.
"""
if pool_set is not None:
identified_nodes = self._identified_dict.items()
return [v for k, v in identified_nodes if k in pool_set]
return list(self._identified_dict.values())
def getConnectedList(self):
"""
Returns a list of connected nodes
"""
# TODO: use an index
return [x for x in self._node_set if x.isConnected()]
def __getList(self, index_dict, key):
return index_dict.setdefault(key, set())
def getByStateList(self, state):
""" Get a node list filtered per the node state """
return list(self.__getList(self._state_dict, state))
def __getTypeList(self, type_klass, only_identified=False):
node_set = self.__getList(self._type_dict, type_klass)
if only_identified:
return [x for x in node_set if x.getUUID() in self._identified_dict]
return list(node_set)
def getMasterList(self, only_identified=False):
""" Return a list with master nodes """
return self.__getTypeList(MasterNode, only_identified)
def getStorageList(self, only_identified=False):
""" Return a list with storage nodes """
return self.__getTypeList(StorageNode, only_identified)
def getClientList(self, only_identified=False):
""" Return a list with client nodes """
return self.__getTypeList(ClientNode, only_identified)
def getAdminList(self, only_identified=False):
""" Return a list with admin nodes """
return self.__getTypeList(AdminNode, only_identified)
def getByAddress(self, address):
""" Return the node that match with a given address """
return self._address_dict.get(address, None)
def getByUUID(self, uuid):
""" Return the node that match with a given UUID """
return self._uuid_dict.get(uuid, None)
def hasAddress(self, address):
return address in self._address_dict
def hasUUID(self, uuid):
return uuid in self._uuid_dict
def _createNode(self, klass, **kw):
return klass(self, **kw)
def createMaster(self, **kw):
""" Create and register a new master """
return self._createNode(MasterNode, **kw)
def createStorage(self, **kw):
""" Create and register a new storage """
return self._createNode(StorageNode, **kw)
def createClient(self, **kw):
""" Create and register a new client """
return self._createNode(ClientNode, **kw)
def createAdmin(self, **kw):
""" Create and register a new admin """
return self._createNode(AdminNode, **kw)
def _getClassFromNodeType(self, node_type):
klass = NODE_TYPE_MAPPING.get(node_type)
if klass is None:
raise ValueError('Unknown node type: %s' % node_type)
return klass
def createFromNodeType(self, node_type, **kw):
return self._createNode(self._getClassFromNodeType(node_type), **kw)
def init(self):
self._node_set.clear()
self._type_dict.clear()
self._state_dict.clear()
self._uuid_dict.clear()
self._address_dict.clear()
def update(self, node_list):
for node_type, addr, uuid, state in node_list:
# This should be done here (although klass might not be used in this
# iteration), as it raises if type is not valid.
klass = self._getClassFromNodeType(node_type)
# lookup in current table
node_by_uuid = self.getByUUID(uuid)
node_by_addr = self.getByAddress(addr)
node = node_by_uuid or node_by_addr
log_args = (node_type, dump(uuid), addr, state)
if node is None:
if state == NodeStates.DOWN:
neo.lib.logging.debug('NOT creating node %s %s %s %s',
*log_args)
else:
node = self._createNode(klass, address=addr, uuid=uuid,
state=state)
neo.lib.logging.debug('creating node %r', node)
else:
assert isinstance(node, klass), 'node %r is not ' \
'of expected type: %r' % (node, klass)
assert None in (node_by_uuid, node_by_addr) or \
node_by_uuid is node_by_addr, \
'Discrepancy between node_by_uuid (%r) and ' \
'node_by_addr (%r)' % (node_by_uuid, node_by_addr)
if state == NodeStates.DOWN:
neo.lib.logging.debug(
'dropping node %r (%r), found with %s ' \
'%s %s %s', node, node.isConnected(), *log_args)
if node.isConnected():
# cut this connection, node removed by handler
node.getConnection().close()
self.remove(node)
else:
neo.lib.logging.debug('updating node %r to %s %s %s %s',
node, *log_args)
node.setUUID(uuid)
node.setAddress(addr)
node.setState(state)
self.log()
def log(self):
neo.lib.logging.info('Node manager: %d nodes' % len(self._node_set))
for node in sorted(list(self._node_set)):
uuid = dump(node.getUUID()) or '-' * 32
address = node.getAddress() or ''
if address:
address = '%s:%d' % address
neo.lib.logging.info(' * %32s | %8s | %22s | %s' % (
uuid, node.getType(), address, node.getState()))
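The multi-index bookkeeping NodeManager performs (one dict per lookup key, kept in sync through the `_updateState`/`_updateAddress`/`_updateUUID` callbacks) can be sketched in a few lines. `TinyRegistry` and its names are hypothetical illustrations, not NEO code.

```python
class TinyRegistry:
    """Sketch of the multi-index pattern: a primary map plus a bucket index."""
    def __init__(self):
        self._by_uuid = {}   # uuid -> state (primary index)
        self._by_state = {}  # state -> set of uuids (secondary index)

    def add(self, uuid, state):
        self._by_uuid[uuid] = state
        self._by_state.setdefault(state, set()).add(uuid)

    def set_state(self, uuid, new_state):
        old_state = self._by_uuid[uuid]
        if old_state == new_state:
            return
        # Move the node between index buckets, as __updateSet does.
        self._by_state[old_state].discard(uuid)
        self._by_state.setdefault(new_state, set()).add(uuid)
        self._by_uuid[uuid] = new_state

    def by_state(self, state):
        return sorted(self._by_state.get(state, set()))
```

The point of the secondary index is that queries like "all RUNNING storages" become dict lookups instead of scans over the full node set.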
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/lib/profiling.py 0000664 0000000 0000000 00000002442 11634614701 0024303 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
Profiling is done with tiny-profiler, a very simple profiler.
It differs from Python's built-in profilers in that it requires
developers to explicitly put probes on specific methods, reducing:
- profiling overhead
- undesired result entries
You can get this profiler at:
https://svn.erp5.org/repos/public/erp5/trunk/utils/tiny_profiler
"""
PROFILING_ENABLED = False
if PROFILING_ENABLED:
from tiny_profiler import profiler_decorator, profiler_report
else:
def profiler_decorator(func):
return func
def profiler_report():
pass
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/lib/protocol.py 0000664 0000000 0000000 00000122145 11634614701 0024156 0 ustar 00root root 0000000 0000000
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import socket
import sys
import traceback
from socket import inet_ntoa, inet_aton
from cStringIO import StringIO
from struct import Struct
from neo.lib.util import Enum, getAddressType
# The protocol version (major, minor).
PROTOCOL_VERSION = (4, 1)
# Size restrictions.
MIN_PACKET_SIZE = 10
MAX_PACKET_SIZE = 0x4000000
PACKET_HEADER_FORMAT = Struct('!LHL')
# Check that header size is the expected value.
# If it is not, it means that struct module result is incompatible with
# "reference" platform (python 2.4 on x86-64).
assert PACKET_HEADER_FORMAT.size == 10, \
'Unsupported platform, packet header length = %i' % \
(PACKET_HEADER_FORMAT.size, )
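As a sketch of what the assertion above checks: the `!LHL` header packs (msg_id, code, total_length) into 4 + 2 + 4 = 10 bytes, and `Packet.encode()` counts the header itself in the length field. The helper names below are hypothetical.

```python
from struct import Struct

HEADER = Struct('!LHL')  # msg_id (L), packet code (H), total length (L)

def pack_header(msg_id, code, body_length):
    # total length includes the 10-byte header, as in Packet.encode()
    return HEADER.pack(msg_id, code, HEADER.size + body_length)

def unpack_header(data):
    return HEADER.unpack(data[:HEADER.size])
```

If a platform's `struct` padded this layout differently, the module-level assertion would fail at import time rather than corrupting packets on the wire.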
RESPONSE_MASK = 0x8000
class ErrorCodes(Enum):
ACK = Enum.Item(0)
NOT_READY = Enum.Item(1)
OID_NOT_FOUND = Enum.Item(2)
OID_DOES_NOT_EXIST = Enum.Item(6)
TID_NOT_FOUND = Enum.Item(3)
PROTOCOL_ERROR = Enum.Item(4)
BROKEN_NODE = Enum.Item(5)
ALREADY_PENDING = Enum.Item(7)
ErrorCodes = ErrorCodes()
class ClusterStates(Enum):
RECOVERING = Enum.Item(1)
VERIFYING = Enum.Item(2)
RUNNING = Enum.Item(3)
STOPPING = Enum.Item(4)
ClusterStates = ClusterStates()
class NodeTypes(Enum):
MASTER = Enum.Item(1)
STORAGE = Enum.Item(2)
CLIENT = Enum.Item(3)
ADMIN = Enum.Item(4)
NodeTypes = NodeTypes()
class NodeStates(Enum):
RUNNING = Enum.Item(1)
TEMPORARILY_DOWN = Enum.Item(2)
DOWN = Enum.Item(3)
BROKEN = Enum.Item(4)
HIDDEN = Enum.Item(5)
PENDING = Enum.Item(6)
UNKNOWN = Enum.Item(7)
NodeStates = NodeStates()
class CellStates(Enum):
UP_TO_DATE = Enum.Item(1)
OUT_OF_DATE = Enum.Item(2)
FEEDING = Enum.Item(3)
DISCARDED = Enum.Item(4)
CellStates = CellStates()
class LockState(Enum):
NOT_LOCKED = Enum.Item(1)
GRANTED = Enum.Item(2)
GRANTED_TO_OTHER = Enum.Item(3)
LockState = LockState()
# used for logging
node_state_prefix_dict = {
NodeStates.RUNNING: 'R',
NodeStates.TEMPORARILY_DOWN: 'T',
NodeStates.DOWN: 'D',
NodeStates.BROKEN: 'B',
NodeStates.HIDDEN: 'H',
NodeStates.PENDING: 'P',
NodeStates.UNKNOWN: 'U',
}
# used for logging
cell_state_prefix_dict = {
CellStates.UP_TO_DATE: 'U',
CellStates.OUT_OF_DATE: 'O',
CellStates.FEEDING: 'F',
CellStates.DISCARDED: 'D',
}
# Other constants.
INVALID_UUID = '\0' * 16
INVALID_TID = '\xff' * 8
INVALID_OID = '\xff' * 8
INVALID_PARTITION = 0xffffffff
INVALID_ADDRESS_TYPE = socket.AF_UNSPEC
ZERO_TID = '\0' * 8
ZERO_OID = '\0' * 8
OID_LEN = len(INVALID_OID)
TID_LEN = len(INVALID_TID)
UUID_NAMESPACES = {
NodeTypes.STORAGE: 'S',
NodeTypes.MASTER: 'M',
NodeTypes.CLIENT: 'C',
NodeTypes.ADMIN: 'A',
}
class ProtocolError(Exception):
""" Base class for protocol errors, close the connection """
pass
class PacketMalformedError(ProtocolError):
""" Close the connection and set the node as broken"""
pass
class UnexpectedPacketError(ProtocolError):
""" Close the connection and set the node as broken"""
pass
class NotReadyError(ProtocolError):
""" Just close the connection """
pass
class BrokenNodeDisallowedError(ProtocolError):
""" Just close the connection """
pass
class Packet(object):
"""
Base class for any packet definition. The _fmt class attribute must be
defined for any non-empty packet.
"""
_ignore_when_closed = False
_request = None
_answer = None
_body = None
_code = None
_fmt = None
_id = None
def __init__(self, *args, **kw):
assert self._code is not None, "Packet class not registered"
if args or kw:
args = list(args)
buf = StringIO()
# load named arguments
for item in self._fmt._items[len(args):]:
args.append(kw.get(item._name))
self._fmt.encode(buf.write, args)
self._body = buf.getvalue()
else:
self._body = ''
def decode(self):
assert self._body is not None
if self._fmt is None:
return ()
buf = StringIO(self._body)
try:
return self._fmt.decode(buf.read)
except ParseError, msg:
name = self.__class__.__name__
raise PacketMalformedError("%s failed (%s)" % (name, msg))
def setContent(self, msg_id, body):
""" Register the packet content for future decoding """
self._id = msg_id
self._body = body
def setId(self, value):
self._id = value
def getId(self):
assert self._id is not None, "No identifier applied on the packet"
return self._id
def encode(self):
""" Encode a packet as a string to send it over the network """
content = self._body
length = PACKET_HEADER_FORMAT.size + len(content)
return (PACKET_HEADER_FORMAT.pack(self._id, self._code, length), content)
def __len__(self):
return PACKET_HEADER_FORMAT.size + len(self._body)
def __repr__(self):
return '%s[%r]' % (self.__class__.__name__, self._id)
def __eq__(self, other):
""" Compare packets with their code instead of content """
if other is None:
return False
assert isinstance(other, Packet)
return self._code == other._code
def isError(self):
return isinstance(self, Error)
def isResponse(self):
return self._code & RESPONSE_MASK == RESPONSE_MASK
def getAnswerClass(self):
return self._answer
def ignoreOnClosedConnection(self):
"""
Tells if this packet must be ignored when its connection is closed
when it is handled.
"""
return self._ignore_when_closed
class ParseError(Exception):
"""
An exception that encapsulates another one and builds the 'path' of the
packet item that generated the error.
"""
def __init__(self, item, trace):
Exception.__init__(self)
self._trace = trace
self._items = [item]
def append(self, item):
self._items.append(item)
def __repr__(self):
chain = '/'.join([item.getName() for item in reversed(self._items)])
return 'at %s:\n%s' % (chain, self._trace)
__str__ = __repr__
# packet parsers
class PItem(object):
"""
Base class for any packet item; _encode and _decode must be overridden
by subclasses.
"""
def __init__(self, name):
self._name = name
def __repr__(self):
return self.__class__.__name__
def getName(self):
return self._name
def _trace(self, method, *args):
try:
return method(*args)
except ParseError, e:
# trace and forward exception
e.append(self)
raise
except Exception:
# original exception, encapsulate it
trace = ''.join(traceback.format_exception(*sys.exc_info())[2:])
raise ParseError(self, trace)
def encode(self, writer, items):
return self._trace(self._encode, writer, items)
def decode(self, reader):
return self._trace(self._decode, reader)
def _encode(self, writer, items):
raise NotImplementedError, self.__class__.__name__
def _decode(self, reader):
raise NotImplementedError, self.__class__.__name__
class PStruct(PItem):
"""
Aggregate other items
"""
def __init__(self, name, *items):
PItem.__init__(self, name)
self._items = items
def _encode(self, writer, items):
assert len(self._items) == len(items), (items, self._items)
for item, value in zip(self._items, items):
item.encode(writer, value)
def _decode(self, reader):
return tuple([item.decode(reader) for item in self._items])
class PStructItem(PItem):
"""
A single value encoded with struct
"""
def __init__(self, name, fmt):
PItem.__init__(self, name)
struct = Struct(fmt)
self.pack = struct.pack
self.unpack = struct.unpack
self.size = struct.size
def _encode(self, writer, value):
writer(self.pack(value))
def _decode(self, reader):
return self.unpack(reader(self.size))[0]
class PList(PStructItem):
"""
A list of homogeneous items
"""
def __init__(self, name, item):
PStructItem.__init__(self, name, '!L')
self._item = item
def _encode(self, writer, items):
assert isinstance(items, (list, tuple, set)), (type(items), items)
writer(self.pack(len(items)))
item = self._item
for value in items:
item.encode(writer, value)
def _decode(self, reader):
length = self.unpack(reader(self.size))[0]
item = self._item
return [item.decode(reader) for _ in xrange(length)]
class PDict(PStructItem):
"""
A dictionary with custom key and value formats
"""
def __init__(self, name, key, value):
PStructItem.__init__(self, name, '!L')
self._key = key
self._value = value
def _encode(self, writer, item):
assert isinstance(item, dict), (type(item), item)
writer(self.pack(len(item)))
key, value = self._key, self._value
for k, v in item.iteritems():
key.encode(writer, k)
value.encode(writer, v)
def _decode(self, reader):
length = self.unpack(reader(self.size))[0]
key, value = self._key, self._value
new_dict = {}
for _ in xrange(length):
k = key.decode(reader)
v = value.decode(reader)
new_dict[k] = v
return new_dict
class PEnum(PStructItem):
"""
Encapsulate an enumeration value
"""
def __init__(self, name, enum):
PStructItem.__init__(self, name, '!l')
self._enum = enum
def _encode(self, writer, item):
if item is None:
item = -1
else:
assert isinstance(item, int), item
writer(self.pack(item))
def _decode(self, reader):
code = self.unpack(reader(self.size))[0]
if code == -1:
return None
try:
return self._enum[code]
except KeyError:
enum = self._enum.__class__.__name__
raise ValueError, 'Invalid code for %s enum: %r' % (enum, code)
class PAddressIPGeneric(PStructItem):
def __init__(self, name, format):
PStructItem.__init__(self, name, format)
def encode(self, writer, address):
host, port = address
host = socket.inet_pton(self.af_type, host)
writer(self.pack(host, port))
def decode(self, reader):
data = reader(self.size)
address = self.unpack(data)
host, port = address
host = socket.inet_ntop(self.af_type, host)
return (host, port)
class PAddressIPv4(PAddressIPGeneric):
af_type = socket.AF_INET
def __init__(self, name):
PAddressIPGeneric.__init__(self, name, '!4sH')
class PAddressIPv6(PAddressIPGeneric):
af_type = socket.AF_INET6
def __init__(self, name):
PAddressIPGeneric.__init__(self, name, '!16sH')
class PAddress(PStructItem):
"""
An host address (IPv4/IPv6)
"""
address_format_dict = {
socket.AF_INET: PAddressIPv4('ipv4'),
socket.AF_INET6: PAddressIPv6('ipv6'),
}
def __init__(self, name):
PStructItem.__init__(self, name, '!L')
def _encode(self, writer, address):
if address is None:
writer(self.pack(INVALID_ADDRESS_TYPE))
return
af_type = getAddressType(address)
writer(self.pack(af_type))
encoder = self.address_format_dict[af_type]
encoder.encode(writer, address)
def _decode(self, reader):
af_type = self.unpack(reader(self.size))[0]
if af_type == INVALID_ADDRESS_TYPE:
return None
decoder = self.address_format_dict[af_type]
host, port = decoder.decode(reader)
return (host, port)
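The tagged encoding used by PAddress, a 4-byte address-family tag followed by a family-specific payload, with AF_UNSPEC standing in for a None address, can be sketched as below (the helper names are hypothetical, and only the IPv4 branch is shown).

```python
import socket
from struct import Struct

TAG = Struct('!L')     # address family tag
IPV4 = Struct('!4sH')  # packed IPv4 host + port

def encode_addr(address):
    if address is None:
        return TAG.pack(socket.AF_UNSPEC)  # sentinel family for None
    host, port = address
    return TAG.pack(socket.AF_INET) + IPV4.pack(socket.inet_aton(host), port)

def decode_addr(data):
    (family,) = TAG.unpack(data[:TAG.size])
    if family == socket.AF_UNSPEC:
        return None
    host, port = IPV4.unpack(data[TAG.size:])
    return (socket.inet_ntoa(host), port)
```

Reading the tag first lets the decoder pick the right payload format (IPv4 or IPv6) before consuming any address bytes.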
class PString(PStructItem):
"""
A variable-length string
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!L')
def _encode(self, writer, value):
writer(self.pack(len(value)))
writer(value)
def _decode(self, reader):
length = self.unpack(reader(self.size))[0]
return reader(length)
class PBoolean(PStructItem):
"""
A boolean value, encoded as a single byte
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!B')
def _encode(self, writer, value):
writer(self.pack(bool(value)))
def _decode(self, reader):
return bool(self.unpack(reader(self.size))[0])
class PNumber(PStructItem):
"""
An integer number (4-byte length)
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!L')
class PIndex(PStructItem):
"""
A big integer used to define indexes in a huge list.
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!Q')
class PPTID(PStructItem):
"""
A None value means an invalid PTID
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!Q')
def _encode(self, writer, value):
if value is None:
value = 0
PStructItem._encode(self, writer, value)
def _decode(self, reader):
value = PStructItem._decode(self, reader)
if value == 0:
value = None
return value
class PProtocol(PStructItem):
"""
The protocol version definition
"""
def __init__(self, name):
PStructItem.__init__(self, name, '!LL')
def _encode(self, writer, version):
writer(self.pack(*version))
def _decode(self, reader):
major, minor = self.unpack(reader(self.size))
if (major, minor) != PROTOCOL_VERSION:
raise ProtocolError('protocol version mismatch')
return (major, minor)
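The version handshake above — unpack two 32-bit integers and reject any mismatch — can be sketched in isolation (the PROTOCOL_VERSION value here is made up for illustration):

```python
import struct

PROTOCOL_VERSION = (4, 1)  # hypothetical version pair, for illustration only

def decode_version(data):
    # two big-endian unsigned 32-bit integers: major, minor
    major, minor = struct.unpack('!LL', data)
    if (major, minor) != PROTOCOL_VERSION:
        raise ValueError('protocol version mismatch')
    return (major, minor)
```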
class PUUID(PItem):
"""
A UUID (node identifier)
"""
def _encode(self, writer, uuid):
if uuid is None:
uuid = INVALID_UUID
assert len(uuid) == 16, (len(uuid), uuid)
writer(uuid)
def _decode(self, reader):
uuid = reader(16)
if uuid == INVALID_UUID:
uuid = None
return uuid
class PTID(PItem):
"""
A transaction identifier
"""
def _encode(self, writer, tid):
if tid is None:
tid = INVALID_TID
assert len(tid) == 8, (len(tid), tid)
writer(tid)
def _decode(self, reader):
tid = reader(8)
if tid == INVALID_TID:
tid = None
return tid
# same definition, for now
POID = PTID
PChecksum = PUUID # (md5 is same length as uuid)
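PUUID and PTID both encode None as a fixed all-ones sentinel so the field keeps its fixed width on the wire. A minimal sketch of that convention (the sentinel value is an assumption for illustration):

```python
INVALID_TID = b'\xff' * 8  # assumed sentinel; keeps the field a fixed 8 bytes

def encode_tid(tid):
    # None maps to the sentinel instead of a variable-size marker
    return INVALID_TID if tid is None else tid

def decode_tid(data):
    # the sentinel decodes back to None
    return None if data == INVALID_TID else data
```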
# common definitions
PFEmpty = PStruct('no_content')
PFNodeType = PEnum('type', NodeTypes)
PFNodeState = PEnum('state', NodeStates)
PFCellState = PEnum('state', CellStates)
PFNodeList = PList('node_list',
PStruct('node',
PFNodeType,
PAddress('address'),
PUUID('uuid'),
PFNodeState,
),
)
PFCellList = PList('cell_list',
PStruct('cell',
PUUID('uuid'),
PFCellState,
),
)
PFRowList = PList('row_list',
PStruct('row',
PNumber('offset'),
PFCellList,
),
)
PFHistoryList = PList('history_list',
PStruct('history_entry',
PTID('serial'),
PNumber('size'),
),
)
PFUUIDList = PList('uuid_list',
PUUID('uuid'),
)
PFTidList = PList('tid_list',
PTID('tid'),
)
PFOidList = PList('oid_list',
POID('oid'),
)
# packets definition
class Notify(Packet):
"""
General purpose notification (remote logging)
"""
_fmt = PStruct('notify',
PString('message'),
)
class Error(Packet):
"""
Error is a special type of message: it can be sent in reply to any other
message, even one that does not usually expect a reply. Any -> Any.
"""
_fmt = PStruct('error',
PNumber('code'),
PString('message'),
)
class Ping(Packet):
"""
Check if a peer is still alive. Any -> Any.
"""
_answer = PFEmpty
class RequestIdentification(Packet):
"""
Request a node identification. This must be the first packet for any
connection. Any -> Any.
"""
_fmt = PStruct('request_identification',
PProtocol('protocol_version'),
PFNodeType,
PUUID('uuid'),
PAddress('address'),
PString('name'),
)
_answer = PStruct('accept_identification',
PFNodeType,
PUUID('my_uuid'),
PNumber('num_partitions'),
PNumber('num_replicas'),
PUUID('your_uuid'),
)
def __init__(self, *args, **kw):
if args or kw:
# always announce current protocol version
args = list(args)
args.insert(0, PROTOCOL_VERSION)
super(RequestIdentification, self).__init__(*args, **kw)
def decode(self):
return super(RequestIdentification, self).decode()[1:]
class PrimaryMaster(Packet):
"""
Ask the current primary master node. This must be the second message when
connecting to a master node. Any -> M.
Reply to Ask Primary Master. This message includes a list of known master
nodes to make sure that a peer has the same information. M -> Any.
"""
_answer = PStruct('answer_primary',
PUUID('primary_uuid'),
PList('known_master_list',
PStruct('master',
PAddress('address'),
PUUID('uuid'),
),
),
)
class AnnouncePrimary(Packet):
"""
Announce a primary master node election. PM -> SM.
"""
class ReelectPrimary(Packet):
"""
Force a re-election of a primary master node. M -> M.
"""
class LastIDs(Packet):
"""
Ask the last OID, the last TID and the last Partition Table ID that
a storage node stores. Used to recover information. PM -> S, S -> PM.
Reply to Ask Last IDs. S -> PM, PM -> S.
"""
_answer = PStruct('answer_last_ids',
POID('last_oid'),
PTID('last_tid'),
PPTID('last_ptid'),
)
class PartitionTable(Packet):
"""
Ask the full partition table. PM -> S.
Answer rows in a partition table. S -> PM.
"""
_answer = PStruct('answer_partition_table',
PPTID('ptid'),
PFRowList,
)
class NotifyPartitionTable(Packet):
"""
Send rows in a partition table to update other nodes. PM -> S, C.
"""
_fmt = PStruct('send_partition_table',
PPTID('ptid'),
PFRowList,
)
class PartitionChanges(Packet):
"""
Notify a subset of a partition table. This is used to notify changes.
PM -> S, C.
"""
_fmt = PStruct('notify_partition_changes',
PPTID('ptid'),
PList('cell_list',
PStruct('cell',
PNumber('offset'),
PUUID('uuid'),
PFNodeState,
),
),
)
class ReplicationDone(Packet):
"""
Notify the master node that a partition has been successfully replicated from
a storage to another.
S -> M
"""
_fmt = PStruct('notify_replication_done',
PNumber('offset'),
)
class StartOperation(Packet):
"""
Tell a storage node to start an operation. Until a storage node receives
this message, it must not serve client nodes. PM -> S.
"""
class StopOperation(Packet):
"""
Tell a storage node to stop an operation. Once a storage node receives
this message, it must not serve client nodes. PM -> S.
"""
class UnfinishedTransactions(Packet):
"""
Ask unfinished transactions. PM -> S.
Answer unfinished transactions. S -> PM.
"""
_answer = PStruct('answer_unfinished_transactions',
PTID('max_tid'),
PList('tid_list',
PTID('unfinished_tid'),
),
)
class ObjectPresent(Packet):
"""
Ask if an object is present. If not present, OID_NOT_FOUND should be
returned. PM -> S.
Answer that an object is present. S -> PM.
"""
_fmt = PStruct('object_present',
POID('oid'),
PTID('tid'),
)
_answer = PStruct('object_present',
POID('oid'),
PTID('tid'),
)
class DeleteTransaction(Packet):
"""
Delete a transaction. PM -> S.
"""
_fmt = PStruct('delete_transaction',
PTID('tid'),
PFOidList,
)
class CommitTransaction(Packet):
"""
Commit a transaction. PM -> S.
"""
_fmt = PStruct('commit_transaction',
PTID('tid'),
)
class BeginTransaction(Packet):
"""
Ask to begin a new transaction. C -> PM.
Answer when a transaction begins, giving a TID if necessary. PM -> C.
"""
_fmt = PStruct('ask_begin_transaction',
PTID('tid'),
)
_answer = PStruct('answer_begin_transaction',
PTID('tid'),
)
class FinishTransaction(Packet):
"""
Finish a transaction. C -> PM.
Answer when a transaction is finished. PM -> C.
"""
_fmt = PStruct('ask_finish_transaction',
PTID('tid'),
PFOidList,
)
_answer = PStruct('answer_transaction_finished',
PTID('ttid'),
PTID('tid'),
)
class NotifyTransactionFinished(Packet):
"""
Notify that a transaction blocking a replication is now finished.
M -> S
"""
_fmt = PStruct('notify_transaction_finished',
PTID('ttid'),
PTID('max_tid'),
)
class LockInformation(Packet):
"""
Lock information on a transaction. PM -> S.
Notify information on a transaction locked. S -> PM.
"""
_fmt = PStruct('ask_lock_informations',
PTID('ttid'),
PTID('tid'),
PFOidList,
)
_answer = PStruct('answer_information_locked',
PTID('tid'),
)
class InvalidateObjects(Packet):
"""
Invalidate objects. PM -> C.
"""
_fmt = PStruct('invalidate_objects',
PTID('tid'),
PFOidList,
)
class UnlockInformation(Packet):
"""
Unlock information on a transaction. PM -> S.
"""
_fmt = PStruct('notify_unlock_information',
PTID('tid'),
)
class GenerateOIDs(Packet):
"""
Ask new object IDs. C -> PM.
Answer new object IDs. PM -> C.
"""
_fmt = PStruct('ask_new_oids',
PNumber('num_oids'),
)
_answer = PStruct('answer_new_oids',
PFOidList,
)
class StoreObject(Packet):
"""
Ask to store an object. Send an OID, an original serial, a current
transaction ID, and data. C -> S.
Answer if an object has been stored. If an object is in conflict,
a serial of the conflicting transaction is returned. In this case,
if this serial is newer than the current transaction ID, a client
node must not try to resolve the conflict. S -> C.
"""
_fmt = PStruct('ask_store_object',
POID('oid'),
PTID('serial'),
PBoolean('compression'),
PNumber('checksum'),
PString('data'),
PTID('data_serial'),
PTID('tid'),
PBoolean('unlock'),
)
_answer = PStruct('answer_store_object',
PBoolean('conflicting'),
POID('oid'),
PTID('serial'),
)
class AbortTransaction(Packet):
"""
Abort a transaction. C -> S, PM.
"""
_fmt = PStruct('abort_transaction',
PTID('tid'),
)
class StoreTransaction(Packet):
"""
Ask to store a transaction. C -> S.
Answer if transaction has been stored. S -> C.
"""
_fmt = PStruct('ask_store_transaction',
PTID('tid'),
PString('user'),
PString('description'),
PString('extension'),
PFOidList,
)
_answer = PStruct('answer_store_transaction',
PTID('tid'),
)
class GetObject(Packet):
"""
Ask a stored object by its OID and a serial or a TID if given. If a serial
is specified, the specified revision of an object will be returned. If
a TID is specified, an object right before the TID will be returned. S,C -> S.
Answer the requested object. S -> C.
"""
_fmt = PStruct('ask_object',
POID('oid'),
PTID('serial'),
PTID('tid'),
)
_answer = PStruct('answer_object',
POID('oid'),
PTID('serial_start'),
PTID('serial_end'),
PBoolean('compression'),
PNumber('checksum'),
PString('data'),
PTID('data_serial'),
)
class TIDList(Packet):
"""
Ask for TIDs between a range of offsets. The order of TIDs is descending,
and the range is [first, last). C -> S.
Answer the requested TIDs. S -> C.
"""
_fmt = PStruct('ask_tids',
PIndex('first'),
PIndex('last'),
PNumber('partition'),
)
_answer = PStruct('answer_tids',
PFTidList,
)
class TIDListFrom(Packet):
"""
Ask for up to 'length' TIDs starting at min_tid. The order of TIDs is ascending.
S -> S.
Answer the requested TIDs. S -> S
"""
_fmt = PStruct('tid_list_from',
PTID('min_tid'),
PTID('max_tid'),
PNumber('length'),
PList('partition_list',
PNumber('partition'),
),
)
_answer = PStruct('answer_tids',
PFTidList,
)
class TransactionInformation(Packet):
"""
Ask information about a transaction. Any -> S.
Answer information (user, description) about a transaction. S -> Any.
"""
_fmt = PStruct('ask_transaction_information',
PTID('tid'),
)
_answer = PStruct('answer_transaction_information',
PTID('tid'),
PString('user'),
PString('description'),
PString('extension'),
PBoolean('packed'),
PFOidList,
)
class ObjectHistory(Packet):
"""
Ask history information for a given object. The order of serials is
descending, and the range is [first, last]. C -> S.
Answer history information (serial, size) for an object. S -> C.
"""
_fmt = PStruct('ask_object_history',
POID('oid'),
PIndex('first'),
PIndex('last'),
)
_answer = PStruct('answer_object_history',
POID('oid'),
PFHistoryList,
)
class ObjectHistoryFrom(Packet):
"""
Ask history information for a given object. The order of serials is
ascending, and starts at (or above) min_serial for min_oid. S -> S.
Answer the requested serials. S -> S.
"""
_fmt = PStruct('ask_object_history',
POID('min_oid'),
PTID('min_serial'),
PTID('max_serial'),
PNumber('length'),
PNumber('partition'),
)
_answer = PStruct('answer_object_history_from',
PDict('object_dict',
POID('oid'),
PFTidList,
),
)
class PartitionList(Packet):
"""
All the following messages are exchanged between neoctl and the admin node.
Ask information about partitions.
Answer information about partitions.
"""
_fmt = PStruct('ask_partition_list',
PNumber('min_offset'),
PNumber('max_offset'),
PUUID('uuid'),
)
_answer = PStruct('answer_partition_list',
PPTID('ptid'),
PFRowList,
)
class NodeList(Packet):
"""
Ask information about nodes
Answer information about nodes
"""
_fmt = PStruct('ask_node_list',
PFNodeType,
)
_answer = PStruct('answer_node_list',
PFNodeList,
)
class SetNodeState(Packet):
"""
Set the node state
"""
_fmt = PStruct('set_node_state',
PUUID('uuid'),
PFNodeState,
PBoolean('modify_partition_table'),
)
_answer = Error
class AddPendingNodes(Packet):
"""
Ask the primary to include some pending nodes in the partition table
"""
_fmt = PStruct('add_pending_nodes',
PFUUIDList,
)
_answer = Error
class NotifyNodeInformation(Packet):
"""
Notify information about one or more nodes. PM -> Any.
"""
_fmt = PStruct('notify_node_informations',
PFNodeList,
)
class NodeInformation(Packet):
"""
Ask node information
"""
_answer = PFEmpty
class SetClusterState(Packet):
"""
Set the cluster state
"""
_fmt = PStruct('set_cluster_state',
PEnum('state', ClusterStates),
)
_answer = Error
class ClusterInformation(Packet):
"""
Notify information about the cluster
"""
_fmt = PStruct('notify_cluster_information',
PEnum('state', ClusterStates),
)
class ClusterState(Packet):
"""
Ask state of the cluster
Answer state of the cluster
"""
_answer = PStruct('answer_cluster_state',
PEnum('state', ClusterStates),
)
class NotifyLastOID(Packet):
"""
Notify last OID generated
"""
_fmt = PStruct('notify_last_oid',
POID('last_oid'),
)
class ObjectUndoSerial(Packet):
"""
Ask storage the serial where object data is when undoing given transaction,
for a list of OIDs.
C -> S
Answer serials at which object data is when undoing a given transaction.
object_tid_dict has the following format:
key: oid
value: 3-tuple
current_serial (TID)
The latest serial visible to the undoing transaction.
undo_serial (TID)
Where undone data is (tid at which data is before given undo).
is_current (bool)
If current_serial's data is current on storage.
S -> C
"""
_fmt = PStruct('ask_undo_transaction',
PTID('tid'),
PTID('ltid'),
PTID('undone_tid'),
PFOidList,
)
_answer = PStruct('answer_undo_transaction',
PDict('object_tid_dict',
POID('oid'),
PStruct('object_tid_value',
PTID('current_serial'),
PTID('undo_serial'),
PBoolean('is_current'),
),
),
)
class HasLock(Packet):
"""
Ask a storage whether an oid is locked by another transaction.
C -> S
Answer whether a transaction holds the write lock for requested object.
"""
_fmt = PStruct('has_load_lock',
PTID('tid'),
POID('oid'),
)
_answer = PStruct('answer_has_lock',
POID('oid'),
PEnum('lock_state', LockState),
)
class CheckCurrentSerial(Packet):
"""
Verify that the given serial is current for object oid in the database, and
take a write lock on it (so that this state is not altered until the
transaction ends).
Answer to AskCheckCurrentSerial.
Same structure as AnswerStoreObject, to handle the same way, except there
is nothing to invalidate in any client's cache.
"""
_fmt = PStruct('ask_check_current_serial',
PTID('tid'),
PTID('serial'),
POID('oid'),
)
_answer = PStruct('answer_store_object',
PBoolean('conflicting'),
POID('oid'),
PTID('serial'),
)
class Barrier(Packet):
"""
Initiates a "network barrier", allowing the node sending this packet to know
when all packets sent previously on the same connection have been handled
by its peer.
"""
_answer = PFEmpty
class Pack(Packet):
"""
Request a pack at given TID.
C -> M
M -> S
Inform that packing is over.
S -> M
M -> C
"""
_fmt = PStruct('ask_pack',
PTID('tid'),
)
_answer = PStruct('answer_pack',
PBoolean('status'),
)
class CheckTIDRange(Packet):
"""
Ask some stats about a range of transactions.
Used to know if there are differences between a replicating node and
reference node.
S -> S
Stats about a range of transactions.
Used to know if there are differences between a replicating node and
reference node.
S -> S
"""
_fmt = PStruct('ask_check_tid_range',
PTID('min_tid'),
PTID('max_tid'),
PNumber('length'),
PNumber('partition'),
)
_answer = PStruct('answer_check_tid_range',
PTID('min_tid'),
PNumber('length'),
PNumber('count'),
PChecksum('checksum'),
PTID('max_tid'),
)
class CheckSerialRange(Packet):
"""
Ask some stats about a range of object history.
Used to know if there are differences between a replicating node and
reference node.
S -> S
Stats about a range of object history.
Used to know if there are differences between a replicating node and
reference node.
S -> S
"""
_fmt = PStruct('ask_check_serial_range',
POID('min_oid'),
PTID('min_serial'),
PTID('max_tid'),
PNumber('length'),
PNumber('partition'),
)
_answer = PStruct('answer_check_serial_range',
POID('min_oid'),
PTID('min_serial'),
PNumber('length'),
PNumber('count'),
PChecksum('oid_checksum'),
POID('max_oid'),
PChecksum('serial_checksum'),
PTID('max_serial'),
)
class LastTransaction(Packet):
"""
Ask last committed TID.
C -> M
Answer last committed TID.
M -> C
"""
_answer = PStruct('answer_last_transaction',
PTID('tid'),
)
class NotifyReady(Packet):
"""
Notify that node is ready to serve requests.
S -> M
"""
pass
StaticRegistry = {}
def register(code, request, ignore_when_closed=None):
""" Register a packet in the packet registry """
# register the request
assert code not in StaticRegistry, "Duplicate request packet code"
request._code = code
StaticRegistry[code] = request
answer = request._answer
if ignore_when_closed is None:
# By default, on a closed connection:
# - request: ignore
# - answer: keep
# - notification: keep
ignore_when_closed = answer is not None
request._ignore_when_closed = ignore_when_closed
if answer in (Error, None):
return request
# build a class for the answer
answer = type('Answer%s' % (request.__name__, ), (Packet, ), {})
answer._fmt = request._answer
# compute the answer code
code = code | RESPONSE_MASK
answer._request = request
assert answer._code is None, "Answer of %s is already used" % (request, )
answer._code = code
request._answer = answer
# and register the answer packet
assert code not in StaticRegistry, "Duplicate response packet code"
StaticRegistry[code] = answer
return (request, answer)
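The pairing of request and answer codes via RESPONSE_MASK can be sketched in isolation (the mask value and registry shape are assumptions for illustration):

```python
RESPONSE_MASK = 0x8000  # assumed: high bit distinguishes answers from requests

registry = {}

def register_packet(code, name, has_answer=True):
    # store the request under its code; the answer code is code | RESPONSE_MASK
    assert code not in registry, "duplicate request packet code"
    registry[code] = name
    if not has_answer:
        return name
    answer_code = code | RESPONSE_MASK
    assert answer_code not in registry, "duplicate response packet code"
    registry[answer_code] = 'Answer' + name
    return name, registry[answer_code]
```

Deriving the answer code from the request code keeps the two halves of every exchange adjacent in the registry without a second hand-maintained table.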
class ParserState(object):
"""
Parser internal state.
To be considered an opaque datatype outside of PacketRegistry.parse.
"""
payload = None
def set(self, payload):
self.payload = payload
def get(self):
return self.payload
def clear(self):
self.payload = None
class Packets(dict):
"""
Packet registry that checks packet code uniqueness and provides an index
"""
def __metaclass__(name, base, d):
for k, v in d.iteritems():
if isinstance(v, type) and issubclass(v, Packet):
v.handler_method_name = k[0].lower() + k[1:]
# this builds a "singleton"
return type('PacketRegistry', base, d)(StaticRegistry)
def parse(self, buf, state_container):
state = state_container.get()
if state is None:
header = buf.read(PACKET_HEADER_FORMAT.size)
if header is None:
return None
msg_id, msg_type, msg_len = PACKET_HEADER_FORMAT.unpack(header)
try:
packet_klass = self[msg_type]
except KeyError:
raise PacketMalformedError('Unknown packet type')
if msg_len > MAX_PACKET_SIZE:
raise PacketMalformedError('message too big (%d)' % msg_len)
if msg_len < MIN_PACKET_SIZE:
raise PacketMalformedError('message too small (%d)' % msg_len)
msg_len -= PACKET_HEADER_FORMAT.size
else:
msg_id, packet_klass, msg_len = state
data = buf.read(msg_len)
if data is None:
# Not enough.
if state is None:
state_container.set((msg_id, packet_klass, msg_len))
return None
if state:
state_container.clear()
packet = packet_klass()
packet.setContent(msg_id, data)
return packet
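The two-phase parse above — header first, then payload, with partial reads stashed in a resumable state — can be sketched over plain byte strings (the header layout is an assumption for illustration):

```python
import struct

HEADER = struct.Struct('!LHL')  # assumed layout: msg_id, msg_type, msg_len

def feed(buf, state):
    # state is None, or (msg_id, msg_type, body_len) left over from a
    # previous call that could not read the full payload yet
    if state is None:
        if len(buf) < HEADER.size:
            return None, buf, None            # wait for a complete header
        msg_id, msg_type, msg_len = HEADER.unpack(buf[:HEADER.size])
        buf = buf[HEADER.size:]
        state = (msg_id, msg_type, msg_len - HEADER.size)
    msg_id, msg_type, body_len = state
    if len(buf) < body_len:
        return None, buf, state               # wait for the full payload
    packet = (msg_id, msg_type, buf[:body_len])
    return packet, buf[body_len:], None
```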
# notifications
Error = register(
0x8000, Error)
Ping, Pong = register(
0x0001, Ping)
Notify = register(
0x0002, Notify)
RequestIdentification, AcceptIdentification = register(
0x0003, RequestIdentification)
AskPrimary, AnswerPrimary = register(
0x0004, PrimaryMaster)
AnnouncePrimary = register(
0x0005, AnnouncePrimary)
ReelectPrimary = register(
0x0006, ReelectPrimary)
NotifyNodeInformation = register(
0x0007, NotifyNodeInformation)
AskLastIDs, AnswerLastIDs = register(
0x0008, LastIDs)
AskPartitionTable, AnswerPartitionTable = register(
0x0009, PartitionTable)
SendPartitionTable = register(
0x000A, NotifyPartitionTable)
NotifyPartitionChanges = register(
0x000B, PartitionChanges)
StartOperation = register(
0x000C, StartOperation)
StopOperation = register(
0x000D, StopOperation)
AskUnfinishedTransactions, AnswerUnfinishedTransactions = register(
0x000E, UnfinishedTransactions)
AskObjectPresent, AnswerObjectPresent = register(
0x000F, ObjectPresent)
DeleteTransaction = register(
0x0010, DeleteTransaction)
CommitTransaction = register(
0x0011, CommitTransaction)
AskBeginTransaction, AnswerBeginTransaction = register(
0x0012, BeginTransaction)
AskFinishTransaction, AnswerTransactionFinished = register(
0x0013, FinishTransaction, ignore_when_closed=False)
AskLockInformation, AnswerInformationLocked = register(
0x0014, LockInformation, ignore_when_closed=False)
InvalidateObjects = register(
0x0015, InvalidateObjects)
NotifyUnlockInformation = register(
0x0016, UnlockInformation)
AskNewOIDs, AnswerNewOIDs = register(
0x0017, GenerateOIDs)
AskStoreObject, AnswerStoreObject = register(
0x0018, StoreObject)
AbortTransaction = register(
0x0019, AbortTransaction)
AskStoreTransaction, AnswerStoreTransaction = register(
0x001A, StoreTransaction)
AskObject, AnswerObject = register(
0x001B, GetObject)
AskTIDs, AnswerTIDs = register(
0x001C, TIDList)
AskTransactionInformation, AnswerTransactionInformation = register(
0x001D, TransactionInformation)
AskObjectHistory, AnswerObjectHistory = register(
0x001E, ObjectHistory)
AskPartitionList, AnswerPartitionList = register(
0x001F, PartitionList)
AskNodeList, AnswerNodeList = register(
0x0020, NodeList)
SetNodeState = register(
0x0021, SetNodeState, ignore_when_closed=False)
AddPendingNodes = register(
0x0022, AddPendingNodes, ignore_when_closed=False)
AskNodeInformation, AnswerNodeInformation = register(
0x0023, NodeInformation)
SetClusterState = register(
0x0024, SetClusterState, ignore_when_closed=False)
NotifyClusterInformation = register(
0x0025, ClusterInformation)
AskClusterState, AnswerClusterState = register(
0x0026, ClusterState)
NotifyLastOID = register(
0x0027, NotifyLastOID)
NotifyReplicationDone = register(
0x0028, ReplicationDone)
AskObjectUndoSerial, AnswerObjectUndoSerial = register(
0x0029, ObjectUndoSerial)
AskHasLock, AnswerHasLock = register(
0x002A, HasLock)
AskTIDsFrom, AnswerTIDsFrom = register(
0x002B, TIDListFrom)
AskObjectHistoryFrom, AnswerObjectHistoryFrom = register(
0x002C, ObjectHistoryFrom)
AskBarrier, AnswerBarrier = register(
0x002D, Barrier)
AskPack, AnswerPack = register(
0x002E, Pack, ignore_when_closed=False)
AskCheckTIDRange, AnswerCheckTIDRange = register(
0x002F, CheckTIDRange)
AskCheckSerialRange, AnswerCheckSerialRange = register(
0x0030, CheckSerialRange)
NotifyReady = register(
0x0031, NotifyReady)
AskLastTransaction, AnswerLastTransaction = register(
0x0032, LastTransaction)
AskCheckCurrentSerial, AnswerCheckCurrentSerial = register(
0x0033, CheckCurrentSerial)
NotifyTransactionFinished = register(
0x003E, NotifyTransactionFinished)
def Errors():
registry_dict = {}
handler_method_name_dict = {}
def register_error(code):
return lambda self, message='': Error(code, message)
for code, error in ErrorCodes.iteritems():
name = ''.join(part.capitalize() for part in str(error).split('_'))
registry_dict[name] = register_error(error)
handler_method_name_dict[code] = name[0].lower() + name[1:]
return type('ErrorRegistry', (dict,),
registry_dict)(handler_method_name_dict)
Errors = Errors()
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from functools import wraps
import neo
from neo.lib import protocol
from neo.lib.protocol import CellStates
from neo.lib.util import dump, u64
from neo.lib.locking import RLock
class PartitionTableException(Exception):
"""
Base class for partition table exceptions
"""
class Cell(object):
"""This class represents a cell in a partition table."""
def __init__(self, node, state = CellStates.UP_TO_DATE):
self.node = node
self.state = state
def __repr__(self):
return "<Cell(uuid=%s, address=%s, state=%s)>" % (
dump(self.getUUID()),
self.getAddress(),
self.getState(),
)
def getState(self):
return self.state
def setState(self, state):
self.state = state
def isUpToDate(self):
return self.state == CellStates.UP_TO_DATE
def isOutOfDate(self):
return self.state == CellStates.OUT_OF_DATE
def isFeeding(self):
return self.state == CellStates.FEEDING
def getNode(self):
return self.node
def getNodeState(self):
"""This is a short hand."""
return self.node.getState()
def getUUID(self):
return self.node.getUUID()
def getAddress(self):
return self.node.getAddress()
class PartitionTable(object):
"""This class manages a partition table."""
def __init__(self, num_partitions, num_replicas):
self._id = None
self.np = num_partitions
self.nr = num_replicas
self.num_filled_rows = 0
# Note: don't use [[]] * num_partition construct, as it duplicates
# instance *references*, so the outer list contains really just one
# inner list instance.
self.partition_list = [[] for _ in xrange(num_partitions)]
self.count_dict = {}
def getID(self):
return self._id
def getPartitions(self):
return self.np
def getReplicas(self):
return self.nr
def clear(self):
"""Forget an existing partition table."""
self._id = None
self.num_filled_rows = 0
# Note: don't use [[]] * self.np construct, as it duplicates
# instance *references*, so the outer list contains really just one
# inner list instance.
self.partition_list = [[] for _ in xrange(self.np)]
self.count_dict.clear()
def getAssignedPartitionList(self, uuid):
""" Return the partition assigned to the specified UUID """
assigned_partitions = []
for offset in xrange(self.np):
for cell in self.getCellList(offset, readable=True):
if cell.getUUID() == uuid:
assigned_partitions.append(offset)
break
return assigned_partitions
def hasOffset(self, offset):
try:
return len(self.partition_list[offset]) > 0
except IndexError:
return False
def getNodeList(self):
"""Return all used nodes."""
return [node for node, count in self.count_dict.iteritems() \
if count > 0]
def getCellList(self, offset, readable=False, writable=False):
# allow all cell states
state_set = set(CellStates.values())
if readable or writable:
# discarded cells are neither readable nor writable
state_set.remove(CellStates.DISCARDED)
if readable:
# out-of-date cells are not readable
state_set.remove(CellStates.OUT_OF_DATE)
try:
return [cell for cell in self.partition_list[offset] \
if cell is not None and cell.getState() in state_set]
except (TypeError, KeyError):
return []
def getCellListForTID(self, tid, readable=False, writable=False):
return self.getCellList(self.getPartition(tid), readable, writable)
def getCellListForOID(self, oid, readable=False, writable=False):
return self.getCellList(self.getPartition(oid), readable, writable)
def getPartition(self, oid_or_tid):
return u64(oid_or_tid) % self.getPartitions()
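Partition assignment is a plain modulo over the 8-byte identifier, which can be shown standalone:

```python
import struct

def u64(packed):
    # interpret an 8-byte big-endian identifier as an unsigned integer
    return struct.unpack('!Q', packed)[0]

def get_partition(oid_or_tid, num_partitions):
    # every OID/TID maps deterministically to one of num_partitions buckets
    return u64(oid_or_tid) % num_partitions
```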
def getOutdatedOffsetListFor(self, uuid):
return [
offset for offset in xrange(self.np)
for c in self.partition_list[offset]
if c.getUUID() == uuid and c.getState() == CellStates.OUT_OF_DATE
]
def isAssigned(self, oid, uuid):
""" Check if the oid is assigned to the given node """
for cell in self.partition_list[u64(oid) % self.np]:
if cell.getUUID() == uuid:
return True
return False
def setCell(self, offset, node, state):
if state == CellStates.DISCARDED:
return self.removeCell(offset, node)
if node.isBroken() or node.isDown():
raise PartitionTableException('Invalid node state')
self.count_dict.setdefault(node, 0)
row = self.partition_list[offset]
if len(row) == 0:
# Create a new row.
row = [Cell(node, state), ]
if state != CellStates.FEEDING:
self.count_dict[node] += 1
self.partition_list[offset] = row
self.num_filled_rows += 1
else:
# XXX this can be slow, but it is necessary to remove a duplicate,
# if any.
for cell in row:
if cell.getNode() == node:
row.remove(cell)
if not cell.isFeeding():
self.count_dict[node] -= 1
break
row.append(Cell(node, state))
if state != CellStates.FEEDING:
self.count_dict[node] += 1
return (offset, node.getUUID(), state)
def removeCell(self, offset, node):
row = self.partition_list[offset]
assert row is not None
for cell in row:
if cell.getNode() == node:
row.remove(cell)
if not cell.isFeeding():
self.count_dict[node] -= 1
break
return (offset, node.getUUID(), CellStates.DISCARDED)
def load(self, ptid, row_list, nm):
"""
Load the partition table with the specified PTID, discard all previous
content.
"""
self.clear()
self._id = ptid
for offset, row in row_list:
if offset >= self.getPartitions():
raise IndexError
for uuid, state in row:
node = nm.getByUUID(uuid)
# the node must be known by the node manager
assert node is not None
self.setCell(offset, node, state)
neo.lib.logging.debug('partition table loaded (ptid=%s)', ptid)
self.log()
def update(self, ptid, cell_list, nm):
"""
Update the partition table with the cell list supplied. Ignore those changes
if the partition table ID is not greater than the current one. If a node
is not known, it is created in the node manager and set as unavailable
"""
if ptid <= self._id:
neo.lib.logging.warning('ignoring older partition changes')
return
self._id = ptid
for offset, uuid, state in cell_list:
node = nm.getByUUID(uuid)
assert node is not None, 'No node found for uuid %r' % (dump(uuid), )
self.setCell(offset, node, state)
neo.lib.logging.debug('partition table updated (ptid=%s)', ptid)
self.log()
def filled(self):
return self.num_filled_rows == self.np
def log(self):
for line in self._format():
neo.lib.logging.debug(line)
def format(self):
return '\n'.join(self._format())
def _format(self):
"""Help debugging partition table management.
Output sample:
DEBUG:root:pt: node 0: ad7ffe8ceef4468a0c776f3035c7a543, R
DEBUG:root:pt: node 1: a68a01e8bf93e287bd505201c1405bc2, R
DEBUG:root:pt: node 2: 67ae354b4ed240a0594d042cf5c01b28, R
DEBUG:root:pt: node 3: df57d7298678996705cd0092d84580f4, R
DEBUG:root:pt: 00000000: .UU.|U..U|.UU.|U..U|.UU.|U..U|.UU.|U..U|.UU.
DEBUG:root:pt: 00000009: U..U|.UU.|U..U|.UU.|U..U|.UU.|U..U|.UU.|U..U
Here, there are 4 nodes in RUNNING state.
The first partition has 2 replicas in UP_TO_DATE state, on nodes 1 and
2 (nodes 0 and 3 are displayed as unused for that partition by
displaying a dot).
The 8-digits number on the left represents the number of the first
partition on the line (here, line length is 9 to keep the docstring
width under 80 column).
"""
result = []
append = result.append
node_list = [k for k, v in self.count_dict.items() if v != 0]
node_list.sort()
node_dict = {}
for i, node in enumerate(node_list):
uuid = node.getUUID()
node_dict[uuid] = i
append('pt: node %d: %s, %s' % (i, dump(uuid),
protocol.node_state_prefix_dict[node.getState()]))
line = []
max_line_len = 20 # XXX: hardcoded number of partitions per line
cell_state_dict = protocol.cell_state_prefix_dict
for offset, row in enumerate(self.partition_list):
if len(line) == max_line_len:
append('pt: %08d: %s' % (offset - max_line_len,
'|'.join(line)))
line = []
if row is None:
line.append('X' * len(node_list))
else:
cell = []
cell_dict = dict([(node_dict.get(x.getUUID(), None), x)
for x in row])
for node in xrange(len(node_list)):
if node in cell_dict:
cell.append(cell_state_dict[cell_dict[node].getState()])
else:
cell.append('.')
line.append(''.join(cell))
if len(line):
append('pt: %08d: %s' % (offset - len(line) + 1,
'|'.join(line)))
return result
def operational(self):
if not self.filled():
return False
for row in self.partition_list:
for cell in row:
if (cell.isUpToDate() or cell.isFeeding()) and \
cell.getNode().isRunning():
break
else:
return False
return True
def getRow(self, offset):
row = self.partition_list[offset]
if row is None:
return []
return [(cell.getUUID(), cell.getState()) for cell in row]
def getRowList(self):
getRow = self.getRow
return [(x, getRow(x)) for x in xrange(self.np)]
def getNodeMap(self):
""" Return a list of 2-tuple: (uuid, partition_list) """
uuid_map = {}
for index, row in enumerate(self.partition_list):
for cell in row:
uuid_map.setdefault(cell.getNode(), []).append(index)
return uuid_map
def thread_safe(method):
def wrapper(self, *args, **kwargs):
self.lock()
try:
return method(self, *args, **kwargs)
finally:
self.unlock()
return wraps(method)(wrapper)
class MTPartitionTable(PartitionTable):
""" Thread-safe aware version of the partition table, override only methods
used in the client """
def __init__(self, *args, **kwargs):
self._lock = RLock()
PartitionTable.__init__(self, *args, **kwargs)
def lock(self):
self._lock.acquire()
def unlock(self):
self._lock.release()
@thread_safe
def getCellListForTID(self, *args, **kwargs):
return PartitionTable.getCellListForTID(self, *args, **kwargs)
@thread_safe
def getCellListForOID(self, *args, **kwargs):
return PartitionTable.getCellListForOID(self, *args, **kwargs)
@thread_safe
def setCell(self, *args, **kwargs):
return PartitionTable.setCell(self, *args, **kwargs)
@thread_safe
def clear(self, *args, **kwargs):
return PartitionTable.clear(self, *args, **kwargs)
@thread_safe
def operational(self, *args, **kwargs):
return PartitionTable.operational(self, *args, **kwargs)
@thread_safe
def getNodeList(self, *args, **kwargs):
return PartitionTable.getNodeList(self, *args, **kwargs)
@thread_safe
def getNodeMap(self, *args, **kwargs):
return PartitionTable.getNodeMap(self, *args, **kwargs)
# neo/lib/python.py
# Copyright (C) 2011 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import sys, types
if sys.version_info < (2, 5):
import __builtin__, imp
def all(iterable):
"""
Return True if bool(x) is True for all values x in the iterable.
"""
for x in iterable:
if not x:
return False
return True
__builtin__.all = all
def any(iterable):
"""
Return True if bool(x) is True for any x in the iterable.
"""
for x in iterable:
if x:
return True
return False
__builtin__.any = any
import md5, sha
sys.modules['hashlib'] = hashlib = imp.new_module('hashlib')
hashlib.md5 = md5.new
hashlib.sha1 = sha.new
import struct
class Struct(object):
def __init__(self, fmt):
self._fmt = fmt
self.size = struct.calcsize(fmt)
def pack(self, *args):
return struct.pack(self._fmt, *args)
def unpack(self, *args):
return struct.unpack(self._fmt, *args)
struct.Struct = Struct
sys.modules['functools'] = functools = imp.new_module('functools')
def wraps(wrapped):
"""Simple backport of functools.wraps from Python >= 2.5"""
def decorator(wrapper):
wrapper.__module__ = wrapped.__module__
wrapper.__name__ = wrapped.__name__
wrapper.__doc__ = wrapped.__doc__
wrapper.__dict__.update(wrapped.__dict__)
return wrapper
return decorator
functools.wraps = wraps
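The `wraps` backport above copies `__module__`, `__name__`, `__doc__` and `__dict__` from the wrapped function, which is what lets decorators such as `thread_safe` keep the decorated method's identity. A minimal, self-contained sketch of the effect (the `traced`/`ping` names are illustrative, not part of NEO):

```python
import functools

def traced(func):
    # a trivial pass-through decorator; functools.wraps copies __name__,
    # __doc__ and __dict__ from the wrapped function, exactly as the
    # backport above does for Python < 2.5
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@traced
def ping():
    """Reply with pong."""
    return 'pong'

assert ping() == 'pong'
# metadata comes from the wrapped function, not from 'wrapper'
assert ping.__name__ == 'ping'
assert ping.__doc__ == 'Reply with pong.'
```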
# neo/lib/util.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import re
import socket
from zlib import adler32
from Queue import deque
from struct import pack, unpack
SOCKET_CONNECTORS_DICT = {
socket.AF_INET : 'SocketConnectorIPv4',
socket.AF_INET6: 'SocketConnectorIPv6',
}
def u64(s):
return unpack('!Q', s)[0]
def p64(n):
return pack('!Q', n)
def add64(packed, offset):
"""Add a python number to a 64-bits packed value"""
return p64(u64(packed) + offset)
def dump(s):
"""Dump a binary string in hex."""
if s is None:
return None
if isinstance(s, str):
ret = []
for c in s:
ret.append('%02x' % ord(c))
return ''.join(ret)
else:
return repr(s)
def bin(s):
"""Inverse of dump method."""
if s is None:
return None
ret = []
while len(s):
ret.append(chr(int(s[:2], 16)))
s = s[2:]
return ''.join(ret)
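The `u64`/`p64` pair packs 64-bit IDs (TIDs, OIDs) as big-endian strings, and `dump`/`bin` convert such strings to and from hex. A stand-alone round-trip sketch of the same idea, with `bin` renamed `undump` to avoid shadowing the builtin:

```python
from struct import pack, unpack

def p64(n):
    # pack an integer as a big-endian unsigned 64-bit value
    return pack('!Q', n)

def u64(s):
    return unpack('!Q', s)[0]

def dump(s):
    # hex representation of a packed value
    return ''.join('%02x' % c for c in bytearray(s))

def undump(s):
    # inverse of dump(), equivalent to the bin() helper above
    return bytearray(int(s[i:i + 2], 16) for i in range(0, len(s), 2))

tid = p64(0x0123456789abcdef)
assert u64(tid) == 0x0123456789abcdef
assert dump(tid) == '0123456789abcdef'
assert undump(dump(tid)) == bytearray(tid)
```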
def makeChecksum(s):
"""Return a 4-byte integer checksum against a string."""
return adler32(s) & 0xffffffff
def resolve(hostname):
"""
Return the first IP address that matches the given hostname
"""
try:
# an IP resolves to itself
_, _, address_list = socket.gethostbyname_ex(hostname)
except socket.gaierror:
return None
return address_list[0]
def getAddressType(address):
"Return the type (IPv4 or IPv6) of an ip"
(host, port) = address
for af_type in SOCKET_CONNECTORS_DICT:
try:
socket.inet_pton(af_type, host)
except socket.error:
continue
else:
break
else:
raise ValueError("Unknown type of host", host)
return af_type
def getConnectorFromAddress(address):
address_type = getAddressType(address)
return SOCKET_CONNECTORS_DICT[address_type]
def parseNodeAddress(address, port_opt=None):
if ']' in address:
(ip, port) = address.split(']')
ip = ip.lstrip('[')
port = port.lstrip(':')
if port == '':
port = port_opt
elif ':' in address:
(ip, port) = address.split(':')
ip = resolve(ip)
else:
ip = address
port = port_opt
if port is None:
raise ValueError
return (ip, int(port))
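The parser above accepts `[ipv6]:port`, `ipv4:port` and bare-host forms, falling back to `port_opt` when no port is given. A simplified sketch of that branching (it deliberately skips the DNS resolution step that `parseNodeAddress` performs via `resolve`):

```python
def parse_node_address(address, port_opt=None):
    # simplified sketch of parseNodeAddress above
    if ']' in address:                       # "[ipv6]:port" or "[ipv6]"
        ip, _, port = address.partition(']')
        ip = ip.lstrip('[')
        port = port.lstrip(':') or port_opt
    elif ':' in address:                     # "ipv4:port"
        ip, _, port = address.partition(':')
    else:                                    # bare host, use default port
        ip, port = address, port_opt
    if port is None:
        raise ValueError('no port for %r' % address)
    return ip, int(port)

assert parse_node_address('[::1]:5555') == ('::1', 5555)
assert parse_node_address('127.0.0.1:100') == ('127.0.0.1', 100)
assert parse_node_address('[2001:db8::1]', 9999) == ('2001:db8::1', 9999)
```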
def parseMasterList(masters, except_node=None):
assert masters, 'At least one master must be defined'
socket_connector = ''
# load master node list
master_node_list = []
# XXX: support '/' and ' ' as separator
masters = masters.replace('/', ' ')
for node in masters.split(' '):
address = parseNodeAddress(node)
if (address != except_node):
master_node_list.append(address)
socket_connector_temp = getConnectorFromAddress(address)
if socket_connector == '':
socket_connector = socket_connector_temp
elif socket_connector == socket_connector_temp:
pass
else:
raise TypeError("Wrong connector type: cannot mix IPv4 and IPv6 in the master list")
return tuple(master_node_list), socket_connector
class Enum(dict):
"""
Simulate an enumeration; define one as follows:
class MyEnum(Enum):
ITEM1 = Enum.Item(0)
ITEM2 = Enum.Item(1)
Enum items must be written in full upper case
"""
class Item(int):
_enum = None
_name = None
def __new__(cls, value):
instance = super(Enum.Item, cls).__new__(cls, value)
instance._enum = None
instance._name = None
return instance
def __str__(self):
return self._name
def __repr__(self):
return "" % (self._name, self)
def __eq__(self, other):
if other is None:
return False
assert isinstance(other, (Enum.Item, int, float, long))
if isinstance(other, Enum.Item):
assert self._enum == other._enum
return int(self) == int(other)
def __init__(self):
dict.__init__(self)
for name in dir(self):
if not re.match('^[A-Z_]*$', name):
continue
item = getattr(self, name)
item._name = name
item._enum = self
self[int(item)] = item
def getByName(self, name):
return getattr(self, name)
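The docstring above shows how an enumeration is declared; a trimmed-down, self-contained sketch of the mechanism (upper-case class attributes become items, indexable by integer value; the `NodeStates` declaration below is illustrative, not NEO's real one):

```python
class Enum(dict):
    # trimmed-down sketch of the Enum class above
    class Item(int):
        pass

    def __init__(self):
        dict.__init__(self)
        for name in dir(self):
            if name.isupper():           # only FULL_UPPER_CASE attributes
                item = getattr(self, name)
                item._name = name
                self[int(item)] = item   # index items by integer value

class NodeStates(Enum):
    RUNNING = Enum.Item(0)
    DOWN = Enum.Item(1)

states = NodeStates()
assert states[0] is NodeStates.RUNNING   # lookup by wire value
assert int(NodeStates.DOWN) == 1
assert states[1]._name == 'DOWN'
```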
class ReadBuffer(object):
"""
Implementation of a lazy buffer. Its main purpose is to reduce
needless copies of data, by storing chunks and joining them only when
the requested size is available.
"""
def __init__(self):
self.size = 0
self.content = deque()
def append(self, data):
""" Append some data and compute the new buffer size """
size = len(data)
self.size += size
self.content.append((size, data))
def __len__(self):
""" Return the current buffer size """
return self.size
def read(self, size):
""" Read and consume size bytes """
if self.size < size:
return None
self.size -= size
chunk_list = []
pop_chunk = self.content.popleft
append_data = chunk_list.append
to_read = size
chunk_len = 0
# select required chunks
while to_read > 0:
chunk_size, chunk_data = pop_chunk()
to_read -= chunk_size
append_data(chunk_data)
if to_read < 0:
# too many bytes consumed, cut the last chunk
last_chunk = chunk_list[-1]
keep, let = last_chunk[:to_read], last_chunk[to_read:]
self.content.appendleft((-to_read, let))
chunk_list[-1] = keep
# join all chunks (one copy)
data = ''.join(chunk_list)
assert len(data) == size
return data
def clear(self):
""" Erase all buffer content """
self.size = 0
self.content.clear()
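The buffer stores incoming chunks in a deque and only joins (copies) them when `read` can be satisfied, splitting the last chunk if it overshoots. A minimal sketch of that behavior:

```python
from collections import deque

class ReadBuffer(object):
    """Minimal sketch of the lazy ReadBuffer above."""
    def __init__(self):
        self.size = 0
        self.content = deque()

    def append(self, data):
        self.size += len(data)
        self.content.append(data)

    def read(self, size):
        if self.size < size:
            return None                  # not enough data buffered yet
        self.size -= size
        chunks = []
        while size > 0:
            chunk = self.content.popleft()
            if len(chunk) > size:
                # overshoot: keep the unread tail for the next read
                chunk, rest = chunk[:size], chunk[size:]
                self.content.appendleft(rest)
            chunks.append(chunk)
            size -= len(chunk)
        return ''.join(chunks)           # single copy

buf = ReadBuffer()
buf.append('hel')
buf.append('lo world')
assert buf.read(20) is None              # only 11 bytes buffered
assert buf.read(5) == 'hello'
assert buf.read(6) == ' world'
```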
# neo/master/__init__.py (empty)
# neo/master/app.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
import os, sys
from time import time
from neo.lib import protocol
from neo.lib.protocol import UUID_NAMESPACES, ZERO_TID
from neo.lib.protocol import ClusterStates, NodeStates, NodeTypes, Packets
from neo.lib.node import NodeManager
from neo.lib.event import EventManager
from neo.lib.connection import ListeningConnection, ClientConnection
from neo.lib.exception import ElectionFailure, PrimaryFailure, OperationFailure
from neo.master.handlers import election, identification, secondary
from neo.master.handlers import storage, client, shutdown
from neo.master.handlers import administration
from neo.master.pt import PartitionTable
from neo.master.transactions import TransactionManager
from neo.master.verification import VerificationManager
from neo.master.recovery import RecoveryManager
from neo.lib.util import dump
from neo.lib.connector import getConnectorHandler
from neo.lib.debug import register as registerLiveDebugger
class Application(object):
"""The master node application."""
packing = None
# Latest completely committed TID
last_transaction = ZERO_TID
def __init__(self, config):
# Internal attributes.
self.em = EventManager()
self.nm = NodeManager()
self.tm = TransactionManager(self.onTransactionCommitted)
self.name = config.getCluster()
self.server = config.getBind()
self.storage_readiness = set()
master_addresses, connector_name = config.getMasters()
self.connector_handler = getConnectorHandler(connector_name)
for master_address in master_addresses :
self.nm.createMaster(address=master_address)
neo.lib.logging.debug('IP address is %s, port is %d', *(self.server))
# Partition table
replicas, partitions = config.getReplicas(), config.getPartitions()
if replicas < 0:
raise RuntimeError, 'replicas must be a non-negative integer'
if partitions <= 0:
raise RuntimeError, 'partitions must be a positive integer'
self.pt = PartitionTable(partitions, replicas)
neo.lib.logging.info('Configuration:')
neo.lib.logging.info('Partitions: %d', partitions)
neo.lib.logging.info('Replicas : %d', replicas)
neo.lib.logging.info('Name : %s', self.name)
self.listening_conn = None
self.primary = None
self.primary_master_node = None
self.cluster_state = None
self._startup_allowed = False
# Generate a UUID for self
uuid = config.getUUID()
if uuid is None or uuid == '':
uuid = self.getNewUUID(NodeTypes.MASTER)
self.uuid = uuid
neo.lib.logging.info('UUID : %s', dump(uuid))
# election related data
self.unconnected_master_node_set = set()
self.negotiating_master_node_set = set()
self._current_manager = None
registerLiveDebugger(on_log=self.log)
def close(self):
self.listening_conn = None
self.nm.close()
self.em.close()
del self.__dict__
def log(self):
self.em.log()
self.nm.log()
self.tm.log()
if self.pt is not None:
self.pt.log()
def run(self):
try:
self._run()
except:
neo.lib.logging.info('\nPre-mortem information:')
self.log()
raise
def _run(self):
"""Make sure that the status is sane and start a loop."""
bootstrap = True
# Make a listening port.
self.listening_conn = ListeningConnection(self.em, None,
addr=self.server, connector=self.connector_handler())
# Start a normal operation.
while True:
# (Re)elect a new primary master.
self.primary = not self.nm.getMasterList()
if not self.primary:
self.electPrimary(bootstrap=bootstrap)
bootstrap = False
try:
if self.primary:
self.playPrimaryRole()
else:
self.playSecondaryRole()
raise RuntimeError, 'should not reach here'
except (ElectionFailure, PrimaryFailure):
# Forget all connections.
for conn in self.em.getClientList():
conn.close()
def electPrimary(self, bootstrap = True):
"""Elect a primary master node.
The difficulty is that a master node must accept connections from
others while attempting to connect to other master nodes at the
same time. Note that storage nodes and client nodes may connect
to self as well as master nodes."""
neo.lib.logging.info('begin the election of a primary master')
self.unconnected_master_node_set.clear()
self.negotiating_master_node_set.clear()
self.listening_conn.setHandler(election.ServerElectionHandler(self))
while True:
# handle new connected masters
for node in self.nm.getMasterList():
node.setUnknown()
self.unconnected_master_node_set.add(node.getAddress())
# start the election process
self.primary = None
self.primary_master_node = None
try:
self._doElection(bootstrap)
except ElectionFailure, m:
# something went wrong; clean up, then restart
self._electionFailed(m)
bootstrap = False
else:
# election succeeded, stop the process
self.primary = self.primary is None
break
def _doElection(self, bootstrap):
"""
Start the election process:
- Try to connect to any known master node
- Wait at most for the timeout defined by bootstrap parameter
When done, the current process is defined either as primary or
secondary master node
"""
# Wait at most 20 seconds at bootstrap. Otherwise, wait at most
# 10 seconds to avoid stopping the whole cluster for a long time.
# Note that even if not all masters are up in the first 20 seconds,
# this is not an issue because the first one up will time out and
# take the primary role.
if bootstrap:
expiration = 20
else:
expiration = 10
client_handler = election.ClientElectionHandler(self)
t = 0
while True:
current_time = time()
if current_time >= t:
t = current_time + 1
for node in self.nm.getMasterList():
if not node.isRunning() and node.getLastStateChange() + \
expiration < current_time:
neo.lib.logging.info('%s is down', node)
node.setDown()
self.unconnected_master_node_set.discard(
node.getAddress())
# Try to connect to master nodes.
for addr in self.unconnected_master_node_set.difference(
x.getAddress() for x in self.em.getClientList()):
ClientConnection(self.em, client_handler, addr=addr,
connector=self.connector_handler())
self.em.poll(1)
if not (self.unconnected_master_node_set or
self.negotiating_master_node_set):
break
def _announcePrimary(self):
"""
Broadcast the announce that I'm the primary
"""
# I am the primary.
neo.lib.logging.debug('I am the primary, sending an announcement')
for conn in self.em.getClientList():
conn.notify(Packets.AnnouncePrimary())
conn.abort()
t = time()
while self.em.getClientList():
self.em.poll(1)
if t + 10 < time():
for conn in self.em.getClientList():
conn.close()
break
def _electionFailed(self, m):
"""
Ask other masters to reelect a primary after an election failure.
"""
neo.lib.logging.error('election failed: %s', (m, ))
# Ask all connected nodes to reelect a single primary master.
for conn in self.em.getClientList():
conn.notify(Packets.ReelectPrimary())
conn.abort()
# Wait until the connections are closed.
self.primary = None
self.primary_master_node = None
t = time() + 10
while self.em.getClientList() and time() < t:
try:
self.em.poll(1)
except ElectionFailure:
pass
# Close all connections.
for conn in self.em.getClientList() + self.em.getServerList():
conn.close()
def broadcastNodesInformation(self, node_list):
"""
Broadcast changes for a set of nodes
Send only one packet per connection to reduce bandwidth
"""
node_dict = {}
# group modified nodes by destination node type
for node in node_list:
node_info = node.asTuple()
def assign_for_notification(node_type):
# helper function
node_dict.setdefault(node_type, []).append(node_info)
if node.isMaster() or node.isStorage():
# client get notifications for master and storage only
assign_for_notification(NodeTypes.CLIENT)
if node.isMaster() or node.isStorage() or node.isClient():
assign_for_notification(NodeTypes.STORAGE)
assign_for_notification(NodeTypes.ADMIN)
# send at most one non-empty notification packet per node
for node in self.nm.getIdentifiedList():
node_list = node_dict.get(node.getType(), [])
if node_list and node.isRunning():
node.notify(Packets.NotifyNodeInformation(node_list))
def broadcastPartitionChanges(self, cell_list, selector=None):
"""Broadcast a Notify Partition Changes packet."""
neo.lib.logging.debug('broadcastPartitionChanges')
if not cell_list:
return
if not selector:
selector = lambda n: n.isClient() or n.isStorage() or n.isAdmin()
self.pt.log()
ptid = self.pt.setNextID()
packet = Packets.NotifyPartitionChanges(ptid, cell_list)
for node in self.nm.getIdentifiedList():
if not node.isRunning():
continue
if selector(node):
node.notify(packet)
def outdateAndBroadcastPartition(self):
" Outdate cell of non-working nodes and broadcast changes """
self.broadcastPartitionChanges(self.pt.outdate())
def broadcastLastOID(self):
oid = self.tm.getLastOID()
neo.lib.logging.debug(
'Broadcast last OID to storages : %s' % dump(oid))
packet = Packets.NotifyLastOID(oid)
for node in self.nm.getStorageList(only_identified=True):
node.notify(packet)
def provideService(self):
"""
This is the normal mode for a primary master node. Handle transactions
and stop the service only if a catastrophe happens or the user requests
a shutdown.
"""
neo.lib.logging.info('provide service')
em = self.em
self.tm.reset()
self.changeClusterState(ClusterStates.RUNNING)
# Now everything is passive.
while True:
try:
em.poll(1)
except OperationFailure:
# If not operational, send Stop Operation packets to storage
# nodes and client nodes. Abort connections to client nodes.
neo.lib.logging.critical('No longer operational')
for node in self.nm.getIdentifiedList():
if node.isStorage() or node.isClient():
node.notify(Packets.StopOperation())
if node.isClient():
node.getConnection().abort()
# Then, go back, and restart.
return
def playPrimaryRole(self):
neo.lib.logging.info(
'play the primary role with %r', self.listening_conn)
# i'm the primary, send the announcement
self._announcePrimary()
# all incoming connections identify through this handler
self.listening_conn.setHandler(
identification.IdentificationHandler(self))
em = self.em
nm = self.nm
# Close all remaining connections to other masters,
# for the same reason as in playSecondaryRole.
for conn in em.getConnectionList():
conn_uuid = conn.getUUID()
if conn_uuid is not None:
node = nm.getByUUID(conn_uuid)
assert node is not None
assert node.isMaster() and not conn.isClient()
assert node._connection is None and node.isUnknown()
# this may trigger 'unexpected answer' warnings on remote side
conn.close()
# If I know any storage node, make sure that they are not in the
# running state, because they are not connected at this stage.
for node in nm.getStorageList():
if node.isRunning():
node.setTemporarilyDown()
# recover the cluster status at startup
self.runManager(RecoveryManager)
while True:
self.runManager(VerificationManager)
self.provideService()
def playSecondaryRole(self):
"""
I play a secondary role, thus only wait for a primary master to fail.
"""
neo.lib.logging.info('play the secondary role with %r',
self.listening_conn)
# Wait for an announcement. If this is too long, probably
# the primary master is down.
t = time()
while self.primary_master_node is None:
self.em.poll(1)
if t + 10 < time():
# election timeout
raise ElectionFailure("Election timeout")
# Restart completely. Non-optimized
# but lower level code needs to be stabilized first.
addr = self.primary_master_node.getAddress()
for conn in self.em.getConnectionList():
conn.close()
# Reconnect to primary master node.
primary_handler = secondary.PrimaryHandler(self)
ClientConnection(self.em, primary_handler, addr=addr,
connector=self.connector_handler())
# and another for the future incoming connections
handler = identification.IdentificationHandler(self)
self.listening_conn.setHandler(handler)
while True:
self.em.poll(1)
def runManager(self, manager_klass):
self._current_manager = manager_klass(self)
self._current_manager.run()
self._current_manager = None
def changeClusterState(self, state):
"""
Change the cluster state and apply right handler on each connections
"""
if self.cluster_state == state:
return
# select the storage handler
client_handler = client.ClientServiceHandler(self)
if state == ClusterStates.RUNNING:
storage_handler = storage.StorageServiceHandler(self)
elif self._current_manager is not None:
storage_handler = self._current_manager.getHandler()
else:
raise RuntimeError('Unexpected cluster state')
# change handlers
notification_packet = Packets.NotifyClusterInformation(state)
for node in self.nm.getIdentifiedList():
if node.isMaster():
continue
conn = node.getConnection()
if node.isClient() and conn.isAborted():
continue
node.notify(notification_packet)
if node.isClient():
if state != ClusterStates.RUNNING:
conn.close()
handler = client_handler
elif node.isStorage():
handler = storage_handler
else:
continue # keep handler
conn.setHandler(handler)
handler.connectionCompleted(conn)
self.cluster_state = state
def getNewUUID(self, node_type):
# build a UUID
uuid = os.urandom(15)
while uuid == protocol.INVALID_UUID[1:]:
uuid = os.urandom(15)
# look for the prefix
prefix = UUID_NAMESPACES.get(node_type, None)
if prefix is None:
raise RuntimeError, 'No UUID namespace found for this node type'
return prefix + uuid
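`getNewUUID` builds a 16-byte UUID from a one-byte per-node-type namespace prefix plus 15 random bytes, retrying the (astronomically unlikely) case where the random part collides with the invalid UUID. A sketch under assumed prefix values (the real ones live in `neo.lib.protocol.UUID_NAMESPACES`):

```python
import os

# hypothetical one-byte namespace prefixes, for illustration only
UUID_NAMESPACES = {'MASTER': b'\x01', 'STORAGE': b'\x02'}
INVALID_UUID = b'\xff' * 16

def get_new_uuid(node_type):
    prefix = UUID_NAMESPACES.get(node_type)
    if prefix is None:
        raise RuntimeError('No UUID namespace found for this node type')
    uuid = os.urandom(15)
    while uuid == INVALID_UUID[1:]:      # avoid the reserved invalid UUID
        uuid = os.urandom(15)
    return prefix + uuid

uuid = get_new_uuid('MASTER')
assert len(uuid) == 16
assert uuid[:1] == b'\x01'               # prefix identifies the node type
```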
def isValidUUID(self, uuid, addr):
node = self.nm.getByUUID(uuid)
if node is not None and node.getAddress() is not None \
and node.getAddress() != addr:
return False
return uuid != self.uuid and uuid is not None
def getClusterState(self):
return self.cluster_state
def shutdown(self):
"""Close all connections and exit"""
# XXX: This behaviour is probably broken, as it applies the same
# handler to all connection types. It must be carefully reviewed and
# corrected.
# change handler
handler = shutdown.ShutdownHandler(self)
for node in self.nm.getIdentifiedList():
node.getConnection().setHandler(handler)
# wait for all transaction to be finished
while self.tm.hasPending():
self.em.poll(1)
if self.cluster_state != ClusterStates.RUNNING:
neo.lib.logging.info("asking all nodes to shutdown")
# This code sends packets but never polls, so they never reach
# network.
for node in self.nm.getIdentifiedList():
notification = Packets.NotifyNodeInformation([node.asTuple()])
if node.isClient():
node.notify(notification)
elif node.isStorage() or node.isMaster():
node.notify(notification)
# then shutdown
sys.exit()
def identifyStorageNode(self, uuid, node):
state = NodeStates.RUNNING
handler = None
if self.cluster_state == ClusterStates.RUNNING:
if uuid is None or node is None:
# same as for verification
state = NodeStates.PENDING
handler = storage.StorageServiceHandler(self)
elif self.cluster_state == ClusterStates.STOPPING:
raise protocol.NotReadyError
else:
raise RuntimeError('unhandled cluster state: %s' %
(self.cluster_state, ))
return (uuid, state, handler)
def identifyNode(self, node_type, uuid, node):
state = NodeStates.RUNNING
if node_type == NodeTypes.ADMIN:
# always accept admin nodes
node_ctor = self.nm.createAdmin
handler = administration.AdministrationHandler(self)
neo.lib.logging.info('Accept an admin %s' % (dump(uuid), ))
elif node_type == NodeTypes.MASTER:
if node is None:
# unknown master, rejected
raise protocol.ProtocolError('Reject an unknown master node')
# always put other master in waiting state
node_ctor = self.nm.createMaster
handler = secondary.SecondaryMasterHandler(self)
neo.lib.logging.info('Accept a master %s' % (dump(uuid), ))
elif node_type == NodeTypes.CLIENT:
# refuse any client before running
if self.cluster_state != ClusterStates.RUNNING:
neo.lib.logging.info('Reject a connection from a client')
raise protocol.NotReadyError
node_ctor = self.nm.createClient
handler = client.ClientServiceHandler(self)
neo.lib.logging.info('Accept a client %s' % (dump(uuid), ))
elif node_type == NodeTypes.STORAGE:
node_ctor = self.nm.createStorage
manager = self._current_manager
if manager is None:
manager = self
(uuid, state, handler) = manager.identifyStorageNode(uuid, node)
neo.lib.logging.info('Accept a storage %s (%s)' %
(dump(uuid), state))
else:
handler = identification.IdentificationHandler(self)
return (uuid, node, state, handler, node_ctor)
def onTransactionCommitted(self, txn):
# I have received all the lock answers now:
# - send a Notify Transaction Finished to the initiated client node
# - Invalidate Objects to the other client nodes
ttid = txn.getTTID()
tid = txn.getTID()
transaction_node = txn.getNode()
invalidate_objects = Packets.InvalidateObjects(tid, txn.getOIDList())
transaction_finished = Packets.AnswerTransactionFinished(ttid, tid)
for client_node in self.nm.getClientList(only_identified=True):
c = client_node.getConnection()
if client_node is transaction_node:
c.answer(transaction_finished, msg_id=txn.getMessageId())
else:
c.notify(invalidate_objects)
# Unlock Information to relevant storage nodes.
notify_unlock = Packets.NotifyUnlockInformation(ttid)
getByUUID = self.nm.getByUUID
for storage_uuid in txn.getUUIDList():
getByUUID(storage_uuid).getConnection().notify(notify_unlock)
# Notify storages that have replication blocked by this transaction
notify_finished = Packets.NotifyTransactionFinished(ttid, tid)
for storage_uuid in txn.getNotificationUUIDList():
node = getByUUID(storage_uuid)
if node is not None and node.isConnected():
node.getConnection().notify(notify_finished)
# remove transaction from manager
self.tm.remove(transaction_node.getUUID(), ttid)
self.setLastTransaction(tid)
def getLastTransaction(self):
return self.last_transaction
def setLastTransaction(self, tid):
ltid = self.last_transaction
assert tid >= ltid, (tid, ltid)
self.last_transaction = tid
def setStorageNotReady(self, uuid):
self.storage_readiness.discard(uuid)
def setStorageReady(self, uuid):
self.storage_readiness.add(uuid)
def isStorageReady(self, uuid):
return uuid in self.storage_readiness
# neo/master/handlers/__init__.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.lib.handler import EventHandler
from neo.lib.protocol import NodeTypes, NodeStates, Packets
from neo.lib.util import dump
class MasterHandler(EventHandler):
"""This class implements a generic part of the event handlers."""
def protocolError(self, conn, message):
neo.lib.logging.error(
'Protocol error %s %s', message, conn.getAddress())
def askPrimary(self, conn):
app = self.app
if app.primary:
primary_uuid = app.uuid
elif app.primary_master_node is not None:
primary_uuid = app.primary_master_node.getUUID()
else:
primary_uuid = None
known_master_list = [(app.server, app.uuid, )]
for n in app.nm.getMasterList():
if n.isBroken():
continue
known_master_list.append((n.getAddress(), n.getUUID(), ))
conn.answer(Packets.AnswerPrimary(
primary_uuid,
known_master_list),
)
def askClusterState(self, conn):
assert conn.getUUID() is not None
state = self.app.getClusterState()
conn.answer(Packets.AnswerClusterState(state))
def askNodeInformation(self, conn):
nm = self.app.nm
node_list = []
node_list.extend(n.asTuple() for n in nm.getMasterList())
node_list.extend(n.asTuple() for n in nm.getClientList())
node_list.extend(n.asTuple() for n in nm.getStorageList())
conn.notify(Packets.NotifyNodeInformation(node_list))
conn.answer(Packets.AnswerNodeInformation())
def askPartitionTable(self, conn):
ptid = self.app.pt.getID()
row_list = self.app.pt.getRowList()
conn.answer(Packets.AnswerPartitionTable(ptid, row_list))
DISCONNECTED_STATE_DICT = {
NodeTypes.STORAGE: NodeStates.TEMPORARILY_DOWN,
}
class BaseServiceHandler(MasterHandler):
"""This class deals with events for a service phase."""
def nodeLost(self, conn, node):
# This method provides a hook point overridable by service classes.
# It is triggered when a connection to a node gets lost.
pass
def connectionLost(self, conn, new_state):
node = self.app.nm.getByUUID(conn.getUUID())
if node is None:
return # for example, when a storage is removed by an admin
if new_state != NodeStates.BROKEN:
new_state = DISCONNECTED_STATE_DICT.get(node.getType(),
NodeStates.DOWN)
assert new_state in (NodeStates.TEMPORARILY_DOWN, NodeStates.DOWN,
NodeStates.BROKEN), new_state
assert node.getState() not in (NodeStates.TEMPORARILY_DOWN,
NodeStates.DOWN, NodeStates.BROKEN), (dump(self.app.uuid),
node.whoSetState(), new_state)
was_pending = node.isPending()
node.setState(new_state)
if new_state != NodeStates.BROKEN and was_pending:
# was in pending state, so drop it from the node manager to forget
# it and do not set in running state when it comes back
neo.lib.logging.info('drop a pending node from the node manager')
self.app.nm.remove(node)
self.app.broadcastNodesInformation([node])
# clean node related data in specialized handlers
self.nodeLost(conn, node)
def notifyReady(self, conn):
self.app.setStorageReady(conn.getUUID())
# neo/master/handlers/administration.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.master.handlers import MasterHandler
from neo.lib.protocol import ClusterStates, NodeStates, Packets, ProtocolError
from neo.lib.protocol import Errors
from neo.lib.util import dump
CLUSTER_STATE_WORKFLOW = {
# destination: sources
ClusterStates.VERIFYING: set([ClusterStates.RECOVERING]),
ClusterStates.STOPPING: set([ClusterStates.RECOVERING,
ClusterStates.VERIFYING, ClusterStates.RUNNING]),
}
class AdministrationHandler(MasterHandler):
"""This class deals with messages from the admin node only"""
def connectionLost(self, conn, new_state):
node = self.app.nm.getByUUID(conn.getUUID())
self.app.nm.remove(node)
def askPrimary(self, conn):
app = self.app
# I'm the primary
conn.answer(Packets.AnswerPrimary(app.uuid, []))
def setClusterState(self, conn, state):
# check request
if state not in CLUSTER_STATE_WORKFLOW:
raise ProtocolError('Invalid state requested')
valid_current_states = CLUSTER_STATE_WORKFLOW[state]
if self.app.cluster_state not in valid_current_states:
raise ProtocolError('Cannot switch to this state')
# change state
if state == ClusterStates.VERIFYING:
# XXX: /!\ this allows leaving the first phase of recovery
self.app._startup_allowed = True
else:
self.app.changeClusterState(state)
# answer
conn.answer(Errors.Ack('Cluster state changed'))
if state == ClusterStates.STOPPING:
self.app.cluster_state = state
self.app.shutdown()
def setNodeState(self, conn, uuid, state, modify_partition_table):
neo.lib.logging.info("set node state for %s-%s : %s" %
(dump(uuid), state, modify_partition_table))
app = self.app
node = app.nm.getByUUID(uuid)
if node is None:
raise ProtocolError('unknown node')
if uuid == app.uuid:
node.setState(state)
# the request targets this primary master itself
if state != NodeStates.RUNNING:
p = Errors.Ack('node state changed')
conn.answer(p)
app.shutdown()
if node.getState() == state:
# no change, just notify admin node
p = Errors.Ack('node already in %s state' % state)
conn.answer(p)
return
if state == NodeStates.RUNNING:
# first make sure to have a connection to the node
if not node.isConnected():
raise ProtocolError('no connection to the node')
node.setState(state)
elif state == NodeStates.DOWN and node.isStorage():
# update its state
node.setState(state)
if node.isConnected():
# notify the node so that it can shut down
node.notify(Packets.NotifyNodeInformation([node.asTuple()]))
# close the connection to avoid handling the closure as a connection loss
node.getConnection().abort()
# modify the partition table if required
cell_list = []
if modify_partition_table:
# remove from pt
cell_list = app.pt.dropNode(node)
app.nm.remove(node)
else:
# outdate node in partition table
cell_list = app.pt.outdate()
app.broadcastPartitionChanges(cell_list)
else:
node.setState(state)
# /!\ send the node information *after* the partition table change
p = Errors.Ack('state changed')
conn.answer(p)
app.broadcastNodesInformation([node])
def addPendingNodes(self, conn, uuid_list):
uuids = ', '.join([dump(uuid) for uuid in uuid_list])
neo.lib.logging.debug('Add nodes %s' % uuids)
app = self.app
nm = app.nm
em = app.em
pt = app.pt
cell_list = []
uuid_set = set()
if app.getClusterState() == ClusterStates.RUNNING:
# take all pending nodes
for node in nm.getStorageList():
if node.isPending():
uuid_set.add(node.getUUID())
# keep only selected nodes
if uuid_list:
uuid_set = uuid_set.intersection(set(uuid_list))
# nothing to do
if not uuid_set:
neo.lib.logging.warning('No nodes added')
conn.answer(Errors.Ack('No nodes added'))
return
uuids = ', '.join([dump(uuid) for uuid in uuid_set])
neo.lib.logging.info('Adding nodes %s' % uuids)
# switch nodes to running state
node_list = [nm.getByUUID(uuid) for uuid in uuid_set]
for node in node_list:
new_cells = pt.addNode(node)
cell_list.extend(new_cells)
node.setRunning()
node.getConnection().notify(Packets.StartOperation())
app.broadcastNodesInformation(node_list)
# broadcast the new partition table
app.broadcastPartitionChanges(cell_list)
conn.answer(Errors.Ack('Nodes added: %s' % (uuids, )))
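The CLUSTER_STATE_WORKFLOW table used by setClusterState above encodes a small state machine: each destination state lists the source states it may be reached from. The check can be sketched in isolation (state names are plain strings here, mirroring ClusterStates):

```python
RECOVERING, VERIFYING, RUNNING, STOPPING = (
    'RECOVERING', 'VERIFYING', 'RUNNING', 'STOPPING')

CLUSTER_STATE_WORKFLOW = {
    # destination: allowed source states
    VERIFYING: set([RECOVERING]),
    STOPPING: set([RECOVERING, VERIFYING, RUNNING]),
}

def check_transition(current, requested):
    """Return True if the transition is allowed, else raise ValueError."""
    if requested not in CLUSTER_STATE_WORKFLOW:
        raise ValueError('Invalid state requested')
    if current not in CLUSTER_STATE_WORKFLOW[requested]:
        raise ValueError('Cannot switch to this state')
    return True

print(check_transition(RECOVERING, VERIFYING))  # True
print(check_transition(RUNNING, STOPPING))      # True
```

Any transition not listed (e.g. RUNNING to VERIFYING) is rejected with a ValueError, just as the handler raises ProtocolError.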
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/client.py 0000664 0000000 0000000 00000011065 11634614701 0026116 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.protocol import NodeStates, Packets, ProtocolError
from neo.master.handlers import MasterHandler
from neo.lib.util import dump
from neo.master.transactions import DelayedError
class ClientServiceHandler(MasterHandler):
""" Handler dedicated to client during service state """
def connectionCompleted(self, conn):
pass
def connectionLost(self, conn, new_state):
# cancel its transactions and forget the node
app = self.app
if app.listening_conn: # if running
node = app.nm.getByUUID(conn.getUUID())
assert node is not None
app.tm.abortFor(node)
node.setState(NodeStates.DOWN)
app.broadcastNodesInformation([node])
app.nm.remove(node)
def askNodeInformation(self, conn):
# send information about master and storage nodes only
nm = self.app.nm
node_list = []
node_list.extend(n.asTuple() for n in nm.getMasterList())
node_list.extend(n.asTuple() for n in nm.getStorageList())
conn.notify(Packets.NotifyNodeInformation(node_list))
conn.answer(Packets.AnswerNodeInformation())
def askBeginTransaction(self, conn, tid):
"""
A client requests a TID; nothing is kept about it until the transaction is finished.
"""
app = self.app
node = app.nm.getByUUID(conn.getUUID())
conn.answer(Packets.AnswerBeginTransaction(app.tm.begin(node, tid)))
def askNewOIDs(self, conn, num_oids):
app = self.app
conn.answer(Packets.AnswerNewOIDs(app.tm.getNextOIDList(num_oids)))
app.broadcastLastOID()
def askFinishTransaction(self, conn, ttid, oid_list):
app = self.app
# Collect partitions related to this transaction.
getPartition = app.pt.getPartition
partition_set = set()
partition_set.add(getPartition(ttid))
partition_set.update((getPartition(oid) for oid in oid_list))
# Collect the UUIDs of nodes related to this transaction.
uuid_set = set()
isStorageReady = app.isStorageReady
for part in partition_set:
uuid_set.update((uuid for uuid in (
cell.getUUID() for cell in app.pt.getCellList(part)
if cell.getNodeState() != NodeStates.HIDDEN)
if isStorageReady(uuid)))
if not uuid_set:
raise ProtocolError('No storage node ready for transaction')
identified_node_list = app.nm.getIdentifiedList(pool_set=uuid_set)
usable_uuid_set = set((x.getUUID() for x in identified_node_list))
partitions = app.pt.getPartitions()
peer_id = conn.getPeerId()
tid = app.tm.prepare(ttid, partitions, oid_list, usable_uuid_set,
peer_id)
# check whether a greater, foreign OID was stored
if app.tm.updateLastOID(oid_list):
app.broadcastLastOID()
# Request locking data.
# Build a new set, as the message may not be sent to all nodes: some
# might be unreachable at that time.
p = Packets.AskLockInformation(ttid, tid, oid_list)
for node in identified_node_list:
node.ask(p, timeout=60)
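askFinishTransaction above first computes the set of partitions touched by the transaction: the partition of the ttid plus the partition of every stored oid. In NEO, getPartition maps an 8-byte id to its integer value modulo the partition count; a sketch of the collection step, assuming that modulo scheme:

```python
from struct import unpack

def get_partition(packed_id, num_partitions):
    """Partition of an 8-byte big-endian id (mirrors pt.getPartition)."""
    return unpack('!Q', packed_id)[0] % num_partitions

def touched_partitions(ttid, oid_list, num_partitions):
    """Set of partitions involved in a transaction, as collected by
    askFinishTransaction before looking up ready storage nodes."""
    partition_set = {get_partition(ttid, num_partitions)}
    partition_set.update(
        get_partition(oid, num_partitions) for oid in oid_list)
    return partition_set

oid = lambda n: n.to_bytes(8, 'big')  # helper to build packed ids
print(touched_partitions(oid(0), [oid(1), oid(13)], 12))  # {0, 1}
```

With 12 partitions, oids 1 and 13 land in the same partition, so only two partitions (and their assigned storages) need to lock the transaction.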
def askPack(self, conn, tid):
app = self.app
if app.packing is None:
storage_list = app.nm.getStorageList(only_identified=True)
app.packing = (conn, conn.getPeerId(),
set(x.getUUID() for x in storage_list))
p = Packets.AskPack(tid)
for storage in storage_list:
storage.getConnection().ask(p)
else:
conn.answer(Packets.AnswerPack(False))
def askLastTransaction(self, conn):
conn.answer(Packets.AnswerLastTransaction(
self.app.getLastTransaction()))
def abortTransaction(self, conn, tid):
self.app.tm.remove(conn.getUUID(), tid)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/election.py 0000664 0000000 0000000 00000020216 11634614701 0026440 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.protocol import NodeTypes, NodeStates, Packets
from neo.lib.protocol import NotReadyError, ProtocolError, \
UnexpectedPacketError
from neo.lib.protocol import BrokenNodeDisallowedError
from neo.master.handlers import MasterHandler
from neo.lib.exception import ElectionFailure
from neo.lib.util import dump
class ClientElectionHandler(MasterHandler):
# FIXME: this packet is not allowed here, but handled in MasterHandler
# a global handler review is required.
def askPrimary(self, conn):
raise UnexpectedPacketError, "askPrimary on server connection"
def connectionStarted(self, conn):
addr = conn.getAddress()
# connection in progress
self.app.unconnected_master_node_set.remove(addr)
self.app.negotiating_master_node_set.add(addr)
MasterHandler.connectionStarted(self, conn)
def connectionFailed(self, conn):
addr = conn.getAddress()
node = self.app.nm.getByAddress(addr)
assert node is not None, (dump(self.app.uuid), addr)
assert node.isUnknown(), (dump(self.app.uuid), node.whoSetState(),
node.getState())
# the connection never succeeded, so the node is still in unknown state
self.app.negotiating_master_node_set.discard(addr)
self.app.unconnected_master_node_set.add(addr)
MasterHandler.connectionFailed(self, conn)
def connectionCompleted(self, conn):
conn.ask(Packets.AskPrimary())
MasterHandler.connectionCompleted(self, conn)
def connectionLost(self, conn, new_state):
addr = conn.getAddress()
self.app.negotiating_master_node_set.discard(addr)
def acceptIdentification(self, conn, node_type,
uuid, num_partitions, num_replicas, your_uuid):
app = self.app
node = app.nm.getByAddress(conn.getAddress())
if node_type != NodeTypes.MASTER:
# The peer is not a master node!
neo.lib.logging.error('%r is not a master node', conn)
app.nm.remove(node)
conn.close()
return
if your_uuid != app.uuid:
# uuid conflict happened, accept the new one and restart election
app.uuid = your_uuid
neo.lib.logging.info('UUID conflict, new UUID: %s',
dump(your_uuid))
raise ElectionFailure, 'new uuid supplied'
conn.setUUID(uuid)
node.setUUID(uuid)
if app.uuid < uuid:
# I lost.
app.primary = False
app.negotiating_master_node_set.discard(conn.getAddress())
def answerPrimary(self, conn, primary_uuid, known_master_list):
app = self.app
# Register new master nodes.
for address, uuid in known_master_list:
if app.server == address:
# This is self.
continue
n = app.nm.getByAddress(address)
# master node must be known
assert n is not None, 'Unknown master node: %s' % (address, )
if uuid is not None:
# If I don't know the UUID yet, believe what the peer
# told me at the moment.
if n.getUUID() != uuid:
n.setUUID(uuid)
if primary_uuid is not None:
# The primary master is defined.
if app.primary_master_node is not None \
and app.primary_master_node.getUUID() != primary_uuid:
# There are multiple primary master nodes. This is
# dangerous.
raise ElectionFailure, 'multiple primary master nodes'
primary_node = app.nm.getByUUID(primary_uuid)
if primary_node is None:
# I don't know such a node. Probably this information
# is old. So ignore it.
neo.lib.logging.warning(
'received an unknown primary node UUID')
else:
# Whatever the situation is, I trust this master.
app.primary = False
app.primary_master_node = primary_node
# Stop waiting for connections other than the primary master's to
# complete, so as to exit the election phase ASAP.
app.unconnected_master_node_set.clear()
app.negotiating_master_node_set.clear()
primary_node = app.primary_master_node
if (primary_node is None or \
conn.getAddress() == primary_node.getAddress()) and \
not conn.isClosed():
# Request a node identification.
# There are 3 cases here:
# - Peer doesn't know primary node
# We must ask its identification so we exchange our uuids, to
# know which of us is secondary.
# - Peer knows primary node
# - He is the primary
# We must ask its identification, as part of the normal
# connection process
# - He is not the primary
# We don't need to ask its identification, as we will close
# this connection anyway (exiting election).
# Also, connection can be closed by peer after he sent
# AnswerPrimary if he finds the primary master before we
# give him our UUID.
# The connection gets closed before this message gets processed
# because this message might have been queued, but connection
# interruption takes effect as soon as received.
conn.ask(Packets.RequestIdentification(
NodeTypes.MASTER,
app.uuid,
app.server,
app.name
))
class ServerElectionHandler(MasterHandler):
def reelectPrimary(self, conn):
raise ElectionFailure, 'reelection requested'
def requestIdentification(self, conn, node_type,
uuid, address, name):
self.checkClusterName(name)
app = self.app
if node_type != NodeTypes.MASTER:
neo.lib.logging.info('reject a connection from a non-master')
raise NotReadyError
node = app.nm.getByAddress(address)
if node is None:
neo.lib.logging.error('unknown master node: %s' % (address, ))
raise ProtocolError('unknown master node')
# If this node is broken, reject it.
if node.getUUID() == uuid:
if node.isBroken():
raise BrokenNodeDisallowedError
# supply another uuid in case of conflict
while not app.isValidUUID(uuid, address):
uuid = app.getNewUUID(node_type)
node.setUUID(uuid)
conn.setUUID(uuid)
p = Packets.AcceptIdentification(
NodeTypes.MASTER,
app.uuid,
app.pt.getPartitions(),
app.pt.getReplicas(),
uuid
)
conn.answer(p)
def announcePrimary(self, conn):
uuid = conn.getUUID()
if uuid is None:
raise ProtocolError('Not identified')
app = self.app
if app.primary:
# I am also the primary... So restart the election.
raise ElectionFailure, 'another primary arises'
node = app.nm.getByUUID(uuid)
app.primary = False
app.primary_master_node = node
app.unconnected_master_node_set.clear()
app.negotiating_master_node_set.clear()
neo.lib.logging.info('%s is the primary', node)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/identification.py 0000664 0000000 0000000 00000005221 11634614701 0027626 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.lib.protocol import NodeTypes, Packets
from neo.lib.protocol import BrokenNodeDisallowedError, ProtocolError
from neo.master.handlers import MasterHandler
class IdentificationHandler(MasterHandler):
def nodeLost(self, conn, node):
neo.lib.logging.warning(
'lost a node in IdentificationHandler: %s' % node)
def requestIdentification(self, conn, node_type, uuid, address, name):
self.checkClusterName(name)
app = self.app
# handle conflicts and broken nodes
node = app.nm.getByUUID(uuid)
if node:
if node.isBroken():
raise BrokenNodeDisallowedError
else:
node = app.nm.getByAddress(address)
if node:
if node.isRunning():
# cloned/evil/buggy node connecting to us
raise ProtocolError('already connected')
else:
assert not node.isConnected()
node.setAddress(address)
node.setRunning()
# ask the app to identify the node; if it is refused, an exception is
# raised
result = app.identifyNode(node_type, uuid, node)
(uuid, node, state, handler, node_ctor) = result
if uuid is None:
# no valid uuid, give it one
uuid = app.getNewUUID(node_type)
if node is None:
# new node
node = node_ctor(uuid=uuid, address=address)
# set up the node
node.setUUID(uuid)
node.setState(state)
node.setConnection(conn)
# set up the connection
conn.setUUID(uuid)
conn.setHandler(handler)
# answer
args = (NodeTypes.MASTER, app.uuid, app.pt.getPartitions(),
app.pt.getReplicas(), uuid)
conn.answer(Packets.AcceptIdentification(*args))
# trigger the event
handler.connectionCompleted(conn)
app.broadcastNodesInformation([node])
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/secondary.py 0000664 0000000 0000000 00000010030 11634614701 0026616 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.master.handlers import MasterHandler
from neo.lib.exception import ElectionFailure, PrimaryFailure
from neo.lib.protocol import NodeTypes, Packets
class SecondaryMasterHandler(MasterHandler):
""" Handler used by primary to handle secondary masters"""
def connectionLost(self, conn, new_state):
node = self.app.nm.getByUUID(conn.getUUID())
assert node is not None
node.setDown()
self.app.broadcastNodesInformation([node])
def announcePrimary(self, conn):
raise ElectionFailure, 'another primary arises'
def reelectPrimary(self, conn):
raise ElectionFailure, 'reelection requested'
class PrimaryHandler(MasterHandler):
""" Handler used by secondaries to handle primary master"""
def packetReceived(self, conn, packet):
if not conn.isServer():
node = self.app.nm.getByAddress(conn.getAddress())
if not node.isBroken():
node.setRunning()
MasterHandler.packetReceived(self, conn, packet)
def connectionLost(self, conn, new_state):
self.app.primary_master_node.setDown()
raise PrimaryFailure, 'primary master is dead'
def connectionFailed(self, conn):
self.app.primary_master_node.setDown()
raise PrimaryFailure, 'primary master is dead'
def connectionCompleted(self, conn):
addr = conn.getAddress()
node = self.app.nm.getByAddress(addr)
# connection successful, set the node as running
node.setRunning()
conn.ask(Packets.AskPrimary())
MasterHandler.connectionCompleted(self, conn)
def reelectPrimary(self, conn):
raise ElectionFailure, 'reelection requested'
def notifyNodeInformation(self, conn, node_list):
app = self.app
for node_type, addr, uuid, state in node_list:
if node_type != NodeTypes.MASTER:
# No interest.
continue
# Register new master nodes.
if app.server == addr:
# This is self.
continue
else:
n = app.nm.getByAddress(addr)
# master node must be known
assert n is not None
if uuid is not None:
# If I don't know the UUID yet, believe what the peer
# told me at the moment.
if n.getUUID() is None:
n.setUUID(uuid)
def acceptIdentification(self, conn, node_type,
uuid, num_partitions,
num_replicas, your_uuid):
app = self.app
node = app.nm.getByAddress(conn.getAddress())
assert node_type == NodeTypes.MASTER
if your_uuid != app.uuid:
# uuid conflict happened, accept the new one
app.uuid = your_uuid
conn.setUUID(uuid)
node.setUUID(uuid)
def answerPrimary(self, conn, primary_uuid, known_master_list):
app = self.app
if primary_uuid != app.primary_master_node.getUUID():
raise PrimaryFailure, 'unexpected primary uuid'
conn.ask(Packets.RequestIdentification(
NodeTypes.MASTER,
app.uuid,
app.server,
app.name
))
def notifyClusterInformation(self, conn, state):
pass
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/shutdown.py 0000664 0000000 0000000 00000003010 11634614701 0026502 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib import protocol
from neo.master.handlers import BaseServiceHandler
class ShutdownHandler(BaseServiceHandler):
"""This class deals with events for a shutting down phase."""
def requestIdentification(self, conn, node_type,
uuid, address, name):
neo.lib.logging.error('reject any new connection')
raise protocol.ProtocolError('cluster is shutting down')
def askPrimary(self, conn):
neo.lib.logging.error('reject any new demand for primary master')
raise protocol.ProtocolError('cluster is shutting down')
def askBeginTransaction(self, conn, tid):
neo.lib.logging.error('reject any new demand for new tid')
raise protocol.ProtocolError('cluster is shutting down')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/handlers/storage.py 0000664 0000000 0000000 00000006765 11634614701 0026317 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.protocol import ProtocolError
from neo.lib.protocol import Packets
from neo.master.handlers import BaseServiceHandler
from neo.lib.exception import OperationFailure
from neo.lib.util import dump
from neo.lib.connector import ConnectorConnectionClosedException
from neo.lib.pt import PartitionTableException
class StorageServiceHandler(BaseServiceHandler):
""" Handler dedicated to storages during service state """
def connectionCompleted(self, conn):
# TODO: unit test
app = self.app
uuid = conn.getUUID()
node = app.nm.getByUUID(uuid)
app.setStorageNotReady(uuid)
# XXX: what other values could happen ?
if node.isRunning():
conn.notify(Packets.StartOperation())
def nodeLost(self, conn, node):
neo.lib.logging.info('storage node lost')
assert not node.isRunning(), node.getState()
if not self.app.pt.operational():
raise OperationFailure, 'cannot continue operation'
# This is intentionally placed after the raise, because the last cell
# of a partition must not be outdated, to allow a cluster restart.
self.app.outdateAndBroadcastPartition()
self.app.tm.forget(conn.getUUID())
if self.app.packing is not None:
self.answerPack(conn, False)
def askLastIDs(self, conn):
app = self.app
loid = app.tm.getLastOID()
ltid = app.tm.getLastTID()
conn.answer(Packets.AnswerLastIDs(loid, ltid, app.pt.getID()))
def askUnfinishedTransactions(self, conn):
tm = self.app.tm
pending_list = tm.registerForNotification(conn.getUUID())
last_tid = tm.getLastTID()
p = Packets.AnswerUnfinishedTransactions(last_tid, pending_list)
conn.answer(p)
def answerInformationLocked(self, conn, ttid):
tm = self.app.tm
if ttid not in tm:
raise ProtocolError('Unknown transaction')
# transaction locked on this storage node
self.app.tm.lock(ttid, conn.getUUID())
def notifyReplicationDone(self, conn, offset):
node = self.app.nm.getByUUID(conn.getUUID())
neo.lib.logging.debug("%s is up for offset %s" % (node, offset))
try:
cell_list = self.app.pt.setUpToDate(node, offset)
except PartitionTableException, e:
raise ProtocolError(str(e))
self.app.broadcastPartitionChanges(cell_list)
def answerPack(self, conn, status):
app = self.app
if app.packing is not None:
client, msg_id, uid_set = app.packing
uid_set.remove(conn.getUUID())
if not uid_set:
app.packing = None
if not client.isClosed():
client.answer(Packets.AnswerPack(True), msg_id=msg_id)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/pt.py 0000664 0000000 0000000 00000027543 11634614701 0023473 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib.pt
from struct import pack, unpack
from neo.lib.protocol import CellStates
from neo.lib.pt import PartitionTableException
from neo.lib.pt import PartitionTable
class PartitionTable(PartitionTable):
"""This class manages a partition table for the primary master node"""
def setID(self, id):
assert isinstance(id, (int, long)) or id is None, id
self._id = id
def setNextID(self):
if self._id is None:
raise RuntimeError, 'I do not know the last Partition Table ID'
self._id += 1
return self._id
def make(self, node_list):
"""Make a new partition table from scratch."""
# start with the first PTID
self._id = 1
# First, filter the list of nodes.
node_list = [n for n in node_list if n.isRunning() \
and n.getUUID() is not None]
if len(node_list) == 0:
# Impossible.
raise RuntimeError, 'cannot make a partition table with an ' \
'empty storage node list'
# Take into account that the number of storage nodes may be less
# than the number of replicas.
repeats = min(self.nr + 1, len(node_list))
index = 0
for offset in xrange(self.np):
row = []
for _ in xrange(repeats):
node = node_list[index]
row.append(neo.lib.pt.Cell(node))
self.count_dict[node] = self.count_dict.get(node, 0) + 1
index += 1
if index == len(node_list):
index = 0
self.partition_list[offset] = row
self.num_filled_rows = self.np
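make() above fills the table round-robin: each of the np partitions gets min(nr + 1, number of nodes) cells, cycling through the node list so cell counts stay nearly equal across nodes. A standalone sketch of that assignment (plain strings stand in for storage node objects):

```python
def make_table(nodes, np, nr):
    """Round-robin partition assignment, mirroring PartitionTable.make."""
    # A partition cannot hold more cells than there are nodes.
    repeats = min(nr + 1, len(nodes))
    table, index = [], 0
    for _ in range(np):
        row = []
        for _ in range(repeats):
            row.append(nodes[index])
            index = (index + 1) % len(nodes)
        table.append(row)
    return table

table = make_table(['s1', 's2', 's3'], np=4, nr=1)
print(table)  # [['s1', 's2'], ['s3', 's1'], ['s2', 's3'], ['s1', 's2']]
```

Because the index keeps cycling across rows, no node ever appears twice in one row and the per-node cell counts differ by at most one.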
def findLeastUsedNode(self, excluded_node_list = ()):
min_count = self.np + 1
min_node = None
for node, count in self.count_dict.iteritems():
if min_count > count \
and node not in excluded_node_list \
and node.isRunning():
min_node = node
min_count = count
return min_node
def dropNode(self, node):
cell_list = []
uuid = node.getUUID()
for offset, row in enumerate(self.partition_list):
if row is None:
continue
for cell in row:
if cell.getNode() is node:
if not cell.isFeeding():
# If this cell is not feeding, find another node
# to be added.
node_list = [c.getNode() for c in row]
n = self.findLeastUsedNode(node_list)
if n is not None:
row.append(neo.lib.pt.Cell(n,
CellStates.OUT_OF_DATE))
self.count_dict[n] += 1
cell_list.append((offset, n.getUUID(),
CellStates.OUT_OF_DATE))
row.remove(cell)
cell_list.append((offset, uuid, CellStates.DISCARDED))
break
try:
del self.count_dict[node]
except KeyError:
pass
return cell_list
def load(self, ptid, row_list, nm):
"""
Load a partition table from a storage node during the recovery.
Return the new storage nodes registered
"""
# check offsets
for offset, _row in row_list:
if offset >= self.getPartitions():
raise IndexError, offset
# store the partition table
self.clear()
self._id = ptid
new_nodes = []
for offset, row in row_list:
for uuid, state in row:
node = nm.getByUUID(uuid)
if node is None:
node = nm.createStorage(uuid=uuid)
new_nodes.append(node.asTuple())
self.setCell(offset, node, state)
return new_nodes
def setUpToDate(self, node, offset):
"""Set a cell as up-to-date"""
uuid = node.getUUID()
# check the partition is assigned and known as outdated
for cell in self.getCellList(offset):
if cell.getUUID() == uuid:
if not cell.isOutOfDate():
raise PartitionTableException('Non-outdated partition')
break
else:
raise PartitionTableException('Non-assigned partition')
# update the partition table
self.setCell(offset, node, CellStates.UP_TO_DATE)
cell_list = [(offset, uuid, CellStates.UP_TO_DATE)]
# If the partition contains a feeding cell, drop it now.
for feeding_cell in self.getCellList(offset):
if feeding_cell.isFeeding():
cell_list.append(self.removeCell(offset,
feeding_cell.getNode()))
break
return cell_list
def addNode(self, node):
"""Add a node. Take it into account that it might not be really a new
node. The strategy is, if a row does not contain a good number of
cells, add this node to the row, unless the node is already present
in the same row. Otherwise, check if this node should replace another
cell."""
cell_list = []
node_count = self.count_dict.get(node, 0)
for offset, row in enumerate(self.partition_list):
feeding_cell = None
max_count = 0
max_cell = None
num_cells = 0
skip = False
for cell in row:
if cell.getNode() == node:
skip = True
break
if cell.isFeeding():
feeding_cell = cell
else:
num_cells += 1
count = self.count_dict[cell.getNode()]
if count > max_count:
max_count = count
max_cell = cell
if skip:
continue
if num_cells <= self.nr:
row.append(neo.lib.pt.Cell(node, CellStates.OUT_OF_DATE))
cell_list.append((offset, node.getUUID(),
CellStates.OUT_OF_DATE))
node_count += 1
else:
if max_count - node_count > 1:
if feeding_cell is not None or max_cell.isOutOfDate():
# If there is a feeding cell already or it is
# out-of-date, just drop the node.
row.remove(max_cell)
cell_list.append((offset, max_cell.getUUID(),
CellStates.DISCARDED))
self.count_dict[max_cell.getNode()] -= 1
else:
# Otherwise, use it as a feeding cell for safety.
max_cell.setState(CellStates.FEEDING)
cell_list.append((offset, max_cell.getUUID(),
CellStates.FEEDING))
# Don't count a feeding cell.
self.count_dict[max_cell.getNode()] -= 1
row.append(neo.lib.pt.Cell(node, CellStates.OUT_OF_DATE))
cell_list.append((offset, node.getUUID(),
CellStates.OUT_OF_DATE))
node_count += 1
self.count_dict[node] = node_count
self.log()
return cell_list
def tweak(self):
"""Test if nodes are distributed uniformly. Otherwise, correct the
partition table."""
changed_cell_list = []
for offset, row in enumerate(self.partition_list):
removed_cell_list = []
feeding_cell = None
out_of_date_cell_list = []
up_to_date_cell_list = []
for cell in row:
if cell.getNode().isBroken():
# Remove a broken cell.
removed_cell_list.append(cell)
elif cell.isFeeding():
if feeding_cell is None:
feeding_cell = cell
else:
# Remove an excessive feeding cell.
removed_cell_list.append(cell)
elif cell.isOutOfDate():
out_of_date_cell_list.append(cell)
else:
up_to_date_cell_list.append(cell)
# If all cells are up-to-date, a feeding cell is not required.
if len(out_of_date_cell_list) == 0 and feeding_cell is not None:
removed_cell_list.append(feeding_cell)
ideal_num = self.nr + 1
while len(out_of_date_cell_list) + len(up_to_date_cell_list) > \
ideal_num:
# This row contains too many cells.
if len(up_to_date_cell_list) > 1:
                    # There are multiple up-to-date cells, so drop the cell
                    # whose node is the most used.
cell_list = out_of_date_cell_list + up_to_date_cell_list
else:
# Drop an out-of-date cell.
cell_list = out_of_date_cell_list
max_count = 0
chosen_cell = None
for cell in cell_list:
count = self.count_dict[cell.getNode()]
if max_count < count:
max_count = count
chosen_cell = cell
removed_cell_list.append(chosen_cell)
try:
out_of_date_cell_list.remove(chosen_cell)
except ValueError:
up_to_date_cell_list.remove(chosen_cell)
# Now remove cells really.
for cell in removed_cell_list:
row.remove(cell)
if not cell.isFeeding():
self.count_dict[cell.getNode()] -= 1
changed_cell_list.append((offset, cell.getUUID(),
CellStates.DISCARDED))
# Add cells, if a row contains less than the number of replicas.
for offset, row in enumerate(self.partition_list):
num_cells = 0
for cell in row:
if not cell.isFeeding():
num_cells += 1
while num_cells <= self.nr:
node = self.findLeastUsedNode([cell.getNode() for cell in row])
if node is None:
break
row.append(neo.lib.pt.Cell(node, CellStates.OUT_OF_DATE))
changed_cell_list.append((offset, node.getUUID(),
CellStates.OUT_OF_DATE))
self.count_dict[node] += 1
num_cells += 1
self.log()
return changed_cell_list
def outdate(self):
"""Outdate all non-working nodes."""
cell_list = []
for offset, row in enumerate(self.partition_list):
for cell in row:
if not cell.getNode().isRunning() and not cell.isOutOfDate():
cell.setState(CellStates.OUT_OF_DATE)
cell_list.append((offset, cell.getUUID(),
CellStates.OUT_OF_DATE))
return cell_list
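The methods above return (offset, uuid, state) change tuples instead of mutating silently, so that the master can broadcast exactly what changed. A toy version of the same pattern, using plain tuples instead of the real Cell and node objects (names here are illustrative, not the NEO API):

```python
# Cell states modeled as plain strings for this sketch.
UP_TO_DATE, OUT_OF_DATE = 'UP_TO_DATE', 'OUT_OF_DATE'

def outdate(partition_list, running):
    """Mark cells of non-running nodes out-of-date; return change tuples."""
    changes = []
    for offset, row in enumerate(partition_list):
        for i, (uuid, state) in enumerate(row):
            if uuid not in running and state != OUT_OF_DATE:
                row[i] = (uuid, OUT_OF_DATE)
                changes.append((offset, uuid, OUT_OF_DATE))
    return changes

pt = [[('a', UP_TO_DATE), ('b', UP_TO_DATE)], [('b', UP_TO_DATE)]]
changes = outdate(pt, running={'a'})
```

Calling it a second time with the same `running` set yields an empty change list, which is what makes the broadcast idempotent.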
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/recovery.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from struct import pack
import neo
from neo.lib.util import dump
from neo.lib.protocol import Packets, ProtocolError, ClusterStates, NodeStates
from neo.lib.protocol import NotReadyError, ZERO_OID, ZERO_TID
from neo.master.handlers import MasterHandler
class RecoveryManager(MasterHandler):
"""
Manage the cluster recovery
"""
def __init__(self, app):
super(RecoveryManager, self).__init__(app)
        # The greatest partition table ID found so far.
self.target_ptid = None
def getHandler(self):
return self
def identifyStorageNode(self, uuid, node):
"""
Returns the handler for storage nodes
"""
return uuid, NodeStates.PENDING, self
def run(self):
"""
Recover the status about the cluster. Obtain the last OID, the last
TID, and the last Partition Table ID from storage nodes, then get
back the latest partition table or make a new table from scratch,
if this is the first time.
"""
neo.lib.logging.info('begin the recovery of the status')
self.app.changeClusterState(ClusterStates.RECOVERING)
em = self.app.em
self.app.tm.setLastOID(None)
self.app.pt.setID(None)
# collect the last partition table available
while 1:
em.poll(1)
if self.app._startup_allowed:
allowed_node_set = set()
for node in self.app.nm.getStorageList():
if node.isPending():
break # waiting for an answer
if node.isRunning():
allowed_node_set.add(node)
else:
if allowed_node_set:
                        break # at least one ready storage node
neo.lib.logging.info('startup allowed')
if self.app.pt.getID() is None:
neo.lib.logging.info('creating a new partition table')
# reset IDs generators & build new partition with running nodes
self.app.tm.setLastOID(ZERO_OID)
self.app.pt.make(allowed_node_set)
self._broadcastPartitionTable(self.app.pt.getID(),
self.app.pt.getRowList())
        # collect nodes that are connected but not in the selected partition
        # table, and set them in pending state
refused_node_set = allowed_node_set.difference(
self.app.pt.getNodeList())
if refused_node_set:
for node in refused_node_set:
node.setPending()
self.app.broadcastNodesInformation(refused_node_set)
self.app.setLastTransaction(self.app.tm.getLastTID())
neo.lib.logging.debug(
'cluster starts with loid=%s and this partition ' \
                'table:', dump(self.app.tm.getLastOID()))
self.app.pt.log()
def connectionLost(self, conn, new_state):
node = self.app.nm.getByUUID(conn.getUUID())
assert node is not None
if node.getState() == new_state:
return
node.setState(new_state)
        # broadcast to all so that admin nodes get informed
self.app.broadcastNodesInformation([node])
def connectionCompleted(self, conn):
# ask the last IDs to perform the recovery
conn.ask(Packets.AskLastIDs())
def answerLastIDs(self, conn, loid, ltid, lptid):
# Get max values.
if loid is not None:
self.app.tm.setLastOID(max(loid, self.app.tm.getLastOID()))
if ltid is not None:
self.app.tm.setLastTID(ltid)
if lptid > self.target_ptid:
# something newer
self.target_ptid = lptid
conn.ask(Packets.AskPartitionTable())
else:
node = self.app.nm.getByUUID(conn.getUUID())
assert node.isPending()
node.setRunning()
self.app.broadcastNodesInformation([node])
def answerPartitionTable(self, conn, ptid, row_list):
node = self.app.nm.getByUUID(conn.getUUID())
assert node.isPending()
node.setRunning()
if ptid != self.target_ptid:
# If this is not from a target node, ignore it.
neo.lib.logging.warn('Got %s while waiting %s', dump(ptid),
dump(self.target_ptid))
else:
self._broadcastPartitionTable(ptid, row_list)
self.app.broadcastNodesInformation([node])
def _broadcastPartitionTable(self, ptid, row_list):
try:
new_nodes = self.app.pt.load(ptid, row_list, self.app.nm)
except IndexError:
raise ProtocolError('Invalid offset')
else:
notification = Packets.NotifyNodeInformation(new_nodes)
ptid = self.app.pt.getID()
row_list = self.app.pt.getRowList()
partition_table = Packets.SendPartitionTable(ptid, row_list)
# notify the admin nodes
for node in self.app.nm.getAdminList(only_identified=True):
node.notify(notification)
node.notify(partition_table)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/transactions.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from time import time, gmtime
from struct import pack, unpack
from neo.lib.protocol import ZERO_TID
from datetime import timedelta, datetime
from neo.lib.util import dump, u64, p64
import neo.lib
TID_LOW_OVERFLOW = 2**32
TID_LOW_MAX = TID_LOW_OVERFLOW - 1
SECOND_PER_TID_LOW = 60.0 / TID_LOW_OVERFLOW
TID_CHUNK_RULES = (
(-1900, 0),
(-1, 12),
(-1, 31),
(0, 24),
(0, 60),
)
def packTID(utid):
"""
Pack given 2-tuple containing:
- a 5-tuple containing year, month, day, hour and minute
- seconds scaled to 60:2**32
    into a 64-bit TID.
"""
higher, lower = utid
assert len(higher) == len(TID_CHUNK_RULES), higher
packed_higher = 0
for value, (offset, multiplicator) in zip(higher, TID_CHUNK_RULES):
assert isinstance(value, (int, long)), value
value += offset
assert 0 <= value, (value, offset, multiplicator)
assert multiplicator == 0 or value < multiplicator, (value,
offset, multiplicator)
packed_higher *= multiplicator
packed_higher += value
assert isinstance(lower, (int, long)), lower
assert 0 <= lower < TID_LOW_OVERFLOW, hex(lower)
return pack('!LL', packed_higher, lower)
def unpackTID(ptid):
"""
    Unpack given 64-bit TID into a 2-tuple containing:
- a 5-tuple containing year, month, day, hour and minute
- seconds scaled to 60:2**32
"""
packed_higher, lower = unpack('!LL', ptid)
higher = []
append = higher.append
for offset, multiplicator in reversed(TID_CHUNK_RULES):
if multiplicator:
packed_higher, value = divmod(packed_higher, multiplicator)
else:
packed_higher, value = 0, packed_higher
append(value - offset)
higher.reverse()
return (tuple(higher), lower)
def addTID(ptid, offset):
"""
Offset given packed TID.
"""
higher, lower = unpackTID(ptid)
high_offset, lower = divmod(lower + offset, TID_LOW_OVERFLOW)
if high_offset:
d = datetime(*higher) + timedelta(0, 60 * high_offset)
higher = (d.year, d.month, d.day, d.hour, d.minute)
return packTID((higher, lower))
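packTID/unpackTID above encode a wall-clock timestamp into an 8-byte TID by mixed-radix packing of the date fields. A minimal standalone sketch of the same layout; the names `pack_tid`/`unpack_tid` are illustrative re-implementations, not imports from neo:

```python
from struct import pack, unpack

# Same (offset, multiplicator) chunk rules as the module above.
TID_CHUNK_RULES = ((-1900, 0), (-1, 12), (-1, 31), (0, 24), (0, 60))

def pack_tid(higher, lower):
    """Pack (year, month, day, hour, minute) + scaled seconds into 8 bytes."""
    packed = 0
    for value, (offset, mult) in zip(higher, TID_CHUNK_RULES):
        packed = packed * mult + value + offset
    return pack('!LL', packed, lower)

def unpack_tid(tid):
    """Inverse of pack_tid: peel fields off with divmod, least digit first."""
    packed, lower = unpack('!LL', tid)
    higher = []
    for offset, mult in reversed(TID_CHUNK_RULES):
        if mult:
            packed, value = divmod(packed, mult)
        else:
            packed, value = 0, packed
        higher.append(value - offset)
    higher.reverse()
    return tuple(higher), lower

utid = ((2011, 9, 12, 10, 30), 12345)
assert unpack_tid(pack_tid(*utid)) == utid
```

The mixed radix keeps TIDs monotonically increasing with time, which is what lets the managers compare them as raw 8-byte strings.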
class DelayedError(Exception):
pass
class Transaction(object):
"""
A pending transaction
"""
_tid = None
_msg_id = None
_oid_list = None
_prepared = False
    # uuid sets hold flags to know who has locked the transaction
_uuid_set = None
_lock_wait_uuid_set = None
def __init__(self, node, ttid):
"""
Prepare the transaction, set OIDs and UUIDs related to it
"""
self._node = node
self._ttid = ttid
self._birth = time()
# store storage uuids that must be notified at commit
self._notification_set = set()
def __repr__(self):
return "<%s(client=%r, tid=%r, oids=%r, storages=%r, age=%.2fs) at %x>" % (
self.__class__.__name__,
self._node,
dump(self._tid),
[dump(x) for x in self._oid_list or ()],
[dump(x) for x in self._uuid_set or ()],
time() - self._birth,
id(self),
)
def getNode(self):
"""
        Return the node that began the transaction
"""
return self._node
def getTTID(self):
"""
Return the temporary transaction ID.
"""
return self._ttid
def getTID(self):
"""
Return the transaction ID
"""
return self._tid
def getMessageId(self):
"""
Returns the packet ID to use in the answer
"""
return self._msg_id
def getUUIDList(self):
"""
        Returns the UUIDs of the nodes that lock the transaction
"""
return list(self._uuid_set)
def getOIDList(self):
"""
Returns the list of OIDs used in the transaction
"""
return list(self._oid_list)
def isPrepared(self):
"""
Returns True if the commit has been requested by the client
"""
return self._prepared
def registerForNotification(self, uuid):
"""
Register a storage node that requires a notification at commit
"""
self._notification_set.add(uuid)
def getNotificationUUIDList(self):
"""
        Returns the list of storages waiting for the transaction to be
finished
"""
return list(self._notification_set)
def prepare(self, tid, oid_list, uuid_list, msg_id):
self._tid = tid
self._oid_list = oid_list
self._msg_id = msg_id
self._uuid_set = set(uuid_list)
self._lock_wait_uuid_set = set(uuid_list)
self._prepared = True
def forget(self, uuid):
"""
Given storage was lost while waiting for its lock, stop waiting
for it.
Does nothing if the node was not part of the transaction.
"""
# XXX: We might lose information that a storage successfully locked
# data but was later found to be disconnected. This loss has no impact
# on current code, but it might be disturbing to reader or future code.
if self._prepared:
self._lock_wait_uuid_set.discard(uuid)
self._uuid_set.discard(uuid)
return self.locked()
return False
def lock(self, uuid):
"""
Define that a node has locked the transaction
Returns true if all nodes are locked
"""
self._lock_wait_uuid_set.remove(uuid)
return self.locked()
def locked(self):
"""
Returns true if all nodes are locked
"""
return not self._lock_wait_uuid_set
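The lock/forget pair above tracks which storages still have to acknowledge before the commit may proceed. A minimal model of just that wait-set (an illustrative class, not the real Transaction; it uses `discard` for both paths, whereas the real `lock` insists the uuid is present):

```python
class LockTracker:
    """Track storage acknowledgements for one two-phase-commit round."""

    def __init__(self, uuids):
        self.waiting = set(uuids)

    def lock(self, uuid):
        """A storage acknowledged its lock; return True once none are left."""
        self.waiting.discard(uuid)
        return not self.waiting

    def forget(self, uuid):
        """A storage was lost; stop waiting for it."""
        return self.lock(uuid)

t = LockTracker(['s1', 's2'])
t.lock('s1')        # still waiting for s2
```

Either an acknowledgement or a node loss can be the event that makes the transaction fully locked, which is why both return the same "all locked" boolean.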
class TransactionManager(object):
"""
Manage current transactions
"""
_last_tid = ZERO_TID
_next_ttid = 0
def __init__(self, on_commit):
# ttid -> transaction
self._ttid_dict = {}
# node -> transactions mapping
self._node_dict = {}
self._last_oid = None
self._on_commit = on_commit
# queue filled with ttids pointing to transactions with increasing tids
self._queue = []
def __getitem__(self, ttid):
"""
Return the transaction object for this TID
"""
# XXX: used by unit tests only
return self._ttid_dict[ttid]
def __contains__(self, ttid):
"""
Returns True if this is a pending transaction
"""
return ttid in self._ttid_dict
def getNextOIDList(self, num_oids):
""" Generate a new OID list """
if self._last_oid is None:
raise RuntimeError, 'I do not know the last OID'
oid = unpack('!Q', self._last_oid)[0] + 1
oid_list = [pack('!Q', oid + i) for i in xrange(num_oids)]
self._last_oid = oid_list[-1]
return oid_list
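getNextOIDList hands out batches of consecutive 8-byte big-endian identifiers and remembers the last one issued. A standalone sketch of the same batch scheme (the function name is illustrative):

```python
from struct import pack, unpack

def next_oid_batch(last_oid, num_oids):
    """Return (new_last_oid, batch) of consecutive big-endian 8-byte OIDs."""
    base = unpack('!Q', last_oid)[0] + 1
    batch = [pack('!Q', base + i) for i in range(num_oids)]
    return batch[-1], batch

last, batch = next_oid_batch(b'\x00' * 8, 3)   # OIDs 1, 2, 3
```

Because the encoding is big-endian, lexicographic comparison of the raw bytes matches numeric comparison, which is what `updateLastOID` below relies on when it takes `max(oid_list)`.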
def updateLastOID(self, oid_list):
"""
Updates the last oid with the max of those supplied if greater than
the current known, returns True if changed
"""
max_oid = oid_list and max(oid_list) or None # oid_list might be empty
if max_oid > self._last_oid:
self._last_oid = max_oid
return True
return False
def setLastOID(self, oid):
self._last_oid = oid
def getLastOID(self):
return self._last_oid
def _nextTID(self, ttid, divisor):
"""
Compute the next TID based on the current time and check collisions.
Also, adjust it so that
tid % divisor == ttid % divisor
while preserving
min_tid < tid
When constraints allow, prefer decreasing generated TID, to avoid
fast-forwarding to future dates.
"""
assert isinstance(ttid, basestring), repr(ttid)
assert isinstance(divisor, (int, long)), repr(divisor)
tm = time()
gmt = gmtime(tm)
tid = packTID((
(gmt.tm_year, gmt.tm_mon, gmt.tm_mday, gmt.tm_hour,
gmt.tm_min),
int((gmt.tm_sec % 60 + (tm - int(tm))) / SECOND_PER_TID_LOW)
))
min_tid = self._last_tid
if tid <= min_tid:
tid = addTID(min_tid, 1)
# We know we won't have room to adjust by decreasing.
try_decrease = False
else:
try_decrease = True
ref_remainder = u64(ttid) % divisor
remainder = u64(tid) % divisor
if ref_remainder != remainder:
if try_decrease:
new_tid = addTID(tid, ref_remainder - divisor - remainder)
assert u64(new_tid) % divisor == ref_remainder, (dump(new_tid),
ref_remainder)
if new_tid <= min_tid:
new_tid = addTID(new_tid, divisor)
else:
                if ref_remainder < remainder:
ref_remainder += divisor
new_tid = addTID(tid, ref_remainder - remainder)
assert min_tid < new_tid, (dump(min_tid), dump(tid), dump(new_tid))
tid = new_tid
self._last_tid = tid
return self._last_tid
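_nextTID adjusts the clock-derived TID so that `tid % divisor == ttid % divisor` while keeping it strictly above the last TID, stepping backwards when there is room so as not to drift into future dates. The adjustment can be sketched on plain integers (a simplification: real TIDs are 8-byte packed timestamps and `addTID` carries over into the date fields):

```python
def align_tid(tid, min_tid, ttid, divisor):
    """Pick a TID near `tid`, strictly above min_tid, congruent to ttid."""
    if tid <= min_tid:
        tid, try_decrease = min_tid + 1, False   # no room to step down
    else:
        try_decrease = True
    ref, rem = ttid % divisor, tid % divisor
    if ref != rem:
        if try_decrease:
            tid += ref - divisor - rem           # step down to the congruence
            if tid <= min_tid:
                tid += divisor                   # went too low, step back up
        else:
            if ref < rem:
                ref += divisor                   # must move forward only
            tid += ref - rem
    assert tid > min_tid and tid % divisor == ref % divisor
    return tid
```

For example, with `divisor=10`, a clock TID of 100 and a ttid remainder of 3, the result is 93: the generator prefers the nearest earlier congruent value when it is still above `min_tid`.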
def getLastTID(self):
"""
Returns the last TID used
"""
return self._last_tid
def setLastTID(self, tid):
"""
Set the last TID, keep the previous if lower
"""
self._last_tid = max(self._last_tid, tid)
def getTTID(self):
"""
Generate a temporary TID, to be used only during a single node's
2PC.
"""
self._next_ttid += 1
return p64(self._next_ttid)
def reset(self):
"""
Discard all manager content
This doesn't reset the last TID.
"""
self._ttid_dict = {}
self._node_dict = {}
def hasPending(self):
"""
Returns True if some transactions are pending
"""
return bool(self._ttid_dict)
def registerForNotification(self, uuid):
"""
Return the list of pending transaction IDs
"""
# remember that this node must be notified when pending transactions
# will be finished
for txn in self._ttid_dict.itervalues():
txn.registerForNotification(uuid)
return set(self._ttid_dict.keys())
def begin(self, node, tid=None):
"""
Generate a new TID
"""
if tid is None:
# No TID requested, generate a temporary one
ttid = self.getTTID()
else:
# Use of specific TID requested, queue it immediately and update
# last TID.
self._queue.append((node.getUUID(), tid))
self.setLastTID(tid)
ttid = tid
txn = Transaction(node, ttid)
self._ttid_dict[ttid] = txn
self._node_dict.setdefault(node, {})[ttid] = txn
neo.lib.logging.debug('Begin %s', txn)
return ttid
def prepare(self, ttid, divisor, oid_list, uuid_list, msg_id):
"""
Prepare a transaction to be finished
"""
        # XXX: not efficient, but the list should often be small
txn = self._ttid_dict[ttid]
node = txn.getNode()
for _, tid in self._queue:
if ttid == tid:
break
else:
tid = self._nextTID(ttid, divisor)
self._queue.append((node.getUUID(), ttid))
neo.lib.logging.debug('Finish TXN %s for %s (was %s)',
dump(tid), node, dump(ttid))
txn.prepare(tid, oid_list, uuid_list, msg_id)
return tid
def remove(self, uuid, ttid):
"""
        Remove a transaction, committed or aborted
"""
neo.lib.logging.debug('Remove TXN %s', dump(ttid))
try:
# only in case of an import:
self._queue.remove((uuid, ttid))
except ValueError:
# finish might not have been started
pass
ttid_dict = self._ttid_dict
if ttid in ttid_dict:
txn = ttid_dict[ttid]
node = txn.getNode()
# ...and tried to finish
del ttid_dict[ttid]
del self._node_dict[node][ttid]
def lock(self, ttid, uuid):
"""
Set that a node has locked the transaction.
If transaction is completely locked, calls function given at
        instantiation time.
"""
neo.lib.logging.debug('Lock TXN %s for %s', dump(ttid), dump(uuid))
assert ttid in self._ttid_dict, "Transaction not started"
txn = self._ttid_dict[ttid]
if txn.lock(uuid) and self._queue[0][1] == ttid:
            # all storages are locked, so unlock the commit queue
self._unlockPending()
def forget(self, uuid):
"""
A storage node has been lost, don't expect a reply from it for
current transactions
"""
unlock = False
# iterate over a copy because _unlockPending may alter the dict
for ttid, txn in self._ttid_dict.items():
if txn.forget(uuid) and self._queue[0][1] == ttid:
unlock = True
if unlock:
self._unlockPending()
def _unlockPending(self):
# unlock pending transactions
queue = self._queue
pop = queue.pop
insert = queue.insert
on_commit = self._on_commit
get = self._ttid_dict.get
while queue:
uuid, ttid = pop(0)
txn = get(ttid, None)
# _queue can contain un-prepared transactions
if txn is not None and txn.locked():
on_commit(txn)
else:
insert(0, (uuid, ttid))
break
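_unlockPending pops fully-locked transactions off the head of the FIFO queue and stops at the first one still waiting, so commits happen in TID order even when later transactions finish locking first. A minimal model of that head-of-queue discipline (illustrative names, not the real classes):

```python
def unlock_pending(queue, locked, on_commit):
    """Commit head-of-queue transactions whose ttid is in `locked`."""
    while queue and queue[0] in locked:
        on_commit(queue.pop(0))

queue = ['t1', 't2', 't3']
committed = []
unlock_pending(queue, locked={'t1', 't3'}, on_commit=committed.append)
# t1 commits; t2 blocks the head, so t3 must wait despite being locked
```

This is why `lock()` and `forget()` above only trigger `_unlockPending` when the affected ttid is at `self._queue[0]`: unlocking anything else cannot change the head of the queue.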
def abortFor(self, node):
"""
Abort pending transactions initiated by a node
"""
neo.lib.logging.debug('Abort TXN for %s', node)
uuid = node.getUUID()
        # XXX: this loop is useful only during an import
for nuuid, ntid in list(self._queue):
if nuuid == uuid:
self._queue.remove((uuid, ntid))
if node in self._node_dict:
# remove transactions
remove = self.remove
for ttid in self._node_dict[node].keys():
if not self._ttid_dict[ttid].isPrepared():
remove(uuid, ttid)
# discard node entry
del self._node_dict[node]
def log(self):
neo.lib.logging.info('Transactions:')
for txn in self._ttid_dict.itervalues():
neo.lib.logging.info(' %r', txn)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/master/verification.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.lib.util import dump
from neo.lib.protocol import ClusterStates, Packets, NodeStates
from neo.master.handlers import BaseServiceHandler
class VerificationFailure(Exception):
"""
    Exception raised each time the cluster integrity check fails:
    - A required storage node is missing
    - A transaction or an object is missing on a node
"""
pass
class VerificationManager(BaseServiceHandler):
"""
Manager for verification step of a NEO cluster:
- Wait for at least one available storage per partition
- Check if all expected content is present
"""
def __init__(self, app):
BaseServiceHandler.__init__(self, app)
self._oid_set = set()
self._tid_set = set()
self._uuid_set = set()
self._object_present = False
def _askStorageNodesAndWait(self, packet, node_list):
poll = self.app.em.poll
operational = self.app.pt.operational
uuid_set = self._uuid_set
uuid_set.clear()
for node in node_list:
uuid_set.add(node.getUUID())
node.ask(packet)
while True:
poll(1)
if not operational():
raise VerificationFailure
if not uuid_set:
break
def _gotAnswerFrom(self, uuid):
"""
Returns True if answer from given uuid is waited upon by
_askStorageNodesAndWait, False otherwise.
Also, mark this uuid as having answered, so it stops being waited upon
by _askStorageNodesAndWait.
"""
try:
self._uuid_set.remove(uuid)
except KeyError:
result = False
else:
result = True
return result
def getHandler(self):
return self
def identifyStorageNode(self, uuid, node):
"""
        Returns the handler to manage the given node
"""
state = NodeStates.RUNNING
if uuid is None or node is None:
            # if the node is unknown, it has been forgotten when the current
            # partition table was validated by the admin
            # Here the uuid is not cleared, to allow looking up pending nodes
# uuid from the test framework. It's safe since nodes with a
# conflicting UUID are rejected in the identification handler.
state = NodeStates.PENDING
return (uuid, state, self)
def run(self):
self.app.changeClusterState(ClusterStates.VERIFYING)
while True:
try:
self.verifyData()
except VerificationFailure:
continue
break
# At this stage, all non-working nodes are out-of-date.
cell_list = self.app.pt.outdate()
# Tweak the partition table, if the distribution of storage nodes
# is not uniform.
cell_list.extend(self.app.pt.tweak())
# If anything changed, send the changes.
self.app.broadcastPartitionChanges(cell_list)
def verifyData(self):
"""Verify the data in storage nodes and clean them up, if necessary."""
em, nm = self.app.em, self.app.nm
# wait for any missing node
neo.lib.logging.debug('waiting for the cluster to be operational')
while not self.app.pt.operational():
em.poll(1)
neo.lib.logging.info('start to verify data')
# Gather all unfinished transactions.
self._askStorageNodesAndWait(Packets.AskUnfinishedTransactions(),
[x for x in self.app.nm.getIdentifiedList() if x.isStorage()])
# Gather OIDs for each unfinished TID, and verify whether the
        # transaction can be finished or must be aborted. This could be
        # done in parallel in theory, but it is not so easy, so do it
        # one-by-one for the moment.
for tid in self._tid_set:
uuid_set = self.verifyTransaction(tid)
if uuid_set is None:
packet = Packets.DeleteTransaction(tid, self._oid_set or [])
# Make sure that no node has this transaction.
for node in self.app.nm.getIdentifiedList():
if node.isStorage():
node.notify(packet)
else:
packet = Packets.CommitTransaction(tid)
for node in self.app.nm.getIdentifiedList(pool_set=uuid_set):
node.notify(packet)
self._oid_set = set()
# If possible, send the packets now.
em.poll(0)
def verifyTransaction(self, tid):
em = self.app.em
nm = self.app.nm
uuid_set = set()
# Determine to which nodes I should ask.
partition = self.app.pt.getPartition(tid)
uuid_list = [cell.getUUID() for cell \
in self.app.pt.getCellList(partition, readable=True)]
if len(uuid_list) == 0:
raise VerificationFailure
uuid_set.update(uuid_list)
# Gather OIDs.
node_list = self.app.nm.getIdentifiedList(pool_set=uuid_list)
if len(node_list) == 0:
raise VerificationFailure
self._askStorageNodesAndWait(Packets.AskTransactionInformation(tid),
node_list)
if self._oid_set is None or len(self._oid_set) == 0:
            # Not committable.
return None
# Verify that all objects are present.
for oid in self._oid_set:
partition = self.app.pt.getPartition(oid)
object_uuid_list = [cell.getUUID() for cell \
in self.app.pt.getCellList(partition, readable=True)]
if len(object_uuid_list) == 0:
raise VerificationFailure
uuid_set.update(object_uuid_list)
self._object_present = True
self._askStorageNodesAndWait(Packets.AskObjectPresent(oid, tid),
nm.getIdentifiedList(pool_set=object_uuid_list))
if not self._object_present:
            # Not committable.
return None
return uuid_set
def answerLastIDs(self, conn, loid, ltid, lptid):
        # FIXME: this packet should not be allowed here; the master already
        # accepted the current partition table and IDs. As they were manually
        # approved during recovery, there is no need to check them here.
pass
def answerUnfinishedTransactions(self, conn, max_tid, tid_list):
uuid = conn.getUUID()
neo.lib.logging.info('got unfinished transactions %s from %r',
[dump(tid) for tid in tid_list], conn)
if not self._gotAnswerFrom(uuid):
return
self._tid_set.update(tid_list)
def answerTransactionInformation(self, conn, tid,
user, desc, ext, packed, oid_list):
uuid = conn.getUUID()
app = self.app
if not self._gotAnswerFrom(uuid):
return
oid_set = set(oid_list)
if self._oid_set is None:
# Someone does not agree.
pass
elif len(self._oid_set) == 0:
# This is the first answer.
self._oid_set.update(oid_set)
elif self._oid_set != oid_set:
raise ValueError, "Inconsistent transaction %s" % \
(dump(tid, ))
def tidNotFound(self, conn, message):
uuid = conn.getUUID()
neo.lib.logging.info('TID not found: %s', message)
if not self._gotAnswerFrom(uuid):
return
self._oid_set = None
def answerObjectPresent(self, conn, oid, tid):
uuid = conn.getUUID()
neo.lib.logging.info('object %s:%s found', dump(oid), dump(tid))
self._gotAnswerFrom(uuid)
def oidNotFound(self, conn, message):
uuid = conn.getUUID()
neo.lib.logging.info('OID not found: %s', message)
app = self.app
if not self._gotAnswerFrom(uuid):
return
app._object_present = False
def connectionCompleted(self, conn):
pass
def nodeLost(self, conn, node):
if not self.app.pt.operational():
raise VerificationFailure, 'cannot continue verification'
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/neoctl/
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/neoctl/__init__.py
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/neoctl/app.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.neoctl.neoctl import NeoCTL, NotReadyException
from neo.lib.util import bin, dump
from neo.lib.protocol import ClusterStates, NodeStates, NodeTypes
action_dict = {
'print': {
'pt': 'getPartitionRowList',
'node': 'getNodeList',
'cluster': 'getClusterState',
'primary': 'getPrimary',
},
'set': {
'node': 'setNodeState',
'cluster': 'setClusterState',
},
'start': 'startCluster',
'add': 'enableStorageList',
'drop': 'dropNode',
}
class TerminalNeoCTL(object):
def __init__(self, address):
self.neoctl = NeoCTL(address)
def __del__(self):
self.neoctl.close()
# Utility methods (could be functions)
def asNodeState(self, value):
return NodeStates.getByName(value.upper())
def asNodeType(self, value):
return NodeTypes.getByName(value.upper())
def asClusterState(self, value):
return ClusterStates.getByName(value.upper())
def asNode(self, value):
return bin(value)
def formatRowList(self, row_list):
return '\n'.join('%03d | %s' % (offset,
''.join('%s - %s |' % (dump(uuid), state)
for (uuid, state) in cell_list))
for (offset, cell_list) in row_list)
def formatNodeList(self, node_list):
if not node_list:
return 'Empty list!'
result = []
for node_type, address, uuid, state in node_list:
if address is None:
address = (None, None)
ip, port = address
result.append('%s - %s - %s:%s - %s' % (node_type, dump(uuid), ip,
port, state))
return '\n'.join(result)
def formatUUID(self, uuid):
return dump(uuid)
# Actual actions
def getPartitionRowList(self, params):
"""
Get a list of partition rows, bounded by min & max and involving
given node.
Parameters: [min [max [node]]]
min: offset of the first row to fetch (starts at 0)
max: offset of the last row to fetch (0 for no limit)
          node: only show rows in which this node serves a cell
"""
params = params + [0, 0, None][len(params):]
min_offset, max_offset, node = params
min_offset = int(min_offset)
max_offset = int(max_offset)
if node is not None:
node = self.asNode(node)
ptid, row_list = self.neoctl.getPartitionRowList(
min_offset=min_offset, max_offset=max_offset, node=node)
# TODO: return ptid
return self.formatRowList(row_list)
def getNodeList(self, params):
"""
Get a list of nodes, filtering with given type.
Parameters: [type]
type: type of node to display
"""
assert len(params) < 2
if len(params):
node_type = self.asNodeType(params[0])
else:
node_type = None
node_list = self.neoctl.getNodeList(node_type=node_type)
return self.formatNodeList(node_list)
def getClusterState(self, params):
"""
Get cluster state.
"""
assert len(params) == 0
return str(self.neoctl.getClusterState())
def setNodeState(self, params):
"""
Set node state, and allow (or not) updating partition table.
Parameters: node state [update]
node: node to modify
state: state to put the node in
update: disallow (0, default) or allow (other integer) partition
table to be updated
"""
assert len(params) in (2, 3)
node = self.asNode(params[0])
state = self.asNodeState(params[1])
if len(params) == 3:
update_partition_table = bool(int(params[2]))
else:
update_partition_table = False
return self.neoctl.setNodeState(node, state,
update_partition_table=update_partition_table)
def setClusterState(self, params):
"""
Set cluster state.
Parameters: state
state: state to put the cluster in
"""
assert len(params) == 1
return self.neoctl.setClusterState(self.asClusterState(params[0]))
def startCluster(self, params):
"""
Starts cluster operation after a startup.
Equivalent to:
set cluster verifying
"""
assert len(params) == 0
return self.neoctl.startCluster()
def enableStorageList(self, params):
"""
Enable cluster to make use of pending storages.
Parameters: all
node [node [...]]
node: if "all", add all pending storage nodes.
otherwise, the list of storage nodes to enable.
"""
if len(params) == 1 and params[0] == 'all':
node_list = self.neoctl.getNodeList(NodeTypes.STORAGE)
uuid_list = [node[2] for node in node_list]
else:
uuid_list = [self.asNode(x) for x in params]
return self.neoctl.enableStorageList(uuid_list)
def dropNode(self, params):
"""
Set node into DOWN state.
Parameters: node
        node: the node to put into DOWN state
Equivalent to:
set node state (node) DOWN
"""
assert len(params) == 1
return self.neoctl.dropNode(self.asNode(params[0]))
def getPrimary(self, params):
"""
Get primary master node.
"""
return self.formatUUID(self.neoctl.getPrimary())
class Application(object):
"""The storage node application."""
def __init__(self, address):
self.neoctl = TerminalNeoCTL(address)
def execute(self, args):
"""Execute the command given."""
# print node type : print list of node of the given type
# (STORAGE_NODE_TYPE, MASTER_NODE_TYPE...)
# set node uuid state [1|0] : set the node for the given uuid to the
# state (RUNNING, DOWN...) and modify the partition if asked
# set cluster name [shutdown|operational] : either shutdown the
# cluster or mark it as operational
current_action = action_dict
level = 0
while current_action is not None and \
level < len(args) and \
isinstance(current_action, dict):
current_action = current_action.get(args[level])
level += 1
action = None
if isinstance(current_action, basestring):
action = getattr(self.neoctl, current_action, None)
if action is None:
return self.usage('unknown command')
try:
return action(args[level:])
except NotReadyException, message:
return 'ERROR: %s' % (message, )
def _usage(self, action_dict, level=0):
result = []
append = result.append
sub_level = level + 1
for name, action in action_dict.iteritems():
append('%s%s' % (' ' * level, name))
if isinstance(action, dict):
append(self._usage(action, level=sub_level))
else:
real_action = getattr(self.neoctl, action, None)
if real_action is None:
continue
docstring = getattr(real_action, '__doc__', None)
if docstring is None:
docstring = '(no docstring)'
docstring_line_list = docstring.split('\n')
                # Strip empty lines at beginning & end of line list
for end in (0, -1):
while len(docstring_line_list) \
and docstring_line_list[end] == '':
docstring_line_list.pop(end)
# Get the indentation of first line, to preserve other lines
# relative indentation.
first_line = docstring_line_list[0]
base_indentation = len(first_line) - len(first_line.lstrip())
result.extend([(' ' * sub_level) + x[base_indentation:] \
for x in docstring_line_list])
return '\n'.join(result)
def usage(self, message):
output_list = [message, 'Available commands:', self._usage(action_dict)]
return '\n'.join(output_list)
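The nested-dict dispatch in `Application.execute` above walks `action_dict` word by word until it reaches a method name. A standalone sketch of that traversal (the table contents and names here are hypothetical, not the real `action_dict`):

```python
# Sketch of the nested-dict command dispatch used by Application.execute:
# walk the table with successive CLI words until a method name (a string)
# is reached; the remaining words become the command's parameters.
action_dict = {
    'print': {'node': 'getNodeList', 'cluster': 'getClusterState'},
    'set': {'node': 'setNodeState'},
}

def resolve(table, args):
    current, level = table, 0
    while isinstance(current, dict) and level < len(args):
        current = current.get(args[level])
        level += 1
    return current, args[level:]

print(resolve(action_dict, ['print', 'node']))
# -> ('getNodeList', [])
print(resolve(action_dict, ['set', 'node', 'S1', 'DOWN']))
# -> ('setNodeState', ['S1', 'DOWN'])
```

An unknown command resolves to `None`, which is what triggers the `usage('unknown command')` path above.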
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/neoctl/handler.py 0000664 0000000 0000000 00000004104 11634614701 0024442 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.lib.handler import EventHandler
from neo.lib.protocol import ErrorCodes, Packets
class CommandEventHandler(EventHandler):
""" Base handler for command """
def connectionCompleted(self, conn):
# connected to admin node
self.app.connected = True
EventHandler.connectionCompleted(self, conn)
def __disconnected(self):
app = self.app
app.connected = False
app.connection = None
def __respond(self, response):
self.app.response_queue.append(response)
def connectionClosed(self, conn):
super(CommandEventHandler, self).connectionClosed(conn)
self.__disconnected()
def connectionFailed(self, conn):
super(CommandEventHandler, self).connectionFailed(conn)
self.__disconnected()
def ack(self, conn, msg):
self.__respond((Packets.Error, ErrorCodes.ACK, msg))
def notReady(self, conn, msg):
self.__respond((Packets.Error, ErrorCodes.NOT_READY, msg))
def __answer(packet_type):
def answer(self, conn, *args):
self.__respond((packet_type, ) + args)
return answer
answerPartitionList = __answer(Packets.AnswerPartitionList)
answerNodeList = __answer(Packets.AnswerNodeList)
answerClusterState = __answer(Packets.AnswerClusterState)
answerPrimary = __answer(Packets.AnswerPrimary)
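The `__answer` helper above is a method factory: it manufactures one responder per packet type, so each `answer*` handler only enqueues `(packet_type,) + args` without repeated boilerplate. A self-contained sketch of the same pattern, with a made-up packet type name:

```python
# Sketch of the method-factory pattern used by CommandEventHandler.__answer;
# 'AnswerFoo' is a made-up packet type, not a real NEO packet.
def make_answer(packet_type):
    def answer(self, conn, *args):
        # enqueue the response exactly as __respond does in the handler
        self.response_queue.append((packet_type,) + args)
    return answer

class DemoHandler(object):
    def __init__(self):
        self.response_queue = []
    # one class attribute per packet type, as in CommandEventHandler
    answerFoo = make_answer('AnswerFoo')

handler = DemoHandler()
handler.answerFoo(None, 1, 2)
print(handler.response_queue)  # -> [('AnswerFoo', 1, 2)]
```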
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/neoctl/neoctl.py 0000664 0000000 0000000 00000012366 11634614701 0024322 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.lib.connector import getConnectorHandler
from neo.lib.connection import ClientConnection
from neo.lib.event import EventManager
from neo.neoctl.handler import CommandEventHandler
from neo.lib.protocol import ClusterStates, NodeStates, ErrorCodes, Packets
from neo.lib.util import getConnectorFromAddress
class NotReadyException(Exception):
pass
class NeoCTL(object):
connection = None
connected = False
def __init__(self, address):
connector_name = getConnectorFromAddress(address)
self.connector_handler = getConnectorHandler(connector_name)
self.server = address
self.em = EventManager()
self.handler = CommandEventHandler(self)
self.response_queue = []
def close(self):
self.em.close()
del self.__dict__
def __getConnection(self):
if not self.connected:
self.connection = ClientConnection(self.em, self.handler,
addr=self.server, connector=self.connector_handler())
while self.connection is not None:
if self.connected:
break
self.em.poll(1)
else:
raise NotReadyException('not connected')
return self.connection
def __ask(self, packet):
# TODO: make thread-safe
connection = self.__getConnection()
connection.ask(packet)
response_queue = self.response_queue
assert len(response_queue) == 0
while self.connected:
self.em.poll(1)
if response_queue:
break
else:
            raise NotReadyException('Connection closed')
response = response_queue.pop()
if response[0] == Packets.Error and \
response[1] == ErrorCodes.NOT_READY:
raise NotReadyException(response[2])
return response
def enableStorageList(self, uuid_list):
"""
Put all given storage nodes in "running" state.
"""
packet = Packets.AddPendingNodes(uuid_list)
response = self.__ask(packet)
assert response[0] == Packets.Error
assert response[1] == ErrorCodes.ACK
return response[2]
def setClusterState(self, state):
"""
Set cluster state.
"""
packet = Packets.SetClusterState(state)
response = self.__ask(packet)
assert response[0] == Packets.Error
assert response[1] == ErrorCodes.ACK
return response[2]
def setNodeState(self, node, state, update_partition_table=False):
"""
Set node state, and allow (or not) updating partition table.
"""
if update_partition_table:
update_partition_table = 1
else:
update_partition_table = 0
packet = Packets.SetNodeState(node, state, update_partition_table)
response = self.__ask(packet)
assert response[0] == Packets.Error
assert response[1] == ErrorCodes.ACK
return response[2]
def getClusterState(self):
"""
Get cluster state.
"""
packet = Packets.AskClusterState()
response = self.__ask(packet)
assert response[0] == Packets.AnswerClusterState
return response[1]
def getNodeList(self, node_type=None):
"""
        Get a list of nodes, optionally filtered by the given type.
"""
packet = Packets.AskNodeList(node_type)
response = self.__ask(packet)
assert response[0] == Packets.AnswerNodeList
return response[1] # node_list
def getPartitionRowList(self, min_offset=0, max_offset=0, node=None):
"""
Get a list of partition rows, bounded by min & max and involving
given node.
"""
packet = Packets.AskPartitionList(min_offset, max_offset, node)
response = self.__ask(packet)
assert response[0] == Packets.AnswerPartitionList
return response[1:3] # ptid, row_list
def startCluster(self):
"""
Set cluster into "verifying" state.
"""
return self.setClusterState(ClusterStates.VERIFYING)
def dropNode(self, node):
"""
Set node into "down" state and remove it from partition table.
"""
return self.setNodeState(node, NodeStates.DOWN,
            update_partition_table=True)
def getPrimary(self):
"""
Return the primary master UUID.
"""
packet = Packets.AskPrimary()
response = self.__ask(packet)
assert response[0] == Packets.AnswerPrimary
return response[1]
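`NeoCTL.__ask` above blocks by polling the event loop until either a response is queued or the connection drops. A toy model of that loop (all names here are hypothetical; `FakeEventManager` only stands in for `neo.lib.event.EventManager`):

```python
# Toy model of the blocking request loop in NeoCTL.__ask: send a packet,
# then poll until a response is queued or the connection is reported closed.
class FakeEventManager(object):
    def __init__(self, events):
        self.connected = True
        self.response_queue = []
        self._events = list(events)   # None means "peer closed the link"
    def poll(self):
        event = self._events.pop(0)
        if event is None:
            self.connected = False
        else:
            self.response_queue.append(event)

def ask(em, send_packet):
    send_packet()
    while em.connected:
        em.poll()
        if em.response_queue:
            return em.response_queue.pop()
    raise RuntimeError('Connection closed')

em = FakeEventManager([('AnswerClusterState', 'RUNNING')])
print(ask(em, lambda: None))  # -> ('AnswerClusterState', 'RUNNING')
```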
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/ 0000775 0000000 0000000 00000000000 11634614701 0022657 5 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/__init__.py 0000664 0000000 0000000 00000000000 11634614701 0024756 0 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/neoadmin.py 0000775 0000000 0000000 00000004650 11634614701 0025033 0 ustar 00root root 0000000 0000000 # neoadmin - run an administrator node of NEO
#
# Copyright (C) 2009 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from optparse import OptionParser
from neo.lib import setupLog
from neo.lib.config import ConfigurationManager
parser = OptionParser()
parser.add_option('-u', '--uuid', help='specify an UUID to use for this ' \
'process')
parser.add_option('-v', '--verbose', action = 'store_true',
help = 'print verbose messages')
parser.add_option('-f', '--file', help = 'specify a configuration file')
parser.add_option('-s', '--section', help = 'specify a configuration section')
parser.add_option('-l', '--logfile', help = 'specify a logging file')
parser.add_option('-c', '--cluster', help = 'the cluster name')
parser.add_option('-m', '--masters', help = 'master node list')
parser.add_option('-b', '--bind', help = 'the local address to bind to')
parser.add_option('-n', '--name', help = 'the node name (improve logging)')
defaults = dict(
name = 'admin',
bind = '127.0.0.1:9999',
masters = '127.0.0.1:10000',
)
def main(args=None):
# build configuration dict from command line options
(options, args) = parser.parse_args(args=args)
arguments = dict(
uuid = options.uuid,
name = options.name or options.section,
cluster = options.cluster,
masters = options.masters,
bind = options.bind,
)
config = ConfigurationManager(
defaults,
options.file,
options.section or 'admin',
arguments,
)
# setup custom logging
setupLog(config.getName().upper(), options.logfile or None, options.verbose)
# and then, load and run the application
from neo.admin.app import Application
app = Application(config)
app.run()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/neoctl.py 0000775 0000000 0000000 00000003066 11634614701 0024525 0 ustar 00root root 0000000 0000000 # neoctl - command-line tool to administrate a NEO cluster
#
# Copyright (C) 2009 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import sys
from optparse import OptionParser
from neo.lib import setupLog
from neo.lib.util import parseNodeAddress
parser = OptionParser()
parser.add_option('-v', '--verbose', action = 'store_true',
help = 'print verbose messages')
parser.add_option('-a', '--address', help = 'specify the address (ip:port) ' \
'of an admin node', default = '127.0.0.1:9999')
parser.add_option('--handler', help = 'specify the connection handler')
def main(args=None):
(options, args) = parser.parse_args(args=args)
if options.address is not None:
address = parseNodeAddress(options.address, 9999)
else:
address = ('127.0.0.1', 9999)
    setupLog('NEOCTL', None, options.verbose)
from neo.neoctl.app import Application
print Application(address).execute(args)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/neomaster.py 0000775 0000000 0000000 00000005166 11634614701 0025241 0 ustar 00root root 0000000 0000000 # neomaster - run a master node of NEO
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from optparse import OptionParser
from neo.lib import setupLog
from neo.lib.config import ConfigurationManager
parser = OptionParser()
parser.add_option('-v', '--verbose', action = 'store_true',
help = 'print verbose messages')
parser.add_option('-f', '--file', help = 'specify a configuration file')
parser.add_option('-s', '--section', help = 'specify a configuration section')
parser.add_option('-u', '--uuid', help='the node UUID (testing purpose)')
parser.add_option('-n', '--name', help = 'the node name (improve logging)')
parser.add_option('-b', '--bind', help = 'the local address to bind to')
parser.add_option('-c', '--cluster', help = 'the cluster name')
parser.add_option('-m', '--masters', help = 'master node list')
parser.add_option('-r', '--replicas', help = 'replicas number')
parser.add_option('-p', '--partitions', help = 'partitions number')
parser.add_option('-l', '--logfile', help = 'specify a logging file')
defaults = dict(
name = 'master',
bind = '127.0.0.1:10000',
masters = '',
replicas = 0,
partitions = 100,
)
def main(args=None):
# build configuration dict from command line options
(options, args) = parser.parse_args(args=args)
arguments = dict(
uuid = options.uuid or None,
bind = options.bind,
name = options.name or options.section,
cluster = options.cluster,
masters = options.masters,
replicas = options.replicas,
partitions = options.partitions,
)
config = ConfigurationManager(
defaults,
options.file,
options.section or 'master',
arguments,
)
# setup custom logging
setupLog(config.getName().upper(), options.logfile or None, options.verbose)
# and then, load and run the application
from neo.master.app import Application
app = Application(config)
app.run()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/neomigrate.py 0000775 0000000 0000000 00000005151 11634614701 0025370 0 ustar 00root root 0000000 0000000 #! /usr/bin/env python2.4
#
# neomigrate - migrate a database between FileStorage and a NEO cluster
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from optparse import OptionParser
import logging
import time
import os
from neo.lib import setupLog
# register options
parser = OptionParser()
parser.add_option('-v', '--verbose', action = 'store_true',
help = 'print verbose messages')
parser.add_option('-s', '--source', help = 'the source database')
parser.add_option('-d', '--destination', help = 'the destination database')
parser.add_option('-c', '--cluster', help = 'the NEO cluster name')
def main(args=None):
# parse options
(options, args) = parser.parse_args(args=args)
source = options.source or None
destination = options.destination or None
cluster = options.cluster or None
# check options
if source is None or destination is None:
raise RuntimeError('Source and destination databases must be supplied')
if cluster is None:
raise RuntimeError('The NEO cluster name must be supplied')
# set up logging
setupLog('NEOMIGRATE', None, options.verbose or False)
# open storages
from ZODB.FileStorage import FileStorage
#from ZEO.ClientStorage import ClientStorage as ZEOStorage
from neo.client.Storage import Storage as NEOStorage
if os.path.exists(source):
src = FileStorage(file_name=source)
dst = NEOStorage(master_nodes=destination, name=cluster)
else:
print("WARNING: due to a bug in FileStorage (at least up to ZODB trunk"
"@121629), output database may be corrupted if input database is"
" not packed.")
src = NEOStorage(master_nodes=source, name=cluster)
dst = FileStorage(file_name=destination)
# do the job
print "Migrating from %s to %s" % (source, destination)
start = time.time()
dst.copyTransactionsFrom(src)
elapsed = time.time() - start
print "Migration done in %3.5f" % (elapsed, )
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/neostorage.py 0000775 0000000 0000000 00000005600 11634614701 0025403 0 ustar 00root root 0000000 0000000 #! /usr/bin/env python2.4
#
# neostorage - run a storage node of NEO
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from optparse import OptionParser
from neo.lib import setupLog
from neo.lib.config import ConfigurationManager
parser = OptionParser()
parser.add_option('-v', '--verbose', action = 'store_true',
help = 'print verbose messages')
parser.add_option('-u', '--uuid', help='specify an UUID to use for this ' \
        'process. A previously assigned UUID takes precedence (i.e. ' \
'you should always use -R with this switch)')
parser.add_option('-f', '--file', help = 'specify a configuration file')
parser.add_option('-s', '--section', help = 'specify a configuration section')
parser.add_option('-l', '--logfile', help = 'specify a logging file')
parser.add_option('-R', '--reset', action = 'store_true',
help = 'remove an existing database if any')
parser.add_option('-n', '--name', help = 'the node name (improve logging)')
parser.add_option('-b', '--bind', help = 'the local address to bind to')
parser.add_option('-c', '--cluster', help = 'the cluster name')
parser.add_option('-m', '--masters', help = 'master node list')
parser.add_option('-a', '--adapter', help = 'database adapter to use')
parser.add_option('-d', '--database', help = 'database connections string')
defaults = dict(
name = 'storage',
bind = '127.0.0.1',
masters = '127.0.0.1:10000',
adapter = 'MySQL',
)
def main(args=None):
(options, args) = parser.parse_args(args=args)
arguments = dict(
uuid = options.uuid,
bind = options.bind,
name = options.name or options.section,
cluster = options.cluster,
masters = options.masters,
database = options.database,
reset = options.reset,
adapter = options.adapter,
)
config = ConfigurationManager(
defaults,
options.file,
options.section or 'storage',
arguments,
)
# setup custom logging
setupLog(config.getName().upper(), options.logfile or None, options.verbose)
# and then, load and run the application
from neo.storage.app import Application
app = Application(config)
app.run()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/runner.py 0000775 0000000 0000000 00000025326 11634614701 0024555 0 ustar 00root root 0000000 0000000 #! /usr/bin/env python
#
# Copyright (C) 2009 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import traceback
import unittest
import logging
import time
import sys
import neo
import os
from neo.tests import getTempDirectory
from neo.tests.benchmark import BenchmarkRunner
# list of test modules
# each of them have to import its TestCase classes
UNIT_TEST_MODULES = [
# generic parts
'neo.tests.testBootstrap',
'neo.tests.testConnection',
'neo.tests.testEvent',
'neo.tests.testHandler',
'neo.tests.testNodes',
'neo.tests.testProtocol',
'neo.tests.testDispatcher',
'neo.tests.testUtil',
'neo.tests.testPT',
# master application
'neo.tests.master.testClientHandler',
'neo.tests.master.testElectionHandler',
'neo.tests.master.testMasterApp',
'neo.tests.master.testMasterPT',
'neo.tests.master.testRecovery',
'neo.tests.master.testStorageHandler',
'neo.tests.master.testVerification',
'neo.tests.master.testTransactions',
# storage application
'neo.tests.storage.testClientHandler',
'neo.tests.storage.testInitializationHandler',
'neo.tests.storage.testMasterHandler',
'neo.tests.storage.testStorageApp',
'neo.tests.storage.testStorageHandler',
'neo.tests.storage.testStorageMySQLdb',
'neo.tests.storage.testStorageBTree',
'neo.tests.storage.testVerificationHandler',
'neo.tests.storage.testIdentificationHandler',
'neo.tests.storage.testTransactions',
'neo.tests.storage.testReplicationHandler',
'neo.tests.storage.testReplicator',
'neo.tests.storage.testReplication',
# client application
'neo.tests.client.testClientApp',
'neo.tests.client.testMasterHandler',
'neo.tests.client.testStorageHandler',
'neo.tests.client.testConnectionPool',
# light functional tests
'neo.tests.threaded.test',
]
FUNC_TEST_MODULES = [
'neo.tests.functional.testMaster',
'neo.tests.functional.testClient',
'neo.tests.functional.testCluster',
'neo.tests.functional.testStorage',
]
ZODB_TEST_MODULES = [
('neo.tests.zodb.testBasic', 'check'),
('neo.tests.zodb.testConflict', 'check'),
('neo.tests.zodb.testHistory', 'check'),
('neo.tests.zodb.testIterator', 'check'),
('neo.tests.zodb.testMT', 'check'),
('neo.tests.zodb.testPack', 'check'),
('neo.tests.zodb.testPersistent', 'check'),
('neo.tests.zodb.testReadOnly', 'check'),
('neo.tests.zodb.testRevision', 'check'),
#('neo.tests.zodb.testRecovery', 'check'),
('neo.tests.zodb.testSynchronization', 'check'),
# ('neo.tests.zodb.testVersion', 'check'),
('neo.tests.zodb.testUndo', 'check'),
('neo.tests.zodb.testZODB', 'check'),
]
class NeoTestRunner(unittest.TestResult):
""" Custom result class to build report with statistics per module """
def __init__(self, title):
unittest.TestResult.__init__(self)
self._title = title
self.modulesStats = {}
self.failedImports = {}
self.lastStart = None
self.temp_directory = getTempDirectory()
def run(self, name, modules):
print '\n', name
suite = unittest.TestSuite()
loader = unittest.defaultTestLoader
for test_module in modules:
# load prefix if supplied
if isinstance(test_module, tuple):
test_module, prefix = test_module
loader.testMethodPrefix = prefix
else:
loader.testMethodPrefix = 'test'
try:
test_module = __import__(test_module, globals(), locals(), ['*'])
except ImportError, err:
self.failedImports[test_module] = err
print "Import of %s failed : %s" % (test_module, err)
traceback.print_exc()
continue
suite.addTests(loader.loadTestsFromModule(test_module))
suite.run(self)
class ModuleStats(object):
run = 0
errors = 0
success = 0
failures = 0
time = 0.0
def _getModuleStats(self, test):
module = test.__class__.__module__
module = tuple(module.split('.'))
try:
return self.modulesStats[module]
except KeyError:
self.modulesStats[module] = self.ModuleStats()
return self.modulesStats[module]
def _updateTimer(self, stats):
stats.time += time.time() - self.lastStart
def startTest(self, test):
unittest.TestResult.startTest(self, test)
logging.info(" * TEST %s", test)
stats = self._getModuleStats(test)
stats.run += 1
self.lastStart = time.time()
def addSuccess(self, test):
print "OK"
unittest.TestResult.addSuccess(self, test)
stats = self._getModuleStats(test)
stats.success += 1
self._updateTimer(stats)
def addError(self, test, err):
print "ERROR"
unittest.TestResult.addError(self, test, err)
stats = self._getModuleStats(test)
stats.errors += 1
self._updateTimer(stats)
def addFailure(self, test, err):
print "FAIL"
unittest.TestResult.addFailure(self, test, err)
stats = self._getModuleStats(test)
stats.failures += 1
self._updateTimer(stats)
def _buildSummary(self, add_status):
success = self.testsRun - len(self.errors) - len(self.failures)
add_status('Directory', self.temp_directory)
if self.testsRun:
add_status('Status', '%.3f%%' % (success * 100.0 / self.testsRun))
for var in os.environ.iterkeys():
if var.startswith('NEO_TEST'):
add_status(var, os.environ[var])
# visual
header = "%25s | run | success | errors | fails | time \n" % 'Test Module'
separator = "%25s-+---------+---------+---------+---------+----------\n" % ('-' * 25)
format = "%25s | %3s | %3s | %3s | %3s | %6.2fs \n"
group_f = "%25s | | | | | \n"
# header
s = ' ' * 30 + ' NEO TESTS REPORT'
s += '\n'
s += '\n' + header + separator
group = None
t_success = 0
# for each test case
for k, v in sorted(self.modulesStats.items()):
# display group below its content
_group = '.'.join(k[:-1])
if group is None:
group = _group
if _group != group:
s += separator + group_f % group + separator
group = _group
# test case stats
t_success += v.success
run, success = v.run or '.', v.success or '.'
errors, failures = v.errors or '.', v.failures or '.'
            # lstrip() removes characters, not a prefix, so strip only a
            # leading 'test' string
            name = k[-1]
            if name.startswith('test'):
                name = name[4:]
args = (name, run, success, errors, failures, v.time)
s += format % args
# the last group
s += separator + group_f % group + separator
# the final summary
errors, failures = len(self.errors) or '.', len(self.failures) or '.'
args = ("Summary", self.testsRun, t_success, errors, failures, self.time)
s += format % args + separator + '\n'
return s
def _buildErrors(self):
s = ''
test_formatter = lambda t: t.id()
if len(self.errors):
s += '\nERRORS:\n'
for test, trace in self.errors:
s += "%s\n" % test_formatter(test)
s += "-------------------------------------------------------------\n"
s += trace
s += "-------------------------------------------------------------\n"
s += '\n'
if len(self.failures):
s += '\nFAILURES:\n'
for test, trace in self.failures:
s += "%s\n" % test_formatter(test)
s += "-------------------------------------------------------------\n"
s += trace
s += "-------------------------------------------------------------\n"
s += '\n'
return s
def _buildWarnings(self):
s = '\n'
if self.failedImports:
s += 'Failed imports :\n'
for module, err in self.failedImports.items():
s += '%s:\n%s' % (module, err)
s += '\n'
return s
def buildReport(self, add_status):
self.time = sum([s.time for s in self.modulesStats.values()])
self.subject = "%s Tests, %s Errors, %s Failures" % (
self.testsRun, len(self.errors), len(self.failures))
summary = self._buildSummary(add_status)
errors = self._buildErrors()
warnings = self._buildWarnings()
report = '\n'.join([summary, errors, warnings])
return (self.subject, report)
class TestRunner(BenchmarkRunner):
def add_options(self, parser):
parser.add_option('-f', '--functional', action='store_true')
parser.add_option('-u', '--unit', action='store_true')
parser.add_option('-z', '--zodb', action='store_true')
def load_options(self, options, args):
if not (options.unit or options.functional or options.zodb or args):
sys.exit('Nothing to run, please give one of -f, -u, -z')
return dict(
unit = options.unit,
functional = options.functional,
zodb = options.zodb,
)
def start(self):
config = self._config
# run requested tests
runner = NeoTestRunner(
title=config.title or 'Neo',
)
try:
if config.unit:
runner.run('Unit tests', UNIT_TEST_MODULES)
if config.functional:
runner.run('Functional tests', FUNC_TEST_MODULES)
if config.zodb:
runner.run('ZODB tests', ZODB_TEST_MODULES)
except KeyboardInterrupt:
config['mail_to'] = None
traceback.print_exc()
# build report
self._successful = runner.wasSuccessful()
return runner.buildReport(self.add_status)
def main(args=None):
runner = TestRunner()
runner.run()
if not runner.was_successful():
sys.exit(1)
sys.exit(0)
if __name__ == "__main__":
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/scripts/simple.py 0000664 0000000 0000000 00000005535 11634614701 0024532 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
##############################################################################
#
# Copyright (c) 2011 Nexedi SARL and Contributors. All Rights Reserved.
# Julien Muchembled
#
# WARNING: This program as such is intended to be used by professional
# programmers who take the whole responsibility of assessing all potential
# consequences resulting from its eventual inadequacies and bugs
# End users who are looking for a ready-to-use solution with commercial
# guarantees and support are strongly advised to contract a Free Software
# Service Company
#
# This program is Free Software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
##############################################################################
import inspect, random, signal, sys
from optparse import OptionParser
from neo.lib import logger, logging
from neo.tests import functional
def main():
args, _, _, defaults = inspect.getargspec(functional.NEOCluster.__init__)
option_list = zip(args[-len(defaults):], defaults)
parser = OptionParser(usage="%prog [options] [db...]",
description="Quickly setup a simple NEO cluster for testing purpose.")
parser.add_option('--seed', help="settings like node ports/uuids and"
" cluster name are random: pass any string to initialize the RNG")
defaults = {}
for option, default in sorted(option_list):
kw = {}
if type(default) is bool:
kw['action'] = "store_true"
defaults[option] = False
elif default is not None:
defaults[option] = default
if isinstance(default, int):
kw['type'] = "int"
parser.add_option('--' + option, **kw)
parser.set_defaults(**defaults)
options, args = parser.parse_args()
if options.verbose:
logger.PACKET_LOGGER.enable(True)
if options.seed:
functional.random = random.Random(options.seed)
cluster = functional.NEOCluster(args, **dict((x, getattr(options, x))
for x, _ in option_list))
try:
cluster.start()
logging.info("Cluster running ...")
signal.pause()
finally:
cluster.stop()
if __name__ == "__main__":
sys.exit(main())
neo/storage/__init__.py (empty)

neo/storage/app.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
import sys
from collections import deque
from neo.lib.protocol import NodeTypes, CellStates, Packets
from neo.lib.node import NodeManager
from neo.lib.event import EventManager
from neo.lib.connection import ListeningConnection
from neo.lib.exception import OperationFailure, PrimaryFailure
from neo.storage.handlers import identification, verification, initialization
from neo.storage.handlers import master, hidden
from neo.storage.replicator import Replicator
from neo.storage.database import buildDatabaseManager
from neo.storage.transactions import TransactionManager
from neo.storage.exception import AlreadyPendingError
from neo.lib.connector import getConnectorHandler
from neo.lib.pt import PartitionTable
from neo.lib.util import dump
from neo.lib.bootstrap import BootstrapManager
from neo.lib.debug import register as registerLiveDebugger
class Application(object):
"""The storage node application."""
def __init__(self, config):
# set the cluster name
self.name = config.getCluster()
# Internal attributes.
self.em = EventManager()
self.nm = NodeManager()
self.tm = TransactionManager(self)
self.dm = buildDatabaseManager(config.getAdapter(), config.getDatabase())
# load master nodes
master_addresses, connector_name = config.getMasters()
self.connector_handler = getConnectorHandler(connector_name)
for master_address in master_addresses:
self.nm.createMaster(address=master_address)
# set the bind address
self.server = config.getBind()
neo.lib.logging.debug('IP address is %s, port is %d', *(self.server))
# The partition table is initialized after getting the number of
# partitions.
self.pt = None
self.replicator = Replicator(self)
self.listening_conn = None
self.master_conn = None
self.master_node = None
# operation related data
self.event_queue = None
self.event_queue_dict = None
self.operational = False
# ready is True when the node is operational and has received all information
self.ready = False
self.has_node_information = False
self.has_partition_table = False
self.dm.setup(reset=config.getReset())
self.loadConfiguration()
# force node uuid from command line argument, for testing purpose only
if config.getUUID() is not None:
self.uuid = config.getUUID()
registerLiveDebugger(on_log=self.log)
def close(self):
self.listening_conn = None
self.nm.close()
self.em.close()
try:
self.dm.close()
except AttributeError:
pass
del self.__dict__
def _poll(self):
self.em.poll(1)
def log(self):
self.em.log()
self.logQueuedEvents()
self.nm.log()
self.tm.log()
if self.pt is not None:
self.pt.log()
def loadConfiguration(self):
"""Load persistent configuration data from the database.
If data is not present, generate it."""
def NoneOnKeyError(getter):
try:
return getter()
except KeyError:
return None
dm = self.dm
# check cluster name
try:
if dm.getName() != self.name:
raise RuntimeError('cluster name does not match the database')
except KeyError:
dm.setName(self.name)
# load configuration
self.uuid = NoneOnKeyError(dm.getUUID)
num_partitions = NoneOnKeyError(dm.getNumPartitions)
num_replicas = NoneOnKeyError(dm.getNumReplicas)
ptid = NoneOnKeyError(dm.getPTID)
# check partition table configuration
if num_partitions is not None and num_replicas is not None:
if num_partitions <= 0:
raise RuntimeError('the number of partitions must be positive')
# create a partition table
self.pt = PartitionTable(num_partitions, num_replicas)
neo.lib.logging.info('Configuration loaded:')
neo.lib.logging.info('UUID : %s', dump(self.uuid))
neo.lib.logging.info('PTID : %s', dump(ptid))
neo.lib.logging.info('Name : %s', self.name)
neo.lib.logging.info('Partitions: %s', num_partitions)
neo.lib.logging.info('Replicas : %s', num_replicas)
def loadPartitionTable(self):
"""Load a partition table from the database."""
try:
ptid = self.dm.getPTID()
except KeyError:
ptid = None
cell_list = self.dm.getPartitionTable()
new_cell_list = []
for offset, uuid, state in cell_list:
# convert from int to Enum
state = CellStates[state]
# register unknown nodes
if self.nm.getByUUID(uuid) is None:
self.nm.createStorage(uuid=uuid)
new_cell_list.append((offset, uuid, state))
# load the partition table in manager
self.pt.clear()
self.pt.update(ptid, new_cell_list, self.nm)
def run(self):
try:
self._run()
except:
neo.lib.logging.info('\nPre-mortem information:')
self.log()
raise
def _run(self):
"""Make sure that the status is sane and start a loop."""
if len(self.name) == 0:
raise RuntimeError('cluster name must be non-empty')
# Make a listening port
handler = identification.IdentificationHandler(self)
self.listening_conn = ListeningConnection(self.em, handler,
addr=self.server, connector=self.connector_handler())
self.server = self.listening_conn.getAddress()
# Connect to a primary master node, verify data, and
# start the operation. This cycle will be executed permanently,
# until the user explicitly requests a shutdown.
while True:
self.ready = False
self.operational = False
if self.master_node is None:
# look for the primary master
self.connectToPrimary()
# check my state
node = self.nm.getByUUID(self.uuid)
if node is not None and node.isHidden():
self.wait()
# drop any client node
for conn in self.em.getConnectionList():
if conn not in (self.listening_conn, self.master_conn):
conn.close()
# create/clear event queue
self.event_queue = deque()
self.event_queue_dict = dict()
try:
self.verifyData()
self.initialize()
self.doOperation()
raise RuntimeError('should not reach here')
except OperationFailure, msg:
neo.lib.logging.error('operation stopped: %s', msg)
except PrimaryFailure, msg:
self.replicator.masterLost()
neo.lib.logging.error('primary master is down: %s', msg)
self.master_node = None
def connectToPrimary(self):
"""Find a primary master node, and connect to it.
If a primary master node is not elected or ready, repeat
the attempt of a connection periodically.
Note that I do not accept any connection from non-master nodes
at this stage."""
pt = self.pt
# First of all, make sure that I have no connection.
for conn in self.em.getConnectionList():
if not conn.isListening():
conn.close()
# search, find, connect and identify to the primary master
bootstrap = BootstrapManager(self, self.name,
NodeTypes.STORAGE, self.uuid, self.server)
data = bootstrap.getPrimaryConnection(self.connector_handler)
(node, conn, uuid, num_partitions, num_replicas) = data
self.master_node = node
self.master_conn = conn
neo.lib.logging.info('I am %s', dump(uuid))
self.uuid = uuid
self.dm.setUUID(uuid)
# Reload a partition table from the database. This is necessary
# when a previous primary master died while sending a partition
# table, because the table might be incomplete.
if pt is not None:
self.loadPartitionTable()
if num_partitions != pt.getPartitions():
raise RuntimeError('the number of partitions is inconsistent')
if pt is None or pt.getReplicas() != num_replicas:
# changing number of replicas is not an issue
self.dm.setNumPartitions(num_partitions)
self.dm.setNumReplicas(num_replicas)
self.pt = PartitionTable(num_partitions, num_replicas)
self.loadPartitionTable()
def verifyData(self):
"""Verify data under the control by a primary master node.
Connections from client nodes may not be accepted at this stage."""
neo.lib.logging.info('verifying data')
handler = verification.VerificationHandler(self)
self.master_conn.setHandler(handler)
_poll = self._poll
while not self.operational:
_poll()
def initialize(self):
""" Retreive partition table and node informations from the primary """
neo.lib.logging.debug('initializing...')
_poll = self._poll
handler = initialization.InitializationHandler(self)
self.master_conn.setHandler(handler)
# ask node list and partition table
self.has_node_information = False
self.has_partition_table = False
self.has_last_ids = False
self.pt.clear()
self.master_conn.ask(Packets.AskLastIDs())
self.master_conn.ask(Packets.AskNodeInformation())
self.master_conn.ask(Packets.AskPartitionTable())
while not self.has_node_information or not self.has_partition_table \
or not self.has_last_ids:
_poll()
self.ready = True
self.replicator.populate()
self.master_conn.notify(Packets.NotifyReady())
def doOperation(self):
"""Handle everything, including replications and transactions."""
neo.lib.logging.info('doing operation')
_poll = self._poll
handler = master.MasterOperationHandler(self)
self.master_conn.setHandler(handler)
# Forget all unfinished data.
self.dm.dropUnfinishedData()
self.tm.reset()
while True:
_poll()
if self.replicator.pending():
# Call processDelayedTasks before act, so tasks added in the
# act call are executed after one poll call, so that sent
# packets are already on the network and delayed task
# processing happens in parallel with the same task on the
# other storage node.
self.replicator.processDelayedTasks()
self.replicator.act()
def wait(self):
# change handler
neo.lib.logging.info("waiting in hidden state")
_poll = self._poll
handler = hidden.HiddenHandler(self)
for conn in self.em.getConnectionList():
conn.setHandler(handler)
node = self.nm.getByUUID(self.uuid)
while True:
_poll()
if not node.isHidden():
break
def queueEvent(self, some_callable, conn, args, key=None,
raise_on_duplicate=True):
msg_id = conn.getPeerId()
event_queue_dict = self.event_queue_dict
if raise_on_duplicate and key in event_queue_dict:
raise AlreadyPendingError()
else:
self.event_queue.append((key, some_callable, msg_id, conn, args))
if key is not None:
try:
event_queue_dict[key] += 1
except KeyError:
event_queue_dict[key] = 1
def executeQueuedEvents(self):
l = len(self.event_queue)
p = self.event_queue.popleft
event_queue_dict = self.event_queue_dict
for _ in xrange(l):
key, some_callable, msg_id, conn, args = p()
if key is not None:
event_queue_dict[key] -= 1
if event_queue_dict[key] == 0:
del event_queue_dict[key]
if conn.isAborted() or conn.isClosed():
continue
orig_msg_id = conn.getPeerId()
conn.setPeerId(msg_id)
some_callable(conn, *args)
conn.setPeerId(orig_msg_id)
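The queue/replay pair above implements a simple deferral mechanism: events are parked with an optional key (and a per-key pending counter, so duplicates can be rejected), then replayed exactly once per `executeQueuedEvents` call — any event re-queued by a callback waits for the next round. A minimal standalone sketch of the same idea, with the connection plumbing stripped out (class and method names here are illustrative, not NEO's API):

```python
from collections import deque

class AlreadyPendingError(Exception):
    pass

class EventQueue:
    def __init__(self):
        self._queue = deque()
        self._pending = {}          # key -> number of queued events

    def queue(self, callback, args, key=None, raise_on_duplicate=True):
        if raise_on_duplicate and key in self._pending:
            raise AlreadyPendingError(key)
        self._queue.append((key, callback, args))
        if key is not None:
            self._pending[key] = self._pending.get(key, 0) + 1

    def execute(self):
        # Only replay events queued before this call: callbacks may
        # re-queue themselves, and those must wait for the next round.
        for _ in range(len(self._queue)):
            key, callback, args = self._queue.popleft()
            if key is not None:
                self._pending[key] -= 1
                if not self._pending[key]:
                    del self._pending[key]
            callback(*args)

log = []
q = EventQueue()
q.queue(log.append, ('a',), key='x')
try:
    q.queue(log.append, ('b',), key='x')   # duplicate key is rejected
except AlreadyPendingError:
    log.append('dup')
q.execute()
print(log)   # → ['dup', 'a']
```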
def logQueuedEvents(self):
if self.event_queue is None:
return
neo.lib.logging.info("Pending events:")
for key, event, _msg_id, _conn, args in self.event_queue:
neo.lib.logging.info(' %r:%r: %r %r %r', key, event.__name__,
_msg_id, _conn, args)
def shutdown(self, erase=False):
"""Close all connections and exit"""
for c in self.em.getConnectionList():
try:
c.close()
except PrimaryFailure:
pass
# clear database to avoid polluting the cluster at restart
self.dm.setup(reset=erase)
neo.lib.logging.info("Application has been asked to shut down")
sys.exit()
neo/storage/database/__init__.py
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.lib.exception import DatabaseFailure
from neo.storage.database.manager import DatabaseManager
DATABASE_MANAGER_DICT = {}
try:
from neo.storage.database.mysqldb import MySQLDatabaseManager
except ImportError:
pass
else:
DATABASE_MANAGER_DICT['MySQL'] = MySQLDatabaseManager
try:
from neo.storage.database.btree import BTreeDatabaseManager
except ImportError:
pass
else:
# XXX: warning: name might change in the future.
DATABASE_MANAGER_DICT['BTree'] = BTreeDatabaseManager
if not DATABASE_MANAGER_DICT:
raise ImportError('No database back-end available.')
def buildDatabaseManager(name, config):
if name is None:
name = DATABASE_MANAGER_DICT.keys()[0]
adapter_klass = DATABASE_MANAGER_DICT.get(name, None)
if adapter_klass is None:
raise DatabaseFailure('Cannot find a database adapter <%s>' % name)
return adapter_klass(config)
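The adapter registry above follows a common pattern: each back-end enters a dict only if its import succeeds, and the factory falls back to the first available entry when no name is given. A stripped-down sketch of the same registry/factory shape, with a made-up `Memory` back-end and a decorator standing in for the try/except import blocks:

```python
class DatabaseFailure(Exception):
    pass

DATABASE_MANAGER_DICT = {}

def register(name):
    """Class decorator registering a back-end under a given name."""
    def decorator(klass):
        DATABASE_MANAGER_DICT[name] = klass
        return klass
    return decorator

@register('Memory')   # hypothetical back-end, for illustration only
class MemoryManager:
    def __init__(self, config):
        self.config = config

def buildDatabaseManager(name, config):
    if name is None:
        # rely on dict ordering for the default, as the original
        # does with DATABASE_MANAGER_DICT.keys()[0]
        name = next(iter(DATABASE_MANAGER_DICT))
    try:
        klass = DATABASE_MANAGER_DICT[name]
    except KeyError:
        raise DatabaseFailure('Cannot find a database adapter <%s>' % name)
    return klass(config)

manager = buildDatabaseManager(None, {'path': ':memory:'})
```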
neo/storage/database/btree.py
#
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
Naive b-tree implementation.
Simple, though not thoroughly tested.
Not persistent! (no data is retained after the process exits)
"""
from BTrees.OOBTree import OOBTree as _OOBTree
import neo.lib
from hashlib import md5
from neo.storage.database import DatabaseManager
from neo.lib.protocol import CellStates, ZERO_OID, ZERO_TID
from neo.lib import util
# The only purpose of this value (and code using it) is to avoid creating
# arbitrarily-long lists of values when cleaning up dictionaries.
KEY_BATCH_SIZE = 1000
# Keep dropped trees in memory to avoid instantiating new ones when not needed.
TREE_POOL = []
# How many empty BTree instances to keep in RAM
MAX_TREE_POOL_SIZE = 100
def batchDelete(tree, tester_callback, iter_kw=None, recycle_subtrees=False):
"""
Iterate over the given BTree and delete matching entries.
tree BTree
Tree to delete entries from.
tester_callback function(key, value) -> boolean
Called with each key, value pair found in tree.
If return value is true, delete entry. Otherwise, skip to next key.
iter_kw dict
Keyword arguments for tree.items .
Warning: altered in this function.
recycle_subtrees boolean (False)
If true, deleted values will be put in TREE_POOL for future reuse.
They must be BTrees.
If False, values are not touched.
"""
if iter_kw is None:
iter_kw = {}
if recycle_subtrees:
deleter_callback = _btreeDeleterCallback
else:
deleter_callback = _deleterCallback
items = tree.items
while True:
to_delete = []
append = to_delete.append
for key, value in safeIter(items, **iter_kw):
if tester_callback(key, value):
append(key)
if len(to_delete) >= KEY_BATCH_SIZE:
iter_kw['min'] = key
iter_kw['excludemin'] = True
break
if to_delete:
deleter_callback(tree, to_delete)
else:
break
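The loop above avoids materializing one huge key list: it collects at most KEY_BATCH_SIZE matching keys, deletes them, then resumes iteration just past the last scanned key (`min`/`excludemin`), since a BTree must not be modified while being iterated. The same batching idea can be sketched against a plain dict (a simplified stand-in for the BTree, assuming the predicate is stable across passes):

```python
def batch_delete(mapping, predicate, batch_size=2):
    """Delete predicate-matching keys from mapping, batch_size at a time.

    Restarting the scan after each batch mirrors the original code,
    which must not delete from a BTree while iterating over it.
    """
    last = None
    while True:
        to_delete = []
        for key in sorted(mapping):
            if last is not None and key <= last:
                continue          # resume past the previous batch
            if predicate(key, mapping[key]):
                to_delete.append(key)
                if len(to_delete) >= batch_size:
                    break
        if not to_delete:
            break
        last = to_delete[-1]
        for key in to_delete:
            del mapping[key]

d = {n: n * n for n in range(10)}
batch_delete(d, lambda k, v: k % 2 == 0)   # drop even keys
print(sorted(d))   # → [1, 3, 5, 7, 9]
```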
def _deleterCallback(tree, key_list):
for key in key_list:
del tree[key]
if hasattr(_OOBTree, 'pop'):
def _btreeDeleterCallback(tree, key_list):
for key in key_list:
prune(tree.pop(key))
else:
def _btreeDeleterCallback(tree, key_list):
for key in key_list:
prune(tree[key])
del tree[key]
def OOBTree():
try:
result = TREE_POOL.pop()
except IndexError:
result = _OOBTree()
# The next btree we prune will have room again, so restore the prune method.
global prune
prune = _prune
return result
def _prune(tree):
tree.clear()
TREE_POOL.append(tree)
if len(TREE_POOL) >= MAX_TREE_POOL_SIZE:
# Already at/above max pool size, disable ourselves.
global prune
prune = _noPrune
def _noPrune(_):
pass
prune = _prune
class CreationUndone(Exception):
pass
def iterObjSerials(obj):
for tserial in obj.values():
for serial in tserial.keys():
yield serial
def descItems(tree):
try:
key = tree.maxKey()
except ValueError:
pass
else:
while True:
yield (key, tree[key])
try:
key = tree.maxKey(key - 1)
except ValueError:
break
def descKeys(tree):
try:
key = tree.maxKey()
except ValueError:
pass
else:
while True:
yield key
try:
key = tree.maxKey(key - 1)
except ValueError:
break
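`descItems`/`descKeys` walk a BTree in descending key order using only `maxKey()`: take the maximum, then repeatedly ask for the largest key at most `key - 1` (which assumes integer keys, as NEO's u64-encoded tids/oids are). The same stepping trick, sketched over a sorted list with `bisect` standing in for `maxKey`:

```python
import bisect

def max_key(keys, upper=None):
    """Largest key <= upper (or the overall maximum); mimics BTree.maxKey."""
    i = len(keys) if upper is None else bisect.bisect_right(keys, upper)
    if not i:
        raise ValueError('no key satisfies the condition')
    return keys[i - 1]

def desc_keys(keys):
    """Yield integer keys of a sorted list in descending order."""
    try:
        key = max_key(keys)
    except ValueError:
        return
    while True:
        yield key
        try:
            key = max_key(keys, key - 1)
        except ValueError:
            break

print(list(desc_keys([1, 3, 7])))   # → [7, 3, 1]
```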
def safeIter(func, *args, **kw):
try:
some_list = func(*args, **kw)
except ValueError:
some_list = []
return some_list
class BTreeDatabaseManager(DatabaseManager):
_obj = None
_trans = None
_tobj = None
_ttrans = None
_pt = None
_config = None
def __init__(self, database):
super(BTreeDatabaseManager, self).__init__()
self.setup(reset=1)
def setup(self, reset=0):
if reset:
self._obj = OOBTree()
self._trans = OOBTree()
self.dropUnfinishedData()
self._pt = {}
self._config = {}
def _begin(self):
pass
def _commit(self):
pass
def _rollback(self):
pass
def getConfiguration(self, key):
return self._config[key]
def _setConfiguration(self, key, value):
self._config[key] = value
def _setPackTID(self, tid):
self._setConfiguration('_pack_tid', tid)
def _getPackTID(self):
try:
result = int(self.getConfiguration('_pack_tid'))
except KeyError:
result = -1
return result
def getPartitionTable(self):
pt = []
append = pt.append
for (offset, uuid), state in self._pt.iteritems():
append((offset, uuid, state))
return pt
def getLastTID(self, all=True):
try:
ltid = self._trans.maxKey()
except ValueError:
ltid = None
if all:
try:
tmp_ltid = self._ttrans.maxKey()
except ValueError:
tmp_ltid = None
tmp_serial = None
for tserial in self._tobj.values():
try:
max_tmp_serial = tserial.maxKey()
except ValueError:
pass
else:
tmp_serial = max(tmp_serial, max_tmp_serial)
ltid = max(ltid, tmp_ltid, tmp_serial)
if ltid is not None:
ltid = util.p64(ltid)
return ltid
def getUnfinishedTIDList(self):
p64 = util.p64
tid_set = set(p64(x) for x in self._ttrans.keys())
tid_set.update(p64(x) for x in iterObjSerials(self._tobj))
return list(tid_set)
def objectPresent(self, oid, tid, all=True):
u64 = util.u64
oid = u64(oid)
tid = u64(tid)
try:
result = self._obj[oid].has_key(tid)
except KeyError:
if all:
try:
result = self._tobj[oid].has_key(tid)
except KeyError:
result = False
else:
result = False
return result
def _getObjectData(self, oid, value_serial, tid):
if value_serial is None:
raise CreationUndone
if value_serial >= tid:
raise ValueError, "Incorrect value reference found for " \
"oid %d at tid %d: reference = %d" % (oid, value_serial, tid)
try:
tserial = self._obj[oid]
except KeyError:
raise IndexError(oid)
try:
compression, checksum, value, next_value_serial = tserial[
value_serial]
except KeyError:
raise IndexError(value_serial)
if value is None:
neo.lib.logging.info("Multiple levels of indirection when " \
"searching for object data for oid %d at tid %d. This " \
"causes suboptimal performance." % (oid, value_serial))
value_serial, compression, checksum, value = self._getObjectData(
oid, next_value_serial, value_serial)
return value_serial, compression, checksum, value
def _getObject(self, oid, tid=None, before_tid=None):
tserial = self._obj.get(oid)
if tserial is not None:
if tid is None:
try:
if before_tid is None:
tid = tserial.maxKey()
else:
tid = tserial.maxKey(before_tid - 1)
except ValueError:
return
result = tserial.get(tid)
if result:
try:
next_serial = tserial.minKey(tid + 1)
except ValueError:
next_serial = None
return (tid, next_serial) + result
def doSetPartitionTable(self, ptid, cell_list, reset):
pt = self._pt
if reset:
pt.clear()
for offset, uuid, state in cell_list:
# TODO: this logic should move out of database manager
# add 'dropCells(cell_list)' to API and use one query
key = (offset, uuid)
if state == CellStates.DISCARDED:
pt.pop(key, None)
else:
pt[key] = int(state)
self.setPTID(ptid)
def changePartitionTable(self, ptid, cell_list):
self.doSetPartitionTable(ptid, cell_list, False)
def setPartitionTable(self, ptid, cell_list):
self.doSetPartitionTable(ptid, cell_list, True)
def dropPartitions(self, num_partitions, offset_list):
offset_list = frozenset(offset_list)
def same_partition(key, _):
return key % num_partitions in offset_list
batchDelete(self._obj, same_partition, recycle_subtrees=True)
batchDelete(self._trans, same_partition)
def dropUnfinishedData(self):
self._tobj = OOBTree()
self._ttrans = OOBTree()
def storeTransaction(self, tid, object_list, transaction, temporary=True):
u64 = util.u64
tid = u64(tid)
if temporary:
obj = self._tobj
trans = self._ttrans
else:
obj = self._obj
trans = self._trans
for oid, compression, checksum, data, value_serial in object_list:
oid = u64(oid)
if data is None:
compression = checksum = data
else:
# TODO: unit-test this raise
if value_serial is not None:
raise ValueError('Either data or value_serial '
'must be None (oid %d, tid %d)' % (oid, tid))
try:
tserial = obj[oid]
except KeyError:
tserial = obj[oid] = OOBTree()
if value_serial is not None:
value_serial = u64(value_serial)
tserial[tid] = (compression, checksum, data, value_serial)
if transaction is not None:
oid_list, user, desc, ext, packed = transaction
trans[tid] = (tuple(oid_list), user, desc, ext, packed)
def _getDataTIDFromData(self, oid, result):
tid, _, _, _, data, value_serial = result
if data is None:
try:
data_serial = self._getObjectData(oid, value_serial, tid)[0]
except CreationUndone:
data_serial = None
else:
data_serial = tid
return tid, data_serial
def _getDataTID(self, oid, tid=None, before_tid=None):
result = self._getObject(oid, tid=tid, before_tid=before_tid)
if result is None:
result = (None, None)
else:
result = self._getDataTIDFromData(oid, result)
return result
def finishTransaction(self, tid):
tid = util.u64(tid)
self._popTransactionFromTObj(tid, True)
ttrans = self._ttrans
try:
data = ttrans[tid]
except KeyError:
pass
else:
del ttrans[tid]
self._trans[tid] = data
def _popTransactionFromTObj(self, tid, to_obj):
if to_obj:
recycle_subtrees = False
obj = self._obj
def callback(oid, data):
try:
tserial = obj[oid]
except KeyError:
tserial = obj[oid] = OOBTree()
tserial[tid] = data
else:
recycle_subtrees = True
callback = lambda oid, data: None
def tester_callback(oid, tserial):
try:
data = tserial[tid]
except KeyError:
pass
else:
del tserial[tid]
callback(oid, data)
return not tserial
batchDelete(self._tobj, tester_callback,
recycle_subtrees=recycle_subtrees)
def deleteTransaction(self, tid, oid_list=()):
tid = util.u64(tid)
self._popTransactionFromTObj(tid, False)
try:
del self._ttrans[tid]
except KeyError:
pass
for oid in oid_list:
self._deleteObject(oid, serial=tid)
try:
del self._trans[tid]
except KeyError:
pass
def deleteTransactionsAbove(self, num_partitions, partition, tid, max_tid):
def same_partition(key, _):
return key % num_partitions == partition
batchDelete(self._trans, same_partition,
iter_kw={'min': util.u64(tid), 'max': util.u64(max_tid)})
def deleteObject(self, oid, serial=None):
u64 = util.u64
oid = u64(oid)
if serial is not None:
serial = u64(serial)
self._deleteObject(oid, serial=serial)
def _deleteObject(self, oid, serial=None):
obj = self._obj
try:
tserial = obj[oid]
except KeyError:
pass
else:
if serial is not None:
try:
del tserial[serial]
except KeyError:
pass
if serial is None or not tserial:
prune(obj[oid])
del obj[oid]
def deleteObjectsAbove(self, num_partitions, partition, oid, serial,
max_tid):
obj = self._obj
u64 = util.u64
oid = u64(oid)
serial = u64(serial)
max_tid = u64(max_tid)
if oid % num_partitions == partition:
try:
tserial = obj[oid]
except KeyError:
pass
else:
batchDelete(tserial, lambda _, __: True,
iter_kw={'min': serial, 'max': max_tid})
def same_partition(key, _):
return key % num_partitions == partition
batchDelete(obj, same_partition,
iter_kw={'min': oid, 'excludemin': True, 'max': max_tid},
recycle_subtrees=True)
def getTransaction(self, tid, all=False):
tid = util.u64(tid)
try:
result = self._trans[tid]
except KeyError:
if all:
try:
result = self._ttrans[tid]
except KeyError:
result = None
else:
result = None
if result is not None:
oid_list, user, desc, ext, packed = result
result = (list(oid_list), user, desc, ext, packed)
return result
def getOIDList(self, min_oid, length, num_partitions,
partition_list):
p64 = util.p64
partition_list = frozenset(partition_list)
result = []
append = result.append
for oid in safeIter(self._obj.keys, min=min_oid):
if oid % num_partitions in partition_list:
if length == 0:
break
length -= 1
append(p64(oid))
return result
def _getObjectLength(self, oid, value_serial):
if value_serial is None:
raise CreationUndone
_, _, value, value_serial = self._obj[oid][value_serial]
if value is None:
neo.lib.logging.info("Multiple levels of indirection when " \
"searching for object data for oid %d at tid %d. This " \
"causes suboptimal performance." % (oid, value_serial))
length = self._getObjectLength(oid, value_serial)
else:
length = len(value)
return length
def getObjectHistory(self, oid, offset=0, length=1):
# FIXME: This method doesn't take the client's current transaction id as
# parameter, which means it can return transactions in the future of
# client's transaction.
oid = util.u64(oid)
p64 = util.p64
pack_tid = self._getPackTID()
try:
tserial = self._obj[oid]
except KeyError:
result = None
else:
result = []
append = result.append
tserial_iter = descItems(tserial)
while offset > 0:
tserial_iter.next()
offset -= 1
for serial, (_, _, value, value_serial) in tserial_iter:
if length == 0 or serial < pack_tid:
break
length -= 1
if value is None:
try:
data_length = self._getObjectLength(oid, value_serial)
except CreationUndone:
data_length = 0
else:
data_length = len(value)
append((p64(serial), data_length))
if not result:
result = None
return result
def getObjectHistoryFrom(self, min_oid, min_serial, max_serial, length,
num_partitions, partition):
u64 = util.u64
p64 = util.p64
min_oid = u64(min_oid)
min_serial = u64(min_serial)
max_serial = u64(max_serial)
result = {}
for oid, tserial in safeIter(self._obj.items, min=min_oid):
if oid % num_partitions == partition:
if length == 0:
break
if oid == min_oid:
try:
tid_seq = tserial.keys(min=min_serial, max=max_serial)
except ValueError:
continue
else:
tid_seq = tserial.keys(max=max_serial)
if not tid_seq:
continue
result[p64(oid)] = tid_list = []
append = tid_list.append
for tid in tid_seq:
if length == 0:
break
length -= 1
append(p64(tid))
else:
continue
break
return result
def getTIDList(self, offset, length, num_partitions, partition_list):
p64 = util.p64
partition_list = frozenset(partition_list)
result = []
append = result.append
trans_iter = descKeys(self._trans)
while offset > 0:
tid = trans_iter.next()
if tid % num_partitions in partition_list:
offset -= 1
for tid in trans_iter:
if tid % num_partitions in partition_list:
if length == 0:
break
length -= 1
append(p64(tid))
return result
def getReplicationTIDList(self, min_tid, max_tid, length, num_partitions,
partition):
p64 = util.p64
u64 = util.u64
result = []
append = result.append
for tid in safeIter(self._trans.keys, min=u64(min_tid), max=u64(max_tid)):
if tid % num_partitions == partition:
if length == 0:
break
length -= 1
append(p64(tid))
return result
def _updatePackFuture(self, oid, orig_serial, max_serial,
updateObjectDataForPack):
p64 = util.p64
# Before deleting this object's revision, see if there is any
# transaction referencing its value at max_serial or above.
# If there is, copy value to the first future transaction. Any further
# reference is just updated to point to the new data location.
value_serial = None
obj = self._obj
for tree in (obj, self._tobj):
try:
tserial = tree[oid]
except KeyError:
continue
for serial, record in tserial.items(
min=max_serial):
if record[3] == orig_serial:
if value_serial is None:
value_serial = serial
tserial[serial] = tserial[orig_serial]
else:
record = list(record)
record[3] = value_serial
tserial[serial] = tuple(record)
def getObjectData():
assert value_serial is None
return obj[oid][orig_serial][:3]
if value_serial:
value_serial = p64(value_serial)
updateObjectDataForPack(p64(oid), p64(orig_serial), value_serial,
getObjectData)
def pack(self, tid, updateObjectDataForPack):
tid = util.u64(tid)
updatePackFuture = self._updatePackFuture
self._setPackTID(tid)
def obj_callback(oid, tserial):
try:
max_serial = tserial.maxKey(tid)
except ValueError:
# No entry before pack TID, nothing to pack on this object.
pass
else:
if tserial[max_serial][2] == '':
# Last version before/at pack TID is a creation undo, drop
# it too.
max_serial += 1
def serial_callback(serial, _):
updatePackFuture(oid, serial, max_serial,
updateObjectDataForPack)
batchDelete(tserial, serial_callback,
iter_kw={'max': max_serial, 'excludemax': True})
return not tserial
batchDelete(self._obj, obj_callback, recycle_subtrees=True)
def checkTIDRange(self, min_tid, max_tid, length, num_partitions, partition):
if length:
tid_list = []
for tid in safeIter(self._trans.keys, min=util.u64(min_tid),
max=util.u64(max_tid)):
if tid % num_partitions == partition:
tid_list.append(tid)
if len(tid_list) >= length:
break
if tid_list:
return (len(tid_list),
md5(','.join(map(str, tid_list))).digest(),
util.p64(tid_list[-1]))
return 0, None, ZERO_TID
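`checkTIDRange` lets two storage nodes compare a range of TIDs without shipping the full list: each side returns a count, an MD5 digest of the comma-joined TIDs, and the highest TID covered; equal triples mean the ranges agree. A standalone sketch of that summary (function name and integer TIDs are illustrative; NEO uses packed 64-bit values):

```python
from hashlib import md5

ZERO_TID = 0

def check_tid_range(tids, min_tid, max_tid, length):
    """Summarize up to `length` TIDs in [min_tid, max_tid] as
    (count, digest, last_tid) for cheap cross-node comparison."""
    selected = [t for t in sorted(tids) if min_tid <= t <= max_tid][:length]
    if selected:
        digest = md5(','.join(map(str, selected)).encode()).digest()
        return len(selected), digest, selected[-1]
    return 0, None, ZERO_TID

a = check_tid_range({10, 20, 30, 40}, 15, 40, 2)
b = check_tid_range({10, 20, 30, 40}, 15, 40, 2)
c = check_tid_range({10, 20, 35, 40}, 15, 40, 2)
print(a == b, a == c)   # → True False
```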
def checkSerialRange(self, min_oid, min_serial, max_tid, length,
num_partitions, partition):
if length:
u64 = util.u64
min_oid = u64(min_oid)
max_tid = u64(max_tid)
oid_list = []
serial_list = []
for oid, tserial in safeIter(self._obj.items, min=min_oid):
if oid % num_partitions == partition:
try:
if oid == min_oid:
tserial = tserial.keys(min=u64(min_serial),
max=max_tid)
else:
tserial = tserial.keys(max=max_tid)
except ValueError:
continue
for serial in tserial:
oid_list.append(oid)
serial_list.append(serial)
if len(oid_list) >= length:
break
else:
continue
break
if oid_list:
p64 = util.p64
return (len(oid_list),
md5(','.join(map(str, oid_list))).digest(),
p64(oid_list[-1]),
md5(','.join(map(str, serial_list))).digest(),
p64(serial_list[-1]))
return 0, None, ZERO_OID, None, ZERO_TID
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/database/manager.py 0000664 0000000 0000000 00000044352 11634614701 0026374 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.lib import util
from neo.lib.exception import DatabaseFailure
class CreationUndone(Exception):
pass
class DatabaseManager(object):
"""This class only describes an interface for database managers."""
def __init__(self):
"""
Initialize the object.
"""
self._under_transaction = False
def isUnderTransaction(self):
return self._under_transaction
def begin(self):
"""
Begin a transaction
"""
if self._under_transaction:
raise DatabaseFailure('A transaction has already begun')
self._begin()
self._under_transaction = True
def commit(self):
"""
Commit the current transaction
"""
if not self._under_transaction:
raise DatabaseFailure('The transaction has not begun')
self._commit()
self._under_transaction = False
def rollback(self):
"""
Rollback the current transaction
"""
self._rollback()
self._under_transaction = False
def setup(self, reset = 0):
"""Set up a database. If reset is true, existing data must be
discarded."""
raise NotImplementedError
def _begin(self):
raise NotImplementedError
def _commit(self):
raise NotImplementedError
def _rollback(self):
raise NotImplementedError
def _getPartition(self, oid_or_tid):
return oid_or_tid % self.getNumPartitions()
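# Illustrative note, not part of the original source: with 4
# partitions, an oid or tid of 10 maps to partition 10 % 4 == 2.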
def getConfiguration(self, key):
"""
Return a configuration value; return None if not found or not set
"""
raise NotImplementedError
def setConfiguration(self, key, value):
"""
Set a configuration value
"""
if self._under_transaction:
self._setConfiguration(key, value)
else:
self.begin()
try:
self._setConfiguration(key, value)
except:
self.rollback()
raise
self.commit()
def _setConfiguration(self, key, value):
raise NotImplementedError
def getUUID(self):
"""
Load a UUID from a database.
"""
return util.bin(self.getConfiguration('uuid'))
def setUUID(self, uuid):
"""
Store an UUID into a database.
"""
self.setConfiguration('uuid', util.dump(uuid))
def getNumPartitions(self):
"""
Load the number of partitions from a database.
"""
n = self.getConfiguration('partitions')
if n is not None:
return int(n)
def setNumPartitions(self, num_partitions):
"""
Store the number of partitions into a database.
"""
self.setConfiguration('partitions', num_partitions)
def getNumReplicas(self):
"""
Load the number of replicas from a database.
"""
n = self.getConfiguration('replicas')
if n is not None:
return int(n)
def setNumReplicas(self, num_replicas):
"""
Store the number of replicas into a database.
"""
self.setConfiguration('replicas', num_replicas)
def getName(self):
"""
Load a name from a database.
"""
return self.getConfiguration('name')
def setName(self, name):
"""
Store a name into a database.
"""
self.setConfiguration('name', name)
def getPTID(self):
"""
Load a Partition Table ID from a database.
"""
return long(self.getConfiguration('ptid'))
def setPTID(self, ptid):
"""
Store a Partition Table ID into a database.
"""
if ptid is not None:
assert isinstance(ptid, (int, long)), ptid
ptid = str(ptid)
self.setConfiguration('ptid', ptid)
def getLastOID(self):
"""
Returns the last OID used
"""
return util.bin(self.getConfiguration('loid'))
def setLastOID(self, loid):
"""
Set the last OID used
"""
self.setConfiguration('loid', util.dump(loid))
def getPartitionTable(self):
"""Return a whole partition table as a tuple of rows. Each row
is again a tuple of an offset (row ID), a UUID of a storage
node, and a cell state."""
raise NotImplementedError
def getLastTID(self, all = True):
"""Return the last TID in a database. If all is true,
unfinished transactions must be taken into account. If there
is no TID in the database, return None."""
raise NotImplementedError
def getUnfinishedTIDList(self):
"""Return a list of unfinished transaction's IDs."""
raise NotImplementedError
def objectPresent(self, oid, tid, all = True):
"""Return true iff an object specified by a given pair of an
object ID and a transaction ID is present in a database.
Otherwise, return false. If all is true, the object must be
searched from unfinished transactions as well."""
raise NotImplementedError
def _getObject(self, oid, tid=None, before_tid=None):
"""
oid (int)
Identifier of object to retrieve.
tid (int, None)
Exact serial to retrieve.
before_tid (packed, None)
Serial to retrieve is the highest existing one strictly below this
value.
"""
raise NotImplementedError
def getObject(self, oid, tid=None, before_tid=None, resolve_data=True):
"""
oid (packed)
Identifier of object to retrieve.
tid (packed, None)
Exact serial to retrieve.
before_tid (packed, None)
Serial to retrieve is the highest existing one strictly below this
value.
resolve_data (bool, True)
If actual object data is desired, or raw record content.
This matters when the retrieved record undoes a transaction.
Return value:
None: Given oid doesn't exist in database.
False: No record found, but another one exists for given oid.
6-tuple: Record content.
- record serial (packed)
- serial of next record modifying object (packed, None)
- compression (boolean-ish, None)
- checksum (integer, None)
- data (binary string, None)
- data_serial (packed, None)
"""
# TODO: resolve_data must be unit-tested
u64 = util.u64
p64 = util.p64
oid = u64(oid)
if tid is not None:
tid = u64(tid)
if before_tid is not None:
before_tid = u64(before_tid)
result = self._getObject(oid, tid, before_tid)
if result is None:
# See if object exists at all
result = self._getObject(oid)
if result is not None:
# Object exists
result = False
else:
serial, next_serial, compression, checksum, data, data_serial = \
result
assert before_tid is None or next_serial is None or \
before_tid <= next_serial
if data is None and resolve_data:
try:
_, compression, checksum, data = self._getObjectData(oid,
data_serial, serial)
except CreationUndone:
compression = 0
# XXX: this is the valid checksum for empty string
checksum = 1
data = ''
data_serial = None
if serial is not None:
serial = p64(serial)
if next_serial is not None:
next_serial = p64(next_serial)
if data_serial is not None:
data_serial = p64(data_serial)
result = serial, next_serial, compression, checksum, data, data_serial
return result
def changePartitionTable(self, ptid, cell_list):
"""Change a part of a partition table. The list of cells is
a tuple of tuples, each of which consists of an offset (row ID),
a UUID of a storage node, and a cell state. The Partition
Table ID must be stored as well."""
raise NotImplementedError
def setPartitionTable(self, ptid, cell_list):
"""Set a whole partition table. The semantics is the same as
changePartitionTable, except that existing data must be
thrown away."""
raise NotImplementedError
def dropPartitions(self, num_partitions, offset_list):
""" Drop any data of non-assigned partitions for a given UUID """
raise NotImplementedError('this method must be overridden')
def dropUnfinishedData(self):
"""Drop any unfinished data from a database."""
raise NotImplementedError
def storeTransaction(self, tid, object_list, transaction, temporary = True):
"""Store a transaction temporarily, if temporary is true. Note
that this transaction is not finished yet. The list of objects
contains tuples, each of which consists of an object ID,
a compression specification, a checksum and object data.
The transaction is either None or a tuple of the list of OIDs,
user information, a description, extension information and transaction
pack state (True for packed)."""
raise NotImplementedError
def _getDataTID(self, oid, tid=None, before_tid=None):
"""
Return a 2-tuple:
tid (int)
tid corresponding to received parameters
serial
tid at which actual object data is located
If 'tid is None', requested object and transaction could
not be found.
If 'serial is None', requested object exists but has no data (its creation
has been undone).
If 'tid == serial', it means that requested transaction
contains object data.
Otherwise, it's an undo transaction which did not involve conflict
resolution.
"""
raise NotImplementedError
def findUndoTID(self, oid, tid, ltid, undone_tid, transaction_object):
"""
oid
Object OID
tid
Transaction doing the undo
ltid
Upper (excluded) bound of transactions visible to transaction doing
the undo.
undone_tid
Transaction to undo
transaction_object
Object data from memory, if it was modified by running
transaction.
None if it was not modified by the running transaction.
Returns a 3-tuple:
current_tid (p64)
TID of most recent version of the object client's transaction can
see. This is used later to detect current conflicts (eg, another
client modifying the same object in parallel)
data_tid (int)
TID containing (without indirection) the data prior to undone
transaction.
None if object doesn't exist prior to transaction being undone
(its creation is being undone).
is_current (bool)
False if object was modified by later transaction (ie, data_tid is
not current), True otherwise.
"""
u64 = util.u64
p64 = util.p64
oid = u64(oid)
tid = u64(tid)
if ltid:
ltid = u64(ltid)
undone_tid = u64(undone_tid)
_getDataTID = self._getDataTID
if transaction_object is not None:
_, _, _, _, tvalue_serial = transaction_object
current_tid = current_data_tid = u64(tvalue_serial)
else:
current_tid, current_data_tid = _getDataTID(oid, before_tid=ltid)
if current_tid is None:
return (None, None, False)
found_undone_tid, undone_data_tid = _getDataTID(oid, tid=undone_tid)
assert found_undone_tid is not None, (oid, undone_tid)
is_current = undone_data_tid in (current_data_tid, tid)
# Load object data as it was before given transaction.
# It can be None, in which case it means we are undoing object
# creation.
_, data_tid = _getDataTID(oid, before_tid=undone_tid)
if data_tid is not None:
data_tid = p64(data_tid)
return p64(current_tid), data_tid, is_current
def finishTransaction(self, tid):
"""Finish a transaction specified by a given ID, by moving
temporarily data to a finished area."""
raise NotImplementedError
def deleteTransaction(self, tid, oid_list=()):
"""Delete a transaction and its content specified by a given ID and
an oid list"""
raise NotImplementedError
def deleteTransactionsAbove(self, num_partitions, partition, tid, max_tid):
"""Delete all transactions above given TID (inclued) in given
partition, but never above max_tid (in case transactions are committed
during replication)."""
raise NotImplementedError
def deleteObject(self, oid, serial=None):
"""Delete given object. If serial is given, only delete that serial for
given oid."""
raise NotImplementedError
def deleteObjectsAbove(self, num_partitions, partition, oid, serial,
max_tid):
"""Delete all objects above given OID and serial (inclued) in given
partition, but never above max_tid (in case objects are stored during
replication)"""
raise NotImplementedError
def getTransaction(self, tid, all = False):
"""Return a tuple of the list of OIDs, user information,
a description, and extension information, for a given transaction
ID. If there is no such transaction ID in a database, return None.
If all is true, the transaction must be searched from a temporary
area as well."""
raise NotImplementedError
def getObjectHistory(self, oid, offset = 0, length = 1):
"""Return a list of serials and sizes for a given object ID.
The length specifies the maximum size of such a list. Result starts
with latest serial, and the list must be sorted in descending order.
If there is no such object ID in a database, return None."""
raise NotImplementedError
def getObjectHistoryFrom(self, oid, min_serial, max_serial, length,
num_partitions, partition):
"""Return a dict of length serials grouped by oid at (or above)
min_oid and min_serial and below max_serial, for given partition,
sorted in ascending order."""
raise NotImplementedError
def getTIDList(self, offset, length, num_partitions, partition_list):
"""Return a list of TIDs in ascending order from an offset,
at most the specified length. The list of partitions are passed
to filter out non-applicable TIDs."""
raise NotImplementedError
def getReplicationTIDList(self, min_tid, max_tid, length, num_partitions,
partition):
"""Return a list of TIDs in ascending order from an initial tid value,
at most the specified length up to max_tid. The partition number is
passed to filter out non-applicable TIDs."""
raise NotImplementedError
def pack(self, tid, updateObjectDataForPack):
"""Prune all non-current object revisions at given tid.
updateObjectDataForPack is a function called for each deleted object
and revision with:
- OID
- packed TID
- new value_serial
If object data was moved to an after-pack-tid revision, this
parameter contains the TID of that revision, so that it can be
back-linked to.
- getObjectData function
To call if value_serial is None and an object needs to be updated.
Takes no parameter, returns a 3-tuple: compression, checksum,
value
"""
raise NotImplementedError
def checkTIDRange(self, min_tid, max_tid, length, num_partitions, partition):
"""
Generate a digest from the transaction list.
min_tid (packed)
TID at which verification starts.
length (int)
Maximum number of records to include in result.
num_partitions, partition (int, int)
Specifies concerned partition.
Returns a 3-tuple:
- number of records actually found
- a digest computed from the TIDs of found records
None if no record found
- biggest TID found (ie, TID of last record read)
ZERO_TID if no record found
"""
raise NotImplementedError
def checkSerialRange(self, min_oid, min_serial, max_tid, length,
num_partitions, partition):
"""
Generate a digest from the object list.
min_oid (packed)
OID at which verification starts.
min_serial (packed)
Serial of min_oid object at which search should start.
length
Maximum number of records to include in result.
num_partitions, partition (int, int)
Specifies concerned partition.
Returns a 5-tuple:
- number of records actually found
- a digest computed from the OIDs of found records
None if no record found
- biggest OID found (ie, OID of last record read)
ZERO_OID if no record found
- a digest computed from the serials of found records
None if no record found
- biggest serial found for biggest OID found (ie, serial of last
record read)
ZERO_TID if no record found
"""
raise NotImplementedError
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/database/mysqldb.py 0000664 0000000 0000000 00000105113 11634614701 0026426 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from binascii import a2b_hex
import MySQLdb
from MySQLdb import OperationalError
from MySQLdb.constants.CR import SERVER_GONE_ERROR, SERVER_LOST
import neo.lib
from array import array
from hashlib import md5
import string
from neo.storage.database import DatabaseManager
from neo.storage.database.manager import CreationUndone
from neo.lib.exception import DatabaseFailure
from neo.lib.protocol import CellStates, ZERO_OID, ZERO_TID
from neo.lib import util
LOG_QUERIES = False
def splitOIDField(tid, oids):
if (len(oids) % 8) != 0 or len(oids) == 0:
raise DatabaseFailure('invalid oids length for tid %d: %d' % (tid,
len(oids)))
oid_list = []
append = oid_list.append
for i in xrange(0, len(oids), 8):
append(oids[i:i+8])
return oid_list
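# Illustrative note, not part of the original source: the 'oids' column
# concatenates 8-byte packed OIDs, so a 16-byte value splits back into
# two OIDs, e.g.:
# splitOIDField(1, 'AAAAAAAABBBBBBBB') == ['AAAAAAAA', 'BBBBBBBB']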
class MySQLDatabaseManager(DatabaseManager):
"""This class manages a database on MySQL."""
# Disabled even on MySQL 5.1 & 5.5 because 'select count(*) from obj'
# sometimes returns incorrect values.
_use_partition = False
def __init__(self, database):
super(MySQLDatabaseManager, self).__init__()
self.user, self.passwd, self.db = self._parse(database)
self.conn = None
self._config = {}
self._connect()
def _parse(self, database):
""" Get the database credentials (username, password, database) """
# expected pattern : [user[:password]@]database
username = None
password = None
if '@' in database:
(username, database) = database.split('@')
if ':' in username:
(username, password) = username.split(':')
return (username, password, database)
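# Illustrative note, not part of the original source, following the
# [user[:password]@]database pattern documented above:
# _parse('bob:secret@neo') == ('bob', 'secret', 'neo')
# _parse('bob@neo') == ('bob', None, 'neo')
# _parse('neo') == (None, None, 'neo')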
def close(self):
self.conn.close()
def _connect(self):
kwd = {'db' : self.db, 'user' : self.user}
if self.passwd is not None:
kwd['passwd'] = self.passwd
neo.lib.logging.info(
'connecting to MySQL on the database %s with user %s',
self.db, self.user)
self.conn = MySQLdb.connect(**kwd)
self.conn.autocommit(False)
self.conn.query("SET SESSION group_concat_max_len = -1")
def _begin(self):
self.query("""BEGIN""")
def _commit(self):
if LOG_QUERIES:
neo.lib.logging.debug('committing...')
self.conn.commit()
def _rollback(self):
if LOG_QUERIES:
neo.lib.logging.debug('aborting...')
self.conn.rollback()
def query(self, query):
"""Query data from a database."""
conn = self.conn
try:
if LOG_QUERIES:
printable_char_list = []
for c in query.split('\n', 1)[0][:70]:
if c not in string.printable or c in '\t\x0b\x0c\r':
c = '\\x%02x' % ord(c)
printable_char_list.append(c)
query_part = ''.join(printable_char_list)
neo.lib.logging.debug('querying %s...', query_part)
conn.query(query)
r = conn.store_result()
if r is not None:
new_r = []
for row in r.fetch_row(r.num_rows()):
new_row = []
for d in row:
if isinstance(d, array):
d = d.tostring()
new_row.append(d)
new_r.append(tuple(new_row))
r = tuple(new_r)
except OperationalError, m:
if m[0] in (SERVER_GONE_ERROR, SERVER_LOST):
neo.lib.logging.info('the MySQL server is gone; reconnecting')
self._connect()
return self.query(query)
raise DatabaseFailure('MySQL error %d: %s' % (m[0], m[1]))
return r
def escape(self, s):
"""Escape special characters in a string."""
return self.conn.escape_string(s)
def setup(self, reset = 0):
self._config.clear()
q = self.query
if reset:
q('DROP TABLE IF EXISTS config, pt, trans, obj, obj_short, '
'ttrans, tobj')
# The table "config" stores configuration parameters which affect the
# persistent data.
q("""CREATE TABLE IF NOT EXISTS config (
name VARBINARY(16) NOT NULL PRIMARY KEY,
value VARBINARY(255) NULL
) ENGINE = InnoDB""")
# The table "pt" stores a partition table.
q("""CREATE TABLE IF NOT EXISTS pt (
rid INT UNSIGNED NOT NULL,
uuid CHAR(32) NOT NULL,
state TINYINT UNSIGNED NOT NULL,
PRIMARY KEY (rid, uuid)
) ENGINE = InnoDB""")
p = self._use_partition and """ PARTITION BY LIST (partition) (
PARTITION dummy VALUES IN (NULL))""" or ''
# The table "trans" stores information on committed transactions.
q("""CREATE TABLE IF NOT EXISTS trans (
partition SMALLINT UNSIGNED NOT NULL,
tid BIGINT UNSIGNED NOT NULL,
packed BOOLEAN NOT NULL,
oids MEDIUMBLOB NOT NULL,
user BLOB NOT NULL,
description BLOB NOT NULL,
ext BLOB NOT NULL,
PRIMARY KEY (partition, tid)
) ENGINE = InnoDB""" + p)
# The table "obj" stores committed object data.
q("""CREATE TABLE IF NOT EXISTS obj (
partition SMALLINT UNSIGNED NOT NULL,
oid BIGINT UNSIGNED NOT NULL,
serial BIGINT UNSIGNED NOT NULL,
compression TINYINT UNSIGNED NULL,
checksum INT UNSIGNED NULL,
value LONGBLOB NULL,
value_serial BIGINT UNSIGNED NULL,
PRIMARY KEY (partition, oid, serial)
) ENGINE = InnoDB""" + p)
# The table "obj_short" contains columns which are accessed in queries
# which don't need to access object data. This is needed because InnoDB
# loads a whole row even when it only needs columns in primary key.
q('CREATE TABLE IF NOT EXISTS obj_short ('
'partition SMALLINT UNSIGNED NOT NULL,'
'oid BIGINT UNSIGNED NOT NULL,'
'serial BIGINT UNSIGNED NOT NULL,'
'PRIMARY KEY (partition, oid, serial)'
') ENGINE = InnoDB' + p)
# The table "ttrans" stores information on uncommitted transactions.
q("""CREATE TABLE IF NOT EXISTS ttrans (
partition SMALLINT UNSIGNED NOT NULL,
tid BIGINT UNSIGNED NOT NULL,
packed BOOLEAN NOT NULL,
oids MEDIUMBLOB NOT NULL,
user BLOB NOT NULL,
description BLOB NOT NULL,
ext BLOB NOT NULL
) ENGINE = InnoDB""")
# The table "tobj" stores uncommitted object data.
q("""CREATE TABLE IF NOT EXISTS tobj (
partition SMALLINT UNSIGNED NOT NULL,
oid BIGINT UNSIGNED NOT NULL,
serial BIGINT UNSIGNED NOT NULL,
compression TINYINT UNSIGNED NULL,
checksum INT UNSIGNED NULL,
value LONGBLOB NULL,
value_serial BIGINT UNSIGNED NULL
) ENGINE = InnoDB""")
def objQuery(self, query):
"""
Execute given query for both obj and obj_short tables.
query: query string, must contain "%(table)s" where obj table name is
needed.
"""
q = self.query
for table in ('obj', 'obj_short'):
q(query % {'table': table})
def getConfiguration(self, key):
if key in self._config:
return self._config[key]
q = self.query
e = self.escape
sql_key = e(str(key))
try:
r = q("SELECT value FROM config WHERE name = '%s'" % sql_key)[0][0]
except IndexError:
raise KeyError, key
self._config[key] = r
return r
def _setConfiguration(self, key, value):
q = self.query
e = self.escape
self._config[key] = value
key = e(str(key))
if value is None:
value = 'NULL'
else:
value = "'%s'" % (e(str(value)), )
q("""REPLACE INTO config VALUES ('%s', %s)""" % (key, value))
def _setPackTID(self, tid):
self._setConfiguration('_pack_tid', tid)
def _getPackTID(self):
try:
result = int(self.getConfiguration('_pack_tid'))
except KeyError:
result = -1
return result
def getPartitionTable(self):
q = self.query
cell_list = q("""SELECT rid, uuid, state FROM pt""")
pt = []
for offset, uuid, state in cell_list:
uuid = util.bin(uuid)
pt.append((offset, uuid, state))
return pt
def getLastTID(self, all = True):
# XXX this does not consider serials in obj.
# I am not sure if this is really harmful. For safety,
# check for tobj only at the moment. The reason why obj is
# not tested is that it is too slow to get the max serial
# from obj when it has a huge number of objects, because
# serial is the second part of the primary key, so the index
# is not used in this case. If doing it, it is better to
# make another index for serial, but I doubt the cost increase
is worth it.
q = self.query
self.begin()
ltid = q("SELECT MAX(value) FROM (SELECT MAX(tid) AS value FROM trans "
"GROUP BY partition) AS foo")[0][0]
if all:
tmp_ltid = q("""SELECT MAX(tid) FROM ttrans""")[0][0]
if ltid is None or (tmp_ltid is not None and ltid < tmp_ltid):
ltid = tmp_ltid
tmp_serial = q("""SELECT MAX(serial) FROM tobj""")[0][0]
if ltid is None or (tmp_serial is not None and ltid < tmp_serial):
ltid = tmp_serial
self.commit()
if ltid is not None:
ltid = util.p64(ltid)
return ltid
def getUnfinishedTIDList(self):
q = self.query
tid_set = set()
self.begin()
r = q("""SELECT tid FROM ttrans""")
tid_set.update((util.p64(t[0]) for t in r))
r = q("""SELECT serial FROM tobj""")
self.commit()
tid_set.update((util.p64(t[0]) for t in r))
return list(tid_set)
def objectPresent(self, oid, tid, all = True):
q = self.query
oid = util.u64(oid)
tid = util.u64(tid)
partition = self._getPartition(oid)
self.begin()
r = q("SELECT oid FROM obj_short WHERE partition=%d AND oid=%d AND "
"serial=%d" % (partition, oid, tid))
if not r and all:
r = q("""SELECT oid FROM tobj WHERE oid = %d AND serial = %d""" \
% (oid, tid))
self.commit()
if r:
return True
return False
def _getObjectData(self, oid, value_serial, tid):
if value_serial is None:
raise CreationUndone
if value_serial >= tid:
raise ValueError, "Incorrect value reference found for " \
"oid %d at tid %d: reference = %d" % (oid, tid, value_serial)
r = self.query("""SELECT compression, checksum, value, """ \
"""value_serial FROM obj WHERE partition = %(partition)d """
"""AND oid = %(oid)d AND serial = %(serial)d""" % {
'partition': self._getPartition(oid),
'oid': oid,
'serial': value_serial,
})
compression, checksum, value, next_value_serial = r[0]
if value is None:
neo.lib.logging.info("Multiple levels of indirection when " \
"searching for object data for oid %d at tid %d. This " \
"causes suboptimal performance." % (oid, value_serial))
value_serial, compression, checksum, value = self._getObjectData(
oid, next_value_serial, value_serial)
return value_serial, compression, checksum, value
def _getObject(self, oid, tid=None, before_tid=None):
q = self.query
partition = self._getPartition(oid)
sql = """SELECT serial, compression, checksum, value, value_serial
FROM obj
WHERE partition = %d
AND oid = %d""" % (partition, oid)
if tid is not None:
sql += ' AND serial = %d' % tid
elif before_tid is not None:
sql += ' AND serial < %d ORDER BY serial DESC LIMIT 1' % before_tid
else:
# XXX I want to express "HAVING serial = MAX(serial)", but
# MySQL does not use an index for a HAVING clause!
sql += ' ORDER BY serial DESC LIMIT 1'
r = q(sql)
try:
serial, compression, checksum, data, value_serial = r[0]
except IndexError:
return None
r = q("""SELECT serial FROM obj_short
WHERE partition = %d AND oid = %d AND serial > %d
ORDER BY serial LIMIT 1""" % (partition, oid, serial))
try:
next_serial = r[0][0]
except IndexError:
next_serial = None
return serial, next_serial, compression, checksum, data, value_serial
def doSetPartitionTable(self, ptid, cell_list, reset):
q = self.query
e = self.escape
offset_list = []
self.begin()
try:
if reset:
q("""TRUNCATE pt""")
for offset, uuid, state in cell_list:
uuid = e(util.dump(uuid))
# TODO: this logic should move out of database manager
# add 'dropCells(cell_list)' to API and use one query
if state == CellStates.DISCARDED:
q("""DELETE FROM pt WHERE rid = %d AND uuid = '%s'""" \
% (offset, uuid))
else:
offset_list.append(offset)
q("""INSERT INTO pt VALUES (%d, '%s', %d)
ON DUPLICATE KEY UPDATE state = %d""" \
% (offset, uuid, state, state))
self.setPTID(ptid)
except:
self.rollback()
raise
self.commit()
if self._use_partition:
for offset in offset_list:
add = """ALTER TABLE %%s ADD PARTITION (
PARTITION p%u VALUES IN (%u))""" % (offset, offset)
for table in 'trans', 'obj', 'obj_short':
try:
self.conn.query(add % table)
except OperationalError, (code, _):
if code != 1517: # duplicate partition name
raise
def changePartitionTable(self, ptid, cell_list):
self.doSetPartitionTable(ptid, cell_list, False)
def setPartitionTable(self, ptid, cell_list):
self.doSetPartitionTable(ptid, cell_list, True)
def dropPartitions(self, num_partitions, offset_list):
q = self.query
if self._use_partition:
drop = "ALTER TABLE %s DROP PARTITION" + \
','.join(' p%u' % i for i in offset_list)
for table in 'trans', 'obj', 'obj_short':
try:
self.conn.query(drop % table)
except OperationalError, (code, _):
if code != 1508: # already dropped
raise
return
e = self.escape
offset_list = ', '.join((str(i) for i in offset_list))
self.begin()
try:
# XXX: these queries are inefficient (execution time increase with
# row count, although we use indexes) when there are rows to
# delete. It should be done as an idle task, by chunks.
self.objQuery('DELETE FROM %%(table)s WHERE partition IN (%s)' %
(offset_list, ))
q("""DELETE FROM trans WHERE partition IN (%s)""" %
(offset_list, ))
except:
self.rollback()
raise
self.commit()
def dropUnfinishedData(self):
q = self.query
self.begin()
try:
q("""TRUNCATE tobj""")
q("""TRUNCATE ttrans""")
except:
self.rollback()
raise
self.commit()
def storeTransaction(self, tid, object_list, transaction, temporary = True):
q = self.query
e = self.escape
u64 = util.u64
tid = u64(tid)
if temporary:
obj_table = 'tobj'
trans_table = 'ttrans'
else:
obj_table = 'obj'
trans_table = 'trans'
self.begin()
try:
for oid, compression, checksum, data, value_serial in object_list:
oid = u64(oid)
if data is None:
compression = checksum = data = 'NULL'
else:
# TODO: unit-test this raise
if value_serial is not None:
raise ValueError, 'Either data or value_serial ' \
'must be None (oid %d, tid %d)' % (oid, tid)
compression = '%d' % (compression, )
checksum = '%d' % (checksum, )
data = "'%s'" % (e(data), )
if value_serial is None:
value_serial = 'NULL'
else:
value_serial = '%d' % (u64(value_serial), )
partition = self._getPartition(oid)
q("""REPLACE INTO %s VALUES (%d, %d, %d, %s, %s, %s, %s)""" \
% (obj_table, partition, oid, tid, compression, checksum,
data, value_serial))
if obj_table == 'obj':
# Update obj_short too
q('REPLACE INTO obj_short VALUES (%d, %d, %d)' % (
partition, oid, tid))
if transaction is not None:
oid_list, user, desc, ext, packed = transaction
packed = packed and 1 or 0
oids = e(''.join(oid_list))
user = e(user)
desc = e(desc)
ext = e(ext)
partition = self._getPartition(tid)
q("REPLACE INTO %s VALUES (%d, %d, %i, '%s', '%s', '%s', '%s')"
% (trans_table, partition, tid, packed, oids, user, desc,
ext))
except:
self.rollback()
raise
self.commit()
def _getDataTIDFromData(self, oid, result):
tid, next_serial, compression, checksum, data, value_serial = result
if data is None:
try:
data_serial = self._getObjectData(oid, value_serial, tid)[0]
except CreationUndone:
data_serial = None
else:
data_serial = tid
return tid, data_serial
def _getDataTID(self, oid, tid=None, before_tid=None):
result = self._getObject(oid, tid=tid, before_tid=before_tid)
if result is None:
result = (None, None)
else:
result = self._getDataTIDFromData(oid, result)
return result
def finishTransaction(self, tid):
q = self.query
tid = util.u64(tid)
self.begin()
try:
q("""INSERT INTO obj SELECT * FROM tobj WHERE tobj.serial = %d""" \
% tid)
q('INSERT INTO obj_short SELECT partition, oid, serial FROM tobj'
' WHERE tobj.serial = %d' % (tid, ))
q("""DELETE FROM tobj WHERE serial = %d""" % tid)
q("""INSERT INTO trans SELECT * FROM ttrans WHERE ttrans.tid = %d"""
% tid)
q("""DELETE FROM ttrans WHERE tid = %d""" % tid)
except:
self.rollback()
raise
self.commit()
def deleteTransaction(self, tid, oid_list=()):
q = self.query
objQuery = self.objQuery
u64 = util.u64
tid = u64(tid)
getPartition = self._getPartition
self.begin()
try:
q("""DELETE FROM tobj WHERE serial = %d""" % tid)
q("""DELETE FROM ttrans WHERE tid = %d""" % tid)
q("""DELETE FROM trans WHERE partition = %d AND tid = %d""" %
(getPartition(tid), tid))
# delete from obj using indexes
for oid in oid_list:
oid = u64(oid)
partition = getPartition(oid)
objQuery('DELETE FROM %%(table)s WHERE '
'partition=%(partition)d '
'AND oid = %(oid)d AND serial = %(serial)d' % {
'partition': partition,
'oid': oid,
'serial': tid,
})
except:
self.rollback()
raise
self.commit()
def deleteTransactionsAbove(self, num_partitions, partition, tid, max_tid):
self.begin()
try:
self.query('DELETE FROM trans WHERE partition=%(partition)d AND '
'%(tid)d <= tid AND tid <= %(max_tid)d' % {
'partition': partition,
'tid': util.u64(tid),
'max_tid': util.u64(max_tid),
})
except:
self.rollback()
raise
self.commit()
def deleteObject(self, oid, serial=None):
u64 = util.u64
oid = u64(oid)
query_param_dict = {
'partition': self._getPartition(oid),
'oid': oid,
}
query_fmt = 'DELETE FROM %%(table)s WHERE ' \
'partition = %(partition)d AND oid = %(oid)d'
if serial is not None:
query_param_dict['serial'] = u64(serial)
query_fmt = query_fmt + ' AND serial = %(serial)d'
self.begin()
try:
self.objQuery(query_fmt % query_param_dict)
except:
self.rollback()
raise
self.commit()
def deleteObjectsAbove(self, num_partitions, partition, oid, serial,
max_tid):
u64 = util.u64
self.begin()
try:
self.objQuery('DELETE FROM %%(table)s WHERE partition=%(partition)d'
' AND serial <= %(max_tid)d AND ('
'oid > %(oid)d OR (oid = %(oid)d AND serial >= %(serial)d))' % {
'partition': partition,
'max_tid': u64(max_tid),
'oid': u64(oid),
'serial': u64(serial),
})
except:
self.rollback()
raise
self.commit()
def getTransaction(self, tid, all = False):
q = self.query
tid = util.u64(tid)
self.begin()
r = q("""SELECT oids, user, description, ext, packed FROM trans
WHERE partition = %d AND tid = %d""" \
% (self._getPartition(tid), tid))
if not r and all:
r = q("""SELECT oids, user, description, ext, packed FROM ttrans
WHERE tid = %d""" \
% tid)
self.commit()
if r:
oids, user, desc, ext, packed = r[0]
oid_list = splitOIDField(tid, oids)
return oid_list, user, desc, ext, bool(packed)
return None
def _getObjectLength(self, oid, value_serial):
if value_serial is None:
raise CreationUndone
r = self.query("""SELECT LENGTH(value), value_serial FROM obj """ \
"""WHERE partition = %d AND oid = %d AND serial = %d""" %
(self._getPartition(oid), oid, value_serial))
length, value_serial = r[0]
if length is None:
neo.lib.logging.info("Multiple levels of indirection when " \
"searching for object data for oid %d at tid %d. This " \
"causes suboptimal performance." % (oid, value_serial))
length = self._getObjectLength(oid, value_serial)
return length
def getObjectHistory(self, oid, offset = 0, length = 1):
# FIXME: This method doesn't take the client's current transaction id as
# parameter, which means it can return transactions in the future of the
# client's transaction.
q = self.query
oid = util.u64(oid)
p64 = util.p64
pack_tid = self._getPackTID()
r = q("""SELECT serial, LENGTH(value), value_serial FROM obj
WHERE partition = %d AND oid = %d AND serial >= %d
ORDER BY serial DESC LIMIT %d, %d""" \
% (self._getPartition(oid), oid, pack_tid, offset, length))
if r:
result = []
append = result.append
for serial, length, value_serial in r:
if length is None:
try:
length = self._getObjectLength(oid, value_serial)
except CreationUndone:
length = 0
append((p64(serial), length))
return result
return None
def getObjectHistoryFrom(self, min_oid, min_serial, max_serial, length,
num_partitions, partition):
q = self.query
u64 = util.u64
p64 = util.p64
min_oid = u64(min_oid)
min_serial = u64(min_serial)
max_serial = u64(max_serial)
r = q('SELECT oid, serial FROM obj_short '
'WHERE partition = %(partition)s '
'AND serial <= %(max_serial)d '
'AND ((oid = %(min_oid)d AND serial >= %(min_serial)d) '
'OR oid > %(min_oid)d) '
'ORDER BY oid ASC, serial ASC LIMIT %(length)d' % {
'min_oid': min_oid,
'min_serial': min_serial,
'max_serial': max_serial,
'length': length,
'partition': partition,
})
result = {}
for oid, serial in r:
try:
serial_list = result[oid]
except KeyError:
serial_list = result[oid] = []
serial_list.append(p64(serial))
return dict((p64(x), y) for x, y in result.iteritems())
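getObjectHistoryFrom groups the (oid, serial) rows it fetched into per-oid serial lists. The grouping idiom can be sketched standalone; `setdefault` is an equivalent of the try/except KeyError pattern used above (the function name here is illustrative, not from the codebase):

```python
def group_serials(rows):
    # rows: iterable of (oid, serial) pairs, ordered by oid then serial.
    # Returns {oid: [serial, ...]} preserving the input order per oid.
    result = {}
    for oid, serial in rows:
        result.setdefault(oid, []).append(serial)
    return result

grouped = group_serials([(1, 10), (1, 11), (2, 5)])
assert grouped == {1: [10, 11], 2: [5]}
```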
def getTIDList(self, offset, length, num_partitions, partition_list):
q = self.query
r = q("""SELECT tid FROM trans WHERE partition in (%s)
ORDER BY tid DESC LIMIT %d,%d""" \
% (','.join([str(p) for p in partition_list]), offset, length))
return [util.p64(t[0]) for t in r]
def getReplicationTIDList(self, min_tid, max_tid, length, num_partitions,
partition):
q = self.query
u64 = util.u64
p64 = util.p64
min_tid = u64(min_tid)
max_tid = u64(max_tid)
r = q("""SELECT tid FROM trans
WHERE partition = %(partition)d
AND tid >= %(min_tid)d AND tid <= %(max_tid)d
ORDER BY tid ASC LIMIT %(length)d""" % {
'partition': partition,
'min_tid': min_tid,
'max_tid': max_tid,
'length': length,
})
return [p64(t[0]) for t in r]
def _updatePackFuture(self, oid, orig_serial, max_serial,
updateObjectDataForPack):
q = self.query
p64 = util.p64
getPartition = self._getPartition
# Before deleting this object's revision, see if there is any
# transaction referencing its value at max_serial or above.
# If there is, copy value to the first future transaction. Any further
# reference is just updated to point to the new data location.
value_serial = None
for table in ('obj', 'tobj'):
for (serial, ) in q('SELECT serial FROM %(table)s WHERE '
'partition = %(partition)d AND oid = %(oid)d '
'AND serial >= %(max_serial)d AND '
'value_serial = %(orig_serial)d ORDER BY serial ASC' % {
'table': table,
'partition': getPartition(oid),
'oid': oid,
'orig_serial': orig_serial,
'max_serial': max_serial,
}):
if value_serial is None:
# First found, copy data to it and mark its serial for
# future reference.
value_serial = serial
q('REPLACE INTO %(table)s (partition, oid, serial, compression, '
'checksum, value, value_serial) SELECT partition, oid, '
'%(serial)d, compression, checksum, value, NULL FROM '
'obj WHERE partition = %(partition)d AND oid = %(oid)d '
'AND serial = %(orig_serial)d' \
% {
'table': table,
'partition': getPartition(oid),
'oid': oid,
'serial': serial,
'orig_serial': orig_serial,
})
else:
q('REPLACE INTO %(table)s (partition, oid, serial, value_serial) '
'VALUES (%(partition)d, %(oid)d, %(serial)d, '
'%(value_serial)d)' % {
'table': table,
'partition': getPartition(oid),
'oid': oid,
'serial': serial,
'value_serial': value_serial,
})
def getObjectData():
assert value_serial is None
return q('SELECT compression, checksum, value FROM obj WHERE '
'partition = %(partition)d AND oid = %(oid)d '
'AND serial = %(orig_serial)d' % {
'partition': getPartition(oid),
'oid': oid,
'orig_serial': orig_serial,
})[0]
if value_serial:
value_serial = p64(value_serial)
updateObjectDataForPack(p64(oid), p64(orig_serial), value_serial,
getObjectData)
def pack(self, tid, updateObjectDataForPack):
# TODO: unit test (along with updatePackFuture)
q = self.query
objQuery = self.objQuery
tid = util.u64(tid)
updatePackFuture = self._updatePackFuture
getPartition = self._getPartition
self.begin()
try:
self._setPackTID(tid)
for count, oid, max_serial in q('SELECT COUNT(*) - 1, oid, '
'MAX(serial) FROM obj_short WHERE serial <= %(tid)d '
'GROUP BY oid' % {'tid': tid}):
if q('SELECT LENGTH(value) FROM obj WHERE partition ='
'%(partition)s AND oid = %(oid)d AND '
'serial = %(max_serial)d' % {
'oid': oid,
'partition': getPartition(oid),
'max_serial': max_serial,
})[0][0] == 0:
count += 1
max_serial += 1
if count:
# There are things to delete for this object
for (serial, ) in q('SELECT serial FROM obj_short WHERE '
'partition=%(partition)d AND oid=%(oid)d AND '
'serial < %(max_serial)d' % {
'oid': oid,
'partition': getPartition(oid),
'max_serial': max_serial,
}):
updatePackFuture(oid, serial, max_serial,
updateObjectDataForPack)
objQuery('DELETE FROM %%(table)s WHERE '
'partition=%(partition)d '
'AND oid=%(oid)d AND serial=%(serial)d' % {
'partition': getPartition(oid),
'oid': oid,
'serial': serial
})
except:
self.rollback()
raise
self.commit()
def checkTIDRange(self, min_tid, max_tid, length, num_partitions, partition):
count, tid_checksum, max_tid = self.query(
"""SELECT COUNT(*), MD5(GROUP_CONCAT(tid SEPARATOR ",")), MAX(tid)
FROM (SELECT tid FROM trans
WHERE partition = %(partition)s
AND tid >= %(min_tid)d
AND tid <= %(max_tid)d
ORDER BY tid ASC LIMIT %(length)d) AS t""" % {
'partition': partition,
'min_tid': util.u64(min_tid),
'max_tid': util.u64(max_tid),
'length': length,
})[0]
if count == 0:
max_tid = ZERO_TID
else:
tid_checksum = a2b_hex(tid_checksum)
max_tid = util.p64(max_tid)
return count, tid_checksum, max_tid
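checkTIDRange asks MySQL to compute MD5(GROUP_CONCAT(tid SEPARATOR ",")) so that both nodes can compare a whole chunk of transactions with a single hash. The equivalent computation in plain Python, useful for understanding what the SQL returns (a sketch; GROUP_CONCAT joins the decimal representations of the TIDs with commas):

```python
from hashlib import md5

def tid_checksum(tid_list):
    # Mirror MySQL's MD5(GROUP_CONCAT(tid SEPARATOR ",")):
    # join decimal TIDs with commas, then hash the result.
    return md5(','.join(str(t) for t in tid_list).encode()).digest()

assert tid_checksum([1, 2, 3]) == md5(b'1,2,3').digest()
```

Two nodes holding the same ordered TID list therefore produce identical digests, and a single mismatching byte is enough to flag the chunk for closer inspection.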
def checkSerialRange(self, min_oid, min_serial, max_tid, length,
num_partitions, partition):
u64 = util.u64
# We don't ask MySQL to compute everything (like in checkTIDRange)
# because it's difficult to get the last serial _for the last oid_.
# We would need a function (which could be named 'LAST') that returns
# the last grouped value, instead of the greatest one.
r = self.query(
"""SELECT oid, serial
FROM obj_short
WHERE partition = %(partition)s
AND serial <= %(max_tid)d
AND (oid > %(min_oid)d OR
oid = %(min_oid)d AND serial >= %(min_serial)d)
ORDER BY oid ASC, serial ASC LIMIT %(length)d""" % {
'min_oid': u64(min_oid),
'min_serial': u64(min_serial),
'max_tid': u64(max_tid),
'length': length,
'partition': partition,
})
if r:
p64 = util.p64
return (len(r),
md5(','.join(str(x[0]) for x in r)).digest(),
p64(r[-1][0]),
md5(','.join(str(x[1]) for x in r)).digest(),
p64(r[-1][1]))
return 0, None, ZERO_OID, None, ZERO_TID
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/exception.py 0000664 0000000 0000000 00000001426 11634614701 0025207 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
class AlreadyPendingError(Exception):
pass
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/ 0000775 0000000 0000000 00000000000 11634614701 0024434 5 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/__init__.py 0000664 0000000 0000000 00000010005 11634614701 0026541 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.lib.handler import EventHandler
from neo.lib import protocol
from neo.lib.util import dump
from neo.lib.exception import PrimaryFailure, OperationFailure
from neo.lib.protocol import NodeStates, NodeTypes, Packets, Errors
class BaseMasterHandler(EventHandler):
def connectionLost(self, conn, new_state):
if self.app.listening_conn: # if running
raise PrimaryFailure('connection lost')
def stopOperation(self, conn):
raise OperationFailure('operation stopped')
def reelectPrimary(self, conn):
raise PrimaryFailure('re-election occurs')
def notifyClusterInformation(self, conn, state):
neo.lib.logging.warning('ignoring notify cluster information in %s' %
self.__class__.__name__)
def notifyLastOID(self, conn, oid):
self.app.dm.setLastOID(oid)
def notifyNodeInformation(self, conn, node_list):
"""Store information on nodes, only if this is sent by a primary
master node."""
self.app.nm.update(node_list)
for node_type, addr, uuid, state in node_list:
if uuid == self.app.uuid:
# This is me, do what the master tells me
neo.lib.logging.info("I was told I'm %s" % (state, ))
if state in (NodeStates.DOWN, NodeStates.TEMPORARILY_DOWN,
NodeStates.BROKEN):
erase = state == NodeStates.DOWN
self.app.shutdown(erase=erase)
elif state == NodeStates.HIDDEN:
raise OperationFailure
elif node_type == NodeTypes.CLIENT and state != NodeStates.RUNNING:
neo.lib.logging.info(
'Notified of non-running client, abort (%r)',
dump(uuid))
self.app.tm.abortFor(uuid)
class BaseClientAndStorageOperationHandler(EventHandler):
""" Accept requests common to client and storage nodes """
def askTransactionInformation(self, conn, tid):
app = self.app
t = app.dm.getTransaction(tid)
if t is None:
p = Errors.TidNotFound('%s does not exist' % dump(tid))
else:
p = Packets.AnswerTransactionInformation(tid, t[1], t[2], t[3],
t[4], t[0])
conn.answer(p)
def _askObject(self, oid, serial, tid):
raise NotImplementedError
def askObject(self, conn, oid, serial, tid):
app = self.app
if self.app.tm.loadLocked(oid):
# Delay the response.
app.queueEvent(self.askObject, conn, (oid, serial, tid))
return
o = self._askObject(oid, serial, tid)
if o is None:
neo.lib.logging.debug('oid = %s does not exist', dump(oid))
p = Errors.OidDoesNotExist(dump(oid))
elif o is False:
neo.lib.logging.debug('oid = %s not found', dump(oid))
p = Errors.OidNotFound(dump(oid))
else:
serial, next_serial, compression, checksum, data, data_serial = o
neo.lib.logging.debug('oid = %s, serial = %s, next_serial = %s',
dump(oid), dump(serial), dump(next_serial))
p = Packets.AnswerObject(oid, serial, next_serial,
compression, checksum, data, data_serial)
conn.answer(p)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/client.py 0000664 0000000 0000000 00000020611 11634614701 0026264 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib import protocol
from neo.lib.util import dump
from neo.lib.protocol import Packets, LockState, Errors
from neo.storage.handlers import BaseClientAndStorageOperationHandler
from neo.storage.transactions import ConflictError, DelayedError
from neo.storage.exception import AlreadyPendingError
import time
# Log stores taking (incl. lock delays) more than this many seconds.
# Set to None to disable.
SLOW_STORE = 2
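The SLOW_STORE threshold is applied after each store completes: the handler logs only when the elapsed time (including lock delays) exceeds the budget. The pattern in isolation (maybe_log_slow and its injectable clock are illustrative names, not part of the codebase):

```python
import time

SLOW_STORE = 2  # seconds; set to None to disable slow-store logging

def maybe_log_slow(request_time, log, now=time.time):
    # Compute the elapsed time since the request was queued and log it
    # only when the check is enabled and the threshold is exceeded.
    duration = now() - request_time
    if SLOW_STORE is not None and duration > SLOW_STORE:
        log('StoreObject delay: %.02fs' % duration)
    return duration

messages = []
maybe_log_slow(0, messages.append, now=lambda: 5)   # slow: logged
maybe_log_slow(0, messages.append, now=lambda: 1)   # fast: not logged
assert messages == ['StoreObject delay: 5.00s']
```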
class ClientOperationHandler(BaseClientAndStorageOperationHandler):
def _askObject(self, oid, serial, ttid):
return self.app.dm.getObject(oid, serial, ttid)
def connectionLost(self, conn, new_state):
uuid = conn.getUUID()
node = self.app.nm.getByUUID(uuid)
if self.app.listening_conn: # if running
assert node is not None, conn
self.app.nm.remove(node)
def abortTransaction(self, conn, ttid):
self.app.tm.abort(ttid)
def askStoreTransaction(self, conn, ttid, user, desc, ext, oid_list):
self.app.tm.register(conn.getUUID(), ttid)
self.app.tm.storeTransaction(ttid, oid_list, user, desc, ext, False)
conn.answer(Packets.AnswerStoreTransaction(ttid))
def _askStoreObject(self, conn, oid, serial, compression, checksum, data,
data_serial, ttid, unlock, request_time):
if ttid not in self.app.tm:
# transaction was aborted, cancel this event
neo.lib.logging.info('Forget store of %s:%s by %s delayed by %s',
dump(oid), dump(serial), dump(ttid),
dump(self.app.tm.getLockingTID(oid)))
# send an answer as the client side is waiting for it
conn.answer(Packets.AnswerStoreObject(0, oid, serial))
return
try:
self.app.tm.storeObject(ttid, serial, oid, compression,
checksum, data, data_serial, unlock)
except ConflictError, err:
# resolvable or not
ttid_or_serial = err.getTID()
conn.answer(Packets.AnswerStoreObject(1, oid, ttid_or_serial))
except DelayedError:
# locked by a previous transaction, retry later
# If we are unlocking, we want queueEvent to raise
# AlreadyPendingError, to avoid making the client wait for an unneeded
# response.
try:
self.app.queueEvent(self._askStoreObject, conn, (oid, serial,
compression, checksum, data, data_serial, ttid,
unlock, request_time), key=(oid, ttid),
raise_on_duplicate=unlock)
except AlreadyPendingError:
conn.answer(Errors.AlreadyPending(dump(oid)))
else:
if SLOW_STORE is not None:
duration = time.time() - request_time
if duration > SLOW_STORE:
neo.lib.logging.info('StoreObject delay: %.02fs', duration)
conn.answer(Packets.AnswerStoreObject(0, oid, serial))
def askStoreObject(self, conn, oid, serial,
compression, checksum, data, data_serial, ttid, unlock):
# register the transaction
self.app.tm.register(conn.getUUID(), ttid)
if data_serial is not None:
assert data == '', repr(data)
# Change data to None here, to do it only once, even if store gets
# delayed.
data = None
self._askStoreObject(conn, oid, serial, compression, checksum, data,
data_serial, ttid, unlock, time.time())
def askTIDsFrom(self, conn, min_tid, max_tid, length, partition_list):
app = self.app
getReplicationTIDList = app.dm.getReplicationTIDList
partitions = app.pt.getPartitions()
tid_list = []
extend = tid_list.extend
for partition in partition_list:
extend(getReplicationTIDList(min_tid, max_tid, length,
partitions, partition))
conn.answer(Packets.AnswerTIDsFrom(tid_list))
def askTIDs(self, conn, first, last, partition):
# This method is complicated, because I must return TIDs only
# about usable partitions assigned to me.
if first >= last:
raise protocol.ProtocolError('invalid offsets')
app = self.app
if partition == protocol.INVALID_PARTITION:
partition_list = app.pt.getAssignedPartitionList(app.uuid)
else:
partition_list = [partition]
tid_list = app.dm.getTIDList(first, last - first,
app.pt.getPartitions(), partition_list)
conn.answer(Packets.AnswerTIDs(tid_list))
def askObjectUndoSerial(self, conn, ttid, ltid, undone_tid, oid_list):
app = self.app
findUndoTID = app.dm.findUndoTID
getObjectFromTransaction = app.tm.getObjectFromTransaction
object_tid_dict = {}
for oid in oid_list:
current_serial, undo_serial, is_current = findUndoTID(oid, ttid,
ltid, undone_tid, getObjectFromTransaction(ttid, oid))
if current_serial is None:
p = Errors.OidNotFound(dump(oid))
break
object_tid_dict[oid] = (current_serial, undo_serial, is_current)
else:
p = Packets.AnswerObjectUndoSerial(object_tid_dict)
conn.answer(p)
def askHasLock(self, conn, ttid, oid):
locking_tid = self.app.tm.getLockingTID(oid)
neo.lib.logging.info('%r check lock of %r:%r', conn,
dump(ttid), dump(oid))
if locking_tid is None:
state = LockState.NOT_LOCKED
elif locking_tid is ttid:
state = LockState.GRANTED
else:
state = LockState.GRANTED_TO_OTHER
conn.answer(Packets.AnswerHasLock(oid, state))
def askObjectHistory(self, conn, oid, first, last):
if first >= last:
raise protocol.ProtocolError('invalid offsets')
app = self.app
history_list = app.dm.getObjectHistory(oid, first, last - first)
if history_list is None:
p = Errors.OidNotFound(dump(oid))
else:
p = Packets.AnswerObjectHistory(oid, history_list)
conn.answer(p)
def askCheckCurrentSerial(self, conn, ttid, serial, oid):
self._askCheckCurrentSerial(conn, ttid, serial, oid, time.time())
def _askCheckCurrentSerial(self, conn, ttid, serial, oid, request_time):
if ttid not in self.app.tm:
# transaction was aborted, cancel this event
neo.lib.logging.info(
'Forget serial check of %s:%s by %s delayed by '
'%s', dump(oid), dump(serial), dump(ttid),
dump(self.app.tm.getLockingTID(oid)))
# send an answer as the client side is waiting for it
conn.answer(Packets.AnswerStoreObject(0, oid, serial))
return
try:
self.app.tm.checkCurrentSerial(ttid, serial, oid)
except ConflictError, err:
# resolvable or not
conn.answer(Packets.AnswerCheckCurrentSerial(1, oid,
err.getTID()))
except DelayedError:
# locked by a previous transaction, retry later
try:
self.app.queueEvent(self._askCheckCurrentSerial, conn, (ttid,
serial, oid, request_time), key=(oid, ttid))
except AlreadyPendingError:
conn.answer(Errors.AlreadyPending(dump(oid)))
else:
if SLOW_STORE is not None:
duration = time.time() - request_time
if duration > SLOW_STORE:
neo.lib.logging.info('CheckCurrentSerial delay: %.02fs',
duration)
conn.answer(Packets.AnswerCheckCurrentSerial(0, oid, serial))
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/hidden.py 0000664 0000000 0000000 00000003746 11634614701 0026253 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.storage.handlers import BaseMasterHandler
from neo.lib.protocol import CellStates
class HiddenHandler(BaseMasterHandler):
"""This class implements a generic part of the event handlers."""
def notifyPartitionChanges(self, conn, ptid, cell_list):
"""This is very similar to Send Partition Table, except that
the information is only about changes from the previous one."""
app = self.app
if ptid <= app.pt.getID():
# Ignore this packet.
neo.lib.logging.debug('ignoring older partition changes')
return
# update partition table in memory and the database
app.pt.update(ptid, cell_list, app.nm)
app.dm.changePartitionTable(ptid, cell_list)
# Check changes for replications
for offset, uuid, state in cell_list:
if uuid == app.uuid and app.replicator is not None:
# If this is for myself, this can affect replications.
if state == CellStates.DISCARDED:
app.replicator.removePartition(offset)
elif state == CellStates.OUT_OF_DATE:
app.replicator.addPartition(offset)
def startOperation(self, conn):
self.app.operational = True
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/identification.py 0000664 0000000 0000000 00000005777 11634614701 0030017 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.handler import EventHandler
from neo.lib.protocol import NodeTypes, Packets, NotReadyError
from neo.lib.protocol import ProtocolError, BrokenNodeDisallowedError
from neo.lib.util import dump
class IdentificationHandler(EventHandler):
""" Handler used for incoming connections during operation state """
def connectionLost(self, conn, new_state):
neo.lib.logging.warning('A connection was lost during identification')
def requestIdentification(self, conn, node_type,
uuid, address, name):
self.checkClusterName(name)
# reject any incoming connections if not ready
if not self.app.ready:
raise NotReadyError
app = self.app
node = app.nm.getByUUID(uuid)
# If this node is broken, reject it.
if node is not None and node.isBroken():
raise BrokenNodeDisallowedError
# choose the handler according to the node type
if node_type == NodeTypes.CLIENT:
from neo.storage.handlers.client import ClientOperationHandler
handler = ClientOperationHandler
if node is None:
node = app.nm.createClient()
elif node.isConnected():
# cut previous connection
node.getConnection().close()
assert not node.isConnected()
node.setRunning()
elif node_type == NodeTypes.STORAGE:
from neo.storage.handlers.storage import StorageOperationHandler
handler = StorageOperationHandler
if node is None:
neo.lib.logging.error('reject an unknown storage node %s',
dump(uuid))
raise NotReadyError
else:
raise ProtocolError('reject non-client-or-storage node')
# apply the handler and set up the connection
handler = handler(self.app)
conn.setUUID(uuid)
conn.setHandler(handler)
node.setUUID(uuid)
node.setConnection(conn)
args = (NodeTypes.STORAGE, app.uuid, app.pt.getPartitions(),
app.pt.getReplicas(), uuid)
# accept the identification and trigger an event
conn.answer(Packets.AcceptIdentification(*args))
handler.connectionCompleted(conn)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/initialization.py 0000664 0000000 0000000 00000005732 11634614701 0030044 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.storage.handlers import BaseMasterHandler
from neo.lib import protocol
class InitializationHandler(BaseMasterHandler):
def answerNodeInformation(self, conn):
self.app.has_node_information = True
def notifyNodeInformation(self, conn, node_list):
# the whole node list is received here
BaseMasterHandler.notifyNodeInformation(self, conn, node_list)
def answerPartitionTable(self, conn, ptid, row_list):
app = self.app
pt = app.pt
pt.load(ptid, row_list, self.app.nm)
if not pt.filled():
raise protocol.ProtocolError('Partial partition table received')
neo.lib.logging.debug('Got the partition table:')
self.app.pt.log()
# Install the partition table into the database for persistency.
cell_list = []
num_partitions = app.pt.getPartitions()
unassigned_set = set(xrange(num_partitions))
for offset in xrange(num_partitions):
for cell in pt.getCellList(offset):
cell_list.append((offset, cell.getUUID(), cell.getState()))
if cell.getUUID() == app.uuid:
unassigned_set.remove(offset)
# delete objects database
if unassigned_set:
neo.lib.logging.debug(
'drop data for partitions %r' % unassigned_set)
app.dm.dropPartitions(num_partitions, unassigned_set)
app.dm.setPartitionTable(ptid, cell_list)
self.app.has_partition_table = True
def answerLastIDs(self, conn, loid, ltid, lptid):
self.app.dm.setLastOID(loid)
self.app.has_last_ids = True
def notifyPartitionChanges(self, conn, ptid, cell_list):
# XXX: It is safe to ignore these notifications because all of the
# following apply:
# - we first ask for node information, and *then* partition
# table content, so it is possible to get notifyPartitionChanges
# packets in between (or even before asking for node information).
# - this handler will be changed after receiving answerPartitionTable
# and before handling the next packet
neo.lib.logging.debug('ignoring notifyPartitionChanges during '\
'initialization')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/master.py 0000664 0000000 0000000 00000006221 11634614701 0026302 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from neo.lib.util import dump
from neo.lib.protocol import CellStates, Packets, ProtocolError
from neo.storage.handlers import BaseMasterHandler
class MasterOperationHandler(BaseMasterHandler):
""" This handler is used for the primary master """
def answerLastIDs(self, conn, loid, ltid, lptid):
self.app.replicator.setCriticalTID(ltid)
def answerUnfinishedTransactions(self, conn, max_tid, ttid_list):
self.app.replicator.setUnfinishedTIDList(max_tid, ttid_list)
def notifyTransactionFinished(self, conn, ttid, max_tid):
self.app.replicator.transactionFinished(ttid, max_tid)
def notifyPartitionChanges(self, conn, ptid, cell_list):
"""This is very similar to Send Partition Table, except that
the information is only about changes from the previous one."""
app = self.app
if ptid <= app.pt.getID():
# Ignore this packet.
neo.lib.logging.debug('ignoring older partition changes')
return
# update partition table in memory and the database
app.pt.update(ptid, cell_list, app.nm)
app.dm.changePartitionTable(ptid, cell_list)
# Check changes for replications
if app.replicator is not None:
for offset, uuid, state in cell_list:
if uuid == app.uuid:
# If this is for myself, this can affect replications.
if state == CellStates.DISCARDED:
app.replicator.removePartition(offset)
elif state == CellStates.OUT_OF_DATE:
app.replicator.addPartition(offset)
def askLockInformation(self, conn, ttid, tid, oid_list):
if not ttid in self.app.tm:
raise ProtocolError('Unknown transaction')
self.app.tm.lock(ttid, tid, oid_list)
if not conn.isClosed():
conn.answer(Packets.AnswerInformationLocked(ttid))
def notifyUnlockInformation(self, conn, ttid):
if not ttid in self.app.tm:
raise ProtocolError('Unknown transaction')
# TODO: send an answer
self.app.tm.unlock(ttid)
def askPack(self, conn, tid):
app = self.app
neo.lib.logging.info('Pack started, up to %s...', dump(tid))
app.dm.pack(tid, app.tm.updateObjectDataForPack)
neo.lib.logging.info('Pack finished.')
if not conn.isClosed():
conn.answer(Packets.AnswerPack(True))
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/replication.py 0000664 0000000 0000000 00000035706 11634614701 0027332 0 ustar 00root root 0000000 0000000
#
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from functools import wraps
import neo.lib
from neo.lib.handler import EventHandler
from neo.lib.protocol import Packets, ZERO_TID, ZERO_OID
from neo.lib.util import add64, u64
# TODO: benchmark how different values behave
RANGE_LENGTH = 4000
MIN_RANGE_LENGTH = 1000
CHECK_CHUNK = 0
CHECK_REPLICATE = 1
CHECK_DONE = 2
"""
Replication algorithm
Purpose: replicate the content of a reference node into a replicating node,
bringing it up to date.
This happens both when a new storage node is added to an existing cluster and
when a node was separated from the cluster and rejoins it.
Replication happens per partition. Reference node can change between
partitions.
2 parts, done sequentially:
- Transaction (metadata) replication
- Object (data) replication
Both parts follow the same mechanism:
- On both sides (replicating and reference), compute a checksum of a chunk
  (RANGE_LENGTH entries). On a mismatch, the chunk size is reduced and the
  scan restarts from the same row, until it reaches a minimal length
  (MIN_RANGE_LENGTH); then all rows in that chunk are replicated. If the
  chunk contents match, it moves on to the next chunk.
- Replicating a chunk starts with asking for a list of all entries (only
  their identifiers), skipping those both sides have, deleting those which
  the replicating node has and the reference doesn't, and asking
  individually for all entries missing on the replicating node.
"""
# TODO: Make object replication get ordered by serial first and oid second, so
# changes are in a big segment at the end, rather than in many segments (one
# per object).
# TODO: To improve performance when a pack happened, the following algorithm
# should be used:
# - If the reference node packed, find non-existent oids in the reference node (their
# creation was undone, and pack pruned them), and delete them.
# - Run current algorithm, starting at our last pack TID.
# - Pack partition at reference's TID.
def checkConnectionIsReplicatorConnection(func):
def decorator(self, conn, *args, **kw):
if self.app.replicator.isCurrentConnection(conn):
return func(self, conn, *args, **kw)
# Should probably raise & close connection...
return wraps(func)(decorator)
class ReplicationHandler(EventHandler):
"""This class handles events for replications."""
def connectionLost(self, conn, new_state):
replicator = self.app.replicator
if replicator.isCurrentConnection(conn):
if replicator.pending():
neo.lib.logging.warning(
'replication is stopped due to a connection lost')
replicator.storageLost()
def connectionFailed(self, conn):
neo.lib.logging.warning(
'replication is stopped due to connection failure')
self.app.replicator.storageLost()
def acceptIdentification(self, conn, node_type,
uuid, num_partitions, num_replicas, your_uuid):
# set the UUID on the connection
conn.setUUID(uuid)
self.startReplication(conn)
def startReplication(self, conn):
max_tid = self.app.replicator.getCurrentCriticalTID()
conn.ask(self._doAskCheckTIDRange(ZERO_TID, max_tid), timeout=300)
@checkConnectionIsReplicatorConnection
def answerTIDsFrom(self, conn, tid_list):
assert tid_list
app = self.app
ask = conn.ask
# If I have pending TIDs, check which TIDs I don't have, and
# request the data.
tid_set = frozenset(tid_list)
my_tid_set = frozenset(app.replicator.getTIDsFromResult())
extra_tid_set = my_tid_set - tid_set
if extra_tid_set:
deleteTransaction = app.dm.deleteTransaction
for tid in extra_tid_set:
deleteTransaction(tid)
missing_tid_set = tid_set - my_tid_set
for tid in missing_tid_set:
ask(Packets.AskTransactionInformation(tid), timeout=300)
if len(tid_list) == MIN_RANGE_LENGTH:
# If we received fewer, we knew it before sending AskTIDsFrom, and
# we should have finished TID replication at that time.
max_tid = self.app.replicator.getCurrentCriticalTID()
ask(self._doAskCheckTIDRange(add64(tid_list[-1], 1), max_tid,
RANGE_LENGTH))
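The delete-extra/fetch-missing step in `answerTIDsFrom` above reduces to two set differences; a minimal illustration with hypothetical TID values:

```python
# TIDs the reference node reported for this chunk vs. TIDs held locally.
reference_tids = frozenset((1, 2, 3, 5))
local_tids = frozenset((2, 3, 4))

extra = local_tids - reference_tids    # held locally, absent on reference: delete
missing = reference_tids - local_tids  # absent locally: ask for them one by one

assert extra == {4}
assert missing == {1, 5}
```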
@checkConnectionIsReplicatorConnection
def answerTransactionInformation(self, conn, tid,
user, desc, ext, packed, oid_list):
app = self.app
# Directly store the transaction.
app.dm.storeTransaction(tid, (), (oid_list, user, desc, ext, packed),
False)
@checkConnectionIsReplicatorConnection
def answerObjectHistoryFrom(self, conn, object_dict):
assert object_dict
app = self.app
ask = conn.ask
deleteObject = app.dm.deleteObject
my_object_dict = app.replicator.getObjectHistoryFromResult()
object_set = set()
max_oid = max(object_dict.iterkeys())
max_serial = max(object_dict[max_oid])
for oid, serial_list in object_dict.iteritems():
for serial in serial_list:
object_set.add((oid, serial))
my_object_set = set()
for oid, serial_list in my_object_dict.iteritems():
filter = lambda x: True
if max_oid is not None:
if oid > max_oid:
continue
elif oid == max_oid:
filter = lambda x: x <= max_serial
for serial in serial_list:
if filter(serial):
my_object_set.add((oid, serial))
extra_object_set = my_object_set - object_set
for oid, serial in extra_object_set:
deleteObject(oid, serial)
missing_object_set = object_set - my_object_set
for oid, serial in missing_object_set:
if not app.dm.objectPresent(oid, serial):
ask(Packets.AskObject(oid, serial, None), timeout=300)
if sum((len(x) for x in object_dict.itervalues())) == MIN_RANGE_LENGTH:
max_tid = self.app.replicator.getCurrentCriticalTID()
ask(self._doAskCheckSerialRange(max_oid, add64(max_serial, 1),
max_tid, RANGE_LENGTH))
@checkConnectionIsReplicatorConnection
def answerObject(self, conn, oid, serial_start,
serial_end, compression, checksum, data, data_serial):
app = self.app
        # Directly store the object data.
obj = (oid, compression, checksum, data, data_serial)
app.dm.storeTransaction(serial_start, [obj], None, False)
del obj
del data
def _doAskCheckSerialRange(self, min_oid, min_tid, max_tid,
length=RANGE_LENGTH):
replicator = self.app.replicator
partition = replicator.getCurrentOffset()
neo.lib.logging.debug("Check serial range (offset=%s, min_oid=%x,"
" min_tid=%x, max_tid=%x, length=%s)", partition, u64(min_oid),
u64(min_tid), u64(max_tid), length)
check_args = (min_oid, min_tid, max_tid, length, partition)
replicator.checkSerialRange(*check_args)
return Packets.AskCheckSerialRange(*check_args)
def _doAskCheckTIDRange(self, min_tid, max_tid, length=RANGE_LENGTH):
replicator = self.app.replicator
partition = replicator.getCurrentOffset()
neo.lib.logging.debug(
"Check TID range (offset=%s, min_tid=%x, max_tid=%x, length=%s)",
partition, u64(min_tid), u64(max_tid), length)
replicator.checkTIDRange(min_tid, max_tid, length, partition)
return Packets.AskCheckTIDRange(min_tid, max_tid, length, partition)
def _doAskTIDsFrom(self, min_tid, length):
replicator = self.app.replicator
partition_id = replicator.getCurrentOffset()
max_tid = replicator.getCurrentCriticalTID()
replicator.getTIDsFrom(min_tid, max_tid, length, partition_id)
neo.lib.logging.debug("Ask TIDs (offset=%s, min_tid=%x, max_tid=%x,"
"length=%s)", partition_id, u64(min_tid), u64(max_tid), length)
return Packets.AskTIDsFrom(min_tid, max_tid, length, [partition_id])
def _doAskObjectHistoryFrom(self, min_oid, min_serial, length):
replicator = self.app.replicator
partition_id = replicator.getCurrentOffset()
max_serial = replicator.getCurrentCriticalTID()
replicator.getObjectHistoryFrom(min_oid, min_serial, max_serial,
length, partition_id)
return Packets.AskObjectHistoryFrom(min_oid, min_serial, max_serial,
length, partition_id)
def _checkRange(self, match, current_boundary, next_boundary, length,
count):
if count == 0:
# Reference storage has no data for this chunk, stop and truncate.
return CHECK_DONE, (current_boundary, )
if match:
# Same data on both sides
if length < RANGE_LENGTH and length == count:
                # ...and a previous check detected a difference - and we still
                # haven't reached the end. This means that we just checked the
                # first half of a chunk which, as a whole, is different. So the
                # next test must happen on the next chunk.
recheck_min_boundary = next_boundary
else:
# ...and we just checked a whole chunk, move on to the next
# one.
recheck_min_boundary = None
else:
# Something is different in current chunk
recheck_min_boundary = current_boundary
if recheck_min_boundary is None:
if count == length:
# Go on with next chunk
action = CHECK_CHUNK
params = (next_boundary, RANGE_LENGTH)
else:
# No more chunks.
action = CHECK_DONE
params = (next_boundary, )
else:
# We must recheck current chunk.
if not match and count <= MIN_RANGE_LENGTH:
# We are already at minimum chunk length, replicate.
action = CHECK_REPLICATE
params = (recheck_min_boundary, )
else:
# Check a smaller chunk.
# Note: +1, so we can detect we reached the end when answer
# comes back.
action = CHECK_CHUNK
params = (recheck_min_boundary, max(min(length / 2, count + 1),
MIN_RANGE_LENGTH))
return action, params
@checkConnectionIsReplicatorConnection
def answerCheckTIDRange(self, conn, min_tid, length, count, tid_checksum,
max_tid):
pkt_min_tid = min_tid
ask = conn.ask
app = self.app
replicator = app.replicator
next_tid = add64(max_tid, 1)
action, params = self._checkRange(
replicator.getTIDCheckResult(min_tid, length) == (
count, tid_checksum, max_tid), min_tid, next_tid, length,
count)
critical_tid = replicator.getCurrentCriticalTID()
if action == CHECK_REPLICATE:
(min_tid, ) = params
ask(self._doAskTIDsFrom(min_tid, count))
if length != count:
action = CHECK_DONE
params = (next_tid, )
if action == CHECK_CHUNK:
(min_tid, count) = params
if min_tid >= critical_tid:
# Stop if past critical TID
action = CHECK_DONE
params = (next_tid, )
else:
ask(self._doAskCheckTIDRange(min_tid, critical_tid, count))
if action == CHECK_DONE:
# Delete all transactions we might have which are beyond what peer
# knows.
(last_tid, ) = params
offset = replicator.getCurrentOffset()
neo.lib.logging.debug("TID range checked (offset=%s, min_tid=%x,"
" length=%s, count=%s, max_tid=%x, last_tid=%x,"
" critical_tid=%x)", offset, u64(pkt_min_tid), length, count,
u64(max_tid), u64(last_tid), u64(critical_tid))
app.dm.deleteTransactionsAbove(app.pt.getPartitions(),
offset, last_tid, critical_tid)
# If no more TID, a replication of transactions is finished.
# So start to replicate objects now.
ask(self._doAskCheckSerialRange(ZERO_OID, ZERO_TID, critical_tid))
@checkConnectionIsReplicatorConnection
def answerCheckSerialRange(self, conn, min_oid, min_serial, length, count,
oid_checksum, max_oid, serial_checksum, max_serial):
ask = conn.ask
app = self.app
replicator = app.replicator
next_params = (max_oid, add64(max_serial, 1))
action, params = self._checkRange(
replicator.getSerialCheckResult(min_oid, min_serial, length) == (
count, oid_checksum, max_oid, serial_checksum, max_serial),
(min_oid, min_serial), next_params, length, count)
if action == CHECK_REPLICATE:
((min_oid, min_serial), ) = params
ask(self._doAskObjectHistoryFrom(min_oid, min_serial, count))
if length != count:
action = CHECK_DONE
params = (next_params, )
if action == CHECK_CHUNK:
((min_oid, min_serial), count) = params
max_tid = replicator.getCurrentCriticalTID()
ask(self._doAskCheckSerialRange(min_oid, min_serial, max_tid, count))
if action == CHECK_DONE:
# Delete all objects we might have which are beyond what peer
# knows.
((last_oid, last_serial), ) = params
offset = replicator.getCurrentOffset()
max_tid = replicator.getCurrentCriticalTID()
neo.lib.logging.debug("Serial range checked (offset=%s, min_oid=%x,"
" min_serial=%x, length=%s, count=%s, max_oid=%x,"
" max_serial=%x, last_oid=%x, last_serial=%x, critical_tid=%x)",
offset, u64(min_oid), u64(min_serial), length, count,
u64(max_oid), u64(max_serial), u64(last_oid), u64(last_serial),
u64(max_tid))
app.dm.deleteObjectsAbove(app.pt.getPartitions(),
offset, last_oid, last_serial, max_tid)
# Nothing remains, so the replication for this partition is
# finished.
replicator.setReplicationDone()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/storage.py 0000664 0000000 0000000 00000005301 11634614701 0026451 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.storage.handlers import BaseClientAndStorageOperationHandler
from neo.lib.protocol import Packets
class StorageOperationHandler(BaseClientAndStorageOperationHandler):
def _askObject(self, oid, serial, tid):
return self.app.dm.getObject(oid, serial, tid, resolve_data=False)
def askLastIDs(self, conn):
app = self.app
oid = app.dm.getLastOID()
tid = app.dm.getLastTID()
conn.answer(Packets.AnswerLastIDs(oid, tid, app.pt.getID()))
def askTIDsFrom(self, conn, min_tid, max_tid, length, partition_list):
assert len(partition_list) == 1, partition_list
partition = partition_list[0]
app = self.app
tid_list = app.dm.getReplicationTIDList(min_tid, max_tid, length,
app.pt.getPartitions(), partition)
conn.answer(Packets.AnswerTIDsFrom(tid_list))
def askObjectHistoryFrom(self, conn, min_oid, min_serial, max_serial,
length, partition):
app = self.app
object_dict = app.dm.getObjectHistoryFrom(min_oid, min_serial, max_serial,
length, app.pt.getPartitions(), partition)
conn.answer(Packets.AnswerObjectHistoryFrom(object_dict))
def askCheckTIDRange(self, conn, min_tid, max_tid, length, partition):
app = self.app
count, tid_checksum, max_tid = app.dm.checkTIDRange(min_tid, max_tid,
length, app.pt.getPartitions(), partition)
conn.answer(Packets.AnswerCheckTIDRange(min_tid, length,
count, tid_checksum, max_tid))
def askCheckSerialRange(self, conn, min_oid, min_serial, max_tid, length,
partition):
app = self.app
count, oid_checksum, max_oid, serial_checksum, max_serial = \
app.dm.checkSerialRange(min_oid, min_serial, max_tid, length,
app.pt.getPartitions(), partition)
conn.answer(Packets.AnswerCheckSerialRange(min_oid, min_serial, length,
count, oid_checksum, max_oid, serial_checksum, max_serial))
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/handlers/verification.py 0000664 0000000 0000000 00000006366 11634614701 0027503 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo
from neo.storage.handlers import BaseMasterHandler
from neo.lib.protocol import Packets, Errors, ProtocolError, INVALID_TID
from neo.lib.util import dump
from neo.lib.exception import OperationFailure
class VerificationHandler(BaseMasterHandler):
"""This class deals with events for a verification phase."""
def askLastIDs(self, conn):
app = self.app
try:
oid = app.dm.getLastOID()
except KeyError:
oid = None
try:
tid = app.dm.getLastTID()
except KeyError:
tid = None
conn.answer(Packets.AnswerLastIDs(oid, tid, app.pt.getID()))
def askPartitionTable(self, conn):
ptid = self.app.pt.getID()
row_list = self.app.pt.getRowList()
conn.answer(Packets.AnswerPartitionTable(ptid, row_list))
def notifyPartitionChanges(self, conn, ptid, cell_list):
"""This is very similar to Send Partition Table, except that
the information is only about changes from the previous."""
app = self.app
if ptid <= app.pt.getID():
# Ignore this packet.
neo.lib.logging.debug('ignoring older partition changes')
return
# update partition table in memory and the database
app.pt.update(ptid, cell_list, app.nm)
app.dm.changePartitionTable(ptid, cell_list)
def startOperation(self, conn):
self.app.operational = True
def stopOperation(self, conn):
raise OperationFailure('operation stopped')
def askUnfinishedTransactions(self, conn):
tid_list = self.app.dm.getUnfinishedTIDList()
conn.answer(Packets.AnswerUnfinishedTransactions(INVALID_TID, tid_list))
def askTransactionInformation(self, conn, tid):
app = self.app
t = app.dm.getTransaction(tid, all=True)
if t is None:
p = Errors.TidNotFound('%s does not exist' % dump(tid))
else:
p = Packets.AnswerTransactionInformation(tid, t[1], t[2], t[3],
t[4], t[0])
conn.answer(p)
def askObjectPresent(self, conn, oid, tid):
if self.app.dm.objectPresent(oid, tid):
p = Packets.AnswerObjectPresent(oid, tid)
else:
p = Errors.OidNotFound(
'%s:%s do not exist' % (dump(oid), dump(tid)))
conn.answer(p)
def deleteTransaction(self, conn, tid, oid_list):
self.app.dm.deleteTransaction(tid, oid_list)
def commitTransaction(self, conn, tid):
self.app.dm.finishTransaction(tid)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/replicator.py 0000664 0000000 0000000 00000033321 11634614701 0025354 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import neo.lib
from random import choice
from neo.storage.handlers import replication
from neo.lib.protocol import NodeTypes, NodeStates, Packets
from neo.lib.connection import ClientConnection
from neo.lib.util import dump
class Partition(object):
"""This class abstracts the state of a partition."""
def __init__(self, offset, max_tid, ttid_list):
# Possible optimization:
# _pending_ttid_list & _critical_tid can be shared amongst partitions
# created at the same time (cf Replicator.setUnfinishedTIDList).
# Replicator.transactionFinished would only have to iterate on these
# different sets, instead of all partitions.
self._offset = offset
self._pending_ttid_list = set(ttid_list)
# pending upper bound
self._critical_tid = max_tid
def getOffset(self):
return self._offset
def getCriticalTID(self):
return self._critical_tid
def transactionFinished(self, ttid, max_tid):
self._pending_ttid_list.remove(ttid)
assert max_tid is not None
# final upper bound
self._critical_tid = max_tid
def safe(self):
return not self._pending_ttid_list
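A partition becomes eligible for replication only once every pending transaction it was created with has finished, at which point its critical TID is final. A minimal sketch of that gating (a simplified, self-contained copy of the Partition class above, not the real implementation):

```python
class PartitionSketch:
    """Tracks pending ttids and the critical TID upper bound."""
    def __init__(self, offset, max_tid, ttid_list):
        self._offset = offset
        self._pending_ttid_list = set(ttid_list)
        self._critical_tid = max_tid      # temporary upper bound
    def transactionFinished(self, ttid, max_tid):
        self._pending_ttid_list.remove(ttid)
        self._critical_tid = max_tid      # final upper bound
    def safe(self):
        return not self._pending_ttid_list

p = PartitionSketch(0, max_tid=100, ttid_list=[7, 8])
assert not p.safe()                       # still waiting on ttids 7 and 8
p.transactionFinished(7, 101)
p.transactionFinished(8, 102)
assert p.safe()                           # replicable up to critical TID 102
```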
class Task(object):
"""
A Task is a callable to execute at another time, with given parameters.
Execution result is kept and can be retrieved later.
"""
_func = None
_args = None
_kw = None
_result = None
_processed = False
def __init__(self, func, args=(), kw=None):
self._func = func
self._args = args
if kw is None:
kw = {}
self._kw = kw
def process(self):
if self._processed:
raise ValueError, 'You cannot process a single Task twice'
self._processed = True
self._result = self._func(*self._args, **self._kw)
def getResult(self):
# Should we instead execute immediately rather than raising ?
if not self._processed:
raise ValueError, 'You cannot get a result until task is executed'
return self._result
def __repr__(self):
fmt = '<%s at %x %r(*%r, **%r)%%s>' % (self.__class__.__name__,
id(self), self._func, self._args, self._kw)
if self._processed:
extra = ' => %r' % (self._result, )
else:
extra = ''
return fmt % (extra, )
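The Task helper above defers a call and caches its result for later retrieval; usage might look like the following (a simplified, self-contained copy of the class, with a hypothetical function):

```python
class TaskSketch:
    """Simplified copy of the Task class above: defer a call, cache its result."""
    def __init__(self, func, args=(), kw=None):
        self._func, self._args, self._kw = func, args, kw or {}
        self._processed = False
        self._result = None
    def process(self):
        if self._processed:
            raise ValueError('You cannot process a single Task twice')
        self._processed = True
        self._result = self._func(*self._args, **self._kw)
    def getResult(self):
        if not self._processed:
            raise ValueError('You cannot get a result until task is executed')
        return self._result

task = TaskSketch(lambda a, b: a + b, (2, 3))
task.process()
assert task.getResult() == 5              # result cached after process()
```

In the replicator, such tasks queue local checksum computations so they run only when `processDelayedTasks` is called, keeping them ordered with the network exchanges.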
class Replicator(object):
"""This class handles replications of objects and transactions.
Assumptions:
- Client nodes recognize partition changes reasonably quickly.
- When an out of date partition is added, next transaction ID
is given after the change is notified and serialized.
Procedures:
- Get the last TID right after a partition is added. This TID
is called a "critical TID", because this and TIDs before this
may not be present in this storage node yet. After a critical
TID, all transactions must exist in this storage node.
- Check if a primary master node still has pending transactions
before and at a critical TID. If so, I must wait for them to be
committed or aborted.
- In order to copy data, first get the list of TIDs. This is done
part by part, because the list can be very huge. When getting
a part of the list, I verify if they are in my database, and
ask data only for non-existing TIDs. This is performed until
the check reaches a critical TID.
- Next, get the list of OIDs. And, for each OID, ask the history,
namely, a list of serials. This is also done part by part, and
I ask only non-existing data. """
# new_partition_set
    # outdated partitions for which pending transactions have not yet been
    # asked to the primary master
# partition_dict
# outdated partitions with pending transaction and temporary critical
# tid
# current_partition
# partition being currently synchronised
# current_connection
# connection to a storage node we are replicating from
# waiting_for_unfinished_tids
# unfinished tids have been asked to primary master node, but it
# didn't answer yet.
# replication_done
# False if we know there is something to replicate.
# True when current_partition is replicated, or we don't know yet if
# there is something to replicate
current_partition = None
current_connection = None
waiting_for_unfinished_tids = False
replication_done = True
def __init__(self, app):
self.app = app
self.new_partition_set = set()
self.partition_dict = {}
self.task_list = []
self.task_dict = {}
def masterLost(self):
"""
When connection to primary master is lost, stop waiting for unfinished
transactions.
"""
self.waiting_for_unfinished_tids = False
def storageLost(self):
"""
Restart replicating.
"""
self.reset()
def populate(self):
"""
Populate partitions to replicate. Must be called when partition
table is the one accepted by primary master.
Implies a reset.
"""
partition_list = self.app.pt.getOutdatedOffsetListFor(self.app.uuid)
self.new_partition_set = set(partition_list)
self.partition_dict = {}
self.reset()
def reset(self):
"""Reset attributes to restart replicating."""
self.task_list = []
self.task_dict = {}
self.current_partition = None
self.current_connection = None
self.replication_done = True
def pending(self):
"""Return whether there is any pending partition."""
return len(self.partition_dict) or len(self.new_partition_set)
def getCurrentOffset(self):
assert self.current_partition is not None
return self.current_partition.getOffset()
def getCurrentCriticalTID(self):
assert self.current_partition is not None
return self.current_partition.getCriticalTID()
def setReplicationDone(self):
""" Callback from ReplicationHandler """
self.replication_done = True
def isCurrentConnection(self, conn):
return self.current_connection is conn
def setUnfinishedTIDList(self, max_tid, ttid_list):
"""This is a callback from MasterOperationHandler."""
neo.lib.logging.debug('setting unfinished TTIDs %s',
','.join([dump(tid) for tid in ttid_list]))
        # all new outdated partitions must wait for those ttids
new_partition_set = self.new_partition_set
while new_partition_set:
offset = new_partition_set.pop()
self.partition_dict[offset] = Partition(offset, max_tid, ttid_list)
self.waiting_for_unfinished_tids = False
def transactionFinished(self, ttid, max_tid):
""" Callback from MasterOperationHandler """
for partition in self.partition_dict.itervalues():
partition.transactionFinished(ttid, max_tid)
def _askUnfinishedTIDs(self):
conn = self.app.master_conn
conn.ask(Packets.AskUnfinishedTransactions())
self.waiting_for_unfinished_tids = True
def _startReplication(self):
# Choose a storage node for the source.
app = self.app
cell_list = app.pt.getCellList(self.current_partition.getOffset(),
readable=True)
node_list = [cell.getNode() for cell in cell_list
if cell.getNodeState() == NodeStates.RUNNING]
try:
node = choice(node_list)
except IndexError:
# Not operational.
neo.lib.logging.error('not operational', exc_info = 1)
self.current_partition = None
return
addr = node.getAddress()
if addr is None:
neo.lib.logging.error("no address known for the selected node %s" %
(dump(node.getUUID()), ))
return
connection = self.current_connection
if connection is None or connection.getAddress() != addr:
handler = replication.ReplicationHandler(app)
self.current_connection = ClientConnection(app.em, handler,
addr=addr, connector=app.connector_handler())
p = Packets.RequestIdentification(NodeTypes.STORAGE,
app.uuid, app.server, app.name)
self.current_connection.ask(p)
if connection is not None:
connection.close()
else:
connection.getHandler().startReplication(connection)
self.replication_done = False
def _finishReplication(self):
# TODO: remove try..except: pass
try:
# Notify to a primary master node that my cell is now up-to-date.
conn = self.app.master_conn
offset = self.current_partition.getOffset()
self.partition_dict.pop(offset)
conn.notify(Packets.NotifyReplicationDone(offset))
except KeyError:
pass
if self.pending():
self.current_partition = None
else:
self.current_connection.close()
def act(self):
if self.current_partition is not None:
# Don't end replication until we have received all expected
# answers, as we might have asked object data just before the last
# AnswerCheckSerialRange.
if self.replication_done and \
not self.current_connection.isPending():
# finish a replication
neo.lib.logging.info('replication is done for %s' %
(self.current_partition.getOffset(), ))
self._finishReplication()
return
if self.waiting_for_unfinished_tids:
# Still waiting.
neo.lib.logging.debug('waiting for unfinished tids')
return
if self.new_partition_set:
# Ask pending transactions.
neo.lib.logging.debug('asking unfinished tids')
self._askUnfinishedTIDs()
return
# Try to select something.
for partition in self.partition_dict.values():
# XXX: replication could start up to the initial critical tid, that
# is below the pending transactions, then finish when all pending
# transactions are committed.
if partition.safe():
self.current_partition = partition
break
else:
# Not yet.
neo.lib.logging.debug('not ready yet')
return
self._startReplication()
def removePartition(self, offset):
"""This is a callback from MasterOperationHandler."""
self.partition_dict.pop(offset, None)
self.new_partition_set.discard(offset)
def addPartition(self, offset):
"""This is a callback from MasterOperationHandler."""
        if offset not in self.partition_dict:
self.new_partition_set.add(offset)
def _addTask(self, key, func, args=(), kw=None):
task = Task(func, args, kw)
task_dict = self.task_dict
if key in task_dict:
raise ValueError, 'Task with key %r already exists (%r), cannot ' \
'add %r' % (key, task_dict[key], task)
task_dict[key] = task
self.task_list.append(task)
def processDelayedTasks(self):
task_list = self.task_list
if task_list:
for task in task_list:
task.process()
self.task_list = []
def checkTIDRange(self, min_tid, max_tid, length, partition):
app = self.app
self._addTask(('TID', min_tid, length), app.dm.checkTIDRange,
(min_tid, max_tid, length, app.pt.getPartitions(), partition))
def checkSerialRange(self, min_oid, min_serial, max_tid, length,
partition):
app = self.app
self._addTask(('Serial', min_oid, min_serial, length),
app.dm.checkSerialRange, (min_oid, min_serial, max_tid, length,
app.pt.getPartitions(), partition))
def getTIDsFrom(self, min_tid, max_tid, length, partition):
app = self.app
self._addTask('TIDsFrom',
app.dm.getReplicationTIDList, (min_tid, max_tid, length,
app.pt.getPartitions(), partition))
def getObjectHistoryFrom(self, min_oid, min_serial, max_serial, length,
partition):
app = self.app
self._addTask('ObjectHistoryFrom',
app.dm.getObjectHistoryFrom, (min_oid, min_serial, max_serial,
length, app.pt.getPartitions(), partition))
def _getCheckResult(self, key):
return self.task_dict.pop(key).getResult()
def getTIDCheckResult(self, min_tid, length):
return self._getCheckResult(('TID', min_tid, length))
def getSerialCheckResult(self, min_oid, min_serial, length):
return self._getCheckResult(('Serial', min_oid, min_serial, length))
def getTIDsFromResult(self):
return self._getCheckResult('TIDsFrom')
def getObjectHistoryFromResult(self):
return self._getCheckResult('ObjectHistoryFrom')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/storage/transactions.py 0000664 0000000 0000000 00000034370 11634614701 0025725 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from time import time
import neo.lib
from neo.lib.util import dump
from neo.lib.protocol import ZERO_TID
class ConflictError(Exception):
"""
Raised when a resolvable conflict occurs
Argument: tid of locking transaction or latest revision
"""
def __init__(self, tid):
Exception.__init__(self)
self._tid = tid
def getTID(self):
return self._tid
class DelayedError(Exception):
"""
Raised when an object is locked by a previous transaction
"""
class Transaction(object):
"""
Container for a pending transaction
"""
_tid = None
def __init__(self, uuid, ttid):
self._uuid = uuid
self._ttid = ttid
self._object_dict = {}
self._transaction = None
self._locked = False
self._birth = time()
self._checked_set = set()
def __repr__(self):
return "<%s(ttid=%r, tid=%r, uuid=%r, locked=%r, age=%.2fs)> at %x" % (
self.__class__.__name__,
dump(self._ttid),
dump(self._tid),
dump(self._uuid),
self.isLocked(),
time() - self._birth,
id(self),
)
def addCheckedObject(self, oid):
assert oid not in self._object_dict, dump(oid)
self._checked_set.add(oid)
def getTTID(self):
return self._ttid
def setTID(self, tid):
assert self._tid is None, dump(self._tid)
assert tid is not None
self._tid = tid
def getTID(self):
return self._tid
def getUUID(self):
return self._uuid
def lock(self):
assert not self._locked
self._locked = True
def isLocked(self):
return self._locked
def prepare(self, oid_list, user, desc, ext, packed):
"""
        Set the transaction information
"""
# assert self._transaction is not None
self._transaction = (oid_list, user, desc, ext, packed)
def addObject(self, oid, compression, checksum, data, value_serial):
"""
Add an object to the transaction
"""
assert oid not in self._checked_set, dump(oid)
self._object_dict[oid] = (oid, compression, checksum, data,
value_serial)
def delObject(self, oid):
try:
del self._object_dict[oid]
except KeyError:
self._checked_set.remove(oid)
def getObject(self, oid):
return self._object_dict.get(oid)
def getObjectList(self):
return self._object_dict.values()
def getOIDList(self):
return self._object_dict.keys()
def getLockedOIDList(self):
return self._object_dict.keys() + list(self._checked_set)
def getTransactionInformations(self):
return self._transaction
class TransactionManager(object):
"""
Manage pending transaction and locks
"""
def __init__(self, app):
self._app = app
self._transaction_dict = {}
self._store_lock_dict = {}
self._load_lock_dict = {}
self._uuid_dict = {}
def __contains__(self, ttid):
"""
Returns True if the TID is known by the manager
"""
return ttid in self._transaction_dict
def register(self, uuid, ttid):
"""
        Register a transaction; it may already be registered
"""
neo.lib.logging.debug('Register TXN %s for %s', dump(ttid), dump(uuid))
transaction = self._transaction_dict.get(ttid, None)
if transaction is None:
transaction = Transaction(uuid, ttid)
self._uuid_dict.setdefault(uuid, set()).add(transaction)
self._transaction_dict[ttid] = transaction
return transaction
def getObjectFromTransaction(self, ttid, oid):
"""
Return object data for given running transaction.
Return None if not found.
"""
result = self._transaction_dict.get(ttid)
if result is not None:
result = result.getObject(oid)
return result
def reset(self):
"""
Reset the transaction manager
"""
self._transaction_dict.clear()
self._store_lock_dict.clear()
self._load_lock_dict.clear()
self._uuid_dict.clear()
def lock(self, ttid, tid, oid_list):
"""
Lock a transaction
"""
neo.lib.logging.debug('Lock TXN %s (ttid=%s)', dump(tid), dump(ttid))
transaction = self._transaction_dict[ttid]
# remember that the transaction has been locked
transaction.lock()
for oid in transaction.getOIDList():
self._load_lock_dict[oid] = ttid
# check every object that should be locked
uuid = transaction.getUUID()
is_assigned = self._app.pt.isAssigned
for oid in oid_list:
if is_assigned(oid, uuid) and \
self._load_lock_dict.get(oid) != ttid:
raise ValueError, 'Some locks are not held'
object_list = transaction.getObjectList()
        # txn_info is None if the transaction information is not stored on
        # this storage.
txn_info = transaction.getTransactionInformations()
# store data from memory to temporary table
self._app.dm.storeTransaction(tid, object_list, txn_info)
# ...and remember its definitive TID
transaction.setTID(tid)
def getTIDFromTTID(self, ttid):
return self._transaction_dict[ttid].getTID()
def unlock(self, ttid):
"""
Unlock transaction
"""
neo.lib.logging.debug('Unlock TXN %s', dump(ttid))
self._app.dm.finishTransaction(self.getTIDFromTTID(ttid))
self.abort(ttid, even_if_locked=True)
def storeTransaction(self, ttid, oid_list, user, desc, ext, packed):
"""
Store transaction information received from client node
"""
assert ttid in self, "Transaction not registered"
transaction = self._transaction_dict[ttid]
transaction.prepare(oid_list, user, desc, ext, packed)
def getLockingTID(self, oid):
return self._store_lock_dict.get(oid)
def lockObject(self, ttid, serial, oid, unlock=False):
"""
Take a write lock on given object, checking that "serial" is
current.
Raises:
DelayedError
ConflictError
"""
        # check if the object is locked
locking_tid = self._store_lock_dict.get(oid)
if locking_tid == ttid and unlock:
neo.lib.logging.info('Deadlock resolution on %r:%r', dump(oid),
dump(ttid))
# A duplicate store means client is resolving a deadlock, so
# drop the lock it held on this object, and drop object data for
# consistency.
del self._store_lock_dict[oid]
self._transaction_dict[ttid].delObject(oid)
# Give a chance to pending events to take that lock now.
self._app.executeQueuedEvents()
            # Attempt to acquire the lock again.
locking_tid = self._store_lock_dict.get(oid)
if locking_tid in (None, ttid):
# check if this is generated from the latest revision.
if locking_tid == ttid:
# If previous store was an undo, next store must be based on
# undo target.
_, _, _, _, previous_serial = self._transaction_dict[
ttid].getObject(oid)
if previous_serial is None:
# XXX: use some special serial when previous store was not
# an undo ? Maybe it should just not happen.
neo.lib.logging.info('Transaction %s storing %s more than '
'once', dump(ttid), dump(oid))
else:
previous_serial = None
if previous_serial is None:
history_list = self._app.dm.getObjectHistory(oid)
if history_list:
previous_serial = history_list[0][0]
if previous_serial is not None and previous_serial != serial:
neo.lib.logging.info('Resolvable conflict on %r:%r',
dump(oid), dump(ttid))
raise ConflictError(previous_serial)
neo.lib.logging.debug('Transaction %s storing %s',
dump(ttid), dump(oid))
self._store_lock_dict[oid] = ttid
elif locking_tid > ttid:
            # We have a smaller TTID than the locking transaction, so we are
            # older: enter the waiting queue so we are handled when the lock
            # gets released.
neo.lib.logging.info('Store delayed for %r:%r by %r', dump(oid),
dump(ttid), dump(locking_tid))
raise DelayedError
else:
# We have a bigger TTID than locking transaction, so we are
# younger: this is a possible deadlock case, as we might already
# hold locks that older transaction is waiting upon. Make client
# release locks & reacquire them by notifying it of the possible
# deadlock.
neo.lib.logging.info('Possible deadlock on %r:%r with %r',
dump(oid), dump(ttid), dump(locking_tid))
raise ConflictError(ZERO_TID)
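The branches of `lockObject()` above implement a wait-die style deadlock-avoidance policy: the TTID doubles as an age, so an older (smaller-TTID) transaction waits for the lock while a younger one is told to back off and retry. A standalone sketch of that rule (hypothetical names; the real code raises `DelayedError` and `ConflictError(ZERO_TID)`):

```python
# Sketch of the lock-ordering policy used by lockObject() above.
class Delayed(Exception):
    """Requester is older than the holder: queue it until the lock is free."""

class Conflict(Exception):
    """Requester is younger: make the client release its locks and retry."""

def try_lock(lock_table, oid, ttid):
    holder = lock_table.get(oid)
    if holder is None or holder == ttid:
        lock_table[oid] = ttid  # free, or already ours: take/keep the lock
        return
    if holder > ttid:
        raise Delayed           # we are older: wait for the release
    raise Conflict              # we are younger: possible deadlock, back off

locks = {}
try_lock(locks, 'oid1', 5)      # acquires the lock for ttid 5
```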
def checkCurrentSerial(self, ttid, serial, oid):
self.lockObject(ttid, serial, oid, unlock=True)
assert ttid in self, "Transaction not registered"
transaction = self._transaction_dict[ttid]
transaction.addCheckedObject(oid)
def storeObject(self, ttid, serial, oid, compression, checksum, data,
value_serial, unlock=False):
"""
Store an object received from client node
"""
self.lockObject(ttid, serial, oid, unlock=unlock)
# store object
assert ttid in self, "Transaction not registered"
transaction = self._transaction_dict[ttid]
transaction.addObject(oid, compression, checksum, data, value_serial)
def abort(self, ttid, even_if_locked=False):
"""
Abort a transaction
Releases locks held on all transaction objects, deletes Transaction
        instance, and executes queued events.
Note: does not alter persistent content.
"""
if ttid not in self._transaction_dict:
            # the ttid may be unknown as the transaction is aborted on every node
# of the partition, even if no data was received (eg. conflict on
# another node)
return
neo.lib.logging.debug('Abort TXN %s', dump(ttid))
transaction = self._transaction_dict[ttid]
has_load_lock = transaction.isLocked()
# if the transaction is locked, ensure we can drop it
if not even_if_locked and has_load_lock:
return
# unlock any object
for oid in transaction.getLockedOIDList():
if has_load_lock:
lock_ttid = self._load_lock_dict.pop(oid, None)
                assert lock_ttid in (ttid, None), 'Transaction %s tried to ' \
                    'release the lock on oid %s, but it was held by %s' % (
                    dump(ttid), dump(oid), dump(lock_ttid))
write_locking_tid = self._store_lock_dict.pop(oid)
assert write_locking_tid == ttid, 'Inconsistent locking state: ' \
'aborting %s:%s but %s has the lock.' % (dump(ttid), dump(oid),
dump(write_locking_tid))
# remove the transaction
uuid = transaction.getUUID()
self._uuid_dict[uuid].discard(transaction)
# clean node index if there is no more current transactions
if not self._uuid_dict[uuid]:
del self._uuid_dict[uuid]
del self._transaction_dict[ttid]
# some locks were released, some pending locks may now succeed
self._app.executeQueuedEvents()
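Besides releasing locks, `abort()` above maintains the per-node index `_uuid_dict`: once the aborted transaction was the node's last one, the node's entry is dropped entirely. A small standalone sketch of that cleanup (hypothetical names):

```python
# Sketch of the per-node transaction index cleanup done by abort() above:
# transactions are indexed by node UUID, and the index entry disappears as
# soon as the node has no remaining transactions.
def forget(uuid_dict, txn_dict, uuid, ttid):
    txn = txn_dict.pop(ttid)        # drop the transaction itself
    uuid_dict[uuid].discard(txn)    # unlink it from its node
    if not uuid_dict[uuid]:         # last transaction of this node
        del uuid_dict[uuid]

txn = object()
uuid_dict = {'NODE': {txn}}
txn_dict = {'TTID': txn}
forget(uuid_dict, txn_dict, 'NODE', 'TTID')
```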
def abortFor(self, uuid):
"""
Abort any non-locked transaction of a node
"""
neo.lib.logging.debug('Abort for %s', dump(uuid))
# abort any non-locked transaction of this node
for ttid in [x.getTTID() for x in self._uuid_dict.get(uuid, [])]:
self.abort(ttid)
# cleanup _uuid_dict if no transaction remains for this node
transaction_set = self._uuid_dict.get(uuid)
if transaction_set is not None and not transaction_set:
del self._uuid_dict[uuid]
def loadLocked(self, oid):
return oid in self._load_lock_dict
def log(self):
neo.lib.logging.info("Transactions:")
for txn in self._transaction_dict.values():
neo.lib.logging.info(' %r', txn)
neo.lib.logging.info(' Read locks:')
for oid, ttid in self._load_lock_dict.items():
neo.lib.logging.info(' %r by %r', dump(oid), dump(ttid))
neo.lib.logging.info(' Write locks:')
for oid, ttid in self._store_lock_dict.items():
neo.lib.logging.info(' %r by %r', dump(oid), dump(ttid))
def updateObjectDataForPack(self, oid, orig_serial, new_serial,
getObjectData):
lock_tid = self.getLockingTID(oid)
if lock_tid is not None:
transaction = self._transaction_dict[lock_tid]
oid, compression, checksum, data, value_serial = \
transaction.getObject(oid)
if value_serial == orig_serial:
if new_serial:
value_serial = new_serial
else:
compression, checksum, data = getObjectData()
value_serial = None
transaction.addObject(oid, compression, checksum, data,
value_serial)
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import __builtin__
import errno
import os
import random
import socket
import sys
import tempfile
import unittest
import MySQLdb
import neo
import transaction
from mock import Mock
from neo.lib import debug, logger, protocol, setupLog
from neo.lib.protocol import Packets
from neo.lib.util import getAddressType
from time import time, gmtime
from struct import pack, unpack
DB_PREFIX = os.getenv('NEO_DB_PREFIX', 'test_neo')
DB_ADMIN = os.getenv('NEO_DB_ADMIN', 'root')
DB_PASSWD = os.getenv('NEO_DB_PASSWD', '')
DB_USER = os.getenv('NEO_DB_USER', 'test')
IP_VERSION_FORMAT_DICT = {
socket.AF_INET: '127.0.0.1',
socket.AF_INET6: '::1',
}
ADDRESS_TYPE = socket.AF_INET
debug.ENABLED = True
debug.register()
# prevent "signal only works in main thread" errors in subprocesses
debug.ENABLED = False
def mockDefaultValue(name, function):
def method(self, *args, **kw):
if name in self.mockReturnValues:
return self.__getattr__(name)(*args, **kw)
return function(self, *args, **kw)
method.__name__ = name
setattr(Mock, name, method)
mockDefaultValue('__nonzero__', lambda self: self.__len__() != 0)
mockDefaultValue('__repr__', lambda self:
'<%s object at 0x%x>' % (self.__class__.__name__, id(self)))
mockDefaultValue('__str__', repr)
def buildUrlFromString(address):
try:
socket.inet_pton(socket.AF_INET6, address)
address = '[%s]' % address
except Exception:
pass
return address
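`buildUrlFromString()` above brackets IPv6 literals so that a `:port` suffix can later be appended unambiguously. A standalone equivalent with a narrower except clause (hypothetical name):

```python
# Standalone equivalent of buildUrlFromString() above: an IPv6 literal must
# be wrapped in brackets before 'host:port' concatenation, while hostnames
# and IPv4 addresses pass through unchanged.
import socket

def bracket_if_ipv6(address):
    try:
        socket.inet_pton(socket.AF_INET6, address)
    except (OSError, ValueError):
        return address              # not an IPv6 literal: leave as-is
    return '[%s]' % address
```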
def getTempDirectory():
"""get the current temp directory or a new one"""
try:
temp_dir = os.environ['TEMP']
except KeyError:
neo_dir = os.path.join(tempfile.gettempdir(), 'neo_tests')
while True:
temp_dir = os.path.join(neo_dir, repr(time()))
try:
os.makedirs(temp_dir)
break
except OSError, e:
if e.errno != errno.EEXIST:
raise
os.environ['TEMP'] = temp_dir
print 'Using temp directory %r.' % temp_dir
return temp_dir
def setupMySQLdb(db_list, user=DB_USER, password='', clear_databases=True):
from MySQLdb.constants.ER import BAD_DB_ERROR
conn = MySQLdb.Connect(user=DB_ADMIN, passwd=DB_PASSWD)
cursor = conn.cursor()
for database in db_list:
try:
conn.select_db(database)
if not clear_databases:
continue
cursor.execute('DROP DATABASE `%s`' % database)
except MySQLdb.OperationalError, (code, _):
if code != BAD_DB_ERROR:
raise
cursor.execute('GRANT ALL ON `%s`.* TO "%s"@"localhost" IDENTIFIED'
' BY "%s"' % (database, user, password))
cursor.execute('CREATE DATABASE `%s`' % database)
cursor.close()
conn.commit()
conn.close()
class NeoTestBase(unittest.TestCase):
def setUp(self):
logger.PACKET_LOGGER.enable(True)
sys.stdout.write(' * %s ' % (self.id(), ))
sys.stdout.flush()
self.setupLog()
unittest.TestCase.setUp(self)
def setupLog(self):
test_case, test_method = self.id().rsplit('.', 1)
log_file = os.path.join(getTempDirectory(), test_case + '.log')
setupLog(test_method, log_file, True)
def tearDown(self):
# Kill all unfinished transactions for next test.
# Note we don't even abort them because it may require a valid
# connection to a master node (see Storage.sync()).
transaction.manager.__init__()
unittest.TestCase.tearDown(self)
sys.stdout.write('\n')
sys.stdout.flush()
failIfEqual = failUnlessEqual = assertEquals = assertNotEquals = None
def assertNotEqual(self, first, second, msg=None):
assert not (isinstance(first, Mock) or isinstance(second, Mock)), \
"Mock objects can't be compared with '==' or '!='"
return super(NeoTestBase, self).assertNotEqual(first, second, msg=msg)
def assertEqual(self, first, second, msg=None):
assert not (isinstance(first, Mock) or isinstance(second, Mock)), \
"Mock objects can't be compared with '==' or '!='"
return super(NeoTestBase, self).assertEqual(first, second, msg=msg)
class NeoUnitTestBase(NeoTestBase):
""" Base class for neo tests, implements common checks """
local_ip = IP_VERSION_FORMAT_DICT[ADDRESS_TYPE]
def prepareDatabase(self, number, prefix='test_neo'):
""" create empties databases """
setupMySQLdb(['%s%u' % (prefix, i) for i in xrange(number)])
def getMasterConfiguration(self, cluster='main', master_number=2,
replicas=2, partitions=1009, uuid=None):
        assert 1 <= master_number <= 10
masters = ([(self.local_ip, 10010 + i)
for i in xrange(master_number)])
return Mock({
'getCluster': cluster,
'getBind': masters[0],
'getMasters': (masters, getAddressType((
self.local_ip, 0))),
'getReplicas': replicas,
'getPartitions': partitions,
'getUUID': uuid,
})
def getStorageConfiguration(self, cluster='main', master_number=2,
index=0, prefix=DB_PREFIX, uuid=None):
        assert 1 <= master_number <= 10
        assert 0 <= index <= 9
masters = [(buildUrlFromString(self.local_ip),
10010 + i) for i in xrange(master_number)]
database = '%s@%s%s' % (DB_USER, prefix, index)
return Mock({
'getCluster': cluster,
'getName': 'storage',
'getBind': (masters[0], 10020 + index),
'getMasters': (masters, getAddressType((
self.local_ip, 0))),
'getDatabase': database,
'getUUID': uuid,
'getReset': False,
'getAdapter': 'MySQL',
})
def _makeUUID(self, prefix):
"""
        Returns a 16-byte UUID according to namespace 'prefix'
"""
assert len(prefix) == 1
uuid = protocol.INVALID_UUID
while uuid[1:] == protocol.INVALID_UUID[1:]:
uuid = prefix + os.urandom(15)
return uuid
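`_makeUUID()` above builds 16-byte node UUIDs whose first byte tags the node type ('C'lient, 'M'aster, 'S'torage, 'A'dmin) and whose remaining 15 bytes are random, retrying if the random part would collide with the invalid UUID. A sketch of the same scheme (an assumption here: `INVALID_UUID` is taken to be 16 zero bytes; the real constant lives in `neo.lib.protocol`):

```python
# Sketch of the namespaced UUID scheme of _makeUUID() above.
import os

INVALID_UUID = b'\0' * 16   # assumed value; real constant is in neo.lib.protocol

def make_uuid(prefix):
    assert len(prefix) == 1
    uuid = INVALID_UUID
    # retry until the random tail differs from INVALID_UUID's tail
    while uuid[1:] == INVALID_UUID[1:]:
        uuid = prefix + os.urandom(15)
    return uuid
```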
def getNewUUID(self):
return self._makeUUID('\0')
def getClientUUID(self):
return self._makeUUID('C')
def getMasterUUID(self):
return self._makeUUID('M')
def getStorageUUID(self):
return self._makeUUID('S')
def getAdminUUID(self):
return self._makeUUID('A')
def getNextTID(self, ltid=None):
tm = time()
gmt = gmtime(tm)
upper = ((((gmt.tm_year - 1900) * 12 + gmt.tm_mon - 1) * 31 \
+ gmt.tm_mday - 1) * 24 + gmt.tm_hour) * 60 + gmt.tm_min
lower = int((gmt.tm_sec % 60 + (tm - int(tm))) / (60.0 / 65536.0 / 65536.0))
tid = pack('!LL', upper, lower)
if ltid is not None and tid <= ltid:
            upper, lower = unpack('!LL', ltid)
if lower == 0xffffffff:
# This should not happen usually.
from datetime import timedelta, datetime
d = datetime(gmt.tm_year, gmt.tm_mon, gmt.tm_mday,
gmt.tm_hour, gmt.tm_min) \
+ timedelta(0, 60)
upper = ((((d.year - 1900) * 12 + d.month - 1) * 31 \
+ d.day - 1) * 24 + d.hour) * 60 + d.minute
lower = 0
else:
lower += 1
tid = pack('!LL', upper, lower)
return tid
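`getNextTID()` above packs the TID as two big-endian 32-bit words: `upper` encodes the wall-clock minute in a mixed-radix (year, month, day, hour, minute) scheme with fixed 31-day months, and `lower` splits that minute into 2**32 slices, so TIDs compare in timestamp order as raw 8-byte strings. A sketch of the layout (hypothetical helper names):

```python
# Sketch of the TID layout produced by getNextTID() above: lexicographic
# order of the packed bytes matches chronological order.
from struct import pack
from time import gmtime

def tid_upper(gmt):
    # minutes since 1900-01-01 in a (year, month, day, hour, minute) radix
    # scheme that pretends every month has 31 days
    return ((((gmt.tm_year - 1900) * 12 + gmt.tm_mon - 1) * 31
             + gmt.tm_mday - 1) * 24 + gmt.tm_hour) * 60 + gmt.tm_min

def make_tid(gmt, lower):
    return pack('!LL', tid_upper(gmt), lower)
```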
def getPTID(self, i=None):
""" Return an integer PTID """
if i is None:
return random.randint(1, 2**64)
return i
def getOID(self, i=None):
""" Return a 8-bytes OID """
if i is None:
return os.urandom(8)
return pack('!Q', i)
def getTwoIDs(self):
""" Return a tuple of two sorted UUIDs """
        # generate two UUIDs, return the lower first
uuids = self.getNewUUID(), self.getNewUUID()
return min(uuids), max(uuids)
def getFakeConnector(self, descriptor=None):
return Mock({
'__repr__': 'FakeConnector',
'getDescriptor': descriptor,
'getAddress': ('', 0),
})
def getFakeConnection(self, uuid=None, address=('127.0.0.1', 10000),
is_server=False, connector=None, peer_id=None):
if connector is None:
connector = self.getFakeConnector()
return Mock({
'getUUID': uuid,
'getAddress': address,
'isServer': is_server,
'__repr__': 'FakeConnection',
'__nonzero__': 0,
'getConnector': connector,
'getPeerId': peer_id,
})
def checkProtocolErrorRaised(self, method, *args, **kwargs):
""" Check if the ProtocolError exception was raised """
self.assertRaises(protocol.ProtocolError, method, *args, **kwargs)
def checkUnexpectedPacketRaised(self, method, *args, **kwargs):
""" Check if the UnexpectedPacketError exception wxas raised """
self.assertRaises(protocol.UnexpectedPacketError, method, *args, **kwargs)
def checkIdenficationRequired(self, method, *args, **kwargs):
""" Check is the identification_required decorator is applied """
self.checkUnexpectedPacketRaised(method, *args, **kwargs)
def checkBrokenNodeDisallowedErrorRaised(self, method, *args, **kwargs):
""" Check if the BrokenNodeDisallowedError exception wxas raised """
self.assertRaises(protocol.BrokenNodeDisallowedError, method, *args, **kwargs)
def checkNotReadyErrorRaised(self, method, *args, **kwargs):
""" Check if the NotReadyError exception wxas raised """
self.assertRaises(protocol.NotReadyError, method, *args, **kwargs)
def checkAborted(self, conn):
""" Ensure the connection was aborted """
self.assertEqual(len(conn.mockGetNamedCalls('abort')), 1)
def checkNotAborted(self, conn):
""" Ensure the connection was not aborted """
self.assertEqual(len(conn.mockGetNamedCalls('abort')), 0)
def checkClosed(self, conn):
""" Ensure the connection was closed """
self.assertEqual(len(conn.mockGetNamedCalls('close')), 1)
def checkNotClosed(self, conn):
""" Ensure the connection was not closed """
self.assertEqual(len(conn.mockGetNamedCalls('close')), 0)
def _checkNoPacketSend(self, conn, method_id):
call_list = conn.mockGetNamedCalls(method_id)
self.assertEqual(len(call_list), 0, call_list)
def checkNoPacketSent(self, conn, check_notify=True, check_answer=True,
check_ask=True):
""" check if no packet were sent """
if check_notify:
self._checkNoPacketSend(conn, 'notify')
if check_answer:
self._checkNoPacketSend(conn, 'answer')
if check_ask:
self._checkNoPacketSend(conn, 'ask')
def checkNoUUIDSet(self, conn):
""" ensure no UUID was set on the connection """
self.assertEqual(len(conn.mockGetNamedCalls('setUUID')), 0)
def checkUUIDSet(self, conn, uuid=None):
""" ensure no UUID was set on the connection """
calls = conn.mockGetNamedCalls('setUUID')
self.assertEqual(len(calls), 1)
call = calls.pop()
if uuid is not None:
self.assertEqual(call.getParam(0), uuid)
# in check(Ask|Answer|Notify)Packet we return the packet so it can be used
    # in tests if more accurate checks are required
def checkErrorPacket(self, conn, decode=False):
""" Check if an error packet was answered """
calls = conn.mockGetNamedCalls("answer")
self.assertEqual(len(calls), 1)
packet = calls.pop().getParam(0)
self.assertTrue(isinstance(packet, protocol.Packet))
self.assertEqual(type(packet), Packets.Error)
        if decode:
            return packet.decode()
        return packet
def checkAskPacket(self, conn, packet_type, decode=False):
""" Check if an ask-packet with the right type is sent """
calls = conn.mockGetNamedCalls('ask')
self.assertEqual(len(calls), 1)
packet = calls.pop().getParam(0)
self.assertTrue(isinstance(packet, protocol.Packet))
self.assertEqual(type(packet), packet_type)
if decode:
return packet.decode()
return packet
def checkAnswerPacket(self, conn, packet_type, decode=False):
""" Check if an answer-packet with the right type is sent """
calls = conn.mockGetNamedCalls('answer')
self.assertEqual(len(calls), 1)
packet = calls.pop().getParam(0)
self.assertTrue(isinstance(packet, protocol.Packet))
self.assertEqual(type(packet), packet_type)
if decode:
return packet.decode()
return packet
def checkNotifyPacket(self, conn, packet_type, packet_number=0, decode=False):
""" Check if a notify-packet with the right type is sent """
calls = conn.mockGetNamedCalls('notify')
packet = calls.pop(packet_number).getParam(0)
self.assertTrue(isinstance(packet, protocol.Packet))
self.assertEqual(type(packet), packet_type)
if decode:
return packet.decode()
return packet
def checkNotify(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.Notify, **kw)
def checkNotifyNodeInformation(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.NotifyNodeInformation, **kw)
def checkSendPartitionTable(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.SendPartitionTable, **kw)
def checkStartOperation(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.StartOperation, **kw)
def checkInvalidateObjects(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.InvalidateObjects, **kw)
def checkAbortTransaction(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.AbortTransaction, **kw)
def checkNotifyLastOID(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.NotifyLastOID, **kw)
def checkAnswerTransactionFinished(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerTransactionFinished, **kw)
def checkAnswerInformationLocked(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerInformationLocked, **kw)
def checkAskLockInformation(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskLockInformation, **kw)
def checkNotifyUnlockInformation(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.NotifyUnlockInformation, **kw)
def checkNotifyTransactionFinished(self, conn, **kw):
return self.checkNotifyPacket(conn, Packets.NotifyTransactionFinished, **kw)
def checkRequestIdentification(self, conn, **kw):
return self.checkAskPacket(conn, Packets.RequestIdentification, **kw)
    def checkAskPrimary(self, conn, **kw):
        return self.checkAskPacket(conn, Packets.AskPrimary, **kw)
    def checkAskUnfinishedTransactions(self, conn, **kw):
        return self.checkAskPacket(conn, Packets.AskUnfinishedTransactions, **kw)
def checkAskTransactionInformation(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskTransactionInformation, **kw)
def checkAskObjectPresent(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskObjectPresent, **kw)
def checkAskObject(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskObject, **kw)
def checkAskStoreObject(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskStoreObject, **kw)
def checkAskStoreTransaction(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskStoreTransaction, **kw)
def checkAskFinishTransaction(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskFinishTransaction, **kw)
def checkAskNewTid(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskBeginTransaction, **kw)
def checkAskLastIDs(self, conn, **kw):
return self.checkAskPacket(conn, Packets.AskLastIDs, **kw)
def checkAcceptIdentification(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AcceptIdentification, **kw)
def checkAnswerPrimary(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerPrimary, **kw)
def checkAnswerLastIDs(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerLastIDs, **kw)
def checkAnswerUnfinishedTransactions(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerUnfinishedTransactions, **kw)
def checkAnswerObject(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerObject, **kw)
def checkAnswerTransactionInformation(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerTransactionInformation, **kw)
def checkAnswerBeginTransaction(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerBeginTransaction, **kw)
def checkAnswerTids(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerTIDs, **kw)
def checkAnswerTidsFrom(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerTIDsFrom, **kw)
def checkAnswerObjectHistory(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerObjectHistory, **kw)
def checkAnswerObjectHistoryFrom(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerObjectHistoryFrom, **kw)
def checkAnswerStoreTransaction(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerStoreTransaction, **kw)
def checkAnswerStoreObject(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerStoreObject, **kw)
def checkAnswerOids(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerOIDs, **kw)
def checkAnswerPartitionTable(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerPartitionTable, **kw)
def checkAnswerObjectPresent(self, conn, **kw):
return self.checkAnswerPacket(conn, Packets.AnswerObjectPresent, **kw)
connector_cpt = 0
class DoNothingConnector(Mock):
def __init__(self, s=None):
neo.lib.logging.info("initializing connector")
global connector_cpt
self.desc = connector_cpt
connector_cpt += 1
self.packet_cpt = 0
Mock.__init__(self)
def getAddress(self):
return self.addr
def makeClientConnection(self, addr):
self.addr = addr
def makeListeningConnection(self, addr):
self.addr = addr
def getDescriptor(self):
return self.desc
__builtin__.pdb = lambda depth=0: \
debug.getPdb().set_trace(sys._getframe(depth+1))
import sys
import email
import smtplib
import optparse
import platform
import datetime
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from neo.lib import logger
MAIL_SERVER = '127.0.0.1:25'
class AttributeDict(dict):
def __getattr__(self, item):
return self.__getitem__(item)
class BenchmarkRunner(object):
"""
Base class for a command-line benchmark test runner.
"""
def __init__(self):
self._successful = True
self._status = []
parser = optparse.OptionParser()
# register common options
parser.add_option('', '--title')
parser.add_option('-v', '--verbose', action='store_true')
parser.add_option('', '--mail-to', action='append')
parser.add_option('', '--mail-from')
parser.add_option('', '--mail-server')
parser.add_option('', '--repeat', type='int', default=1)
self.add_options(parser)
# check common arguments
options, self._args = parser.parse_args()
if bool(options.mail_to) ^ bool(options.mail_from):
sys.exit('Need a sender and recipients to mail report')
mail_server = options.mail_server or MAIL_SERVER
# check specifics arguments
self._config = AttributeDict()
self._config.update(self.load_options(options, self._args))
self._config.update(
title = options.title or self.__class__.__name__,
verbose = bool(options.verbose),
mail_from = options.mail_from,
mail_to = options.mail_to,
mail_server = mail_server.split(':'),
repeat = options.repeat,
)
def add_status(self, key, value):
self._status.append((key, value))
def build_report(self, content):
fmt = "%-25s : %s"
status = "\n".join([fmt % item for item in [
('Title', self._config.title),
('Date', datetime.date.today().isoformat()),
('Node', platform.node()),
('Machine', platform.machine()),
('System', platform.system()),
('Python', platform.python_version()),
]])
status += '\n\n'
status += "\n".join([fmt % item for item in self._status])
return "%s\n\n%s" % (status, content)
def send_report(self, subject, report):
# build report
# build email
msg = MIMEMultipart()
msg['Subject'] = '%s: %s' % (self._config.title, subject)
msg['From'] = self._config.mail_from
msg['To'] = ', '.join(self._config.mail_to)
msg['X-ERP5-Tests'] = 'NEO'
if self._successful:
msg['X-ERP5-Tests-Status'] = 'OK'
msg.epilogue = ''
msg.attach(MIMEText(report))
# send it
s = smtplib.SMTP()
s.connect(*self._config.mail_server)
mail = msg.as_string()
for recipient in self._config.mail_to:
try:
s.sendmail(self._config.mail_from, recipient, mail)
except smtplib.SMTPRecipientsRefused:
print "Mail for %s fails" % recipient
s.close()
def run(self):
logger.PACKET_LOGGER.enable(self._config.verbose)
subject, report = self.start()
report = self.build_report(report)
if self._config.mail_to:
self.send_report(subject, report)
print subject
print
print report
def was_successful(self):
return self._successful
def add_options(self, parser):
""" Append options to command line parser """
raise NotImplementedError
def load_options(self, options, args):
""" Check options and return a configuration dict """
raise NotImplementedError
def start(self):
""" Run the test """
raise NotImplementedError
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from cPickle import dumps
from mock import Mock, ReturnValues
from ZODB.POSException import StorageTransactionError, UndoError, ConflictError
from neo.tests import NeoUnitTestBase, buildUrlFromString, ADDRESS_TYPE
from neo.client.app import Application
from neo.client.exception import NEOStorageError, NEOStorageNotFoundError
from neo.client.exception import NEOStorageDoesNotExistError
from neo.lib.protocol import Packet, Packets, Errors, INVALID_TID, \
INVALID_PARTITION
from neo.lib.util import makeChecksum, SOCKET_CONNECTORS_DICT
import time
def _getMasterConnection(self):
if self.master_conn is None:
self.uuid = 'C' * 16
self.num_partitions = 10
self.num_replicas = 1
self.pt = Mock({
'getCellListForOID': (),
'getCellListForTID': (),
})
self.master_conn = Mock()
return self.master_conn
def getPartitionTable(self):
if self.pt is None:
self.master_conn = _getMasterConnection(self)
return self.pt
def _ask(self, conn, packet, handler=None):
self.setHandlerData(None)
conn.ask(packet)
if handler is None:
raise NotImplementedError
else:
handler.dispatch(conn, conn.fakeReceived())
return self.getHandlerData()
def resolving_tryToResolveConflict(oid, conflict_serial, serial, data):
return data
def failing_tryToResolveConflict(oid, conflict_serial, serial, data):
return None
class ClientApplicationTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# apply monkey patches
self._getMasterConnection = Application._getMasterConnection
self._ask = Application._ask
self.getPartitionTable = Application.getPartitionTable
Application._getMasterConnection = _getMasterConnection
Application._ask = _ask
Application.getPartitionTable = getPartitionTable
self._to_stop_list = []
def tearDown(self):
# stop threads
for app in self._to_stop_list:
app.close()
        # restore environment
Application._getMasterConnection = self._getMasterConnection
Application._ask = self._ask
Application.getPartitionTable = self.getPartitionTable
NeoUnitTestBase.tearDown(self)
# some helpers
def _begin(self, app, txn, tid=None):
txn_context = app._txn_container.new(txn)
if tid is None:
tid = self.makeTID()
txn_context['ttid'] = tid
return txn_context
def getApp(self, master_nodes=None, name='test', **kw):
connector = SOCKET_CONNECTORS_DICT[ADDRESS_TYPE]
if master_nodes is None:
master_nodes = '%s:10010' % buildUrlFromString(self.local_ip)
app = Application(master_nodes, name, connector, **kw)
self._to_stop_list.append(app)
app.dispatcher = Mock({ })
return app
def getConnectionPool(self, conn_list):
return Mock({
'iterateForObject': conn_list,
})
def makeOID(self, value=None):
from random import randint
if value is None:
value = randint(0, 255)
return '\x00' * 7 + chr(value)
makeTID = makeOID
def getNodeCellConn(self, index=1, address=('127.0.0.1', 10000), uuid=None):
conn = Mock({
'getAddress': address,
'__repr__': 'connection mock',
'getUUID': uuid,
})
node = Mock({
'__repr__': 'node%s' % index,
'__hash__': index,
'getConnection': conn,
})
cell = Mock({
'getAddress': 'FakeServer',
'getState': 'FakeState',
'getNode': node,
})
return (node, cell, conn)
def makeTransactionObject(self, user='u', description='d', _extension='e'):
class Transaction(object):
pass
txn = Transaction()
txn.user = user
txn.description = description
txn._extension = _extension
return txn
def beginTransaction(self, app, tid):
packet = Packets.AnswerBeginTransaction(tid=tid)
packet.setId(0)
app.master_conn = Mock({ 'fakeReceived': packet, })
txn = self.makeTransactionObject()
app.tpc_begin(txn, tid=tid)
return txn
# common checks
def checkDispatcherRegisterCalled(self, app, conn):
calls = app.dispatcher.mockGetNamedCalls('register')
#self.assertEqual(len(calls), 1)
#self.assertEqual(calls[0].getParam(0), conn)
#self.assertTrue(isinstance(calls[0].getParam(2), Queue))
def test_registerDB(self):
app = self.getApp()
dummy_db = []
app.registerDB(dummy_db, None)
self.assertTrue(app.getDB() is dummy_db)
def test_new_oid(self):
app = self.getApp()
test_msg_id = 50
test_oid_list = ['\x00\x00\x00\x00\x00\x00\x00\x01', '\x00\x00\x00\x00\x00\x00\x00\x02']
response_packet = Packets.AnswerNewOIDs(test_oid_list[:])
response_packet.setId(0)
app.master_conn = Mock({'getNextId': test_msg_id, '_addPacket': None,
'expectMessage': None, 'lock': None,
'unlock': None,
# Test-specific method
'fakeReceived': response_packet})
new_oid = app.new_oid()
self.assertTrue(new_oid in test_oid_list)
self.assertEqual(len(app.new_oid_list), 1)
self.assertTrue(app.new_oid_list[0] in test_oid_list)
self.assertNotEqual(app.new_oid_list[0], new_oid)
def test_load(self):
app = self.getApp()
cache = app._cache
oid = self.makeOID()
tid1 = self.makeTID(1)
tid2 = self.makeTID(2)
tid3 = self.makeTID(3)
tid4 = self.makeTID(4)
# connection to the storage node closed
self.assertFalse(oid in cache._oid_dict)
conn = Mock({'getAddress': ('', 0)})
app.cp = Mock({'iterateForObject': [(Mock(), conn)]})
def fakeReceived(packet):
packet.setId(0)
conn.fakeReceived = iter((packet,)).next
def fakeObject(oid, serial, next_serial, data):
fakeReceived(Packets.AnswerObject(oid, serial, next_serial, 0,
makeChecksum(data), data, None))
return data, serial, next_serial
fakeReceived(Errors.OidNotFound(''))
#Application._waitMessage = self._waitMessage
# XXX: test disabled because of an infinite loop
# self.assertRaises(NEOStorageError, app.load, oid, None, tid2)
# self.checkAskObject(conn)
#Application._waitMessage = _waitMessage
# object not found in NEO -> NEOStorageNotFoundError
self.assertFalse(oid in cache._oid_dict)
fakeReceived(Errors.OidNotFound(''))
self.assertRaises(NEOStorageNotFoundError, app.load, oid)
self.checkAskObject(conn)
r1 = fakeObject(oid, tid1, tid3, 'FOO')
self.assertEqual(r1, app.load(oid, None, tid2))
self.checkAskObject(conn)
for t in tid2, tid3:
self.assertEqual(cache._load(oid, t).tid, tid1)
self.assertEqual(r1, app.load(oid, tid1))
self.assertEqual(r1, app.load(oid, None, tid3))
self.assertRaises(StandardError, app.load, oid, tid2)
self.assertRaises(StopIteration, app.load, oid)
self.checkAskObject(conn)
r2 = fakeObject(oid, tid3, None, 'BAR')
self.assertEqual(r2, app.load(oid, None, tid4))
self.checkAskObject(conn)
self.assertEqual(r2, app.load(oid))
self.assertEqual(r2, app.load(oid, tid3))
cache.invalidate(oid, tid4)
self.assertRaises(StopIteration, app.load, oid)
self.checkAskObject(conn)
self.assertEqual(len(cache._oid_dict[oid]), 2)
def test_tpc_begin(self):
app = self.getApp()
tid = self.makeTID()
txn = Mock()
# first, tid is supplied
self.assertTrue(app._txn_container.get(txn) is None)
packet = Packets.AnswerBeginTransaction(tid=tid)
packet.setId(0)
app.master_conn = Mock({
'getNextId': 1,
'fakeReceived': packet,
})
app.tpc_begin(transaction=txn, tid=tid)
txn_context = app._txn_container.get(txn)
self.assertTrue(txn_context['txn'] is txn)
self.assertEqual(txn_context['ttid'], tid)
# next, the transaction has already begun -> raise
self.assertRaises(StorageTransactionError, app.tpc_begin,
transaction=txn, tid=None)
txn_context = app._txn_container.get(txn)
self.assertTrue(txn_context['txn'] is txn)
self.assertEqual(txn_context['ttid'], tid)
# start a transaction without tid
txn = Mock()
# no connection -> NEOStorageError (wait until connected to primary)
#self.assertRaises(NEOStorageError, app.tpc_begin, transaction=txn, tid=None)
# ask the primary master node for a tid
packet = Packets.AnswerBeginTransaction(tid=tid)
packet.setId(0)
app.master_conn = Mock({
'getNextId': 1,
'fakeReceived': packet,
})
app.tpc_begin(transaction=txn, tid=None)
self.checkAskNewTid(app.master_conn)
self.checkDispatcherRegisterCalled(app, app.master_conn)
# check attributes
txn_context = app._txn_container.get(txn)
self.assertTrue(txn_context['txn'] is txn)
self.assertEqual(txn_context['ttid'], tid)
def test_store1(self):
app = self.getApp()
oid = self.makeOID(11)
tid = self.makeTID()
txn = self.makeTransactionObject()
# invalid transaction -> StorageTransactionError
self.assertRaises(StorageTransactionError, app.store, oid, tid, '',
None, txn)
# check partition_id and an empty cell list -> NEOStorageError
self._begin(app, txn, self.makeTID())
app.pt = Mock({ 'getCellListForOID': (), })
app.num_partitions = 2
self.assertRaises(NEOStorageError, app.store, oid, tid, '', None,
txn)
calls = app.pt.mockGetNamedCalls('getCellListForOID')
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].getParam(0), oid) # oid=11
def test_store2(self):
app = self.getApp()
oid = self.makeOID(11)
tid = self.makeTID()
txn = self.makeTransactionObject()
# build conflicting state
txn_context = self._begin(app, txn, tid)
packet = Packets.AnswerStoreObject(conflicting=1, oid=oid, serial=tid)
packet.setId(0)
storage_address = ('127.0.0.1', 10020)
node, cell, conn = self.getNodeCellConn(address=storage_address)
app.pt = Mock({ 'getCellListForOID': (cell, cell)})
app.cp = self.getConnectionPool([(node, conn)])
class Dispatcher(object):
def pending(self, queue):
return not queue.empty()
app.dispatcher = Dispatcher()
app.nm.createStorage(address=storage_address)
data_dict = txn_context['data_dict']
data_dict[oid] = 'BEFORE'
txn_context['data_list'].append(oid)
app.store(oid, tid, '', None, txn)
txn_context['queue'].put((conn, packet))
self.assertRaises(ConflictError, app.waitStoreResponses, txn_context,
failing_tryToResolveConflict)
self.assertTrue(oid not in data_dict)
self.assertEqual(txn_context['object_stored_counter_dict'][oid], {})
self.checkAskStoreObject(conn)
def test_store3(self):
app = self.getApp()
uuid = self.getNewUUID()
oid = self.makeOID(11)
tid = self.makeTID()
txn = self.makeTransactionObject()
# case with no conflict
txn_context = self._begin(app, txn, tid)
packet = Packets.AnswerStoreObject(conflicting=0, oid=oid, serial=tid)
packet.setId(0)
storage_address = ('127.0.0.1', 10020)
node, cell, conn = self.getNodeCellConn(address=storage_address,
uuid=uuid)
app.cp = self.getConnectionPool([(node, conn)])
app.pt = Mock({ 'getCellListForOID': (cell, cell, ) })
class Dispatcher(object):
def pending(self, queue):
return not queue.empty()
app.dispatcher = Dispatcher()
app.nm.createStorage(address=storage_address)
app.store(oid, tid, 'DATA', None, txn)
self.checkAskStoreObject(conn)
txn_context['queue'].put((conn, packet))
app.waitStoreResponses(txn_context, resolving_tryToResolveConflict)
self.assertEqual(txn_context['object_stored_counter_dict'][oid],
{tid: set([uuid])})
self.assertEqual(txn_context['data_dict'].get(oid, None), 'DATA')
self.assertFalse(oid in txn_context['conflict_serial_dict'])
def test_tpc_vote1(self):
app = self.getApp()
txn = self.makeTransactionObject()
# invalid transaction -> StorageTransactionError
self.assertRaises(StorageTransactionError, app.tpc_vote, txn,
resolving_tryToResolveConflict)
def test_tpc_vote3(self):
app = self.getApp()
tid = self.makeTID()
txn = self.makeTransactionObject()
self._begin(app, txn, tid)
# response -> OK
packet = Packets.AnswerStoreTransaction(tid=tid)
packet.setId(0)
conn = Mock({
'getNextId': 1,
'fakeReceived': packet,
})
node = Mock({
'__hash__': 1,
'__repr__': 'FakeNode',
})
app.cp = self.getConnectionPool([(node, conn)])
app.tpc_vote(txn, resolving_tryToResolveConflict)
self.checkAskStoreTransaction(conn)
self.checkDispatcherRegisterCalled(app, conn)
def test_tpc_abort1(self):
# ignore mismatched transaction
app = self.getApp()
tid = self.makeTID()
txn = self.makeTransactionObject()
old_txn = object()
self._begin(app, old_txn, tid)
app.master_conn = Mock()
conn = Mock()
cell = Mock()
app.pt = Mock({'getCellListForTID': (cell, cell)})
app.cp = Mock({'getConnForCell': ReturnValues(None, cell)})
app.tpc_abort(txn)
# no packet sent
self.checkNoPacketSent(conn)
self.checkNoPacketSent(app.master_conn)
txn_context = app._txn_container.get(old_txn)
self.assertTrue(txn_context['txn'] is old_txn)
self.assertEqual(txn_context['ttid'], tid)
def test_tpc_abort2(self):
# two nodes: one transaction on the first, two objects on the second
# the connection to each node should receive only one abort packet,
# and the transaction must also be aborted on the master node
# for simplicity, just one cell per partition
oid1, oid2 = self.makeOID(2), self.makeOID(4) # on partition 0
app, tid = self.getApp(), self.makeTID(1) # on partition 1
txn = self.makeTransactionObject()
txn_context = self._begin(app, txn, tid)
app.master_conn = Mock({'__hash__': 0})
app.num_partitions = 2
cell1 = Mock({ 'getNode': 'NODE1', '__hash__': 1 })
cell2 = Mock({ 'getNode': 'NODE2', '__hash__': 2 })
conn1, conn2 = Mock({ 'getNextId': 1, }), Mock({ 'getNextId': 2, })
app.cp = Mock({ 'getConnForNode': ReturnValues(conn1, conn2), })
# fake data
txn_context['involved_nodes'].update([cell1, cell2])
app.tpc_abort(txn)
# check that each connection received exactly one call/packet:
self.checkNotifyPacket(conn1, Packets.AbortTransaction)
self.checkNotifyPacket(conn2, Packets.AbortTransaction)
self.checkNotifyPacket(app.master_conn, Packets.AbortTransaction)
self.assertEqual(app._txn_container.get(txn), None)
def test_tpc_abort3(self):
""" check that abort is sent to all nodes involved in the transaction """
app = self.getApp()
# three partitions/storages: one per object/transaction
app.num_partitions = num_partitions = 3
app.num_replicas = 0
tid = self.makeTID(num_partitions) # on partition 0
oid1 = self.makeOID(num_partitions + 1) # on partition 1, conflicting
oid2 = self.makeOID(num_partitions + 2) # on partition 2
# storage nodes
uuid1, uuid2, uuid3 = [self.getNewUUID() for _ in range(3)]
address1 = ('127.0.0.1', 10000)
address2 = ('127.0.0.1', 10001)
address3 = ('127.0.0.1', 10002)
app.nm.createMaster(address=address1, uuid=uuid1)
app.nm.createStorage(address=address2, uuid=uuid2)
app.nm.createStorage(address=address3, uuid=uuid3)
# answer packets
packet1 = Packets.AnswerStoreTransaction(tid=tid)
packet2 = Packets.AnswerStoreObject(conflicting=1, oid=oid1, serial=tid)
packet3 = Packets.AnswerStoreObject(conflicting=0, oid=oid2, serial=tid)
for i, p in enumerate((packet1, packet2, packet3)): p.setId(i)
conn1 = Mock({'__repr__': 'conn1', 'getAddress': address1,
'fakeReceived': packet1, 'getUUID': uuid1})
conn2 = Mock({'__repr__': 'conn2', 'getAddress': address2,
'fakeReceived': packet2, 'getUUID': uuid2})
conn3 = Mock({'__repr__': 'conn3', 'getAddress': address3,
'fakeReceived': packet3, 'getUUID': uuid3})
node1 = Mock({'__repr__': 'node1', '__hash__': 1, 'getConnection': conn1})
node2 = Mock({'__repr__': 'node2', '__hash__': 2, 'getConnection': conn2})
node3 = Mock({'__repr__': 'node3', '__hash__': 3, 'getConnection': conn3})
cell1 = Mock({ 'getNode': node1, '__hash__': 1, 'getConnection': conn1})
cell2 = Mock({ 'getNode': node2, '__hash__': 2, 'getConnection': conn2})
cell3 = Mock({ 'getNode': node3, '__hash__': 3, 'getConnection': conn3})
# fake environment
app.pt = Mock({
'getCellListForTID': [cell1],
'getCellListForOID': ReturnValues([cell2], [cell3]),
})
app.cp = Mock({
'getConnForNode': ReturnValues(conn2, conn3, conn1),
'iterateForObject': [(node2, conn2), (node3, conn3), (node1, conn1)],
})
app.master_conn = Mock({'__hash__': 0})
txn = self.makeTransactionObject()
txn_context = self._begin(app, txn, tid)
class Dispatcher(object):
def pending(self, queue):
return not queue.empty()
def forget_queue(self, queue, flush_queue=True):
pass
app.dispatcher = Dispatcher()
# conflict occurs on storage 2
app.store(oid1, tid, 'DATA', None, txn)
app.store(oid2, tid, 'DATA', None, txn)
queue = txn_context['queue']
queue.put((conn2, packet2))
queue.put((conn3, packet3))
# vote fails as the conflict is not resolved, nothing is sent to storage 3
self.assertRaises(ConflictError, app.tpc_vote, txn, failing_tryToResolveConflict)
# abort must be sent to storage 1 and 2
app.tpc_abort(txn)
self.checkAbortTransaction(conn2)
self.checkAbortTransaction(conn3)
def test_tpc_finish1(self):
# transaction mismatch: raise
app = self.getApp()
txn = self.makeTransactionObject()
app.master_conn = Mock()
self.assertRaises(StorageTransactionError, app.tpc_finish, txn, None)
# no packet sent
self.checkNoPacketSent(app.master_conn)
def test_tpc_finish3(self):
# transaction is finished
app = self.getApp()
tid = self.makeTID()
ttid = self.makeTID()
txn = self.makeTransactionObject()
txn_context = self._begin(app, txn, tid)
self.f_called = False
self.f_called_with_tid = None
def hook(tid):
self.f_called = True
self.f_called_with_tid = tid
packet = Packets.AnswerTransactionFinished(ttid, tid)
packet.setId(0)
app.master_conn = Mock({
'getNextId': 1,
'getAddress': ('127.0.0.1', 10010),
'fakeReceived': packet,
})
txn_context['txn_voted'] = True
app.tpc_finish(txn, None, hook)
self.assertTrue(self.f_called)
self.assertEqual(self.f_called_with_tid, tid)
self.checkAskFinishTransaction(app.master_conn)
#self.checkDispatcherRegisterCalled(app, app.master_conn)
self.assertEqual(app._txn_container.get(txn), None)
def test_undo1(self):
# invalid transaction
app = self.getApp()
tid = self.makeTID()
snapshot_tid = self.getNextTID()
txn = self.makeTransactionObject()
def tryToResolveConflict(oid, conflict_serial, serial, data):
pass
app.master_conn = Mock()
conn = Mock()
self.assertRaises(StorageTransactionError, app.undo, snapshot_tid, tid,
txn, tryToResolveConflict)
# no packet sent
self.checkNoPacketSent(conn)
self.checkNoPacketSent(app.master_conn)
def _getAppForUndoTests(self, oid0, tid0, tid1, tid2):
app = self.getApp()
cell = Mock({
'getAddress': 'FakeServer',
'getState': 'FakeState',
})
app.pt = Mock({
'getCellListForTID': [cell, ],
'getCellListForOID': [cell, ],
'getCellList': [cell, ],
})
transaction_info = Packets.AnswerTransactionInformation(tid1, '', '',
'', False, (oid0, ))
transaction_info.setId(1)
conn = Mock({
'getNextId': 1,
'fakeReceived': transaction_info,
'getAddress': ('127.0.0.1', 10010),
})
node = app.nm.createStorage(address=conn.getAddress())
app.cp = Mock({
'iterateForObject': [(node, conn)],
'getConnForCell': conn,
})
class Dispatcher(object):
def pending(self, queue):
return not queue.empty()
app.dispatcher = Dispatcher()
def load(oid, tid=None, before_tid=None):
self.assertEqual(oid, oid0)
return ({tid0: 'dummy', tid2: 'cdummy'}[tid], None, None)
app.load = load
store_marker = []
def _store(txn_context, oid, serial, data, data_serial=None,
unlock=False):
store_marker.append((oid, serial, data, data_serial))
app._store = _store
return app, conn, store_marker
def test_undoWithResolutionSuccess(self):
"""
Try undoing transaction tid1, which contains object oid.
Object oid's previous revision before tid1 is tid0.
Transaction tid2 modified oid (and contains its data).
Undo is accepted, because conflict resolution succeeds.
"""
oid0 = self.makeOID(1)
tid0 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
snapshot_tid = self.getNextTID()
app, conn, store_marker = self._getAppForUndoTests(oid0, tid0, tid1,
tid2)
undo_serial = Packets.AnswerObjectUndoSerial({
oid0: (tid2, tid0, False)})
undo_serial.setId(2)
app._getThreadQueue().put((conn, undo_serial))
marker = []
def tryToResolveConflict(oid, conflict_serial, serial, data,
committedData=''):
marker.append((oid, conflict_serial, serial, data, committedData))
return 'solved'
# The undo
txn = self.beginTransaction(app, tid=tid3)
app.undo(snapshot_tid, tid1, txn, tryToResolveConflict)
# Checking what happened
moid, mconflict_serial, mserial, mdata, mcommittedData = marker[0]
self.assertEqual(moid, oid0)
self.assertEqual(mconflict_serial, tid2)
self.assertEqual(mserial, tid1)
self.assertEqual(mdata, 'dummy')
self.assertEqual(mcommittedData, 'cdummy')
moid, mserial, mdata, mdata_serial = store_marker[0]
self.assertEqual(moid, oid0)
self.assertEqual(mserial, tid2)
self.assertEqual(mdata, 'solved')
self.assertEqual(mdata_serial, None)
def test_undoWithResolutionFailure(self):
"""
Try undoing transaction tid1, which contains object oid.
Object oid's previous revision before tid1 is tid0.
Transaction tid2 modified oid (and contains its data).
Undo is rejected with an UndoError, because conflict resolution fails.
"""
oid0 = self.makeOID(1)
tid0 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
snapshot_tid = self.getNextTID()
undo_serial = Packets.AnswerObjectUndoSerial({
oid0: (tid2, tid0, False)})
undo_serial.setId(2)
app, conn, store_marker = self._getAppForUndoTests(oid0, tid0, tid1,
tid2)
app._getThreadQueue().put((conn, undo_serial))
marker = []
def tryToResolveConflict(oid, conflict_serial, serial, data,
committedData=''):
marker.append((oid, conflict_serial, serial, data, committedData))
return None
# The undo
txn = self.beginTransaction(app, tid=tid3)
self.assertRaises(UndoError, app.undo, snapshot_tid, tid1, txn,
tryToResolveConflict)
# Checking what happened
moid, mconflict_serial, mserial, mdata, mcommittedData = marker[0]
self.assertEqual(moid, oid0)
self.assertEqual(mconflict_serial, tid2)
self.assertEqual(mserial, tid1)
self.assertEqual(mdata, 'dummy')
self.assertEqual(mcommittedData, 'cdummy')
self.assertEqual(len(store_marker), 0)
# Likewise, but conflict resolver raises a ConflictError.
# Still, exception raised by undo() must be UndoError.
marker = []
def tryToResolveConflict(oid, conflict_serial, serial, data,
committedData=''):
marker.append((oid, conflict_serial, serial, data, committedData))
raise ConflictError
# The undo
app._getThreadQueue().put((conn, undo_serial))
self.assertRaises(UndoError, app.undo, snapshot_tid, tid1, txn,
tryToResolveConflict)
# Checking what happened
moid, mconflict_serial, mserial, mdata, mcommittedData = marker[0]
self.assertEqual(moid, oid0)
self.assertEqual(mconflict_serial, tid2)
self.assertEqual(mserial, tid1)
self.assertEqual(mdata, 'dummy')
self.assertEqual(mcommittedData, 'cdummy')
self.assertEqual(len(store_marker), 0)
def test_undo(self):
"""
Try undoing transaction tid1, which contains object oid.
Object oid's previous revision before tid1 is tid0.
Undo is accepted, because tid1 is object's current revision.
"""
oid0 = self.makeOID(1)
tid0 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
snapshot_tid = self.getNextTID()
transaction_info = Packets.AnswerTransactionInformation(tid1, '', '',
'', False, (oid0, ))
transaction_info.setId(1)
undo_serial = Packets.AnswerObjectUndoSerial({
oid0: (tid1, tid0, True)})
undo_serial.setId(2)
app, conn, store_marker = self._getAppForUndoTests(oid0, tid0, tid1,
tid2)
app._getThreadQueue().put((conn, undo_serial))
def tryToResolveConflict(oid, conflict_serial, serial, data,
committedData=''):
raise Exception, 'Test called conflict resolution, but there ' \
'is no conflict in this test!'
# The undo
txn = self.beginTransaction(app, tid=tid3)
app.undo(snapshot_tid, tid1, txn, tryToResolveConflict)
# Checking what happened
moid, mserial, mdata, mdata_serial = store_marker[0]
self.assertEqual(moid, oid0)
self.assertEqual(mserial, tid1)
self.assertEqual(mdata, None)
self.assertEqual(mdata_serial, tid0)
def test_undoLog(self):
app = self.getApp()
app.num_partitions = 2
uuid1, uuid2 = '\x00' * 15 + '\x01', '\x00' * 15 + '\x02'
# two nodes, two partitions, two transactions, two objects:
node1, node2 = Mock({}), Mock({})
cell1, cell2 = Mock({}), Mock({})
tid1, tid2 = self.makeTID(1), self.makeTID(2)
oid1, oid2 = self.makeOID(1), self.makeOID(2)
# TIDs packets supplied by _ask hook
# TXN info packets
extension = dumps({})
p1 = Packets.AnswerTIDs([tid1])
p2 = Packets.AnswerTIDs([tid2])
p3 = Packets.AnswerTransactionInformation(tid1, '', '',
extension, False, (oid1, ))
p4 = Packets.AnswerTransactionInformation(tid2, '', '',
extension, False, (oid2, ))
p1.setId(0)
p2.setId(1)
p3.setId(2)
p4.setId(3)
conn = Mock({
'getNextId': 1,
'getUUID': ReturnValues(uuid1, uuid2),
'fakeGetApp': app,
'fakeReceived': ReturnValues(p3, p4),
'getAddress': ('127.0.0.1', 10010),
})
storage_1_conn = Mock()
storage_2_conn = Mock()
app.pt = Mock({
'getNodeList': (node1, node2, ),
'getCellListForTID': ReturnValues([cell1], [cell2]),
})
app.cp = Mock({
'getConnForNode': ReturnValues(storage_1_conn, storage_2_conn),
'iterateForObject': [(Mock(), conn)]
})
def waitResponses(queue, handler_data):
app.setHandlerData(handler_data)
for p in (p1, p2):
app._handlePacket(Mock(), p, handler=app.storage_handler)
app.waitResponses = waitResponses
def txn_filter(info):
return info['id'] > '\x00' * 8
first = 0
last = 4
result = app.undoLog(first, last, filter=txn_filter)
pfirst, plast, ppartition = self.checkAskPacket(storage_1_conn,
Packets.AskTIDs, decode=True)
self.assertEqual(pfirst, first)
self.assertEqual(plast, last)
self.assertEqual(ppartition, INVALID_PARTITION)
pfirst, plast, ppartition = self.checkAskPacket(storage_2_conn,
Packets.AskTIDs, decode=True)
self.assertEqual(pfirst, first)
self.assertEqual(plast, last)
self.assertEqual(ppartition, INVALID_PARTITION)
self.assertEqual(result[0]['id'], tid1)
self.assertEqual(result[1]['id'], tid2)
def test_connectToPrimaryNode(self):
# here we have three master nodes:
# the connection to the first will fail
# the second will have changed
# the third will not be ready
# after the third, the partition table will be operational
# (as if it was connected to the primary master node)
from neo.tests import DoNothingConnector
# will raise IndexError at the third iteration
app = self.getApp('127.0.0.1:10010 127.0.0.1:10011')
# TODO: test more connection failure cases
# seventh packet: askNodeInformation succeeded
all_passed = []
def _ask8(_):
all_passed.append(1)
# sixth packet: askPartitionTable succeeded
def _ask7(_):
app.pt = Mock({'operational': True})
# fifth packet: request node identification succeeded
def _ask6(conn):
conn.setUUID('D' * 16)
app.uuid = 'C' * 16
# fourth iteration: connection to primary master succeeded
def _ask5(_):
app.trying_master_node = app.primary_master_node = Mock({
'getAddress': ('192.168.1.1', 10000),
'__str__': 'Fake master node',
})
# third iteration: node not ready
def _ask4(_):
app.trying_master_node = None
# second iteration: master node changed
def _ask3(_):
app.primary_master_node = Mock({
'getAddress': ('192.168.1.1', 10000),
'__str__': 'Fake master node',
})
# first iteration: connection failed
def _ask2(_):
app.trying_master_node = None
# do nothing for the first call
def _ask1(_):
pass
ask_func_list = [_ask1, _ask2, _ask3, _ask4, _ask5, _ask6, _ask7,
_ask8]
def _ask_base(conn, _, handler=None):
ask_func_list.pop(0)(conn)
app._ask = _ask_base
# faked environment
app.connector_handler = DoNothingConnector
app.em = Mock({'getConnectionList': []})
app.pt = Mock({ 'operational': False})
app.master_conn = app._connectToPrimaryNode()
self.assertEqual(len(all_passed), 1)
self.assertTrue(app.master_conn is not None)
self.assertTrue(app.pt.operational())
def test_askPrimary(self):
""" _askPrimary is private but test it anyway """
app = self.getApp()
conn = Mock()
app.master_conn = conn
app.primary_handler = Mock()
self.test_ok = False
def _ask_hook(app, conn, packet, handler=None):
conn.ask(packet)
self.assertTrue(handler is app.primary_handler)
self.test_ok = True
_ask_old = Application._ask
Application._ask = _ask_hook
packet = Packets.AskBeginTransaction()
packet.setId(0)
try:
app._askPrimary(packet)
finally:
Application._ask = _ask_old
# check packet sent, connection locked during process and dispatcher updated
self.checkAskNewTid(conn)
self.checkDispatcherRegisterCalled(app, conn)
# and _ask called
self.assertTrue(self.test_ok)
# check NEOStorageError is raised when the primary connection is lost
app.master_conn = None
# check disabled since we reconnect to the primary master node
#self.assertRaises(NEOStorageError, app._askPrimary, packet)
def test_threadContextIsolation(self):
""" Thread context properties must not be visible accross instances
while remaining in the same thread """
app1 = self.getApp()
app1_local = app1._thread_container.get()
app2 = self.getApp()
app2_local = app2._thread_container.get()
property_id = 'thread_context_test'
value = 'value'
self.assertRaises(KeyError, app1_local.__getitem__, property_id)
self.assertRaises(KeyError, app2_local.__getitem__, property_id)
app1_local[property_id] = value
self.assertEqual(app1_local[property_id], value)
self.assertRaises(KeyError, app2_local.__getitem__, property_id)
def test_pack(self):
app = self.getApp()
marker = []
def askPrimary(packet):
marker.append(packet)
app._askPrimary = askPrimary
# XXX: could not identify a value causing TimeStamp to return ZERO_TID
#self.assertRaises(NEOStorageError, app.pack, )
self.assertEqual(len(marker), 0)
now = time.time()
app.pack(now)
self.assertEqual(len(marker), 1)
self.assertEqual(type(marker[0]), Packets.AskPack)
# XXX: how to validate packet content?
if __name__ == '__main__':
unittest.main()
# neo/tests/client/testConnectionPool.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock, ReturnValues
from neo.tests import NeoUnitTestBase
from neo.client.app import ConnectionPool
from neo.client.exception import NEOStorageError
class ConnectionPoolTests(NeoUnitTestBase):
def test_removeConnection(self):
app = None
pool = ConnectionPool(app)
test_node_uuid = self.getNewUUID()
other_node_uuid = test_node_uuid
while other_node_uuid == test_node_uuid:
other_node_uuid = self.getNewUUID()
test_node = Mock({'getUUID': test_node_uuid})
other_node = Mock({'getUUID': other_node_uuid})
# Test sanity check
self.assertEqual(getattr(pool, 'connection_dict', None), {})
# Call must not raise if node is not known
self.assertEqual(len(pool.connection_dict), 0)
pool.removeConnection(test_node)
# Test that removal with another uuid doesn't affect entry
pool.connection_dict[test_node_uuid] = None
self.assertEqual(len(pool.connection_dict), 1)
pool.removeConnection(other_node)
self.assertEqual(len(pool.connection_dict), 1)
# Test that removeConnection works
pool.removeConnection(test_node)
self.assertEqual(len(pool.connection_dict), 0)
# TODO: test getConnForNode (requires splitting complex functionalities)
def test_CellSortKey(self):
pool = ConnectionPool(None)
node_uuid_1 = self.getNewUUID()
node_uuid_2 = self.getNewUUID()
node_uuid_3 = self.getNewUUID()
# We are connected to node 1
pool.connection_dict[node_uuid_1] = None
# A connection to node 3 failed, will be forgotten at 5
pool._notifyFailure(node_uuid_3, 5)
getCellSortKey = pool._getCellSortKey
# At 0, key values are not ambiguous
self.assertTrue(getCellSortKey(node_uuid_1, 0) < getCellSortKey(
node_uuid_2, 0) < getCellSortKey(node_uuid_3, 0))
# At 10, nodes 2 and 3 have the same key value
self.assertTrue(getCellSortKey(node_uuid_1, 10) < getCellSortKey(
node_uuid_2, 10))
self.assertEqual(getCellSortKey(node_uuid_2, 10), getCellSortKey(
node_uuid_3, 10))
def test_iterateForObject_noStorageAvailable(self):
# no node available
oid = self.getOID(1)
pt = Mock({'getCellListForOID': []})
app = Mock({'getPartitionTable': pt})
pool = ConnectionPool(app)
self.assertRaises(NEOStorageError, pool.iterateForObject(oid).next)
def test_iterateForObject_connectionRefused(self):
# connection refused at the first try
oid = self.getOID(1)
node = Mock({'__repr__': 'node', 'isRunning': True})
cell = Mock({'__repr__': 'cell', 'getNode': node})
conn = Mock({'__repr__': 'conn'})
pt = Mock({'getCellListForOID': [cell]})
app = Mock({'getPartitionTable': pt})
pool = ConnectionPool(app)
pool.getConnForNode = Mock({'__call__': ReturnValues(None, conn)})
self.assertEqual(list(pool.iterateForObject(oid)), [(node, conn)])
def test_iterateForObject_connectionAccepted(self):
# connection accepted
oid = self.getOID(1)
node = Mock({'__repr__': 'node', 'isRunning': True})
cell = Mock({'__repr__': 'cell', 'getNode': node})
conn = Mock({'__repr__': 'conn'})
pt = Mock({'getCellListForOID': [cell]})
app = Mock({'getPartitionTable': pt})
pool = ConnectionPool(app)
pool.getConnForNode = Mock({'__call__': conn})
self.assertEqual(list(pool.iterateForObject(oid)), [(node, conn)])
if __name__ == '__main__':
unittest.main()
# neo/tests/client/testMasterHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.pt import PartitionTable
from neo.lib.protocol import NodeTypes, NodeStates
from neo.client.handlers.master import PrimaryBootstrapHandler
from neo.client.handlers.master import PrimaryNotificationsHandler, \
PrimaryAnswersHandler
from neo.client.exception import NEOStorageError
MARKER = []
class MasterHandlerTests(NeoUnitTestBase):
def getConnection(self):
return self.getFakeConnection()
class MasterBootstrapHandlerTests(MasterHandlerTests):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock()
self.handler = PrimaryBootstrapHandler(self.app)
def checkCalledOnApp(self, method, index=0):
calls = self.app.mockGetNamedCalls(method)
self.assertTrue(len(calls) > index)
return calls[index].params
def test_notReady(self):
conn = self.getConnection()
self.handler.notReady(conn, 'message')
self.assertEqual(self.app.trying_master_node, None)
def test_acceptIdentification1(self):
""" Non-master node """
conn = self.getConnection()
uuid = self.getNewUUID()
self.handler.acceptIdentification(conn, NodeTypes.CLIENT,
uuid, 100, 0, None)
self.checkClosed(conn)
def test_acceptIdentification2(self):
""" No UUID supplied """
conn = self.getConnection()
uuid = self.getNewUUID()
self.checkProtocolErrorRaised(self.handler.acceptIdentification,
conn, NodeTypes.MASTER, uuid, 100, 0, None)
def test_acceptIdentification3(self):
""" identification accepted """
node = Mock()
conn = self.getConnection()
uuid = self.getNewUUID()
your_uuid = self.getNewUUID()
partitions = 100
replicas = 2
self.app.nm = Mock({'getByAddress': node})
self.handler.acceptIdentification(conn, NodeTypes.MASTER, uuid,
partitions, replicas, your_uuid)
self.assertEqual(self.app.uuid, your_uuid)
self.checkUUIDSet(conn, uuid)
self.checkUUIDSet(node, uuid)
self.assertTrue(isinstance(self.app.pt, PartitionTable))
def _getMasterList(self, uuid_list):
port = 1000
master_list = []
for uuid in uuid_list:
master_list.append((('127.0.0.1', port), uuid))
port += 1
return master_list
def test_answerPrimary1(self):
""" Primary not known, master udpated """
node, uuid = Mock(), self.getNewUUID()
conn = self.getConnection()
master_list = [(('127.0.0.1', 1000), uuid)]
self.app.primary_master_node = Mock()
self.app.trying_master_node = Mock()
self.app.nm = Mock({'getByAddress': node})
self.handler.answerPrimary(conn, None, master_list)
self.checkUUIDSet(node, uuid)
        # previously known primary master forgotten
self.assertEqual(self.app.primary_master_node, None)
self.assertEqual(self.app.trying_master_node, None)
self.checkClosed(conn)
def test_answerPrimary2(self):
""" Primary known """
current_node = Mock({'__repr__': '1'})
node, uuid = Mock({'__repr__': '2'}), self.getNewUUID()
conn = self.getConnection()
master_list = [(('127.0.0.1', 1000), uuid)]
self.app.primary_master_node = None
self.app.trying_master_node = current_node
self.app.nm = Mock({
'getByAddress': node,
'getByUUID': node,
})
self.handler.answerPrimary(conn, uuid, [])
self.assertEqual(self.app.trying_master_node, None)
self.assertTrue(self.app.primary_master_node is node)
self.checkClosed(conn)
def test_answerPartitionTable(self):
conn = self.getConnection()
self.app.pt = Mock()
ptid = 0
row_list = ([], [])
self.handler.answerPartitionTable(conn, ptid, row_list)
load_calls = self.app.pt.mockGetNamedCalls('load')
self.assertEqual(len(load_calls), 1)
# load_calls[0].checkArgs(ptid, row_list, self.app.nm)
class MasterNotificationsHandlerTests(MasterHandlerTests):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.db = Mock()
self.app = Mock({'getDB': self.db})
self.app.nm = Mock()
self.app.dispatcher = Mock()
self.handler = PrimaryNotificationsHandler(self.app)
def test_connectionClosed(self):
conn = self.getConnection()
node = Mock()
self.app.master_conn = conn
self.app.primary_master_node = node
self.handler.connectionClosed(conn)
self.assertEqual(self.app.master_conn, None)
self.assertEqual(self.app.primary_master_node, None)
def test_invalidateObjects(self):
conn = self.getConnection()
tid = self.getNextTID()
oid1, oid2, oid3 = self.getOID(1), self.getOID(2), self.getOID(3)
self.app._cache = Mock({
'invalidate': None,
})
self.handler.invalidateObjects(conn, tid, [oid1, oid3])
cache_calls = self.app._cache.mockGetNamedCalls('invalidate')
self.assertEqual(len(cache_calls), 2)
cache_calls[0].checkArgs(oid1, tid)
cache_calls[1].checkArgs(oid3, tid)
invalidation_calls = self.db.mockGetNamedCalls('invalidate')
self.assertEqual(len(invalidation_calls), 1)
invalidation_calls[0].checkArgs(tid, [oid1, oid3])
def test_notifyPartitionChanges(self):
conn = self.getConnection()
self.app.pt = Mock({'filled': True})
ptid = 0
cell_list = (Mock(), Mock())
self.handler.notifyPartitionChanges(conn, ptid, cell_list)
update_calls = self.app.pt.mockGetNamedCalls('update')
self.assertEqual(len(update_calls), 1)
update_calls[0].checkArgs(ptid, cell_list, self.app.nm)
class MasterAnswersHandlerTests(MasterHandlerTests):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock()
self.handler = PrimaryAnswersHandler(self.app)
def test_answerBeginTransaction(self):
tid = self.getNextTID()
conn = self.getConnection()
self.handler.answerBeginTransaction(conn, tid)
calls = self.app.mockGetNamedCalls('setHandlerData')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid)
def test_answerNewOIDs(self):
conn = self.getConnection()
oid1, oid2, oid3 = self.getOID(0), self.getOID(1), self.getOID(2)
self.handler.answerNewOIDs(conn, [oid1, oid2, oid3])
self.assertEqual(self.app.new_oid_list, [oid1, oid2, oid3])
def test_answerTransactionFinished(self):
conn = self.getConnection()
ttid2 = self.getNextTID()
tid2 = self.getNextTID()
self.handler.answerTransactionFinished(conn, ttid2, tid2)
calls = self.app.mockGetNamedCalls('setHandlerData')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid2)
def test_answerPack(self):
self.assertRaises(NEOStorageError, self.handler.answerPack, None, False)
# Check it doesn't raise
self.handler.answerPack(None, True)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/client/testStorageHandler.py 0000664 0000000 0000000 00000023766 11634614701 0030002 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, LockState
from neo.client.handlers.storage import StorageBootstrapHandler, \
StorageAnswersHandler
from neo.client.exception import NEOStorageError, NEOStorageNotFoundError
from neo.client.exception import NEOStorageDoesNotExistError
from ZODB.POSException import ConflictError
from neo.lib.exception import NodeNotReady
from ZODB.TimeStamp import TimeStamp
MARKER = []
class StorageBootstrapHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock()
self.handler = StorageBootstrapHandler(self.app)
def getConnection(self):
return self.getFakeConnection()
def test_notReady(self):
conn = self.getConnection()
self.assertRaises(NodeNotReady, self.handler.notReady, conn, 'message')
def test_acceptIdentification1(self):
""" Not a storage node """
uuid = self.getNewUUID()
conn = self.getConnection()
node = Mock()
self.app.nm = Mock({'getByAddress': node})
self.handler.acceptIdentification(conn, NodeTypes.CLIENT, uuid,
10, 0, None)
self.checkClosed(conn)
def test_acceptIdentification2(self):
uuid = self.getNewUUID()
conn = self.getConnection()
node = Mock()
self.app.nm = Mock({'getByAddress': node})
self.handler.acceptIdentification(conn, NodeTypes.STORAGE, uuid,
10, 0, None)
self.checkUUIDSet(node, uuid)
self.checkUUIDSet(conn, uuid)
class StorageAnswerHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock()
self.handler = StorageAnswersHandler(self.app)
def getConnection(self):
return self.getFakeConnection()
def _checkHandlerData(self, ref):
calls = self.app.mockGetNamedCalls('setHandlerData')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(ref)
def test_answerObject(self):
conn = self.getConnection()
oid = self.getOID(0)
tid1 = self.getNextTID()
tid2 = self.getNextTID(tid1)
the_object = (oid, tid1, tid2, 0, '', 'DATA', None)
self.handler.answerObject(conn, *the_object)
self._checkHandlerData(the_object[:-1])
# Check handler raises on non-None data_serial.
the_object = (oid, tid1, tid2, 0, '', 'DATA', self.getNextTID())
self.assertRaises(NEOStorageError, self.handler.answerObject, conn,
*the_object)
def _getAnswerStoreObjectHandler(self, object_stored_counter_dict,
conflict_serial_dict, resolved_conflict_serial_dict):
app = Mock({
'getHandlerData': {
'object_stored_counter_dict': object_stored_counter_dict,
'conflict_serial_dict': conflict_serial_dict,
'resolved_conflict_serial_dict': resolved_conflict_serial_dict,
}
})
return StorageAnswersHandler(app)
def test_answerStoreObject_1(self):
conn = self.getConnection()
oid = self.getOID(0)
tid = self.getNextTID()
# conflict
object_stored_counter_dict = {oid: {}}
conflict_serial_dict = {}
resolved_conflict_serial_dict = {}
self._getAnswerStoreObjectHandler(object_stored_counter_dict,
conflict_serial_dict, resolved_conflict_serial_dict,
).answerStoreObject(conn, 1, oid, tid)
self.assertEqual(conflict_serial_dict[oid], set([tid, ]))
self.assertEqual(object_stored_counter_dict[oid], {})
self.assertFalse(oid in resolved_conflict_serial_dict)
# object was already accepted by another storage, raise
handler = self._getAnswerStoreObjectHandler({oid: {tid: set([1])}}, {}, {})
self.assertRaises(NEOStorageError, handler.answerStoreObject,
conn, 1, oid, tid)
def test_answerStoreObject_2(self):
conn = self.getConnection()
oid = self.getOID(0)
tid = self.getNextTID()
tid_2 = self.getNextTID()
# resolution-pending conflict
object_stored_counter_dict = {oid: {}}
conflict_serial_dict = {oid: set([tid, ])}
resolved_conflict_serial_dict = {}
self._getAnswerStoreObjectHandler(object_stored_counter_dict,
conflict_serial_dict, resolved_conflict_serial_dict,
).answerStoreObject(conn, 1, oid, tid)
self.assertEqual(conflict_serial_dict[oid], set([tid, ]))
self.assertFalse(oid in resolved_conflict_serial_dict)
self.assertEqual(object_stored_counter_dict[oid], {})
# object was already accepted by another storage, raise
handler = self._getAnswerStoreObjectHandler({oid: {tid: set([1])}},
{oid: set([tid, ])}, {})
self.assertRaises(NEOStorageError, handler.answerStoreObject,
conn, 1, oid, tid)
# detected conflict is different, don't raise
self._getAnswerStoreObjectHandler({oid: {}}, {oid: set([tid, ])}, {},
).answerStoreObject(conn, 1, oid, tid_2)
def test_answerStoreObject_3(self):
conn = self.getConnection()
oid = self.getOID(0)
tid = self.getNextTID()
tid_2 = self.getNextTID()
# already-resolved conflict
        # This case happens when a storage answers a store action for which
        # another storage already answered (with the same conflict) and
        # another storage accepted the resolved object.
object_stored_counter_dict = {oid: {tid_2: 1}}
conflict_serial_dict = {}
resolved_conflict_serial_dict = {oid: set([tid, ])}
self._getAnswerStoreObjectHandler(object_stored_counter_dict,
conflict_serial_dict, resolved_conflict_serial_dict,
).answerStoreObject(conn, 1, oid, tid)
self.assertFalse(oid in conflict_serial_dict)
self.assertEqual(resolved_conflict_serial_dict[oid],
set([tid, ]))
self.assertEqual(object_stored_counter_dict[oid], {tid_2: 1})
# detected conflict is different, don't raise
self._getAnswerStoreObjectHandler({oid: {tid: 1}}, {},
{oid: set([tid, ])}).answerStoreObject(conn, 1, oid, tid_2)
def test_answerStoreObject_4(self):
uuid = self.getNewUUID()
conn = self.getFakeConnection(uuid=uuid)
oid = self.getOID(0)
tid = self.getNextTID()
# no conflict
object_stored_counter_dict = {oid: {}}
conflict_serial_dict = {}
resolved_conflict_serial_dict = {}
self._getAnswerStoreObjectHandler(object_stored_counter_dict,
conflict_serial_dict, resolved_conflict_serial_dict,
).answerStoreObject(conn, 0, oid, tid)
self.assertFalse(oid in conflict_serial_dict)
self.assertFalse(oid in resolved_conflict_serial_dict)
self.assertEqual(object_stored_counter_dict[oid], {tid: set([uuid])})
def test_answerTransactionInformation(self):
conn = self.getConnection()
tid = self.getNextTID()
user = 'USER'
desc = 'DESC'
ext = 'EXT'
packed = False
oid_list = [self.getOID(0), self.getOID(1)]
self.handler.answerTransactionInformation(conn, tid, user, desc, ext,
packed, oid_list)
self._checkHandlerData(({
'time': TimeStamp(tid).timeTime(),
'user_name': user,
'description': desc,
'id': tid,
'oids': oid_list,
'packed': packed,
}, ext))
def test_oidNotFound(self):
conn = self.getConnection()
self.assertRaises(NEOStorageNotFoundError, self.handler.oidNotFound,
conn, 'message')
def test_oidDoesNotExist(self):
conn = self.getConnection()
self.assertRaises(NEOStorageDoesNotExistError,
self.handler.oidDoesNotExist, conn, 'message')
def test_tidNotFound(self):
conn = self.getConnection()
self.assertRaises(NEOStorageNotFoundError, self.handler.tidNotFound,
conn, 'message')
def test_answerTIDs(self):
uuid = self.getNewUUID()
tid1 = self.getNextTID()
tid2 = self.getNextTID(tid1)
tid_list = [tid1, tid2]
conn = self.getFakeConnection(uuid=uuid)
tid_set = set()
app = Mock({
'getHandlerData': tid_set,
})
handler = StorageAnswersHandler(app)
handler.answerTIDs(conn, tid_list)
self.assertEqual(tid_set, set(tid_list))
def test_answerObjectUndoSerial(self):
uuid = self.getNewUUID()
conn = self.getFakeConnection(uuid=uuid)
oid1 = self.getOID(1)
oid2 = self.getOID(2)
tid0 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
undo_dict = {}
app = Mock({
'getHandlerData': undo_dict,
})
handler = StorageAnswersHandler(app)
handler.answerObjectUndoSerial(conn, {oid1: [tid0, tid1]})
self.assertEqual(undo_dict, {oid1: [tid0, tid1]})
handler.answerObjectUndoSerial(conn, {oid2: [tid2, tid3]})
self.assertEqual(undo_dict, {
oid1: [tid0, tid1],
oid2: [tid2, tid3],
})
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/cluster.py 0000664 0000000 0000000 00000022552 11634614701 0024373 0 ustar 00root root 0000000 0000000 #
# Copyright (c) 2011 Nexedi SARL and Contributors. All Rights Reserved.
# Julien Muchembled
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import __builtin__
import errno
import mmap
import os
import psutil
import signal
import socket
import sys
import tempfile
from cPickle import dumps, loads
from functools import wraps
from time import time, sleep
from neo.lib import debug
class SocketLock(object):
"""Basic system-wide lock"""
_socket = None
def __init__(self, address, family=socket.AF_UNIX, type=socket.SOCK_DGRAM):
if family == socket.AF_UNIX:
address = '\0' + address
self.address = address
self.socket_args = family, type
def locked(self):
return self._socket is not None
def acquire(self, blocking=1):
assert self._socket is None
s = socket.socket(*self.socket_args)
try:
while True:
try:
s.bind(self.address)
except socket.error, e:
if e[0] != errno.EADDRINUSE:
raise
if not blocking:
return False
sleep(1)
else:
self._socket = s
return True
finally:
if self._socket is None:
s.close()
def release(self):
s = self._socket
del self._socket
s.close()
class ClusterDict(dict):
"""Simple storage (dict), shared with forked processes"""
_acquired = 0
def __init__(self, *args, **kw):
dict.__init__(self, *args, **kw)
self._r, self._w = os.pipe()
# shm_open(3) would be better but Python doesn't provide it.
# See also http://nikitathespider.com/python/shm/
f = tempfile.TemporaryFile()
try:
f.write(dumps(self.copy(), -1))
f.flush()
self._shared = mmap.mmap(f.fileno(), f.tell())
finally:
f.close()
self.release()
def __del__(self):
try:
os.close(self._r)
os.close(self._w)
except TypeError: # if os.close is None
pass
def acquire(self):
self._acquired += 1
if not self._acquired:
os.read(self._r, 1)
try:
self.clear()
shared = self._shared
shared.resize(shared.size())
self.update(loads(shared[:]))
except:
self.release()
raise
def release(self, commit=False):
if not self._acquired:
if commit:
self.commit()
os.write(self._w, '\0')
self._acquired -= 1
def commit(self):
shared = self._shared
p = dumps(self.copy(), -1)
shared.resize(len(p))
shared[:] = p
cluster_dict = ClusterDict()
class ClusterPdb(object):
"""Multiprocess-aware wrapper around console and winpdb debuggers
__call__ is the method to break.
TODO: monkey-patch normal code not to timeout
if another node is being debugged
"""
def __init__(self):
self._count_dict = {}
def __setattr__(self, name, value):
try:
hook = getattr(self, name)
setattr(value.im_self, value.__name__, wraps(value)(
lambda *args, **kw: hook(value, *args, **kw)))
except AttributeError:
object.__setattr__(self, name, value)
@property
def broken_peer(self):
return self._getLastPdb(os.getpid()) is None
def __call__(self, max_count=None, depth=0, text=None):
depth += 1
if max_count:
frame = sys._getframe(depth)
key = id(frame.f_code), frame.f_lineno
del frame
self._count_dict[key] = count = 1 + self._count_dict.get(key, 0)
if max_count < count:
return
if not text:
try:
import rpdb2
except ImportError:
if text is not None:
raise
else:
if rpdb2.g_debugger is None:
rpdb2_CStateManager = rpdb2.CStateManager
def CStateManager(*args, **kw):
rpdb2.CStateManager = rpdb2_CStateManager
state_manager = rpdb2.CStateManager(*args, **kw)
self._rpdb2_set_state = state_manager.set_state
return state_manager
rpdb2.CStateManager = CStateManager
return debug.winpdb(depth)
try:
debugger = self.__dict__['_debugger']
except KeyError:
assert 'rpdb2' not in sys.modules
self._debugger = debugger = debug.getPdb()
self._bdb_interaction = debugger.interaction
return debugger.set_trace(sys._getframe(depth))
def kill(self, pid, sig):
force = []
sigint_handler = None
try:
while 1:
cluster_dict.acquire()
try:
last_pdb = cluster_dict.get('last_pdb', {})
if force or pid not in last_pdb:
os.kill(pid, sig)
last_pdb.pop(pid, None)
cluster_dict.commit()
break
try:
if psutil.Process(pid).status == psutil.STATUS_ZOMBIE:
break
except psutil.NoSuchProcess:
raise OSError(errno.ESRCH, 'No such process')
finally:
cluster_dict.release()
if sigint_handler is None:
sigint_handler = signal.signal(signal.SIGINT,
lambda *args: force.append(None))
sys.stderr.write('Pid %u is/was debugged.'
' Press ^C to kill it...' % pid)
sleep(1)
finally:
if sigint_handler is not None:
signal.signal(signal.SIGINT, sigint_handler)
if force:
sys.stderr.write('\n')
def _lock_console(self):
while 1:
cluster_dict.acquire()
try:
if 'text_pdb' not in cluster_dict:
cluster_dict['text_pdb'] = pid = os.getpid()
cluster_dict.setdefault('last_pdb', {})[pid] = None
cluster_dict.commit()
break
finally:
cluster_dict.release()
sleep(0.5)
def _unlock_console(self):
cluster_dict.acquire()
try:
pid = cluster_dict.pop('text_pdb')
cluster_dict['last_pdb'][pid] = time()
cluster_dict.commit()
finally:
cluster_dict.release()
def _bdb_interaction(self, hooked, *args, **kw):
self._lock_console()
try:
return hooked(*args, **kw)
finally:
self._unlock_console()
def _rpdb2_set_state(self, hooked, state=None, *args, **kw):
from rpdb2 import STATE_BROKEN, STATE_DETACHED
cluster_dict.acquire()
try:
if state is None:
state = hooked.im_self.get_state()
last_pdb = cluster_dict.setdefault('last_pdb', {})
pid = os.getpid()
if state == STATE_DETACHED:
last_pdb.pop(pid, None)
else:
last_pdb[pid] = state != STATE_BROKEN and time() or None
return hooked(state=state, *args, **kw)
finally:
cluster_dict.release(True)
def _getLastPdb(self, *exclude):
result = 0
for pid, last_pdb in cluster_dict.get('last_pdb', {}).iteritems():
if pid not in exclude:
if last_pdb is None:
return
if result < last_pdb:
result = last_pdb
return result
def wait(self, test, timeout):
end_time = time() + timeout
period = 0.1
while not test():
cluster_dict.acquire()
try:
last_pdb = self._getLastPdb()
if last_pdb is None:
next_sleep = 1
else:
next_sleep = max(last_pdb + timeout, end_time) - time()
if next_sleep > period:
next_sleep = period
period *= 1.5
elif next_sleep < 0:
return False
finally:
cluster_dict.release()
sleep(next_sleep)
return True
__builtin__.pdb = ClusterPdb()
signal.signal(signal.SIGUSR2, debug.decorate(lambda sig, frame: pdb(depth=2)))
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/ 0000775 0000000 0000000 00000000000 11634614701 0024474 5 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/__init__.py 0000664 0000000 0000000 00000055037 11634614701 0026617 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import errno
import os
import sys
import time
import ZODB
import socket
import signal
import random
import weakref
import MySQLdb
import unittest
import tempfile
import traceback
import threading
import psutil
import neo.scripts
from neo.neoctl.neoctl import NeoCTL, NotReadyException
from neo.lib import setupLog
from neo.lib.protocol import ClusterStates, NodeTypes, CellStates, NodeStates
from neo.lib.util import dump
from neo.tests import DB_USER, setupMySQLdb, NeoTestBase, buildUrlFromString, \
ADDRESS_TYPE, IP_VERSION_FORMAT_DICT, getTempDirectory
from neo.tests.cluster import SocketLock
from neo.client.Storage import Storage
NEO_MASTER = 'neomaster'
NEO_STORAGE = 'neostorage'
NEO_ADMIN = 'neoadmin'
DELAY_SAFETY_MARGIN = 10
MAX_START_TIME = 30
class NodeProcessError(Exception):
pass
class AlreadyRunning(Exception):
pass
class AlreadyStopped(Exception):
pass
class NotFound(Exception):
pass
class PortAllocator(object):
lock = SocketLock('neo.PortAllocator')
allocator_set = weakref.WeakKeyDictionary() # BBB: use WeakSet instead
def __init__(self):
self.socket_list = []
def allocate(self, address_type, local_ip):
s = socket.socket(address_type, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if not self.lock.locked():
self.lock.acquire()
self.allocator_set[self] = None
self.socket_list.append(s)
while True:
# Do not let the system choose the port to avoid conflicts
# with other software. IOW, use a range different than:
# - /proc/sys/net/ipv4/ip_local_port_range on Linux
# - what IANA recommends (49152 to 65535)
try:
s.bind((local_ip, random.randint(16384, 32767)))
return s.getsockname()[1]
except socket.error, e:
if e.errno != errno.EADDRINUSE:
raise
def release(self):
for s in self.socket_list:
s.close()
self.socket_list = None
def reset(self):
if self.lock.locked():
self.allocator_set.pop(self, None)
if not self.allocator_set:
self.lock.release()
if self.socket_list:
for s in self.socket_list:
s.close()
self.__init__()
__del__ = reset
class ChildException(KeyboardInterrupt):
"""Wrap any exception into an exception that is not catched by TestCase.run
The exception is not wrapped and re-raised immediately if there is no need
to wrap.
"""
def __init__(self, type, value, tb):
code = unittest.TestCase.run.im_func.func_code
f = tb.tb_frame
while f is not None:
if f.f_code is code:
break
f = f.f_back
else:
raise type, value, tb
KeyboardInterrupt.__init__(self, type, value, tb)
def __call__(self):
"""Re-raise wrapped exception"""
type, value, tb = self.args
if type is KeyboardInterrupt:
sys.exit(1)
raise type, value, tb
class NEOProcess(object):
pid = 0
def __init__(self, command, uuid, arg_dict):
try:
__import__('neo.scripts.' + command)
except ImportError:
raise NotFound, '%s not found' % (command)
self.command = command
self.arg_dict = arg_dict
self.with_uuid = True
self.setUUID(uuid)
def start(self, with_uuid=True):
# Prevent starting when already forked and wait wasn't called.
if self.pid != 0:
raise AlreadyRunning, 'Already running with PID %r' % (self.pid, )
command = self.command
args = []
self.with_uuid = with_uuid
for arg, param in self.arg_dict.iteritems():
if with_uuid is False and arg == '--uuid':
continue
args.append(arg)
if param is not None:
args.append(str(param))
self.pid = os.fork()
if self.pid == 0:
# Child
# prevent child from killing anything
del self.__class__.__del__
try:
# release system-wide lock
for allocator in PortAllocator.allocator_set.copy():
allocator.reset()
sys.argv = [command] + args
getattr(neo.scripts, command).main()
sys.exit()
except:
raise ChildException(*sys.exc_info())
neo.lib.logging.info('pid %u: %s %s',
self.pid, command, ' '.join(map(repr, args)))
def kill(self, sig=signal.SIGTERM):
if self.pid:
neo.lib.logging.info('kill pid %u', self.pid)
try:
pdb.kill(self.pid, sig)
except OSError:
traceback.print_last()
else:
raise AlreadyStopped
def __del__(self):
        # If we get killed, kill subprocesses as well.
try:
self.kill(signal.SIGKILL)
self.wait()
except:
# We can ignore all exceptions at this point, since there is no
            # guaranteed way to handle them (other objects we would depend on
# might already have been deleted).
pass
def wait(self, options=0):
if self.pid == 0:
raise AlreadyStopped
result = os.WEXITSTATUS(os.waitpid(self.pid, options)[1])
self.pid = 0
if result:
raise NodeProcessError('%r %r exited with status %r' % (
self.command, self.arg_dict, result))
return result
def stop(self):
self.kill()
self.wait()
def getPID(self):
return self.pid
def getUUID(self):
assert self.with_uuid, 'UUID disabled on this process'
return self.uuid
def setUUID(self, uuid):
"""
Note: for this change to take effect, the node must be restarted.
"""
self.uuid = uuid
self.arg_dict['--uuid'] = dump(uuid)
def isAlive(self):
try:
return psutil.Process(self.pid).status != psutil.STATUS_ZOMBIE
except psutil.NoSuchProcess:
return False
class NEOCluster(object):
def __init__(self, db_list, master_count=1, partitions=1, replicas=0,
db_user=DB_USER, db_password='',
cleanup_on_delete=False, temp_dir=None, clear_databases=True,
adapter=os.getenv('NEO_TESTS_ADAPTER'),
verbose=True,
address_type=ADDRESS_TYPE,
):
if not adapter:
adapter = 'MySQL'
self.adapter = adapter
self.zodb_storage_list = []
self.cleanup_on_delete = cleanup_on_delete
self.verbose = verbose
self.uuid_set = set()
self.db_user = db_user
self.db_password = db_password
self.db_list = db_list
self.address_type = address_type
self.local_ip = local_ip = IP_VERSION_FORMAT_DICT[self.address_type]
self.setupDB(clear_databases)
self.process_dict = {}
if temp_dir is None:
temp_dir = tempfile.mkdtemp(prefix='neo_')
print 'Using temp directory %r.' % (temp_dir, )
self.temp_dir = temp_dir
self.port_allocator = PortAllocator()
admin_port = self.port_allocator.allocate(address_type, local_ip)
self.cluster_name = 'neo_%s' % (random.randint(0, 100), )
master_node_list = [self.port_allocator.allocate(address_type, local_ip)
for i in xrange(master_count)]
self.master_nodes = '/'.join('%s:%s' % (
buildUrlFromString(self.local_ip), x, )
for x in master_node_list)
# create admin node
self.__newProcess(NEO_ADMIN, {
'--cluster': self.cluster_name,
'--name': 'admin',
'--bind': '%s:%d' % (buildUrlFromString(
self.local_ip), admin_port, ),
'--masters': self.master_nodes,
})
# create master nodes
for index, port in enumerate(master_node_list):
self.__newProcess(NEO_MASTER, {
'--cluster': self.cluster_name,
'--name': 'master_%d' % index,
'--bind': '%s:%d' % (buildUrlFromString(
self.local_ip), port, ),
'--masters': self.master_nodes,
'--replicas': replicas,
'--partitions': partitions,
})
# create storage nodes
for index, db in enumerate(db_list):
self.__newProcess(NEO_STORAGE, {
'--cluster': self.cluster_name,
'--name': 'storage_%d' % index,
'--bind': '%s:%d' % (buildUrlFromString(
self.local_ip),
0 ),
'--masters': self.master_nodes,
'--database': '%s:%s@%s' % (db_user, db_password, db),
'--adapter': adapter,
})
# create neoctl
self.neoctl = NeoCTL((self.local_ip, admin_port))
def __newProcess(self, command, arguments):
uuid = self.__allocateUUID()
arguments['--uuid'] = uuid
if self.verbose:
arguments['--verbose'] = True
logfile = arguments['--name']
arguments['--logfile'] = os.path.join(self.temp_dir, '%s.log' % (logfile, ))
self.process_dict.setdefault(command, []).append(
NEOProcess(command, uuid, arguments))
def __allocateUUID(self):
uuid = ('%032x' % random.getrandbits(128)).decode('hex')
self.uuid_set.add(uuid)
return uuid
def setupDB(self, clear_databases=True):
if self.adapter == 'MySQL':
setupMySQLdb(self.db_list, self.db_user, self.db_password,
clear_databases)
def run(self, except_storages=()):
""" Start cluster processes except some storage nodes """
assert len(self.process_dict)
self.port_allocator.release()
for process_list in self.process_dict.itervalues():
for process in process_list:
if process not in except_storages:
process.start()
        # wait until the admin node is available
def test():
try:
self.neoctl.getClusterState()
except NotReadyException:
return False
return True
if not pdb.wait(test, MAX_START_TIME):
raise AssertionError('Timeout when starting cluster')
self.port_allocator.reset()
def start(self, except_storages=()):
""" Do a complete start of a cluster """
self.run(except_storages=except_storages)
neoctl = self.neoctl
neoctl.startCluster()
target_count = len(self.db_list) - len(except_storages)
storage_node_list = []
def test():
storage_node_list[:] = neoctl.getNodeList(
node_type=NodeTypes.STORAGE)
            # wait for at least the number of started storages; the admin
            # node may know more nodes when the cluster restarts with an
            # existing partition table referencing non-running nodes
return len(storage_node_list) >= target_count
if not pdb.wait(test, MAX_START_TIME):
raise AssertionError('Timeout when starting cluster')
if storage_node_list:
self.expectClusterRunning()
neoctl.enableStorageList([x[2] for x in storage_node_list])
def stop(self, clients=True):
error_list = []
for process_list in self.process_dict.itervalues():
for process in process_list:
try:
process.kill(signal.SIGKILL)
process.wait()
except AlreadyStopped:
pass
except NodeProcessError, e:
error_list += e.args
if clients:
for zodb_storage in self.zodb_storage_list:
zodb_storage.close()
self.zodb_storage_list = []
time.sleep(0.5)
if error_list:
raise NodeProcessError('\n'.join(error_list))
def getNEOCTL(self):
return self.neoctl
def getZODBStorage(self, **kw):
master_nodes = self.master_nodes.replace('/', ' ')
result = Storage(
master_nodes=master_nodes,
name=self.cluster_name,
logfile=os.path.join(self.temp_dir, 'client.log'),
verbose=self.verbose,
**kw)
self.zodb_storage_list.append(result)
return result
def getZODBConnection(self, **kw):
""" Return a tuple with the database and a connection """
db = ZODB.DB(storage=self.getZODBStorage(**kw))
return (db, db.open())
def getSQLConnection(self, db, autocommit=False):
assert db in self.db_list
conn = MySQLdb.Connect(user=self.db_user, passwd=self.db_password,
db=db)
conn.autocommit(autocommit)
return conn
def _getProcessList(self, type):
return self.process_dict.get(type)
def getMasterProcessList(self):
return self._getProcessList(NEO_MASTER)
def getStorageProcessList(self):
return self._getProcessList(NEO_STORAGE)
def getAdminProcessList(self):
return self._getProcessList(NEO_ADMIN)
def _killMaster(self, primary=False, all=False):
killed_uuid_list = []
primary_uuid = self.neoctl.getPrimary()
for master in self.getMasterProcessList():
master_uuid = master.getUUID()
is_primary = master_uuid == primary_uuid
if primary and is_primary or not (primary or is_primary):
killed_uuid_list.append(master_uuid)
master.kill()
master.wait()
if not all:
break
return killed_uuid_list
def killPrimary(self):
return self._killMaster(primary=True)
def killSecondaryMaster(self, all=False):
return self._killMaster(primary=False, all=all)
def killMasters(self):
secondary_list = self.killSecondaryMaster(all=True)
primary_list = self.killPrimary()
return secondary_list + primary_list
def killStorage(self, all=False):
killed_uuid_list = []
for storage in self.getStorageProcessList():
killed_uuid_list.append(storage.getUUID())
storage.kill()
storage.wait()
if not all:
break
return killed_uuid_list
def __getNodeList(self, node_type, state=None):
return [x for x in self.neoctl.getNodeList(node_type)
if state is None or x[3] == state]
def getMasterList(self, state=None):
return self.__getNodeList(NodeTypes.MASTER, state)
def getStorageList(self, state=None):
return self.__getNodeList(NodeTypes.STORAGE, state)
def getClientlist(self, state=None):
return self.__getNodeList(NodeTypes.CLIENT, state)
def __getNodeState(self, node_type, uuid):
node_list = self.__getNodeList(node_type)
for node_type, address, node_uuid, state in node_list:
if node_uuid == uuid:
break
else:
state = None
return state
def getMasterNodeState(self, uuid):
return self.__getNodeState(NodeTypes.MASTER, uuid)
def getPrimary(self):
try:
current_try = self.neoctl.getPrimary()
except NotReadyException:
current_try = None
return current_try
def expectCondition(self, condition, timeout=0, on_fail=None):
end = time.time() + timeout + DELAY_SAFETY_MARGIN
opaque_history = [None]
def test():
reached, opaque = condition(opaque_history[-1])
if not reached:
opaque_history.append(opaque)
return reached
if not pdb.wait(test, timeout + DELAY_SAFETY_MARGIN):
del opaque_history[0]
if on_fail is not None:
on_fail(opaque_history)
raise AssertionError('Timeout while expecting condition. '
'History: %s' % opaque_history)
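The expectCondition helper above delegates the actual waiting to pdb.wait. As a standalone illustration of the same polling pattern — threading an opaque value between attempts so the callback can compare against its previous result — a minimal sketch (`wait_condition` is a hypothetical name, not part of NEO):

```python
import time

def wait_condition(condition, timeout, delay=0.1):
    # poll `condition(last_opaque)` until it reports success or time runs
    # out; `condition` returns (reached, opaque) and receives the previous
    # opaque value, so it can detect regressions between attempts
    end = time.time() + timeout
    opaque = None
    while True:
        reached, opaque = condition(opaque)
        if reached:
            return True
        if time.time() >= end:
            return False
        time.sleep(delay)
```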
def expectAllMasters(self, node_count, state=None, *args, **kw):
def callback(last_try):
current_try = len(self.getMasterList(state=state))
if last_try is not None and current_try < last_try:
raise AssertionError, 'Regression: %s became %s' % \
(last_try, current_try)
return (current_try == node_count, current_try)
self.expectCondition(callback, *args, **kw)
def __expectNodeState(self, node_type, uuid, state, *args, **kw):
if not isinstance(state, (tuple, list)):
state = (state, )
def callback(last_try):
current_try = self.__getNodeState(node_type, uuid)
return current_try in state, current_try
self.expectCondition(callback, *args, **kw)
def expectMasterState(self, uuid, state, *args, **kw):
self.__expectNodeState(NodeTypes.MASTER, uuid, state, *args, **kw)
def expectStorageState(self, uuid, state, *args, **kw):
self.__expectNodeState(NodeTypes.STORAGE, uuid, state, *args, **kw)
def expectRunning(self, process, *args, **kw):
self.expectStorageState(process.getUUID(), NodeStates.RUNNING,
*args, **kw)
def expectPending(self, process, *args, **kw):
self.expectStorageState(process.getUUID(), NodeStates.PENDING,
*args, **kw)
def expectUnknown(self, process, *args, **kw):
self.expectStorageState(process.getUUID(), NodeStates.UNKNOWN,
*args, **kw)
def expectUnavailable(self, process, *args, **kw):
self.expectStorageState(process.getUUID(),
NodeStates.TEMPORARILY_DOWN, *args, **kw)
def expectPrimary(self, uuid=None, *args, **kw):
def callback(last_try):
current_try = self.getPrimary()
if None not in (uuid, current_try) and uuid != current_try:
raise AssertionError, 'An unexpected primary arose: %r, ' \
'expected %r' % (dump(current_try), dump(uuid))
return uuid is None or uuid == current_try, current_try
self.expectCondition(callback, *args, **kw)
def expectOudatedCells(self, number, *args, **kw):
def callback(last_try):
row_list = self.neoctl.getPartitionRowList()[1]
number_of_outdated = 0
for row in row_list:
for cell in row[1]:
if cell[1] == CellStates.OUT_OF_DATE:
number_of_outdated += 1
return number_of_outdated == number, number_of_outdated
self.expectCondition(callback, *args, **kw)
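The loop above walks the partition row list returned by neoctl. A self-contained sketch of the same cell counting, assuming the (offset, [(uuid, state), ...]) row layout used above, with plain strings standing in for CellStates values:

```python
OUT_OF_DATE = 'OUT_OF_DATE'  # stand-in for CellStates.OUT_OF_DATE

def count_outdated_cells(row_list):
    # each row is (partition_offset, [(node_uuid, cell_state), ...]);
    # count every cell whose state is out-of-date
    count = 0
    for _offset, cell_list in row_list:
        for _uuid, state in cell_list:
            if state == OUT_OF_DATE:
                count += 1
    return count
```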
def expectAssignedCells(self, process, number, *args, **kw):
def callback(last_try):
row_list = self.neoctl.getPartitionRowList()[1]
assigned_cells_number = 0
for row in row_list:
for cell in row[1]:
if cell[0] == process.getUUID():
assigned_cells_number += 1
return assigned_cells_number == number, assigned_cells_number
self.expectCondition(callback, *args, **kw)
def expectClusterState(self, state, *args, **kw):
def callback(last_try):
current_try = self.neoctl.getClusterState()
return current_try == state, current_try
self.expectCondition(callback, *args, **kw)
def expectClusterRecovering(self, *args, **kw):
self.expectClusterState(ClusterStates.RECOVERING, *args, **kw)
def expectClusterVerifying(self, *args, **kw):
self.expectClusterState(ClusterStates.VERIFYING, *args, **kw)
def expectClusterRunning(self, *args, **kw):
self.expectClusterState(ClusterStates.RUNNING, *args, **kw)
def expectAlive(self, process, *args, **kw):
def callback(last_try):
current_try = process.isAlive()
return current_try, current_try
self.expectCondition(callback, *args, **kw)
def expectStorageNotKnown(self, process, *args, **kw):
# /!\ Not Known != Unknown
process_uuid = process.getUUID()
def expected_storage_not_known(last_try):
for storage in self.getStorageList():
if storage[2] == process_uuid:
return False, storage
return True, None
self.expectCondition(expected_storage_not_known, *args, **kw)
def __del__(self):
if self.cleanup_on_delete:
os.removedirs(self.temp_dir)
class NEOFunctionalTest(NeoTestBase):
def setupLog(self):
log_file = os.path.join(self.getTempDirectory(), 'test.log')
setupLog('TEST', log_file, True)
def getTempDirectory(self):
# build the full path based on test case and current test method
temp_dir = os.path.join(getTempDirectory(), self.id())
# build the path if needed
if not os.path.exists(temp_dir):
os.makedirs(temp_dir)
return temp_dir
def run(self, *args, **kw):
try:
return super(NEOFunctionalTest, self).run(*args, **kw)
except ChildException, e:
e()
def runWithTimeout(self, timeout, method, args=(), kwargs=None):
if kwargs is None:
kwargs = {}
exc_list = []
def excWrapper(*args, **kw):
try:
method(*args, **kw)
except:
exc_list.append(sys.exc_info())
thread = threading.Thread(None, excWrapper, args=args, kwargs=kwargs)
thread.setDaemon(True)
thread.start()
thread.join(timeout)
self.assertFalse(thread.isAlive(), 'Run timeout')
if exc_list:
assert len(exc_list) == 1, exc_list
exc = exc_list[0]
raise exc[0], exc[1], exc[2]
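runWithTimeout above runs the test body in a daemon thread so a hung cluster cannot block the test runner, then re-raises any exception in the calling thread. A simplified standalone sketch of that pattern (`run_with_timeout` is a hypothetical helper, not NEO API):

```python
import sys
import threading

def run_with_timeout(timeout, method, args=()):
    # run `method` in a daemon thread so a hung call cannot block the
    # caller; re-raise in the caller any exception the thread hit
    exc_list = []
    def wrapper():
        try:
            method(*args)
        except Exception:
            exc_list.append(sys.exc_info()[1])
    thread = threading.Thread(target=wrapper)
    thread.daemon = True
    thread.start()
    thread.join(timeout)
    if thread.is_alive():
        raise RuntimeError('Run timeout')
    if exc_list:
        raise exc_list[0]
```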
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/testClient.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import os
import unittest
import transaction
import ZODB
import socket
from struct import pack, unpack
from neo.neoctl.neoctl import NeoCTL
from ZODB.FileStorage import FileStorage
from ZODB.POSException import ConflictError
from ZODB.tests.StorageTestBase import zodb_pickle
from persistent import Persistent
from neo.lib.util import SOCKET_CONNECTORS_DICT
from neo.tests.functional import NEOCluster, NEOFunctionalTest
from neo.tests import IP_VERSION_FORMAT_DICT
TREE_SIZE = 6
class Tree(Persistent):
""" A simple binary tree """
def __init__(self, depth):
self.depth = depth
if depth <= 0:
return
depth -= 1
self.right = Tree(depth)
self.left = Tree(depth)
# simple persistent object with conflict resolution
class PCounter(Persistent):
_value = 0
def value(self):
return self._value
def inc(self):
self._value += 1
class PCounterWithResolution(PCounter):
def _p_resolveConflict(self, old, saved, new):
new['_value'] = saved['_value'] + new['_value']
return new
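PCounterWithResolution merges concurrent increments by summing saved and new, which works here because every conflicting transaction starts from a base value of 0. A general three-way merge also subtracts the common ancestor; a standalone sketch on plain dicts (instead of the pickled states ZODB passes to `_p_resolveConflict`):

```python
def resolve_counter(old, saved, new):
    # three-way merge: take the committed state (`saved`) and replay the
    # increments the conflicting transaction made on top of `old`
    merged = dict(new)
    merged['_value'] = saved['_value'] + (new['_value'] - old['_value'])
    return merged
```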
class PObject(Persistent):
pass
class ClientTests(NEOFunctionalTest):
def setUp(self):
NEOFunctionalTest.setUp(self)
self.neo = NEOCluster(
['test_neo1', 'test_neo2', 'test_neo3', 'test_neo4'],
replicas=2,
master_count=1,
temp_dir=self.getTempDirectory()
)
def tearDown(self):
if self.neo is not None:
self.neo.stop()
NEOFunctionalTest.tearDown(self)
def __setup(self):
# start cluster
self.neo.setupDB()
self.neo.start()
self.neo.expectClusterRunning()
self.db = ZODB.DB(self.neo.getZODBStorage())
def makeTransaction(self):
# create a transaction manager and get the root object
txn = transaction.TransactionManager()
conn = self.db.open(transaction_manager=txn)
return (txn, conn)
def testConflictResolutionTriggered1(self):
""" Check that ConflictError is raised on write conflict """
# create the initial objects
self.__setup()
t, c = self.makeTransaction()
c.root()['without_resolution'] = PCounter()
t.commit()
# first with no conflict resolution
t1, c1 = self.makeTransaction()
t2, c2 = self.makeTransaction()
o1 = c1.root()['without_resolution']
o2 = c2.root()['without_resolution']
self.assertEqual(o1.value(), 0)
self.assertEqual(o2.value(), 0)
o1.inc()
o2.inc()
o2.inc()
t1.commit()
self.assertEqual(o1.value(), 1)
self.assertEqual(o2.value(), 2)
self.assertRaises(ConflictError, t2.commit)
def testIsolationAtZopeLevel(self):
""" Check transaction isolation within zope connection """
self.__setup()
t, c = self.makeTransaction()
root = c.root()
root['item'] = 0
root['other'] = 'bla'
t.commit()
t1, c1 = self.makeTransaction()
t2, c2 = self.makeTransaction()
# Makes c2 take a snapshot of database state
c2.root()['other']
c1.root()['item'] = 1
t1.commit()
# load object from zope cache
self.assertEqual(c1.root()['item'], 1)
self.assertEqual(c2.root()['item'], 0)
def testIsolationWithoutZopeCache(self):
""" Check isolation with zope cache cleared """
self.__setup()
t, c = self.makeTransaction()
root = c.root()
root['item'] = 0
root['other'] = 'bla'
t.commit()
t1, c1 = self.makeTransaction()
t2, c2 = self.makeTransaction()
# Makes c2 take a snapshot of database state
c2.root()['other']
c1.root()['item'] = 1
t1.commit()
# clear zope cache to force asking NEO again
c1.cacheMinimize()
c2.cacheMinimize()
self.assertEqual(c1.root()['item'], 1)
self.assertEqual(c2.root()['item'], 0)
def __checkTree(self, tree, depth=TREE_SIZE):
self.assertTrue(isinstance(tree, Tree))
self.assertEqual(depth, tree.depth)
depth -= 1
if depth <= 0:
return
self.__checkTree(tree.right, depth)
self.__checkTree(tree.left, depth)
def __getDataFS(self, reset=False):
name = os.path.join(self.getTempDirectory(), 'data.fs')
if reset and os.path.exists(name):
os.remove(name)
storage = FileStorage(file_name=name)
db = ZODB.DB(storage=storage)
return (db, storage)
def __populate(self, db, tree_size=TREE_SIZE, filestorage_bug=True):
conn = db.open()
root = conn.root()
root['trees'] = Tree(tree_size)
if filestorage_bug:
ob = root['trees'].right
left = ob.left
del ob.left
transaction.commit()
ob._p_changed = 1
transaction.commit()
ob.left = left
transaction.commit()
conn.close()
def testImport(self):
# source database
dfs_db, dfs_storage = self.__getDataFS()
self.__populate(dfs_db)
# create a neo storage
self.neo.start()
neo_storage = self.neo.getZODBStorage()
# copy data fs to neo
neo_storage.copyTransactionsFrom(dfs_storage, verbose=0)
# check neo content
(neo_db, neo_conn) = self.neo.getZODBConnection()
self.__checkTree(neo_conn.root()['trees'])
def testExport(self, filestorage_bug=False):
# create a neo storage
self.neo.start()
(neo_db, neo_conn) = self.neo.getZODBConnection()
self.__populate(neo_db, filestorage_bug=filestorage_bug)
# copy neo to data fs
dfs_db, dfs_storage = self.__getDataFS(reset=True)
neo_storage = self.neo.getZODBStorage()
dfs_storage.copyTransactionsFrom(neo_storage)
# check data fs content
conn = dfs_db.open()
root = conn.root()
self.__checkTree(root['trees'])
def testExportFileStorageBug(self):
# currently fails due to a bug in ZODB.FileStorage
self.testExport(True)
def testLockTimeout(self):
""" Hold a lock on an object to block a second transaction """
def test():
self.neo = NEOCluster(['test_neo1'], replicas=0,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.start()
db1, conn1 = self.neo.getZODBConnection()
db2, conn2 = self.neo.getZODBConnection()
st1, st2 = conn1._storage, conn2._storage
t1, t2 = transaction.Transaction(), transaction.Transaction()
t1.user = t2.user = 'user'
t1.description = t2.description = 'desc'
oid = st1.new_oid()
rev = '\0' * 8
data = zodb_pickle(PObject())
st2.tpc_begin(t2)
st1.tpc_begin(t1)
st1.store(oid, rev, data, '', t1)
# this store will be delayed
st2.store(oid, rev, data, '', t2)
# the vote will time out as t1 never releases the lock
self.assertRaises(ConflictError, st2.tpc_vote, t2)
self.runWithTimeout(40, test)
def testIPv6Client(self):
""" Test the connectivity of an IPv6 connection for neo client """
def test():
"""
Implement the IPv6Client test
"""
self.neo = NEOCluster(['test_neo1'], replicas=0,
temp_dir = self.getTempDirectory(),
address_type = socket.AF_INET6
)
neoctl = NeoCTL(('::1', 0))
self.neo.start()
db1, conn1 = self.neo.getZODBConnection()
db2, conn2 = self.neo.getZODBConnection()
self.runWithTimeout(40, test)
def testDelayedLocksCancelled(self):
"""
Hold a lock on an object, try to get another lock on the same
object to delay it. Then cancel the second transaction and check
that the lock is not held when the first transaction ends
"""
def test():
self.neo = NEOCluster(['test_neo1'], replicas=0,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.start()
db1, conn1 = self.neo.getZODBConnection()
db2, conn2 = self.neo.getZODBConnection()
st1, st2 = conn1._storage, conn2._storage
t1, t2 = transaction.Transaction(), transaction.Transaction()
t1.user = t2.user = 'user'
t1.description = t2.description = 'desc'
oid = st1.new_oid()
rev = '\0' * 8
data = zodb_pickle(PObject())
st1.tpc_begin(t1)
st2.tpc_begin(t2)
# t1 owns the lock
st1.store(oid, rev, data, '', t1)
# t2 store is delayed
st2.store(oid, rev, data, '', t2)
# cancel t2, should cancel the store too
st2.tpc_abort(t2)
# finish t1, should release the lock
st1.tpc_vote(t1)
st1.tpc_finish(t1)
db3, conn3 = self.neo.getZODBConnection()
st3 = conn3._storage
t3 = transaction.Transaction()
t3.user = 'user'
t3.description = 'desc'
st3.tpc_begin(t3)
# retrieve the last revision
data, serial = st3.load(oid, '')
# try to store again, should not be delayed
st3.store(oid, serial, data, '', t3)
# the vote should not timeout
st3.tpc_vote(t3)
st3.tpc_finish(t3)
self.runWithTimeout(10, test)
def testGreaterOIDSaved(self):
"""
Store an object with an OID greater than the last generated by the
master. This OID must be intercepted at commit, used for subsequent
OID generation and persistently saved on storage nodes.
"""
self.neo = NEOCluster(['test_neo1'], replicas=0,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.start()
db1, conn1 = self.neo.getZODBConnection()
st1 = conn1._storage
t1 = transaction.Transaction()
rev = '\0' * 8
data = zodb_pickle(PObject())
my_oid = pack('!Q', 100000)
# store an object with this OID
st1.tpc_begin(t1)
st1.store(my_oid, rev, data, '', t1)
st1.tpc_vote(t1)
st1.tpc_finish(t1)
# request an oid, should be greater than mine
oid = st1.new_oid()
self.assertTrue(oid > my_oid)
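The final assertion compares OIDs as raw strings; this works because pack('!Q', ...) yields fixed-width big-endian bytes, so byte-wise comparison matches numeric order. A quick standalone check of that property:

```python
from struct import pack, unpack

def make_oid(n):
    # 8-byte big-endian OID, as produced by pack('!Q', ...)
    return pack('!Q', n)

# fixed-width big-endian encoding means lexicographic comparison of
# OIDs matches numeric comparison of the underlying integers
```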
def test_suite():
return unittest.makeSuite(ClientTests)
if __name__ == "__main__":
unittest.main(defaultTest="test_suite")
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/testCluster.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
import transaction
from persistent import Persistent
from neo.tests.functional import NEOCluster, NEOFunctionalTest
class ClusterTests(NEOFunctionalTest):
def setUp(self):
NEOFunctionalTest.setUp(self)
self.neo = None
def tearDown(self):
if self.neo is not None:
self.neo.stop()
NEOFunctionalTest.tearDown(self)
def testClusterBreaks(self):
self.neo = NEOCluster(['test_neo1'],
master_count=1, temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.setupDB()
self.neo.start()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
self.neo.killStorage()
self.neo.expectClusterVerifying()
def testClusterBreaksWithTwoNodes(self):
self.neo = NEOCluster(['test_neo1', 'test_neo2'],
partitions=2, master_count=1, replicas=0,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.setupDB()
self.neo.start()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
self.neo.killStorage()
self.neo.expectClusterVerifying()
def testClusterDoesntBreakWithTwoNodesOneReplica(self):
self.neo = NEOCluster(['test_neo1', 'test_neo2'],
partitions=2, replicas=1, master_count=1,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.setupDB()
self.neo.start()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
self.neo.killStorage()
self.neo.expectClusterRunning()
def testElectionWithManyMasters(self):
MASTER_COUNT = 20
self.neo = NEOCluster(['test_neo1', 'test_neo2'],
partitions=10, replicas=0, master_count=MASTER_COUNT,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.start()
self.neo.expectClusterRunning()
self.neo.expectAllMasters(MASTER_COUNT)
self.neo.expectOudatedCells(0)
def testLeavingOperationalStateDropClientNodes(self):
"""
Check that client nodes are dropped when the cluster leaves the
operational state.
"""
# start a cluster
self.neo = NEOCluster(['test_neo1'], replicas=0,
temp_dir=self.getTempDirectory())
neoctl = self.neo.getNEOCTL()
self.neo.start()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(0)
# connect a client and check it's known
db, conn = self.neo.getZODBConnection()
self.assertEqual(len(self.neo.getClientlist()), 1)
# drop the storage, the cluster is no more operational...
self.neo.getStorageProcessList()[0].stop()
self.neo.expectClusterVerifying()
# ...and the client gets disconnected
self.assertEqual(len(self.neo.getClientlist()), 0)
# restart storage so that the cluster is operational again
self.neo.getStorageProcessList()[0].start()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(0)
# and reconnect the client, there must be only one known by the admin
conn.root()['plop'] = 1
transaction.commit()
self.assertEqual(len(self.neo.getClientlist()), 1)
def testStorageLostDuringRecovery(self):
"""
Check that the admin node receives notifications of storage
connection and disconnection during recovery
"""
self.neo = NEOCluster(['test_neo%d' % i for i in xrange(2)],
master_count=1, partitions=10, replicas=1,
temp_dir=self.getTempDirectory(), clear_databases=True,
)
storages = self.neo.getStorageProcessList()
self.neo.run(except_storages=storages)
self.neo.expectStorageNotKnown(storages[0])
self.neo.expectStorageNotKnown(storages[1])
storages[0].start()
self.neo.expectRunning(storages[0])
self.neo.expectStorageNotKnown(storages[1])
storages[1].start()
self.neo.expectRunning(storages[0])
self.neo.expectRunning(storages[1])
storages[0].stop()
self.neo.expectUnavailable(storages[0])
self.neo.expectRunning(storages[1])
storages[1].stop()
self.neo.expectUnavailable(storages[0])
self.neo.expectUnavailable(storages[1])
def test_suite():
return unittest.makeSuite(ClusterTests)
if __name__ == "__main__":
unittest.main(defaultTest="test_suite")
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/testMaster.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from neo.tests.functional import NEOCluster, NEOFunctionalTest
from neo.lib.protocol import NodeStates
MASTER_NODE_COUNT = 3
class MasterTests(NEOFunctionalTest):
def setUp(self):
NEOFunctionalTest.setUp(self)
self.neo = NEOCluster([], master_count=MASTER_NODE_COUNT,
temp_dir=self.getTempDirectory())
self.neo.stop()
self.neo.start()
self.storage = self.neo.getZODBStorage()
self.neoctl = self.neo.getNEOCTL()
def tearDown(self):
self.neo.stop()
NEOFunctionalTest.tearDown(self)
def testStoppingSecondaryMaster(self):
# Wait for masters to stabilize
self.neo.expectAllMasters(MASTER_NODE_COUNT)
# Kill
killed_uuid_list = self.neo.killSecondaryMaster()
# Test sanity check.
self.assertEqual(len(killed_uuid_list), 1)
uuid = killed_uuid_list[0]
# Check node state has changed.
self.neo.expectMasterState(uuid, None)
def testStoppingPrimaryWithTwoSecondaries(self):
# Wait for masters to stabilize
self.neo.expectAllMasters(MASTER_NODE_COUNT)
# Kill
killed_uuid_list = self.neo.killPrimary()
# Test sanity check.
self.assertEqual(len(killed_uuid_list), 1)
uuid = killed_uuid_list[0]
# Check the state of the primary we just killed
self.neo.expectMasterState(uuid, (None, NodeStates.UNKNOWN))
self.assertEqual(self.neo.getPrimary(), None)
# Check that a primary master arose.
self.neo.expectPrimary(timeout=10)
# Check that the uuid really changed.
new_uuid = self.neo.getPrimary()
self.assertNotEqual(new_uuid, uuid)
def testStoppingPrimaryWithOneSecondary(self):
self.neo.expectAllMasters(MASTER_NODE_COUNT,
state=NodeStates.RUNNING)
# Kill one secondary master.
killed_uuid_list = self.neo.killSecondaryMaster()
# Test sanity checks.
self.assertEqual(len(killed_uuid_list), 1)
self.neo.expectMasterState(killed_uuid_list[0], None)
self.assertEqual(len(self.neo.getMasterList()), 2)
killed_uuid_list = self.neo.killPrimary()
# Test sanity check.
self.assertEqual(len(killed_uuid_list), 1)
uuid = killed_uuid_list[0]
# Check the state of the primary we just killed
self.neo.expectMasterState(uuid, (None, NodeStates.UNKNOWN))
self.assertEqual(self.neo.getPrimary(), None)
# Check that a primary master arose.
self.neo.expectPrimary(timeout=10)
# Check that the uuid really changed.
new_uuid = self.neo.getPrimary()
self.assertNotEqual(new_uuid, uuid)
def testMasterSequentialStart(self):
self.neo.expectAllMasters(MASTER_NODE_COUNT,
state=NodeStates.RUNNING)
master_list = self.neo.getMasterProcessList()
# Stop the cluster (so we can start processes manually)
self.neo.killMasters()
# Start the first master.
first_master = master_list[0]
first_master.start()
first_master_uuid = first_master.getUUID()
# Check that the master node we started elected itself.
self.neo.expectPrimary(first_master_uuid, timeout=30)
# Check that no other node is known as running.
self.assertEqual(len(self.neo.getMasterList(
state=NodeStates.RUNNING)), 1)
# Start a second master.
second_master = master_list[1]
# Check that the second master is known as being down.
self.assertEqual(self.neo.getMasterNodeState(second_master.getUUID()),
None)
second_master.start()
# Check that the second master is running under its known UUID.
self.neo.expectMasterState(second_master.getUUID(),
NodeStates.RUNNING)
# Check that the primary master didn't change.
self.assertEqual(self.neo.getPrimary(), first_master_uuid)
# Start a third master.
third_master = master_list[2]
# Check that the third master is known as being down.
self.assertEqual(self.neo.getMasterNodeState(third_master.getUUID()),
None)
third_master.start()
# Check that the third master is running under its known UUID.
self.neo.expectMasterState(third_master.getUUID(),
NodeStates.RUNNING)
# Check that the primary master didn't change.
self.assertEqual(self.neo.getPrimary(), first_master_uuid)
def test_suite():
return unittest.makeSuite(MasterTests)
if __name__ == "__main__":
unittest.main(defaultTest="test_suite")
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/functional/testStorage.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import time
import unittest
import transaction
from persistent import Persistent
from neo.tests.functional import NEOCluster, NEOFunctionalTest
from neo.lib.protocol import ClusterStates, NodeStates
from ZODB.tests.StorageTestBase import zodb_pickle
from MySQLdb import ProgrammingError
from MySQLdb.constants.ER import NO_SUCH_TABLE
class PObject(Persistent):
def __init__(self, value):
self.value = value
OBJECT_NUMBER = 100
class StorageTests(NEOFunctionalTest):
def setUp(self):
NEOFunctionalTest.setUp(self)
self.neo = None
def tearDown(self):
if self.neo is not None:
self.neo.stop()
NEOFunctionalTest.tearDown(self)
def queryCount(self, db, query):
db.query(query)
result = db.store_result().fetch_row()[0][0]
return result
def __setup(self, storage_number=2, pending_number=0, replicas=1,
partitions=10, master_count=2):
# create a neo cluster
self.neo = NEOCluster(['test_neo%d' % i for i in xrange(storage_number)],
master_count=master_count,
partitions=partitions, replicas=replicas,
temp_dir=self.getTempDirectory(),
clear_databases=True,
adapter='MySQL',
)
# too many pending storage nodes requested
assert pending_number <= storage_number
storage_processes = self.neo.getStorageProcessList()
start_storage_number = len(storage_processes) - pending_number
# return a tuple of storage processes lists
started_processes = storage_processes[:start_storage_number]
stopped_processes = storage_processes[start_storage_number:]
self.neo.start(except_storages=stopped_processes)
return (started_processes, stopped_processes)
def __populate(self):
db, conn = self.neo.getZODBConnection()
root = conn.root()
for i in xrange(OBJECT_NUMBER):
root[i] = PObject(i)
transaction.commit()
conn.close()
db.close()
def __checkDatabase(self, db_name):
db = self.neo.getSQLConnection(db_name, autocommit=True)
# wait for the SQL transaction to be committed
def callback(last_try):
object_number = self.queryCount(db, 'select count(*) from obj')
return object_number == OBJECT_NUMBER + 2, object_number
self.neo.expectCondition(callback)
# no more temporary objects
t_objects = self.queryCount(db, 'select count(*) from tobj')
self.assertEqual(t_objects, 0)
# One revision per object and two for the root, before and after
revisions = self.queryCount(db, 'select count(*) from obj')
self.assertEqual(revisions, OBJECT_NUMBER + 2)
# One object more for the root
query = 'select count(*) from (select * from obj group by oid) as t'
objects = self.queryCount(db, query)
self.assertEqual(objects, OBJECT_NUMBER + 1)
# Check object content
db, conn = self.neo.getZODBConnection()
root = conn.root()
for i in xrange(OBJECT_NUMBER):
obj = root[i]
self.assertEqual(obj.value, i)
transaction.abort()
conn.close()
db.close()
def __checkReplicationDone(self):
# wait for replication to finish
def expect_all_storages(last_try):
storage_number = len(self.neo.getStorageList())
return storage_number == len(self.neo.db_list), storage_number
self.neo.expectCondition(expect_all_storages, timeout=10)
self.neo.expectOudatedCells(number=0, timeout=10)
# check databases
for db_name in self.neo.db_list:
self.__checkDatabase(db_name)
# check storages state
storage_list = self.neo.getStorageList(NodeStates.RUNNING)
self.assertEqual(len(storage_list), 2)
def __checkReplicateCount(self, db_name, target_count, timeout=0, delay=1):
db = self.neo.getSQLConnection(db_name, autocommit=True)
def callback(last_try):
try:
replicate_count = self.queryCount(db,
'select count(distinct uuid) from pt')
except ProgrammingError, exc:
if exc[0] != NO_SUCH_TABLE:
raise
replicate_count = 0
if last_try is not None and last_try < replicate_count:
raise AssertionError, 'Regression: %s became %s' % \
(last_try, replicate_count)
return replicate_count == target_count, replicate_count
self.neo.expectCondition(callback, timeout, delay)
def testNewNodesInPendingState(self):
""" Check that new storage nodes are set as pending, the cluster remains
running """
# start with the first storage
processes = self.__setup(storage_number=3, replicas=1, pending_number=2)
started, stopped = processes
self.neo.expectRunning(started[0])
self.neo.expectClusterRunning()
# start the second then the third
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.expectClusterRunning()
stopped[1].start()
self.neo.expectPending(stopped[1])
self.neo.expectClusterRunning()
def testReplicationWithNewStorage(self):
""" create a cluster with one storage, populate it, add a new storage
then check the database content to ensure the replication process is
well done """
# populate one storage
processes = self.__setup(storage_number=2, replicas=1, pending_number=1,
partitions=10)
started, stopped = processes
self.neo.expectOudatedCells(number=0)
self.__populate()
self.neo.expectClusterRunning()
self.neo.expectAssignedCells(started[0], number=10)
# start the second
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.expectClusterRunning()
# add it to the partition table
self.neo.neoctl.enableStorageList([stopped[0].getUUID()])
self.neo.expectRunning(stopped[0])
self.neo.expectAssignedCells(stopped[0], number=10)
self.neo.expectClusterRunning()
# wait for replication to finish then check
self.__checkReplicationDone()
self.neo.expectClusterRunning()
def testOudatedCellsOnDownStorage(self):
""" Check that the storage cells are set as oudated when the node is
down, the cluster remains up since there is a replica """
# populate the two storages
(started, _) = self.__setup(storage_number=2, replicas=1)
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectOudatedCells(number=0)
self.__populate()
self.__checkReplicationDone()
self.neo.expectClusterRunning()
# stop one storage and check outdated cells
started[0].stop()
self.neo.expectOudatedCells(number=10)
self.neo.expectClusterRunning()
def testVerificationTriggered(self):
""" Check that the verification stage is executed when a storage node
required to be operationnal is lost, and the cluster come back in
running state when the storage is up again """
# start neo with one storage
(started, _) = self.__setup(replicas=0, storage_number=1)
self.neo.expectRunning(started[0])
self.neo.expectOudatedCells(number=0)
# add a client node
db, conn = self.neo.getZODBConnection()
root = conn.root()['test'] = 'ok'
transaction.commit()
self.assertEqual(len(self.neo.getClientlist()), 1)
# stop it, the cluster must switch to verification
started[0].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectClusterVerifying()
# client must have been disconnected
self.assertEqual(len(self.neo.getClientlist()), 0)
conn.close()
db.close()
# restart it, the cluster must come back to running state
started[0].start()
self.neo.expectRunning(started[0])
self.neo.expectClusterRunning()
def testSequentialStorageKill(self):
""" Check that the cluster remains running until the last storage node
died when all are replicas """
# start neo with three storages / two replicas
(started, _) = self.__setup(replicas=2, storage_number=3, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectRunning(started[2])
self.neo.expectOudatedCells(number=0)
self.neo.expectClusterRunning()
# stop one storage, the cluster must remain running
started[0].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectRunning(started[1])
self.neo.expectRunning(started[2])
self.neo.expectOudatedCells(number=10)
self.neo.expectClusterRunning()
# stop a second storage, cluster is still running
started[1].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectUnavailable(started[1])
self.neo.expectRunning(started[2])
self.neo.expectOudatedCells(number=20)
self.neo.expectClusterRunning()
# stop the last one, the cluster dies
started[2].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectUnavailable(started[1])
self.neo.expectUnavailable(started[2])
self.neo.expectOudatedCells(number=20)
self.neo.expectClusterVerifying()
def testConflictingStorageRejected(self):
""" Check that a storage coming after the recovery process with the same
UUID as another already running is refused """
# start with one storage
(started, stopped) = self.__setup(storage_number=2, pending_number=1)
self.neo.expectRunning(started[0])
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
# start the second with the same UUID as the first
stopped[0].setUUID(started[0].getUUID())
stopped[0].start()
self.neo.expectOudatedCells(number=0)
# check the first and the cluster are still running
self.neo.expectRunning(started[0])
self.neo.expectClusterRunning()
# XXX: should wait for the storage rejection
# check that no node was added
storage_number = len(self.neo.getStorageList())
self.assertEqual(storage_number, 1)
def testPartitionTableReorganizedWithNewStorage(self):
""" Check if the partition change when adding a new storage to a cluster
with one storage and no replicas """
# start with one storage and no replicas
(started, stopped) = self.__setup(storage_number=2, pending_number=1,
partitions=10, replicas=0)
self.neo.expectRunning(started[0])
self.neo.expectClusterRunning()
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectOudatedCells(number=0)
# start the second and add it to the partition table
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.neoctl.enableStorageList([stopped[0].getUUID()])
self.neo.expectRunning(stopped[0])
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
# the partition table must change, each node should be assigned to
# five partitions
self.neo.expectAssignedCells(started[0], 5)
self.neo.expectAssignedCells(stopped[0], 5)
def testPartitionTableReorganizedAfterDrop(self):
""" Check that the partition change when dropping a replicas from a
cluster with two storages """
# start with two storages / one replica
(started, stopped) = self.__setup(storage_number=2, replicas=1,
partitions=10, pending_number=0)
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectOudatedCells(number=0)
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectAssignedCells(started[1], 10)
# kill one storage, it should be set as unavailable
started[0].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectRunning(started[1])
# and the partition table must not change
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectAssignedCells(started[1], 10)
# ask neoctl to drop it
self.neo.neoctl.dropNode(started[0].getUUID())
self.neo.expectStorageNotKnown(started[0])
self.neo.expectAssignedCells(started[0], 0)
self.neo.expectAssignedCells(started[1], 10)
def testReplicationThenRunningWithReplicas(self):
""" Add a replicas to a cluster, wait for the replication to finish,
shutdown the first storage then check the new storage content """
# start with one storage
(started, stopped) = self.__setup(storage_number=2, replicas=1,
pending_number=1, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectStorageNotKnown(stopped[0])
self.neo.expectOudatedCells(number=0)
# populate the cluster with some data
self.__populate()
self.neo.expectClusterRunning()
self.neo.expectOudatedCells(number=0)
self.neo.expectAssignedCells(started[0], 10)
self.__checkDatabase(self.neo.db_list[0])
# add a second storage
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.neoctl.enableStorageList([stopped[0].getUUID()])
self.neo.expectRunning(stopped[0])
self.neo.expectClusterRunning()
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectAssignedCells(stopped[0], 10)
# wait for replication to finish
self.neo.expectOudatedCells(number=0)
self.neo.expectClusterRunning()
self.__checkReplicationDone()
# kill the first storage
started[0].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectOudatedCells(number=10)
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectAssignedCells(stopped[0], 10)
self.neo.expectClusterRunning()
self.__checkDatabase(self.neo.db_list[0])
# drop it from partition table
self.neo.neoctl.dropNode(started[0].getUUID())
self.neo.expectStorageNotKnown(started[0])
self.neo.expectRunning(stopped[0])
self.neo.expectAssignedCells(started[0], 0)
self.neo.expectAssignedCells(stopped[0], 10)
self.__checkDatabase(self.neo.db_list[1])
def testStartWithManyPartitions(self):
""" Just tests that cluster can start with more than 1000 partitions.
1000, because currently there is an arbitrary packet split at
every 1000 partition when sending a partition table. """
self.__setup(storage_number=2, partitions=5000, master_count=1)
self.neo.expectClusterState(ClusterStates.RUNNING)
def testDropNodeThenRestartCluster(self):
""" Start a cluster with more than one storage, down one, shutdown the
cluster then restart it. The partition table recovered must not include
the dropped node """
# start with two storage / one replica
(started, stopped) = self.__setup(storage_number=2, replicas=1,
master_count=1, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectOudatedCells(number=0)
# drop one
self.neo.neoctl.dropNode(started[0].getUUID())
self.neo.expectStorageNotKnown(started[0])
self.neo.expectRunning(started[1])
# wait for running storage to store new partition table
self.__checkReplicateCount(self.neo.db_list[1], 1)
# restart all nodes except the dropped one, it must not be known
self.neo.stop()
self.neo.start(except_storages=[started[0]])
self.neo.expectStorageNotKnown(started[0])
self.neo.expectRunning(started[1])
# then restart it, it must be in pending state
started[0].start()
self.neo.expectPending(started[0])
self.neo.expectRunning(started[1])
def testAcceptFirstEmptyStorageAfterStartupAllowed(self):
""" Create a new cluster with no storage node, allow it to starts
then run the first empty storage, it must be accepted """
(started, stopped) = self.__setup(storage_number=1, replicas=0,
pending_number=1, partitions=10)
# start without storage
self.neo.expectClusterRecovering()
self.neo.expectStorageNotKnown(stopped[0])
# start the empty storage, it must be accepted
stopped[0].start(with_uuid=False)
self.neo.expectClusterRunning()
self.assertEqual(len(self.neo.getStorageList()), 1)
self.neo.expectOudatedCells(number=0)
def testDropNodeWithOtherPending(self):
""" Ensure we can drop a node """
# start with one storage
(started, stopped) = self.__setup(storage_number=2, replicas=1,
pending_number=1, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectStorageNotKnown(stopped[0])
self.neo.expectOudatedCells(number=0)
self.neo.expectClusterRunning()
# set the second storage in pending state and drop the first
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.neoctl.dropNode(started[0].getUUID())
self.neo.expectStorageNotKnown(started[0])
self.neo.expectPending(stopped[0])
def testRecoveryWithMultiplePT(self):
# start a cluster with 2 storages and a replica
(started, stopped) = self.__setup(storage_number=2, replicas=1,
pending_number=0, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectOudatedCells(number=0)
self.neo.expectClusterRunning()
# drop the first then the second storage
started[0].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectRunning(started[1])
self.neo.expectOudatedCells(number=10)
started[1].stop()
self.neo.expectUnavailable(started[0])
self.neo.expectUnavailable(started[1])
self.neo.expectOudatedCells(number=10)
self.neo.expectClusterVerifying()
# XXX: need to sync with storages first
self.neo.stop()
# restart the cluster without the second storage
self.neo.run(except_storages=[started[1]])
self.neo.expectRunning(started[0])
self.neo.expectUnknown(started[1])
self.neo.expectClusterRecovering()
self.neo.expectOudatedCells(number=0)
started[1].start()
self.neo.expectRunning(started[0])
self.neo.expectRunning(started[1])
self.neo.expectClusterRecovering()
self.neo.expectOudatedCells(number=10)
def testReplicationBlockedByUnfinished(self):
# start a cluster with 1 of 2 storages and a replica
(started, stopped) = self.__setup(storage_number=2, replicas=1,
pending_number=1, partitions=10)
self.neo.expectRunning(started[0])
self.neo.expectStorageNotKnown(stopped[0])
self.neo.expectOudatedCells(number=0)
self.neo.expectClusterRunning()
self.__populate()
self.neo.expectOudatedCells(number=0)
# start a transaction that will block the end of the replication
db, conn = self.neo.getZODBConnection()
st = conn._storage
t = transaction.Transaction()
t.user = 'user'
t.description = 'desc'
oid = st.new_oid()
rev = '\0' * 8
data = zodb_pickle(PObject(42))
st.tpc_begin(t)
st.store(oid, rev, data, '', t)
# start the outdated storage
stopped[0].start()
self.neo.expectPending(stopped[0])
self.neo.neoctl.enableStorageList([stopped[0].getUUID()])
self.neo.expectRunning(stopped[0])
self.neo.expectClusterRunning()
self.neo.expectAssignedCells(started[0], 10)
self.neo.expectAssignedCells(stopped[0], 10)
# wait a bit, replication must not happen. This hack is required
# because we cannot gather information directly from the storages
time.sleep(10)
self.neo.expectOudatedCells(number=10)
# finish the transaction, the replication must happen and finish
st.tpc_vote(t)
st.tpc_finish(t)
self.neo.expectOudatedCells(number=0, timeout=10)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/__init__.py
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testClientHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from struct import pack, unpack
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, NodeStates, Packets
from neo.master.handlers.client import ClientServiceHandler
from neo.master.app import Application
class MasterClientHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration(master_number=1, replicas=1)
self.app = Application(config)
self.app.pt.clear()
self.app.pt.setID(1)
self.app.em = Mock()
self.app.loid = '\0' * 8
self.app.tm.setLastTID('\0' * 8)
self.service = ClientServiceHandler(self.app)
# define some variable to simulate client and storage node
self.client_port = 11022
self.storage_port = 10021
self.master_port = 10010
self.master_address = ('127.0.0.1', self.master_port)
self.client_address = ('127.0.0.1', self.client_port)
self.storage_address = ('127.0.0.1', self.storage_port)
# register the storage
kw = {'uuid':self.getNewUUID(), 'address': self.master_address}
self.app.nm.createStorage(**kw)
def getLastUUID(self):
return self.uuid
def identifyToMasterNode(self, node_type=NodeTypes.STORAGE, ip="127.0.0.1",
port=10021):
"""Do first step of identification to MN """
# register the master itself
uuid = self.getNewUUID()
self.app.nm.createFromNodeType(
node_type,
address=(ip, port),
uuid=uuid,
state=NodeStates.RUNNING,
)
return uuid
# Tests
def test_07_askBeginTransaction(self):
tid1 = self.getNextTID()
tid2 = self.getNextTID()
service = self.service
tm_org = self.app.tm
self.app.tm = tm = Mock({
'begin': '\x00\x00\x00\x00\x00\x00\x00\x01',
})
# client call it
client_uuid = self.identifyToMasterNode(node_type=NodeTypes.CLIENT, port=self.client_port)
client_node = self.app.nm.getByUUID(client_uuid)
conn = self.getFakeConnection(client_uuid, self.client_address)
service.askBeginTransaction(conn, None)
calls = tm.mockGetNamedCalls('begin')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(client_node, None)
self.checkAnswerBeginTransaction(conn)
# Client asks for a TID
conn = self.getFakeConnection(client_uuid, self.client_address)
self.app.tm = tm_org
service.askBeginTransaction(conn, tid1)
calls = tm.mockGetNamedCalls('begin')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(client_node, None)
args = self.checkAnswerBeginTransaction(conn, decode=True)
self.assertEqual(args, (tid1, ))
def test_08_askNewOIDs(self):
service = self.service
oid1, oid2 = self.getOID(1), self.getOID(2)
self.app.tm.setLastOID(oid1)
# client call it
client_uuid = self.identifyToMasterNode(node_type=NodeTypes.CLIENT, port=self.client_port)
conn = self.getFakeConnection(client_uuid, self.client_address)
for node in self.app.nm.getStorageList():
conn = self.getFakeConnection(node.getUUID(), node.getAddress())
node.setConnection(conn)
service.askNewOIDs(conn, 1)
self.assertTrue(self.app.tm.getLastOID() > oid1)
for node in self.app.nm.getStorageList():
conn = node.getConnection()
self.assertEqual(self.checkNotifyLastOID(conn, decode=True), (oid2,))
def test_09_askFinishTransaction(self):
service = self.service
uuid = self.identifyToMasterNode()
# do the right job
client_uuid = self.identifyToMasterNode(node_type=NodeTypes.CLIENT, port=self.client_port)
storage_uuid = self.identifyToMasterNode()
storage_conn = self.getFakeConnection(storage_uuid, self.storage_address)
storage2_uuid = self.identifyToMasterNode()
storage2_conn = self.getFakeConnection(storage2_uuid,
(self.storage_address[0], self.storage_address[1] + 1))
self.app.setStorageReady(storage2_uuid)
self.assertNotEqual(uuid, client_uuid)
conn = self.getFakeConnection(client_uuid, self.client_address)
self.app.pt = Mock({
'getPartition': 0,
'getCellList': [
Mock({'getUUID': storage_uuid}),
Mock({'getUUID': storage2_uuid}),
],
'getPartitions': 2,
})
ttid = self.getNextTID()
service.askBeginTransaction(conn, ttid)
oid_list = []
conn = self.getFakeConnection(client_uuid, self.client_address)
self.app.nm.getByUUID(storage_uuid).setConnection(storage_conn)
# No packet sent if storage node is not ready
self.assertFalse(self.app.isStorageReady(storage_uuid))
service.askFinishTransaction(conn, ttid, oid_list)
self.checkNoPacketSent(storage_conn)
self.app.tm.abortFor(self.app.nm.getByUUID(client_uuid))
# ...but AskLockInformation is sent if it is ready
self.app.setStorageReady(storage_uuid)
self.assertTrue(self.app.isStorageReady(storage_uuid))
service.askFinishTransaction(conn, ttid, oid_list)
self.checkAskLockInformation(storage_conn)
self.assertEqual(len(self.app.tm.registerForNotification(storage_uuid)), 1)
txn = self.app.tm[ttid]
pending_ttid = list(self.app.tm.registerForNotification(storage_uuid))[0]
self.assertEqual(ttid, pending_ttid)
self.assertEqual(len(txn.getOIDList()), 0)
self.assertEqual(len(txn.getUUIDList()), 1)
def test_askNodeInformations(self):
# check that only information about master and storage nodes is
# sent to a client
self.app.nm.createClient()
conn = self.getFakeConnection()
self.service.askNodeInformation(conn)
calls = conn.mockGetNamedCalls('notify')
self.assertEqual(len(calls), 1)
packet = calls[0].getParam(0)
(node_list, ) = packet.decode()
self.assertEqual(len(node_list), 2)
def test_connectionClosed(self):
# give a client uuid which has unfinished transactions
client_uuid = self.identifyToMasterNode(node_type=NodeTypes.CLIENT,
port = self.client_port)
conn = self.getFakeConnection(client_uuid, self.client_address)
self.app.listening_conn = object() # mark as running
lptid = self.app.pt.getID()
self.assertEqual(self.app.nm.getByUUID(client_uuid).getState(),
NodeStates.RUNNING)
self.service.connectionClosed(conn)
# node must have been removed, and no transaction must remain
self.assertEqual(self.app.nm.getByUUID(client_uuid), None)
self.assertEqual(lptid, self.app.pt.getID())
def test_askPack(self):
self.assertEqual(self.app.packing, None)
self.app.nm.createClient()
tid = self.getNextTID()
peer_id = 42
conn = self.getFakeConnection(peer_id=peer_id)
storage_uuid = self.identifyToMasterNode()
storage_conn = self.getFakeConnection(storage_uuid,
self.storage_address)
self.app.nm.getByUUID(storage_uuid).setConnection(storage_conn)
self.service.askPack(conn, tid)
self.checkNoPacketSent(conn)
ptid = self.checkAskPacket(storage_conn, Packets.AskPack,
decode=True)[0]
self.assertEqual(ptid, tid)
self.assertTrue(self.app.packing[0] is conn)
self.assertEqual(self.app.packing[1], peer_id)
self.assertEqual(self.app.packing[2], set([storage_uuid, ]))
# Asking again to pack will cause an immediate error
storage_uuid = self.identifyToMasterNode()
storage_conn = self.getFakeConnection(storage_uuid,
self.storage_address)
self.app.nm.getByUUID(storage_uuid).setConnection(storage_conn)
self.service.askPack(conn, tid)
self.checkNoPacketSent(storage_conn)
status = self.checkAnswerPacket(conn, Packets.AnswerPack,
decode=True)[0]
self.assertFalse(status)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testElectionHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.lib import protocol
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import Packet, NodeTypes, NodeStates
from neo.master.handlers.election import ClientElectionHandler, \
ServerElectionHandler
from neo.master.app import Application
from neo.lib.exception import ElectionFailure
from neo.lib.connection import ClientConnection
# patch connection so that we can register _addPacket messages
# in mock object
def _addPacket(self, packet):
if self.connector is not None:
self.connector._addPacket(packet)
class MasterClientElectionTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration(master_number=1)
self.app = Application(config)
self.app.pt.clear()
self.app.em = Mock()
self.app.uuid = self._makeUUID('M')
self.app.server = (self.local_ip, 10000)
self.app.name = 'NEOCLUSTER'
self.election = ClientElectionHandler(self.app)
self.app.unconnected_master_node_set = set()
self.app.negotiating_master_node_set = set()
for node in self.app.nm.getMasterList():
self.app.unconnected_master_node_set.add(node.getAddress())
# define some variable to simulate client and storage node
self.storage_port = 10021
self.master_port = 10011
# apply monkey patches
self._addPacket = ClientConnection._addPacket
ClientConnection._addPacket = _addPacket
def tearDown(self):
# restore patched methods
ClientConnection._addPacket = self._addPacket
NeoUnitTestBase.tearDown(self)
def identifyToMasterNode(self):
node = self.app.nm.getMasterList()[0]
node.setUUID(self.getNewUUID())
conn = self.getFakeConnection(uuid=node.getUUID(),
address=node.getAddress())
return (node, conn)
def _checkUnconnected(self, node):
addr = node.getAddress()
self.assertTrue(addr in self.app.unconnected_master_node_set)
self.assertFalse(addr in self.app.negotiating_master_node_set)
def _checkNegociating(self, node):
addr = node.getAddress()
self.assertTrue(addr in self.app.negotiating_master_node_set)
self.assertFalse(addr in self.app.unconnected_master_node_set)
def test_connectionStarted(self):
node, conn = self.identifyToMasterNode()
self.assertTrue(node.isUnknown())
self._checkUnconnected(node)
self.election.connectionStarted(conn)
self.assertTrue(node.isUnknown())
self._checkNegociating(node)
def test_connectionFailed(self):
node, conn = self.identifyToMasterNode()
self.assertTrue(node.isUnknown())
self._checkUnconnected(node)
self.election.connectionFailed(conn)
self._checkUnconnected(node)
self.assertTrue(node.isUnknown())
def test_connectionCompleted(self):
node, conn = self.identifyToMasterNode()
self.assertTrue(node.isUnknown())
self._checkUnconnected(node)
self.election.connectionCompleted(conn)
self._checkUnconnected(node)
self.assertTrue(node.isUnknown())
self.checkAskPrimary(conn)
def _setNegociating(self, node):
self._checkUnconnected(node)
addr = node.getAddress()
self.app.negotiating_master_node_set.add(addr)
self.app.unconnected_master_node_set.discard(addr)
self._checkNegociating(node)
def test_connectionClosed(self):
node, conn = self.identifyToMasterNode()
self._setNegociating(node)
self.election.connectionClosed(conn)
self.assertTrue(node.isUnknown())
addr = node.getAddress()
self.assertFalse(addr in self.app.unconnected_master_node_set)
self.assertFalse(addr in self.app.negotiating_master_node_set)
def test_acceptIdentification1(self):
""" A non-master node accept identification """
node, conn = self.identifyToMasterNode()
args = (node.getUUID(), 0, 10, self.app.uuid)
self.election.acceptIdentification(conn,
NodeTypes.CLIENT, *args)
self.assertFalse(node in self.app.unconnected_master_node_set)
self.assertFalse(node in self.app.negotiating_master_node_set)
self.checkClosed(conn)
def test_acceptIdentification2(self):
""" UUID conflict """
node, conn = self.identifyToMasterNode()
new_uuid = self._makeUUID('M')
args = (node.getUUID(), 0, 10, new_uuid)
self.assertRaises(ElectionFailure, self.election.acceptIdentification,
conn, NodeTypes.MASTER, *args)
self.assertEqual(self.app.uuid, new_uuid)
def test_acceptIdentification3(self):
""" Identification accepted """
node, conn = self.identifyToMasterNode()
args = (node.getUUID(), 0, 10, self.app.uuid)
self.election.acceptIdentification(conn, NodeTypes.MASTER, *args)
self.checkUUIDSet(conn, node.getUUID())
self.assertTrue(self.app.primary or node.getUUID() < self.app.uuid)
self.assertFalse(node in self.app.negotiating_master_node_set)
def _getMasterList(self, with_node=None):
master_list = self.app.nm.getMasterList()
return [(x.getAddress(), x.getUUID()) for x in master_list]
def test_answerPrimary1(self):
""" Multiple primary masters -> election failure raised """
node, conn = self.identifyToMasterNode()
self.app.primary = True
self.app.primary_master_node = node
master_list = self._getMasterList()
self.assertRaises(ElectionFailure, self.election.answerPrimary,
conn, self.app.uuid, master_list)
def test_answerPrimary2(self):
""" Don't known who's the primary """
node, conn = self.identifyToMasterNode()
master_list = self._getMasterList()
self.election.answerPrimary(conn, None, master_list)
self.assertFalse(self.app.primary)
self.assertEqual(self.app.primary_master_node, None)
self.checkRequestIdentification(conn)
def test_answerPrimary3(self):
""" Answer who's the primary """
node, conn = self.identifyToMasterNode()
master_list = self._getMasterList()
self.election.answerPrimary(conn, node.getUUID(), master_list)
self.assertEqual(len(self.app.unconnected_master_node_set), 0)
self.assertEqual(len(self.app.negotiating_master_node_set), 0)
self.assertFalse(self.app.primary)
self.assertEqual(self.app.primary_master_node, node)
self.checkRequestIdentification(conn)
class MasterServerElectionTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration(master_number=1)
self.app = Application(config)
self.app.pt.clear()
self.app.name = 'NEOCLUSTER'
self.app.em = Mock()
self.election = ServerElectionHandler(self.app)
self.app.unconnected_master_node_set = set()
self.app.negotiating_master_node_set = set()
for node in self.app.nm.getMasterList():
self.app.unconnected_master_node_set.add(node.getAddress())
node.setState(NodeStates.RUNNING)
# define some variable to simulate client and storage node
self.client_address = (self.local_ip, 1000)
self.storage_address = (self.local_ip, 2000)
self.master_address = (self.local_ip, 3000)
# apply monkey patches
self._addPacket = ClientConnection._addPacket
ClientConnection._addPacket = _addPacket
def tearDown(self):
NeoUnitTestBase.tearDown(self)
# restore environment
ClientConnection._addPacket = self._addPacket
def identifyToMasterNode(self, uuid=True):
node = self.app.nm.getMasterList()[0]
if uuid is True:
uuid = self.getNewUUID()
node.setUUID(uuid)
conn = self.getFakeConnection(
uuid=node.getUUID(),
address=node.getAddress(),
)
return (node, conn)
# Tests
def test_requestIdentification1(self):
""" A non-master node request identification """
node, conn = self.identifyToMasterNode()
args = (node.getUUID(), node.getAddress(), self.app.name)
self.assertRaises(protocol.NotReadyError,
self.election.requestIdentification,
conn, NodeTypes.CLIENT, *args)
def test_requestIdentification2(self):
""" A unknown master node request identification """
node, conn = self.identifyToMasterNode()
args = (node.getUUID(), ('127.0.0.1', 1000), self.app.name)
self.checkProtocolErrorRaised(self.election.requestIdentification,
conn, NodeTypes.MASTER, *args)
def test_requestIdentification3(self):
""" A broken master node request identification """
node, conn = self.identifyToMasterNode()
node.setBroken()
args = (node.getUUID(), node.getAddress(), self.app.name)
self.assertRaises(protocol.BrokenNodeDisallowedError,
self.election.requestIdentification,
conn, NodeTypes.MASTER, *args)
def test_requestIdentification4(self):
""" No conflict """
node, conn = self.identifyToMasterNode()
args = (node.getUUID(), node.getAddress(), self.app.name)
self.election.requestIdentification(conn,
NodeTypes.MASTER, *args)
self.checkUUIDSet(conn, node.getUUID())
args = self.checkAcceptIdentification(conn, decode=True)
node_type, uuid, partitions, replicas, new_uuid = args
self.assertEqual(node.getUUID(), new_uuid)
self.assertNotEqual(node.getUUID(), uuid)
def test_requestIdentification5(self):
""" UUID conflict """
node, conn = self.identifyToMasterNode()
args = (self.app.uuid, node.getAddress(), self.app.name)
self.election.requestIdentification(conn,
NodeTypes.MASTER, *args)
self.checkUUIDSet(conn)
args = self.checkAcceptIdentification(conn, decode=True)
node_type, uuid, partitions, replicas, new_uuid = args
self.assertNotEqual(self.app.uuid, new_uuid)
self.assertEqual(self.app.uuid, uuid)
def _getNodeList(self):
return [x.asTuple() for x in self.app.nm.getList()]
def __getClient(self):
uuid = self.getNewUUID()
conn = self.getFakeConnection(uuid=uuid, address=self.client_address)
self.app.nm.createClient(uuid=uuid, address=self.client_address)
return conn
def __getMaster(self, port=1000, register=True):
uuid = self.getNewUUID()
address = ('127.0.0.1', port)
conn = self.getFakeConnection(uuid=uuid, address=address)
if register:
self.app.nm.createMaster(uuid=uuid, address=address)
return conn
def testRequestIdentification1(self):
""" Check with a non-master node, must be refused """
conn = self.__getClient()
self.checkNotReadyErrorRaised(
self.election.requestIdentification,
conn=conn,
node_type=NodeTypes.CLIENT,
uuid=conn.getUUID(),
address=conn.getAddress(),
name=self.app.name
)
def testRequestIdentification2(self):
""" Check with an unknown master node """
conn = self.__getMaster(register=False)
self.checkProtocolErrorRaised(
self.election.requestIdentification,
conn=conn,
node_type=NodeTypes.MASTER,
uuid=conn.getUUID(),
address=conn.getAddress(),
name=self.app.name,
)
def testAnnouncePrimary1(self):
""" check the wrong cases """
announce = self.election.announcePrimary
# No uuid
node, conn = self.identifyToMasterNode(uuid=None)
self.checkProtocolErrorRaised(announce, conn)
# Announce to a primary, raise
self.app.primary = True
node, conn = self.identifyToMasterNode()
self.assertTrue(self.app.primary)
self.assertEqual(self.app.primary_master_node, None)
self.assertRaises(ElectionFailure, announce, conn)
def testAnnouncePrimary2(self):
""" Check the good case """
announce = self.election.announcePrimary
# Announce, must set the primary
self.app.primary = False
node, conn = self.identifyToMasterNode()
self.assertFalse(self.app.primary)
self.assertFalse(self.app.primary_master_node)
announce(conn)
self.assertFalse(self.app.primary)
self.assertEqual(self.app.primary_master_node, node)
def test_askPrimary1(self):
""" Ask the primary to the primary """
node, conn = self.identifyToMasterNode()
self.app.primary = True
self.election.askPrimary(conn)
uuid, master_list = self.checkAnswerPrimary(conn, decode=True)
self.assertEqual(uuid, self.app.uuid)
self.assertEqual(len(master_list), 2)
self.assertEqual(master_list[0], (self.app.server, self.app.uuid))
master_node = self.app.nm.getMasterList()[0]
master_node = (master_node.getAddress(), master_node.getUUID())
self.assertEqual(master_list[1], master_node)
def test_askPrimary2(self):
""" Ask the primary to a secondary that known who's te primary """
node, conn = self.identifyToMasterNode()
self.app.primary = False
# it will answer with the asking node as the primary
self.app.primary_master_node = node
self.election.askPrimary(conn)
uuid, master_list = self.checkAnswerPrimary(conn, decode=True)
self.assertEqual(uuid, node.getUUID())
self.assertEqual(len(master_list), 2)
self.assertEqual(master_list[0], (self.app.server, self.app.uuid))
master_node = (node.getAddress(), node.getUUID())
self.assertEqual(master_list[1], master_node)
def test_askPrimary3(self):
""" Ask the primary to a master that don't known who's the primary """
node, conn = self.identifyToMasterNode()
self.app.primary = False
self.app.primary_master_node = None
self.election.askPrimary(conn)
uuid, master_list = self.checkAnswerPrimary(conn, decode=True)
self.assertEqual(uuid, None)
self.assertEqual(len(master_list), 2)
self.assertEqual(master_list[0], (self.app.server, self.app.uuid))
master_node = (node.getAddress(), node.getUUID())
self.assertEqual(master_list[1], master_node)
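The three askPrimary tests above all check the same answer shape: the primary's UUID (or None when no primary is known) followed by the full master list, starting with the answering node itself. A minimal, hypothetical sketch of that shape (`build_primary_answer` is illustrative, not the real handler):

```python
def build_primary_answer(self_address, self_uuid, primary_uuid, peer_list):
    """Answer askPrimary: the primary's uuid (None when unknown) plus the
    full master list, with the answering node listed first."""
    master_list = [(self_address, self_uuid)]
    master_list.extend(peer_list)
    return primary_uuid, master_list


# a master that doesn't know the primary yet answers None
uuid, master_list = build_primary_answer(
    ('127.0.0.1', 10010), 'self-uuid', None,
    [(('127.0.0.1', 10011), 'peer-uuid')])
assert uuid is None
assert master_list[0] == (('127.0.0.1', 10010), 'self-uuid')
assert len(master_list) == 2
```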
def test_reelectPrimary(self):
node, conn = self.identifyToMasterNode()
self.assertRaises(ElectionFailure, self.election.reelectPrimary, conn)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testMasterApp.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.master.app import Application
from neo.lib.util import p64, u64
class MasterAppTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration()
self.app = Application(config)
self.app.pt.clear()
def test_06_broadcastNodeInformation(self):
# define some nodes to which data will be sent
master_uuid = self.getNewUUID()
master = self.app.nm.createMaster(uuid=master_uuid)
storage_uuid = self.getNewUUID()
storage = self.app.nm.createStorage(uuid=storage_uuid)
client_uuid = self.getNewUUID()
client = self.app.nm.createClient(uuid=client_uuid)
# create connections and patch them in
master_conn = self.getFakeConnection()
storage_conn = self.getFakeConnection()
client_conn = self.getFakeConnection()
master.setConnection(master_conn)
storage.setConnection(storage_conn)
client.setConnection(client_conn)
master.setRunning()
client.setRunning()
storage.setRunning()
self.app.nm.add(storage)
self.app.nm.add(client)
# no address defined, so the notification is not sent to client nodes
c_node = self.app.nm.createClient(uuid=self.getNewUUID())
self.app.broadcastNodesInformation([c_node])
# check conn
self.checkNoPacketSent(client_conn)
self.checkNoPacketSent(master_conn)
self.checkNotifyNodeInformation(storage_conn)
# address defined and client type
s_node = self.app.nm.createClient(
uuid=self.getNewUUID(),
address=("127.1.0.1", 3361)
)
self.app.broadcastNodesInformation([c_node])
# check conn
self.checkNoPacketSent(client_conn)
self.checkNoPacketSent(master_conn)
self.checkNotifyNodeInformation(storage_conn)
# address defined and storage type
s_node = self.app.nm.createStorage(
uuid=self.getNewUUID(),
address=("127.0.0.1", 1351)
)
self.app.broadcastNodesInformation([s_node])
# check conn
self.checkNotifyNodeInformation(client_conn)
self.checkNoPacketSent(master_conn)
self.checkNotifyNodeInformation(storage_conn)
# node not running, don't send information
client.setPending()
self.app.broadcastNodesInformation([s_node])
# check conn
self.assertFalse(client_conn.mockGetNamedCalls('notify'))
self.checkNoPacketSent(master_conn)
self.checkNotifyNodeInformation(storage_conn)
def test_storageReadinessAPI(self):
uuid_1 = self.getNewUUID()
uuid_2 = self.getNewUUID()
self.assertFalse(self.app.isStorageReady(uuid_1))
self.assertFalse(self.app.isStorageReady(uuid_2))
# Must not raise, nor change readiness
self.app.setStorageNotReady(uuid_1)
self.assertFalse(self.app.isStorageReady(uuid_1))
self.assertFalse(self.app.isStorageReady(uuid_2))
# Mark as ready, only one must change
self.app.setStorageReady(uuid_1)
self.assertTrue(self.app.isStorageReady(uuid_1))
self.assertFalse(self.app.isStorageReady(uuid_2))
self.app.setStorageReady(uuid_2)
# Mark not ready, only one must change
self.app.setStorageNotReady(uuid_1)
self.assertFalse(self.app.isStorageReady(uuid_1))
self.assertTrue(self.app.isStorageReady(uuid_2))
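The readiness API exercised above can be modelled with a plain set of UUIDs. A minimal sketch, assuming set-based bookkeeping (the `ReadinessTracker` class is hypothetical; only the three method names mirror the Application API being tested):

```python
class ReadinessTracker:
    """Minimal model of the master's storage readiness flags: a set of UUIDs."""

    def __init__(self):
        self._ready = set()

    def setStorageReady(self, uuid):
        self._ready.add(uuid)

    def setStorageNotReady(self, uuid):
        # discard() never raises, so un-marking an unknown uuid is a no-op,
        # matching the "must not raise, nor change readiness" expectation
        self._ready.discard(uuid)

    def isStorageReady(self, uuid):
        return uuid in self._ready


tracker = ReadinessTracker()
tracker.setStorageNotReady('uuid_1')            # must not raise
assert not tracker.isStorageReady('uuid_1')
tracker.setStorageReady('uuid_1')
assert tracker.isStorageReady('uuid_1')
assert not tracker.isStorageReady('uuid_2')     # others unaffected
```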
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testMasterPT.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeStates, CellStates
from neo.master.pt import PartitionTable
from neo.lib.node import StorageNode
class MasterPartitionTableTests(NeoUnitTestBase):
def test_02_PartitionTable_creation(self):
num_partitions = 5
num_replicas = 3
pt = PartitionTable(num_partitions, num_replicas)
self.assertEqual(pt.np, num_partitions)
self.assertEqual(pt.nr, num_replicas)
self.assertEqual(pt.num_filled_rows, 0)
partition_list = pt.partition_list
self.assertEqual(len(partition_list), num_partitions)
for x in xrange(num_partitions):
part = partition_list[x]
self.assertTrue(isinstance(part, list))
self.assertEqual(len(part), 0)
self.assertEqual(len(pt.count_dict), 0)
# no nodes or cells for now
self.assertEqual(len(pt.getNodeList()), 0)
for x in xrange(num_partitions):
self.assertEqual(len(pt.getCellList(x)), 0)
self.assertEqual(len(pt.getCellList(x, True)), 0)
self.assertEqual(len(pt.getRow(x)), 0)
self.assertFalse(pt.operational())
self.assertFalse(pt.filled())
self.assertRaises(RuntimeError, pt.make, [])
self.assertFalse(pt.operational())
self.assertFalse(pt.filled())
def test_11_findLeastUsedNode(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1, NodeStates.RUNNING)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2, NodeStates.RUNNING)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn2, CellStates.UP_TO_DATE)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3, NodeStates.RUNNING)
pt.setCell(0, sn3, CellStates.UP_TO_DATE)
# test
node = pt.findLeastUsedNode()
self.assertEqual(node, sn3)
node = pt.findLeastUsedNode((sn3, ))
self.assertEqual(node, sn2)
node = pt.findLeastUsedNode((sn3, sn2))
self.assertEqual(node, sn1)
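The selection rule tested above (pick the node holding the fewest cells, optionally excluding some nodes) can be sketched with a usage-count dictionary; `count_dict` mirrors the attribute these tests inspect, but the function itself is an illustrative sketch, not the real implementation:

```python
def find_least_used_node(count_dict, excluded=()):
    """Pick the node holding the fewest cells, skipping excluded nodes."""
    candidates = [node for node in count_dict if node not in excluded]
    if not candidates:
        return None
    return min(candidates, key=count_dict.get)


counts = {'sn1': 3, 'sn2': 2, 'sn3': 1}          # cells held per node
assert find_least_used_node(counts) == 'sn3'
assert find_least_used_node(counts, ('sn3',)) == 'sn2'
assert find_least_used_node(counts, ('sn3', 'sn2')) == 'sn1'
```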
def test_13_outdate(self):
# create nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19002)
sn2 = StorageNode(Mock(), server2, uuid2)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19003)
sn3 = StorageNode(Mock(), server3, uuid3)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19004)
sn4 = StorageNode(Mock(), server4, uuid4)
uuid5 = self.getNewUUID()
server5 = ("127.0.0.5", 19005)
sn5 = StorageNode(Mock(), server5, uuid5)
# create partition table
num_partitions = 5
num_replicas = 3
pt = PartitionTable(num_partitions, num_replicas)
pt.setCell(0, sn1, CellStates.OUT_OF_DATE)
sn1.setState(NodeStates.RUNNING)
pt.setCell(1, sn2, CellStates.UP_TO_DATE)
sn2.setState(NodeStates.TEMPORARILY_DOWN)
pt.setCell(2, sn3, CellStates.UP_TO_DATE)
sn3.setState(NodeStates.DOWN)
pt.setCell(3, sn4, CellStates.UP_TO_DATE)
sn4.setState(NodeStates.BROKEN)
pt.setCell(4, sn5, CellStates.UP_TO_DATE)
sn5.setState(NodeStates.RUNNING)
# outdate nodes
cells_outdated = pt.outdate()
self.assertEqual(len(cells_outdated), 3)
for offset, uuid, state in cells_outdated:
self.assertTrue(offset in (1, 2, 3))
self.assertTrue(uuid in (uuid2, uuid3, uuid4))
self.assertEqual(state, CellStates.OUT_OF_DATE)
# check each cell
# part 1, already outdated
cells = pt.getCellList(0)
self.assertEqual(len(cells), 1)
cell = cells[0]
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
# part 2, must be outdated
cells = pt.getCellList(1)
self.assertEqual(len(cells), 1)
cell = cells[0]
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
# part 3, must be outdated
cells = pt.getCellList(2)
self.assertEqual(len(cells), 1)
cell = cells[0]
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
# part 4, already outdated
cells = pt.getCellList(3)
self.assertEqual(len(cells), 1)
cell = cells[0]
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
# part 5, remains running
cells = pt.getCellList(4)
self.assertEqual(len(cells), 1)
cell = cells[0]
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
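test_13_outdate checks one rule: UP_TO_DATE cells whose node is no longer running are flagged OUT_OF_DATE, while cells that are already outdated or whose node is still running are left alone. A minimal sketch of that rule, assuming plain-string states instead of the real CellStates/NodeStates enums:

```python
def outdate(partition_list, node_states):
    """Flag UP_TO_DATE cells of non-running nodes OUT_OF_DATE; return changed cells."""
    changed = []
    for offset, cells in enumerate(partition_list):
        for cell in cells:
            if (cell['state'] == 'UP_TO_DATE'
                    and node_states[cell['node']] != 'RUNNING'):
                cell['state'] = 'OUT_OF_DATE'
                changed.append((offset, cell['node'], 'OUT_OF_DATE'))
    return changed


partition_list = [
    [{'node': 'sn1', 'state': 'OUT_OF_DATE'}],   # already outdated: untouched
    [{'node': 'sn2', 'state': 'UP_TO_DATE'}],    # node temporarily down: flagged
    [{'node': 'sn5', 'state': 'UP_TO_DATE'}],    # node still running: kept
]
states = {'sn1': 'RUNNING', 'sn2': 'TEMPORARILY_DOWN', 'sn5': 'RUNNING'}
assert outdate(partition_list, states) == [(1, 'sn2', 'OUT_OF_DATE')]
assert partition_list[2][0]['state'] == 'UP_TO_DATE'
```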
def test_14_addNode(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
# add it to an empty pt
cell_list = pt.addNode(sn1)
self.assertEqual(len(cell_list), 5)
# it must be added to all partitions
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 1)
self.assertEqual(pt.getCellList(x)[0].getState(), CellStates.OUT_OF_DATE)
self.assertEqual(pt.getCellList(x)[0].getNode(), sn1)
self.assertEqual(pt.count_dict[sn1], 5)
# add same node again, must remain the same
cell_list = pt.addNode(sn1)
self.assertEqual(len(cell_list), 0)
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 1)
self.assertEqual(pt.getCellList(x)[0].getState(), CellStates.OUT_OF_DATE)
self.assertEqual(pt.getCellList(x)[0].getNode(), sn1)
self.assertEqual(pt.count_dict[sn1], 5)
# add a second node to fill the partition table
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19002)
sn2 = StorageNode(Mock(), server2, uuid2)
# add it
cell_list = pt.addNode(sn2)
self.assertEqual(len(cell_list), 5)
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 2)
self.assertEqual(pt.getCellList(x)[0].getState(), CellStates.OUT_OF_DATE)
self.assertTrue(pt.getCellList(x)[0].getNode() in (sn1, sn2))
# test that the most used node is removed from some partitions
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19001)
sn4 = StorageNode(Mock(), server4, uuid4)
uuid5 = self.getNewUUID()
server5 = ("127.0.0.5", 1900)
sn5 = StorageNode(Mock(), server5, uuid5)
# partition looks like:
# 0 : sn1, sn2
# 1 : sn1, sn3
# 2 : sn1, sn4
# 3 : sn1, sn5
num_partitions = 4
num_replicas = 1
pt = PartitionTable(num_partitions, num_replicas)
# node most used is out of date, just dropped
pt.setCell(0, sn1, CellStates.OUT_OF_DATE)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.OUT_OF_DATE)
pt.setCell(1, sn3, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.OUT_OF_DATE)
pt.setCell(2, sn4, CellStates.UP_TO_DATE)
pt.setCell(3, sn1, CellStates.OUT_OF_DATE)
pt.setCell(3, sn5, CellStates.UP_TO_DATE)
uuid6 = self.getNewUUID()
server6 = ("127.0.0.6", 19006)
sn6 = StorageNode(Mock(), server6, uuid6)
cell_list = pt.addNode(sn6)
# sn1 is removed twice and sn6 is added twice
self.assertEqual(len(cell_list), 4)
for offset, uuid, state in cell_list:
if offset in (0, 1):
if uuid == uuid1:
self.assertEqual(state, CellStates.DISCARDED)
elif uuid == uuid6:
self.assertEqual(state, CellStates.OUT_OF_DATE)
else:
self.assertTrue(uuid in (uuid1, uuid6))
else:
self.assertTrue(offset in (0, 1))
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 2)
# there is a feeding cell, just dropped
pt.clear()
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(0, sn3, CellStates.FEEDING)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn2, CellStates.FEEDING)
pt.setCell(1, sn3, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn4, CellStates.FEEDING)
pt.setCell(2, sn5, CellStates.UP_TO_DATE)
pt.setCell(3, sn1, CellStates.UP_TO_DATE)
pt.setCell(3, sn4, CellStates.UP_TO_DATE)
pt.setCell(3, sn5, CellStates.FEEDING)
cell_list = pt.addNode(sn6)
# sn1 is removed twice and sn6 is added twice
self.assertEqual(len(cell_list), 4)
for offset, uuid, state in cell_list:
if offset in (0, 1):
if uuid == uuid1:
self.assertEqual(state, CellStates.DISCARDED)
elif uuid == uuid6:
self.assertEqual(state, CellStates.OUT_OF_DATE)
else:
self.assertTrue(uuid in (uuid1, uuid6))
else:
self.assertTrue(offset in (0, 1))
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 3)
# there is no feeding cell, marked as feeding
pt.clear()
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn3, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn4, CellStates.UP_TO_DATE)
pt.setCell(3, sn1, CellStates.UP_TO_DATE)
pt.setCell(3, sn5, CellStates.UP_TO_DATE)
cell_list = pt.addNode(sn6)
# sn1 is removed twice and sn6 is added twice
self.assertEqual(len(cell_list), 4)
for offset, uuid, state in cell_list:
if offset in (0, 1):
if uuid == uuid1:
self.assertEqual(state, CellStates.FEEDING)
elif uuid == uuid6:
self.assertEqual(state, CellStates.OUT_OF_DATE)
else:
self.assertTrue(uuid in (uuid1, uuid6))
else:
self.assertTrue(offset in (0, 1))
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 3)
def test_15_dropNode(self):
num_partitions = 4
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1, NodeStates.RUNNING)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19002)
sn2 = StorageNode(Mock(), server2, uuid2, NodeStates.RUNNING)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3, NodeStates.RUNNING)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19001)
sn4 = StorageNode(Mock(), server4, uuid4, NodeStates.RUNNING)
# partition looks like:
# 0 : sn1, sn2
# 1 : sn1, sn3
# 2 : sn1, sn3
# 3 : sn1, sn4
# node is not feeding, so retrieve the least used node to replace it:
# sn2 must be replaced by sn4 in partition 0
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn3, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn3, CellStates.UP_TO_DATE)
pt.setCell(3, sn1, CellStates.UP_TO_DATE)
pt.setCell(3, sn4, CellStates.UP_TO_DATE)
cell_list = pt.dropNode(sn2)
self.assertEqual(len(cell_list), 2)
for offset, uuid, state in cell_list:
self.assertEqual(offset, 0)
if uuid == uuid2:
self.assertEqual(state, CellStates.DISCARDED)
elif uuid == uuid4:
self.assertEqual(state, CellStates.OUT_OF_DATE)
else:
self.assertTrue(uuid in (uuid2, uuid4))
for x in xrange(num_replicas):
self.assertEqual(len(pt.getCellList(x)), 2)
# same test but with feeding state, no other node will be added
pt.clear()
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(0, sn2, CellStates.FEEDING)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn3, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn3, CellStates.UP_TO_DATE)
pt.setCell(3, sn1, CellStates.UP_TO_DATE)
pt.setCell(3, sn4, CellStates.UP_TO_DATE)
cell_list = pt.dropNode(sn2)
self.assertEqual(len(cell_list), 1)
for offset, uuid, state in cell_list:
self.assertEqual(offset, 0)
self.assertEqual(state, CellStates.DISCARDED)
self.assertEqual(uuid, uuid2)
for x in xrange(num_replicas):
if x == 0:
self.assertEqual(len(pt.getCellList(x)), 1)
else:
self.assertEqual(len(pt.getCellList(x)), 2)
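test_15_dropNode covers two cases: a dropped node's normal cell is replaced by the least used node (marked OUT_OF_DATE), whereas a FEEDING cell is simply discarded with no replacement. A hypothetical sketch of those two rules, using plain dicts and strings rather than the real PartitionTable objects:

```python
def drop_node(partition_list, count_dict, dropped):
    """Discard `dropped` from every partition; replace non-feeding cells with
    the least used node not already present in that partition."""
    changes = []
    for offset, cells in enumerate(partition_list):
        for cell in list(cells):
            if cell['node'] != dropped:
                continue
            cells.remove(cell)
            changes.append((offset, dropped, 'DISCARDED'))
            if cell['state'] != 'FEEDING':
                present = set(c['node'] for c in cells)
                candidates = [n for n in count_dict
                              if n not in present and n != dropped]
                if candidates:
                    new = min(candidates, key=count_dict.get)
                    cells.append({'node': new, 'state': 'OUT_OF_DATE'})
                    changes.append((offset, new, 'OUT_OF_DATE'))
    return changes


# sn2 is dropped and replaced by the least used node sn4
partition_list = [[{'node': 'sn1', 'state': 'UP_TO_DATE'},
                   {'node': 'sn2', 'state': 'UP_TO_DATE'}]]
counts = {'sn1': 4, 'sn2': 1, 'sn4': 1}
assert drop_node(partition_list, counts, 'sn2') == [
    (0, 'sn2', 'DISCARDED'), (0, 'sn4', 'OUT_OF_DATE')]

# a FEEDING cell is just discarded, nothing replaces it
partition_list = [[{'node': 'sn2', 'state': 'FEEDING'}]]
assert drop_node(partition_list, {'sn2': 1, 'sn4': 1}, 'sn2') == [
    (0, 'sn2', 'DISCARDED')]
```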
def test_16_make(self):
num_partitions = 5
num_replicas = 1
pt = PartitionTable(num_partitions, num_replicas)
# add nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1, NodeStates.RUNNING)
# add not running node
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2)
sn2.setState(NodeStates.TEMPORARILY_DOWN)
# add node without uuid
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, None, NodeStates.RUNNING)
# add valid running nodes
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19001)
sn4 = StorageNode(Mock(), server4, uuid4, NodeStates.RUNNING)
uuid5 = self.getNewUUID()
server5 = ("127.0.0.5", 1900)
sn5 = StorageNode(Mock(), server5, uuid5, NodeStates.RUNNING)
# make the table
pt.make([sn1, sn2, sn3, sn4, sn5])
# check it's ok: only running nodes with a uuid
# must be present
for x in xrange(num_partitions):
cells = pt.getCellList(x)
self.assertEqual(len(cells), 2)
nodes = [x.getNode() for x in cells]
for node in nodes:
self.assertTrue(node in (sn1, sn4, sn5))
self.assertTrue(node not in (sn2, sn3))
self.assertTrue(pt.filled())
self.assertTrue(pt.operational())
# create a pt with less nodes
pt.clear()
self.assertFalse(pt.filled())
self.assertFalse(pt.operational())
pt.make([sn1])
# check it's ok
for x in xrange(num_partitions):
cells = pt.getCellList(x)
self.assertEqual(len(cells), 1)
nodes = [x.getNode() for x in cells]
for node in nodes:
self.assertEqual(node, sn1)
self.assertTrue(pt.filled())
self.assertTrue(pt.operational())
def test_17_tweak(self):
# remove broken node
# remove if too many feeding nodes
# remove feeding if all cells are up to date
# if too many cells, remove most used cell
# if not enough cells, add the least used node
# create nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1, NodeStates.RUNNING)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19002)
sn2 = StorageNode(Mock(), server2, uuid2, NodeStates.RUNNING)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19003)
sn3 = StorageNode(Mock(), server3, uuid3, NodeStates.RUNNING)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19004)
sn4 = StorageNode(Mock(), server4, uuid4, NodeStates.RUNNING)
uuid5 = self.getNewUUID()
server5 = ("127.0.0.5", 19005)
sn5 = StorageNode(Mock(), server5, uuid5, NodeStates.RUNNING)
# create partition table
# 0 : sn1(discarded), sn2(up), -> sn2 must remain
# 1 : sn1(feeding), sn2(feeding), sn3(up) -> one feeding and sn3 must remain
# 2 : sn1(feeding), sn2(up), sn3(up) -> sn2 and sn3 must remain, feeding must go away
# 3 : sn1(up), sn2(up), sn3(up), sn4(up) -> only 3 cell must remain
# 4 : sn1(up), sn5(up) -> one more cell must be added
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# part 0
pt.setCell(0, sn1, CellStates.DISCARDED)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
# part 1
pt.setCell(1, sn1, CellStates.FEEDING)
pt.setCell(1, sn2, CellStates.FEEDING)
pt.setCell(1, sn3, CellStates.OUT_OF_DATE)
# part 2
pt.setCell(2, sn1, CellStates.FEEDING)
pt.setCell(2, sn2, CellStates.UP_TO_DATE)
pt.setCell(2, sn3, CellStates.UP_TO_DATE)
# part 3
pt.setCell(3, sn1, CellStates.UP_TO_DATE)
pt.setCell(3, sn2, CellStates.UP_TO_DATE)
pt.setCell(3, sn3, CellStates.UP_TO_DATE)
pt.setCell(3, sn4, CellStates.UP_TO_DATE)
# part 4
pt.setCell(4, sn1, CellStates.UP_TO_DATE)
pt.setCell(4, sn5, CellStates.UP_TO_DATE)
# now tweak the table
pt.tweak()
# check part 1
cells = pt.getCellList(0)
self.assertEqual(len(cells), 3)
for cell in cells:
self.assertNotEqual(cell.getState(), CellStates.DISCARDED)
if cell.getNode() == sn2:
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
else:
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
self.assertTrue(sn2 in [x.getNode() for x in cells])
# check part 2
cells = pt.getCellList(1)
self.assertEqual(len(cells), 4)
for cell in cells:
if cell.getNode() == sn1:
self.assertEqual(cell.getState(), CellStates.FEEDING)
else:
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
self.assertTrue(sn3 in [x.getNode() for x in cells])
self.assertTrue(sn1 in [x.getNode() for x in cells])
# check part 3
cells = pt.getCellList(2)
self.assertEqual(len(cells), 3)
for cell in cells:
if cell.getNode() in (sn2, sn3):
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
else:
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
self.assertTrue(sn3 in [x.getNode() for x in cells])
self.assertTrue(sn2 in [x.getNode() for x in cells])
# check part 4
cells = pt.getCellList(3)
self.assertEqual(len(cells), 3)
for cell in cells:
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
# check part 5
cells = pt.getCellList(4)
self.assertEqual(len(cells), 3)
for cell in cells:
if cell.getNode() in (sn1, sn5):
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
else:
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
self.assertTrue(sn1 in [x.getNode() for x in cells])
self.assertTrue(sn5 in [x.getNode() for x in cells])
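Two of the tweak rules tested above are pure replica-count balancing: an over-replicated partition loses its most used cell, an under-replicated one gains the least used node. A small sketch of just that balancing step (the `balance` function is illustrative; the real tweak also handles feeding and discarded cells):

```python
def balance(cells, count_dict, target):
    """Pad a partition with the least used nodes, or trim the most used,
    until it holds `target` cells; count_dict is updated accordingly.
    Assumes enough spare nodes exist when growing."""
    cells = list(cells)
    while len(cells) < target:
        spare = [n for n in count_dict if n not in cells]
        node = min(spare, key=count_dict.get)    # least used node joins
        cells.append(node)
        count_dict[node] += 1
    while len(cells) > target:
        node = max(cells, key=count_dict.get)    # most used node leaves
        cells.remove(node)
        count_dict[node] -= 1
    return cells


counts = {'sn1': 5, 'sn2': 1, 'sn3': 1}
assert balance(['sn1'], counts, 3) == ['sn1', 'sn2', 'sn3']       # grown

counts = {'sn1': 5, 'sn2': 1, 'sn3': 1, 'sn4': 2}
assert balance(['sn1', 'sn2', 'sn3', 'sn4'], counts, 3) == ['sn2', 'sn3', 'sn4']
```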
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testRecovery.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from struct import pack, unpack
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, NodeStates, CellStates
from neo.master.recovery import RecoveryManager
from neo.master.app import Application
class MasterRecoveryTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration()
self.app = Application(config)
self.app.pt.clear()
self.recovery = RecoveryManager(self.app)
self.app.unconnected_master_node_set = set()
self.app.negotiating_master_node_set = set()
for node in self.app.nm.getMasterList():
self.app.unconnected_master_node_set.add(node.getAddress())
node.setState(NodeStates.RUNNING)
# define some variable to simulate client and storage node
self.client_port = 11022
self.storage_port = 10021
self.master_port = 10011
self.master_address = ('127.0.0.1', self.master_port)
self.storage_address = ('127.0.0.1', self.storage_port)
# Common methods
def getLastUUID(self):
return self.uuid
def identifyToMasterNode(self, node_type=NodeTypes.STORAGE, ip="127.0.0.1",
port=10021):
"""Do first step of identification to MN
"""
address = (ip, port)
uuid = self.getNewUUID()
self.app.nm.createFromNodeType(node_type, address=address, uuid=uuid,
state=NodeStates.RUNNING)
return uuid
# Tests
def test_01_connectionClosed(self):
uuid = self.identifyToMasterNode(node_type=NodeTypes.MASTER, port=self.master_port)
conn = self.getFakeConnection(uuid, self.master_address)
self.assertEqual(self.app.nm.getByAddress(conn.getAddress()).getState(),
NodeStates.RUNNING)
self.recovery.connectionClosed(conn)
self.assertEqual(self.app.nm.getByAddress(conn.getAddress()).getState(),
NodeStates.TEMPORARILY_DOWN)
def test_09_answerLastIDs(self):
recovery = self.recovery
uuid = self.identifyToMasterNode()
oid1 = self.getOID(1)
oid2 = self.getOID(2)
tid1 = self.getNextTID()
tid2 = self.getNextTID(tid1)
ptid1 = self.getPTID(1)
ptid2 = self.getPTID(2)
self.app.tm.setLastOID(oid1)
self.app.tm.setLastTID(tid1)
self.app.pt.setID(ptid1)
# send information more recent than what the PMN knows; this must update the target node
conn = self.getFakeConnection(uuid, self.storage_port)
self.assertTrue(ptid2 > self.app.pt.getID())
self.assertTrue(oid2 > self.app.tm.getLastOID())
self.assertTrue(tid2 > self.app.tm.getLastTID())
recovery.answerLastIDs(conn, oid2, tid2, ptid2)
self.assertEqual(oid2, self.app.tm.getLastOID())
self.assertEqual(tid2, self.app.tm.getLastTID())
self.assertEqual(ptid2, recovery.target_ptid)
def test_10_answerPartitionTable(self):
recovery = self.recovery
uuid = self.identifyToMasterNode(NodeTypes.MASTER, port=self.master_port)
# not from target node, ignore
uuid = self.identifyToMasterNode(NodeTypes.STORAGE, port=self.storage_port)
conn = self.getFakeConnection(uuid, self.storage_port)
node = self.app.nm.getByUUID(conn.getUUID())
offset = 1
cell_list = [(offset, uuid, CellStates.UP_TO_DATE)]
cells = self.app.pt.getRow(offset)
for cell, state in cells:
self.assertEqual(state, CellStates.OUT_OF_DATE)
recovery.target_ptid = 2
node.setPending()
recovery.answerPartitionTable(conn, 1, cell_list)
cells = self.app.pt.getRow(offset)
for cell, state in cells:
self.assertEqual(state, CellStates.OUT_OF_DATE)
# from target node, taken into account
conn = self.getFakeConnection(uuid, self.storage_port)
offset = 1
cell_list = [(offset, ((uuid, CellStates.UP_TO_DATE,),),)]
cells = self.app.pt.getRow(offset)
for cell, state in cells:
self.assertEqual(state, CellStates.OUT_OF_DATE)
node.setPending()
recovery.answerPartitionTable(conn, None, cell_list)
cells = self.app.pt.getRow(offset)
for cell, state in cells:
self.assertEqual(state, CellStates.UP_TO_DATE)
# give a bad offset, must send error
self.recovery.target_uuid = uuid
conn = self.getFakeConnection(uuid, self.storage_port)
offset = 1000000
self.assertFalse(self.app.pt.hasOffset(offset))
cell_list = [(offset, ((uuid, NodeStates.DOWN,),),)]
node.setPending()
self.checkProtocolErrorRaised(recovery.answerPartitionTable, conn,
2, cell_list)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testStorageHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from struct import pack
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, NodeStates, Packets
from neo.master.handlers.storage import StorageServiceHandler
from neo.master.handlers.client import ClientServiceHandler
from neo.master.app import Application
from neo.lib.exception import OperationFailure
class MasterStorageHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration(master_number=1, replicas=1)
self.app = Application(config)
self.app.pt.clear()
self.app.em = Mock()
self.service = StorageServiceHandler(self.app)
self.client_handler = ClientServiceHandler(self.app)
# define some variable to simulate client and storage node
self.client_port = 11022
self.storage_port = 10021
self.master_port = 10010
self.master_address = ('127.0.0.1', self.master_port)
self.client_address = ('127.0.0.1', self.client_port)
self.storage_address = ('127.0.0.1', self.storage_port)
def _allocatePort(self):
self.port = getattr(self, 'port', 1000) + 1
return self.port
def _getClient(self):
return self.identifyToMasterNode(node_type=NodeTypes.CLIENT,
ip='127.0.0.1', port=self._allocatePort())
def _getStorage(self):
return self.identifyToMasterNode(node_type=NodeTypes.STORAGE,
ip='127.0.0.1', port=self._allocatePort())
def getLastUUID(self):
return self.uuid
def identifyToMasterNode(self, node_type=NodeTypes.STORAGE, ip="127.0.0.1",
port=10021):
"""Do first step of identification to MN
"""
nm = self.app.nm
uuid = self.getNewUUID()
node = nm.createFromNodeType(node_type, address=(ip, port),
uuid=uuid)
conn = self.getFakeConnection(node.getUUID(),node.getAddress())
node.setConnection(conn)
return (node, conn)
def test_answerInformationLocked_1(self):
"""
Master must refuse to lock if the TID is greater than the last TID
"""
tid1 = self.getNextTID()
tid2 = self.getNextTID(tid1)
self.app.tm.setLastTID(tid1)
self.assertTrue(tid1 < tid2)
node, conn = self.identifyToMasterNode()
self.checkProtocolErrorRaised(self.service.answerInformationLocked,
conn, tid2)
self.checkNoPacketSent(conn)
def test_answerInformationLocked_2(self):
"""
Master must:
- lock each storage
- notify the client
- invalidate other clients
- unlock storages
"""
# one client and two storages required
client_1, client_conn_1 = self._getClient()
client_2, client_conn_2 = self._getClient()
storage_1, storage_conn_1 = self._getStorage()
storage_2, storage_conn_2 = self._getStorage()
uuid_list = storage_1.getUUID(), storage_2.getUUID()
oid_list = self.getOID(), self.getOID()
msg_id = 1
# register a transaction
ttid = self.app.tm.begin(client_1)
tid = self.app.tm.prepare(ttid, 1, oid_list, uuid_list,
msg_id)
self.assertTrue(ttid in self.app.tm)
# the first storage acknowledge the lock
self.service.answerInformationLocked(storage_conn_1, ttid)
self.checkNoPacketSent(client_conn_1)
self.checkNoPacketSent(client_conn_2)
self.checkNoPacketSent(storage_conn_1)
self.checkNoPacketSent(storage_conn_2)
# then the second
self.service.answerInformationLocked(storage_conn_2, ttid)
self.checkAnswerTransactionFinished(client_conn_1)
self.checkInvalidateObjects(client_conn_2)
self.checkNotifyUnlockInformation(storage_conn_1)
self.checkNotifyUnlockInformation(storage_conn_2)
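The test above exercises a wait-for-all pattern: nothing is sent until every involved storage has acknowledged the lock, and the last acknowledgement triggers the client notification, invalidations and unlocks. A hypothetical sketch of that pattern (the `TransactionLockWaiter` class is illustrative, not the real transaction manager):

```python
class TransactionLockWaiter:
    """Fire a callback once every involved storage has acknowledged the lock."""

    def __init__(self, storage_uuids, on_all_locked):
        self._pending = set(storage_uuids)
        self._on_all_locked = on_all_locked

    def acknowledge(self, uuid):
        self._pending.discard(uuid)
        if not self._pending:
            self._on_all_locked()


events = []
waiter = TransactionLockWaiter(['s1', 's2'], lambda: events.append('finished'))
waiter.acknowledge('s1')
assert events == []                  # first acknowledgement: still waiting
waiter.acknowledge('s2')
assert events == ['finished']        # last acknowledgement triggers the finish
```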
def test_12_askLastIDs(self):
service = self.service
node, conn = self.identifyToMasterNode()
# give a uuid
conn = self.getFakeConnection(node.getUUID(), self.storage_address)
ptid = self.app.pt.getID()
oid = self.getOID(1)
tid = self.getNextTID()
self.app.tm.setLastOID(oid)
self.app.tm.setLastTID(tid)
service.askLastIDs(conn)
packet = self.checkAnswerLastIDs(conn)
loid, ltid, lptid = packet.decode()
self.assertEqual(loid, oid)
self.assertEqual(ltid, tid)
self.assertEqual(lptid, ptid)
def test_13_askUnfinishedTransactions(self):
service = self.service
node, conn = self.identifyToMasterNode()
# give a uuid
service.askUnfinishedTransactions(conn)
packet = self.checkAnswerUnfinishedTransactions(conn)
max_tid, tid_list = packet.decode()
self.assertEqual(tid_list, [])
# create some transaction
node, conn = self.identifyToMasterNode(node_type=NodeTypes.CLIENT,
port=self.client_port)
ttid = self.app.tm.begin(node)
self.app.tm.prepare(ttid, 1,
[self.getOID(1)], [node.getUUID()], 1)
conn = self.getFakeConnection(node.getUUID(), self.storage_address)
service.askUnfinishedTransactions(conn)
max_tid, tid_list = self.checkAnswerUnfinishedTransactions(conn, decode=True)
self.assertEqual(len(tid_list), 1)
def test_connectionClosed(self):
method = self.service.connectionClosed
state = NodeStates.TEMPORARILY_DOWN
# define two nodes
node1, conn1 = self.identifyToMasterNode()
node2, conn2 = self.identifyToMasterNode()
node1.setRunning()
node2.setRunning()
self.assertEqual(node1.getState(), NodeStates.RUNNING)
self.assertEqual(node2.getState(), NodeStates.RUNNING)
# fill the pt
self.app.pt.make(self.app.nm.getStorageList())
self.assertTrue(self.app.pt.filled())
self.assertTrue(self.app.pt.operational())
# drop one node
lptid = self.app.pt.getID()
method(conn1)
self.assertEqual(node1.getState(), state)
self.assertTrue(lptid < self.app.pt.getID())
# drop the second, no storage node left
lptid = self.app.pt.getID()
self.assertEqual(node2.getState(), NodeStates.RUNNING)
self.assertRaises(OperationFailure, method, conn2)
self.assertEqual(node2.getState(), state)
self.assertEqual(lptid, self.app.pt.getID())
def test_nodeLostAfterAskLockInformation(self):
# 2 storage nodes, one will die
node1, conn1 = self._getStorage()
node2, conn2 = self._getStorage()
# client nodes, to distinguish answers for the sample transactions
client1, cconn1 = self._getClient()
client2, cconn2 = self._getClient()
client3, cconn3 = self._getClient()
oid_list = [self.getOID(), ]
# Some shortcuts to simplify test code
self.app.pt = Mock({'operational': True})
self.app.outdateAndBroadcastPartition = lambda: None
# Register some transactions
tm = self.app.tm
# Transaction 1: 2 storage nodes involved, one will die and the other
# has already answered the lock request
msg_id_1 = 1
ttid1 = tm.begin(client1)
tid1 = tm.prepare(ttid1, 1, oid_list,
[node1.getUUID(), node2.getUUID()], msg_id_1)
tm.lock(ttid1, node2.getUUID())
# storage 1 requests a notification at commit
tm.registerForNotification(node1.getUUID())
self.checkNoPacketSent(cconn1)
# Storage 1 dies
node1.setTemporarilyDown()
self.service.nodeLost(conn1, node1)
# T1: last locking node lost, client receives AnswerTransactionFinished
self.checkAnswerTransactionFinished(cconn1)
self.checkNotifyTransactionFinished(conn1)
self.checkNotifyUnlockInformation(conn2)
# ...and notifications are sent to other clients
self.checkInvalidateObjects(cconn2)
self.checkInvalidateObjects(cconn3)
# Transaction 2: 2 storage nodes involved, one will die
msg_id_2 = 2
ttid2 = tm.begin(node1)
tid2 = tm.prepare(ttid2, 1, oid_list,
[node1.getUUID(), node2.getUUID()], msg_id_2)
# T2: pending locking answer, client keeps waiting
self.checkNoPacketSent(cconn2, check_notify=False)
tm.remove(node1.getUUID(), ttid2)
# Transaction 3: 1 storage node involved, which won't die
msg_id_3 = 3
ttid3 = tm.begin(node1)
tid3 = tm.prepare(ttid3, 1, oid_list,
[node2.getUUID(), ], msg_id_3)
# T3: action not significant to this transaction, so no response
self.checkNoPacketSent(cconn3, check_notify=False)
tm.remove(node1.getUUID(), ttid3)
def test_answerPack(self):
# Note: incoming status has no meaning here, so it's left as False.
node1, conn1 = self._getStorage()
node2, conn2 = self._getStorage()
self.app.packing = None
# Does nothing
self.service.answerPack(None, False)
client_conn = Mock({
'getPeerId': 512,
})
client_peer_id = 42
self.app.packing = (client_conn, client_peer_id, set([conn1.getUUID(),
conn2.getUUID()]))
self.service.answerPack(conn1, False)
self.checkNoPacketSent(client_conn)
self.assertEqual(self.app.packing[2], set([conn2.getUUID(), ]))
self.service.answerPack(conn2, False)
status = self.checkAnswerPacket(client_conn, Packets.AnswerPack,
decode=True)[0]
# TODO: verify packet peer id
self.assertTrue(status)
self.assertEqual(self.app.packing, None)
def test_notifyReady(self):
node, conn = self._getStorage()
uuid = node.getUUID()
self.assertFalse(self.app.isStorageReady(uuid))
self.service.notifyReady(conn)
self.assertTrue(self.app.isStorageReady(uuid))
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testTransactions.py 0000664 0000000 0000000 00000024560 11634614701 0027556 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2006-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from struct import pack, unpack
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import ZERO_TID
from neo.master.transactions import Transaction, TransactionManager
from neo.master.transactions import packTID, unpackTID, addTID, DelayedError
class testTransactionManager(NeoUnitTestBase):
def makeTID(self, i):
return pack('!Q', i)
def makeOID(self, i):
return pack('!Q', i)
def makeUUID(self, i):
return '\0' * 12 + pack('!Q', i)
def makeNode(self, i):
uuid = self.makeUUID(i)
node = Mock({'getUUID': uuid, '__hash__': i, '__repr__': 'FakeNode'})
return uuid, node
def testTransaction(self):
# test data
node = Mock({'__repr__': 'Node'})
tid = self.makeTID(1)
ttid = self.makeTID(2)
oid_list = (oid1, oid2) = [self.makeOID(1), self.makeOID(2)]
uuid_list = (uuid1, uuid2) = [self.makeUUID(1), self.makeUUID(2)]
msg_id = 1
# create transaction object
txn = Transaction(node, ttid)
txn.prepare(tid, oid_list, uuid_list, msg_id)
self.assertEqual(txn.getUUIDList(), uuid_list)
self.assertEqual(txn.getOIDList(), oid_list)
# lock nodes one by one
self.assertFalse(txn.lock(uuid1))
self.assertTrue(txn.lock(uuid2))
# check that repr() works
repr(txn)
def testManager(self):
# test data
node = Mock({'__hash__': 1})
msg_id = 1
oid_list = (oid1, oid2) = self.makeOID(1), self.makeOID(2)
uuid_list = (uuid1, uuid2) = self.makeUUID(1), self.makeUUID(2)
client_uuid = self.makeUUID(3)
# create transaction manager
callback = Mock()
txnman = TransactionManager(on_commit=callback)
self.assertFalse(txnman.hasPending())
self.assertEqual(txnman.registerForNotification(uuid1), set())
# begin the transaction
ttid = txnman.begin(node)
self.assertTrue(ttid is not None)
self.assertEqual(len(txnman.registerForNotification(uuid1)), 1)
self.assertTrue(txnman.hasPending())
# prepare the transaction
tid = txnman.prepare(ttid, 1, oid_list, uuid_list, msg_id)
self.assertTrue(txnman.hasPending())
self.assertEqual(txnman.registerForNotification(uuid1), set([ttid]))
txn = txnman[ttid]
self.assertEqual(txn.getTID(), tid)
self.assertEqual(txn.getUUIDList(), list(uuid_list))
self.assertEqual(txn.getOIDList(), list(oid_list))
# lock nodes
txnman.lock(ttid, uuid1)
self.assertEqual(len(callback.getNamedCalls('__call__')), 0)
txnman.lock(ttid, uuid2)
self.assertEqual(len(callback.getNamedCalls('__call__')), 1)
# transaction finished
txnman.remove(client_uuid, ttid)
self.assertEqual(txnman.registerForNotification(uuid1), set())
def testAbortFor(self):
oid_list = [self.makeOID(1), ]
storage_1_uuid, node1 = self.makeNode(1)
storage_2_uuid, node2 = self.makeNode(2)
client_uuid, client = self.makeNode(3)
txnman = TransactionManager(lambda tid, txn: None)
# register a transaction
self.assertEqual(txnman.registerForNotification(storage_1_uuid), set())
ttid1 = txnman.begin(client)
tid1 = txnman.prepare(ttid1, 1, oid_list, [storage_1_uuid], 1)
self.assertEqual(txnman.registerForNotification(storage_1_uuid), set([ttid1]))
# abort transactions of another node, transaction stays
txnman.abortFor(node2)
self.assertEqual(txnman.registerForNotification(storage_1_uuid), set([ttid1]))
# abort transactions of the requesting node; the transaction is not
# removed because it is prepared and must remain until the end of
# the 2PC
txnman.abortFor(node1)
self.assertEqual(txnman.registerForNotification(storage_1_uuid), set([ttid1]))
self.assertTrue(txnman.hasPending())
# ...and the lock is available
txnman.begin(client, self.getNextTID())
def test_getNextOIDList(self):
txnman = TransactionManager(lambda tid, txn: None)
# must raise as we don't have one
self.assertEqual(txnman.getLastOID(), None)
self.assertRaises(RuntimeError, txnman.getNextOIDList, 1)
# ask list
txnman.setLastOID(self.getOID(1))
oid_list = txnman.getNextOIDList(15)
self.assertEqual(len(oid_list), 15)
# starting from 1, so generated OIDs go from 2 to 16
for i, oid in enumerate(oid_list):
self.assertEqual(oid, self.getOID(i+2))
def test_forget(self):
client1 = Mock({'__hash__': 1})
client2 = Mock({'__hash__': 2})
client3 = Mock({'__hash__': 3})
storage_1_uuid = self.makeUUID(1)
storage_2_uuid = self.makeUUID(2)
oid_list = [self.makeOID(1), ]
client_uuid = self.makeUUID(3)
tm = TransactionManager(lambda tid, txn: None)
# Transaction 1: 2 storage nodes involved, one will die and the other
# has already answered the lock request
msg_id_1 = 1
ttid1 = tm.begin(client1)
tid1 = tm.prepare(ttid1, 1, oid_list,
[storage_1_uuid, storage_2_uuid], msg_id_1)
tm.lock(ttid1, storage_2_uuid)
t1 = tm[ttid1]
self.assertFalse(t1.locked())
# Storage 1 dies:
# t1 is over
self.assertTrue(t1.forget(storage_1_uuid))
self.assertEqual(t1.getUUIDList(), [storage_2_uuid])
tm.remove(client_uuid, tid1)
# Transaction 2: 2 storage nodes involved, one will die
msg_id_2 = 2
ttid2 = tm.begin(client2)
tid2 = tm.prepare(ttid2, 1, oid_list,
[storage_1_uuid, storage_2_uuid], msg_id_2)
t2 = tm[ttid2]
self.assertFalse(t2.locked())
# Storage 1 dies:
# t2 still waits for storage 2
self.assertFalse(t2.forget(storage_1_uuid))
self.assertEqual(t2.getUUIDList(), [storage_2_uuid])
self.assertTrue(t2.lock(storage_2_uuid))
tm.remove(client_uuid, tid2)
# Transaction 3: 1 storage node involved, which won't die
msg_id_3 = 3
ttid3 = tm.begin(client3)
tid3 = tm.prepare(ttid3, 1, oid_list, [storage_2_uuid, ],
msg_id_3)
t3 = tm[ttid3]
self.assertFalse(t3.locked())
# Storage 1 dies:
# t3 doesn't care
self.assertFalse(t3.forget(storage_1_uuid))
self.assertEqual(t3.getUUIDList(), [storage_2_uuid])
self.assertTrue(t3.lock(storage_2_uuid))
tm.remove(client_uuid, tid3)
def testTIDUtils(self):
"""
Tests packTID/unpackTID/addTID.
"""
min_tid = pack('!LL', 0, 0)
min_unpacked_tid = ((1900, 1, 1, 0, 0), 0)
max_tid = pack('!LL', 2**32 - 1, 2 ** 32 - 1)
# ((((9917 - 1900) * 12 + (10 - 1)) * 31 + (14 - 1)) * 24 + 4) * 60 +
# 15 == 2**32 - 1
max_unpacked_tid = ((9917, 10, 14, 4, 15), 2**32 - 1)
self.assertEqual(unpackTID(min_tid), min_unpacked_tid)
self.assertEqual(unpackTID(max_tid), max_unpacked_tid)
self.assertEqual(packTID(min_unpacked_tid), min_tid)
self.assertEqual(packTID(max_unpacked_tid), max_tid)
self.assertEqual(addTID(min_tid, 1), pack('!LL', 0, 1))
self.assertEqual(addTID(pack('!LL', 0, 2**32 - 1), 1),
pack('!LL', 1, 0))
self.assertEqual(addTID(pack('!LL', 0, 2**32 - 1), 2**32 + 1),
pack('!LL', 2, 0))
# Check impossible dates are avoided (2010/11/31 doesn't exist)
self.assertEqual(
unpackTID(addTID(packTID(((2010, 11, 30, 23, 59), 2**32 - 1)), 1)),
((2010, 12, 1, 0, 0), 0))
def testTransactionLock(self):
"""
Transaction lock is present to ensure invalidation TIDs are sent in
strictly increasing order.
Note: this implementation might change later, to allow more parallelism.
"""
client_uuid, client = self.makeNode(1)
tm = TransactionManager(lambda tid, txn: None)
# With a requested TID, lock spans from begin to remove
ttid1 = self.getNextTID()
ttid2 = self.getNextTID()
tid1 = tm.begin(client, ttid1)
self.assertEqual(tid1, ttid1)
tm.remove(client_uuid, tid1)
# Without a requested TID, lock spans from prepare to remove only
ttid3 = tm.begin(client)
ttid4 = tm.begin(client) # Doesn't raise
node = Mock({'getUUID': client_uuid, '__hash__': 0})
tid4 = tm.prepare(ttid4, 1, [], [], 0)
tm.remove(client_uuid, tid4)
tm.prepare(ttid3, 1, [], [], 0)
def testClientDisconnectsAfterBegin(self):
client_uuid1, node1 = self.makeNode(1)
tm = TransactionManager(lambda tid, txn: None)
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tm.begin(node1, tid1)
tm.abortFor(node1)
self.assertTrue(tid1 not in tm)
def testUnlockPending(self):
callback = Mock()
uuid1, node1 = self.makeNode(1)
uuid2, node2 = self.makeNode(2)
storage_uuid = self.makeUUID(3)
tm = TransactionManager(callback)
ttid1 = tm.begin(node1)
ttid2 = tm.begin(node2)
tid1 = tm.prepare(ttid1, 1, [], [storage_uuid], 0)
tid2 = tm.prepare(ttid2, 1, [], [storage_uuid], 0)
tm.lock(ttid2, storage_uuid)
# txn 2 is still blocked by txn 1
self.assertEqual(len(callback.getNamedCalls('__call__')), 0)
tm.lock(ttid1, storage_uuid)
# both transactions are unlocked when txn 1 is fully locked
self.assertEqual(len(callback.getNamedCalls('__call__')), 2)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/master/testVerification.py 0000664 0000000 0000000 00000022644 11634614701 0027531 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from struct import pack, unpack
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, NodeStates
from neo.master.verification import VerificationManager, VerificationFailure
from neo.master.app import Application
class MasterVerificationTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
# create an application object
config = self.getMasterConfiguration()
self.app = Application(config)
self.app.pt.clear()
self.verification = VerificationManager(self.app)
self.app.loid = '\0' * 8
self.app.tm.setLastTID('\0' * 8)
for node in self.app.nm.getMasterList():
self.app.unconnected_master_node_set.add(node.getAddress())
node.setState(NodeStates.RUNNING)
# define some variables to simulate client and storage nodes
self.client_port = 11022
self.storage_port = 10021
self.master_port = 10011
self.master_address = ('127.0.0.1', self.master_port)
self.storage_address = ('127.0.0.1', self.storage_port)
# Common methods
def getLastUUID(self):
return self.uuid
def identifyToMasterNode(self, node_type=NodeTypes.STORAGE, ip="127.0.0.1",
port=10021):
"""Do first step of identification to MN
"""
uuid = self.getNewUUID()
self.app.nm.createFromNodeType(
node_type,
address=(ip, port),
uuid=uuid,
)
return uuid
# Tests
def test_01_connectionClosed(self):
# test a storage node; must raise as the cluster is no longer operational
uuid = self.identifyToMasterNode()
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(self.app.nm.getByAddress(conn.getAddress()).getState(),
NodeStates.UNKNOWN)
self.assertRaises(VerificationFailure, self.verification.connectionClosed, conn)
self.assertEqual(self.app.nm.getByAddress(conn.getAddress()).getState(),
NodeStates.TEMPORARILY_DOWN)
def _test_09_answerLastIDs(self):
# XXX: test disabled, should be an unexpected packet
verification = self.verification
uuid = self.identifyToMasterNode()
loid = self.app.loid
ltid = self.app.tm.getLastTID()
lptid = '\0' * 8
# send information more recent than what the PMN knows; this must raise
conn = self.getFakeConnection(uuid, self.storage_address)
node_list = []
new_ptid = unpack('!Q', lptid)[0]
new_ptid = pack('!Q', new_ptid + 1)
oid = unpack('!Q', loid)[0]
new_oid = pack('!Q', oid + 1)
upper, lower = unpack('!LL', ltid)
new_tid = pack('!LL', upper, lower + 10)
self.assertTrue(new_ptid > self.app.pt.getID())
self.assertTrue(new_oid > self.app.loid)
self.assertTrue(new_tid > self.app.tm.getLastTID())
self.assertRaises(VerificationFailure, verification.answerLastIDs, conn, new_oid, new_tid, new_ptid)
self.assertNotEqual(new_oid, self.app.loid)
self.assertNotEqual(new_tid, self.app.tm.getLastTID())
self.assertNotEqual(new_ptid, self.app.pt.getID())
def test_11_answerUnfinishedTransactions(self):
verification = self.verification
uuid = self.identifyToMasterNode()
# do nothing
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.assertEqual(len(self.verification._tid_set), 0)
new_tid = self.getNextTID()
verification.answerUnfinishedTransactions(conn, new_tid, [new_tid])
self.assertEqual(len(self.verification._tid_set), 0)
# update the sets
conn = self.getFakeConnection(uuid, self.storage_address)
self.verification._uuid_set.add(uuid)
self.assertEqual(len(self.verification._tid_set), 0)
new_tid = self.getNextTID(new_tid)
verification.answerUnfinishedTransactions(conn, new_tid, [new_tid])
self.assertTrue(uuid not in self.verification._uuid_set)
self.assertEqual(len(self.verification._tid_set), 1)
self.assertTrue(new_tid in self.verification._tid_set)
def test_12_answerTransactionInformation(self):
verification = self.verification
uuid = self.identifyToMasterNode()
# do nothing, as _oid_set is None
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
self.verification._oid_set = None
new_tid = self.getNextTID()
new_oid = self.getOID(1)
verification.answerTransactionInformation(conn, new_tid,
"user", "desc", "ext", False, [new_oid,])
self.assertEqual(self.verification._oid_set, None)
# do nothing as the uuid is not in _uuid_set
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._oid_set = set()
self.assertEqual(len(self.verification._oid_set), 0)
verification.answerTransactionInformation(conn, new_tid,
"user", "desc", "ext", False, [new_oid,])
self.assertEqual(len(self.verification._oid_set), 0)
# do work
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
self.assertEqual(len(self.verification._oid_set), 0)
verification.answerTransactionInformation(conn, new_tid,
"user", "desc", "ext", False, [new_oid,])
self.assertEqual(len(self.verification._oid_set), 1)
self.assertTrue(new_oid in self.verification._oid_set)
# must not work as the oid is different
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
self.assertEqual(len(self.verification._oid_set), 1)
new_oid = self.getOID(2)
self.assertRaises(ValueError, verification.answerTransactionInformation,
conn, new_tid, "user", "desc", "ext", False, [new_oid,])
def test_13_tidNotFound(self):
verification = self.verification
uuid = self.identifyToMasterNode()
# do nothing as the uuid is not in _uuid_set
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._oid_set = []
verification.tidNotFound(conn, "msg")
self.assertNotEqual(self.verification._oid_set, None)
# do work as the uuid is in _uuid_set
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
self.verification._oid_set = []
verification.tidNotFound(conn, "msg")
self.assertEqual(self.verification._oid_set, None)
def test_14_answerObjectPresent(self):
verification = self.verification
uuid = self.identifyToMasterNode()
# do nothing as the uuid is not in _uuid_set
new_tid = self.getNextTID()
new_oid = self.getOID(1)
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
verification.answerObjectPresent(conn, new_oid, new_tid)
# do work
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
verification.answerObjectPresent(conn, new_oid, new_tid)
self.assertTrue(uuid not in self.verification._uuid_set)
def test_15_oidNotFound(self):
verification = self.verification
uuid = self.identifyToMasterNode()
# do nothing as the uuid is not in _uuid_set
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.app._object_present = True
self.assertTrue(self.app._object_present)
verification.oidNotFound(conn, "msg")
self.assertTrue(self.app._object_present)
# do work as the uuid is in _uuid_set
conn = self.getFakeConnection(uuid, self.storage_address)
self.assertEqual(len(self.verification._uuid_set), 0)
self.verification._uuid_set.add(uuid)
self.assertTrue(self.app._object_present)
verification.oidNotFound(conn, "msg")
self.assertFalse(self.app._object_present)
self.assertTrue(uuid not in self.verification._uuid_set)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/stat_zodb.py 0000775 0000000 0000000 00000012755 11634614701 0024712 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
# -*- coding: utf-8 -*-
import math, os, random, sys
from cStringIO import StringIO
from ZODB.utils import p64
from ZODB.BaseStorage import TransactionRecord
from ZODB.FileStorage import FileStorage
# Stats of a 43.5 GB production Data.fs
# µ σ
# size of object 6.04237779991 1.55811487853
# # objects / transaction 1.04108991045 0.906703192546
# size of transaction 7.98615420517 1.6624220402
#
# % of new object / transaction: 0.810080409164
# # of transactions: 1541194
# compression ratio: 28.5 % (gzip -6)
PROD1 = lambda random=random: DummyZODB(6.04237779991, 1.55811487853,
1.04108991045, 0.906703192546,
0.810080409164, random)
def DummyData(random=random):
# return data that gzips to about 28.5 %
# make sure the sample is bigger than the compressor's dictionary
data = ''.join(chr(int(random.gauss(0, .8)) % 256) for x in xrange(100000))
return StringIO(data)
class DummyZODB(object):
"""
Object size and count of generated transactions follow a log-normal
distribution, where *_mu and *_sigma are their parameters.
"""
def __init__(self, obj_size_mu, obj_size_sigma,
obj_count_mu, obj_count_sigma,
new_ratio, random=random):
self.obj_size_mu = obj_size_mu
self.obj_size_sigma = obj_size_sigma
self.obj_count_mu = obj_count_mu
self.obj_count_sigma = obj_count_sigma
self.random = random
self.new_ratio = new_ratio
self.next_oid = 0
self.err_count = 0
def __call__(self):
variate = self.random.lognormvariate
oid_set = set()
for i in xrange(int(round(variate(self.obj_count_mu,
self.obj_count_sigma))) or 1):
if len(oid_set) >= self.next_oid or \
self.random.random() < self.new_ratio:
oid = self.next_oid
self.next_oid = oid + 1
else:
while True:
oid = self.random.randrange(self.next_oid)
if oid not in oid_set:
break
oid_set.add(oid)
yield p64(oid), int(round(variate(self.obj_size_mu,
self.obj_size_sigma))) or 1
def as_storage(self, transaction_count, dummy_data_file=None):
if dummy_data_file is None:
dummy_data_file = DummyData(self.random)
class dummy_change(object):
data_txn = None
version = ''
def __init__(self, tid, oid, size):
self.tid = tid
self.oid = oid
data = ''
while size:
d = dummy_data_file.read(size)
size -= len(d)
data += d
if size:
dummy_data_file.seek(0)
self.data = data
class dummy_transaction(TransactionRecord):
def __init__(transaction, *args):
TransactionRecord.__init__(transaction, *args)
transaction_size = 0
transaction.record_list = []
add_record = transaction.record_list.append
for x in self():
oid, size = x
transaction_size += size
add_record(dummy_change(transaction.tid, oid, size))
transaction.size = transaction_size
def __iter__(transaction):
return iter(transaction.record_list)
class dummy_storage(object):
size = 0
def iterator(storage, *args):
args = ' ', '', '', {}
for i in xrange(1, transaction_count+1):
t = dummy_transaction(p64(i), *args)
storage.size += t.size
yield t
def getSize(self):
return self.size
return dummy_storage()
def lognorm_stat(X):
Y = map(math.log, X)
n = len(Y)
mu = sum(Y) / n
s2 = sum(d*d for d in (y - mu for y in Y)) / n
return mu, math.sqrt(s2)
def stat(*storages):
obj_size_list = []
obj_count_list = []
tr_size_list = []
oid_set = set()
for storage in storages:
for transaction in storage.iterator():
obj_count = tr_size = 0
for r in transaction:
if r.data:
obj_count += 1
oid = r.oid
if oid not in oid_set:
oid_set.add(oid)
size = len(r.data)
tr_size += size
obj_size_list.append(size)
obj_count_list.append(obj_count)
tr_size_list.append(tr_size)
new_ratio = float(len(oid_set)) / len(obj_size_list)
return (lognorm_stat(obj_size_list),
lognorm_stat(obj_count_list),
lognorm_stat(tr_size_list),
new_ratio, len(tr_size_list))
def main():
s = stat(*(FileStorage(x, read_only=True) for x in sys.argv[1:]))
print(u" %-15s σ\n"
"size of object %-15s %s\n"
"# objects / transaction %-15s %s\n"
"size of transaction %-15s %s\n"
"\n%% of new object / transaction: %s"
"\n# of transactions: %s"
% ((u"µ",) + s[0] + s[1] + s[2] + s[3:]))
if __name__ == "__main__":
sys.exit(main())
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/ 0000775 0000000 0000000 00000000000 11634614701 0023776 5 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/__init__.py 0000664 0000000 0000000 00000000000 11634614701 0026075 0 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testClientHandler.py 0000664 0000000 0000000 00000030544 11634614701 0027772 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock, ReturnValues
from collections import deque
from neo.tests import NeoUnitTestBase
from neo.storage.app import Application
from neo.storage.transactions import ConflictError, DelayedError
from neo.storage.handlers.client import ClientOperationHandler
from neo.lib.protocol import INVALID_PARTITION
from neo.lib.protocol import INVALID_TID, INVALID_OID
from neo.lib.protocol import Packets, LockState
class StorageClientHandlerTests(NeoUnitTestBase):
def checkHandleUnexpectedPacket(self, _call, _msg_type, _listening=True, **kwargs):
conn = self.getFakeConnection(address=("127.0.0.1", self.master_port),
is_server=_listening)
# hook
self.operation.peerBroken = lambda c: c.peerBrokenCalled()
self.checkUnexpectedPacketRaised(_call, conn=conn, **kwargs)
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.app.transaction_dict = {}
self.app.store_lock_dict = {}
self.app.load_lock_dict = {}
self.app.event_queue = deque()
self.app.event_queue_dict = {}
self.app.tm = Mock({'__contains__': True})
# handler
self.operation = ClientOperationHandler(self.app)
# set pmn
self.master_uuid = self.getNewUUID()
pmn = self.app.nm.getMasterList()[0]
pmn.setUUID(self.master_uuid)
self.app.primary_master_node = pmn
self.master_port = 10010
def tearDown(self):
self.app.close()
del self.app
super(StorageClientHandlerTests, self).tearDown()
def _getConnection(self, uuid=None):
return self.getFakeConnection(uuid=uuid, address=('127.0.0.1', 1000))
def _checkTransactionsAborted(self, uuid):
calls = self.app.tm.mockGetNamedCalls('abortFor')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(uuid)
def test_connectionLost(self):
uuid = self.getNewUUID()
self.app.nm.createClient(uuid=uuid)
conn = self._getConnection(uuid=uuid)
self.operation.connectionClosed(conn)
def test_18_askTransactionInformation1(self):
# transaction does not exists
conn = self._getConnection()
self.app.dm = Mock({'getNumPartitions': 1})
self.operation.askTransactionInformation(conn, INVALID_TID)
self.checkErrorPacket(conn)
def test_18_askTransactionInformation2(self):
# answer
conn = self._getConnection()
oid_list = [self.getOID(1), self.getOID(2)]
dm = Mock({ "getTransaction": (oid_list, 'user', 'desc', '', False), })
self.app.dm = dm
self.operation.askTransactionInformation(conn, INVALID_TID)
self.checkAnswerTransactionInformation(conn)
def test_24_askObject1(self):
# delayed response
conn = self._getConnection()
self.app.dm = Mock()
self.app.tm = Mock({'loadLocked': True})
self.app.load_lock_dict[INVALID_OID] = object()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=INVALID_OID,
serial=INVALID_TID, tid=INVALID_TID)
self.assertEqual(len(self.app.event_queue), 1)
self.checkNoPacketSent(conn)
self.assertEqual(len(self.app.dm.mockGetNamedCalls('getObject')), 0)
def test_24_askObject2(self):
# invalid serial / tid / packet not found
self.app.dm = Mock({'getObject': None})
conn = self._getConnection()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=INVALID_OID,
serial=INVALID_TID, tid=INVALID_TID)
calls = self.app.dm.mockGetNamedCalls('getObject')
self.assertEqual(len(self.app.event_queue), 0)
self.assertEqual(len(calls), 1)
calls[0].checkArgs(INVALID_OID, INVALID_TID, INVALID_TID)
self.checkErrorPacket(conn)
def test_24_askObject3(self):
# object found => answer
serial = self.getNextTID()
next_serial = self.getNextTID()
oid = self.getOID(1)
tid = self.getNextTID()
self.app.dm = Mock({'getObject': (serial, next_serial, 0, 0, '', None)})
conn = self._getConnection()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=oid, serial=serial, tid=tid)
self.assertEqual(len(self.app.event_queue), 0)
self.checkAnswerObject(conn)
def test_25_askTIDs1(self):
# invalid offsets => error
app = self.app
app.pt = Mock()
app.dm = Mock()
conn = self._getConnection()
self.checkProtocolErrorRaised(self.operation.askTIDs, conn, 1, 1, None)
self.assertEqual(len(app.pt.mockGetNamedCalls('getCellList')), 0)
self.assertEqual(len(app.dm.mockGetNamedCalls('getTIDList')), 0)
def test_25_askTIDs2(self):
# normal case => answer
conn = self._getConnection()
self.app.pt = Mock({'getPartitions': 1})
self.app.dm = Mock({'getTIDList': (INVALID_TID, )})
self.operation.askTIDs(conn, 1, 2, 1)
calls = self.app.dm.mockGetNamedCalls('getTIDList')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(1, 1, 1, [1, ])
self.checkAnswerTids(conn)
def test_25_askTIDs3(self):
# invalid partition => answer usable partitions
conn = self._getConnection()
cell = Mock({'getUUID':self.app.uuid})
self.app.dm = Mock({'getTIDList': (INVALID_TID, )})
self.app.pt = Mock({
'getCellList': (cell, ),
'getPartitions': 1,
'getAssignedPartitionList': [0],
})
self.operation.askTIDs(conn, 1, 2, INVALID_PARTITION)
self.assertEqual(len(self.app.pt.mockGetNamedCalls('getAssignedPartitionList')), 1)
calls = self.app.dm.mockGetNamedCalls('getTIDList')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(1, 1, 1, [0])
self.checkAnswerTids(conn)
def test_26_askObjectHistory1(self):
# invalid offsets => error
app = self.app
app.dm = Mock()
conn = self._getConnection()
self.checkProtocolErrorRaised(self.operation.askObjectHistory, conn,
1, 1, None)
self.assertEqual(len(app.dm.mockGetNamedCalls('getObjectHistory')), 0)
def test_26_askObjectHistory2(self):
oid1, oid2 = self.getOID(1), self.getOID(2)
# first case: empty history
conn = self._getConnection()
self.app.dm = Mock({'getObjectHistory': None})
self.operation.askObjectHistory(conn, oid1, 1, 2)
self.checkErrorPacket(conn)
# second case: not empty history
conn = self._getConnection()
serial = self.getNextTID()
self.app.dm = Mock({'getObjectHistory': [(serial, 0, ), ]})
self.operation.askObjectHistory(conn, oid2, 1, 2)
self.checkAnswerObjectHistory(conn)
def test_askStoreTransaction(self):
uuid = self.getNewUUID()
conn = self._getConnection(uuid=uuid)
tid = self.getNextTID()
user = 'USER'
desc = 'DESC'
ext = 'EXT'
oid_list = (self.getOID(1), self.getOID(2))
self.operation.askStoreTransaction(conn, tid, user, desc, ext, oid_list)
calls = self.app.tm.mockGetNamedCalls('storeTransaction')
self.assertEqual(len(calls), 1)
self.checkAnswerStoreTransaction(conn)
def _getObject(self):
oid = self.getOID(0)
serial = self.getNextTID()
return (oid, serial, 1, '1', 'DATA')
def _checkStoreObjectCalled(self, *args):
calls = self.app.tm.mockGetNamedCalls('storeObject')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(*args)
def test_askStoreObject1(self):
# no conflict => answer
uuid = self.getNewUUID()
conn = self._getConnection(uuid=uuid)
tid = self.getNextTID()
oid, serial, comp, checksum, data = self._getObject()
self.operation.askStoreObject(conn, oid, serial, comp, checksum,
data, None, tid, False)
self._checkStoreObjectCalled(tid, serial, oid, comp,
checksum, data, None, False)
pconflicting, poid, pserial = self.checkAnswerStoreObject(conn,
decode=True)
self.assertEqual(pconflicting, 0)
self.assertEqual(poid, oid)
self.assertEqual(pserial, serial)
def test_askStoreObjectWithDataTID(self):
# same as test_askStoreObject1, but with a non-None data_tid value
uuid = self.getNewUUID()
conn = self._getConnection(uuid=uuid)
tid = self.getNextTID()
oid, serial, comp, checksum, data = self._getObject()
data_tid = self.getNextTID()
self.operation.askStoreObject(conn, oid, serial, comp, checksum,
'', data_tid, tid, False)
self._checkStoreObjectCalled(tid, serial, oid, comp,
checksum, None, data_tid, False)
pconflicting, poid, pserial = self.checkAnswerStoreObject(conn,
decode=True)
self.assertEqual(pconflicting, 0)
self.assertEqual(poid, oid)
self.assertEqual(pserial, serial)
def test_askStoreObject2(self):
# conflict error
uuid = self.getNewUUID()
conn = self._getConnection(uuid=uuid)
tid = self.getNextTID()
locking_tid = self.getNextTID(tid)
def fakeStoreObject(*args):
raise ConflictError(locking_tid)
self.app.tm.storeObject = fakeStoreObject
oid, serial, comp, checksum, data = self._getObject()
self.operation.askStoreObject(conn, oid, serial, comp, checksum,
data, None, tid, False)
pconflicting, poid, pserial = self.checkAnswerStoreObject(conn,
decode=True)
self.assertEqual(pconflicting, 1)
self.assertEqual(poid, oid)
self.assertEqual(pserial, locking_tid)
def test_abortTransaction(self):
conn = self._getConnection()
tid = self.getNextTID()
self.operation.abortTransaction(conn, tid)
calls = self.app.tm.mockGetNamedCalls('abort')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid)
def test_askObjectUndoSerial(self):
uuid = self.getNewUUID()
conn = self._getConnection(uuid=uuid)
tid = self.getNextTID()
ltid = self.getNextTID()
undone_tid = self.getNextTID()
# Keep 2 entries here, so we check findUndoTID is called only once.
oid_list = [self.getOID(1), self.getOID(2)]
obj2_data = [] # Marker
self.app.tm = Mock({
'getObjectFromTransaction': None,
})
self.app.dm = Mock({
'findUndoTID': ReturnValues((None, None, False), )
})
self.operation.askObjectUndoSerial(conn, tid, ltid, undone_tid, oid_list)
self.checkErrorPacket(conn)
def test_askHasLock(self):
tid_1 = self.getNextTID()
tid_2 = self.getNextTID()
oid = self.getNextTID()
def getLockingTID(oid):
return locking_tid
self.app.tm.getLockingTID = getLockingTID
for locking_tid, status in (
(None, LockState.NOT_LOCKED),
(tid_1, LockState.GRANTED),
(tid_2, LockState.GRANTED_TO_OTHER),
):
conn = self._getConnection()
self.operation.askHasLock(conn, tid_1, oid)
p_oid, p_status = self.checkAnswerPacket(conn,
Packets.AnswerHasLock, decode=True)
self.assertEqual(oid, p_oid)
self.assertEqual(status, p_status)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testIdentificationHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import NodeTypes, NotReadyError, \
BrokenNodeDisallowedError
from neo.lib.pt import PartitionTable
from neo.storage.app import Application
from neo.storage.handlers.identification import IdentificationHandler
class StorageIdentificationHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.app.name = 'NEO'
self.app.ready = True
self.app.pt = PartitionTable(4, 1)
self.identification = IdentificationHandler(self.app)
def tearDown(self):
self.app.close()
del self.app
super(StorageIdentificationHandlerTests, self).tearDown()
def test_requestIdentification1(self):
""" nodes are rejected during election or if unknown storage """
self.app.ready = False
self.assertRaises(
NotReadyError,
self.identification.requestIdentification,
self.getFakeConnection(),
NodeTypes.CLIENT,
self.getNewUUID(),
None,
self.app.name,
)
self.app.ready = True
self.assertRaises(
NotReadyError,
self.identification.requestIdentification,
self.getFakeConnection(),
NodeTypes.STORAGE,
self.getNewUUID(),
None,
self.app.name,
)
def test_requestIdentification3(self):
""" broken nodes must be rejected """
uuid = self.getNewUUID()
conn = self.getFakeConnection(uuid=uuid)
node = self.app.nm.createClient(uuid=uuid)
node.setBroken()
self.assertRaises(BrokenNodeDisallowedError,
self.identification.requestIdentification,
conn,
NodeTypes.CLIENT,
uuid,
None,
self.app.name,
)
def test_requestIdentification2(self):
""" accepted client must be connected and running """
uuid = self.getNewUUID()
conn = self.getFakeConnection(uuid=uuid)
node = self.app.nm.createClient(uuid=uuid)
self.identification.requestIdentification(conn, NodeTypes.CLIENT, uuid,
None, self.app.name)
self.assertTrue(node.isRunning())
self.assertTrue(node.isConnected())
self.assertEqual(node.getUUID(), uuid)
self.assertTrue(node.getConnection() is conn)
self.checkUUIDSet(conn, uuid)
args = self.checkAcceptIdentification(conn, decode=True)
node_type, address, _np, _nr, _uuid = args
self.assertEqual(node_type, NodeTypes.STORAGE)
self.assertEqual(address, None)
self.assertEqual(_uuid, uuid)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testInitializationHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.pt import PartitionTable
from neo.storage.app import Application
from neo.storage.handlers.initialization import InitializationHandler
from neo.lib.protocol import CellStates, ProtocolError
from neo.lib.exception import PrimaryFailure
class StorageInitializationHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.verification = InitializationHandler(self.app)
# define some variable to simulate client and storage node
self.master_port = 10010
self.storage_port = 10020
self.client_port = 11011
self.num_partitions = 1009
self.num_replicas = 2
self.app.operational = False
self.app.load_lock_dict = {}
self.app.pt = PartitionTable(self.num_partitions, self.num_replicas)
def tearDown(self):
self.app.close()
del self.app
super(StorageInitializationHandlerTests, self).tearDown()
# Common methods
def getLastUUID(self):
return self.uuid
def getClientConnection(self):
address = ("127.0.0.1", self.client_port)
return self.getFakeConnection(uuid=self.getNewUUID(), address=address)
def test_03_connectionClosed(self):
conn = self.getClientConnection()
self.app.listening_conn = object() # mark as running
self.assertRaises(PrimaryFailure, self.verification.connectionClosed, conn,)
# nothing happens
self.checkNoPacketSent(conn)
def test_09_answerPartitionTable(self):
# send a table
conn = self.getClientConnection()
self.app.pt = PartitionTable(3, 2)
node_1 = self.getNewUUID()
node_2 = self.getNewUUID()
node_3 = self.getNewUUID()
self.app.uuid = node_1
        # the SN already knows all nodes
self.app.nm.createStorage(uuid=node_1)
self.app.nm.createStorage(uuid=node_2)
self.app.nm.createStorage(uuid=node_3)
self.assertEqual(self.app.dm.getPartitionTable(), [])
row_list = [(0, ((node_1, CellStates.UP_TO_DATE), (node_2, CellStates.UP_TO_DATE))),
(1, ((node_3, CellStates.UP_TO_DATE), (node_1, CellStates.UP_TO_DATE))),
(2, ((node_2, CellStates.UP_TO_DATE), (node_3, CellStates.UP_TO_DATE)))]
self.assertFalse(self.app.pt.filled())
# send a complete new table and ack
self.verification.answerPartitionTable(conn, 2, row_list)
self.assertTrue(self.app.pt.filled())
self.assertEqual(self.app.pt.getID(), 2)
self.assertNotEqual(self.app.dm.getPartitionTable(), [])
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testMasterHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from collections import deque
from neo.tests import NeoUnitTestBase
from neo.storage.app import Application
from neo.storage.handlers.master import MasterOperationHandler
from neo.lib.exception import PrimaryFailure, OperationFailure
from neo.lib.pt import PartitionTable
from neo.lib.protocol import CellStates, ProtocolError, Packets
from neo.lib.protocol import INVALID_TID, INVALID_OID
class StorageMasterHandlerTests(NeoUnitTestBase):
    def checkHandleUnexpectedPacket(self, _call, _msg_type, _listening=True, **kwargs):
        # getMasterConnection takes no argument, so build the connection
        # directly to honour the _listening flag
        conn = self.getFakeConnection(uuid=self.master_uuid,
            address=("127.0.0.1", self.master_port), is_server=_listening)
        # hook
        self.operation.peerBroken = lambda c: c.peerBrokenCalled()
        self.checkUnexpectedPacketRaised(_call, conn=conn, **kwargs)
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.app.transaction_dict = {}
self.app.store_lock_dict = {}
self.app.load_lock_dict = {}
self.app.event_queue = deque()
# handler
self.operation = MasterOperationHandler(self.app)
# set pmn
self.master_uuid = self.getNewUUID()
pmn = self.app.nm.getMasterList()[0]
pmn.setUUID(self.master_uuid)
self.app.primary_master_node = pmn
self.master_port = 10010
def tearDown(self):
self.app.close()
del self.app
super(StorageMasterHandlerTests, self).tearDown()
def getMasterConnection(self):
address = ("127.0.0.1", self.master_port)
return self.getFakeConnection(uuid=self.master_uuid, address=address)
def test_07_connectionClosed2(self):
# primary has closed the connection
conn = self.getMasterConnection()
self.app.listening_conn = object() # mark as running
self.assertRaises(PrimaryFailure, self.operation.connectionClosed, conn)
self.checkNoPacketSent(conn)
def test_14_notifyPartitionChanges1(self):
# old partition change -> do nothing
app = self.app
conn = self.getMasterConnection()
app.replicator = Mock({})
self.app.pt = Mock({'getID': 1})
count = len(self.app.nm.getList())
self.operation.notifyPartitionChanges(conn, 0, ())
self.assertEqual(self.app.pt.getID(), 1)
self.assertEqual(len(self.app.nm.getList()), count)
calls = self.app.replicator.mockGetNamedCalls('removePartition')
self.assertEqual(len(calls), 0)
calls = self.app.replicator.mockGetNamedCalls('addPartition')
self.assertEqual(len(calls), 0)
def test_14_notifyPartitionChanges2(self):
# cases :
uuid1, uuid2, uuid3 = [self.getNewUUID() for i in range(3)]
cells = (
(0, uuid1, CellStates.UP_TO_DATE),
(1, uuid2, CellStates.DISCARDED),
(2, uuid3, CellStates.OUT_OF_DATE),
)
# context
conn = self.getMasterConnection()
app = self.app
# register nodes
app.nm.createStorage(uuid=uuid1)
app.nm.createStorage(uuid=uuid2)
app.nm.createStorage(uuid=uuid3)
ptid1, ptid2 = (1, 2)
self.assertNotEqual(ptid1, ptid2)
app.pt = PartitionTable(3, 1)
app.dm = Mock({ })
app.replicator = Mock({})
self.operation.notifyPartitionChanges(conn, ptid2, cells)
# ptid set
self.assertEqual(app.pt.getID(), ptid2)
# dm call
calls = self.app.dm.mockGetNamedCalls('changePartitionTable')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(ptid2, cells)
def test_16_stopOperation1(self):
# OperationFailure
conn = self.getFakeConnection(is_server=False)
self.assertRaises(OperationFailure, self.operation.stopOperation, conn)
def _getConnection(self):
return self.getFakeConnection()
def test_askLockInformation1(self):
""" Unknown transaction """
self.app.tm = Mock({'__contains__': False})
conn = self._getConnection()
oid_list = [self.getOID(1), self.getOID(2)]
tid = self.getNextTID()
ttid = self.getNextTID()
handler = self.operation
self.assertRaises(ProtocolError, handler.askLockInformation, conn,
ttid, tid, oid_list)
def test_askLockInformation2(self):
""" Lock transaction """
self.app.tm = Mock({'__contains__': True})
conn = self._getConnection()
tid = self.getNextTID()
ttid = self.getNextTID()
oid_list = [self.getOID(1), self.getOID(2)]
self.operation.askLockInformation(conn, ttid, tid, oid_list)
calls = self.app.tm.mockGetNamedCalls('lock')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(ttid, tid, oid_list)
self.checkAnswerInformationLocked(conn)
def test_notifyUnlockInformation1(self):
""" Unknown transaction """
self.app.tm = Mock({'__contains__': False})
conn = self._getConnection()
tid = self.getNextTID()
handler = self.operation
self.assertRaises(ProtocolError, handler.notifyUnlockInformation,
conn, tid)
def test_notifyUnlockInformation2(self):
""" Unlock transaction """
self.app.tm = Mock({'__contains__': True})
conn = self._getConnection()
tid = self.getNextTID()
self.operation.notifyUnlockInformation(conn, tid)
calls = self.app.tm.mockGetNamedCalls('unlock')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid)
self.checkNoPacketSent(conn)
def test_30_answerLastIDs(self):
# set critical TID on replicator
conn = self.getFakeConnection()
self.app.replicator = Mock()
self.operation.answerLastIDs(
conn=conn,
loid=INVALID_OID,
ltid=INVALID_TID,
lptid=INVALID_TID,
)
calls = self.app.replicator.mockGetNamedCalls('setCriticalTID')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(INVALID_TID)
def test_31_answerUnfinishedTransactions(self):
# set unfinished TID on replicator
conn = self.getFakeConnection()
self.app.replicator = Mock()
self.operation.answerUnfinishedTransactions(
conn=conn,
max_tid=INVALID_TID,
ttid_list=(INVALID_TID, ),
)
calls = self.app.replicator.mockGetNamedCalls('setUnfinishedTIDList')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(INVALID_TID, (INVALID_TID, ))
def test_askPack(self):
self.app.dm = Mock({'pack': None})
conn = self.getFakeConnection()
tid = self.getNextTID()
self.operation.askPack(conn, tid)
calls = self.app.dm.mockGetNamedCalls('pack')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid, self.app.tm.updateObjectDataForPack)
# Content has no meaning here, don't check.
self.checkAnswerPacket(conn, Packets.AnswerPack)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testReplication.py
#
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from struct import pack
from collections import deque
from neo.tests import NeoUnitTestBase
from neo.storage.database import buildDatabaseManager
from neo.storage.handlers.replication import ReplicationHandler
from neo.storage.handlers.replication import RANGE_LENGTH
from neo.storage.handlers.storage import StorageOperationHandler
from neo.storage.replicator import Replicator
from neo.lib.protocol import ZERO_OID, ZERO_TID
MAX_TRANSACTIONS = 10000
MAX_OBJECTS = 100000
MAX_TID = '\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFE' # != INVALID_TID
class FakeConnection(object):
def __init__(self):
self._msg_id = 0
self._queue = deque()
def allocateId(self):
self._msg_id += 1
return self._msg_id
def _addPacket(self, packet, *args, **kw):
packet.setId(self.allocateId())
self._queue.append(packet)
ask = _addPacket
answer = _addPacket
notify = _addPacket
def setPeerId(self, msg_id):
pass
def process(self, dhandler, dconn):
if not self._queue:
return False
while self._queue:
dhandler.dispatch(dconn, self._queue.popleft())
return True
class ReplicationTests(NeoUnitTestBase):
def checkReplicationProcess(self, reference, outdated):
pt = Mock({'getPartitions': 1})
# reference application
rapp = Mock({})
rapp.pt = pt
rapp.dm = reference
rapp.tm = Mock({'loadLocked': False})
mconn = FakeConnection()
rapp.master_conn = mconn
# outdated application
oapp = Mock({})
oapp.dm = outdated
oapp.pt = pt
oapp.master_conn = mconn
oapp.replicator = Replicator(oapp)
oapp.replicator.getCurrentOffset = lambda: 0
oapp.replicator.isCurrentConnection = lambda c: True
oapp.replicator.getCurrentCriticalTID = lambda: MAX_TID
# handlers and connections
rhandler = StorageOperationHandler(rapp)
rconn = FakeConnection()
ohandler = ReplicationHandler(oapp)
oconn = FakeConnection()
# run replication
ohandler.startReplication(oconn)
process = True
while process:
process = oconn.process(rhandler, rconn)
oapp.replicator.processDelayedTasks()
process |= rconn.process(ohandler, oconn)
# check transactions
for tid in reference.getTIDList(0, MAX_TRANSACTIONS, 1, [0]):
self.assertEqual(
reference.getTransaction(tid),
outdated.getTransaction(tid),
)
for tid in outdated.getTIDList(0, MAX_TRANSACTIONS, 1, [0]):
self.assertEqual(
outdated.getTransaction(tid),
reference.getTransaction(tid),
)
# check transactions
params = (ZERO_TID, '\xFF' * 8, MAX_TRANSACTIONS, 1, 0)
self.assertEqual(
reference.getReplicationTIDList(*params),
outdated.getReplicationTIDList(*params),
)
# check objects
params = (ZERO_OID, ZERO_TID, '\xFF' * 8, MAX_OBJECTS, 1, 0)
self.assertEqual(
reference.getObjectHistoryFrom(*params),
outdated.getObjectHistoryFrom(*params),
)
def buildStorage(self, transactions, objects, name='BTree', config=None):
def makeid(oid_or_tid):
return pack('!Q', oid_or_tid)
storage = buildDatabaseManager(name, config)
storage.getNumPartitions = lambda: 1
storage.setup(reset=True)
storage._transactions = transactions
storage._objects = objects
# store transactions
for tid in transactions:
transaction = ([ZERO_OID], 'user', 'desc', '', False)
storage.storeTransaction(makeid(tid), [], transaction, False)
# store object history
for tid, oid_list in objects.iteritems():
object_list = [(makeid(oid), False, 0, '', None) for oid in oid_list]
storage.storeTransaction(makeid(tid), object_list, None, False)
return storage
def testReplication0(self):
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=[1, 2, 3],
objects={1: [1], 2: [1], 3: [1]},
),
outdated=self.buildStorage(
transactions=[],
objects={},
),
)
def testReplication1(self):
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=[1, 2, 3],
objects={1: [1], 2: [1], 3: [1]},
),
outdated=self.buildStorage(
transactions=[1],
objects={1: [1]},
),
)
def testReplication2(self):
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=[1, 2, 3],
objects={1: [1, 2, 3]},
),
outdated=self.buildStorage(
transactions=[1, 2, 3],
objects={1: [1, 2, 3]},
),
)
def testChunkBeginning(self):
ref_number = range(RANGE_LENGTH + 1)
out_number = range(RANGE_LENGTH)
obj_list = [1, 2, 3]
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=ref_number,
objects=dict.fromkeys(ref_number, obj_list),
),
outdated=self.buildStorage(
transactions=out_number,
objects=dict.fromkeys(out_number, obj_list),
),
)
def testChunkEnd(self):
ref_number = range(RANGE_LENGTH)
out_number = range(RANGE_LENGTH - 1)
obj_list = [1, 2, 3]
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=ref_number,
objects=dict.fromkeys(ref_number, obj_list)
),
outdated=self.buildStorage(
transactions=out_number,
objects=dict.fromkeys(out_number, obj_list)
),
)
def testChunkMiddle(self):
obj_list = [1, 2, 3]
ref_number = range(RANGE_LENGTH)
out_number = range(4000)
out_number.remove(3000)
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=ref_number,
objects=dict.fromkeys(ref_number, obj_list)
),
outdated=self.buildStorage(
transactions=out_number,
objects=dict.fromkeys(out_number, obj_list)
),
)
def testFullChunkPart(self):
obj_list = [1, 2, 3]
ref_number = range(1001)
out_number = {}
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=ref_number,
objects=dict.fromkeys(ref_number, obj_list)
),
outdated=self.buildStorage(
transactions=out_number,
objects=dict.fromkeys(out_number, obj_list)
),
)
def testSameData(self):
obj_list = [1, 2, 3]
number = range(RANGE_LENGTH * 2)
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=number,
objects=dict.fromkeys(number, obj_list)
),
outdated=self.buildStorage(
transactions=number,
objects=dict.fromkeys(number, obj_list)
),
)
def testTooManyData(self):
obj_list = [0, 1]
ref_number = range(RANGE_LENGTH)
out_number = range(RANGE_LENGTH + 2)
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=ref_number,
objects=dict.fromkeys(ref_number, obj_list)
),
outdated=self.buildStorage(
transactions=out_number,
objects=dict.fromkeys(out_number, obj_list)
),
)
def testMissingObject(self):
self.checkReplicationProcess(
reference=self.buildStorage(
transactions=[1, 2],
objects=dict.fromkeys([1, 2], [1, 2]),
),
outdated=self.buildStorage(
transactions=[1, 2],
objects=dict.fromkeys([1], [1]),
),
)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testReplicationHandler.py
#
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.lib.util import add64, dump
from neo.tests import NeoUnitTestBase
from neo.lib.protocol import Packets, ZERO_OID, ZERO_TID
from neo.storage.handlers.replication import ReplicationHandler
from neo.storage.handlers.replication import RANGE_LENGTH, MIN_RANGE_LENGTH
class FakeDict(object):
def __init__(self, items):
self._items = items
self._dict = dict(items)
assert len(self._dict) == len(items), self._dict
def iteritems(self):
for item in self._items:
yield item
def iterkeys(self):
for key, value in self.iteritems():
yield key
def itervalues(self):
for key, value in self.iteritems():
yield value
def items(self):
return self._items[:]
def keys(self):
return [x for x, y in self._items]
def values(self):
return [y for x, y in self._items]
def __getitem__(self, key):
return self._dict[key]
def __getattr__(self, key):
return getattr(self._dict, key)
def __len__(self):
return len(self._dict)
class StorageReplicationHandlerTests(NeoUnitTestBase):
def getApp(self, conn=None, tid_check_result=(0, 0, ZERO_TID),
serial_check_result=(0, 0, ZERO_OID, 0, ZERO_TID),
tid_result=(),
history_result=None,
rid=0, critical_tid=ZERO_TID,
num_partitions=1,
):
if history_result is None:
history_result = {}
replicator = Mock({
'__repr__': 'Fake replicator',
'reset': None,
'checkSerialRange': None,
'checkTIDRange': None,
'getTIDCheckResult': tid_check_result,
'getSerialCheckResult': serial_check_result,
'getTIDsFromResult': tid_result,
'getObjectHistoryFromResult': history_result,
'getTIDsFrom': None,
'getObjectHistoryFrom': None,
'getCurrentOffset': rid,
'getCurrentCriticalTID': critical_tid,
})
def isCurrentConnection(other_conn):
return other_conn is conn
replicator.isCurrentConnection = isCurrentConnection
real_replicator = replicator
class FakeApp(object):
replicator = real_replicator
dm = Mock({
'storeTransaction': None,
'deleteObject': None,
})
pt = Mock({
'getPartitions': num_partitions,
})
return FakeApp
def _checkReplicationStarted(self, conn, rid, replicator):
min_tid, max_tid, length, partition = self.checkAskPacket(conn,
Packets.AskCheckTIDRange, decode=True)
self.assertEqual(min_tid, ZERO_TID)
self.assertEqual(length, RANGE_LENGTH)
self.assertEqual(partition, rid)
calls = replicator.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(min_tid, max_tid, length, partition)
def _checkPacketTIDList(self, conn, tid_list, next_tid, app):
packet_list = [x.getParam(0) for x in conn.mockGetNamedCalls('ask')]
packet_list, next_range = packet_list[:-1], packet_list[-1]
self.assertEqual(type(next_range), Packets.AskCheckTIDRange)
pmin_tid, plength, ppartition = next_range.decode()
self.assertEqual(pmin_tid, add64(next_tid, 1))
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, app.replicator.getCurrentOffset())
calls = app.replicator.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, plength, ppartition)
self.assertEqual(len(packet_list), len(tid_list))
for packet in packet_list:
self.assertEqual(type(packet),
Packets.AskTransactionInformation)
ptid = packet.decode()[0]
for tid in tid_list:
if ptid == tid:
tid_list.remove(tid)
break
else:
                raise AssertionError, '%s not found in %r' % (dump(ptid),
                    [dump(x) for x in tid_list])
def _checkPacketSerialList(self, conn, object_list, next_oid, next_serial, app):
packet_list = [x.getParam(0) for x in conn.mockGetNamedCalls('ask')]
packet_list, next_range = packet_list[:-1], packet_list[-1]
self.assertEqual(type(next_range), Packets.AskCheckSerialRange)
pmin_oid, pmin_serial, plength, ppartition = next_range.decode()
self.assertEqual(pmin_oid, next_oid)
self.assertEqual(pmin_serial, add64(next_serial, 1))
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, app.replicator.getCurrentOffset())
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, plength, ppartition)
self.assertEqual(len(packet_list), len(object_list),
([x.decode() for x in packet_list], object_list))
reference_set = set((x + (None, ) for x in object_list))
packet_set = set((x.decode() for x in packet_list))
assert len(packet_list) == len(reference_set) == len(packet_set)
self.assertEqual(reference_set, packet_set)
def test_connectionLost(self):
app = self.getApp()
ReplicationHandler(app).connectionLost(None, None)
self.assertEqual(len(app.replicator.mockGetNamedCalls('storageLost')), 1)
def test_connectionFailed(self):
app = self.getApp()
ReplicationHandler(app).connectionFailed(None)
self.assertEqual(len(app.replicator.mockGetNamedCalls('storageLost')), 1)
def test_acceptIdentification(self):
rid = 24
app = self.getApp(rid=rid)
conn = self.getFakeConnection()
replication = ReplicationHandler(app)
replication.acceptIdentification(conn, None, None, None,
None, None)
self._checkReplicationStarted(conn, rid, app.replicator)
def test_startReplication(self):
rid = 24
app = self.getApp(rid=rid)
conn = self.getFakeConnection()
ReplicationHandler(app).startReplication(conn)
self._checkReplicationStarted(conn, rid, app.replicator)
def test_answerTIDsFrom(self):
conn = self.getFakeConnection()
tid_list = [self.getOID(0), self.getOID(1), self.getOID(2)]
app = self.getApp(conn=conn, tid_result=[])
# With no known TID
ReplicationHandler(app).answerTIDsFrom(conn, tid_list)
# With some TIDs known
conn = self.getFakeConnection()
known_tid_list = [tid_list[0], tid_list[1]]
unknown_tid_list = [tid_list[2], ]
app = self.getApp(conn=conn, tid_result=known_tid_list)
ReplicationHandler(app).answerTIDsFrom(conn, tid_list[1:])
calls = app.dm.mockGetNamedCalls('deleteTransaction')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid_list[0])
def test_answerTransactionInformation(self):
conn = self.getFakeConnection()
app = self.getApp(conn=conn)
tid = self.getNextTID()
user = 'foo'
desc = 'bar'
ext = 'baz'
packed = True
oid_list = [self.getOID(1), self.getOID(2)]
ReplicationHandler(app).answerTransactionInformation(conn, tid, user,
desc, ext, packed, oid_list)
calls = app.dm.mockGetNamedCalls('storeTransaction')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid, (), (oid_list, user, desc, ext, packed), False)
def test_answerObjectHistoryFrom(self):
conn = self.getFakeConnection()
oid_1 = self.getOID(1)
oid_2 = self.getOID(2)
oid_3 = self.getOID(3)
oid_4 = self.getOID(4)
oid_5 = self.getOID(5)
tid_list = [self.getOID(x) for x in xrange(7)]
oid_dict = FakeDict((
(oid_1, [tid_list[0], tid_list[1]]),
(oid_2, [tid_list[2], tid_list[3]]),
(oid_4, [tid_list[5]]),
))
flat_oid_list = []
for oid, serial_list in oid_dict.iteritems():
for serial in serial_list:
flat_oid_list.append((oid, serial))
app = self.getApp(conn=conn, history_result={})
# With no known OID/Serial
ReplicationHandler(app).answerObjectHistoryFrom(conn, oid_dict)
# With some known OID/Serials
# For the test to be realistic, history_result should contain the same
# number of serials as oid_dict, otherwise it just tests the special
# case of the last check in a partition.
conn = self.getFakeConnection()
app = self.getApp(conn=conn, history_result={
oid_1: [oid_dict[oid_1][0], ],
oid_3: [tid_list[2]],
oid_4: [tid_list[4], oid_dict[oid_4][0], tid_list[6]],
oid_5: [tid_list[6]],
})
ReplicationHandler(app).answerObjectHistoryFrom(conn, oid_dict)
calls = app.dm.mockGetNamedCalls('deleteObject')
actual_deletes = set(((x.getParam(0), x.getParam(1)) for x in calls))
expected_deletes = set((
(oid_3, tid_list[2]),
(oid_4, tid_list[4]),
))
self.assertEqual(actual_deletes, expected_deletes)
def test_answerObject(self):
conn = self.getFakeConnection()
app = self.getApp(conn=conn)
oid = self.getOID(1)
serial_start = self.getNextTID()
serial_end = self.getNextTID()
compression = 1
checksum = 2
data = 'foo'
data_serial = None
ReplicationHandler(app).answerObject(conn, oid, serial_start,
serial_end, compression, checksum, data, data_serial)
calls = app.dm.mockGetNamedCalls('storeTransaction')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(serial_start, [(oid, compression, checksum, data,
data_serial)], None, False)
# CheckTIDRange
def test_answerCheckTIDFullRangeIdenticalChunkWithNext(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
critical_tid = self.getNextTID()
assert max_tid < critical_tid
length = RANGE_LENGTH
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has the same data as we have: length, checksum and max_tid
# match.
handler.answerCheckTIDRange(conn, min_tid, length, length, 0, max_tid)
# Result: go on with next chunk
pmin_tid, pmax_tid, plength, ppartition = self.checkAskPacket(conn,
Packets.AskCheckTIDRange, decode=True)
self.assertEqual(pmin_tid, add64(max_tid, 1))
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, pmax_tid, plength, ppartition)
def test_answerCheckTIDSmallRangeIdenticalChunkWithNext(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
critical_tid = self.getNextTID()
assert max_tid < critical_tid
length = RANGE_LENGTH / 2
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has the same data as we have: length, checksum and max_tid
# match.
handler.answerCheckTIDRange(conn, min_tid, length, length, 0, max_tid)
# Result: go on with next chunk
pmin_tid, pmax_tid, plength, ppartition = self.checkAskPacket(conn,
Packets.AskCheckTIDRange, decode=True)
self.assertEqual(pmax_tid, critical_tid)
self.assertEqual(pmin_tid, add64(max_tid, 1))
self.assertEqual(plength, length / 2)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, pmax_tid, plength, ppartition)
def test_answerCheckTIDRangeIdenticalChunkAboveCriticalTID(self):
critical_tid = self.getNextTID()
min_tid = self.getNextTID()
max_tid = self.getNextTID()
assert critical_tid < max_tid
length = RANGE_LENGTH / 2
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has the same data as we have: length, checksum and max_tid
# match.
handler.answerCheckTIDRange(conn, min_tid, length, length, 0, max_tid)
# Result: go on with object range checks
pmin_oid, pmin_serial, pmax_tid, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskCheckSerialRange, decode=True)
self.assertEqual(pmin_oid, ZERO_OID)
self.assertEqual(pmin_serial, ZERO_TID)
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_tid, plength, ppartition)
def test_answerCheckTIDRangeIdenticalChunkWithoutNext(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
length = RANGE_LENGTH / 2
rid = 12
num_partitions = 13
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 1, 0, max_tid), rid=rid,
conn=conn, num_partitions=num_partitions)
handler = ReplicationHandler(app)
# Peer has the same data as we have: length, checksum and max_tid
# match.
handler.answerCheckTIDRange(conn, min_tid, length, length - 1, 0,
max_tid)
# Result: go on with object range checks
pmin_oid, pmin_serial, pmax_tid, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskCheckSerialRange, decode=True)
self.assertEqual(pmin_oid, ZERO_OID)
self.assertEqual(pmin_serial, ZERO_TID)
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_tid, plength, ppartition)
# ...and delete partition tail
calls = app.dm.mockGetNamedCalls('deleteTransactionsAbove')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(num_partitions, rid, add64(max_tid, 1), ZERO_TID)
def test_answerCheckTIDRangeDifferentBigChunk(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
critical_tid = self.getNextTID()
assert min_tid < max_tid < critical_tid, (min_tid, max_tid,
critical_tid)
length = RANGE_LENGTH / 2
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has different data
handler.answerCheckTIDRange(conn, min_tid, length, length, 0, max_tid)
# Result: ask again, length halved
pmin_tid, pmax_tid, plength, ppartition = self.checkAskPacket(conn,
Packets.AskCheckTIDRange, decode=True)
self.assertEqual(pmin_tid, min_tid)
self.assertEqual(plength, length / 2)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, pmax_tid, plength, ppartition)
def test_answerCheckTIDRangeDifferentSmallChunkWithNext(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
critical_tid = self.getNextTID()
length = MIN_RANGE_LENGTH - 1
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has different data
handler.answerCheckTIDRange(conn, min_tid, length, length, 0, max_tid)
# Result: ask tid list, and ask next chunk
calls = conn.mockGetNamedCalls('ask')
self.assertEqual(len(calls), 1)
tid_packet = calls[0].getParam(0)
self.assertEqual(type(tid_packet), Packets.AskTIDsFrom)
pmin_tid, pmax_tid, plength, ppartition = tid_packet.decode()
self.assertEqual(pmin_tid, min_tid)
self.assertEqual(pmax_tid, critical_tid)
self.assertEqual(plength, length)
self.assertEqual(ppartition, [rid])
calls = app.replicator.mockGetNamedCalls('getTIDsFrom')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, pmax_tid, plength, ppartition[0])
def test_answerCheckTIDRangeDifferentSmallChunkWithoutNext(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
critical_tid = self.getNextTID()
length = MIN_RANGE_LENGTH - 1
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_tid), rid=rid,
conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has different data, and less than length
handler.answerCheckTIDRange(conn, min_tid, length, length - 1, 0,
max_tid)
# Result: ask tid list, and start replicating object range
calls = conn.mockGetNamedCalls('ask')
self.assertEqual(len(calls), 2)
tid_packet = calls[0].getParam(0)
self.assertEqual(type(tid_packet), Packets.AskTIDsFrom)
pmin_tid, pmax_tid, plength, ppartition = tid_packet.decode()
self.assertEqual(pmin_tid, min_tid)
self.assertEqual(pmax_tid, critical_tid)
self.assertEqual(plength, length - 1)
self.assertEqual(ppartition, [rid])
calls = app.replicator.mockGetNamedCalls('getTIDsFrom')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_tid, pmax_tid, plength, ppartition[0])
# CheckSerialRange
def test_answerCheckSerialFullRangeIdenticalChunkWithNext(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = RANGE_LENGTH
rid = 12
conn = self.getFakeConnection()
app = self.getApp(serial_check_result=(length, 0, max_oid, 1,
max_serial), rid=rid, conn=conn)
handler = ReplicationHandler(app)
# Peer has the same data as we have
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length, 0, max_oid, 1, max_serial)
# Result: go on with next chunk
pmin_oid, pmin_serial, pmax_tid, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskCheckSerialRange, decode=True)
self.assertEqual(pmin_oid, max_oid)
self.assertEqual(pmin_serial, add64(max_serial, 1))
self.assertEqual(plength, RANGE_LENGTH)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_tid, plength, ppartition)
def test_answerCheckSerialSmallRangeIdenticalChunkWithNext(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = RANGE_LENGTH / 2
rid = 12
conn = self.getFakeConnection()
app = self.getApp(serial_check_result=(length, 0, max_oid, 1,
max_serial), rid=rid, conn=conn)
handler = ReplicationHandler(app)
# Peer has the same data as we have
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length, 0, max_oid, 1, max_serial)
# Result: go on with next chunk
pmin_oid, pmin_serial, pmax_tid, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskCheckSerialRange, decode=True)
self.assertEqual(pmin_oid, max_oid)
self.assertEqual(pmin_serial, add64(max_serial, 1))
self.assertEqual(plength, length / 2)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_tid, plength, ppartition)
def test_answerCheckSerialRangeIdenticalChunkWithoutNext(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = RANGE_LENGTH / 2
rid = 12
num_partitions = 13
conn = self.getFakeConnection()
app = self.getApp(serial_check_result=(length - 1, 0, max_oid, 1,
max_serial), rid=rid, conn=conn, num_partitions=num_partitions)
handler = ReplicationHandler(app)
# Peer has the same data as we have
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length - 1, 0, max_oid, 1, max_serial)
# Result: mark replication as done
self.checkNoPacketSent(conn)
self.assertTrue(app.replicator.replication_done)
# ...and delete partition tail
calls = app.dm.mockGetNamedCalls('deleteObjectsAbove')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(num_partitions, rid, max_oid, add64(max_serial, 1),
ZERO_TID)
def test_answerCheckSerialRangeDifferentBigChunk(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = RANGE_LENGTH / 2
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_oid, 1,
max_serial), rid=rid, conn=conn)
handler = ReplicationHandler(app)
# Peer has different data
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length, 0, max_oid, 1, max_serial)
# Result: ask again, length halved
pmin_oid, pmin_serial, pmax_tid, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskCheckSerialRange, decode=True)
self.assertEqual(pmin_oid, min_oid)
self.assertEqual(pmin_serial, min_serial)
self.assertEqual(plength, length / 2)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_tid, plength, ppartition)
def test_answerCheckSerialRangeDifferentSmallChunkWithNext(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
critical_tid = self.getNextTID()
length = MIN_RANGE_LENGTH - 1
rid = 12
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_oid, 1,
max_serial), rid=rid, conn=conn, critical_tid=critical_tid)
handler = ReplicationHandler(app)
# Peer has different data
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length, 0, max_oid, 1, max_serial)
# Result: ask serial list, and ask next chunk
calls = conn.mockGetNamedCalls('ask')
self.assertEqual(len(calls), 1)
serial_packet = calls[0].getParam(0)
self.assertEqual(type(serial_packet), Packets.AskObjectHistoryFrom)
pmin_oid, pmin_serial, pmax_serial, plength, ppartition = \
serial_packet.decode()
self.assertEqual(pmin_oid, min_oid)
self.assertEqual(pmin_serial, min_serial)
self.assertEqual(pmax_serial, critical_tid)
self.assertEqual(plength, length)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('getObjectHistoryFrom')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_serial, plength,
ppartition)
def test_answerCheckSerialRangeDifferentSmallChunkWithoutNext(self):
min_oid = self.getOID(1)
max_oid = self.getOID(10)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
critical_tid = self.getNextTID()
length = MIN_RANGE_LENGTH - 1
rid = 12
num_partitions = 13
conn = self.getFakeConnection()
app = self.getApp(tid_check_result=(length - 5, 0, max_oid,
1, max_serial), rid=rid, conn=conn, critical_tid=critical_tid,
num_partitions=num_partitions,
)
handler = ReplicationHandler(app)
# Peer has different data, and less than length
handler.answerCheckSerialRange(conn, min_oid, min_serial, length,
length - 1, 0, max_oid, 1, max_serial)
# Result: ask tid list, and mark replication as done
pmin_oid, pmin_serial, pmax_serial, plength, ppartition = \
self.checkAskPacket(conn, Packets.AskObjectHistoryFrom,
decode=True)
self.assertEqual(pmin_oid, min_oid)
self.assertEqual(pmin_serial, min_serial)
self.assertEqual(pmax_serial, critical_tid)
self.assertEqual(plength, length - 1)
self.assertEqual(ppartition, rid)
calls = app.replicator.mockGetNamedCalls('getObjectHistoryFrom')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(pmin_oid, pmin_serial, pmax_serial, plength,
ppartition)
self.assertTrue(app.replicator.replication_done)
# ...and delete partition tail
calls = app.dm.mockGetNamedCalls('deleteObjectsAbove')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(num_partitions, rid, max_oid, add64(max_serial, 1),
critical_tid)
if __name__ == "__main__":
unittest.main()
# neo/tests/storage/testReplicator.py
#
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock, ReturnValues
from neo.tests import NeoUnitTestBase
from neo.storage.replicator import Replicator, Partition, Task
from neo.lib.protocol import CellStates, NodeStates, Packets
class StorageReplicatorTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
def tearDown(self):
NeoUnitTestBase.tearDown(self)
def test_populate(self):
my_uuid = self.getNewUUID()
other_uuid = self.getNewUUID()
app = Mock()
app.uuid = my_uuid
app.pt = Mock({
'getPartitions': 2,
'getOutdatedOffsetListFor': [0],
})
replicator = Replicator(app)
self.assertEqual(replicator.new_partition_set, set())
replicator.populate()
self.assertEqual(replicator.new_partition_set, set([0]))
def test_reset(self):
replicator = Replicator(None)
replicator.task_list = ['foo']
replicator.task_dict = {'foo': 'bar'}
replicator.current_partition = 'foo'
replicator.current_connection = 'foo'
replicator.replication_done = 'foo'
replicator.reset()
self.assertEqual(replicator.task_list, [])
self.assertEqual(replicator.task_dict, {})
self.assertEqual(replicator.current_partition, None)
self.assertEqual(replicator.current_connection, None)
self.assertTrue(replicator.replication_done)
def test_setCriticalTID(self):
critical_tid = self.getNextTID()
partition = Partition(0, critical_tid, [])
self.assertEqual(partition.getCriticalTID(), critical_tid)
self.assertEqual(partition.getOffset(), 0)
def test_act(self):
# Also tests "pending"
uuid = self.getNewUUID()
master_uuid = self.getNewUUID()
critical_tid_0 = self.getNextTID()
critical_tid_1 = self.getNextTID()
critical_tid_2 = self.getNextTID()
unfinished_ttid_1 = self.getOID(1)
unfinished_ttid_2 = self.getOID(2)
app = Mock()
app.server = ('127.0.0.1', 10000)
app.name = 'fake cluster'
app.em = Mock({
'register': None,
})
def connectorGenerator():
return Mock()
app.connector_handler = connectorGenerator
app.uuid = uuid
node_addr = ('127.0.0.1', 1234)
node = Mock({
'getAddress': node_addr,
})
running_cell = Mock({
'getNodeState': NodeStates.RUNNING,
'getNode': node,
})
unknown_cell = Mock({
'getNodeState': NodeStates.UNKNOWN,
})
app.pt = Mock({
'getCellList': [running_cell, unknown_cell],
'getOutdatedOffsetListFor': [0],
'getPartition': 0,
})
node_conn_handler = Mock({
'startReplication': None,
})
node_conn = Mock({
'getAddress': node_addr,
'getHandler': node_conn_handler,
})
replicator = Replicator(app)
replicator.populate()
def act():
app.master_conn = self.getFakeConnection(uuid=master_uuid)
self.assertTrue(replicator.pending())
replicator.act()
# ask unfinished tids
act()
unfinished_tids = app.master_conn.mockGetNamedCalls('ask')[0].getParam(0)
self.assertTrue(replicator.new_partition_set)
self.assertEqual(type(unfinished_tids),
Packets.AskUnfinishedTransactions)
self.assertTrue(replicator.waiting_for_unfinished_tids)
# nothing happens until waiting_for_unfinished_tids becomes False
act()
self.checkNoPacketSent(app.master_conn)
self.assertTrue(replicator.waiting_for_unfinished_tids)
# first time, there is an unfinished tid before critical tid,
# replication cannot start, and unfinished TIDs are asked again
replicator.setUnfinishedTIDList(critical_tid_0,
[unfinished_ttid_1, unfinished_ttid_2])
self.assertFalse(replicator.waiting_for_unfinished_tids)
# Note: detection that nothing can be replicated happens on first call
# and unfinished tids are asked again on second call. This is ok, but
# might change, so just call twice.
act()
replicator.transactionFinished(unfinished_ttid_1, critical_tid_1)
act()
replicator.transactionFinished(unfinished_ttid_2, critical_tid_2)
replicator.current_connection = node_conn
act()
self.assertEqual(replicator.current_partition,
replicator.partition_dict[0])
self.assertEqual(len(node_conn_handler.mockGetNamedCalls(
'startReplication')), 1)
self.assertFalse(replicator.replication_done)
# Other calls should do nothing
replicator.current_connection = Mock()
act()
self.checkNoPacketSent(app.master_conn)
self.checkNoPacketSent(replicator.current_connection)
# Mark replication over for this partition
replicator.replication_done = True
# Don't finish while there are pending answers
replicator.current_connection = Mock({
'isPending': True,
})
act()
self.assertTrue(replicator.pending())
replicator.current_connection = Mock({
'isPending': False,
})
act()
# also, replication is over
self.assertFalse(replicator.pending())
def test_removePartition(self):
replicator = Replicator(None)
replicator.partition_dict = {0: None, 2: None}
replicator.new_partition_set = set([1])
replicator.removePartition(0)
self.assertEqual(replicator.partition_dict, {2: None})
self.assertEqual(replicator.new_partition_set, set([1]))
replicator.removePartition(1)
replicator.removePartition(2)
self.assertEqual(replicator.partition_dict, {})
self.assertEqual(replicator.new_partition_set, set())
# Must not raise
replicator.removePartition(3)
def test_addPartition(self):
replicator = Replicator(None)
replicator.partition_dict = {0: None}
replicator.new_partition_set = set([1])
replicator.addPartition(0)
replicator.addPartition(1)
self.assertEqual(replicator.partition_dict, {0: None})
self.assertEqual(replicator.new_partition_set, set([1]))
replicator.addPartition(2)
self.assertEqual(replicator.partition_dict, {0: None})
self.assertEqual(len(replicator.new_partition_set), 2)
self.assertEqual(replicator.new_partition_set, set([1, 2]))
def test_processDelayedTasks(self):
replicator = Replicator(None)
replicator.reset()
marker = []
def someCallable(foo, bar=None):
return (foo, bar)
replicator._addTask(1, someCallable, args=('foo', ))
self.assertRaises(ValueError, replicator._addTask, 1, None)
replicator._addTask(2, someCallable, args=('foo', ), kw={'bar': 'bar'})
replicator.processDelayedTasks()
self.assertEqual(replicator._getCheckResult(1), ('foo', None))
self.assertEqual(replicator._getCheckResult(2), ('foo', 'bar'))
# Also test Task
task = Task(someCallable, args=('foo', ))
self.assertRaises(ValueError, task.getResult)
task.process()
self.assertRaises(ValueError, task.process)
self.assertEqual(task.getResult(), ('foo', None))
if __name__ == "__main__":
unittest.main()
# neo/tests/storage/testStorageApp.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock, ReturnValues
from neo.tests import NeoUnitTestBase
from neo.storage.app import Application
from neo.lib.protocol import CellStates
from collections import deque
from neo.lib.pt import PartitionTable
from neo.lib.util import dump
from neo.storage.exception import AlreadyPendingError
class StorageAppTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.app.event_queue = deque()
self.app.event_queue_dict = {}
def tearDown(self):
self.app.close()
del self.app
super(StorageAppTests, self).tearDown()
def test_01_loadPartitionTable(self):
self.app.dm = Mock({
'getPartitionTable': [],
})
self.assertEqual(self.app.pt, None)
num_partitions = 3
num_replicas = 2
self.app.pt = PartitionTable(num_partitions, num_replicas)
self.assertEqual(self.app.pt.getNodeList(), [])
self.assertFalse(self.app.pt.filled())
for x in xrange(num_partitions):
self.assertFalse(self.app.pt.hasOffset(x))
# load an empty table
self.app.loadPartitionTable()
self.assertEqual(self.app.pt.getNodeList(), [])
self.assertFalse(self.app.pt.filled())
for x in xrange(num_partitions):
self.assertFalse(self.app.pt.hasOffset(x))
# add some nodes; they will be removed when loading the table
master_uuid = self.getNewUUID()
master = self.app.nm.createMaster(uuid=master_uuid)
storage_uuid = self.getNewUUID()
storage = self.app.nm.createStorage(uuid=storage_uuid)
client_uuid = self.getNewUUID()
self.app.pt.setCell(0, master, CellStates.UP_TO_DATE)
self.app.pt.setCell(0, storage, CellStates.UP_TO_DATE)
self.assertEqual(len(self.app.pt.getNodeList()), 2)
self.assertFalse(self.app.pt.filled())
for x in xrange(num_partitions):
if x == 0:
self.assertTrue(self.app.pt.hasOffset(x))
else:
self.assertFalse(self.app.pt.hasOffset(x))
# load an empty table, everything removed
self.app.loadPartitionTable()
self.assertEqual(self.app.pt.getNodeList(), [])
self.assertFalse(self.app.pt.filled())
for x in xrange(num_partitions):
self.assertFalse(self.app.pt.hasOffset(x))
# add some node
self.app.pt.setCell(0, master, CellStates.UP_TO_DATE)
self.app.pt.setCell(0, storage, CellStates.UP_TO_DATE)
self.assertEqual(len(self.app.pt.getNodeList()), 2)
self.assertFalse(self.app.pt.filled())
for x in xrange(num_partitions):
if x == 0:
self.assertTrue(self.app.pt.hasOffset(x))
else:
self.assertFalse(self.app.pt.hasOffset(x))
# fill partition table
self.app.dm = Mock({
'getPartitionTable': [
(0, client_uuid, CellStates.UP_TO_DATE),
(1, client_uuid, CellStates.UP_TO_DATE),
(1, storage_uuid, CellStates.UP_TO_DATE),
(2, storage_uuid, CellStates.UP_TO_DATE),
(2, master_uuid, CellStates.UP_TO_DATE),
],
'getPTID': 1,
})
self.app.pt.clear()
self.app.loadPartitionTable()
self.assertTrue(self.app.pt.filled())
for x in xrange(num_partitions):
self.assertTrue(self.app.pt.hasOffset(x))
# check each row
cell_list = self.app.pt.getCellList(0)
self.assertEqual(len(cell_list), 1)
self.assertEqual(cell_list[0].getUUID(), client_uuid)
cell_list = self.app.pt.getCellList(1)
self.assertEqual(len(cell_list), 2)
self.assertTrue(cell_list[0].getUUID() in (client_uuid, storage_uuid))
self.assertTrue(cell_list[1].getUUID() in (client_uuid, storage_uuid))
cell_list = self.app.pt.getCellList(2)
self.assertEqual(len(cell_list), 2)
self.assertTrue(cell_list[0].getUUID() in (master_uuid, storage_uuid))
self.assertTrue(cell_list[1].getUUID() in (master_uuid, storage_uuid))
def test_02_queueEvent(self):
self.assertEqual(len(self.app.event_queue), 0)
msg_id = 1325136
event = Mock({'__repr__': 'event'})
conn = Mock({'__repr__': 'conn', 'getPeerId': msg_id})
key = 'foo'
self.app.queueEvent(event, conn, ("test", ), key=key)
self.assertEqual(len(self.app.event_queue), 1)
_key, _event, _msg_id, _conn, args = self.app.event_queue[0]
self.assertEqual(key, _key)
self.assertEqual(msg_id, _msg_id)
self.assertEqual(len(args), 1)
self.assertEqual(args[0], "test")
self.assertRaises(AlreadyPendingError, self.app.queueEvent, event,
conn, ("test2", ), key=key)
self.assertEqual(len(self.app.event_queue), 1)
self.app.queueEvent(event, conn, ("test3", ), key=key,
raise_on_duplicate=False)
self.assertEqual(len(self.app.event_queue), 2)
def test_03_executeQueuedEvents(self):
self.assertEqual(len(self.app.event_queue), 0)
msg_id = 1325136
msg_id_2 = 1325137
event = Mock({'__repr__': 'event'})
conn = Mock({'__repr__': 'conn', 'getPeerId': ReturnValues(msg_id, msg_id_2)})
self.app.queueEvent(event, conn, ("test", ))
self.app.executeQueuedEvents()
self.assertEqual(len(event.mockGetNamedCalls("__call__")), 1)
call = event.mockGetNamedCalls("__call__")[0]
params = call.getParam(1)
self.assertEqual(params, "test")
params = call.kwparams
self.assertEqual(params, {})
calls = conn.mockGetNamedCalls("setPeerId")
self.assertEqual(len(calls), 2)
calls[0].checkArgs(msg_id)
calls[1].checkArgs(msg_id_2)
if __name__ == '__main__':
unittest.main()
# neo/tests/storage/testStorageBTree.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests.storage.testStorageDBTests import StorageDBTests
from neo.storage.database.btree import BTreeDatabaseManager
class StorageBTreeTests(StorageDBTests):
def getDB(self, reset=0):
# db manager
db = BTreeDatabaseManager('')
db.setup(reset)
return db
del StorageDBTests  # avoid running the imported base test class a second time
if __name__ == "__main__":
unittest.main()
# neo/tests/storage/testStorageDBTests.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.lib.util import dump, p64, u64
from neo.lib.protocol import CellStates, ZERO_OID, ZERO_TID
from neo.tests import NeoUnitTestBase
from neo.lib.exception import DatabaseFailure
from neo.storage.database.mysqldb import MySQLDatabaseManager
MAX_TID = '\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFE' # != INVALID_TID
class StorageDBTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
@property
def db(self):
try:
return self._db
except AttributeError:
self.setNumPartitions(1)
return self._db
def tearDown(self):
try:
self.__dict__.pop('_db', None).close()
except AttributeError:
pass
NeoUnitTestBase.tearDown(self)
def getDB(self):
raise NotImplementedError
def setNumPartitions(self, num_partitions, reset=0):
try:
db = self._db
except AttributeError:
self._db = db = self.getDB(reset)
else:
if reset:
db.setup(reset)
else:
try:
n = db.getNumPartitions()
except KeyError:
n = 0
if num_partitions == n:
return
if num_partitions < n:
db.dropPartitions(n, range(num_partitions, n))
db.setNumPartitions(num_partitions)
self.assertEqual(num_partitions, db.getNumPartitions())
uuid = self.getNewUUID()
db.setUUID(uuid)
self.assertEqual(uuid, db.getUUID())
db.setPartitionTable(1,
[(i, uuid, CellStates.UP_TO_DATE) for i in xrange(num_partitions)])
def checkConfigEntry(self, get_call, set_call, value):
# generic test for all configuration entries accessors
self.assertRaises(KeyError, get_call)
set_call(value)
self.assertEqual(get_call(), value)
set_call(value * 2)
self.assertEqual(get_call(), value * 2)
def test_UUID(self):
db = self.getDB()
self.checkConfigEntry(db.getUUID, db.setUUID, 'TEST_VALUE')
def test_Name(self):
db = self.getDB()
self.checkConfigEntry(db.getName, db.setName, 'TEST_NAME')
def test_15_PTID(self):
db = self.getDB()
self.checkConfigEntry(db.getPTID, db.setPTID, self.getPTID(1))
def test_getPartitionTable(self):
db = self.getDB()
ptid = self.getPTID(1)
uuid1, uuid2 = self.getNewUUID(), self.getNewUUID()
cell1 = (0, uuid1, CellStates.OUT_OF_DATE)
cell2 = (1, uuid1, CellStates.UP_TO_DATE)
db.setPartitionTable(ptid, [cell1, cell2])
result = db.getPartitionTable()
self.assertEqual(set(result), set([cell1, cell2]))
def test_getLastOID(self):
db = self.getDB()
oid1 = self.getOID(1)
db.setLastOID(oid1)
result1 = db.getLastOID()
self.assertEqual(result1, oid1)
def getOIDs(self, count):
return map(self.getOID, xrange(count))
def getTIDs(self, count):
tid_list = [self.getNextTID()]
while len(tid_list) != count:
tid_list.append(self.getNextTID(tid_list[-1]))
return tid_list
def getTransaction(self, oid_list):
transaction = (oid_list, 'user', 'desc', 'ext', False)
object_list = [(oid, 1, 0, '', None) for oid in oid_list]
return (transaction, object_list)
def checkSet(self, list1, list2):
self.assertEqual(set(list1), set(list2))
def test_getLastTID(self):
tid1, tid2, tid3, tid4 = self.getTIDs(4)
oid1, oid2 = self.getOIDs(2)
txn, objs = self.getTransaction([oid1, oid2])
# max TID is in obj table
self.db.storeTransaction(tid1, objs, txn, False)
self.db.storeTransaction(tid2, objs, txn, False)
self.assertEqual(self.db.getLastTID(), tid2)
# max TID is in ttrans table
self.db.storeTransaction(tid3, objs, txn)
self.assertEqual(self.db.getLastTID(), tid3)
# max TID is in tobj (data serial)
self.db.storeTransaction(tid4, objs, None)
self.assertEqual(self.db.getLastTID(), tid4)
def test_getUnfinishedTIDList(self):
tid1, tid2, tid3, tid4 = self.getTIDs(4)
oid1, oid2 = self.getOIDs(2)
txn, objs = self.getTransaction([oid1, oid2])
# nothing pending
self.db.storeTransaction(tid1, objs, txn, False)
self.checkSet(self.db.getUnfinishedTIDList(), [])
# one unfinished txn
self.db.storeTransaction(tid2, objs, txn)
self.checkSet(self.db.getUnfinishedTIDList(), [tid2])
# no changes
self.db.storeTransaction(tid3, objs, None, False)
self.checkSet(self.db.getUnfinishedTIDList(), [tid2])
# a second txn known by objs only
self.db.storeTransaction(tid4, objs, None)
self.checkSet(self.db.getUnfinishedTIDList(), [tid2, tid4])
def test_objectPresent(self):
tid = self.getNextTID()
oid = self.getOID(1)
txn, objs = self.getTransaction([oid])
# not present
self.assertFalse(self.db.objectPresent(oid, tid, all=True))
self.assertFalse(self.db.objectPresent(oid, tid, all=False))
# available in temp table
self.db.storeTransaction(tid, objs, txn)
self.assertTrue(self.db.objectPresent(oid, tid, all=True))
self.assertFalse(self.db.objectPresent(oid, tid, all=False))
# available in both tables
self.db.finishTransaction(tid)
self.assertTrue(self.db.objectPresent(oid, tid, all=True))
self.assertTrue(self.db.objectPresent(oid, tid, all=False))
def test_getObject(self):
oid1, = self.getOIDs(1)
tid1, tid2 = self.getTIDs(2)
FOUND_BUT_NOT_VISIBLE = False
OBJECT_T1_NO_NEXT = (tid1, None, 1, 0, '', None)
OBJECT_T1_NEXT = (tid1, tid2, 1, 0, '', None)
OBJECT_T2 = (tid2, None, 1, 0, '', None)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid1])
# non-present
self.assertEqual(self.db.getObject(oid1), None)
self.assertEqual(self.db.getObject(oid1, tid1), None)
self.assertEqual(self.db.getObject(oid1, before_tid=tid1), None)
# one non-committed version
self.db.storeTransaction(tid1, objs1, txn1)
self.assertEqual(self.db.getObject(oid1), None)
self.assertEqual(self.db.getObject(oid1, tid1), None)
self.assertEqual(self.db.getObject(oid1, before_tid=tid1), None)
# one committed version
self.db.finishTransaction(tid1)
self.assertEqual(self.db.getObject(oid1), OBJECT_T1_NO_NEXT)
self.assertEqual(self.db.getObject(oid1, tid1), OBJECT_T1_NO_NEXT)
self.assertEqual(self.db.getObject(oid1, before_tid=tid1),
FOUND_BUT_NOT_VISIBLE)
# two versions available, one non-committed
self.db.storeTransaction(tid2, objs2, txn2)
self.assertEqual(self.db.getObject(oid1), OBJECT_T1_NO_NEXT)
self.assertEqual(self.db.getObject(oid1, tid1), OBJECT_T1_NO_NEXT)
self.assertEqual(self.db.getObject(oid1, before_tid=tid1),
FOUND_BUT_NOT_VISIBLE)
self.assertEqual(self.db.getObject(oid1, tid2), FOUND_BUT_NOT_VISIBLE)
self.assertEqual(self.db.getObject(oid1, before_tid=tid2),
OBJECT_T1_NO_NEXT)
# two committed versions
self.db.finishTransaction(tid2)
self.assertEqual(self.db.getObject(oid1), OBJECT_T2)
self.assertEqual(self.db.getObject(oid1, tid1), OBJECT_T1_NEXT)
self.assertEqual(self.db.getObject(oid1, before_tid=tid1),
FOUND_BUT_NOT_VISIBLE)
self.assertEqual(self.db.getObject(oid1, tid2), OBJECT_T2)
self.assertEqual(self.db.getObject(oid1, before_tid=tid2),
OBJECT_T1_NEXT)
def test_setPartitionTable(self):
db = self.getDB()
ptid = self.getPTID(1)
uuid1, uuid2 = self.getNewUUID(), self.getNewUUID()
cell1 = (0, uuid1, CellStates.OUT_OF_DATE)
cell2 = (1, uuid1, CellStates.UP_TO_DATE)
cell3 = (1, uuid1, CellStates.DISCARDED)
# no partition table
self.assertEqual(db.getPartitionTable(), [])
# set one
db.setPartitionTable(ptid, [cell1])
result = db.getPartitionTable()
self.assertEqual(result, [cell1])
# then another
db.setPartitionTable(ptid, [cell2])
result = db.getPartitionTable()
self.assertEqual(result, [cell2])
# drop discarded cells
db.setPartitionTable(ptid, [cell2, cell3])
result = db.getPartitionTable()
self.assertEqual(result, [])
def test_changePartitionTable(self):
db = self.getDB()
ptid = self.getPTID(1)
uuid1, uuid2 = self.getNewUUID(), self.getNewUUID()
cell1 = (0, uuid1, CellStates.OUT_OF_DATE)
cell2 = (1, uuid1, CellStates.UP_TO_DATE)
cell3 = (1, uuid1, CellStates.DISCARDED)
# no partition table
self.assertEqual(db.getPartitionTable(), [])
# set one
db.changePartitionTable(ptid, [cell1])
result = db.getPartitionTable()
self.assertEqual(result, [cell1])
# add more entries
db.changePartitionTable(ptid, [cell2])
result = db.getPartitionTable()
self.assertEqual(set(result), set([cell1, cell2]))
# drop discarded cells
db.changePartitionTable(ptid, [cell2, cell3])
result = db.getPartitionTable()
self.assertEqual(result, [cell1])
def test_dropUnfinishedData(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid1])
# nothing
self.assertEqual(self.db.getObject(oid1), None)
self.assertEqual(self.db.getObject(oid2), None)
self.assertEqual(self.db.getUnfinishedTIDList(), [])
# one is still pending
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
self.db.finishTransaction(tid1)
result = self.db.getObject(oid1)
self.assertEqual(result, (tid1, None, 1, 0, '', None))
self.assertEqual(self.db.getObject(oid2), None)
self.assertEqual(self.db.getUnfinishedTIDList(), [tid2])
# drop it
self.db.dropUnfinishedData()
self.assertEqual(self.db.getUnfinishedTIDList(), [])
result = self.db.getObject(oid1)
self.assertEqual(result, (tid1, None, 1, 0, '', None))
self.assertEqual(self.db.getObject(oid2), None)
def test_storeTransaction(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid2])
# nothing in database
self.assertEqual(self.db.getLastTID(), None)
self.assertEqual(self.db.getUnfinishedTIDList(), [])
self.assertEqual(self.db.getObject(oid1), None)
self.assertEqual(self.db.getObject(oid2), None)
self.assertEqual(self.db.getTransaction(tid1, True), None)
self.assertEqual(self.db.getTransaction(tid2, True), None)
self.assertEqual(self.db.getTransaction(tid1, False), None)
self.assertEqual(self.db.getTransaction(tid2, False), None)
# store in temporary tables
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
result = self.db.getTransaction(tid1, True)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, True)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
self.assertEqual(self.db.getTransaction(tid1, False), None)
self.assertEqual(self.db.getTransaction(tid2, False), None)
# commit pending transaction
self.db.finishTransaction(tid1)
self.db.finishTransaction(tid2)
result = self.db.getTransaction(tid1, True)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, True)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid1, False)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, False)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
def test_askFinishTransaction(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid2])
# stored but not finished
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
result = self.db.getTransaction(tid1, True)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, True)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
self.assertEqual(self.db.getTransaction(tid1, False), None)
self.assertEqual(self.db.getTransaction(tid2, False), None)
# stored and finished
self.db.finishTransaction(tid1)
self.db.finishTransaction(tid2)
result = self.db.getTransaction(tid1, True)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, True)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid1, False)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, False)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
def test_deleteTransaction(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid2])
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
self.db.finishTransaction(tid1)
self.db.deleteTransaction(tid1, [oid1])
self.db.deleteTransaction(tid2, [oid2])
self.assertEqual(self.db.getTransaction(tid1, True), None)
self.assertEqual(self.db.getTransaction(tid2, True), None)
def test_deleteTransactionsAbove(self):
self.setNumPartitions(2)
tid1 = self.getOID(0)
tid2 = self.getOID(1)
tid3 = self.getOID(2)
oid1 = self.getOID(1)
for tid in (tid1, tid2, tid3):
txn, objs = self.getTransaction([oid1])
self.db.storeTransaction(tid, objs, txn)
self.db.finishTransaction(tid)
self.db.deleteTransactionsAbove(2, 0, tid2, tid3)
# Right partition, below cutoff
self.assertNotEqual(self.db.getTransaction(tid1, True), None)
# Wrong partition, above cutoff
self.assertNotEqual(self.db.getTransaction(tid2, True), None)
# Right partition, above cutoff
self.assertEqual(self.db.getTransaction(tid3, True), None)
def test_deleteObject(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1, oid2])
txn2, objs2 = self.getTransaction([oid1, oid2])
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
self.db.finishTransaction(tid1)
self.db.finishTransaction(tid2)
self.db.deleteObject(oid1)
self.assertEqual(self.db.getObject(oid1, tid=tid1), None)
self.assertEqual(self.db.getObject(oid1, tid=tid2), None)
self.db.deleteObject(oid2, serial=tid1)
self.assertFalse(self.db.getObject(oid2, tid=tid1))
self.assertEqual(self.db.getObject(oid2, tid=tid2), (tid2, None) + \
objs2[1][1:])
def test_deleteObjectsAbove(self):
self.setNumPartitions(2)
tid1 = self.getOID(1)
tid2 = self.getOID(2)
tid3 = self.getOID(3)
oid1 = self.getOID(0)
oid2 = self.getOID(1)
oid3 = self.getOID(2)
for tid in (tid1, tid2, tid3):
txn, objs = self.getTransaction([oid1, oid2, oid3])
self.db.storeTransaction(tid, objs, txn)
self.db.finishTransaction(tid)
self.db.deleteObjectsAbove(2, 0, oid1, tid2, tid3)
# Check getObjectHistoryFrom because the MySQL adapter uses two tables
# that must be kept synchronized
self.assertEqual(self.db.getObjectHistoryFrom(ZERO_OID, ZERO_TID,
MAX_TID, 10, 2, 0), {oid1: [tid1]})
# Right partition, below cutoff
self.assertNotEqual(self.db.getObject(oid1, tid=tid1), None)
# Right partition, above tid cutoff
self.assertFalse(self.db.getObject(oid1, tid=tid2))
self.assertFalse(self.db.getObject(oid1, tid=tid3))
# Wrong partition, above cutoff
self.assertNotEqual(self.db.getObject(oid2, tid=tid1), None)
self.assertNotEqual(self.db.getObject(oid2, tid=tid2), None)
self.assertNotEqual(self.db.getObject(oid2, tid=tid3), None)
# Right partition, above cutoff
self.assertEqual(self.db.getObject(oid3), None)
def test_getTransaction(self):
oid1, oid2 = self.getOIDs(2)
tid1, tid2 = self.getTIDs(2)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid2])
# get from temporary table or not
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
self.db.finishTransaction(tid1)
result = self.db.getTransaction(tid1, True)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
result = self.db.getTransaction(tid2, True)
self.assertEqual(result, ([oid2], 'user', 'desc', 'ext', False))
# get from non-temporary only
result = self.db.getTransaction(tid1, False)
self.assertEqual(result, ([oid1], 'user', 'desc', 'ext', False))
self.assertEqual(self.db.getTransaction(tid2, False), None)
def test_getObjectHistory(self):
oid = self.getOID(1)
tid1, tid2, tid3 = self.getTIDs(3)
txn1, objs1 = self.getTransaction([oid])
txn2, objs2 = self.getTransaction([oid])
txn3, objs3 = self.getTransaction([oid])
# one revision
self.db.storeTransaction(tid1, objs1, txn1)
self.db.finishTransaction(tid1)
result = self.db.getObjectHistory(oid, 0, 3)
self.assertEqual(result, [(tid1, 0)])
result = self.db.getObjectHistory(oid, 1, 1)
self.assertEqual(result, None)
# two revisions
self.db.storeTransaction(tid2, objs2, txn2)
self.db.finishTransaction(tid2)
result = self.db.getObjectHistory(oid, 0, 3)
self.assertEqual(result, [(tid2, 0), (tid1, 0)])
result = self.db.getObjectHistory(oid, 1, 3)
self.assertEqual(result, [(tid1, 0)])
result = self.db.getObjectHistory(oid, 2, 3)
self.assertEqual(result, None)
def test_getObjectHistoryFrom(self):
self.setNumPartitions(2)
oid1 = self.getOID(0)
oid2 = self.getOID(2)
oid3 = self.getOID(1)
tid1, tid2, tid3, tid4, tid5 = self.getTIDs(5)
txn1, objs1 = self.getTransaction([oid1])
txn2, objs2 = self.getTransaction([oid2])
txn3, objs3 = self.getTransaction([oid1])
txn4, objs4 = self.getTransaction([oid2])
txn5, objs5 = self.getTransaction([oid3])
self.db.storeTransaction(tid1, objs1, txn1)
self.db.storeTransaction(tid2, objs2, txn2)
self.db.storeTransaction(tid3, objs3, txn3)
self.db.storeTransaction(tid4, objs4, txn4)
self.db.storeTransaction(tid5, objs5, txn5)
self.db.finishTransaction(tid1)
self.db.finishTransaction(tid2)
self.db.finishTransaction(tid3)
self.db.finishTransaction(tid4)
self.db.finishTransaction(tid5)
# Check full result
result = self.db.getObjectHistoryFrom(ZERO_OID, ZERO_TID, MAX_TID, 10,
2, 0)
self.assertEqual(result, {
oid1: [tid1, tid3],
oid2: [tid2, tid4],
})
# Lower bound is inclusive
result = self.db.getObjectHistoryFrom(oid1, tid1, MAX_TID, 10, 2, 0)
self.assertEqual(result, {
oid1: [tid1, tid3],
oid2: [tid2, tid4],
})
# Upper bound is inclusive
result = self.db.getObjectHistoryFrom(ZERO_OID, ZERO_TID, tid3, 10,
2, 0)
self.assertEqual(result, {
oid1: [tid1, tid3],
oid2: [tid2],
})
# Length is total number of serials
result = self.db.getObjectHistoryFrom(ZERO_OID, ZERO_TID, MAX_TID, 3,
2, 0)
self.assertEqual(result, {
oid1: [tid1, tid3],
oid2: [tid2],
})
# Partition constraints are honored
result = self.db.getObjectHistoryFrom(ZERO_OID, ZERO_TID, MAX_TID, 10,
2, 1)
self.assertEqual(result, {
oid3: [tid5],
})
def _storeTransactions(self, count):
# use OID generator to know result of tid % N
tid_list = self.getOIDs(count)
oid = self.getOID(1)
for tid in tid_list:
txn, objs = self.getTransaction([oid])
self.db.storeTransaction(tid, objs, txn)
self.db.finishTransaction(tid)
return tid_list
def test_getTIDList(self):
self.setNumPartitions(2, True)
tid1, tid2, tid3, tid4 = self._storeTransactions(4)
# get tids
# - all partitions
result = self.db.getTIDList(0, 4, 2, [0, 1])
self.checkSet(result, [tid1, tid2, tid3, tid4])
# - one partition
result = self.db.getTIDList(0, 4, 2, [0])
self.checkSet(result, [tid1, tid3])
result = self.db.getTIDList(0, 4, 2, [1])
self.checkSet(result, [tid2, tid4])
# get a subset of tids
result = self.db.getTIDList(0, 1, 2, [0])
self.checkSet(result, [tid3]) # desc order
result = self.db.getTIDList(1, 1, 2, [1])
self.checkSet(result, [tid2])
result = self.db.getTIDList(2, 2, 2, [0])
self.checkSet(result, [])
def test_getReplicationTIDList(self):
self.setNumPartitions(2, True)
tid1, tid2, tid3, tid4 = self._storeTransactions(4)
# get tids
# - all
result = self.db.getReplicationTIDList(ZERO_TID, MAX_TID, 10, 2, 0)
self.checkSet(result, [tid1, tid3])
# - one partition
result = self.db.getReplicationTIDList(ZERO_TID, MAX_TID, 10, 2, 0)
self.checkSet(result, [tid1, tid3])
# - another partition
result = self.db.getReplicationTIDList(ZERO_TID, MAX_TID, 10, 2, 1)
self.checkSet(result, [tid2, tid4])
# - min_tid is inclusive
result = self.db.getReplicationTIDList(tid3, MAX_TID, 10, 2, 0)
self.checkSet(result, [tid3])
# - max tid is inclusive
result = self.db.getReplicationTIDList(ZERO_TID, tid2, 10, 2, 0)
self.checkSet(result, [tid1])
# - limit
result = self.db.getReplicationTIDList(ZERO_TID, MAX_TID, 1, 2, 0)
self.checkSet(result, [tid1])
def test__getObjectData(self):
self.setNumPartitions(4, True)
db = self.db
tid0 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
assert tid0 < tid1 < tid2 < tid3
oid1 = self.getOID(1)
oid2 = self.getOID(2)
oid3 = self.getOID(3)
db.storeTransaction(
tid1, (
(oid1, 0, 0, 'foo', None),
(oid2, None, None, None, tid0),
(oid3, None, None, None, tid2),
), None, temporary=False)
db.storeTransaction(
tid2, (
(oid1, None, None, None, tid1),
(oid2, None, None, None, tid1),
(oid3, 0, 0, 'bar', None),
), None, temporary=False)
original_getObjectData = db._getObjectData
def _getObjectData(*args, **kw):
call_counter.append(1)
return original_getObjectData(*args, **kw)
db._getObjectData = _getObjectData
# NOTE: all tests are done as if values were fetched by _getObject, so
# there is already one indirection level.
# oid1 at tid1: data is immediately found
call_counter = []
self.assertEqual(
db._getObjectData(u64(oid1), u64(tid1), u64(tid3)),
(u64(tid1), 0, 0, 'foo'))
self.assertEqual(sum(call_counter), 1)
# oid2 at tid1: missing data in table, raise IndexError on next
# recursive call
call_counter = []
self.assertRaises(IndexError, db._getObjectData, u64(oid2), u64(tid1),
u64(tid3))
self.assertEqual(sum(call_counter), 2)
# oid3 at tid1: data_serial greater than row's tid, raise ValueError
# on next recursive call - even if data does exist at that tid (see
# "oid3 at tid2" case below)
call_counter = []
self.assertRaises(ValueError, db._getObjectData, u64(oid3), u64(tid1),
u64(tid3))
self.assertEqual(sum(call_counter), 2)
# Same with wrong parameters (tid0 < tid1)
call_counter = []
self.assertRaises(ValueError, db._getObjectData, u64(oid3), u64(tid1),
u64(tid0))
self.assertEqual(sum(call_counter), 1)
# Same with wrong parameters (tid1 == tid1)
call_counter = []
self.assertRaises(ValueError, db._getObjectData, u64(oid3), u64(tid1),
u64(tid1))
self.assertEqual(sum(call_counter), 1)
# oid1 at tid2: data is found after one recursive call
call_counter = []
self.assertEqual(
db._getObjectData(u64(oid1), u64(tid2), u64(tid3)),
(u64(tid1), 0, 0, 'foo'))
self.assertEqual(sum(call_counter), 2)
# oid2 at tid2: missing data in table, raise IndexError after two
# recursive calls
call_counter = []
self.assertRaises(IndexError, db._getObjectData, u64(oid2), u64(tid2),
u64(tid3))
self.assertEqual(sum(call_counter), 3)
# oid3 at tid2: data is immediately found
call_counter = []
self.assertEqual(
db._getObjectData(u64(oid3), u64(tid2), u64(tid3)),
(u64(tid2), 0, 0, 'bar'))
self.assertEqual(sum(call_counter), 1)
def test__getDataTIDFromData(self):
self.setNumPartitions(4, True)
db = self.db
tid1 = self.getNextTID()
tid2 = self.getNextTID()
oid1 = self.getOID(1)
db.storeTransaction(
tid1, (
(oid1, 0, 0, 'foo', None),
), None, temporary=False)
db.storeTransaction(
tid2, (
(oid1, None, None, None, tid1),
), None, temporary=False)
self.assertEqual(
db._getDataTIDFromData(u64(oid1),
db._getObject(u64(oid1), tid=u64(tid1))),
(u64(tid1), u64(tid1)))
self.assertEqual(
db._getDataTIDFromData(u64(oid1),
db._getObject(u64(oid1), tid=u64(tid2))),
(u64(tid2), u64(tid1)))
def test__getDataTID(self):
self.setNumPartitions(4, True)
db = self.db
tid1 = self.getNextTID()
tid2 = self.getNextTID()
oid1 = self.getOID(1)
db.storeTransaction(
tid1, (
(oid1, 0, 0, 'foo', None),
), None, temporary=False)
db.storeTransaction(
tid2, (
(oid1, None, None, None, tid1),
), None, temporary=False)
self.assertEqual(
db._getDataTID(u64(oid1), tid=u64(tid1)),
(u64(tid1), u64(tid1)))
self.assertEqual(
db._getDataTID(u64(oid1), tid=u64(tid2)),
(u64(tid2), u64(tid1)))
def test_findUndoTID(self):
self.setNumPartitions(4, True)
db = self.db
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
tid4 = self.getNextTID()
tid5 = self.getNextTID()
oid1 = self.getOID(1)
db.storeTransaction(
tid1, (
(oid1, 0, 0, 'foo', None),
), None, temporary=False)
# Undoing oid1 tid1, OK: tid1 is latest
# Result: current tid is tid1, data_tid is None (undoing object
# creation)
self.assertEqual(
db.findUndoTID(oid1, tid5, tid4, tid1, None),
(tid1, None, True))
# Store a new transaction
db.storeTransaction(
tid2, (
(oid1, 0, 0, 'bar', None),
), None, temporary=False)
# Undoing oid1 tid2, OK: tid2 is latest
# Result: current tid is tid2, data_tid is tid1
self.assertEqual(
db.findUndoTID(oid1, tid5, tid4, tid2, None),
(tid2, tid1, True))
# Undoing oid1 tid1, Error: tid2 is latest
# Result: current tid is tid2, data_tid is None, undo not allowed
self.assertEqual(
db.findUndoTID(oid1, tid5, tid4, tid1, None),
(tid2, None, False))
# Undoing oid1 tid1 with tid2 being undone in same transaction,
# OK: tid1 is latest
# Result: current tid is tid1, data_tid is None (undoing object
# creation)
# Explanation of transaction_object: oid1, no data but a data serial
# to tid1
self.assertEqual(
db.findUndoTID(oid1, tid5, tid4, tid1,
(u64(oid1), None, None, None, tid1)),
(tid1, None, True))
# Store a new transaction
db.storeTransaction(
tid3, (
(oid1, None, None, None, tid1),
), None, temporary=False)
# Undoing oid1 tid1, OK: tid3 is latest with tid1 data
# Result: current tid is tid3, data_tid is None (undoing object
# creation)
self.assertEqual(
db.findUndoTID(oid1, tid5, tid4, tid1, None),
(tid3, None, True))
if __name__ == "__main__":
unittest.main()
# neo/tests/storage/testStorageHandler.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from collections import deque
from neo.tests import NeoUnitTestBase
from neo.storage.app import Application
from neo.storage.handlers.storage import StorageOperationHandler
from neo.lib.protocol import INVALID_PARTITION, Packets
from neo.lib.protocol import INVALID_TID, INVALID_OID
class StorageStorageHandlerTests(NeoUnitTestBase):
def checkHandleUnexpectedPacket(self, _call, _msg_type, _listening=True, **kwargs):
conn = self.getFakeConnection(address=("127.0.0.1", self.master_port),
is_server=_listening)
# hook
self.operation.peerBroken = lambda c: c.peerBrokenCalled()
self.checkUnexpectedPacketRaised(_call, conn=conn, **kwargs)
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.app.transaction_dict = {}
self.app.store_lock_dict = {}
self.app.load_lock_dict = {}
self.app.event_queue = deque()
self.app.event_queue_dict = {}
# handler
self.operation = StorageOperationHandler(self.app)
# set pmn
self.master_uuid = self.getNewUUID()
pmn = self.app.nm.getMasterList()[0]
pmn.setUUID(self.master_uuid)
self.app.primary_master_node = pmn
self.master_port = 10010
def test_18_askTransactionInformation1(self):
# transaction does not exist
conn = self.getFakeConnection()
self.app.dm = Mock({'getNumPartitions': 1})
self.operation.askTransactionInformation(conn, INVALID_TID)
self.checkErrorPacket(conn)
def test_18_askTransactionInformation2(self):
# answer
conn = self.getFakeConnection()
tid = self.getNextTID()
oid_list = [self.getOID(1), self.getOID(2)]
dm = Mock({"getTransaction": (oid_list, 'user', 'desc', '', False), })
self.app.dm = dm
self.operation.askTransactionInformation(conn, tid)
self.checkAnswerTransactionInformation(conn)
def test_24_askObject1(self):
# delayed response
conn = self.getFakeConnection()
oid = self.getOID(1)
tid = self.getNextTID()
serial = self.getNextTID()
self.app.dm = Mock()
self.app.tm = Mock({'loadLocked': True})
self.app.load_lock_dict[oid] = object()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=oid, serial=serial, tid=tid)
self.assertEqual(len(self.app.event_queue), 1)
self.checkNoPacketSent(conn)
self.assertEqual(len(self.app.dm.mockGetNamedCalls('getObject')), 0)
def test_24_askObject2(self):
# invalid serial / tid / object not found
self.app.dm = Mock({'getObject': None})
conn = self.getFakeConnection()
oid = self.getOID(1)
tid = self.getNextTID()
serial = self.getNextTID()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=oid, serial=serial, tid=tid)
calls = self.app.dm.mockGetNamedCalls('getObject')
self.assertEqual(len(self.app.event_queue), 0)
self.assertEqual(len(calls), 1)
calls[0].checkArgs(oid, serial, tid, resolve_data=False)
self.checkErrorPacket(conn)
def test_24_askObject3(self):
oid = self.getOID(1)
tid = self.getNextTID()
serial = self.getNextTID()
next_serial = self.getNextTID()
# object found => answer
self.app.dm = Mock({'getObject': (serial, next_serial, 0, 0, '', None)})
conn = self.getFakeConnection()
self.assertEqual(len(self.app.event_queue), 0)
self.operation.askObject(conn, oid=oid, serial=serial, tid=tid)
self.assertEqual(len(self.app.event_queue), 0)
self.checkAnswerObject(conn)
def test_25_askTIDsFrom(self):
# normal case => answer
conn = self.getFakeConnection()
self.app.dm = Mock({'getReplicationTIDList': (INVALID_TID, )})
self.app.pt = Mock({'getPartitions': 1})
tid = self.getNextTID()
tid2 = self.getNextTID()
self.operation.askTIDsFrom(conn, tid, tid2, 2, [1])
calls = self.app.dm.mockGetNamedCalls('getReplicationTIDList')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid, tid2, 2, 1, 1)
self.checkAnswerTidsFrom(conn)
def test_26_askObjectHistoryFrom(self):
min_oid = self.getOID(2)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = 4
partition = 8
num_partitions = 16
tid = self.getNextTID()
conn = self.getFakeConnection()
self.app.dm = Mock({'getObjectHistoryFrom': {min_oid: [tid]},})
self.app.pt = Mock({
'getPartitions': num_partitions,
})
self.operation.askObjectHistoryFrom(conn, min_oid, min_serial,
max_serial, length, partition)
self.checkAnswerObjectHistoryFrom(conn)
calls = self.app.dm.mockGetNamedCalls('getObjectHistoryFrom')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(min_oid, min_serial, max_serial, length,
num_partitions, partition)
def test_askCheckTIDRange(self):
count = 1
tid_checksum = self.getNewUUID()
min_tid = self.getNextTID()
num_partitions = 4
length = 5
partition = 6
max_tid = self.getNextTID()
self.app.dm = Mock({'checkTIDRange': (count, tid_checksum, max_tid)})
self.app.pt = Mock({'getPartitions': num_partitions})
conn = self.getFakeConnection()
self.operation.askCheckTIDRange(conn, min_tid, max_tid, length, partition)
calls = self.app.dm.mockGetNamedCalls('checkTIDRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(min_tid, max_tid, length, num_partitions, partition)
pmin_tid, plength, pcount, ptid_checksum, pmax_tid = \
self.checkAnswerPacket(conn, Packets.AnswerCheckTIDRange,
decode=True)
self.assertEqual(min_tid, pmin_tid)
self.assertEqual(length, plength)
self.assertEqual(count, pcount)
self.assertEqual(tid_checksum, ptid_checksum)
self.assertEqual(max_tid, pmax_tid)
def test_askCheckSerialRange(self):
count = 1
oid_checksum = self.getNewUUID()
min_oid = self.getOID(1)
num_partitions = 4
length = 5
partition = 6
serial_checksum = self.getNewUUID()
min_serial = self.getNextTID()
max_serial = self.getNextTID()
max_oid = self.getOID(2)
self.app.dm = Mock({'checkSerialRange': (count, oid_checksum, max_oid,
serial_checksum, max_serial)})
self.app.pt = Mock({'getPartitions': num_partitions})
conn = self.getFakeConnection()
self.operation.askCheckSerialRange(conn, min_oid, min_serial,
max_serial, length, partition)
calls = self.app.dm.mockGetNamedCalls('checkSerialRange')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(min_oid, min_serial, max_serial, length,
num_partitions, partition)
pmin_oid, pmin_serial, plength, pcount, poid_checksum, pmax_oid, \
pserial_checksum, pmax_serial = self.checkAnswerPacket(conn,
Packets.AnswerCheckSerialRange, decode=True)
self.assertEqual(min_oid, pmin_oid)
self.assertEqual(min_serial, pmin_serial)
self.assertEqual(length, plength)
self.assertEqual(count, pcount)
self.assertEqual(oid_checksum, poid_checksum)
self.assertEqual(max_oid, pmax_oid)
self.assertEqual(serial_checksum, pserial_checksum)
self.assertEqual(max_serial, pmax_serial)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testStorageMySQLdb.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
import MySQLdb
from mock import Mock
from neo.lib.exception import DatabaseFailure
from neo.tests.storage.testStorageDBTests import StorageDBTests
from neo.storage.database.mysqldb import MySQLDatabaseManager
NEO_SQL_DATABASE = 'test_mysqldb0'
NEO_SQL_USER = 'test'
class StorageMySQLdbTests(StorageDBTests):
def getDB(self, reset=0):
self.prepareDatabase(number=1, prefix=NEO_SQL_DATABASE[:-1])
# db manager
database = '%s@%s' % (NEO_SQL_USER, NEO_SQL_DATABASE)
db = MySQLDatabaseManager(database)
db.setup(reset)
return db
    def checkCalledQuery(self, query=None, call=0):
        calls = self.db.conn.mockGetNamedCalls('query')
        self.assertTrue(len(calls) > call)
        calls[call].checkArgs(query)
def test_MySQLDatabaseManagerInit(self):
db = MySQLDatabaseManager('%s@%s' % (NEO_SQL_USER, NEO_SQL_DATABASE))
# init
self.assertEqual(db.db, NEO_SQL_DATABASE)
self.assertEqual(db.user, NEO_SQL_USER)
# & connect
self.assertTrue(isinstance(db.conn, MySQLdb.connection))
self.assertFalse(db.isUnderTransaction())
def test_begin(self):
# no current transaction
self.db.conn = Mock({ })
self.assertFalse(self.db.isUnderTransaction())
self.db.begin()
        self.checkCalledQuery(query='BEGIN')
self.assertTrue(self.db.isUnderTransaction())
def test_commit(self):
self.db.conn = Mock()
self.db.begin()
self.db.commit()
self.assertEqual(len(self.db.conn.mockGetNamedCalls('commit')), 1)
self.assertFalse(self.db.isUnderTransaction())
def test_rollback(self):
# rollback called and no current transaction
self.db.conn = Mock({ })
self.db.under_transaction = True
self.db.rollback()
self.assertEqual(len(self.db.conn.mockGetNamedCalls('rollback')), 1)
self.assertFalse(self.db.isUnderTransaction())
def test_query1(self):
# fake result object
from array import array
result_object = Mock({
"num_rows": 1,
"fetch_row": ((1, 2, array('b', (1, 2, ))), ),
})
# expected formatted result
expected_result = (
(1, 2, '\x01\x02', ),
)
self.db.conn = Mock({ 'store_result': result_object })
result = self.db.query('QUERY')
self.assertEqual(result, expected_result)
calls = self.db.conn.mockGetNamedCalls('query')
self.assertEqual(len(calls), 1)
calls[0].checkArgs('QUERY')
def test_query2(self):
        # test the OperationalError exception
        # fake connection object that raises an exception on the first call
from MySQLdb import OperationalError
from MySQLdb.constants.CR import SERVER_GONE_ERROR
class FakeConn(object):
def query(*args):
raise OperationalError(SERVER_GONE_ERROR, 'this is a test')
self.db.conn = FakeConn()
self.connect_called = False
def connect_hook():
# mock object, break raise/connect loop
self.db.conn = Mock({'num_rows': 0})
self.connect_called = True
self.db._connect = connect_hook
# make a query, exception will be raised then connect() will be
# called and the second query will use the mock object
self.db.query('QUERY')
self.assertTrue(self.connect_called)
def test_query3(self):
        # an unhandled OperationalError must raise a DatabaseFailure exception
from MySQLdb import OperationalError
class FakeConn(object):
def close(self):
pass
def query(*args):
raise OperationalError(-1, 'this is a test')
self.db.conn = FakeConn()
self.assertRaises(DatabaseFailure, self.db.query, 'QUERY')
def test_escape(self):
self.assertEqual(self.db.escape('a"b'), 'a\\"b')
self.assertEqual(self.db.escape("a'b"), "a\\'b")
def test_setup(self):
# XXX: this test verifies irrelevant symptoms. It should instead check that
# - setup, store, setup, load -> data still there
# - setup, store, setup(reset=True), load -> data not found
# Then, it should be moved to generic test class.
# create all tables
self.db.conn = Mock()
self.db.setup()
calls = self.db.conn.mockGetNamedCalls('query')
self.assertEqual(len(calls), 7)
# create all tables but drop them first
self.db.conn = Mock()
self.db.setup(reset=True)
calls = self.db.conn.mockGetNamedCalls('query')
self.assertEqual(len(calls), 8)
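    # The generic persistence test suggested by the XXX comment in test_setup
    # could look like the following sketch. It is kept commented out because
    # the exact 'storeTransaction'/'getTransaction' signatures used below are
    # illustrative assumptions and must be checked against DatabaseManager
    # before enabling:
    #
    # def test_setupPreservesData(self):
    #     db = self.getDB()
    #     tid = self.getNextTID()
    #     db.storeTransaction(tid, (), ([], 'user', 'desc', 'ext', False),
    #         False)
    #     db.setup()            # without reset: data must still be there
    #     self.assertNotEqual(db.getTransaction(tid), None)
    #     db.setup(reset=True)  # with reset: data must be dropped
    #     self.assertEqual(db.getTransaction(tid), None)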
del StorageDBTests # prevent unittest from also running the imported base class
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testTransactions.py 0000664 0000000 0000000 00000041757 11634614701 0027736 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.storage.transactions import Transaction, TransactionManager
from neo.storage.transactions import ConflictError, DelayedError
class TransactionTests(NeoUnitTestBase):
def testInit(self):
uuid = self.getNewUUID()
ttid = self.getNextTID()
tid = self.getNextTID()
txn = Transaction(uuid, ttid)
self.assertEqual(txn.getUUID(), uuid)
self.assertEqual(txn.getTTID(), ttid)
self.assertEqual(txn.getTID(), None)
txn.setTID(tid)
self.assertEqual(txn.getTID(), tid)
self.assertEqual(txn.getObjectList(), [])
self.assertEqual(txn.getOIDList(), [])
def testRepr(self):
""" Just check if the __repr__ implementation will not raise """
uuid = self.getNewUUID()
tid = self.getNextTID()
txn = Transaction(uuid, tid)
repr(txn)
def testLock(self):
txn = Transaction(self.getNewUUID(), self.getNextTID())
self.assertFalse(txn.isLocked())
txn.lock()
self.assertTrue(txn.isLocked())
# disallow lock more than once
self.assertRaises(AssertionError, txn.lock)
def testTransaction(self):
txn = Transaction(self.getNewUUID(), self.getNextTID())
oid_list = [self.getOID(1), self.getOID(2)]
txn_info = (oid_list, 'USER', 'DESC', 'EXT', False)
txn.prepare(*txn_info)
self.assertEqual(txn.getTransactionInformations(), txn_info)
def testObjects(self):
txn = Transaction(self.getNewUUID(), self.getNextTID())
oid1, oid2 = self.getOID(1), self.getOID(2)
object1 = (oid1, 1, '1', 'O1', None)
object2 = (oid2, 1, '2', 'O2', None)
self.assertEqual(txn.getObjectList(), [])
self.assertEqual(txn.getOIDList(), [])
txn.addObject(*object1)
self.assertEqual(txn.getObjectList(), [object1])
self.assertEqual(txn.getOIDList(), [oid1])
txn.addObject(*object2)
self.assertEqual(txn.getObjectList(), [object1, object2])
self.assertEqual(txn.getOIDList(), [oid1, oid2])
def test_getObject(self):
oid_1 = self.getOID(1)
oid_2 = self.getOID(2)
txn = Transaction(self.getNewUUID(), self.getNextTID())
object_info = (oid_1, None, None, None, None)
txn.addObject(*object_info)
self.assertEqual(txn.getObject(oid_2), None)
self.assertEqual(txn.getObject(oid_1), object_info)
class TransactionManagerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock()
# no history
self.app.dm = Mock({'getObjectHistory': []})
self.app.pt = Mock({'isAssigned': True})
self.manager = TransactionManager(self.app)
self.ltid = None
def _getTransaction(self):
tid = self.getNextTID(self.ltid)
oid_list = [self.getOID(1), self.getOID(2)]
return (tid, (oid_list, 'USER', 'DESC', 'EXT', False))
def _storeTransactionObjects(self, tid, txn):
for i, oid in enumerate(txn[0]):
self.manager.storeObject(tid, None,
oid, 1, str(i), '0' + str(i), None)
def _getObject(self, value):
oid = self.getOID(value)
serial = self.getNextTID()
return (serial, (oid, 1, str(value), 'O' + str(value), None))
def _checkTransactionStored(self, *args):
calls = self.app.dm.mockGetNamedCalls('storeTransaction')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(*args)
def _checkTransactionFinished(self, tid):
calls = self.app.dm.mockGetNamedCalls('finishTransaction')
self.assertEqual(len(calls), 1)
calls[0].checkArgs(tid)
def _checkQueuedEventExecuted(self, number=1):
calls = self.app.mockGetNamedCalls('executeQueuedEvents')
self.assertEqual(len(calls), number)
def testSimpleCase(self):
""" One node, one transaction, not abort """
uuid = self.getNewUUID()
ttid = self.getNextTID()
tid, txn = self._getTransaction()
serial1, object1 = self._getObject(1)
serial2, object2 = self._getObject(2)
self.manager.register(uuid, ttid)
self.manager.storeTransaction(ttid, *txn)
self.manager.storeObject(ttid, serial1, *object1)
self.manager.storeObject(ttid, serial2, *object2)
self.assertTrue(ttid in self.manager)
self.manager.lock(ttid, tid, txn[0])
self._checkTransactionStored(tid, [object1, object2], txn)
self.manager.unlock(ttid)
self.assertFalse(ttid in self.manager)
self._checkTransactionFinished(tid)
def testDelayed(self):
""" Two transactions, the first cause the second to be delayed """
uuid = self.getNewUUID()
ttid2 = self.getNextTID()
ttid1 = self.getNextTID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
serial, obj = self._getObject(1)
        # the first transaction locks the object
self.manager.register(uuid, ttid1)
self.manager.storeTransaction(ttid1, *txn1)
self.assertTrue(ttid1 in self.manager)
self._storeTransactionObjects(ttid1, txn1)
self.manager.lock(ttid1, tid1, txn1[0])
# the second is delayed
self.manager.register(uuid, ttid2)
self.manager.storeTransaction(ttid2, *txn2)
self.assertTrue(ttid2 in self.manager)
self.assertRaises(DelayedError, self.manager.storeObject,
ttid2, serial, *obj)
def testUnresolvableConflict(self):
""" A newer transaction has already modified an object """
uuid = self.getNewUUID()
ttid2 = self.getNextTID()
ttid1 = self.getNextTID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
serial, obj = self._getObject(1)
        # the (later) transaction locks (changes) the object
self.manager.register(uuid, ttid2)
self.manager.storeTransaction(ttid2, *txn2)
self.assertTrue(ttid2 in self.manager)
self._storeTransactionObjects(ttid2, txn2)
self.manager.lock(ttid2, tid2, txn2[0])
        # the previous one is not using the latest version
self.manager.register(uuid, ttid1)
self.manager.storeTransaction(ttid1, *txn1)
self.assertTrue(ttid1 in self.manager)
self.assertRaises(ConflictError, self.manager.storeObject,
ttid1, serial, *obj)
def testResolvableConflict(self):
""" Try to store an object with the lastest revision """
uuid = self.getNewUUID()
tid, txn = self._getTransaction()
serial, obj = self._getObject(1)
next_serial = self.getNextTID(serial)
# try to store without the last revision
self.app.dm = Mock({'getObjectHistory': [next_serial]})
self.manager.register(uuid, tid)
self.manager.storeTransaction(tid, *txn)
self.assertRaises(ConflictError, self.manager.storeObject,
tid, serial, *obj)
def testLockDelayed(self):
""" Check lock delay """
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
self.assertNotEqual(uuid1, uuid2)
ttid2 = self.getNextTID()
ttid1 = self.getNextTID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
serial1, obj1 = self._getObject(1)
serial2, obj2 = self._getObject(2)
        # the first transaction locks the objects
self.manager.register(uuid1, ttid1)
self.manager.storeTransaction(ttid1, *txn1)
self.assertTrue(ttid1 in self.manager)
self.manager.storeObject(ttid1, serial1, *obj1)
        self.manager.storeObject(ttid1, serial2, *obj2)
self.manager.lock(ttid1, tid1, txn1[0])
# second transaction is delayed
self.manager.register(uuid2, ttid2)
self.manager.storeTransaction(ttid2, *txn2)
self.assertTrue(ttid2 in self.manager)
self.assertRaises(DelayedError, self.manager.storeObject,
ttid2, serial1, *obj1)
self.assertRaises(DelayedError, self.manager.storeObject,
ttid2, serial2, *obj2)
def testLockConflict(self):
""" Check lock conflict """
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
self.assertNotEqual(uuid1, uuid2)
ttid2 = self.getNextTID()
ttid1 = self.getNextTID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
serial1, obj1 = self._getObject(1)
serial2, obj2 = self._getObject(2)
        # the second transaction locks the objects
self.manager.register(uuid2, ttid2)
self.manager.storeTransaction(ttid2, *txn2)
self.manager.storeObject(ttid2, serial1, *obj1)
self.manager.storeObject(ttid2, serial2, *obj2)
self.assertTrue(ttid2 in self.manager)
        self.manager.lock(ttid2, tid2, txn2[0])
        # the first one gets a conflict
self.manager.register(uuid1, ttid1)
self.manager.storeTransaction(ttid1, *txn1)
self.assertTrue(ttid1 in self.manager)
self.assertRaises(ConflictError, self.manager.storeObject,
ttid1, serial1, *obj1)
self.assertRaises(ConflictError, self.manager.storeObject,
ttid1, serial2, *obj2)
def testAbortUnlocked(self):
""" Abort a non-locked transaction """
uuid = self.getNewUUID()
tid, txn = self._getTransaction()
serial, obj = self._getObject(1)
self.manager.register(uuid, tid)
self.manager.storeTransaction(tid, *txn)
self.manager.storeObject(tid, serial, *obj)
self.assertTrue(tid in self.manager)
# transaction is not locked
self.manager.abort(tid)
self.assertFalse(tid in self.manager)
self.assertFalse(self.manager.loadLocked(obj[0]))
self._checkQueuedEventExecuted()
def testAbortLockedDoNothing(self):
""" Try to abort a locked transaction """
uuid = self.getNewUUID()
ttid = self.getNextTID()
tid, txn = self._getTransaction()
self.manager.register(uuid, ttid)
self.manager.storeTransaction(ttid, *txn)
self._storeTransactionObjects(ttid, txn)
# lock transaction
self.manager.lock(ttid, tid, txn[0])
self.assertTrue(ttid in self.manager)
self.manager.abort(ttid)
self.assertTrue(ttid in self.manager)
for oid in txn[0]:
self.assertTrue(self.manager.loadLocked(oid))
self._checkQueuedEventExecuted(number=0)
def testAbortForNode(self):
""" Abort transaction for a node """
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
self.assertNotEqual(uuid1, uuid2)
ttid1 = self.getNextTID()
ttid2 = self.getNextTID()
ttid3 = self.getNextTID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
tid3, txn3 = self._getTransaction()
self.manager.register(uuid1, ttid1)
self.manager.register(uuid2, ttid2)
self.manager.register(uuid2, ttid3)
self.manager.storeTransaction(ttid1, *txn1)
        # node 2 owns ttid2 & ttid3 and locks ttid2 only
self.manager.storeTransaction(ttid2, *txn2)
self.manager.storeTransaction(ttid3, *txn3)
self._storeTransactionObjects(ttid2, txn2)
self.manager.lock(ttid2, tid2, txn2[0])
self.assertTrue(ttid1 in self.manager)
self.assertTrue(ttid2 in self.manager)
self.assertTrue(ttid3 in self.manager)
self.manager.abortFor(uuid2)
        # only ttid3 is aborted
self.assertTrue(ttid1 in self.manager)
self.assertTrue(ttid2 in self.manager)
self.assertFalse(ttid3 in self.manager)
self._checkQueuedEventExecuted(number=1)
def testReset(self):
""" Reset the manager """
uuid = self.getNewUUID()
tid, txn = self._getTransaction()
ttid = self.getNextTID()
self.manager.register(uuid, ttid)
self.manager.storeTransaction(ttid, *txn)
self._storeTransactionObjects(ttid, txn)
self.manager.lock(ttid, tid, txn[0])
self.assertTrue(ttid in self.manager)
self.manager.reset()
self.assertFalse(ttid in self.manager)
for oid in txn[0]:
self.assertFalse(self.manager.loadLocked(oid))
def test_getObjectFromTransaction(self):
uuid = self.getNewUUID()
tid1, txn1 = self._getTransaction()
tid2, txn2 = self._getTransaction()
serial1, obj1 = self._getObject(1)
serial2, obj2 = self._getObject(2)
self.manager.register(uuid, tid1)
self.manager.storeObject(tid1, serial1, *obj1)
self.assertEqual(self.manager.getObjectFromTransaction(tid2, obj1[0]),
None)
self.assertEqual(self.manager.getObjectFromTransaction(tid1, obj2[0]),
None)
self.assertEqual(self.manager.getObjectFromTransaction(tid1, obj1[0]),
obj1)
def test_getLockingTID(self):
uuid = self.getNewUUID()
serial1, obj1 = self._getObject(1)
oid1 = obj1[0]
tid1, txn1 = self._getTransaction()
self.assertEqual(self.manager.getLockingTID(oid1), None)
self.manager.register(uuid, tid1)
self.manager.storeObject(tid1, serial1, *obj1)
self.assertEqual(self.manager.getLockingTID(oid1), tid1)
def test_updateObjectDataForPack(self):
ram_serial = self.getNextTID()
oid = self.getOID(1)
orig_serial = self.getNextTID()
uuid = self.getNewUUID()
locking_serial = self.getNextTID()
other_serial = self.getNextTID()
new_serial = self.getNextTID()
compression = 1
checksum = 42
value = 'foo'
self.manager.register(uuid, locking_serial)
def getObjectData():
return (compression, checksum, value)
# Object not known, nothing happens
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), None)
self.manager.updateObjectDataForPack(oid, orig_serial, None, None)
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), None)
self.manager.abort(locking_serial, even_if_locked=True)
        # Object known, but it doesn't point at orig_serial, so it is not updated
self.manager.register(uuid, locking_serial)
self.manager.storeObject(locking_serial, ram_serial, oid, 0, 512,
'bar', None)
orig_object = self.manager.getObjectFromTransaction(locking_serial,
oid)
self.manager.updateObjectDataForPack(oid, orig_serial, None, None)
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), orig_object)
self.manager.abort(locking_serial, even_if_locked=True)
self.manager.register(uuid, locking_serial)
self.manager.storeObject(locking_serial, ram_serial, oid, None, None,
None, other_serial)
orig_object = self.manager.getObjectFromTransaction(locking_serial,
oid)
self.manager.updateObjectDataForPack(oid, orig_serial, None, None)
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), orig_object)
self.manager.abort(locking_serial, even_if_locked=True)
        # Object known and pointing at undone data: it gets updated
# ...with data_serial: getObjectData must not be called
self.manager.register(uuid, locking_serial)
self.manager.storeObject(locking_serial, ram_serial, oid, None, None,
None, orig_serial)
self.manager.updateObjectDataForPack(oid, orig_serial, new_serial,
None)
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), (oid, None, None, None, new_serial))
self.manager.abort(locking_serial, even_if_locked=True)
# with data
self.manager.register(uuid, locking_serial)
self.manager.storeObject(locking_serial, ram_serial, oid, None, None,
None, orig_serial)
self.manager.updateObjectDataForPack(oid, orig_serial, None,
getObjectData)
self.assertEqual(self.manager.getObjectFromTransaction(locking_serial,
oid), (oid, compression, checksum, value, None))
self.manager.abort(locking_serial, even_if_locked=True)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/storage/testVerificationHandler.py 0000664 0000000 0000000 00000023226 11634614701 0031175 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.pt import PartitionTable
from neo.storage.app import Application
from neo.storage.handlers.verification import VerificationHandler
from neo.lib.protocol import CellStates, ErrorCodes
from neo.lib.exception import PrimaryFailure, OperationFailure
from neo.lib.util import p64, u64
class StorageVerificationHandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration(master_number=1)
self.app = Application(config)
self.verification = VerificationHandler(self.app)
        # define some variables to simulate the client and storage nodes
self.master_port = 10010
self.storage_port = 10020
self.client_port = 11011
self.num_partitions = 1009
self.num_replicas = 2
self.app.operational = False
self.app.load_lock_dict = {}
self.app.pt = PartitionTable(self.num_partitions, self.num_replicas)
def tearDown(self):
self.app.close()
del self.app
super(StorageVerificationHandlerTests, self).tearDown()
# Common methods
def getLastUUID(self):
return self.uuid
def getClientConnection(self):
address = ("127.0.0.1", self.client_port)
return self.getFakeConnection(uuid=self.getNewUUID(), address=address)
def getMasterConnection(self):
return self.getFakeConnection(address=("127.0.0.1", self.master_port))
# Tests
def test_03_connectionClosed(self):
conn = self.getClientConnection()
self.app.listening_conn = object() # mark as running
self.assertRaises(PrimaryFailure, self.verification.connectionClosed, conn,)
# nothing happens
self.checkNoPacketSent(conn)
def test_07_askLastIDs(self):
conn = self.getClientConnection()
last_ptid = self.getPTID(1)
last_oid = self.getOID(2)
self.app.pt = Mock({'getID': last_ptid})
class DummyDM(object):
def getLastOID(self):
raise KeyError
getLastTID = getLastOID
self.app.dm = DummyDM()
self.verification.askLastIDs(conn)
oid, tid, ptid = self.checkAnswerLastIDs(conn, decode=True)
self.assertEqual(oid, None)
self.assertEqual(tid, None)
self.assertEqual(ptid, last_ptid)
# return value stored in db
conn = self.getClientConnection()
self.app.dm = Mock({
'getLastOID': last_oid,
'getLastTID': p64(4),
})
self.verification.askLastIDs(conn)
oid, tid, ptid = self.checkAnswerLastIDs(conn, decode=True)
self.assertEqual(oid, last_oid)
self.assertEqual(u64(tid), 4)
self.assertEqual(ptid, self.app.pt.getID())
call_list = self.app.dm.mockGetNamedCalls('getLastOID')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs()
call_list = self.app.dm.mockGetNamedCalls('getLastTID')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs()
def test_08_askPartitionTable(self):
node = self.app.nm.createStorage(
address=("127.7.9.9", 1),
uuid=self.getNewUUID()
)
self.app.pt.setCell(1, node, CellStates.UP_TO_DATE)
self.assertTrue(self.app.pt.hasOffset(1))
conn = self.getClientConnection()
self.verification.askPartitionTable(conn)
ptid, row_list = self.checkAnswerPartitionTable(conn, decode=True)
self.assertEqual(len(row_list), 1009)
def test_10_notifyPartitionChanges(self):
# old partition change
conn = self.getMasterConnection()
self.verification.notifyPartitionChanges(conn, 1, ())
self.verification.notifyPartitionChanges(conn, 0, ())
self.assertEqual(self.app.pt.getID(), 1)
# new node
conn = self.getMasterConnection()
new_uuid = self.getNewUUID()
cell = (0, new_uuid, CellStates.UP_TO_DATE)
self.app.nm.createStorage(uuid=new_uuid)
self.app.pt = PartitionTable(1, 1)
self.app.dm = Mock({ })
ptid, self.ptid = self.getTwoIDs()
# pt updated
self.verification.notifyPartitionChanges(conn, ptid, (cell, ))
# check db update
calls = self.app.dm.mockGetNamedCalls('changePartitionTable')
self.assertEqual(len(calls), 1)
self.assertEqual(calls[0].getParam(0), ptid)
self.assertEqual(calls[0].getParam(1), (cell, ))
def test_11_startOperation(self):
conn = self.getMasterConnection()
self.assertFalse(self.app.operational)
self.verification.startOperation(conn)
self.assertTrue(self.app.operational)
def test_12_stopOperation(self):
conn = self.getMasterConnection()
self.assertRaises(OperationFailure, self.verification.stopOperation, conn)
def test_13_askUnfinishedTransactions(self):
# client connection with no data
self.app.dm = Mock({
'getUnfinishedTIDList': [],
})
conn = self.getMasterConnection()
self.verification.askUnfinishedTransactions(conn)
(max_tid, tid_list) = self.checkAnswerUnfinishedTransactions(conn, decode=True)
self.assertEqual(len(tid_list), 0)
call_list = self.app.dm.mockGetNamedCalls('getUnfinishedTIDList')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs()
# client connection with some data
self.app.dm = Mock({
'getUnfinishedTIDList': [p64(4)],
})
conn = self.getMasterConnection()
self.verification.askUnfinishedTransactions(conn)
(max_tid, tid_list) = self.checkAnswerUnfinishedTransactions(conn, decode=True)
self.assertEqual(len(tid_list), 1)
self.assertEqual(u64(tid_list[0]), 4)
def test_14_askTransactionInformation(self):
# ask from client conn with no data
self.app.dm = Mock({
'getTransaction': None,
})
conn = self.getMasterConnection()
tid = p64(1)
self.verification.askTransactionInformation(conn, tid)
code, message = self.checkErrorPacket(conn, decode=True)
self.assertEqual(code, ErrorCodes.TID_NOT_FOUND)
call_list = self.app.dm.mockGetNamedCalls('getTransaction')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs(tid, all=True)
        # store some temporary data and ask from the client: the transaction must be found
self.app.dm = Mock({
'getTransaction': ([p64(2)], 'u2', 'd2', 'e2', False),
})
conn = self.getClientConnection()
self.verification.askTransactionInformation(conn, p64(1))
tid, user, desc, ext, packed, oid_list = self.checkAnswerTransactionInformation(conn, decode=True)
self.assertEqual(u64(tid), 1)
self.assertEqual(user, 'u2')
self.assertEqual(desc, 'd2')
self.assertEqual(ext, 'e2')
self.assertFalse(packed)
self.assertEqual(len(oid_list), 1)
self.assertEqual(u64(oid_list[0]), 2)
def test_15_askObjectPresent(self):
# client connection with no data
self.app.dm = Mock({
'objectPresent': False,
})
conn = self.getMasterConnection()
oid, tid = p64(1), p64(2)
self.verification.askObjectPresent(conn, oid, tid)
code, message = self.checkErrorPacket(conn, decode=True)
self.assertEqual(code, ErrorCodes.OID_NOT_FOUND)
call_list = self.app.dm.mockGetNamedCalls('objectPresent')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs(oid, tid)
# client connection with some data
self.app.dm = Mock({
'objectPresent': True,
})
conn = self.getMasterConnection()
self.verification.askObjectPresent(conn, oid, tid)
oid, tid = self.checkAnswerObjectPresent(conn, decode=True)
self.assertEqual(u64(tid), 2)
self.assertEqual(u64(oid), 1)
def test_16_deleteTransaction(self):
# client connection with no data
self.app.dm = Mock({
'deleteTransaction': None,
})
conn = self.getMasterConnection()
oid_list = [self.getOID(1), self.getOID(2)]
tid = p64(1)
self.verification.deleteTransaction(conn, tid, oid_list)
call_list = self.app.dm.mockGetNamedCalls('deleteTransaction')
self.assertEqual(len(call_list), 1)
call_list[0].checkArgs(tid, oid_list)
def test_17_commitTransaction(self):
# commit a transaction
conn = self.getMasterConnection()
dm = Mock()
self.app.dm = dm
self.verification.commitTransaction(conn, p64(1))
self.assertEqual(len(dm.mockGetNamedCalls("finishTransaction")), 1)
call = dm.mockGetNamedCalls("finishTransaction")[0]
tid = call.getParam(0)
self.assertEqual(u64(tid), 1)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testBootstrap.py 0000664 0000000 0000000 00000004562 11634614701 0025570 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.storage.app import Application
from neo.lib.bootstrap import BootstrapManager
from neo.lib.protocol import NodeTypes
class BootstrapManagerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.prepareDatabase(number=1)
# create an application object
config = self.getStorageConfiguration()
self.app = Application(config)
self.bootstrap = BootstrapManager(self.app, 'main', NodeTypes.STORAGE)
        # define some variables to simulate the client and storage nodes
self.master_port = 10010
self.storage_port = 10020
self.num_partitions = 1009
self.num_replicas = 2
def tearDown(self):
self.app.close()
del self.app
super(BootstrapManagerTests, self).tearDown()
# Common methods
def getLastUUID(self):
return self.uuid
# Tests
def testConnectionCompleted(self):
        address = ("127.0.0.1", self.master_port)
conn = self.getFakeConnection(address=address)
self.bootstrap.current = self.app.nm.createMaster(address=address)
self.bootstrap.connectionCompleted(conn)
self.checkAskPrimary(conn)
def testHandleNotReady(self):
# the primary is not ready
        address = ("127.0.0.1", self.master_port)
conn = self.getFakeConnection(address=address)
self.bootstrap.current = self.app.nm.createMaster(address=address)
self.bootstrap.notReady(conn, '')
self.checkClosed(conn)
self.checkNoPacketSent(conn)
if __name__ == "__main__":
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testConnection.py 0000664 0000000 0000000 00000116460 11634614701 0025713 0 ustar 00root root 0000000 0000000 # -*- coding: utf-8 -*-
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from time import time
from mock import Mock
from neo.lib.connection import ListeningConnection, Connection, \
ClientConnection, ServerConnection, MTClientConnection, \
HandlerSwitcher, CRITICAL_TIMEOUT
from neo.lib.connector import getConnectorHandler, registerConnectorHandler
from neo.tests import DoNothingConnector
from neo.lib.connector import ConnectorException, ConnectorTryAgainException, \
ConnectorInProgressException, ConnectorConnectionRefusedException
from neo.lib.handler import EventHandler
from neo.lib.protocol import Packets, ParserState
from neo.tests import NeoUnitTestBase
from neo.lib.util import ReadBuffer
from neo.lib.locking import Queue
class ConnectionTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.app = Mock({'__repr__': 'Fake App'})
self.em = Mock({'__repr__': 'Fake Em'})
self.handler = Mock({'__repr__': 'Fake Handler'})
self.address = ("127.0.0.7", 93413)
def _makeListeningConnection(self, addr):
# create instance after monkey patches
self.connector = DoNothingConnector()
return ListeningConnection(event_manager=self.em, handler=self.handler,
connector=self.connector, addr=addr)
def _makeConnection(self):
self.connector = DoNothingConnector()
return Connection(event_manager=self.em, handler=self.handler,
connector=self.connector, addr=self.address)
def _makeClientConnection(self):
self.connector = DoNothingConnector()
return ClientConnection(event_manager=self.em, handler=self.handler,
connector=self.connector, addr=self.address)
def _makeServerConnection(self):
self.connector = DoNothingConnector()
return ServerConnection(event_manager=self.em, handler=self.handler,
connector=self.connector, addr=self.address)
def _checkRegistered(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("register")), n)
def _checkUnregistered(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("unregister")), n)
def _checkReaderAdded(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("addReader")), n)
def _checkReaderRemoved(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("removeReader")), n)
def _checkWriterAdded(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("addWriter")), n)
def _checkWriterRemoved(self, n=1):
self.assertEqual(len(self.em.mockGetNamedCalls("removeWriter")), n)
def _checkShutdown(self, n=1):
self.assertEqual(len(self.connector.mockGetNamedCalls("shutdown")), n)
def _checkClose(self, n=1):
self.assertEqual(len(self.connector.mockGetNamedCalls("close")), n)
def _checkGetNewConnection(self, n=1):
calls = self.connector.mockGetNamedCalls('getNewConnection')
self.assertEqual(len(calls), n)
    def _checkSend(self, n=1, data=None):
        calls = self.connector.mockGetNamedCalls('send')
        self.assertEqual(len(calls), n)
        if n > 0 and data is not None:
            # check the payload passed to the last send call
            self.assertEqual(calls[n-1].getParam(0), data)
def _checkConnectionAccepted(self, n=1):
calls = self.handler.mockGetNamedCalls('connectionAccepted')
self.assertEqual(len(calls), n)
def _checkConnectionFailed(self, n=1):
calls = self.handler.mockGetNamedCalls('connectionFailed')
self.assertEqual(len(calls), n)
def _checkConnectionClosed(self, n=1):
calls = self.handler.mockGetNamedCalls('connectionClosed')
self.assertEqual(len(calls), n)
def _checkConnectionStarted(self, n=1):
calls = self.handler.mockGetNamedCalls('connectionStarted')
self.assertEqual(len(calls), n)
def _checkConnectionCompleted(self, n=1):
calls = self.handler.mockGetNamedCalls('connectionCompleted')
self.assertEqual(len(calls), n)
def _checkMakeListeningConnection(self, n=1):
calls = self.connector.mockGetNamedCalls('makeListeningConnection')
self.assertEqual(len(calls), n)
def _checkMakeClientConnection(self, n=1):
calls = self.connector.mockGetNamedCalls("makeClientConnection")
self.assertEqual(len(calls), n)
self.assertEqual(calls[n-1].getParam(0), self.address)
def _checkPacketReceived(self, n=1):
calls = self.handler.mockGetNamedCalls('packetReceived')
self.assertEqual(len(calls), n)
def _checkReadBuf(self, bc, data):
content = bc.read_buf.read(len(bc.read_buf))
self.assertEqual(''.join(content), data)
def _appendToReadBuf(self, bc, data):
bc.read_buf.append(data)
def _appendPacketToReadBuf(self, bc, packet):
data = ''.join(packet.encode())
bc.read_buf.append(data)
def _checkWriteBuf(self, bc, data):
self.assertEqual(''.join(bc.write_buf), data)
def test_01_BaseConnection1(self):
# init with connector
registerConnectorHandler(DoNothingConnector)
connector = getConnectorHandler("DoNothingConnector")()
self.assertFalse(connector is None)
bc = self._makeConnection()
self.assertFalse(bc.connector is None)
self._checkRegistered(1)
def test_01_BaseConnection2(self):
# init with address
bc = self._makeConnection()
self.assertEqual(bc.getAddress(), self.address)
self._checkRegistered(1)
def test_02_ListeningConnection1(self):
# test init part
def getNewConnection(self):
return self, ('', 0)
DoNothingConnector.getNewConnection = getNewConnection
addr = ("127.0.0.7", 93413)
bc = self._makeListeningConnection(addr=addr)
self.assertEqual(bc.getAddress(), addr)
self._checkRegistered()
self._checkReaderAdded()
self._checkMakeListeningConnection()
# test readable
bc.readable()
self._checkGetNewConnection()
self._checkConnectionAccepted()
def test_02_ListeningConnection2(self):
        # test with an exception raised when getting a new connection
def getNewConnection(self):
raise ConnectorTryAgainException
DoNothingConnector.getNewConnection = getNewConnection
addr = ("127.0.0.7", 93413)
bc = self._makeListeningConnection(addr=addr)
self.assertEqual(bc.getAddress(), addr)
self._checkRegistered()
self._checkReaderAdded()
self._checkMakeListeningConnection()
# test readable
bc.readable()
self._checkGetNewConnection(1)
self._checkConnectionAccepted(0)
def test_03_Connection(self):
bc = self._makeConnection()
self.assertEqual(bc.getAddress(), self.address)
self._checkReaderAdded(1)
self._checkReadBuf(bc, '')
self._checkWriteBuf(bc, '')
self.assertEqual(bc.cur_id, 0)
self.assertFalse(bc.aborted)
# test uuid
self.assertEqual(bc.uuid, None)
self.assertEqual(bc.getUUID(), None)
uuid = self.getNewUUID()
bc.setUUID(uuid)
self.assertEqual(bc.getUUID(), uuid)
# test next id
cur_id = bc.cur_id
next_id = bc._getNextId()
self.assertEqual(next_id, cur_id)
next_id = bc._getNextId()
self.assertTrue(next_id > cur_id)
# test overflow of next id
bc.cur_id = 0xffffffff
next_id = bc._getNextId()
self.assertEqual(next_id, 0xffffffff)
next_id = bc._getNextId()
self.assertEqual(next_id, 0)
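The id-overflow assertions above can be reproduced with a tiny standalone helper (a sketch of the wraparound behaviour the test expects; the real logic lives in `neo.lib.connection.Connection._getNextId` and may be implemented differently):

```python
MAX_ID = 0xffffffff  # 32-bit message-id space assumed by these tests

def get_next_id(cur_id):
    """Return (id to use now, id to use next), wrapping at 32 bits."""
    return cur_id, 0 if cur_id == MAX_ID else cur_id + 1

used, nxt = get_next_id(0)       # first call hands out the current id
assert (used, nxt) == (0, 1)
used, nxt = get_next_id(MAX_ID)  # overflow wraps back to 0
assert (used, nxt) == (MAX_ID, 0)
```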
def test_Connection_pending(self):
bc = self._makeConnection()
self.assertEqual(''.join(bc.write_buf), '')
self.assertFalse(bc.pending())
bc.write_buf += '1'
self.assertTrue(bc.pending())
def test_Connection_recv1(self):
# patch receive method to return data
def receive(self):
return "testdata"
DoNothingConnector.receive = receive
bc = self._makeConnection()
self._checkReadBuf(bc, '')
bc._recv()
self._checkReadBuf(bc, 'testdata')
def test_Connection_recv2(self):
# patch receive method to raise try again
def receive(self):
raise ConnectorTryAgainException
DoNothingConnector.receive = receive
bc = self._makeConnection()
self._checkReadBuf(bc, '')
bc._recv()
self._checkReadBuf(bc, '')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_recv3(self):
# patch receive method to raise ConnectorConnectionRefusedException
def receive(self):
raise ConnectorConnectionRefusedException
DoNothingConnector.receive = receive
bc = self._makeConnection()
self._checkReadBuf(bc, '')
# fake client connection instance with connecting attribute
bc.connecting = True
bc._recv()
self._checkReadBuf(bc, '')
self._checkConnectionFailed(1)
self._checkUnregistered(1)
def test_Connection_recv4(self):
# patch receive method to raise any other connector error
def receive(self):
raise ConnectorException
DoNothingConnector.receive = receive
bc = self._makeConnection()
self._checkReadBuf(bc, '')
self.assertRaises(ConnectorException, bc._recv)
self._checkReadBuf(bc, '')
self._checkConnectionClosed(1)
self._checkUnregistered(1)
def test_Connection_send1(self):
        # no data in the write buffer, nothing is sent
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc._send()
self._checkSend(0)
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_send2(self):
# send all data
def send(self, data):
return len(data)
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
bc._send()
self._checkSend(1, "testdata")
self._checkWriteBuf(bc, '')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_send3(self):
# send part of the data
def send(self, data):
return len(data)/2
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
bc._send()
self._checkSend(1, "testdata")
self._checkWriteBuf(bc, 'data')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_send4(self):
        # send multiple packets
def send(self, data):
return len(data)
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata", "second", "third"]
bc._send()
self._checkSend(1, "testdatasecondthird")
self._checkWriteBuf(bc, '')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_send5(self):
        # send part of multiple packets
def send(self, data):
return len(data)/2
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata", "second", "third"]
bc._send()
self._checkSend(1, "testdatasecondthird")
self._checkWriteBuf(bc, 'econdthird')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
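The partial-send expectations above (half of the joined buffer is consumed, the remainder stays queued) can be sketched with a hypothetical `consume` helper, not the actual `Connection._send`:

```python
def consume(write_buf, sent):
    """Drop `sent` leading bytes from a list-of-strings write buffer."""
    data = ''.join(write_buf)
    rest = data[sent:]
    return [rest] if rest else []

# mirrors test_Connection_send3 and test_Connection_send5,
# where send() reports len(data)/2 bytes written
assert ''.join(consume(["testdata"], len("testdata") // 2)) == 'data'
buf = ["testdata", "second", "third"]
assert ''.join(consume(buf, len(''.join(buf)) // 2)) == 'econdthird'
```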
def test_Connection_send6(self):
# raise try again
def send(self, data):
raise ConnectorTryAgainException
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata", "second", "third"]
bc._send()
self._checkSend(1, "testdatasecondthird")
self._checkWriteBuf(bc, 'testdatasecondthird')
self._checkConnectionClosed(0)
self._checkUnregistered(0)
def test_Connection_send7(self):
# raise other error
def send(self, data):
raise ConnectorException
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata", "second", "third"]
self.assertRaises(ConnectorException, bc._send)
self._checkSend(1, "testdatasecondthird")
# connection closed -> buffers flushed
self._checkWriteBuf(bc, '')
self._checkReaderRemoved(1)
self._checkConnectionClosed(1)
self._checkUnregistered(1)
def test_07_Connection_addPacket(self):
# new packet
        p = Mock({"encode": "testdata", "getId": 0})
p._body = ''
p.handler_method_name = 'testmethod'
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc._addPacket(p)
self._checkWriteBuf(bc, 'testdata')
self._checkWriterAdded(1)
def test_Connection_analyse1(self):
# nothing to read, nothing is done
bc = self._makeConnection()
bc._queue = Mock()
self._checkReadBuf(bc, '')
bc.analyse()
self._checkPacketReceived(0)
self._checkReadBuf(bc, '')
# give some data to analyse
master_list = (
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2235), self.getNewUUID()),
(("127.0.0.1", 2134), self.getNewUUID()),
(("127.0.0.1", 2335), self.getNewUUID()),
(("127.0.0.1", 2133), self.getNewUUID()),
(("127.0.0.1", 2435), self.getNewUUID()),
(("127.0.0.1", 2132), self.getNewUUID()))
p = Packets.AnswerPrimary(self.getNewUUID(), master_list)
p.setId(1)
p_data = ''.join(p.encode())
data_edge = len(p_data) - 1
p_data_1, p_data_2 = p_data[:data_edge], p_data[data_edge:]
# append an incomplete packet, nothing is done
bc.read_buf.append(p_data_1)
bc.analyse()
self._checkPacketReceived(0)
self.assertNotEqual(len(bc.read_buf), 0)
self.assertNotEqual(len(bc.read_buf), len(p_data))
# append the rest of the packet
bc.read_buf.append(p_data_2)
bc.analyse()
# check packet decoded
self.assertEqual(len(bc._queue.mockGetNamedCalls("append")), 1)
call = bc._queue.mockGetNamedCalls("append")[0]
data = call.getParam(0)
self.assertEqual(type(data), type(p))
self.assertEqual(data.getId(), p.getId())
self.assertEqual(data.decode(), p.decode())
self._checkReadBuf(bc, '')
def test_Connection_analyse2(self):
        # give multiple packets
bc = self._makeConnection()
bc._queue = Mock()
# packet 1
master_list = (
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2235), self.getNewUUID()),
(("127.0.0.1", 2134), self.getNewUUID()),
(("127.0.0.1", 2335), self.getNewUUID()),
(("127.0.0.1", 2133), self.getNewUUID()),
(("127.0.0.1", 2435), self.getNewUUID()),
(("127.0.0.1", 2132), self.getNewUUID()))
p1 = Packets.AnswerPrimary(self.getNewUUID(), master_list)
p1.setId(1)
self._appendPacketToReadBuf(bc, p1)
# packet 2
master_list = (
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2235), self.getNewUUID()),
(("127.0.0.1", 2134), self.getNewUUID()),
(("127.0.0.1", 2335), self.getNewUUID()),
(("127.0.0.1", 2133), self.getNewUUID()),
(("127.0.0.1", 2435), self.getNewUUID()),
(("127.0.0.1", 2132), self.getNewUUID()))
        p2 = Packets.AnswerPrimary(self.getNewUUID(), master_list)
p2.setId(2)
self._appendPacketToReadBuf(bc, p2)
self.assertEqual(len(bc.read_buf), len(p1) + len(p2))
bc.analyse()
# check two packets decoded
self.assertEqual(len(bc._queue.mockGetNamedCalls("append")), 2)
# packet 1
call = bc._queue.mockGetNamedCalls("append")[0]
data = call.getParam(0)
self.assertEqual(type(data), type(p1))
self.assertEqual(data.getId(), p1.getId())
self.assertEqual(data.decode(), p1.decode())
# packet 2
call = bc._queue.mockGetNamedCalls("append")[1]
data = call.getParam(0)
self.assertEqual(type(data), type(p2))
self.assertEqual(data.getId(), p2.getId())
self.assertEqual(data.decode(), p2.decode())
self._checkReadBuf(bc, '')
def test_Connection_analyse3(self):
# give a bad packet, won't be decoded
bc = self._makeConnection()
bc._queue = Mock()
self._appendToReadBuf(bc, 'datadatadatadata')
bc.analyse()
self.assertEqual(len(bc._queue.mockGetNamedCalls("append")), 0)
self.assertEqual(
len(self.handler.mockGetNamedCalls("_packetMalformed")), 1)
def test_Connection_analyse4(self):
# give an expected packet
bc = self._makeConnection()
bc._queue = Mock()
master_list = (
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2235), self.getNewUUID()),
(("127.0.0.1", 2134), self.getNewUUID()),
(("127.0.0.1", 2335), self.getNewUUID()),
(("127.0.0.1", 2133), self.getNewUUID()),
(("127.0.0.1", 2435), self.getNewUUID()),
(("127.0.0.1", 2132), self.getNewUUID()))
p = Packets.AnswerPrimary(self.getNewUUID(), master_list)
p.setId(1)
self._appendPacketToReadBuf(bc, p)
bc.analyse()
# check packet decoded
self.assertEqual(len(bc._queue.mockGetNamedCalls("append")), 1)
call = bc._queue.mockGetNamedCalls("append")[0]
data = call.getParam(0)
self.assertEqual(type(data), type(p))
self.assertEqual(data.getId(), p.getId())
self.assertEqual(data.decode(), p.decode())
self._checkReadBuf(bc, '')
def test_Connection_writable1(self):
# with pending operation after send
def send(self, data):
return len(data)/2
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
self.assertTrue(bc.pending())
self.assertFalse(bc.aborted)
bc.writable()
# test send was called
self._checkSend(1, "testdata")
self.assertEqual(''.join(bc.write_buf), "data")
self._checkConnectionClosed(0)
self._checkUnregistered(0)
# pending, so nothing called
self.assertTrue(bc.pending())
self.assertFalse(bc.aborted)
self._checkWriterRemoved(0)
self._checkReaderRemoved(0)
self._checkShutdown(0)
self._checkClose(0)
def test_Connection_writable2(self):
# with no longer pending operation after send
def send(self, data):
return len(data)
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
self.assertTrue(bc.pending())
self.assertFalse(bc.aborted)
bc.writable()
# test send was called
self._checkSend(1, "testdata")
self._checkWriteBuf(bc, '')
self._checkClose(0)
self._checkUnregistered(0)
# nothing else pending, and aborted is false, so writer has been removed
self.assertFalse(bc.pending())
self.assertFalse(bc.aborted)
self._checkWriterRemoved(1)
self._checkReaderRemoved(0)
self._checkShutdown(0)
self._checkClose(0)
def test_Connection_writable3(self):
# with no longer pending operation after send and aborted set to true
def send(self, data):
return len(data)
DoNothingConnector.send = send
bc = self._makeConnection()
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
self.assertTrue(bc.pending())
bc.abort()
self.assertTrue(bc.aborted)
bc.writable()
# test send was called
self._checkSend(1, "testdata")
self._checkWriteBuf(bc, '')
self._checkConnectionClosed(1)
self._checkUnregistered(1)
        # nothing else pending, and aborted is true, so the connection was closed
self.assertFalse(bc.pending())
self.assertTrue(bc.aborted)
self._checkWriterRemoved(1)
self._checkReaderRemoved(1)
self._checkShutdown(1)
self._checkClose(1)
def test_Connection_readable(self):
# With aborted set to false
# patch receive method to return data
def receive(self):
master_list = ((("127.0.0.1", 2135), self.getNewUUID()),
(("127.0.0.1", 2136), self.getNewUUID()),
(("127.0.0.1", 2235), self.getNewUUID()),
(("127.0.0.1", 2134), self.getNewUUID()),
(("127.0.0.1", 2335), self.getNewUUID()),
(("127.0.0.1", 2133), self.getNewUUID()),
(("127.0.0.1", 2435), self.getNewUUID()),
(("127.0.0.1", 2132), self.getNewUUID()))
uuid = self.getNewUUID()
p = Packets.AnswerPrimary(uuid, master_list)
p.setId(1)
return ''.join(p.encode())
DoNothingConnector.receive = receive
bc = self._makeConnection()
bc._queue = Mock()
self._checkReadBuf(bc, '')
self.assertFalse(bc.aborted)
bc.readable()
# check packet decoded
self._checkReadBuf(bc, '')
self.assertEqual(len(bc._queue.mockGetNamedCalls("append")), 1)
call = bc._queue.mockGetNamedCalls("append")[0]
data = call.getParam(0)
self.assertEqual(type(data), Packets.AnswerPrimary)
self.assertEqual(data.getId(), 1)
self._checkReadBuf(bc, '')
# check not aborted
self.assertFalse(bc.aborted)
self._checkUnregistered(0)
self._checkWriterRemoved(0)
self._checkReaderRemoved(0)
self._checkShutdown(0)
self._checkClose(0)
def test_ClientConnection_init1(self):
# create a good client connection
bc = self._makeClientConnection()
        # check connector created and connection initialized
self.assertFalse(bc.connecting)
self.assertFalse(bc.isServer())
self._checkMakeClientConnection(1)
# check call to handler
self.assertFalse(bc.getHandler() is None)
self._checkConnectionStarted(1)
self._checkConnectionCompleted(1)
self._checkConnectionFailed(0)
# check call to event manager
self.assertFalse(bc.getEventManager() is None)
self._checkReaderAdded(1)
self._checkWriterAdded(0)
def test_ClientConnection_init2(self):
# raise connection in progress
makeClientConnection_org = DoNothingConnector.makeClientConnection
def makeClientConnection(self, *args, **kw):
raise ConnectorInProgressException
DoNothingConnector.makeClientConnection = makeClientConnection
try:
bc = self._makeClientConnection()
finally:
DoNothingConnector.makeClientConnection = makeClientConnection_org
        # check connector created and connection initialized
self.assertTrue(bc.connecting)
self.assertFalse(bc.isServer())
self._checkMakeClientConnection(1)
# check call to handler
self.assertFalse(bc.getHandler() is None)
self._checkConnectionStarted(1)
self._checkConnectionCompleted(0)
self._checkConnectionFailed(0)
# check call to event manager
self.assertFalse(bc.getEventManager() is None)
self._checkReaderAdded(1)
self._checkWriterAdded(1)
def test_ClientConnection_init3(self):
# raise another error, connection must fail
makeClientConnection_org = DoNothingConnector.makeClientConnection
def makeClientConnection(self, *args, **kw):
raise ConnectorException
DoNothingConnector.makeClientConnection = makeClientConnection
try:
self.assertRaises(ConnectorException, self._makeClientConnection)
finally:
DoNothingConnector.makeClientConnection = makeClientConnection_org
# since the exception was raised, the connection is not created
# check call to handler
self._checkConnectionStarted(1)
self._checkConnectionCompleted(0)
self._checkConnectionFailed(1)
# check call to event manager
self._checkReaderAdded(1)
self._checkWriterAdded(0)
def test_ClientConnection_writable1(self):
        # with a non-connecting connection, the parent's method is called
def makeClientConnection(self, *args, **kw):
return "OK"
def send(self, data):
return len(data)
makeClientConnection_org = DoNothingConnector.makeClientConnection
DoNothingConnector.send = send
DoNothingConnector.makeClientConnection = makeClientConnection
try:
bc = self._makeClientConnection()
finally:
DoNothingConnector.makeClientConnection = makeClientConnection_org
        # check connector created and connection initialized
self.assertFalse(bc.connecting)
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
self.assertTrue(bc.pending())
self.assertFalse(bc.aborted)
# call
self._checkConnectionCompleted(1)
self._checkReaderAdded(1)
bc.writable()
self.assertFalse(bc.pending())
self.assertFalse(bc.aborted)
self.assertFalse(bc.connecting)
self._checkSend(1, "testdata")
self._checkConnectionClosed(0)
self._checkConnectionCompleted(1)
self._checkConnectionFailed(0)
self._checkUnregistered(0)
self._checkReaderAdded(1)
self._checkWriterRemoved(1)
self._checkReaderRemoved(0)
self._checkShutdown(0)
self._checkClose(0)
def test_ClientConnection_writable2(self):
        # with a connecting connection, the parent's method must not be
        # called; on error, the connection is closed
def getError(self):
return True
DoNothingConnector.getError = getError
bc = self._makeClientConnection()
        # check connector created and connection initialized
bc.connecting = True
self._checkWriteBuf(bc, '')
bc.write_buf = ["testdata"]
self.assertTrue(bc.pending())
self.assertFalse(bc.aborted)
# call
self._checkConnectionCompleted(1)
self._checkReaderAdded(1)
bc.writable()
self.assertTrue(bc.connecting)
self.assertFalse(bc.pending())
self.assertFalse(bc.aborted)
self._checkWriteBuf(bc, '')
self._checkConnectionClosed(0)
self._checkConnectionCompleted(1)
self._checkConnectionFailed(1)
self._checkUnregistered(1)
self._checkReaderAdded(1)
self._checkWriterRemoved(1)
self._checkReaderRemoved(1)
def test_14_ServerConnection(self):
bc = self._makeServerConnection()
self.assertEqual(bc.getAddress(), ("127.0.0.7", 93413))
self._checkReaderAdded(1)
self._checkReadBuf(bc, '')
self._checkWriteBuf(bc, '')
self.assertEqual(bc.cur_id, 0)
self.assertFalse(bc.aborted)
# test uuid
self.assertEqual(bc.uuid, None)
self.assertEqual(bc.getUUID(), None)
uuid = self.getNewUUID()
bc.setUUID(uuid)
self.assertEqual(bc.getUUID(), uuid)
# test next id
cur_id = bc.cur_id
next_id = bc._getNextId()
self.assertEqual(next_id, cur_id)
next_id = bc._getNextId()
self.assertTrue(next_id > cur_id)
# test overflow of next id
bc.cur_id = 0xffffffff
next_id = bc._getNextId()
self.assertEqual(next_id, 0xffffffff)
next_id = bc._getNextId()
self.assertEqual(next_id, 0)
def test_15_Timeout(self):
        # NOTE: This method uses ping/pong packets only because MT
        # connections don't accept any other packet without specifying a queue.
self.handler = EventHandler(self.app)
conn = self._makeClientConnection()
use_case_list = (
# (a) For a single packet sent at T,
# the limit time for the answer is T + (1 * CRITICAL_TIMEOUT)
((), (1., 0)),
# (b) Same as (a), even if send another packet at (T + CT/2).
# But receiving a packet (at T + CT - ε) resets the timeout
# (which means the limit for the 2nd one is T + 2*CT)
((.5, None), (1., 0, 2., 1)),
            # (c) Same as (b) with a first answer well before the limit
# (T' = T + CT/2). The limit for the second one is T' + CT.
((.1, None, .5, 1), (1.5, 0)),
)
from neo.lib import connection
def set_time(t):
connection.time = lambda: int(CRITICAL_TIMEOUT * (1000 + t))
closed = []
conn.close = lambda: closed.append(connection.time())
def answer(packet_id):
p = Packets.Pong()
p.setId(packet_id)
conn.connector.receive = [''.join(p.encode())].pop
conn.readable()
conn.checkTimeout(connection.time())
conn.process()
try:
for use_case, expected in use_case_list:
i = iter(use_case)
conn.cur_id = 0
set_time(0)
# No timeout when no pending request
self.assertEqual(conn._handlers.getNextTimeout(), None)
conn.ask(Packets.Ping())
for t in i:
set_time(t)
conn.checkTimeout(connection.time())
packet_id = i.next()
if packet_id is None:
conn.ask(Packets.Ping())
else:
answer(packet_id)
i = iter(expected)
for t in i:
set_time(t - .1)
conn.checkTimeout(connection.time())
set_time(t)
# this test method relies on the fact that only
# conn.close is called in case of a timeout
conn.checkTimeout(connection.time())
self.assertEqual(closed.pop(), connection.time())
answer(i.next())
self.assertFalse(conn.isPending())
self.assertFalse(closed)
finally:
connection.time = time
class MTConnectionTests(ConnectionTests):
# XXX: here we test non-client-connection-related things too, which
# duplicates test suite work... Should be fragmented into finer-grained
# test classes.
def setUp(self):
super(MTConnectionTests, self).setUp()
self.dispatcher = Mock({'__repr__': 'Fake Dispatcher'})
def _makeClientConnection(self):
self.connector = DoNothingConnector()
return MTClientConnection(event_manager=self.em, handler=self.handler,
connector=self.connector, addr=self.address,
dispatcher=self.dispatcher)
def test_MTClientConnectionQueueParameter(self):
queue = Queue()
ask = self._makeClientConnection().ask
packet = Packets.AskPrimary() # Any non-Ping simple "ask" packet
# One cannot "ask" anything without a queue
self.assertRaises(TypeError, ask, packet)
ask(packet, queue=queue)
# ... except Ping
ask(Packets.Ping())
class HandlerSwitcherTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self._handler = handler = Mock({
'__repr__': 'initial handler',
})
self._connection = Mock({
'__repr__': 'connection',
'getAddress': ('127.0.0.1', 10000),
})
self._handlers = HandlerSwitcher(handler)
def _makeNotification(self, msg_id):
packet = Packets.StartOperation()
packet.setId(msg_id)
return packet
def _makeRequest(self, msg_id):
packet = Packets.AskBeginTransaction()
packet.setId(msg_id)
return packet
def _makeAnswer(self, msg_id):
packet = Packets.AnswerBeginTransaction(self.getNextTID())
packet.setId(msg_id)
return packet
def _makeHandler(self):
return Mock({'__repr__': 'handler'})
def _checkPacketReceived(self, handler, packet, index=0):
calls = handler.mockGetNamedCalls('packetReceived')
self.assertEqual(len(calls), index + 1)
def _checkCurrentHandler(self, handler):
self.assertTrue(self._handlers.getHandler() is handler)
def testInit(self):
self._checkCurrentHandler(self._handler)
self.assertFalse(self._handlers.isPending())
def testEmit(self):
# First case, emit is called outside of a handler
self.assertFalse(self._handlers.isPending())
request = self._makeRequest(1)
self._handlers.emit(request, 0, None)
self.assertTrue(self._handlers.isPending())
# Second case, emit is called from inside a handler with a pending
# handler change.
new_handler = self._makeHandler()
applied = self._handlers.setHandler(new_handler)
self.assertFalse(applied)
self._checkCurrentHandler(self._handler)
call_tracker = []
def packetReceived(conn, packet):
self._handlers.emit(self._makeRequest(2), 0, None)
call_tracker.append(True)
self._handler.packetReceived = packetReceived
self._handlers.handle(self._connection, self._makeAnswer(1))
self.assertEqual(call_tracker, [True])
        # Effective handler must not have changed (the new request blocks it)
self._checkCurrentHandler(self._handler)
# Handling the next response will cause the handler to change
delattr(self._handler, 'packetReceived')
self._handlers.handle(self._connection, self._makeAnswer(2))
self._checkCurrentHandler(new_handler)
def testHandleNotification(self):
# handle with current handler
notif1 = self._makeNotification(1)
self._handlers.handle(self._connection, notif1)
self._checkPacketReceived(self._handler, notif1)
        # emit a request and delay a handler change
request = self._makeRequest(2)
self._handlers.emit(request, 0, None)
handler = self._makeHandler()
applied = self._handlers.setHandler(handler)
self.assertFalse(applied)
        # next notification falls into the current handler
notif2 = self._makeNotification(3)
self._handlers.handle(self._connection, notif2)
self._checkPacketReceived(self._handler, notif2, index=1)
# handle with new handler
answer = self._makeAnswer(2)
self._handlers.handle(self._connection, answer)
notif3 = self._makeNotification(4)
self._handlers.handle(self._connection, notif3)
        self._checkPacketReceived(handler, notif3)
def testHandleAnswer1(self):
# handle with current handler
request = self._makeRequest(1)
self._handlers.emit(request, 0, None)
answer = self._makeAnswer(1)
self._handlers.handle(self._connection, answer)
self._checkPacketReceived(self._handler, answer)
def testHandleAnswer2(self):
# handle with blocking handler
request = self._makeRequest(1)
self._handlers.emit(request, 0, None)
handler = self._makeHandler()
applied = self._handlers.setHandler(handler)
self.assertFalse(applied)
answer = self._makeAnswer(1)
self._handlers.handle(self._connection, answer)
self._checkPacketReceived(self._handler, answer)
self._checkCurrentHandler(handler)
def testHandleAnswer3(self):
# multiple setHandler
r1 = self._makeRequest(1)
r2 = self._makeRequest(2)
r3 = self._makeRequest(3)
a1 = self._makeAnswer(1)
a2 = self._makeAnswer(2)
a3 = self._makeAnswer(3)
h1 = self._makeHandler()
h2 = self._makeHandler()
h3 = self._makeHandler()
        # emit all requests and setHandler calls
self._handlers.emit(r1, 0, None)
applied = self._handlers.setHandler(h1)
self.assertFalse(applied)
self._handlers.emit(r2, 0, None)
applied = self._handlers.setHandler(h2)
self.assertFalse(applied)
self._handlers.emit(r3, 0, None)
applied = self._handlers.setHandler(h3)
self.assertFalse(applied)
self._checkCurrentHandler(self._handler)
self.assertTrue(self._handlers.isPending())
# process answers
self._handlers.handle(self._connection, a1)
self._checkCurrentHandler(h1)
self._handlers.handle(self._connection, a2)
self._checkCurrentHandler(h2)
self._handlers.handle(self._connection, a3)
self._checkCurrentHandler(h3)
def testHandleAnswer4(self):
# process in disorder
r1 = self._makeRequest(1)
r2 = self._makeRequest(2)
r3 = self._makeRequest(3)
a1 = self._makeAnswer(1)
a2 = self._makeAnswer(2)
a3 = self._makeAnswer(3)
h = self._makeHandler()
# emit all requests
self._handlers.emit(r1, 0, None)
self._handlers.emit(r2, 0, None)
self._handlers.emit(r3, 0, None)
applied = self._handlers.setHandler(h)
self.assertFalse(applied)
# process answers
self._handlers.handle(self._connection, a1)
self._checkCurrentHandler(self._handler)
self._handlers.handle(self._connection, a2)
self._checkCurrentHandler(self._handler)
self._handlers.handle(self._connection, a3)
self._checkCurrentHandler(h)
def testHandleUnexpected(self):
# process in disorder
r1 = self._makeRequest(1)
r2 = self._makeRequest(2)
a2 = self._makeAnswer(2)
h = self._makeHandler()
        # emit requests around the setHandler call
self._handlers.emit(r1, 0, None)
applied = self._handlers.setHandler(h)
self.assertFalse(applied)
self._handlers.emit(r2, 0, None)
# process answer for next state
self._handlers.handle(self._connection, a2)
self.checkAborted(self._connection)
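The switching rule these tests assert (a handler set while requests are pending only takes effect once every earlier request has been answered) can be sketched as a hypothetical `MiniHandlerSwitcher`, not the real `neo.lib.connection.HandlerSwitcher`:

```python
class MiniHandlerSwitcher(object):
    """Delay handler changes until all earlier requests are answered."""
    def __init__(self, handler):
        # each epoch: [set of unanswered request ids, handler for that epoch]
        self._epochs = [[set(), handler]]

    def getHandler(self):
        return self._epochs[0][1]

    def emit(self, msg_id):
        # a request always belongs to the most recently set handler
        self._epochs[-1][0].add(msg_id)

    def setHandler(self, handler):
        if len(self._epochs) == 1 and not self._epochs[0][0]:
            self._epochs[0][1] = handler
            return True   # applied immediately, nothing pending
        self._epochs.append([set(), handler])
        return False      # delayed

    def handle(self, msg_id):
        self._epochs[0][0].discard(msg_id)
        # drop finished epochs so the next handler becomes effective
        while len(self._epochs) > 1 and not self._epochs[0][0]:
            del self._epochs[0]
        return self.getHandler()

hs = MiniHandlerSwitcher('initial')
hs.emit(1)
assert hs.setHandler('h1') is False  # delayed: request 1 unanswered
hs.emit(2)                           # belongs to the h1 epoch
assert hs.handle(1) == 'h1'          # answer 1 releases the change
assert hs.handle(2) == 'h1'
```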
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testDispatcher.py 0000664 0000000 0000000 00000013572 11634614701 0025702 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from mock import Mock
from neo.tests import NeoTestBase
from neo.lib.dispatcher import Dispatcher, ForgottenPacket
from Queue import Queue
import unittest
class DispatcherTests(NeoTestBase):
def setUp(self):
NeoTestBase.setUp(self)
self.fake_thread = Mock({'stopping': True})
self.dispatcher = Dispatcher(self.fake_thread)
def testRegister(self):
conn = object()
queue = Queue()
MARKER = object()
self.dispatcher.register(conn, 1, queue)
self.assertTrue(queue.empty())
self.assertTrue(self.dispatcher.dispatch(conn, 1, MARKER))
self.assertFalse(queue.empty())
self.assertEqual(queue.get(block=False), (conn, MARKER))
self.assertTrue(queue.empty())
self.assertFalse(self.dispatcher.dispatch(conn, 2, None))
self.assertEqual(len(self.fake_thread.mockGetNamedCalls('start')), 1)
def testUnregister(self):
conn = object()
queue = Mock()
self.dispatcher.register(conn, 2, queue)
self.dispatcher.unregister(conn)
self.assertEqual(len(queue.mockGetNamedCalls('put')), 1)
self.assertFalse(self.dispatcher.dispatch(conn, 2, None))
def testRegistered(self):
conn1 = object()
conn2 = object()
self.assertFalse(self.dispatcher.registered(conn1))
self.assertFalse(self.dispatcher.registered(conn2))
self.dispatcher.register(conn1, 1, Mock())
self.assertTrue(self.dispatcher.registered(conn1))
self.assertFalse(self.dispatcher.registered(conn2))
self.dispatcher.register(conn2, 2, Mock())
self.assertTrue(self.dispatcher.registered(conn1))
self.assertTrue(self.dispatcher.registered(conn2))
self.dispatcher.unregister(conn1)
self.assertFalse(self.dispatcher.registered(conn1))
self.assertTrue(self.dispatcher.registered(conn2))
self.dispatcher.unregister(conn2)
self.assertFalse(self.dispatcher.registered(conn1))
self.assertFalse(self.dispatcher.registered(conn2))
def testPending(self):
conn1 = object()
conn2 = object()
class Queue(object):
_empty = True
def empty(self):
return self._empty
def put(self, value):
pass
queue1 = Queue()
queue2 = Queue()
self.dispatcher.register(conn1, 1, queue1)
self.assertTrue(self.dispatcher.pending(queue1))
self.dispatcher.register(conn2, 2, queue1)
self.assertTrue(self.dispatcher.pending(queue1))
self.dispatcher.register(conn2, 3, queue2)
self.assertTrue(self.dispatcher.pending(queue1))
self.assertTrue(self.dispatcher.pending(queue2))
self.dispatcher.dispatch(conn1, 1, None)
self.assertTrue(self.dispatcher.pending(queue1))
self.assertTrue(self.dispatcher.pending(queue2))
self.dispatcher.dispatch(conn2, 2, None)
self.assertFalse(self.dispatcher.pending(queue1))
self.assertTrue(self.dispatcher.pending(queue2))
queue1._empty = False
self.assertTrue(self.dispatcher.pending(queue1))
queue1._empty = True
self.dispatcher.register(conn1, 4, queue1)
self.dispatcher.register(conn2, 5, queue1)
self.assertTrue(self.dispatcher.pending(queue1))
self.assertTrue(self.dispatcher.pending(queue2))
self.dispatcher.unregister(conn2)
self.assertTrue(self.dispatcher.pending(queue1))
self.assertFalse(self.dispatcher.pending(queue2))
self.dispatcher.unregister(conn1)
self.assertFalse(self.dispatcher.pending(queue1))
self.assertFalse(self.dispatcher.pending(queue2))
def testForget(self):
conn = object()
queue = Queue()
MARKER = object()
# Register an expectation
self.dispatcher.register(conn, 1, queue)
# ...and forget about it, returning registered queue
forgotten_queue = self.dispatcher.forget(conn, 1)
self.assertTrue(queue is forgotten_queue, (queue, forgotten_queue))
# A ForgottenPacket must have been put in the queue
queue_conn, packet = queue.get(block=False)
self.assertTrue(isinstance(packet, ForgottenPacket), packet)
# ...with appropriate packet id
self.assertEqual(packet.getId(), 1)
# ...and appropriate connection
self.assertTrue(conn is queue_conn, (conn, queue_conn))
# If forgotten twice, it must raise a KeyError
self.assertRaises(KeyError, self.dispatcher.forget, conn, 1)
# Event arrives, return value must be True (it was expected)
self.assertTrue(self.dispatcher.dispatch(conn, 1, MARKER))
# ...but must not have reached the queue
self.assertTrue(queue.empty())
# Register an expectation
self.dispatcher.register(conn, 1, queue)
# ...and forget about it
self.dispatcher.forget(conn, 1)
queue.get(block=False)
# No exception must happen if connection is lost.
self.dispatcher.unregister(conn)
# Forgotten message's queue must not have received a "None"
self.assertTrue(queue.empty())
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testEvent.py 0000664 0000000 0000000 00000012421 11634614701 0024665 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.epoll import Epoll
from neo.lib.event import EpollEventManager
class EventTests(NeoUnitTestBase):
def test_01_EpollEventManager(self):
# init one
em = EpollEventManager()
self.assertEqual(len(em.connection_dict), 0)
self.assertEqual(len(em.reader_set), 0)
self.assertEqual(len(em.writer_set), 0)
self.assertTrue(isinstance(em.epoll, Epoll))
# use a mock object instead of epoll
em.epoll = Mock()
connector = self.getFakeConnector(descriptor=1014)
conn = self.getFakeConnection(connector=connector)
self.assertEqual(len(em.getConnectionList()), 0)
# test register/unregister
em.register(conn)
self.assertEqual(len(connector.mockGetNamedCalls("getDescriptor")), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("register")), 1)
call = em.epoll.mockGetNamedCalls("register")[0]
data = call.getParam(0)
self.assertEqual(data, 1014)
self.assertEqual(len(em.getConnectionList()), 1)
self.assertEqual(em.getConnectionList()[0].getDescriptor(), conn.getDescriptor())
connector = self.getFakeConnector(descriptor=1014)
conn = self.getFakeConnection(connector=connector)
em.unregister(conn)
self.assertEqual(len(connector.mockGetNamedCalls("getDescriptor")), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("unregister")), 1)
call = em.epoll.mockGetNamedCalls("unregister")[0]
data = call.getParam(0)
self.assertEqual(data, 1014)
self.assertEqual(len(em.getConnectionList()), 0)
# add/removeReader
conn = self.getFakeConnection()
self.assertEqual(len(em.reader_set), 0)
em.addReader(conn)
self.assertEqual(len(em.reader_set), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 1)
em.addReader(conn) # do not add if already present
self.assertEqual(len(em.reader_set), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 1)
em.removeReader(conn)
self.assertEqual(len(em.reader_set), 0)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 2)
em.removeReader(conn)
self.assertEqual(len(em.reader_set), 0)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 2)
# add/removeWriter
conn = self.getFakeConnection()
self.assertEqual(len(em.writer_set), 0)
em.addWriter(conn)
self.assertEqual(len(em.writer_set), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 3)
em.addWriter(conn) # do not add if already present
self.assertEqual(len(em.writer_set), 1)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 3)
em.removeWriter(conn)
self.assertEqual(len(em.writer_set), 0)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 4)
em.removeWriter(conn)
self.assertEqual(len(em.writer_set), 0)
self.assertEqual(len(em.epoll.mockGetNamedCalls("modify")), 4)
# poll
r_connector = self.getFakeConnector(descriptor=14515)
r_conn = self.getFakeConnection(connector=r_connector)
em.register(r_conn)
w_connector = self.getFakeConnector(descriptor=351621)
w_conn = self.getFakeConnection(connector=w_connector)
em.register(w_conn)
em.epoll = Mock({"poll":(
(r_connector.getDescriptor(),),
(w_connector.getDescriptor(),),
(),
)})
em.poll(timeout=10)
# check it called poll on epoll
self.assertEqual(len(em.epoll.mockGetNamedCalls("poll")), 1)
call = em.epoll.mockGetNamedCalls("poll")[0]
data = call.getParam(0)
self.assertEqual(data, 10)
# need to completely rebuild this test and the packet queue
# check readable conn
#self.assertEqual(len(r_conn.mockGetNamedCalls("lock")), 1)
#self.assertEqual(len(r_conn.mockGetNamedCalls("unlock")), 1)
#self.assertEqual(len(r_conn.mockGetNamedCalls("readable")), 1)
#self.assertEqual(len(r_conn.mockGetNamedCalls("writable")), 0)
# check writable conn
#self.assertEqual(len(w_conn.mockGetNamedCalls("lock")), 1)
#self.assertEqual(len(w_conn.mockGetNamedCalls("unlock")), 1)
#self.assertEqual(len(w_conn.mockGetNamedCalls("readable")), 0)
#self.assertEqual(len(w_conn.mockGetNamedCalls("writable")), 1)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testHandler.py 0000664 0000000 0000000 00000006022 11634614701 0025161 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.tests import NeoUnitTestBase
from neo.lib.handler import EventHandler
from neo.lib.protocol import PacketMalformedError, UnexpectedPacketError, \
BrokenNodeDisallowedError, NotReadyError, ProtocolError
class HandlerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
app = Mock()
self.handler = EventHandler(app)
def setFakeMethod(self, method):
self.handler.fake_method = method
def getFakePacket(self):
p = Mock({
'decode': (),
'__repr__': 'Fake Packet',
})
p.handler_method_name = 'fake_method'
return p
def test_dispatch(self):
conn = self.getFakeConnection()
packet = self.getFakePacket()
# all is ok
self.setFakeMethod(lambda c: None)
self.handler.dispatch(conn, packet)
# raise UnexpectedPacketError
conn.mockCalledMethods = {}
def fake(c):
raise UnexpectedPacketError('fake packet')
self.setFakeMethod(fake)
self.handler.dispatch(conn, packet)
self.checkErrorPacket(conn)
self.checkAborted(conn)
# raise PacketMalformedError
conn.mockCalledMethods = {}
def fake(c):
raise PacketMalformedError('message')
self.setFakeMethod(fake)
self.handler.dispatch(conn, packet)
self.checkNotify(conn)
self.checkAborted(conn)
# raise BrokenNodeDisallowedError
conn.mockCalledMethods = {}
def fake(c):
raise BrokenNodeDisallowedError
self.setFakeMethod(fake)
self.handler.dispatch(conn, packet)
self.checkErrorPacket(conn)
self.checkAborted(conn)
# raise NotReadyError
conn.mockCalledMethods = {}
def fake(c):
raise NotReadyError
self.setFakeMethod(fake)
self.handler.dispatch(conn, packet)
self.checkErrorPacket(conn)
self.checkAborted(conn)
# raise ProtocolError
conn.mockCalledMethods = {}
def fake(c):
raise ProtocolError
self.setFakeMethod(fake)
self.handler.dispatch(conn, packet)
self.checkErrorPacket(conn)
self.checkAborted(conn)
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testNodes.py 0000664 0000000 0000000 00000027614 11634614701 0024666 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.lib import protocol
from neo.lib.protocol import NodeTypes, NodeStates
from neo.lib.node import Node, MasterNode, StorageNode, \
ClientNode, AdminNode, NodeManager
from neo.tests import NeoUnitTestBase
from time import time
class NodesTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.manager = Mock()
def _updatedByAddress(self, node, index=0):
calls = self.manager.mockGetNamedCalls('_updateAddress')
self.assertEqual(len(calls), index + 1)
self.assertEqual(calls[index].getParam(0), node)
def _updatedByUUID(self, node, index=0):
calls = self.manager.mockGetNamedCalls('_updateUUID')
self.assertEqual(len(calls), index + 1)
self.assertEqual(calls[index].getParam(0), node)
def testInit(self):
""" Check the node initialization """
address = ('127.0.0.1', 10000)
uuid = self.getNewUUID()
node = Node(self.manager, address=address, uuid=uuid)
self.assertEqual(node.getState(), NodeStates.UNKNOWN)
self.assertEqual(node.getAddress(), address)
self.assertEqual(node.getUUID(), uuid)
self.assertTrue(time() - 1 < node.getLastStateChange() < time())
def testState(self):
""" Check if the last changed time is updated when state is changed """
node = Node(self.manager)
self.assertEqual(node.getState(), NodeStates.UNKNOWN)
self.assertTrue(time() - 1 < node.getLastStateChange() < time())
previous_time = node.getLastStateChange()
node.setState(NodeStates.RUNNING)
self.assertEqual(node.getState(), NodeStates.RUNNING)
self.assertTrue(previous_time < node.getLastStateChange())
self.assertTrue(time() - 1 < node.getLastStateChange() < time())
def testAddress(self):
""" Check if the node is indexed by address """
node = Node(self.manager)
self.assertEqual(node.getAddress(), None)
address = ('127.0.0.1', 10000)
node.setAddress(address)
self._updatedByAddress(node)
def testUUID(self):
""" Check if the node is indexed by UUID """
node = Node(self.manager)
self.assertEqual(node.getAddress(), None)
uuid = self.getNewUUID()
node.setUUID(uuid)
self._updatedByUUID(node)
def testTypes(self):
""" Check that the abstract node has no type """
node = Node(self.manager)
self.assertRaises(NotImplementedError, node.getType)
self.assertFalse(node.isStorage())
self.assertFalse(node.isMaster())
self.assertFalse(node.isClient())
self.assertFalse(node.isAdmin())
def testMaster(self):
""" Check Master sub class """
node = MasterNode(self.manager)
self.assertEqual(node.getType(), protocol.NodeTypes.MASTER)
self.assertTrue(node.isMaster())
self.assertFalse(node.isStorage())
self.assertFalse(node.isClient())
self.assertFalse(node.isAdmin())
def testStorage(self):
""" Check Storage sub class """
node = StorageNode(self.manager)
self.assertEqual(node.getType(), protocol.NodeTypes.STORAGE)
self.assertTrue(node.isStorage())
self.assertFalse(node.isMaster())
self.assertFalse(node.isClient())
self.assertFalse(node.isAdmin())
def testClient(self):
""" Check Client sub class """
node = ClientNode(self.manager)
self.assertEqual(node.getType(), protocol.NodeTypes.CLIENT)
self.assertTrue(node.isClient())
self.assertFalse(node.isMaster())
self.assertFalse(node.isStorage())
self.assertFalse(node.isAdmin())
def testAdmin(self):
""" Check Admin sub class """
node = AdminNode(self.manager)
self.assertEqual(node.getType(), protocol.NodeTypes.ADMIN)
self.assertTrue(node.isAdmin())
self.assertFalse(node.isMaster())
self.assertFalse(node.isStorage())
self.assertFalse(node.isClient())
class NodeManagerTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.manager = NodeManager()
def _addStorage(self):
self.storage = StorageNode(self.manager, ('127.0.0.1', 1000), self.getNewUUID())
def _addMaster(self):
self.master = MasterNode(self.manager, ('127.0.0.1', 2000), self.getNewUUID())
def _addClient(self):
self.client = ClientNode(self.manager, None, self.getNewUUID())
def _addAdmin(self):
self.admin = AdminNode(self.manager, ('127.0.0.1', 4000), self.getNewUUID())
def checkNodes(self, node_list):
manager = self.manager
self.assertEqual(sorted(manager.getList()), sorted(node_list))
def checkMasters(self, master_list):
manager = self.manager
self.assertEqual(manager.getMasterList(), master_list)
def checkStorages(self, storage_list):
manager = self.manager
self.assertEqual(manager.getStorageList(), storage_list)
def checkClients(self, client_list):
manager = self.manager
self.assertEqual(manager.getClientList(), client_list)
def checkByServer(self, node):
node_found = self.manager.getByAddress(node.getAddress())
self.assertEqual(node_found, node)
def checkByUUID(self, node):
node_found = self.manager.getByUUID(node.getUUID())
self.assertEqual(node_found, node)
def checkIdentified(self, node_list, pool_set=None):
identified_node_list = self.manager.getIdentifiedList(pool_set)
self.assertEqual(set(identified_node_list), set(node_list))
def testInit(self):
""" Check the manager is empty when started """
manager = self.manager
self.checkNodes([])
self.checkMasters([])
self.checkStorages([])
self.checkClients([])
address = ('127.0.0.1', 10000)
self.assertEqual(manager.getByAddress(address), None)
self.assertEqual(manager.getByAddress(None), None)
uuid = self.getNewUUID()
self.assertEqual(manager.getByUUID(uuid), None)
self.assertEqual(manager.getByUUID(None), None)
def testAdd(self):
""" Check if new nodes are registered in the manager """
manager = self.manager
self.checkNodes([])
# storage
self._addStorage()
self.checkNodes([self.storage])
self.checkStorages([self.storage])
self.checkMasters([])
self.checkClients([])
self.checkByServer(self.storage)
self.checkByUUID(self.storage)
# master
self._addMaster()
self.checkNodes([self.storage, self.master])
self.checkStorages([self.storage])
self.checkMasters([self.master])
self.checkClients([])
self.checkByServer(self.master)
self.checkByUUID(self.master)
# client
self._addClient()
self.checkNodes([self.storage, self.master, self.client])
self.checkStorages([self.storage])
self.checkMasters([self.master])
self.checkClients([self.client])
# client node has no address
self.assertEqual(manager.getByAddress(self.client.getAddress()), None)
self.checkByUUID(self.client)
# admin
self._addAdmin()
self.checkNodes([self.storage, self.master, self.client, self.admin])
self.checkStorages([self.storage])
self.checkMasters([self.master])
self.checkClients([self.client])
self.checkByServer(self.admin)
self.checkByUUID(self.admin)
def testReInit(self):
""" Check that the manager clear all its content """
manager = self.manager
self.checkNodes([])
self.checkStorages([])
self.checkMasters([])
self.checkClients([])
self._addMaster()
self.checkMasters([self.master])
manager.init()
self.checkNodes([])
self.checkMasters([])
self._addStorage()
self.checkStorages([self.storage])
manager.init()
self.checkNodes([])
self.checkStorages([])
self._addClient()
self.checkClients([self.client])
manager.init()
self.checkNodes([])
self.checkClients([])
def testUpdate(self):
""" Check manager content update """
# set up four nodes
manager = self.manager
self._addMaster()
self._addStorage()
self._addClient()
self._addAdmin()
self.checkNodes([self.master, self.storage, self.client, self.admin])
self.checkMasters([self.master])
self.checkStorages([self.storage])
self.checkClients([self.client])
# build changes
old_address = self.master.getAddress()
new_address = ('127.0.0.1', 2001)
old_uuid = self.storage.getUUID()
new_uuid = self.getNewUUID()
node_list = (
(NodeTypes.CLIENT, None, self.client.getUUID(), NodeStates.DOWN),
(NodeTypes.MASTER, new_address, self.master.getUUID(), NodeStates.RUNNING),
(NodeTypes.STORAGE, self.storage.getAddress(), new_uuid,
NodeStates.RUNNING),
(NodeTypes.ADMIN, self.admin.getAddress(), self.admin.getUUID(),
NodeStates.UNKNOWN),
)
# update manager content
manager.update(node_list)
# - the client went down
self.checkClients([])
# - master changed its address
self.checkMasters([self.master])
self.assertEqual(manager.getByAddress(old_address), None)
self.master.setAddress(new_address)
self.checkByServer(self.master)
# - storage changed its UUID
storage_list = manager.getStorageList()
self.assertEqual(len(storage_list), 1)
new_storage = storage_list[0]
self.assertNotEqual(new_storage.getUUID(), old_uuid)
self.assertEqual(new_storage.getState(), NodeStates.RUNNING)
# admin is still here but in UNKNOWN state
self.checkNodes([self.master, self.admin, new_storage])
self.assertEqual(self.admin.getState(), NodeStates.UNKNOWN)
def testIdentified(self):
# set up four nodes
manager = self.manager
self._addMaster()
self._addStorage()
self._addClient()
self._addAdmin()
# switch node to connected
self.checkIdentified([])
self.master.setConnection(Mock())
self.checkIdentified([self.master])
self.storage.setConnection(Mock())
self.checkIdentified([self.master, self.storage])
self.client.setConnection(Mock())
self.checkIdentified([self.master, self.storage, self.client])
self.admin.setConnection(Mock())
self.checkIdentified([self.master, self.storage, self.client, self.admin])
# check the pool_set attribute
self.checkIdentified([self.master], pool_set=[self.master.getUUID()])
self.checkIdentified([self.storage], pool_set=[self.storage.getUUID()])
self.checkIdentified([self.client], pool_set=[self.client.getUUID()])
self.checkIdentified([self.admin], pool_set=[self.admin.getUUID()])
self.checkIdentified([self.master, self.storage], pool_set=[
self.master.getUUID(), self.storage.getUUID()])
if __name__ == '__main__':
unittest.main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/testPT.py 0000664 0000000 0000000 00000045007 11634614701 0024135 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from mock import Mock
from neo.lib.protocol import NodeStates, CellStates
from neo.lib.pt import Cell, PartitionTable, PartitionTableException
from neo.lib.node import StorageNode
from neo.tests import NeoUnitTestBase
class PartitionTableTests(NeoUnitTestBase):
def test_01_Cell(self):
uuid = self.getNewUUID()
server = ("127.0.0.1", 19001)
sn = StorageNode(Mock(), server, uuid)
cell = Cell(sn)
self.assertEqual(cell.node, sn)
self.assertEqual(cell.state, CellStates.UP_TO_DATE)
cell = Cell(sn, CellStates.OUT_OF_DATE)
self.assertEqual(cell.node, sn)
self.assertEqual(cell.state, CellStates.OUT_OF_DATE)
# check getter
self.assertEqual(cell.getNode(), sn)
self.assertEqual(cell.getState(), CellStates.OUT_OF_DATE)
self.assertEqual(cell.getNodeState(), NodeStates.UNKNOWN)
self.assertEqual(cell.getUUID(), uuid)
self.assertEqual(cell.getAddress(), server)
# check state setter
cell.setState(CellStates.FEEDING)
self.assertEqual(cell.getState(), CellStates.FEEDING)
def test_03_setCell(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
# add a cell to an empty row
self.assertFalse(pt.count_dict.has_key(sn1))
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 1)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
cell = pt.partition_list[x][0]
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# try to add to a nonexistent partition
self.assertRaises(IndexError, pt.setCell, 10, sn1, CellStates.UP_TO_DATE)
# if we add in DISCARDED state, the cell must be removed
pt.setCell(0, sn1, CellStates.DISCARDED)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
self.assertEqual(pt.count_dict[sn1], 0)
# add a feeding node into empty row
pt.setCell(0, sn1, CellStates.FEEDING)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 0)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
cell = pt.partition_list[x][0]
self.assertEqual(cell.getState(), CellStates.FEEDING)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# re-add it as feeding, nothing changes
pt.setCell(0, sn1, CellStates.FEEDING)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 0)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
cell = pt.partition_list[x][0]
self.assertEqual(cell.getState(), CellStates.FEEDING)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# now add it as up to date
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 1)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
cell = pt.partition_list[x][0]
self.assertEqual(cell.getState(), CellStates.UP_TO_DATE)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# now discard the cell; BROKEN and DOWN nodes must not be accepted
pt.setCell(0, sn1, CellStates.DISCARDED)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
self.assertEqual(pt.count_dict[sn1], 0)
sn1.setState(NodeStates.BROKEN)
self.assertRaises(PartitionTableException, pt.setCell,
0, sn1, CellStates.UP_TO_DATE)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
self.assertEqual(pt.count_dict[sn1], 0)
sn1.setState(NodeStates.DOWN)
self.assertRaises(PartitionTableException, pt.setCell,
0, sn1, CellStates.UP_TO_DATE)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
self.assertEqual(pt.count_dict[sn1], 0)
def test_04_removeCell(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
# add a cell to an empty row
self.assertFalse(pt.count_dict.has_key(sn1))
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 1)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# remove it
pt.removeCell(0, sn1)
self.assertEqual(pt.count_dict[sn1], 0)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
# add a feeding cell
pt.setCell(0, sn1, CellStates.FEEDING)
self.assertTrue(pt.count_dict.has_key(sn1))
self.assertEqual(pt.count_dict[sn1], 0)
for x in xrange(num_partitions):
if x == 0:
self.assertEqual(len(pt.partition_list[x]), 1)
else:
self.assertEqual(len(pt.partition_list[x]), 0)
# remove it
pt.removeCell(0, sn1)
self.assertEqual(pt.count_dict[sn1], 0)
for x in xrange(num_partitions):
self.assertEqual(len(pt.partition_list[x]), 0)
def test_05_getCellList(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add two kinds of nodes, usable and unusable
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2)
pt.setCell(0, sn2, CellStates.OUT_OF_DATE)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3)
pt.setCell(0, sn3, CellStates.FEEDING)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19001)
sn4 = StorageNode(Mock(), server4, uuid4)
pt.setCell(0, sn4, CellStates.DISCARDED) # won't be added
# now check the result
self.assertEqual(len(pt.partition_list[0]), 3)
for x in xrange(num_partitions):
if x == 0:
# all nodes
all_cell = pt.getCellList(0)
all_nodes = [x.getNode() for x in all_cell]
self.assertEqual(len(all_cell), 3)
self.assertTrue(sn1 in all_nodes)
self.assertTrue(sn2 in all_nodes)
self.assertTrue(sn3 in all_nodes)
self.assertTrue(sn4 not in all_nodes)
# writable nodes
all_cell = pt.getCellList(0, writable=True)
all_nodes = [x.getNode() for x in all_cell]
self.assertEqual(len(all_cell), 3)
self.assertTrue(sn1 in all_nodes)
self.assertTrue(sn2 in all_nodes)
self.assertTrue(sn3 in all_nodes)
self.assertTrue(sn4 not in all_nodes)
# readable nodes
all_cell = pt.getCellList(0, readable=True)
all_nodes = [x.getNode() for x in all_cell]
self.assertEqual(len(all_cell), 2)
self.assertTrue(sn1 in all_nodes)
self.assertTrue(sn2 not in all_nodes)
self.assertTrue(sn3 in all_nodes)
self.assertTrue(sn4 not in all_nodes)
# writable & readable nodes
all_cell = pt.getCellList(0, readable=True, writable=True)
all_nodes = [x.getNode() for x in all_cell]
self.assertEqual(len(all_cell), 2)
self.assertTrue(sn1 in all_nodes)
self.assertTrue(sn2 not in all_nodes)
self.assertTrue(sn3 in all_nodes)
self.assertTrue(sn4 not in all_nodes)
else:
self.assertEqual(len(pt.getCellList(1, False)), 0)
self.assertEqual(len(pt.getCellList(1, True)), 0)
def test_06_clear(self):
# add some nodes
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add two kinds of nodes, usable and unusable
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2)
pt.setCell(1, sn2, CellStates.OUT_OF_DATE)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3)
pt.setCell(2, sn3, CellStates.FEEDING)
# now check the result
self.assertEqual(len(pt.partition_list[0]), 1)
self.assertEqual(len(pt.partition_list[1]), 1)
self.assertEqual(len(pt.partition_list[2]), 1)
pt.clear()
partition_list = pt.partition_list
self.assertEqual(len(partition_list), num_partitions)
for x in xrange(num_partitions):
part = partition_list[x]
self.assertTrue(isinstance(part, list))
self.assertEqual(len(part), 0)
self.assertEqual(len(pt.count_dict), 0)
def test_07_getNodeList(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add two kinds of nodes, usable and unusable
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2)
pt.setCell(0, sn2, CellStates.OUT_OF_DATE)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3)
pt.setCell(0, sn3, CellStates.FEEDING)
uuid4 = self.getNewUUID()
server4 = ("127.0.0.4", 19001)
sn4 = StorageNode(Mock(), server4, uuid4)
pt.setCell(0, sn4, CellStates.DISCARDED) # won't be added
# must get only two nodes, as feeding and discarded cells
# are not taken into account
self.assertEqual(len(pt.getNodeList()), 2)
nodes = pt.getNodeList()
self.assertTrue(sn1 in nodes)
self.assertTrue(sn2 in nodes)
self.assertTrue(sn3 not in nodes)
self.assertTrue(sn4 not in nodes)
def test_08_filled(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
self.assertEqual(pt.np, num_partitions)
self.assertEqual(pt.num_filled_rows, 0)
self.assertFalse(pt.filled())
# add a node to all partitions
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
pt.setCell(x, sn1, CellStates.UP_TO_DATE)
self.assertEqual(pt.num_filled_rows, num_partitions)
self.assertTrue(pt.filled())
def test_09_hasOffset(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add two kinds of nodes, usable and unusable
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
# now test
self.assertTrue(pt.hasOffset(0))
self.assertFalse(pt.hasOffset(1))
# unknown partition
self.assertFalse(pt.hasOffset(50))
def test_10_operational(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
self.assertFalse(pt.filled())
self.assertFalse(pt.operational())
# add a node to all partitions
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
pt.setCell(x, sn1, CellStates.UP_TO_DATE)
self.assertTrue(pt.filled())
# it's up to date and running, so operational
sn1.setState(NodeStates.RUNNING)
self.assertTrue(pt.operational())
# same with feeding state
pt.clear()
self.assertFalse(pt.filled())
self.assertFalse(pt.operational())
# add a node to all partitions
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
pt.setCell(x, sn1, CellStates.FEEDING)
self.assertTrue(pt.filled())
# it's feeding and running, so operational
sn1.setState(NodeStates.RUNNING)
self.assertTrue(pt.operational())
# same with feeding state but a non-running node
pt.clear()
self.assertFalse(pt.filled())
self.assertFalse(pt.operational())
# add a node to all partitions
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
sn1.setState(NodeStates.TEMPORARILY_DOWN)
for x in xrange(num_partitions):
pt.setCell(x, sn1, CellStates.FEEDING)
self.assertTrue(pt.filled())
# it's feeding but the node is not running, so not operational
self.assertFalse(pt.operational())
# same with out of date state and running
pt.clear()
self.assertFalse(pt.filled())
self.assertFalse(pt.operational())
# add a node to all partitions
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
for x in xrange(num_partitions):
pt.setCell(x, sn1, CellStates.OUT_OF_DATE)
self.assertTrue(pt.filled())
# it's out of date, so not operational even though the node is running
self.assertFalse(pt.operational())
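The operational checks above combine each cell's state with its node's state. A hedged, self-contained sketch of that rule follows; the names `operational`, `running_nodes` and the string states are illustrative stand-ins, not NEO's actual PartitionTable API:

```python
# Illustrative cell states (NEO uses CellStates constants instead).
UP_TO_DATE, FEEDING, OUT_OF_DATE = 'U', 'F', 'O'

def operational(partition_list, running_nodes):
    """Return True if every partition has a readable cell on a running node.

    A cell is readable when its state is UP_TO_DATE or FEEDING; OUT_OF_DATE
    cells and cells on non-running nodes do not count.
    """
    if not partition_list:
        return False
    return all(
        any(node in running_nodes and state in (UP_TO_DATE, FEEDING)
            for node, state in row)
        for row in partition_list)
```

This mirrors the four scenarios tested above: up-to-date + running and feeding + running are operational; a non-running node or an out-of-date cell is not.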
def test_12_getRow(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
# add nodes
uuid1 = self.getNewUUID()
server1 = ("127.0.0.1", 19001)
sn1 = StorageNode(Mock(), server1, uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
uuid2 = self.getNewUUID()
server2 = ("127.0.0.2", 19001)
sn2 = StorageNode(Mock(), server2, uuid2)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn2, CellStates.UP_TO_DATE)
uuid3 = self.getNewUUID()
server3 = ("127.0.0.3", 19001)
sn3 = StorageNode(Mock(), server3, uuid3)
pt.setCell(0, sn3, CellStates.UP_TO_DATE)
# test
row_0 = pt.getRow(0)
self.assertEqual(len(row_0), 3)
for uuid, state in row_0:
self.assertTrue(uuid in (sn1.getUUID(), sn2.getUUID(), sn3.getUUID()))
self.assertEqual(state, CellStates.UP_TO_DATE)
row_1 = pt.getRow(1)
self.assertEqual(len(row_1), 2)
for uuid, state in row_1:
self.assertTrue(uuid in (sn1.getUUID(), sn2.getUUID()))
self.assertEqual(state, CellStates.UP_TO_DATE)
row_2 = pt.getRow(2)
self.assertEqual(len(row_2), 1)
for uuid, state in row_2:
self.assertEqual(uuid, sn1.getUUID())
self.assertEqual(state, CellStates.UP_TO_DATE)
row_3 = pt.getRow(3)
self.assertEqual(len(row_3), 0)
row_4 = pt.getRow(4)
self.assertEqual(len(row_4), 0)
# unknown row
self.assertRaises(IndexError, pt.getRow, 5)
def test_getNodeMap(self):
num_partitions = 5
num_replicas = 2
pt = PartitionTable(num_partitions, num_replicas)
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
sn1 = StorageNode(Mock(), ("127.0.0.1", 19001), uuid1)
pt.setCell(0, sn1, CellStates.UP_TO_DATE)
pt.setCell(1, sn1, CellStates.UP_TO_DATE)
pt.setCell(2, sn1, CellStates.UP_TO_DATE)
self.assertEqual(pt.getNodeMap(), {
sn1: [0, 1, 2],
})
sn2 = StorageNode(Mock(), ("127.0.0.2", 19001), uuid2)
pt.setCell(0, sn2, CellStates.UP_TO_DATE)
pt.setCell(1, sn2, CellStates.UP_TO_DATE)
self.assertEqual(pt.getNodeMap(), {
sn1: [0, 1, 2],
sn2: [0, 1],
})
sn3 = StorageNode(Mock(), ("127.0.0.3", 19001), uuid3)
pt.setCell(0, sn3, CellStates.UP_TO_DATE)
self.assertEqual(pt.getNodeMap(), {
sn1: [0, 1, 2],
sn2: [0, 1],
sn3: [0],
})
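The node map asserted above is a simple aggregation over the partition-table rows. A minimal sketch, assuming each cell exposes a `getNode()` accessor; the helper names `Cell` and `build_node_map` are hypothetical, not NEO's implementation:

```python
class Cell(object):
    """Minimal stand-in for a partition-table cell (illustrative only)."""
    def __init__(self, node):
        self._node = node

    def getNode(self):
        return self._node

def build_node_map(partition_list):
    """Map each node to the list of partition offsets it serves."""
    node_map = {}
    for offset, row in enumerate(partition_list):
        for cell in row:
            node_map.setdefault(cell.getNode(), []).append(offset)
    return node_map
```

With rows shaped like the test data (sn1 in partitions 0-2, sn2 in 0-1, sn3 in 0), this yields the same dictionary the assertions expect.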
if __name__ == '__main__':
unittest.main()
# neo/tests/testProtocol.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
import socket
from neo.lib.protocol import NodeTypes, NodeStates, CellStates, ClusterStates
from neo.lib.protocol import ErrorCodes, Packets, Errors, LockState
from neo.tests import NeoUnitTestBase, IP_VERSION_FORMAT_DICT
class ProtocolTests(NeoUnitTestBase):
def setUp(self):
NeoUnitTestBase.setUp(self)
self.ltid = None
def getNextTID(self):
self.ltid = super(ProtocolTests, self).getNextTID(self.ltid)
return self.ltid
def test_03_protocolError(self):
p = Errors.ProtocolError("bad protocol")
error_code, error_msg = p.decode()
self.assertEqual(error_code, ErrorCodes.PROTOCOL_ERROR)
self.assertEqual(error_msg, "bad protocol")
def test_05_notReady(self):
p = Errors.NotReady("wait")
error_code, error_msg = p.decode()
self.assertEqual(error_code, ErrorCodes.NOT_READY)
self.assertEqual(error_msg, "wait")
def test_06_brokenNodeDisallowedError(self):
p = Errors.BrokenNode("broken")
error_code, error_msg = p.decode()
self.assertEqual(error_code, ErrorCodes.BROKEN_NODE)
self.assertEqual(error_msg, "broken")
def test_07_oidNotFound(self):
p = Errors.OidNotFound("no oid")
error_code, error_msg = p.decode()
self.assertEqual(error_code, ErrorCodes.OID_NOT_FOUND)
self.assertEqual(error_msg, "no oid")
def test_08_tidNotFound(self):
p = Errors.TidNotFound("no tid")
error_code, error_msg = p.decode()
self.assertEqual(error_code, ErrorCodes.TID_NOT_FOUND)
self.assertEqual(error_msg, "no tid")
def test_09_ping(self):
p = Packets.Ping()
self.assertEqual(p.decode(), ())
def test_10_pong(self):
p = Packets.Pong()
self.assertEqual(p.decode(), ())
def test_11_RequestIdentification(self):
uuid = self.getNewUUID()
p = Packets.RequestIdentification(NodeTypes.CLIENT,
uuid, (self.local_ip, 9080), "unittest")
node, p_uuid, (ip, port), name = p.decode()
self.assertEqual(node, NodeTypes.CLIENT)
self.assertEqual(p_uuid, uuid)
self.assertEqual(ip, self.local_ip)
self.assertEqual(port, 9080)
self.assertEqual(name, "unittest")
def test_11_bis_RequestIdentification_IPv6(self):
uuid = self.getNewUUID()
self.local_ip = IP_VERSION_FORMAT_DICT[socket.AF_INET6]
p = Packets.RequestIdentification(NodeTypes.CLIENT,
uuid, (self.local_ip, 9080), "unittest")
node, p_uuid, (ip, port), name = p.decode()
self.assertEqual(node, NodeTypes.CLIENT)
self.assertEqual(p_uuid, uuid)
self.assertEqual(ip, self.local_ip)
self.assertEqual(port, 9080)
self.assertEqual(name, "unittest")
def test_12_AcceptIdentification(self):
uuid1, uuid2 = self.getNewUUID(), self.getNewUUID()
p = Packets.AcceptIdentification(NodeTypes.CLIENT, uuid1,
10, 20, uuid2)
node, p_uuid, nb_partitions, nb_replicas, your_uuid = p.decode()
self.assertEqual(node, NodeTypes.CLIENT)
self.assertEqual(p_uuid, uuid1)
self.assertEqual(nb_partitions, 10)
self.assertEqual(nb_replicas, 20)
self.assertEqual(your_uuid, uuid2)
def test_13_askPrimary(self):
p = Packets.AskPrimary()
self.assertEqual(p.decode(), ())
def test_14_answerPrimary(self):
uuid = self.getNewUUID()
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
master_list = [(("127.0.0.1", 1), uuid1),
(("127.0.0.2", 2), uuid2),
(("127.0.0.3", 3), uuid3)]
p = Packets.AnswerPrimary(uuid, master_list)
primary_uuid, p_master_list = p.decode()
self.assertEqual(primary_uuid, uuid)
self.assertEqual(master_list, p_master_list)
def test_14_bis_answerPrimaryIPv6(self):
""" Try to get primary master through IPv6 """
self.address_type = socket.AF_INET6
uuid = self.getNewUUID()
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
master_list = [(("::1", 1), uuid1),
(("::2", 2), uuid2),
(("::3", 3), uuid3)]
p = Packets.AnswerPrimary(uuid, master_list)
primary_uuid, p_master_list = p.decode()
self.assertEqual(primary_uuid, uuid)
self.assertEqual(master_list, p_master_list)
def test_15_announcePrimary(self):
p = Packets.AnnouncePrimary()
self.assertEqual(p.decode(), ())
def test_16_reelectPrimary(self):
p = Packets.ReelectPrimary()
self.assertEqual(p.decode(), ())
def test_17_notifyNodeInformation(self):
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
node_list = \
[(NodeTypes.CLIENT, ("127.0.0.1", 1), uuid1, NodeStates.RUNNING),
(NodeTypes.CLIENT, ("127.0.0.2", 2), uuid2, NodeStates.DOWN),
(NodeTypes.CLIENT, ("127.0.0.3", 3), uuid3, NodeStates.BROKEN)]
p = Packets.NotifyNodeInformation(node_list)
p_node_list = p.decode()[0]
self.assertEqual(node_list, p_node_list)
def test_18_askLastIDs(self):
p = Packets.AskLastIDs()
self.assertEqual(p.decode(), ())
def test_19_answerLastIDs(self):
oid = self.getNextTID()
tid = self.getNextTID()
ptid = self.getPTID()
p = Packets.AnswerLastIDs(oid, tid, ptid)
loid, ltid, lptid = p.decode()
self.assertEqual(loid, oid)
self.assertEqual(ltid, tid)
self.assertEqual(lptid, ptid)
def test_20_askPartitionTable(self):
self.assertEqual(Packets.AskPartitionTable().decode(), ())
def test_21_answerPartitionTable(self):
ptid = self.getPTID()
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
cell_list = [
(0, [(uuid1, CellStates.UP_TO_DATE), (uuid2, CellStates.OUT_OF_DATE)]),
(43, [(uuid2, CellStates.OUT_OF_DATE), (uuid3, CellStates.DISCARDED)]),
(124, [(uuid1, CellStates.DISCARDED), (uuid3, CellStates.UP_TO_DATE)]),
]
p = Packets.AnswerPartitionTable(ptid, cell_list)
pptid, p_cell_list = p.decode()
self.assertEqual(pptid, ptid)
self.assertEqual(p_cell_list, cell_list)
def test_22_sendPartitionTable(self):
ptid = self.getPTID()
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
uuid3 = self.getNewUUID()
cell_list = [
(0, [(uuid1, CellStates.UP_TO_DATE), (uuid2, CellStates.OUT_OF_DATE)]),
(43, [(uuid2, CellStates.OUT_OF_DATE), (uuid3, CellStates.DISCARDED)]),
(124, [(uuid1, CellStates.DISCARDED), (uuid3, CellStates.UP_TO_DATE)]),
]
p = Packets.SendPartitionTable(ptid, cell_list)
pptid, p_cell_list = p.decode()
self.assertEqual(pptid, ptid)
self.assertEqual(p_cell_list, cell_list)
def test_23_notifyPartitionChanges(self):
ptid = self.getPTID()
uuid1 = self.getNewUUID()
uuid2 = self.getNewUUID()
cell_list = [(0, uuid1, CellStates.UP_TO_DATE),
(43, uuid2, CellStates.OUT_OF_DATE),
(124, uuid1, CellStates.DISCARDED)]
p = Packets.NotifyPartitionChanges(ptid, cell_list)
pptid, p_cell_list = p.decode()
self.assertEqual(pptid, ptid)
self.assertEqual(p_cell_list, cell_list)
def test_24_startOperation(self):
p = Packets.StartOperation()
self.assertEqual(p.decode(), ())
def test_25_stopOperation(self):
p = Packets.StopOperation()
self.assertEqual(p.decode(), ())
def test_26_askUnfinishedTransaction(self):
p = Packets.AskUnfinishedTransactions()
self.assertEqual(p.decode(), ())
def test_27_answerUnfinishedTransaction(self):
tid = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
tid4 = self.getNextTID()
tid_list = [tid1, tid2, tid3, tid4]
p = Packets.AnswerUnfinishedTransactions(tid, tid_list)
p_tid, p_tid_list = p.decode()
self.assertEqual(p_tid, tid)
self.assertEqual(p_tid_list, tid_list)
def test_28_askObjectPresent(self):
oid = self.getNextTID()
tid = self.getNextTID()
p = Packets.AskObjectPresent(oid, tid)
loid, ltid = p.decode()
self.assertEqual(loid, oid)
self.assertEqual(ltid, tid)
def test_29_answerObjectPresent(self):
oid = self.getNextTID()
tid = self.getNextTID()
p = Packets.AnswerObjectPresent(oid, tid)
loid, ltid = p.decode()
self.assertEqual(loid, oid)
self.assertEqual(ltid, tid)
def test_30_deleteTransaction(self):
tid = self.getNextTID()
oid_list = [self.getOID(1), self.getOID(2)]
p = Packets.DeleteTransaction(tid, oid_list)
self.assertEqual(type(p), Packets.DeleteTransaction)
self.assertEqual(p.decode(), (tid, oid_list))
def test_31_commitTransaction(self):
tid = self.getNextTID()
p = Packets.CommitTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_32_askBeginTransaction(self):
tid = self.getNextTID()
p = Packets.AskBeginTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(tid, ptid)
def test_33_answerBeginTransaction(self):
tid = self.getNextTID()
p = Packets.AnswerBeginTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_34_askNewOIDs(self):
p = Packets.AskNewOIDs(10)
nb = p.decode()
self.assertEqual(nb, (10,))
def test_35_answerNewOIDs(self):
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid3 = self.getNextTID()
oid4 = self.getNextTID()
oid_list = [oid1, oid2, oid3, oid4]
p = Packets.AnswerNewOIDs(oid_list)
p_oid_list = p.decode()[0]
self.assertEqual(p_oid_list, oid_list)
def test_36_askFinishTransaction(self):
self._testXIDAndYIDList(Packets.AskFinishTransaction)
def _testXIDAndYIDList(self, packet):
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid3 = self.getNextTID()
oid4 = self.getNextTID()
tid = self.getNextTID()
oid_list = [oid1, oid2, oid3, oid4]
p = packet(tid, oid_list)
p_tid, p_oid_list = p.decode()
self.assertEqual(p_tid, tid)
self.assertEqual(p_oid_list, oid_list)
def test_37_answerTransactionFinished(self):
ttid = self.getNextTID()
tid = self.getNextTID()
p = Packets.AnswerTransactionFinished(ttid, tid)
pttid, ptid = p.decode()
self.assertEqual(pttid, ttid)
self.assertEqual(ptid, tid)
def test_38_askLockInformation(self):
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid_list = [oid1, oid2]
ttid = self.getNextTID()
tid = self.getNextTID()
p = Packets.AskLockInformation(ttid, tid, oid_list)
pttid, ptid, p_oid_list = p.decode()
self.assertEqual(pttid, ttid)
self.assertEqual(ptid, tid)
self.assertEqual(oid_list, p_oid_list)
def test_39_answerInformationLocked(self):
tid = self.getNextTID()
p = Packets.AnswerInformationLocked(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_40_invalidateObjects(self):
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid3 = self.getNextTID()
oid4 = self.getNextTID()
tid = self.getNextTID()
oid_list = [oid1, oid2, oid3, oid4]
p = Packets.InvalidateObjects(tid, oid_list)
p_tid, p_oid_list = p.decode()
self.assertEqual(p_tid, tid)
self.assertEqual(p_oid_list, oid_list)
def test_41_notifyUnlockInformation(self):
tid = self.getNextTID()
p = Packets.NotifyUnlockInformation(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_42_abortTransaction(self):
tid = self.getNextTID()
p = Packets.AbortTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_43_askStoreTransaction(self):
tid = self.getNextTID()
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid3 = self.getNextTID()
oid4 = self.getNextTID()
oid_list = [oid1, oid2, oid3, oid4]
p = Packets.AskStoreTransaction(tid, "moi", "transaction", "exti", oid_list)
ptid, user, desc, ext, p_oid_list = p.decode()
self.assertEqual(ptid, tid)
self.assertEqual(p_oid_list, oid_list)
self.assertEqual(user, "moi")
self.assertEqual(desc, "transaction")
self.assertEqual(ext, "exti")
def test_44_answerStoreTransaction(self):
tid = self.getNextTID()
p = Packets.AnswerStoreTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_45_askStoreObject(self):
oid = self.getNextTID()
serial = self.getNextTID()
tid = self.getNextTID()
tid2 = self.getNextTID()
unlock = False
p = Packets.AskStoreObject(oid, serial, 1, 55, "to", tid2, tid, unlock)
poid, pserial, compression, checksum, data, ptid2, ptid, punlock = \
p.decode()
self.assertEqual(oid, poid)
self.assertEqual(serial, pserial)
self.assertEqual(tid, ptid)
self.assertEqual(tid2, ptid2)
self.assertEqual(compression, 1)
self.assertEqual(checksum, 55)
self.assertEqual(data, "to")
self.assertEqual(unlock, punlock)
def test_46_answerStoreObject(self):
oid = self.getNextTID()
serial = self.getNextTID()
p = Packets.AnswerStoreObject(True, oid, serial)
conflicting, poid, pserial = p.decode()
self.assertEqual(oid, poid)
self.assertEqual(serial, pserial)
self.assertTrue(conflicting)
def test_47_askObject(self):
oid = self.getNextTID()
serial = self.getNextTID()
tid = self.getNextTID()
p = Packets.AskObject(oid, serial, tid)
poid, pserial, ptid = p.decode()
self.assertEqual(oid, poid)
self.assertEqual(serial, pserial)
self.assertEqual(tid, ptid)
def test_48_answerObject(self):
oid = self.getNextTID()
serial_start = self.getNextTID()
serial_end = self.getNextTID()
data_serial = self.getNextTID()
p = Packets.AnswerObject(oid, serial_start, serial_end, 1, 55, "to",
data_serial)
poid, pserial_start, pserial_end, compression, checksum, data, \
pdata_serial = p.decode()
self.assertEqual(oid, poid)
self.assertEqual(serial_start, pserial_start)
self.assertEqual(serial_end, pserial_end)
self.assertEqual(compression, 1)
self.assertEqual(checksum, 55)
self.assertEqual(data, "to")
self.assertEqual(pdata_serial, data_serial)
def test_49_askTIDs(self):
p = Packets.AskTIDs(1, 10, 5)
first, last, partition = p.decode()
self.assertEqual(first, 1)
self.assertEqual(last, 10)
self.assertEqual(partition, 5)
def test_50_answerTIDs(self):
self._test_AnswerTIDs(Packets.AnswerTIDs)
def _test_AnswerTIDs(self, packet):
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
tid4 = self.getNextTID()
tid_list = [tid1, tid2, tid3, tid4]
p = packet(tid_list)
p_tid_list = p.decode()[0]
self.assertEqual(p_tid_list, tid_list)
def test_51_askTransactionInformation(self):
tid = self.getNextTID()
p = Packets.AskTransactionInformation(tid)
ptid = p.decode()[0]
self.assertEqual(tid, ptid)
def test_52_answerTransactionInformation(self):
tid = self.getNextTID()
oid1 = self.getNextTID()
oid2 = self.getNextTID()
oid3 = self.getNextTID()
oid4 = self.getNextTID()
oid_list = [oid1, oid2, oid3, oid4]
p = Packets.AnswerTransactionInformation(tid, "moi",
"transaction", "exti", False, oid_list)
ptid, user, desc, ext, packed, p_oid_list = p.decode()
self.assertEqual(ptid, tid)
self.assertEqual(p_oid_list, oid_list)
self.assertEqual(user, "moi")
self.assertEqual(desc, "transaction")
self.assertEqual(ext, "exti")
self.assertFalse(packed)
def test_53_askObjectHistory(self):
oid = self.getNextTID()
p = Packets.AskObjectHistory(oid, 1, 10)
poid, first, last = p.decode()
self.assertEqual(first, 1)
self.assertEqual(last, 10)
self.assertEqual(poid, oid)
def test_54_answerObjectHistory(self):
oid = self.getNextTID()
hist1 = (self.getNextTID(), 15)
hist2 = (self.getNextTID(), 353)
hist3 = (self.getNextTID(), 326)
hist4 = (self.getNextTID(), 652)
hist_list = [hist1, hist2, hist3, hist4]
p = Packets.AnswerObjectHistory(oid, hist_list)
poid, p_hist_list = p.decode()
self.assertEqual(p_hist_list, hist_list)
self.assertEqual(oid, poid)
def test_57_notifyReplicationDone(self):
offset = 10
p = Packets.NotifyReplicationDone(offset)
p_offset = p.decode()[0]
self.assertEqual(p_offset, offset)
def test_askObjectUndoSerial(self):
tid = self.getNextTID()
ltid = self.getNextTID()
undone_tid = self.getNextTID()
oid_list = [self.getOID(x) for x in xrange(4)]
p = Packets.AskObjectUndoSerial(tid, ltid, undone_tid, oid_list)
ptid, pltid, pundone_tid, poid_list = p.decode()
self.assertEqual(tid, ptid)
self.assertEqual(ltid, pltid)
self.assertEqual(undone_tid, pundone_tid)
self.assertEqual(oid_list, poid_list)
def test_answerObjectUndoSerial(self):
oid1 = self.getNextTID()
oid2 = self.getNextTID()
tid1 = self.getNextTID()
tid2 = self.getNextTID()
tid3 = self.getNextTID()
object_tid_dict = {
oid1: (tid1, tid2, True),
oid2: (tid3, None, False),
}
p = Packets.AnswerObjectUndoSerial(object_tid_dict)
pobject_tid_dict = p.decode()[0]
self.assertEqual(object_tid_dict, pobject_tid_dict)
def test_NotifyLastOID(self):
oid = self.getOID(1)
p = Packets.NotifyLastOID(oid)
self.assertEqual(p.decode(), (oid, ))
def test_AnswerClusterState(self):
state = ClusterStates.RUNNING
p = Packets.AnswerClusterState(state)
self.assertEqual(p.decode(), (state, ))
def test_AskClusterState(self):
p = Packets.AskClusterState()
self.assertEqual(p.decode(), ())
def test_NotifyClusterInformation(self):
state = ClusterStates.RECOVERING
p = Packets.NotifyClusterInformation(state)
self.assertEqual(p.decode(), (state, ))
def test_SetClusterState(self):
state = ClusterStates.VERIFYING
p = Packets.SetClusterState(state)
self.assertEqual(p.decode(), (state, ))
def test_AnswerNodeInformation(self):
p = Packets.AnswerNodeInformation()
self.assertEqual(p.decode(), ())
def test_AskNodeInformation(self):
p = Packets.AskNodeInformation()
self.assertEqual(p.decode(), ())
def test_AddPendingNodes(self):
uuid1, uuid2 = self.getNewUUID(), self.getNewUUID()
p = Packets.AddPendingNodes((uuid1, uuid2))
self.assertEqual(p.decode(), ([uuid1, uuid2], ))
def test_SetNodeState(self):
uuid = self.getNewUUID()
state = NodeStates.PENDING
p = Packets.SetNodeState(uuid, state, True)
self.assertEqual(p.decode(), (uuid, state, True))
def test_AskNodeList(self):
node_type = NodeTypes.STORAGE
p = Packets.AskNodeList(node_type)
self.assertEqual(p.decode(), (node_type, ))
def test_AnswerNodeList(self):
node1 = (NodeTypes.CLIENT, (self.local_ip, 1000),
self.getNewUUID(), NodeStates.DOWN)
node2 = (NodeTypes.MASTER, (self.local_ip, 2000),
self.getNewUUID(), NodeStates.RUNNING)
p = Packets.AnswerNodeList((node1, node2))
self.assertEqual(p.decode(), ([node1, node2], ))
def test_AnswerNodeListIPv6(self):
self.address_type = socket.AF_INET6
node1 = (NodeTypes.CLIENT, (self.local_ip, 1000),
self.getNewUUID(), NodeStates.DOWN)
node2 = (NodeTypes.MASTER, (self.local_ip, 2000),
self.getNewUUID(), NodeStates.RUNNING)
p = Packets.AnswerNodeList((node1, node2))
self.assertEqual(p.decode(), ([node1, node2], ))
def test_AskPartitionList(self):
min_offset = 10
max_offset = 20
uuid = self.getNewUUID()
p = Packets.AskPartitionList(min_offset, max_offset, uuid)
self.assertEqual(p.decode(), (min_offset, max_offset, uuid))
def test_AnswerPartitionList(self):
ptid = self.getPTID(1)
row_list = [
(0, [
(self.getNewUUID(), CellStates.UP_TO_DATE),
(self.getNewUUID(), CellStates.OUT_OF_DATE),
]),
(1, [
(self.getNewUUID(), CellStates.FEEDING),
(self.getNewUUID(), CellStates.DISCARDED),
]),
]
p = Packets.AnswerPartitionList(ptid, row_list)
self.assertEqual(p.decode(), (ptid, row_list))
def test_AskHasLock(self):
tid = self.getNextTID()
oid = self.getNextTID()
p = Packets.AskHasLock(tid, oid)
self.assertEqual(p.decode(), (tid, oid))
def test_AnswerHasLock(self):
oid = self.getNextTID()
for lock_state in LockState.itervalues():
p = Packets.AnswerHasLock(oid, lock_state)
self.assertEqual(p.decode(), (oid, lock_state))
def test_Notify(self):
msg = 'test'
self.assertEqual(Packets.Notify(msg).decode(), (msg, ))
def test_AskTIDsFrom(self):
tid = self.getNextTID()
tid2 = self.getNextTID()
p = Packets.AskTIDsFrom(tid, tid2, 1000, [5])
min_tid, max_tid, length, partition = p.decode()
self.assertEqual(min_tid, tid)
self.assertEqual(max_tid, tid2)
self.assertEqual(length, 1000)
self.assertEqual(partition, [5])
def test_AnswerTIDsFrom(self):
self._test_AnswerTIDs(Packets.AnswerTIDsFrom)
def test_AskObjectHistoryFrom(self):
oid = self.getOID(1)
min_serial = self.getNextTID()
max_serial = self.getNextTID()
length = 5
partition = 4
p = Packets.AskObjectHistoryFrom(oid, min_serial, max_serial, length,
partition)
p_oid, p_min_serial, p_max_serial, p_length, p_partition = p.decode()
self.assertEqual(p_oid, oid)
self.assertEqual(p_min_serial, min_serial)
self.assertEqual(p_max_serial, max_serial)
self.assertEqual(p_length, length)
self.assertEqual(p_partition, partition)
def test_AnswerObjectHistoryFrom(self):
object_dict = {}
for int_oid in xrange(4):
object_dict[self.getOID(int_oid)] = [self.getNextTID() \
for _ in xrange(5)]
p = Packets.AnswerObjectHistoryFrom(object_dict)
p_object_dict = p.decode()[0]
self.assertEqual(object_dict, p_object_dict)
def test_AskCheckTIDRange(self):
min_tid = self.getNextTID()
max_tid = self.getNextTID()
length = 2
partition = 4
p = Packets.AskCheckTIDRange(min_tid, max_tid, length, partition)
p_min_tid, p_max_tid, p_length, p_partition = p.decode()
self.assertEqual(p_min_tid, min_tid)
self.assertEqual(p_max_tid, max_tid)
self.assertEqual(p_length, length)
self.assertEqual(p_partition, partition)
def test_AnswerCheckTIDRange(self):
min_tid = self.getNextTID()
length = 2
count = 1
tid_checksum = self.getNewUUID()
max_tid = self.getNextTID()
p = Packets.AnswerCheckTIDRange(min_tid, length, count, tid_checksum,
max_tid)
p_min_tid, p_length, p_count, p_tid_checksum, p_max_tid = p.decode()
self.assertEqual(p_min_tid, min_tid)
self.assertEqual(p_length, length)
self.assertEqual(p_count, count)
self.assertEqual(p_tid_checksum, tid_checksum)
self.assertEqual(p_max_tid, max_tid)
def test_AskCheckSerialRange(self):
min_oid = self.getOID(1)
min_serial = self.getNextTID()
max_tid = self.getNextTID()
length = 2
partition = 4
p = Packets.AskCheckSerialRange(min_oid, min_serial, max_tid, length,
partition)
p_min_oid, p_min_serial, p_max_tid, p_length, p_partition = p.decode()
self.assertEqual(p_min_oid, min_oid)
self.assertEqual(p_min_serial, min_serial)
self.assertEqual(p_max_tid, max_tid)
self.assertEqual(p_length, length)
self.assertEqual(p_partition, partition)
def test_AnswerCheckSerialRange(self):
min_oid = self.getOID(1)
min_serial = self.getNextTID()
length = 2
count = 1
oid_checksum = self.getNewUUID()
max_oid = self.getOID(5)
tid_checksum = self.getNewUUID()
max_serial = self.getNextTID()
p = Packets.AnswerCheckSerialRange(min_oid, min_serial, length, count,
oid_checksum, max_oid, tid_checksum, max_serial)
p_min_oid, p_min_serial, p_length, p_count, p_oid_checksum, \
p_max_oid, p_tid_checksum, p_max_serial = p.decode()
self.assertEqual(p_min_oid, min_oid)
self.assertEqual(p_min_serial, min_serial)
self.assertEqual(p_length, length)
self.assertEqual(p_count, count)
self.assertEqual(p_oid_checksum, oid_checksum)
self.assertEqual(p_max_oid, max_oid)
self.assertEqual(p_tid_checksum, tid_checksum)
self.assertEqual(p_max_serial, max_serial)
def test_AskPack(self):
tid = self.getNextTID()
p = Packets.AskPack(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_AnswerPack(self):
status = True
p = Packets.AnswerPack(status)
pstatus = p.decode()[0]
self.assertEqual(pstatus, status)
def test_notifyReady(self):
p = Packets.NotifyReady()
self.assertEqual(tuple(), p.decode())
def test_AskLastTransaction(self):
p = Packets.AskLastTransaction()
self.assertEqual(p.decode(), ())
def test_AnswerLastTransaction(self):
tid = self.getNextTID()
p = Packets.AnswerLastTransaction(tid)
ptid = p.decode()[0]
self.assertEqual(ptid, tid)
def test_AskCheckCurrentSerial(self):
tid = self.getNextTID()
serial = self.getNextTID()
oid = self.getNextTID()
p = Packets.AskCheckCurrentSerial(tid, serial, oid)
ptid, pserial, poid = p.decode()
self.assertEqual(ptid, tid)
self.assertEqual(pserial, serial)
self.assertEqual(poid, oid)
if __name__ == '__main__':
unittest.main()
# neo/tests/testUtil.py
#
# Copyright (C) 2006-2010 Nexedi SA
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
import socket
from neo.tests import NeoUnitTestBase, IP_VERSION_FORMAT_DICT
from neo.lib.util import ReadBuffer, getAddressType, parseNodeAddress, \
getConnectorFromAddress, SOCKET_CONNECTORS_DICT
class UtilTests(NeoUnitTestBase):
def test_getConnectorFromAddress(self):
""" Connector name must correspond to address type """
connector = getConnectorFromAddress((
IP_VERSION_FORMAT_DICT[socket.AF_INET], 0))
self.assertEqual(connector, SOCKET_CONNECTORS_DICT[socket.AF_INET])
connector = getConnectorFromAddress((
IP_VERSION_FORMAT_DICT[socket.AF_INET6], 0))
self.assertEqual(connector, SOCKET_CONNECTORS_DICT[socket.AF_INET6])
self.assertRaises(ValueError, getConnectorFromAddress, ('', 0))
self.assertRaises(ValueError, getConnectorFromAddress, ('test', 0))
def test_getAddressType(self):
""" Get the type of an IP address """
self.assertRaises(ValueError, getAddressType, ('', 0))
address_type = getAddressType(('::1', 0))
self.assertEqual(address_type, socket.AF_INET6)
address_type = getAddressType(('0.0.0.0', 0))
self.assertEqual(address_type, socket.AF_INET)
address_type = getAddressType(('127.0.0.1', 0))
self.assertEqual(address_type, socket.AF_INET)
def test_parseNodeAddress(self):
""" Parsing of addresses """
ip_address = parseNodeAddress('127.0.0.1:0')
self.assertEqual(('127.0.0.1', 0), ip_address)
ip_address = parseNodeAddress('127.0.0.1:0', 100)
self.assertEqual(('127.0.0.1', 0), ip_address)
ip_address = parseNodeAddress('127.0.0.1', 500)
self.assertEqual(('127.0.0.1', 500), ip_address)
self.assertRaises(ValueError, parseNodeAddress, '127.0.0.1')
ip_address = parseNodeAddress('[::1]:0')
self.assertEqual(('::1', 0), ip_address)
ip_address = parseNodeAddress('[::1]:0', 100)
self.assertEqual(('::1', 0), ip_address)
ip_address = parseNodeAddress('[::1]', 500)
self.assertEqual(('::1', 500), ip_address)
self.assertRaises(ValueError, parseNodeAddress, ('[::1]'))
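The address forms exercised above — `host:port`, a bare `host` with a default port, and bracketed IPv6 — can be parsed with a few string splits. This is a hedged sketch of the behaviour the assertions expect (the name `parse_node_address` and its signature are illustrative, not NEO's `parseNodeAddress`):

```python
def parse_node_address(address, default_port=None):
    """Parse 'host:port', '[v6host]:port', or a bare host with a default.

    An explicit port always wins over the default; a bare host with no
    default raises ValueError, matching the test expectations above.
    """
    if address.startswith('['):
        # IPv6: strip the brackets, then look for a trailing ':port'.
        host, _, rest = address[1:].partition(']')
        port_str = rest[1:] if rest.startswith(':') else ''
    else:
        host, _, port_str = address.partition(':')
    if port_str:
        return host, int(port_str)
    if default_port is not None:
        return host, default_port
    raise ValueError('no port in %r and no default given' % address)
```

For example, `parse_node_address('[::1]:0', 100)` keeps the explicit port 0, while `parse_node_address('[::1]', 500)` falls back to 500.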
def testReadBufferRead(self):
""" Append some chunks, then consume the data """
buf = ReadBuffer()
self.assertEqual(len(buf), 0)
buf.append('abc')
self.assertEqual(len(buf), 3)
# not enough data
self.assertEqual(buf.read(4), None)
self.assertEqual(len(buf), 3)
buf.append('def')
# consume a part
self.assertEqual(len(buf), 6)
self.assertEqual(buf.read(4), 'abcd')
self.assertEqual(len(buf), 2)
# ask for more than what remains
self.assertEqual(buf.read(3), None)
# consume the rest
self.assertEqual(buf.read(2), 'ef')
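The semantics checked above — `read()` returns None until enough bytes are buffered, otherwise consumes exactly `size` bytes — can be sketched with a minimal buffer. The class name `SimpleReadBuffer` is illustrative; NEO's ReadBuffer avoids re-joining chunks on every read:

```python
from collections import deque

class SimpleReadBuffer(object):
    """Minimal sketch of the ReadBuffer behaviour exercised above."""
    def __init__(self):
        self.chunks = deque()
        self.size = 0

    def __len__(self):
        return self.size

    def append(self, data):
        self.chunks.append(data)
        self.size += len(data)

    def read(self, size):
        # Not enough buffered data yet: consume nothing and signal None.
        if self.size < size:
            return None
        data = ''.join(self.chunks)
        self.chunks.clear()
        result, rest = data[:size], data[size:]
        if rest:
            self.chunks.append(rest)
        self.size -= size
        return result
```

Replaying the test against this sketch gives the same sequence: `read(4)` is None with only 'abc' buffered, then returns 'abcd' after 'def' is appended, leaving 'ef'.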
if __name__ == "__main__":
unittest.main()
# neo/tests/threaded/__init__.py
#
# Copyright (c) 2011 Nexedi SARL and Contributors. All Rights Reserved.
# Julien Muchembled
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import os, random, socket, sys, tempfile, threading, time, types, weakref
from collections import deque
from functools import wraps
from Queue import Queue, Empty
from mock import Mock
import transaction, ZODB
import neo.admin.app, neo.master.app, neo.storage.app
import neo.client.app, neo.neoctl.app
from neo.client import Storage
from neo.lib import bootstrap, setupLog
from neo.lib.connection import BaseConnection
from neo.lib.connector import SocketConnector, \
ConnectorConnectionRefusedException
from neo.lib.event import EventManager
from neo.lib.protocol import CellStates, ClusterStates, NodeStates, NodeTypes
from neo.lib.util import SOCKET_CONNECTORS_DICT, parseMasterList
from neo.tests import NeoTestBase, getTempDirectory, setupMySQLdb, \
ADDRESS_TYPE, IP_VERSION_FORMAT_DICT, DB_PREFIX, DB_USER
BIND = IP_VERSION_FORMAT_DICT[ADDRESS_TYPE], 0
LOCAL_IP = socket.inet_pton(ADDRESS_TYPE, IP_VERSION_FORMAT_DICT[ADDRESS_TYPE])
SERVER_TYPE = ['master', 'storage', 'admin']
VIRTUAL_IP = [socket.inet_ntop(ADDRESS_TYPE, LOCAL_IP[:-1] + chr(2 + i))
for i in xrange(len(SERVER_TYPE))]
def getVirtualIp(server_type):
return VIRTUAL_IP[SERVER_TYPE.index(server_type)]
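The VIRTUAL_IP table above is built by rewriting the last byte of the packed local address, giving each server type its own pseudo-IP (.2, .3, .4, ...). The packed-address arithmetic can be checked in isolation — this rendition is Python 3 (`bytes([...])` instead of the `chr()` used above) and IPv4 only:

```python
import socket

roles = ['master', 'storage', 'admin']
packed = socket.inet_pton(socket.AF_INET, '127.0.0.1')
# Replace the last byte of the packed loopback address: one address per role.
virtual = [socket.inet_ntop(socket.AF_INET, packed[:-1] + bytes([2 + i]))
           for i in range(len(roles))]
# virtual == ['127.0.0.2', '127.0.0.3', '127.0.0.4']
```

Looking a role up by index then mirrors `getVirtualIp`: `virtual[roles.index('admin')]` is `'127.0.0.4'`.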
class Serialized(object):
@classmethod
def init(cls):
cls._global_lock = threading.Lock()
cls._global_lock.acquire()
# TODO: use something else than Queue, for inspection or editing
# (e.g. we'd like to suspend nodes temporarily)
cls._lock_list = Queue()
cls._pdb = False
cls.pending = 0
@classmethod
def release(cls, lock=None, wake_other=True, stop=False):
"""Suspend lock owner and resume first suspended thread"""
if lock is None:
lock = cls._global_lock
if stop: # XXX: we should fix ClusterStates.STOPPING
cls.pending = None
else:
cls.pending = 0
try:
sys._getframe(1).f_trace.im_self.set_continue()
cls._pdb = True
except AttributeError:
pass
q = cls._lock_list
q.put(lock)
if wake_other:
q.get().release()
@classmethod
def acquire(cls, lock=None):
"""Suspend all threads except lock owner"""
if lock is None:
lock = cls._global_lock
lock.acquire()
if cls.pending is None: # XXX
if lock is cls._global_lock:
cls.pending = 0
else:
sys.exit()
if cls._pdb:
cls._pdb = False
try:
sys.stdout.write(threading.currentThread().node_name)
except AttributeError:
pass
pdb(1)
@classmethod
def tic(cls, lock=None):
# switch to another thread
# (the following calls are not supposed to be debugged into)
cls.release(lock); cls.acquire(lock)
@classmethod
def background(cls):
try:
cls._lock_list.get(0).release()
except Empty:
pass
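Serialized drives all node threads cooperatively: exactly one thread runs at a time; `release()` parks the caller's lock at the tail of a FIFO queue and wakes whoever is at the head, while `acquire()` blocks on the caller's own lock. The hand-off protocol can be reduced to a few lines (a simplification of the class above — no `pending` counter, no pdb hook):

```python
import threading
try:
    from Queue import Queue   # Python 2, as in the module above
except ImportError:
    from queue import Queue   # Python 3

class MiniSerialized(object):
    """Cooperative round-robin: one thread runs, the rest wait on parked locks."""
    _queue = Queue()

    @classmethod
    def release(cls, lock):
        cls._queue.put(lock)          # park our own lock at the tail
        cls._queue.get().release()    # wake whichever thread is at the head

    @classmethod
    def acquire(cls, lock):
        lock.acquire()                # sleep until another thread releases us

    @classmethod
    def tic(cls, lock):
        # Yield control: park ourselves, wake the next thread, wait to be woken.
        cls.release(lock)
        cls.acquire(lock)
```

With a single registered thread, `tic()` degenerates to a no-op: the queue hands the caller its own lock straight back — which is why a node can be registered first (a bare `put`, as `ServerNode.start` does with `wake_other=0`) and only woken by a later `tic()`.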
class SerializedEventManager(EventManager):
_lock = None
_timeout = 0
@classmethod
def decorate(cls, func):
def decorator(*args, **kw):
try:
EventManager.__init__ = types.MethodType(
cls.__init__.im_func, None, EventManager)
return func(*args, **kw)
finally:
EventManager.__init__ = types.MethodType(
cls._super__init__.im_func, None, EventManager)
return wraps(func)(decorator)
_super__init__ = EventManager.__init__.im_func
def __init__(self):
cls = self.__class__
assert cls is EventManager
self.__class__ = SerializedEventManager
self._super__init__()
def _poll(self, timeout=1):
if self._pending_processing:
assert not timeout
elif 0 == self._timeout == timeout == Serialized.pending == len(
self.writer_set):
return
else:
if self.writer_set and Serialized.pending is not None:
Serialized.pending = 1
# Jump to another thread before polling, so that when a message is
# sent on the network, one can debug immediately the receiving part.
# XXX: Unfortunately, this means we have a useless full-cycle
# before the first message is sent.
# TODO: Detect where a message is sent to jump immediately to nodes
# that will do something.
Serialized.tic(self._lock)
if timeout != 0:
timeout = self._timeout
if timeout != 0 and Serialized.pending:
Serialized.pending = timeout = 0
EventManager._poll(self, timeout)
class ServerNode(object):
class __metaclass__(type):
def __init__(cls, name, bases, d):
type.__init__(cls, name, bases, d)
if object not in bases and threading.Thread not in cls.__mro__:
cls.__bases__ = bases + (threading.Thread,)
@SerializedEventManager.decorate
def __init__(self, cluster, address, **kw):
self._init_args = (cluster, address), dict(kw)
threading.Thread.__init__(self)
self.daemon = True
h, p = address
self.node_type = getattr(NodeTypes,
SERVER_TYPE[VIRTUAL_IP.index(h)].upper())
self.node_name = '%s_%u' % (self.node_type, p)
kw.update(getCluster=cluster.name, getBind=address,
getMasters=parseMasterList(cluster.master_nodes, address))
super(ServerNode, self).__init__(Mock(kw))
def resetNode(self):
assert not self.isAlive()
args, kw = self._init_args
kw['getUUID'] = self.uuid
self.__dict__.clear()
self.__init__(*args, **kw)
def start(self):
Serialized.pending = 1
self.em._lock = l = threading.Lock()
l.acquire()
Serialized.release(l, wake_other=0)
threading.Thread.start(self)
def run(self):
try:
Serialized.acquire(self.em._lock)
super(ServerNode, self).run()
finally:
self._afterRun()
neo.lib.logging.debug('stopping %r', self)
Serialized.background()
def _afterRun(self):
try:
self.listening_conn.close()
except AttributeError:
pass
def getListeningAddress(self):
try:
return self.listening_conn.getAddress()
except AttributeError:
raise ConnectorConnectionRefusedException
class AdminApplication(ServerNode, neo.admin.app.Application):
pass
class MasterApplication(ServerNode, neo.master.app.Application):
pass
class StorageApplication(ServerNode, neo.storage.app.Application):
def resetNode(self, clear_database=False):
self._init_args[1]['getReset'] = clear_database
dm = self.dm
super(StorageApplication, self).resetNode()
if dm and not clear_database:
self.dm = dm
def _afterRun(self):
super(StorageApplication, self)._afterRun()
try:
self.dm.close()
self.dm = None
except StandardError: # AttributeError & ProgrammingError
pass
def switchTables(self):
adapter = self._init_args[1]['getAdapter']
dm = self.dm
if adapter == 'BTree':
dm._obj, dm._tobj = dm._tobj, dm._obj
dm._trans, dm._ttrans = dm._ttrans, dm._trans
elif adapter == 'MySQL':
q = dm.query
dm.begin()
for table in ('trans', 'obj'):
q('RENAME TABLE %s to tmp' % table)
q('RENAME TABLE t%s to %s' % (table, table))
q('RENAME TABLE tmp to t%s' % table)
q('TRUNCATE obj_short')
dm.commit()
else:
assert False
class ClientApplication(neo.client.app.Application):
@SerializedEventManager.decorate
def __init__(self, cluster):
super(ClientApplication, self).__init__(
cluster.master_nodes, cluster.name)
self.em._lock = threading.Lock()
def setPoll(self, master=False):
if master:
self.em._timeout = 1
if not self.em._lock.acquire(0):
Serialized.background()
else:
Serialized.release(wake_other=0); Serialized.acquire()
self.em._timeout = 0
def __del__(self):
try:
super(ClientApplication, self).__del__()
finally:
Serialized.background()
close = __del__
class NeoCTL(neo.neoctl.app.NeoCTL):
@SerializedEventManager.decorate
def __init__(self, cluster, address=(getVirtualIp('admin'), 0)):
self._cluster = cluster
super(NeoCTL, self).__init__(address)
self.em._timeout = None
server = property(lambda self: self._cluster.resolv(self._server),
lambda self, address: setattr(self, '_server', address))
class LoggerThreadName(object):
def __init__(self, default='TEST'):
self.__default = default
def __getattr__(self, attr):
return getattr(str(self), attr)
def __str__(self):
try:
return threading.currentThread().node_name
except AttributeError:
return self.__default
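LoggerThreadName acts as a per-thread string: logging is configured once, but the value resolves to the *current* thread's `node_name` at the moment it is used. The idiom — delegate every attribute access to `str(self)` — can be exercised with a simplified copy of the class (same `'TEST'` default as above):

```python
import threading

class PerThreadName(object):
    """Looks like a str, but its value depends on the calling thread."""
    def __init__(self, default='TEST'):
        self._default = default

    def __getattr__(self, attr):
        # upper(), split(), ... all operate on the thread-local value.
        return getattr(str(self), attr)

    def __str__(self):
        try:
            return threading.current_thread().node_name
        except AttributeError:
            return self._default
```

`__getattr__` is only invoked for attributes missing from the instance, so `_default` itself never recurses; any thread that sets `node_name` on its own thread object sees its name, every other thread sees the default.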
class NEOCluster(object):
BaseConnection_checkTimeout = staticmethod(BaseConnection.checkTimeout)
SocketConnector_makeClientConnection = staticmethod(
SocketConnector.makeClientConnection)
SocketConnector_makeListeningConnection = staticmethod(
SocketConnector.makeListeningConnection)
SocketConnector_send = staticmethod(SocketConnector.send)
Storage__init__ = staticmethod(Storage.__init__)
_patched = threading.Lock()
def _patch(cluster):
cls = cluster.__class__
if not cls._patched.acquire(0):
raise RuntimeError("Can't run several cluster at the same time")
def makeClientConnection(self, addr):
try:
real_addr = cluster.resolv(addr)
return cls.SocketConnector_makeClientConnection(self, real_addr)
finally:
self.remote_addr = addr
def send(self, msg):
result = cls.SocketConnector_send(self, msg)
if Serialized.pending is not None:
Serialized.pending = 1
return result
# TODO: 'sleep' should 'tic' in a smart way, so that storages can be
# safely started even if the cluster isn't.
bootstrap.sleep = lambda seconds: None
BaseConnection.checkTimeout = lambda self, t: None
SocketConnector.makeClientConnection = makeClientConnection
SocketConnector.makeListeningConnection = lambda self, addr: \
cls.SocketConnector_makeListeningConnection(self, BIND)
SocketConnector.send = send
Storage.setupLog = lambda *args, **kw: None
@classmethod
def _unpatch(cls):
bootstrap.sleep = time.sleep
BaseConnection.checkTimeout = cls.BaseConnection_checkTimeout
SocketConnector.makeClientConnection = \
cls.SocketConnector_makeClientConnection
SocketConnector.makeListeningConnection = \
cls.SocketConnector_makeListeningConnection
SocketConnector.send = cls.SocketConnector_send
Storage.setupLog = setupLog
cls._patched.release()
def __init__(self, master_count=1, partitions=1, replicas=0,
adapter=os.getenv('NEO_TESTS_ADAPTER', 'BTree'),
storage_count=None, db_list=None, clear_databases=True,
db_user=DB_USER, db_password='', verbose=None):
if verbose is not None:
temp_dir = os.getenv('TEMP') or \
os.path.join(tempfile.gettempdir(), 'neo_tests')
os.path.exists(temp_dir) or os.makedirs(temp_dir)
log_file = tempfile.mkstemp('.log', '', temp_dir)[1]
print 'Logging to %r' % log_file
setupLog(LoggerThreadName(), log_file, verbose)
self.name = 'neo_%s' % random.randint(0, 100)
ip = getVirtualIp('master')
self.master_nodes = ' '.join('%s:%s' % (ip, i)
for i in xrange(master_count))
weak_self = weakref.proxy(self)
kw = dict(cluster=weak_self, getReplicas=replicas, getAdapter=adapter,
getPartitions=partitions, getReset=clear_databases)
self.master_list = [MasterApplication(address=(ip, i), **kw)
for i in xrange(master_count)]
ip = getVirtualIp('storage')
if db_list is None:
if storage_count is None:
storage_count = replicas + 1
db_list = ['%s%u' % (DB_PREFIX, i) for i in xrange(storage_count)]
setupMySQLdb(db_list, db_user, db_password, clear_databases)
db = '%s:%s@%%s' % (db_user, db_password)
self.storage_list = [StorageApplication(address=(ip, i),
getDatabase=db % x, **kw)
for i, x in enumerate(db_list)]
ip = getVirtualIp('admin')
self.admin_list = [AdminApplication(address=(ip, 0), **kw)]
self.client = ClientApplication(weak_self)
self.neoctl = NeoCTL(weak_self)
# A few shortcuts that work when there's only 1 master/storage/admin
@property
def master(self):
master, = self.master_list
return master
@property
def storage(self):
storage, = self.storage_list
return storage
@property
def admin(self):
admin, = self.admin_list
return admin
###
def resolv(self, addr):
host, port = addr
try:
attr = SERVER_TYPE[VIRTUAL_IP.index(host)] + '_list'
except ValueError:
return addr
return getattr(self, attr)[port].getListeningAddress()
def reset(self, clear_database=False):
for node_type in SERVER_TYPE:
kw = {}
if node_type == 'storage':
kw['clear_database'] = clear_database
for node in getattr(self, node_type + '_list'):
node.resetNode(**kw)
self.client = ClientApplication(self)
self.neoctl = NeoCTL(weakref.proxy(self))
def start(self, storage_list=None, fast_startup=True):
self._patch()
Serialized.init()
for node_type in 'master', 'admin':
for node in getattr(self, node_type + '_list'):
node.start()
self.tic()
if fast_startup:
self.neoctl.startCluster()
if storage_list is None:
storage_list = self.storage_list
for node in storage_list:
node.start()
self.tic()
if not fast_startup:
self.neoctl.startCluster()
self.tic()
assert self.neoctl.getClusterState() == ClusterStates.RUNNING
self.enableStorageList(storage_list)
def enableStorageList(self, storage_list):
self.neoctl.enableStorageList([x.uuid for x in storage_list])
self.tic()
for node in storage_list:
assert self.getNodeState(node) == NodeStates.RUNNING
@property
def db(self):
try:
return self._db
except AttributeError:
self._db = db = ZODB.DB(storage=self.getZODBStorage())
return db
def stop(self):
self.__dict__.pop('_db', self.client).close()
#self.neoctl.setClusterState(ClusterStates.STOPPING) # TODO
try:
Serialized.release(stop=1)
for node_type in SERVER_TYPE[::-1]:
for node in getattr(self, node_type + '_list'):
if node.isAlive():
node.join()
finally:
Serialized.acquire()
self._unpatch()
def tic(self, force=False):
if force:
Serialized.tic()
while Serialized.pending:
Serialized.tic()
def getNodeState(self, node):
uuid = node.uuid
for node in self.neoctl.getNodeList(node.node_type):
if node[2] == uuid:
return node[3]
def getOudatedCells(self):
return [cell for row in self.neoctl.getPartitionRowList()[1]
for cell in row[1]
if cell[1] == CellStates.OUT_OF_DATE]
def getZODBStorage(self, **kw):
# automatically put client in master mode
if self.client.em._timeout == 0:
self.client.setPoll(True)
return Storage.Storage(None, self.name, _app=self.client, **kw)
def getTransaction(self):
txn = transaction.TransactionManager()
return txn, self.db.open(transaction_manager=txn)
def __del__(self):
self.neoctl.close()
for node_type in 'admin', 'storage', 'master':
for node in getattr(self, node_type + '_list'):
node.close()
self.client.em.close()
class NEOThreadedTest(NeoTestBase):
def setupLog(self):
log_file = os.path.join(getTempDirectory(), self.id() + '.log')
setupLog(LoggerThreadName(), log_file, True)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/threaded/test.py 0000664 0000000 0000000 00000013333 11634614701 0025446 0 ustar 00root root 0000000 0000000 #
# Copyright (c) 2011 Nexedi SARL and Contributors. All Rights Reserved.
# Julien Muchembled
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from persistent import Persistent
from neo.lib.protocol import NodeStates, ZERO_TID
from neo.tests.threaded import NEOCluster, NEOThreadedTest
from neo.client.pool import CELL_CONNECTED, CELL_GOOD
class PCounter(Persistent):
value = 0
class PCounterWithResolution(PCounter):
def _p_resolveConflict(self, old, saved, new):
new['value'] += saved['value'] - old.get('value', 0)
return new
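`_p_resolveConflict` receives three object states — the snapshot the losing transaction started from (`old`), the state that won the race (`saved`), and the state it tried to write (`new`) — and merges them by re-applying the winner's delta. A standalone restatement with plain dicts shows the arithmetic relied on by the test below:

```python
def resolve_counter(old, saved, new):
    """Keep our write, then re-apply the concurrently committed increment."""
    new = dict(new)  # don't mutate the caller's state
    new['value'] += saved['value'] - old.get('value', 0)
    return new

# Two clients load value=0; one commits +1, the other tries to commit +2:
merged = resolve_counter({'value': 0}, {'value': 1}, {'value': 2})
# merged['value'] == 3: both increments survive
```

The `old.get('value', 0)` default covers the case where the base snapshot predates the attribute, exactly as in the class above.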
class Test(NEOThreadedTest):
def testConflictResolutionTriggered2(self):
""" Check that conflict resolution works """
cluster = NEOCluster()
cluster.start()
try:
# create the initial object
t, c = cluster.getTransaction()
c.root()['with_resolution'] = ob = PCounterWithResolution()
t.commit()
self.assertEqual(ob._p_changed, 0)
oid = ob._p_oid
tid1 = ob._p_serial
self.assertNotEqual(tid1, ZERO_TID)
del ob, t, c
# then check resolution
t1, c1 = cluster.getTransaction()
t2, c2 = cluster.getTransaction()
o1 = c1.root()['with_resolution']
o2 = c2.root()['with_resolution']
self.assertEqual(o1.value, 0)
self.assertEqual(o2.value, 0)
o1.value += 1
o2.value += 2
t1.commit()
self.assertEqual(o1._p_changed, 0)
tid2 = o1._p_serial
self.assertTrue(tid1 < tid2)
self.assertEqual(o1.value, 1)
self.assertEqual(o2.value, 2)
t2.commit()
self.assertEqual(o2._p_changed, None)
t1.begin()
t2.begin()
self.assertEqual(o2.value, 3)
self.assertEqual(o1.value, 3)
tid3 = o1._p_serial
self.assertTrue(tid2 < tid3)
self.assertEqual(tid3, o2._p_serial)
# check history
history = c1.db().history
self.assertEqual([x['tid'] for x in history(oid, size=1)], [tid3])
self.assertEqual([x['tid'] for x in history(oid, size=10)],
[tid3, tid2, tid1])
finally:
cluster.stop()
def test_notifyNodeInformation(self):
# translated from MasterNotificationsHandlerTests
# (neo.tests.client.testMasterHandler)
cluster = NEOCluster()
try:
cluster.start()
cluster.db # open DB
cluster.client.setPoll(0)
storage, = cluster.client.nm.getStorageList()
conn = storage.getConnection()
self.assertFalse(conn.isClosed())
getCellSortKey = cluster.client.cp.getCellSortKey
self.assertEqual(getCellSortKey(storage), CELL_CONNECTED)
cluster.neoctl.dropNode(cluster.storage.uuid)
self.assertFalse(cluster.client.nm.getStorageList())
self.assertTrue(conn.isClosed())
self.assertEqual(getCellSortKey(storage), CELL_GOOD)
# XXX: the test originally checked that 'unregister' method
# was called (even if it's useless in this case),
# but we would need an API to do that easily.
self.assertFalse(cluster.client.dispatcher.registered(conn))
finally:
cluster.stop()
def testRestartWithMissingStorage(self, fast_startup=False):
# translated from neo.tests.functional.testStorage.StorageTest
cluster = NEOCluster(replicas=1, partitions=10)
s1, s2 = cluster.storage_list
try:
cluster.start()
self.assertEqual([], cluster.getOudatedCells())
finally:
cluster.stop()
# restart it with one storage only
cluster.reset()
try:
cluster.start(storage_list=(s1,), fast_startup=fast_startup)
self.assertEqual(NodeStates.UNKNOWN, cluster.getNodeState(s2))
finally:
cluster.stop()
def testRestartWithMissingStorageFastStartup(self):
self.testRestartWithMissingStorage(True)
def testVerificationCommitUnfinishedTransactions(self, fast_startup=False):
""" Verification step should commit unfinished transactions """
# translated from neo.tests.functional.testCluster.ClusterTests
cluster = NEOCluster()
try:
cluster.start()
t, c = cluster.getTransaction()
c.root()[0] = 'ok'
t.commit()
finally:
cluster.stop()
cluster.reset()
# XXX: (obj|trans) become t(obj|trans)
cluster.storage.switchTables()
try:
cluster.start(fast_startup=fast_startup)
t, c = cluster.getTransaction()
            # transaction should be verified and committed
self.assertEqual(c.root()[0], 'ok')
finally:
cluster.stop()
def testVerificationCommitUnfinishedTransactionsFastStartup(self):
self.testVerificationCommitUnfinishedTransactions(True)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/ 0000775 0000000 0000000 00000000000 11634614701 0023270 5 ustar 00root root 0000000 0000000 neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/__init__.py 0000664 0000000 0000000 00000004171 11634614701 0025404 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import os
from neo.tests import DB_PREFIX
functional = int(os.getenv('NEO_TEST_ZODB_FUNCTIONAL', 0))
if functional:
from neo.tests.functional import NEOCluster, NEOFunctionalTest as TestCase
else:
from neo.tests.threaded import NEOCluster, NEOThreadedTest as TestCase
class ZODBTestCase(TestCase):
def setUp(self, cluster_kw={}):
super(ZODBTestCase, self).setUp()
storages = int(os.getenv('NEO_TEST_ZODB_STORAGES', 1))
kw = {
'master_count': int(os.getenv('NEO_TEST_ZODB_MASTERS', 1)),
'replicas': int(os.getenv('NEO_TEST_ZODB_REPLICAS', 0)),
'partitions': int(os.getenv('NEO_TEST_ZODB_PARTITIONS', 1)),
'db_list': ['%s%u' % (DB_PREFIX, i) for i in xrange(storages)],
}
kw.update(cluster_kw)
if functional:
kw['temp_dir'] = self.getTempDirectory()
self.neo = NEOCluster(**kw)
self.neo.start()
self._storage = self.neo.getZODBStorage()
def tearDown(self):
self._storage.cleanup()
self.neo.stop()
del self.neo, self._storage
super(ZODBTestCase, self).tearDown()
assertEquals = failUnlessEqual = TestCase.assertEqual
assertNotEquals = failIfEqual = TestCase.assertNotEqual
def open(self, read_only=False):
        # required for some tests (see PersistentTests), no-op for NEO?
self._storage._is_read_only = read_only
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testBasic.py 0000664 0000000 0000000 00000002115 11634614701 0025562 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.BasicStorage import BasicStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class BasicTests(ZODBTestCase, StorageTestBase, BasicStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(BasicTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testConflict.py 0000664 0000000 0000000 00000002161 11634614701 0026303 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.ConflictResolution import ConflictResolvingStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class ConflictTests(ZODBTestCase, StorageTestBase, ConflictResolvingStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(ConflictTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testHistory.py 0000664 0000000 0000000 00000002127 11634614701 0026205 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.HistoryStorage import HistoryStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class HistoryTests(ZODBTestCase, StorageTestBase, HistoryStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(HistoryTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testIterator.py 0000664 0000000 0000000 00000002274 11634614701 0026340 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.IteratorStorage import IteratorStorage
from ZODB.tests.IteratorStorage import ExtendedIteratorStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class IteratorTests(ZODBTestCase, StorageTestBase, IteratorStorage,
ExtendedIteratorStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(IteratorTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testMT.py 0000664 0000000 0000000 00000002076 11634614701 0025067 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.MTStorage import MTStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class MTTests(ZODBTestCase, StorageTestBase, MTStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(MTTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testPack.py 0000664 0000000 0000000 00000002653 11634614701 0025426 0 ustar 00root root 0000000 0000000
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
try:
from ZODB.tests.PackableStorage import PackableStorageWithOptionalGC
except ImportError:
from ZODB.tests.PackableStorage import PackableStorage as \
PackableStorageWithOptionalGC
from ZODB.tests.PackableStorage import PackableUndoStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class PackableTests(ZODBTestCase, StorageTestBase,
PackableStorageWithOptionalGC, PackableUndoStorage):
def setUp(self):
super(PackableTests, self).setUp(cluster_kw={'adapter': 'MySQL'})
if __name__ == "__main__":
suite = unittest.makeSuite(PackableTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testPersistent.py 0000664 0000000 0000000 00000002146 11634614701 0026705 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.PersistentStorage import PersistentStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class PersistentTests(ZODBTestCase, StorageTestBase, PersistentStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(PersistentTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testReadOnly.py 0000664 0000000 0000000 00000002134 11634614701 0026257 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.ReadOnlyStorage import ReadOnlyStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class ReadOnlyTests(ZODBTestCase, StorageTestBase, ReadOnlyStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(ReadOnlyTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testRecovery.py 0000664 0000000 0000000 00000003464 11634614701 0026347 0 ustar 00root root 0000000 0000000 #
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import os
import unittest
import ZODB
from ZODB.tests.RecoveryStorage import RecoveryStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.functional import NEOCluster
from neo.tests.zodb import ZODBTestCase
class RecoveryTests(ZODBTestCase, StorageTestBase, RecoveryStorage):
def setUp(self):
super(RecoveryTests, self).setUp()
dst_temp_dir = self.getTempDirectory() + '-dst'
if not os.path.exists(dst_temp_dir):
os.makedirs(dst_temp_dir)
self.neo_dst = NEOCluster(['test_neo1-dst'], partitions=1, replicas=0,
master_count=1, temp_dir=dst_temp_dir)
self.neo_dst.stop()
self.neo_dst.setupDB()
self.neo_dst.start()
self._dst = self.neo.getZODBStorage()
self._dst_db = ZODB.DB(self._dst)
def tearDown(self):
super(RecoveryTests, self).tearDown()
self._dst_db.close()
self._dst.cleanup()
self.neo_dst.stop()
if __name__ == "__main__":
suite = unittest.makeSuite(RecoveryTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testRevision.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.RevisionStorage import RevisionStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class RevisionTests(ZODBTestCase, StorageTestBase, RevisionStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(RevisionTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testSynchronization.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.StorageTestBase import StorageTestBase
from ZODB.tests.Synchronization import SynchronizedStorage
from neo.tests.zodb import ZODBTestCase
class SynchronizationTests(ZODBTestCase, StorageTestBase, SynchronizedStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(SynchronizationTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testUndo.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.StorageTestBase import StorageTestBase
from ZODB.tests.TransactionalUndoStorage import TransactionalUndoStorage
from ZODB.tests.ConflictResolution import ConflictResolvingTransUndoStorage
from neo.tests.zodb import ZODBTestCase
class UndoTests(ZODBTestCase, StorageTestBase, TransactionalUndoStorage,
ConflictResolvingTransUndoStorage):
pass
# Don't run this test: it cannot work with pipelined stores. It is not
# executed on ZEO either - but only because ZEO has no iterator, whereas
# NEO has one.
# Note that it would be possible to run this test on NEO with a simple fix:
# instead of expecting "store" to return the object's serial, it should
# just load the object after commit and keep its serial.
# Once the iterator is fully implemented in NEO, a fork of this test should
# be made with the above fix.
del TransactionalUndoStorage.checkTransactionalUndoIterator
if __name__ == "__main__":
suite = unittest.makeSuite(UndoTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testVersion.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests.VersionStorage import VersionStorage
from ZODB.tests.TransactionalUndoVersionStorage import \
TransactionalUndoVersionStorage
from ZODB.tests.StorageTestBase import StorageTestBase
from neo.tests.zodb import ZODBTestCase
class VersionTests(ZODBTestCase, StorageTestBase, VersionStorage,
TransactionalUndoVersionStorage):
pass
if __name__ == "__main__":
suite = unittest.makeSuite(VersionTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neo/tests/zodb/testZODB.py
#
# Copyright (C) 2009-2010 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
import unittest
from ZODB.tests import testZODB
import ZODB
from neo.tests.zodb import ZODBTestCase
class NEOZODBTests(ZODBTestCase, testZODB.ZODBTests):
def setUp(self):
super(NEOZODBTests, self).setUp()
self._db = ZODB.DB(self._storage)
def tearDown(self):
self._db.close()
super(NEOZODBTests, self).tearDown()
def checkMultipleUndoInOneTransaction(self):
# XXX: Upstream test accesses a persistent object outside a transaction
# (it should call transaction.begin() after the last commit)
# so disable our Connection.afterCompletion optimization.
# This should really be discussed on zodb-dev ML.
from ZODB.Connection import Connection
afterCompletion = Connection.__dict__['afterCompletion']
try:
Connection.afterCompletion = Connection.__dict__['newTransaction']
super(NEOZODBTests, self).checkMultipleUndoInOneTransaction()
finally:
Connection.afterCompletion = afterCompletion
if __name__ == "__main__":
suite = unittest.makeSuite(NEOZODBTests, 'check')
unittest.main(defaultTest='suite')
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neoadmin
#! /usr/bin/env python
#
# neoadmin - run an administrator node of NEO
#
# Copyright (C) 2009 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.scripts.neoadmin import main
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neoctl
#! /usr/bin/env python
#
# neoctl - command line interface to control a NEO cluster
#
# Copyright (C) 2009 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.scripts.neoctl import main
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neomaster
#! /usr/bin/env python
#
# neomaster - run a master node of NEO
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.scripts.neomaster import main
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neomigrate
#! /usr/bin/env python
#
# neomigrate - import/export data between NEO and a FileStorage
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.scripts.neomigrate import main
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/neostorage
#! /usr/bin/env python
#
# neostorage - run a storage node of NEO
#
# Copyright (C) 2006 Nexedi SA
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
from neo.scripts.neostorage import main
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/setup.py
"""Distributed, redundant and transactional storage for ZODB
"""
from setuptools import setup, find_packages
import os
classifiers = """\
Framework :: ZODB
Intended Audience :: Developers
License :: OSI Approved :: GNU General Public License (GPL)
Operating System :: POSIX :: Linux
Programming Language :: Python
Topic :: Database
Topic :: Software Development :: Libraries :: Python Modules
"""
if not os.path.exists('mock.py'):
import cStringIO, md5, urllib, zipfile
mock_py = zipfile.ZipFile(cStringIO.StringIO(urllib.urlopen(
'http://downloads.sf.net/sourceforge/python-mock/pythonmock-0.1.0.zip'
).read())).read('mock.py')
if md5.md5(mock_py).hexdigest() != '79f42f390678e5195d9ce4ae43bd18ec':
raise EnvironmentError("MD5 checksum mismatch downloading 'mock.py'")
open('mock.py', 'w').write(mock_py)
extras_require = {
'admin': [],
'client': ['ZODB3'], # ZODB3 >= 3.10
'ctl': [],
'master': [],
'storage-btree': ['ZODB3'],
'storage-mysqldb': ['MySQL-python'],
}
extras_require['tests'] = ['zope.testing', 'psutil',
'neoppod[%s]' % ', '.join(extras_require)]
setup(
name = 'neoppod',
version = '0.9',
description = __doc__.strip(),
author = 'NEOPPOD',
author_email = 'neo-dev@erp5.org',
url = 'http://www.neoppod.org/',
license = 'GPL 2+',
platforms = ["any"],
classifiers=classifiers.splitlines(),
long_description = ".. contents::\n\n" + open('README').read()
+ "\n" + open('CHANGES').read(),
packages = find_packages(),
py_modules = ['mock'],
entry_points = {
'console_scripts': [
# XXX: we'd like not to generate scripts for unwanted features
# (eg. we don't want neotestrunner if nothing depends on tests)
'neoadmin=neo.scripts.neoadmin:main',
'neoctl=neo.scripts.neoctl:main',
'neomaster=neo.scripts.neomaster:main',
'neomigrate=neo.scripts.neomigrate:main',
'neostorage=neo.scripts.neostorage:main',
'neotestrunner=neo.scripts.runner:main',
'neosimple=neo.scripts.simple:main',
'stat_zodb=neo.tests.stat_zodb:main',
],
},
# Raah!!! I wish I could write something like:
# install_requires = ['python>=2.5|ctypes'],
extras_require = extras_require,
package_data = {
'neo.client': [
'component.xml',
],
},
zip_safe = True,
)
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/matrix
#!/usr/bin/env python
import sys
import os
import math
import traceback
from time import time
from neo.tests import DB_PREFIX
from neo.tests.benchmark import BenchmarkRunner
from ZODB.FileStorage import FileStorage
class MatrixImportBenchmark(BenchmarkRunner):
error_log = ''
_size = None
def add_options(self, parser):
parser.add_option('-d', '--datafs')
parser.add_option('', '--min-storages', type='int', default=1)
parser.add_option('', '--max-storages', type='int', default=2)
parser.add_option('', '--min-replicas', type='int', default=0)
parser.add_option('', '--max-replicas', type='int', default=1)
parser.add_option('', '--threaded', action="store_true")
def load_options(self, options, args):
if options.datafs and not os.path.exists(options.datafs):
sys.exit('Missing or wrong data.fs argument')
return dict(
datafs = options.datafs,
min_s = options.min_storages,
max_s = options.max_storages,
min_r = options.min_replicas,
max_r = options.max_replicas,
threaded = options.threaded,
)
def start(self):
# build storage (logarithm) & replicas (linear) lists
min_s, max_s = self._config.min_s, self._config.max_s
min_r, max_r = self._config.min_r, self._config.max_r
min_s2 = int(math.log(min_s, 2))
max_s2 = int(math.log(max_s, 2))
storages = [2 ** x for x in range(min_s2, max_s2 + 1)]
if storages[0] < min_s:
storages[0] = min_s
if storages[-1] < max_s:
storages.append(max_s)
replicas = range(min_r, max_r + 1)
result_list = [self.runMatrix(storages, replicas)
for x in xrange(self._config.repeat)]
results = {}
for s in storages:
results[s] = z = {}
for r in replicas:
if r < s:
                    # keep the best (minimum) time over all repetitions
                    runs = [m[s][r] for m in result_list
                            if m[s][r] is not None]
                    z[r] = min(runs) if runs else None
return self.buildReport(storages, replicas, results)
def runMatrix(self, storages, replicas):
stats = {}
for s in storages:
stats[s] = z = {}
for r in replicas:
if r < s:
z[r] = self.runImport(1, s, r, 100)
return stats
def runImport(self, masters, storages, replicas, partitions):
datafs = self._config.datafs
if datafs:
dfs_storage = FileStorage(file_name=self._config.datafs)
else:
datafs = 'PROD1'
import random, neo.tests.stat_zodb
dfs_storage = getattr(neo.tests.stat_zodb, datafs)(
random.Random(0)).as_storage(5000)
print "Import of %s with m=%s, s=%s, r=%s, p=%s" % (
datafs, masters, storages, replicas, partitions)
if self._config.threaded:
from neo.tests.threaded import NEOCluster
else:
from neo.tests.functional import NEOCluster
neo = NEOCluster(
db_list=['%s_matrix_%u' % (DB_PREFIX, i) for i in xrange(storages)],
clear_databases=True,
master_count=masters,
partitions=partitions,
replicas=replicas,
verbose=self._config.verbose,
)
neo.start()
neo_storage = neo.getZODBStorage()
if not self._config.threaded:
assert len(neo.getStorageList()) == storages
neo.expectOudatedCells(number=0)
# import
start = time()
try:
try:
neo_storage.copyTransactionsFrom(dfs_storage)
end = time()
size = dfs_storage.getSize()
if self._size is None:
self._size = size
else:
assert self._size == size
return end - start
except:
traceback.print_exc()
self.error_log += "Import with m=%s, s=%s, r=%s, p=%s:" % (
masters, storages, replicas, partitions)
self.error_log += "\n%s\n" % ''.join(traceback.format_exc())
return None
finally:
neo.stop()
def buildReport(self, storages, replicas, results):
# draw an array with results
dfs_size = self._size
self.add_status('Input size',
dfs_size and '%-.1f MB' % (dfs_size / 1e6) or 'N/A')
fmt = '|' + '|'.join([' %8s '] * (len(replicas) + 1)) + '|\n'
sep = '+' + '+'.join(['-' * 12] * (len(replicas) + 1)) + '+\n'
report = sep
        report += fmt % tuple(['S\\R'] + replicas)
report += sep
failures = 0
speedlist = []
for s in storages:
values = []
assert s in results
for r in replicas:
if r in results[s]:
result = results[s][r]
if result is None:
values.append('FAIL')
failures += 1
else:
result = dfs_size / (result * 1e3)
values.append('%8.1f' % result)
speedlist.append(result)
else:
values.append('N/A')
report += fmt % (tuple([s] + values))
report += sep
report += self.error_log
if failures:
info = '%d failures' % (failures, )
else:
info = '%.1f KB/s' % (sum(speedlist) / len(speedlist))
return info, report
def main(args=None):
MatrixImportBenchmark().run()
if __name__ == "__main__":
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/perfs
#! /usr/bin/env python
import os
import sys
import platform
import datetime
import traceback
from time import time
from ZODB.FileStorage import FileStorage
from neo.tests import DB_PREFIX
from neo.tests.benchmark import BenchmarkRunner
from neo.tests.functional import NEOCluster
from neo.lib.profiling import PROFILING_ENABLED, profiler_decorator, \
profiler_report
class ImportBenchmark(BenchmarkRunner):
""" Test import of a datafs """
def add_options(self, parser):
parser.add_option('-d', '--datafs')
parser.add_option('-m', '--masters')
parser.add_option('-s', '--storages')
parser.add_option('-p', '--partitions')
parser.add_option('-r', '--replicas')
def load_options(self, options, args):
if options.datafs and not os.path.exists(options.datafs):
sys.exit('Missing or wrong data.fs argument')
return dict(
datafs = options.datafs,
masters = int(options.masters or 1),
storages = int(options.storages or 1),
partitions = int(options.partitions or 10),
replicas = int(options.replicas or 0),
)
def start(self):
config = self._config
# start neo
neo = NEOCluster(
db_list=['%s_perfs_%u' % (DB_PREFIX, i)
for i in xrange(config.storages)],
clear_databases=True,
partitions=config.partitions,
replicas=config.replicas,
master_count=config.masters,
verbose=False,
)
# import datafs
neo.start()
try:
try:
return self.buildReport(*self.runImport(neo))
except:
summary = 'Perf : import failed'
report = ''.join(traceback.format_exc())
return summary, report
finally:
neo.stop()
def runImport(self, neo):
def counter(wrapped, d):
@profiler_decorator
def wrapper(*args, **kw):
# count number of tick per second
t = int(time())
d.setdefault(t, 0)
d[t] += 1
# call original method
wrapped(*args, **kw)
return wrapper
# open storages clients
datafs = self._config.datafs
neo_storage = neo.getZODBStorage()
if datafs:
dfs_storage = FileStorage(file_name=datafs)
else:
from neo.tests.stat_zodb import PROD1
from random import Random
dfs_storage = PROD1(Random(0)).as_storage(10000)
# monkey patch storage
txn_dict, obj_dict = {}, {}
neo_storage.app.tpc_begin = counter(neo_storage.app.tpc_begin, txn_dict)
neo_storage.app.store = counter(neo_storage.app.store, obj_dict)
# run import
start = time()
        # copyTransactionsFrom() returns nothing useful here; the per-second
        # counts are collected by the patched tpc_begin/store methods above.
        neo_storage.copyTransactionsFrom(dfs_storage)
elapsed = time() - start
# return stats
stats = {
'Transactions': txn_dict.values(),
'Objects': obj_dict.values(),
}
return (dfs_storage.getSize(), elapsed, stats)
def buildReport(self, dfs_size, elapsed, stats):
""" build a report for the given import data """
config = self._config
        dfs_size /= 1e3             # bytes -> KB
        size = dfs_size / 1e3       # KB -> MB
        speed = dfs_size / elapsed  # KB/s
# configuration
self.add_status('Masters', config.masters)
self.add_status('Storages', config.storages)
self.add_status('Replicas', config.replicas)
self.add_status('Partitions', config.partitions)
# results
self.add_status('Input size', '%-.1f MB' % size)
self.add_status('Import duration', '%-d secs' % elapsed)
self.add_status('Average speed', '%-.1f KB/s' % speed)
# stats on objects and transactions
pat = '%19s | %8s | %5s | %5s | %5s \n'
sep = '%19s+%8s+%5s+%5s+%5s\n'
sep %= ('-' * 20, '-' * 10) + ('-' * 7, ) * 3
report = pat % ('', ' num ', 'min/s', 'avg/s', 'max/s')
for k, v in stats.items():
report += sep
s = sum(v)
report += pat % (k, s, min(v), s / len(v), max(v))
report += sep
# build summary
summary = 'Perf : %.1f KB/s (%.1f MB)' % (speed, size)
return (summary, report)
def main(args=None):
ImportBenchmark().run()
if PROFILING_ENABLED:
print profiler_report()
if __name__ == "__main__":
main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/pylintrc
[MASTER]
# neo/protocol.py does __global__ magic.
#init-hook="from neo import protocol"
# Don't validate tests, they must be rewritten anyway.
ignore=tests
[MESSAGES CONTROL]
# C0111: disable "no docstring" for the moment
# C0301: disable "Line too long" for the moment
# R0201: disable "Method could be a function"
disable-msg=C0111,C0301,R0201
[DESIGN]
# Some classes are just beautiful when defining only operators & such.
min-public-methods=0
# Handler classes need to export many methods. We do define a complex API.
max-public-methods=100
# Handler methods need a big number of parameters.
max-args=10
[BASIC]
# Inspired by Debian's /usr/share/doc/pylint/examples/pylintrc_camelcase
module-rgx=(([a-z][a-z0-9]*)|([A-Z][a-zA-Z0-9]+))$
class-rgx=[A-Z][a-zA-Z0-9]+$
function-rgx=((_+|[a-z]))(([a-zA-Z0-9]*)|([a-z0-9_]*))$
method-rgx=((((_+|[a-z]))(([a-zA-Z0-9]*)|([a-z0-9_]*)))|(__.*__))$
argument-rgx=[a-z][a-z0-9_]*$
# variables can be:
# - variables ([a-z][a-z0-9_]*$)
# - method aliases (inner loop optimisation)
variable-rgx=(([a-z][a-z0-9_]*)|(((((_+|[a-z]))(([a-zA-Z0-9]*)|([a-z0-9_]*)))|(__.*__))))$
attr-rgx=[a-z_][a-z0-9_]*$
# Consts (as detected by pylint) can be:
# - functions
# - class aliases (class Bar: pass; Foo = Bar)
# - decorator functions
# - real consts
# For the moment, accept any name.
const-rgx=.*
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/replication
#! /usr/bin/env python
import sys
import time
import traceback
import transaction
from persistent import Persistent
from ZODB.tests.StorageTestBase import zodb_pickle
from neo.lib.util import p64
from neo.lib.protocol import CellStates
from neo.tests import DB_PREFIX
from neo.tests.benchmark import BenchmarkRunner
from neo.tests.functional import NEOCluster
PARTITIONS = 16
TRANSACTIONS = 1024
OBJECTS = 1024
REVISIONS = 4
OBJECT_SIZE = 1024
CUT_AT = 0
def humanize(size):
    units = ['%.2f KB', '%.2f MB', '%.2f GB']
unit = '%d bytes'
while size >= 1024 and units:
size /= 1024.0
unit, units = units[0], units[1:]
return unit % size
class DummyObject(Persistent):
    def __init__(self, data):
        # store the payload so that the pickle actually has the requested size
        self._data = data
class ReplicationBenchmark(BenchmarkRunner):
""" Test replication process """
def add_options(self, parser):
add_option = parser.add_option
add_option('', '--transactions', help="Total number of transactions")
add_option('', '--objects', help="Total number of objects")
add_option('', '--revisions', help="Number of revisions per object")
add_option('', '--partitions', help="Number of partition")
add_option('', '--object-size', help="Size of an object revision")
add_option('', '--cut-at', help="Populate the destination up to this %")
def load_options(self, options, args):
transactions = int(options.transactions or TRANSACTIONS)
objects = int(options.objects or OBJECTS)
revisions = int(options.revisions or REVISIONS)
if (objects * revisions) % transactions != 0:
sys.exit('Invalid parameters (need multiples)')
return dict(
partitions = int(options.partitions or PARTITIONS),
transactions = transactions,
objects = objects,
revisions = revisions,
object_size = int(options.object_size or OBJECT_SIZE),
cut_at = int(options.cut_at or CUT_AT),
)
def time_it(self, method, *args, **kw):
start = time.time()
method(*args, **kw)
return time.time() - start
def start(self):
config = self._config
# build a neo
neo = NEOCluster(
db_list=['%s_replication_%u' % (DB_PREFIX, i) for i in xrange(2)],
clear_databases=True,
partitions=config.partitions,
replicas=1,
master_count=1,
verbose=False,
)
neo.start()
p_time = r_time = None
content = ''
try:
try:
p_time = self.time_it(self.populate, neo)
neo.expectOudatedCells(self._config.partitions)
storage = neo.getStorageProcessList()[-1]
storage.start()
neo.expectRunning(storage, delay=0.1)
print "Source storage populated in %.3f secs" % p_time
r_time = self.time_it(self.replicate, neo) + 0.1
except Exception:
content = ''.join(traceback.format_exc())
finally:
neo.stop()
return self.buildReport(p_time, r_time), content
    def replicate(self, neo):
        def number_of_outdated_cells():
            row_list = neo.neoctl.getPartitionRowList()[1]
            number_of_outdated = 0
            for row in row_list:
                for cell in row[1]:
                    if cell[1] == CellStates.OUT_OF_DATE:
                        number_of_outdated += 1
            return number_of_outdated
        end_time = time.time() + 3600
        while time.time() <= end_time and number_of_outdated_cells() > 0:
            time.sleep(1)
        if number_of_outdated_cells() > 0:
            raise Exception('Replication takes too long')
def buildReport(self, p_time, r_time):
add_status = self.add_status
cut_at = self._config.cut_at
objects = self._config.objects
revisions = self._config.revisions
object_size = self._config.object_size
partitions = self._config.partitions
objects_revisions = revisions * objects
objects_space = objects_revisions * object_size
add_status('Partitions', self._config.partitions)
add_status('Transactions', self._config.transactions)
add_status('Objects', objects)
add_status('Revisions', revisions)
add_status('Cut at', '%d%%' % cut_at)
add_status('Object size', humanize(object_size))
add_status('Objects space', humanize(objects_space))
if p_time is None:
return 'Populate failed'
add_status('Population time', '%.3f secs' % p_time)
if r_time is None:
return 'Replication failed'
bandwidth = objects_space / r_time
add_status('Replication time', '%.3f secs' % r_time)
add_status('Time per partition', '%.3f secs' % (r_time / partitions))
add_status('Time per object', '%.3f secs' % (r_time / objects_revisions))
add_status('Global bandwidth', '%s/sec' % humanize(bandwidth))
summary = "%d%% of %s replicated at %s/sec" % (100 - cut_at,
humanize(objects_space), humanize(bandwidth))
return summary
def populate(self, neo):
print "Start populate"
db, conn = neo.getZODBConnection(compress=False)
storage = conn._storage
cut_at = self._config.cut_at
objects = self._config.objects
transactions = self._config.transactions
revisions = self._config.revisions
objects_turn = objects / transactions
objects_per_transaction = (objects * revisions) / transactions
objects_revisions = objects * revisions
base_oid = 1
data = zodb_pickle(DummyObject("-" * self._config.object_size))
prev = p64(0)
progress = 0
cutted = False
for tidx in xrange(transactions):
if not cutted and (100 * progress) / objects_revisions == cut_at:
print "Cut at %d%%" % (cut_at, )
neo.getStorageProcessList()[-1].stop()
cutted = True
txn = transaction.Transaction()
txn.description = "Transaction %s" % tidx
# print txn.description
storage.tpc_begin(txn)
for oidx in xrange(objects_per_transaction):
progress += 1
oid = base_oid + oidx
storage.store(p64(oid), prev, data, '', txn)
# print " OID %d" % oid
storage.tpc_vote(txn)
prev = storage.tpc_finish(txn)
if tidx % objects_turn == 1:
base_oid += objects_per_transaction
if not cutted:
assert cut_at == 100
neo.getStorageProcessList()[-1].stop()
def main(args=None):
    ReplicationBenchmark().run()

if __name__ == "__main__":
    main()
neoppod-cbb233f275f2d7e8bcf5c5a49a42b47d8c252e8c/tools/test_bot 0000775 0000000 0000000 00000006172 11634614701 0023326 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
import os, subprocess, sys, time

def clean():
    for path, dir_list, file_list in os.walk('.'):
        for file in file_list:
            # delete *.pyc files so that deleted/moved files cannot be imported
            if file[-4:] in ('.pyc', '.pyo'):
                os.remove(os.path.join(path, file))
class GitError(EnvironmentError):

    def __init__(self, err, out, returncode):
        EnvironmentError.__init__(self, err)
        self.stdout = out
        self.returncode = returncode

def _git(*args, **kw):
    p = subprocess.Popen(('git',) + args, **kw)
    out, err = p.communicate()
    if p.returncode:
        raise GitError(err, out, p.returncode)
    return out

def git(*args, **kw):
    out = _git(stdout=subprocess.PIPE, stderr=subprocess.PIPE, *args, **kw)
    return out.strip()

def getRevision(*path):
    return git('log', '-1', '--format=%H', '--', *path)
def main():
    if 'LANG' in os.environ:
        del os.environ['LANG']
    os.environ.setdefault('NEO_TEST_ZODB_FUNCTIONAL', '1')
    # Skip leading '--' options, which are forwarded to the test runners:
    # '--opt=value' takes 1 argument slot, '--opt value' takes 2.
    arg_count = 1
    while arg_count < len(sys.argv):
        arg = sys.argv[arg_count]
        if arg[:2] != '--':
            break
        arg_count += '=' in arg and 1 or 2
    branch = git('rev-parse', '--abbrev-ref', 'HEAD')
    test_bot = os.path.realpath(__file__).split(os.getcwd())[1][1:]
    test_bot_revision = getRevision(test_bot)
    revision = 0
    clean()
    delay = None
    while True:
        # time.sleep returns None, so this sleeps 1800s between iterations
        # (and not at all on the first one).
        delay = delay and time.sleep(delay) or 1800
        old_revision = revision
        try:
            _git('fetch')
            _git('reset', '--merge', '@{u}')
        except GitError, e:
            continue
        revision = getRevision()
        if revision == old_revision:
            continue
        # Re-exec the bot itself if its own source changed.
        if test_bot_revision != getRevision(test_bot):
            os.execvp(sys.argv[0], sys.argv)
        delay = None
        for test_home in sys.argv[arg_count:]:
            test_home, tasks = test_home.rsplit('=', 1)
            tests = ''.join(x for x in tasks if x in 'fuz')
            bin = os.path.join(test_home, 'bin')
            if not subprocess.call((os.path.join(bin, 'buildout'), '-v'),
                                   cwd=test_home):
                title = '[%s:%s-g%s:%s]' % (branch,
                    git('rev-list', '--topo-order', '--count', revision),
                    revision[:7], os.path.basename(test_home))
                if tests:
                    subprocess.call([os.path.join(bin, 'neotestrunner'),
                        '-' + tests, '--title',
                        'NEO tests ' + title,
                        ] + sys.argv[1:arg_count])
                if 'm' in tasks:
                    subprocess.call([os.path.join(bin, 'python'),
                        'tools/matrix', '--repeat=2',
                        '--min-storages=1', '--max-storages=24',
                        '--min-replicas=0', '--max-replicas=3',
                        '--title', 'Matrix ' + title,
                        ] + sys.argv[1:arg_count])
        clean()
if __name__ == '__main__':
    sys.exit(main())