Commit ae181dbd authored by Vincent Pelletier

Done.

git-svn-id: https://svn.erp5.org/repos/neo/trunk@2542 71dcc9de-d417-0410-9af5-da40c76e7ee4
parent 9609d1a1
@@ -11,9 +11,6 @@ RC = Release Critical (for next release)
RC - Clarify the meaning of cell states
- Add docstrings (think of doctests)
Tests
RC - Write ZODB-API-level tests
Code
Code changes often impact more than just one node. They are categorised by
@@ -79,9 +76,6 @@ RC - Review output of pylint (CODE)
partition table changes be broadcast? (BANDWIDTH, SPEED)
- Review PENDING/HIDDEN/SHUTDOWN states; don't use notifyNodeInformation()
to do a state switch, use an exception-based mechanism? (CODE)
- Clarify big packet handling: should they be split at the connection
level, at the application level, or via the ask/send/answer scheme? It is
currently inconsistent, especially for the ask/answer/send partition table
packets.
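A minimal sketch of application-level splitting (MAX_CHUNK and
send_packet are hypothetical names, not NEO's actual API):

    MAX_CHUNK = 0x100000  # arbitrary 1 MiB bound per packet

    def send_chunked(conn, payload):
        # Emit bounded packets; the receiver reassembles until the last
        # chunk (more=False) arrives.
        for off in range(0, len(payload), MAX_CHUNK):
            conn.send_packet(payload[off:off + MAX_CHUNK],
                             more=off + MAX_CHUNK < len(payload))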
- Split protocol.py into a 'protocol' module
- Review handler split (CODE)
The current handler split is the result of small incremental changes. A
@@ -95,20 +89,11 @@ RC - Review output of pylint (CODE)
- Consider replacing the setNodeState admin packet with one packet per
action (e.g. dropNode), to reduce packet-processing complexity and to
prevent harmful actions such as setting a node to the TEMPORARILY_DOWN
state.
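A minimal sketch of the per-action idea (hypothetical classes, not the
current protocol): the action is encoded in the packet type, so a handler
can never be asked to apply an arbitrary target state.

    class DropNode(object):
        """Ask the master to forget a node entirely."""
        def __init__(self, uuid):
            self.uuid = uuid

    class ShutdownNode(object):
        """Ask a node to stop cleanly."""
        def __init__(self, uuid):
            self.uuid = uuid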
- Consider processing writable events in the event.poll() method, to
ensure that pending outgoing data is sent as soon as the network is
ready, instead of waiting for an incoming packet to trigger the poll()
system call.
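A minimal sketch with select(), assuming connection objects expose
fileno(), pending_output(), flush_output() and read_packet() (all
hypothetical names):

    import select

    def poll(connections, timeout):
        readers = list(connections)
        writers = [c for c in connections if c.pending_output()]
        r, w, _ = select.select(readers, writers, [], timeout)
        for conn in w:
            conn.flush_output()  # send queued bytes now that we can
        for conn in r:
            conn.read_packet()   # handle incoming data as before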
- Review node notifications. E.g. a storage node does not need to be
notified of new clients, only when one is lost.
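A hypothetical filter expressing that rule (node types and states as
plain strings for illustration):

    def should_notify(receiver_type, changed_type, new_state):
        # Storage nodes only care about clients that go away.
        if receiver_type == 'storage' and changed_type == 'client':
            return new_state in ('TEMPORARILY_DOWN', 'DOWN')
        return True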
Storage
- Use Kyoto Cabinet instead of a stand-alone MySQL server.
- Make replication work even in non-operational cluster state
(HIGH AVAILABILITY)
When a master decides a partition change that triggers replication,
replication should happen independently of the cluster state. (Maybe we
still need a primary master, to avoid replicating from an outdated
partition table setup.)
- Notify master when storage becomes available for clients (LATENCY)
Currently, storage presence is broadcast to client nodes too early, as
the storage node would refuse them until it has only up-to-date data (not
@@ -214,7 +199,6 @@ RC - Review output of pylint (CODE)
- Discuss dead storage notification. If a client fails to connect to a
storage node supposedly in the running state, it should notify the master,
which checks whether that node is really up or not.
- Cache for loadSerial/loadBefore
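A minimal sketch for loadSerial, keyed by (oid, serial); a committed
historical record never changes, so entries only need eviction, not
invalidation (loadBefore would additionally have to track the end of
each record's validity). Class name and default size are illustrative:

    from collections import OrderedDict

    class SerialCache(object):
        def __init__(self, size=1000):
            self._size = size
            self._data = OrderedDict()

        def get(self, oid, serial):
            try:
                value = self._data.pop((oid, serial))
            except KeyError:
                return None
            self._data[(oid, serial)] = value  # refresh LRU position
            return value

        def put(self, oid, serial, value):
            self._data[(oid, serial)] = value
            if len(self._data) > self._size:
                self._data.popitem(last=False)  # evict oldest entry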
- Implement restore() ZODB API method to bypass consistency checks during
imports.
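For reference, a sketch with the signature ZODB's BaseStorage defines
(to be checked against the ZODB version in use):

    def restore(self, oid, serial, data, version, prev_txn, transaction):
        # Write the record with the caller-supplied serial, skipping the
        # usual conflict detection (called between tpc_begin and
        # tpc_vote, like store).
        ...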
- tpc_finish failures (FUNCTIONALITY)
@@ -234,12 +218,6 @@ RC - Review output of pylint (CODE)
- Choose how to compute the storage size
- Make the storage check whether the OID matches its partitions during a store
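A sketch of that check, assuming the usual NEO mapping of an 8-byte OID
to a partition (big-endian integer modulo the partition count); names
are illustrative:

    from struct import unpack

    def check_assigned(oid, num_partitions, assigned):
        partition = unpack('!Q', oid)[0] % num_partitions
        if partition not in assigned:
            raise ValueError('OID %r not in assigned partitions' % (oid,))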
- Send notifications when a storage node is lost
- When importing data, objects with non-allocated OIDs are stored. The
storage can detect this and could notify the master not to allocate lower
OIDs. But during import, each stored object would trigger this
notification, which may cause a big network overhead. It would be better
to refuse any client connection, and thus any OID allocation, during
import. It may be interesting to create a new stage for the cluster
startup... to be discussed.
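A hypothetical sketch of the single-notification alternative: refuse
client connections for the whole import, track the highest OID stored,
and tell the master once at the end (store_imported and notify_last_oid
are made-up names):

    def import_objects(storage, master, objects):
        highest = None
        for oid, data in objects:
            storage.store_imported(oid, data)
            if highest is None or oid > highest:
                highest = oid
        master.notify_last_oid(highest)  # one message, not one per object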
- Simple deployment solution, based on embedded database, integrated master
and storage node that works out of the box
- Simple import/export solution that generates SQL/data.fs.