RC = Release Critical (for next release)

Documentation

    - Clarify the meaning of node states, and consider renaming them in the
      code. Ideas:
        TEMPORARILY_DOWN becomes UNAVAILABLE
        BROKEN is removed ?
    - Clarify the use of each error code:
      - NOT_READY removed (connection kept open until ready)
      - Split PROTOCOL_ERROR (BAD IDENTIFICATION, ...)
RC  - Clarify the meaning of cell states
    - Add docstrings (think of doctests)

Tests

RC  - Write ZODB-API-level tests

Code

    Code changes often impact more than just one node. They are categorised
    by the node where the most important changes are needed.

  General

RC  - Review XXX in the code (CODE)
RC  - Review TODO in the code (CODE)
RC  - Review output of pylint (CODE)
    - Keep-alive (HIGH AVAILABILITY)
      Consider the need to implement a keep-alive system (packets sent
      automatically when there is no activity on the connection for a period
      of time).
    - Factorise packet data when sending partition table cells (BANDWIDTH)
      Currently, each cell in a partition table update contains the UUIDs of
      all involved nodes. This must be changed to a correspondence table
      using shorter keys (sent in the packet) to avoid repeating the same
      UUIDs many times.
    - Consider using multicast for cluster-wide notifications (BANDWIDTH)
      Currently, multi-receiver notifications are sent in unicast to each
      receiver. Multicast should be used.
    - Remove sleeps (LATENCY, CPU WASTE)
      The code still contains many delays (explicit sleeps or polling
      timeouts). They must be made either infinite (sleep until some
      condition becomes true, without waking up needlessly in the meantime)
      or null (don't wait at all).
    - Implement delayed connection acceptance.
      Currently, any node that connects too early to another node that is
      busy for some reason is immediately rejected with the 'not ready'
      error code. This should be replaced by a queue in the listening node
      that keeps a pool of nodes to be accepted later, when the conditions
      are satisfied. This is mainly the case for:
      - Clients rejected before the cluster is operational
      - Empty storages rejected during the recovery process
      Masters involved in the election process should still reject any
      connection, as the primary master is still unknown.
    - Connections must support 2 simultaneous handlers (CODE)
      Connections currently define only one handler, which is enough for
      mono-threaded code. But when using multi-threaded code, there are 2
      possible handlers involved in a packet reception:
      - The first one handles notifications only (nothing special to do
        regarding multithreading)
      - The second one handles expected messages (such a message must be
        directed to the right thread)
      It must be possible to set the second handler on the connection when
      that connection is thread-safe (MT version of the connection classes).
      Also, the code that detects whether a response is expected or not must
      be genericised and moved out of handlers. (See the dispatcher sketch
      below.)
    - Pack (FEATURE)
    - Control that the client processed all invalidations before starting a
      transaction (CONSISTENCY)
      If a client starts a transaction before it has received an
      invalidation message caused by a committed transaction, it will use
      outdated data. This is a bug known in ZEO.
    - Factorise node initialisation for admin, client and storage (CODE)
      The same code to ask/receive the node list and the partition table
      exists in too many places.
    - Clarify which handler methods to call when a connection is accepted
      from a listening connection and when the remote node is identified
      (cf. neo/bootstrap.py).
    - Choose how to handle a storage integrity verification when it comes
      back.
      Should it go through the replication process and the verification
      stage, with or without unfinished transactions? Do its cells have to
      be set as outdated, and if so, should the partition table changes be
      broadcast? (BANDWIDTH, SPEED)
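    Dispatcher sketch for the "2 simultaneous handlers" item above: a
    minimal illustration (Python 2 style) of how expected answers could be
    routed to the asking thread while notifications keep a single shared
    handler. The names used here (Dispatcher, expectAnswer, packetReceived,
    getId) are assumptions made for the example, not existing code.

        import threading, Queue

        class Dispatcher(object):
            """Route each received packet either to the notification
            handler or to the thread waiting for this answer."""

            def __init__(self, notification_handler):
                self.notification_handler = notification_handler
                self.lock = threading.Lock()
                self.waiting = {}  # msg_id -> Queue of the asking thread

            def expectAnswer(self, msg_id):
                # Called by the asking thread just before sending a request.
                queue = Queue.Queue()
                self.lock.acquire()
                try:
                    self.waiting[msg_id] = queue
                finally:
                    self.lock.release()
                return queue

            def dispatch(self, conn, packet):
                # Called by the polling thread for every received packet.
                self.lock.acquire()
                try:
                    queue = self.waiting.pop(packet.getId(), None)
                finally:
                    self.lock.release()
                if queue is None:
                    # Plain notification: single handler, no thread issue.
                    self.notification_handler.packetReceived(conn, packet)
                else:
                    # Expected answer: hand it over to the right thread.
                    queue.put(packet)

    An asking thread would call expectAnswer() with the request's message
    id, send the request, then block on queue.get(); the polling thread
    calls dispatch() for every incoming packet.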
    - Review the PENDING/HIDDEN/SHUTDOWN states; don't use
      notifyNodeInformation() to do a state switch, use an exception-based
      mechanism? (CODE)
    - Clarify big packet handling: is it needed to split them at the
      connection level or at the application level, and should the
      ask/send/answer scheme be used? It is currently not consistent,
      essentially with ask/answer/send of the partition table.
    - Split protocol.py into a 'protocol' module
    - Review the handler split (CODE)
      The current handler split is the result of small incremental changes.
      A global review is required to make them square.
    - Make handler instances become singletons (SPEED, MEMORY)
      In some places handlers are instantiated outside of App.__init__. As a
      handler is completely re-entrant (no modifiable properties), it can
      and should be made a singleton: this saves the CPU time needed to
      instantiate all the copies (often when a connection is established)
      and the memory used by each copy.
    - Consider replacing the setNodeState admin packet by one packet per
      action, like dropNode, to reduce packet processing complexity and
      prevent bad actions such as setting a node to the TEMPORARILY_DOWN
      state.
    - Consider processing writable events in the event.poll() method, to
      ensure that pending outgoing data are sent as soon as the network is
      ready, instead of waiting for an incoming packet to trigger the poll()
      system call.
    - Allow daemonizing NEO processes; re-use code from TIDStorage and
      support start/stop/restart/status commands.

  Storage

    - Implement incremental storage verification (BANDWIDTH)
      When a partition cell is in the out-of-date state, the entire
      transaction history is checked. This is because there might be gaps in
      the cell's tid history, as an out-of-date node is writable (although
      non-readable). An incremental mechanism should be used to only check
      transactions past a certain TID known to have no gap.
    - Use an embedded MySQL database instead of a stand-alone MySQL server
      (LATENCY) (to be discussed)
    - Make replication work even in a non-operational cluster state
      (HIGH AVAILABILITY)
      When the master decides a partition change that triggers replication,
      replication should happen independently of the cluster state. (Maybe
      we still need a primary master, to avoid replicating from an outdated
      partition table setup.)
    - Close connections to other storage nodes (SYSTEM RESOURCE USAGE)
      When a replication finishes, the connection is currently not closed.
      It should be closed (possibly asynchronously, and possibly by
      detecting that the connection is idle - similar to the keep-alive
      principle).
    - Notify the master when a storage becomes available for clients
      (LATENCY)
      Currently, storage presence is broadcast to client nodes too early, as
      the storage node would refuse them until it has only up-to-date data
      (not only up-to-date cells, but also a partition table and node
      states).
    - Create a specialised PartitionTable that knows the database and the
      replicator, to remove duplicates and move logic out of handlers (CODE)
    - Consider inserting multiple objects at a time in the database, taking
      care of the maximum allowed SQL request size (SPEED)
    - Prevent SQL injection: escape() from the MySQLdb API is not
      sufficient, consider using query(request, args) instead of
      query(request % args) (see the sketch below)
    - Create database adapters for other RDBMS (SQLite, PostgreSQL)
    - Fix __undoLog when there are out-of-date cells: there is a busy loop
      because the client expects more answers than the available number of
      storage nodes.
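    Sketch for the SQL injection item above, showing the parameterised
    DB-API form cursor.execute(statement, args) next to the current
    string-interpolation style. The 'neo' database name and the queried
    table/columns are assumptions made for the example, not necessarily the
    real schema.

        import MySQLdb

        conn = MySQLdb.connect(db='neo')
        cursor = conn.cursor()
        oid, serial = 42, 1234

        # Current style: values are interpolated into the SQL text, so each
        # value must be escaped by hand and any mistake is an injection.
        cursor.execute("SELECT * FROM obj WHERE oid = %d AND serial = %d"
                       % (oid, serial))

        # Parameterised style: the driver quotes the arguments itself.
        cursor.execute("SELECT * FROM obj WHERE oid = %s AND serial = %s",
                       (oid, serial))

    A query(request, args) helper in the database adapter could wrap the
    second form, so callers never build SQL strings by hand.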
    - Improve the replication process (BANDWIDTH)
      The current implementation replicates objects this way (for a given
      TID):
        S1 > S2 : Ask for a range of OIDs
        S1 < S2 : Answer the range of OIDs
        For each OID:
          S1 > S2 : Ask for a range of the object history
          S1 < S2 : Answer the object history
          For each missing version of the object:
            S1 > S2 : Ask for the object data
            S1 < S2 : Answer the object data
      Proposal (just to keep the basics in mind):
        S1 > S2 : Send its object state list, with the last serial for each
                  OID
        S1 < S2 : Answer object data for the latest state of each object
      Or something like that; the idea is to say what we have instead of
      checking what we don't have. (See the sketch further below.)

  Master

    - Master node data redundancy (HIGH AVAILABILITY)
      Secondary master nodes should replicate primary master data (i.e. the
      primary master should inform them of such changes). This data takes
      too long to extract from storage nodes, and losing it increases the
      risk of starting from underestimated values. This risk is (currently)
      unavoidable when all nodes stop running, but this case must be
      avoided.
    - Don't reject peers during startup phases (STARTUP LATENCY)
      When (for example) a client sends a RequestNodeIdentification to the
      primary master node while the cluster is not yet operational, the
      primary master should postpone the node acceptance until the cluster
      is operational, instead of closing the connection immediately. This
      would avoid the need to poll the master to know when it is ready.
    - Differential partition table updates (BANDWIDTH)
      When a storage asks for the current partition table (when it connects
      to a cluster in the service state), it must update its knowledge of
      the partition table. Currently this is done by fetching the entire
      table. If the master kept a history of the last few changes to the
      partition table, it would be able to send only a differential update
      (via the incremental update mechanism).
    - During the recovery phase, store multiple partition tables
      (ADMINISTRATION)
      When storage nodes know different versions of the partition table, the
      master should be able to present them to the admin, to let him choose
      one when moving on to the next phase.
    - Optimize the operational status check by recording which rows are
      ready, instead of parsing the whole partition table (SPEED)
    - Improve the partition table tweaking algorithm to reduce differences
      between frequently and rarely used nodes (SCALABILITY)

  Client

    - Client should prefer storage nodes it is already connected to when
      retrieving objects (LOAD LATENCY)
    - Implement a C version of mq.py (LOAD LATENCY)
    - Move the object data replication task to storage nodes
      (COMMIT LATENCY)
      Currently the client node must send a single object's data to all
      storage nodes in charge of the partition cell containing that object.
      This increases the time the client has to wait for the storage
      responses, and increases client-to-storage bandwidth usage. It should
      be possible to send object data to only one storage, and that storage
      should automatically replicate it to the other storages. Locks on
      objects would then be released by storage nodes.
    - Use the generic bootstrap module (CODE)
    - Find a way to make ask() work from the polling thread, to allow
      sending the initial packet (requestNodeIdentification) from the
      connectionCompleted() event instead of from the app. This requires
      knowing which thread will wait for the answer.
    - Discuss dead storage notification. If a client fails to connect to a
      storage node supposed to be in the running state, it should notify the
      master so that it checks whether this node is really up or not.
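    Sketch for the replication proposal in the Storage section above: S1
    (the out-of-date cell) describes what it already has in one message, and
    S2 (the reference cell) answers everything missing at once, instead of
    one round-trip per OID and per version. The backend methods used here
    (getOIDList, getLastSerial, getObjectData) are assumptions made for the
    example, not the real database API.

        def makeStateList(db, partition):
            # On S1: last serial known locally for every OID of the
            # partition.
            return dict((oid, db.getLastSerial(oid))
                        for oid in db.getOIDList(partition))

        def answerMissingData(db, partition, state_list):
            # On S2: return data for every OID whose latest serial differs
            # from what S1 reported (missing or outdated on S1).
            missing = []
            for oid in db.getOIDList(partition):
                serial = db.getLastSerial(oid)
                if state_list.get(oid) != serial:
                    missing.append((oid, serial,
                                    db.getObjectData(oid, serial)))
            return missing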
    - Cache for loadSerial/loadBefore
    - Implement the restore() ZODB API method to bypass consistency checks
      during imports.
    - Implement and use an iterator on the connection pool, to minimize the
      amount of reconnections (a sketch is given at the end of this file):
      - first iterate over nodes with established connections, then over the
        other nodes
      - allow the caller to specify the nodes it wants to find (e.g. which
        partitions it is interested in, etc.)

Later

    - Consider auto-generating the cluster name upon initial startup (it
      might actually be a partition property).
    - Consider ways to centralise the configuration file, or make the
      configuration updatable automatically on all nodes.
    - Consider storing some metadata on master nodes (partition table
      [version], ...). This data should be treated non-authoritatively, as a
      way to lower the probability of using an outdated partition table.
    - Decentralize primary master tasks as much as possible (consider
      distributed lock mechanisms, ...).
    - Make the admin node able to monitor multiple clusters simultaneously.
    - Choose how to compute the storage size.
    - Make the storage check that the OID matches its partitions during a
      store.
    - Send notifications when a storage node is lost.
    - When importing data, objects with non-allocated OIDs are stored. The
      storage can detect this and could notify the master not to allocate
      lower OIDs. But during import, each stored object triggers this
      notification and may cause a big network overhead. It would be better
      to refuse any client connection, and thus any OID allocation, during
      import. It may be interesting to create a new stage for the cluster
      startup... to be discussed.
    - Simple deployment solution, based on an embedded database and
      integrated master and storage nodes, that works out of the box.
    - Simple import/export solution that generates SQL/data.fs.
    - Consider using the out-of-band TCP feature.

Old TODO

    - Handling write timeouts.
    - Flushing write buffers only, without reading packets.
    - Garbage collection of unused nodes.
    - Stopping packet processing by returning a boolean value from a
      handler; otherwise it is too tricky to exchange a handler for another.
    - Expiration of temporarily down nodes.
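    Sketch for the connection pool iterator mentioned in the Client section
    above: a generator that yields already-connected nodes first and only
    then opens new connections. The pool method names (getConnForNode,
    createConnection) are assumptions made for the example.

        def iterateForCells(pool, cell_list):
            """Yield (node, connection) pairs for the given cells, reusing
            established connections before opening new ones."""
            not_connected = []
            # First pass: nodes we already have a connection to.
            for cell in cell_list:
                node = cell.getNode()
                conn = pool.getConnForNode(node)  # None if not connected
                if conn is not None:
                    yield node, conn
                else:
                    not_connected.append(node)
            # Second pass: only now pay the cost of connecting.
            for node in not_connected:
                conn = pool.createConnection(node)
                if conn is not None:
                    yield node, conn

    The caller expresses which nodes it is interested in through cell_list
    (e.g. the readable cells of the object's partition) and stops iterating
    as soon as it gets an answer, so useless reconnections are avoided.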