Commit f80e4cdd authored by Grégory Wisniewski

Update the TODO list and remove some inlined XXXs about:

- splitting big packets
- gathering multiple objects in one SQL request
- SQL injection
- no longer using notification packets to change a node state
- reducing duplicates


git-svn-id: https://svn.erp5.org/repos/neo/branches/prototype3@1122 71dcc9de-d417-0410-9af5-da40c76e7ee4
parent 3e440366
@@ -56,10 +56,13 @@ RC - Review output of pylint (CODE)
During the replication process and the verification stage, with or without
unfinished transactions, do cells have to be set as outdated? If yes, should
the partition table changes be broadcast?
- Review PENDING/HIDDEN/SHUTDOWN states, avoid use of notifyNodeInformation
- Review PENDING/HIDDEN/SHUTDOWN states, don't use notifyNodeInformation()
to do a state-switch, use an exception-based mechanism?
- Ensure that registered timeouts are canceled if the related connection is
closed.
- Clarify big packet handling: should they be split at the connection level
or at the application level, and should the ask/send/answer scheme be used?
It is currently inconsistent, especially with the ask/answer/send of the
partition table.
Storage
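A side note on the big-packet item above: a minimal application-level splitting sketch could look like the following. All names (MAX_PACKET_SIZE, split_payload, reassemble) are hypothetical and unrelated to NEO's actual connection code; this only illustrates the chunk-and-reassemble idea.

```python
MAX_PACKET_SIZE = 0x100000  # assumed 1 MiB limit, for illustration only

def split_payload(payload, max_size=MAX_PACKET_SIZE):
    """Yield (index, total, chunk) tuples covering the whole payload."""
    total = (len(payload) + max_size - 1) // max_size or 1
    for index in range(total):
        yield index, total, payload[index * max_size:(index + 1) * max_size]

def reassemble(chunks):
    """Rebuild the original payload from (index, total, chunk) tuples."""
    chunks = sorted(chunks)             # order by chunk index
    assert len(chunks) == chunks[0][1]  # every piece was received
    return b''.join(chunk for _, _, chunk in chunks)

# A payload slightly over 2 MiB becomes three chunks and round-trips intact.
payload = b'x' * (2 * MAX_PACKET_SIZE + 1234)
assert reassemble(list(split_payload(payload))) == payload
```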
@@ -79,6 +82,10 @@ RC - Review output of pylint (CODE)
Currently, storage presence is broadcast to client nodes too early, as the storage node will refuse them until it has fully up-to-date data (not only up-to-date cells, but also a partition table and node states).
- Create a specialized PartitionTable that knows the database and the
replicator, to remove duplicates and move logic out of the handlers (CODE)
- Consider inserting multiple objects at a time in the database, taking care
of the maximum allowed SQL request size.
- Prevent SQL injection: escape() from the MySQLdb API is not sufficient,
consider using query(request, arg_list) instead of query(request % arg_list)
- Improve replication process (BANDWIDTH)
The current implementation replicates objects this way (for a given TID):
S1 > S2 : Ask for a range of OIDs
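On the SQL-injection item above: the suggested query(request, arg_list) form maps onto the MySQLdb parameter-passing style sketched below. The database, table and column names are illustrative only, not NEO's schema or its query() wrapper.

```python
# Sketch only: illustrative database/table/column names.
import MySQLdb

conn = MySQLdb.connect(db='neo_test', user='neo')  # assumed credentials
cur = conn.cursor()
oid, tid = 42, 7

# Interpolating values into the SQL text leaves quoting to the caller, so a
# crafted string value can change the meaning of the statement.
cur.execute("SELECT value FROM obj WHERE oid = %d AND serial = %d"
            % (oid, tid))

# Passing the arguments separately lets the driver escape them. MySQLdb uses
# %s as the placeholder for every parameter type.
cur.execute("SELECT value FROM obj WHERE oid = %s AND serial = %s",
            (oid, tid))
```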
@@ -130,7 +137,7 @@ RC - Review output of pylint (CODE)
- Make storage check if the OID matches its partitions during a store
- Send notifications when a storage node is lost
- When importing data, objects with non-allocated OIDs are stored. The
storage can detect this and could notify the master to not allocatexd lower
storage can detect this and could notify the master to not allocate lower
OIDs. But during import, each object stored triggers this notification and
may cause a big network overhead. It would be better to refuse any client
connection and thus no OID allocation during import. It may be interesting
......
@@ -49,8 +49,6 @@ class BaseMasterHandler(BaseStorageHandler):
"""Store information on nodes, only if this is sent by a primary
master node."""
self.app.nm.update(node_list)
# XXX: iterate over the list to keep previous logic for now, but
# notification must not be used to change a node state
for node_type, addr, uuid, state in node_list:
if uuid == self.app.uuid:
# This is me, do what the master tells me
......
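The XXX removed above said that notifications must not be used to change a node state. A purely hypothetical sketch of the exception-based alternative mentioned in the TODO (StateChangedException, app.state and app.changeState() are invented names, not NEO APIs):

```python
# Hypothetical sketch: instead of the handler silently mutating the node's
# state while iterating node_list, it raises a dedicated exception and the
# dispatch loop performs the switch in a single, explicit place.

class StateChangedException(Exception):
    def __init__(self, new_state):
        Exception.__init__(self)
        self.new_state = new_state

class SketchMasterHandler(object):
    def __init__(self, app):
        self.app = app

    def handleNotifyNodeInformation(self, conn, packet, node_list):
        self.app.nm.update(node_list)
        for node_type, addr, uuid, state in node_list:
            if uuid == self.app.uuid and state != self.app.state:
                # Signal the change instead of applying it here.
                raise StateChangedException(state)

def dispatch(app, handler, conn, packet, node_list):
    try:
        handler.handleNotifyNodeInformation(conn, packet, node_list)
    except StateChangedException as e:
        # One explicit place where the node actually switches state.
        app.changeState(e.new_state)
```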
@@ -33,7 +33,6 @@ class HiddenHandler(BaseMasterHandler):
master node."""
app = self.app
self.app.nm.update(node_list)
# XXX: notification must not be used to change a node state
for node_type, addr, uuid, state in node_list:
if node_type == STORAGE_NODE_TYPE:
if uuid == self.app.uuid:
@@ -66,7 +65,6 @@ class HiddenHandler(BaseMasterHandler):
def handleNotifyPartitionChanges(self, conn, packet, ptid, cell_list):
"""This is very similar to Send Partition Table, except that
the information is only about changes from the previous."""
# XXX: this is a copy/paste from handlers/master.py
app = self.app
if ptid <= app.pt.getID():
# Ignore this packet.
......
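The removed copy/paste note ties into the "reduce duplicates" item of the commit message. One hedged way to do it, sketched with placeholder class names, is to keep the shared handleNotifyPartitionChanges() implementation in a single base handler; app.pt.getID() comes from the diff above, while the update() call stands in for whatever the real update path is.

```python
# Illustrative only: host the shared partition-change logic once so that
# HiddenHandler no longer carries a copy of the code in handlers/master.py.

class BaseMasterHandlerSketch(object):
    def __init__(self, app):
        self.app = app

    def handleNotifyPartitionChanges(self, conn, packet, ptid, cell_list):
        app = self.app
        if ptid <= app.pt.getID():
            return  # outdated partition table ID, ignore the packet
        app.pt.update(ptid, cell_list)  # placeholder for the real update

class MasterOperationHandlerSketch(BaseMasterHandlerSketch):
    pass  # inherits the single shared implementation

class HiddenHandlerSketch(BaseMasterHandlerSketch):
    pass  # no more copy/paste
```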
@@ -29,8 +29,7 @@ class InitializationHandler(BaseMasterHandler):
self.app.has_node_information = True
def handleNotifyNodeInformation(self, conn, packet, node_list):
# XXX: This message should be replaced by a SendNodeInformation to be
# consistent with SendPartitionTable.
# the whole node list is received here
BaseMasterHandler.handleNotifyNodeInformation(self, conn, packet, node_list)
def handleSendPartitionTable(self, conn, packet, ptid, row_list):
......
@@ -394,12 +394,6 @@ class MySQLDatabaseManager(DatabaseManager):
self.begin()
try:
# XXX it might be more efficient to insert multiple objects
# at a time, but it is potentially dangerous, because
# a packet to MySQL can exceed the maximum packet size.
# However, I do not think this would be a big problem, because
# tobj has no index, so inserting one by one should not be
# significantly different from inserting many at a time.
for oid, compression, checksum, data in object_list:
oid = u64(oid)
data = e(data)
......
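The comment removed here weighs multi-row inserts against MySQL's maximum packet size, which is exactly the new TODO item about batched inserts. A rough sketch of how that could be done, assuming a placeholder table layout, an assumed max_allowed_packet value and a deliberately crude size estimate, rather than the real MySQLDatabaseManager code:

```python
# Minimal sketch of batched object inserts that stay under an assumed
# MySQL max_allowed_packet. Table and column names are placeholders.

MAX_PACKET = 16 * 1024 * 1024   # assumed max_allowed_packet value

def store_object_list(cursor, object_list, max_packet=MAX_PACKET):
    batch, size = [], 0
    for oid, compression, checksum, data in object_list:
        # Escaped binary data dominates the statement size; other fields
        # are covered by a small fixed overhead per row.
        row_size = 2 * len(data) + 64
        if batch and size + row_size > max_packet:
            _flush(cursor, batch)
            batch, size = [], 0
        batch.append((oid, compression, checksum, data))
        size += row_size
    if batch:
        _flush(cursor, batch)

def _flush(cursor, batch):
    # With MySQLdb, executemany() on an INSERT ... VALUES template folds the
    # rows into one multi-row statement and escapes every parameter itself.
    cursor.executemany(
        "INSERT INTO tobj (oid, compression, checksum, value)"
        " VALUES (%s, %s, %s, %s)",
        batch)
```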