Commit 24afe7ac authored by Guido van Rossum

I set out to make wait=1 work for fallback connections, i.e. the

ClientStorage constructor called with both wait=1 and
read_only_fallback=1 should return, indicating its readiness, when a
read-only connection was made.  This is done by calling
connect(sync=1).  Previously this waited for the ConnectThread to
finish, but that thread doesn't finish until it's made a read-write
connection, so a different mechanism is needed.
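
For illustration only, here is roughly what this buys a caller (the
address below is made up; only the wait and read_only_fallback
arguments are part of this change):

    from ZEO.ClientStorage import ClientStorage

    # With wait=1 and read_only_fallback=1 the constructor now returns
    # as soon as *some* usable connection exists, even a read-only one;
    # the connect thread keeps running and will switch to a read-write
    # connection when one becomes available.
    storage = ClientStorage(('localhost', 9999),
                            wait=1, read_only_fallback=1)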

I ended up doing a major overhaul of the interfaces between
ClientStorage, ConnectionManager, ConnectThread/ConnectWrapper, and
even ManagedConnection.  Changes:

ClientStorage.py:

  ClientStorage:

  - testConnection() now returns just the preferred flag; stubs are
    cheap and I like to have the notifyConnected() signature be the
    same for clients and servers.

  - notifyConnected() now takes a connection (to match the signature
    of this method in StorageServer), and creates a new stub.  It also
    takes care of the reconnect business if the client was already
    connected, rather than the ConnectionManager.  It stores the
    connection as self._connection so it can close the previous one.
    This is also reset by notifyDisconnected().

zrpc/client.py:

  ConnectionManager:

  - Changed self.thread_lock into a condition variable.  It now also
    protects self.connection.  The condition is notified when
    self.connection is set to a non-None value in connect_done();
    connect(sync=1) waits for it.  The self.connected variable is no
    more; we test "self.connection is not None" instead.  (A rough
    sketch of this scheme appears after this list.)

  - Tried to make close() reentrant.  (There's a trick: close() can't
    set self.connection to None itself; conn.close() ends up calling
    close_conn(), which does this.)

  - Renamed notify_closed() to close_conn(), for symmetry with the
    StorageServer API.

  - Added an is_connected() method so ConnectThread.try_connect()
    doesn't have to dig inside the manager's guts to find out if the
    manager is connected (important for the disposition of fallback
    wrappers).
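
  Since the zrpc/client.py diff is collapsed below, here is a rough
  sketch of the locking scheme these bullets describe.  Only the names
  connect(sync=1), connect_done(), is_connected() and close_conn() come
  from this change; the connect_done() signature and everything else in
  the class are simplifications:

    import threading

    class ConnectionManager:
        """Sketch only -- not the real zrpc/client.py code."""

        def __init__(self):
            # self.cond replaces the old self.thread_lock; it also
            # guards self.connection.  "Connected" now simply means
            # self.connection is not None.
            self.cond = threading.Condition()
            self.connection = None

        def connect(self, sync=0):
            # (Starting the ConnectThread is omitted here.)
            if sync:
                # Block until connect_done() installs *some* connection,
                # read-only fallback or read-write; either wakes us up.
                self.cond.acquire()
                try:
                    while self.connection is None:
                        self.cond.wait()
                finally:
                    self.cond.release()

        def connect_done(self, conn):
            # Called from the connect thread once the client accepted a
            # connection; setting self.connection wakes up connect(sync=1).
            self.cond.acquire()
            try:
                self.connection = conn
                self.cond.notifyAll()
            finally:
                self.cond.release()

        def is_connected(self):
            # ConnectThread.try_connect() asks this instead of poking at
            # the manager's internals.
            self.cond.acquire()
            try:
                return self.connection is not None
            finally:
                self.cond.release()

        def close(self):
            # Reentrant: close() does not clear self.connection itself;
            # conn.close() calls back into close_conn(), which does it.
            self.cond.acquire()
            try:
                conn = self.connection
            finally:
                self.cond.release()
            if conn is not None:
                conn.close()

        def close_conn(self, conn):
            # Renamed from notify_closed(); called by the connection
            # when it goes away.
            self.cond.acquire()
            try:
                if conn is self.connection:
                    self.connection = None
            finally:
                self.cond.release()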

  ConnectThread and ConnectWrapper:

  - Follow above changes in the ClientStorage and ConnectionManager
    APIs: don't close the manager's connection when reconnecting, but
    leave that up to notifyConnected(); ConnectWrapper no longer
    manages the stub.

  - ConnectWrapper sets self.sock to None once it's created a
    ManagedConnection -- from there on the connection is in charge of
    closing the socket (see the sketch just below).
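
  The ManagedConnection constructor arguments and import path in this
  sketch are assumptions; only the "self.sock = None" step and the
  preferred flag come from this change:

    from ZEO.zrpc.connection import ManagedConnection  # assumed path

    class ConnectWrapper:
        """Sketch only: who owns the raw socket."""

        def __init__(self, mgr, sock, addr, client):
            self.mgr = mgr
            self.sock = sock
            self.addr = addr
            self.client = client
            self.conn = None
            self.preferred = 0

        def test_connection(self):
            # Wrap the socket in a ManagedConnection and forget it; from
            # here on the connection is in charge of closing the socket.
            self.conn = ManagedConnection(self.sock, self.addr,
                                          self.client, self.mgr)
            self.sock = None
            self.preferred = self.client.testConnection(self.conn)

        def close(self):
            # Close the raw socket only if no ManagedConnection took it
            # over; otherwise closing the connection closes the socket.
            if self.sock is not None:
                self.sock.close()
                self.sock = None
            elif self.conn is not None:
                self.conn.close()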

zrpc/connection.py:

  ManagedServerConnection:

  - Changed the order in which close() calls things; super_close()
    should be last.

  ManagedConnection:

  - Ditto, and call the manager's close_conn() instead of
    notify_closed().

tests/testZEO.py:

  - In checkReconnectSwitch(), we can now open the client storage with
    wait=1 and read_only_fallback=1.
parent f8411024
@@ -13,7 +13,7 @@
 ##############################################################################
 """Network ZODB storage client

-$Id: ClientStorage.py,v 1.64 2002/09/20 13:35:07 gvanrossum Exp $
+$Id: ClientStorage.py,v 1.65 2002/09/20 17:37:34 gvanrossum Exp $
 """

 # XXX TO DO
@@ -107,6 +107,7 @@ class ClientStorage:
         self._is_read_only = read_only
         self._storage = storage
         self._read_only_fallback = read_only_fallback
+        self._connection = None
         self._info = {'length': 0, 'size': 0, 'name': 'ZEO Client',
                       'supportsUndo':0, 'supportsVersions': 0,
@@ -200,11 +201,9 @@
         self._server._update()

     def testConnection(self, conn):
-        """Return a pair (stub, preferred).
+        """Test a connection.

-        Where:
-        - stub is an RPC stub
-        - preferred is: 1 if the connection is an optimal match,
+        This returns: 1 if the connection is an optimal match,
          0 if it is a suboptimal but acceptable match

        It can also raise DisconnectedError or ReadOnlyError.
@@ -217,27 +216,33 @@
         stub = ServerStub.StorageServer(conn)
         try:
             stub.register(str(self._storage), self._is_read_only)
-            return (stub, 1)
+            return 1
         except POSException.ReadOnlyError:
             if not self._read_only_fallback:
                 raise
             log2(INFO, "Got ReadOnlyError; trying again with read_only=1")
             stub.register(str(self._storage), read_only=1)
-            return (stub, 0)
+            return 0

-    def notifyConnected(self, stub):
-        """Start using the given RPC stub.
+    def notifyConnected(self, conn):
+        """Start using the given connection.

         This is called by ConnectionManager after it has decided which
-        connection should be used.  The stub is one returned by a
-        previous testConnection() call.
+        connection should be used.
         """
+        if self._connection is not None:
+            log2(INFO, "Reconnected to storage")
+        else:
+            log2(INFO, "Connected to storage")
+        stub = ServerStub.StorageServer(conn)
         self._oids = []
         self._info.update(stub.get_info())
         self.verify_cache(stub)
         # XXX The stub should be saved here and set in endVerify() below.
+        if self._connection is not None:
+            self._connection.close()
+        self._connection = conn
         self._server = stub

     def verify_cache(self, server):
@@ -257,6 +262,7 @@
     def notifyDisconnected(self):
         log2(PROBLEM, "Disconnected from storage")
+        self._connection = None
         self._server = disconnected_stub

     def __len__(self):
@@ -499,7 +499,7 @@ class ConnectionTests(StorageTestBase.StorageTestBase):
         # Start a read-only server
         self._startServer(create=0, index=0, read_only=1)
         # Start a client in fallback mode
-        self._storage = self.openClientStorage(wait=0, read_only_fallback=1)
+        self._storage = self.openClientStorage(wait=1, read_only_fallback=1)
         # Stores should fail here
         self.assertRaises(ReadOnlyError, self._dostore)
(The zrpc/client.py diff is collapsed in this view.)
@@ -427,8 +427,8 @@ class ManagedServerConnection(Connection):
     def close(self):
         self.obj.notifyDisconnected()
-        self.__super_close()
         self.__mgr.close_conn(self)
+        self.__super_close()

 class ManagedConnection(Connection):
     """Client-side Connection subclass."""
@@ -469,5 +469,5 @@ class ManagedConnection(Connection):
         return self.check_mgr_async()

     def close(self):
+        self.__mgr.close_conn(self)
         self.__super_close()
-        self.__mgr.notify_closed(self)