- 18 Jul, 2006 2 commits
Jim Fulton authored
test for it.

Jim Fulton authored
ClientStorage could be in either "sync" mode or "async" mode. Now there is just "async" mode, with a dedicated asyncore main loop for ZEO clients. This addresses a test failure on Mac OS X, http://www.zope.org/Collectors/Zope3-dev/650, that I believe was due to a bug in sync mode: some asyncore-based code was being called from multiple threads that didn't expect to be. Converting to always-async mode revealed some bugs that weren't caught before because the tests ran in sync mode; these problems could explain the long reconnect times we've sometimes seen after a client disconnect. Also added a partial heartbeat to try to detect lost connections that aren't otherwise caught, http://mail.zope.org/pipermail/zodb-dev/2005-June/008951.html, by periodically writing to all connections during periods of inactivity.
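The heartbeat idea amounts to writing a small no-op message on every connection that has been idle too long, so a dead peer turns the next write into an error instead of silently hanging. A minimal sketch, assuming hypothetical send_ping()/last_activity names rather than ZEO's actual API:

```python
import time

HEARTBEAT_INTERVAL = 60.0  # seconds of inactivity before pinging (illustrative value)

class Peer:
    """Stand-in for a ZEO connection; send_ping/last_activity are hypothetical."""
    def __init__(self, sock):
        self.sock = sock
        self.last_activity = time.time()

    def send_ping(self):
        self.sock.sendall(b"\x00")  # any small no-op write will do

    def close(self):
        self.sock.close()

def heartbeat(peers):
    """Ping idle peers; a failed write means the connection is gone."""
    now = time.time()
    for peer in list(peers):
        if now - peer.last_activity >= HEARTBEAT_INTERVAL:
            try:
                peer.send_ping()
                peer.last_activity = now
            except OSError:
                peer.close()   # the write failed: the connection is dead
                peers.remove(peer)
```

The write is the detector: the OS reports a broken connection on write long before a read-only loop would ever notice.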
- 04 Oct, 2005 1 commit

Tim Peters authored
Port from the 2.7 branch. Collector 1900. send_reply(), return_error(): stop trying to catch an exception that doesn't exist when marshal.encode() raises. Jeremy simplified the marshal.encode() half of this about three years ago, but apparently forgot to change ZEO/zrpc/connection.py to match.
- 01 Apr, 2005 1 commit

Tim Peters authored
Rewrite ZEO protocol negotiation. 3.3 should have bumped the ZEO protocol number (new methods were added for MVCC support), but didn't. Untangling this is a mess.
- 11 Mar, 2005 1 commit

Tim Peters authored
- 09 Feb, 2005 1 commit

Tim Peters authored
Forward port from ZODB 3.2. Connection.__init__(): Python 2.4 added a new gimmick to asyncore (a ._map attribute on asyncore.dispatcher instances) that breaks the delicate ZEO startup dance. Repaired that.
- 05 Feb, 2005 1 commit

Tim Peters authored
Port from ZODB 3.2. Fixed several thread and asyncore races in ZEO's connection dance.

ZEO/tests/ConnectionTests.py: The pollUp() and pollDown() methods were pure busy loops whenever the asyncore socket map was empty, and at least on some flavors of Linux that starved the other thread(s) trying to do real work. This grossly increased the time needed to run tests using these, and sometimes caused bogus "timed out" test failures.

ZEO/zrpc/client.py, ZEO/zrpc/connection.py: Renamed class ManagedConnection to ManagedClientConnection, for clarity. Moved the comment block about protocol negotiation from the guts of ManagedClientConnection to before the Connection base class -- the Connection constructor can't be understood without this context. Added more words about the delicate protocol negotiation dance.

Connection class: made this an abstract base class; derived classes _must_ implement the handshake() method. There was really nothing in common between server and client in what handshake() needs to do, and it was confusing for one of them to use the base class handshake() while the other replaced handshake() completely.

Connection.__init__: It isn't safe to register with asyncore's socket map before the special-casing for the first (protocol handshake) message is set up. Repaired that. Also removed the pointless "optionalness" of the optional arguments.

ManagedClientConnection.__init__: Added machinery to set up correct (thread-safe) message queueing. There was an unrepairable hole before, in the transition between "I'm queueing msgs waiting for the server handshake" and "I'm done queueing messages": it was impossible to know whether any calls to the client's "queue a message" method were in progress (in other threads), so it was impossible to make the transition safely in all cases. The client had to grow its own message_output() method, with a mutex protecting the transition from thread races.

Changed zrpc-conn log messages to include "(S)" for server-side or "(C)" for client-side. This is especially helpful for figuring out logs produced while running the test suite (the server and client log messages end up in the same file then).
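The "mutex protecting the transition" can be sketched as follows; the class and attribute names are illustrative, not the actual zrpc code. The point is that message_output() decides queue-versus-send under the same lock the handshake-completion path holds, so no message can be dropped during the switch:

```python
import threading

class ClientChannel:
    """Sketch: queue outgoing messages until the protocol handshake completes."""
    def __init__(self, transport):
        self._transport = transport        # assumed to have a .write(bytes) method
        self._lock = threading.Lock()
        self._queue = []                   # messages buffered pre-handshake
        self._handshake_done = False

    def message_output(self, msg):
        # The lock makes "queue it" vs. "send it" an atomic decision, so no
        # concurrent caller can slip a message through during the transition.
        with self._lock:
            if self._handshake_done:
                self._transport.write(msg)
            else:
                self._queue.append(msg)

    def handshake_complete(self):
        # Flush under the same lock: any thread blocked in message_output()
        # will observe _handshake_done == True once we release it.
        with self._lock:
            for msg in self._queue:
                self._transport.write(msg)
            self._queue = []
            self._handshake_done = True
```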
- 02 Jun, 2004 1 commit

Jim Fulton authored
- 24 Apr, 2004 1 commit

Gintautas Miliauskas authored
This probably broke the log analyzers... :(
- 27 Feb, 2004 1 commit

Martijn Faassen authored
- 31 Dec, 2003 1 commit

Jeremy Hylton authored
Connection initialized _map as a dict containing a single entry mapping the connection's fileno to the connection. That was a misuse of the _map variable, which is also used by the asyncore.dispatcher base class to indicate whether the dispatcher uses the default socket_map or a custom socket_map. A recent change to asyncore made add_channel() and del_channel() use _map, presumably as a bug fix (it may get ported to 2.3). That makes our dubious use of _map a problem, because we also put the Connections in the global socket_map; the new asyncore won't remove a Connection from the global socket map, because it has a custom _map. Also changed a bunch of 0/1s to False/Trues.
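For context, the supported way to give a dispatcher a private socket map is to pass it to the constructor rather than assigning _map by hand; asyncore's own add_channel()/del_channel() then use that map consistently. A sketch of the distinction (asyncore has since been removed in Python 3.12, so this is period-appropriate code):

```python
import asyncore

private_map = {}  # our own socket map, instead of asyncore's module-global one

class Connection(asyncore.dispatcher):
    def __init__(self, sock):
        # Passing map= here is the supported way to opt out of the global
        # socket map; add_channel()/del_channel() will use private_map.
        asyncore.dispatcher.__init__(self, sock, map=private_map)

# Drive only the sockets registered in our private map:
# asyncore.loop(map=private_map)
```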
- 02 Oct, 2003 1 commit

Jeremy Hylton authored
- 15 Sep, 2003 1 commit

Jeremy Hylton authored
Please make all future changes on the Zope-2_7-branch instead.
- 13 Jun, 2003 1 commit

Jeremy Hylton authored
- 30 May, 2003 1 commit

Jeremy Hylton authored
After the merge, I made several Python 2.1 compatibility changes for the auth code.
- 24 Apr, 2003 1 commit

Jeremy Hylton authored
- 22 Apr, 2003 1 commit

Jeremy Hylton authored
- 24 Jan, 2003 1 commit

Guido van Rossum authored
Guillaume's request.
- 17 Jan, 2003 1 commit

Jeremy Hylton authored
Closes SF bug #659068.
- 14 Jan, 2003 1 commit

Jeremy Hylton authored
pending() does reads and writes. In the case of server startup, we may need to write out zeoVerify() messages. Always check for read status, but only check for write status when there is output to send. Only continue in this loop as long as there is data to read.
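The read/write-status rule, sketched with select.select() (names are illustrative): an idle socket is almost always "writable", so selecting for write with an empty output buffer would turn the loop into a busy wait.

```python
import select

def pending(sock, out_buffer, timeout=0.0):
    """One poll step: always watch for reads; watch for writes only if needed."""
    rlist = [sock]
    wlist = [sock] if out_buffer else []   # avoid spinning on an always-writable socket
    readable, writable, _ = select.select(rlist, wlist, [], timeout)
    return readable, writable
```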
- 07 Jan, 2003 1 commit

Jeremy Hylton authored
XXX The deferred name isn't perfect, but async is already taken.
- 03 Jan, 2003 1 commit

Jeremy Hylton authored
- 13 Dec, 2002 1 commit

Jeremy Hylton authored
- 18 Nov, 2002 1 commit

Jeremy Hylton authored
XXX Not sure if berkeley still works.
- 29 Sep, 2002 1 commit

Guido van Rossum authored
- 27 Sep, 2002 1 commit

Guido van Rossum authored
asyncore.poll() with a timeout of 10 seconds. Change this to a variable timeout starting at 1 msec and doubling until 1 second.

While debugging Win2k crashes in the check4ExtStorageThread test from ZODB/tests/MTStorage.py, Tim noticed that there were frequent 10 second gaps in the log file where *nothing* happens. These were caused by the following scenario. Suppose a ZEO client process has two threads using the same connection to the ZEO server, and there's no asyncore loop active. T1 makes a synchronous call, and enters the wait() function. Then T2 makes another synchronous call, and enters the wait() function. At this point, both are blocked in the select() call in asyncore.poll(), with a timeout of 10 seconds (in the old version).

Now the replies for both calls arrive. Say T1 wakes up. The handle_read() method in smac.py calls self.recv(8096), so it gets both replies in its buffer, decodes both, and calls self.message_input() for both, which sticks both replies in the self.replies dict. Now T1 finds its response, and its wait() call returns with it. But T2 is still stuck in asyncore.poll(): its select() call never woke up, and has to "sit out" the whole timeout of 10 seconds. (Good thing I added timeouts to everything! Or perhaps not, since it masked the problem.)

One other condition must be satisfied before this becomes a disaster: T2 must have started a transaction, and all other threads must be waiting to start another transaction. This is what I saw in the log. (Hmm, maybe a message should be logged when a thread is waiting to start a transaction this way.)

In a real Zope application, this won't happen, because there's a centralized asyncore loop in a separate thread (probably the client's main thread) and the various threads would be waiting on the condition variable; whenever a reply is inserted in the replies dict, all threads are notified. But in the test suite there's no asyncore loop, and I don't feel like adding one. So the exponential backoff seems the easiest "solution".
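The variable timeout is plain exponential backoff. A minimal sketch, with have_reply standing in for the real reply check: start at 1 ms and double after every empty poll, capping at 1 second, so a reply that another thread already pulled off the wire is noticed quickly without burning CPU.

```python
import asyncore

def wait_for_reply(have_reply, socket_map):
    """Poll with exponentially growing timeouts: 1 msec doubling up to 1 sec."""
    delay = 0.001
    while not have_reply():
        asyncore.poll(delay, socket_map)  # may return early if data arrives
        delay = min(delay * 2, 1.0)
```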
- 25 Sep, 2002 2 commits

Jeremy Hylton authored
If an exception occurs while decoding a message, there is really nothing the server can do to recover. If the message was a synchronous call, the client will wait forever for the reply; the server can't send the reply, because it couldn't unpickle the message id. Instead of trying to recover, just let the exception propagate up to asyncore, where the connection will be closed. As a result, eliminate DecodingError and the special case in handle_error() that handled flags == None.
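A sketch of the resulting shape of the handler (pickle stands in for ZEO's marshal, and the names are illustrative): the decode happens outside any try/except, so a corrupt message propagates to the event loop, which closes the connection.

```python
import pickle

class Connection:
    """Sketch: a message handler that lets decode errors propagate."""
    def __init__(self, dispatch):
        self.dispatch = dispatch  # callable(msgid, flags, name, args)

    def message_input(self, data):
        # No try/except here: if the pickle is corrupt we can't even
        # recover the message id needed for an error reply, so let the
        # exception reach the event loop, which closes the connection.
        msgid, flags, name, args = pickle.loads(data)
        self.dispatch(msgid, flags, name, args)
```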
Guido van Rossum authored
instead. return_error(): be more careful calling repr() on err_value.
- 24 Sep, 2002 1 commit

Guido van Rossum authored
Rather than blaming Windows for reporting success as an error, the else clause on the second try block should be an except clause.
- 23 Sep, 2002 1 commit

Guido van Rossum authored
- Change pending() to use select.select() instead of select.poll(), so it'll work on Windows.
- Clarify comment to say that only Exceptions are propagated.
- Change some private variables to public (everything else is public).
- Remove XXX comment about logging at INFO level (we already do that now :-).
- 20 Sep, 2002 1 commit

Guido van Rossum authored
A ClientStorage constructor called with both wait=1 and read_only_fallback=1 should return, indicating its readiness, when a read-only connection was made. This is done by calling connect(sync=1). Previously this waited for the ConnectThread to finish, but that thread doesn't finish until it's made a read-write connection, so a different mechanism is needed. I ended up doing a major overhaul of the interfaces between ClientStorage, ConnectionManager, ConnectThread/ConnectWrapper, and even ManagedConnection. Changes:

ClientStorage.py, ClientStorage:
- testConnection() now returns just the preferred flag; stubs are cheap and I like to have the notifyConnected() signature be the same for clients and servers.
- notifyConnected() now takes a connection (to match the signature of this method in StorageServer), and creates a new stub. It also takes care of the reconnect business if the client was already connected, rather than the ClientManager. It stores the connection as self._connection so it can close the previous one. This is also reset by notifyDisconnected().

zrpc/client.py, ConnectionManager:
- Changed self.thread_lock into a condition variable. It now also protects self.connection. The condition is notified when self.connection is set to a non-None value in connect_done(); connect(sync=1) waits for it. The self.connected variable is no more; we test "self.connection is not None" instead.
- Tried to make close() reentrant. (There's a trick: you can't set self.connection to None; conn.close() ends up calling close_conn(), which does this.)
- Renamed notify_closed() to close_conn(), for symmetry with the StorageServer API.
- Added an is_connected() method so ConnectThread.try_connect() doesn't have to dig inside the manager's guts to find out if the manager is connected (important for the disposition of fallback wrappers).

ConnectThread and ConnectWrapper:
- Follow the above changes in the ClientStorage and ConnectionManager APIs: don't close the manager's connection when reconnecting, but leave that up to notifyConnected(); ConnectWrapper no longer manages the stub.
- ConnectWrapper sets self.sock to None once it's created a ManagedConnection -- from there on the connection is in charge of closing the socket.

zrpc/connection.py:
- ManagedServerConnection: Changed the order in which close() calls things; super_close() should be last.
- ManagedConnection: Ditto, and call the manager's close_conn() instead of notify_closed().

tests/testZEO.py:
- In checkReconnectSwitch(), we can now open the client storage with wait=1 and read_only_fallback=1.
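The condition-variable handoff and the close_conn() rename can be sketched like this (illustrative, not the actual zrpc code); the stale-close guard in close_conn() reflects the fix described in the 19 Sep entry below:

```python
import threading

class ConnectionManager:
    """Sketch of the connect(sync=1) handoff via a condition variable."""
    def __init__(self):
        self.cond = threading.Condition()
        self.connection = None  # protected by self.cond

    def connect_done(self, conn):
        # Called by the connect thread once a usable (possibly read-only)
        # connection exists; wakes anyone blocked in connect(sync=1).
        with self.cond:
            self.connection = conn
            self.cond.notify_all()

    def connect(self, sync=False):
        if sync:
            with self.cond:
                while self.connection is None:
                    self.cond.wait()

    def is_connected(self):
        with self.cond:
            return self.connection is not None

    def close_conn(self, conn):
        # Ignore stale closes (e.g. a saved read-only connection being
        # discarded): only a close of the *current* connection counts.
        with self.cond:
            if conn is not self.connection:
                return
            self.connection = None
```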
- 19 Sep, 2002 2 commits

Guido van Rossum authored
until I added an is_connected() test to testConnection() is solved. After the ConnectThread has switched the client to the new, read-write connection, it closes the read-only connection(s) that it was saving up in case there was no read-write connection. But closing a ManagedConnection calls notify_closed() on the manager, which disconnected the manager and the client from its brand new read-write connection. The mistake here is that this should only be done when closing the manager's current connection!

The fix was to add an argument to notify_closed() that passes the connection object being closed; notify_closed() returns without doing a thing when that is not the current connection. I presume this didn't happen on Linux because there the sockets happened to connect in a different order, and there was no read-only connection to close yet (just a socket trying to connect).

I'm taking out the previous "fix" to ClientStorage, because that only masked the problem in this relatively simple test case. The problem could still occur when both a read-only and a read-write server are up initially, and the read-only server connects first; once the read-write server connects, the read-write connection is installed, and then the saved read-only connection is closed, which would again mistakenly disconnect the read-write connection.

Another (related) fix is not to call self.mgr.notify_closed() but to call self.mgr.connection.close() when reconnecting. (Hmm, I wonder if it would make more sense to have an explicit reconnect callback to the manager and the client? Later.)

Guido van Rossum authored
the socket's __str__ due to a __getattr__ method in asyncore's dispatcher base class that everybody hates but nobody dares take away.
- 17 Sep, 2002 5 commits

Guido van Rossum authored
calls. If multiple threads sharing a ZEO connection want to make overlapping calls, they can do that now. This is mostly useful when one thread is waiting for a long-running pack() or undo*() call -- the other thread can now proceed. Jeremy & I did a review of the StorageServer code and found no place where overlapping incoming calls from the same connection could do any harm -- given that the only places where incoming calls can be handled are those places where the server makes a callback to the client.
Guido van Rossum authored
Cleanup comments for Managed*Connection. Whitespace normalization.
Guido van Rossum authored
the client, don't log it at the ERROR level. If it really was a disaster, the client should log it. But if the client was expecting the exception, the server shouldn't get all upset about it. Change this to the INFO level. (When it *is* considered an error by the client, it's useful to be able to see the server-side traceback in the log.)
Guido van Rossum authored

Jeremy Hylton authored
- 16 Sep, 2002 2 commits
Guido van Rossum authored
parallel outstanding calls. However, it also contains code (by Jeremy, with one notifyAll() call added by me) that enforces the old rule of a single outstanding call. This is hopefully unnecessary, but we haven't reviewed the server side yet to make sure that that's really the case (until now the server was getting serialized calls per connection).
Guido van Rossum authored