  1. 18 Jul, 2006 2 commits
  2. 04 Oct, 2005 1 commit
    • Merge rev 38747 from 3.4 branch. · 2fe0d7e7
      Tim Peters authored
      Port from 2.7 branch.
      
      Collector 1900.
      
      send_reply(), return_error():  Stop trying to catch an exception that doesn't
      exist when marshal.encode() raises an exception.  Jeremy simplified the
      marshal.encode() half of this about 3 years ago, but apparently forgot to
      change ZEO/zrpc/connection.py to match.
  3. 01 Apr, 2005 1 commit
    • Merge rev 29769 from 3.3 branch. · fc6633b1
      Tim Peters authored
      Rewrite ZEO protocol negotiation.
      
      3.3 should have bumped the ZEO protocol number (new methods were
      added for MVCC support), but didn't.  Untangling this is a mess.
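      
      The negotiation can be sketched as below; the version strings and
      helper name are illustrative only, not ZEO's actual wire format:
      
          def negotiate(client_protocol, server_protocol):
              # Each side advertises the newest protocol it speaks; both
              # then settle on the older of the two, so a newer client
              # can still talk to an older server and vice versa.
              return min(client_protocol, server_protocol)
      
          assert negotiate("Z303", "Z201") == "Z201"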
  4. 11 Mar, 2005 1 commit
  5. 09 Feb, 2005 1 commit
    • Port rev 29092 from 3.3 branch. · 7a473f11
      Tim Peters authored
      Forward port from ZODB 3.2.
      
      Connection.__init__():  Python 2.4 added a new gimmick to asyncore (a
      ._map attribute on asyncore.dispatcher instances) that breaks the
      delicate ZEO startup dance.  Repaired that.
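      
      A minimal illustration, on a Python that still ships asyncore (it
      was removed in 3.12): a dispatcher constructed with an explicit map
      records its channel there instead of in the module-global
      socket_map, which lets startup code control when polling first sees
      the connection.
      
          import asyncore
          import socket
      
          private_map = {}
          # Passing a map sets the dispatcher's _map, so add_channel()
          # registers the channel in private_map rather than in
          # asyncore.socket_map.
          d = asyncore.dispatcher(map=private_map)
          d.create_socket(socket.AF_INET, socket.SOCK_STREAM)
          assert d._fileno in private_map
          assert d._fileno not in asyncore.socket_map
          d.close()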
  6. 05 Feb, 2005 1 commit
    • Merge rev 29052 from 3.3 branch. · 3ebdc9a2
      Tim Peters authored
      Port from ZODB 3.2.
      
      Fixed several thread and asyncore races in ZEO's connection dance.
      
      ZEO/tests/ConnectionTests.py
          The pollUp() and pollDown() methods were pure busy loops whenever
          the asyncore socket map was empty, and at least on some flavors of
          Linux that starved the other thread(s) trying to do real work.
          This grossly increased the time needed to run tests using these, and
          sometimes caused bogus "timed out" test failures.
      
      ZEO/zrpc/client.py
      ZEO/zrpc/connection.py
          Renamed class ManagedConnection to ManagedClientConnection, for clarity.
      
          Moved the comment block about protocol negotiation from the guts of
          ManagedClientConnection to before the Connection base class -- the
          Connection constructor can't be understood without this context.  Added
          more words about the delicate protocol negotiation dance.
      
          Connection class:  made this an abstract base class.  Derived classes
          _must_ implement the handshake() method.  There was really nothing in
          common between server and client wrt what handshake() needs to do, and
          it was confusing for one of them to use the base class handshake() while
          the other replaced handshake() completely.
      
          Connection.__init__:  It isn't safe to register with asyncore's socket
          map before special-casing for the first (protocol handshake) message is
          set up.  Repaired that.  Also removed the pointless "optionalness" of
          the optional arguments.
      
          ManagedClientConnection.__init__:  Added machinery to set up correct
          (thread-safe) message queueing.  There was an unrepairable hole before,
          in the transition between "I'm queueing msgs waiting for the server
          handshake" and "I'm done queueing messages":  it was impossible to know
          whether any calls to the client's "queue a message" method were in
          progress (in other threads), so impossible to make the transition safely
          in all cases.  The client had to grow its own message_output() method,
          with a mutex protecting the transition from thread races.  (A
          sketch of this pattern follows this commit message.)
      
          Changed zrpc-conn log messages to include "(S)" for server-side or
          "(C)" for client-side.  This is especially helpful for figuring out
          logs produced while running the test suite (the server and client
          log messages end up in the same file then).
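      
      A sketch of the queueing transition described above for
      ManagedClientConnection (the class and attribute names here are
      hypothetical, not the actual ZEO code):
      
          import threading
      
          class ClientMessenger:
              def __init__(self, send):
                  self._send = send         # the real output function
                  self._queue = []          # pre-handshake buffer
                  self._lock = threading.Lock()
      
              def message_output(self, msg):
                  # The lock makes the queued->direct transition atomic:
                  # no writer can append to the queue while it is being
                  # flushed and retired.
                  with self._lock:
                      if self._queue is not None:
                          self._queue.append(msg)
                      else:
                          self._send(msg)
      
              def handshake_done(self):
                  with self._lock:
                      for msg in self._queue:
                          self._send(msg)
                      self._queue = None    # switch to direct sends
      
          out = []
          m = ClientMessenger(out.append)
          m.message_output(b"queued")
          m.handshake_done()
          m.message_output(b"direct")
          assert out == [b"queued", b"direct"]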
  7. 02 Jun, 2004 1 commit
  8. 24 Apr, 2004 1 commit
  9. 27 Feb, 2004 1 commit
  10. 31 Dec, 2003 1 commit
    • Fix bug that prevented ZEO from working with Python 2.4. · c8dc49bd
      Jeremy Hylton authored
      Connection initialized _map as a dict containing a single entry
      mapping the connection's fileno to the connection.  That was a misuse
      of the _map variable, which is also used by the asyncore.dispatcher
      base class to indicate whether the dispatcher uses the default
      socket_map or a custom socket_map.  A recent change to asyncore caused
      it to use _map in its add_channel() and del_channel() methods; the
      change is presumably a bug fix (and may get ported to 2.3).  That makes
      our dubious use of _map a problem, because we also put the
      Connections in the global socket_map.  The new asyncore won't remove
      it from the global socket map, because it has a custom _map.
      
      Also change a bunch of 0/1s to False/Trues.
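      
      A minimal demonstration of the resulting leak, on a Python that
      still ships asyncore (it was removed in 3.12):
      
          import asyncore
          import socket
      
          leaky = asyncore.dispatcher()       # uses the global socket_map
          leaky.create_socket(socket.AF_INET, socket.SOCK_STREAM)
          leaky._map = {leaky._fileno: leaky} # the dubious trick
          fd = leaky._fileno
          leaky.close()                       # del_channel() now cleans
                                              # only the custom _map
          assert fd in asyncore.socket_map    # the leaked entry
          del asyncore.socket_map[fd]         # manual cleanup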
  11. 02 Oct, 2003 1 commit
  12. 15 Sep, 2003 1 commit
  13. 13 Jun, 2003 1 commit
  14. 30 May, 2003 1 commit
  15. 24 Apr, 2003 1 commit
  16. 22 Apr, 2003 1 commit
  17. 24 Jan, 2003 1 commit
  18. 17 Jan, 2003 1 commit
  19. 14 Jan, 2003 1 commit
    • Rewrite pending() to handle input and output. · e3e3d8a9
      Jeremy Hylton authored
      pending() does reads and writes.  In the case of server startup, we may
      need to write out zeoVerify() messages.  Always check for read status,
      but only check for write status when there is output to send.  Only
      continue in this loop as long as there is data to read.
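      
      A sketch of the resulting select logic (poll_once is illustrative;
      the real loop also dispatches the ready events):
      
          import select
          import socket
      
          def poll_once(sock, have_output, timeout=1.0):
              # Always ask about readability, but ask about writability
              # only when output is pending: a socket is almost always
              # writable, so listing it unconditionally would make the
              # surrounding loop spin.
              wlist = [sock] if have_output else []
              r, w, _ = select.select([sock], wlist, [], timeout)
              return bool(r), bool(w)
      
          a, b = socket.socketpair()
          b.sendall(b"x")
          readable, writable = poll_once(a, have_output=False)
          assert readable and not writable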
  20. 07 Jan, 2003 1 commit
  21. 03 Jan, 2003 1 commit
  22. 13 Dec, 2002 1 commit
  23. 18 Nov, 2002 1 commit
  24. 29 Sep, 2002 1 commit
  25. 27 Sep, 2002 1 commit
    • In wait(), when there's no asyncore main loop, we called · 8cba5055
      Guido van Rossum authored
      asyncore.poll() with a timeout of 10 seconds.  Change this to a
      variable timeout starting at 1 msec and doubling until it reaches 1
      second.
      
      While debugging Win2k crashes in the check4ExtStorageThread test from
      ZODB/tests/MTStorage.py, Tim noticed that there were frequent 10
      second gaps in the log file where *nothing* happens.  These were caused
      by the following scenario.
      
      Suppose a ZEO client process has two threads using the same connection
      to the ZEO server, and there's no asyncore loop active.  T1 makes a
      synchronous call, and enters the wait() function.  Then T2 makes
      another synchronous call, and enters the wait() function.  At this
      point, both are blocked in the select() call in asyncore.poll(), with
      a timeout of 10 seconds (in the old version).  Now the replies for
      both calls arrive.  Say T1 wakes up.  The handle_read() method in
      smac.py calls self.recv(8096), so it gets both replies in its buffer,
      decodes both, and calls self.message_input() for both, which sticks
      both replies in the self.replies dict.  Now T1 finds its response, its
      wait() call returns with it.  But T2 is still stuck in
      asyncore.poll(): its select() call never woke up, and has to "sit out"
      the whole timeout of 10 seconds.  (Good thing I added timeouts to
      everything!  Or perhaps not, since it masked the problem.)
      
      One other condition must be satisfied before this becomes a disaster:
      T2 must have started a transaction, and all other threads must be
      waiting to start another transaction.  This is what I saw in the log.
      (Hmm, maybe a message should be logged when a thread is waiting to
      start a transaction this way.)
      
      In a real Zope application, this won't happen, because there's a
      centralized asyncore loop in a separate thread (probably the client's
      main thread) and the various threads would be waiting on the condition
      variable; whenever a reply is inserted in the replies dict, all
      threads are notified.  But in the test suite there's no asyncore loop,
      and I don't feel like adding one.  So the exponential backoff seems
      the easiest "solution".
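      
      A sketch of the backoff, with fake_poll standing in for
      asyncore.poll():
      
          import time
      
          def wait_for(reply_ready, poll):
              # Start at 1 msec and double up to 1 second, so a thread
              # whose reply was read on its behalf by another thread
              # re-checks quickly instead of sitting out a fixed
              # 10-second select().
              delay = 0.001
              while not reply_ready():
                  poll(delay)
                  delay = min(delay * 2, 1.0)
      
          remaining = [3]
          def fake_poll(timeout):
              time.sleep(timeout)
              remaining[0] -= 1
      
          wait_for(lambda: remaining[0] == 0, fake_poll)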
  26. 25 Sep, 2002 2 commits
    • Fix error handling logic for pickling errors. · 4a34bfaf
      Jeremy Hylton authored
      If an exception occurs while decoding a message, there is really
      nothing the server can do to recover.  If the message was a
      synchronous call, the client will wait forever for the reply.  The
      server can't send the reply, because it couldn't unpickle the message
      id.  Instead of trying to recover, just let the exception propagate up
      to asyncore where the connection will be closed.
      
      As a result, eliminate DecodingError and the special case in
      handle_error() that handled flags == None.
    • send_reply(): catch errors in encode() and send a ZRPCError exception · 6d40690c
      Guido van Rossum authored
      instead.
      
      return_error(): be more careful calling repr() on err_value.
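      
      The shape of the fix, with pickle standing in for ZEO's marshal
      module and a string pair standing in for the ZRPCError instance:
      
          import pickle
      
          def encode(msgid, flags, name, args):
              return pickle.dumps((msgid, flags, name, args))
      
          def send_reply(send, msgid, ret):
              # If the return value can't be encoded, send an error reply
              # the client *can* decode; otherwise the client's
              # synchronous call would wait forever for msgid's reply.
              try:
                  msg = encode(msgid, 0, ".reply", ret)
              except Exception as err:
                  msg = encode(msgid, 0, ".reply",
                               ("ZRPCError", "can't encode %r" % (err,)))
              send(msg)
      
          sent = []
          send_reply(sent.append, 1, lambda: None)  # lambdas don't pickle
          assert b"ZRPCError" in sent[0]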
  27. 24 Sep, 2002 1 commit
  28. 23 Sep, 2002 1 commit
    • Various repairs and nits: · 5e977514
      Guido van Rossum authored
      - Change pending() to use select.select() instead of select.poll(), so
        it'll work on Windows.
      
      - Clarify comment to say that only Exceptions are propagated.
      
      - Change some private variables to public (everything else is public).
      
      - Remove XXX comment about logging at INFO level (we already do that
        now :-).
  29. 20 Sep, 2002 1 commit
    • I set out to make wait=1 work for fallback connections, i.e. the · 24afe7ac
      Guido van Rossum authored
      ClientStorage constructor called with both wait=1 and
      read_only_fallback=1 should return, indicating its readiness, when a
      read-only connection was made.  This is done by calling
      connect(sync=1).  Previously this waited for the ConnectThread to
      finish, but that thread doesn't finish until it's made a read-write
      connection, so a different mechanism is needed.
      
      I ended up doing a major overhaul of the interfaces between
      ClientStorage, ConnectionManager, ConnectThread/ConnectWrapper, and
      even ManagedConnection.  Changes:
      
      ClientStorage.py:
      
        ClientStorage:
      
        - testConnection() now returns just the preferred flag; stubs are
          cheap and I like to have the notifyConnected() signature be the
          same for clients and servers.
      
        - notifyConnected() now takes a connection (to match the signature
          of this method in StorageServer), and creates a new stub.  It also
          takes care of the reconnect business if the client was already
          connected, rather than the ClientManager.  It stores the
          connection as self._connection so it can close the previous one.
          This is also reset by notifyDisconnected().
      
      zrpc/client.py:
      
        ConnectionManager:
      
        - Changed self.thread_lock into a condition variable.  It now also
          protects self.connection.  The condition is notified when
          self.connection is set to a non-None value in connect_done();
          connect(sync=1) waits for it.  The self.connected variable is no
          more; we test "self.connection is not None" instead.  (A sketch
          of this pattern follows this commit message.)
      
        - Tried to make close() reentrant.  (There's a trick: you can't set
          self.connection to None yourself; conn.close() ends up calling
          close_conn(), which does this.)
      
        - Renamed notify_closed() to close_conn(), for symmetry with the
          StorageServer API.
      
        - Added an is_connected() method so ConnectThread.try_connect()
          doesn't have to dig inside the manager's guts to find out if the
          manager is connected (important for the disposition of fallback
          wrappers).
      
        ConnectThread and ConnectWrapper:
      
        - Follow above changes in the ClientStorage and ConnectionManager
          APIs: don't close the manager's connection when reconnecting, but
          leave that up to notifyConnected(); ConnectWrapper no longer
          manages the stub.
      
        - ConnectWrapper sets self.sock to None once it's created a
          ManagedConnection -- from there on the connection is in charge of
          closing the socket.
      
      zrpc/connection.py:
      
        ManagedServerConnection:
      
        - Changed the order in which close() calls things; super_close()
          should be last.
      
        ManagedConnection:
      
        - Ditto, and call the manager's close_conn() instead of
          notify_closed().
      
      tests/testZEO.py:
      
        - In checkReconnectSwitch(), we can now open the client storage with
          wait=1 and read_only_fallback=1.
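      
      The core of the new synchronization, sketched with hypothetical
      names (the real code also handles timeouts and reconnects):
      
          import threading
      
          class Manager:
              def __init__(self):
                  self.cond = threading.Condition()
                  self.connection = None
      
              def connect_done(self, conn):
                  # Called from the connect thread for any usable
                  # connection, including a read-only fallback.
                  with self.cond:
                      self.connection = conn
                      self.cond.notify_all()
      
              def connect(self, sync=0):
                  # sync=1 waits for a connection, not for the connect
                  # thread, which keeps hunting for read-write.
                  if sync:
                      with self.cond:
                          while self.connection is None:
                              self.cond.wait()
      
          mgr = Manager()
          threading.Timer(0.01, mgr.connect_done, ("read-only",)).start()
          mgr.connect(sync=1)
          assert mgr.connection == "read-only"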
  30. 19 Sep, 2002 2 commits
    • The mystery of the Win98 hangs in the checkReconnectSwitch() test · da28b620
      Guido van Rossum authored
      (which persisted until I added an is_connected() test to
      testConnection()) is solved.
      
      After the ConnectThread has switched the client to the new, read-write
      connection, it closes the read-only connection(s) that it was saving
      up in case there was no read-write connection.  But closing a
      ManagedConnection calls notify_closed() on the manager, which
      disconnects the manager and the client from its brand new read-write
      connection.  The mistake here is that this should only be done when
      closing the manager's current connection!
      
      The fix was to add an argument to notify_closed() that passes the
      connection object being closed; notify_closed() returns without doing
      a thing when that is not the current connection.  (A sketch follows
      this commit message.)
      
      I presume this didn't happen on Linux because there the sockets
      happened to connect in a different order, and there was no read-only
      connection to close yet (just a socket trying to connect).
      
      I'm taking out the previous "fix" to ClientStorage, because that only
      masked the problem in this relatively simple test case.  The problem
      could still occur when both a read-only and a read-write server are up
      initially, and the read-only server connects first; once the
      read-write server connects, the read-write connection is installed,
      and then the saved read-only connection is closed which would again
      mistakenly disconnect the read-write connection.
      
      Another (related) fix is not to call self.mgr.notify_closed() but to
      call self.mgr.connection.close() when reconnecting.  (Hmm, I wonder if
      it would make more sense to have an explicit reconnect callback to the
      manager and the client?  Later.)
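      
      The guard itself is tiny; plain objects stand in for connections
      here:
      
          class Manager:
              def __init__(self):
                  self.connection = None
      
              def notify_closed(self, conn):
                  # Ignore a stale (e.g. saved read-only) connection
                  # closing; only the current connection's close should
                  # disconnect the manager.
                  if conn is not self.connection:
                      return
                  self.connection = None
      
          mgr = Manager()
          current, stale = object(), object()
          mgr.connection = current
          mgr.notify_closed(stale)       # a saved fallback being closed
          assert mgr.connection is current
          mgr.notify_closed(current)
          assert mgr.connection is None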
    • Define __str__ as an alias for __repr__. Otherwise __str__ will get · b0e16c71
      Guido van Rossum authored
      the socket's __str__ due to a __getattr__ method in asyncore's
      dispatcher base class that everybody hates but nobody dares take away.
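      
      The alias is one line; under the classic classes of that era's
      Python 2, str() on a dispatcher otherwise fell through __getattr__
      to the wrapped socket:
      
          import asyncore
      
          class Connection(asyncore.dispatcher):
              def __repr__(self):
                  return "<Connection at 0x%x>" % id(self)
      
              # Keep str(conn) in step with repr(conn) instead of
              # inheriting whatever __getattr__ digs out of the socket.
              __str__ = __repr__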
  31. 17 Sep, 2002 5 commits
  32. 16 Sep, 2002 2 commits