- 28 Jul, 2016 4 commits
-
-
Jim Fulton authored
This is experimental in that passing credentials will cause connections to an ordinary ZEO server to fail, but it facilitates experimentation with custom ZEO servers. Doing this with custom ZEO clients would have been awkward due to the many levels of composition involved. In the future, we expect to support server security plugins that consume credentials for authentication (typically over SSL). Note that credentials are opaque to ZEO: they can be any object with a true value. The client merely passes them to the server, which will someday pass them to a plugin.
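Since credentials are opaque and merely forwarded, the client-side behavior can be sketched roughly as below. This is an illustration only: the function name and argument layout are invented for the sketch, not ZEO's actual wire protocol.

```python
def register_args(storage_id, read_only, credentials=None):
    """Sketch: build the argument list for the client's register call.

    Credentials are opaque here: any object with a true value is
    forwarded unchanged; a false value means "no credentials".
    """
    args = [storage_id, read_only]
    if credentials:
        # An ordinary server would reject this extra argument, which is
        # why passing credentials currently only works with custom servers.
        args.append(credentials)
    return args
```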
-
Jim Fulton authored
Especially splitting the method used to send invalidations and info, since we no longer send both in the same call.
-
Jim Fulton authored
If a transaction only adds objects (like the transaction that ZODB.DB uses to create the root objects), we still need to send invalidations so clients see the new tid, because of MVCC.
-
-
- 21 Jul, 2016 9 commits
-
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
- 20 Jul, 2016 2 commits
-
-
Jim Fulton authored
-
Jim Fulton authored
That was causing test failures when using the multi-threaded server.
-
- 19 Jul, 2016 7 commits
-
-
Jim Fulton authored
Test cleanup 2016 7 8
-
Jim Fulton authored
Rehabilitate mtacceptor
-
Jim Fulton authored
Simplify server commit lock management
-
Jim Fulton authored
Conflicts: src/ZEO/StorageServer.py
-
Jim Fulton authored
Fixed: SSL clients of servers with signed certs didn't load default certs and were unable to connect.
-
Jim Fulton authored
Asyncio cleanups
-
Jim Fulton authored
Optimizations, featuring prefetch
-
- 18 Jul, 2016 7 commits
-
-
Jim Fulton authored
(There seems to be a resource issue with Python 2 that I might run down some time.)
-
Jim Fulton authored
Mainly for tests. Also, add a constructor option to use a custom acceptor.
-
Jim Fulton authored
The new API will be in Python 3.6 and, hopefully, in uvloop soon. When binding to port 0, make sure we set our address to a 2-tuple, because IPv6 4-tuples confuse ClientStorage.
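The 2-tuple fix can be sketched as a small normalization step. The helper name is hypothetical; ZEO's actual code differs.

```python
def normalized_address(sockname):
    """Reduce a bound socket name to the (host, port) 2-tuple.

    For IPv6 sockets, getsockname() returns a 4-tuple
    (host, port, flowinfo, scope_id); ClientStorage expects a
    2-tuple, so keep only the first two fields.
    """
    return sockname[0], sockname[1]
```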
-
Jim Fulton authored
Fixed: SSL clients of servers with signed certs didn't load default certs and were unable to connect.
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
- 17 Jul, 2016 2 commits
-
-
Jim Fulton authored
Bye bye Promise. Also fixed a comment.
-
Jim Fulton authored
-
- 16 Jul, 2016 6 commits
-
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
-
Jim Fulton authored
Useful for playing with ssl configurations and generally for being able to inspect server state in development and testing. This only makes sense when running the server in a thread, of course.
-
Jim Fulton authored
Useful for playing with ssl configurations.
-
Jim Fulton authored
Also added a comment with a reminder for how to create self-signed certs. Useful for playing with ssl configurations.
-
- 14 Jul, 2016 2 commits
-
-
Jim Fulton authored
-
Jim Fulton authored
IOW, if there's an outstanding call for a given oid and tid and another call is made, the second call will use the result of the outstanding call rather than making another call to the server.
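A minimal sketch of this kind of in-flight call coalescing, using asyncio futures keyed by (oid, tid). The class and method names are illustrative; ZEO's actual client code is structured differently.

```python
import asyncio

class LoadCoalescer:
    """Coalesce concurrent loads of the same (oid, tid) into one server call."""

    def __init__(self, fetch):
        self._fetch = fetch      # coroutine function: fetch(oid, tid) -> data
        self._inflight = {}      # (oid, tid) -> outstanding future

    async def load(self, oid, tid):
        key = (oid, tid)
        fut = self._inflight.get(key)
        if fut is None:
            # First caller: start the real server call and share its future.
            fut = asyncio.ensure_future(self._fetch(oid, tid))
            self._inflight[key] = fut
            # Forget the future once it completes, so later loads refetch.
            fut.add_done_callback(lambda f, k=key: self._inflight.pop(k, None))
        # Later callers await the same outstanding future instead of
        # making another call to the server.
        return await fut
```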
-
- 09 Jul, 2016 1 commit
-
-
Jim Fulton authored
In working on the next iteration of the lock manager to provide object-level locking, I realized:

- It was saner to let all waiters try to get locks when locks are released, at least in the more complicated logic to follow.
- We really do almost certainly want a multi-threaded server, even if it doesn't run faster (still an open question), because otherwise big commits will completely block loads.
- We don't really want to hold the lock-manager lock while calling the callback. Again, this really only matters if we have a multi-threaded server, but it also feels like a matter of hygiene. :)

I decided to rework this branch:

- Don't hold the lock-manager internal lock when calling the callback.
- When releasing the lock, use call_soon_threadsafe to let all waiters have a chance to get the lock.
- A little bit of factoring to DRY. (This factoring will be much more useful in the follow-on branch.)

This rework restores the workability of the thread-per-client model.
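The release path described above, holding no internal lock while running callbacks and waking all waiters via call_soon_threadsafe, might look roughly like this. This is a sketch under assumed names, not the actual StorageServer code.

```python
import asyncio
import threading

class CommitLock:
    """Sketch of a commit lock whose release wakes waiters on the event loop.

    The internal mutex is never held while running a waiter's callback:
    release() collects the waiters under the mutex, then schedules each
    of them on the loop, where they all race to reacquire the lock.
    """

    def __init__(self, loop):
        self._loop = loop
        self._mutex = threading.Lock()   # lock-manager internal lock
        self._held = False
        self._waiters = []               # callbacks waiting for the lock

    def acquire(self, callback):
        with self._mutex:
            if not self._held:
                self._held = True
                acquired = True
            else:
                self._waiters.append(callback)
                acquired = False
        if acquired:
            callback()                   # run without holding the mutex

    def release(self):
        with self._mutex:
            self._held = False
            waiters, self._waiters = self._waiters, []
        # Give *all* waiters a chance to get the lock, from any thread.
        for cb in waiters:
            self._loop.call_soon_threadsafe(self.acquire, cb)
```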
-