Commit 949f0129 authored by Jason Madden

Update what's new doc and some docstrings.

Also simplify the Timeout/if timer idiom.
parent 8f275765
@@ -4,7 +4,7 @@
 Detailed information on what has changed is available in the
 :doc:`changelog`. This document summarizes the most important changes
-since gevent 1.0.3.
+since gevent 1.0.2.

 Platform Support
 ================
@@ -62,6 +62,10 @@ through 2.5.1, 2.6.0, 2.6.1, 4.0.0.
 when not acquired (which should be the typical case). The
 ``c-ares`` package has not been audited for this issue.

+.. note:: PyPy 4.0.0 on Linux is known to *rarely* (once per 24 hours)
+          encounter crashes when running heavily loaded, heavily
+          networked gevent programs. The exact cause is unknown and is
+          being tracked in :issue:`677`.
+
 .. _cffi 1.3.0: https://bitbucket.org/cffi/cffi/src/ad3140a30a7b0ca912185ef500546a9fb5525ece/doc/source/whatsnew.rst?at=default
 .. _1.2.0: https://cffi.readthedocs.org/en/latest/whatsnew.html#v1-2-0
@@ -71,7 +75,7 @@ Improved subprocess support
 ===========================

 In gevent 1.0, support and monkey patching for the :mod:`subprocess`
-module was added. Monkey patching was off by default.
+module was added. Monkey patching this module was off by default.

 In 1.1, monkey patching ``subprocess`` is on by default due to
 improvements in handling child processes and requirements by
@@ -127,26 +131,27 @@ include:
 - A gevent-friendly version of :obj:`select.poll` (on platforms that
   implement it).
-- :class:`gevent.fileobject.FileObjectPosix` uses the :mod:`io`
-  package on both Python 2 and Python 3, increasing its functionality
+- :class:`~gevent.fileobject.FileObjectPosix` uses the :mod:`io`
+  package on both Python 2 and Python 3, increasing its functionality,
   correctness, and performance. (Previously, the Python 2 implementation used the
-  undocumented :class:`socket._fileobject`.)
+  undocumented class :class:`socket._fileobject`.)
 - Locks raise the same error as standard library locks if they are
   over-released.
 - :meth:`ThreadPool.apply <gevent.threadpool.ThreadPool.apply>` can
   now be used recursively.
-- The various pool objects (:class:`gevent.pool.Group`,
-  :class:`gevent.pool.Pool`, :class:`gevent.threadpool.ThreadPool`)
-  support the same improved APIs: ``imap`` and ``imap_unordered``
-  accept multiple iterables, ``apply`` raises any exception raised by
-  the target callable, etc.
+- The various pool objects (:class:`~gevent.pool.Group`,
+  :class:`~gevent.pool.Pool`, :class:`~gevent.threadpool.ThreadPool`)
+  support the same improved APIs: :meth:`imap <gevent.pool.Group.imap>`
+  and :meth:`imap_unordered <gevent.pool.Group.imap_unordered>` accept
+  multiple iterables, :meth:`apply <gevent.pool.Group.apply>` raises any exception raised by the
+  target callable, etc.
 - Killing a greenlet (with :func:`gevent.kill` or
   :meth:`Greenlet.kill <gevent.Greenlet.kill>`) before it is actually started and
   switched to now prevents the greenlet from ever running, instead of
   raising an exception when it is later switched to. Attempting to
   spawn a greenlet with an invalid target now immediately produces
-  a useful TypeError, instead of spawning a greenlet that would
-  immediately die the first time it was switched to.
+  a useful :exc:`TypeError`, instead of spawning a greenlet that would
+  (usually) immediately die the first time it was switched to.
 - Almost anywhere that gevent raises an exception from one greenlet to
   another (e.g., :meth:`Greenlet.get <gevent.Greenlet.get>`),
   the original traceback is preserved and raised.
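The pool improvements in the list above are easiest to see in a tiny script; this is a minimal sketch, and the worker callables and values are illustrative, not from the diff:

    from gevent.pool import Pool

    pool = Pool(2)

    # imap and imap_unordered now accept multiple iterables, like the built-in map.
    for total in pool.imap(lambda a, b: a + b, [1, 2, 3], [10, 20, 30]):
        print(total)        # 11, 22, 33; imap preserves order

    # apply now raises whatever exception the target callable raised.
    try:
        pool.apply(int, ('not a number',))
    except ValueError as ex:
        print('apply propagated:', ex)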
@@ -195,22 +200,25 @@ reduce the cases of undocumented or non-standard behaviour.
 :class:`gevent.pywsgi.WSGIServer` close the client socket.

 In gevent 1.0, the client socket was left to the mercies of the
-garbage collector. In the typical case, the socket would still
-be closed as soon as the request handler returned due to
-CPython's reference-counting garbage collector. But this meant
-that a reference cycle could leave a socket dangling open for
-an indeterminate amount of time, and a reference leak would
-result in it never being closed. It also meant that Python 3
-would produce ResourceWarnings, and PyPy (which, unlike
-CPython, `does not use a reference-counted GC`_) would only close
-(and flush) the socket at an arbitrary time in the future.
+garbage collector (this was undocumented). In the typical case, the
+socket would still be closed as soon as the request handler returned
+due to CPython's reference-counting garbage collector. But this
+meant that a reference cycle could leave a socket dangling open for
+an indeterminate amount of time, and a reference leak would result
+in it never being closed. It also meant that Python 3 would produce
+ResourceWarnings, and PyPy (which, unlike CPython, `does not use a
+reference-counted GC`_) would only close (and flush!) the socket at
+an arbitrary time in the future.

-If your application relied on the socket not being closed when
-the request handler returned (e.g., you spawned a greenlet that
+If your application relied on the socket not being closed when the
+request handler returned (e.g., you spawned a greenlet that
 continued to use the socket) you will need to keep the request
-handler from returning (e.g., ``join`` the greenlet) or
-subclass the server to prevent it from closing the socket; the
-former approach is strongly preferred.
+handler from returning (e.g., ``join`` the greenlet). If for some
+reason that isn't possible, you may subclass the server to prevent
+it from closing the socket, at which point the responsibility for
+closing and flushing the socket is now yours; *but* the former
+approach is strongly preferred, and subclassing the server for this
+reason may not be supported in the future.

 .. _does not use a reference-counted GC: http://doc.pypy.org/en/latest/cpython_differences.html#differences-related-to-garbage-collection-strategies
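A sketch of the preferred pattern described above (the handler body, the spawned task, and the listen address are made up for illustration): keep the request handler from returning until nothing else needs the connection.

    import gevent
    from gevent.pywsgi import WSGIServer

    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        # Under 1.1 the server closes the client socket when this handler
        # returns, so join any helper greenlet that still needs it first.
        helper = gevent.spawn(gevent.sleep, 0.1)   # stand-in for real work
        helper.join()
        return [b'done\n']

    if __name__ == '__main__':
        WSGIServer(('127.0.0.1', 8080), application).serve_forever()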
@@ -218,12 +226,13 @@ reduce the cases of undocumented or non-standard behaviour.
 status line set by the application can be encoded in the ISO-8859-1
 (Latin-1) charset and are of the *native string type*.

-Under gevent 1.0, non-``bytes`` headers (that is, ``unicode`` since
+Under gevent 1.0, non-``bytes`` headers (that is, ``unicode``, since
 gevent 1.0 only ran on Python 2) were encoded according to the
 current default Python encoding. In some cases, this could allow
 non-Latin-1 characters to be sent in the headers, but this violated
 the HTTP specification, and their interpretation by the recipient is
-unknown. Now, a :exc:`UnicodeError` will be raised.
+unknown. In other cases, gevent could send malformed partial HTTP
+responses. Now, a :exc:`UnicodeError` will be raised proactively.

 Most applications that adhered to the WSGI PEP, :pep:`3333`, will not
 need to make any changes. See :issue:`614` for more discussion.
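A hedged illustration of the stricter header handling (the header name and value are invented): when served by gevent.pywsgi 1.1, a header value that cannot be encoded in Latin-1 leads to a UnicodeError instead of a malformed response.

    def application(environ, start_response):
        # u'\u2603' (SNOWMAN) cannot be encoded in ISO-8859-1, so gevent 1.1
        # rejects this response with a UnicodeError rather than emitting
        # whatever the process-wide default encoding happens to produce.
        start_response('200 OK', [('X-Greeting', u'\u2603')])
        return [b'hello']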
...
@@ -81,7 +81,7 @@ class Event(object):
         switch = getcurrent().switch
         self.rawlink(switch)
         try:
-            timer = Timeout.start_new(timeout) if timeout is not None else None
+            timer = Timeout._start_new_or_dummy(timeout)
             try:
                 try:
                     result = self.hub.switch()
@@ -91,7 +91,6 @@ class Event(object):
                     if ex is not timer:
                         raise
             finally:
-                if timer is not None:
-                    timer.cancel()
+                timer.cancel()
         finally:
             self.unlink(switch)
@@ -280,7 +279,7 @@ class AsyncResult(object):
         switch = getcurrent().switch
         self.rawlink(switch)
         try:
-            timer = Timeout.start_new(timeout)
+            timer = Timeout._start_new_or_dummy(timeout)
             try:
                 result = self.hub.switch()
                 if result is not self:
@@ -328,7 +327,7 @@ class AsyncResult(object):
         switch = getcurrent().switch
         self.rawlink(switch)
         try:
-            timer = Timeout.start_new(timeout)
+            timer = Timeout._start_new_or_dummy(timeout)
             try:
                 result = self.hub.switch()
                 if result is not self:
...
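The call sites above are internal to Event.wait and AsyncResult.wait/get, but their visible behaviour is the usual timeout handling; a small sketch with arbitrary delays and values:

    import gevent
    from gevent.event import AsyncResult, Event

    result = AsyncResult()
    try:
        result.get(timeout=0.01)          # nothing has been set, so this times out
    except gevent.Timeout:
        print('no result yet')

    gevent.spawn_later(0.01, result.set, 42)
    print(result.get())                   # blocks briefly, then prints 42

    flag = Event()
    flag.wait(timeout=0.01)               # returns quietly; the flag was never set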
@@ -446,13 +446,12 @@ class Greenlet(greenlet):
         switch = getcurrent().switch
         self.rawlink(switch)
         try:
-            t = Timeout.start_new(timeout) if timeout is not None else None
+            t = Timeout._start_new_or_dummy(timeout)
             try:
                 result = self.parent.switch()
                 if result is not self:
                     raise InvalidSwitchError('Invalid switch into Greenlet.get(): %r' % (result, ))
             finally:
-                if t is not None:
-                    t.cancel()
+                t.cancel()
         except:
             # unlinking in 'except' instead of finally is an optimization:
@@ -478,7 +477,7 @@ class Greenlet(greenlet):
         switch = getcurrent().switch
         self.rawlink(switch)
         try:
-            t = Timeout.start_new(timeout)
+            t = Timeout._start_new_or_dummy(timeout)
             try:
                 result = self.parent.switch()
                 if result is not self:
@@ -679,7 +678,7 @@ def killall(greenlets, exception=GreenletExit, block=True, timeout=None):
     if block:
         waiter = Waiter()
         loop.run_callback(_killall3, greenlets, exception, waiter)
-        t = Timeout.start_new(timeout)
+        t = Timeout._start_new_or_dummy(timeout)
         try:
             alive = waiter.get()
             if alive:
...
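These hunks touch Greenlet.get, Greenlet.join and killall; from the caller's side they behave like this (a sketch; the failing function and sleep times are illustrative):

    import gevent

    def fails():
        raise RuntimeError('boom')

    g = gevent.spawn(fails)
    try:
        g.get()                 # re-raises the RuntimeError from inside the greenlet
    except RuntimeError as ex:
        print('got:', ex)

    sleepers = [gevent.spawn(gevent.sleep, 10) for _ in range(3)]
    gevent.killall(sleepers, timeout=1)    # block at most a second for them to die
    print(all(s.dead for s in sleepers))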
@@ -228,7 +228,7 @@ class GroupMappingMixin(object):
     # - self.spawn(func, *args, **kwargs): a function that runs `func` with `args`
     # and `awargs`, potentially asynchronously. Return a value with a `get` method that
-    # blocks until the results of func are available
+    # blocks until the results of func are available, and a `link` method.
     # - self._apply_immediately(): should the function passed to apply be called immediately,
     # synchronously?
@@ -240,16 +240,28 @@ class GroupMappingMixin(object):
     # asynchronously, possibly synchronously.

     def apply_cb(self, func, args=None, kwds=None, callback=None):
+        """
+        :meth:`apply` the given *func*, and, if a *callback* is given, run it with the
+        results of *func* (unless an exception was raised.)
+
+        The *callback* may be called synchronously or asynchronously. If called
+        asynchronously, it will not be tracked by this group. (:class:`Group` and :class:`Pool`
+        call it asynchronously in a new greenlet; :class:`~gevent.threadpool.ThreadPool` calls
+        it synchronously in the current greenlet.)
+        """
         result = self.apply(func, args, kwds)
         if callback is not None:
             self._apply_async_cb_spawn(callback, result)
         return result

     def apply_async(self, func, args=None, kwds=None, callback=None):
-        """A variant of the apply() method which returns a Greenlet object.
+        """
+        A variant of the apply() method which returns a Greenlet object.

-        If callback is specified then it should be a callable which accepts a single argument. When the result becomes ready
-        callback is applied to it (unless the call failed)."""
+        If *callback* is specified, then it should be a callable which
+        accepts a single argument. When the result becomes ready
+        callback is applied to it (unless the call failed).
+        """
         if args is None:
             args = ()
         if kwds is None:
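A sketch of how the callback behaviour documented above can be used; the callback and arguments are illustrative:

    import gevent
    from gevent.pool import Pool

    pool = Pool(2)

    def on_result(value):
        print('callback saw', value)

    # apply_cb runs the function, then hands the result to the callback; for
    # Group and Pool the callback itself runs asynchronously in a new greenlet.
    print('apply_cb returned', pool.apply_cb(pow, (2, 10), callback=on_result))

    # apply_async returns a Greenlet immediately instead of blocking.
    async_result = pool.apply_async(pow, (3, 3), callback=on_result)
    async_result.join()

    gevent.sleep(0)   # yield so any callback greenlets spawned above can run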
@@ -274,6 +286,9 @@ class GroupMappingMixin(object):
         if the current greenlet or thread is already one that was
         spawned by this pool, the pool may choose to immediately run
         the `func` synchronously.
+
+        Any exception ``func`` raises will be propagated to the caller of ``apply`` (that is,
+        this method will raise the exception that ``func`` raised).
         """
         if args is None:
             args = ()
@@ -370,10 +385,20 @@ class GroupMappingMixin(object):
 class Group(GroupMappingMixin):
-    """Maintain a group of greenlets that are still running.
+    """
+    Maintain a group of greenlets that are still running, without
+    limiting their number.

     Links to each item and removes it upon notification.
+
+    Groups can be iterated to discover what greenlets they are tracking,
+    they can be tested to see if they contain a greenlet, and they know the
+    number (len) of greenlets they are tracking. If they are not tracking any
+    greenlets, they are False in a boolean context.
     """

+    #: The type of Greenlet object we will :meth:`spawn`. This can be changed
+    #: on an instance or in a subclass.
     greenlet_class = Greenlet

     def __init__(self, *args):
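The container-like behaviour spelled out in the new Group docstring, as a quick sketch (counts and sleep times are arbitrary):

    import gevent
    from gevent.pool import Group

    group = Group()
    workers = [group.spawn(gevent.sleep, 0.01) for _ in range(3)]

    print(len(group))             # 3 greenlets are being tracked
    print(workers[0] in group)    # True
    print(bool(group))            # True while anything is tracked

    group.join()                  # wait for the group to become empty
    print(len(group), bool(group))    # 0 False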
@@ -391,15 +416,31 @@ class Group(GroupMappingMixin):
         return '<%s at 0x%x %s>' % (self.__class__.__name__, id(self), self.greenlets)

     def __len__(self):
+        """
+        Answer how many greenlets we are tracking. Note that if we are empty,
+        we are False in a boolean context.
+        """
         return len(self.greenlets)

     def __contains__(self, item):
+        """
+        Answer if we are tracking the given greenlet.
+        """
         return item in self.greenlets

     def __iter__(self):
+        """
+        Iterate across all the greenlets we are tracking, in no particular order.
+        """
         return iter(self.greenlets)

     def add(self, greenlet):
+        """
+        Begin tracking the greenlet.
+
+        If this group is :meth:`full`, then this method may block
+        until it is possible to track the greenlet.
+        """
         try:
             rawlink = greenlet.rawlink
         except AttributeError:
@@ -416,6 +457,9 @@ class Group(GroupMappingMixin):
             self._empty_event.set()

     def discard(self, greenlet):
+        """
+        Stop tracking the greenlet.
+        """
         self._discard(greenlet)
         try:
             unlink = greenlet.unlink
@@ -449,6 +493,11 @@ class Group(GroupMappingMixin):
         # self.add = RaiseException("This %s has been closed" % self.__class__.__name__)

     def join(self, timeout=None, raise_error=False):
+        """
+        Wait for the group to become empty at least once. Note that by the
+        time the waiting code runs again, some other greenlet may have been
+        added.
+        """
         if raise_error:
             greenlets = self.greenlets.copy()
             self._empty_event.wait(timeout=timeout)
@@ -461,7 +510,10 @@ class Group(GroupMappingMixin):
             self._empty_event.wait(timeout=timeout)

     def kill(self, exception=GreenletExit, block=True, timeout=None):
-        timer = Timeout.start_new(timeout)
+        """
+        Kill all greenlets being tracked by this group.
+        """
+        timer = Timeout._start_new_or_dummy(timeout)
         try:
             try:
                 while self.greenlets:
@@ -484,6 +536,10 @@ class Group(GroupMappingMixin):
             timer.cancel()

     def killone(self, greenlet, exception=GreenletExit, block=True, timeout=None):
+        """
+        If the given *greenlet* is running and being tracked by this group,
+        kill it.
+        """
         if greenlet not in self.dying and greenlet in self.greenlets:
             greenlet.kill(exception, block=False)
             self.dying.add(greenlet)
@@ -491,9 +547,21 @@ class Group(GroupMappingMixin):
             greenlet.join(timeout)

     def full(self):
+        """
+        Return a value indicating whether this group can track more greenlets.
+
+        In this implementation, because there are no limits on the number of
+        tracked greenlets, this will always return a ``False`` value.
+        """
         return False

-    def wait_available(self):
+    def wait_available(self, timeout=None):
+        """
+        Block until it is possible to :meth:`spawn` a new greenlet.
+
+        In this implementation, because there are no limits on the number
+        of tracked greenlets, this will always return immediately.
+        """
         pass

     # MappingMixin methods
@@ -539,7 +607,8 @@ class Pool(Group):
         * ``None`` (the default) places no limit on the number of
           greenlets. This is useful when you need to track, but not limit,
-          greenlets, as with :class:`gevent.pywsgi.WSGIServer`
+          greenlets, as with :class:`gevent.pywsgi.WSGIServer`. A :class:`Group`
+          may be a more efficient way to achieve the same effect.
         * ``0`` creates a pool that can never have any active greenlets. Attempting
           to spawn in this pool will block forever. This is only useful
           if an application uses :meth:`wait_available` with a timeout and checks
@@ -583,7 +652,7 @@ class Pool(Group):
     def free_count(self):
         """
-        Return a number indicating approximately how many more members
+        Return a number indicating *approximately* how many more members
         can be added to this pool.
         """
         if self.size is None:
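A sketch of how the size-related Pool APIs fit together (the size and sleep times are arbitrary):

    import gevent
    from gevent.pool import Pool

    pool = Pool(2)                          # at most two greenlets at once
    print(pool.free_count())                # 2

    pool.spawn(gevent.sleep, 0.05)
    pool.spawn(gevent.sleep, 0.05)
    print(pool.full(), pool.free_count())   # True 0

    pool.wait_available()                   # blocks until a slot frees up
    print(pool.free_count() > 0)            # True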
@@ -591,6 +660,11 @@ class Pool(Group):
         return max(0, self.size - len(self))

     def add(self, greenlet):
+        """
+        Begin tracking the given greenlet, blocking until space is available.
+
+        .. seealso:: :meth:`Group.add`
+        """
         self._semaphore.acquire()
         try:
             Group.add(self, greenlet)
...
@@ -207,7 +207,7 @@ class Queue(object):
         elif block:
             waiter = ItemWaiter(item, self)
             self.putters.append(waiter)
-            timeout = Timeout.start_new(timeout, Full) if timeout is not None else None
+            timeout = Timeout._start_new_or_dummy(timeout, Full)
             try:
                 if self.getters:
                     self._schedule_unlock()
@@ -215,7 +215,6 @@ class Queue(object):
                 if result is not waiter:
                     raise InvalidSwitchError("Invalid switch into Queue.put: %r" % (result, ))
             finally:
-                if timeout is not None:
-                    timeout.cancel()
+                timeout.cancel()
                 _safe_remove(self.putters, waiter)
         else:
@@ -252,7 +251,7 @@ class Queue(object):
             raise Empty()
         waiter = Waiter()
-        timeout = Timeout.start_new(timeout, Empty) if timeout is not None else None
+        timeout = Timeout._start_new_or_dummy(timeout, Empty)
         try:
             self.getters.append(waiter)
             if self.putters:
@@ -262,7 +261,6 @@ class Queue(object):
                 raise InvalidSwitchError('Invalid switch into Queue.get: %r' % (result, ))
             return method()
         finally:
-            if timeout is not None:
-                timeout.cancel()
+            timeout.cancel()
             _safe_remove(self.getters, waiter)
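The user-visible effect of the timeouts these hunks manage, as a minimal sketch (the queue size and timeout values are arbitrary):

    from gevent.queue import Queue, Full, Empty

    q = Queue(maxsize=1)
    q.put('first')

    try:
        q.put('second', timeout=0.01)   # the queue is full, so this raises Full
    except Full:
        print('put timed out')

    print(q.get())                      # 'first'

    try:
        q.get(timeout=0.01)             # now empty, so this raises Empty
    except Empty:
        print('get timed out')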
@@ -520,7 +518,7 @@ class Channel(object):
         waiter = Waiter()
         item = (item, waiter)
         self.putters.append(item)
-        timeout = Timeout.start_new(timeout, Full) if timeout is not None else None
+        timeout = Timeout._start_new_or_dummy(timeout, Full)
         try:
             if self.getters:
                 self._schedule_unlock()
@@ -531,7 +529,6 @@ class Channel(object):
             _safe_remove(self.putters, item)
             raise
         finally:
-            if timeout is not None:
-                timeout.cancel()
+            timeout.cancel()

     def put_nowait(self, item):
@@ -548,7 +545,7 @@ class Channel(object):
             timeout = 0
         waiter = Waiter()
-        timeout = Timeout.start_new(timeout, Empty)
+        timeout = Timeout._start_new_or_dummy(timeout, Empty)
         try:
             self.getters.append(waiter)
             if self.putters:
...
@@ -15,6 +15,10 @@ __all__ = ['ThreadPool',

 class ThreadPool(GroupMappingMixin):
+    """
+    .. note:: The method :meth:`apply_async` will always return a new
+       greenlet, bypassing the threadpool entirely.
+    """

     def __init__(self, maxsize, hub=None):
         if hub is None:
...
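The note added above has a practical consequence worth showing (the worker callable is illustrative): apply blocks for the result, while apply_async hands back a Greenlet right away.

    from gevent.threadpool import ThreadPool

    pool = ThreadPool(1)

    print(pool.apply(pow, (2, 8)))      # 256, computed in the worker thread

    # apply_async immediately returns a Greenlet that submits the work to the
    # pool and collects its result.
    g = pool.apply_async(pow, (2, 16))
    print(g.get())                      # 65536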
@@ -23,7 +23,8 @@ __all__ = ['Timeout',

 class _FakeTimer(object):
     # An object that mimics the API of get_hub().loop.timer, but
     # without allocating any native resources. This is useful for timeouts
-    # that will never expire
+    # that will never expire.
+    # Also partially mimics the API of Timeout itself for use in _start_new_or_dummy
     pending = False
     active = False
@@ -33,6 +34,9 @@ class _FakeTimer(object):
     def stop(self):
         return

+    def cancel(self):
+        return
+
 _FakeTimer = _FakeTimer()
@@ -109,6 +113,7 @@ class Timeout(BaseException):
     """

     def __init__(self, seconds=None, exception=None, ref=True, priority=-1):
+        BaseException.__init__(self)
         self.seconds = seconds
         self.exception = exception
         if seconds is None:
@@ -155,6 +160,21 @@ class Timeout(BaseException):
         timeout.start()
         return timeout

+    @staticmethod
+    def _start_new_or_dummy(timeout, exception=None):
+        # Internal use only in 1.1
+        # Return an object with a 'cancel' method; if timeout is None,
+        # this will be a shared instance object that does nothing. Otherwise,
+        # return an actual Timeout.
+        # This saves the previously common idiom of 'timer = Timeout.start_new(t) if t is not None else None'
+        # followed by 'if timer is not None: timer.cancel()'.
+        # That idiom was used to avoid any object allocations.
+        # A staticmethod is slightly faster under CPython, compared to a classmethod;
+        # under PyPy in synthetic benchmarks it makes no difference.
+        if timeout is None:
+            return _FakeTimer
+        return Timeout.start_new(timeout, exception)
+
     @property
     def pending(self):
         """Return True if the timeout is scheduled to be raised."""
...
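For reference, the simplification the commit message mentions looks like this from the inside; a sketch of the internal idiom (``_start_new_or_dummy`` is not public API), with a stand-in ``get_result`` callable:

    from gevent.timeout import Timeout

    def wait_old_style(get_result, timeout=None):
        # gevent 1.0 idiom: avoid allocating a Timeout when there is no timeout.
        timer = Timeout.start_new(timeout) if timeout is not None else None
        try:
            return get_result()
        finally:
            if timer is not None:
                timer.cancel()

    def wait_new_style(get_result, timeout=None):
        # gevent 1.1 idiom: with no timeout, _start_new_or_dummy returns a shared
        # do-nothing object, so cancel() can be called unconditionally.
        timer = Timeout._start_new_or_dummy(timeout)
        try:
            return get_result()
        finally:
            timer.cancel()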