Commit ad51a118 authored by Georg Brandl

merge 3.3.5rc1 release commits with 3.3 branch

parents e879c0ad a7e7f353
@@ -37,6 +37,8 @@ Lib/test/xmltestdata/* = BIN
Lib/venv/scripts/nt/* = BIN
Lib/test/coding20731.py = BIN

# All other files (which presumably are human-editable) are "native".
# This must be the last rule!
......
@@ -306,7 +306,7 @@ in various ways.  There is a separate error indicator for each thread.
.. c:function:: void PyErr_SyntaxLocation(char *filename, int lineno)

   Like :c:func:`PyErr_SyntaxLocationEx`, but the col_offset parameter is
   omitted.
@@ -490,11 +490,11 @@ Exception Objects
   reference, as accessible from Python through :attr:`__cause__`.

.. c:function:: void PyException_SetCause(PyObject *ex, PyObject *cause)

   Set the cause associated with the exception to *cause*.  Use *NULL* to clear
   it.  There is no type check to make sure that *cause* is either an exception
   instance or :const:`None`.  This steals a reference to *cause*.

   :attr:`__suppress_context__` is implicitly set to ``True`` by this function.
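At the Python level the same semantics are spelled ``raise ... from ...``; a minimal sketch of the behaviour this paragraph describes (exception messages invented for illustration):

```python
# "raise ... from ..." is the Python-level counterpart of
# PyException_SetCause: it stores the cause on __cause__ and
# implicitly sets __suppress_context__ to True.
def convert_error():
    try:
        raise KeyError('low-level detail')
    except KeyError as exc:
        raise RuntimeError('high-level failure') from exc

try:
    convert_error()
except RuntimeError as err:
    caught = err

assert isinstance(caught.__cause__, KeyError)
assert caught.__suppress_context__ is True
```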
......
@@ -113,7 +113,7 @@ There are only a few functions special to module objects.
   Return a pointer to the :c:type:`PyModuleDef` struct from which the module was
   created, or *NULL* if the module wasn't created with
   :c:func:`PyModule_Create`.

.. c:function:: PyObject* PyState_FindModule(PyModuleDef *def)
......
@@ -205,9 +205,8 @@ type objects) *must* have the :attr:`ob_size` field.
   bit currently defined is :const:`Py_PRINT_RAW`. When the :const:`Py_PRINT_RAW`
   flag bit is set, the instance should be printed the same way as :c:member:`~PyTypeObject.tp_str`
   would format it; when the :const:`Py_PRINT_RAW` flag bit is clear, the instance
   should be printed the same way as :c:member:`~PyTypeObject.tp_repr` would format it.  It should
   return ``-1`` and set an exception condition when an error occurs.

   It is possible that the :c:member:`~PyTypeObject.tp_print` field will be deprecated.  In any case,
   it is recommended not to define :c:member:`~PyTypeObject.tp_print`, but instead to rely on
......
@@ -142,36 +142,43 @@ The :mod:`csv` module defines the following functions:
The :mod:`csv` module defines the following classes:

.. class:: DictReader(csvfile, fieldnames=None, restkey=None, restval=None, \
                      dialect='excel', *args, **kwds)

   Create an object which operates like a regular reader but maps the
   information read into a dict whose keys are given by the optional
   *fieldnames* parameter.  The *fieldnames* parameter is a :mod:`sequence
   <collections.abc>` whose elements are associated with the fields of the
   input data in order.  These elements become the keys of the resulting
   dictionary.  If the *fieldnames* parameter is omitted, the values in the
   first row of the *csvfile* will be used as the fieldnames.  If the row read
   has more fields than the fieldnames sequence, the remaining data is added as
   a sequence keyed by the value of *restkey*.  If the row read has fewer
   fields than the fieldnames sequence, the remaining keys take the value of
   the optional *restval* parameter.  Any other optional or keyword arguments
   are passed to the underlying :class:`reader` instance.


.. class:: DictWriter(csvfile, fieldnames, restval='', extrasaction='raise', \
                      dialect='excel', *args, **kwds)

   Create an object which operates like a regular writer but maps dictionaries
   onto output rows.  The *fieldnames* parameter is a :mod:`sequence
   <collections.abc>` of keys that identify the order in which values in the
   dictionary passed to the :meth:`writerow` method are written to the
   *csvfile*.  The optional *restval* parameter specifies the value to be
   written if the dictionary is missing a key in *fieldnames*.  If the
   dictionary passed to the :meth:`writerow` method contains a key not found in
   *fieldnames*, the optional *extrasaction* parameter indicates what action to
   take.  If it is set to ``'raise'`` a :exc:`ValueError` is raised.  If it is
   set to ``'ignore'``, extra values in the dictionary are ignored.  Any other
   optional or keyword arguments are passed to the underlying :class:`writer`
   instance.

   Note that unlike the :class:`DictReader` class, the *fieldnames* parameter
   of the :class:`DictWriter` is not optional.  Since Python's :class:`dict`
   objects are not ordered, there is not enough information available to deduce
   the order in which the row should be written to the *csvfile*.
.. class:: Dialect
......
@@ -3,7 +3,6 @@

.. module:: stringprep
   :synopsis: String preparation, as per RFC 3454
.. moduleauthor:: Martin v. Löwis <martin@v.loewis.de>
.. sectionauthor:: Martin v. Löwis <martin@v.loewis.de>
......
@@ -412,7 +412,7 @@ mock using the "as" form of the with statement:
As an alternative `patch`, `patch.object` and `patch.dict` can be used as
class decorators. When used in this way it is the same as applying the
decorator individually to every method whose name starts with "test".

.. _further-examples:
......
@@ -938,7 +938,7 @@ method:
.. [#] The only exceptions are magic methods and attributes (those that have
   leading and trailing double underscores). Mock doesn't create these but
   instead raises an ``AttributeError``. This is because the interpreter
   will often implicitly request these methods, and gets *very* confused to
   get a new Mock object when it expects a magic method. If you need magic
   method support see :ref:`magic methods <magic-methods>`.
@@ -1489,7 +1489,7 @@ Patching Descriptors and Proxy Objects
Both patch_ and patch.object_ correctly patch and restore descriptors: class
methods, static methods and properties. You should patch these on the *class*
rather than an instance. They also work with *some* objects
that proxy attribute access, like the `django settings object
<http://www.voidspace.org.uk/python/weblog/arch_d7_2010_12_04.shtml#e1198>`_.
......
@@ -16,6 +16,7 @@ from docutils import nodes, utils
import sphinx
from sphinx.util.nodes import split_explicit_title
from sphinx.util.compat import Directive
from sphinx.writers.html import HTMLTranslator
from sphinx.writers.latex import LaTeXTranslator
from sphinx.locale import versionlabels
@@ -27,7 +28,9 @@ Body.enum.converters['loweralpha'] = \
Body.enum.converters['lowerroman'] = \
    Body.enum.converters['upperroman'] = lambda x: None

SPHINX11 = sphinx.__version__[:3] < '1.2'

if SPHINX11:
    # monkey-patch HTML translator to give versionmodified paragraphs a class
    def new_visit_versionmodified(self, node):
        self.body.append(self.starttag(node, 'p', CLASS=node['type']))
@@ -88,8 +91,6 @@ def source_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
# Support for marking up implementation details

class ImplementationDetail(Directive):
    has_content = True
@@ -142,10 +143,6 @@ class PyDecoratorMethod(PyDecoratorMixin, PyClassmember):
# Support for documenting version of removal in deprecations

class DeprecatedRemoved(Directive):
    has_content = True
    required_arguments = 2
@@ -171,16 +168,16 @@ class DeprecatedRemoved(Directive):
        messages = []
        if self.content:
            self.state.nested_parse(self.content, self.content_offset, node)
        if len(node):
            if isinstance(node[0], nodes.paragraph) and node[0].rawsource:
                content = nodes.inline(node[0].rawsource, translatable=True)
                content.source = node[0].source
                content.line = node[0].line
                content += node[0].children
                node[0].replace_self(nodes.paragraph('', '', content))
            if not SPHINX11:
                node[0].insert(0, nodes.inline('', '%s: ' % text,
                                               classes=['versionmodified']))
        elif not SPHINX11:
            para = nodes.paragraph('', '',
                nodes.inline('', '%s.' % text, classes=['versionmodified']))
            node.append(para)
@@ -188,6 +185,9 @@ class DeprecatedRemoved(Directive):
        env.note_versionchange('deprecated', version[0], node, self.lineno)
        return [node] + messages

# for Sphinx < 1.2
versionlabels['deprecated-removed'] = DeprecatedRemoved._label

# Support for including Misc/NEWS
......
@@ -371,9 +371,9 @@ values.  The most versatile is the *list*, which can be written as a list of
comma-separated values (items) between square brackets.  Lists might contain
items of different types, but usually the items all have the same type. ::

   >>> squares = [1, 4, 9, 16, 25]
   >>> squares
   [1, 4, 9, 16, 25]

Like strings (and all other built-in :term:`sequence` types), lists can be
indexed and sliced::
@@ -389,12 +389,12 @@ All slice operations return a new list containing the requested elements.  This
means that the following slice returns a new (shallow) copy of the list::

   >>> squares[:]
   [1, 4, 9, 16, 25]

Lists also support operations like concatenation::

   >>> squares + [36, 49, 64, 81, 100]
   [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

Unlike strings, which are :term:`immutable`, lists are a :term:`mutable`
type, i.e. it is possible to change their content::
......
@@ -94,6 +94,33 @@ PyAPI_FUNC(PyObject *) PyCodec_Decode(
       const char *errors
       );

#ifndef Py_LIMITED_API
/* Text codec specific encoding and decoding API.

   Checks the encoding against a list of codecs which do not
   implement a str<->bytes encoding before attempting the
   operation.

   Please note that these APIs are internal and should not
   be used in Python C extensions.
 */
PyAPI_FUNC(PyObject *) _PyCodec_EncodeText(
       PyObject *object,
       const char *encoding,
       const char *errors
       );

PyAPI_FUNC(PyObject *) _PyCodec_DecodeText(
       PyObject *object,
       const char *encoding,
       const char *errors
       );
#endif

/* --- Codec Lookup APIs --------------------------------------------------

   All APIs return a codec object with incremented refcount and are
......
@@ -73,9 +73,19 @@ BOM64_BE = BOM_UTF32_BE
### Codec base classes (defining the API)

class CodecInfo(tuple):
    """Codec details when looking up the codec registry"""

    # Private API to allow Python 3.4 to blacklist the known non-Unicode
    # codecs in the standard library. A more general mechanism to
    # reliably distinguish test encodings from other codecs will hopefully
    # be defined for Python 3.5
    #
    # See http://bugs.python.org/issue19619
    _is_text_encoding = True  # Assume codecs are text encodings by default

    def __new__(cls, encode, decode, streamreader=None, streamwriter=None,
                incrementalencoder=None, incrementaldecoder=None, name=None,
                *, _is_text_encoding=None):
        self = tuple.__new__(cls, (encode, decode, streamreader, streamwriter))
        self.name = name
        self.encode = encode
@@ -84,6 +94,8 @@ class CodecInfo(tuple):
        self.incrementaldecoder = incrementaldecoder
        self.streamwriter = streamwriter
        self.streamreader = streamreader
        if _is_text_encoding is not None:
            self._is_text_encoding = _is_text_encoding
        return self

    def __repr__(self):
......
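A sketch of what the ``_is_text_encoding`` flag buys at the Python level, assuming an interpreter that enforces the blacklist (Python 3.4+, or a 3.3 maintenance release carrying this change):

```python
import codecs

# codecs.encode()/decode() work with any registered codec, including the
# non-text transforms that the blacklist keeps away from str.encode().
encoded = codecs.encode('hello', 'rot_13')
assert encoded == 'uryyb'
assert codecs.decode(encoded, 'rot_13') == 'hello'

# str.encode() consults _is_text_encoding and rejects rot_13 up front,
# pointing the user at codecs.encode() instead.
try:
    'hello'.encode('rot_13')
except LookupError as exc:
    assert 'is not a text encoding' in str(exc)
else:
    raise AssertionError('expected LookupError for a non-text codec')
```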
@@ -110,7 +110,7 @@ _copy_dispatch = d = {}
def _copy_immutable(x):
    return x
for t in (type(None), int, float, bool, str, tuple,
          frozenset, type, range,
          types.BuiltinFunctionType, type(Ellipsis),
          types.FunctionType, weakref.ref):
    d[t] = _copy_immutable
......
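For the types in this dispatch table, copying is simply the identity function, which is what the test additions below verify; a quick sketch:

```python
import copy

# Immutable built-ins registered in _copy_dispatch are returned as-is.
point = (3, 4)
assert copy.copy(point) is point

greeting = 'hello'
assert copy.copy(greeting) is greeting

window = range(10)
assert copy.copy(window) is window

# Mutable containers, by contrast, really are duplicated.
items = [1, 2]
assert copy.copy(items) == items and copy.copy(items) is not items
```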
@@ -52,4 +52,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
@@ -74,4 +74,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
@@ -52,4 +52,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
@@ -53,4 +53,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )
@@ -43,6 +43,7 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamwriter=StreamWriter,
        streamreader=StreamReader,
        _is_text_encoding=False,
    )

### Map
......
@@ -96,4 +96,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
        _is_text_encoding=False,
    )
@@ -74,4 +74,5 @@ def getregentry():
        incrementaldecoder=IncrementalDecoder,
        streamreader=StreamReader,
        streamwriter=StreamWriter,
        _is_text_encoding=False,
    )
@@ -41,9 +41,10 @@ idle class.  For the benefit of buildbot machines that do not have a graphics
screen, gui tests must be 'guarded' by "requires('gui')" in a setUp
function or method. This will typically be setUpClass.

To avoid interfering with other gui tests, all gui objects must be destroyed
and deleted by the end of the test. If a widget, such as a Tk root, is created
in a setUpX function, destroy it in the corresponding tearDownX. For module
and class attributes, also delete the widget.
---
    @classmethod
    def setUpClass(cls):
@@ -53,6 +54,7 @@ so it can be properly destroyed in the corresponding tearDown.
    @classmethod
    def tearDownClass(cls):
        cls.root.destroy()
        del cls.root
---
Support.requires('gui') returns true if it is either called in a main module
......
@@ -277,6 +277,9 @@ class FormatEventTest(unittest.TestCase):
    @classmethod
    def tearDownClass(cls):
        cls.root.destroy()
        del cls.root
        del cls.text
        del cls.formatter

    def test_short_line(self):
        self.text.insert('1.0', "Short line\n")
......
@@ -80,6 +80,7 @@ class FetchTest(unittest.TestCase):
    @classmethod
    def tearDownClass(cls):
        cls.root.destroy()
        del cls.root

    def fetch_test(self, reverse, line, prefix, index, *, bell=False):
        # Perform one fetch as invoked by Alt-N or Alt-P
......
@@ -64,6 +64,7 @@ class GetSelectionTest(unittest.TestCase):
##    @classmethod
##    def tearDownClass(cls):
##        cls.root.destroy()
##        del cls.root

    def test_get_selection(self):
        # text = Text(master=self.root)
@@ -219,6 +220,7 @@ class SearchTest(unittest.TestCase):
##    @classmethod
##    def tearDownClass(cls):
##        cls.root.destroy()
##        del cls.root

    def test_search(self):
        Equal = self.assertEqual
@@ -261,6 +263,7 @@ class ForwardBackwardTest(unittest.TestCase):
##    @classmethod
##    def tearDownClass(cls):
##        cls.root.destroy()
##        del cls.root

    @classmethod
    def setUpClass(cls):
......
@@ -221,6 +221,7 @@ class TkTextTest(TextTest, unittest.TestCase):
    @classmethod
    def tearDownClass(cls):
        cls.root.destroy()
        del cls.root

if __name__ == '__main__':
......
@@ -287,7 +287,7 @@ class ModuleFinder:
            if fp.read(4) != imp.get_magic():
                self.msgout(2, "raise ImportError: Bad magic number", pathname)
                raise ImportError("Bad magic number in %s" % pathname)
            fp.read(8)  # Skip mtime and size.
            co = marshal.load(fp)
        else:
            co = None
......
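The fix matters because a ``.pyc`` file starts with a multi-field header before the marshalled code object. A sketch of loading one the way ModuleFinder does; note the header layout is version-dependent (3.3, the branch patched here, stores magic + mtime + size, so 8 bytes follow the magic, while 3.7+ inserts a flags word, making it 12):

```python
import importlib.util
import marshal
import os
import py_compile
import sys
import tempfile
import types

# Compile a trivial module, then read its .pyc back by hand.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as src:
    src.write('x = 1\n')
pyc_path = py_compile.compile(src.name, doraise=True)

with open(pyc_path, 'rb') as fp:
    # Magic number check, as in ModuleFinder above.
    assert fp.read(4) == importlib.util.MAGIC_NUMBER
    # Skip the rest of the header: mtime+size on 3.3-3.6,
    # flags+mtime+size on 3.7+.
    fp.read(12 if sys.version_info >= (3, 7) else 8)
    code = marshal.load(fp)

assert isinstance(code, types.CodeType)
os.unlink(src.name)
```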
@@ -4,6 +4,7 @@ import locale
import sys
import unittest
import warnings
import encodings

from test import support
@@ -2408,6 +2409,47 @@ class TransformCodecTest(unittest.TestCase):
        sout = reader.readline()
        self.assertEqual(sout, b"\x80")

    def test_text_to_binary_blacklists_binary_transforms(self):
        # Check binary -> binary codecs give a good error for str input
        bad_input = "bad input type"
        for encoding in bytes_transform_encodings:
            fmt = (r"{!r} is not a text encoding; "
                   r"use codecs.encode\(\) to handle arbitrary codecs")
            msg = fmt.format(encoding)
            with self.assertRaisesRegex(LookupError, msg) as failure:
                bad_input.encode(encoding)
            self.assertIsNone(failure.exception.__cause__)

    def test_text_to_binary_blacklists_text_transforms(self):
        # Check str.encode gives a good error message for str -> str codecs
        msg = (r"^'rot_13' is not a text encoding; "
               r"use codecs.encode\(\) to handle arbitrary codecs")
        with self.assertRaisesRegex(LookupError, msg):
            "just an example message".encode("rot_13")

    def test_binary_to_text_blacklists_binary_transforms(self):
        # Check bytes.decode and bytearray.decode give a good error
        # message for binary -> binary codecs
        data = b"encode first to ensure we meet any format restrictions"
        for encoding in bytes_transform_encodings:
            encoded_data = codecs.encode(data, encoding)
            fmt = (r"{!r} is not a text encoding; "
                   r"use codecs.decode\(\) to handle arbitrary codecs")
            msg = fmt.format(encoding)
            with self.assertRaisesRegex(LookupError, msg):
                encoded_data.decode(encoding)
            with self.assertRaisesRegex(LookupError, msg):
                bytearray(encoded_data).decode(encoding)

    def test_binary_to_text_blacklists_text_transforms(self):
        # Check str -> str codec gives a good error for binary input
        for bad_input in (b"immutable", bytearray(b"mutable")):
            msg = (r"^'rot_13' is not a text encoding; "
                   r"use codecs.decode\(\) to handle arbitrary codecs")
            with self.assertRaisesRegex(LookupError, msg) as failure:
                bad_input.decode("rot_13")
            self.assertIsNone(failure.exception.__cause__)

    @unittest.skipUnless(sys.platform == 'win32',
                         'code pages are specific to Windows')
......
import unittest
from test.support import TESTFN, unlink, unload
import importlib, os, sys, subprocess

class CodingTest(unittest.TestCase):
    def test_bad_coding(self):
@@ -58,6 +58,14 @@ class CodingTest(unittest.TestCase):
        self.assertTrue(c.exception.args[0].startswith(expected),
                        msg=c.exception.args[0])

    def test_20731(self):
        sub = subprocess.Popen([sys.executable,
                        os.path.join(os.path.dirname(__file__),
                                     'coding20731.py')],
                        stderr=subprocess.PIPE)
        err = sub.communicate()[1]
        self.assertEqual(sub.returncode, 0)
        self.assertNotIn(b'SyntaxError', err)

if __name__ == "__main__":
    unittest.main()
@@ -98,6 +98,7 @@ class TestCopy(unittest.TestCase):
            pass
        tests = [None, 42, 2**100, 3.14, True, False, 1j,
                 "hello", "hello\u1234", f.__code__,
                 b"world", bytes(range(256)),
                 NewStyle, range(10), Classic, max, WithMetaclass]
        for x in tests:
            self.assertIs(copy.copy(x), x)
......
@@ -258,6 +258,24 @@ class FileInputTests(unittest.TestCase):
        fi.readline()
        self.assertTrue(custom_open_hook.invoked, "openhook not invoked")

    def test_readline(self):
        with open(TESTFN, 'wb') as f:
            f.write(b'A\nB\r\nC\r')
            # Fill TextIOWrapper buffer.
            f.write(b'123456789\n' * 1000)
            # Issue #20501: readline() shouldn't read whole file.
            f.write(b'\x80')
        self.addCleanup(safe_unlink, TESTFN)

        with FileInput(files=TESTFN,
                       openhook=hook_encoded('ascii'), bufsize=8) as fi:
            self.assertEqual(fi.readline(), 'A\n')
            self.assertEqual(fi.readline(), 'B\n')
            self.assertEqual(fi.readline(), 'C\n')

            with self.assertRaises(UnicodeDecodeError):
                # Read to the end of file.
                list(fi)

    def test_context_manager(self):
        try:
            t1 = writeTmp(1, ["A\nB\nC"])
@@ -835,6 +853,24 @@ class Test_hook_encoded(unittest.TestCase):
        self.assertIs(kwargs.pop('encoding'), encoding)
        self.assertFalse(kwargs)

    def test_modes(self):
        # Unlikely UTF-7 is locale encoding
        with open(TESTFN, 'wb') as f:
            f.write(b'A\nB\r\nC\rD+IKw-')
        self.addCleanup(safe_unlink, TESTFN)

        def check(mode, expected_lines):
            with FileInput(files=TESTFN, mode=mode,
                           openhook=hook_encoded('utf-7')) as fi:
                lines = list(fi)
            self.assertEqual(lines, expected_lines)

        check('r', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
        check('rU', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
        check('U', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
        with self.assertRaises(ValueError):
            check('rb', ['A\n', 'B\r\n', 'C\r', 'D\u20ac'])

def test_main():
    run_unittest(
        BufferSizesTests,
......
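The fileinput tests above exercise `hook_encoded`, which opens each file in text mode with an explicit encoding. A minimal sketch of that usage (the temp-file setup is illustrative, not part of the commit):

```python
import fileinput, os, tempfile

# Write a small file with mixed line endings, then read it back decoded.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as f:
    f.write(b'A\nB\r\nC\r')

lines = []
with fileinput.FileInput(files=path,
                         openhook=fileinput.hook_encoded('ascii')) as fi:
    for line in fi:
        lines.append(line)
os.unlink(path)

# hook_encoded opens the file in text mode, so \r and \r\n are both
# normalized to \n by universal newline handling.
assert lines == ['A\n', 'B\n', 'C\n']
```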
@@ -14,6 +14,7 @@ if use_resources and 'gui' in use_resources:
     try:
         root = tk.Tk()
         root.destroy()
+        del root
     except tk.TclError:
         while 'gui' in use_resources:
             use_resources.remove('gui')
...
@@ -162,6 +162,7 @@ class SimpleTest(unittest.TestCase):
             if os.path.exists(pycache):
                 shutil.rmtree(pycache)
 
+    @source_util.writes_bytecode_files
     def test_timestamp_overflow(self):
         # When a modification timestamp is larger than 2**32, it should be
         # truncated rather than raise an OverflowError.
...
 import os
 import errno
+import importlib.machinery
+import py_compile
 import shutil
 import unittest
 import tempfile
@@ -208,6 +210,14 @@ a/module.py
         from . import *
 """]
 
+bytecode_test = [
+    "a",
+    ["a"],
+    [],
+    [],
+    ""
+    ]
+
 def open_file(path):
     dirname = os.path.dirname(path)
@@ -288,6 +298,16 @@ class ModuleFinderTest(unittest.TestCase):
     def test_relative_imports_4(self):
         self._do_test(relative_import_test_4)
 
+    def test_bytecode(self):
+        base_path = os.path.join(TEST_DIR, 'a')
+        source_path = base_path + importlib.machinery.SOURCE_SUFFIXES[0]
+        bytecode_path = base_path + importlib.machinery.BYTECODE_SUFFIXES[0]
+        with open_file(source_path) as file:
+            file.write('testing_modulefinder = True\n')
+        py_compile.compile(source_path, cfile=bytecode_path)
+        os.remove(source_path)
+        self._do_test(bytecode_test)
+
 def test_main():
     support.run_unittest(ModuleFinderTest)
...
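The new `test_bytecode` creates a bytecode-only module the same way this sketch does: compile the source to a `.pyc` next to it, then delete the source (Issue #20778 made modulefinder handle such modules). The directory layout and the `main.py` driver here are illustrative assumptions:

```python
import importlib.machinery, modulefinder, os, py_compile, tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'a' + importlib.machinery.SOURCE_SUFFIXES[0])    # a.py
pyc = os.path.join(tmp, 'a' + importlib.machinery.BYTECODE_SUFFIXES[0])  # a.pyc
with open(src, 'w') as f:
    f.write('testing_modulefinder = True\n')
py_compile.compile(src, cfile=pyc)   # compile next to the source...
os.remove(src)                       # ...then drop the source: bytecode-only

script = os.path.join(tmp, 'main.py')
with open(script, 'w') as f:
    f.write('import a\n')

finder = modulefinder.ModuleFinder(path=[tmp])
finder.run_script(script)
assert 'a' in finder.modules         # the .pyc-only module was found
```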
@@ -1144,7 +1144,7 @@ class PosixGroupsTester(unittest.TestCase):
 
     def test_initgroups(self):
         # find missing group
-        g = max(self.saved_groups) + 1
+        g = max(self.saved_groups or [0]) + 1
         name = pwd.getpwuid(posix.getuid()).pw_name
         posix.initgroups(name, g)
         self.assertIn(g, posix.getgroups())
...
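The posix fix above guards against `max()` raising `ValueError` when `saved_groups` is empty. The `seq or [0]` idiom in isolation (`next_free_id` is a hypothetical helper, not from the commit):

```python
# max() raises ValueError on an empty sequence; `seq or [0]` substitutes a
# floor value so "one more than the largest" also works for the empty case.
def next_free_id(used):
    return max(used or [0]) + 1

assert next_free_id([4, 7, 5]) == 8
assert next_free_id([]) == 1
```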
@@ -376,6 +376,7 @@ class TclTest(unittest.TestCase):
             result = arg
             return arg
         self.interp.createcommand('testfunc', testfunc)
+        self.addCleanup(self.interp.tk.deletecommand, 'testfunc')
         def check(value, expected, eq=self.assertEqual):
             r = self.interp.call('testfunc', value)
             self.assertIsInstance(result, str)
...
@@ -2,7 +2,7 @@ doctests = """
 Tests for the tokenize module.
 
 The tests can be really simple. Given a small fragment of source
-code, print out a table with tokens. The ENDMARK is omitted for
+code, print out a table with tokens. The ENDMARKER is omitted for
 brevity.
 
     >>> dump_tokens("1 + 1")
@@ -578,9 +578,15 @@ pass the '-ucpu' option to process the full directory.
     >>> tempdir = os.path.dirname(f) or os.curdir
     >>> testfiles = glob.glob(os.path.join(tempdir, "test*.py"))
 
-tokenize is broken on test_pep3131.py because regular expressions are broken on
-the obscure unicode identifiers in it. *sigh*
+Tokenize is broken on test_pep3131.py because regular expressions are
+broken on the obscure unicode identifiers in it. *sigh*
+
+With roundtrip extended to test the 5-tuple mode of untokenize,
+7 more testfiles fail.  Remove them also until the failure is diagnosed.
+
     >>> testfiles.remove(os.path.join(tempdir, "test_pep3131.py"))
+    >>> for f in ('buffer', 'builtin', 'fileio', 'inspect', 'os', 'platform', 'sys'):
+    ...     testfiles.remove(os.path.join(tempdir, "test_%s.py") % f)
+    ...
 
     >>> if not support.is_resource_enabled("cpu"):
     ...     testfiles = random.sample(testfiles, 10)
     ...
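A rough standalone equivalent of the `dump_tokens("1 + 1")` doctest above, showing the token kinds `tokenize` emits, including the framing ENCODING and ENDMARKER tokens:

```python
import io
import token
import tokenize

# Tokenize a one-line expression from a bytes buffer.
toks = list(tokenize.tokenize(io.BytesIO(b"1 + 1\n").readline))
kinds = [token.tok_name[t.type] for t in toks]
assert kinds == ['ENCODING', 'NUMBER', 'OP', 'NUMBER', 'NEWLINE', 'ENDMARKER']
assert [t.string for t in toks[1:4]] == ['1', '+', '1']
```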
@@ -659,21 +665,39 @@ def dump_tokens(s):
 
 def roundtrip(f):
     """
     Test roundtrip for `untokenize`. `f` is an open file or a string.
-    The source code in f is tokenized, converted back to source code via
-    tokenize.untokenize(), and tokenized again from the latter. The test
-    fails if the second tokenization doesn't match the first.
+    The source code in f is tokenized to both 5- and 2-tuples.
+    Both sequences are converted back to source code via
+    tokenize.untokenize(), and the latter tokenized again to 2-tuples.
+    The test fails if the 3 pair tokenizations do not match.
+
+    When untokenize bugs are fixed, untokenize with 5-tuples should
+    reproduce code that does not contain a backslash continuation
+    following spaces.  A proper test should test this.
+
+    This function would be more useful for correcting bugs if it reported
+    the first point of failure, like assertEqual, rather than just
+    returning False -- or if it were only used in unittests and not
+    doctest and actually used assertEqual.
    """
+    # Get source code and original tokenizations
     if isinstance(f, str):
-        f = BytesIO(f.encode('utf-8'))
-    try:
-        token_list = list(tokenize(f.readline))
-    finally:
+        code = f.encode('utf-8')
+    else:
+        code = f.read()
         f.close()
-    tokens1 = [tok[:2] for tok in token_list]
-    new_bytes = untokenize(tokens1)
-    readline = (line for line in new_bytes.splitlines(keepends=True)).__next__
-    tokens2 = [tok[:2] for tok in tokenize(readline)]
-    return tokens1 == tokens2
+    readline = iter(code.splitlines(keepends=True)).__next__
+    tokens5 = list(tokenize(readline))
+    tokens2 = [tok[:2] for tok in tokens5]
+    # Reproduce tokens2 from pairs
+    bytes_from2 = untokenize(tokens2)
+    readline2 = iter(bytes_from2.splitlines(keepends=True)).__next__
+    tokens2_from2 = [tok[:2] for tok in tokenize(readline2)]
+    # Reproduce tokens2 from 5-tuples
+    bytes_from5 = untokenize(tokens5)
+    readline5 = iter(bytes_from5.splitlines(keepends=True)).__next__
+    tokens2_from5 = [tok[:2] for tok in tokenize(readline5)]
+    # Compare 3 versions
+    return tokens2 == tokens2_from2 == tokens2_from5
 
 # This is an example from the docs, set up as a doctest.
 def decistmt(s):
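The 2-tuple half of the `roundtrip` helper above can be sketched standalone: untokenize (type, string) pairs, then re-tokenize and check the pairs survive. `toks2` is an illustrative helper name, not from the commit:

```python
import tokenize

code = b"def f(a, b):\n    return a + b\n"

def toks2(source_bytes):
    # Tokenize bytes and keep only the (type, string) pairs.
    readline = iter(source_bytes.splitlines(keepends=True)).__next__
    return [t[:2] for t in tokenize.tokenize(readline)]

pairs = toks2(code)
# With 2-tuples untokenize runs in "compat" mode; spacing may change,
# but re-tokenizing its output must reproduce the same pairs.
rebuilt = tokenize.untokenize(pairs)   # bytes: pairs include ENCODING
assert toks2(rebuilt) == pairs
```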
@@ -1156,6 +1180,7 @@ class TestTokenize(TestCase):
 
 class UntokenizeTest(TestCase):
 
     def test_bad_input_order(self):
+        # raise if previous row
         u = Untokenizer()
         u.prev_row = 2
         u.prev_col = 2
@@ -1163,8 +1188,22 @@ class UntokenizeTest(TestCase):
             u.add_whitespace((1,3))
         self.assertEqual(cm.exception.args[0],
                 'start (1,3) precedes previous end (2,2)')
+        # raise if previous column in row
         self.assertRaises(ValueError, u.add_whitespace, (2,1))
 
+    def test_backslash_continuation(self):
+        # The problem is that <whitespace>\<newline> leaves no token
+        u = Untokenizer()
+        u.prev_row = 1
+        u.prev_col = 1
+        u.tokens = []
+        u.add_whitespace((2, 0))
+        self.assertEqual(u.tokens, ['\\\n'])
+        u.prev_row = 2
+        u.add_whitespace((4, 4))
+        self.assertEqual(u.tokens, ['\\\n', '\\\n\\\n', '    '])
+        self.assertTrue(roundtrip('a\n b\n c\n \\\n c\n'))
+
     def test_iter_compat(self):
         u = Untokenizer()
         token = (NAME, 'Hello')
...
@@ -234,6 +234,10 @@ class Untokenizer:
         if row < self.prev_row or row == self.prev_row and col < self.prev_col:
             raise ValueError("start ({},{}) precedes previous end ({},{})"
                              .format(row, col, self.prev_row, self.prev_col))
+        row_offset = row - self.prev_row
+        if row_offset:
+            self.tokens.append("\\\n" * row_offset)
+            self.prev_col = 0
         col_offset = col - self.prev_col
         if col_offset:
             self.tokens.append(" " * col_offset)
@@ -248,6 +252,8 @@ class Untokenizer:
             if tok_type == ENCODING:
                 self.encoding = token
                 continue
+            if tok_type == ENDMARKER:
+                break
             self.add_whitespace(start)
             self.tokens.append(token)
             self.prev_row, self.prev_col = end
...
@@ -40,6 +40,7 @@ Erik Andersén
 Oliver Andrich
 Ross Andrus
 Juancarlo Añez
+Chris Angelico
 Jérémy Anger
 Ankur Ankan
 Jon Anglin
...
@@ -10,6 +10,15 @@ What's New in Python 3.3.5 release candidate 1?
 Core and Builtins
 -----------------
 
+- Issue #20731: Properly position in source code files even if they
+  are opened in text mode. Patch by Serhiy Storchaka.
+
+- Issue #19619: str.encode, bytes.decode and bytearray.decode now use an
+  internal API to throw LookupError for known non-text encodings, rather
+  than attempting the encoding or decoding operation and then throwing a
+  TypeError for an unexpected output type. (The latter mechanism remains
+  in place for third party non-text encodings.)
+
 - Issue #20588: Make Python-ast.c C89 compliant.
 
 - Issue #20437: Fixed 21 potential bugs when deleting objects references.
@@ -20,6 +29,15 @@ Core and Builtins
 Library
 -------
 
+- Issue #20778: Fix modulefinder to work with bytecode-only modules.
+
+- Issue #20791: copy.copy() now doesn't make a copy when the input is
+  a bytes object. Initial patch by Peter Otten.
+
+- Issue #20621: Fixes a zipimport bug introduced in 3.3.4 that could cause
+  spurious crashes or SystemErrors when importing modules or packages from a
+  zip file. The change causing the problem was reverted.
+
 - Issue #20635: Fixed grid_columnconfigure() and grid_rowconfigure() methods of
   Tkinter widgets to work in wantobjects=True mode.
@@ -117,6 +135,8 @@ IDLE
 Tests
 -----
 
+- Issue #20743: Fix a reference leak in test_tcl.
+
 - Issue #20510: Rewrote test_exit in test_sys to match existing comments,
   use modern unittest features, and use helpers from test.script_helper
   instead of using subprocess directly. Patch by Gareth Rees.
@@ -147,6 +167,12 @@ Build
 - Issue #20609: Restored the ability to build 64-bit Windows binaries on
   32-bit Windows, which was broken by the change in issue #19788.
 
+Tools/Demos
+-----------
+
+- Issue #20535: PYTHONWARNINGS no longer affects the run_tests.py script.
+  Patch by Arfrever Frehtes Taifersar Arahesis.
+
 What's New in Python 3.3.4?
 ===========================
...
@@ -3129,7 +3129,7 @@ PyUnicode_Decode(const char *s,
     buffer = PyMemoryView_FromBuffer(&info);
     if (buffer == NULL)
         goto onError;
-    unicode = PyCodec_Decode(buffer, encoding, errors);
+    unicode = _PyCodec_DecodeText(buffer, encoding, errors);
     if (unicode == NULL)
         goto onError;
     if (!PyUnicode_Check(unicode)) {
@@ -3489,7 +3489,7 @@ PyUnicode_AsEncodedString(PyObject *unicode,
     }
 
     /* Encode via the codec registry */
-    v = PyCodec_Encode(unicode, encoding, errors);
+    v = _PyCodec_EncodeText(unicode, encoding, errors);
     if (v == NULL)
         return NULL;
...
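Routing `str`/`bytes` methods through `_PyCodec_EncodeText`/`_PyCodec_DecodeText` is what makes Issue #19619 visible from Python: a non-text codec is rejected with `LookupError` at lookup time instead of a late `TypeError`. A behaviour sketch, assuming a Python with this fix (3.4+/3.3.5+):

```python
import codecs

# 'rot13' maps str to str, so it cannot back str.encode(), which must
# produce bytes.  The lookup is rejected up front with LookupError.
try:
    "attack at dawn".encode("rot13")
except LookupError as exc:
    assert "is not a text encoding" in str(exc)
else:
    raise AssertionError("expected LookupError")

# codecs.encode() still handles arbitrary codecs.
assert codecs.encode("attack at dawn", "rot13") == "nggnpx ng qnja"
```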
@@ -498,9 +498,13 @@ fp_setreadl(struct tok_state *tok, const char* enc)
     fd = fileno(tok->fp);
     /* Due to buffering the file offset for fd can be different from the file
-     * position of tok->fp. */
+     * position of tok->fp.  If tok->fp was opened in text mode on Windows,
+     * its file position counts CRLF as one char and can't be directly mapped
+     * to the file offset for fd.  Instead we step back one byte and read to
+     * the end of line. */
     pos = ftell(tok->fp);
-    if (pos == -1 || lseek(fd, (off_t)pos, SEEK_SET) == (off_t)-1) {
+    if (pos == -1 ||
+        lseek(fd, (off_t)(pos > 0 ? pos - 1 : pos), SEEK_SET) == (off_t)-1) {
         PyErr_SetFromErrnoWithFilename(PyExc_OSError, NULL);
         goto cleanup;
     }
@@ -513,6 +517,12 @@ fp_setreadl(struct tok_state *tok, const char* enc)
     Py_XDECREF(tok->decoding_readline);
     readline = _PyObject_GetAttrId(stream, &PyId_readline);
     tok->decoding_readline = readline;
+    if (pos > 0) {
+        if (PyObject_CallObject(readline, NULL) == NULL) {
+            readline = NULL;
+            goto cleanup;
+        }
+    }
 
 cleanup:
     Py_XDECREF(stream);
...
@@ -337,18 +337,15 @@ PyObject *PyCodec_StreamWriter(const char *encoding,
 
    errors is passed to the encoder factory as argument if non-NULL. */
 
-PyObject *PyCodec_Encode(PyObject *object,
-                         const char *encoding,
-                         const char *errors)
+static PyObject *
+_PyCodec_EncodeInternal(PyObject *object,
+                        PyObject *encoder,
+                        const char *encoding,
+                        const char *errors)
 {
-    PyObject *encoder = NULL;
     PyObject *args = NULL, *result = NULL;
     PyObject *v = NULL;
 
-    encoder = PyCodec_Encoder(encoding);
-    if (encoder == NULL)
-        goto onError;
-
     args = args_tuple(object, errors);
     if (args == NULL)
         goto onError;
@@ -384,18 +381,15 @@ PyObject *PyCodec_Encode(PyObject *object,
 
    errors is passed to the decoder factory as argument if non-NULL. */
 
-PyObject *PyCodec_Decode(PyObject *object,
-                         const char *encoding,
-                         const char *errors)
+static PyObject *
+_PyCodec_DecodeInternal(PyObject *object,
+                        PyObject *decoder,
+                        const char *encoding,
+                        const char *errors)
 {
-    PyObject *decoder = NULL;
     PyObject *args = NULL, *result = NULL;
     PyObject *v;
 
-    decoder = PyCodec_Decoder(encoding);
-    if (decoder == NULL)
-        goto onError;
-
     args = args_tuple(object, errors);
     if (args == NULL)
         goto onError;
@@ -425,6 +419,118 @@ PyObject *PyCodec_Decode(PyObject *object,
     return NULL;
 }
 
+/* Generic encoding/decoding API */
+PyObject *PyCodec_Encode(PyObject *object,
+                         const char *encoding,
+                         const char *errors)
+{
+    PyObject *encoder;
+
+    encoder = PyCodec_Encoder(encoding);
+    if (encoder == NULL)
+        return NULL;
+
+    return _PyCodec_EncodeInternal(object, encoder, encoding, errors);
+}
+
+PyObject *PyCodec_Decode(PyObject *object,
+                         const char *encoding,
+                         const char *errors)
+{
+    PyObject *decoder;
+
+    decoder = PyCodec_Decoder(encoding);
+    if (decoder == NULL)
+        return NULL;
+
+    return _PyCodec_DecodeInternal(object, decoder, encoding, errors);
+}
+
+/* Text encoding/decoding API */
+static
+PyObject *codec_getitem_checked(const char *encoding,
+                                const char *operation_name,
+                                int index)
+{
+    _Py_IDENTIFIER(_is_text_encoding);
+    PyObject *codec;
+    PyObject *attr;
+    PyObject *v;
+    int is_text_codec;
+
+    codec = _PyCodec_Lookup(encoding);
+    if (codec == NULL)
+        return NULL;
+
+    /* Backwards compatibility: assume any raw tuple describes a text
+     * encoding, and the same for anything lacking the private
+     * attribute.
+     */
+    if (!PyTuple_CheckExact(codec)) {
+        attr = _PyObject_GetAttrId(codec, &PyId__is_text_encoding);
+        if (attr == NULL) {
+            if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+                PyErr_Clear();
+            } else {
+                Py_DECREF(codec);
+                return NULL;
+            }
+        } else {
+            is_text_codec = PyObject_IsTrue(attr);
+            Py_DECREF(attr);
+            if (!is_text_codec) {
+                Py_DECREF(codec);
+                PyErr_Format(PyExc_LookupError,
+                             "'%.400s' is not a text encoding; "
+                             "use codecs.%s() to handle arbitrary codecs",
+                             encoding, operation_name);
+                return NULL;
+            }
+        }
+    }
+
+    v = PyTuple_GET_ITEM(codec, index);
+    Py_DECREF(codec);
+    Py_INCREF(v);
+    return v;
+}
+
+static PyObject * _PyCodec_TextEncoder(const char *encoding)
+{
+    return codec_getitem_checked(encoding, "encode", 0);
+}
+
+static PyObject * _PyCodec_TextDecoder(const char *encoding)
+{
+    return codec_getitem_checked(encoding, "decode", 1);
+}
+
+PyObject *_PyCodec_EncodeText(PyObject *object,
+                              const char *encoding,
+                              const char *errors)
+{
+    PyObject *encoder;
+
+    encoder = _PyCodec_TextEncoder(encoding);
+    if (encoder == NULL)
+        return NULL;
+
+    return _PyCodec_EncodeInternal(object, encoder, encoding, errors);
+}
+
+PyObject *_PyCodec_DecodeText(PyObject *object,
+                              const char *encoding,
+                              const char *errors)
+{
+    PyObject *decoder;
+
+    decoder = _PyCodec_TextDecoder(encoding);
+    if (decoder == NULL)
+        return NULL;
+
+    return _PyCodec_DecodeInternal(object, decoder, encoding, errors);
+}
+
 /* Register the error handling callback function error under the name
    name. This function will be called by the codec when it encounters
    an unencodable characters/undecodable bytes and doesn't know the
...
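The `codec_getitem_checked` logic above consults the private `_is_text_encoding` attribute on the `CodecInfo` returned by codec lookup. That flag is visible from Python, which this sketch relies on (it is a private attribute, so this is illustrative, not a supported API):

```python
import codecs

# CodecInfo carries a private _is_text_encoding flag; str/bytes methods
# consult it, while codecs.encode()/decode() deliberately ignore it.
assert codecs.lookup("utf-8")._is_text_encoding is True
assert codecs.lookup("rot13")._is_text_encoding is False
```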
@@ -32,6 +32,12 @@ def main(regrtest_args):
     ]
     # Allow user-specified interpreter options to override our defaults.
     args.extend(test.support.args_from_interpreter_flags())
+
+    # Workaround for issue #20355
+    os.environ.pop("PYTHONWARNINGS", None)
+    # Workaround for issue #20361
+    args.extend(['-W', 'error::BytesWarning'])
+
     args.extend(['-m', 'test',    # Run the test suite
                  '-r',            # Randomize test order
                  '-w',            # Re-run failed tests in verbose mode
...