Kirill Smelkov / cpython

Commit ad51a118, authored Mar 02, 2014 by Georg Brandl
merge 3.3.5rc1 release commits with 3.3 branch
parents: e879c0ad, a7e7f353

Showing 44 changed files with 472 additions and 101 deletions (+472 -101)
.hgeol                                                  +2   -0
Doc/c-api/exceptions.rst                                +5   -5
Doc/c-api/module.rst                                    +1   -1
Doc/c-api/typeobj.rst                                   +2   -3
Doc/library/csv.rst                                     +37  -30
Doc/library/stringprep.rst                              +0   -1
Doc/library/unittest.mock-examples.rst                  +1   -1
Doc/library/unittest.mock.rst                           +2   -2
Doc/tools/sphinxext/pyspecific.py                       +11  -11
Doc/tutorial/introduction.rst                           +4   -4
Include/codecs.h                                        +27  -0
Lib/codecs.py                                           +13  -1
Lib/copy.py                                             +1   -1
Lib/encodings/base64_codec.py                           +1   -0
Lib/encodings/bz2_codec.py                              +1   -0
Lib/encodings/hex_codec.py                              +1   -0
Lib/encodings/quopri_codec.py                           +1   -0
Lib/encodings/rot_13.py                                 +1   -0
Lib/encodings/uu_codec.py                               +1   -0
Lib/encodings/zlib_codec.py                             +1   -0
Lib/idlelib/idle_test/README.txt                        +5   -3
Lib/idlelib/idle_test/test_formatparagraph.py           +3   -0
Lib/idlelib/idle_test/test_idlehistory.py               +1   -0
Lib/idlelib/idle_test/test_searchengine.py              +3   -0
Lib/idlelib/idle_test/test_text.py                      +1   -0
Lib/modulefinder.py                                     +1   -1
Lib/test/coding20731.py                                 +4   -0
Lib/test/test_codecs.py                                 +42  -0
Lib/test/test_coding.py                                 +9   -1
Lib/test/test_copy.py                                   +1   -0
Lib/test/test_fileinput.py                              +36  -0
Lib/test/test_idle.py                                   +1   -0
Lib/test/test_importlib/source/test_file_loader.py      +1   -0
Lib/test/test_modulefinder.py                           +20  -0
Lib/test/test_posix.py                                  +1   -1
Lib/test/test_tcl.py                                    +1   -0
Lib/test/test_tokenize.py                               +54  -15
Lib/tokenize.py                                         +6   -0
Misc/ACKS                                               +1   -0
Misc/NEWS                                               +26  -0
Objects/unicodeobject.c                                 +2   -2
Parser/tokenizer.c                                      +12  -2
Python/codecs.c                                         +122 -16
Tools/scripts/run_tests.py                              +6   -0
.hgeol
@@ -37,6 +37,8 @@ Lib/test/xmltestdata/* = BIN
 Lib/venv/scripts/nt/* = BIN
+Lib/test/coding20731.py = BIN
+
 # All other files (which presumably are human-editable) are "native".
 # This must be the last rule!
Doc/c-api/exceptions.rst
@@ -306,7 +306,7 @@ in various ways.  There is a separate error indicator for each thread.

 .. c:function:: void PyErr_SyntaxLocation(char *filename, int lineno)

-   Like :c:func:`PyErr_SyntaxLocationExc`, but the col_offset parameter is
+   Like :c:func:`PyErr_SyntaxLocationEx`, but the col_offset parameter is
    omitted.

@@ -490,11 +490,11 @@ Exception Objects
    reference, as accessible from Python through :attr:`__cause__`.

-.. c:function:: void PyException_SetCause(PyObject *ex, PyObject *ctx)
+.. c:function:: void PyException_SetCause(PyObject *ex, PyObject *cause)

-   Set the cause associated with the exception to *ctx*.  Use *NULL* to clear
-   it.  There is no type check to make sure that *ctx* is either an exception
-   instance or :const:`None`.  This steals a reference to *ctx*.
+   Set the cause associated with the exception to *cause*.  Use *NULL* to clear
+   it.  There is no type check to make sure that *cause* is either an exception
+   instance or :const:`None`.  This steals a reference to *cause*.

    :attr:`__suppress_context__` is implicitly set to ``True`` by this function.
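The semantics documented for `PyException_SetCause` correspond to Python-level `raise X from Y`. A minimal Python sketch of the behavior described above (the `__cause__` attribute is set and `__suppress_context__` becomes `True`):

```python
# Python-level view of the C API documented above: "raise X from Y" sets
# X.__cause__ to Y and implicitly sets __suppress_context__ to True.
def wrap():
    try:
        raise KeyError("original")
    except KeyError as exc:
        raise ValueError("wrapper") from exc

try:
    wrap()
except ValueError as err:
    caught = err

assert isinstance(caught.__cause__, KeyError)
assert caught.__suppress_context__ is True
```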
Doc/c-api/module.rst
@@ -113,7 +113,7 @@ There are only a few functions special to module objects.

    Return a pointer to the :c:type:`PyModuleDef` struct from which the module was
    created, or *NULL* if the module wasn't created with
-   :c:func:`PyModule_Create`.i
+   :c:func:`PyModule_Create`.

 .. c:function:: PyObject* PyState_FindModule(PyModuleDef *def)
Doc/c-api/typeobj.rst
@@ -205,9 +205,8 @@ type objects) *must* have the :attr:`ob_size` field.
    bit currently defined is :const:`Py_PRINT_RAW`. When the :const:`Py_PRINT_RAW`
    flag bit is set, the instance should be printed the same way as :c:member:`~PyTypeObject.tp_str`
    would format it; when the :const:`Py_PRINT_RAW` flag bit is clear, the instance
-   should be printed the same was as :c:member:`~PyTypeObject.tp_repr` would format it. It should
-   return ``-1`` and set an exception condition when an error occurred during the
-   comparison.
+   should be printed the same way as :c:member:`~PyTypeObject.tp_repr` would format it. It should
+   return ``-1`` and set an exception condition when an error occurs.

    It is possible that the :c:member:`~PyTypeObject.tp_print` field will be deprecated. In any case,
    it is recommended not to define :c:member:`~PyTypeObject.tp_print`, but instead to rely on
Doc/library/csv.rst
@@ -142,36 +142,43 @@ The :mod:`csv` module defines the following functions:

 The :mod:`csv` module defines the following classes:

-.. class:: DictReader(csvfile, fieldnames=None, restkey=None, restval=None, dialect='excel', *args, **kwds)
-
-   Create an object which operates like a regular reader but maps the information
-   read into a dict whose keys are given by the optional *fieldnames* parameter.
-   If the *fieldnames* parameter is omitted, the values in the first row of the
-   *csvfile* will be used as the fieldnames.  If the row read has more fields
-   than the fieldnames sequence, the remaining data is added as a sequence
-   keyed by the value of *restkey*.  If the row read has fewer fields than the
-   fieldnames sequence, the remaining keys take the value of the optional
-   *restval* parameter.  Any other optional or keyword arguments are passed to
-   the underlying :class:`reader` instance.
-
-.. class:: DictWriter(csvfile, fieldnames, restval='', extrasaction='raise', dialect='excel', *args, **kwds)
-
-   Create an object which operates like a regular writer but maps dictionaries onto
-   output rows.  The *fieldnames* parameter identifies the order in which values in
-   the dictionary passed to the :meth:`writerow` method are written to the
-   *csvfile*.  The optional *restval* parameter specifies the value to be written
-   if the dictionary is missing a key in *fieldnames*.  If the dictionary passed to
-   the :meth:`writerow` method contains a key not found in *fieldnames*, the
-   optional *extrasaction* parameter indicates what action to take.  If it is set
-   to ``'raise'`` a :exc:`ValueError` is raised.  If it is set to ``'ignore'``,
-   extra values in the dictionary are ignored.  Any other optional or keyword
-   arguments are passed to the underlying :class:`writer` instance.
-
-   Note that unlike the :class:`DictReader` class, the *fieldnames* parameter of
-   the :class:`DictWriter` is not optional.  Since Python's :class:`dict` objects
-   are not ordered, there is not enough information available to deduce the order
-   in which the row should be written to the *csvfile*.
+.. class:: DictReader(csvfile, fieldnames=None, restkey=None, restval=None, \
+                      dialect='excel', *args, **kwds)
+
+   Create an object which operates like a regular reader but maps the
+   information read into a dict whose keys are given by the optional
+   *fieldnames* parameter.  The *fieldnames* parameter is a :mod:`sequence
+   <collections.abc>` whose elements are associated with the fields of the
+   input data in order.  These elements become the keys of the resulting
+   dictionary.  If the *fieldnames* parameter is omitted, the values in the
+   first row of the *csvfile* will be used as the fieldnames.  If the row read
+   has more fields than the fieldnames sequence, the remaining data is added as
+   a sequence keyed by the value of *restkey*.  If the row read has fewer
+   fields than the fieldnames sequence, the remaining keys take the value of
+   the optional *restval* parameter.  Any other optional or keyword arguments
+   are passed to the underlying :class:`reader` instance.
+
+.. class:: DictWriter(csvfile, fieldnames, restval='', extrasaction='raise', \
+                      dialect='excel', *args, **kwds)
+
+   Create an object which operates like a regular writer but maps dictionaries
+   onto output rows.  The *fieldnames* parameter is a :mod:`sequence
+   <collections.abc>` of keys that identify the order in which values in the
+   dictionary passed to the :meth:`writerow` method are written to the
+   *csvfile*.  The optional *restval* parameter specifies the value to be
+   written if the dictionary is missing a key in *fieldnames*.  If the
+   dictionary passed to the :meth:`writerow` method contains a key not found in
+   *fieldnames*, the optional *extrasaction* parameter indicates what action to
+   take.  If it is set to ``'raise'`` a :exc:`ValueError` is raised.  If it is
+   set to ``'ignore'``, extra values in the dictionary are ignored.  Any other
+   optional or keyword arguments are passed to the underlying :class:`writer`
+   instance.

+   Note that unlike the :class:`DictReader` class, the *fieldnames* parameter
+   of the :class:`DictWriter` is not optional.  Since Python's :class:`dict`
+   objects are not ordered, there is not enough information available to deduce
+   the order in which the row should be written to the *csvfile*.

 .. class:: Dialect
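The rewritten `DictReader`/`DictWriter` docs above can be illustrated with a short sketch (file names and data are invented; `io.StringIO` stands in for *csvfile*):

```python
import csv
import io

# DictReader: with fieldnames omitted, the first row supplies the keys.
data = io.StringIO("name,age\nAda,36\nAlan,41\n")
rows = list(csv.DictReader(data))

# DictWriter: fieldnames is mandatory and fixes the column order;
# the default extrasaction='raise' rejects keys not in fieldnames.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "age"])
writer.writeheader()
writer.writerow({"name": "Alan", "age": 41})
```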
Doc/library/stringprep.rst
@@ -3,7 +3,6 @@

 .. module:: stringprep
    :synopsis: String preparation, as per RFC 3453
-   :deprecated:

 .. moduleauthor:: Martin v. Löwis <martin@v.loewis.de>
 .. sectionauthor:: Martin v. Löwis <martin@v.loewis.de>
Doc/library/unittest.mock-examples.rst
@@ -412,7 +412,7 @@ mock using the "as" form of the with statement:

 As an alternative `patch`, `patch.object` and `patch.dict` can be used as
 class decorators. When used in this way it is the same as applying the
-decorator indvidually to every method whose name starts with "test".
+decorator individually to every method whose name starts with "test".

 .. _further-examples:
Doc/library/unittest.mock.rst
@@ -938,7 +938,7 @@ method:

 .. [#] The only exceptions are magic methods and attributes (those that have
        leading and trailing double underscores). Mock doesn't create these but
-       instead of raises an ``AttributeError``. This is because the interpreter
+       instead raises an ``AttributeError``. This is because the interpreter
        will often implicitly request these methods, and gets *very* confused to
        get a new Mock object when it expects a magic method. If you need magic
        method support see :ref:`magic methods <magic-methods>`.

@@ -1489,7 +1489,7 @@ Patching Descriptors and Proxy Objects

 Both patch_ and patch.object_ correctly patch and restore descriptors: class
 methods, static methods and properties. You should patch these on the *class*
 rather than an instance. They also work with *some* objects
-that proxy attribute access, like the `django setttings object
+that proxy attribute access, like the `django settings object
 <http://www.voidspace.org.uk/python/weblog/arch_d7_2010_12_04.shtml#e1198>`_.
Doc/tools/sphinxext/pyspecific.py
@@ -16,6 +16,7 @@ from docutils import nodes, utils

 import sphinx
 from sphinx.util.nodes import split_explicit_title
+from sphinx.util.compat import Directive
 from sphinx.writers.html import HTMLTranslator
 from sphinx.writers.latex import LaTeXTranslator
 from sphinx.locale import versionlabels

@@ -27,7 +28,9 @@ Body.enum.converters['loweralpha'] = \
 Body.enum.converters['lowerroman'] = \
     Body.enum.converters['upperroman'] = lambda x: None

-if sphinx.__version__[:3] < '1.2':
+SPHINX11 = sphinx.__version__[:3] < '1.2'
+
+if SPHINX11:
     # monkey-patch HTML translator to give versionmodified paragraphs a class
     def new_visit_versionmodified(self, node):
         self.body.append(self.starttag(node, 'p', CLASS=node['type']))

@@ -88,8 +91,6 @@ def source_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):

 # Support for marking up implementation details

-from sphinx.util.compat import Directive
-
 class ImplementationDetail(Directive):

     has_content = True

@@ -142,10 +143,6 @@ class PyDecoratorMethod(PyDecoratorMixin, PyClassmember):

 # Support for documenting version of removal in deprecations

-from sphinx.locale import versionlabels
-from sphinx.util.compat import Directive
-
 class DeprecatedRemoved(Directive):

     has_content = True
     required_arguments = 2

@@ -171,16 +168,16 @@ class DeprecatedRemoved(Directive):
         messages = []
         if self.content:
             self.state.nested_parse(self.content, self.content_offset, node)
             if len(node):
                 if isinstance(node[0], nodes.paragraph) and node[0].rawsource:
                     content = nodes.inline(node[0].rawsource, translatable=True)
                     content.source = node[0].source
                     content.line = node[0].line
                     content += node[0].children
                     node[0].replace_self(nodes.paragraph('', '', content))
-                node[0].insert(0, nodes.inline('', '%s: ' % text,
-                                               classes=['versionmodified']))
-        else:
+                if not SPHINX11:
+                    node[0].insert(0, nodes.inline('', '%s: ' % text,
+                                                   classes=['versionmodified']))
+        elif not SPHINX11:
             para = nodes.paragraph('', '',
                                    nodes.inline('', '%s.' % text,
                                                 classes=['versionmodified']))
             node.append(para)

@@ -188,6 +185,9 @@ class DeprecatedRemoved(Directive):
         env.note_versionchange('deprecated', version[0], node, self.lineno)
         return [node] + messages

+# for Sphinx < 1.2
+versionlabels['deprecated-removed'] = DeprecatedRemoved._label
+

 # Support for including Misc/NEWS
Doc/tutorial/introduction.rst
@@ -371,9 +371,9 @@ values. The most versatile is the *list*, which can be written as a list of
 comma-separated values (items) between square brackets.  Lists might contain
 items of different types, but usually the items all have the same type. ::

-   >>> squares = [1, 2, 4, 9, 16, 25]
+   >>> squares = [1, 4, 9, 16, 25]
    >>> squares
-   [1, 2, 4, 9, 16, 25]
+   [1, 4, 9, 16, 25]

 Like strings (and all other built-in :term:`sequence` type), lists can be
 indexed and sliced::

@@ -389,12 +389,12 @@ All slice operations return a new list containing the requested elements.  This
 means that the following slice returns a new (shallow) copy of the list::

    >>> squares[:]
-   [1, 2, 4, 9, 16, 25]
+   [1, 4, 9, 16, 25]

 Lists also supports operations like concatenation::

    >>> squares + [36, 49, 64, 81, 100]
-   [1, 2, 4, 9, 16, 25, 36, 49, 64, 81, 100]
+   [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

 Unlike strings, which are :term:`immutable`, lists are a :term:`mutable`
 type, i.e. it is possible to change their content::
Include/codecs.h
@@ -94,6 +94,33 @@ PyAPI_FUNC(PyObject *) PyCodec_Decode(
        const char *errors
        );

+#ifndef Py_LIMITED_API
+/* Text codec specific encoding and decoding API.
+
+   Checks the encoding against a list of codecs which do not
+   implement a str<->bytes encoding before attempting the
+   operation.
+
+   Please note that these APIs are internal and should not
+   be used in Python C extensions.
+   */
+PyAPI_FUNC(PyObject *) _PyCodec_EncodeText(
+       PyObject *object,
+       const char *encoding,
+       const char *errors
+       );
+
+PyAPI_FUNC(PyObject *) _PyCodec_DecodeText(
+       PyObject *object,
+       const char *encoding,
+       const char *errors
+       );
+#endif
+
 /* --- Codec Lookup APIs --------------------------------------------------

    All APIs return a codec object with incremented refcount and are
Lib/codecs.py
@@ -73,9 +73,19 @@ BOM64_BE = BOM_UTF32_BE

 ### Codec base classes (defining the API)

 class CodecInfo(tuple):
     """Codec details when looking up the codec registry"""

+    # Private API to allow Python 3.4 to blacklist the known non-Unicode
+    # codecs in the standard library. A more general mechanism to
+    # reliably distinguish test encodings from other codecs will hopefully
+    # be defined for Python 3.5
+    #
+    # See http://bugs.python.org/issue19619
+    _is_text_encoding = True  # Assume codecs are text encodings by default
+
     def __new__(cls, encode, decode, streamreader=None, streamwriter=None,
-        incrementalencoder=None, incrementaldecoder=None, name=None):
+        incrementalencoder=None, incrementaldecoder=None, name=None,
+        *, _is_text_encoding=None):
         self = tuple.__new__(cls, (encode, decode, streamreader, streamwriter))
         self.name = name
         self.encode = encode

@@ -84,6 +94,8 @@ class CodecInfo(tuple):
         self.incrementaldecoder = incrementaldecoder
         self.streamwriter = streamwriter
         self.streamreader = streamreader
+        if _is_text_encoding is not None:
+            self._is_text_encoding = _is_text_encoding
         return self

     def __repr__(self):
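The effect of the `_is_text_encoding` flag (set to `False` for the binary transform codecs below) can be sketched from the user's side: `str.encode` now refuses non-text codecs with a descriptive `LookupError`, while `codecs.encode`/`codecs.decode` continue to handle arbitrary transforms. A small sketch of that behavior:

```python
import codecs

# str.encode refuses binary transforms such as hex_codec with a clear
# LookupError once the codec is marked _is_text_encoding=False...
try:
    "spam".encode("hex_codec")
except LookupError as exc:
    error_message = str(exc)

# ...while the general-purpose codecs functions still work for them.
encoded = codecs.encode(b"spam", "hex_codec")
decoded = codecs.decode(encoded, "hex_codec")
```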
Lib/copy.py
@@ -110,7 +110,7 @@ _copy_dispatch = d = {}
 def _copy_immutable(x):
     return x
 for t in (type(None), int, float, bool, str, tuple,
-          frozenset, type, range,
+          bytes, frozenset, type, range,
           types.BuiltinFunctionType, type(Ellipsis),
           types.FunctionType, weakref.ref):
     d[t] = _copy_immutable
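The `_copy_immutable` dispatch above simply returns its argument, so `copy.copy()` hands immutable objects back unchanged rather than duplicating them; this commit adds `bytes` to that fast path. A quick sketch:

```python
import copy

# Immutable objects on the _copy_dispatch fast path are returned as-is;
# after this change, bytes objects behave like tuples and strings here.
blob = b"world"
same = copy.copy(blob)

t = (1, 2, 3)
same_tuple = copy.copy(t)
```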
Lib/encodings/base64_codec.py
@@ -52,4 +52,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamwriter=StreamWriter,
         streamreader=StreamReader,
+        _is_text_encoding=False,
     )

Lib/encodings/bz2_codec.py
@@ -74,4 +74,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamwriter=StreamWriter,
         streamreader=StreamReader,
+        _is_text_encoding=False,
     )

Lib/encodings/hex_codec.py
@@ -52,4 +52,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamwriter=StreamWriter,
         streamreader=StreamReader,
+        _is_text_encoding=False,
     )

Lib/encodings/quopri_codec.py
@@ -53,4 +53,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamwriter=StreamWriter,
         streamreader=StreamReader,
+        _is_text_encoding=False,
     )

Lib/encodings/rot_13.py
@@ -43,6 +43,7 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamwriter=StreamWriter,
         streamreader=StreamReader,
+        _is_text_encoding=False,
     )

 ### Map

Lib/encodings/uu_codec.py
@@ -96,4 +96,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamreader=StreamReader,
         streamwriter=StreamWriter,
+        _is_text_encoding=False,
     )

Lib/encodings/zlib_codec.py
@@ -74,4 +74,5 @@ def getregentry():
         incrementaldecoder=IncrementalDecoder,
         streamreader=StreamReader,
         streamwriter=StreamWriter,
+        _is_text_encoding=False,
     )
Lib/idlelib/idle_test/README.txt
@@ -41,9 +41,10 @@ idle class. For the benefit of buildbot machines that do not have a graphics
 screen, gui tests must be 'guarded' by "requires('gui')" in a setUp
 function or method. This will typically be setUpClass.

-All gui objects must be destroyed by the end of the test, perhaps in a tearDown
-function. Creating the Tk root directly in a setUp allows a reference to be saved
-so it can be properly destroyed in the corresponding tearDown.
+To avoid interfering with other gui tests, all gui objects must be destroyed
+and deleted by the end of the test. If a widget, such as a Tk root, is created
+in a setUpX function, destroy it in the corresponding tearDownX. For module
+and class attributes, also delete the widget.
 ---
     @classmethod
     def setUpClass(cls):

@@ -53,6 +54,7 @@ so it can be properly destroyed in the corresponding tearDown.
     @classmethod
     def tearDownClass(cls):
         cls.root.destroy()
+        del cls.root
 ---

 Support.requires('gui') returns true if it is either called in a main module
Lib/idlelib/idle_test/test_formatparagraph.py
@@ -277,6 +277,9 @@ class FormatEventTest(unittest.TestCase):
     @classmethod
     def tearDownClass(cls):
         cls.root.destroy()
+        del cls.root
+        del cls.text
+        del cls.formatter

     def test_short_line(self):
         self.text.insert('1.0', "Short line\n")
Lib/idlelib/idle_test/test_idlehistory.py
@@ -80,6 +80,7 @@ class FetchTest(unittest.TestCase):
     @classmethod
     def tearDownClass(cls):
         cls.root.destroy()
+        del cls.root

     def fetch_test(self, reverse, line, prefix, index, *, bell=False):
         # Perform one fetch as invoked by Alt-N or Alt-P
Lib/idlelib/idle_test/test_searchengine.py
@@ -64,6 +64,7 @@ class GetSelectionTest(unittest.TestCase):
 ##    @classmethod
 ##    def tearDownClass(cls):
 ##        cls.root.destroy()
+##        del cls.root

     def test_get_selection(self):
         # text = Text(master=self.root)

@@ -219,6 +220,7 @@ class SearchTest(unittest.TestCase):
 ##    @classmethod
 ##    def tearDownClass(cls):
 ##        cls.root.destroy()
+##        del cls.root

     def test_search(self):
         Equal = self.assertEqual

@@ -261,6 +263,7 @@ class ForwardBackwardTest(unittest.TestCase):
 ##    @classmethod
 ##    def tearDownClass(cls):
 ##        cls.root.destroy()
+##        del cls.root

     @classmethod
     def setUpClass(cls):
Lib/idlelib/idle_test/test_text.py
@@ -221,6 +221,7 @@ class TkTextTest(TextTest, unittest.TestCase):
     @classmethod
     def tearDownClass(cls):
         cls.root.destroy()
+        del cls.root

 if __name__ == '__main__':
Lib/modulefinder.py
@@ -287,7 +287,7 @@ class ModuleFinder:
             if fp.read(4) != imp.get_magic():
                 self.msgout(2, "raise ImportError: Bad magic number", pathname)
                 raise ImportError("Bad magic number in %s" % pathname)
-            fp.read(4)
+            fp.read(8)  # Skip mtime and size.
             co = marshal.load(fp)
         else:
             co = None
Lib/test/coding20731.py (new file, 0 → 100644)
+#coding:latin1
Lib/test/test_codecs.py
@@ -4,6 +4,7 @@ import locale
 import sys
 import unittest
 import warnings
+import encodings

 from test import support

@@ -2408,6 +2409,47 @@ class TransformCodecTest(unittest.TestCase):
             sout = reader.readline()
             self.assertEqual(sout, b"\x80")

+    def test_text_to_binary_blacklists_binary_transforms(self):
+        # Check binary -> binary codecs give a good error for str input
+        bad_input = "bad input type"
+        for encoding in bytes_transform_encodings:
+            fmt = (r"{!r} is not a text encoding; "
+                   r"use codecs.encode\(\) to handle arbitrary codecs")
+            msg = fmt.format(encoding)
+            with self.assertRaisesRegex(LookupError, msg) as failure:
+                bad_input.encode(encoding)
+            self.assertIsNone(failure.exception.__cause__)
+
+    def test_text_to_binary_blacklists_text_transforms(self):
+        # Check str.encode gives a good error message for str -> str codecs
+        msg = (r"^'rot_13' is not a text encoding; "
+               r"use codecs.encode\(\) to handle arbitrary codecs")
+        with self.assertRaisesRegex(LookupError, msg):
+            "just an example message".encode("rot_13")
+
+    def test_binary_to_text_blacklists_binary_transforms(self):
+        # Check bytes.decode and bytearray.decode give a good error
+        # message for binary -> binary codecs
+        data = b"encode first to ensure we meet any format restrictions"
+        for encoding in bytes_transform_encodings:
+            encoded_data = codecs.encode(data, encoding)
+            fmt = (r"{!r} is not a text encoding; "
+                   r"use codecs.decode\(\) to handle arbitrary codecs")
+            msg = fmt.format(encoding)
+            with self.assertRaisesRegex(LookupError, msg):
+                encoded_data.decode(encoding)
+            with self.assertRaisesRegex(LookupError, msg):
+                bytearray(encoded_data).decode(encoding)
+
+    def test_binary_to_text_blacklists_text_transforms(self):
+        # Check str -> str codec gives a good error for binary input
+        for bad_input in (b"immutable", bytearray(b"mutable")):
+            msg = (r"^'rot_13' is not a text encoding; "
+                   r"use codecs.decode\(\) to handle arbitrary codecs")
+            with self.assertRaisesRegex(LookupError, msg) as failure:
+                bad_input.decode("rot_13")
+            self.assertIsNone(failure.exception.__cause__)
+
     @unittest.skipUnless(sys.platform == 'win32',
                          'code pages are specific to Windows')
Lib/test/test_coding.py
View file @
ad51a118
import
unittest
from
test.support
import
TESTFN
,
unlink
,
unload
import
importlib
,
os
,
sys
import
importlib
,
os
,
sys
,
subprocess
class
CodingTest
(
unittest
.
TestCase
):
def
test_bad_coding
(
self
):
...
...
@@ -58,6 +58,14 @@ class CodingTest(unittest.TestCase):
self
.
assertTrue
(
c
.
exception
.
args
[
0
].
startswith
(
expected
),
msg
=
c
.
exception
.
args
[
0
])
def
test_20731
(
self
):
sub
=
subprocess
.
Popen
([
sys
.
executable
,
os
.
path
.
join
(
os
.
path
.
dirname
(
__file__
),
'coding20731.py'
)],
stderr
=
subprocess
.
PIPE
)
err
=
sub
.
communicate
()[
1
]
self
.
assertEqual
(
sub
.
returncode
,
0
)
self
.
assertNotIn
(
b'SyntaxError'
,
err
)
if
__name__
==
"__main__"
:
unittest
.
main
()
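The new `test_20731` runs a script whose only content is a PEP 263 coding declaration (the added `Lib/test/coding20731.py` above). The mechanism it exercises can be sketched directly: `compile()` honors a coding declaration when given bytes source. The source string below is an invented example:

```python
# A PEP 263 coding declaration on line 1 or 2 tells the tokenizer how to
# decode bytes source; here the byte 0xe9 is decoded as Latin-1 'é'.
source = b"#coding:latin1\ns = '\xe9'\n"
namespace = {}
exec(compile(source, "<test>", "exec"), namespace)
```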
Lib/test/test_copy.py
@@ -98,6 +98,7 @@ class TestCopy(unittest.TestCase):
             pass
         tests = [None, 42, 2**100, 3.14, True, False, 1j,
                  "hello", "hello\u1234", f.__code__,
+                 b"world", bytes(range(256)),
                  NewStyle, range(10), Classic, max, WithMetaclass]
         for x in tests:
             self.assertIs(copy.copy(x), x)
Lib/test/test_fileinput.py
@@ -258,6 +258,24 @@ class FileInputTests(unittest.TestCase):
             fi.readline()
         self.assertTrue(custom_open_hook.invoked, "openhook not invoked")

+    def test_readline(self):
+        with open(TESTFN, 'wb') as f:
+            f.write(b'A\nB\r\nC\r')
+            # Fill TextIOWrapper buffer.
+            f.write(b'123456789\n' * 1000)
+            # Issue #20501: readline() shouldn't read whole file.
+            f.write(b'\x80')
+        self.addCleanup(safe_unlink, TESTFN)
+
+        with FileInput(files=TESTFN,
+                       openhook=hook_encoded('ascii'), bufsize=8) as fi:
+            self.assertEqual(fi.readline(), 'A\n')
+            self.assertEqual(fi.readline(), 'B\n')
+            self.assertEqual(fi.readline(), 'C\n')
+
+            with self.assertRaises(UnicodeDecodeError):
+                # Read to the end of file.
+                list(fi)
+
     def test_context_manager(self):
         try:
             t1 = writeTmp(1, ["A\nB\nC"])

@@ -835,6 +853,24 @@ class Test_hook_encoded(unittest.TestCase):
         self.assertIs(kwargs.pop('encoding'), encoding)
         self.assertFalse(kwargs)

+    def test_modes(self):
+        # Unlikely UTF-7 is locale encoding
+        with open(TESTFN, 'wb') as f:
+            f.write(b'A\nB\r\nC\rD+IKw-')
+        self.addCleanup(safe_unlink, TESTFN)
+
+        def check(mode, expected_lines):
+            with FileInput(files=TESTFN, mode=mode,
+                           openhook=hook_encoded('utf-7')) as fi:
+                lines = list(fi)
+            self.assertEqual(lines, expected_lines)
+
+        check('r', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
+        check('rU', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
+        check('U', ['A\n', 'B\n', 'C\n', 'D\u20ac'])
+        with self.assertRaises(ValueError):
+            check('rb', ['A\n', 'B\r\n', 'C\r', 'D\u20ac'])
+
 def test_main():
     run_unittest(
         BufferSizesTests,
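The `hook_encoded` hook exercised by the new tests opens each input file with an explicit encoding, so universal-newline translation applies in text mode. A small self-contained sketch (the temporary file is created here only for illustration):

```python
import fileinput
import os
import tempfile

# hook_encoded('ascii') opens the file in text mode with the given
# encoding; '\r\n' and '\r' line endings are normalized to '\n'.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A\nB\r\nC\r")
    path = f.name
try:
    with fileinput.FileInput(files=path,
                             openhook=fileinput.hook_encoded("ascii")) as fi:
        lines = list(fi)
finally:
    os.remove(path)
```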
Lib/test/test_idle.py
@@ -14,6 +14,7 @@ if use_resources and 'gui' in use_resources:
     try:
         root = tk.Tk()
         root.destroy()
+        del root
     except tk.TclError:
         while 'gui' in use_resources:
             use_resources.remove('gui')
Lib/test/test_importlib/source/test_file_loader.py
@@ -162,6 +162,7 @@ class SimpleTest(unittest.TestCase):
         if os.path.exists(pycache):
             shutil.rmtree(pycache)

+    @source_util.writes_bytecode_files
     def test_timestamp_overflow(self):
         # When a modification timestamp is larger than 2**32, it should be
         # truncated rather than raise an OverflowError.
Lib/test/test_modulefinder.py
View file @
ad51a118
import
os
import
errno
import
importlib.machinery
import
py_compile
import
shutil
import
unittest
import
tempfile
...
...
@@ -208,6 +210,14 @@ a/module.py
from . import *
"""
]
bytecode_test
=
[
"a"
,
[
"a"
],
[],
[],
""
]
def
open_file
(
path
):
dirname
=
os
.
path
.
dirname
(
path
)
...
...
@@ -288,6 +298,16 @@ class ModuleFinderTest(unittest.TestCase):
def
test_relative_imports_4
(
self
):
self
.
_do_test
(
relative_import_test_4
)
def
test_bytecode
(
self
):
base_path
=
os
.
path
.
join
(
TEST_DIR
,
'a'
)
source_path
=
base_path
+
importlib
.
machinery
.
SOURCE_SUFFIXES
[
0
]
bytecode_path
=
base_path
+
importlib
.
machinery
.
BYTECODE_SUFFIXES
[
0
]
with
open_file
(
source_path
)
as
file
:
file
.
write
(
'testing_modulefinder = True
\
n
'
)
py_compile
.
compile
(
source_path
,
cfile
=
bytecode_path
)
os
.
remove
(
source_path
)
self
.
_do_test
(
bytecode_test
)
def
test_main
():
support
.
run_unittest
(
ModuleFinderTest
)
...
...
Lib/test/test_posix.py
@@ -1144,7 +1144,7 @@ class PosixGroupsTester(unittest.TestCase):
     def test_initgroups(self):
         # find missing group

-        g = max(self.saved_groups) + 1
+        g = max(self.saved_groups or [0]) + 1
         name = pwd.getpwuid(posix.getuid()).pw_name
         posix.initgroups(name, g)
         self.assertIn(g, posix.getgroups())
Lib/test/test_tcl.py
@@ -376,6 +376,7 @@ class TclTest(unittest.TestCase):
             result = arg
             return arg
         self.interp.createcommand('testfunc', testfunc)
+        self.addCleanup(self.interp.tk.deletecommand, 'testfunc')
         def check(value, expected, eq=self.assertEqual):
             r = self.interp.call('testfunc', value)
             self.assertIsInstance(result, str)
Lib/test/test_tokenize.py
View file @
ad51a118
...
...
@@ -2,7 +2,7 @@ doctests = """
Tests for the tokenize module.
The tests can be really simple. Given a small fragment of source
code, print out a table with tokens. The ENDMARK is omitted for
code, print out a table with tokens. The ENDMARK
ER
is omitted for
brevity.
>>> dump_tokens("1 + 1")
...
...
@@ -578,9 +578,15 @@ pass the '-ucpu' option to process the full directory.
>>> tempdir = os.path.dirname(f) or os.curdir
>>> testfiles = glob.glob(os.path.join(tempdir, "test*.py"))
tokenize is broken on test_pep3131.py because regular expressions are broken on
the obscure unicode identifiers in it. *sigh*
Tokenize is broken on test_pep3131.py because regular expressions are
broken on the obscure unicode identifiers in it. *sigh*
With roundtrip extended to test the 5-tuple mode of untokenize,
7 more testfiles fail. Remove them also until the failure is diagnosed.
>>> testfiles.remove(os.path.join(tempdir, "test_pep3131.py"))
>>> for f in ('buffer', 'builtin', 'fileio', 'inspect', 'os', 'platform', 'sys'):
... testfiles.remove(os.path.join(tempdir, "test_%s.py") % f)
...
>>> if not support.is_resource_enabled("cpu"):
... testfiles = random.sample(testfiles, 10)
...
...
...
@@ -659,21 +665,39 @@ def dump_tokens(s):

 def roundtrip(f):
     """
     Test roundtrip for `untokenize`. `f` is an open file or a string.
-    The source code in f is tokenized, converted back to source code via
-    tokenize.untokenize(), and tokenized again from the latter. The test
-    fails if the second tokenization doesn't match the first.
+    The source code in f is tokenized to both 5- and 2-tuples.
+    Both sequences are converted back to source code via
+    tokenize.untokenize(), and the latter tokenized again to 2-tuples.
+    The test fails if the 3 pair tokenizations do not match.
+
+    When untokenize bugs are fixed, untokenize with 5-tuples should
+    reproduce code that does not contain a backslash continuation
+    following spaces.  A proper test should test this.
+
+    This function would be more useful for correcting bugs if it reported
+    the first point of failure, like assertEqual, rather than just
+    returning False -- or if it were only used in unittests and not
+    doctest and actually used assertEqual.
     """
+    # Get source code and original tokenizations
     if isinstance(f, str):
-        f = BytesIO(f.encode('utf-8'))
-    try:
-        token_list = list(tokenize(f.readline))
-    finally:
-        f.close()
-    tokens1 = [tok[:2] for tok in token_list]
-    new_bytes = untokenize(tokens1)
-    readline = (line for line in new_bytes.splitlines(keepends=True)).__next__
-    tokens2 = [tok[:2] for tok in tokenize(readline)]
-    return tokens1 == tokens2
+        code = f.encode('utf-8')
+    else:
+        code = f.read()
+        f.close()
+    readline = iter(code.splitlines(keepends=True)).__next__
+    tokens5 = list(tokenize(readline))
+    tokens2 = [tok[:2] for tok in tokens5]
+    # Reproduce tokens2 from pairs
+    bytes_from2 = untokenize(tokens2)
+    readline2 = iter(bytes_from2.splitlines(keepends=True)).__next__
+    tokens2_from2 = [tok[:2] for tok in tokenize(readline2)]
+    # Reproduce tokens2 from 5-tuples
+    bytes_from5 = untokenize(tokens5)
+    readline5 = iter(bytes_from5.splitlines(keepends=True)).__next__
+    tokens2_from5 = [tok[:2] for tok in tokenize(readline5)]
+    # Compare 3 versions
+    return tokens2 == tokens2_from2 == tokens2_from5

 # This is an example from the docs, set up as a doctest.
 def decistmt(s):
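The rewritten helper compares three tokenizations. The core identity it relies on can be checked in a few lines with the stdlib `tokenize` module (an illustrative sketch, not the test file itself):

```python
from io import BytesIO
from tokenize import tokenize, untokenize

code = b"def f(a, b):\n    return a + b\n"
tokens5 = list(tokenize(BytesIO(code).readline))
tokens2 = [tok[:2] for tok in tokens5]

# untokenize accepts either full 5-tuples or (type, string) pairs;
# both outputs must re-tokenize to the same (type, string) pairs.
for source in (untokenize(tokens5), untokenize(tokens2)):
    again = [tok[:2] for tok in tokenize(BytesIO(source).readline)]
    assert again == tokens2
```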
...
...
@@ -1156,6 +1180,7 @@ class TestTokenize(TestCase):

 class UntokenizeTest(TestCase):

     def test_bad_input_order(self):
+        # raise if previous row
         u = Untokenizer()
         u.prev_row = 2
         u.prev_col = 2
...
...
@@ -1163,8 +1188,22 @@ class UntokenizeTest(TestCase):
             u.add_whitespace((1,3))
         self.assertEqual(cm.exception.args[0],
                 'start (1,3) precedes previous end (2,2)')
+        # raise if previous column in row
+        self.assertRaises(ValueError, u.add_whitespace, (2,1))
+
+    def test_backslash_continuation(self):
+        # The problem is that <whitespace>\<newline> leaves no token
+        u = Untokenizer()
+        u.prev_row = 1
+        u.prev_col = 1
+        u.tokens = []
+        u.add_whitespace((2, 0))
+        self.assertEqual(u.tokens, ['\\\n'])
+        u.prev_row = 2
+        u.add_whitespace((4, 4))
+        self.assertEqual(u.tokens, ['\\\n', '\\\n\\\n', '    '])
+        self.assertTrue(roundtrip('a\nb\nc\n\\\nc\n'))

     def test_iter_compat(self):
         u = Untokenizer()
         token = (NAME, 'Hello')
...
...
Lib/tokenize.py
View file @
ad51a118
...
...
@@ -234,6 +234,10 @@ class Untokenizer:
         if row < self.prev_row or row == self.prev_row and col < self.prev_col:
             raise ValueError("start ({},{}) precedes previous end ({},{})"
                              .format(row, col, self.prev_row, self.prev_col))
+        row_offset = row - self.prev_row
+        if row_offset:
+            self.tokens.append("\\\n" * row_offset)
+            self.prev_col = 0
         col_offset = col - self.prev_col
         if col_offset:
             self.tokens.append(" " * col_offset)
...
...
@@ -248,6 +252,8 @@ class Untokenizer:
             if tok_type == ENCODING:
                 self.encoding = token
                 continue
+            if tok_type == ENDMARKER:
+                break
             self.add_whitespace(start)
             self.tokens.append(token)
             self.prev_row, self.prev_col = end
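With the `row_offset` handling added above, `untokenize` can bridge a row gap with backslash continuations instead of producing broken output. A small sketch against the stdlib `tokenize` module (illustrative, not part of the patch):

```python
from io import BytesIO
from tokenize import tokenize, untokenize

def pairs(source_bytes):
    """Tokenize and keep only the (type, string) pairs."""
    return [tok[:2] for tok in tokenize(BytesIO(source_bytes).readline)]

code = b"x = 1 + \\\n    2\n"            # source with a line continuation
tokens5 = list(tokenize(BytesIO(code).readline))
out = untokenize(tokens5)                # bytes, since ENCODING token is present
# Byte-exact output is not promised (the space before the backslash may be
# dropped), but the token stream round-trips.
assert pairs(out) == pairs(code)
```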
...
...
Misc/ACKS
View file @
ad51a118
...
...
@@ -40,6 +40,7 @@ Erik Andersén
 Oliver Andrich
 Ross Andrus
 Juancarlo Añez
+Chris Angelico
 Jérémy Anger
 Ankur Ankan
 Jon Anglin
...
...
Misc/NEWS
View file @
ad51a118
...
...
@@ -10,6 +10,15 @@ What's New in Python 3.3.5 release candidate 1?
 Core and Builtins
 -----------------

+- Issue #20731: Properly position in source code files even if they
+  are opened in text mode.  Patch by Serhiy Storchaka.
+
+- Issue #19619: str.encode, bytes.decode and bytearray.decode now use an
+  internal API to throw LookupError for known non-text encodings, rather
+  than attempting the encoding or decoding operation and then throwing a
+  TypeError for an unexpected output type. (The latter mechanism remains
+  in place for third party non-text encodings)
+
 - Issue #20588: Make Python-ast.c C89 compliant.

 - Issue #20437: Fixed 21 potential bugs when deleting objects references.
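The #19619 behaviour is observable from pure Python; a quick illustration using the bytes-to-bytes `hex_codec` (sketch, assuming Python 3.3.5+ semantics):

```python
import codecs

# hex_codec transforms bytes to bytes, so it is not a text encoding:
# the convenience method refuses it early with LookupError...
raised = None
try:
    b"6869".decode("hex_codec")
except LookupError as exc:
    raised = exc
assert raised is not None
assert "is not a text encoding" in str(raised)

# ...while the type-neutral codecs functions still support it.
assert codecs.decode(b"6869", "hex_codec") == b"hi"
assert codecs.encode(b"hi", "hex_codec") == b"6869"
```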
...
...
@@ -20,6 +29,15 @@ Core and Builtins
 Library
 -------

 - Issue #20778: Fix modulefinder to work with bytecode-only modules.

+- Issue #20791: copy.copy() now doesn't make a copy when the input is
+  a bytes object.  Initial patch by Peter Otten.
+
+- Issue #20621: Fixes a zipimport bug introduced in 3.3.4 that could cause
+  spurious crashes or SystemErrors when importing modules or packages from
+  a zip file.  The change causing the problem was reverted.
+
+- Issue #20635: Fixed grid_columnconfigure() and grid_rowconfigure() methods
+  of Tkinter widgets to work in wantobjects=True mode.
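The #20791 entry can be checked interactively: for immutable built-ins, `copy.copy()` may return the object itself (a sketch, assuming Python 3.3.5+):

```python
import copy

data = b"immutable"
# bytes are immutable, so a "copy" can safely be the very same object
assert copy.copy(data) is data

# mutable containers still get a real shallow copy
items = [1, 2, 3]
clone = copy.copy(items)
assert clone == items and clone is not items
```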
...
...
@@ -117,6 +135,8 @@ IDLE
 Tests
 -----

+- Issue #20743: Fix a reference leak in test_tcl.
+
 - Issue #20510: Rewrote test_exit in test_sys to match existing comments,
   use modern unittest features, and use helpers from test.script_helper
   instead of using subprocess directly.  Patch by Gareth Rees.
...
...
@@ -147,6 +167,12 @@ Build

+- Issue #20609: Restored the ability to build 64-bit Windows binaries on
+  32-bit Windows, which was broken by the change in issue #19788.
+
 Tools/Demos
 -----------

+- Issue #20535: PYTHONWARNING no longer affects the run_tests.py script.
+  Patch by Arfrever Frehtes Taifersar Arahesis.
+

 What's New in Python 3.3.4?
 ===========================
...
...
Objects/unicodeobject.c
View file @
ad51a118
...
...
@@ -3129,7 +3129,7 @@ PyUnicode_Decode(const char *s,
     buffer = PyMemoryView_FromBuffer(&info);
     if (buffer == NULL)
         goto onError;
-    unicode = PyCodec_Decode(buffer, encoding, errors);
+    unicode = _PyCodec_DecodeText(buffer, encoding, errors);
     if (unicode == NULL)
         goto onError;
     if (!PyUnicode_Check(unicode)) {
...
...
@@ -3489,7 +3489,7 @@ PyUnicode_AsEncodedString(PyObject *unicode,
     }

     /* Encode via the codec registry */
-    v = PyCodec_Encode(unicode, encoding, errors);
+    v = _PyCodec_EncodeText(unicode, encoding, errors);
     if (v == NULL)
         return NULL;
...
...
Parser/tokenizer.c
View file @
ad51a118
...
...
@@ -498,9 +498,13 @@ fp_setreadl(struct tok_state *tok, const char* enc)
     fd = fileno(tok->fp);
-    /* Due to buffering the file offset for fd can be different from the file
-     * position of tok->fp. */
+    /* Due to buffering the file offset for fd can be different from the file
+     * position of tok->fp.  If tok->fp was opened in text mode on Windows,
+     * its file position counts CRLF as one char and can't be directly mapped
+     * to the file offset for fd.  Instead we step back one byte and read to
+     * the end of line. */
     pos = ftell(tok->fp);
-    if (pos == -1 ||
-        lseek(fd, (off_t)pos, SEEK_SET) == (off_t)-1) {
+    if (pos == -1 ||
+        lseek(fd, (off_t)(pos > 0 ? pos - 1 : pos), SEEK_SET) == (off_t)-1) {
         PyErr_SetFromErrnoWithFilename(PyExc_OSError, NULL);
         goto cleanup;
     }
...
...
@@ -513,6 +517,12 @@ fp_setreadl(struct tok_state *tok, const char* enc)
     Py_XDECREF(tok->decoding_readline);
     readline = _PyObject_GetAttrId(stream, &PyId_readline);
     tok->decoding_readline = readline;
+    if (pos > 0) {
+        if (PyObject_CallObject(readline, NULL) == NULL) {
+            readline = NULL;
+            goto cleanup;
+        }
+    }

   cleanup:
     Py_XDECREF(stream);
...
...
Python/codecs.c
View file @
ad51a118
...
...
@@ -337,18 +337,15 @@ PyObject *PyCodec_StreamWriter(const char *encoding,
    errors is passed to the encoder factory as argument if non-NULL. */
-PyObject *PyCodec_Encode(PyObject *object,
-                         const char *encoding,
-                         const char *errors)
+static PyObject *
+_PyCodec_EncodeInternal(PyObject *object,
+                        PyObject *encoder,
+                        const char *encoding,
+                        const char *errors)
 {
-    PyObject *encoder = NULL;
     PyObject *args = NULL, *result = NULL;
     PyObject *v = NULL;

-    encoder = PyCodec_Encoder(encoding);
-    if (encoder == NULL)
-        goto onError;
-
     args = args_tuple(object, errors);
     if (args == NULL)
         goto onError;
...
...
@@ -384,18 +381,15 @@ PyObject *PyCodec_Encode(PyObject *object,
    errors is passed to the decoder factory as argument if non-NULL. */
-PyObject *PyCodec_Decode(PyObject *object,
-                         const char *encoding,
-                         const char *errors)
+static PyObject *
+_PyCodec_DecodeInternal(PyObject *object,
+                        PyObject *decoder,
+                        const char *encoding,
+                        const char *errors)
 {
-    PyObject *decoder = NULL;
     PyObject *args = NULL, *result = NULL;
     PyObject *v;

-    decoder = PyCodec_Decoder(encoding);
-    if (decoder == NULL)
-        goto onError;
-
     args = args_tuple(object, errors);
     if (args == NULL)
         goto onError;
...
...
@@ -425,6 +419,118 @@ PyObject *PyCodec_Decode(PyObject *object,
     return NULL;
 }

+/* Generic encoding/decoding API */
+PyObject *PyCodec_Encode(PyObject *object,
+                         const char *encoding,
+                         const char *errors)
+{
+    PyObject *encoder;
+
+    encoder = PyCodec_Encoder(encoding);
+    if (encoder == NULL)
+        return NULL;
+
+    return _PyCodec_EncodeInternal(object, encoder, encoding, errors);
+}
+
+PyObject *PyCodec_Decode(PyObject *object,
+                         const char *encoding,
+                         const char *errors)
+{
+    PyObject *decoder;
+
+    decoder = PyCodec_Decoder(encoding);
+    if (decoder == NULL)
+        return NULL;
+
+    return _PyCodec_DecodeInternal(object, decoder, encoding, errors);
+}
+
+/* Text encoding/decoding API */
+static PyObject *
+codec_getitem_checked(const char *encoding,
+                      const char *operation_name,
+                      int index)
+{
+    _Py_IDENTIFIER(_is_text_encoding);
+    PyObject *codec;
+    PyObject *attr;
+    PyObject *v;
+    int is_text_codec;
+
+    codec = _PyCodec_Lookup(encoding);
+    if (codec == NULL)
+        return NULL;
+
+    /* Backwards compatibility: assume any raw tuple describes a text
+     * encoding, and the same for anything lacking the private
+     * attribute.
+     */
+    if (!PyTuple_CheckExact(codec)) {
+        attr = _PyObject_GetAttrId(codec, &PyId__is_text_encoding);
+        if (attr == NULL) {
+            if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+                PyErr_Clear();
+            }
+            else {
+                Py_DECREF(codec);
+                return NULL;
+            }
+        }
+        else {
+            is_text_codec = PyObject_IsTrue(attr);
+            Py_DECREF(attr);
+            if (!is_text_codec) {
+                Py_DECREF(codec);
+                PyErr_Format(PyExc_LookupError,
+                             "'%.400s' is not a text encoding; "
+                             "use codecs.%s() to handle arbitrary codecs",
+                             encoding, operation_name);
+                return NULL;
+            }
+        }
+    }
+
+    v = PyTuple_GET_ITEM(codec, index);
+    Py_DECREF(codec);
+    Py_INCREF(v);
+    return v;
+}
+
+static PyObject * _PyCodec_TextEncoder(const char *encoding)
+{
+    return codec_getitem_checked(encoding, "encode", 0);
+}
+
+static PyObject * _PyCodec_TextDecoder(const char *encoding)
+{
+    return codec_getitem_checked(encoding, "decode", 1);
+}
+
+PyObject *_PyCodec_EncodeText(PyObject *object,
+                              const char *encoding,
+                              const char *errors)
+{
+    PyObject *encoder;
+
+    encoder = _PyCodec_TextEncoder(encoding);
+    if (encoder == NULL)
+        return NULL;
+
+    return _PyCodec_EncodeInternal(object, encoder, encoding, errors);
+}
+
+PyObject *_PyCodec_DecodeText(PyObject *object,
+                              const char *encoding,
+                              const char *errors)
+{
+    PyObject *decoder;
+
+    decoder = _PyCodec_TextDecoder(encoding);
+    if (decoder == NULL)
+        return NULL;
+
+    return _PyCodec_DecodeInternal(object, decoder, encoding, errors);
+}
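The `_is_text_encoding` attribute checked by `codec_getitem_checked` lives on `codecs.CodecInfo` and defaults to true, so it can be inspected from Python (an illustrative sketch of the mechanism):

```python
import codecs

# Text codecs leave the private flag at its default of True...
assert codecs.lookup("utf-8")._is_text_encoding is True

# ...while str-to-str codecs such as rot_13 opt out, which is why
# str.encode rejects them even though codecs.encode still works.
assert codecs.lookup("rot13")._is_text_encoding is False
assert codecs.encode("hello", "rot13") == "uryyb"
```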
/* Register the error handling callback function error under the name
name. This function will be called by the codec when it encounters
an unencodable characters/undecodable bytes and doesn't know the
...
...
Tools/scripts/run_tests.py
View file @
ad51a118
...
...
@@ -32,6 +32,12 @@ def main(regrtest_args):
     ]
     # Allow user-specified interpreter options to override our defaults.
     args.extend(test.support.args_from_interpreter_flags())
+
+    # Workaround for issue #20355
+    os.environ.pop("PYTHONWARNINGS", None)
+    # Workaround for issue #20361
+    args.extend(['-W', 'error::BytesWarning'])
+
     args.extend(['-m', 'test',    # Run the test suite
                  '-r',            # Randomize test order
                  '-w',            # Re-run failed tests in verbose mode
...