cpython: Commit 2139a9fd

Authored Jan 16, 2016 by Martin Panter

Issue #26127: Merge tokenize documentation
Parents: d7a815b1, 20b1bfa6

Showing 1 changed file with 7 additions and 7 deletions:

Doc/library/tokenize.rst  (+7, -7)

Doc/library/tokenize.rst  (view file @ 2139a9fd)

@@ -27,7 +27,7 @@ The primary entry point is a :term:`generator`:
 .. function:: tokenize(readline)
 
-   The :func:`tokenize` generator requires one argument, *readline*, which
+   The :func:`.tokenize` generator requires one argument, *readline*, which
    must be a callable object which provides the same interface as the
    :meth:`io.IOBase.readline` method of file objects. Each call to the
    function should return one line of input as bytes.
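
For context on the hunk above: tokenize() accepts any callable that follows the io.IOBase.readline interface and returns bytes, for example the readline method of an in-memory buffer. A minimal sketch (illustrative only, not part of this patch):

import io
import tokenize

# Any bytes-returning readline works; an in-memory buffer stands in here
# for a real source file opened in binary mode.
source = b"x = 1\n"
for tok in tokenize.tokenize(io.BytesIO(source).readline):
    print(tok.type, tok.string)
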
@@ -52,7 +52,7 @@ The primary entry point is a :term:`generator`:
    .. versionchanged:: 3.3
       Added support for ``exact_type``.
 
-:func:`tokenize` determines the source encoding of the file by looking for a
+:func:`.tokenize` determines the source encoding of the file by looking for a
 UTF-8 BOM or encoding cookie, according to :pep:`263`.
@@ -74,7 +74,7 @@ All constants from the :mod:`token` module are also exported from
 .. data:: ENCODING
 
    Token value that indicates the encoding used to decode the source bytes
-   into text. The first token returned by :func:`tokenize` will always be an
+   into text. The first token returned by :func:`.tokenize` will always be an
    ENCODING token.
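
As the two hunks above describe, the encoding is taken from a UTF-8 BOM or a PEP 263 coding cookie, and the first token yielded is always an ENCODING token. A small sketch of that behaviour (illustrative, not part of this patch):

import io
import tokenize

# The first TokenInfo is the ENCODING token; its string holds the encoding
# detected from the coding cookie (falling back to 'utf-8').
source = b"# -*- coding: utf-8 -*-\npi = 3.14159\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
first = tokens[0]
print(first.type == tokenize.ENCODING)  # True
print(first.string)                     # utf-8
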
@@ -96,17 +96,17 @@ write back the modified script.
    positions) may change.
 
    It returns bytes, encoded using the ENCODING token, which is the first
-   token sequence output by :func:`tokenize`.
+   token sequence output by :func:`.tokenize`.
 
-:func:`tokenize` needs to detect the encoding of source files it tokenizes. The
+:func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
 function it uses to do this is available:
 
 .. function:: detect_encoding(readline)
 
    The :func:`detect_encoding` function is used to detect the encoding that
    should be used to decode a Python source file. It requires one argument,
-   readline, in the same way as the :func:`tokenize` generator.
+   readline, in the same way as the :func:`.tokenize` generator.
 
    It will call readline a maximum of twice, and return the encoding used
    (as a string) and a list of any lines (not decoded from bytes) it has read
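
The hunk above also touches on detect_encoding() and on the bytes returned by untokenize(). A brief sketch of both (illustrative, not part of this patch):

import io
import tokenize

source = b"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint('hi')\n"

# detect_encoding() takes the same readline callable as tokenize(), calls it
# at most twice, and returns the encoding plus the raw (undecoded) lines read.
encoding, consumed = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # utf-8
print(consumed)  # the bytes lines consumed while detecting

# untokenize() on a full token stream returns bytes, encoded according to the
# leading ENCODING token produced by tokenize().
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
round_trip = tokenize.untokenize(tokens)
print(type(round_trip))  # <class 'bytes'>
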
@@ -120,7 +120,7 @@ function it uses to do this is available:
    If no encoding is specified, then the default of ``'utf-8'`` will be
    returned.
 
-   Use :func:`open` to open Python source files: it uses
+   Use :func:`.open` to open Python source files: it uses
    :func:`detect_encoding` to detect the file encoding.
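
Finally, the last hunk points readers at tokenize.open(); a sketch of its use, assuming a local file named example.py (hypothetical):

import tokenize

# tokenize.open() opens a source file in text mode, using the encoding that
# detect_encoding() finds; "example.py" is a hypothetical local script.
with tokenize.open("example.py") as f:
    print(f.encoding)  # whatever detect_encoding() reported
    text = f.read()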