Commit 2a58b063 authored by Anthony Sottile, committed by Miss Islington (bot)

bpo-5028: Fix up rest of documentation for tokenize documenting line (GH-13686)



https://bugs.python.org/issue5028
parent eea47e09
@@ -39,8 +39,8 @@ The primary entry point is a :term:`generator`:
    column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
    ints specifying the row and column where the token ends in the source; and
    the line on which the token was found. The line passed (the last tuple item)
-   is the *physical* line; continuation lines are included. The 5 tuple is
-   returned as a :term:`named tuple` with the field names:
+   is the *physical* line. The 5 tuple is returned as a :term:`named tuple`
+   with the field names:
    ``type string start end line``.

    The returned :term:`named tuple` has an additional property named
@@ -346,7 +346,7 @@ def generate_tokens(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found. The line passed is the
-    physical line; continuation lines are included.
+    physical line.
     """
     lnum = parenlev = continued = 0
     contstr, needcont = '', 0
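Aside (not part of the commit): a minimal sketch of the behaviour the updated docstring describes, using tokenize.generate_tokens() on a source whose logical line spans two physical lines. Each yielded item is a named tuple with the fields type, string, start, end, line, and its line field holds only the physical line the token was found on.

# Minimal sketch (not part of this commit): generate_tokens() yields named
# tuples whose ``line`` field is the physical line, not the full logical line,
# which is why "continuation lines are included" was dropped from the docs.
import io
import tokenize

source = "total = (1 +\n         2)\n"   # one logical line, two physical lines

for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string), tok.start, tok.end, repr(tok.line))

# The NUMBER token for ``2`` reports line '         2)\n' -- the second
# physical line only.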
@@ -415,7 +415,7 @@ def tokenize(readline):
     column where the token begins in the source; a 2-tuple (erow, ecol) of
     ints specifying the row and column where the token ends in the source;
     and the line on which the token was found. The line passed is the
-    physical line; continuation lines are included.
+    physical line.

     The first token sequence will always be an ENCODING token
     which tells you which encoding was used to decode the bytes stream.
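A second illustrative aside (not part of the commit): tokenize.tokenize() takes a readline callable that returns bytes, and the first token it yields is always an ENCODING token naming the encoding used to decode the stream.

# Minimal sketch (not part of this commit): the bytes-based tokenize.tokenize()
# emits an ENCODING token first, carrying the encoding used to decode the stream.
import io
import tokenize

data = b"x = 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(data).readline))

first = tokens[0]
print(tokenize.tok_name[first.type], first.string)   # -> ENCODING utf-8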