cpython · Commit 37ade9cb
authored Sep 16, 2010 by Raymond Hettinger

Add tokenizer example to regex docs.
parent c1cc0d08

Showing 1 changed file with 63 additions and 0 deletions
Doc/library/re.rst
@@ -1282,3 +1282,66 @@ functionally identical:
<_sre.SRE_Match object at ...>
>>> re.match("\\\\", r"\\")
<_sre.SRE_Match object at ...>

Writing a Tokenizer
^^^^^^^^^^^^^^^^^^^

A `tokenizer or scanner <http://en.wikipedia.org/wiki/Lexical_analysis>`_
analyzes a string to categorize groups of characters. This is a useful first
step in writing a compiler or interpreter.

The text categories are specified with regular expressions. The technique is
to combine those into a single master regular expression and to loop over
successive matches::

    import collections
    import re

    Token = collections.namedtuple('Token', 'typ value line column')

    def tokenize(s):
        tok_spec = [
            ('NUMBER',  r'\d+(\.\d+)?'),  # Integer or decimal number
            ('ASSIGN',  r':='),           # Assignment operator
            ('END',     r';'),            # Statement terminator
            ('ID',      r'[A-Za-z]+'),    # Identifiers
            ('OP',      r'[+\-*/]'),      # Arithmetic operators
            ('NEWLINE', r'\n'),           # Line endings
            ('SKIP',    r'[ \t]'),        # Skip over spaces and tabs
        ]
        tok_re = '|'.join('(?P<%s>%s)' % pair for pair in tok_spec)
        gettok = re.compile(tok_re).match
        line = 1
        pos = line_start = 0
        mo = gettok(s)
        while mo is not None:
            typ = mo.lastgroup
            if typ == 'NEWLINE':
                line_start = mo.end()   # column 0 of the next line
                line += 1
            elif typ != 'SKIP':
                yield Token(typ, mo.group(typ), line, mo.start() - line_start)
            pos = mo.end()
            mo = gettok(s, pos)
        if pos != len(s):
            raise RuntimeError('Unexpected character %r on line %d' % (s[pos], line))

    >>> statements = '''\
            total := total + price * quantity;
            tax := price * 0.05;
    '''
    >>> for token in tokenize(statements):
    ...     print(token)
    ...
    Token(typ='ID', value='total', line=1, column=8)
    Token(typ='ASSIGN', value=':=', line=1, column=14)
    Token(typ='ID', value='total', line=1, column=17)
    Token(typ='OP', value='+', line=1, column=23)
    Token(typ='ID', value='price', line=1, column=25)
    Token(typ='OP', value='*', line=1, column=31)
    Token(typ='ID', value='quantity', line=1, column=33)
    Token(typ='END', value=';', line=1, column=41)
    Token(typ='ID', value='tax', line=2, column=8)
    Token(typ='ASSIGN', value=':=', line=2, column=12)
    Token(typ='ID', value='price', line=2, column=15)
    Token(typ='OP', value='*', line=2, column=21)
    Token(typ='NUMBER', value='0.05', line=2, column=23)
    Token(typ='END', value=';', line=2, column=27)
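
The loop depends on the master pattern wrapping every category in its own
named group: the ``lastgroup`` attribute of a match object reports which
named group, and therefore which token category, actually matched.  A minimal
sketch of just that mechanism (the ``WORD`` and ``NUM`` categories here are
invented for illustration, not part of the example above)::

    >>> import re
    >>> master = re.compile('(?P<WORD>[A-Za-z]+)|(?P<NUM>[0-9]+)')
    >>> master.match('price').lastgroup
    'WORD'
    >>> master.match('42').lastgroup
    'NUM'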