Source code: Lib/tokenize.py
The tokenize module provides a lexical scanner for Python source code, implemented in Python. The scanner in this module returns comments as tokens as well, making it useful for implementing “pretty-printers,” including colorizers for on-screen displays.
To simplify token stream handling, all Operators and Delimiters tokens are returned using the generic token.OP token type. The exact type can be determined by checking the exact_type property on the named tuple returned from tokenize.tokenize().
The primary entry point is a generator:
tokenize.tokenize(readline)

The tokenize() generator requires one argument, readline, which must be a callable object that provides the same interface as the io.IOBase.readline() method of file objects. Each call to the function should return one line of input as bytes.
The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple (srow, scol) of ints specifying the row and column where the token begins in the source; a 2-tuple (erow, ecol) of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the physical line. The 5-tuple is returned as a named tuple with the field names: type string start end line.
The returned named tuple has an additional property named exact_type that contains the exact operator type for token.OP tokens. For all other token types exact_type equals the named tuple type field.
Changed in version 3.1: Added support for named tuples.

Changed in version 3.3: Added support for exact_type.
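For instance, a loop like the following (a minimal sketch; the sample source string is an assumption chosen only for illustration) prints each token's name and position, and resolves the exact type of any OP tokens:

from io import BytesIO
from tokenize import tokenize, tok_name, OP

source = b"x = [1, 2]\n"
for tok in tokenize(BytesIO(source).readline):
    # Each item is a named tuple: type, string, start, end, line.
    print(tok_name[tok.type], repr(tok.string), tok.start, tok.end)
    if tok.type == OP:
        # exact_type resolves the generic OP token to a specific
        # operator/delimiter type such as EQUAL, LSQB, COMMA or RSQB.
        print("   exact_type:", tok_name[tok.exact_type])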
tokenize() determines the source encoding of the file by looking for a UTF-8 BOM or encoding cookie, according to PEP 263.
All constants from the token module are also exported from tokenize, as are three additional token type values:
tokenize.COMMENT

Token value used to indicate a comment.

tokenize.NL

Token value used to indicate a non-terminating newline. The NEWLINE token indicates the end of a logical line of Python code; NL tokens are generated when a logical line of code is continued over multiple physical lines.

tokenize.ENCODING

Token value that indicates the encoding used to decode the source bytes into text. The first token returned by tokenize() will always be an ENCODING token.
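As a small illustration of these extra token types (the two-line sample source, containing an implicit continuation and a comment, is an assumption), the sketch below shows that ENCODING always comes first and that the line break inside the parentheses produces NL rather than NEWLINE:

from io import BytesIO
from tokenize import tokenize, tok_name, COMMENT, NL, NEWLINE, ENCODING

source = b"total = (1 +\n         2)  # sum\n"
tokens = list(tokenize(BytesIO(source).readline))
assert tokens[0].type == ENCODING   # the ENCODING token always comes first
for tok in tokens:
    if tok.type in (COMMENT, NL, NEWLINE):
        print(tok_name[tok.type], repr(tok.string))

The break after '+' falls inside an open parenthesis, so it is reported as NL; NEWLINE appears only where the logical line ends.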
Another function is provided to reverse the tokenization process. This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script.
tokenize.untokenize(iterable)

Converts tokens back into Python source code. The iterable must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored.
The reconstructed script is returned as a single string. The result is guaranteed to tokenize back to match the input so that the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string as the spacing between tokens (column positions) may change.
It returns bytes, encoded using the ENCODING token, which is the first token sequence output by tokenize().
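A minimal round-trip sketch (the sample source is an assumption) that tokenizes a script and rebuilds it might look like this:

from io import BytesIO
from tokenize import tokenize, untokenize

source = b"x = 1\nprint(x)\n"
tokens = list(tokenize(BytesIO(source).readline))
rebuilt = untokenize(tokens)   # bytes, encoded per the ENCODING token
# The token types and strings survive the round trip, even though
# spacing between tokens may differ in general.
assert [(t.type, t.string) for t in tokenize(BytesIO(rebuilt).readline)] == \
       [(t.type, t.string) for t in tokens]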
tokenize() needs to detect the encoding of source files it tokenizes. The function it uses to do this is available:
tokenize.detect_encoding(readline)

The detect_encoding() function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as the tokenize() generator.
It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in.
It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in PEP 263. If both a BOM and a cookie are present, but disagree, a SyntaxError will be raised. Note that if the BOM is found, 'utf-8-sig' will be returned as an encoding.
If no encoding is specified, then the default of 'utf-8' will be returned.
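For example (a sketch; the in-memory source with an explicit PEP 263 cookie is an assumption), detect_encoding() can be driven from a BytesIO object just like tokenize():

from io import BytesIO
from tokenize import detect_encoding

source = b"# -*- coding: latin-1 -*-\nname = 'caf\xe9'\n"
encoding, lines = detect_encoding(BytesIO(source).readline)
print(encoding)   # 'iso-8859-1', normalized from the 'latin-1' cookie
print(lines)      # the raw, undecoded line(s) read to find the cookie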
Use open() to open Python source files: it uses detect_encoding() to detect the file encoding.
tokenize.open(filename)

Open a file in read-only mode using the encoding detected by detect_encoding().
New in version 3.2.
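A short sketch of using it ('hello.py' is a placeholder filename, not part of the library):

import tokenize

with tokenize.open('hello.py') as f:
    # f is a read-only text stream, already decoded using the
    # encoding reported by detect_encoding().
    print(f.encoding)
    print(f.read())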
New in version 3.3.
The tokenize module can be executed as a script from the command line. It is as simple as:
python -m tokenize [-e] [filename.py]
The following options are accepted:
-h, --help

show this help message and exit

-e, --exact

display token names using the exact type
If filename.py is specified, its contents are tokenized to stdout. Otherwise, tokenization is performed on stdin.
Example of a script rewriter that transforms float literals into Decimal objects:
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print(+21.3e-5*-.1234/81.7)'
    >>> decistmt(s)
    "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

    The format of the exponent is inherited from the platform C library.
    Known cases are "e-007" (Windows) and "e-07" (not Windows).  Since
    we're only showing 12 digits, and the 13th isn't close to 5, the
    rest of the output should be platform-independent.

    >>> exec(s)  #doctest: +ELLIPSIS
    -3.21716034272e-0...7

    Output from calculations with Decimal should be identical across all
    platforms.

    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7
    """
    result = []
    g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result).decode('utf-8')
Example of tokenizing from the command line. The script:
def say_hello():
    print("Hello, World!")

say_hello()
will be tokenized to the following output where the first column is the range of the line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any):
$ python -m tokenize hello.py
0,0-0,0:            ENCODING       'utf-8'
1,0-1,3:            NAME           'def'
1,4-1,13:           NAME           'say_hello'
1,13-1,14:          OP             '('
1,14-1,15:          OP             ')'
1,15-1,16:          OP             ':'
1,16-1,17:          NEWLINE        '\n'
2,0-2,4:            INDENT         '    '
2,4-2,9:            NAME           'print'
2,9-2,10:           OP             '('
2,10-2,25:          STRING         '"Hello, World!"'
2,25-2,26:          OP             ')'
2,26-2,27:          NEWLINE        '\n'
3,0-3,1:            NL             '\n'
4,0-4,0:            DEDENT         ''
4,0-4,9:            NAME           'say_hello'
4,9-4,10:           OP             '('
4,10-4,11:          OP             ')'
4,11-4,12:          NEWLINE        '\n'
5,0-5,0:            ENDMARKER      ''
The exact token type names can be displayed using the -e option:
$ python -m tokenize -e hello.py
0,0-0,0:            ENCODING       'utf-8'
1,0-1,3:            NAME           'def'
1,4-1,13:           NAME           'say_hello'
1,13-1,14:          LPAR           '('
1,14-1,15:          RPAR           ')'
1,15-1,16:          COLON          ':'
1,16-1,17:          NEWLINE        '\n'
2,0-2,4:            INDENT         '    '
2,4-2,9:            NAME           'print'
2,9-2,10:           LPAR           '('
2,10-2,25:          STRING         '"Hello, World!"'
2,25-2,26:          RPAR           ')'
2,26-2,27:          NEWLINE        '\n'
3,0-3,1:            NL             '\n'
4,0-4,0:            DEDENT         ''
4,0-4,9:            NAME           'say_hello'
4,9-4,10:           LPAR           '('
4,10-4,11:          RPAR           ')'
4,11-4,12:          NEWLINE        '\n'
5,0-5,0:            ENDMARKER      ''