robot.parsing.lexer package
Submodules
robot.parsing.lexer.blocklexers module
- class robot.parsing.lexer.blocklexers.BlockLexer(ctx: LexingContext)[source]
Bases:
Lexer, ABC
- class robot.parsing.lexer.blocklexers.FileLexer(ctx: LexingContext)[source]
Bases:
BlockLexer
- class robot.parsing.lexer.blocklexers.SectionLexer(ctx: LexingContext)[source]
Bases:
BlockLexer, ABC
- ctx: FileContext
- class robot.parsing.lexer.blocklexers.SettingSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.VariableSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TestCaseSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TaskSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.KeywordSectionLexer(ctx: LexingContext)[source]
Bases:
SettingSectionLexer
- class robot.parsing.lexer.blocklexers.CommentSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.ImplicitCommentSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.InvalidSectionLexer(ctx: LexingContext)[source]
Bases:
SectionLexer
- class robot.parsing.lexer.blocklexers.TestOrKeywordLexer(ctx: LexingContext)[source]
Bases:
BlockLexer, ABC
- name_type: str
- class robot.parsing.lexer.blocklexers.TestCaseLexer(ctx: SuiteFileContext)[source]
Bases:
TestOrKeywordLexer
- name_type: str = 'TESTCASE NAME'
- class robot.parsing.lexer.blocklexers.KeywordLexer(ctx: FileContext)[source]
Bases:
TestOrKeywordLexer
- name_type: str = 'KEYWORD NAME'
- class robot.parsing.lexer.blocklexers.NestedBlockLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
BlockLexer, ABC
- class robot.parsing.lexer.blocklexers.ForLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.WhileLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.TryLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.GroupLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
- class robot.parsing.lexer.blocklexers.IfLexer(ctx: TestCaseContext | KeywordContext)[source]
Bases:
NestedBlockLexer
robot.parsing.lexer.context module
- class robot.parsing.lexer.context.LexingContext(settings: Settings, languages: Languages)[source]
Bases:
object
- class robot.parsing.lexer.context.FileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
LexingContext
- settings: FileSettings
- keyword_context() → KeywordContext[source]
- class robot.parsing.lexer.context.SuiteFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: SuiteFileSettings
- test_case_context() → TestCaseContext[source]
- class robot.parsing.lexer.context.ResourceFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: ResourceFileSettings
- class robot.parsing.lexer.context.InitFileContext(lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None)[source]
Bases:
FileContext
- settings: InitFileSettings
- class robot.parsing.lexer.context.TestCaseContext(settings: TestCaseSettings)[source]
Bases:
LexingContext
- settings: TestCaseSettings
- property template_set: bool
- class robot.parsing.lexer.context.KeywordContext(settings: KeywordSettings)[source]
Bases:
LexingContext
- settings: KeywordSettings
- property template_set: bool
robot.parsing.lexer.lexer module
- robot.parsing.lexer.lexer.get_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token][source]
Parses the given source to tokens.
- Parameters:
source – The source where to read the data. Can be a path to a source file as a string or as a pathlib.Path object, an already opened file object, or Unicode text containing the data directly. Source files must be UTF-8 encoded.
data_only – When False (default), returns all tokens. When set to True, omits separators, comments, continuation markers, and other non-data tokens.
tokenize_variables – When True, possible variables in keyword arguments and elsewhere are tokenized. See the tokenize_variables() method for details.
lang – Additional languages to be supported during parsing. Can be a string matching any of the supported language codes or names, an initialized Language subclass, a list containing such strings or instances, or a Languages instance.
Returns a generator that yields Token instances.
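For example, a minimal sketch that lexes inline suite data and prints only the data-bearing tokens (the suite text itself is illustrative):

    # Lex inline suite data and print the data tokens with their positions.
    from robot.parsing.lexer.lexer import get_tokens

    data = """\
    *** Test Cases ***
    Example
        Log    Hello, world!
    """

    for token in get_tokens(data, data_only=True):
        print(f"{token.lineno}:{token.col_offset} {token.type:<16} {token.value!r}")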
- robot.parsing.lexer.lexer.get_resource_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token][source]
Parses the given source to resource file tokens.
Same as get_tokens() otherwise, but the source is considered to be a resource file. This affects, for example, what settings are valid.
- robot.parsing.lexer.lexer.get_init_tokens(source: Path | str | TextIO, data_only: bool = False, tokenize_variables: bool = False, lang: Languages | Language | str | Path | Iterable[Language | str | Path] | None = None) → Iterator[Token][source]
Parses the given source to init file tokens.
Same as get_tokens() otherwise, but the source is considered to be a suite initialization file. This affects, for example, what settings are valid.
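The same pattern works for resource and init files. A minimal sketch with illustrative inline resource data:

    # Lex inline resource file data. Settings such as Library and Keyword Tags
    # are valid here; suite-file-only settings are not.
    from robot.parsing.lexer.lexer import get_resource_tokens

    resource = """\
    *** Settings ***
    Library    Collections

    *** Keywords ***
    Example Keyword
        Log    message=Hello
    """

    for token in get_resource_tokens(resource, data_only=True):
        print(token.type, token.value)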
robot.parsing.lexer.settings module
- class robot.parsing.lexer.settings.Settings(languages: Languages)[source]
Bases:
ABC
- names: tuple[str, ...] = ()
- aliases: dict[str, str] = {}
- multi_use = ('Metadata', 'Library', 'Resource', 'Variables')
- single_value = ('Resource', 'Test Timeout', 'Test Template', 'Timeout', 'Template', 'Name')
- name_and_arguments = ('Metadata', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Template', 'Setup', 'Teardown', 'Template', 'Resource', 'Variables')
- name_arguments_and_with_name = ('Library',)
- class robot.parsing.lexer.settings.SuiteFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Metadata', 'Name', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Template', 'Test Timeout', 'Test Tags', 'Default Tags', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- aliases: dict[str, str] = {'Force Tags': 'Test Tags', 'Task Setup': 'Test Setup', 'Task Tags': 'Test Tags', 'Task Teardown': 'Test Teardown', 'Task Template': 'Test Template', 'Task Timeout': 'Test Timeout'}
- class robot.parsing.lexer.settings.InitFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Metadata', 'Name', 'Suite Setup', 'Suite Teardown', 'Test Setup', 'Test Teardown', 'Test Timeout', 'Test Tags', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- aliases: dict[str, str] = {'Force Tags': 'Test Tags', 'Task Setup': 'Test Setup', 'Task Tags': 'Test Tags', 'Task Teardown': 'Test Teardown', 'Task Timeout': 'Test Timeout'}
- class robot.parsing.lexer.settings.ResourceFileSettings(languages: Languages)[source]
Bases:
FileSettings
- names: tuple[str, ...] = ('Documentation', 'Keyword Tags', 'Library', 'Resource', 'Variables')
- class robot.parsing.lexer.settings.TestCaseSettings(parent: SuiteFileSettings)[source]
Bases:
Settings
- names: tuple[str, ...] = ('Documentation', 'Tags', 'Setup', 'Teardown', 'Template', 'Timeout')
- property template_set: bool
- class robot.parsing.lexer.settings.KeywordSettings(parent: FileSettings)[source]
Bases:
Settings
- names: tuple[str, ...] = ('Documentation', 'Arguments', 'Setup', 'Teardown', 'Timeout', 'Tags', 'Return')
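Because the names tuples above are class attributes, a quick way to see how the file types differ is to compare them directly. A minimal sketch:

    # Compare which settings the lexer accepts in suite files versus resource
    # files, using the class-level `names` tuples listed above.
    from robot.parsing.lexer.settings import ResourceFileSettings, SuiteFileSettings

    suite_only = set(SuiteFileSettings.names) - set(ResourceFileSettings.names)
    print(sorted(suite_only))  # 'Default Tags', 'Metadata', 'Name', 'Suite Setup', ...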
robot.parsing.lexer.statementlexers module
- class robot.parsing.lexer.statementlexers.Lexer(ctx: LexingContext)[source]
Bases:
ABC
- class robot.parsing.lexer.statementlexers.StatementLexer(ctx: LexingContext)[source]
Bases:
Lexer, ABC
- token_type: str
- class robot.parsing.lexer.statementlexers.SingleType(ctx: LexingContext)[source]
Bases:
StatementLexer, ABC
- class robot.parsing.lexer.statementlexers.TypeAndArguments(ctx: LexingContext)[source]
Bases:
StatementLexer, ABC
- class robot.parsing.lexer.statementlexers.SectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SingleType, ABC
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.SettingSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'SETTING HEADER'
- class robot.parsing.lexer.statementlexers.VariableSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'VARIABLE HEADER'
- class robot.parsing.lexer.statementlexers.TestCaseSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'TESTCASE HEADER'
- class robot.parsing.lexer.statementlexers.TaskSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'TASK HEADER'
- class robot.parsing.lexer.statementlexers.KeywordSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'KEYWORD HEADER'
- class robot.parsing.lexer.statementlexers.CommentSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'COMMENT HEADER'
- class robot.parsing.lexer.statementlexers.InvalidSectionHeaderLexer(ctx: LexingContext)[source]
Bases:
SectionHeaderLexer
- token_type: str = 'INVALID HEADER'
- class robot.parsing.lexer.statementlexers.CommentLexer(ctx: LexingContext)[source]
Bases:
SingleType
- token_type: str = 'COMMENT'
- class robot.parsing.lexer.statementlexers.ImplicitCommentLexer(ctx: LexingContext)[source]
Bases:
CommentLexer
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.SettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: FileContext
- class robot.parsing.lexer.statementlexers.TestCaseSettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: TestCaseContext
- class robot.parsing.lexer.statementlexers.KeywordSettingLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- ctx: KeywordContext
- class robot.parsing.lexer.statementlexers.VariableLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- ctx: FileContext
- token_type: str = 'VARIABLE'
- class robot.parsing.lexer.statementlexers.KeywordCallLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- class robot.parsing.lexer.statementlexers.ForHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- separators = ('IN', 'IN RANGE', 'IN ENUMERATE', 'IN ZIP')
- class robot.parsing.lexer.statementlexers.IfHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'IF'
- class robot.parsing.lexer.statementlexers.InlineIfHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'INLINE IF'
- class robot.parsing.lexer.statementlexers.ElseIfHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ELSE IF'
- class robot.parsing.lexer.statementlexers.ElseHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ELSE'
- class robot.parsing.lexer.statementlexers.TryHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'TRY'
- class robot.parsing.lexer.statementlexers.ExceptHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'EXCEPT'
- class robot.parsing.lexer.statementlexers.FinallyHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'FINALLY'
- class robot.parsing.lexer.statementlexers.WhileHeaderLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'WHILE'
- class robot.parsing.lexer.statementlexers.GroupHeaderLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'GROUP'
- class robot.parsing.lexer.statementlexers.EndLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'END'
- class robot.parsing.lexer.statementlexers.VarLexer(ctx: LexingContext)[source]
Bases:
StatementLexer
- token_type: str = 'VAR'
- class robot.parsing.lexer.statementlexers.ReturnLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'RETURN STATEMENT'
- class robot.parsing.lexer.statementlexers.ContinueLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'CONTINUE'
- class robot.parsing.lexer.statementlexers.BreakLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'BREAK'
- class robot.parsing.lexer.statementlexers.SyntaxErrorLexer(ctx: LexingContext)[source]
Bases:
TypeAndArguments
- token_type: str = 'ERROR'
robot.parsing.lexer.tokenizer module
robot.parsing.lexer.tokens module
- class robot.parsing.lexer.tokens.Token(type: str | None = None, value: str | None = None, lineno: int = -1, col_offset: int = -1, error: str | None = None)[source]
Bases:
object
Token representing a piece of Robot Framework data.
Each token has type, value, line number, column offset and end column offset in type, value, lineno, col_offset and end_col_offset attributes, respectively. Tokens representing an error also have their error message in the error attribute.
Token types are declared as class attributes such as SETTING_HEADER and EOL. Values of these constants have changed slightly in Robot Framework 4.0, and they may change again in the future. It is thus safer to use the constants, not their values, when types are needed. For example, use Token(Token.EOL) instead of Token('EOL') and token.type == Token.EOL instead of token.type == 'EOL'. A short usage sketch follows this class entry.
If value is not given and type is a special marker like IF or EOL, the value is set automatically.
- SETTING_HEADER = 'SETTING HEADER'
- VARIABLE_HEADER = 'VARIABLE HEADER'
- TESTCASE_HEADER = 'TESTCASE HEADER'
- TASK_HEADER = 'TASK HEADER'
- KEYWORD_HEADER = 'KEYWORD HEADER'
- COMMENT_HEADER = 'COMMENT HEADER'
- INVALID_HEADER = 'INVALID HEADER'
- FATAL_INVALID_HEADER = 'FATAL INVALID HEADER'
- TESTCASE_NAME = 'TESTCASE NAME'
- KEYWORD_NAME = 'KEYWORD NAME'
- SUITE_NAME = 'SUITE NAME'
- DOCUMENTATION = 'DOCUMENTATION'
- SUITE_SETUP = 'SUITE SETUP'
- SUITE_TEARDOWN = 'SUITE TEARDOWN'
- METADATA = 'METADATA'
- TEST_SETUP = 'TEST SETUP'
- TEST_TEARDOWN = 'TEST TEARDOWN'
- TEST_TEMPLATE = 'TEST TEMPLATE'
- TEST_TIMEOUT = 'TEST TIMEOUT'
- TEST_TAGS = 'TEST TAGS'
- FORCE_TAGS = 'TEST TAGS'
- DEFAULT_TAGS = 'DEFAULT TAGS'
- KEYWORD_TAGS = 'KEYWORD TAGS'
- LIBRARY = 'LIBRARY'
- RESOURCE = 'RESOURCE'
- VARIABLES = 'VARIABLES'
- SETUP = 'SETUP'
- TEARDOWN = 'TEARDOWN'
- TEMPLATE = 'TEMPLATE'
- TIMEOUT = 'TIMEOUT'
- TAGS = 'TAGS'
- ARGUMENTS = 'ARGUMENTS'
- RETURN = 'RETURN'
- RETURN_SETTING = 'RETURN'
- AS = 'AS'
- WITH_NAME = 'AS'
- NAME = 'NAME'
- VARIABLE = 'VARIABLE'
- ARGUMENT = 'ARGUMENT'
- ASSIGN = 'ASSIGN'
- KEYWORD = 'KEYWORD'
- FOR = 'FOR'
- FOR_SEPARATOR = 'FOR SEPARATOR'
- END = 'END'
- IF = 'IF'
- INLINE_IF = 'INLINE IF'
- ELSE_IF = 'ELSE IF'
- ELSE = 'ELSE'
- TRY = 'TRY'
- EXCEPT = 'EXCEPT'
- FINALLY = 'FINALLY'
- WHILE = 'WHILE'
- VAR = 'VAR'
- RETURN_STATEMENT = 'RETURN STATEMENT'
- CONTINUE = 'CONTINUE'
- BREAK = 'BREAK'
- OPTION = 'OPTION'
- GROUP = 'GROUP'
- SEPARATOR = 'SEPARATOR'
- COMMENT = 'COMMENT'
- CONTINUATION = 'CONTINUATION'
- CONFIG = 'CONFIG'
- EOL = 'EOL'
- EOS = 'EOS'
- ERROR = 'ERROR'
- FATAL_ERROR = 'FATAL ERROR'
- NON_DATA_TOKENS = {'COMMENT', 'CONTINUATION', 'EOL', 'EOS', 'SEPARATOR'}
- SETTING_TOKENS = {'ARGUMENTS', 'DEFAULT TAGS', 'DOCUMENTATION', 'KEYWORD TAGS', 'LIBRARY', 'METADATA', 'RESOURCE', 'RETURN', 'SETUP', 'SUITE NAME', 'SUITE SETUP', 'SUITE TEARDOWN', 'TAGS', 'TEARDOWN', 'TEMPLATE', 'TEST SETUP', 'TEST TAGS', 'TEST TEARDOWN', 'TEST TEMPLATE', 'TEST TIMEOUT', 'TIMEOUT', 'VARIABLES'}
- HEADER_TOKENS = {'COMMENT HEADER', 'INVALID HEADER', 'KEYWORD HEADER', 'SETTING HEADER', 'TASK HEADER', 'TESTCASE HEADER', 'VARIABLE HEADER'}
- ALLOW_VARIABLES = {'ARGUMENT', 'KEYWORD NAME', 'NAME', 'TESTCASE NAME'}
- type
- value
- lineno
- col_offset
- error
- property end_col_offset: int
- tokenize_variables() → Iterator[Token][source]
Tokenizes possible variables in token value.
Yields the token itself if the token does not allow variables (see Token.ALLOW_VARIABLES) or its value does not contain variables. Otherwise, yields variable tokens as well as tokens before, after, or between variables so that they have the same type as the original token.
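A minimal sketch tying these points together: comparing token types via the class constants rather than raw strings, and expanding variables inside an argument token (the inline data is illustrative):

    # Prefer Token constants over string literals when checking types, and
    # expand ${variables} inside an ARGUMENT token.
    from robot.parsing.lexer.lexer import get_tokens
    from robot.parsing.lexer.tokens import Token

    data = "*** Test Cases ***\nExample\n    Log    Hello ${name}!\n"

    for token in get_tokens(data, data_only=True):
        if token.type == Token.ARGUMENT:        # rather than token.type == 'ARGUMENT'
            for sub in token.tokenize_variables():
                # Yields ARGUMENT 'Hello ', VARIABLE '${name}', ARGUMENT '!'
                print(sub.type, repr(sub.value))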
- class robot.parsing.lexer.tokens.EOS(lineno: int = -1, col_offset: int = -1)[source]
Bases:
Token
Token representing the end of a statement.