Notice: The highest tagged major version is v4.

antlr

package module
v0.0.0-...-98b5237 Latest
Warning

This package is not in the latest version of its module.

Go to latest
Published: May 18, 2023 | License: BSD-3-Clause | Imports: 13 | Imported by: 5

Details

Repository

github.com/antlr4-go/antlr

Links

README


ANTLR4 Go Runtime Module Repo

IMPORTANT: Please submit PRs via a clone of the https://github.com/antlr/antlr4 repo, and not here.

  • Do not submit PRs or any change requests to this repo
  • This repo is read-only and is updated by the ANTLR team to create a new release of the Go Runtime for ANTLR
  • This repo contains the Go runtime that your generated projects should import

Introduction

This repo contains the official module for the Go Runtime for ANTLR. It is a copy of the runtime maintained at https://github.com/antlr/antlr4/tree/master/runtime/Go/antlr and is automatically updated by the ANTLR team to create the official Go runtime releases only. No development work is carried out in this repo and PRs are not accepted here.

The dev branch of this repo is kept in sync with the dev branch of the main ANTLR repo and is updated periodically.

Why?

The go get command is unable to retrieve the Go runtime when it is embedded so deeply in the main repo. A go get against the antlr/antlr4 repo, while retrieving the correct source code for the runtime, does not correctly resolve tags and will create a reference in your go.mod file that is unclear, will not upgrade smoothly, and causes confusion.

For instance, the current Go runtime release, which is tagged with v4.12.0 in antlr/antlr4, is retrieved by go get as:

require (
    github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230219212500-1f9a474cc2dc
)

Where you would expect to see something like:

require (
    github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.12.0
)

The decision was taken to create a separate org and repo to hold the official Go runtime for ANTLR, from which users can expect go get to behave as expected.
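For comparison, here is what a go.mod that imports the runtime from this dedicated module looks like; the module name and version below are illustrative, not prescribed:

```
module myproject

go 1.20

require github.com/antlr4-go/antlr/v4 v4.13.0
```

A go get github.com/antlr4-go/antlr/v4 against this module resolves to a clean semantic version like the one shown, rather than a pseudo-version.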

Documentation

Please read the official documentation at https://github.com/antlr/antlr4/blob/master/doc/index.md for tips on migrating existing projects to use the new module location and for information on how to use the Go runtime in general.

Documentation

Overview

Package antlr implements the Go version of the ANTLR 4 runtime.

The ANTLR Tool

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest.

Code Generation

ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library written specifically to support the generated code in the target language. This library is the runtime for the Go target.

To generate code for the Go target, it is generally recommended to place the source grammar files in a package of their own and to use the `.sh` script method of generating code, via the go generate directive. In that same directory it is usual, though not required, to place the ANTLR tool that should be used to generate the code. That does mean that the ANTLR tool JAR file will be checked into your source code control, though, so you are free to use any other way of specifying the version of the ANTLR tool to use, such as aliasing in `.zshrc` or equivalent, a profile in your IDE, or configuration in your CI system.

Here is a general template for an ANTLR based recognizer in Go:

.
├── myproject
├── parser
│     ├── mygrammar.g4
│     ├── antlr-4.12.0-complete.jar
│     ├── error_listeners.go
│     ├── generate.go
│     ├── generate.sh
├── go.mod
├── go.sum
├── main.go
└── main_test.go

Make sure that the package statement in your grammar file(s) reflects the Go package they exist in. The generate.go file then looks like this:

package parser

//go:generate ./generate.sh

And the generate.sh file will look similar to this:

#!/bin/sh
alias antlr4='java -Xmx500M -cp "./antlr4-4.12.0-complete.jar:$CLASSPATH" org.antlr.v4.Tool'
antlr4 -Dlanguage=Go -no-visitor -package parser *.g4

depending on whether you want visitors or listeners or any other ANTLR options.

From the command line at the root of your package “myproject” you can then simply issue the command:

go generate ./...

Copyright Notice

Copyright (c) 2012-2022 The ANTLR Project. All rights reserved.

Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.

Index

Constants

const (
	ATNStateInvalidType    = 0
	ATNStateBasic          = 1
	ATNStateRuleStart      = 2
	ATNStateBlockStart     = 3
	ATNStatePlusBlockStart = 4
	ATNStateStarBlockStart = 5
	ATNStateTokenStart     = 6
	ATNStateRuleStop       = 7
	ATNStateBlockEnd       = 8
	ATNStateStarLoopBack   = 9
	ATNStateStarLoopEntry  = 10
	ATNStatePlusLoopBack   = 11
	ATNStateLoopEnd        = 12

	ATNStateInvalidStateNumber = -1
)

Constants for serialization.

const (
	ATNTypeLexer  = 0
	ATNTypeParser = 1
)

Represent the type of recognizer an ATN applies to.

const (
	LexerDefaultMode = 0
	LexerMore        = -2
	LexerSkip        = -3
)

const (
	LexerDefaultTokenChannel = TokenDefaultChannel
	LexerHidden              = TokenHiddenChannel
	LexerMinCharValue        = 0x0000
	LexerMaxCharValue        = 0x10FFFF
)

const (
	LexerActionTypeChannel  = 0 // The type of a LexerChannelAction action
	LexerActionTypeCustom   = 1 // The type of a LexerCustomAction action
	LexerActionTypeMode     = 2 // The type of a LexerModeAction action
	LexerActionTypeMore     = 3 // The type of a LexerMoreAction action
	LexerActionTypePopMode  = 4 // The type of a LexerPopModeAction action
	LexerActionTypePushMode = 5 // The type of a LexerPushModeAction action
	LexerActionTypeSkip     = 6 // The type of a LexerSkipAction action
	LexerActionTypeType     = 7 // The type of a LexerTypeAction action
)
const (
	// PredictionModeSLL is the SLL(*) prediction mode. This prediction mode
	// ignores the current parser context when making predictions. This is the
	// fastest prediction mode, and it provides correct results for many
	// grammars. This prediction mode is more powerful than the prediction mode
	// provided by ANTLR 3, but may result in syntax errors for grammar and
	// input combinations that are not SLL.
	//
	// When using this prediction mode, the parser will either return a correct
	// parse tree (i.e. the same parse tree that would be returned with the LL
	// prediction mode), or it will report a syntax error. If a syntax error is
	// encountered when using the SLL prediction mode, it may be due to either
	// an actual syntax error in the input or indicate that the particular
	// combination of grammar and input requires the more powerful LL prediction
	// abilities to complete successfully.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeSLL = 0

	// PredictionModeLL is the LL(*) prediction mode. This prediction mode
	// allows the current parser context to be used for resolving SLL conflicts
	// that occur during prediction. This is the fastest prediction mode that
	// guarantees correct parse results for all combinations of grammars with
	// syntactically correct inputs.
	//
	// When using this prediction mode, the parser will make correct decisions
	// for all syntactically-correct grammar and input combinations. However,
	// in cases where the grammar is truly ambiguous this prediction mode might
	// not report a precise answer for exactly which alternatives are
	// ambiguous.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLL = 1

	// PredictionModeLLExactAmbigDetection is the LL(*) prediction mode with
	// exact ambiguity detection. In addition to the correctness guarantees
	// provided by the LL prediction mode, this prediction mode instructs the
	// prediction algorithm to determine the complete and exact set of
	// ambiguous alternatives for every ambiguous decision encountered while
	// parsing.
	//
	// This prediction mode may be used for diagnosing ambiguities during
	// grammar development. Due to the performance overhead of calculating sets
	// of ambiguous alternatives, this prediction mode should be avoided when
	// the exact results are not necessary.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLLExactAmbigDetection = 2
)
const (
	TokenInvalidType = 0

	// TokenEpsilon: during lookahead operations, this "token" signifies that we
	// hit a rule-end ATN state and did not follow it despite needing to.
	TokenEpsilon = -2

	TokenMinUserTokenType = 1
	TokenEOF              = -1
	TokenDefaultChannel   = 0
	TokenHiddenChannel    = 1
)
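To illustrate how these sentinel values are typically tested, here is a stdlib-only sketch; the constants are copied from the listing above rather than imported from the runtime, and classifyTokenType is a hypothetical helper, not part of the ANTLR API:

```go
package main

import "fmt"

// Values mirror the token constants documented above; in real code you would
// use antlr.TokenEOF etc. from the runtime package.
const (
	TokenInvalidType      = 0
	TokenEpsilon          = -2
	TokenMinUserTokenType = 1
	TokenEOF              = -1
)

// classifyTokenType reports whether a token type is one of the sentinel
// values or an ordinary user-defined token type.
func classifyTokenType(t int) string {
	switch {
	case t == TokenEOF:
		return "EOF"
	case t == TokenEpsilon:
		return "epsilon"
	case t == TokenInvalidType:
		return "invalid"
	case t >= TokenMinUserTokenType:
		return "user token"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(classifyTokenType(-1)) // EOF
	fmt.Println(classifyTokenType(5))  // user token
}
```

Note that the sentinels are deliberately outside the user token range, which starts at TokenMinUserTokenType (1), so the checks cannot collide with generated token types.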
const (
	Default_Program_Name = "default"
	Program_Init_Size    = 100
	Min_Token_Index      = 0
)

const (
	TransitionEPSILON    = 1
	TransitionRANGE      = 2
	TransitionRULE       = 3
	TransitionPREDICATE  = 4 // e.g., {isType(input.LT(1))}?
	TransitionATOM       = 5
	TransitionACTION     = 6
	TransitionSET        = 7 // ~(A|B) or ~atom, wildcard, which convert to next 2
	TransitionNOTSET     = 8
	TransitionWILDCARD   = 9
	TransitionPRECEDENCE = 10
)

const (
	BasePredictionContextEmptyReturnState = 0x7FFFFFFF
)

Represents $ in local context prediction, which means wildcard: $ + x = $.

const (
	LL1AnalyzerHitPred = TokenInvalidType
)

LL1AnalyzerHitPred is a special value added to the lookahead sets to indicate that we hit a predicate during analysis if seeThruPreds==false.

Variables

var (
	LexerATNSimulatorDebug      = false
	LexerATNSimulatorDFADebug   = false
	LexerATNSimulatorMinDFAEdge = 0
	LexerATNSimulatorMaxDFAEdge = 127 // forces unicode to stay in ATN
	LexerATNSimulatorMatchCalls = 0
)

var (
	ParserATNSimulatorDebug       = false
	ParserATNSimulatorTraceATNSim = false
	ParserATNSimulatorDFADebug    = false
	ParserATNSimulatorRetryDebug  = false
	TurnOffLRLoopEntryBranchOpt   = false
)

var (
	BasePredictionContextglobalNodeCount = 1
	BasePredictionContextid              = BasePredictionContextglobalNodeCount
)

var ATNInvalidAltNumber int

ATNInvalidAltNumber is used to represent an ALT number that has yet to be calculated or which is invalid for a particular struct such as *antlr.BaseRuleContext.

var ATNSimulatorError = NewDFAState(0x7FFFFFFF, NewBaseATNConfigSet(false))

var ATNStateInitialNumTransitions = 4

var BasePredictionContextEMPTY = NewEmptyPredictionContext()

var CommonTokenFactoryDEFAULT = NewCommonTokenFactory(false)

CommonTokenFactoryDEFAULT is the default CommonTokenFactory. It does not explicitly copy token text when constructing tokens.

var ConsoleErrorListenerINSTANCE = NewConsoleErrorListener()

Provides a default instance of ConsoleErrorListener.

var ErrEmptyStack = errors.New("Stack is empty")

var LexerMoreActionINSTANCE = NewLexerMoreAction()

var LexerPopModeActionINSTANCE = NewLexerPopModeAction()

var LexerSkipActionINSTANCE = NewLexerSkipAction()

Provides a singleton instance of this parameterless lexer action.

var ParseTreeWalkerDefault = NewParseTreeWalker()

var ParserRuleContextEmpty = NewBaseParserRuleContext(nil, -1)

var SemanticContextNone = NewPredicate(-1, -1, false)

var TransitionserializationNames = []string{
	"INVALID", "EPSILON", "RANGE", "RULE", "PREDICATE", "ATOM",
	"ACTION", "SET", "NOT_SET", "WILDCARD", "PRECEDENCE",
}

var TreeInvalidInterval = NewInterval(-1, -2)

Functions

func EscapeWhitespace

func EscapeWhitespace(s string, escapeSpaces bool) string

func PredictionModeallConfigsInRuleStopStates

func PredictionModeallConfigsInRuleStopStates(configs ATNConfigSet) bool

Checks if all configurations in configs are in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).

@param configs the configuration set to test
@return true if all configurations in configs are in a RuleStopState, otherwise false

func PredictionModeallSubsetsConflict

func PredictionModeallSubsetsConflict(altsets []*BitSet) bool

Determines if every alternative subset in altsets contains more than one alternative.

@param altsets a collection of alternative subsets
@return true if every BitSet in altsets has cardinality > 1, otherwise false
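These subset tests can be sketched with stdlib-only Go, using map-backed sets in place of the runtime's *BitSet; the names altSet, allSubsetsConflict, and allSubsetsEqual below are illustrative stand-ins, not the runtime's implementation:

```go
package main

import "fmt"

// altSet stands in for the runtime's *BitSet of alternative numbers.
type altSet map[int]bool

// allSubsetsConflict reports whether every alternative subset contains more
// than one alternative (i.e. every subset is conflicting).
func allSubsetsConflict(altsets []altSet) bool {
	for _, s := range altsets {
		if len(s) <= 1 {
			return false
		}
	}
	return true
}

// allSubsetsEqual reports whether every subset holds exactly the same
// alternatives as the first one.
func allSubsetsEqual(altsets []altSet) bool {
	if len(altsets) == 0 {
		return true
	}
	first := altsets[0]
	for _, s := range altsets[1:] {
		if len(s) != len(first) {
			return false
		}
		for alt := range s {
			if !first[alt] {
				return false
			}
		}
	}
	return true
}

func main() {
	a := []altSet{{1: true, 2: true}, {1: true, 2: true}}
	fmt.Println(allSubsetsConflict(a), allSubsetsEqual(a)) // true true
}
```

The cardinality check mirrors the BitSet cardinality > 1 condition described above; everything else is bookkeeping over plain maps.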

func PredictionModeallSubsetsEqual

func PredictionModeallSubsetsEqual(altsets []*BitSet) bool

Determines if every alternative subset in altsets is equivalent.

@param altsets a collection of alternative subsets
@return true if every member of altsets is equal to the others, otherwise false

func PredictionModegetSingleViableAlt

func PredictionModegetSingleViableAlt(altsets []*BitSet) int

func PredictionModegetUniqueAlt

func PredictionModegetUniqueAlt(altsets []*BitSet) int

Returns the unique alternative predicted by all alternative subsets in altsets. If no such alternative exists, this method returns ATNInvalidAltNumber.

@param altsets a collection of alternative subsets

func PredictionModehasConfigInRuleStopState

func PredictionModehasConfigInRuleStopState(configs ATNConfigSet) bool

Checks if any configuration in configs is in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).

@param configs the configuration set to test
@return true if any configuration in configs is in a RuleStopState, otherwise false

func PredictionModehasConflictingAltSet

func PredictionModehasConflictingAltSet(altsets []*BitSet) bool

Determines if any single alternative subset in altsets contains more than one alternative.

@param altsets a collection of alternative subsets
@return true if altsets contains a BitSet with cardinality > 1, otherwise false

func PredictionModehasNonConflictingAltSet

func PredictionModehasNonConflictingAltSet(altsets []*BitSet) bool

Determines if any single alternative subset in altsets contains exactly one alternative.

@param altsets a collection of alternative subsets
@return true if altsets contains a BitSet with cardinality 1, otherwise false

func PredictionModehasSLLConflictTerminatingPrediction

func PredictionModehasSLLConflictTerminatingPrediction(mode int, configs ATNConfigSet) bool

Computes the SLL prediction termination condition.

This method computes the SLL prediction termination condition for both of the following cases:

  • The usual SLL+LL fallback upon SLL conflict
  • Pure SLL without LL fallback

COMBINED SLL+LL PARSING

When LL-fallback is enabled upon SLL conflict, correct predictions are ensured regardless of how the termination condition is computed by this method. Due to the substantially higher cost of LL prediction, the prediction should only fall back to LL when the additional lookahead cannot lead to a unique SLL prediction.

Assuming combined SLL+LL parsing, an SLL configuration set with only conflicting subsets should fall back to full LL, even if the configuration sets don't resolve to the same alternative, e.g. {1,2} and {3,4}. If there is at least one non-conflicting configuration, SLL could continue with the hope that more lookahead will resolve via one of those non-conflicting configurations.

Here's the prediction termination rule, then: SLL (for SLL+LL parsing) stops when it sees only conflicting configuration subsets. In contrast, full LL keeps going when there is uncertainty.

HEURISTIC

As a heuristic, we stop prediction when we see any conflicting subset unless we see a state that only has one alternative associated with it. The single-alt-state thing lets prediction continue upon rules like the following (otherwise, it would admit defeat too soon):

[12|1|[], 6|2|[], 12|2|[]]. s : (ID | ID ID?) ';'

When the ATN simulation reaches the state before ';', it has a DFA state that looks like [12|1|[], 6|2|[], 12|2|[]]. Naturally, 12|1|[] and 12|2|[] conflict, but we cannot stop processing this node because alternative two has another way to continue, via [6|2|[]].

It also lets us continue for this rule:

[1|1|[], 1|2|[], 8|3|[]] a : A | A | A B

After matching input A, we reach the stop state for rule A, state 1. State 8 is the state right before B. Clearly alternatives 1 and 2 conflict and no amount of further lookahead will separate the two. However, alternative 3 will be able to continue, so we do not stop working on this state. In the previous example, we're concerned with states associated with the conflicting alternatives. Here alt 3 is not associated with the conflicting configs, but since we can continue looking for input reasonably, don't declare the state done.

PURE SLL PARSING

To handle pure SLL parsing, all we have to do is make sure that we combine stack contexts for configurations that differ only by semantic predicate. From there, we can do the usual SLL termination heuristic.

PREDICATES IN SLL+LL PARSING

SLL decisions don't evaluate predicates until after they reach DFA stop states because they need to create the DFA cache that works in all semantic situations. In contrast, full LL evaluates predicates collected during start state computation so it can ignore predicates thereafter. This means that SLL termination detection can totally ignore semantic predicates.

Implementation-wise, ATNConfigSet combines stack contexts but not semantic predicate contexts, so we might see two configurations like the following:

(s, 1, x, {}), (s, 1, x', {p})

Before testing these configurations against others, we have to merge x and x' (without modifying the existing configurations). For example, we test (x+x')==x'' when looking for conflicts in the following configurations:

(s, 1, x, {}), (s, 1, x', {p}), (s, 2, x'', {})

If the configuration set has predicates (as indicated by ATNConfigSet.hasSemanticContext), this algorithm makes a copy of the configurations to strip out all of the predicates so that a standard ATNConfigSet will merge everything ignoring predicates.

func PredictionModehasStateAssociatedWithOneAlt

func PredictionModehasStateAssociatedWithOneAlt(configs ATNConfigSet) bool

func PredictionModeresolvesToJustOneViableAlt

func PredictionModeresolvesToJustOneViableAlt(altsets []*BitSet) int

Full LL prediction termination.

Can we stop looking ahead during ATN simulation, or is there some uncertainty as to which alternative we will ultimately pick after consuming more input? Even if there are partial conflicts, we might know that everything is going to resolve to the same minimum alternative. That means we can stop, since no more lookahead will change that fact. On the other hand, there might be multiple conflicts that resolve to different minimums. That means we need more lookahead to decide which of those alternatives we should predict.

The basic idea is to split the set of configurations C into conflicting subsets (s, _, ctx, _) and singleton subsets with non-conflicting configurations. Two configurations conflict if they have identical ATNConfig.state and ATNConfig.context values but a different ATNConfig.alt value, e.g. (s, i, ctx, _) and (s, j, ctx, _) for i!=j.

Reduce these configuration subsets to the set of possible alternatives. You can compute the alternative subsets in one pass as follows:

A_s,ctx = {i | (s, i, ctx, _)} for each configuration in C, holding s and ctx fixed.

Or in pseudo-code, for each configuration c in C:

map[c] U= c.alt // map hash/equals uses s and x, not alt and not pred

The values in map are the set of A_s,ctx sets.

If |A_s,ctx|=1 then there is no conflict associated with s and ctx.
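The grouping step described above (hold s and ctx fixed, collect the alts) can be sketched with plain maps; the config struct below is a simplified stand-in for the runtime's ATNConfig, not its actual definition:

```go
package main

import "fmt"

// config is a simplified stand-in for an ATN configuration: (state, alt, ctx).
type config struct {
	state int
	alt   int
	ctx   string // stands in for the graph-structured stack context
}

// key identifies a subset: configurations with the same (state, ctx).
type key struct {
	state int
	ctx   string
}

// altSubsets groups configurations by (state, ctx) and collects the set of
// alternatives for each group: A_s,ctx = {i | (s, i, ctx, _)}.
func altSubsets(configs []config) []map[int]bool {
	m := map[key]map[int]bool{}
	var order []key // preserve first-seen order for deterministic output
	for _, c := range configs {
		k := key{c.state, c.ctx}
		if m[k] == nil {
			m[k] = map[int]bool{}
			order = append(order, k)
		}
		m[k][c.alt] = true
	}
	out := make([]map[int]bool, 0, len(order))
	for _, k := range order {
		out = append(out, m[k])
	}
	return out
}

func main() {
	// (12,1,x) and (12,2,x) share (state, ctx) and so form one subset {1,2};
	// (6,2,y) forms a singleton subset {2}.
	cfgs := []config{{12, 1, "x"}, {12, 2, "x"}, {6, 2, "y"}}
	for _, s := range altSubsets(cfgs) {
		fmt.Println(len(s))
	}
}
```

A subset of size greater than one is exactly the |A_s,ctx| > 1 conflict case described above.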

Reduce the subsets to singletons by choosing a minimum of each subset. If the union of these alternative subsets is a singleton, then no amount of further lookahead will help us. We will always pick that alternative. If, however, there is more than one alternative, then we are uncertain which alternative to predict and must continue looking for resolution. We may or may not discover an ambiguity in the future, even if there are no conflicting subsets this round.

The biggest sin is to terminate early, because it means we've made a decision but were uncertain as to the eventual outcome. We haven't used enough lookahead. On the other hand, announcing a conflict too late is no big deal; you will still have the conflict. It's just inefficient. It might even look until the end of file.

No special consideration for semantic predicates is required because predicates are evaluated on-the-fly for full LL prediction, ensuring that no configuration contains a semantic context during the termination check.

CONFLICTING CONFIGS

Two configurations (s, i, x) and (s, j, x') conflict when i!=j but x=x'. Because we merge all (s, i, _) configurations together, that means that there are at most n configurations associated with state s for n possible alternatives in the decision. The merged stacks complicate the comparison of configuration contexts x and x'. Sam checks to see if one is a subset of the other by calling merge and checking to see if the merged result is either x or x'. If the x associated with the lowest alternative i is the superset, then i is the only possible prediction, since the others resolve to min(i) as well. However, if x is associated with j>i, then at least one stack configuration for j is not in conflict with alternative i. The algorithm should keep going, looking for more lookahead due to the uncertainty.

For simplicity, I'm doing an equality check between x and x' that lets the algorithm continue to consume lookahead longer than necessary. The reason I like the equality is of course the simplicity, but also because that is the test you need to detect the alternatives that are actually in conflict.

CONTINUE/STOP RULE

Continue if the union of resolved alternative sets from non-conflicting and conflicting alternative subsets has more than one alternative. We are uncertain about which alternative to predict.

The complete set of alternatives, [i for (_,i,_)], tells us which alternatives are still in the running for the amount of input we've consumed at this point. The conflicting sets let us strip away configurations that won't lead to more states because we resolve conflicts to the configuration with a minimum alternative for the conflicting set.

CASES

  • no conflicts and more than 1 alternative in set => continue
  • (s, 1, x), (s, 2, x), (s, 3, z), (s', 1, y), (s', 2, y) yields non-conflicting set {3} U conflicting sets min({1,2}) U min({1,2}) = {1,3} => continue
  • (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y), (s'', 1, z) yields non-conflicting set {1} U conflicting sets min({1,2}) U min({1,2}) = {1} => stop and predict 1
  • (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y) yields conflicting, reduced sets {1} U {1} = {1} => stop and predict 1, can announce ambiguity {1,2}
  • (s, 1, x), (s, 2, x), (s', 2, y), (s', 3, y) yields conflicting, reduced sets {1} U {2} = {1,2} => continue
  • (s, 1, x), (s, 2, x), (s', 3, y), (s', 4, y) yields conflicting, reduced sets {1} U {3} = {1,3} => continue

EXACT AMBIGUITY DETECTION

If all states report the same conflicting set of alternatives, then we know we have the exact ambiguity set:

|A_i| > 1 and A_i = A_j for all i, j.

In other words, we continue examining lookahead until all A_i have more than one alternative and all A_i are the same. If A={{1,2}, {1,3}}, then regular LL prediction would terminate because the resolved set is {1}. To determine what the real ambiguity is, we have to know whether the ambiguity is between one and two or one and three, so we keep going. When we need exact ambiguity detection, we can only stop prediction when the sets look like A={{1,2}} or {{1,2},{1,2}}, etc.

func PrintArrayJavaStyle

func PrintArrayJavaStyle(sa []string) string

func TerminalNodeToStringArray

func TerminalNodeToStringArray(sa []TerminalNode) []string

func TreesGetNodeText

func TreesGetNodeText(t Tree, ruleNames []string, recog Parser) string

func TreesStringTree

func TreesStringTree(tree Tree, ruleNames []string, recog Recognizer) string

Print out a whole tree in LISP form. getNodeText is used on the node payloads to get the text for the nodes. Detect parse trees and extract data appropriately.

Types

type AND

type AND struct {
	// contains filtered or unexported fields
}

func NewAND

func NewAND(a, b SemanticContext) *AND

func (*AND) Equals

func (a *AND) Equals(other Collectable[SemanticContext]) bool

func (*AND) Hash

func (a *AND) Hash() int

func (*AND) String

func (a *AND) String() string

type ATN

type ATN struct {
	// DecisionToState is the decision points for all rules, subrules, optional
	// blocks, ()+, ()*, etc. Each subrule/rule is a decision point, and we must
	// track them so we can go back later and build DFA predictors for them.
	// This includes all the rules, subrules, optional blocks, ()+, ()*, etc.
	DecisionToState []DecisionState
	// contains filtered or unexported fields
}

ATN represents an “Augmented Transition Network”, though in general ANTLR usage the term “Augmented Recursive Transition Network” is used; there are also some descriptions of “Recursive Transition Network” in existence.

ATNs represent the main networks in the system and are serialized by the code generator and support ALL(*).

func NewATN

func NewATN(grammarType int, maxTokenType int) *ATN

NewATN returns a new ATN struct representing the given grammarType and is used for runtime deserialization of ATNs from the code generated by the ANTLR tool.

func (*ATN) NextTokens

func (a *ATN) NextTokens(s ATNState, ctx RuleContext) *IntervalSet

NextTokens computes and returns the set of valid tokens starting in state s, by calling either [NextTokensNoContext] (ctx == nil) or [NextTokensInContext] (ctx != nil).

func (*ATN) NextTokensInContext

func (a *ATN) NextTokensInContext(s ATNState, ctx RuleContext) *IntervalSet

NextTokensInContext computes and returns the set of valid tokens that can occur starting in state s. If ctx is nil, the set of tokens will not include what can follow the rule surrounding s. In other words, the set will be restricted to tokens reachable staying within the rule of s.

func (*ATN) NextTokensNoContext

func (a *ATN) NextTokensNoContext(s ATNState) *IntervalSet

NextTokensNoContext computes and returns the set of valid tokens that can occur starting in state s and staying in the same rule. antlr.Token.EPSILON is in the set if we reach the end of the rule.

type ATNAltConfigComparator

type ATNAltConfigComparator[T Collectable[T]] struct {
}

ATNAltConfigComparator is used as the comparator for mapping configs to Alt Bitsets

func (*ATNAltConfigComparator[T]) Equals2

func (c *ATNAltConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs specifically for configLookup

func (*ATNAltConfigComparator[T]) Hash1

Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup

type ATNConfig

type ATNConfig interface {
	Equals(o Collectable[ATNConfig]) bool
	Hash() int
	GetState() ATNState
	GetAlt() int
	GetSemanticContext() SemanticContext
	GetContext() PredictionContext
	SetContext(PredictionContext)
	GetReachesIntoOuterContext() int
	SetReachesIntoOuterContext(int)
	String() string
	// contains filtered or unexported methods
}

ATNConfig is a tuple: (ATN state, predicted alt, syntactic context, semantic context). The syntactic context is a graph-structured stack node whose path(s) to the root is the rule invocation(s) chain used to arrive at the state. The semantic context is the tree of semantic predicates encountered before reaching an ATN state.

func NewBaseATNConfig7

func NewBaseATNConfig7(old *BaseATNConfig) ATNConfig

type ATNConfigComparator

type ATNConfigComparator[T Collectable[T]] struct {
}

ATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet and has a custom Equals() and Hash() implementation, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.

func (*ATNConfigComparator[T]) Equals2

func (c *ATNConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs specifically for configLookup

func (*ATNConfigComparator[T]) Hash1

func (c *ATNConfigComparator[T]) Hash1(o ATNConfig) int

Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup

type ATNConfigSet

type ATNConfigSet interface {
	Hash() int
	Equals(o Collectable[ATNConfig]) bool
	Add(ATNConfig, *DoubleDict) bool
	AddAll([]ATNConfig) bool
	GetStates() *JStore[ATNState, Comparator[ATNState]]
	GetPredicates() []SemanticContext
	GetItems() []ATNConfig
	OptimizeConfigs(interpreter *BaseATNSimulator)
	Length() int
	IsEmpty() bool
	Contains(ATNConfig) bool
	ContainsFast(ATNConfig) bool
	Clear()
	String() string
	HasSemanticContext() bool
	SetHasSemanticContext(v bool)
	ReadOnly() bool
	SetReadOnly(bool)
	GetConflictingAlts() *BitSet
	SetConflictingAlts(*BitSet)
	Alts() *BitSet
	FullContext() bool
	GetUniqueAlt() int
	SetUniqueAlt(int)
	GetDipsIntoOuterContext() bool
	SetDipsIntoOuterContext(bool)
}

type ATNConfigSetPair

type ATNConfigSetPair struct {
	// contains filtered or unexported fields
}

type ATNDeserializationOptions

type ATNDeserializationOptions struct {
	// contains filtered or unexported fields
}

func DefaultATNDeserializationOptions

func DefaultATNDeserializationOptions() *ATNDeserializationOptions

func (*ATNDeserializationOptions) GenerateRuleBypassTransitions

func (opts *ATNDeserializationOptions) GenerateRuleBypassTransitions() bool

func (*ATNDeserializationOptions) ReadOnly

func (opts *ATNDeserializationOptions) ReadOnly() bool

func (*ATNDeserializationOptions) SetGenerateRuleBypassTransitions

func (opts *ATNDeserializationOptions) SetGenerateRuleBypassTransitions(generateRuleBypassTransitions bool)

func (*ATNDeserializationOptions) SetReadOnly

func (opts *ATNDeserializationOptions) SetReadOnly(readOnly bool)

func (*ATNDeserializationOptions) SetVerifyATN

func (opts *ATNDeserializationOptions) SetVerifyATN(verifyATN bool)

func (*ATNDeserializationOptions) VerifyATN

func (opts *ATNDeserializationOptions) VerifyATN() bool

typeATNDeserializer

type ATNDeserializer struct {// contains filtered or unexported fields}

funcNewATNDeserializer

func NewATNDeserializer(options *ATNDeserializationOptions) *ATNDeserializer

func (*ATNDeserializer)Deserialize

func (a *ATNDeserializer) Deserialize(data []int32) *ATN

type ATNState

type ATNState interface {
	GetEpsilonOnlyTransitions() bool
	GetRuleIndex() int
	SetRuleIndex(int)
	GetNextTokenWithinRule() *IntervalSet
	SetNextTokenWithinRule(*IntervalSet)
	GetATN() *ATN
	SetATN(*ATN)
	GetStateType() int
	GetStateNumber() int
	SetStateNumber(int)
	GetTransitions() []Transition
	SetTransitions([]Transition)
	AddTransition(Transition, int)
	String() string
	Hash() int
	Equals(Collectable[ATNState]) bool
}

type AbstractPredicateTransition

type AbstractPredicateTransition interface {
	Transition
	IAbstractPredicateTransitionFoo()
}

type ActionTransition

type ActionTransition struct {
	*BaseTransition
	// contains filtered or unexported fields
}

func NewActionTransition

func NewActionTransition(target ATNState, ruleIndex, actionIndex int, isCtxDependent bool) *ActionTransition

func (*ActionTransition) Matches

func (t *ActionTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*ActionTransition) String

func (t *ActionTransition) String() string

type AltDict

type AltDict struct {
	// contains filtered or unexported fields
}

func NewAltDict

func NewAltDict() *AltDict

func PredictionModeGetStateToAltMap

func PredictionModeGetStateToAltMap(configs ATNConfigSet) *AltDict

PredictionModeGetStateToAltMap gets a map from state to alt subset from a configuration set. For each configuration c in configs:

	map[c.state] U= c.alt
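The union operation above can be sketched in plain Go, using a hypothetical stand-in type for ATNConfig (just a state number and a predicted alternative) and a map of sets in place of the runtime's AltDict and BitSet:

```go
package main

import "fmt"

// config is a hypothetical stand-in for ATNConfig: a state number and a
// predicted alternative.
type config struct {
	state int
	alt   int
}

// stateToAltMap unions each config's alt into the alt set of its state,
// mirroring map[c.state] U= c.alt.
func stateToAltMap(configs []config) map[int]map[int]bool {
	m := make(map[int]map[int]bool)
	for _, c := range configs {
		if m[c.state] == nil {
			m[c.state] = make(map[int]bool)
		}
		m[c.state][c.alt] = true
	}
	return m
}

func main() {
	m := stateToAltMap([]config{{1, 1}, {1, 2}, {2, 1}})
	fmt.Println(len(m[1]), len(m[2])) // state 1 maps to two alts, state 2 to one
}
```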

func (*AltDict) Get

func (a *AltDict) Get(key string) interface{}

type ArrayPredictionContext

type ArrayPredictionContext struct {
	*BasePredictionContext
	// contains filtered or unexported fields
}

func NewArrayPredictionContext

func NewArrayPredictionContext(parents []PredictionContext, returnStates []int) *ArrayPredictionContext

func (*ArrayPredictionContext) Equals

func (a *ArrayPredictionContext) Equals(o interface{}) bool

Equals is the default comparison function for ArrayPredictionContext when no specialized implementation is needed for a collection.

func (*ArrayPredictionContext) GetParent

func (*ArrayPredictionContext) GetReturnStates

func (a *ArrayPredictionContext) GetReturnStates() []int

func (*ArrayPredictionContext) Hash

func (a *ArrayPredictionContext) Hash() int

Hash is the default hash function for ArrayPredictionContext when no specialized implementation is needed for a collection.

func (*ArrayPredictionContext) String

func (a *ArrayPredictionContext) String() string

type AtomTransition

type AtomTransition struct {
	*BaseTransition
}

TODO: make all transitions sets? no, should remove set edges

func NewAtomTransition

func NewAtomTransition(target ATNState, intervalSet int) *AtomTransition

func (*AtomTransition) Matches

func (t *AtomTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*AtomTransition) String

func (t *AtomTransition) String() string

type BailErrorStrategy

type BailErrorStrategy struct {
	*DefaultErrorStrategy
}

func NewBailErrorStrategy

func NewBailErrorStrategy() *BailErrorStrategy

func (*BailErrorStrategy) Recover

func (b *BailErrorStrategy) Recover(recognizer Parser, e RecognitionException)

Instead of recovering from exception e, Recover re-panics with e wrapped in a ParseCancellationException so it is not caught by the rule function catch blocks. Use Exception.getCause() to get the original RecognitionException.

func (*BailErrorStrategy) RecoverInline

func (b *BailErrorStrategy) RecoverInline(recognizer Parser) Token

RecoverInline makes sure we don't attempt to recover inline; if the parser successfully recovers, it won't panic with an exception.

func (*BailErrorStrategy) Sync

func (b *BailErrorStrategy) Sync(recognizer Parser)

Sync makes sure we don't attempt to recover from problems in subrules.

type BaseATNConfig

type BaseATNConfig struct {
	// contains filtered or unexported fields
}

func NewBaseATNConfig

func NewBaseATNConfig(c ATNConfig, state ATNState, context PredictionContext, semanticContext SemanticContext) *BaseATNConfig

func NewBaseATNConfig1

func NewBaseATNConfig1(c ATNConfig, state ATNState, context PredictionContext) *BaseATNConfig

func NewBaseATNConfig2

func NewBaseATNConfig2(c ATNConfig, semanticContext SemanticContext) *BaseATNConfig

func NewBaseATNConfig3

func NewBaseATNConfig3(c ATNConfig, state ATNState, semanticContext SemanticContext) *BaseATNConfig

func NewBaseATNConfig4

func NewBaseATNConfig4(c ATNConfig, state ATNState) *BaseATNConfig

func NewBaseATNConfig5

func NewBaseATNConfig5(state ATNState, alt int, context PredictionContext, semanticContext SemanticContext) *BaseATNConfig

func NewBaseATNConfig6

func NewBaseATNConfig6(state ATNState, alt int, context PredictionContext) *BaseATNConfig

func (*BaseATNConfig) Equals

Equals is the default comparison function for an ATNConfig when no specialist implementation is required for a collection.

An ATN configuration is equal to another if both have the same state, they predict the same alternative, and syntactic/semantic contexts are the same.

func (*BaseATNConfig) GetAlt

func (b *BaseATNConfig) GetAlt() int

func (*BaseATNConfig) GetContext

func (b *BaseATNConfig) GetContext() PredictionContext

func (*BaseATNConfig) GetReachesIntoOuterContext

func (b *BaseATNConfig) GetReachesIntoOuterContext() int

func (*BaseATNConfig) GetSemanticContext

func (b *BaseATNConfig) GetSemanticContext() SemanticContext

func (*BaseATNConfig) GetState

func (b *BaseATNConfig) GetState() ATNState

func (*BaseATNConfig) Hash

func (b *BaseATNConfig) Hash() int

Hash is the default hash function for BaseATNConfig, when no specialist hash function is required for a collection.

func (*BaseATNConfig) SetContext

func (b *BaseATNConfig) SetContext(v PredictionContext)

func (*BaseATNConfig) SetReachesIntoOuterContext

func (b *BaseATNConfig) SetReachesIntoOuterContext(v int)

func (*BaseATNConfig) String

func (b *BaseATNConfig) String() string

type BaseATNConfigComparator

type BaseATNConfigComparator[T Collectable[T]] struct {
}

BaseATNConfigComparator is used as the comparator for the configLookup field of a BaseATNConfigSet and has custom Equals() and Hash() implementations, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.

func (*BaseATNConfigComparator[T]) Equals2

func (c *BaseATNConfigComparator[T]) Equals2(o1, o2 ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs, used specifically for baseATNConfigSet.

func (*BaseATNConfigComparator[T]) Hash1

Hash1 is a custom hash implementation for ATNConfigs, used specifically for configLookup; in fact it just delegates to the standard Hash() method of the ATNConfig type.
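The comparator pattern described above — a collection parameterized by domain-specific equality and hashing rather than the element type's own methods — can be sketched in plain Go. All names below are illustrative stand-ins, not the runtime's types:

```go
package main

import "fmt"

// comparator supplies equality and hashing for a hash-set, independent of
// any methods on T itself.
type comparator[T any] interface {
	equals(a, b T) bool
	hash(a T) int
}

// set is a tiny hash-set bucketed by the comparator's hash function.
type set[T any] struct {
	cmp     comparator[T]
	buckets map[int][]T
}

func newSet[T any](cmp comparator[T]) *set[T] {
	return &set[T]{cmp: cmp, buckets: make(map[int][]T)}
}

// add inserts v unless an element equal under cmp is already present; it
// reports whether the set changed.
func (s *set[T]) add(v T) bool {
	h := s.cmp.hash(v)
	for _, e := range s.buckets[h] {
		if s.cmp.equals(e, v) {
			return false
		}
	}
	s.buckets[h] = append(s.buckets[h], v)
	return true
}

// cfg mimics a configuration that is keyed only on (state, alt) for lookup,
// even though it carries more data.
type cfg struct {
	state, alt int
	context    string
}

type cfgComparator struct{}

func (cfgComparator) equals(a, b cfg) bool { return a.state == b.state && a.alt == b.alt }
func (cfgComparator) hash(a cfg) int       { return a.state*31 + a.alt }

func main() {
	s := newSet[cfg](cfgComparator{})
	fmt.Println(s.add(cfg{1, 2, "a"})) // true: new element
	fmt.Println(s.add(cfg{1, 2, "b"})) // false: equal under (state, alt)
}
```

The point of the indirection is that the same element type can live in different collections with different notions of equality, which is how configLookup can ignore fields that the full ATNConfig equality considers.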

type BaseATNConfigSet

type BaseATNConfigSet struct {
	// contains filtered or unexported fields
}

BaseATNConfigSet is a specialized set of ATNConfig that tracks information about its elements and can combine similar configurations using a graph-structured stack.

func NewBaseATNConfigSet

func NewBaseATNConfigSet(fullCtx bool) *BaseATNConfigSet

func (*BaseATNConfigSet) Add

func (b *BaseATNConfigSet) Add(config ATNConfig, mergeCache *DoubleDict) bool

Add merges contexts with existing configs for (s, i, pi, _), where s is the ATNConfig.state, i is the ATNConfig.alt, and pi is the ATNConfig.semanticContext. We use (s, i, pi) as the key. Updates dipsIntoOuterContext and hasSemanticContext when necessary.
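The merge-by-key behaviour can be sketched in plain Go: configs that share a (state, alt, predicate) key are merged into one entry rather than stored twice. The types here are hypothetical stand-ins, and slice concatenation is a crude stand-in for the real graph-structured-stack context merge:

```go
package main

import "fmt"

// key is a stand-in for the (s, i, pi) triple used to decide whether two
// configs merge.
type key struct {
	state, alt int
	pred       string
}

// mergeAdd folds a config's context into m under its key and reports
// whether a genuinely new (s, i, pi) entry was created.
func mergeAdd(m map[key][]string, k key, context string) bool {
	existing, found := m[k]
	m[k] = append(existing, context)
	return !found
}

func main() {
	m := make(map[key][]string)
	fmt.Println(mergeAdd(m, key{1, 1, "true"}, "ctxA")) // true: new entry
	fmt.Println(mergeAdd(m, key{1, 1, "true"}, "ctxB")) // false: merged into existing
	fmt.Println(len(m))                                 // still one entry
}
```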

func (*BaseATNConfigSet) AddAll

func (b *BaseATNConfigSet) AddAll(coll []ATNConfig) bool

func (*BaseATNConfigSet) Alts

func (b *BaseATNConfigSet) Alts() *BitSet

func (*BaseATNConfigSet) Clear

func (b *BaseATNConfigSet) Clear()

func (*BaseATNConfigSet) Compare

func (b *BaseATNConfigSet) Compare(bs *BaseATNConfigSet) bool

Compare is a hack function used to verify that adding DFA states to the known set works, so long as comparison of ATNConfigSets works. For that to work, we need to make sure that the sets of ATNConfigs in the two sets are equivalent. We can't know the order, so we do this inefficient hack. If this proves the point, then we can change the config set to a better structure.
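An order-independent comparison of that kind can be sketched in plain Go over slices of comparable elements (a hypothetical helper, not the runtime's implementation):

```go
package main

import "fmt"

// setEqual reports whether a and b contain the same elements with the same
// multiplicities, regardless of order, by counting occurrences.
func setEqual[T comparable](a, b []T) bool {
	if len(a) != len(b) {
		return false
	}
	counts := make(map[T]int, len(a))
	for _, x := range a {
		counts[x]++
	}
	for _, x := range b {
		counts[x]--
		if counts[x] < 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(setEqual([]int{1, 2, 3}, []int{3, 1, 2})) // true
	fmt.Println(setEqual([]int{1, 2, 2}, []int{1, 1, 2})) // false
}
```

The counting map makes this O(n) rather than the O(n²) of pairwise scanning, though for small config sets either is fine.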

func (*BaseATNConfigSet) Contains

func (b *BaseATNConfigSet) Contains(item ATNConfig) bool

func (*BaseATNConfigSet) ContainsFast

func (b *BaseATNConfigSet) ContainsFast(item ATNConfig) bool

func (*BaseATNConfigSet) Equals

func (*BaseATNConfigSet) FullContext

func (b *BaseATNConfigSet) FullContext() bool

func (*BaseATNConfigSet) GetConflictingAlts

func (b *BaseATNConfigSet) GetConflictingAlts() *BitSet

func (*BaseATNConfigSet) GetDipsIntoOuterContext

func (b *BaseATNConfigSet) GetDipsIntoOuterContext() bool

func (*BaseATNConfigSet) GetItems

func (b *BaseATNConfigSet) GetItems() []ATNConfig

func (*BaseATNConfigSet) GetPredicates

func (b *BaseATNConfigSet) GetPredicates() []SemanticContext

func (*BaseATNConfigSet) GetStates

func (*BaseATNConfigSet) GetUniqueAlt

func (b *BaseATNConfigSet) GetUniqueAlt() int

func (*BaseATNConfigSet) HasSemanticContext

func (b *BaseATNConfigSet) HasSemanticContext() bool

func (*BaseATNConfigSet) Hash

func (b *BaseATNConfigSet) Hash() int

func (*BaseATNConfigSet) IsEmpty

func (b *BaseATNConfigSet) IsEmpty() bool

func (*BaseATNConfigSet) Length

func (b *BaseATNConfigSet) Length() int

func (*BaseATNConfigSet) OptimizeConfigs

func (b *BaseATNConfigSet) OptimizeConfigs(interpreter *BaseATNSimulator)

func (*BaseATNConfigSet) ReadOnly

func (b *BaseATNConfigSet) ReadOnly() bool

func (*BaseATNConfigSet) SetConflictingAlts

func (b *BaseATNConfigSet) SetConflictingAlts(v *BitSet)

func (*BaseATNConfigSet) SetDipsIntoOuterContext

func (b *BaseATNConfigSet) SetDipsIntoOuterContext(v bool)

func (*BaseATNConfigSet) SetHasSemanticContext

func (b *BaseATNConfigSet) SetHasSemanticContext(v bool)

func (*BaseATNConfigSet) SetReadOnly

func (b *BaseATNConfigSet) SetReadOnly(readOnly bool)

func (*BaseATNConfigSet) SetUniqueAlt

func (b *BaseATNConfigSet) SetUniqueAlt(v int)

func (*BaseATNConfigSet) String

func (b *BaseATNConfigSet) String() string

type BaseATNSimulator

type BaseATNSimulator struct {
	// contains filtered or unexported fields
}

func NewBaseATNSimulator

func NewBaseATNSimulator(atn *ATN, sharedContextCache *PredictionContextCache) *BaseATNSimulator

func (*BaseATNSimulator) ATN

func (b *BaseATNSimulator) ATN() *ATN

func (*BaseATNSimulator) DecisionToDFA

func (b *BaseATNSimulator) DecisionToDFA() []*DFA

func (*BaseATNSimulator) SharedContextCache

func (b *BaseATNSimulator) SharedContextCache() *PredictionContextCache

type BaseATNState

type BaseATNState struct {
	// NextTokenWithinRule caches lookahead during parsing. Not used during construction.
	NextTokenWithinRule *IntervalSet
	// contains filtered or unexported fields
}

func NewBaseATNState

func NewBaseATNState() *BaseATNState

func (*BaseATNState) AddTransition

func (as *BaseATNState) AddTransition(trans Transition, index int)

func (*BaseATNState) Equals

func (as *BaseATNState) Equals(other Collectable[ATNState]) bool

func (*BaseATNState) GetATN

func (as *BaseATNState) GetATN() *ATN

func (*BaseATNState) GetEpsilonOnlyTransitions

func (as *BaseATNState) GetEpsilonOnlyTransitions() bool

func (*BaseATNState) GetNextTokenWithinRule

func (as *BaseATNState) GetNextTokenWithinRule() *IntervalSet

func (*BaseATNState) GetRuleIndex

func (as *BaseATNState) GetRuleIndex() int

func (*BaseATNState) GetStateNumber

func (as *BaseATNState) GetStateNumber() int

func (*BaseATNState) GetStateType

func (as *BaseATNState) GetStateType() int

func (*BaseATNState) GetTransitions

func (as *BaseATNState) GetTransitions() []Transition

func (*BaseATNState) Hash

func (as *BaseATNState) Hash() int

func (*BaseATNState) SetATN

func (as *BaseATNState) SetATN(atn *ATN)

func (*BaseATNState) SetNextTokenWithinRule

func (as *BaseATNState) SetNextTokenWithinRule(v *IntervalSet)

func (*BaseATNState) SetRuleIndex

func (as *BaseATNState) SetRuleIndex(v int)

func (*BaseATNState) SetStateNumber

func (as *BaseATNState) SetStateNumber(stateNumber int)

func (*BaseATNState) SetTransitions

func (as *BaseATNState) SetTransitions(t []Transition)

func (*BaseATNState) String

func (as *BaseATNState) String() string

type BaseAbstractPredicateTransition

type BaseAbstractPredicateTransition struct {
	*BaseTransition
}

func NewBasePredicateTransition

func NewBasePredicateTransition(target ATNState) *BaseAbstractPredicateTransition

func (*BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo

func (a *BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo()

type BaseBlockStartState

type BaseBlockStartState struct {
	*BaseDecisionState
	// contains filtered or unexported fields
}

BaseBlockStartState is the start of a regular (...) block.

func NewBlockStartState

func NewBlockStartState() *BaseBlockStartState

type BaseDecisionState

type BaseDecisionState struct {
	*BaseATNState
	// contains filtered or unexported fields
}

func NewBaseDecisionState

func NewBaseDecisionState() *BaseDecisionState

type BaseInterpreterRuleContext

type BaseInterpreterRuleContext struct {
	*BaseParserRuleContext
}

func NewBaseInterpreterRuleContext

func NewBaseInterpreterRuleContext(parent BaseInterpreterRuleContext, invokingStateNumber, ruleIndex int) *BaseInterpreterRuleContext

type BaseLexer

type BaseLexer struct {
	*BaseRecognizer
	Interpreter         ILexerATNSimulator
	TokenStartCharIndex int
	TokenStartLine      int
	TokenStartColumn    int
	ActionType          int
	Virt                Lexer // The most derived lexer implementation. Allows virtual method calls.
	// contains filtered or unexported fields
}

func NewBaseLexer

func NewBaseLexer(input CharStream) *BaseLexer

func (*BaseLexer) Emit

func (b *BaseLexer) Emit() Token

Emit is the standard method called to automatically emit a token at the outermost lexical rule. The token object should point into the char buffer start..stop. If there is a text override in 'text', use that to set the token's text. Override this method to emit custom Token objects or provide a new factory.

func (*BaseLexer) EmitEOF

func (b *BaseLexer) EmitEOF() Token

func (*BaseLexer) EmitToken

func (b *BaseLexer) EmitToken(token Token)

By default this does not support multiple emits per NextToken invocation, for efficiency reasons. Subclass and override this method, NextToken, and GetToken (to push tokens into a list and pull from that list rather than a single variable as this implementation does).

func (*BaseLexer) GetATN

func (b *BaseLexer) GetATN() *ATN

func (*BaseLexer) GetAllTokens

func (b *BaseLexer) GetAllTokens() []Token

GetAllTokens returns a list of all Token objects in the input char stream. It forces a load of all tokens and does not include the EOF token.

func (*BaseLexer) GetCharIndex

func (b *BaseLexer) GetCharIndex() int

GetCharIndex returns the index of the current character of lookahead.

func (*BaseLexer) GetCharPositionInLine

func (b *BaseLexer) GetCharPositionInLine() int

func (*BaseLexer) GetInputStream

func (b *BaseLexer) GetInputStream() CharStream

func (*BaseLexer) GetInterpreter

func (b *BaseLexer) GetInterpreter() ILexerATNSimulator

func (*BaseLexer) GetLine

func (b *BaseLexer) GetLine() int

func (*BaseLexer) GetSourceName

func (b *BaseLexer) GetSourceName() string

func (*BaseLexer) GetText

func (b *BaseLexer) GetText() string

GetText returns the text Matched so far for the current token, or any text override. Setting the complete text of this token wipes any previous changes to the text.

func (*BaseLexer) GetTokenFactory

func (b *BaseLexer) GetTokenFactory() TokenFactory

func (*BaseLexer) GetTokenSourceCharStreamPair

func (b *BaseLexer) GetTokenSourceCharStreamPair() *TokenSourceCharStreamPair

func (*BaseLexer) GetType

func (b *BaseLexer) GetType() int

func (*BaseLexer) More

func (b *BaseLexer) More()

func (*BaseLexer) NextToken

func (b *BaseLexer) NextToken() Token

NextToken returns a token from this source, i.e. it Matches a token on the char stream.

func (*BaseLexer) PopMode

func (b *BaseLexer) PopMode() int

func (*BaseLexer) PushMode

func (b *BaseLexer) PushMode(m int)

func (*BaseLexer) Recover

func (b *BaseLexer) Recover(re RecognitionException)

Lexers can normally Match any char in their vocabulary after Matching a token, so here we do the easy thing and just kill a character and hope it all works out. You can instead use the rule invocation stack to do sophisticated error recovery if you are in a fragment rule.

func (*BaseLexer) SetChannel

func (b *BaseLexer) SetChannel(v int)

func (*BaseLexer) SetInputStream

func (b *BaseLexer) SetInputStream(input CharStream)

SetInputStream resets the lexer input stream and associated lexer state.

func (*BaseLexer) SetMode

func (b *BaseLexer) SetMode(m int)

func (*BaseLexer) SetText

func (b *BaseLexer) SetText(text string)

func (*BaseLexer) SetType

func (b *BaseLexer) SetType(t int)

func (*BaseLexer) Skip

func (b *BaseLexer) Skip()

Skip instructs the lexer to Skip creating a token for the current lexer rule and look for another token. NextToken() knows to keep looking when a lexer rule finishes with the token set to SKIPTOKEN. Recall that if token==nil at the end of any token rule, it creates one for you and emits it.

type BaseLexerAction

type BaseLexerAction struct {
	// contains filtered or unexported fields
}

func NewBaseLexerAction

func NewBaseLexerAction(action int) *BaseLexerAction

func (*BaseLexerAction) Equals

func (b *BaseLexerAction) Equals(other LexerAction) bool

func (*BaseLexerAction) Hash

func (b *BaseLexerAction) Hash() int

type BaseParseTreeListener

type BaseParseTreeListener struct{}

func (*BaseParseTreeListener) EnterEveryRule

func (l *BaseParseTreeListener) EnterEveryRule(ctx ParserRuleContext)

func (*BaseParseTreeListener) ExitEveryRule

func (l *BaseParseTreeListener) ExitEveryRule(ctx ParserRuleContext)

func (*BaseParseTreeListener) VisitErrorNode

func (l *BaseParseTreeListener) VisitErrorNode(node ErrorNode)

func (*BaseParseTreeListener) VisitTerminal

func (l *BaseParseTreeListener) VisitTerminal(node TerminalNode)

type BaseParseTreeVisitor

type BaseParseTreeVisitor struct{}

func (*BaseParseTreeVisitor) Visit

func (v *BaseParseTreeVisitor) Visit(tree ParseTree) interface{}

func (*BaseParseTreeVisitor) VisitChildren

func (v *BaseParseTreeVisitor) VisitChildren(node RuleNode) interface{}

func (*BaseParseTreeVisitor) VisitErrorNode

func (v *BaseParseTreeVisitor) VisitErrorNode(node ErrorNode) interface{}

func (*BaseParseTreeVisitor) VisitTerminal

func (v *BaseParseTreeVisitor) VisitTerminal(node TerminalNode) interface{}

type BaseParser

type BaseParser struct {
	*BaseRecognizer
	Interpreter     *ParserATNSimulator
	BuildParseTrees bool
	// contains filtered or unexported fields
}

func NewBaseParser

func NewBaseParser(input TokenStream) *BaseParser

BaseParser is all the parsing support code; essentially most of it is error recovery stuff.

func (*BaseParser) AddParseListener

func (p *BaseParser) AddParseListener(listener ParseTreeListener)

AddParseListener registers listener to receive events during the parsing process.

To support output-preserving grammar transformations (including but not limited to left-recursion removal, automated left-factoring, and optimized code generation), calls to listener methods during the parse may differ substantially from calls made by ParseTreeWalker.DEFAULT used after the parse is complete. In particular, rule entry and exit events may occur in a different order during the parse than after the parser. In addition, calls to certain rule entry methods may be omitted.

With the following specific exceptions, calls to listener events are deterministic, i.e. for identical input the calls to listener methods will be the same:

- Alterations to the grammar used to generate code may change the behavior of the listener calls.
- Alterations to the command line options passed to ANTLR 4 when generating the parser may change the behavior of the listener calls.
- Changing the version of the ANTLR Tool used to generate the parser may change the behavior of the listener calls.

AddParseListener panics with a nil-pointer error if listener is nil.
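The listener mechanism can be sketched in plain Go: a parser keeps a slice of registered listeners and notifies each of them as rules are entered and exited. The types below are illustrative stand-ins for ParseTreeListener and the parser's dispatch, not the runtime's:

```go
package main

import "fmt"

// listener is a stand-in for ParseTreeListener with just the two rule events.
type listener interface {
	enterEveryRule(rule string)
	exitEveryRule(rule string)
}

// parser keeps registered listeners and triggers events as rules are walked.
type parser struct {
	listeners []listener
}

func (p *parser) addParseListener(l listener) {
	if l == nil {
		panic("nil listener") // mirrors the documented nil-pointer panic
	}
	p.listeners = append(p.listeners, l)
}

func (p *parser) enterRule(rule string) {
	for _, l := range p.listeners {
		l.enterEveryRule(rule)
	}
}

func (p *parser) exitRule(rule string) {
	for _, l := range p.listeners {
		l.exitEveryRule(rule)
	}
}

// tracer records the event sequence so the dispatch order is observable.
type tracer struct{ events []string }

func (t *tracer) enterEveryRule(rule string) { t.events = append(t.events, "enter:"+rule) }
func (t *tracer) exitEveryRule(rule string)  { t.events = append(t.events, "exit:"+rule) }

func main() {
	tr := &tracer{}
	p := &parser{}
	p.addParseListener(tr)
	p.enterRule("expr")
	p.exitRule("expr")
	fmt.Println(tr.events) // [enter:expr exit:expr]
}
```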

func (*BaseParser) Consume

func (p *BaseParser) Consume() Token

func (*BaseParser) DumpDFA

func (p *BaseParser) DumpDFA()

DumpDFA is for debugging and other purposes.

func (*BaseParser) EnterOuterAlt

func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int)

func (*BaseParser) EnterRecursionRule

func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, ruleIndex, precedence int)

func (*BaseParser) EnterRule

func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, ruleIndex int)

func (*BaseParser) ExitRule

func (p *BaseParser) ExitRule()

func (*BaseParser) GetATN

func (p *BaseParser) GetATN() *ATN

func (*BaseParser) GetATNWithBypassAlts

func (p *BaseParser) GetATNWithBypassAlts()

The ATN with bypass alternatives is expensive to create, so we create it lazily.

GetATNWithBypassAlts panics with an UnsupportedOperationException if the current parser does not implement the getSerializedATN() method.

func (*BaseParser) GetCurrentToken

func (p *BaseParser) GetCurrentToken() Token

Match needs to return the current input symbol, which gets put into the label for the associated token ref, e.g. x=ID.

func (*BaseParser) GetDFAStrings

func (p *BaseParser) GetDFAStrings() string

GetDFAStrings is for debugging and other purposes.

func (*BaseParser) GetErrorHandler

func (p *BaseParser) GetErrorHandler() ErrorStrategy

func (*BaseParser) GetExpectedTokens

func (p *BaseParser) GetExpectedTokens() *IntervalSet

GetExpectedTokens computes the set of input symbols which could follow the current parser state and context, as given by GetState and GetContext, respectively.

See also ATN.getExpectedTokens(int, RuleContext).

func (*BaseParser) GetExpectedTokensWithinCurrentRule

func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet

func (*BaseParser) GetInputStream

func (p *BaseParser) GetInputStream() IntStream

func (*BaseParser) GetInterpreter

func (p *BaseParser) GetInterpreter() *ParserATNSimulator

func (*BaseParser) GetInvokingContext

func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext

func (*BaseParser) GetParseListeners

func (p *BaseParser) GetParseListeners() []ParseTreeListener

func (*BaseParser) GetParserRuleContext

func (p *BaseParser) GetParserRuleContext() ParserRuleContext

func (*BaseParser) GetPrecedence

func (p *BaseParser) GetPrecedence() int

func (*BaseParser) GetRuleIndex

func (p *BaseParser) GetRuleIndex(ruleName string) int

GetRuleIndex gets a rule's index (i.e. the RULE_ruleName field), or -1 if not found.

func (*BaseParser) GetRuleInvocationStack

func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string

func (*BaseParser) GetSourceName

func (p *BaseParser) GetSourceName() string

func (*BaseParser) GetTokenFactory

func (p *BaseParser) GetTokenFactory() TokenFactory

func (*BaseParser) GetTokenStream

func (p *BaseParser) GetTokenStream() TokenStream

func (*BaseParser) IsExpectedToken

func (p *BaseParser) IsExpectedToken(symbol int) bool

func (*BaseParser) Match

func (p *BaseParser) Match(ttype int) Token

func (*BaseParser) MatchWildcard

func (p *BaseParser) MatchWildcard() Token

func (*BaseParser) NotifyErrorListeners

func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException)

func (*BaseParser) Precpred

func (p *BaseParser) Precpred(localctx RuleContext, precedence int) bool

func (*BaseParser) PushNewRecursionContext

func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, ruleIndex int)

func (*BaseParser) RemoveParseListener

func (p *BaseParser) RemoveParseListener(listener ParseTreeListener)

RemoveParseListener removes listener from the list of parse listeners.

If listener is nil or has not been added as a parse listener, this method does nothing.

func (*BaseParser) SetErrorHandler

func (p *BaseParser) SetErrorHandler(e ErrorStrategy)

func (*BaseParser) SetInputStream

func (p *BaseParser) SetInputStream(input TokenStream)

func (*BaseParser) SetParserRuleContext

func (p *BaseParser) SetParserRuleContext(v ParserRuleContext)

func (*BaseParser) SetTokenStream

func (p *BaseParser) SetTokenStream(input TokenStream)

SetTokenStream sets the token stream and resets the parser.

func (*BaseParser) SetTrace

func (p *BaseParser) SetTrace(trace *TraceListener)

During a parse it is sometimes useful to listen in on the rule entry and exit events as well as token Matches. This is for quick and dirty debugging.

func (*BaseParser) TriggerEnterRuleEvent

func (p *BaseParser) TriggerEnterRuleEvent()

TriggerEnterRuleEvent notifies any parse listeners of an enter rule event.

func (*BaseParser) TriggerExitRuleEvent

func (p *BaseParser) TriggerExitRuleEvent()

TriggerExitRuleEvent notifies any parse listeners of an exit rule event.

See also AddParseListener.

func (*BaseParser) UnrollRecursionContexts

func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext)

type BaseParserRuleContext

type BaseParserRuleContext struct {
	*BaseRuleContext
	// contains filtered or unexported fields
}

func NewBaseParserRuleContext

func NewBaseParserRuleContext(parent ParserRuleContext, invokingStateNumber int) *BaseParserRuleContext

func (*BaseParserRuleContext) Accept

func (prc *BaseParserRuleContext) Accept(visitor ParseTreeVisitor) interface{}

func (*BaseParserRuleContext) AddChild

func (*BaseParserRuleContext) AddErrorNode

func (prc *BaseParserRuleContext) AddErrorNode(badToken Token) *ErrorNodeImpl

func (*BaseParserRuleContext) AddTokenNode

func (prc *BaseParserRuleContext) AddTokenNode(token Token) *TerminalNodeImpl

func (*BaseParserRuleContext) CopyFrom

func (prc *BaseParserRuleContext) CopyFrom(ctx *BaseParserRuleContext)

func (*BaseParserRuleContext) EnterRule

func (prc *BaseParserRuleContext) EnterRule(listener ParseTreeListener)

EnterRule is a double-dispatch method for listeners.

func (*BaseParserRuleContext) ExitRule

func (prc *BaseParserRuleContext) ExitRule(listener ParseTreeListener)

func (*BaseParserRuleContext) GetChild

func (prc *BaseParserRuleContext) GetChild(i int) Tree

func (*BaseParserRuleContext) GetChildCount

func (prc *BaseParserRuleContext) GetChildCount() int

func (*BaseParserRuleContext) GetChildOfType

func (prc *BaseParserRuleContext) GetChildOfType(i int, childType reflect.Type) RuleContext

func (*BaseParserRuleContext) GetChildren

func (prc *BaseParserRuleContext) GetChildren() []Tree

func (*BaseParserRuleContext) GetPayload

func (prc *BaseParserRuleContext) GetPayload() interface{}

func (*BaseParserRuleContext) GetRuleContext

func (prc *BaseParserRuleContext) GetRuleContext() RuleContext

func (*BaseParserRuleContext) GetSourceInterval

func (prc *BaseParserRuleContext) GetSourceInterval() *Interval

func (*BaseParserRuleContext) GetStart

func (prc *BaseParserRuleContext) GetStart() Token

func (*BaseParserRuleContext) GetStop

func (prc *BaseParserRuleContext) GetStop() Token

func (*BaseParserRuleContext) GetText

func (prc *BaseParserRuleContext) GetText() string

func (*BaseParserRuleContext) GetToken

func (prc *BaseParserRuleContext) GetToken(ttype int, i int) TerminalNode

func (*BaseParserRuleContext) GetTokens

func (prc *BaseParserRuleContext) GetTokens(ttype int) []TerminalNode

func (*BaseParserRuleContext) GetTypedRuleContext

func (prc *BaseParserRuleContext) GetTypedRuleContext(ctxType reflect.Type, i int) RuleContext

func (*BaseParserRuleContext) GetTypedRuleContexts

func (prc *BaseParserRuleContext) GetTypedRuleContexts(ctxType reflect.Type) []RuleContext

func (*BaseParserRuleContext) RemoveLastChild

func (prc *BaseParserRuleContext) RemoveLastChild()

RemoveLastChild is used by EnterOuterAlt to toss out a RuleContext previously added as we entered a rule. If we have a label, we will need to remove the generic ruleContext object.

func (*BaseParserRuleContext) SetException

func (prc *BaseParserRuleContext) SetException(e RecognitionException)

func (*BaseParserRuleContext) SetStart

func (prc *BaseParserRuleContext) SetStart(t Token)

func (*BaseParserRuleContext) SetStop

func (prc *BaseParserRuleContext) SetStop(t Token)

func (*BaseParserRuleContext) String

func (prc *BaseParserRuleContext) String(ruleNames []string, stop RuleContext) string

func (*BaseParserRuleContext) ToStringTree

func (prc *BaseParserRuleContext) ToStringTree(ruleNames []string, recog Recognizer) string

type BasePredictionContext

type BasePredictionContext struct {
	// contains filtered or unexported fields
}

func NewBasePredictionContext

func NewBasePredictionContext(cachedHash int) *BasePredictionContext

type BaseRecognitionException

type BaseRecognitionException struct {
	// contains filtered or unexported fields
}

func NewBaseRecognitionException

func NewBaseRecognitionException(message string, recognizer Recognizer, input IntStream, ctx RuleContext) *BaseRecognitionException

func (*BaseRecognitionException) GetInputStream

func (b *BaseRecognitionException) GetInputStream() IntStream

func (*BaseRecognitionException) GetMessage

func (b *BaseRecognitionException) GetMessage() string

func (*BaseRecognitionException) GetOffendingToken

func (b *BaseRecognitionException) GetOffendingToken() Token

func (*BaseRecognitionException) String

type BaseRecognizer

type BaseRecognizer struct {
	RuleNames       []string
	LiteralNames    []string
	SymbolicNames   []string
	GrammarFileName string
	// contains filtered or unexported fields
}

func NewBaseRecognizer

func NewBaseRecognizer() *BaseRecognizer

func (*BaseRecognizer) Action

func (b *BaseRecognizer) Action(context RuleContext, ruleIndex, actionIndex int)

func (*BaseRecognizer) AddErrorListener

func (b *BaseRecognizer) AddErrorListener(listener ErrorListener)

func (*BaseRecognizer) GetErrorHeader

func (b *BaseRecognizer) GetErrorHeader(e RecognitionException) string

GetErrorHeader returns the error header, normally line/character position information.

func (*BaseRecognizer) GetErrorListenerDispatch

func (b *BaseRecognizer) GetErrorListenerDispatch() ErrorListener

func (*BaseRecognizer) GetLiteralNames

func (b *BaseRecognizer) GetLiteralNames() []string

func (*BaseRecognizer) GetRuleIndexMap

func (b *BaseRecognizer) GetRuleIndexMap() map[string]int

GetRuleIndexMap gets a map from rule names to rule indexes.

Used for XPath and tree pattern compilation.

func (*BaseRecognizer) GetRuleNames

func (b *BaseRecognizer) GetRuleNames() []string

func (*BaseRecognizer) GetState

func (b *BaseRecognizer) GetState() int

func (*BaseRecognizer) GetSymbolicNames

func (b *BaseRecognizer) GetSymbolicNames() []string

func (*BaseRecognizer) GetTokenErrorDisplay

func (b *BaseRecognizer) GetTokenErrorDisplay(t Token) string

How should a token be displayed in an error message? The default is to display just the text, but during development you might want to have a lot of information spit out. Override in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects, because you don't have to go modify your lexer so that it creates a new type.

Deprecated: This method is not called by the ANTLR 4 Runtime. Specific implementations of ANTLRErrorStrategy may provide a similar feature when necessary. For example, see DefaultErrorStrategy.GetTokenErrorDisplay.
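A minimal sketch of such a display helper in plain Go (hypothetical, not the runtime's implementation): quote the token text and escape whitespace so the error message stays on one line:

```go
package main

import (
	"fmt"
	"strings"
)

// tokenErrorDisplay renders token text for an error message, escaping
// newlines, carriage returns, and tabs so the message stays on one line.
func tokenErrorDisplay(text string) string {
	r := strings.NewReplacer("\n", "\\n", "\r", "\\r", "\t", "\\t")
	return "'" + r.Replace(text) + "'"
}

func main() {
	fmt.Println(tokenErrorDisplay("a\nb"))
}
```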

func (*BaseRecognizer) GetTokenNames

func (b *BaseRecognizer) GetTokenNames() []string

func (*BaseRecognizer) GetTokenType

func (b *BaseRecognizer) GetTokenType(tokenName string) int

func (*BaseRecognizer) Precpred

func (b *BaseRecognizer) Precpred(localctx RuleContext, precedence int) bool

func (*BaseRecognizer) RemoveErrorListeners

func (b *BaseRecognizer) RemoveErrorListeners()

func (*BaseRecognizer) Sempred

func (b *BaseRecognizer) Sempred(localctx RuleContext, ruleIndex int, actionIndex int) bool

Sempred needs to be overridden in the subclass if there are sempreds or actions that the ATN interpreter needs to execute.

func (*BaseRecognizer) SetState

func (b *BaseRecognizer) SetState(v int)

type BaseRewriteOperation

type BaseRewriteOperation struct {
	// contains filtered or unexported fields
}

func (*BaseRewriteOperation) Execute

func (op *BaseRewriteOperation) Execute(buffer *bytes.Buffer) int

func (*BaseRewriteOperation) GetIndex

func (op *BaseRewriteOperation) GetIndex() int

func (*BaseRewriteOperation) GetInstructionIndex

func (op *BaseRewriteOperation) GetInstructionIndex() int

func (*BaseRewriteOperation) GetOpName

func (op *BaseRewriteOperation) GetOpName() string

func (*BaseRewriteOperation) GetText

func (op *BaseRewriteOperation) GetText() string

func (*BaseRewriteOperation) GetTokens

func (op *BaseRewriteOperation) GetTokens() TokenStream

func (*BaseRewriteOperation) SetIndex

func (op *BaseRewriteOperation) SetIndex(val int)

func (*BaseRewriteOperation) SetInstructionIndex

func (op *BaseRewriteOperation) SetInstructionIndex(val int)

func (*BaseRewriteOperation) SetOpName

func (op *BaseRewriteOperation) SetOpName(val string)

func (*BaseRewriteOperation) SetText

func (op *BaseRewriteOperation) SetText(val string)

func (*BaseRewriteOperation) SetTokens

func (op *BaseRewriteOperation) SetTokens(val TokenStream)

func (*BaseRewriteOperation) String

func (op *BaseRewriteOperation) String() string

type BaseRuleContext

type BaseRuleContext struct {
	RuleIndex int
	// contains filtered or unexported fields
}

func NewBaseRuleContext

func NewBaseRuleContext(parent RuleContext, invokingState int) *BaseRuleContext

func (*BaseRuleContext) GetAltNumber

func (b *BaseRuleContext) GetAltNumber() int

func (*BaseRuleContext) GetBaseRuleContext

func (b *BaseRuleContext) GetBaseRuleContext() *BaseRuleContext

func (*BaseRuleContext) GetInvokingState

func (b *BaseRuleContext) GetInvokingState() int

func (*BaseRuleContext) GetParent

func (b *BaseRuleContext) GetParent() Tree

func (*BaseRuleContext) GetRuleIndex

func (b *BaseRuleContext) GetRuleIndex() int

func (*BaseRuleContext) IsEmpty

func (b *BaseRuleContext) IsEmpty() bool

A context is empty if there is no invoking state, meaning that nobody called the current context.

func (*BaseRuleContext) SetAltNumber

func (b *BaseRuleContext) SetAltNumber(altNumber int)

func (*BaseRuleContext) SetInvokingState

func (b *BaseRuleContext) SetInvokingState(t int)

func (*BaseRuleContext) SetParent

func (b *BaseRuleContext) SetParent(v Tree)

type BaseSingletonPredictionContext

type BaseSingletonPredictionContext struct {
	*BasePredictionContext
	// contains filtered or unexported fields
}

func NewBaseSingletonPredictionContext

func NewBaseSingletonPredictionContext(parent PredictionContext, returnState int) *BaseSingletonPredictionContext

func (*BaseSingletonPredictionContext) Equals

func (b *BaseSingletonPredictionContext) Equals(other interface{}) bool

func (*BaseSingletonPredictionContext) GetParent

func (*BaseSingletonPredictionContext) Hash

func (*BaseSingletonPredictionContext) String

type BaseToken

type BaseToken struct {
	// contains filtered or unexported fields
}

func (*BaseToken) GetChannel

func (b *BaseToken) GetChannel() int

func (*BaseToken) GetColumn

func (b *BaseToken) GetColumn() int

func (*BaseToken) GetInputStream

func (b *BaseToken) GetInputStream() CharStream

func (*BaseToken) GetLine

func (b *BaseToken) GetLine() int

func (*BaseToken) GetSource

func (b *BaseToken) GetSource() *TokenSourceCharStreamPair

func (*BaseToken) GetStart

func (b *BaseToken) GetStart() int

func (*BaseToken) GetStop

func (b *BaseToken) GetStop() int

func (*BaseToken) GetTokenIndex

func (b *BaseToken) GetTokenIndex() int

func (*BaseToken) GetTokenSource

func (b *BaseToken) GetTokenSource() TokenSource

func (*BaseToken) GetTokenType

func (b *BaseToken) GetTokenType() int

func (*BaseToken) SetTokenIndex

func (b *BaseToken) SetTokenIndex(v int)

type BaseTransition

type BaseTransition struct {
	// contains filtered or unexported fields
}

func NewBaseTransition

func NewBaseTransition(target ATNState) *BaseTransition

func (*BaseTransition) Matches

func (t *BaseTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

type BasicBlockStartState

type BasicBlockStartState struct {
	*BaseBlockStartState
}

func NewBasicBlockStartState

func NewBasicBlockStartState() *BasicBlockStartState

type BasicState

type BasicState struct {
	*BaseATNState
}

func NewBasicState

func NewBasicState() *BasicState

type BitSet

type BitSet struct {
	// contains filtered or unexported fields
}

func NewBitSet

func NewBitSet() *BitSet

func PredictionModeGetAlts

func PredictionModeGetAlts(altsets []*BitSet) *BitSet

PredictionModeGetAlts gets the complete set of represented alternatives for a collection of alternative subsets. It returns the union of each BitSet in altsets.

@param altsets a collection of alternative subsets
@return the set of represented alternatives in altsets

func PredictionModegetConflictingAltSubsets

func PredictionModegetConflictingAltSubsets(configs ATNConfigSet) []*BitSet

PredictionModegetConflictingAltSubsets gets the conflicting alt subsets from a configuration set. For each configuration c in configs:

map[c] U= c.alt // map hash/equals uses s and x, not alt and not pred

func (*BitSet) String

func (b *BitSet) String() string

type BlockEndState

type BlockEndState struct {
	*BaseATNState
	// contains filtered or unexported fields
}

BlockEndState is a terminal node of a simple (a|b|c) block.

func NewBlockEndState

func NewBlockEndState() *BlockEndState

type BlockStartState

type BlockStartState interface {
	DecisionState
	// contains filtered or unexported methods
}

type CharStream

type CharStream interface {
	IntStream
	GetText(int, int) string
	GetTextFromTokens(start, end Token) string
	GetTextFromInterval(*Interval) string
}

type Collectable

type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

Collectable is an interface that a struct should implement if it is to be usable as a key in these collections.

type CommonToken

type CommonToken struct {
	*BaseToken
}

func NewCommonToken

func NewCommonToken(source *TokenSourceCharStreamPair, tokenType, channel, start, stop int) *CommonToken

func (*CommonToken) GetText

func (c *CommonToken) GetText() string

func (*CommonToken) SetText

func (c *CommonToken) SetText(text string)

func (*CommonToken) String

func (c *CommonToken) String() string

type CommonTokenFactory

type CommonTokenFactory struct {
	// contains filtered or unexported fields
}

CommonTokenFactory is the default TokenFactory implementation.

func NewCommonTokenFactory

func NewCommonTokenFactory(copyText bool) *CommonTokenFactory

func (*CommonTokenFactory) Create

func (c *CommonTokenFactory) Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token

type CommonTokenStream

type CommonTokenStream struct {
	// contains filtered or unexported fields
}

CommonTokenStream is an implementation of TokenStream that loads tokens from a TokenSource on-demand and places the tokens in a buffer to provide access to any previous token by index. This token stream ignores the value of Token.getChannel. If your parser requires the token stream to filter tokens to only those on a particular channel, such as Token.DEFAULT_CHANNEL or Token.HIDDEN_CHANNEL, use a filtering token stream such as CommonTokenStream.

func NewCommonTokenStream

func NewCommonTokenStream(lexer Lexer, channel int) *CommonTokenStream

func (*CommonTokenStream) Consume

func (c *CommonTokenStream) Consume()

func (*CommonTokenStream) Fill

func (c *CommonTokenStream) Fill()

Fill gets all tokens from the lexer until EOF.

func (*CommonTokenStream) Get

func (c *CommonTokenStream) Get(index int) Token

func (*CommonTokenStream) GetAllText

func (c *CommonTokenStream) GetAllText() string

func (*CommonTokenStream) GetAllTokens

func (c *CommonTokenStream) GetAllTokens() []Token

func (*CommonTokenStream) GetHiddenTokensToLeft

func (c *CommonTokenStream) GetHiddenTokensToLeft(tokenIndex, channel int) []Token

GetHiddenTokensToLeft collects all tokens on channel to the left of the current token until we see a token on DEFAULT_TOKEN_CHANNEL. If channel is -1, it finds any non-default channel token.

func (*CommonTokenStream) GetHiddenTokensToRight

func (c *CommonTokenStream) GetHiddenTokensToRight(tokenIndex, channel int) []Token

GetHiddenTokensToRight collects all tokens on a specified channel to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF. If channel is -1, it finds any non-default channel token.

func (*CommonTokenStream) GetSourceName

func (c *CommonTokenStream) GetSourceName() string

func (*CommonTokenStream) GetTextFromInterval

func (c *CommonTokenStream) GetTextFromInterval(interval *Interval) string

func (*CommonTokenStream) GetTextFromRuleContext

func (c *CommonTokenStream) GetTextFromRuleContext(interval RuleContext) string

func (*CommonTokenStream) GetTextFromTokens

func (c *CommonTokenStream) GetTextFromTokens(start, end Token) string

func (*CommonTokenStream) GetTokenSource

func (c *CommonTokenStream) GetTokenSource() TokenSource

func (*CommonTokenStream) GetTokens

func (c *CommonTokenStream) GetTokens(start int, stop int, types *IntervalSet) []Token

GetTokens gets all tokens from start to stop inclusive.

func (*CommonTokenStream) Index

func (c *CommonTokenStream) Index() int

func (*CommonTokenStream) LA

func (c *CommonTokenStream) LA(i int) int

func (*CommonTokenStream) LB

func (*CommonTokenStream) LT

func (*CommonTokenStream) Mark

func (c *CommonTokenStream) Mark() int

func (*CommonTokenStream) NextTokenOnChannel

func (c *CommonTokenStream) NextTokenOnChannel(i, channel int) int

NextTokenOnChannel returns the index of the next token on channel given a starting index. Returns i if tokens[i] is on channel. Returns -1 if there are no tokens on channel between i and EOF.

func (*CommonTokenStream) Release

func (c *CommonTokenStream) Release(marker int)

func (*CommonTokenStream) Seek

func (c *CommonTokenStream) Seek(index int)

func (*CommonTokenStream) SetTokenSource

func (c *CommonTokenStream) SetTokenSource(tokenSource TokenSource)

SetTokenSource resets the token stream by setting its token source.

func (*CommonTokenStream) Size

func (c *CommonTokenStream) Size() int

func (*CommonTokenStream) Sync

func (c *CommonTokenStream) Sync(i int) bool

Sync makes sure index i in tokens has a token. It returns true if a token is located at index i, and false otherwise.

type Comparator

type Comparator[T any] interface {
	Hash1(o T) int
	Equals2(T, T) bool
}

type ConsoleErrorListener

type ConsoleErrorListener struct {
	*DefaultErrorListener
}

func NewConsoleErrorListener

func NewConsoleErrorListener() *ConsoleErrorListener

func (*ConsoleErrorListener) SyntaxError

func (c *ConsoleErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)

This implementation prints messages to standard error containing the values of line, charPositionInLine, and msg using the following format:

line <line>:<charPositionInLine> <msg>

type DFA

type DFA struct {
	// contains filtered or unexported fields
}

func NewDFA

func NewDFA(atnStartState DecisionState, decision int) *DFA

func (*DFA) String

func (d *DFA) String(literalNames []string, symbolicNames []string) string

func (*DFA) ToLexerString

func (d *DFA) ToLexerString() string

type DFASerializer

type DFASerializer struct {
	// contains filtered or unexported fields
}

DFASerializer is a DFA walker that knows how to dump them to serialized strings.

func NewDFASerializer

func NewDFASerializer(dfa *DFA, literalNames, symbolicNames []string) *DFASerializer

func (*DFASerializer) GetStateString

func (d *DFASerializer) GetStateString(s *DFAState) string

func (*DFASerializer) String

func (d *DFASerializer) String() string

type DFAState

type DFAState struct {
	// contains filtered or unexported fields
}

DFAState represents a set of possible ATN configurations. As Aho, Sethi, Ullman p. 117 says: "The DFA uses its state to keep track of all possible states the ATN can be in after reading each input symbol. That is to say, after reading input a1a2..an, the DFA is in a state that represents the subset T of the states of the ATN that are reachable from the ATN's start state along some path labeled a1a2..an." In conventional NFA-to-DFA conversion, therefore, the subset T would be a bitset representing the set of states the ATN could be in. We need to track the alt predicted by each state as well, however. More importantly, we need to maintain a stack of states, tracking the closure operations as they jump from rule to rule, emulating rule invocations (method calls). I have to add a stack to simulate the proper lookahead sequences for the underlying LL grammar from which the ATN was derived.

I use a set of ATNConfig objects, not simple states. An ATNConfig is both a state (ala normal conversion) and a RuleContext describing the chain of rules (if any) followed to arrive at that state.

A DFAState may have multiple references to a particular state, but with different ATN contexts (with same or different alts), meaning that state was reached via a different set of rule invocations.

func NewDFAState

func NewDFAState(stateNumber int, configs ATNConfigSet) *DFAState

func (*DFAState) Equals

func (d *DFAState) Equals(o Collectable[*DFAState]) bool

Equals returns whether d equals other. Two DFAStates are equal if their ATN configuration sets are the same. This method is used to see if a state already exists.

Because the number of alternatives and number of ATN configurations are finite, there is a finite number of DFA states that can be processed. This is necessary to show that the algorithm terminates.

Cannot test the DFA state numbers here because in ParserATNSimulator.addDFAState we need to know if any other state exists that has this exact set of ATN configurations. The stateNumber is irrelevant.

func (*DFAState) GetAltSet

func (d *DFAState) GetAltSet() []int

GetAltSet gets the set of all alts mentioned by all ATN configurations in d.

func (*DFAState) Hash

func (d *DFAState) Hash() int

func (*DFAState) String

func (d *DFAState) String() string

type DecisionState

type DecisionState interface {
	ATNState
	// contains filtered or unexported methods
}

type DefaultErrorListener

type DefaultErrorListener struct{}

func NewDefaultErrorListener

func NewDefaultErrorListener() *DefaultErrorListener

func (*DefaultErrorListener) ReportAmbiguity

func (d *DefaultErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs ATNConfigSet)

func (*DefaultErrorListener) ReportAttemptingFullContext

func (d *DefaultErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs ATNConfigSet)

func (*DefaultErrorListener) ReportContextSensitivity

func (d *DefaultErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs ATNConfigSet)

func (*DefaultErrorListener) SyntaxError

func (d *DefaultErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)

type DefaultErrorStrategy

type DefaultErrorStrategy struct {
	// contains filtered or unexported fields
}

DefaultErrorStrategy is the default implementation of ANTLRErrorStrategy, used for error reporting and recovery in ANTLR parsers.

func NewDefaultErrorStrategy

func NewDefaultErrorStrategy() *DefaultErrorStrategy

func (*DefaultErrorStrategy) GetExpectedTokens

func (d *DefaultErrorStrategy) GetExpectedTokens(recognizer Parser) *IntervalSet

func (*DefaultErrorStrategy) GetMissingSymbol

func (d *DefaultErrorStrategy) GetMissingSymbol(recognizer Parser) Token

Conjure up a missing token during error recovery.

The recognizer attempts to recover from single missing symbols. But, actions might refer to that missing symbol. For example, x=ID {f($x)}. The action clearly assumes that there has been an identifier matched previously and that $x points at that token. If that token is missing, but the next token in the stream is what we want, we assume that the token is missing and we keep going. Because we have to return some token to replace the missing token, we have to conjure one up. This method gives the user control over the tokens returned for missing tokens. Mostly, you will want to create something special for identifier tokens. For literals such as '{' and ',', the default action in the parser or tree parser works. It simply creates a CommonToken of the appropriate type. The text will be the token. If you change what tokens must be created by the lexer, override this method to create the appropriate tokens.

func (*DefaultErrorStrategy) GetTokenErrorDisplay

func (d *DefaultErrorStrategy) GetTokenErrorDisplay(t Token) string

How should a token be displayed in an error message? The default is to display just the text, but during development you might want to have a lot of information spit out. Override in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects, because you don't have to modify your lexer so that it creates a new token type.

func (*DefaultErrorStrategy) InErrorRecoveryMode

func (d *DefaultErrorStrategy) InErrorRecoveryMode(recognizer Parser) bool

func (*DefaultErrorStrategy) Recover

func (d *DefaultErrorStrategy) Recover(recognizer Parser, e RecognitionException)

The default implementation resynchronizes the parser by consuming tokens until we find one in the resynchronization set, loosely the set of tokens that can follow the current rule.

func (*DefaultErrorStrategy) RecoverInline

func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token

The default implementation attempts to recover from the mismatched input by using single token insertion and deletion as described below. If the recovery attempt fails, this method panics with an InputMisMatchException.

EXTRA TOKEN (single token deletion)

LA(1) is not what we are looking for. If LA(2) has the right token, however, then assume LA(1) is some extra spurious token and delete it. Then consume and return the next token (which was the LA(2) token) as the successful result of the Match operation.

This recovery strategy is implemented by SingleTokenDeletion.

MISSING TOKEN (single token insertion)

If the current token (at LA(1)) is consistent with what could come after the expected LA(1) token, then assume the token is missing and use the parser's TokenFactory to create it on the fly. The "insertion" is performed by returning the created token as the successful result of the Match operation.

This recovery strategy is implemented by SingleTokenInsertion.

EXAMPLE

For example, the input i=(3; is clearly missing the ')'. When the parser returns from the nested call to expr, it will have the call chain:

stat → expr → atom

and it will be trying to Match the ')' at this point in the derivation:

=> ID '=' '(' INT ')' ('+' atom)* ';'
                  ^

The attempt to Match ')' will fail when it sees ';' and call RecoverInline. To recover, it sees that LA(1)==';' is in the set of tokens that can follow the ')' token reference in rule atom. It can assume that you forgot the ')'.

func (*DefaultErrorStrategy) ReportError

func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionException)

The default implementation returns immediately if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition and dispatches the reporting task based on the runtime type of e as follows:

- NoViableAltException: dispatches the call to ReportNoViableAlternative
- InputMisMatchException: dispatches the call to ReportInputMisMatch
- FailedPredicateException: dispatches the call to ReportFailedPredicate
- All other types: calls Parser.NotifyErrorListeners to report the exception

func (*DefaultErrorStrategy) ReportFailedPredicate

func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *FailedPredicateException)

This is called by ReportError when the exception is a FailedPredicateException.

@see ReportError

@param recognizer the parser instance
@param e the recognition exception

func (*DefaultErrorStrategy) ReportInputMisMatch

func (this *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *InputMisMatchException)

This is called by ReportError when the exception is an InputMisMatchException.

@see ReportError

@param recognizer the parser instance
@param e the recognition exception

func (*DefaultErrorStrategy) ReportMatch

func (d *DefaultErrorStrategy) ReportMatch(recognizer Parser)

The default implementation simply calls endErrorCondition.

func (*DefaultErrorStrategy) ReportMissingToken

func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser)

This method is called to report a syntax error which requires the insertion of a missing token into the input stream. At the time this method is called, the missing token has not yet been inserted. When this method returns, recognizer is in error recovery mode.

This method is called when SingleTokenInsertion identifies single-token insertion as a viable recovery strategy for a mismatched input error.

The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling Parser.NotifyErrorListeners.

@param recognizer the parser instance

func (*DefaultErrorStrategy) ReportNoViableAlternative

func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *NoViableAltException)

This is called by ReportError when the exception is a NoViableAltException.

@see ReportError

@param recognizer the parser instance
@param e the recognition exception

func (*DefaultErrorStrategy) ReportUnwantedToken

func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser)

This method is called to report a syntax error which requires the removal of a token from the input stream. At the time this method is called, the erroneous symbol is the current LT(1) symbol and has not yet been removed from the input stream. When this method returns, recognizer is in error recovery mode.

This method is called when SingleTokenDeletion identifies single-token deletion as a viable recovery strategy for a mismatched input error.

The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling Parser.NotifyErrorListeners.

@param recognizer the parser instance

func (*DefaultErrorStrategy) SingleTokenDeletion

func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token

This method implements the single-token deletion inline error recovery strategy. It is called by RecoverInline to attempt to recover from mismatched input. If this method returns nil, the parser and error handler state will not have changed. If this method returns non-nil, recognizer will not be in error recovery mode since the returned token was a successful Match.

If the single-token deletion is successful, this method calls ReportUnwantedToken to report the error, followed by Parser.Consume to actually "delete" the extraneous token. Then, before returning, ReportMatch is called to signal a successful Match.

@param recognizer the parser instance
@return the successfully Matched Token instance if single-token deletion successfully recovers from the mismatched input, otherwise nil

func (*DefaultErrorStrategy) SingleTokenInsertion

func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool

This method implements the single-token insertion inline error recovery strategy. It is called by RecoverInline if the single-token deletion strategy fails to recover from the mismatched input. If this method returns true, recognizer will be in error recovery mode.

This method determines whether or not single-token insertion is viable by checking if the LA(1) input symbol could be successfully Matched if it were instead the LA(2) symbol. If this method returns true, the caller is responsible for creating and inserting a token with the correct type to produce this behavior.

@param recognizer the parser instance
@return true if single-token insertion is a viable recovery strategy for the current mismatched input, otherwise false

func (*DefaultErrorStrategy) Sync

func (d *DefaultErrorStrategy) Sync(recognizer Parser)

The default implementation of ANTLRErrorStrategy.Sync makes sure that the current lookahead symbol is consistent with what we were expecting at this point in the ATN. You can call this anytime, but ANTLR only generates code to check before subrules/loops and each iteration.

Implements Jim Idle's magic Sync mechanism in closures and optional subrules. E.g.,

a : Sync ( stuff Sync )*
Sync : {consume to what can follow Sync}

At the start of a sub rule upon error, Sync performs single token deletion, if possible. If it can't do that, it bails on the current rule and uses the default error recovery, which consumes until the resynchronization set of the current rule.

If the sub rule is optional ((...)?, (...)*, or a block with an empty alternative), then the expected set includes what follows the subrule.

During loop iteration, it consumes until it sees a token that can start a sub rule or what follows the loop. Yes, that is pretty aggressive. We opt to stay in the loop as long as possible.

ORIGINS

Previous versions of ANTLR did a poor job of their recovery within loops. A single mismatched token or missing token would force the parser to bail out of the entire rules surrounding the loop. So, for rule

classfunc : 'class' ID '{' member* '}'

input with an extra token between members would force the parser to consume until it found the next class definition rather than the next member definition of the current class.

This functionality costs a little bit of effort because the parser has to compare the token set at the start of the loop and at each iteration. If for some reason speed is suffering for you, you can turn off this functionality by simply overriding this method as a blank { }.

type DiagnosticErrorListener

type DiagnosticErrorListener struct {
	*DefaultErrorListener
	// contains filtered or unexported fields
}

func NewDiagnosticErrorListener

func NewDiagnosticErrorListener(exactOnly bool) *DiagnosticErrorListener

func (*DiagnosticErrorListener) ReportAmbiguity

func (d *DiagnosticErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs ATNConfigSet)

func (*DiagnosticErrorListener) ReportAttemptingFullContext

func (d *DiagnosticErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs ATNConfigSet)

func (*DiagnosticErrorListener) ReportContextSensitivity

func (d *DiagnosticErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs ATNConfigSet)

type DoubleDict

type DoubleDict struct {
	// contains filtered or unexported fields
}

func NewDoubleDict

func NewDoubleDict() *DoubleDict

func (*DoubleDict) Get

func (d *DoubleDict) Get(a, b int) interface{}

type EmptyPredictionContext

type EmptyPredictionContext struct {
	*BaseSingletonPredictionContext
}

func NewEmptyPredictionContext

func NewEmptyPredictionContext() *EmptyPredictionContext

func (*EmptyPredictionContext) Equals

func (e *EmptyPredictionContext) Equals(other interface{}) bool

func (*EmptyPredictionContext) GetParent

func (*EmptyPredictionContext) Hash

func (e *EmptyPredictionContext) Hash() int

func (*EmptyPredictionContext) String

func (e *EmptyPredictionContext) String() string

type EpsilonTransition

type EpsilonTransition struct {
	*BaseTransition
	// contains filtered or unexported fields
}

func NewEpsilonTransition

func NewEpsilonTransition(target ATNState, outermostPrecedenceReturn int) *EpsilonTransition

func (*EpsilonTransition) Matches

func (t *EpsilonTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*EpsilonTransition) String

func (t *EpsilonTransition) String() string

type ErrorListener

type ErrorListener interface {
	SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)
	ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs ATNConfigSet)
	ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs ATNConfigSet)
	ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs ATNConfigSet)
}

type ErrorNode

type ErrorNode interface {
	TerminalNode
	// contains filtered or unexported methods
}

type ErrorNodeImpl

type ErrorNodeImpl struct {
	*TerminalNodeImpl
}

func NewErrorNodeImpl

func NewErrorNodeImpl(token Token) *ErrorNodeImpl

func (*ErrorNodeImpl) Accept

func (e *ErrorNodeImpl) Accept(v ParseTreeVisitor) interface{}

type ErrorStrategy

type ErrorStrategy interface {
	RecoverInline(Parser) Token
	Recover(Parser, RecognitionException)
	Sync(Parser)
	InErrorRecoveryMode(Parser) bool
	ReportError(Parser, RecognitionException)
	ReportMatch(Parser)
	// contains filtered or unexported methods
}

type FailedPredicateException

type FailedPredicateException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

func NewFailedPredicateException

func NewFailedPredicateException(recognizer Parser, predicate string, message string) *FailedPredicateException

type FileStream

type FileStream struct {
	*InputStream
	// contains filtered or unexported fields
}

func NewFileStream

func NewFileStream(fileName string) (*FileStream, error)

func (*FileStream) GetSourceName

func (f *FileStream) GetSourceName() string

type IATNSimulator

type IATNSimulator interface {
	SharedContextCache() *PredictionContextCache
	ATN() *ATN
	DecisionToDFA() []*DFA
}

type ILexerATNSimulator

type ILexerATNSimulator interface {
	IATNSimulator
	Match(input CharStream, mode int) int
	GetCharPositionInLine() int
	GetLine() int
	GetText(input CharStream) string
	Consume(input CharStream)
	// contains filtered or unexported methods
}

type InputMisMatchException

type InputMisMatchException struct {
	*BaseRecognitionException
}

func NewInputMisMatchException

func NewInputMisMatchException(recognizer Parser) *InputMisMatchException

This signifies any kind of mismatched input exception, such as when the current input does not Match the expected token.

type InputStream

type InputStream struct {
	// contains filtered or unexported fields
}

func NewInputStream

func NewInputStream(data string) *InputStream

func (*InputStream) Consume

func (is *InputStream) Consume()

func (*InputStream) GetSourceName

func (*InputStream) GetSourceName() string

func (*InputStream) GetText

func (is *InputStream) GetText(start int, stop int) string

func (*InputStream) GetTextFromInterval

func (is *InputStream) GetTextFromInterval(i *Interval) string

func (*InputStream) GetTextFromTokens

func (is *InputStream) GetTextFromTokens(start, stop Token) string

func (*InputStream) Index

func (is *InputStream) Index() int

func (*InputStream) LA

func (is *InputStream) LA(offset int) int

func (*InputStream) LT

func (is *InputStream) LT(offset int) int

func (*InputStream) Mark

func (is *InputStream) Mark() int

Mark and Release do nothing, because we have the entire buffer.

func (*InputStream)Release

func (is *InputStream) Release(markerint)

func (*InputStream)Seek

func (is *InputStream) Seek(indexint)

func (*InputStream)Size

func (is *InputStream) Size()int

func (*InputStream)String

func (is *InputStream) String()string
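To make the lookahead contract concrete, here is a minimal, self-contained sketch of a buffer-backed character stream with InputStream-like LA and Consume semantics. The type `miniStream` and its exact offset handling are illustrative assumptions, not the runtime's code.

```go
package main

import "fmt"

// miniStream is an illustrative in-memory character stream.
type miniStream struct {
	data  []rune
	index int
}

// LA returns the rune at the given 1-based lookahead offset, or -1 (EOF)
// past the end of the buffer. Negative offsets look backward, so LA(-1)
// is the most recently consumed rune. Offset 0 is undefined and returns 0.
func (s *miniStream) LA(offset int) int {
	if offset == 0 {
		return 0
	}
	if offset < 0 {
		offset++ // shift so that LA(-1) maps to the previous rune
	}
	pos := s.index + offset - 1
	if pos < 0 || pos >= len(s.data) {
		return -1 // EOF
	}
	return int(s.data[pos])
}

// Consume advances the stream by one rune.
func (s *miniStream) Consume() { s.index++ }

func main() {
	s := &miniStream{data: []rune("ab")}
	fmt.Println(string(rune(s.LA(1)))) // a
	s.Consume()
	fmt.Println(string(rune(s.LA(1)))) // b
	s.Consume()
	fmt.Println(s.LA(1)) // -1 (EOF)
}
```

Since the whole input is already in memory, Mark and Release can be no-ops, exactly as the doc comment above notes.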

type InsertAfterOp

type InsertAfterOp struct {
	BaseRewriteOperation
}

func NewInsertAfterOp

func NewInsertAfterOp(index int, text string, stream TokenStream) *InsertAfterOp

func (*InsertAfterOp) Execute

func (op *InsertAfterOp) Execute(buffer *bytes.Buffer) int

func (*InsertAfterOp) String

func (op *InsertAfterOp) String() string

type InsertBeforeOp

type InsertBeforeOp struct {
	BaseRewriteOperation
}

func NewInsertBeforeOp

func NewInsertBeforeOp(index int, text string, stream TokenStream) *InsertBeforeOp

func (*InsertBeforeOp) Execute

func (op *InsertBeforeOp) Execute(buffer *bytes.Buffer) int

func (*InsertBeforeOp) String

func (op *InsertBeforeOp) String() string

type IntStack

type IntStack []int

func (*IntStack) Pop

func (s *IntStack) Pop() (int, error)

func (*IntStack) Push

func (s *IntStack) Push(e int)
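The Pop/Push contract above can be sketched as a slice-backed stack in a few lines. This is a minimal illustration consistent with the signatures shown, not the runtime's implementation; the error value is an assumption.

```go
package main

import (
	"errors"
	"fmt"
)

// IntStack is a slice-backed stack of ints.
type IntStack []int

var errEmptyStack = errors.New("stack is empty")

// Pop removes and returns the top element, or an error if the stack is empty.
func (s *IntStack) Pop() (int, error) {
	if len(*s) == 0 {
		return 0, errEmptyStack
	}
	v := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return v, nil
}

// Push appends e to the top of the stack.
func (s *IntStack) Push(e int) { *s = append(*s, e) }

func main() {
	var s IntStack
	s.Push(1)
	s.Push(2)
	v, _ := s.Pop()
	fmt.Println(v) // 2
}
```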

type IntStream

type IntStream interface {
	Consume()
	LA(int) int
	Mark() int
	Release(marker int)
	Index() int
	Seek(index int)
	Size() int
	GetSourceName() string
}

type InterpreterRuleContext

type InterpreterRuleContext interface {
	ParserRuleContext
}

type Interval

type Interval struct {
	Start int
	Stop  int
}

func NewInterval

func NewInterval(start, stop int) *Interval

stop is not included!

func (*Interval) Contains

func (i *Interval) Contains(item int) bool

func (*Interval) String

func (i *Interval) String() string
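The "stop is not included" note means the interval is half-open. A minimal sketch of that Contains semantics, mirroring the signatures above (illustrative, not the runtime's code):

```go
package main

import "fmt"

// Interval is a half-open range [Start, Stop): Start is included,
// Stop is not.
type Interval struct {
	Start int
	Stop  int
}

// Contains reports whether item falls inside the half-open range.
func (i *Interval) Contains(item int) bool {
	return item >= i.Start && item < i.Stop
}

func main() {
	iv := &Interval{Start: 3, Stop: 7}
	fmt.Println(iv.Contains(3), iv.Contains(7)) // true false
}
```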

type IntervalSet

type IntervalSet struct {
	// contains filtered or unexported fields
}

func NewIntervalSet

func NewIntervalSet() *IntervalSet

func (*IntervalSet) GetIntervals

func (i *IntervalSet) GetIntervals() []*Interval

func (*IntervalSet) String

func (i *IntervalSet) String() string

func (*IntervalSet) StringVerbose

func (i *IntervalSet) StringVerbose(literalNames []string, symbolicNames []string, elemsAreChar bool) string

type JMap

type JMap[K, V any, C Comparator[K]] struct {
	// contains filtered or unexported fields
}

func NewJMap

func NewJMap[K, V any, C Comparator[K]](comparator Comparator[K]) *JMap[K, V, C]

func (*JMap[K, V, C]) Clear

func (m *JMap[K, V, C]) Clear()

func (*JMap[K, V, C]) Delete

func (m *JMap[K, V, C]) Delete(key K)

func (*JMap[K, V, C]) Get

func (m *JMap[K, V, C]) Get(key K) (V, bool)

func (*JMap[K, V, C]) Len

func (m *JMap[K, V, C]) Len() int

func (*JMap[K, V, C]) Put

func (m *JMap[K, V, C]) Put(key K, val V)

func (*JMap[K, V, C]) Values

func (m *JMap[K, V, C]) Values() []V

type JStore

type JStore[T any, C Comparator[T]] struct {
	// contains filtered or unexported fields
}

JStore implements a container that allows the use of a struct to calculate the key for a collection of values, akin to a map. This is not meant to be a full-blown HashMap but just to serve the needs of the ANTLR Go runtime.

For ease of porting the logic of the runtime from the master target (Java), this collection operates in a similar way to Java, in that it can use any struct that supplies a Hash() and Equals() function as the key. The values are stored in a standard Go map, which internally is a form of hashmap itself; the key for the Go map is the hash supplied by the key object. The collection is able to deal with hash conflicts by using a simple slice of values associated with the hash-code-indexed bucket. That isn't particularly efficient, but it is simple, and it works. As this is specifically for the ANTLR runtime, and we understand the requirements, this is fine - this is not a general-purpose collection.

func NewJStore

func NewJStore[T any, C Comparator[T]](comparator Comparator[T]) *JStore[T, C]

func (*JStore[T, C]) Contains

func (s *JStore[T, C]) Contains(key T) bool

Contains returns true if the given key is present in the store.

func (*JStore[T, C]) Each

func (s *JStore[T, C]) Each(f func(T) bool)

func (*JStore[T, C]) Get

func (s *JStore[T, C]) Get(key T) (T, bool)

Get will return the value associated with the key - the type of the key is the same type as the value, which would not generally be useful, but this is a specific thing for ANTLR where the key is generated using the object we are going to store.

func (*JStore[T, C]) Len

func (s *JStore[T, C]) Len() int

func (*JStore[T, C]) Put

func (s *JStore[T, C]) Put(value T) (v T, exists bool)

Put will store the given value in the collection. Note that the key for storage is generated from the value itself - this is specifically because that is what ANTLR needs - this would not be useful as any kind of general collection.

If the key has a hash conflict, then the value will be added to the slice of values associated with the hash, unless the value is already in the slice, in which case the existing value is returned. Value equivalence is tested by calling the Equals() method on the key.

If the given value is already present in the store, then the existing value is returned as v and exists is set to true.

If the given value is not present in the store, then the value is added to the store, returned as v, and exists is set to false.
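The bucketed strategy that Put describes - hash the value itself, resolve collisions by scanning a slice with Equals - can be sketched as below. The `Hasher` interface, `store` type, and `modHash` are illustrative assumptions, not the runtime's Comparator machinery.

```go
package main

import "fmt"

// Hasher supplies the Hash/Equals pair the store delegates to.
// It stands in for the runtime's Comparator and is an assumption.
type Hasher[T any] interface {
	Hash(v T) int
	Equals(a, b T) bool
}

type store[T any] struct {
	buckets map[int][]T
	h       Hasher[T]
}

func newStore[T any](h Hasher[T]) *store[T] {
	return &store[T]{buckets: map[int][]T{}, h: h}
}

// Put stores value keyed by its own hash. It returns the stored value
// and whether an equal value was already present.
func (s *store[T]) Put(value T) (T, bool) {
	k := s.h.Hash(value)
	for _, v := range s.buckets[k] {
		if s.h.Equals(v, value) {
			return v, true // hash conflict resolved by Equals
		}
	}
	s.buckets[k] = append(s.buckets[k], value)
	return value, false
}

// modHash deliberately produces collisions to exercise the bucket scan.
type modHash struct{}

func (modHash) Hash(v int) int       { return v % 4 }
func (modHash) Equals(a, b int) bool { return a == b }

func main() {
	s := newStore[int](modHash{})
	_, exists := s.Put(5)
	fmt.Println(exists) // false: newly added
	_, exists = s.Put(5)
	fmt.Println(exists) // true: already present
	_, exists = s.Put(9) // collides with 5 (9%4 == 5%4) but differs
	fmt.Println(exists) // false
}
```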

func (*JStore[T, C]) SortedSlice

func (s *JStore[T, C]) SortedSlice(less func(i, j T) bool) []T

func (*JStore[T, C]) Values

func (s *JStore[T, C]) Values() []T

type LL1Analyzer

type LL1Analyzer struct {
	// contains filtered or unexported fields
}

func NewLL1Analyzer

func NewLL1Analyzer(atn *ATN) *LL1Analyzer

func (*LL1Analyzer) Look

func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet

Look computes the set of tokens that can follow s in the ATN in the specified ctx.

If ctx is nil and the end of the rule containing s is reached, the EPSILON token is added to the result set. If ctx is not nil and the end of the outermost rule is reached, the EOF token is added to the result set.

The s parameter is the ATN state. The stopState parameter is the ATN state to stop at; this can be a BlockEndState to detect epsilon paths through a closure. The ctx parameter is the complete parser context, or nil if the context should be ignored.

Returns the set of tokens that can follow s in the ATN in the specified ctx.

type Lexer

type Lexer interface {
	TokenSource
	Recognizer
	Emit() Token
	SetChannel(int)
	PushMode(int)
	PopMode() int
	SetType(int)
	SetMode(int)
}

type LexerATNConfig

type LexerATNConfig struct {
	*BaseATNConfig
	// contains filtered or unexported fields
}

func NewLexerATNConfig1

func NewLexerATNConfig1(state ATNState, alt int, context PredictionContext) *LexerATNConfig

func NewLexerATNConfig2

func NewLexerATNConfig2(c *LexerATNConfig, state ATNState, context PredictionContext) *LexerATNConfig

func NewLexerATNConfig3

func NewLexerATNConfig3(c *LexerATNConfig, state ATNState, lexerActionExecutor *LexerActionExecutor) *LexerATNConfig

func NewLexerATNConfig4

func NewLexerATNConfig4(c *LexerATNConfig, state ATNState) *LexerATNConfig

func NewLexerATNConfig5

func NewLexerATNConfig5(state ATNState, alt int, context PredictionContext, lexerActionExecutor *LexerActionExecutor) *LexerATNConfig

func NewLexerATNConfig6

func NewLexerATNConfig6(state ATNState, alt int, context PredictionContext) *LexerATNConfig

func (*LexerATNConfig) Equals

func (l *LexerATNConfig) Equals(other Collectable[ATNConfig]) bool

Equals is the default comparison function for LexerATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.

func (*LexerATNConfig) Hash

func (l *LexerATNConfig) Hash() int

Hash is the default hash function for LexerATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.

type LexerATNSimulator

type LexerATNSimulator struct {
	*BaseATNSimulator
	Line               int
	CharPositionInLine int
	MatchCalls         int
	// contains filtered or unexported fields
}

func NewLexerATNSimulator

func NewLexerATNSimulator(recog Lexer, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *LexerATNSimulator

func (*LexerATNSimulator) Consume

func (l *LexerATNSimulator) Consume(input CharStream)

func (*LexerATNSimulator) GetCharPositionInLine

func (l *LexerATNSimulator) GetCharPositionInLine() int

func (*LexerATNSimulator) GetLine

func (l *LexerATNSimulator) GetLine() int

func (*LexerATNSimulator) GetText

func (l *LexerATNSimulator) GetText(input CharStream) string

GetText returns the text matched so far for the current token.

func (*LexerATNSimulator) GetTokenName

func (l *LexerATNSimulator) GetTokenName(tt int) string

func (*LexerATNSimulator) Match

func (l *LexerATNSimulator) Match(input CharStream, mode int) int

func (*LexerATNSimulator) MatchATN

func (l *LexerATNSimulator) MatchATN(input CharStream) int

type LexerAction

type LexerAction interface {
	Hash() int
	Equals(other LexerAction) bool
	// contains filtered or unexported methods
}

type LexerActionExecutor

type LexerActionExecutor struct {
	// contains filtered or unexported fields
}

func LexerActionExecutorappend

func LexerActionExecutorappend(lexerActionExecutor *LexerActionExecutor, lexerAction LexerAction) *LexerActionExecutor

LexerActionExecutorappend creates a LexerActionExecutor which executes the actions for the input lexerActionExecutor followed by a specified lexerAction.

The lexerActionExecutor parameter is the executor for actions already traversed by the lexer while matching a token within a particular LexerATNConfig. If this is nil, the method behaves as though it were an empty executor. The lexerAction parameter is the lexer action to execute after the actions specified in lexerActionExecutor.

Returns a LexerActionExecutor for executing the combined actions of lexerActionExecutor and lexerAction.

func NewLexerActionExecutor

func NewLexerActionExecutor(lexerActions []LexerAction) *LexerActionExecutor

func (*LexerActionExecutor) Equals

func (l *LexerActionExecutor) Equals(other interface{}) bool

func (*LexerActionExecutor) Hash

func (l *LexerActionExecutor) Hash() int

type LexerChannelAction

type LexerChannelAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

Implements the channel lexer action by calling the lexer's SetChannel with the assigned channel. NewLexerChannelAction constructs a new channel action with the specified channel value, which is the value passed to SetChannel.

func NewLexerChannelAction

func NewLexerChannelAction(channel int) *LexerChannelAction

func (*LexerChannelAction) Equals

func (l *LexerChannelAction) Equals(other LexerAction) bool

func (*LexerChannelAction) Hash

func (l *LexerChannelAction) Hash() int

func (*LexerChannelAction) String

func (l *LexerChannelAction) String() string

type LexerCustomAction

type LexerCustomAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

func NewLexerCustomAction

func NewLexerCustomAction(ruleIndex, actionIndex int) *LexerCustomAction

func (*LexerCustomAction) Equals

func (l *LexerCustomAction) Equals(other LexerAction) bool

func (*LexerCustomAction) Hash

func (l *LexerCustomAction) Hash() int

type LexerDFASerializer

type LexerDFASerializer struct {
	*DFASerializer
}

func NewLexerDFASerializer

func NewLexerDFASerializer(dfa *DFA) *LexerDFASerializer

func (*LexerDFASerializer) String

func (l *LexerDFASerializer) String() string

type LexerIndexedCustomAction

type LexerIndexedCustomAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

NewLexerIndexedCustomAction constructs a new indexed custom action by associating a character offset with a LexerAction.

Note: This class is only required for lexer actions for which isPositionDependent returns true.

The offset parameter is the offset into the input CharStream, relative to the token start index, at which the specified lexer action should be executed. The lexerAction parameter is the lexer action to execute at a particular offset in the input CharStream.

func NewLexerIndexedCustomAction

func NewLexerIndexedCustomAction(offset int, lexerAction LexerAction) *LexerIndexedCustomAction

func (*LexerIndexedCustomAction) Hash

type LexerModeAction

type LexerModeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

Implements the mode lexer action by calling the lexer's mode method with the assigned mode.

func NewLexerModeAction

func NewLexerModeAction(mode int) *LexerModeAction

func (*LexerModeAction) Equals

func (l *LexerModeAction) Equals(other LexerAction) bool

func (*LexerModeAction) Hash

func (l *LexerModeAction) Hash() int

func (*LexerModeAction) String

func (l *LexerModeAction) String() string

type LexerMoreAction

type LexerMoreAction struct {
	*BaseLexerAction
}

func NewLexerMoreAction

func NewLexerMoreAction() *LexerMoreAction

func (*LexerMoreAction) String

func (l *LexerMoreAction) String() string

type LexerNoViableAltException

type LexerNoViableAltException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

func NewLexerNoViableAltException

func NewLexerNoViableAltException(lexer Lexer, input CharStream, startIndex int, deadEndConfigs ATNConfigSet) *LexerNoViableAltException

func (*LexerNoViableAltException) String

type LexerPopModeAction

type LexerPopModeAction struct {
	*BaseLexerAction
}

Implements the popMode lexer action by calling the lexer's PopMode.

The popMode command does not have any parameters, so this action is implemented as a singleton instance.

func NewLexerPopModeAction

func NewLexerPopModeAction() *LexerPopModeAction

func (*LexerPopModeAction) String

func (l *LexerPopModeAction) String() string

type LexerPushModeAction

type LexerPushModeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

Implements the pushMode lexer action by calling the lexer's PushMode with the assigned mode.

func NewLexerPushModeAction

func NewLexerPushModeAction(mode int) *LexerPushModeAction

func (*LexerPushModeAction) Equals

func (l *LexerPushModeAction) Equals(other LexerAction) bool

func (*LexerPushModeAction) Hash

func (l *LexerPushModeAction) Hash() int

func (*LexerPushModeAction) String

func (l *LexerPushModeAction) String() string

type LexerSkipAction

type LexerSkipAction struct {
	*BaseLexerAction
}

Implements the Skip lexer action by calling the lexer's Skip.

The Skip command does not have any parameters, so this action is implemented as a singleton instance.

func NewLexerSkipAction

func NewLexerSkipAction() *LexerSkipAction

func (*LexerSkipAction) String

func (l *LexerSkipAction) String() string

type LexerTypeAction

type LexerTypeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

Implements the type lexer action by calling the lexer's SetType with the assigned type.

func NewLexerTypeAction

func NewLexerTypeAction(thetype int) *LexerTypeAction

func (*LexerTypeAction) Equals

func (l *LexerTypeAction) Equals(other LexerAction) bool

func (*LexerTypeAction) Hash

func (l *LexerTypeAction) Hash() int

func (*LexerTypeAction) String

func (l *LexerTypeAction) String() string

type LoopEndState

type LoopEndState struct {
	*BaseATNState
	// contains filtered or unexported fields
}

LoopEndState marks the end of a * or + loop.

func NewLoopEndState

func NewLoopEndState() *LoopEndState

type NoViableAltException

type NoViableAltException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

func NewNoViableAltException

func NewNoViableAltException(recognizer Parser, input TokenStream, startToken Token, offendingToken Token, deadEndConfigs ATNConfigSet, ctx ParserRuleContext) *NoViableAltException

NoViableAltException indicates that the parser could not decide which of two or more paths to take based upon the remaining input. It tracks the starting token of the offending input and also knows where the parser was in the various paths when the error occurred. Reported by ReportNoViableAlternative().

type NotSetTransition

type NotSetTransition struct {
	*SetTransition
}

func NewNotSetTransition

func NewNotSetTransition(target ATNState, set *IntervalSet) *NotSetTransition

func (*NotSetTransition) Matches

func (t *NotSetTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*NotSetTransition) String

func (t *NotSetTransition) String() string

type OR

type OR struct {
	// contains filtered or unexported fields
}

func NewOR

func NewOR(a, b SemanticContext) *OR

func (*OR) Equals

func (o *OR) Equals(other Collectable[SemanticContext]) bool

func (*OR) Hash

func (a *OR) Hash() int

func (*OR) String

func (o *OR) String() string

type ObjEqComparator

type ObjEqComparator[T Collectable[T]] struct{}

ObjEqComparator is the equivalent of the Java ObjectEqualityComparator, which is the default instance of the equality comparator. We do not have inheritance in Go, only interfaces, so we use generics to enforce some type safety and avoid having to implement this for every type that we want to perform comparison on.

This comparator works by using the standard Hash() and Equals() methods of the type T that is being compared, which allows us to use it in any collection instance that does not require a special hash or equals implementation.

func (*ObjEqComparator[T]) Equals2

func (c *ObjEqComparator[T]) Equals2(o1, o2 T) bool

Equals2 delegates to the Equals() method of type T.

func (*ObjEqComparator[T]) Hash1

func (c *ObjEqComparator[T]) Hash1(o T) int

Hash1 delegates to the Hash() method of type T.
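The delegation pattern described above can be reproduced in a few lines of generic Go. The `Collectable` constraint and `key` type below are illustrative stand-ins for the runtime's interfaces, sketched to show how Hash1 and Equals2 simply forward to the stored type's own methods.

```go
package main

import "fmt"

// Collectable is a stand-in for the runtime's constraint: any type
// that supplies its own Hash and Equals.
type Collectable[T any] interface {
	Hash() int
	Equals(other T) bool
}

// ObjEqComparator delegates comparison to the methods of T itself.
type ObjEqComparator[T Collectable[T]] struct{}

func (c *ObjEqComparator[T]) Hash1(o T) int         { return o.Hash() }
func (c *ObjEqComparator[T]) Equals2(o1, o2 T) bool { return o1.Equals(o2) }

// key is a toy type satisfying the constraint.
type key struct{ id int }

func (k key) Hash() int         { return k.id * 31 }
func (k key) Equals(o key) bool { return k.id == o.id }

func main() {
	var c ObjEqComparator[key]
	fmt.Println(c.Hash1(key{2}), c.Equals2(key{2}, key{2})) // 62 true
}
```

Because the comparator holds no state, a single instance can serve every collection of a given element type.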

type OrderedATNConfigSet

type OrderedATNConfigSet struct {
	*BaseATNConfigSet
}

func NewOrderedATNConfigSet

func NewOrderedATNConfigSet() *OrderedATNConfigSet

type ParseCancellationException

type ParseCancellationException struct {
}

func NewParseCancellationException

func NewParseCancellationException() *ParseCancellationException

type ParseTree

type ParseTree interface {
	SyntaxTree
	Accept(Visitor ParseTreeVisitor) interface{}
	GetText() string
	ToStringTree([]string, Recognizer) string
}

func TreesDescendants

func TreesDescendants(t ParseTree) []ParseTree

func TreesFindAllTokenNodes

func TreesFindAllTokenNodes(t ParseTree, ttype int) []ParseTree

func TreesfindAllNodes

func TreesfindAllNodes(t ParseTree, index int, findTokens bool) []ParseTree

func TreesfindAllRuleNodes

func TreesfindAllRuleNodes(t ParseTree, ruleIndex int) []ParseTree

type ParseTreeListener

type ParseTreeListener interface {
	VisitTerminal(node TerminalNode)
	VisitErrorNode(node ErrorNode)
	EnterEveryRule(ctx ParserRuleContext)
	ExitEveryRule(ctx ParserRuleContext)
}

type ParseTreeVisitor

type ParseTreeVisitor interface {
	Visit(tree ParseTree) interface{}
	VisitChildren(node RuleNode) interface{}
	VisitTerminal(node TerminalNode) interface{}
	VisitErrorNode(node ErrorNode) interface{}
}

type ParseTreeWalker

type ParseTreeWalker struct {
}

func NewParseTreeWalker

func NewParseTreeWalker() *ParseTreeWalker

func (*ParseTreeWalker) EnterRule

func (p *ParseTreeWalker) EnterRule(listener ParseTreeListener, r RuleNode)

EnterRule enters a grammar rule by first triggering the generic event ParseTreeListener.EnterEveryRule, then the event specific to the given parse tree node.

func (*ParseTreeWalker) ExitRule

func (p *ParseTreeWalker) ExitRule(listener ParseTreeListener, r RuleNode)

ExitRule exits a grammar rule by first triggering the event specific to the given parse tree node, then the generic event ParseTreeListener.ExitEveryRule.

func (*ParseTreeWalker) Walk

func (p *ParseTreeWalker) Walk(listener ParseTreeListener, t Tree)

Walk performs a walk on the given parse tree starting at the root and going down recursively with depth-first search. On each node, EnterRule is called before recursively walking down into child nodes, then ExitRule is called after the recursive call to wind up.
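The enter-before-children, exit-after-children ordering described above can be sketched with toy node and listener types. These shapes (`node`, `listener`, `tracer`) are illustrative stand-ins for the runtime's interfaces, not its actual API.

```go
package main

import "fmt"

// node is a toy parse-tree node.
type node struct {
	name     string
	children []*node
}

// listener mirrors the enter/exit callback pair of a tree listener.
type listener interface {
	Enter(n *node)
	Exit(n *node)
}

// walk visits n depth-first: Enter fires before the children are
// walked, Exit fires after the recursive calls wind up.
func walk(l listener, n *node) {
	l.Enter(n)
	for _, c := range n.children {
		walk(l, c)
	}
	l.Exit(n)
}

// tracer records the event order so the walk can be inspected.
type tracer struct{ events []string }

func (t *tracer) Enter(n *node) { t.events = append(t.events, "enter "+n.name) }
func (t *tracer) Exit(n *node)  { t.events = append(t.events, "exit "+n.name) }

func main() {
	root := &node{name: "expr", children: []*node{{name: "term"}}}
	t := &tracer{}
	walk(t, root)
	fmt.Println(t.events) // [enter expr enter term exit term exit expr]
}
```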

type Parser

type Parser interface {
	Recognizer
	GetInterpreter() *ParserATNSimulator
	GetTokenStream() TokenStream
	GetTokenFactory() TokenFactory
	GetParserRuleContext() ParserRuleContext
	SetParserRuleContext(ParserRuleContext)
	Consume() Token
	GetParseListeners() []ParseTreeListener
	GetErrorHandler() ErrorStrategy
	SetErrorHandler(ErrorStrategy)
	GetInputStream() IntStream
	GetCurrentToken() Token
	GetExpectedTokens() *IntervalSet
	NotifyErrorListeners(string, Token, RecognitionException)
	IsExpectedToken(int) bool
	GetPrecedence() int
	GetRuleInvocationStack(ParserRuleContext) []string
}

type ParserATNSimulator

type ParserATNSimulator struct {
	*BaseATNSimulator
	// contains filtered or unexported fields
}

func NewParserATNSimulator

func NewParserATNSimulator(parser Parser, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *ParserATNSimulator

func (*ParserATNSimulator) AdaptivePredict

func (p *ParserATNSimulator) AdaptivePredict(input TokenStream, decision int, outerContext ParserRuleContext) int

func (*ParserATNSimulator) GetAltThatFinishedDecisionEntryRule

func (p *ParserATNSimulator) GetAltThatFinishedDecisionEntryRule(configs ATNConfigSet) int

func (*ParserATNSimulator) GetPredictionMode

func (p *ParserATNSimulator) GetPredictionMode() int

func (*ParserATNSimulator) GetTokenName

func (p *ParserATNSimulator) GetTokenName(t int) string

func (*ParserATNSimulator) ReportAmbiguity

func (p *ParserATNSimulator) ReportAmbiguity(dfa *DFA, D *DFAState, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs ATNConfigSet)

If context-sensitive parsing, we know it's an ambiguity, not a conflict.

func (*ParserATNSimulator) ReportAttemptingFullContext

func (p *ParserATNSimulator) ReportAttemptingFullContext(dfa *DFA, conflictingAlts *BitSet, configs ATNConfigSet, startIndex, stopIndex int)

func (*ParserATNSimulator) ReportContextSensitivity

func (p *ParserATNSimulator) ReportContextSensitivity(dfa *DFA, prediction int, configs ATNConfigSet, startIndex, stopIndex int)

func (*ParserATNSimulator) SetPredictionMode

func (p *ParserATNSimulator) SetPredictionMode(v int)

type ParserRuleContext

type ParserRuleContext interface {
	RuleContext
	SetException(RecognitionException)
	AddTokenNode(token Token) *TerminalNodeImpl
	AddErrorNode(badToken Token) *ErrorNodeImpl
	EnterRule(listener ParseTreeListener)
	ExitRule(listener ParseTreeListener)
	SetStart(Token)
	GetStart() Token
	SetStop(Token)
	GetStop() Token
	AddChild(child RuleContext) RuleContext
	RemoveLastChild()
}

type PlusBlockStartState

type PlusBlockStartState struct {
	*BaseBlockStartState
	// contains filtered or unexported fields
}

PlusBlockStartState is the start of a (A|B|...)+ loop. Technically it is a decision state; we don't use it for code generation. Somebody might need it, so it is included for completeness. In reality, PlusLoopbackState is the real decision-making node for A+.

func NewPlusBlockStartState

func NewPlusBlockStartState() *PlusBlockStartState

type PlusLoopbackState

type PlusLoopbackState struct {
	*BaseDecisionState
}

PlusLoopbackState is a decision state for A+ and (A|B)+. It has two transitions: one to loop back to the start of the block, and one to exit.

func NewPlusLoopbackState

func NewPlusLoopbackState() *PlusLoopbackState

type PrecedencePredicate

type PrecedencePredicate struct {
	// contains filtered or unexported fields
}

func NewPrecedencePredicate

func NewPrecedencePredicate(precedence int) *PrecedencePredicate

func (*PrecedencePredicate) Equals

func (*PrecedencePredicate) Hash

func (p *PrecedencePredicate) Hash() int

func (*PrecedencePredicate) String

func (p *PrecedencePredicate) String() string

type PrecedencePredicateTransition

type PrecedencePredicateTransition struct {
	*BaseAbstractPredicateTransition
	// contains filtered or unexported fields
}

func NewPrecedencePredicateTransition

func NewPrecedencePredicateTransition(target ATNState, precedence int) *PrecedencePredicateTransition

func (*PrecedencePredicateTransition) Matches

func (t *PrecedencePredicateTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*PrecedencePredicateTransition) String

type PredPrediction

type PredPrediction struct {
	// contains filtered or unexported fields
}

PredPrediction maps a predicate to a predicted alternative.

func NewPredPrediction

func NewPredPrediction(pred SemanticContext, alt int) *PredPrediction

func (*PredPrediction) String

func (p *PredPrediction) String() string

type Predicate

type Predicate struct {
	// contains filtered or unexported fields
}

func NewPredicate

func NewPredicate(ruleIndex, predIndex int, isCtxDependent bool) *Predicate

func (*Predicate) Equals

func (p *Predicate) Equals(other Collectable[SemanticContext]) bool

func (*Predicate) Hash

func (p *Predicate) Hash() int

func (*Predicate) String

func (p *Predicate) String() string

type PredicateTransition

type PredicateTransition struct {
	*BaseAbstractPredicateTransition
	// contains filtered or unexported fields
}

func NewPredicateTransition

func NewPredicateTransition(target ATNState, ruleIndex, predIndex int, isCtxDependent bool) *PredicateTransition

func (*PredicateTransition) Matches

func (t *PredicateTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*PredicateTransition) String

func (t *PredicateTransition) String() string

type PredictionContext

type PredictionContext interface {
	Hash() int
	Equals(interface{}) bool
	GetParent(int) PredictionContext
	String() string
	// contains filtered or unexported methods
}

func SingletonBasePredictionContextCreate

func SingletonBasePredictionContextCreate(parent PredictionContext, returnState int) PredictionContext

type PredictionContextCache

type PredictionContextCache struct {
	// contains filtered or unexported fields
}

func NewPredictionContextCache

func NewPredictionContextCache() *PredictionContextCache

func (*PredictionContextCache) Get

type ProxyErrorListener

type ProxyErrorListener struct {
	*DefaultErrorListener
	// contains filtered or unexported fields
}

func NewProxyErrorListener

func NewProxyErrorListener(delegates []ErrorListener) *ProxyErrorListener

func (*ProxyErrorListener) ReportAmbiguity

func (p *ProxyErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs ATNConfigSet)

func (*ProxyErrorListener) ReportAttemptingFullContext

func (p *ProxyErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs ATNConfigSet)

func (*ProxyErrorListener) ReportContextSensitivity

func (p *ProxyErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs ATNConfigSet)

func (*ProxyErrorListener) SyntaxError

func (p *ProxyErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)

type RangeTransition

type RangeTransition struct {
	*BaseTransition
	// contains filtered or unexported fields
}

func NewRangeTransition

func NewRangeTransition(target ATNState, start, stop int) *RangeTransition

func (*RangeTransition) Matches

func (t *RangeTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*RangeTransition) String

func (t *RangeTransition) String() string

type RecognitionException

type RecognitionException interface {
	GetOffendingToken() Token
	GetMessage() string
	GetInputStream() IntStream
}

type Recognizer

type Recognizer interface {
	GetLiteralNames() []string
	GetSymbolicNames() []string
	GetRuleNames() []string
	Sempred(RuleContext, int, int) bool
	Precpred(RuleContext, int) bool
	GetState() int
	SetState(int)
	Action(RuleContext, int, int)
	AddErrorListener(ErrorListener)
	RemoveErrorListeners()
	GetATN() *ATN
	GetErrorListenerDispatch() ErrorListener
}

type ReplaceOp

type ReplaceOp struct {
	BaseRewriteOperation
	LastIndex int
}

I'm going to try replacing the range from x..y with (y-x)+1 ReplaceOp instructions.

func NewReplaceOp

func NewReplaceOp(from, to int, text string, stream TokenStream) *ReplaceOp

func (*ReplaceOp) Execute

func (op *ReplaceOp) Execute(buffer *bytes.Buffer) int

func (*ReplaceOp) String

func (op *ReplaceOp) String() string

type RewriteOperation

type RewriteOperation interface {
	// Execute the rewrite operation by possibly adding to the buffer.
	// Return the index of the next token to operate on.
	Execute(buffer *bytes.Buffer) int
	String() string
	GetInstructionIndex() int
	GetIndex() int
	GetText() string
	GetOpName() string
	GetTokens() TokenStream
	SetInstructionIndex(val int)
	SetIndex(int)
	SetText(string)
	SetOpName(string)
	SetTokens(TokenStream)
}

type RuleContext

type RuleContext interface {
	RuleNode
	GetInvokingState() int
	SetInvokingState(int)
	GetRuleIndex() int
	IsEmpty() bool
	GetAltNumber() int
	SetAltNumber(altNumber int)
	String([]string, RuleContext) string
}

type RuleNode

type RuleNode interface {
	ParseTree
	GetRuleContext() RuleContext
	GetBaseRuleContext() *BaseRuleContext
}

type RuleStartState

type RuleStartState struct {
	*BaseATNState
	// contains filtered or unexported fields
}

func NewRuleStartState

func NewRuleStartState() *RuleStartState

type RuleStopState

type RuleStopState struct {
	*BaseATNState
}

RuleStopState is the last node in the ATN for a rule, unless that rule is the start symbol. In that case, there is one transition to EOF. Later, we might encode references to all calls to this rule to compute FOLLOW sets for error handling.

func NewRuleStopState

func NewRuleStopState() *RuleStopState

type RuleTransition

type RuleTransition struct {
	*BaseTransition
	// contains filtered or unexported fields
}

func NewRuleTransition

func NewRuleTransition(ruleStart ATNState, ruleIndex, precedence int, followState ATNState) *RuleTransition

func (*RuleTransition) Matches

func (t *RuleTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

type SemCComparator

type SemCComparator[T Collectable[T]] struct{}

type SemanticContext

type SemanticContext interface {
	Equals(other Collectable[SemanticContext]) bool
	Hash() int
	String() string
	// contains filtered or unexported methods
}

func SemanticContextandContext

func SemanticContextandContext(a, b SemanticContext) SemanticContext

func SemanticContextorContext

func SemanticContextorContext(a, b SemanticContext) SemanticContext

type Set

type Set interface {
	Add(value interface{}) (added interface{})
	Len() int
	Get(value interface{}) (found interface{})
	Contains(value interface{}) bool
	Values() []interface{}
	Each(f func(interface{}) bool)
}

type SetTransition

type SetTransition struct {
	*BaseTransition
}

func NewSetTransition

func NewSetTransition(target ATNState, set *IntervalSet) *SetTransition

func (*SetTransition) Matches

func (t *SetTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*SetTransition) String

func (t *SetTransition) String() string

type SimState

type SimState struct {
	// contains filtered or unexported fields
}

func NewSimState

func NewSimState() *SimState

type SingletonPredictionContext

type SingletonPredictionContext interface {
	PredictionContext
}

type StarBlockStartState

type StarBlockStartState struct {
	*BaseBlockStartState
}

StarBlockStartState is the block that begins a closure loop.

func NewStarBlockStartState

func NewStarBlockStartState() *StarBlockStartState

type StarLoopEntryState

type StarLoopEntryState struct {
	*BaseDecisionState
	// contains filtered or unexported fields
}

func NewStarLoopEntryState

func NewStarLoopEntryState() *StarLoopEntryState

type StarLoopbackState

type StarLoopbackState struct {
	*BaseATNState
}

func NewStarLoopbackState

func NewStarLoopbackState() *StarLoopbackState

type SyntaxTree

type SyntaxTree interface {
	Tree
	GetSourceInterval() *Interval
}

type TerminalNode

type TerminalNode interface {
	ParseTree
	GetSymbol() Token
}

type TerminalNodeImpl

type TerminalNodeImpl struct {
	// contains filtered or unexported fields
}

func NewTerminalNodeImpl

func NewTerminalNodeImpl(symbol Token) *TerminalNodeImpl

func (*TerminalNodeImpl) Accept

func (t *TerminalNodeImpl) Accept(v ParseTreeVisitor) interface{}

func (*TerminalNodeImpl) GetChild

func (t *TerminalNodeImpl) GetChild(i int) Tree

func (*TerminalNodeImpl) GetChildCount

func (t *TerminalNodeImpl) GetChildCount() int

func (*TerminalNodeImpl) GetChildren

func (t *TerminalNodeImpl) GetChildren() []Tree

func (*TerminalNodeImpl) GetParent

func (t *TerminalNodeImpl) GetParent() Tree

func (*TerminalNodeImpl) GetPayload

func (t *TerminalNodeImpl) GetPayload() interface{}

func (*TerminalNodeImpl) GetSourceInterval

func (t *TerminalNodeImpl) GetSourceInterval() *Interval

func (*TerminalNodeImpl) GetSymbol

func (t *TerminalNodeImpl) GetSymbol() Token

func (*TerminalNodeImpl) GetText

func (t *TerminalNodeImpl) GetText() string

func (*TerminalNodeImpl) SetChildren

func (t *TerminalNodeImpl) SetChildren(tree []Tree)

func (*TerminalNodeImpl) SetParent

func (t *TerminalNodeImpl) SetParent(tree Tree)

func (*TerminalNodeImpl) String

func (t *TerminalNodeImpl) String() string

func (*TerminalNodeImpl) ToStringTree

func (t *TerminalNodeImpl) ToStringTree(s []string, r Recognizer) string

type Token

type Token interface {
    GetSource() *TokenSourceCharStreamPair
    GetTokenType() int
    GetChannel() int
    GetStart() int
    GetStop() int
    GetLine() int
    GetColumn() int
    GetText() string
    SetText(s string)
    GetTokenIndex() int
    SetTokenIndex(v int)
    GetTokenSource() TokenSource
    GetInputStream() CharStream
}

type TokenFactory

type TokenFactory interface {
    Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token
}

TokenFactory creates CommonToken objects.

type TokenSource

type TokenSource interface {
    NextToken() Token
    Skip()
    More()
    GetLine() int
    GetCharPositionInLine() int
    GetInputStream() CharStream
    GetSourceName() string
    GetTokenFactory() TokenFactory
    // contains filtered or unexported methods
}

type TokenSourceCharStreamPair

type TokenSourceCharStreamPair struct {
    // contains filtered or unexported fields
}

type TokenStream

type TokenStream interface {
    IntStream
    LT(k int) Token
    Get(index int) Token
    GetTokenSource() TokenSource
    SetTokenSource(TokenSource)
    GetAllText() string
    GetTextFromInterval(*Interval) string
    GetTextFromRuleContext(RuleContext) string
    GetTextFromTokens(Token, Token) string
}

type TokenStreamRewriter

type TokenStreamRewriter struct {
    // contains filtered or unexported fields
}

func NewTokenStreamRewriter

func NewTokenStreamRewriter(tokens TokenStream) *TokenStreamRewriter

func (*TokenStreamRewriter) AddToProgram

func (tsr *TokenStreamRewriter) AddToProgram(name string, op RewriteOperation)

func (*TokenStreamRewriter) Delete

func (tsr *TokenStreamRewriter) Delete(program_name string, from, to int)

func (*TokenStreamRewriter) DeleteDefault

func (tsr *TokenStreamRewriter) DeleteDefault(from, to int)

func (*TokenStreamRewriter) DeleteDefaultPos

func (tsr *TokenStreamRewriter) DeleteDefaultPos(index int)

func (*TokenStreamRewriter) DeleteProgram

func (tsr *TokenStreamRewriter) DeleteProgram(program_name string)

Reset the program so that no instructions exist.

func (*TokenStreamRewriter) DeleteProgramDefault

func (tsr *TokenStreamRewriter) DeleteProgramDefault()

func (*TokenStreamRewriter) DeleteToken

func (tsr *TokenStreamRewriter) DeleteToken(program_name string, from, to Token)

func (*TokenStreamRewriter) DeleteTokenDefault

func (tsr *TokenStreamRewriter) DeleteTokenDefault(from, to Token)

func (*TokenStreamRewriter) GetLastRewriteTokenIndex

func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndex(program_name string) int

func (*TokenStreamRewriter) GetLastRewriteTokenIndexDefault

func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndexDefault() int

func (*TokenStreamRewriter) GetProgram

func (tsr *TokenStreamRewriter) GetProgram(name string) []RewriteOperation

func (*TokenStreamRewriter) GetText

func (tsr *TokenStreamRewriter) GetText(program_name string, interval *Interval) string

Return the text from the original tokens, altered per the instructions given to this rewriter.

func (*TokenStreamRewriter) GetTextDefault

func (tsr *TokenStreamRewriter) GetTextDefault() string

Return the text from the original tokens, altered per the instructions given to this rewriter.

func (*TokenStreamRewriter) GetTokenStream

func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream

func (*TokenStreamRewriter) InitializeProgram

func (tsr *TokenStreamRewriter) InitializeProgram(name string) []RewriteOperation

func (*TokenStreamRewriter) InsertAfter

func (tsr *TokenStreamRewriter) InsertAfter(program_name string, index int, text string)

func (*TokenStreamRewriter) InsertAfterDefault

func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string)

func (*TokenStreamRewriter) InsertAfterToken

func (tsr *TokenStreamRewriter) InsertAfterToken(program_name string, token Token, text string)

func (*TokenStreamRewriter) InsertBefore

func (tsr *TokenStreamRewriter) InsertBefore(program_name string, index int, text string)

func (*TokenStreamRewriter) InsertBeforeDefault

func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string)

func (*TokenStreamRewriter) InsertBeforeToken

func (tsr *TokenStreamRewriter) InsertBeforeToken(program_name string, token Token, text string)

func (*TokenStreamRewriter) Replace

func (tsr *TokenStreamRewriter) Replace(program_name string, from, to int, text string)

func (*TokenStreamRewriter) ReplaceDefault

func (tsr *TokenStreamRewriter) ReplaceDefault(from, to int, text string)

func (*TokenStreamRewriter) ReplaceDefaultPos

func (tsr *TokenStreamRewriter) ReplaceDefaultPos(index int, text string)

func (*TokenStreamRewriter) ReplaceToken

func (tsr *TokenStreamRewriter) ReplaceToken(program_name string, from, to Token, text string)

func (*TokenStreamRewriter) ReplaceTokenDefault

func (tsr *TokenStreamRewriter) ReplaceTokenDefault(from, to Token, text string)

func (*TokenStreamRewriter) ReplaceTokenDefaultPos

func (tsr *TokenStreamRewriter) ReplaceTokenDefaultPos(index Token, text string)

func (*TokenStreamRewriter) Rollback

func (tsr *TokenStreamRewriter) Rollback(program_name string, instruction_index int)

Rollback the instruction stream for a program so that the indicated instruction (via instructionIndex) is no longer in the stream. UNTESTED!

func (*TokenStreamRewriter) RollbackDefault

func (tsr *TokenStreamRewriter) RollbackDefault(instruction_index int)

func (*TokenStreamRewriter) SetLastRewriteTokenIndex

func (tsr *TokenStreamRewriter) SetLastRewriteTokenIndex(program_name string, i int)

type TokensStartState

type TokensStartState struct{ *BaseDecisionState }

TokensStartState is the Tokens rule start state linking to each lexer rule start state.

func NewTokensStartState

func NewTokensStartState() *TokensStartState

type TraceListener

type TraceListener struct {
    // contains filtered or unexported fields
}

func NewTraceListener

func NewTraceListener(parser *BaseParser) *TraceListener

func (*TraceListener) EnterEveryRule

func (t *TraceListener) EnterEveryRule(ctx ParserRuleContext)

func (*TraceListener) ExitEveryRule

func (t *TraceListener) ExitEveryRule(ctx ParserRuleContext)

func (*TraceListener) VisitErrorNode

func (t *TraceListener) VisitErrorNode(_ ErrorNode)

func (*TraceListener) VisitTerminal

func (t *TraceListener) VisitTerminal(node TerminalNode)

type Transition

type Transition interface {
    Matches(int, int, int) bool
    // contains filtered or unexported methods
}

type Tree

type Tree interface {
    GetParent() Tree
    SetParent(Tree)
    GetPayload() interface{}
    GetChild(i int) Tree
    GetChildCount() int
    GetChildren() []Tree
}

func TreesGetChildren

func TreesGetChildren(t Tree) []Tree

Return an ordered list of all children of this node.

func TreesgetAncestors

func TreesgetAncestors(t Tree) []Tree

Return a list of all ancestors of this node. The first node of the list is the root and the last is the parent of this node.

type WildcardTransition

type WildcardTransition struct{ *BaseTransition }

func NewWildcardTransition

func NewWildcardTransition(target ATNState) *WildcardTransition

func (*WildcardTransition) Matches

func (t *WildcardTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*WildcardTransition) String

func (t *WildcardTransition) String() string
