antlr

package module

v4.13.1
Published: May 15, 2024 | License: BSD-3-Clause | Imports: 16 | Imported by: 703

Details

Repository

github.com/antlr4-go/antlr

Links

README


ANTLR4 Go Runtime Module Repo

IMPORTANT: Please submit PRs via a clone of the https://github.com/antlr/antlr4 repo, and not here.

  • Do not submit PRs or any change requests to this repo
  • This repo is read-only and is updated by the ANTLR team to create a new release of the Go Runtime for ANTLR
  • This repo contains the Go runtime that your generated projects should import

Introduction

This repo contains the official modules for the Go Runtime for ANTLR. It is a copy of the runtime maintained at https://github.com/antlr/antlr4/tree/master/runtime/Go/antlr and is automatically updated by the ANTLR team to create the official Go runtime release only. No development work is carried out in this repo and PRs are not accepted here.

The dev branch of this repo is kept in sync with the dev branch of the main ANTLR repo and is updated periodically.

Why?

The go get command is unable to retrieve the Go runtime when it is embedded so deeply in the main repo. A go get against the antlr/antlr4 repo, while retrieving the correct source code for the runtime, does not correctly resolve tags and will create a reference in your go.mod file that is unclear, will not upgrade smoothly, and causes confusion.

For instance, the current Go runtime release, which is tagged with v4.13.0 in antlr/antlr4, is retrieved by go get as:

require (
    github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230219212500-1f9a474cc2dc
)

Where you would expect to see:

require (
    github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.13.0
)

The decision was taken to create a separate repo, in a separate org, to hold the official Go runtime for ANTLR, and from which users can expect go get to behave as expected.
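With the runtime in its own repo, go get resolves a clean, tagged version. As a sketch (the exact tag depends on the release you pull, and the /v4 suffix follows Go's semantic import versioning; check the module's own go.mod for the authoritative path), the go.mod entry becomes:

```
require (
    github.com/antlr4-go/antlr/v4 v4.13.1
)
```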

Documentation

Please read the official documentation at https://github.com/antlr/antlr4/blob/master/doc/index.md for tips on migrating existing projects to use the new module location and for information on how to use the Go runtime in general.

Documentation

Overview

Package antlr implements the Go version of the ANTLR 4 runtime.

The ANTLR Tool

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest.

Go Runtime

At version 4.11.x and prior, the Go runtime was not properly versioned for Go modules. After this point, the runtime source code to be imported was held in the `runtime/Go/antlr/v4` directory, and the go.mod file was updated to reflect the version of ANTLR4 that it is compatible with (i.e. it uses the /v4 path).

However, this was found to be problematic, as it meant that with the runtime embedded so far underneath the root of the repo, the `go get` and related commands could not properly resolve the location of the Go runtime source code. This meant that the reference to the runtime in your `go.mod` file would refer to the correct source code, but would not list the release tag such as @4.13.1 - this was confusing, to say the least.

As of 4.13.0, the runtime is now available as a Go module in its own repo, and can be imported as `github.com/antlr4-go/antlr` (the go get command should also be used with this path). See the main documentation for the ANTLR4 project for more information, which is available at the ANTLR docs. The documentation for using the Go runtime is available at the Go runtime docs.

This means that if you are using the source code without modules, you should also use the source code in the new repo. Though we highly recommend that you use Go modules, as they are now idiomatic for Go.

I am aware that this change will prove Hyrum's Law, but am prepared to live with it for the common good.

Go runtime author: Jim Idle jimi@idle.ws

Code Generation

ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library, written specifically to support the generated code in the target language. This library is the runtime for the Go target.

To generate code for the Go target, it is generally recommended to place the source grammar files in a package of their own, and use the `.sh` script method of generating code, using the go generate directive. In that same directory it is usual, though not required, to place the antlr tool that should be used to generate the code. That does mean that the antlr tool JAR file will be checked in to your source code control though, so you are, of course, free to use any other way of specifying the version of the ANTLR tool to use, such as aliasing in `.zshrc` or equivalent, or a profile in your IDE, or configuration in your CI system. Checking in the jar does mean that it is easy to reproduce the build as it was at any point in its history.

Here is a general/recommended template for an ANTLR based recognizer in Go:

.
├── parser
│     ├── mygrammar.g4
│     ├── antlr-4.13.1-complete.jar
│     ├── generate.go
│     └── generate.sh
├── parsing   - generated code goes here
│     └── error_listeners.go
├── go.mod
├── go.sum
├── main.go
└── main_test.go

Make sure that the package statement in your grammar file(s) reflects the go package the generated code will exist in.

The generate.go file then looks like this:

package parser

//go:generate ./generate.sh

And the generate.sh file will look similar to this:

#!/bin/sh

alias antlr4='java -Xmx500M -cp "./antlr4-4.13.1-complete.jar:$CLASSPATH" org.antlr.v4.Tool'
antlr4 -Dlanguage=Go -no-visitor -package parsing *.g4

depending on whether you want visitors or listeners or any other ANTLR options.

From the command line at the root of your source package (the location of go.mod), you can then simply issue the command:

go generate ./...

This will generate the code for the parser and place it in the parsing package. You can then use the generated code by importing the parsing package.

There are no hard and fast rules on this. It is just a recommendation. You can generate the code in any way and to anywhere you like.
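Tying the recommended layout together, a main.go might wire up the generated recognizer roughly as follows. This is a sketch only: the module name `myproject`, the `MyGrammarLexer`/`MyGrammarParser` constructors, and the `Entry` start rule are hypothetical names that depend on your grammar, so the snippet will not compile as-is.

```go
package main

import (
	"github.com/antlr4-go/antlr/v4" // runtime module path; check your go.mod

	"myproject/parsing" // hypothetical: the package the code was generated into
)

func main() {
	// Lex and parse some input using the generated recognizer.
	input := antlr.NewInputStream("some input text")
	lexer := parsing.NewMyGrammarLexer(input) // hypothetical generated constructor
	tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
	parser := parsing.NewMyGrammarParser(tokens) // hypothetical generated constructor
	tree := parser.Entry() // hypothetical start rule from your grammar
	_ = tree               // walk the tree with a listener or visitor from here
}
```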

Copyright Notice

Copyright (c) 2012-2023 The ANTLR Project. All rights reserved.

Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.

Index

Constants

View Source
const (
	ATNStateInvalidType    = 0
	ATNStateBasic          = 1
	ATNStateRuleStart      = 2
	ATNStateBlockStart     = 3
	ATNStatePlusBlockStart = 4
	ATNStateStarBlockStart = 5
	ATNStateTokenStart     = 6
	ATNStateRuleStop       = 7
	ATNStateBlockEnd       = 8
	ATNStateStarLoopBack   = 9
	ATNStateStarLoopEntry  = 10
	ATNStatePlusLoopBack   = 11
	ATNStateLoopEnd        = 12

	ATNStateInvalidStateNumber = -1
)

Constants for serialization.

View Source
const (
	ATNTypeLexer  = 0
	ATNTypeParser = 1
)

Represent the type of recognizer an ATN applies to.

View Source
const (
	LexerDefaultMode = 0
	LexerMore        = -2
	LexerSkip        = -3
)
View Source
const (
	LexerDefaultTokenChannel = TokenDefaultChannel
	LexerHidden              = TokenHiddenChannel
	LexerMinCharValue        = 0x0000
	LexerMaxCharValue        = 0x10FFFF
)
View Source
const (
	// LexerActionTypeChannel represents a [LexerChannelAction] action.
	LexerActionTypeChannel = 0
	// LexerActionTypeCustom represents a [LexerCustomAction] action.
	LexerActionTypeCustom = 1
	// LexerActionTypeMode represents a [LexerModeAction] action.
	LexerActionTypeMode = 2
	// LexerActionTypeMore represents a [LexerMoreAction] action.
	LexerActionTypeMore = 3
	// LexerActionTypePopMode represents a [LexerPopModeAction] action.
	LexerActionTypePopMode = 4
	// LexerActionTypePushMode represents a [LexerPushModeAction] action.
	LexerActionTypePushMode = 5
	// LexerActionTypeSkip represents a [LexerSkipAction] action.
	LexerActionTypeSkip = 6
	// LexerActionTypeType represents a [LexerTypeAction] action.
	LexerActionTypeType = 7
)
View Source
const (
	PredictionContextEmpty = iota
	PredictionContextSingleton
	PredictionContextArray
)
View Source
const (
	// PredictionModeSLL represents the SLL(*) prediction mode.
	// This prediction mode ignores the current
	// parser context when making predictions. This is the fastest prediction
	// mode, and provides correct results for many grammars. This prediction
	// mode is more powerful than the prediction mode provided by ANTLR 3, but
	// may result in syntax errors for grammar and input combinations which are
	// not SLL.
	//
	// When using this prediction mode, the parser will either return a correct
	// parse tree (i.e. the same parse tree that would be returned with the
	// [PredictionModeLL] prediction mode), or it will Report a syntax error. If a
	// syntax error is encountered when using the SLL prediction mode,
	// it may be due to either an actual syntax error in the input or indicate
	// that the particular combination of grammar and input requires the more
	// powerful LL prediction abilities to complete successfully.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeSLL = 0

	// PredictionModeLL represents the LL(*) prediction mode.
	// This prediction mode allows the current parser
	// context to be used for resolving SLL conflicts that occur during
	// prediction. This is the fastest prediction mode that guarantees correct
	// parse results for all combinations of grammars with syntactically correct
	// inputs.
	//
	// When using this prediction mode, the parser will make correct decisions
	// for all syntactically-correct grammar and input combinations. However, in
	// cases where the grammar is truly ambiguous this prediction mode might not
	// report a precise answer for exactly which alternatives are ambiguous.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLL = 1

	// PredictionModeLLExactAmbigDetection represents the LL(*) prediction mode
	// with exact ambiguity detection.
	//
	// In addition to the correctness guarantees provided by the [PredictionModeLL] prediction mode,
	// this prediction mode instructs the prediction algorithm to determine the
	// complete and exact set of ambiguous alternatives for every ambiguous
	// decision encountered while parsing.
	//
	// This prediction mode may be used for diagnosing ambiguities during
	// grammar development. Due to the performance overhead of calculating sets
	// of ambiguous alternatives, this prediction mode should be avoided when
	// the exact results are not necessary.
	//
	// This prediction mode does not provide any guarantees for prediction
	// behavior for syntactically-incorrect inputs.
	PredictionModeLLExactAmbigDetection = 2
)
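A common way to select one of these modes is via the parser's ATN simulator. As a hedged sketch, where `p` stands in for any generated parser instance (a hypothetical `*parsing.MyGrammarParser`):

```go
// Select the fast SLL(*) prediction mode for p, a generated parser (hypothetical).
p.GetInterpreter().SetPredictionMode(antlr.PredictionModeSLL)
```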
View Source
const (
	TokenInvalidType = 0

	// TokenEpsilon - during lookahead operations, this "token" signifies we hit the rule end [ATN] state
	// and did not follow it despite needing to.
	TokenEpsilon = -2

	TokenMinUserTokenType = 1
	TokenEOF              = -1

	// TokenDefaultChannel is the default channel upon which tokens are sent to the parser.
	//
	// All tokens go to the parser (unless [Skip] is called in the lexer rule)
	// on a particular "channel". The parser tunes to a particular channel
	// so that whitespace etc... can go to the parser on a "hidden" channel.
	TokenDefaultChannel = 0

	// TokenHiddenChannel defines the normal hidden channel - the parser will not see tokens that are not on [TokenDefaultChannel].
	//
	// Anything on a different channel than TokenDefaultChannel is not parsed by the parser.
	TokenHiddenChannel = 1
)
View Source
const (
	DefaultProgramName = "default"
	ProgramInitSize    = 100
	MinTokenIndex      = 0
)
View Source
const (
	TransitionEPSILON    = 1
	TransitionRANGE      = 2
	TransitionRULE       = 3
	TransitionPREDICATE  = 4 // e.g., {isType(input.LT(1))}?
	TransitionATOM       = 5
	TransitionACTION     = 6
	TransitionSET        = 7 // ~(A|B) or ~atom, wildcard, which convert to next 2
	TransitionNOTSET     = 8
	TransitionWILDCARD   = 9
	TransitionPRECEDENCE = 10
)
View Source
const (
	// BasePredictionContextEmptyReturnState represents {@code $} in an array in full context mode, $
	// doesn't mean wildcard:
	//
	//   $ + x = [$,x]
	//
	// Here,
	//
	//   $ = EmptyReturnState
	BasePredictionContextEmptyReturnState = 0x7FFFFFFF
)
View Source
const (
	// LL1AnalyzerHitPred is a special value added to the lookahead sets to indicate that we hit
	// a predicate during analysis if
	//
	//   seeThruPreds==false
	LL1AnalyzerHitPred = TokenInvalidType
)

Variables

View Source
var (
	LexerATNSimulatorMinDFAEdge = 0
	LexerATNSimulatorMaxDFAEdge = 127 // forces unicode to stay in ATN
	LexerATNSimulatorMatchCalls = 0
)
View Source
var (
	BasePredictionContextglobalNodeCount = 1
	BasePredictionContextid              = BasePredictionContextglobalNodeCount
)

TODO: JI These are meant to be atomics - this does not seem to match the Java runtime here

View Source
var ATNInvalidAltNumber int

ATNInvalidAltNumber is used to represent an ALT number that has yet to be calculated or which is invalid for a particular struct such as *antlr.BaseRuleContext

View Source
var ATNSimulatorError = NewDFAState(0x7FFFFFFF, NewATNConfigSet(false))
View Source
var ATNStateInitialNumTransitions = 4
View Source
var BasePredictionContextEMPTY = &PredictionContext{
	cachedHash:  calculateEmptyHash(),
	pcType:      PredictionContextEmpty,
	returnState: BasePredictionContextEmptyReturnState,
}
View Source
var CollectionDescriptors = map[CollectionSource]CollectionDescriptor{
	UnknownCollection: {
		SybolicName: "UnknownCollection",
		Description: "Unknown collection type. Only used if the target author thought it was an unimportant collection.",
	},
	ATNConfigCollection: {
		SybolicName: "ATNConfigCollection",
		Description: "ATNConfig collection. Used to store the ATNConfigs for a particular state in the ATN. " +
			"For instance, it is used to store the results of the closure() operation in the ATN.",
	},
	ATNConfigLookupCollection: {
		SybolicName: "ATNConfigLookupCollection",
		Description: "ATNConfigLookup collection. Used to store the ATNConfigs for a particular state in the ATN. " +
			"This is used to prevent duplicating equivalent states in an ATNConfigurationSet.",
	},
	ATNStateCollection: {
		SybolicName: "ATNStateCollection",
		Description: "ATNState collection. This is used to store the states of the ATN.",
	},
	DFAStateCollection: {
		SybolicName: "DFAStateCollection",
		Description: "DFAState collection. This is used to store the states of the DFA.",
	},
	PredictionContextCollection: {
		SybolicName: "PredictionContextCollection",
		Description: "PredictionContext collection. This is used to store the prediction contexts of the ATN and cache computes.",
	},
	SemanticContextCollection: {
		SybolicName: "SemanticContextCollection",
		Description: "SemanticContext collection. This is used to store the semantic contexts of the ATN.",
	},
	ClosureBusyCollection: {
		SybolicName: "ClosureBusyCollection",
		Description: "ClosureBusy collection. This is used to check and prevent infinite recursion right recursive rules. " +
			"It stores ATNConfigs that are currently being processed in the closure() operation.",
	},
	PredictionVisitedCollection: {
		SybolicName: "PredictionVisitedCollection",
		Description: "A map that records whether we have visited a particular context when searching through cached entries.",
	},
	MergeCacheCollection: {
		SybolicName: "MergeCacheCollection",
		Description: "A map that records whether we have already merged two particular contexts and can save effort by not repeating it.",
	},
	PredictionContextCacheCollection: {
		SybolicName: "PredictionContextCacheCollection",
		Description: "A map that records whether we have already created a particular context and can save effort by not computing it again.",
	},
	AltSetCollection: {
		SybolicName: "AltSetCollection",
		Description: "Used to eliminate duplicate alternatives in an ATN config set.",
	},
	ReachSetCollection: {
		SybolicName: "ReachSetCollection",
		Description: "Used as merge cache to prevent us needing to compute the merge of two states if we have already done it.",
	},
}
View Source
var CommonTokenFactoryDEFAULT = NewCommonTokenFactory(false)

CommonTokenFactoryDEFAULT is the default CommonTokenFactory. It does not explicitly copy token text when constructing tokens.

View Source
var ConsoleErrorListenerINSTANCE = NewConsoleErrorListener()

ConsoleErrorListenerINSTANCE provides a default instance of ConsoleErrorListener.

View Source
var ErrEmptyStack = errors.New("stack is empty")
View Source
var LexerMoreActionINSTANCE = NewLexerMoreAction()
View Source
var LexerPopModeActionINSTANCE = NewLexerPopModeAction()
View Source
var LexerSkipActionINSTANCE = NewLexerSkipAction()

LexerSkipActionINSTANCE provides a singleton instance of this parameterless lexer action.

View Source
var ParseTreeWalkerDefault = NewParseTreeWalker()
View Source
var ParserRuleContextEmpty = NewBaseParserRuleContext(nil, -1)
View Source
var SemanticContextNone = NewPredicate(-1, -1, false)
View Source
var Statistics = &goRunStats{}
View Source
var TransitionserializationNames = []string{
	"INVALID",
	"EPSILON",
	"RANGE",
	"RULE",
	"PREDICATE",
	"ATOM",
	"ACTION",
	"SET",
	"NOT_SET",
	"WILDCARD",
	"PRECEDENCE",
}
View Source
var TreeInvalidInterval = NewInterval(-1, -2)

Functions

func ConfigureRuntime (added in v4.13.0)

func ConfigureRuntime(options ...runtimeOption) error

ConfigureRuntime allows the runtime to be configured globally, setting things like trace and statistics options. It uses the functional options pattern for Go. This is a package global function as it operates on the runtime configuration regardless of the instantiation of anything higher up such as a parser or lexer. Generally this is used for debugging/tracing/statistics options, which are usually used by the runtime maintainers (or rather the only maintainer). However, it is possible that you might want to use this to set a global option concerning the memory allocation type used by the runtime, such as sync.Pool or not.

The options are applied in the order they are passed in, so the last option will override any previous options.

For example, if you want to turn on the collection create point stack flag to true, you can do:

antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(true))

If you want to turn it off, you can do:

antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(false))
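The functional options pattern that ConfigureRuntime uses can be illustrated with a self-contained sketch. This is not ANTLR's internal implementation - the types and names below are hypothetical - it just shows the general shape of the pattern, including the documented "last option wins" behavior:

```go
package main

import "fmt"

// runtimeConfig is a hypothetical stand-in for the runtime's configuration state.
type runtimeConfig struct {
	statsTraceStacks bool
}

// option mirrors the role of a runtime option: a function that mutates the config.
type option func(*runtimeConfig) error

// withStatsTraceStacks returns an option that sets the trace-stacks flag.
func withStatsTraceStacks(on bool) option {
	return func(c *runtimeConfig) error {
		c.statsTraceStacks = on
		return nil
	}
}

// configure applies options in order, so later options override earlier ones.
func configure(c *runtimeConfig, opts ...option) error {
	for _, opt := range opts {
		if err := opt(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var cfg runtimeConfig
	// The last option wins, matching the documented override behavior.
	_ = configure(&cfg, withStatsTraceStacks(true), withStatsTraceStacks(false))
	fmt.Println(cfg.statsTraceStacks) // prints false
}
```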

func EscapeWhitespace

func EscapeWhitespace(s string, escapeSpaces bool) string

func InitBaseParserRuleContext (added in v4.13.0)

func InitBaseParserRuleContext(prc *BaseParserRuleContext, parent ParserRuleContext, invokingStateNumber int)

func PredictionModeallConfigsInRuleStopStates

func PredictionModeallConfigsInRuleStopStates(configs *ATNConfigSet) bool

PredictionModeallConfigsInRuleStopStates checks if all configurations in configs are in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).

The func returns true if all configurations in configs are in a RuleStopState.

func PredictionModeallSubsetsConflict

func PredictionModeallSubsetsConflict(altsets []*BitSet) bool

PredictionModeallSubsetsConflict determines if every alternative subset in altsets contains more than one alternative.

The func returns true if every BitSet in altsets has BitSet.cardinality > 1.

func PredictionModeallSubsetsEqual

func PredictionModeallSubsetsEqual(altsets []*BitSet) bool

PredictionModeallSubsetsEqual determines if every alternative subset in altsets is equivalent.

The func returns true if every member of altsets is equal to the others.

func PredictionModegetSingleViableAlt

func PredictionModegetSingleViableAlt(altsets []*BitSet) int

PredictionModegetSingleViableAlt gets the single alternative predicted by all alternative subsets in altsets if there is one.

TODO: JI - Review this code - it does not seem to do the same thing as the Java code - maybe because BitSet is not like the Java utils BitSet

func PredictionModegetUniqueAlt

func PredictionModegetUniqueAlt(altsets []*BitSet) int

PredictionModegetUniqueAlt returns the unique alternative predicted by all alternative subsets in altsets. If no such alternative exists, this method returns ATNInvalidAltNumber.

altsets is a collection of alternative subsets.

func PredictionModehasConfigInRuleStopState

func PredictionModehasConfigInRuleStopState(configs *ATNConfigSet) bool

PredictionModehasConfigInRuleStopState checks if any configuration in the given configs is in a RuleStopState. Configurations meeting this condition have reached the end of the decision rule (local context) or end of start rule (full context).

The func returns true if any configuration in the supplied configs is in a RuleStopState.

func PredictionModehasConflictingAltSet

func PredictionModehasConflictingAltSet(altsets []*BitSet) bool

PredictionModehasConflictingAltSet determines if any single alternative subset in altsets contains more than one alternative.

The func returns true if altsets contains a BitSet with BitSet.cardinality > 1, otherwise false.

func PredictionModehasNonConflictingAltSet

func PredictionModehasNonConflictingAltSet(altsets []*BitSet) bool

PredictionModehasNonConflictingAltSet determines if any single alternative subset in altsets contains exactly one alternative.

The func returns true if altsets contains at least one BitSet with BitSet.cardinality 1.

func PredictionModehasSLLConflictTerminatingPrediction

func PredictionModehasSLLConflictTerminatingPrediction(mode int, configs *ATNConfigSet) bool

PredictionModehasSLLConflictTerminatingPrediction computes the SLL prediction termination condition.

This method computes the SLL prediction termination condition for both of the following cases:

  • The usual SLL+LL fallback upon SLL conflict
  • Pure SLL without LL fallback

Combined SLL+LL Parsing

When LL-fallback is enabled upon SLL conflict, correct predictions are ensured regardless of how the termination condition is computed by this method. Due to the substantially higher cost of LL prediction, the prediction should only fall back to LL when the additional lookahead cannot lead to a unique SLL prediction.

Assuming combined SLL+LL parsing, an SLL configuration set with only conflicting subsets should fall back to full LL, even if the configuration sets don't resolve to the same alternative, e.g.

{1,2} and {3,4}

If there is at least one non-conflicting configuration, SLL could continue with the hopes that more lookahead will resolve via one of those non-conflicting configurations.

Here's the prediction termination rule then: SLL (for SLL+LL parsing) stops when it sees only conflicting configuration subsets. In contrast, full LL keeps going when there is uncertainty.
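The SLL-then-LL fallback described here is commonly implemented as a two-stage parse. The following is a hedged sketch only: `parsing.NewMyGrammarParser`, the `Entry` start rule, and the `tryParse` helper are hypothetical names, and the exact recovery calls should be checked against your runtime version:

```go
// Stage 1: fast SLL parse that bails out on the first error.
p := parsing.NewMyGrammarParser(tokens) // hypothetical generated parser
p.GetInterpreter().SetPredictionMode(antlr.PredictionModeSLL)
p.SetErrorHandler(antlr.NewBailErrorStrategy())

tree, err := tryParse(p) // hypothetical helper that recovers from the bail-out

if err != nil {
	// Stage 2: rewind and reparse with full LL prediction.
	tokens.Seek(0)
	p.GetInterpreter().SetPredictionMode(antlr.PredictionModeLL)
	p.SetErrorHandler(antlr.NewDefaultErrorStrategy())
	tree = p.Entry() // hypothetical start rule
}
```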

Heuristic

As a heuristic, we stop prediction when we see any conflicting subset unless we see a state that only has one alternative associated with it. The single-alt-state thing lets prediction continue upon rules like (otherwise, it would admit defeat too soon):

s : (ID | ID ID?) ';' ;

When the ATN simulation reaches the state before ';', it has a DFA state that looks like:

[12|1|[], 6|2|[], 12|2|[]]

Naturally

12|1|[] and  12|2|[]

conflict, but we cannot stop processing this node because alternative two has another way to continue, via

[6|2|[]]

It also lets us continue for this rule:

[1|1|[], 1|2|[], 8|3|[]] a : A | A | A B ;

After matching input A, we reach the stop state for rule A, state 1. State 8 is the state immediately before B. Clearly alternatives 1 and 2 conflict and no amount of further lookahead will separate the two. However, alternative 3 will be able to continue, and so we do not stop working on this state. In the previous example, we're concerned with states associated with the conflicting alternatives. Here alt 3 is not associated with the conflicting configs, but since we can continue looking for input reasonably, don't declare the state done.

Pure SLL Parsing

To handle pure SLL parsing, all we have to do is make sure that we combine stack contexts for configurations that differ only by semantic predicate. From there, we can do the usual SLL termination heuristic.

Predicates in SLL+LL Parsing

SLL decisions don't evaluate predicates until after they reach DFA stop states because they need to create the DFA cache that works in all semantic situations. In contrast, full LL evaluates predicates collected during start state computation, so it can ignore predicates thereafter. This means that SLL termination detection can totally ignore semantic predicates.

Implementation-wise, ATNConfigSet combines stack contexts but not semantic predicate contexts, so we might see two configurations like the following:

(s, 1, x, {}), (s, 1, x', {p})

Before testing these configurations against others, we have to merge x and x' (without modifying the existing configurations). For example, we test (x+x')==x” when looking for conflicts in the following configurations:

(s, 1, x, {}), (s, 1, x', {p}), (s, 2, x”, {})

If the configuration set has predicates (as indicated by [ATNConfigSet.hasSemanticContext]), this algorithm makes a copy of the configurations to strip out all the predicates so that a standard ATNConfigSet will merge everything ignoring predicates.

func PredictionModehasStateAssociatedWithOneAlt

func PredictionModehasStateAssociatedWithOneAlt(configs *ATNConfigSet) bool

func PredictionModeresolvesToJustOneViableAlt

func PredictionModeresolvesToJustOneViableAlt(altsets []*BitSet) int

PredictionModeresolvesToJustOneViableAlt checks full LL prediction termination.

Can we stop looking ahead during ATN simulation or is there some uncertainty as to which alternative we will ultimately pick, after consuming more input? Even if there are partial conflicts, we might know that everything is going to resolve to the same minimum alternative. That means we can stop since no more lookahead will change that fact. On the other hand, there might be multiple conflicts that resolve to different minimums. That means we need more lookahead to decide which of those alternatives we should predict.

The basic idea is to split the set of configurations C into conflicting subsets (s, _, ctx, _) and singleton subsets with non-conflicting configurations. Two configurations conflict if they have identical ATNConfig.state and ATNConfig.context values but a different ATNConfig.alt value, e.g.

(s, i, ctx, _)

and

(s, j, ctx, _) ; for i != j

Reduce these configuration subsets to the set of possible alternatives.You can compute the alternative subsets in one pass as follows:

A_s,ctx = {i | (s, i, ctx, _)}

for each configuration in C holding s and ctx fixed.

Or in pseudo-code:

for each configuration c in C:
    map[c] U= c.ATNConfig.alt  // map hash/equals uses s and x, not alt and not pred

The values in map are the set of

A_s,ctx

sets.

If

|A_s,ctx| = 1

then there is no conflict associated with s and ctx.

Reduce the subsets to singletons by choosing a minimum of each subset. If the union of these alternative subsets is a singleton, then no amount of further lookahead will help us. We will always pick that alternative. If, however, there is more than one alternative, then we are uncertain which alternative to predict and must continue looking for resolution. We may or may not discover an ambiguity in the future, even if there are no conflicting subsets this round.

The biggest sin is to terminate early because it means we've made a decision but were uncertain as to the eventual outcome. We haven't used enough lookahead. On the other hand, announcing a conflict too late is no big deal; you will still have the conflict. It's just inefficient. It might even look until the end of file.

No special consideration for semantic predicates is required because predicates are evaluated on-the-fly for full LL prediction, ensuring that no configuration contains a semantic context during the termination check.

Conflicting Configs

Two configurations:

(s, i, x) and (s, j, x')

conflict when i != j but x = x'. Because we merge all (s, i, _) configurations together, that means that there are at most n configurations associated with state s for n possible alternatives in the decision. The merged stacks complicate the comparison of configuration contexts x and x'.

Sam checks to see if one is a subset of the other by calling merge and checking to see if the merged result is either x or x'. If the x associated with lowest alternative i is the superset, then i is the only possible prediction since the others resolve to min(i) as well. However, if x is associated with j > i then at least one stack configuration for j is not in conflict with alternative i. The algorithm should keep going, looking for more lookahead due to the uncertainty.

For simplicity, I'm doing an equality check between x and x', which lets the algorithm continue to consume lookahead longer than necessary. The reason I like the equality is of course the simplicity but also because that is the test you need to detect the alternatives that are actually in conflict.

Continue/Stop Rule

Continue if the union of resolved alternative sets from non-conflicting and conflicting alternative subsets has more than one alternative. We are uncertain about which alternative to predict.

The complete set of alternatives,

[i for (_, i, _)]

tells us which alternatives are still in the running for the amount of input we've consumed at this point. The conflicting sets let us strip away configurations that won't lead to more states because we resolve conflicts to the configuration with a minimum alternate for the conflicting set.

Cases

  • no conflicts and more than 1 alternative in set => continue
  • (s, 1, x), (s, 2, x), (s, 3, z), (s', 1, y), (s', 2, y) yields non-conflicting set {3} ∪ conflicting sets min({1,2}) ∪ min({1,2}) = {1,3} => continue
  • (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y), (s”, 1, z) yields non-conflicting set {1} ∪ conflicting sets min({1,2}) ∪ min({1,2}) = {1} => stop and predict 1
  • (s, 1, x), (s, 2, x), (s', 1, y), (s', 2, y) yields conflicting, reduced sets {1} ∪ {1} = {1} => stop and predict 1, can announce ambiguity {1,2}
  • (s, 1, x), (s, 2, x), (s', 2, y), (s', 3, y) yields conflicting, reduced sets {1} ∪ {2} = {1,2} => continue
  • (s, 1, x), (s, 2, x), (s', 3, y), (s', 4, y) yields conflicting, reduced sets {1} ∪ {3} = {1,3} => continue

Exact Ambiguity Detection

If all states report the same conflicting set of alternatives, then we know we have the exact ambiguity set:

|A_i| > 1

and

A_i = A_j ; for all i, j

In other words, we continue examining lookahead until all A_i have more than one alternative and all A_i are the same. If

A={{1,2}, {1,3}}

then regular LL prediction would terminate because the resolved set is {1}. To determine what the real ambiguity is, we have to know whether the ambiguity is between one and two or one and three, so we keep going. We can only stop prediction when we need exact ambiguity detection when the sets look like:

A={{1,2}}

or

{{1,2},{1,2}}, etc...

func PrintArrayJavaStyle

func PrintArrayJavaStyle(sa []string) string

func TerminalNodeToStringArray

func TerminalNodeToStringArray(sa []TerminalNode) []string

func TreesGetNodeText

func TreesGetNodeText(t Tree, ruleNames []string, recog Parser) string

func TreesStringTree

func TreesStringTree(tree Tree, ruleNames []string, recog Recognizer) string

TreesStringTree prints out a whole tree in LISP form. [getNodeText] is used on the node payloads to get the text for the nodes. Detects parse trees and extracts data appropriately.

func WithLRLoopEntryBranchOpt (added in v4.13.0)

func WithLRLoopEntryBranchOpt(off bool) runtimeOption

WithLRLoopEntryBranchOpt sets the global flag indicating whether left recursive loop operations should be optimized or not. This is useful for debugging parser issues by comparing the output with the Java runtime. It turns off the functionality of [canDropLoopEntryEdgeInLeftRecursiveRule] in ParserATNSimulator.

Note that default is to use this optimization.

Use:

antlr.ConfigureRuntime(antlr.WithLRLoopEntryBranchOpt(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithLRLoopEntryBranchOpt(false))

func WithLexerATNSimulatorDFADebug added in v4.13.0

func WithLexerATNSimulatorDFADebug(debug bool) runtimeOption

WithLexerATNSimulatorDFADebug sets the global flag indicating whether to log debug information from the lexer ATN DFA simulator. This is useful for debugging lexer issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDFADebug(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDFADebug(false))

func WithLexerATNSimulatorDebug added in v4.13.0

func WithLexerATNSimulatorDebug(debug bool) runtimeOption

WithLexerATNSimulatorDebug sets the global flag indicating whether to log debug information from the lexer ATN simulator. This is useful for debugging lexer issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDebug(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithLexerATNSimulatorDebug(false))

func WithMemoryManager added in v4.13.0

func WithMemoryManager(use bool) runtimeOption

WithMemoryManager sets the global flag indicating whether to use the memory manager or not. This is useful for poorly constructed grammars that create a lot of garbage. It turns on the functionality of [memoryManager], which will intercept garbage collection and cause available memory to be reused. At the end of the day, this is no substitute for fixing your grammar by ridding yourself of extreme ambiguity. But if you are just trying to reuse an open source grammar, this may help make it more practical.

Note that the default is to use normal Go memory allocation and not pool memory.

Use:

antlr.ConfigureRuntime(antlr.WithMemoryManager(true))

Note that if you turn this on, you should probably leave it on. You should use only one memory strategy or the other, and you should remember to nil out any references to the parser or lexer when you are done with them.
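The pooled-reuse idea behind the memory manager can be illustrated with the standard library's sync.Pool; this is an analogy only, not the runtime's internal [memoryManager], and the node type here is invented for the example:

```go
package main

import (
	"fmt"
	"sync"
)

// node stands in for a frequently allocated parse-time structure.
type node struct{ children []*node }

// pool hands back previously released nodes instead of always allocating.
var pool = sync.Pool{
	New: func() any { return &node{} },
}

func main() {
	n := pool.Get().(*node) // reuse a released node, or allocate a fresh one
	n.children = n.children[:0]
	// ... use n during a parse ...
	pool.Put(n) // release for reuse instead of leaving it all to the GC
	fmt.Println("released")
}
```

Note that sync.Pool makes no retention guarantees; it simply reduces allocation pressure, which is the same trade-off the memory manager option targets.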

func WithParserATNSimulatorDFADebug added in v4.13.0

func WithParserATNSimulatorDFADebug(debug bool) runtimeOption

WithParserATNSimulatorDFADebug sets the global flag indicating whether to log debug information from the parser ATN DFA simulator. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDFADebug(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDFADebug(false))

func WithParserATNSimulatorDebug added in v4.13.0

func WithParserATNSimulatorDebug(debug bool) runtimeOption

WithParserATNSimulatorDebug sets the global flag indicating whether to log debug information from the parser ATN simulator. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDebug(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorDebug(false))

func WithParserATNSimulatorRetryDebug added in v4.13.0

func WithParserATNSimulatorRetryDebug(debug bool) runtimeOption

WithParserATNSimulatorRetryDebug sets the global flag indicating whether to log debug information from the parser ATN DFA simulator when retrying a decision. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorRetryDebug(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorRetryDebug(false))

func WithParserATNSimulatorTraceATNSim added in v4.13.0

func WithParserATNSimulatorTraceATNSim(trace bool) runtimeOption

WithParserATNSimulatorTraceATNSim sets the global flag indicating whether to log trace information from the parser ATN simulator DFA. This is useful for debugging parser issues by comparing the output with the Java runtime. Only useful to the runtime maintainers.

Use:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorTraceATNSim(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithParserATNSimulatorTraceATNSim(false))

func WithStatsTraceStacks added in v4.13.0

func WithStatsTraceStacks(trace bool) runtimeOption

WithStatsTraceStacks sets the global flag indicating whether to collect stack traces at the create-point of certain structs, such as collections, or the use point of certain methods such as Put(). Because this can be expensive, it is turned off by default. However, it can be useful to track down exactly where memory is being created and used.

Use:

antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(true))

You can turn it off at any time using:

antlr.ConfigureRuntime(antlr.WithStatsTraceStacks(false))

func WithTopN added in v4.13.0

func WithTopN(topN int) statsOption

Types

type AND

type AND struct {
    // contains filtered or unexported fields
}

func NewAND

func NewAND(a, b SemanticContext) *AND

func (*AND) Equals

func (a *AND) Equals(other Collectable[SemanticContext]) bool

func (*AND) Hash

func (a *AND) Hash() int

func (*AND) String

func (a *AND) String() string

type ATN

type ATN struct {
    // DecisionToState is the decision points for all rules, sub-rules, optional
    // blocks, ()+, ()*, etc. Each sub-rule/rule is a decision point, and we must
    // track them, so we can go back later and build DFA predictors for them.
    DecisionToState []DecisionState
    // contains filtered or unexported fields
}

ATN represents an “Augmented Transition Network”, though in ANTLR the term “Augmented Recursive Transition Network” is generally used; some descriptions of “Recursive Transition Network” also exist.

ATNs represent the main networks in the system and are serialized by the code generator and support ALL(*).

func NewATN

func NewATN(grammarType int, maxTokenType int) *ATN

NewATN returns a new ATN struct representing the given grammarType and is used for runtime deserialization of ATNs from the code generated by the ANTLR tool.

func (*ATN) NextTokens

func (a *ATN) NextTokens(s ATNState, ctx RuleContext) *IntervalSet

NextTokens computes and returns the set of valid tokens starting in state s, by calling either [NextTokensNoContext] (ctx == nil) or [NextTokensInContext] (ctx != nil).

func (*ATN) NextTokensInContext

func (a *ATN) NextTokensInContext(s ATNState, ctx RuleContext) *IntervalSet

NextTokensInContext computes and returns the set of valid tokens that can occur starting in state s. If ctx is nil, the set of tokens will not include what can follow the rule surrounding s. In other words, the set will be restricted to tokens reachable staying within the rule of s.

func (*ATN) NextTokensNoContext

func (a *ATN) NextTokensNoContext(s ATNState) *IntervalSet

NextTokensNoContext computes and returns the set of valid tokens that can occur starting in state s and staying in the same rule. antlr.Token.EPSILON is in the set if we reach the end of the rule.

type ATNAltConfigComparator

type ATNAltConfigComparator[T Collectable[T]] struct {
}

ATNAltConfigComparator is used as the comparator for mapping configs to Alt Bitsets.

func (*ATNAltConfigComparator[T]) Equals2

func (c *ATNAltConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs specifically for configLookup.

func (*ATNAltConfigComparator[T]) Hash1

func (c *ATNAltConfigComparator[T]) Hash1(o *ATNConfig) int

Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup.

type ATNConfig

type ATNConfig struct {
    // contains filtered or unexported fields
}

ATNConfig is a tuple: (ATN state, predicted alt, syntactic context, semantic context). The syntactic context is a graph-structured stack node whose path(s) to the root is the rule invocation(s) chain used to arrive in the state. The semantic context is the tree of semantic predicates encountered before reaching an ATN state.

func NewATNConfig added in v4.13.0

func NewATNConfig(c *ATNConfig, state ATNState, context *PredictionContext, semanticContext SemanticContext) *ATNConfig

NewATNConfig creates a new ATNConfig instance given an existing config, a state, a context and a semantic context; the other 'constructors' are just wrappers around this one.

func NewATNConfig1 added in v4.13.0

func NewATNConfig1(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig

NewATNConfig1 creates a new ATNConfig instance given an existing config, a state, and a context only.

func NewATNConfig2 added in v4.13.0

func NewATNConfig2(c *ATNConfig, semanticContext SemanticContext) *ATNConfig

NewATNConfig2 creates a new ATNConfig instance given an existing config and a semantic context only.

func NewATNConfig3 added in v4.13.0

func NewATNConfig3(c *ATNConfig, state ATNState, semanticContext SemanticContext) *ATNConfig

NewATNConfig3 creates a new ATNConfig instance given an existing config, a state and a semantic context.

func NewATNConfig4 added in v4.13.0

func NewATNConfig4(c *ATNConfig, state ATNState) *ATNConfig

NewATNConfig4 creates a new ATNConfig instance given an existing config and a state only.

func NewATNConfig5 added in v4.13.0

func NewATNConfig5(state ATNState, alt int, context *PredictionContext, semanticContext SemanticContext) *ATNConfig

NewATNConfig5 creates a new ATNConfig instance given a state, alt, context and semantic context.

func NewATNConfig6 added in v4.13.0

func NewATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig

NewATNConfig6 creates a new ATNConfig instance given a state, alt and context only.

func NewLexerATNConfig1

func NewLexerATNConfig1(state ATNState, alt int, context *PredictionContext) *ATNConfig

func NewLexerATNConfig2

func NewLexerATNConfig2(c *ATNConfig, state ATNState, context *PredictionContext) *ATNConfig

func NewLexerATNConfig3

func NewLexerATNConfig3(c *ATNConfig, state ATNState, lexerActionExecutor *LexerActionExecutor) *ATNConfig

func NewLexerATNConfig4

func NewLexerATNConfig4(c *ATNConfig, state ATNState) *ATNConfig

func NewLexerATNConfig6

func NewLexerATNConfig6(state ATNState, alt int, context *PredictionContext) *ATNConfig

func (*ATNConfig) Equals

func (a *ATNConfig) Equals(o Collectable[*ATNConfig]) bool

Equals is the default comparison function for an ATNConfig when no specialist implementation is required for a collection.

An ATN configuration is equal to another if both have the same state, they predict the same alternative, and syntactic/semantic contexts are the same.
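That equality rule (same state, same predicted alternative, same contexts) can be sketched with a pared-down config type; the field names here are assumptions for illustration, not the runtime's unexported fields:

```go
package main

import "fmt"

// config is a simplified stand-in for an ATN configuration tuple.
type config struct {
	state  int    // ATN state number
	alt    int    // predicted alternative
	stack  string // syntactic (rule invocation) context, flattened for the sketch
	semCtx string // semantic predicate context
}

// equals mirrors the documented rule: equal state, alternative and contexts.
func equals(a, b config) bool {
	return a.state == b.state &&
		a.alt == b.alt &&
		a.stack == b.stack &&
		a.semCtx == b.semCtx
}

func main() {
	a := config{state: 7, alt: 1, stack: "$", semCtx: "none"}
	b := config{state: 7, alt: 1, stack: "$", semCtx: "none"}
	c := config{state: 7, alt: 2, stack: "$", semCtx: "none"}
	fmt.Println(equals(a, b), equals(a, c))
}
```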

func (*ATNConfig) GetAlt

func (a *ATNConfig) GetAlt() int

GetAlt returns the alternative associated with this configuration.

func (*ATNConfig) GetContext

func (a *ATNConfig) GetContext() *PredictionContext

GetContext returns the rule invocation stack associated with this configuration.

func (*ATNConfig) GetReachesIntoOuterContext

func (a *ATNConfig) GetReachesIntoOuterContext() int

GetReachesIntoOuterContext returns the count of references to an outer context from this configuration.

func (*ATNConfig) GetSemanticContext

func (a *ATNConfig) GetSemanticContext() SemanticContext

GetSemanticContext returns the semantic context associated with this configuration.

func (*ATNConfig) GetState

func (a *ATNConfig) GetState() ATNState

GetState returns the ATN state associated with this configuration.

func (*ATNConfig) Hash

func (a *ATNConfig) Hash() int

Hash is the default hash function for a parser ATNConfig when no specialist hash function is required for a collection.

func (*ATNConfig) InitATNConfig added in v4.13.0

func (a *ATNConfig) InitATNConfig(c *ATNConfig, state ATNState, alt int, context *PredictionContext, semanticContext SemanticContext)

func (*ATNConfig) LEquals added in v4.13.0

func (a *ATNConfig) LEquals(other Collectable[*ATNConfig]) bool

LEquals is the default comparison function for Lexer ATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.

func (*ATNConfig) LHash added in v4.13.0

func (a *ATNConfig) LHash() int

LHash is the default hash function for Lexer ATNConfig objects; it can be used directly or via the default comparator ObjEqComparator.

func (*ATNConfig) PEquals added in v4.13.0

func (a *ATNConfig) PEquals(o Collectable[*ATNConfig]) bool

PEquals is the default comparison function for a Parser ATNConfig when no specialist implementation is required for a collection.

An ATN configuration is equal to another if both have the same state, they predict the same alternative, and syntactic/semantic contexts are the same.

func (*ATNConfig) PHash added in v4.13.0

func (a *ATNConfig) PHash() int

PHash is the default hash function for a parser ATNConfig when no specialist hash function is required for a collection.

func (*ATNConfig) SetContext

func (a *ATNConfig) SetContext(v *PredictionContext)

SetContext sets the rule invocation stack associated with this configuration.

func (*ATNConfig) SetReachesIntoOuterContext

func (a *ATNConfig) SetReachesIntoOuterContext(v int)

SetReachesIntoOuterContext sets the count of references to an outer context from this configuration.

func (*ATNConfig) String

func (a *ATNConfig) String() string

String returns a string representation of the ATNConfig, usually used for debugging purposes.

type ATNConfigComparator

type ATNConfigComparator[T Collectable[T]] struct {
}

ATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet and has a custom Equals() and Hash() implementation, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.

func (*ATNConfigComparator[T]) Equals2

func (c *ATNConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs specifically for configLookup.

func (*ATNConfigComparator[T]) Hash1

func (c *ATNConfigComparator[T]) Hash1(o *ATNConfig) int

Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup.

type ATNConfigSet

type ATNConfigSet struct {
    // contains filtered or unexported fields
}

ATNConfigSet is a specialized set of ATNConfig that tracks information about its elements and can combine similar configurations using a graph-structured stack.

func NewATNConfigSet added in v4.13.0

func NewATNConfigSet(fullCtx bool) *ATNConfigSet

NewATNConfigSet creates a new ATNConfigSet instance.

func NewOrderedATNConfigSet

func NewOrderedATNConfigSet() *ATNConfigSet

NewOrderedATNConfigSet creates a config set with a slightly different Hash/Equal pair for use in lexers.

func (*ATNConfigSet) Add

func (b *ATNConfigSet) Add(config *ATNConfig, mergeCache *JPCMap) bool

Add merges contexts with existing configs for (s, i, pi, _), where 's' is the ATNConfig.state, 'i' is the ATNConfig.alt, and 'pi' is the ATNConfig.semanticContext.

We use (s, i, pi) as the key. Updates dipsIntoOuterContext and hasSemanticContext when necessary.
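The merge-by-key behaviour of Add can be sketched with a map keyed on (state, alt, semantic context); the types here are simplified stand-ins, and the real set additionally merges the graph-structured stacks rather than just collecting them:

```go
package main

import "fmt"

// key mirrors the (s, i, pi) tuple used by Add.
type key struct {
	state, alt int
	semCtx     string
}

// configSet groups syntactic contexts under a (state, alt, semCtx) key,
// mimicking how Add merges contexts for matching configurations.
type configSet struct {
	byKey map[key][]string
}

func (s *configSet) add(state, alt int, semCtx, stack string) {
	if s.byKey == nil {
		s.byKey = map[key][]string{}
	}
	k := key{state, alt, semCtx}
	s.byKey[k] = append(s.byKey[k], stack)
}

func main() {
	var s configSet
	s.add(5, 1, "none", "x")
	s.add(5, 1, "none", "y") // same key: contexts merge under one entry
	s.add(5, 2, "none", "x") // different alt: separate entry
	fmt.Println(len(s.byKey), len(s.byKey[key{5, 1, "none"}]))
}
```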

func (*ATNConfigSet) AddAll

func (b *ATNConfigSet) AddAll(coll []*ATNConfig) bool

func (*ATNConfigSet) Alts

func (b *ATNConfigSet) Alts() *BitSet

Alts returns the combined set of alts for all the configurations in this set.

func (*ATNConfigSet) Clear

func (b *ATNConfigSet) Clear()

func (*ATNConfigSet) Compare added in v4.13.0

func (b *ATNConfigSet) Compare(bs *ATNConfigSet) bool

Compare: the configs are only equal if they are in the same order and their Equals function returns true. Java uses ArrayList.equals(), which requires the same order.

func (*ATNConfigSet) Contains

func (b *ATNConfigSet) Contains(item *ATNConfig) bool

func (*ATNConfigSet) ContainsFast

func (b *ATNConfigSet) ContainsFast(item *ATNConfig) bool

func (*ATNConfigSet) Equals

func (b *ATNConfigSet) Equals(other Collectable[ATNConfig]) bool

func (*ATNConfigSet) GetPredicates

func (b *ATNConfigSet) GetPredicates() []SemanticContext

func (*ATNConfigSet) GetStates

GetStates returns the set of states represented by all configurations in this config set.

func (*ATNConfigSet) Hash

func (b *ATNConfigSet) Hash() int

func (*ATNConfigSet) OptimizeConfigs

func (b *ATNConfigSet) OptimizeConfigs(interpreter *BaseATNSimulator)

func (*ATNConfigSet) String

func (b *ATNConfigSet) String() string

type ATNConfigSetPair

type ATNConfigSetPair struct {
    // contains filtered or unexported fields
}

type ATNDeserializationOptions

type ATNDeserializationOptions struct {
    // contains filtered or unexported fields
}

func DefaultATNDeserializationOptions

func DefaultATNDeserializationOptions() *ATNDeserializationOptions

func (*ATNDeserializationOptions) GenerateRuleBypassTransitions

func (opts *ATNDeserializationOptions) GenerateRuleBypassTransitions() bool

func (*ATNDeserializationOptions) ReadOnly

func (opts *ATNDeserializationOptions) ReadOnly() bool

func (*ATNDeserializationOptions) SetGenerateRuleBypassTransitions

func (opts *ATNDeserializationOptions) SetGenerateRuleBypassTransitions(generateRuleBypassTransitions bool)

func (*ATNDeserializationOptions) SetReadOnly

func (opts *ATNDeserializationOptions) SetReadOnly(readOnly bool)

func (*ATNDeserializationOptions) SetVerifyATN

func (opts *ATNDeserializationOptions) SetVerifyATN(verifyATN bool)

func (*ATNDeserializationOptions) VerifyATN

func (opts *ATNDeserializationOptions) VerifyATN() bool

type ATNDeserializer

type ATNDeserializer struct {
    // contains filtered or unexported fields
}

func NewATNDeserializer

func NewATNDeserializer(options *ATNDeserializationOptions) *ATNDeserializer

func (*ATNDeserializer) Deserialize

func (a *ATNDeserializer) Deserialize(data []int32) *ATN

type ATNState

type ATNState interface {
    GetEpsilonOnlyTransitions() bool
    GetRuleIndex() int
    SetRuleIndex(int)
    GetNextTokenWithinRule() *IntervalSet
    SetNextTokenWithinRule(*IntervalSet)
    GetATN() *ATN
    SetATN(*ATN)
    GetStateType() int
    GetStateNumber() int
    SetStateNumber(int)
    GetTransitions() []Transition
    SetTransitions([]Transition)
    AddTransition(Transition, int)
    String() string
    Hash() int
    Equals(Collectable[ATNState]) bool
}

type AbstractPredicateTransition

type AbstractPredicateTransition interface {
    Transition
    IAbstractPredicateTransitionFoo()
}

type ActionTransition

type ActionTransition struct {
    BaseTransition
    // contains filtered or unexported fields
}

func NewActionTransition

func NewActionTransition(target ATNState, ruleIndex, actionIndex int, isCtxDependent bool) *ActionTransition

func (*ActionTransition) Matches

func (t *ActionTransition) Matches(_, _, _ int) bool

func (*ActionTransition) String

func (t *ActionTransition) String() string

type AltDict

type AltDict struct {
    // contains filtered or unexported fields
}

func NewAltDict

func NewAltDict() *AltDict

func PredictionModeGetStateToAltMap

func PredictionModeGetStateToAltMap(configs *ATNConfigSet) *AltDict

PredictionModeGetStateToAltMap gets a map from state to alt subset from a configuration set:

for each configuration c in configs:
    map[c.ATNConfig.state] U= c.ATNConfig.alt
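That pseudocode can be sketched in plain Go with ordinary maps; the runtime itself returns an AltDict of BitSets, so the types here are simplified:

```go
package main

import "fmt"

// cfg is a pared-down configuration: just a state and a predicted alt.
type cfg struct{ state, alt int }

// stateToAltMap unions, per ATN state, the alternatives predicted there,
// mirroring: map[c.state] U= c.alt for each configuration c.
func stateToAltMap(configs []cfg) map[int]map[int]bool {
	m := map[int]map[int]bool{}
	for _, c := range configs {
		if m[c.state] == nil {
			m[c.state] = map[int]bool{}
		}
		m[c.state][c.alt] = true
	}
	return m
}

func main() {
	// State 1 predicts alts {1,2}; state 2 predicts only {1}.
	m := stateToAltMap([]cfg{{1, 1}, {1, 2}, {2, 1}})
	fmt.Println(len(m[1]), len(m[2]))
}
```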

func (*AltDict) Get

func (a *AltDict) Get(key string) interface{}

type AtomTransition

type AtomTransition struct {
    BaseTransition
}

AtomTransition TODO: make all transitions sets? no, should remove set edges.

func NewAtomTransition

func NewAtomTransition(target ATNState, intervalSet int) *AtomTransition

func (*AtomTransition) Matches

func (t *AtomTransition) Matches(symbol, _, _ int) bool

func (*AtomTransition) String

func (t *AtomTransition) String() string

type BailErrorStrategy

type BailErrorStrategy struct {
    *DefaultErrorStrategy
}

The BailErrorStrategy implementation of ANTLRErrorStrategy responds to syntax errors by immediately canceling the parse operation with a ParseCancellationException. The implementation ensures that the [ParserRuleContext.exception] field is set for all parse tree nodes that were not completed prior to encountering the error.

This error strategy is useful in the following scenarios.

  • Two-stage parsing: This error strategy allows the first stage of two-stage parsing to immediately terminate if an error is encountered, and immediately fall back to the second stage. In addition to avoiding wasted work by attempting to recover from errors here, the empty implementation of BailErrorStrategy.Sync improves the performance of the first stage.

  • Silent validation: When syntax errors are not being Reported or logged, and the parse result is simply ignored if errors occur, the BailErrorStrategy avoids wasting work on recovering from errors when the result will be ignored either way.

    myparser.SetErrorHandler(NewBailErrorStrategy())

See also: [Parser.SetErrorHandler(ANTLRErrorStrategy)]

func NewBailErrorStrategy

func NewBailErrorStrategy() *BailErrorStrategy

func (*BailErrorStrategy) Recover

func (b *BailErrorStrategy) Recover(recognizer Parser, e RecognitionException)

Recover: instead of recovering from exception e, re-panic it wrapped in a ParseCancellationException so it is not caught by the rule func catches. Use Exception.GetCause() to get the original RecognitionException.

func (*BailErrorStrategy) RecoverInline

func (b *BailErrorStrategy) RecoverInline(recognizer Parser) Token

RecoverInline makes sure we don't attempt to recover inline; if the parser successfully recovers, it won't panic an exception.

func (*BailErrorStrategy) Sync

func (b *BailErrorStrategy) Sync(_ Parser)

Sync makes sure we don't attempt to recover from problems in sub-rules.

type BaseATNConfigComparator

type BaseATNConfigComparator[T Collectable[T]] struct {
}

BaseATNConfigComparator is used as the comparator for the configLookup field of an ATNConfigSet and has a custom Equals() and Hash() implementation, because equality is not based on the standard Hash() and Equals() methods of the ATNConfig type.

func (*BaseATNConfigComparator[T]) Equals2

func (c *BaseATNConfigComparator[T]) Equals2(o1, o2 *ATNConfig) bool

Equals2 is a custom comparator for ATNConfigs specifically for baseATNConfigSet.

func (*BaseATNConfigComparator[T]) Hash1

func (c *BaseATNConfigComparator[T]) Hash1(o *ATNConfig) int

Hash1 is a custom hash implementation for ATNConfigs specifically for configLookup, but in fact just delegates to the standard Hash() method of the ATNConfig type.

type BaseATNSimulator

type BaseATNSimulator struct {
    // contains filtered or unexported fields
}

func (*BaseATNSimulator) ATN

func (b *BaseATNSimulator) ATN() *ATN

func (*BaseATNSimulator) DecisionToDFA

func (b *BaseATNSimulator) DecisionToDFA() []*DFA

func (*BaseATNSimulator) SharedContextCache

func (b *BaseATNSimulator) SharedContextCache() *PredictionContextCache

type BaseATNState

type BaseATNState struct {
    // NextTokenWithinRule caches lookahead during parsing. Not used during construction.
    NextTokenWithinRule *IntervalSet
    // contains filtered or unexported fields
}

func NewATNState added in v4.13.0

func NewATNState() *BaseATNState

func (*BaseATNState) AddTransition

func (as *BaseATNState) AddTransition(trans Transition, index int)

func (*BaseATNState) Equals

func (as *BaseATNState) Equals(other Collectable[ATNState]) bool

func (*BaseATNState) GetATN

func (as *BaseATNState) GetATN() *ATN

func (*BaseATNState) GetEpsilonOnlyTransitions

func (as *BaseATNState) GetEpsilonOnlyTransitions() bool

func (*BaseATNState) GetNextTokenWithinRule

func (as *BaseATNState) GetNextTokenWithinRule() *IntervalSet

func (*BaseATNState) GetRuleIndex

func (as *BaseATNState) GetRuleIndex() int

func (*BaseATNState) GetStateNumber

func (as *BaseATNState) GetStateNumber() int

func (*BaseATNState) GetStateType

func (as *BaseATNState) GetStateType() int

func (*BaseATNState) GetTransitions

func (as *BaseATNState) GetTransitions() []Transition

func (*BaseATNState) Hash

func (as *BaseATNState) Hash() int

func (*BaseATNState) SetATN

func (as *BaseATNState) SetATN(atn *ATN)

func (*BaseATNState) SetNextTokenWithinRule

func (as *BaseATNState) SetNextTokenWithinRule(v *IntervalSet)

func (*BaseATNState) SetRuleIndex

func (as *BaseATNState) SetRuleIndex(v int)

func (*BaseATNState) SetStateNumber

func (as *BaseATNState) SetStateNumber(stateNumber int)

func (*BaseATNState) SetTransitions

func (as *BaseATNState) SetTransitions(t []Transition)

func (*BaseATNState) String

func (as *BaseATNState) String() string

type BaseAbstractPredicateTransition

type BaseAbstractPredicateTransition struct {
    BaseTransition
}

func NewBasePredicateTransition

func NewBasePredicateTransition(target ATNState) *BaseAbstractPredicateTransition

func (*BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo

func (a *BaseAbstractPredicateTransition) IAbstractPredicateTransitionFoo()

type BaseBlockStartState

type BaseBlockStartState struct {
    BaseDecisionState
    // contains filtered or unexported fields
}

BaseBlockStartState is the start of a regular (...) block.

func NewBlockStartState

func NewBlockStartState() *BaseBlockStartState

type BaseDecisionState

type BaseDecisionState struct {
    BaseATNState
    // contains filtered or unexported fields
}

func NewBaseDecisionState

func NewBaseDecisionState() *BaseDecisionState

type BaseInterpreterRuleContext

type BaseInterpreterRuleContext struct {
    *BaseParserRuleContext
}

func NewBaseInterpreterRuleContext

func NewBaseInterpreterRuleContext(parent BaseInterpreterRuleContext, invokingStateNumber, ruleIndex int) *BaseInterpreterRuleContext

type BaseLexer

type BaseLexer struct {
    *BaseRecognizer
    Interpreter         ILexerATNSimulator
    TokenStartCharIndex int
    TokenStartLine      int
    TokenStartColumn    int
    ActionType          int
    Virt                Lexer // The most derived lexer implementation. Allows virtual method calls.
    // contains filtered or unexported fields
}

func NewBaseLexer

func NewBaseLexer(input CharStream) *BaseLexer

func (*BaseLexer) Emit

func (b *BaseLexer) Emit() Token

Emit is the standard method called to automatically emit a token at the outermost lexical rule. The token object should point into the char buffer start..stop. If there is a text override in 'text', use that to set the token's text. Override this method to emit custom Token objects or provide a new factory.

func (*BaseLexer) EmitEOF

func (b *BaseLexer) EmitEOF() Token

EmitEOF emits an EOF token. By default, this is the last token emitted.

func (*BaseLexer) EmitToken

func (b *BaseLexer) EmitToken(token Token)

EmitToken by default does not support multiple emits per [NextToken] invocation for efficiency reasons. Subclass and override this func, [NextToken], and [GetToken] (to push tokens into a list and pull from that list rather than a single variable as this implementation does).
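The list-based buffering that the EmitToken note describes can be sketched with a slice-backed queue; this is illustrative only, and a real lexer subclass would wire such a queue into its NextToken override:

```go
package main

import "fmt"

// tokenQueue buffers emitted tokens so a single rule invocation may emit
// several, with NextToken draining them one at a time.
type tokenQueue struct{ pending []string }

func (q *tokenQueue) emit(tok string) { q.pending = append(q.pending, tok) }

// next pulls the oldest buffered token, or reports that none remain.
func (q *tokenQueue) next() (string, bool) {
	if len(q.pending) == 0 {
		return "", false
	}
	tok := q.pending[0]
	q.pending = q.pending[1:]
	return tok, true
}

func main() {
	var q tokenQueue
	q.emit("ID")
	q.emit("DEDENT") // a second token from the same rule invocation
	for {
		tok, ok := q.next()
		if !ok {
			break
		}
		fmt.Println(tok)
	}
}
```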

func (*BaseLexer) GetATN

func (b *BaseLexer) GetATN() *ATN

GetATN returns the ATN used by the lexer.

func (*BaseLexer) GetAllTokens

func (b *BaseLexer) GetAllTokens() []Token

GetAllTokens returns a list of all Token objects in the input char stream. Forces a load of all tokens that can be made from the input char stream.

Does not include the EOF token.

func (*BaseLexer) GetCharIndex

func (b *BaseLexer) GetCharIndex() int

GetCharIndex returns the index of the current character of lookahead.

func (*BaseLexer) GetCharPositionInLine

func (b *BaseLexer) GetCharPositionInLine() int

GetCharPositionInLine returns the current position in the current line as far as the lexer is concerned.

func (*BaseLexer) GetInputStream

func (b *BaseLexer) GetInputStream() CharStream

func (*BaseLexer) GetInterpreter

func (b *BaseLexer) GetInterpreter() ILexerATNSimulator

func (*BaseLexer) GetLine

func (b *BaseLexer) GetLine() int

func (*BaseLexer) GetSourceName

func (b *BaseLexer) GetSourceName() string

func (*BaseLexer) GetText

func (b *BaseLexer) GetText() string

GetText returns the text Matched so far for the current token or any text override.

func (*BaseLexer) GetTokenFactory

func (b *BaseLexer) GetTokenFactory() TokenFactory

func (*BaseLexer) GetTokenSourceCharStreamPair

func (b *BaseLexer) GetTokenSourceCharStreamPair() *TokenSourceCharStreamPair

func (*BaseLexer) GetType

func (b *BaseLexer) GetType() int

func (*BaseLexer) More

func (b *BaseLexer) More()

func (*BaseLexer) NextToken

func (b *BaseLexer) NextToken() Token

NextToken returns a token from the lexer input source, i.e., Match a token on the source char stream.

func (*BaseLexer) PopMode

func (b *BaseLexer) PopMode() int

PopMode restores the lexer mode saved by a call to [PushMode]. It is a panic error if there is no saved mode to return to.

func (*BaseLexer) PushMode

func (b *BaseLexer) PushMode(m int)

PushMode saves the current lexer mode so that it can be restored later, see [PopMode], then sets the current lexer mode to the supplied mode m.
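The PushMode/PopMode pair described above is a simple stack over the current mode, which can be sketched as follows (a simplified model, not the BaseLexer implementation; like the real lexer, it panics when there is no saved mode):

```go
package main

import "fmt"

const defaultMode = 0

// modeLexer keeps the current mode plus a stack of saved modes.
type modeLexer struct {
	mode  int
	stack []int
}

// pushMode saves the current mode and switches to m.
func (l *modeLexer) pushMode(m int) {
	l.stack = append(l.stack, l.mode)
	l.mode = m
}

// popMode restores the most recently saved mode, panicking when none exists.
func (l *modeLexer) popMode() int {
	if len(l.stack) == 0 {
		panic("lexer mode stack is empty")
	}
	l.mode = l.stack[len(l.stack)-1]
	l.stack = l.stack[:len(l.stack)-1]
	return l.mode
}

func main() {
	l := &modeLexer{mode: defaultMode}
	l.pushMode(2) // e.g. enter a string-literal mode
	fmt.Println(l.mode)
	l.popMode() // back to the default mode
	fmt.Println(l.mode)
}
```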

func (*BaseLexer) Recover

func (b *BaseLexer) Recover(re RecognitionException)

Recover can normally Match any char in its vocabulary after Matching a token, so here we do the easy thing and just kill a character and hope it all works out. You can instead use the rule invocation stack to do sophisticated error recovery if you are in a fragment rule.

In general, lexers should not need to recover and should have rules that cover any eventuality, such as a character that makes no sense to the recognizer.

func (*BaseLexer) Reset added in v4.13.0

func (b *BaseLexer) Reset()

func (*BaseLexer) SetChannel

func (b *BaseLexer) SetChannel(v int)

func (*BaseLexer) SetInputStream

func (b *BaseLexer) SetInputStream(input CharStream)

SetInputStream resets the lexer input stream and associated lexer state.

func (*BaseLexer) SetMode

func (b *BaseLexer) SetMode(m int)

SetMode changes the lexer to a new mode. The lexer will use this mode from here on in, and the rules for that mode will be in force.

func (*BaseLexer) SetText

func (b *BaseLexer) SetText(text string)

SetText sets the complete text of this token; it wipes any previous changes to the text.

func (*BaseLexer) SetType

func (b *BaseLexer) SetType(t int)

func (*BaseLexer) Skip

func (b *BaseLexer) Skip()

Skip instructs the lexer to Skip creating a token for the current lexer rule and look for another token. [NextToken] knows to keep looking when a lexer rule finishes with token set to [SKIPTOKEN]. Recall that if token == nil at the end of any token rule, it creates one for you and emits it.

type BaseLexerAction

type BaseLexerAction struct {
    // contains filtered or unexported fields
}

func NewBaseLexerAction

func NewBaseLexerAction(action int) *BaseLexerAction

func (*BaseLexerAction) Equals

func (b *BaseLexerAction) Equals(other LexerAction) bool

func (*BaseLexerAction) Hash

func (b *BaseLexerAction) Hash() int

type BaseParseTreeListener

type BaseParseTreeListener struct{}

func (*BaseParseTreeListener) EnterEveryRule

func (l *BaseParseTreeListener) EnterEveryRule(_ ParserRuleContext)

func (*BaseParseTreeListener) ExitEveryRule

func (l *BaseParseTreeListener) ExitEveryRule(_ ParserRuleContext)

func (*BaseParseTreeListener) VisitErrorNode

func (l *BaseParseTreeListener) VisitErrorNode(_ ErrorNode)

func (*BaseParseTreeListener) VisitTerminal

func (l *BaseParseTreeListener) VisitTerminal(_ TerminalNode)

type BaseParseTreeVisitor

type BaseParseTreeVisitor struct{}

func (*BaseParseTreeVisitor) Visit

func (v *BaseParseTreeVisitor) Visit(tree ParseTree) interface{}

func (*BaseParseTreeVisitor) VisitChildren

func (v *BaseParseTreeVisitor) VisitChildren(_ RuleNode) interface{}

func (*BaseParseTreeVisitor) VisitErrorNode

func (v *BaseParseTreeVisitor) VisitErrorNode(_ ErrorNode) interface{}

func (*BaseParseTreeVisitor) VisitTerminal

func (v *BaseParseTreeVisitor) VisitTerminal(_ TerminalNode) interface{}

type BaseParser

type BaseParser struct {
	*BaseRecognizer
	Interpreter     *ParserATNSimulator
	BuildParseTrees bool
	// contains filtered or unexported fields
}

func NewBaseParser

func NewBaseParser(input TokenStream) *BaseParser

NewBaseParser contains all the parsing support code to embed in parsers. Essentially most of it is error recovery stuff.

func (*BaseParser) AddParseListener

func (p *BaseParser) AddParseListener(listener ParseTreeListener)

AddParseListener registers listener to receive events during the parsing process.

To support output-preserving grammar transformations (including but not limited to left-recursion removal, automated left-factoring, and optimized code generation), calls to listener methods during the parse may differ substantially from calls made by [ParseTreeWalker.DEFAULT] used after the parse is complete. In particular, rule entry and exit events may occur in a different order during the parse than after the parse. In addition, calls to certain rule entry methods may be omitted.

With the following specific exceptions, calls to listener events are deterministic, i.e. for identical input the calls to listener methods will be the same.

  • Alterations to the grammar used to generate code may change the behavior of the listener calls.
  • Alterations to the command line options passed to ANTLR 4 when generating the parser may change the behavior of the listener calls.
  • Changing the version of the ANTLR Tool used to generate the parser may change the behavior of the listener calls.
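As a minimal standalone sketch of the in-flight dispatch described above — not the runtime's actual types, just a trimmed-down stand-in showing how registered listeners receive enter/exit events as rules are parsed:

```go
package main

import "fmt"

// Listener is a hypothetical, trimmed-down stand-in for ParseTreeListener.
type Listener interface {
	EnterEveryRule(rule string)
	ExitEveryRule(rule string)
}

// printListener records the order in which events arrive.
type printListener struct{ events []string }

func (l *printListener) EnterEveryRule(rule string) { l.events = append(l.events, "enter "+rule) }
func (l *printListener) ExitEveryRule(rule string)  { l.events = append(l.events, "exit "+rule) }

// parser mimics BaseParser's listener registry: every registered
// listener is notified as rules are entered and exited during the parse.
type parser struct{ listeners []Listener }

func (p *parser) AddParseListener(l Listener) { p.listeners = append(p.listeners, l) }

func (p *parser) triggerEnter(rule string) {
	for _, l := range p.listeners {
		l.EnterEveryRule(rule)
	}
}

func (p *parser) triggerExit(rule string) {
	for _, l := range p.listeners {
		l.ExitEveryRule(rule)
	}
}

func main() {
	p := &parser{}
	l := &printListener{}
	p.AddParseListener(l)
	// Simulate parsing rule "expr" containing rule "term".
	p.triggerEnter("expr")
	p.triggerEnter("term")
	p.triggerExit("term")
	p.triggerExit("expr")
	fmt.Println(l.events)
}
```

Note that in the real runtime the event order during the parse may differ from a post-parse walk, for the reasons listed above.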

func (*BaseParser) Consume

func (p *BaseParser) Consume() Token

func (*BaseParser) DumpDFA

func (p *BaseParser) DumpDFA()

DumpDFA prints the whole of the DFA for debugging.

func (*BaseParser) EnterOuterAlt

func (p *BaseParser) EnterOuterAlt(localctx ParserRuleContext, altNum int)

func (*BaseParser) EnterRecursionRule

func (p *BaseParser) EnterRecursionRule(localctx ParserRuleContext, state, _, precedence int)

func (*BaseParser) EnterRule

func (p *BaseParser) EnterRule(localctx ParserRuleContext, state, _ int)

func (*BaseParser) ExitRule

func (p *BaseParser) ExitRule()

func (*BaseParser) GetATN

func (p *BaseParser) GetATN() *ATN

func (*BaseParser) GetATNWithBypassAlts

func (p *BaseParser) GetATNWithBypassAlts()

GetATNWithBypassAlts - the ATN with bypass alternatives is expensive to create, so we create it lazily.

func (*BaseParser) GetCurrentToken

func (p *BaseParser) GetCurrentToken() Token

GetCurrentToken returns the current token at LT(1).

[Match] needs to return the current input symbol, which gets put into the label for the associated token ref, e.g., x=ID.

func (*BaseParser) GetDFAStrings

func (p *BaseParser) GetDFAStrings() string

GetDFAStrings returns a list of all DFA states, used for debugging purposes.

func (*BaseParser) GetErrorHandler

func (p *BaseParser) GetErrorHandler() ErrorStrategy

func (*BaseParser) GetExpectedTokens

func (p *BaseParser) GetExpectedTokens() *IntervalSet

GetExpectedTokens returns the set of input symbols which could follow the current parser state and context, as given by [GetState] and [GetContext], respectively.

func (*BaseParser) GetExpectedTokensWithinCurrentRule

func (p *BaseParser) GetExpectedTokensWithinCurrentRule() *IntervalSet

func (*BaseParser) GetInputStream

func (p *BaseParser) GetInputStream() IntStream

func (*BaseParser) GetInterpreter

func (p *BaseParser) GetInterpreter() *ParserATNSimulator

func (*BaseParser) GetInvokingContext

func (p *BaseParser) GetInvokingContext(ruleIndex int) ParserRuleContext

func (*BaseParser) GetParseListeners

func (p *BaseParser) GetParseListeners() []ParseTreeListener

func (*BaseParser) GetParserRuleContext

func (p *BaseParser) GetParserRuleContext() ParserRuleContext

func (*BaseParser) GetPrecedence

func (p *BaseParser) GetPrecedence() int

func (*BaseParser) GetRuleIndex

func (p *BaseParser) GetRuleIndex(ruleName string) int

GetRuleIndex gets a rule's index (i.e., the RULE_ruleName field), or -1 if not found.

func (*BaseParser) GetRuleInvocationStack

func (p *BaseParser) GetRuleInvocationStack(c ParserRuleContext) []string

GetRuleInvocationStack returns a list of the rule names in your parser instance leading up to a call to the current rule. You could override it if you want more details, such as the file/line info of where in the ATN a rule is invoked.

func (*BaseParser) GetSourceName

func (p *BaseParser) GetSourceName() string

func (*BaseParser) GetTokenFactory

func (p *BaseParser) GetTokenFactory() TokenFactory

func (*BaseParser) GetTokenStream

func (p *BaseParser) GetTokenStream() TokenStream

func (*BaseParser) IsExpectedToken

func (p *BaseParser) IsExpectedToken(symbol int) bool

IsExpectedToken checks whether symbol can follow the current state in the [ATN]. The behavior of this method is equivalent to the following, but is implemented such that the complete context-sensitive follow set does not need to be explicitly constructed:

	return getExpectedTokens().contains(symbol)

func (*BaseParser) Match

func (p *BaseParser) Match(ttype int) Token

func (*BaseParser) MatchWildcard

func (p *BaseParser) MatchWildcard() Token

func (*BaseParser) NotifyErrorListeners

func (p *BaseParser) NotifyErrorListeners(msg string, offendingToken Token, err RecognitionException)

func (*BaseParser) Precpred

func (p *BaseParser) Precpred(_ RuleContext, precedence int) bool

func (*BaseParser) PushNewRecursionContext

func (p *BaseParser) PushNewRecursionContext(localctx ParserRuleContext, state, _ int)

func (*BaseParser) RemoveParseListener

func (p *BaseParser) RemoveParseListener(listener ParseTreeListener)

RemoveParseListener removes listener from the list of parse listeners.

If listener is nil or has not been added as a parse listener, this func does nothing.

func (*BaseParser) SetErrorHandler

func (p *BaseParser) SetErrorHandler(e ErrorStrategy)

func (*BaseParser) SetInputStream

func (p *BaseParser) SetInputStream(input TokenStream)

func (*BaseParser) SetParserRuleContext

func (p *BaseParser) SetParserRuleContext(v ParserRuleContext)

func (*BaseParser) SetTokenStream

func (p *BaseParser) SetTokenStream(input TokenStream)

SetTokenStream installs input as the token stream and resets the parser.

func (*BaseParser) SetTrace

func (p *BaseParser) SetTrace(trace *TraceListener)

SetTrace installs a trace listener for the parse.

During a parse it is sometimes useful to listen in on the rule entry and exit events as well as token Matches. This is for quick and dirty debugging.

func (*BaseParser) TriggerEnterRuleEvent

func (p *BaseParser) TriggerEnterRuleEvent()

TriggerEnterRuleEvent notifies all parse listeners of an enter rule event.

func (*BaseParser) TriggerExitRuleEvent

func (p *BaseParser) TriggerExitRuleEvent()

TriggerExitRuleEvent notifies any parse listeners of an exit rule event.

func (*BaseParser) UnrollRecursionContexts

func (p *BaseParser) UnrollRecursionContexts(parentCtx ParserRuleContext)

type BaseParserRuleContext

type BaseParserRuleContext struct {
	RuleIndex int
	// contains filtered or unexported fields
}

func NewBaseParserRuleContext

func NewBaseParserRuleContext(parent ParserRuleContext, invokingStateNumber int) *BaseParserRuleContext

func (*BaseParserRuleContext) Accept

func (prc *BaseParserRuleContext) Accept(visitor ParseTreeVisitor) interface{}

func (*BaseParserRuleContext) AddChild

func (*BaseParserRuleContext) AddErrorNode

func (prc *BaseParserRuleContext) AddErrorNode(badToken Token) *ErrorNodeImpl

func (*BaseParserRuleContext) AddTokenNode

func (prc *BaseParserRuleContext) AddTokenNode(token Token) *TerminalNodeImpl

func (*BaseParserRuleContext) CopyFrom

func (prc *BaseParserRuleContext) CopyFrom(ctx *BaseParserRuleContext)

func (*BaseParserRuleContext) EnterRule

EnterRule is called when any rule is entered.

func (*BaseParserRuleContext) ExitRule

ExitRule is called when any rule is exited.

func (*BaseParserRuleContext) GetAltNumber added in v4.13.0

func (prc *BaseParserRuleContext) GetAltNumber() int

func (*BaseParserRuleContext) GetChild

func (prc *BaseParserRuleContext) GetChild(i int) Tree

func (*BaseParserRuleContext) GetChildCount

func (prc *BaseParserRuleContext) GetChildCount() int

func (*BaseParserRuleContext) GetChildOfType

func (prc *BaseParserRuleContext) GetChildOfType(i int, childType reflect.Type) RuleContext

func (*BaseParserRuleContext) GetChildren

func (prc *BaseParserRuleContext) GetChildren() []Tree

func (*BaseParserRuleContext) GetInvokingState added in v4.13.0

func (prc *BaseParserRuleContext) GetInvokingState() int

func (*BaseParserRuleContext) GetParent added in v4.13.0

func (prc *BaseParserRuleContext) GetParent() Tree

GetParent returns the parent of this rule context in the parse tree, or nil if this context has no parent.

func (*BaseParserRuleContext) GetPayload

func (prc *BaseParserRuleContext) GetPayload() interface{}

func (*BaseParserRuleContext) GetRuleContext

func (prc *BaseParserRuleContext) GetRuleContext() RuleContext

func (*BaseParserRuleContext) GetRuleIndex added in v4.13.0

func (prc *BaseParserRuleContext) GetRuleIndex() int

func (*BaseParserRuleContext) GetSourceInterval

func (prc *BaseParserRuleContext) GetSourceInterval() Interval

func (*BaseParserRuleContext) GetStart

func (prc *BaseParserRuleContext) GetStart() Token

func (*BaseParserRuleContext) GetStop

func (prc *BaseParserRuleContext) GetStop() Token

func (*BaseParserRuleContext) GetText

func (prc *BaseParserRuleContext) GetText() string

GetText returns the combined text of all child nodes. This method only considers tokens which have been added to the parse tree.

Since tokens on hidden channels (e.g. whitespace or comments) are not added to the parse tree, they will not appear in the output of this method.

func (*BaseParserRuleContext) GetToken

func (prc *BaseParserRuleContext) GetToken(ttype int, i int) TerminalNode

func (*BaseParserRuleContext) GetTokens

func (prc *BaseParserRuleContext) GetTokens(ttype int) []TerminalNode

func (*BaseParserRuleContext) GetTypedRuleContext

func (prc *BaseParserRuleContext) GetTypedRuleContext(ctxType reflect.Type, i int) RuleContext

func (*BaseParserRuleContext) GetTypedRuleContexts

func (prc *BaseParserRuleContext) GetTypedRuleContexts(ctxType reflect.Type) []RuleContext

func (*BaseParserRuleContext) IsEmpty added in v4.13.0

func (prc *BaseParserRuleContext) IsEmpty() bool

IsEmpty returns true if the context of prc is empty.

A context is empty if there is no invoking state, meaning nobody invoked the current context.

func (*BaseParserRuleContext) RemoveLastChild

func (prc *BaseParserRuleContext) RemoveLastChild()

RemoveLastChild is used by [EnterOuterAlt] to toss out a RuleContext previously added as we entered a rule. If we have a label, we will need to remove the generic ruleContext object.

func (*BaseParserRuleContext) SetAltNumber added in v4.13.0

func (prc *BaseParserRuleContext) SetAltNumber(_ int)

func (*BaseParserRuleContext) SetException

func (prc *BaseParserRuleContext) SetException(e RecognitionException)

func (*BaseParserRuleContext) SetInvokingState added in v4.13.0

func (prc *BaseParserRuleContext) SetInvokingState(t int)

func (*BaseParserRuleContext) SetParent added in v4.13.0

func (prc *BaseParserRuleContext) SetParent(v Tree)

func (*BaseParserRuleContext) SetStart

func (prc *BaseParserRuleContext) SetStart(t Token)

func (*BaseParserRuleContext) SetStop

func (prc *BaseParserRuleContext) SetStop(t Token)

func (*BaseParserRuleContext) String

func (prc *BaseParserRuleContext) String(ruleNames []string, stop RuleContext) string

func (*BaseParserRuleContext) ToStringTree

func (prc *BaseParserRuleContext) ToStringTree(ruleNames []string, recog Recognizer) string

type BaseRecognitionException

type BaseRecognitionException struct {
	// contains filtered or unexported fields
}

func NewBaseRecognitionException

func NewBaseRecognitionException(message string, recognizer Recognizer, input IntStream, ctx RuleContext) *BaseRecognitionException

func (*BaseRecognitionException) GetInputStream

func (b *BaseRecognitionException) GetInputStream() IntStream

func (*BaseRecognitionException) GetMessage

func (b *BaseRecognitionException) GetMessage() string

func (*BaseRecognitionException) GetOffendingToken

func (b *BaseRecognitionException) GetOffendingToken() Token

func (*BaseRecognitionException) String

type BaseRecognizer

type BaseRecognizer struct {
	RuleNames       []string
	LiteralNames    []string
	SymbolicNames   []string
	GrammarFileName string
	SynErr          RecognitionException
	// contains filtered or unexported fields
}

func NewBaseRecognizer

func NewBaseRecognizer() *BaseRecognizer

func (*BaseRecognizer) Action

func (b *BaseRecognizer) Action(_ RuleContext, _, _ int)

func (*BaseRecognizer) AddErrorListener

func (b *BaseRecognizer) AddErrorListener(listener ErrorListener)

func (*BaseRecognizer) GetError added in v4.13.0

func (*BaseRecognizer) GetErrorHeader

func (b *BaseRecognizer) GetErrorHeader(e RecognitionException) string

GetErrorHeader returns the error header, normally line/character position information.

Can be overridden in sub structs embedding BaseRecognizer.

func (*BaseRecognizer) GetErrorListenerDispatch

func (b *BaseRecognizer) GetErrorListenerDispatch() ErrorListener

func (*BaseRecognizer) GetLiteralNames

func (b *BaseRecognizer) GetLiteralNames() []string

func (*BaseRecognizer) GetRuleIndexMap

func (b *BaseRecognizer) GetRuleIndexMap() map[string]int

GetRuleIndexMap gets a map from rule names to rule indexes.

Used for XPath and tree pattern compilation.

TODO: JI This is not yet implemented in the Go runtime. Maybe not needed.

func (*BaseRecognizer) GetRuleNames

func (b *BaseRecognizer) GetRuleNames() []string

func (*BaseRecognizer) GetState

func (b *BaseRecognizer) GetState() int

func (*BaseRecognizer) GetSymbolicNames

func (b *BaseRecognizer) GetSymbolicNames() []string

func (*BaseRecognizer) GetTokenErrorDisplay deprecated

func (b *BaseRecognizer) GetTokenErrorDisplay(t Token) string

GetTokenErrorDisplay shows how a token should be displayed in an error message.

The default is to display just the text, but during development you might want to have a lot of information spit out. Override in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects because you don't have to go modify your lexer so that it creates a new token type.

Deprecated: This method is not called by the ANTLR 4 Runtime. Specific implementations of [ANTLRErrorStrategy] may provide a similar feature when necessary. For example, see DefaultErrorStrategy.GetTokenErrorDisplay()

func (*BaseRecognizer) GetTokenNames

func (b *BaseRecognizer) GetTokenNames() []string

func (*BaseRecognizer) GetTokenType

func (b *BaseRecognizer) GetTokenType(_ string) int

GetTokenType gets the token type based upon its name.

func (*BaseRecognizer) HasError added in v4.13.0

func (b *BaseRecognizer) HasError() bool

func (*BaseRecognizer) Precpred

func (b *BaseRecognizer) Precpred(_ RuleContext, _ int) bool

Precpred - embedding structs need to override this if there are precedence predicates that the ATN interpreter needs to execute.

func (*BaseRecognizer) RemoveErrorListeners

func (b *BaseRecognizer) RemoveErrorListeners()

func (*BaseRecognizer) Sempred

func (b *BaseRecognizer) Sempred(_ RuleContext, _ int, _ int) bool

Sempred - embedding structs need to override this if there are sempreds or actions that the ATN interpreter needs to execute.

func (*BaseRecognizer) SetError added in v4.13.0

func (b *BaseRecognizer) SetError(err RecognitionException)

func (*BaseRecognizer) SetState

func (b *BaseRecognizer) SetState(v int)

type BaseRewriteOperation

type BaseRewriteOperation struct {
	// contains filtered or unexported fields
}

func (*BaseRewriteOperation) Execute

func (op *BaseRewriteOperation) Execute(_ *bytes.Buffer) int

func (*BaseRewriteOperation) GetIndex

func (op *BaseRewriteOperation) GetIndex() int

func (*BaseRewriteOperation) GetInstructionIndex

func (op *BaseRewriteOperation) GetInstructionIndex() int

func (*BaseRewriteOperation) GetOpName

func (op *BaseRewriteOperation) GetOpName() string

func (*BaseRewriteOperation) GetText

func (op *BaseRewriteOperation) GetText() string

func (*BaseRewriteOperation) GetTokens

func (op *BaseRewriteOperation) GetTokens() TokenStream

func (*BaseRewriteOperation) SetIndex

func (op *BaseRewriteOperation) SetIndex(val int)

func (*BaseRewriteOperation) SetInstructionIndex

func (op *BaseRewriteOperation) SetInstructionIndex(val int)

func (*BaseRewriteOperation) SetOpName

func (op *BaseRewriteOperation) SetOpName(val string)

func (*BaseRewriteOperation) SetText

func (op *BaseRewriteOperation) SetText(val string)

func (*BaseRewriteOperation) SetTokens

func (op *BaseRewriteOperation) SetTokens(val TokenStream)

func (*BaseRewriteOperation) String

func (op *BaseRewriteOperation) String() string

type BaseToken

type BaseToken struct {
	// contains filtered or unexported fields
}

func (*BaseToken) GetChannel

func (b *BaseToken) GetChannel() int

func (*BaseToken) GetColumn

func (b *BaseToken) GetColumn() int

func (*BaseToken) GetInputStream

func (b *BaseToken) GetInputStream() CharStream

func (*BaseToken) GetLine

func (b *BaseToken) GetLine() int

func (*BaseToken) GetSource

func (b *BaseToken) GetSource() *TokenSourceCharStreamPair

func (*BaseToken) GetStart

func (b *BaseToken) GetStart() int

func (*BaseToken) GetStop

func (b *BaseToken) GetStop() int

func (*BaseToken) GetText added in v4.13.1

func (b *BaseToken) GetText() string

func (*BaseToken) GetTokenIndex

func (b *BaseToken) GetTokenIndex() int

func (*BaseToken) GetTokenSource

func (b *BaseToken) GetTokenSource() TokenSource

func (*BaseToken) GetTokenType

func (b *BaseToken) GetTokenType() int

func (*BaseToken) SetText added in v4.13.1

func (b *BaseToken) SetText(text string)

func (*BaseToken) SetTokenIndex

func (b *BaseToken) SetTokenIndex(v int)

func (*BaseToken) String added in v4.13.1

func (b *BaseToken) String() string

type BaseTransition

type BaseTransition struct {
	// contains filtered or unexported fields
}

func NewBaseTransition

func NewBaseTransition(target ATNState) *BaseTransition

func (*BaseTransition) Matches

func (t *BaseTransition) Matches(_, _, _ int) bool

type BasicBlockStartState

type BasicBlockStartState struct {
	BaseBlockStartState
}

func NewBasicBlockStartState

func NewBasicBlockStartState() *BasicBlockStartState

type BasicState

type BasicState struct {
	BaseATNState
}

func NewBasicState

func NewBasicState() *BasicState

type BitSet

type BitSet struct {
	// contains filtered or unexported fields
}

func NewBitSet

func NewBitSet() *BitSet

NewBitSet creates a new bitwise set.

TODO: See if we can replace this with the standard library's BitSet.

func PredictionModeGetAlts

func PredictionModeGetAlts(altsets []*BitSet) *BitSet

PredictionModeGetAlts returns the complete set of represented alternatives for a collection of alternative subsets. This method returns the union of each BitSet in altsets, being the set of represented alternatives in altsets.
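The union described above can be sketched with a plain map-backed set standing in for the runtime's BitSet (the helper name and int-slice representation are illustrative, not the runtime's API):

```go
package main

import (
	"fmt"
	"sort"
)

// unionAlts returns the union of several alternative subsets,
// mirroring what PredictionModeGetAlts computes over []*BitSet.
func unionAlts(altsets [][]int) []int {
	seen := map[int]bool{}
	for _, set := range altsets {
		for _, alt := range set {
			seen[alt] = true
		}
	}
	out := make([]int, 0, len(seen))
	for alt := range seen {
		out = append(out, alt)
	}
	sort.Ints(out)
	return out
}

func main() {
	// Two decision subsets predicting overlapping alternatives.
	fmt.Println(unionAlts([][]int{{1, 2}, {2, 3}})) // [1 2 3]
}
```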

func PredictionModegetConflictingAltSubsets

func PredictionModegetConflictingAltSubsets(configs *ATNConfigSet) []*BitSet

PredictionModegetConflictingAltSubsets gets the conflicting alt subsets from a configuration set.

	for each configuration c in configs:
	    map[c] U= c.ATNConfig.alt // map hash/equals uses s and x, not alt and not pred
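The pseudocode above buckets each configuration's alt under a key built from state and context, never the alt itself. A standalone sketch with simplified config values (the runtime's real key also involves the semantic context):

```go
package main

import (
	"fmt"
	"sort"
)

// config is a simplified ATNConfig: the grouping key is state+context,
// never the alt.
type config struct {
	state, ctx, alt int
}

// conflictingAltSubsets buckets each config's alt by its (state, ctx) key,
// mirroring what PredictionModegetConflictingAltSubsets computes.
func conflictingAltSubsets(configs []config) [][]int {
	buckets := map[[2]int]map[int]bool{}
	for _, c := range configs {
		key := [2]int{c.state, c.ctx}
		if buckets[key] == nil {
			buckets[key] = map[int]bool{}
		}
		buckets[key][c.alt] = true
	}
	var out [][]int
	for _, alts := range buckets {
		set := make([]int, 0, len(alts))
		for a := range alts {
			set = append(set, a)
		}
		sort.Ints(set)
		out = append(out, set)
	}
	return out
}

func main() {
	// The same state/context reached via alts 1 and 2 yields
	// a conflicting subset {1, 2}.
	subsets := conflictingAltSubsets([]config{
		{state: 5, ctx: 0, alt: 1},
		{state: 5, ctx: 0, alt: 2},
	})
	fmt.Println(subsets) // [[1 2]]
}
```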

func (*BitSet) String

func (b *BitSet) String() string

type BlockEndState

type BlockEndState struct {
	BaseATNState
	// contains filtered or unexported fields
}

BlockEndState is a terminal node of a simple (a|b|c) block.

func NewBlockEndState

func NewBlockEndState() *BlockEndState

type BlockStartState

type BlockStartState interface {
	DecisionState
	// contains filtered or unexported methods
}

type CharStream

type CharStream interface {
	IntStream
	GetText(int, int) string
	GetTextFromTokens(start, end Token) string
	GetTextFromInterval(Interval) string
}

type ClosureBusy added in v4.13.0

type ClosureBusy struct {
	// contains filtered or unexported fields
}

ClosureBusy is a store of ATNConfigs and is a tiny abstraction layer over a standard JStore so that we can use lazy instantiation of the JStore, mostly to avoid polluting the stats module with a ton of JStore instances with nothing in them.

func NewClosureBusy added in v4.13.0

func NewClosureBusy(desc string) *ClosureBusy

NewClosureBusy creates a new ClosureBusy instance used to avoid infinite recursion for right-recursive rules.

func (*ClosureBusy) Put added in v4.13.0

func (c *ClosureBusy) Put(config *ATNConfig) (*ATNConfig, bool)

type Collectable

type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

Collectable is an interface that a struct should implement if it is to be usable as a key in these collections.
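A hedged sketch of what implementing this contract might look like for your own key type. The interface shape follows the declaration above; the example `symbol` type and its rune-based hash are purely illustrative. The essential rule is that values comparing equal must hash equally:

```go
package main

import "fmt"

// Collectable mirrors the interface declared above.
type Collectable[T any] interface {
	Hash() int
	Equals(other Collectable[T]) bool
}

// symbol is an illustrative key type, not part of the runtime.
type symbol struct {
	name string
}

// Hash must be consistent with Equals: equal values hash equally.
func (s *symbol) Hash() int {
	h := 0
	for _, r := range s.name {
		h = h*31 + int(r)
	}
	return h
}

// Equals compares against another Collectable of the same key type.
func (s *symbol) Equals(other Collectable[*symbol]) bool {
	o, ok := other.(*symbol)
	return ok && o.name == s.name
}

func main() {
	a, b := &symbol{name: "ID"}, &symbol{name: "ID"}
	fmt.Println(a.Equals(b), a.Hash() == b.Hash()) // true true
}
```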

type CollectionDescriptor added in v4.13.0

type CollectionDescriptor struct {
	SybolicName string
	Description string
}

type CollectionSource added in v4.13.0

type CollectionSource int

const (
	UnknownCollection CollectionSource = iota
	ATNConfigLookupCollection
	ATNStateCollection
	DFAStateCollection
	ATNConfigCollection
	PredictionContextCollection
	SemanticContextCollection
	ClosureBusyCollection
	PredictionVisitedCollection
	MergeCacheCollection
	PredictionContextCacheCollection
	AltSetCollection
	ReachSetCollection
)

type CommonToken

type CommonToken struct {
	BaseToken
}

func NewCommonToken

func NewCommonToken(source *TokenSourceCharStreamPair, tokenType, channel, start, stop int) *CommonToken

type CommonTokenFactory

type CommonTokenFactory struct {
	// contains filtered or unexported fields
}

CommonTokenFactory is the default TokenFactory implementation.

func NewCommonTokenFactory

func NewCommonTokenFactory(copyText bool) *CommonTokenFactory

func (*CommonTokenFactory) Create

func (c *CommonTokenFactory) Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token

type CommonTokenStream

type CommonTokenStream struct {
	// contains filtered or unexported fields
}

CommonTokenStream is an implementation of TokenStream that loads tokens from a TokenSource on-demand and places the tokens in a buffer to provide access to any previous token by index. This token stream ignores the value of Token.getChannel. If your parser requires the token stream to filter tokens to only those on a particular channel, such as Token.DEFAULT_CHANNEL or Token.HIDDEN_CHANNEL, use a filtering token stream such as CommonTokenStream.
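The on-demand buffering behavior can be sketched without the runtime: tokens are pulled from the source only as far as an access requires, but stay buffered for later access by index. All names here are illustrative stand-ins, with int tokens and -1 as EOF:

```go
package main

import "fmt"

// tokenSource is a stand-in for TokenSource: it yields ints, -1 at EOF.
type tokenSource struct {
	tokens []int
	pos    int
}

func (s *tokenSource) next() int {
	if s.pos >= len(s.tokens) {
		return -1 // EOF
	}
	t := s.tokens[s.pos]
	s.pos++
	return t
}

// stream buffers tokens pulled on demand, as CommonTokenStream does.
type stream struct {
	source *tokenSource
	buf    []int
}

// sync makes sure index i is buffered, pulling from the source as needed.
func (st *stream) sync(i int) {
	for len(st.buf) <= i {
		st.buf = append(st.buf, st.source.next())
	}
}

// get returns the token at index i, buffering lazily.
func (st *stream) get(i int) int {
	st.sync(i)
	return st.buf[i]
}

func main() {
	st := &stream{source: &tokenSource{tokens: []int{10, 20, 30}}}
	fmt.Println(st.get(2)) // pulls three tokens and keeps them buffered
	fmt.Println(st.get(0)) // already buffered; no further pull
}
```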

func NewCommonTokenStream

func NewCommonTokenStream(lexer Lexer, channel int) *CommonTokenStream

NewCommonTokenStream creates a new CommonTokenStream instance using the supplied lexer to produce tokens and will pull tokens from the given lexer channel.

func (*CommonTokenStream) Consume

func (c *CommonTokenStream) Consume()

func (*CommonTokenStream) Fill

func (c *CommonTokenStream) Fill()

Fill gets all tokens from the lexer until EOF.

func (*CommonTokenStream) Get

func (c *CommonTokenStream) Get(index int) Token

func (*CommonTokenStream) GetAllText

func (c *CommonTokenStream) GetAllText() string

func (*CommonTokenStream) GetAllTokens

func (c *CommonTokenStream) GetAllTokens() []Token

GetAllTokens returns all tokens currently pulled from the token source.

func (*CommonTokenStream) GetHiddenTokensToLeft

func (c *CommonTokenStream) GetHiddenTokensToLeft(tokenIndex, channel int) []Token

GetHiddenTokensToLeft collects all tokens on channel to the left of the current token until we see a token on DEFAULT_TOKEN_CHANNEL. If channel is -1, it finds any non-default channel token.

func (*CommonTokenStream) GetHiddenTokensToRight

func (c *CommonTokenStream) GetHiddenTokensToRight(tokenIndex, channel int) []Token

GetHiddenTokensToRight collects all tokens on a specified channel to the right of the current token up until we see a token on DEFAULT_TOKEN_CHANNEL or EOF. If channel is -1, it finds any non-default channel token.
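The rightward scan can be sketched over a plain token slice; channel 0 plays the role of DEFAULT_TOKEN_CHANNEL, and the `tok` type and helper name are illustrative, not the runtime's API:

```go
package main

import "fmt"

// tok is a minimal stand-in for Token: just a text and a channel.
type tok struct {
	text    string
	channel int
}

// hiddenToRight collects tokens after index i on the given channel,
// stopping at the first default-channel (0) token, as
// GetHiddenTokensToRight does. channel == -1 accepts any non-default channel.
func hiddenToRight(tokens []tok, i, channel int) []tok {
	var out []tok
	for j := i + 1; j < len(tokens); j++ {
		if tokens[j].channel == 0 {
			break
		}
		if channel == -1 || tokens[j].channel == channel {
			out = append(out, tokens[j])
		}
	}
	return out
}

func main() {
	tokens := []tok{
		{"x", 0}, {" ", 1}, {"// hi", 2}, {"=", 0},
	}
	// Hidden tokens to the right of "x" on any non-default channel.
	for _, t := range hiddenToRight(tokens, 0, -1) {
		fmt.Printf("%q\n", t.text)
	}
}
```

The leftward scan is symmetric: walk backwards from the index until a default-channel token appears.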

func (*CommonTokenStream) GetSourceName

func (c *CommonTokenStream) GetSourceName() string

func (*CommonTokenStream) GetTextFromInterval

func (c *CommonTokenStream) GetTextFromInterval(interval Interval) string

func (*CommonTokenStream) GetTextFromRuleContext

func (c *CommonTokenStream) GetTextFromRuleContext(interval RuleContext) string

func (*CommonTokenStream) GetTextFromTokens

func (c *CommonTokenStream) GetTextFromTokens(start, end Token) string

func (*CommonTokenStream) GetTokenSource

func (c *CommonTokenStream) GetTokenSource() TokenSource

func (*CommonTokenStream) GetTokens

func (c *CommonTokenStream) GetTokens(start int, stop int, types *IntervalSet) []Token

GetTokens gets all tokens from start to stop inclusive.

func (*CommonTokenStream) Index

func (c *CommonTokenStream) Index() int

func (*CommonTokenStream) LA

func (c *CommonTokenStream) LA(i int) int

func (*CommonTokenStream) LB

func (*CommonTokenStream) LT

func (*CommonTokenStream) Mark

func (c *CommonTokenStream) Mark() int

func (*CommonTokenStream) NextTokenOnChannel

func (c *CommonTokenStream) NextTokenOnChannel(i, _ int) int

NextTokenOnChannel returns the index of the next token on channel given a starting index. Returns i if tokens[i] is on channel. Returns -1 if there are no tokens on channel between i and TokenEOF.

func (*CommonTokenStream) Release

func (c *CommonTokenStream) Release(_ int)

func (*CommonTokenStream) Reset added in v4.13.0

func (c *CommonTokenStream) Reset()

func (*CommonTokenStream) Seek

func (c *CommonTokenStream) Seek(index int)

func (*CommonTokenStream) SetTokenSource

func (c *CommonTokenStream) SetTokenSource(tokenSource TokenSource)

SetTokenSource resets the c token stream by setting its token source.

func (*CommonTokenStream) Size

func (c *CommonTokenStream) Size() int

func (*CommonTokenStream) Sync

func (c *CommonTokenStream) Sync(i int) bool

Sync makes sure index i in tokens has a token and returns true if a token is located at index i and otherwise false.

type Comparator

type Comparator[T any] interface {
	Hash1(o T) int
	Equals2(T, T) bool
}

type ConsoleErrorListener

type ConsoleErrorListener struct {
	*DefaultErrorListener
}

func NewConsoleErrorListener

func NewConsoleErrorListener() *ConsoleErrorListener

func (*ConsoleErrorListener) SyntaxError

func (c *ConsoleErrorListener) SyntaxError(_ Recognizer, _ interface{}, line, column int, msg string, _ RecognitionException)

SyntaxError prints messages to the standard error stream containing the values of line, charPositionInLine, and msg using the following format:

	line <line>:<charPositionInLine> <msg>
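The reported shape is easy to reproduce; a sketch of the same line <line>:<charPositionInLine> <msg> formatting (the helper name is ours, not the runtime's):

```go
package main

import "fmt"

// formatSyntaxError renders an error the way ConsoleErrorListener does:
// "line <line>:<charPositionInLine> <msg>".
func formatSyntaxError(line, column int, msg string) string {
	return fmt.Sprintf("line %d:%d %s", line, column, msg)
}

func main() {
	fmt.Println(formatSyntaxError(3, 14, "missing ';' at '}'"))
	// prints: line 3:14 missing ';' at '}'
}
```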

type DFA

type DFA struct {
	// contains filtered or unexported fields
}

DFA represents the Deterministic Finite Automaton used by the recognizer, including all the states it can reach and the transitions between them.

func NewDFA

func NewDFA(atnStartState DecisionState, decision int) *DFA

func (*DFA) Get added in v4.13.0

func (d *DFA) Get(s *DFAState) (*DFAState, bool)

Get returns a state that matches s if it is present in the DFA state set. We defer to this function instead of accessing states directly so that we can implement lazy instantiation of the states JMap.

func (*DFA) Len added in v4.13.0

func (d *DFA) Len() int

Len returns the number of states in d. We use this instead of accessing states directly so that we can implement lazy instantiation of the states JMap.

func (*DFA) Put added in v4.13.0

func (d *DFA) Put(s *DFAState) (*DFAState, bool)

func (*DFA) String

func (d *DFA) String(literalNames []string, symbolicNames []string) string

func (*DFA) ToLexerString

func (d *DFA) ToLexerString() string

type DFASerializer

type DFASerializer struct {
	// contains filtered or unexported fields
}

DFASerializer is a DFA walker that knows how to dump the DFA states to serialized strings.

func NewDFASerializer

func NewDFASerializer(dfa *DFA, literalNames, symbolicNames []string) *DFASerializer

func (*DFASerializer) GetStateString

func (d *DFASerializer) GetStateString(s *DFAState) string

func (*DFASerializer) String

func (d *DFASerializer) String() string

type DFAState

type DFAState struct {
	// contains filtered or unexported fields
}

DFAState represents a set of possible ATN configurations. As Aho, Sethi, Ullman p. 117 says: "The DFA uses its state to keep track of all possible states the ATN can be in after reading each input symbol. That is to say, after reading input a1, a2,..an, the DFA is in a state that represents the subset T of the states of the ATN that are reachable from the ATN's start state along some path labeled a1a2..an."

In conventional NFA-to-DFA conversion, therefore, the subset T would be a bit set representing the set of states the ATN could be in. We need to track the alt predicted by each state as well, however. More importantly, we need to maintain a stack of states, tracking the closure operations as they jump from rule to rule, emulating rule invocations (method calls). I have to add a stack to simulate the proper lookahead sequences for the underlying LL grammar from which the ATN was derived.

I use a set of ATNConfig objects, not simple states. An ATNConfig is both a state (a la normal conversion) and a RuleContext describing the chain of rules (if any) followed to arrive at that state.

A DFAState may have multiple references to a particular state, but with different ATN contexts (with same or different alts) meaning that state was reached via a different set of rule invocations.

func NewDFAState

func NewDFAState(stateNumber int, configs *ATNConfigSet) *DFAState

func (*DFAState) Equals

func (d *DFAState) Equals(o Collectable[*DFAState]) bool

Equals returns whether d equals other. Two DFAStates are equal if their ATN configuration sets are the same. This method is used to see if a state already exists.

Because the number of alternatives and number of ATN configurations are finite, there is a finite number of DFA states that can be processed. This is necessary to show that the algorithm terminates.

We cannot test the DFA state numbers here because in ParserATNSimulator.addDFAState we need to know if any other state exists that has this exact set of ATN configurations. The stateNumber is irrelevant.

func (*DFAState) GetAltSet

func (d *DFAState) GetAltSet() []int

GetAltSet gets the set of all alts mentioned by all ATN configurations in d.

func (*DFAState) Hash

func (d *DFAState) Hash() int

func (*DFAState) String

func (d *DFAState) String() string

type DecisionState

type DecisionState interface {
	ATNState
	// contains filtered or unexported methods
}

type DefaultErrorListener

type DefaultErrorListener struct{}

func NewDefaultErrorListener

func NewDefaultErrorListener() *DefaultErrorListener

func (*DefaultErrorListener) ReportAmbiguity

func (d *DefaultErrorListener) ReportAmbiguity(_ Parser, _ *DFA, _, _ int, _ bool, _ *BitSet, _ *ATNConfigSet)

func (*DefaultErrorListener) ReportAttemptingFullContext

func (d *DefaultErrorListener) ReportAttemptingFullContext(_ Parser, _ *DFA, _, _ int, _ *BitSet, _ *ATNConfigSet)

func (*DefaultErrorListener) ReportContextSensitivity

func (d *DefaultErrorListener) ReportContextSensitivity(_ Parser, _ *DFA, _, _, _ int, _ *ATNConfigSet)

func (*DefaultErrorListener) SyntaxError

func (d *DefaultErrorListener) SyntaxError(_ Recognizer, _ interface{}, _, _ int, _ string, _ RecognitionException)

type DefaultErrorStrategy

type DefaultErrorStrategy struct {
	// contains filtered or unexported fields
}

DefaultErrorStrategy is the default implementation of ANTLRErrorStrategy used for error reporting and recovery in ANTLR parsers.

func NewDefaultErrorStrategy

func NewDefaultErrorStrategy() *DefaultErrorStrategy

func (*DefaultErrorStrategy)GetErrorRecoverySetadded inv4.13.0

func (d *DefaultErrorStrategy) GetErrorRecoverySet(recognizerParser) *IntervalSet

GetErrorRecoverySet computes the error recovery set for the current rule. Duringrule invocation, the parser pushes the set of tokens that canfollow that rule reference on the stack. This amounts tocomputing FIRST of what follows the rule reference in theenclosing rule. See LinearApproximator.FIRST().

This local follow set only includes tokensfrom within the rule i.e., the FIRST computation done byANTLR stops at the end of a rule.

Example

When you find a "no viable alt exception", the input is notconsistent with any of the alternatives for rule r. The bestthing to do is to consume tokens until you see something thatcan legally follow a call to r or any rule that called r.You don't want the exact set of viable next tokens because theinput might just be missing a token--you might consume therest of the input looking for one of the missing tokens.

Consider the grammar:

a : '[' b ']'  | '(' b ')'  ;b : c '^' INT      ;c : ID  | INT  ;

At each rule invocation, the set of tokens that could followthat rule is pushed on a stack. Here are the variouscontext-sensitive follow sets:

FOLLOW(b1_in_a) = FIRST(']') = ']'FOLLOW(b2_in_a) = FIRST(')') = ')'FOLLOW(c_in_b)  = FIRST('^') = '^'

Upon erroneous input “[]”, the call chain is

a → b → c

and, hence, the follow context stack is:

Depth Follow set   Start of rule execution  0   <EOF>        a (from main())  1   ']'          b  2   '^'          c

Notice that ')' is not included, because b would have to havebeen called from a different context in rule a for ')' to beincluded.

For error recovery, we cannot consider FOLLOW(c)(context-sensitive or otherwise). We need the combined set ofall context-sensitive FOLLOW sets - the set of all tokens thatcould follow any reference in the call chain. We need toreSync to one of those tokens. Note that FOLLOW(c)='^' and ifwe reSync'd to that token, we'd consume until EOF. We need toSync to context-sensitive FOLLOWs for a, b, and c:

{']','^'}

In this case, for input "[]", LA(1) is ']' and is in the set, so we would not consume anything. After printing an error, rule c would return normally. Rule b would not find the required '^' though. At that point, it gets a mismatched token error and panics with an exception (since LA(1) is not in the viable following token set). The rule exception handler tries to recover, but finds the same recovery set and doesn't consume anything. Rule b exits normally, returning to rule a. Now it finds the ']' (and with the successful Match exits errorRecovery mode).

So, you can see that the parser walks up the call chain looking for the token that was a member of the recovery set.

Errors are not generated in errorRecovery mode.

ANTLR's error recovery mechanism is based upon original ideas:

Algorithms + Data Structures = Programs by Niklaus Wirth, and A Note on Error Recovery in Recursive Descent Parsers.

Later, Josef Grosch had some good ideas in Efficient and Comfortable Error Recovery in Recursive Descent Parsers.

Like Grosch, I implement context-sensitive FOLLOW sets that are combined at run-time upon error to avoid overhead during parsing. Later, the runtime Sync was improved for loops/sub-rules; see the [Sync] docs.
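The walk up the call chain described above can be sketched in plain Go: each invocation frame carries the follow set pushed for its rule reference, and the recovery set is simply the union of those sets. This is an illustrative stand-in (the `frame` type and token-name sets are hypothetical; the runtime uses IntervalSet over token types):

```go
package main

import "fmt"

// frame is a hypothetical rule-invocation frame: the rule name and the set
// of tokens that can follow the reference to it in the enclosing rule.
type frame struct {
	rule   string
	follow map[string]bool
}

// recoverySet unions the follow sets of every frame on the stack, which is
// the combined context-sensitive FOLLOW that GetErrorRecoverySet computes.
func recoverySet(stack []frame) map[string]bool {
	combined := map[string]bool{}
	for _, f := range stack {
		for tok := range f.follow {
			combined[tok] = true
		}
	}
	return combined
}

func main() {
	// Call chain a -> b -> c for erroneous input "[]", as in the example above.
	stack := []frame{
		{"a", map[string]bool{"<EOF>": true}},
		{"b", map[string]bool{"]": true}},
		{"c", map[string]bool{"^": true}},
	}
	set := recoverySet(stack)
	fmt.Println(set["]"], set["^"], set[")"]) // true true false
}
```

Note that ")" is absent, matching the discussion: only follow sets actually on the stack contribute.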

func (*DefaultErrorStrategy) GetExpectedTokens

func (d *DefaultErrorStrategy) GetExpectedTokens(recognizer Parser) *IntervalSet

func (*DefaultErrorStrategy) GetMissingSymbol

func (d *DefaultErrorStrategy) GetMissingSymbol(recognizer Parser) Token

GetMissingSymbol conjures up a missing token during error recovery.

The recognizer attempts to recover from single missingsymbols. But, actions might refer to that missing symbol.For example:

x=ID {f($x)}.

The action clearly assumes that there has been an identifier Matched previously and that $x points at that token. If that token is missing, but the next token in the stream is what we want, we assume that this token is missing and we keep going. Because we have to return some token to replace the missing token, we have to conjure one up. This method gives the user control over the tokens returned for missing tokens. Mostly, you will want to create something special for identifier tokens. For literals such as '{' and ',', the default action in the parser or tree parser works. It simply creates a CommonToken of the appropriate type. The text will be the token name. If you need to change which tokens must be created by the lexer, override this method to create the appropriate tokens.

func (*DefaultErrorStrategy) GetTokenErrorDisplay

func (d *DefaultErrorStrategy) GetTokenErrorDisplay(t Token) string

GetTokenErrorDisplay determines how a token should be displayed in an error message. The default is to display just the text, but during development you might want to have a lot of information spit out. Override this func in that case to use t.String() (which, for CommonToken, dumps everything about the token). This is better than forcing you to override a method in your token objects because you don't have to go modify your lexer so that it creates a new type.

func (*DefaultErrorStrategy) InErrorRecoveryMode

func (d *DefaultErrorStrategy) InErrorRecoveryMode(_ Parser) bool

func (*DefaultErrorStrategy) Recover

func (d *DefaultErrorStrategy) Recover(recognizer Parser, _ RecognitionException)

Recover is the default recovery implementation. It reSynchronizes the parser by consuming tokens until we find one in the reSynchronization set - loosely, the set of tokens that can follow the current rule.

func (*DefaultErrorStrategy) RecoverInline

func (d *DefaultErrorStrategy) RecoverInline(recognizer Parser) Token

The RecoverInline default implementation attempts to recover from the mismatched input by using single token insertion and deletion as described below. If the recovery attempt fails, this method panics with [InputMisMatchException]. TODO: Not sure that panic() is the right thing to do here - JI

EXTRA TOKEN (single token deletion)

LA(1) is not what we are looking for. If LA(2) has the right token, however, then assume LA(1) is some extra spurious token and delete it. Then consume and return the next token (which was the LA(2) token) as the successful result of the Match operation.

This recovery strategy is implemented by singleTokenDeletion

MISSING TOKEN (single token insertion)

If the current token - at LA(1) - is consistent with what could come after the expected LA(1) token, then assume the token is missing and use the parser's TokenFactory to create it on the fly. The "insertion" is performed by returning the created token as the successful result of the Match operation.

This recovery strategy is implemented by [SingleTokenInsertion].

Example

For example, input i=(3 is clearly missing the ')'. When the parser returns from the nested call to expr, it will have called the chain:

stat → expr → atom

and it will be trying to Match the ')' at this point in thederivation:

  : ID '=' '(' INT ')' ('+' atom)* ';'
                    ^

The attempt to [Match] ')' will fail when it sees ';' and calls [RecoverInline]. To recover, it sees that LA(1)==';' is in the set of tokens that can follow the ')' token reference in rule atom. It can assume that you forgot the ')'.
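The two inline strategies reduce to a simple decision over one token of extra lookahead. Here is a minimal sketch of that decision in plain Go (the `decide` function and its string-set parameters are illustrative, not the runtime's API):

```go
package main

import "fmt"

// decide sketches RecoverInline's choice: la1/la2 are the current and next
// tokens, expected is the set the parser can match at this point, and
// followExpected is what may legally come after the expected token.
func decide(la1, la2 string, expected, followExpected map[string]bool) string {
	if expected[la2] {
		// Single-token deletion: LA(1) is spurious, LA(2) is what we want.
		return "delete " + la1
	}
	if followExpected[la1] {
		// Single-token insertion: LA(1) fits after the expected token,
		// so conjure up the missing token and keep going.
		return "insert missing token"
	}
	return "cannot recover inline"
}

func main() {
	// Input "i=(3;" while matching ')': LA(1) is ';', which can follow ')'.
	fmt.Println(decide(";", "<EOF>", map[string]bool{")": true}, map[string]bool{";": true}))
}
```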

func (*DefaultErrorStrategy) ReportError

func (d *DefaultErrorStrategy) ReportError(recognizer Parser, e RecognitionException)

ReportError is the default implementation of error reporting. It returns immediately if the handler is already in error recovery mode. Otherwise, it calls [beginErrorCondition] and dispatches the Reporting task based on the runtime type of e according to the following table:

[NoViableAltException]     : dispatches the call to [ReportNoViableAlternative]
[InputMisMatchException]   : dispatches the call to [ReportInputMisMatch]
[FailedPredicateException] : dispatches the call to [ReportFailedPredicate]
All other types            : calls [NotifyErrorListeners] to Report the exception

func (*DefaultErrorStrategy) ReportFailedPredicate

func (d *DefaultErrorStrategy) ReportFailedPredicate(recognizer Parser, e *FailedPredicateException)

ReportFailedPredicate is called by [ReportError] when the exception is a FailedPredicateException.

See also: [ReportError]

func (*DefaultErrorStrategy) ReportInputMisMatch

func (d *DefaultErrorStrategy) ReportInputMisMatch(recognizer Parser, e *InputMisMatchException)

ReportInputMisMatch is called by [ReportError] when the exception is an InputMisMatchException.

See also: [ReportError]

func (*DefaultErrorStrategy) ReportMatch

func (d *DefaultErrorStrategy) ReportMatch(recognizer Parser)

ReportMatch is the default implementation of error matching and simply calls endErrorCondition.

func (*DefaultErrorStrategy) ReportMissingToken

func (d *DefaultErrorStrategy) ReportMissingToken(recognizer Parser)

ReportMissingToken is called to report a syntax error which requires the insertion of a missing token into the input stream. At the time this method is called, the missing token has not yet been inserted. When this method returns, recognizer is in error recovery mode.

This method is called when singleTokenInsertion identifies single-token insertion as a viable recovery strategy for a mismatched input error.

The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling [NotifyErrorListeners].

func (*DefaultErrorStrategy) ReportNoViableAlternative

func (d *DefaultErrorStrategy) ReportNoViableAlternative(recognizer Parser, e *NoViableAltException)

ReportNoViableAlternative is called by [ReportError] when the exception is a NoViableAltException.

See also: [ReportError]

func (*DefaultErrorStrategy) ReportUnwantedToken

func (d *DefaultErrorStrategy) ReportUnwantedToken(recognizer Parser)

ReportUnwantedToken is called to report a syntax error that requires the removal of a token from the input stream. At the time this method is called, the erroneous symbol is the current LT(1) symbol and has not yet been removed from the input stream. When this method returns, recognizer is in error recovery mode.

This method is called when singleTokenDeletion identifies single-token deletion as a viable recovery strategy for a mismatched input error.

The default implementation simply returns if the handler is already in error recovery mode. Otherwise, it calls beginErrorCondition to enter error recovery mode, followed by calling [NotifyErrorListeners].

func (*DefaultErrorStrategy) SingleTokenDeletion

func (d *DefaultErrorStrategy) SingleTokenDeletion(recognizer Parser) Token

SingleTokenDeletion implements the single-token deletion inline error recovery strategy. It is called by [RecoverInline] to attempt to recover from mismatched input. If this method returns nil, the parser and error handler state will not have changed. If this method returns non-nil, recognizer will not be in error recovery mode since the returned token was a successful Match.

If the single-token deletion is successful, this method calls [ReportUnwantedToken] to Report the error, followed by [Consume] to actually "delete" the extraneous token. Then, before returning, [ReportMatch] is called to signal a successful Match.

The func returns the successfully Matched Token instance if single-token deletion successfully recovers from the mismatched input; otherwise nil.

func (*DefaultErrorStrategy) SingleTokenInsertion

func (d *DefaultErrorStrategy) SingleTokenInsertion(recognizer Parser) bool

SingleTokenInsertion implements the single-token insertion inline error recovery strategy. It is called by [RecoverInline] if the single-token deletion strategy fails to recover from the mismatched input. If this method returns true, recognizer will be in error recovery mode.

This method determines whether single-token insertion is viable by checking if the LA(1) input symbol could be successfully Matched if it were instead the LA(2) symbol. If this method returns true, the caller is responsible for creating and inserting a token with the correct type to produce this behavior.

This func returns true if single-token insertion is a viable recovery strategy for the current mismatched input.

func (*DefaultErrorStrategy) Sync

func (d *DefaultErrorStrategy) Sync(recognizer Parser)

Sync is the default implementation of error strategy synchronization.

This Sync makes sure that the current lookahead symbol is consistent with what we were expecting at this point in the ATN. You can call this anytime, but ANTLR only generates code to check before sub-rules/loops and each iteration.

Implements Jim Idle's magic Sync mechanism in closures and optional sub-rules. E.g.:

a    : Sync ( stuff Sync )*
Sync : {consume to what can follow Sync}

At the start of a sub-rule upon error, Sync performs single-token deletion, if possible. If it can't do that, it bails on the current rule and uses the default error recovery, which consumes until the reSynchronization set of the current rule.

If the sub-rule is optional ((...)?, (...)*, or a block with an empty alternative), then the expected set includes what follows the sub-rule.

During loop iteration, it consumes until it sees a token that can start a sub-rule or what follows the loop. Yes, that is pretty aggressive. We opt to stay in the loop as long as possible.

Origins

Previous versions of ANTLR did a poor job of their recovery within loops. A single mismatched token or missing token would force the parser to bail out of the entire rule surrounding the loop. So, for rule:

classfunc : 'class' ID '{' member* '}'

input with an extra token between members would force the parser to consume until it found the next class definition rather than the next member definition of the current class.

This functionality costs a bit of effort because the parser has to compare the token set at the start of the loop and at each iteration. If for some reason speed is suffering for you, you can turn off this functionality by simply overriding this method as empty:

{ }
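The override pattern is plain struct embedding: embed the default strategy and shadow the method. The sketch below uses minimal local stand-ins for the runtime's Parser and DefaultErrorStrategy types (all names here are illustrative); with the real package you would embed *antlr.DefaultErrorStrategy the same way:

```go
package main

import "fmt"

// Minimal stand-ins for the runtime's Parser and DefaultErrorStrategy,
// just to demonstrate the embedding/override pattern.
type parser interface{}

type defaultStrategy struct{}

func (defaultStrategy) Sync(parser) string { return "full sync" }

// bailFastStrategy embeds the default strategy and overrides Sync as a
// no-op, disabling the loop re-sync work described above.
type bailFastStrategy struct{ defaultStrategy }

func (bailFastStrategy) Sync(parser) string { return "no-op" }

func main() {
	var s interface{ Sync(parser) string } = bailFastStrategy{}
	fmt.Println(s.Sync(nil)) // no-op
}
```

Because Go resolves the method on the outer type first, every call through the interface reaches the override while all other DefaultErrorStrategy behavior is inherited unchanged.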

type DiagnosticErrorListener

type DiagnosticErrorListener struct {
	*DefaultErrorListener
	// contains filtered or unexported fields
}

func NewDiagnosticErrorListener

func NewDiagnosticErrorListener(exactOnly bool) *DiagnosticErrorListener

func (*DiagnosticErrorListener) ReportAmbiguity

func (d *DiagnosticErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)

func (*DiagnosticErrorListener) ReportAttemptingFullContext

func (d *DiagnosticErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, _ *BitSet, _ *ATNConfigSet)

func (*DiagnosticErrorListener) ReportContextSensitivity

func (d *DiagnosticErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, _ int, _ *ATNConfigSet)

type EpsilonTransition

type EpsilonTransition struct {
	BaseTransition
	// contains filtered or unexported fields
}

func NewEpsilonTransition

func NewEpsilonTransition(target ATNState, outermostPrecedenceReturn int) *EpsilonTransition

func (*EpsilonTransition) Matches

func (t *EpsilonTransition) Matches(_, _, _ int) bool

func (*EpsilonTransition) String

func (t *EpsilonTransition) String() string

type ErrorListener

type ErrorListener interface {
	SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)
	ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)
	ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs *ATNConfigSet)
	ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs *ATNConfigSet)
}

type ErrorNode

type ErrorNode interface {
	TerminalNode
	// contains filtered or unexported methods
}

type ErrorNodeImpl

type ErrorNodeImpl struct {
	*TerminalNodeImpl
}

func NewErrorNodeImpl

func NewErrorNodeImpl(token Token) *ErrorNodeImpl

func (*ErrorNodeImpl) Accept

func (e *ErrorNodeImpl) Accept(v ParseTreeVisitor) interface{}

type ErrorStrategy

type ErrorStrategy interface {
	RecoverInline(Parser) Token
	Recover(Parser, RecognitionException)
	Sync(Parser)
	InErrorRecoveryMode(Parser) bool
	ReportError(Parser, RecognitionException)
	ReportMatch(Parser)
	// contains filtered or unexported methods
}

type FailedPredicateException

type FailedPredicateException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

FailedPredicateException indicates that a semantic predicate failed during validation. Validation of predicates occurs when normally parsing the alternative, just like Matching a token. Disambiguating predicate evaluation occurs when we test a predicate during prediction.

func NewFailedPredicateException

func NewFailedPredicateException(recognizer Parser, predicate string, message string) *FailedPredicateException

type FileStream

type FileStream struct {
	InputStream
	// contains filtered or unexported fields
}

func NewFileStream

func NewFileStream(fileName string) (*FileStream, error)

func (*FileStream) GetSourceName

func (f *FileStream) GetSourceName() string

type IATNSimulator

type IATNSimulator interface {
	SharedContextCache() *PredictionContextCache
	ATN() *ATN
	DecisionToDFA() []*DFA
}

type ILexerATNSimulator

type ILexerATNSimulator interface {
	IATNSimulator
	Match(input CharStream, mode int) int
	GetCharPositionInLine() int
	GetLine() int
	GetText(input CharStream) string
	Consume(input CharStream)
	// contains filtered or unexported methods
}

type InputMisMatchException

type InputMisMatchException struct {
	*BaseRecognitionException
}

func NewInputMisMatchException

func NewInputMisMatchException(recognizer Parser) *InputMisMatchException

NewInputMisMatchException creates an exception that signifies any kind of mismatched input exception, such as when the current input does not Match the expected token.

type InputStream

type InputStream struct {
	// contains filtered or unexported fields
}

func NewInputStream

func NewInputStream(data string) *InputStream

NewInputStream creates a new input stream from the given string

func NewIoStream added in v4.13.0

func NewIoStream(reader io.Reader) *InputStream

NewIoStream creates a new input stream from the given io.Reader reader. Note that the reader is read completely into memory, so it must actually have a stopping point - you cannot pass in a reader on an open-ended source such as a socket, for instance.

func (*InputStream) Consume

func (is *InputStream) Consume()

Consume moves the input pointer to the next character in the input stream

func (*InputStream) GetSourceName

func (*InputStream) GetSourceName() string

func (*InputStream) GetText

func (is *InputStream) GetText(start int, stop int) string

GetText returns the text from the input stream from the start to the stop index

func (*InputStream) GetTextFromInterval

func (is *InputStream) GetTextFromInterval(i Interval) string

func (*InputStream) GetTextFromTokens

func (is *InputStream) GetTextFromTokens(start, stop Token) string

GetTextFromTokens returns the text from the input stream from the first character of the start token to the last character of the stop token

func (*InputStream) Index

func (is *InputStream) Index() int

Index returns the current offset into the input stream

func (*InputStream) LA

func (is *InputStream) LA(offset int) int

LA returns the character at the given offset from the start of the input stream

func (*InputStream) LT

func (is *InputStream) LT(offset int) int

LT returns the character at the given offset from the start of the input stream

func (*InputStream) Mark

func (is *InputStream) Mark() int

Mark does nothing here as we have the entire buffer

func (*InputStream) Release

func (is *InputStream) Release(_ int)

Release does nothing here as we have the entire buffer

func (*InputStream) Seek

func (is *InputStream) Seek(index int)

Seek moves the input pointer to the provided index offset

func (*InputStream) Size

func (is *InputStream) Size() int

Size returns the total number of characters in the input stream

func (*InputStream) String

func (is *InputStream) String() string

String returns the entire input stream as a string

type InsertAfterOp

type InsertAfterOp struct {
	BaseRewriteOperation
}

InsertAfterOp distinguishes between insert after and insert before so as to do the "insert after" instructions first and then the "insert before" instructions at the same index. The implementation of "insert after" is "insert before index+1".

func NewInsertAfterOp

func NewInsertAfterOp(index int, text string, stream TokenStream) *InsertAfterOp

func (*InsertAfterOp) Execute

func (op *InsertAfterOp) Execute(buffer *bytes.Buffer) int

func (*InsertAfterOp) String

func (op *InsertAfterOp) String() string

type InsertBeforeOp

type InsertBeforeOp struct {
	BaseRewriteOperation
}

func NewInsertBeforeOp

func NewInsertBeforeOp(index int, text string, stream TokenStream) *InsertBeforeOp

func (*InsertBeforeOp) Execute

func (op *InsertBeforeOp) Execute(buffer *bytes.Buffer) int

func (*InsertBeforeOp) String

func (op *InsertBeforeOp) String() string

type IntStack

type IntStack []int

func (*IntStack) Pop

func (s *IntStack) Pop() (int, error)

func (*IntStack) Push

func (s *IntStack) Push(e int)
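IntStack's semantics are a slice-backed LIFO stack; this stdlib-only sketch mirrors the contract (the exact error value the real Pop returns when empty is not shown here):

```go
package main

import (
	"errors"
	"fmt"
)

// intStack mirrors IntStack: Push appends, Pop removes the last element
// and reports an error on an empty stack.
type intStack []int

func (s *intStack) Push(e int) { *s = append(*s, e) }

func (s *intStack) Pop() (int, error) {
	if len(*s) == 0 {
		return 0, errors.New("stack is empty")
	}
	v := (*s)[len(*s)-1]
	*s = (*s)[:len(*s)-1]
	return v, nil
}

func main() {
	var s intStack
	s.Push(1)
	s.Push(2)
	v, _ := s.Pop()
	fmt.Println(v) // 2
}
```

Pointer receivers are required so that Push and Pop can reassign the slice header of the caller's value.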

type IntStream

type IntStream interface {
	Consume()
	LA(int) int
	Mark() int
	Release(marker int)
	Index() int
	Seek(index int)
	Size() int
	GetSourceName() string
}

type InterpreterRuleContext

type InterpreterRuleContext interface {
	ParserRuleContext
}

type Interval

type Interval struct {
	Start int
	Stop  int
}

func NewInterval

func NewInterval(start, stop int) Interval

NewInterval creates a new interval with the given start and stop values.

func (Interval) Contains

func (i Interval) Contains(item int) bool

Contains returns true if the given item is contained within the interval.

func (Interval) Length added in v4.13.0

func (i Interval) Length() int

Length returns the length of the interval.
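A sketch of these semantics, assuming the half-open [Start, Stop) convention that is consistent with Length() == Stop - Start (this is an assumption for illustration; consult the package source if you depend on the exact bounds):

```go
package main

import "fmt"

// interval is an illustrative stand-in for Interval, assuming the
// half-open [Start, Stop) convention (an assumption, not verified here).
type interval struct{ Start, Stop int }

func (i interval) Contains(item int) bool { return item >= i.Start && item < i.Stop }
func (i interval) Length() int            { return i.Stop - i.Start }

func main() {
	iv := interval{Start: 3, Stop: 7}
	fmt.Println(iv.Contains(3), iv.Contains(7), iv.Length()) // true false 4
}
```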

func (Interval) String

func (i Interval) String() string

String generates a string representation of the interval.

type IntervalSet

type IntervalSet struct {
	// contains filtered or unexported fields
}

IntervalSet represents a collection of [Interval] values, which may be read-only.

func NewIntervalSet

func NewIntervalSet() *IntervalSet

NewIntervalSet creates a new empty, writable, interval set.

func (*IntervalSet) Equals added in v4.13.0

func (i *IntervalSet) Equals(other *IntervalSet) bool

func (*IntervalSet) GetIntervals

func (i *IntervalSet) GetIntervals() []Interval

func (*IntervalSet) String

func (i *IntervalSet) String() string

func (*IntervalSet) StringVerbose

func (i *IntervalSet) StringVerbose(literalNames []string, symbolicNames []string, elemsAreChar bool) string

type IterativeParseTreeWalker added in v4.13.0

type IterativeParseTreeWalker struct {
	*ParseTreeWalker
}

func NewIterativeParseTreeWalker added in v4.13.0

func NewIterativeParseTreeWalker() *IterativeParseTreeWalker

func (*IterativeParseTreeWalker) Walk added in v4.13.0

type JMap

type JMap[K, V any, C Comparator[K]] struct {
	// contains filtered or unexported fields
}

func NewJMap

func NewJMap[K, V any, C Comparator[K]](comparator Comparator[K], cType CollectionSource, desc string) *JMap[K, V, C]

func (*JMap[K, V, C]) Clear

func (m *JMap[K, V, C]) Clear()

func (*JMap[K, V, C]) Delete

func (m *JMap[K, V, C]) Delete(key K)

func (*JMap[K, V, C]) Get

func (m *JMap[K, V, C]) Get(key K) (V, bool)

func (*JMap[K, V, C]) Len

func (m *JMap[K, V, C]) Len() int

func (*JMap[K, V, C]) Put

func (m *JMap[K, V, C]) Put(key K, val V) (V, bool)

func (*JMap[K, V, C]) Values

func (m *JMap[K, V, C]) Values() []V

type JPCEntry added in v4.13.0

type JPCEntry struct {
	// contains filtered or unexported fields
}

type JPCMap added in v4.13.0

type JPCMap struct {
	// contains filtered or unexported fields
}

func NewJPCMap added in v4.13.0

func NewJPCMap(cType CollectionSource, desc string) *JPCMap

func (*JPCMap) Get added in v4.13.0

func (pcm *JPCMap) Get(k1, k2 *PredictionContext) (*PredictionContext, bool)

func (*JPCMap) Put added in v4.13.0

func (pcm *JPCMap) Put(k1, k2, v *PredictionContext)

type JPCMap2 added in v4.13.0

type JPCMap2 struct {
	// contains filtered or unexported fields
}

func NewJPCMap2 added in v4.13.0

func NewJPCMap2(cType CollectionSource, desc string) *JPCMap2

func (*JPCMap2) Get added in v4.13.0

func (pcm *JPCMap2) Get(k1, k2 *PredictionContext) (*PredictionContext, bool)

func (*JPCMap2) Put added in v4.13.0

func (pcm *JPCMap2) Put(k1, k2, v *PredictionContext) (*PredictionContext, bool)

type JStatRec added in v4.13.0

type JStatRec struct {
	Source           CollectionSource
	MaxSize          int
	CurSize          int
	Gets             int
	GetHits          int
	GetMisses        int
	GetHashConflicts int
	GetNoEnt         int
	Puts             int
	PutHits          int
	PutMisses        int
	PutHashConflicts int
	MaxSlotSize      int
	Description      string
	CreateStack      []byte
}

A JStatRec is a record of a particular use of a [JStore], [JMap] or [JPCMap] collection. Typically, it will be used to look for unused collections that were allocated anyway, problems with hash bucket clashes, and anomalies such as huge numbers of Gets with no entries found (GetNoEnt). You can refer to the CollectionAnomalies() function for ideas on what can be gleaned from these statistics about collections.

type JStore

type JStore[T any, C Comparator[T]] struct {
	// contains filtered or unexported fields
}

JStore implements a container that allows the use of a struct to calculate the key for a collection of values akin to a map. This is not meant to be a full-blown HashMap but just to serve the needs of the ANTLR Go runtime.

For ease of porting the logic of the runtime from the master target (Java), this collection operates in a similar way to Java, in that it can use any struct that supplies a Hash() and Equals() function as the key. The values are stored in a standard Go map, which internally is a form of hashmap itself; the key for the Go map is the hash supplied by the key object. The collection is able to deal with hash conflicts by using a simple slice of values associated with the hash-code-indexed bucket. That isn't particularly efficient, but it is simple, and it works. As this is specifically for the ANTLR runtime, and we understand the requirements, this is fine - this is not a general-purpose collection.
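The bucket layout described above can be sketched in a few lines of generic Go. This is a simplified stand-in, not JStore's actual code: the `store`/`put` names and the mod-4 comparator are illustrative, and the comparator interface mirrors the Hash/Equals contract the doc describes.

```go
package main

import "fmt"

// comparator mirrors the Hash/Equals contract that JStore keys satisfy.
type comparator[T any] interface {
	Hash(t T) int
	Equals(a, b T) bool
}

// store sketches JStore's layout: a map from hash code to a bucket slice,
// so keys whose hashes collide live side by side in one bucket.
type store[T any] struct {
	buckets map[int][]T
	cmp     comparator[T]
}

// put returns the existing equal value and true on a hit, or stores the
// value and returns it with false, echoing JStore.Put's contract.
func (s *store[T]) put(v T) (T, bool) {
	h := s.cmp.Hash(v)
	for _, e := range s.buckets[h] {
		if s.cmp.Equals(e, v) {
			return e, true
		}
	}
	s.buckets[h] = append(s.buckets[h], v)
	return v, false
}

// modCmp hashes ints mod 4 to force bucket collisions in the demo.
type modCmp struct{}

func (modCmp) Hash(t int) int       { return t % 4 }
func (modCmp) Equals(a, b int) bool { return a == b }

func main() {
	s := &store[int]{buckets: map[int][]int{}, cmp: modCmp{}}
	_, exists := s.put(5)
	fmt.Println(exists) // false: first insertion
	_, exists = s.put(5)
	fmt.Println(exists) // true: equal value already in its bucket
	_, exists = s.put(9) // 9%4 == 5%4: same bucket, different value
	fmt.Println(exists) // false: collision handled by the bucket slice
}
```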

func NewJStore

func NewJStore[T any, C Comparator[T]](comparator Comparator[T], cType CollectionSource, desc string) *JStore[T, C]

func (*JStore[T, C]) Contains

func (s *JStore[T, C]) Contains(key T) bool

Contains returns true if the given key is present in the store

func (*JStore[T, C]) Each

func (s *JStore[T, C]) Each(f func(T) bool)

func (*JStore[T, C]) Get

func (s *JStore[T, C]) Get(key T) (T, bool)

Get will return the value associated with the key - the type of the key is the same type as the value, which would not generally be useful, but this is a specific thing for ANTLR where the key is generated using the object we are going to store.

func (*JStore[T, C]) Len

func (s *JStore[T, C]) Len() int

func (*JStore[T, C]) Put

func (s *JStore[T, C]) Put(value T) (v T, exists bool)

Put will store the given value in the collection. Note that the key for storage is generated from the value itself - this is specifically because that is what ANTLR needs - this would not be useful as any kind of general collection.

If the key has a hash conflict, then the value will be added to the slice of values associated with the hash, unless the value is already in the slice, in which case the existing value is returned. Value equivalence is tested by calling the Equals() method on the key.

If the given value is already present in the store, then the existing value is returned as v and exists is set to true.

If the given value is not present in the store, then the value is added to the store and returned as v, and exists is set to false.

func (*JStore[T, C]) SortedSlice

func (s *JStore[T, C]) SortedSlice(less func(i, j T) bool) []T

func (*JStore[T, C]) Values

func (s *JStore[T, C]) Values() []T

type LL1Analyzer

type LL1Analyzer struct {
	// contains filtered or unexported fields
}

func NewLL1Analyzer

func NewLL1Analyzer(atn *ATN) *LL1Analyzer

func (*LL1Analyzer) Look

func (la *LL1Analyzer) Look(s, stopState ATNState, ctx RuleContext) *IntervalSet

Look computes the set of tokens that can follow s in the ATN in the specified ctx.

If ctx is nil and the end of the rule containing s is reached, [EPSILON] is added to the result set.

If ctx is not nil and the end of the outermost rule is reached, [EOF] is added to the result set.

Parameter s is the ATN state, and stopState is the ATN state to stop at. This can be a BlockEndState to detect epsilon paths through a closure.

Parameter ctx is the complete parser context, or nil if the context should be ignored.

The func returns the set of tokens that can follow s in the ATN in the specified ctx.

type Lexer

type Lexer interface {
	TokenSource
	Recognizer
	Emit() Token
	SetChannel(int)
	PushMode(int)
	PopMode() int
	SetType(int)
	SetMode(int)
}

type LexerATNSimulator

type LexerATNSimulator struct {
	BaseATNSimulator
	Line               int
	CharPositionInLine int
	MatchCalls         int
	// contains filtered or unexported fields
}

func NewLexerATNSimulator

func NewLexerATNSimulator(recog Lexer, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *LexerATNSimulator

func (*LexerATNSimulator) Consume

func (l *LexerATNSimulator) Consume(input CharStream)

func (*LexerATNSimulator) GetCharPositionInLine

func (l *LexerATNSimulator) GetCharPositionInLine() int

func (*LexerATNSimulator) GetLine

func (l *LexerATNSimulator) GetLine() int

func (*LexerATNSimulator) GetText

func (l *LexerATNSimulator) GetText(input CharStream) string

GetText returns the text [Match]ed so far for the current token.

func (*LexerATNSimulator) GetTokenName

func (l *LexerATNSimulator) GetTokenName(tt int) string

func (*LexerATNSimulator) Match

func (l *LexerATNSimulator) Match(input CharStream, mode int) int

func (*LexerATNSimulator) MatchATN

func (l *LexerATNSimulator) MatchATN(input CharStream) int

type LexerAction

type LexerAction interface {
	Hash() int
	Equals(other LexerAction) bool
	// contains filtered or unexported methods
}

type LexerActionExecutor

type LexerActionExecutor struct {
	// contains filtered or unexported fields
}

func LexerActionExecutorappend

func LexerActionExecutorappend(lexerActionExecutor *LexerActionExecutor, lexerAction LexerAction) *LexerActionExecutor

LexerActionExecutorappend creates a LexerActionExecutor which executes the actions for the input LexerActionExecutor followed by a specified LexerAction. TODO: This does not match the Java code

func NewLexerActionExecutor

func NewLexerActionExecutor(lexerActions []LexerAction) *LexerActionExecutor

func (*LexerActionExecutor) Equals

func (l *LexerActionExecutor) Equals(other interface{}) bool

func (*LexerActionExecutor) Hash

func (l *LexerActionExecutor) Hash() int

type LexerChannelAction

type LexerChannelAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

LexerChannelAction implements the channel lexer action by calling [Lexer.setChannel] with the assigned channel.

Constructs a new channel action with the specified channel value.

func NewLexerChannelAction

func NewLexerChannelAction(channel int) *LexerChannelAction

NewLexerChannelAction creates a channel lexer action by calling [Lexer.setChannel] with the assigned channel.

Constructs a new channel action with the specified channel value.

func (*LexerChannelAction) Equals

func (l *LexerChannelAction) Equals(other LexerAction) bool

func (*LexerChannelAction) Hash

func (l *LexerChannelAction) Hash() int

func (*LexerChannelAction) String

func (l *LexerChannelAction) String() string

type LexerCustomAction

type LexerCustomAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

func NewLexerCustomAction

func NewLexerCustomAction(ruleIndex, actionIndex int) *LexerCustomAction

func (*LexerCustomAction) Equals

func (l *LexerCustomAction) Equals(other LexerAction) bool

func (*LexerCustomAction) Hash

func (l *LexerCustomAction) Hash() int

type LexerDFASerializer

type LexerDFASerializer struct {
	*DFASerializer
}

func NewLexerDFASerializer

func NewLexerDFASerializer(dfa *DFA) *LexerDFASerializer

func (*LexerDFASerializer) String

func (l *LexerDFASerializer) String() string

type LexerIndexedCustomAction

type LexerIndexedCustomAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

func NewLexerIndexedCustomAction

func NewLexerIndexedCustomAction(offset int, lexerAction LexerAction) *LexerIndexedCustomAction

NewLexerIndexedCustomAction constructs a new indexed custom action by associating a character offset with a LexerAction.

Note: This class is only required for lexer actions for which [LexerAction.isPositionDependent] returns true.

The offset points into the input CharStream, relative to the token start index, at which the specified lexerAction should be executed.

func (*LexerIndexedCustomAction) Hash

type LexerModeAction

type LexerModeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

LexerModeAction implements the mode lexer action by calling [Lexer.mode] with the assigned mode.

func NewLexerModeAction

func NewLexerModeAction(mode int) *LexerModeAction

func (*LexerModeAction) Equals

func (l *LexerModeAction) Equals(other LexerAction) bool

func (*LexerModeAction) Hash

func (l *LexerModeAction) Hash() int

func (*LexerModeAction) String

func (l *LexerModeAction) String() string

type LexerMoreAction

type LexerMoreAction struct {
	*BaseLexerAction
}

func NewLexerMoreAction

func NewLexerMoreAction() *LexerMoreAction

func (*LexerMoreAction) String

func (l *LexerMoreAction) String() string

type LexerNoViableAltException

type LexerNoViableAltException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

func NewLexerNoViableAltException

func NewLexerNoViableAltException(lexer Lexer, input CharStream, startIndex int, deadEndConfigs *ATNConfigSet) *LexerNoViableAltException

func (*LexerNoViableAltException) String

type LexerPopModeAction

type LexerPopModeAction struct {
	*BaseLexerAction
}

LexerPopModeAction implements the popMode lexer action by calling [Lexer.popMode].

The popMode command does not have any parameters, so this action is implemented as a singleton instance exposed by LexerPopModeActionINSTANCE.

func NewLexerPopModeAction

func NewLexerPopModeAction() *LexerPopModeAction

func (*LexerPopModeAction) String

func (l *LexerPopModeAction) String() string

type LexerPushModeAction

type LexerPushModeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

LexerPushModeAction implements the pushMode lexer action by calling [Lexer.pushMode] with the assigned mode.

func NewLexerPushModeAction

func NewLexerPushModeAction(mode int) *LexerPushModeAction

func (*LexerPushModeAction) Equals

func (l *LexerPushModeAction) Equals(other LexerAction) bool

func (*LexerPushModeAction) Hash

func (l *LexerPushModeAction) Hash() int

func (*LexerPushModeAction) String

func (l *LexerPushModeAction) String() string

type LexerSkipAction

type LexerSkipAction struct {
	*BaseLexerAction
}

LexerSkipAction implements the [BaseLexerAction.Skip] lexer action by calling [Lexer.Skip].

The Skip command does not have any parameters, so this action is implemented as a singleton instance exposed by LexerSkipActionINSTANCE.

func NewLexerSkipAction

func NewLexerSkipAction() *LexerSkipAction

func (*LexerSkipAction) Equals added in v4.13.0

func (b *LexerSkipAction) Equals(other LexerAction) bool

func (*LexerSkipAction) String

func (l *LexerSkipAction) String() string

String returns a string representation of the current LexerSkipAction.

type LexerTypeAction

type LexerTypeAction struct {
	*BaseLexerAction
	// contains filtered or unexported fields
}

LexerTypeAction implements the type lexer action by calling [Lexer.SetType] with the assigned type.

func NewLexerTypeAction

func NewLexerTypeAction(thetype int) *LexerTypeAction

func (*LexerTypeAction) Equals

func (l *LexerTypeAction) Equals(other LexerAction) bool

func (*LexerTypeAction) Hash

func (l *LexerTypeAction) Hash() int

func (*LexerTypeAction) String

func (l *LexerTypeAction) String() string

type LoopEndState

type LoopEndState struct {
	BaseATNState
	// contains filtered or unexported fields
}

LoopEndState marks the end of a * or + loop.

func NewLoopEndState

func NewLoopEndState() *LoopEndState

type Mutex added in v4.13.1

type Mutex struct {
	// contains filtered or unexported fields
}

Mutex is a simple mutex implementation that delegates to sync.Mutex. It provides the mutex implementation for the antlr package, which users can turn off with the build tag -tags antlr.nomutex.

func (*Mutex) Lock added in v4.13.1

func (m *Mutex) Lock()

func (*Mutex) Unlock added in v4.13.1

func (m *Mutex) Unlock()

type NoViableAltException

type NoViableAltException struct {
	*BaseRecognitionException
	// contains filtered or unexported fields
}

func NewNoViableAltException

func NewNoViableAltException(recognizer Parser, input TokenStream, startToken Token, offendingToken Token, deadEndConfigs *ATNConfigSet, ctx ParserRuleContext) *NoViableAltException

NewNoViableAltException creates an exception indicating that the parser could not decide which of two or more paths to take based upon the remaining input. It tracks the starting token of the offending input and also knows where the parser was in the various paths when the error occurred.

Reported by [ReportNoViableAlternative].

type NotSetTransition

type NotSetTransition struct {
	SetTransition
}

func NewNotSetTransition

func NewNotSetTransition(target ATNState, set *IntervalSet) *NotSetTransition

func (*NotSetTransition) Matches

func (t *NotSetTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*NotSetTransition) String

func (t *NotSetTransition) String() string

type OR

type OR struct {
	// contains filtered or unexported fields
}

func NewOR

func NewOR(a, b SemanticContext) *OR

func (*OR) Equals

func (o *OR) Equals(other Collectable[SemanticContext]) bool

func (*OR) Hash

func (o *OR) Hash() int

func (*OR) String

func (o *OR) String() string

type ObjEqComparator

type ObjEqComparator[T Collectable[T]] struct{}

ObjEqComparator is the equivalent of the Java ObjectEqualityComparator, which is the default instance of the equality comparator. We do not have inheritance in Go, only interfaces, so we use generics to enforce some type safety and avoid having to implement this for every type on which we want to perform comparisons.

This comparator works by using the standard Hash() and Equals() methods of the type T that is being compared, which allows us to use it in any collection instance that does not require a special hash or equals implementation.
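The generic-constraint trick described above can be shown in a minimal standalone sketch. The `hashable`, `objEqComparator`, and `point` names below are invented for illustration; the real antlr constraint is `Collectable[T]`, but the mechanism is the same: the type parameter is bounded by an interface that requires Hash() and Equals(), so one comparator works for every element type.

```go
package main

import "fmt"

// hashable is a stand-in for antlr's Collectable[T] constraint: any type
// that provides its own Hash() and Equals() methods.
type hashable[T any] interface {
	Hash() int
	Equals(other T) bool
}

// objEqComparator mirrors the ObjEqComparator pattern: it delegates
// entirely to the element type's own Hash/Equals.
type objEqComparator[T hashable[T]] struct{}

func (c *objEqComparator[T]) Equals2(o1, o2 T) bool { return o1.Equals(o2) }
func (c *objEqComparator[T]) Hash1(o T) int         { return o.Hash() }

// point is an example element type satisfying the constraint.
type point struct{ x, y int }

func (p point) Hash() int           { return p.x*31 + p.y }
func (p point) Equals(o point) bool { return p == o }

func main() {
	var cmp objEqComparator[point]
	fmt.Println(cmp.Equals2(point{1, 2}, point{1, 2})) // true
	fmt.Println(cmp.Hash1(point{1, 2}))                // 33
}
```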

func (*ObjEqComparator[T]) Equals2

func (c *ObjEqComparator[T]) Equals2(o1, o2 T) bool

Equals2 delegates to the Equals() method of type T.

func (*ObjEqComparator[T]) Hash1

func (c *ObjEqComparator[T]) Hash1(o T) int

Hash1 delegates to the Hash() method of type T.

type ParseCancellationException

type ParseCancellationException struct{}

func NewParseCancellationException

func NewParseCancellationException() *ParseCancellationException

func (ParseCancellationException) GetInputStream added in v4.13.0

func (p ParseCancellationException) GetInputStream() IntStream

func (ParseCancellationException) GetMessage added in v4.13.0

func (ParseCancellationException) GetOffendingToken added in v4.13.0

func (p ParseCancellationException) GetOffendingToken() Token

type ParseTree

type ParseTree interface {
	SyntaxTree
	Accept(Visitor ParseTreeVisitor) interface{}
	GetText() string
	ToStringTree([]string, Recognizer) string
}

func TreesDescendants

func TreesDescendants(t ParseTree) []ParseTree

func TreesFindAllTokenNodes

func TreesFindAllTokenNodes(t ParseTree, ttype int) []ParseTree

func TreesfindAllNodes

func TreesfindAllNodes(t ParseTree, index int, findTokens bool) []ParseTree

func TreesfindAllRuleNodes

func TreesfindAllRuleNodes(t ParseTree, ruleIndex int) []ParseTree

type ParseTreeListener

type ParseTreeListener interface {
	VisitTerminal(node TerminalNode)
	VisitErrorNode(node ErrorNode)
	EnterEveryRule(ctx ParserRuleContext)
	ExitEveryRule(ctx ParserRuleContext)
}

type ParseTreeVisitor

type ParseTreeVisitor interface {
	Visit(tree ParseTree) interface{}
	VisitChildren(node RuleNode) interface{}
	VisitTerminal(node TerminalNode) interface{}
	VisitErrorNode(node ErrorNode) interface{}
}

type ParseTreeWalker

type ParseTreeWalker struct{}

func NewParseTreeWalker

func NewParseTreeWalker() *ParseTreeWalker

func (*ParseTreeWalker) EnterRule

func (p *ParseTreeWalker) EnterRule(listener ParseTreeListener, r RuleNode)

EnterRule enters a grammar rule by first triggering the generic event ParseTreeListener.EnterEveryRule, then by triggering the event specific to the given parse tree node.

func (*ParseTreeWalker) ExitRule

func (p *ParseTreeWalker) ExitRule(listener ParseTreeListener, r RuleNode)

ExitRule exits a grammar rule by first triggering the event specific to the given parse tree node, then by triggering the generic event ParseTreeListener.ExitEveryRule.

func (*ParseTreeWalker) Walk

func (p *ParseTreeWalker) Walk(listener ParseTreeListener, t Tree)

Walk performs a walk on the given parse tree starting at the root and going down recursively with depth-first search. On each node, [EnterRule] is called before recursively walking down into child nodes, then [ExitRule] is called after the recursive call to wind up.
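The enter-before-descend, exit-after-return discipline of Walk can be sketched in a standalone program. The `node`, `listener`, and `printer` types below are invented stand-ins for antlr's Tree and ParseTreeListener; only the traversal order is the point.

```go
package main

import "fmt"

// node is a minimal stand-in for antlr's Tree.
type node struct {
	name     string
	children []*node
}

// listener is a minimal stand-in for ParseTreeListener.
type listener interface {
	Enter(n *node)
	Exit(n *node)
}

// walk mirrors ParseTreeWalker.Walk: fire Enter, recurse into children
// depth-first, then fire Exit on the way back up.
func walk(l listener, n *node) {
	l.Enter(n)
	for _, c := range n.children {
		walk(l, c)
	}
	l.Exit(n)
}

// printer records the event sequence so the ordering is visible.
type printer struct{ events []string }

func (p *printer) Enter(n *node) { p.events = append(p.events, "enter "+n.name) }
func (p *printer) Exit(n *node)  { p.events = append(p.events, "exit "+n.name) }

func main() {
	root := &node{name: "expr", children: []*node{{name: "lhs"}, {name: "rhs"}}}
	p := &printer{}
	walk(p, root)
	fmt.Println(p.events)
}
```

The recorded sequence shows each parent entered before any child and exited only after all of its children have been exited.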

type Parser

type Parser interface {
	Recognizer
	GetInterpreter() *ParserATNSimulator
	GetTokenStream() TokenStream
	GetTokenFactory() TokenFactory
	GetParserRuleContext() ParserRuleContext
	SetParserRuleContext(ParserRuleContext)
	Consume() Token
	GetParseListeners() []ParseTreeListener
	GetErrorHandler() ErrorStrategy
	SetErrorHandler(ErrorStrategy)
	GetInputStream() IntStream
	GetCurrentToken() Token
	GetExpectedTokens() *IntervalSet
	NotifyErrorListeners(string, Token, RecognitionException)
	IsExpectedToken(int) bool
	GetPrecedence() int
	GetRuleInvocationStack(ParserRuleContext) []string
}

type ParserATNSimulator

type ParserATNSimulator struct {
	BaseATNSimulator
	// contains filtered or unexported fields
}

func NewParserATNSimulator

func NewParserATNSimulator(parser Parser, atn *ATN, decisionToDFA []*DFA, sharedContextCache *PredictionContextCache) *ParserATNSimulator

func (*ParserATNSimulator) AdaptivePredict

func (p *ParserATNSimulator) AdaptivePredict(parser *BaseParser, input TokenStream, decision int, outerContext ParserRuleContext) int

func (*ParserATNSimulator) GetAltThatFinishedDecisionEntryRule

func (p *ParserATNSimulator) GetAltThatFinishedDecisionEntryRule(configs *ATNConfigSet) int

func (*ParserATNSimulator) GetPredictionMode

func (p *ParserATNSimulator) GetPredictionMode() int

func (*ParserATNSimulator) GetTokenName

func (p *ParserATNSimulator) GetTokenName(t int) string

func (*ParserATNSimulator) ReportAmbiguity

func (p *ParserATNSimulator) ReportAmbiguity(dfa *DFA, _ *DFAState, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)

ReportAmbiguity reports an ambiguity in the parse, which shows that the parser will explore a different route.

If context-sensitive parsing, we know it's an ambiguity, not a conflict or error, but we can report it to the developer so that they can see that this is happening and can take action if they want to.

func (*ParserATNSimulator) ReportAttemptingFullContext

func (p *ParserATNSimulator) ReportAttemptingFullContext(dfa *DFA, conflictingAlts *BitSet, configs *ATNConfigSet, startIndex, stopIndex int)

func (*ParserATNSimulator) ReportContextSensitivity

func (p *ParserATNSimulator) ReportContextSensitivity(dfa *DFA, prediction int, configs *ATNConfigSet, startIndex, stopIndex int)

func (*ParserATNSimulator) SetPredictionMode

func (p *ParserATNSimulator) SetPredictionMode(v int)

type ParserRuleContext

type ParserRuleContext interface {
	RuleContext
	SetException(RecognitionException)
	AddTokenNode(token Token) *TerminalNodeImpl
	AddErrorNode(badToken Token) *ErrorNodeImpl
	EnterRule(listener ParseTreeListener)
	ExitRule(listener ParseTreeListener)
	SetStart(Token)
	GetStart() Token
	SetStop(Token)
	GetStop() Token
	AddChild(child RuleContext) RuleContext
	RemoveLastChild()
}

type PlusBlockStartState

type PlusBlockStartState struct {
	BaseBlockStartState
	// contains filtered or unexported fields
}

PlusBlockStartState is the start of a (A|B|...)+ loop. Technically it is a decision state; we don't use it for code generation. Somebody might need it, so it is included for completeness. In reality, PlusLoopbackState is the real decision-making node for A+.

func NewPlusBlockStartState

func NewPlusBlockStartState() *PlusBlockStartState

type PlusLoopbackState

type PlusLoopbackState struct {
	BaseDecisionState
}

PlusLoopbackState is a decision state for A+ and (A|B)+. It has two transitions: one to the loop back to the start of the block, and one to exit.

func NewPlusLoopbackState

func NewPlusLoopbackState() *PlusLoopbackState

type PrecedencePredicate

type PrecedencePredicate struct {
	// contains filtered or unexported fields
}

func NewPrecedencePredicate

func NewPrecedencePredicate(precedence int) *PrecedencePredicate

func (*PrecedencePredicate) Equals

func (*PrecedencePredicate) Hash

func (p *PrecedencePredicate) Hash() int

func (*PrecedencePredicate) String

func (p *PrecedencePredicate) String() string

type PrecedencePredicateTransition

type PrecedencePredicateTransition struct {
	BaseAbstractPredicateTransition
	// contains filtered or unexported fields
}

func NewPrecedencePredicateTransition

func NewPrecedencePredicateTransition(target ATNState, precedence int) *PrecedencePredicateTransition

func (*PrecedencePredicateTransition) Matches

func (t *PrecedencePredicateTransition) Matches(_, _, _ int) bool

func (*PrecedencePredicateTransition) String

type PredPrediction

type PredPrediction struct {
	// contains filtered or unexported fields
}

PredPrediction maps a predicate to a predicted alternative.

func NewPredPrediction

func NewPredPrediction(pred SemanticContext, alt int) *PredPrediction

func (*PredPrediction) String

func (p *PredPrediction) String() string

type Predicate

type Predicate struct {
	// contains filtered or unexported fields
}

func NewPredicate

func NewPredicate(ruleIndex, predIndex int, isCtxDependent bool) *Predicate

func (*Predicate) Equals

func (p *Predicate) Equals(other Collectable[SemanticContext]) bool

func (*Predicate) Hash

func (p *Predicate) Hash() int

func (*Predicate) String

func (p *Predicate) String() string

type PredicateTransition

type PredicateTransition struct {
	BaseAbstractPredicateTransition
	// contains filtered or unexported fields
}

func NewPredicateTransition

func NewPredicateTransition(target ATNState, ruleIndex, predIndex int, isCtxDependent bool) *PredicateTransition

func (*PredicateTransition) Matches

func (t *PredicateTransition) Matches(_, _, _ int) bool

func (*PredicateTransition) String

func (t *PredicateTransition) String() string

type PredictionContext

type PredictionContext struct {
	// contains filtered or unexported fields
}

PredictionContext is a Go-idiomatic implementation of PredictionContext that does not try to emulate inheritance from Java and can be used without an interface definition. An interface is not required because no user code will ever need to implement this interface.

func NewArrayPredictionContext

func NewArrayPredictionContext(parents []*PredictionContext, returnStates []int) *PredictionContext

func NewBaseSingletonPredictionContext

func NewBaseSingletonPredictionContext(parent *PredictionContext, returnState int) *PredictionContext

func NewEmptyPredictionContext

func NewEmptyPredictionContext() *PredictionContext

func SingletonBasePredictionContextCreate

func SingletonBasePredictionContextCreate(parent *PredictionContext, returnState int) *PredictionContext

func (*PredictionContext) ArrayEquals added in v4.13.0

func (*PredictionContext) Equals

func (*PredictionContext) GetParent

func (p *PredictionContext) GetParent(i int) *PredictionContext

func (*PredictionContext) GetReturnStates added in v4.13.0

func (p *PredictionContext) GetReturnStates() []int

func (*PredictionContext) Hash

func (p *PredictionContext) Hash() int

func (*PredictionContext) SingletonEquals added in v4.13.0

func (p *PredictionContext) SingletonEquals(other Collectable[*PredictionContext]) bool

func (*PredictionContext) String

func (p *PredictionContext) String() string

func (*PredictionContext) Type added in v4.13.0

func (p *PredictionContext) Type() int

type PredictionContextCache

type PredictionContextCache struct {
	// contains filtered or unexported fields
}

PredictionContextCache is used to cache PredictionContext objects. It is used for the shared context cache associated with contexts in DFA states. This cache can be used for both lexers and parsers.

func NewPredictionContextCache

func NewPredictionContextCache() *PredictionContextCache

func (*PredictionContextCache) Get

type ProxyErrorListener

type ProxyErrorListener struct {
	*DefaultErrorListener
	// contains filtered or unexported fields
}

func NewProxyErrorListener

func NewProxyErrorListener(delegates []ErrorListener) *ProxyErrorListener

func (*ProxyErrorListener) ReportAmbiguity

func (p *ProxyErrorListener) ReportAmbiguity(recognizer Parser, dfa *DFA, startIndex, stopIndex int, exact bool, ambigAlts *BitSet, configs *ATNConfigSet)

func (*ProxyErrorListener) ReportAttemptingFullContext

func (p *ProxyErrorListener) ReportAttemptingFullContext(recognizer Parser, dfa *DFA, startIndex, stopIndex int, conflictingAlts *BitSet, configs *ATNConfigSet)

func (*ProxyErrorListener) ReportContextSensitivity

func (p *ProxyErrorListener) ReportContextSensitivity(recognizer Parser, dfa *DFA, startIndex, stopIndex, prediction int, configs *ATNConfigSet)

func (*ProxyErrorListener) SyntaxError

func (p *ProxyErrorListener) SyntaxError(recognizer Recognizer, offendingSymbol interface{}, line, column int, msg string, e RecognitionException)

type RWMutex added in v4.13.1

type RWMutex struct {
	// contains filtered or unexported fields
}

func (*RWMutex) Lock added in v4.13.1

func (m *RWMutex) Lock()

func (*RWMutex) RLock added in v4.13.1

func (m *RWMutex) RLock()

func (*RWMutex) RUnlock added in v4.13.1

func (m *RWMutex) RUnlock()

func (*RWMutex) Unlock added in v4.13.1

func (m *RWMutex) Unlock()

type RangeTransition

type RangeTransition struct {
	BaseTransition
	// contains filtered or unexported fields
}

func NewRangeTransition

func NewRangeTransition(target ATNState, start, stop int) *RangeTransition

func (*RangeTransition) Matches

func (t *RangeTransition) Matches(symbol, _, _ int) bool

func (*RangeTransition) String

func (t *RangeTransition) String() string

type RecognitionException

type RecognitionException interface {
	GetOffendingToken() Token
	GetMessage() string
	GetInputStream() IntStream
}

type Recognizer

type Recognizer interface {
	GetLiteralNames() []string
	GetSymbolicNames() []string
	GetRuleNames() []string
	Sempred(RuleContext, int, int) bool
	Precpred(RuleContext, int) bool
	GetState() int
	SetState(int)
	Action(RuleContext, int, int)
	AddErrorListener(ErrorListener)
	RemoveErrorListeners()
	GetATN() *ATN
	GetErrorListenerDispatch() ErrorListener
	HasError() bool
	GetError() RecognitionException
	SetError(RecognitionException)
}

type ReplaceOp

type ReplaceOp struct {
	BaseRewriteOperation
	LastIndex int
}

ReplaceOp tries to replace the range from x..y with (y-x)+1 ReplaceOp instructions.

func NewReplaceOp

func NewReplaceOp(from, to int, text string, stream TokenStream) *ReplaceOp

func (*ReplaceOp) Execute

func (op *ReplaceOp) Execute(buffer *bytes.Buffer) int

func (*ReplaceOp) String

func (op *ReplaceOp) String() string

type RewriteOperation

type RewriteOperation interface {
	// Execute the rewrite operation by possibly adding to the buffer.
	// Return the index of the next token to operate on.
	Execute(buffer *bytes.Buffer) int
	String() string
	GetInstructionIndex() int
	GetIndex() int
	GetText() string
	GetOpName() string
	GetTokens() TokenStream
	SetInstructionIndex(val int)
	SetIndex(int)
	SetText(string)
	SetOpName(string)
	SetTokens(TokenStream)
}

type RuleContext

type RuleContext interface {
	RuleNode
	GetInvokingState() int
	SetInvokingState(int)
	GetRuleIndex() int
	IsEmpty() bool
	GetAltNumber() int
	SetAltNumber(altNumber int)
	String([]string, RuleContext) string
}

RuleContext is a record of a single rule invocation. It knows which context invoked it, if any. If there is no parent context, then naturally the invoking state is not valid. The parent link provides a chain upwards from the current rule invocation to the root of the invocation tree, forming a stack.

We actually carry no information about the rule associated with this context (except when parsing). We keep only the state number of the invoking state from the ATN submachine that invoked this. Contrast this with the s pointer inside ParserRuleContext that tracks the current state being "executed" for the current rule.

The parent contexts are useful for computing lookahead sets and getting error information.

These objects are used during parsing and prediction. For the special case of parsers, we use the struct ParserRuleContext, which embeds a RuleContext.

See also ParserRuleContext.
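The parent-chain-as-stack idea can be sketched in a standalone program. The `ruleCtx` type and `stack` helper below are invented for illustration; the real RuleContext carries more state, but the essential mechanism is the same: each invocation records its invoking state and a link to the context that invoked it, so following parent links walks from the innermost invocation back to the root.

```go
package main

import "fmt"

// ruleCtx is a minimal stand-in for antlr's RuleContext: a parent link
// plus the state number of the invoking ATN state.
type ruleCtx struct {
	parent        *ruleCtx
	invokingState int
}

// stack follows parent links upward and collects the invoking states,
// innermost invocation first, mirroring the "chain forming a stack".
func (c *ruleCtx) stack() []int {
	var states []int
	for cur := c; cur != nil; cur = cur.parent {
		states = append(states, cur.invokingState)
	}
	return states
}

func main() {
	root := &ruleCtx{parent: nil, invokingState: -1} // no parent: state not valid
	expr := &ruleCtx{parent: root, invokingState: 10}
	term := &ruleCtx{parent: expr, invokingState: 25}
	fmt.Println(term.stack()) // [25 10 -1]
}
```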

type RuleNode

type RuleNode interface {
	ParseTree
	GetRuleContext() RuleContext
}

type RuleStartState

type RuleStartState struct {
	BaseATNState
	// contains filtered or unexported fields
}

func NewRuleStartState

func NewRuleStartState() *RuleStartState

type RuleStopState

type RuleStopState struct {
	BaseATNState
}

RuleStopState is the last node in the ATN for a rule, unless that rule is the start symbol. In that case, there is one transition to EOF. Later, we might encode references to all calls to this rule to compute FOLLOW sets for error handling.

func NewRuleStopState

func NewRuleStopState() *RuleStopState

type RuleTransition

type RuleTransition struct {
	BaseTransition
	// contains filtered or unexported fields
}

func NewRuleTransition

func NewRuleTransition(ruleStart ATNState, ruleIndex, precedence int, followState ATNState) *RuleTransition

func (*RuleTransition) Matches

func (t *RuleTransition) Matches(_, _, _ int) bool

type SemCComparator

type SemCComparator[T Collectable[T]] struct{}

type SemanticContext

type SemanticContext interface {
	Equals(other Collectable[SemanticContext]) bool
	Hash() int
	String() string
	// contains filtered or unexported methods
}

SemanticContext is a tree structure used to record the semantic context in which an ATN configuration is valid. It's either a single predicate, a conjunction p1 && p2, or a sum of products p1 || p2. I have scoped the AND, OR, and Predicate subclasses of SemanticContext within the scope of this outer ``class''.

func SemanticContextandContext

func SemanticContextandContext(a, b SemanticContext) SemanticContext

func SemanticContextorContext

func SemanticContextorContext(a, b SemanticContext) SemanticContext

type SetTransition

type SetTransition struct {
	BaseTransition
}

func NewSetTransition

func NewSetTransition(target ATNState, set *IntervalSet) *SetTransition

func (*SetTransition) Matches

func (t *SetTransition) Matches(symbol, _, _ int) bool

func (*SetTransition) String

func (t *SetTransition) String() string

type SimState

type SimState struct {
	// contains filtered or unexported fields
}

func NewSimState

func NewSimState() *SimState

type StarBlockStartState

type StarBlockStartState struct {
	BaseBlockStartState
}

StarBlockStartState is the block that begins a closure loop.

func NewStarBlockStartState

func NewStarBlockStartState() *StarBlockStartState

type StarLoopEntryState

type StarLoopEntryState struct {
	BaseDecisionState
	// contains filtered or unexported fields
}

func NewStarLoopEntryState

func NewStarLoopEntryState() *StarLoopEntryState

type StarLoopbackState

type StarLoopbackState struct {
	BaseATNState
}

func NewStarLoopbackState

func NewStarLoopbackState() *StarLoopbackState

type SyntaxTree

type SyntaxTree interface {
	Tree
	GetSourceInterval() Interval
}

type TerminalNode

type TerminalNode interface {
	ParseTree
	GetSymbol() Token
}

type TerminalNodeImpl

type TerminalNodeImpl struct {
	// contains filtered or unexported fields
}

func NewTerminalNodeImpl

func NewTerminalNodeImpl(symbol Token) *TerminalNodeImpl

func (*TerminalNodeImpl) Accept

func (t *TerminalNodeImpl) Accept(v ParseTreeVisitor) interface{}

func (*TerminalNodeImpl) GetChild

func (t *TerminalNodeImpl) GetChild(_ int) Tree

func (*TerminalNodeImpl) GetChildCount

func (t *TerminalNodeImpl) GetChildCount() int

func (*TerminalNodeImpl) GetChildren

func (t *TerminalNodeImpl) GetChildren() []Tree

func (*TerminalNodeImpl) GetParent

func (t *TerminalNodeImpl) GetParent() Tree

func (*TerminalNodeImpl) GetPayload

func (t *TerminalNodeImpl) GetPayload() interface{}

func (*TerminalNodeImpl) GetSourceInterval

func (t *TerminalNodeImpl) GetSourceInterval() Interval

func (*TerminalNodeImpl) GetSymbol

func (t *TerminalNodeImpl) GetSymbol() Token

func (*TerminalNodeImpl) GetText

func (t *TerminalNodeImpl) GetText() string

func (*TerminalNodeImpl) SetChildren

func (t *TerminalNodeImpl) SetChildren(_ []Tree)

func (*TerminalNodeImpl) SetParent

func (t *TerminalNodeImpl) SetParent(tree Tree)

func (*TerminalNodeImpl) String

func (t *TerminalNodeImpl) String() string

func (*TerminalNodeImpl) ToStringTree

func (t *TerminalNodeImpl) ToStringTree(_ []string, _ Recognizer) string

type Token

type Token interface {
	GetSource() *TokenSourceCharStreamPair
	GetTokenType() int
	GetChannel() int
	GetStart() int
	GetStop() int
	GetLine() int
	GetColumn() int
	GetText() string
	SetText(s string)
	GetTokenIndex() int
	SetTokenIndex(v int)
	GetTokenSource() TokenSource
	GetInputStream() CharStream
	String() string
}

type TokenFactory

type TokenFactory interface {
	Create(source *TokenSourceCharStreamPair, ttype int, text string, channel, start, stop, line, column int) Token
}

TokenFactory creates CommonToken objects.

type TokenSource

type TokenSource interface {
	NextToken() Token
	Skip()
	More()
	GetLine() int
	GetCharPositionInLine() int
	GetInputStream() CharStream
	GetSourceName() string
	GetTokenFactory() TokenFactory
	// contains filtered or unexported methods
}

type TokenSourceCharStreamPair

type TokenSourceCharStreamPair struct {
	// contains filtered or unexported fields
}

type TokenStream

type TokenStream interface {
	IntStream
	LT(k int) Token
	Reset()
	Get(index int) Token
	GetTokenSource() TokenSource
	SetTokenSource(TokenSource)
	GetAllText() string
	GetTextFromInterval(Interval) string
	GetTextFromRuleContext(RuleContext) string
	GetTextFromTokens(Token, Token) string
}

type TokenStreamRewriter

type TokenStreamRewriter struct {
	// contains filtered or unexported fields
}

func NewTokenStreamRewriter

func NewTokenStreamRewriter(tokens TokenStream) *TokenStreamRewriter

func (*TokenStreamRewriter) AddToProgram

func (tsr *TokenStreamRewriter) AddToProgram(name string, op RewriteOperation)

func (*TokenStreamRewriter) Delete

func (tsr *TokenStreamRewriter) Delete(programName string, from, to int)

func (*TokenStreamRewriter) DeleteDefault

func (tsr *TokenStreamRewriter) DeleteDefault(from, to int)

func (*TokenStreamRewriter) DeleteDefaultPos

func (tsr *TokenStreamRewriter) DeleteDefaultPos(index int)

func (*TokenStreamRewriter) DeleteProgram

func (tsr *TokenStreamRewriter) DeleteProgram(programName string)

DeleteProgram resets the program so that no instructions exist.

func (*TokenStreamRewriter) DeleteProgramDefault

func (tsr *TokenStreamRewriter) DeleteProgramDefault()

func (*TokenStreamRewriter) DeleteToken

func (tsr *TokenStreamRewriter) DeleteToken(programName string, from, to Token)

func (*TokenStreamRewriter) DeleteTokenDefault

func (tsr *TokenStreamRewriter) DeleteTokenDefault(from, to Token)

func (*TokenStreamRewriter) GetLastRewriteTokenIndex

func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndex(programName string) int

func (*TokenStreamRewriter) GetLastRewriteTokenIndexDefault

func (tsr *TokenStreamRewriter) GetLastRewriteTokenIndexDefault() int

func (*TokenStreamRewriter) GetProgram

func (tsr *TokenStreamRewriter) GetProgram(name string) []RewriteOperation

func (*TokenStreamRewriter) GetText

func (tsr *TokenStreamRewriter) GetText(programName string, interval Interval) string

GetText returns the text from the original tokens altered per the instructions given to this rewriter.

func (*TokenStreamRewriter) GetTextDefault

func (tsr *TokenStreamRewriter) GetTextDefault() string

GetTextDefault returns the text from the original tokens altered per the instructions given to this rewriter.
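The rewriter model described above, original tokens left untouched while insert/replace/delete instructions accumulate in a program and are applied only when text is requested, can be shown with a simplified standalone sketch. The `rewriter` type and its methods below are invented stand-ins for the TokenStreamRewriter API, keeping only one instruction per index to stay short.

```go
package main

import (
	"fmt"
	"strings"
)

// rewriter is a toy version of the TokenStreamRewriter idea: the token
// slice is never modified; instructions are recorded and applied lazily.
// (Simplified: at most one instruction of each kind per token index.)
type rewriter struct {
	tokens   []string
	inserts  map[int]string // text inserted before token index
	replaces map[int]string // replacement text for token index
	deleted  map[int]bool
}

func newRewriter(tokens []string) *rewriter {
	return &rewriter{
		tokens:   tokens,
		inserts:  map[int]string{},
		replaces: map[int]string{},
		deleted:  map[int]bool{},
	}
}

func (r *rewriter) InsertBefore(i int, text string) { r.inserts[i] = text }
func (r *rewriter) Replace(i int, text string)      { r.replaces[i] = text }
func (r *rewriter) Delete(i int)                    { r.deleted[i] = true }

// GetText renders the original tokens with the recorded program applied.
func (r *rewriter) GetText() string {
	var b strings.Builder
	for i, tok := range r.tokens {
		if ins, ok := r.inserts[i]; ok {
			b.WriteString(ins)
		}
		switch {
		case r.deleted[i]:
			// emit nothing for deleted tokens
		case r.replaces[i] != "":
			b.WriteString(r.replaces[i])
		default:
			b.WriteString(tok)
		}
	}
	return b.String()
}

func main() {
	r := newRewriter([]string{"x", " = ", "1", ";"})
	r.InsertBefore(0, "var ")
	r.Replace(2, "42")
	fmt.Println(r.GetText()) // var x = 42;
}
```

The design point this mirrors is that rewrites are cheap to record and compose; the cost is paid once, at rendering time.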

func (*TokenStreamRewriter) GetTokenStream

func (tsr *TokenStreamRewriter) GetTokenStream() TokenStream

func (*TokenStreamRewriter) InitializeProgram

func (tsr *TokenStreamRewriter) InitializeProgram(name string) []RewriteOperation

func (*TokenStreamRewriter) InsertAfter

func (tsr *TokenStreamRewriter) InsertAfter(programName string, index int, text string)

func (*TokenStreamRewriter) InsertAfterDefault

func (tsr *TokenStreamRewriter) InsertAfterDefault(index int, text string)

func (*TokenStreamRewriter) InsertAfterToken

func (tsr *TokenStreamRewriter) InsertAfterToken(programName string, token Token, text string)

func (*TokenStreamRewriter) InsertBefore

func (tsr *TokenStreamRewriter) InsertBefore(programName string, index int, text string)

func (*TokenStreamRewriter) InsertBeforeDefault

func (tsr *TokenStreamRewriter) InsertBeforeDefault(index int, text string)

func (*TokenStreamRewriter) InsertBeforeToken

func (tsr *TokenStreamRewriter) InsertBeforeToken(programName string, token Token, text string)

func (*TokenStreamRewriter) Replace

func (tsr *TokenStreamRewriter) Replace(programName string, from, to int, text string)

func (*TokenStreamRewriter) ReplaceDefault

func (tsr *TokenStreamRewriter) ReplaceDefault(from, to int, text string)

func (*TokenStreamRewriter) ReplaceDefaultPos

func (tsr *TokenStreamRewriter) ReplaceDefaultPos(index int, text string)

func (*TokenStreamRewriter) ReplaceToken

func (tsr *TokenStreamRewriter) ReplaceToken(programName string, from, to Token, text string)

func (*TokenStreamRewriter) ReplaceTokenDefault

func (tsr *TokenStreamRewriter) ReplaceTokenDefault(from, to Token, text string)

func (*TokenStreamRewriter) ReplaceTokenDefaultPos

func (tsr *TokenStreamRewriter) ReplaceTokenDefaultPos(index Token, text string)

func (*TokenStreamRewriter) Rollback

func (tsr *TokenStreamRewriter) Rollback(programName string, instructionIndex int)

Rollback rolls back the instruction stream for a program so that the indicated instruction (via instructionIndex) is no longer in the stream. UNTESTED!

func (*TokenStreamRewriter) RollbackDefault

func (tsr *TokenStreamRewriter) RollbackDefault(instructionIndex int)

func (*TokenStreamRewriter) SetLastRewriteTokenIndex

func (tsr *TokenStreamRewriter) SetLastRewriteTokenIndex(programName string, i int)

type TokensStartState

type TokensStartState struct {
	BaseDecisionState
}

TokensStartState is the Tokens rule start state linking to each lexer rule start state.

func NewTokensStartState

func NewTokensStartState() *TokensStartState

type TraceListener

type TraceListener struct {
	// contains filtered or unexported fields
}

func NewTraceListener

func NewTraceListener(parser *BaseParser) *TraceListener

func (*TraceListener) EnterEveryRule

func (t *TraceListener) EnterEveryRule(ctx ParserRuleContext)

func (*TraceListener) ExitEveryRule

func (t *TraceListener) ExitEveryRule(ctx ParserRuleContext)

func (*TraceListener) VisitErrorNode

func (t *TraceListener) VisitErrorNode(_ ErrorNode)

func (*TraceListener) VisitTerminal

func (t *TraceListener) VisitTerminal(node TerminalNode)

type Transition

type Transition interface {
	Matches(int, int, int) bool
	// contains filtered or unexported methods
}

type Tree

type Tree interface {
	GetParent() Tree
	SetParent(Tree)
	GetPayload() interface{}
	GetChild(i int) Tree
	GetChildCount() int
	GetChildren() []Tree
}

func TreesGetChildren

func TreesGetChildren(t Tree) []Tree

TreesGetChildren returns an ordered list of all children of this node.

func TreesgetAncestors

func TreesgetAncestors(t Tree) []Tree

TreesgetAncestors returns a list of all ancestors of this node. The first node of the list is the root and the last node is the parent of this node.

type VisitEntry added in v4.13.0

type VisitEntry struct {
	// contains filtered or unexported fields
}

type VisitList added in v4.13.0

type VisitList struct {
	// contains filtered or unexported fields
}

type VisitRecord added in v4.13.0

type VisitRecord struct {
	// contains filtered or unexported fields
}

func NewVisitRecord added in v4.13.0

func NewVisitRecord() *VisitRecord

NewVisitRecord returns a new VisitRecord instance from the pool if available. Note that this "map" uses a pointer as a key because we are emulating the behavior of IdentityHashMap in Java, which uses the == operator to compare whether keys are equal: that is, whether the key is the same reference to an object, rather than whether it is .equals() to another object.
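The pointer-as-key idea above can be demonstrated in a few lines of standalone Go. The `record` type is invented for this sketch: keying a map by pointer compares references, so two value-equal objects at different addresses occupy separate entries, which is exactly the IdentityHashMap semantics being emulated.

```go
package main

import "fmt"

// record is an illustrative type; its field values are irrelevant to
// map lookup because the map key is the *pointer*, not the value.
type record struct{ id int }

func main() {
	seen := map[*record]bool{}
	a := &record{id: 1}
	b := &record{id: 1} // equal value, different reference
	seen[a] = true
	fmt.Println(seen[a]) // true: same reference hits the entry
	fmt.Println(seen[b]) // false: value equality does not matter
}
```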

func (*VisitRecord) Get added in v4.13.0

func (*VisitRecord) Put added in v4.13.0

func (*VisitRecord) Release added in v4.13.0

func (vr *VisitRecord) Release()

type WildcardTransition

type WildcardTransition struct {
	BaseTransition
}

func NewWildcardTransition

func NewWildcardTransition(target ATNState) *WildcardTransition

func (*WildcardTransition) Matches

func (t *WildcardTransition) Matches(symbol, minVocabSymbol, maxVocabSymbol int) bool

func (*WildcardTransition) String

func (t *WildcardTransition) String() string
