In the formal language theory of computer science, left recursion is a special case of recursion where a string is recognized as part of a language by the fact that it decomposes into a string from that same language (on the left) and a suffix (on the right). For instance, 1+2+3 can be recognized as a sum because it can be broken into 1+2, also a sum, and +3, a suitable suffix.
In terms of context-free grammar, a nonterminal is left-recursive if the leftmost symbol in one of its productions is itself (in the case of direct left recursion) or can be made itself by some sequence of substitutions (in the case of indirect left recursion).
A grammar is left-recursive if and only if there exists a nonterminal symbol A that can derive to a sentential form with itself as the leftmost symbol.[1] Symbolically,

A ⇒⁺ Aα

where ⇒⁺ indicates the operation of making one or more substitutions, and α is any sequence of terminal and nonterminal symbols.
Direct left recursion occurs when the definition can be satisfied with only one substitution. It requires a rule of the form

A → Aα

where α is a sequence of nonterminals and terminals. For example, the rule

Expression → Expression + Term

is directly left-recursive. A left-to-right recursive descent parser for this rule might look like
void Expression() {
    Expression();   /* recurses immediately, before any input is consumed */
    match('+');
    Term();
}
and such code would fall into infinite recursion when executed.
Indirect left recursion occurs when the definition of left recursion is satisfied via several substitutions. It entails a set of rules following the pattern

A₀ → β₀A₁α₀
A₁ → β₁A₂α₁
⋮
Aₙ → βₙA₀αₙ

where β₀, β₁, …, βₙ are sequences that can each yield the empty string, while α₀, α₁, …, αₙ may be any sequences of terminal and nonterminal symbols at all. Note that these sequences may be empty. The derivation

A₀ ⇒ β₀A₁α₀ ⇒* A₁α₀ ⇒ β₁A₂α₁α₀ ⇒* A₂α₁α₀ ⇒ ⋯ ⇒* A₀αₙ⋯α₁α₀

then gives A₀ as leftmost in its final sentential form.
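For instance, in a grammar with the two rules

A → Bx
B → Ay

neither rule is directly left-recursive, but the derivation A ⇒ Bx ⇒ Ayx puts A back in the leftmost position, so A (and likewise B) is indirectly left-recursive.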
Left recursion is commonly used as an idiom for making operations left-associative: that an expression a+b-c-d+e is evaluated as (((a+b)-c)-d)+e. In this case, that evaluation order could be achieved as a matter of syntax via the three grammatical rules

Expression → Term
Expression → Expression + Term
Expression → Expression - Term
These only allow parsing the a+b-c-d+e as consisting of the a+b-c-d and e, where a+b-c-d in turn consists of the a+b-c and d, while a+b-c consists of the a+b and c, etc.
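Concretely, expanding only the Expression nonterminal shows the shape of the parse:

Expression
⇒ Expression + Term   (the final Term will match e)
⇒ Expression - Term + Term   (d)
⇒ Expression - Term - Term + Term   (c)
⇒ Expression + Term - Term - Term + Term   (b)
⇒ Term + Term - Term - Term + Term   (a)

Each step splits off the rightmost operand, leaving the left part as a smaller Expression, so the deepest Expression is the leftmost operand.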
Left recursion often poses problems for parsers, either because it leads them into infinite recursion (as in the case of most top-down parsers) or because they expect rules in a normal form that forbids it (as in the case of many bottom-up parsers). Therefore, a grammar is often preprocessed to eliminate the left recursion.
The general algorithm to remove direct left recursion follows. Several improvements to this method have been made.[2] For a left-recursive nonterminal A, discard any rules of the form A → A and consider those that remain:

A → Aα₁ | ⋯ | Aαₙ | β₁ | ⋯ | βₘ

where:

- each α is a nonempty sequence of nonterminals and terminals, and
- each β is a sequence of nonterminals and terminals that does not start with A.
Replace these with two sets of productions, one set for A:

A → β₁A′ | ⋯ | βₘA′
and another set for the fresh nonterminal A′ (often called the "tail" or the "rest"):

A′ → α₁A′ | ⋯ | αₙA′ | ε
Repeat this process until no direct left recursion remains.
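Both versions generate the same strings: one of the β's followed by zero or more of the α's. The original rules build such a string by recursing on the left, whereas the fresh nonterminal A′ accumulates the α's on the right, which is what removes the left recursion.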
As an example, consider the rule set

Expression → Expression + Expression | Integer | String

This could be rewritten to avoid left recursion as

Expression → Integer Expression′ | String Expression′
Expression′ → + Expression Expression′ | ε
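To see why the rewritten grammar is friendly to recursive descent, here is a minimal sketch in the style of the earlier fragment. The helpers peek(), match(), next_token_is_integer(), Integer() and String() are assumed for illustration and are not part of the original text:

void ExpressionTail(void);

void Expression(void) {
    /* Expression → Integer Expression′ | String Expression′ */
    if (next_token_is_integer())   /* hypothetical one-token lookahead */
        Integer();
    else
        String();
    ExpressionTail();
}

void ExpressionTail(void) {
    /* Expression′ → + Expression Expression′ | ε */
    if (peek() == '+') {
        match('+');                /* input is consumed before recursing */
        Expression();
        ExpressionTail();
    }
    /* otherwise take the ε production and simply return */
}

Unlike the earlier fragment, every recursive call here happens only after at least one token has been consumed, so the parser cannot fall into infinite recursion.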
The above process can be extended to eliminate all left recursion, by first converting indirect left recursion to direct left recursion on the highest numbered nonterminal in a cycle.

Inputs: a grammar: a set of nonterminals A₁, …, Aₙ and their productions
Output: a modified grammar generating the same language but without left recursion

1. For each nonterminal Aᵢ:
   1.1 Repeat until an iteration leaves the grammar unchanged:
      1.1.1 For each rule Aᵢ → αᵢ, αᵢ being a sequence of terminals and nonterminals:
         1.1.1.1 If αᵢ begins with a nonterminal Aⱼ and j < i:
            1.1.1.1.1 Let βᵢ be αᵢ without its leading Aⱼ.
            1.1.1.1.2 Remove the rule Aᵢ → αᵢ.
            1.1.1.1.3 For each rule Aⱼ → αⱼ: add the rule Aᵢ → αⱼβᵢ.
   1.2 Remove direct left recursion for Aᵢ as described above.
Step 1.1.1 amounts to expanding the initial nonterminal in the right hand side of some rule, but only if j < i. If Aᵢ → Aⱼβᵢ was one step in a cycle of productions giving rise to a left recursion, then this has shortened that cycle by one step, but often at the price of increasing the number of rules.
The algorithm may be viewed as establishing a topological ordering on nonterminals: afterwards there can only be a rule Aᵢ → Aⱼγ if j > i. Note that this algorithm is highly sensitive to the nonterminal ordering; optimizations often focus on choosing this ordering well.
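For instance, with nonterminals ordered A₁, A₂ and the rules

A₁ → A₂a | b
A₂ → A₁c | d

there is no direct left recursion, but A₁ ⇒ A₂a ⇒ A₁ca forms a cycle. The rule A₂ → A₁c begins with the lower-numbered A₁, so step 1.1.1 expands A₁ there, giving

A₁ → A₂a | b
A₂ → A₂ac | bc | d

in which the left recursion on A₂, the highest numbered nonterminal in the cycle, is now direct and can be removed as described above.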
Although the above transformations preserve the language generated by a grammar, they may change the parse trees that witness strings' recognition. With suitable bookkeeping, tree rewriting can recover the originals, but if this step is omitted, the differences may change the semantics of a parse.
Associativity is particularly vulnerable; left-associative operators typically appear in right-associative-like arrangements under the new grammar. For example, starting with this grammar:

Expression → Expression - Term | Term
Term → Term * Factor | Factor
Factor → ( Expression ) | Integer
the standard transformations to remove left recursion yield the following:

Expression → Term Expression′
Expression′ → - Term Expression′ | ε
Term → Factor Term′
Term′ → * Factor Term′ | ε
Factor → ( Expression ) | Integer
Parsing the string "1 - 2 - 3" with the first grammar in an LALR parser (which can handle left-recursive grammars) would have resulted in the parse tree (unit productions through Term and Factor elided):

Expression
├── Expression
│   ├── Expression   (1)
│   ├── -
│   └── Term   (2)
├── -
└── Term   (3)

This parse tree groups the terms on the left, giving the correct semantics (1 - 2) - 3.
Parsing with the second grammar gives

Expression
├── Term   (1)
└── Expression′
    ├── -
    ├── Term   (2)
    └── Expression′
        ├── -
        ├── Term   (3)
        └── Expression′
            └── ε

which, properly interpreted, signifies 1 + (-2 + (-3)), also correct, but less faithful to the input and much harder to implement for some operators. Notice how terms to the right appear deeper in the tree, much as a right-recursive grammar would arrange them for 1 - (2 - 3).
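A common practical remedy, independent of the grammar transformation above, is to replace the left recursion with iteration in the parser itself and fold the operands leftwards as they are read. A minimal C-style sketch, where Node, Term(), peek(), match() and makeNode() are hypothetical helpers not taken from the original text:

Node *Expression(void) {
    Node *left = Term();                    /* leftmost operand first */
    while (peek() == '-') {                 /* iterate instead of recursing */
        match('-');
        Node *right = Term();
        left = makeNode('-', left, right);  /* folds leftwards: ((1 - 2) - 3) */
    }
    return left;
}

Because the loop extends the tree at its root rather than at a leaf, the result groups terms on the left exactly as the original left-recursive grammar intended.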
A formal grammar that contains left recursion cannot be parsed by an LL(k)-parser or other naive recursive descent parser unless it is converted to a weakly equivalent right-recursive form. In contrast, left recursion is preferred for LALR parsers because it results in lower stack usage than right recursion. However, more sophisticated top-down parsers can implement general context-free grammars by use of curtailment. In 2006, Frost and Hafiz described an algorithm which accommodates ambiguous grammars with direct left-recursive production rules.[3] That algorithm was extended by Frost, Hafiz and Callaghan in 2007 to a complete parsing algorithm that accommodates indirect as well as direct left recursion in polynomial time, and that generates compact polynomial-size representations of the potentially exponential number of parse trees for highly ambiguous grammars.[4] The authors then implemented the algorithm as a set of parser combinators written in the Haskell programming language.[5]