Formal grammar

From Wikipedia, the free encyclopedia
Figure: An example of a formal grammar with a parsed sentence. A formal grammar consists of a set of nonterminal symbols, terminal symbols, production rules, and a designated start symbol.

A formal grammar is a set of symbols and the production rules for rewriting some of them into every possible string of a formal language over an alphabet. A grammar does not describe the meaning of the strings, only their form.

In applied mathematics, formal language theory is the discipline that studies formal grammars and languages. Its applications are found in theoretical computer science, theoretical linguistics, formal semantics, mathematical logic, and other areas.

A formal grammar is a set of rules for rewriting strings, along with a "start symbol" from which rewriting starts. A grammar is therefore usually thought of as a language generator. However, it can also sometimes be used as the basis for a "recognizer": a function in computing that determines whether a given string belongs to the language or is grammatically incorrect. To describe such recognizers, formal language theory uses a separate formalism, known as automata theory. One of the interesting results of automata theory is that it is not possible to design a recognizer for certain formal languages.[1]

Parsing is the process of recognizing an utterance (a string in natural languages) by breaking it down into a set of symbols and analyzing each one against the grammar of the language. Most languages have the meanings of their utterances structured according to their syntax, a practice known as compositional semantics. As a result, the first step in describing the meaning of an utterance in a language is to break it down part by part and look at its analyzed form (known as its parse tree in computer science, and as its deep structure in generative grammar).

Introductory example


A grammar mainly consists of a set of production rules, rewrite rules for transforming strings. Each rule specifies a replacement of a particular string (its left-hand side) with another (its right-hand side). A rule can be applied to each string that contains its left-hand side, and produces a string in which an occurrence of that left-hand side has been replaced with its right-hand side.

Unlike a semi-Thue system, which is wholly defined by these rules, a grammar further distinguishes between two kinds of symbols, nonterminal and terminal symbols; each left-hand side must contain at least one nonterminal symbol. It also distinguishes a special nonterminal symbol, called the start symbol.

The language generated by the grammar is defined to be the set of all strings without any nonterminal symbols that can be generated from the string consisting of a single start symbol by (possibly repeated) application of its rules in whatever way possible. If there are essentially different ways of generating the same single string, the grammar is said to be ambiguous.

In the following examples, the terminal symbols are a and b, and the start symbol is S.

Example 1


Suppose we have the following production rules:

1. S → aSb
2. S → ba

Then we start with S, and can choose a rule to apply to it. If we choose rule 1, we obtain the string aSb. If we then choose rule 1 again, we replace S with aSb and obtain the string aaSbb. If we now choose rule 2, we replace S with ba and obtain the string aababb, and are done. We can write this series of choices more briefly, using symbols: S ⇒ aSb ⇒ aaSbb ⇒ aababb.

The language of the grammar is the infinite set {a^n b a b^n | n ≥ 0} = {ba, abab, aababb, aaababbb, …}, where a^k is a repeated k times (and n in particular represents the number of times production rule 1 has been applied). This grammar is context-free (only single nonterminals appear as left-hand sides) and unambiguous.
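Both the derivation process and a recognizer for this language are simple enough to sketch in code. The following Python snippet is an illustration of ours, not part of the article, and the function names are invented: `derive` applies rule 1 n times and then rule 2 once, while `in_language` checks membership in {a^n b a b^n | n ≥ 0} directly.

```python
def derive(n: int) -> str:
    """Apply rule 1 (S -> aSb) n times, then rule 2 (S -> ba) once."""
    s = "S"
    for _ in range(n):
        s = s.replace("S", "aSb")   # rule 1
    return s.replace("S", "ba")     # rule 2

def in_language(s: str) -> bool:
    """Recognize {a^n b a b^n | n >= 0} directly."""
    n = 0
    while n < len(s) and s[n] == "a":
        n += 1                      # count the leading a's
    return s[n:] == "ba" + "b" * n  # the rest must be "ba" followed by n b's
```

For example, derive(2) reproduces the derivation S ⇒ aSb ⇒ aaSbb ⇒ aababb shown above, returning "aababb".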

Examples 2 and 3


Suppose the rules are these instead:

1. S → a
2. S → SS
3. aSa → b

This grammar is not context-free, due to rule 3, and it is ambiguous, due to the multiple ways in which rule 2 can be used to generate sequences of S's.

However, the language it generates is simply the set of all nonempty strings consisting of a's and/or b's. This is easy to see: to generate a b from an S, use rule 2 twice to generate SSS, then rule 1 twice and rule 3 once to produce b. This means we can generate arbitrary nonempty sequences of S's and then replace each of them with a or b as we please.

That same language can alternatively be generated by a context-free, unambiguous grammar; for instance, the regular grammar with rules

1. S → aS
2. S → bS
3. S → a
4. S → b
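As a quick sanity check (our own sketch, not part of the article), one can enumerate a few derivation steps of this regular grammar and confirm that everything it produces is a nonempty string over {a, b}, i.e. matches the regular expression [ab]+:

```python
import re

PATTERN = re.compile(r"[ab]+\Z")  # nonempty strings over {a, b}

RULES = ("aS", "bS", "a", "b")    # the four right-hand sides for the nonterminal S

def generated(max_steps: int) -> set:
    """Enumerate all terminal strings derivable from S in at most max_steps rewrites."""
    results = set()
    frontier = {"S"}
    for _ in range(max_steps):
        nxt = set()
        for s in frontier:
            i = s.find("S")
            if i < 0:
                continue            # already terminal, nothing left to rewrite
            for rhs in RULES:
                nxt.add(s[:i] + rhs + s[i + 1:])
        results |= {t for t in nxt if "S" not in t}
        frontier = nxt
    return results
```

After four steps the enumeration already contains every string over {a, b} of length up to three, and nothing outside the language.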

Definition

Main article: Unrestricted grammar

The syntax of grammars


In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s,[2][3] a grammar G consists of the following components:

- A finite set N of nonterminal symbols.
- A finite set Σ of terminal symbols that is disjoint from N.
- A finite set P of production rules, each of the form

  (Σ ∪ N)* N (Σ ∪ N)* → (Σ ∪ N)*

  where * is the Kleene star operator and ∪ denotes set union. That is, each production rule maps from one string of symbols to another, where the first string (the "head") contains an arbitrary number of symbols provided at least one of them is a nonterminal. In the case that the second string (the "body") consists solely of the empty string, i.e. contains no symbols at all, it may be denoted with a special notation (often Λ, e or ε) in order to avoid confusion. Such a rule is called an erasing rule.[4]
- A distinguished symbol S ∈ N that is the start symbol.

A grammar is formally defined as the tuple (N, Σ, P, S). Such a formal grammar is often called a rewriting system or a phrase structure grammar in the literature.[5][6]
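The four-tuple maps naturally onto a small data structure. The following Python sketch is our own representation (the class and field names are invented), with the well-formedness conditions from the definition above expressed as assertions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grammar:
    """G = (N, Sigma, P, S): nonterminals, terminals, productions, start symbol."""
    nonterminals: frozenset
    terminals: frozenset
    productions: tuple  # pairs (lhs, rhs); each lhs must contain a nonterminal
    start: str

# Example 1 from the article: S -> aSb | ba
g1 = Grammar(
    nonterminals=frozenset({"S"}),
    terminals=frozenset({"a", "b"}),
    productions=(("S", "aSb"), ("S", "ba")),
    start="S",
)

# Conditions implied by the definition above
assert g1.start in g1.nonterminals
assert g1.nonterminals.isdisjoint(g1.terminals)
assert all(any(c in g1.nonterminals for c in lhs) for lhs, _ in g1.productions)
```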

Some mathematical constructs regarding formal grammars


The operation of a grammar can be defined in terms of relations on strings. Given a grammar G = (N, Σ, P, S):

- The relation ⇒_G ("G derives in one step") is defined on strings in (Σ ∪ N)* by: x ⇒_G y if and only if there exist strings u, v, p, q in (Σ ∪ N)* such that x = upv, y = uqv, and p → q is a rule in P.
- The relation ⇒*_G ("G derives in zero or more steps") is the reflexive transitive closure of ⇒_G.
- A sentential form is a string in (Σ ∪ N)* that can be derived in finitely many steps from the start symbol S; a sentential form containing no nonterminal symbols is called a sentence.[7]
- The language of G, denoted L(G), is the set of all sentences: L(G) = {w ∈ Σ* | S ⇒*_G w}.

The grammar G = (N, Σ, P, S) is effectively the semi-Thue system (N ∪ Σ, P), rewriting strings in exactly the same way; the only difference is that we distinguish specific nonterminal symbols, which must be replaced in rewrite rules, and we are only interested in rewritings from the designated start symbol S to strings without nonterminal symbols.

Example


For these examples, formal languages are specified using set-builder notation.

Consider the grammar G where N = {S, B}, Σ = {a, b, c}, S is the start symbol, and P consists of the following production rules:

1. S → aBSc
2. S → abc
3. Ba → aB
4. Bb → bb

This grammar defines the language L(G) = {a^n b^n c^n | n ≥ 1}, where a^n denotes a string of n consecutive a's. Thus, the language is the set of strings that consist of 1 or more a's, followed by the same number of b's, followed by the same number of c's.

Some examples of the derivation of strings in L(G) are:

- S ⇒₂ abc
- S ⇒₁ aBSc ⇒₂ aBabcc ⇒₃ aaBbcc ⇒₄ aabbcc
- S ⇒₁ aBSc ⇒₁ aBaBScc ⇒₂ aBaBabccc ⇒₃ aaBBabccc ⇒₃ aaBaBbccc ⇒₃ aaaBBbccc ⇒₄ aaaBbbccc ⇒₄ aaabbbccc

(On notation: P ⇒ᵢ Q reads "string P generates string Q by means of production i".)
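Derivations like these can be checked mechanically. The sketch below is our own brute-force code (practical only for tiny strings): it performs a breadth-first search over all sentential forms reachable from S under the four rules, pruning forms beyond a length bound so the search terminates.

```python
from collections import deque

RULES = [("S", "aBSc"), ("S", "abc"), ("Ba", "aB"), ("Bb", "bb")]

def derivable(target: str, max_len: int = 12) -> bool:
    """Breadth-first search over rewrites from 'S'; a sketch for tiny strings only."""
    seen = {"S"}
    queue = deque(["S"])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        if len(s) > max_len:
            continue                 # prune: strings only grow or stay the same length
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:           # try every occurrence of the left-hand side
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    return False
```

This confirms, for instance, that aabbcc is derivable while aabcc (two a's, one b) is not.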

The Chomsky hierarchy

Main articles: Chomsky hierarchy and Generative grammar

When Noam Chomsky first formalized generative grammars in 1956,[2] he classified them into types now known as the Chomsky hierarchy. The difference between these types is that they have increasingly strict production rules and can therefore express fewer formal languages. Two important types are context-free grammars (Type 2) and regular grammars (Type 3). The languages that can be described with such a grammar are called context-free languages and regular languages, respectively. Although much less powerful than unrestricted grammars (Type 0), which can in fact express any language that can be accepted by a Turing machine, these two restricted types of grammars are most often used because parsers for them can be efficiently implemented.[8] For example, all regular languages can be recognized by a finite-state machine, and for useful subsets of context-free grammars there are well-known algorithms to generate efficient LL parsers and LR parsers that recognize the corresponding languages those grammars generate.

Context-free grammars


A context-free grammar is a grammar in which the left-hand side of each production rule consists of only a single nonterminal symbol. This restriction is non-trivial; not all languages can be generated by context-free grammars. Those that can are called context-free languages.

The language L(G) = {a^n b^n c^n | n ≥ 1} defined above is not a context-free language; this can be strictly proven using the pumping lemma for context-free languages. By contrast, the language {a^n b^n | n ≥ 1} (at least one a followed by the same number of b's) is context-free, as it can be defined by the grammar G2 with N = {S}, Σ = {a, b}, start symbol S, and the following production rules:

1. S → aSb
2. S → ab

A context-free language can be recognized in O(n³) time (see Big O notation) by an algorithm such as Earley's recognizer. That is, for every context-free language, a machine can be built that takes a string as input and determines in O(n³) time whether the string is a member of the language, where n is the length of the string.[9] Deterministic context-free languages are a subset of the context-free languages that can be recognized in linear time.[10] There exist various algorithms that target either this set of languages or some subset of it.
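For this particular grammar G2 the general O(n³) machinery is not needed: {a^n b^n | n ≥ 1} is deterministic context-free, and a recognizer that mirrors the two rules directly runs in linear time. A minimal Python sketch of ours (the function name is invented):

```python
def in_L2(s: str) -> bool:
    """Recognize {a^n b^n | n >= 1} by matching G2's rules from the outside in."""
    if s == "ab":                    # rule 2: S -> ab (base case)
        return True
    # rule 1: S -> aSb, so a member must start with a, end with b,
    # and have a shorter member in between
    return len(s) > 2 and s[0] == "a" and s[-1] == "b" and in_L2(s[1:-1])
```

Note how the recursion reproduces the structure of a derivation: each recursive call undoes one application of rule 1.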

Regular grammars


In regular grammars, the left-hand side is again only a single nonterminal symbol, but now the right-hand side is also restricted. The right side may be the empty string, a single terminal symbol, or a single terminal symbol followed by a nonterminal symbol, but nothing else. (Sometimes a broader definition is used: one can allow longer strings of terminals, or single nonterminals without anything else, making languages easier to denote while still defining the same class of languages.)

The language {a^n b^n | n ≥ 1} defined above is not regular, but the language {a^n b^m | m, n ≥ 1} (at least one a followed by at least one b, where the numbers may be different) is, as it can be defined by the grammar G3 with N = {S, A, B}, Σ = {a, b}, start symbol S, and the following production rules:

1. S → aA
2. A → aA
3. A → bB
4. B → bB
5. B → ε

All languages generated by a regular grammar can be recognized in O(n) time by a finite-state machine. Although in practice regular grammars are commonly expressed using regular expressions, some forms of regular expression used in practice do not strictly generate the regular languages, and do not show linear recognition performance due to those deviations.
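The grammar G3 translates directly into such a finite-state machine: each nonterminal becomes a state, each rule X → tY a transition on terminal t, and the erasing rule B → ε makes B an accepting state. The Python encoding below is our own sketch of the resulting machine:

```python
# States mirror the nonterminals of G3: S (start), A (seen a's), B (seen b's).
# A missing entry means the machine rejects.
TRANSITIONS = {
    ("S", "a"): "A",   # rule 1: S -> aA
    ("A", "a"): "A",   # rule 2: A -> aA
    ("A", "b"): "B",   # rule 3: A -> bB
    ("B", "b"): "B",   # rule 4: B -> bB
}

def accepts(s: str) -> bool:
    """Linear-time recognizer for {a^n b^m | n, m >= 1}."""
    state = "S"
    for ch in s:
        state = TRANSITIONS.get((state, ch))
        if state is None:
            return False
    return state == "B"  # rule 5 (B -> epsilon) makes B the accepting state
```

Each input character is examined exactly once, which is what makes the recognition time O(n).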

Other forms of generative grammars


Many extensions and variations on Chomsky's original hierarchy of formal grammars have been developed, both by linguists and by computer scientists, usually either in order to increase their expressive power or in order to make them easier to analyze or parse. Some of the forms of grammar developed include:

- Tree-adjoining grammars, which increase the expressiveness of conventional generative grammars by allowing rewrite rules to operate on parse trees instead of just strings.[11]
- Affix grammars[12] and attribute grammars,[13][14] which allow rewrite rules to be augmented with semantic attributes and operations, useful both for increasing grammar expressiveness and for constructing practical language translation tools.

Recursive grammars

Not to be confused with Recursive language.

A recursive grammar is a grammar that contains recursive production rules. For example, a grammar for a context-free language is left-recursive if there exists a nonterminal symbol A that can be put through the production rules to produce a string with A as the leftmost symbol.[15] An example of a recursive construction in natural language is a clause nested within a sentence, set off by two commas.[16] All types of grammars in the Chomsky hierarchy can be recursive.
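For context-free rule sets, direct left recursion is easy to test for mechanically. The helper below is our own sketch (it deliberately ignores indirect left recursion, which requires a reachability analysis over the grammar): it flags every rule of the form A → A….

```python
def directly_left_recursive(productions) -> set:
    """Return the left-hand sides of rules whose right-hand side begins with
    the same nonterminal (direct left recursion only)."""
    return {lhs for lhs, rhs in productions if rhs.startswith(lhs)}
```

For example, the rule set S → Sa | a is flagged as left-recursive in S, while the right-recursive variant S → aS | a is not.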

Analytic grammars


Though there is a tremendous body of literature on parsing algorithms, most of these algorithms assume that the language to be parsed is initially described by means of a generative formal grammar, and that the goal is to transform this generative grammar into a working parser. Strictly speaking, a generative grammar does not in any way correspond to the algorithm used to parse a language, and various algorithms place different restrictions on the form of production rules that are considered well-formed.

An alternative approach is to formalize the language in terms of an analytic grammar in the first place, which more directly corresponds to the structure and semantics of a parser for the language. Examples of analytic grammar formalisms include the following:

- Top-down parsing language (TDPL): a highly minimalist analytic grammar formalism developed in the early 1970s to study the behavior of top-down parsers, embodied in the TMG compiler-compiler.[17]
- Link grammars: a form of analytic grammar designed for linguistics, which derives syntactic structure by examining the positional relationships between pairs of words.[18][19]
- Parsing expression grammars (PEGs): a more recent generalization of TDPL, designed around the practical needs of expressing languages for recursive descent parsers with backtracking.[20]

References

  1. ^ Meduna, Alexander (2014), Formal Languages and Computation: Models and Their Applications, CRC Press, p. 233, ISBN 9781466513457. For more on this subject, see undecidable problem.
  2. ^ a b Chomsky, Noam (Sep 1956). "Three models for the description of language". IRE Transactions on Information Theory. 2 (3): 113–124. doi:10.1109/TIT.1956.1056813. S2CID 19519474.
  3. ^ Chomsky, Noam (1957). Syntactic Structures. The Hague: Mouton.
  4. ^ Ashaari, S.; Turaev, S.; Okhunov, A. (2016). "Structurally and Arithmetically Controlled Grammars" (PDF). International Journal on Perceptive and Cognitive Computing. 2 (2): 27. doi:10.31436/ijpcc.v2i2.39. Retrieved 2024-11-05.
  5. ^ Ginsburg, Seymour (1975). Algebraic and automata theoretic properties of formal languages. North-Holland. pp. 8–9. ISBN 978-0-7204-2506-2.
  6. ^ Harrison, Michael A. (1978). Introduction to Formal Language Theory. Reading, Mass.: Addison-Wesley Publishing Company. p. 13. ISBN 978-0-201-02955-0.
  7. ^ Sentential Forms, Context-Free Grammars, David Matuszek. Archived 2019-11-13 at the Wayback Machine.
  8. ^ Grune, Dick & Jacobs, Ceriel H., Parsing Techniques – A Practical Guide, Ellis Horwood, England, 1990.
  9. ^ Earley, Jay, "An Efficient Context-Free Parsing Algorithm," Communications of the ACM, Vol. 13 No. 2, pp. 94–102, February 1970. Archived 2020-05-19 at the Wayback Machine.
  10. ^ Knuth, D. E. (July 1965). "On the translation of languages from left to right". Information and Control. 8 (6): 607–639. doi:10.1016/S0019-9958(65)90426-2.
  11. ^ Joshi, Aravind K., et al., "Tree Adjunct Grammars," Journal of Computer Systems Science, Vol. 10 No. 1, pp. 136–163, 1975.
  12. ^ Koster, Cornelis H. A., "Affix Grammars," in ALGOL 68 Implementation, North-Holland Publishing Company, Amsterdam, pp. 95–109, 1971.
  13. ^ Knuth, Donald E., "Semantics of Context-Free Languages," Mathematical Systems Theory, Vol. 2 No. 2, pp. 127–145, 1968.
  14. ^ Knuth, Donald E., "Semantics of Context-Free Languages (correction)," Mathematical Systems Theory, Vol. 5 No. 1, pp. 95–96, 1971.
  15. ^ Notes on Formal Language Theory and Parsing, James Power, Department of Computer Science, National University of Ireland, Maynooth, Co. Kildare, Ireland. Archived 2017-08-28 at the Wayback Machine.
  16. ^ Borenstein, Seth (April 27, 2006). "Songbirds grasp grammar, too". Northwest Herald. p. 2 – via Newspapers.com.
  17. ^ Birman, Alexander, The TMG Recognition Schema, Doctoral thesis, Princeton University, Dept. of Electrical Engineering, February 1970.
  18. ^ Sleator, Daniel D. & Temperley, Davy, "Parsing English with a Link Grammar," Technical Report CMU-CS-91-196, Carnegie Mellon University Computer Science, 1991.
  19. ^ Sleator, Daniel D. & Temperley, Davy, "Parsing English with a Link Grammar," Third International Workshop on Parsing Technologies, 1993. (Revised version of the above report.)
  20. ^ Ford, Bryan, Packrat Parsing: a Practical Linear-Time Algorithm with Backtracking, Master's thesis, Massachusetts Institute of Technology, Sept. 2002.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Formal_grammar&oldid=1321962171"
