Operational semantics

From Wikipedia, the free encyclopedia
Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics). Operational semantics are classified in two categories: structural operational semantics (or small-step semantics) formally describes how the individual steps of a computation take place in a computer-based system; by contrast, natural semantics (or big-step semantics) describes how the overall results of executions are obtained. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and denotational semantics.

The operational semantics for a programming language describes how a valid program is interpreted as sequences of computational steps. These sequences are then the meaning of the program. In the context of functional programming, the final step in a terminating sequence returns the value of the program. (In general there can be many return values for a single program, because the program could be nondeterministic, and even for a deterministic program there can be many computation sequences, since the semantics may not specify exactly what sequence of operations arrives at that value.)

Perhaps the first formal incarnation of operational semantics was the use of the lambda calculus to define the semantics of Lisp.[1] Abstract machines in the tradition of the SECD machine are also closely related.

History


The concept of operational semantics was used for the first time in defining the semantics of Algol 68. The following statement is a quote from the revised ALGOL 68 report:

The meaning of a program in the strict language is explained in terms of a hypothetical computer which performs the set of actions that constitute the elaboration of that program. (Algol68, Section 2)

The first use of the term "operational semantics" in its present meaning is attributed to Dana Scott (Plotkin04). What follows is a quote from Scott's seminal paper on formal semantics, in which he mentions the "operational" aspects of semantics.

It is all very well to aim for a more 'abstract' and a 'cleaner' approach to semantics, but if the plan is to be any good, the operational aspects cannot be completely ignored. (Scott70)

Approaches


Gordon Plotkin introduced the structural operational semantics, Matthias Felleisen and Robert Hieb the reduction semantics,[2] and Gilles Kahn the natural semantics.

Small-step semantics


Structural operational semantics


Structural operational semantics (SOS, also called structured operational semantics or small-step semantics) was introduced by Gordon Plotkin in (Plotkin81) as a logical means to define operational semantics. The basic idea behind SOS is to define the behavior of a program in terms of the behavior of its parts, thus providing a structural, i.e., syntax-oriented and inductive, view on operational semantics. An SOS specification defines the behavior of a program in terms of a (set of) transition relation(s). SOS specifications take the form of a set of inference rules that define the valid transitions of a composite piece of syntax in terms of the transitions of its components.

For a simple example, we consider part of the semantics of a simple programming language; proper illustrations are given in Plotkin81 and Hennessy90, and other textbooks. Let C₁, C₂ range over programs of the language, and let s range over states (e.g. functions from memory locations to values). If we have expressions (ranged over by E), values (V) and locations (L), then a memory update command would have semantics:

$${\frac{\langle E,s\rangle \Rightarrow V}{\langle L:=E,\ s\rangle \longrightarrow (s\uplus (L\mapsto V))}}$$

Informally, the rule says that "if the expression E in state s reduces to value V, then the program L := E will update the state s with the assignment L = V".
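The state update s ⊎ (L ↦ V) in this rule can be sketched as a non-destructive update of a finite map. The Python snippet below is only an illustration: the dict-based state and the representation of expressions as callables are assumptions of the sketch, not part of the formal rule.

```python
# Sketch of the assignment rule, assuming a dict-based state and
# expressions represented as callables from states to values
# (both illustrative choices, not fixed by the formal rule).

def eval_expr(expr, s):
    """Premiss <E, s> => V: evaluate the expression in the current state."""
    return expr(s)

def assign(loc, expr, s):
    """Conclusion <L := E, s> --> s (+) (L |-> V): a non-destructive update."""
    v = eval_expr(expr, s)
    return {**s, loc: v}          # s is unchanged; the result maps loc to v

s0 = {"x": 1}
s1 = assign("x", lambda s: s["x"] + 41, s0)   # s1 maps x to 42; s0 still maps x to 1
```

Returning a fresh map rather than mutating s mirrors the rule's reading of states as mathematical functions.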

The semantics of sequencing can be given by the following three rules:

1. $${\frac{\langle C_{1},s\rangle \longrightarrow s'}{\langle C_{1};C_{2},\ s\rangle \longrightarrow \langle C_{2},s'\rangle}}$$
2. $${\frac{\langle C_{1},s\rangle \longrightarrow \langle C_{1}',s'\rangle}{\langle C_{1};C_{2},\ s\rangle \longrightarrow \langle C_{1}';C_{2},\ s'\rangle}}$$
3. $${\frac{}{\langle \mathbf{skip},s\rangle \longrightarrow s}}$$

Informally, the first rule says that, if program C₁ in state s finishes in state s′, then the program C₁;C₂ in state s will reduce to the program C₂ in state s′. (You can think of this as formalizing "You can run C₁, and then run C₂ using the resulting memory store.") The second rule says that if the program C₁ in state s can reduce to the program C₁′ with state s′, then the program C₁;C₂ in state s will reduce to the program C₁′;C₂ in state s′. (You can think of this as formalizing the principle for an optimizing compiler: "You are allowed to transform C₁ as if it were stand-alone, even if it is just the first part of a program.") The semantics is structural, because the meaning of the sequential program C₁;C₂ is defined by the meaning of C₁ and the meaning of C₂.

If we also have Boolean expressions over the state, ranged over by B, then we can define the semantics of the while command:

$${\frac{\langle B,s\rangle \Rightarrow \mathbf{true}}{\langle \mathbf{while}\ B\ \mathbf{do}\ C,\ s\rangle \longrightarrow \langle C;\mathbf{while}\ B\ \mathbf{do}\ C,\ s\rangle}}\qquad {\frac{\langle B,s\rangle \Rightarrow \mathbf{false}}{\langle \mathbf{while}\ B\ \mathbf{do}\ C,\ s\rangle \longrightarrow s}}$$
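Rules in this style can be read off directly as a one-step interpreter. The following Python sketch assumes an illustrative tuple encoding of commands and a dict-based state; neither representation is part of the formal definition.

```python
# A one-step ("small-step") interpreter for a small while-language, assuming
# an illustrative command encoding (not part of the formal definition):
#   ("skip",), ("assign", loc, expr), ("seq", c1, c2), ("while", b, c)
# where expr and b are callables from states (dicts) to values.

def step(cmd, s):
    """One transition: returns either a final state (dict) or a pair (cmd', s')."""
    kind = cmd[0]
    if kind == "skip":                       # <skip, s> --> s
        return s
    if kind == "assign":                     # <L := E, s> --> s (+) (L |-> V)
        _, loc, expr = cmd
        return {**s, loc: expr(s)}
    if kind == "seq":
        _, c1, c2 = cmd
        r = step(c1, s)
        if isinstance(r, dict):              # sequencing rule 1: C1 finished in s'
            return (c2, r)
        c1p, sp = r                          # sequencing rule 2: C1 stepped to C1'
        return (("seq", c1p, c2), sp)
    if kind == "while":                      # unfold on true, stop on false
        _, b, c = cmd
        return (("seq", c, cmd), s) if b(s) else s
    raise ValueError(f"unknown command {kind!r}")

def run(cmd, s):
    """Iterate step until a final state is reached."""
    cfg = (cmd, s)
    while not isinstance(cfg, dict):
        cfg = step(*cfg)
    return cfg

# countdown: while x > 0 do x := x - 1
countdown = ("while", lambda s: s["x"] > 0, ("assign", "x", lambda s: s["x"] - 1))
```

Each branch of `step` corresponds to exactly one inference rule, which is what makes the small-step style convenient to instrument: the intermediate configurations are first-class values.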

Such a definition allows formal analysis of the behavior of programs, permitting the study of relations between programs. Important relations include simulation preorders and bisimulation. These are especially useful in the context of concurrency theory.

Thanks to its intuitive look and easy-to-follow structure, SOS has gained great popularity and has become a de facto standard in defining operational semantics. As a sign of success, the original report (the so-called Aarhus report) on SOS (Plotkin81) has attracted more than 1000 citations according to CiteSeer, making it one of the most cited technical reports in computer science.

Reduction semantics


Reduction semantics is an alternative presentation of operational semantics. Its key ideas were first applied to purely functional call by name and call by value variants of the lambda calculus by Gordon Plotkin in 1975[3] and generalized to higher-order functional languages with imperative features by Matthias Felleisen in his 1987 dissertation.[4] The method was further elaborated by Matthias Felleisen and Robert Hieb in 1992 into a fully equational theory for control and state.[2] The phrase "reduction semantics" itself was first coined by Felleisen and Daniel P. Friedman in a PARLE 1987 paper.[5]

Reduction semantics are given as a set of reduction rules that each specify a single potential reduction step. For example, the following reduction rule states that an assignment statement can be reduced if it sits immediately beside its variable declaration:

$$\mathbf{let\ rec}\ x=v_{1}\ \mathbf{in}\ x\leftarrow v_{2};\ e\ \ \longrightarrow \ \ \mathbf{let\ rec}\ x=v_{2}\ \mathbf{in}\ e$$

To get an assignment statement into such a position it is "bubbled up" through function applications and the right-hand side of assignment statements until it reaches the proper point. Since intervening let expressions may declare distinct variables, the calculus also demands an extrusion rule for let expressions. Most published uses of reduction semantics define such "bubble rules" with the convenience of evaluation contexts. For example, the grammar of evaluation contexts in a simple call by value language can be given as

$$E::=[\,]\ {\big |}\ v\ E\ {\big |}\ E\ e\ {\big |}\ x\leftarrow E\ {\big |}\ \mathbf{let\ rec}\ x=v\ \mathbf{in}\ E\ {\big |}\ E;\ e$$

where e denotes arbitrary expressions and v denotes fully-reduced values. Each evaluation context includes exactly one hole [ ] into which a term is plugged in a capturing fashion. The shape of the context indicates, relative to this hole, where reduction may occur. To describe "bubbling" with the aid of evaluation contexts, a single axiom suffices:

$$E[\,x\leftarrow v;\ e\,]\ \ \longrightarrow \ \ x\leftarrow v;\ E[\,e\,]\qquad {\text{(lift assignments)}}$$

This single reduction rule is the lift rule from Felleisen and Hieb's lambda calculus for assignment statements. The evaluation contexts restrict the shape of the redex, but the rule itself is freely applicable in any term, including under lambdas.
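Decomposing a term into an evaluation context and a redex is the mechanical heart of this style. The sketch below uses a toy language of integers and additions as a stand-in for the grammar above (the constructors are illustrative assumptions); a context is represented as a Python function that plugs a term into its single hole.

```python
# Sketch of evaluation-context decomposition for a toy language of integers
# and additions ("add", l, r) -- an illustrative stand-in for the grammar
# above. A context is a function that plugs a term into its one hole [].

def is_value(t):
    return isinstance(t, int)

def decompose(t):
    """Split t into (context, redex), with the redex the left-most ("add", v, v)."""
    if is_value(t):
        return None                              # values do not decompose
    op, l, r = t
    if is_value(l) and is_value(r):
        return (lambda hole: hole, t)            # E = [], redex at the top
    if not is_value(l):                          # E = E' + r: reduce the left first
        ctx, redex = decompose(l)
        return (lambda hole: (op, ctx(hole), r), redex)
    ctx, redex = decompose(r)                    # E = v + E'
    return (lambda hole: (op, l, ctx(hole)), redex)

def step(t):
    """One standard-reduction step: decompose, contract the redex, plug back in."""
    d = decompose(t)
    if d is None:
        return None
    ctx, (_, v1, v2) = d
    return ctx(v1 + v2)

def evaluate(t):
    while not is_value(t):
        t = step(t)
    return t
```

The grammar of contexts determines which split `decompose` finds, so fixing that grammar is exactly what turns the free calculus into a deterministic standard reduction.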

Following Plotkin, showing the usefulness of a calculus derived from a set of reduction rules demands (1) a Church-Rosser lemma for the single-step relation, which induces an evaluation function, and (2) a Curry-Feys standardization lemma for the transitive-reflexive closure of the single-step relation, which replaces the non-deterministic search in the evaluation function with a deterministic left-most/outermost search. Felleisen showed that imperative extensions of this calculus satisfy these theorems. Consequences of these theorems are that the equational theory—the symmetric-transitive-reflexive closure—is a sound reasoning principle for these languages. However, in practice, most applications of reduction semantics dispense with the calculus and use the standard reduction only (and the evaluator that can be derived from it).

Reduction semantics are particularly useful given the ease with which evaluation contexts can model state or unusual control constructs (e.g., first-class continuations). In addition, reduction semantics have been used to model object-oriented languages,[6] contract systems, exceptions, futures, call-by-need, and many other language features. A thorough, modern treatment of reduction semantics that discusses several such applications at length is given by Matthias Felleisen, Robert Bruce Findler and Matthew Flatt in Semantics Engineering with PLT Redex.[7]

Big-step semantics


Natural semantics


Big-step structural operational semantics is also known under the names natural semantics, relational semantics and evaluation semantics.[8] Big-step operational semantics was introduced under the name natural semantics by Gilles Kahn when presenting Mini-ML, a pure dialect of ML.

One can view big-step definitions as definitions of functions, or more generally of relations, interpreting each language construct in an appropriate domain. Their intuitiveness makes them a popular choice for semantics specification in programming languages, but they have some drawbacks that make them inconvenient or impossible to use in many situations, such as languages with control-intensive features or concurrency.[9]

A big-step semantics describes in a divide-and-conquer manner how final evaluation results of language constructs can be obtained by combining the evaluation results of their syntactic counterparts (subexpressions, substatements, etc.).
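Such a divide-and-conquer definition reads off directly as a recursive evaluator. The Python sketch below assumes an illustrative tuple encoding of a small while-language; the encoding and the dict-based state are choices of the sketch, not part of any formal definition.

```python
# A big-step ("natural semantics") evaluator for a small while-language,
# assuming an illustrative command encoding:
#   ("skip",), ("assign", loc, expr), ("seq", c1, c2), ("while", b, c)
# where expr and b are callables from states (dicts) to values.

def eval_big(cmd, s):
    """Return the final state of <cmd, s> in one 'big step', by structural recursion."""
    kind = cmd[0]
    if kind == "skip":
        return s
    if kind == "assign":
        _, loc, expr = cmd
        return {**s, loc: expr(s)}
    if kind == "seq":                          # combine the sub-results
        _, c1, c2 = cmd
        return eval_big(c2, eval_big(c1, s))
    if kind == "while":                        # re-evaluate the whole loop
        _, b, c = cmd
        return eval_big(cmd, eval_big(c, s)) if b(s) else s
    raise ValueError(f"unknown command {kind!r}")

# countdown: while x > 0 do x := x - 1
countdown = ("while", lambda s: s["x"] > 0, ("assign", "x", lambda s: s["x"] - 1))
```

Note that for a diverging loop this recursion never returns, mirroring the fact that non-terminating computations have no finite big-step derivation tree.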

Comparison


There are a number of distinctions between small-step and big-step semantics that influence whether one or the other forms a more suitable basis for specifying the semantics of a programming language.

Big-step semantics have the advantage of often being simpler (needing fewer inference rules) and often directly correspond to an efficient implementation of an interpreter for the language (hence Kahn calling them "natural"). Both can lead to simpler proofs, for example when proving the preservation of correctness under some program transformation.[10]

The main disadvantage of big-step semantics is that non-terminating (diverging) computations do not have an inference tree, making it impossible to state and prove properties about such computations.[10]

Small-step semantics give more control over the details and order of evaluation. In the case of instrumented operational semantics, this allows the operational semantics to track, and the semanticist to state and prove, more accurate theorems about the run-time behaviour of the language. These properties make small-step semantics more convenient when proving type soundness of a type system against an operational semantics.[10]


References

  1. ^ McCarthy, John. "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". Archived from the original on 2013-10-04. Retrieved 2006-10-13.
  2. ^ a b Felleisen, M.; Hieb, R. (1992). "The Revised Report on the Syntactic Theories of Sequential Control and State". Theoretical Computer Science. 103 (2): 235–271. doi:10.1016/0304-3975(92)90014-7.
  3. ^ Plotkin, Gordon (1975). "Call-by-name, call-by-value and the λ-calculus" (PDF). Theoretical Computer Science. 1 (2): 125–159. doi:10.1016/0304-3975(75)90017-1. Retrieved July 22, 2021.
  4. ^ Felleisen, Matthias (1987). The calculi of Lambda-v-CS conversion: a syntactic theory of control and state in imperative higher-order programming languages (PDF) (PhD). Indiana University. Retrieved July 22, 2021.
  5. ^ Felleisen, Matthias; Friedman, Daniel P. (1987). "A Reduction Semantics for Imperative Higher-Order Languages". Proceedings of the Parallel Architectures and Languages Europe. International Conference on Parallel Architectures and Languages Europe. Vol. 1. Springer-Verlag. pp. 206–223. doi:10.1007/3-540-17945-3_12.
  6. ^ Abadi, M.; Cardelli, L. (8 September 2012). A Theory of Objects. Springer. ISBN 9781441985989.
  7. ^ Felleisen, Matthias; Findler, Robert Bruce; Flatt, Matthew (2009). Semantics Engineering with PLT Redex. The MIT Press. ISBN 978-0-262-06275-6.
  8. ^ University of Illinois CS422
  9. ^ Nipkow, Tobias; Klein, Gerwin (2014). Concrete Semantics (PDF). pp. 101–102. doi:10.1007/978-3-319-10542-0. ISBN 978-3-319-10541-3. Retrieved Mar 13, 2024.
  10. ^ a b c Xavier Leroy. "Coinductive big-step operational semantics".

Retrieved from "https://en.wikipedia.org/w/index.php?title=Operational_semantics&oldid=1315936400"