Dynamic semantics is a perspective on natural language semantics that emphasizes the growth of information in time. It is an approach to meaning representation where pieces of text or discourse are viewed as instructions to update an existing context with new information, the result of which is an updated context. In a slogan: meaning is context change potential.
It is important to be aware of the abstractness of this perspective so as to guard against various non sequiturs. For one thing, one could easily think that dynamic semantics or update semantics is committed at least in part to an internalist idea of semantics since the information states are “internal”—in the sense that they are wholly contained in the individual mind/brain. In other words, one might think that the information states of dynamic semantics are what Putnam (1975) calls “states in the sense of methodological solipsism”. See the entries on scientific realism, computational theory of mind, externalism about mental content, and narrow mental content. However, the general framework says nothing about what the states are. The state could very well include the environment in which the interpreter is embedded and thus contain an “external” component.
A second possible misunderstanding is that dynamic semantics or update semantics is in complete opposition to classical truth conditional semantics (compare the entries on classical logic and first-order model theory). In fact, as this entry will soon make clear, what dynamic semantics provides is a generalization of truth conditional semantics rather than a radically different alternative. The classical meanings become the preconditions for success of the discourse actions. Dynamic semanticists claim that compositional meanings have the nature of functions or relations and the classical meanings are recoverable from the relational dynamic meanings as projections onto their “input” coordinate.
The point of the use of an abstract framework is not to give empirical predictions. This is the task of specific realizations inside the framework. The framework of dynamic semantics (i) provides a direction of thinking and (ii) allows us to import methods from the mathematical study of the framework. It follows that the question whether natural language meaning is intrinsically dynamic does not have an empirical answer. Still, what can be said is that the study of interpretation as a linearly-ordered process has proven quite fruitful and rewarding.
Since dynamic semantics focuses on the discourse actions of sender and receiver, it is in a sense close to use-oriented approaches to meaning in philosophy such as the work of Wittgenstein and Dummett. However, easy identifications between dynamic semantics and these approaches are to be avoided. Dynamic semantics as an abstract framework is compatible with many philosophical ways of viewing meaning and interpretation. Dynamic semantics aims to model meaning and interpretation. You can do that without answering broader philosophical questions such as the question of what it is that makes it possible for the subject to be related to these meanings at all. For example, in dynamic predicate logic we take the meaning of horse as given without making any substantial claim about what it means for a subject to have the concept of horse; we just stipulate the subject has it. This is not to say such questions—which are at the center of the work of Wittgenstein and Dummett—should not ultimately be answered: it’s just that a model can be of interest even if it does not answer them. (Note that dynamic semantics tries to give a systematic and compositional account of meaning, which makes it markedly different in spirit from Wittgenstein’s later philosophy.)
One approach to dynamic semantics is discourse representation theory (DRT, Kamp 1981). (Closely related to Kamp’s approach is Irene Heim’s file change semantics (FCS, Heim 1983a) and the discourse semantics of Seuren 1985.) Meanings in DRT are so-called discourse representation structures (DRSs). These structures are a type of database that contains specific pieces of information. In and of itself a DRS is a static object, but DRT can be said to be a dynamic semantic framework because it allows us to understand the process of composing meanings as a process of merging discourse representation structures. In this way, information change becomes an integral part of the interpretation process.
Our main focus in this entry is a second approach to dynamic semantics, although we will compare things to DRT along the way. In this second approach, dynamic meanings are types of actions, things that are individuated by the changes they effect. This is the approach associated with dynamic predicate logic (DPL, Groenendijk and Stokhof 1991a). According to this dynamic semantic tradition, a meaning is a specification of how a receiver’s information state would be modified. It could for instance be a function that maps an old information state to one which has been updated with the information that the meaning embodies. Alternatively, it could be a relation that expresses the kind of information change that the meaning brings about. (For early work in this tradition, see Groenendijk and Stokhof 1991a,b; Muskens 1991; Dekker 1993; Vermeulen 1993; van Eijck 1994; Vermeulen 1994; Krahmer 1995; van den Berg 1996; Groenendijk et al. 1996; Aloni 1997; Muskens et al. 1997.)
Interpretation of declarative sentences can be viewed as a product or as a process. In the product perspective, one focuses on the notion of truth in a given situation. In the process perspective, interpretation of a proposition is viewed as an information updating step that allows us to replace a given state of knowledge by a new, more accurate knowledge state. Dynamic semantics focuses on interpretation as a process.
Update semantics is a particular way in which the interpretation-as-process idea can be implemented. The central idea behind update semantics is very simple. We start with a simple model of a hearer/receiver who receives items of information sequentially. At every moment the hearer is in a certain state: she possesses certain information. This state is modified by the incoming information in a systematic way. We now analyze the meaning of the incoming items as their contribution to the change of the information state of the receiver. Thus, meanings are seen as actions, or, more precisely, as action types: they are not the concrete changes of some given state into another, but what such concrete changes have in common.
Propositional logic (the logic of negation, disjunction and conjunction) can be viewed as an update logic as follows. Consider the case where we have three basic propositions \(p, q\) and \(r\), and we know nothing about their truth. Then there are eight possibilities: \(\{ \bar{p} \bar{q} \bar{r}, p \bar{q} \bar{r}, \bar{p} q \bar{r}, \bar{p} \bar{q} r, pq \bar{r}, p \bar{q} r, \bar{p} qr, pqr \}\). Here \(\bar{p} \bar{q} \bar{r}\) should be read as: none of \(p, q, r\) is true, \(p \bar{q} \bar{r}\) as: \(p\) is true but \(q\) and \(r\) are false, and so on. If now \(\neg p\) (“not \(p\)”) is announced, four of these disappear, and we are left with \(\{\bar{p} \bar{q} \bar{r}, \bar{p} q \bar{r}, \bar{p} \bar{q} r, \bar{p} qr\}\). If next \(q \vee \neg r\) (“\(q\) or not \(r\)”) is announced, the possibility \(\bar{p} \bar{q} r\) gets ruled out, and we are left with \(\{ \bar{p} \bar{q} \bar{r}, \bar{p} q \bar{r}, \bar{p} qr \}\). And so on. We can view the meaning of propositions like \(\neg p\) and \(q \vee \neg r\) as mappings from sets of possibilities to subsets thereof.
Sets of possibilities represent states of knowledge. In the example, \(\{ \bar{p} \bar{q} \bar{r}, p \bar{q} \bar{r}, \bar{p} q \bar{r}, \bar{p} \bar{q} r, pq \bar{r}, p \bar{q} r, \bar{p} qr, pqr \}\) represents the state of complete ignorance about propositions \(p, q, r\). Singleton sets like \(\{ pq \bar{r} \}\) represent states of full knowledge about these propositions, and the empty set \(\varnothing\) represents the inconsistent state that results from processing incompatible statements about \(p, q\) and \(r\). Here we spell out the dynamic meanings of the statements of our propositional language:
This gives the meanings of the propositional connectives as operations from an old context representing a state of knowledge to a new context representing the state of knowledge that results from processing the propositional information.
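The update view of propositional logic just described can be sketched in a few lines of Python. This is a toy illustration, not part of the entry's formal apparatus: possibilities are modeled as frozensets of the atoms that are true, a context is a set of possibilities, and each connective is an operation from contexts to contexts.

```python
from itertools import chain, combinations

ATOMS = ("p", "q", "r")

def all_possibilities():
    # The eight valuations over p, q, r, each a frozenset of the true atoms.
    return {frozenset(s) for s in chain.from_iterable(
        combinations(ATOMS, n) for n in range(len(ATOMS) + 1))}

# An update maps a context (a set of possibilities) to a new context.
def atom(a):
    return lambda ctx: {w for w in ctx if a in w}

def neg(upd):
    return lambda ctx: ctx - upd(ctx)

def conj(u1, u2):
    return lambda ctx: u2(u1(ctx))           # process conjuncts in sequence

def disj(u1, u2):
    return lambda ctx: u1(ctx) | u2(ctx)     # union of the two updates

ctx = all_possibilities()                    # complete ignorance: 8 options
ctx = neg(atom("p"))(ctx)                    # announce "not p": 4 remain
ctx = disj(atom("q"), neg(atom("r")))(ctx)   # announce "q or not r": 3 remain
print(sorted("".join(sorted(w)) for w in ctx))  # -> ['', 'q', 'qr']
```

The final context matches the run-through in the text: after the two announcements, only \(\bar{p} \bar{q} \bar{r}, \bar{p} q \bar{r}\) and \(\bar{p} qr\) survive, and updating further with incompatible information would yield the empty (inconsistent) context.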
It is instructive to compare the actions of update semantics to programming statements and their execution. Such a comparison provides a first glimpse into how quantification works within a dynamic setting. Programming statements of imperative languages are interpreted (or “executed”) in the context of a machine state, where machine states can be viewed as allocations of values to registers. Assume the registers are named by variables \(x, y, z\), and that the contents of the registers are natural numbers. Then the following is a machine state:
\[\begin{array}{|c|c|}
\hline
x & 12 \\
\hline
y & 117 \\
\hline
z & 3 \\
\hline
\end{array}\]
If the statement \(z := x\) is executed, i.e., “interpreted”, in this state (in C syntax, this statement would have the simpler form \(z = x\)), the result is a new machine state:
\[\begin{array}{|c|c|}
\hline
x & 12 \\
\hline
y & 117 \\
\hline
z & 12 \\
\hline
\end{array}\]
If the sequence of statements \(x := y\); \(y := z\) is executed in this state, the result is:
\[\begin{array}{|c|c|}
\hline
x & 117 \\
\hline
y & 12 \\
\hline
z & 12 \\
\hline
\end{array}\]
This illustrates that the result of the sequence \(z := x\); \(x := y\); \(y := z\) is that the values of \(x\) and \(y\) are swapped, with the side effect that the old value of \(z\) gets lost. In other words, the meaning of the program \(z := x\); \(x := y\); \(y := z\) can be viewed as a mapping from an input machine state \(s\) to an output machine state \(s'\) that differs from \(s\) in several respects: \(s'(x) = s(y)\) and \(s'(y) = s(x)\) (that is, the input values of \(x\) and \(y\) are swapped in the output state), and \(s'(z) = s'(y)\).
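The register-swap example can be run directly if we model machine states as dictionaries from variable names to values and statements as state-to-state functions. This is a minimal sketch of the state-transformer view, not a fragment of any particular imperative language:

```python
def assign(target, source):
    # The statement "target := source" as a state-to-state function.
    def run(state):
        new = dict(state)
        new[target] = state[source]
        return new
    return run

def seq(*statements):
    # Sequential composition: execute the statements left to right.
    def run(state):
        for statement in statements:
            state = statement(state)
        return state
    return run

swap = seq(assign("z", "x"), assign("x", "y"), assign("y", "z"))
print(swap({"x": 12, "y": 117, "z": 3}))  # -> {'x': 117, 'y': 12, 'z': 12}
```

The output reproduces the third machine state table above: \(x\) and \(y\) are swapped, and the old value of \(z\) is lost.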
Now consider the existential quantifier “there exists an \(x\) such that \(A\)”. Suppose we add this quantifier to an imperative programming language. What would be its meaning? It would be an instruction to replace the old value of \(x\) by a new value, where the new value has property \(A\). We can decompose this into a part “there exists \(x\)” and a test “\(A\)”. A formula/instruction is a test if the update contributed by it takes the states in the input context one at a time and tests that they satisfy a particular condition. If they do, they are included in the output context; if they don’t, they are discarded. That is, a test is an update that takes an input context and outputs a context that is a subset of the input context. All the formulas of propositional logic in the Propositional logic as an update logic section above are tests.
The two parts “there exists \(x\)” and the test “\(A\)” are glued together by sequential composition: “\(\exists x\); \(A\)”. Focusing on the part “\(\exists x\)”, what would be its natural meaning? An instruction to replace the old value of \(x\) by some arbitrary new value. This is again a relation between input states and output states, but the difference with definite assignments like \(x := y\) is that now the relation is not a function. In fact, this relational meaning of quantifiers shows up in the well known Tarski-style truth definition for first order logic (compare the entry on Tarski’s truth definitions):
\(\exists x\phi\) is true in a model \(M\) relative to a variable assignment \(\alpha\) iff (if and only if) there is some variable assignment \(\beta\) such that \(\beta\) differs from \(\alpha\) at most with respect to the value it assigns to \(x\) and such that \(\phi\) is true in \(M\) relative to assignment \(\beta\).
Implicit in the Tarskian definition is a relation that holds between assignment \(\alpha\) and assignment \(\beta\) iff for all variables \(y\) that are different from \(x\), it is the case that \(\alpha(y) = \beta(y)\). This relation is often called a random reset of \(x\) and is written as [\(x\)]. For any variable \(x\), the binary relation between total assignments [\(x\)] is an equivalence relation between assignments, i.e., it is a reflexive, symmetric and transitive binary relation. Below, we see how such relations are put to work in a dynamicised version of first order predicate logic.
Adopting [\(x\)] as the meaning of “\(\exists x\)”, note that its meaning is quite different in nature from that of a test in that it creates new values in the output context. In contrast, the output context resulting from an update with a test is always a subset of the input context and can therefore never contain anything new relative to the input context.
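On a finite stock of assignments the random-reset relation [\(x\)] can be computed explicitly and its equivalence-relation properties checked mechanically. The two-variable setup and three-element domain below are arbitrary choices for illustration:

```python
from itertools import product

VARS = ("x", "y")
DOM = (0, 1, 2)

# All nine total assignments over VARS, as dictionaries.
ASSIGN = [dict(zip(VARS, vals)) for vals in product(DOM, repeat=len(VARS))]

def reset(var):
    # alpha [var] beta iff alpha and beta agree on every variable except var.
    return {(i, j) for i, a in enumerate(ASSIGN) for j, b in enumerate(ASSIGN)
            if all(a[v] == b[v] for v in VARS if v != var)}

R = reset("x")
assert all((i, i) in R for i in range(len(ASSIGN)))                  # reflexive
assert all((j, i) in R for (i, j) in R)                              # symmetric
assert all((i, k) in R for (i, j) in R for (j2, k) in R if j == j2)  # transitive
print(len(R))  # 27 pairs: for each of the 3 y-values, 3 x 3 resettings of x
```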
Information states are often called contexts, since the state is a precondition for the “interpretation”, i.e., semantic evaluation, of expressions in a formal or natural language. The use of the word “context” also makes it clear that we are not interested in the total state of the receiver but only in aspects of it relevant to the interpretation of the expressions/informational items we are focusing on. Thus, meanings are often called context change potentials in the dynamic tradition.
Although it is broadly speaking true that the changes brought about by meanings in dynamic semantics concern aspects of context, it is important to note that semanticists may mean various things when they talk about context (compare the entries on epistemic contextualism and indexicals), and these different views engender varieties of dynamic semantics that deal with a variety of issues. Some of these issues are: constructing an appropriate mechanism for pronominal reference (compare the entries on anaphora and reference), explaining the semantics of conditionals (compare the entries on conditionals and the logic of conditionals), giving a semantic treatment of the distinction between assertion and presupposition (compare the entries on assertion, speech acts, implicature, pragmatics) and developing a theory of “presupposition projection”, explaining how the interpretation of discourse is influenced and guided by the common ground that exists between speaker and hearer, and developing a theory of how this common ground develops as the discourse proceeds (compare the entries on pragmatics and implicature).
Context plays a role in two separate distinctions. The first distinction is between context and that which modifies the context. Here the context is the information state or a suitable abstraction thereof (compare the entry on semantic conceptions of information). The context modifier is (the meaning of) the received informational item. The information cannot be received without the correct kind of presupposed information state. The proper analogues in classical static predicate logic (compare the entries on classical logic and first-order model theory) are as follows: the information state is an assignment (environment) or a set of assignments, and the received information is a set of assignments. The second distinction is between context and content. Here the context is something like the storage capacity of the receiver and various other features that could influence how new expressions/informational items are interpreted. The content is the (factual, truth conditional) information that is stored. Thus, e.g., the context in this sense could be a set of registers/variables or, in DRT/FCS terms, discourse referents or files. The content would then be some set of assignments or, perhaps, world/assignment pairs constraining the values of these discourse referents and the set of worlds that are live candidates for the actual world.
Here is an example to illustrate the distinctions. Suppose we view an information state as a pair of a finite set of discourse referents and a set of world/assignment pairs, where the assignments have as domain the given finite set of discourse referents. Such a state would be a context-in-the-first-sense and the set of discourse referents would be a context-in-the-second-sense. One basic kind of update would be update of content: here we constrain the set of world/assignment pairs, and leave the set of referents constant. A second basic kind of update would be extension of the set of referents: we extend our allocated storage capacity. We modify the given world/assignment pairs to pairs of worlds and extended assignments, where our extended assignments are constrained by the old ones, but take all possible values on the new referents. Thus, the update process in our example is two-dimensional: we have both update of content and update of context-in-the-second-sense.
The motivation for a dynamic semantic framework for natural language comes first and foremost from potential dependencies between the reference of a personal pronoun and that of an indefinite noun phrase. The simplest example of such a dependency is that of coreferential discourse anaphora, as in:
The observation is that this sequence of sentences has the same meaning as the single sentence:
If we assume that indefinites are existential quantifiers, then the analysis of (2) is easy. It simply says that there exists an \(x\) that is a student, that met Mary yesterday and that needed her help then. In predicate logic:
However, a similar analysis is unavailable for the equivalent two-sentence example in (1). This is because interpretation is compositional (see the entry on compositionality for discussion) and in our compositional analysis, we will first come to an analysis of Mary met a student yesterday, which will have the form \(\exists x(\texttt{student}(x)\wedge\texttt{met}(m,x))\). Likewise, the second sentence will correspond to \(\texttt{need-help}(x)\). Assuming that the default mode of combining multiple sentences is to conjoin them, we now arrive at:
The final occurrence of \(x\) is not bound and so in classical predicate logic, we have not arrived at an equivalent translation for (1) and (2). The upshot is that if we want to account for the equivalence between (1) and (2) within a static semantic framework, we will not be able to maintain a compositional interpretation for individual sentences. We will have to assume that the discourse in (1) is interpreted as a whole.
This is counter-intuitive. We know what the individual sentences in (1) mean and we would like to capture the potential these meanings have in combining with other meanings to form a meaningful whole, one which corresponds to a sequence of sentences. Dynamic semantics allows us to deliver a fully compositional analysis of meaning at the sentential and supra-sentential level. It does so by guaranteeing that, in contrast to classical predicate logic, (3) and (4) are equivalent in a dynamic interpretation of classical predicate logic syntax. In particular, the following is valid in dynamic predicate logic:
\[\exists x(\psi \wedge \phi) \textrm{ iff } \exists x(\psi) \wedge \phi\]
In this kind of dynamic semantics for natural language, the meaning of a sentence does not correspond to a set of truth-conditions, but rather to an action performed on a context. There are two kinds of actions. Predications like \(\texttt{need-help}(x)\) or \(\texttt{met}(m,x)\) are tests. They merely check if every state/assignment in the current context assigns a value to \(x\) that satisfies the relevant predicate; if (and only if) this is the case, the test passes the unaltered assignment on to the output context. In contrast, the existential quantifier is not a test. It has the potential to alter the context by randomly resetting the value of its associated variable. So, \(\exists x(\psi)\) takes a context, randomly changes the value of \(x\) in each assignment in the context and passes these changed assignments on to the output context if they also satisfy the condition contributed by the test \(\psi\).
One of the main consequences of this semantics is that the scope of the existential quantifier is in principle limitless. It changes the value of some variable and until a further change to that variable occurs, any subsequent test accesses the particular value that was set. This also means that the semantics of existential quantification can be given without reference to any scope: the meaning of \(\exists x\) is the action that takes a context and returns the same context with at most the value of \(x\) randomly replaced by another value. (We will work this out in detail below.)
At this point, two senses of the term dynamic semantics (as applied to natural language) emerge. First and foremost, dynamic semantics is the general idea that logical statements do not express truth-conditions but rather actions on contexts (where contexts can be conceptualized in various ways). A second understanding of the term dynamic semantics is a set of theoretical positions taken within debates concerning the semantics of certain natural language phenomena, most notably pronominal anaphora. (See below for a similar take on dynamic semantics with respect to presupposition.) For the case of anaphora, this theoretical understanding embodies the combination of two hypotheses: (i) pronouns correspond to variables; (ii) indefinites are non-quantificational; they simply contribute a dynamic variable assignment update. As is clear from the second hypothesis, this theoretical use of the term dynamic semantics presupposes the more general view that meanings are actions on contexts.
Before we turn to defining dynamic predicate logic, we should note that the route dynamic semantics takes to account for anaphora is by no means the only one to be found in the literature. We could also choose to give up the idea that pronouns correspond to variables and instead assign them a more intricate meaning, one akin to that of definite descriptions. In the contemporary tradition, such ideas emerge as early as Quine 1960 and Geach 1962, before being brought to maturity by (especially) Evans (1977, 1980), Parsons (1978, Other Internet Resources), Heim (1990), and Elbourne (2001, 2005). See Nouwen (forthcoming) for discussion.
The previous subsection gave a first glimpse into the basic aim of a dynamic semantic framework, which is to define a logical semantics in which statements express actions and, specifically, in which existential quantification has the potential to reset variables, thus changing the context. We get our clue about how to do this by examining the definition of existential quantification in ordinary predicate logic. Suppose we work with total assignments on a fixed set of variables \(\textsf{VAR}\) over a fixed domain \(D\). The set of total assignments \(\textsf{ASSIGN}\) is therefore the set of all (total) functions from \(\textsf{VAR}\) to \(D\).
Let the meaning of atomic formulas like \(P(x)\) be the set \(F\) of all assignments \(\alpha\) such that \(\alpha(x)\) is an object satisfying \(P\).
Now define: \[\alpha[x]\beta := \forall v \in \textsf{VAR}\setminus\{x\}\ (\alpha(v) = \beta(v)).\] So [\(x\)] is the binary relation “assignment \(\beta\) is a result of (at most) resetting the value of the variable \(x\) in assignment \(\alpha\)”. As already mentioned, this is an equivalence relation over variable assignments. Now the meaning \(G\) of \(\exists x P(x)\) will be: \[G := \{\alpha \in \textsf{ASSIGN} \mid \exists \beta \in F\ (\alpha[x]\beta) \}.\] Thus, \(G\) is the set of assignments that can be successfully reset with respect to \(x\) and obtain an assignment in \(F\) as a result of this resetting. Viewed differently, \(G\) is the domain of the relation \(R\) given by \[\alpha R\beta := \alpha[x]\beta \textrm{ and } \beta \in F.\]
We could say that \(G\) is the precondition for the resetting action \(R\). Now the idea of \(\textsf{DPL}\) is to take the meaning of \(\exists x P(x)\) to be not the precondition \(G\) (as in classical static first order logic) but the resetting action \(R\). In this way we do not lose information since \(G\) can always be obtained from \(R\). Moreover, the range of the relation \(R\) consists of assignments \(\beta\) that differ from assignments in the precondition \(G\) at most with respect to the value of \(x\) and that are also in \(F\) (i.e., \(\beta(x)\) is in the interpretation of \(P\)). The \(x\) values stored in the range of the binary relation \(R\) are precisely the \(x\) values that satisfy \(P\), i.e., the values we were looking for.
More generally, we take as \(\textsf{DPL}\)-meanings binary relations between assignments. Such relations can be seen as (modeling) resetting actions. This is an instance of an admittedly simplistic but well-known and useful way of modeling actions: an action is viewed as a relation between the states of the world before the action and the corresponding states after the action.
Here is the full definition. Assume a non-empty domain \(D\), a set of variables \(\textsf{VAR}\) and a model \(\mathcal{M}=\langle D, I\rangle\) of signature \(\Sigma\). Atomic conditions \(\pi\) are of the form \(P(x_0 , \ldots ,x_{n-1})\), where \(P\in \Sigma\) is of arity \(n\). Atomic resets \(\varepsilon\) are of the form \(\exists v\), where \(v\) is a variable. The language of predicate logic for \(\Sigma\) is given below (\(\cdot\) is conjunction and \({\sim}\) is negation):
\[\phi ::= \bot \mid \top \mid \pi \mid \varepsilon \mid \phi \cdot \phi \mid {\sim}(\phi).\]
Assignments are elements \(\alpha , \beta ,\ldots\) of \(\textsf{ASSIGN} := D^{\textsf{VAR}}\). We define the dynamic/relational semantics for this language as follows:
Note that conjunction \(\cdot\) is interpreted as relation composition, and negation \({\sim}\) is basically interpreted as complementation with respect to the domain of the relation denoted by the negated formula.
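The relational semantics just described can be sketched concretely: meanings are binary relations on assignments, \(\cdot\) is relation composition, and \({\sim}\phi\) is the test that keeps exactly those assignments outside the domain of the relation denoted by \(\phi\). The two-variable language and three-element domain below are our own toy model:

```python
from itertools import product

VARS = ("x", "y")
DOM = (0, 1, 2)
ASSIGN = [tuple(zip(VARS, vals)) for vals in product(DOM, repeat=len(VARS))]

def diag(pred):
    # A test: the subset of the diagonal picked out by pred.
    return {(a, a) for a in ASSIGN if pred(dict(a))}

TOP = diag(lambda g: True)   # the meaning of top: the full diagonal
BOT = set()                  # the meaning of bottom: the empty relation

def exists(var):
    # Atomic reset: relate alpha to every beta differing at most on var.
    return {(a, b) for a in ASSIGN for b in ASSIGN
            if all(dict(a)[v] == dict(b)[v] for v in VARS if v != var)}

def compose(r1, r2):
    # Conjunction as relation composition.
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

def neg(r):
    # Negation: the diagonal restricted to assignments outside dom(r).
    dom = {a for (a, _) in r}
    return {(a, a) for a in ASSIGN if a not in dom}

# [exists x . x = 1]: reset x, then test; every input assignment survives.
r = compose(exists("x"), diag(lambda g: g["x"] == 1))
assert {a for (a, _) in r} == set(ASSIGN)   # domain: all nine assignments
assert neg(BOT) == TOP                      # ~bottom behaves like top
```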
Truth is defined in terms of relational meanings; we basically project the binary relations between assignments onto their first coordinate:
\[\alpha \vDash \phi := \exists \beta\ \alpha[\phi]\beta.\]
We can define implication \(\phi \rightarrow \psi\) as \({\sim}(\phi \cdot {\sim}\psi)\). Applying the truth definition to this gives:
\(\alpha \vDash \phi \rightarrow \psi \textrm{ iff } \forall \beta(\alpha[\phi]\beta \Rightarrow \beta \vDash \psi)\), i.e., any assignment \(\beta\) that results from updating \(\alpha\) with the antecedent \(\phi\) satisfies the consequent \(\psi\).
Relational meanings also yield the following beautiful definition of dynamic entailment:
\[\phi \vDash \psi := \forall \alpha , \beta(\alpha[\phi]\beta \Rightarrow \exists \gamma\ \beta[\psi]\gamma).\]
This definition was first introduced by Hans Kamp in his pioneering paper Kamp 1981. Informally, it says that any assignment \(\beta\) that has incorporated the update contributed by \(\phi\) is guaranteed to support/satisfy \(\psi\).
Note that \({\sim}\phi\) is equivalent to \((\phi \rightarrow \bot)\), and that \((\phi \rightarrow \psi)\) is true iff \(\phi \vDash \psi\). Equally importantly, we can define \(\forall x (\phi)\) as \((\exists x \rightarrow \phi)\).
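A hallmark of this entailment notion is that \(\exists x \cdot P(x) \vDash P(x)\): the reset performed by the premise is still in force when the conclusion is evaluated. A self-contained check on a one-variable toy model (the domain and the extension of \(P\) are invented for illustration):

```python
# Assignments for a single variable x over a three-element domain.
DOM = (0, 1, 2)
P = {1, 2}                              # invented extension of P
ASSIGN = [(("x", v),) for v in DOM]

EXISTS_X = {(a, b) for a in ASSIGN for b in ASSIGN}   # random reset of x
TEST_P = {(a, a) for a in ASSIGN if dict(a)["x"] in P}

def compose(r1, r2):
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

def entails(r1, r2):
    # phi |= psi iff every output of phi is an input on which psi succeeds.
    dom2 = {a for (a, _) in r2}
    return all(b in dom2 for (_, b) in r1)

assert entails(compose(EXISTS_X, TEST_P), TEST_P)   # exists x . P(x) |= P(x)
assert not entails(EXISTS_X, TEST_P)                # but exists x alone does not
```

The second assertion shows why the test matters: a bare reset can output an assignment whose \(x\) value falls outside \(P\), and such an output fails the conclusion.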
A possible alternative notation for \(\exists v\) would be [\(v := ?\)] (random reset). This emphasizes the connection with random assignment in programming languages.
The interpretations of predicate symbols are conditions. They are subsets of the diagonal \(\{\langle \alpha , \alpha \rangle \mid \alpha \in \textsf{ASSIGN}\}\) (which is the meaning of \(\top\)). Subsets of the diagonal are tests: they modify nothing and simply pass on what is OK (satisfies the condition) and throw away what is not. The mapping \(\textsf{diag}\) that sends a set \(F\) of assignments to a condition \(\{\langle \alpha , \alpha \rangle \mid \alpha \in F\}\) is the link between the classical static and the dynamic world. For example, the relational composition of \(\textsf{diag}(F)\) and \(\textsf{diag}(G)\) is \(\textsf{diag}(F\cap G)\).
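The composition law for tests stated above, that composing \(\textsf{diag}(F)\) with \(\textsf{diag}(G)\) yields \(\textsf{diag}(F \cap G)\), can be checked mechanically on a toy set of stand-in assignments:

```python
ASSIGN = [("x", v) for v in range(4)]   # hashable stand-ins for assignments

def diag(F):
    # Send a set of assignments to the corresponding test relation.
    return {(a, a) for a in F}

def compose(r1, r2):
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

F = set(ASSIGN[:3])
G = set(ASSIGN[1:])
assert compose(diag(F), diag(G)) == diag(F & G)   # tests compose by intersection
```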
Classical first-order logic (FOL) can be interpreted in \(\textsf{DPL}\) as follows. We assume that the FOL language has the following connectives and quantifiers: \(\top , \bot , \wedge , \rightarrow , \exists x\). We translate as follows:
We get that \([\phi^*]\) is the diagonal of the classical interpretation of \(\phi\). Our translation is compositional. It shows that FOL can be taken as a fragment of \(\textsf{DPL}\).
It is, conversely, possible to translate any \(\textsf{DPL}\)-formula \(\phi\) to a predicate logical formula \(\phi°\), such that the domain of \([\phi]\) is the classical interpretation of \(\phi°\). One of the ways to define this translation is by means of a precondition calculus, with Floyd-Hoare rules (van Eijck and de Vries 1992). The following is a variation on this. Take the language of standard predicate logic, with diamond modalities \(\langle \psi \rangle \phi\) added, where \(\psi\) ranges over DPL formulas and \(\alpha \vDash \langle \psi \rangle \phi\) iff there is an assignment \(\beta\) with \(\alpha[\psi]\beta\) and \(\beta \vDash \phi\). Then the following equivalences show that this extension does not increase expressive power.
An example of the merits of dynamic predicate logic is that it allows for a straightforward compositional analysis of donkey sentences (Geach 1962; see the entry on anaphora).
There is obviously a dependency between the pronouns \(he\) and \(it\) and the indefinites a farmer and a donkey, respectively. In a nutshell, the problem for (5) in a classical analysis is that such an analysis gives us two choices, which taken together do not cover the possible meanings of (5). If we treat the indefinites as referring to a particular farmer and a particular donkey and the pronouns as simply picking up those same entities, then we get a possible yet not very salient reading for (5). The most prominent reading describes a co-variation between the owning relation and the beating relation: any farmer-donkey pair that stands in the own relation also stands in the beat relation. Clearly, we will need to interpret the indefinites as quantifiers. Yet if we do so, they won’t be able to bind the variables in the consequent of the conditional since a compositional analysis will place the variables contributed by the pronouns outside the classical scope of the existential quantifiers in the antecedent of the conditional. That is, (6) does not yield the correct truth-conditions for (5).
The dynamic version of (6) is (7), which yields the correct truth conditions: any random reset of \(x\) and \(y\) such that \(x\) is a farmer and \(y\) is a donkey owned by \(x\) is also such that \(x\) beats \(y\).
Interestingly, the translation ( )° of (7) into predicate logic is not (6) but (8). So the problem is not that predicate logic cannot express the truth-conditions of donkey conditionals but that sentences like (8) are unlikely to be the end product of a compositional interpretation process (but see Barker and Shan 2008).
This is how (8) is derived from (7):
\[\begin{array}{l}
(\langle(\exists x \cdot Fx \cdot \exists y \cdot Dy \cdot Hxy) \rightarrow Bxy \rangle \top)° \\
= (\langle{\sim}((\exists x \cdot Fx \cdot \exists y \cdot Dy \cdot Hxy) \cdot {\sim} Bxy) \rangle \top)° \\
= \neg(\langle(\exists x \cdot Fx \cdot \exists y \cdot Dy \cdot Hxy) \cdot {\sim} Bxy \rangle \top)° \\
= \ldots \\
= \neg \exists x (Fx \wedge \exists y (Dy \wedge Hxy \wedge \neg Bxy)).
\end{array}\]
The successful application of dynamic predicate logic to the interaction of quantification and anaphora in natural languages hinges on the fact that in DPL, existential quantification is dynamic whereas universal quantification is not. What would happen if universal quantification were dynamic too? Note first of all that it makes no sense to define a universal quantification action \(\forall x\) in parallel to the random reset action \(\exists x\). This is because universal quantification only makes sense on a given domain (the restrictor) and with respect to some given property (the scope). Second, if we give \(\forall x(\phi)(\psi)\) a dynamic interpretation, it predicts that universal quantifiers can stand in anaphoric relations to singular pronouns across clausal boundaries, just as existential quantifiers can. For cases like (9), this is clearly undesirable.
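To make the donkey computation concrete, the following sketch evaluates the dynamic conditional (7) in two invented miniature models, reimplementing the relational semantics so the snippet is self-contained. When every owned donkey is beaten, the conditional is true on every input assignment; when one owned donkey goes unbeaten, it is true on none:

```python
from itertools import product

FARMER = {"f1", "f2"}
DONKEY = {"d1", "d2"}
DOM = tuple(sorted(FARMER | DONKEY))
VARS = ("x", "y")
ASSIGN = [tuple(zip(VARS, vals)) for vals in product(DOM, repeat=2)]

def diag(pred):
    return {(a, a) for a in ASSIGN if pred(dict(a))}

def exists(var):
    return {(a, b) for a in ASSIGN for b in ASSIGN
            if all(dict(a)[v] == dict(b)[v] for v in VARS if v != var)}

def compose(*rels):
    out = rels[0]
    for r in rels[1:]:
        out = {(a, c) for (a, b1) in out for (b2, c) in r if b1 == b2}
    return out

def neg(r):
    dom = {a for (a, _) in r}
    return {(a, a) for a in ASSIGN if a not in dom}

def implies(r1, r2):          # phi -> psi := ~(phi . ~psi)
    return neg(compose(r1, neg(r2)))

def donkey(owns, beats):
    # exists x . Fx . exists y . Dy . Hxy -> Bxy
    antecedent = compose(exists("x"), diag(lambda g: g["x"] in FARMER),
                         exists("y"), diag(lambda g: g["y"] in DONKEY),
                         diag(lambda g: (g["x"], g["y"]) in owns))
    return implies(antecedent, diag(lambda g: (g["x"], g["y"]) in beats))

owns = {("f1", "d1"), ("f2", "d2")}
assert len(donkey(owns, beats=owns)) == len(ASSIGN)      # true on every input
assert len(donkey(owns, beats={("f1", "d1")})) == 0      # f2 spares d2: false
```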
However, as soon as one looks at plural anaphora it becomes apparent that the static nature of universal quantification (and, in fact, that of other non-indefinite generalized quantifiers) should not be taken for granted. For example, (10) allows a reading in which they is anaphorically linked to every boy.
On the assumption that examples like (10) should receive a dynamic treatment (see the earlier remark on alternative explanations and Nouwen, forthcoming, for discussion), the conclusion can only be that universal quantifiers should not be given a static interpretation. The next question is then what kind of interpretation is appropriate, and how this interpretation can distinguish the infelicitous case of anaphora in (9) from the case in (10). One option would be to distinguish between the values assigned to the variable that is bound by the quantifier in its scope and the value assigned to that variable outside the scope of the quantifier. (See, for instance, Kamp and Reyle 1993 for such a strategy and Nouwen 2007 for discussion.) In order to account for (10), one would have the variable occurrences bound by the quantifier in the first sentence of (10) range over individual boys, while that variable gets assigned the plurality of all boys outside the quantifier's scope (i.e., in the second sentence). As van den Berg (1996) was the first to show, however, such a solution only gets us halfway. In discourse, pronouns do not just have access to pluralities associated with quantifiers, but also to the relations such quantifiers are engaged in. For instance, in the second sentence of (11) the pronoun \(it\) covaries with the quantification over the boys in the subject in such a way that the second sentence is understood to mean that each boy submitted the paper that \(he\) wrote (cf. van den Berg 1996; Krifka 1996; Nouwen 2003; Brasoveanu 2007, 2008).
The leading idea in dynamic treatments of generalized quantification and plural anaphora is to represent plural values not by assigning pluralities to variables, but rather by adopting a notion of context that allows for pluralities (e.g., sets) of assignment functions. Say that the first sentence in (11) is translated into dynamic predicate logic with dynamic quantifiers as follows: \(\forall x(\textrm{boy}(x))(\exists y\cdot\textrm{essay}(y)\cdot \textrm{wrote}(x,y))\). The interpretation of such formulas requires collecting assignment functions in which the value of \(x\) is a boy and the value of \(y\) is an essay written by this boy. The universal quantifier requires such collections to include all possible values for the predicate boy. In the subsequent discourse, we now have access to the set of all \(x\) values, i.e., the set of all boys, the set of all \(y\) values, i.e., the set of essays written by the boys, as well as the individual boy-essay pairs: each atomic assignment \(f\) in the set of contextual assignments following the first sentence of (11) is such that \(f(y)\) is an essay written by boy \(f(x)\). All that is now needed to account for the case of anaphora in (11) is the assumption that the universal quantification there involves universal quantification over assignment functions, rather than just quantification over values. See van den Berg (1996), Nouwen (2007, forthcoming), and Brasoveanu (2007, 2008, 2013) for various ways of implementing this idea.
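The idea of a plural information state that stores quantificational dependencies can be sketched in a few lines. The toy model and the encoding of a state as a list of assignments are illustrative assumptions:

```python
# A hedged sketch of plural information states: a context is a set of
# assignments (here, a list of dicts), so dependencies such as
# boy -> essay survive into the subsequent discourse.

BOYS = {'al', 'bo'}
ESSAYS = {'essay1', 'essay2'}
WROTE = {('al', 'essay1'), ('bo', 'essay2')}
SUBMITTED = {('al', 'essay1'), ('bo', 'essay2')}

def forall(var, restrictor, scope, state):
    """Dynamic universal: for each value of var in the restrictor,
    collect the scope outputs; the result is one plural state that
    records the var-value dependencies."""
    out = []
    for d in restrictor:
        for f in state:
            out.extend(scope(dict(f, **{var: d})))
    return out

def exists(var, domain, cond):
    """Dynamic existential inside a scope: extend with every witness."""
    return lambda f: [dict(f, **{var: d}) for d in domain
                      if cond(dict(f, **{var: d}))]

# 'Every boy wrote an essay.'
state = forall('x', BOYS,
               exists('y', ESSAYS, lambda f: (f['x'], f['y']) in WROTE),
               [{}])

# The plural state keeps each boy paired with *his* essay ...
print(sorted((f['x'], f['y']) for f in state))
# ... so 'They submitted it' can be read distributively over the pairs:
print(all((f['x'], f['y']) in SUBMITTED for f in state))
```

The second print evaluates the follow-up sentence against the stored pairs, which is (in miniature) how the covarying reading of (11) is obtained.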
The upshot is that, given a suitably structured notion of context, quantifiers can be given dynamic interpretations quite generally. An important consequence is that this kind of analysis extends to non-nominal quantifiers (Brasoveanu 2007). Cases like (11) could be described as cases of quantificational subordination, and the structured-context approach can be seen as designed to offer a window into the mechanism behind subordination. Cases of modal subordination (Roberts 1987, 1989), like the famous (12), can receive a parallel treatment.
The modal "might" introduces a quantifier over possible worlds that takes scope over the indefinite "a wolf", in the same way that "every boy" takes scope over "an essay" in (11) above. The set of assignment functions that is the output of the update contributed by the first sentence in (12) will therefore store a set of possible worlds contributed by "might" that are epistemically possible relative to the actual world, and a set of wolves that come in in these epistemically accessible worlds. The second sentence in (12) can then further elaborate on the dependency between worlds and wolves, requiring at least some of the epistemic possibilities to be such that the corresponding wolf not only comes in but also eats you.
Although anaphora and presuppositions (see below) are the central linguistic phenomena that may be thought to require a dynamic semantic analysis, in principle any aspect of the context could be the target of a phenomenon that warrants a dynamic analysis of interpretation. Barker's (2002) treatment of the information provided by vague statements is illustrative. Barker assumes that contexts contain precise standards for vague adjectives like "tall". A sentence like (19) can then be used in two distinct ways.

(19) John is tall.

If the information state contains precise (enough) information about what counts as "tall", then an utterance of (19) may be used to provide information about John's height. If, however, the hearer has no idea about the appropriate precisification of an expression like "tall" (say, s/he is an alien or a foreigner), but s/he does have information about John's height, then (19) can be used to provide information about the standard.
Context plays an important role in presupposition. A sentence like (13) presupposes that John is late. But put this sentence in a context providing this very information, as in (14), and the presupposition disappears. That John is late is asserted in (14), not presupposed.
Stalnaker 1973 takes presupposition to be based on presumed common knowledge. Uttering a sentence like (13) takes for granted that it is common knowledge that John is late. In this sense, (13) requires the context of utterance to be such that this common knowledge is in place. In contrast, (14) lacks such a requirement simply because the first conjunct in (14) asserts what the second conjunct takes for granted. The crucial assumption made by Stalnaker is that interpretation is incremental in the following sense: for a sentence of the form [\(S_1\) and \(S_2\)], the interpretation of \(S_2\) occurs in a context which is already updated with \(S_1\). Schematically:
Stalnaker’s interpretation of the scheme in (15) is pragmatic: when we encounter a series of clauses in discourse, we interpret these clauses in light of a context that is already informed by the interpretation of previous clauses. This idea of incremental interpretation is simple yet powerful, and it makes perfect sense for complex discourses with conjunctive interpretations (for instance, coordinations with "and" and simple sequences of declarative sentences). Since the conjuncts in a conjunction have assertoric force, they can be used to update the context in order to create a new local context. The problem, though, is that presuppositions do not just disappear in conjunctive environments. Just like (14), (16) also lacks the requirement that it should be common knowledge that John is late. But here the first disjunct does not have assertoric force (see, for instance, Schlenker 2009 for discussion). It is not obvious what kind of pragmatic rule could account for the lack of a presupposition in (16).
Examples like (16) call into question the value of an incremental interpretation schema like (15). On top of that, (15) is rather presumptuous in its assumptions about how interpretation proceeds. Asserting a clause with propositional content \(p\) does not automatically make it common knowledge that \(p\). Rather, such an assertion should be regarded as a proposal to make \(p\) common knowledge. Whether or not \(p\) becomes common ground depends on the willingness of the other interlocutors to accept the proposition (for instance, by not objecting to the assertion). In other words, (15) seems ill-suited for capturing the pragmatics of (the dynamics of) information flow.
A possible way out is to regard (15) not as a pragmatic rule, but rather as a semantic rule, couched in a dynamic notion of interpretation. This was most prominently proposed in Heim 1983b, following Karttunen 1973. Karttunen distinguishes global contexts, which are contexts relative to which the current sentence is evaluated, from local contexts, which are contexts relative to which the current clause (or potentially some sub-clausal entity) is interpreted. The idea now is that a rule like (15) can express the semantics of "and". In (15), \(C\) is the global context. A crucial part of the semantics of conjunction is that the local context for \(S_2\) is the update of the global context with \(S_1\). Thus, there is no presupposition in (14) simply because of the dynamic semantics of "and". All we need to account for the lack of presupposition in (16) is to come up with a semantics for disjunction in which the local context for the second disjunct has already been updated with the negation of the first disjunct; see Krahmer and Muskens 1996 for such an account, which also captures interactions between (double) negation and anaphora.
To make things more concrete, let us assume that contexts are sets of possible worlds and that an update \(C[S]\) of \(C\) with a simple clause \(S\) is \(C\cap p\), where \(p\) is the propositional content of \(S\): updating \(C\) with a clause outputs the \(C\) worlds in which the clause is true. The rules in (17) show a Heimian fragment of a dynamic interpretation of the main propositional operators in English.
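A Heimian fragment of this kind is easy to prototype. The following sketch treats a context as a set of worlds and makes updates partial, so that presupposition failure shows up as undefinedness; the toy worlds, predicate names, and the particular disjunction rule (local context = update with the negated first disjunct) are illustrative assumptions:

```python
# A minimal sketch of Heimian context change potentials: contexts are
# sets of worlds; a presupposing clause is defined only on contexts
# that entail its presupposition (None marks presupposition failure).

def clause(content, presup=lambda w: True):
    def update(C):
        if not all(presup(w) for w in C):
            return None                      # presupposition failure
        return {w for w in C if content(w)}
    return update

def neg(S):
    return lambda C: None if S(C) is None else C - S(C)

def conj(S1, S2):
    """[S1 and S2]: S2 is interpreted in the local context C[S1]."""
    def update(C):
        C1 = S1(C)
        return None if C1 is None else S2(C1)
    return update

def disj(S1, S2):
    """[S1 or S2]: the local context for S2 is C[not S1]."""
    def update(C):
        C1 = S1(C)
        if C1 is None:
            return None
        C2 = S2(C - C1)
        return None if C2 is None else C1 | C2
    return update

# Worlds record whether John is late ('l') and whether he is blamed ('b').
W = {('l', 'b'), ('l', ''), ('', 'b'), ('', '')}
late   = clause(lambda w: w[0] == 'l')
blamed = clause(lambda w: w[1] == 'b', presup=lambda w: w[0] == 'l')

print(blamed(W))                       # None: the presupposition fails globally
# 'John is late and he is blamed': the local context entails the
# presupposition, so it is filtered.
print(conj(late, blamed)(W) == {('l', 'b')})
# 'Either John isn't late, or he is blamed': filtered again, because
# the negated first disjunct feeds the second.
print(disj(neg(late), blamed)(W) is not None)
```

The conjunction and disjunction cases reproduce, in miniature, the filtering behavior of (14) and (16) respectively.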
Some question the explanatory value of such a dynamic interpretation, in the sense that the framework fails to account for why there appear to be no natural language expressions that encode a minimal variation of (17), where the local context of the second disjunct \(S_2\) is \(C[S_1]\) instead of \(C[\textrm{not } S_1]\), or where the local context of \(S_1\) is based on an update with \(S_2\), or where there are no local contexts at all as in (18) (see, for instance, Soames 1989).
In light of such criticisms, there has been a recent resurgence of static approaches to presupposition projection, such as the pragmatic approaches of Schlenker (2008, 2009) and Chemla (2008, Other Internet Resources) and the semantic (trivalent) approaches of George (2008) and Fox (2008). As Rothschild points out, though, there is a route to making a semantics along the lines of (17) explanatory. To do so, one has to show that permissible dynamic interpretations of connectives share certain properties. As Rothschild (2011) shows, an explanatory and empirically adequate dynamic treatment of presupposition is possible if we assume that context change potentials adhere to certain principles of definedness. Let us assume that \(C[S]\) (for a simple clause \(S\)) is defined if and only if any presupposition of \(S\) is true in all the worlds in \(C\). The rules in (17) then determine the definedness conditions for complex statements. For instance, according to (17), [not \(S\)] is only undefined in \(C\) if \(S\) is undefined in \(C\). Rothschild’s insight is that we can constrain dynamic interpretation by constraining the resulting definedness conditions.
Epistemic logic, the logic of knowledge, is a branch of modal logic where the modality “\(i\) knows that” is studied (compare the entries on epistemic logic and the logic of belief revision). The dynamic turn in epistemic logic, which took place around 2000, introduced a focus on change of state, but now with states taken to be representations of the knowledge of a set of agents.
If we fix a set of basic propositions \(P\) and a set of agents \(I\), then a knowledge state for \(P\) and \(I\) consists of a set \(W\) of possible worlds, together with a valuation function \(V\) that assigns a subset of \(P\) to each \(w\) in \(W\) (if \(w \in W\), then \(V(w)\) lists the basic propositions that are true in \(w\)), and, for each agent \(i \in I\), a relation \(R_i\) stating the epistemic similarities for \(i\) (if \(w R_i w'\), this means that agent \(i\) cannot distinguish world \(w\) from world \(w'\)). Epistemic models \(M = (W, V, \{R_i \mid i \in I\})\) are known as multimodal Kripke models. Pointed epistemic models are epistemic models with a designated world \(w_0\) representing the actual world.
What happens to a given epistemic state \((M, w_0) = ((W, V, \{ R_i \mid i \in I\}), w_0)\) if a public announcement \(\phi\) is made? Intuitively, the world set \(W\) of \(M\) is restricted to those worlds \(w \in W\) where \(\phi\) holds, and the valuation function \(V\) and epistemic relations \(R_i\) are restricted accordingly. Call the new model \(M\mid\phi\). In case \(\phi\) is true in \(w_0\), the meaning of the public announcement \(\phi\) can be viewed as a map from \((M, w_0)\) to \((M\mid\phi , w_0)\). In case \(\phi\) is false in \(w_0\), no update is possible.
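The restriction operation \(M\mid\phi\) can be sketched directly. The two-world model, agent name, and encoding of formulas as Python predicates below are illustrative assumptions:

```python
# A small sketch of a public announcement update on a pointed
# epistemic model (W, V, R, w0): restrict the worlds to those where
# the announced formula holds, and restrict V and the accessibility
# relations accordingly.

def announce(model, phi):
    """Return M|phi (with the same actual world), or None if phi is
    false at the actual world, in which case no update is possible."""
    W, V, R, w0 = model
    if not phi(W, V, R, w0):
        return None                      # public announcements must be true
    W2 = {w for w in W if phi(W, V, R, w)}
    V2 = {w: V[w] for w in W2}
    R2 = {i: {(u, v) for (u, v) in R[i] if u in W2 and v in W2}
          for i in R}
    return (W2, V2, R2, w0)

def knows(i):
    """'Agent i knows p': p holds in every world i cannot distinguish."""
    return lambda W, V, R, w: all('p' in V[v] for (u, v) in R[i] if u == w)

# Two worlds; agent a cannot tell them apart, so a does not know p.
W = {1, 2}
V = {1: {'p'}, 2: set()}
R = {'a': {(1, 1), (1, 2), (2, 1), (2, 2)}}
M = (W, V, R, 1)

print(knows('a')(*M))                       # False before the announcement
M2 = announce(M, lambda W, V, R, w: 'p' in V[w])
print(knows('a')(*M2))                      # True after 'p' is announced
```

Since every agent's relation is restricted at once, iterating `knows` over agents would show the announcement creating common knowledge of `p`, which is the point made in the DEL literature below.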
Veltman’s update logic can be accommodated in public announcement logic (compare the entry on common knowledge) by allowing public announcements of the form \(\Diamond \phi\), where the modality is read as reachability under common knowledge. If an \(S5\) knowledge state for a set of agents (compare the entry on epistemic logic) is updated with the public announcement \(\Diamond \phi\), then in case \(\phi\) is true somewhere in the model, the update changes nothing (for in this case \(M\mid\Diamond \phi\) equals \(M\)), and otherwise the update yields inconsistency (since public announcements are assumed to be true). This is in accordance with the update logic definition.
The logical toolbox for epistemic logic with communicative updates is called dynamic epistemic logic, or DEL. DEL started out from the analysis of the epistemic and doxastic effects of public announcements (Plaza 1989; Gerbrandy 1999). Public announcement is interesting because it creates common knowledge. There is a variety of other kinds of announcements (private announcements, group announcements, secret sharing, lies, and so on) that also have well-defined epistemic effects. A general framework for a wide class of update actions was proposed in Baltag et al. 1999 and Baltag and Moss 2004. A further generalization to a complete logic of communication and change, with enriched actions that allow changing the facts of the world, is provided in Benthem et al. 2006. A textbook treatment of dynamic epistemic logic is given in Ditmarsch et al. 2006.
Within an epistemic logic setting, one may represent the communicative situation of an utterance with presuppositions as follows. First, we need to represent what a speaker assumes about what her audience knows or believes in a multi-agent belief (or knowledge) state; then we need to model the effect of the communicative action on the belief state. A simple way to handle presupposing utterances in dynamic epistemic logic is by modeling a presupposition \(P\) as a public announcement “it is common knowledge that \(P\)”. In cases where it is indeed common knowledge that \(P\), an update with this information changes nothing. In cases where \(P\) is not common knowledge, however, the utterance is false, and public announcements of falsehoods yield an inconsistent knowledge state.
Dynamic semantics is particularly suitable for describing how different types of linguistic material affect different aspects of the information state. In particular, dynamic semantics allows one to efficiently model the difference between at-issue content, i.e., the content that is asserted by the utterance of a declarative sentence, and non-at-issue content, content that plays some secondary role. For instance, the at-issue content of (19) is that John’s neighbor was arrested yesterday: it is the message the speaker intends to assert. The appositive "who I have never met" is not at issue. One way to see this is that an interlocutor can only respond to (19) with "No! That’s not true!" if s/he intends to challenge the claim that the neighbor was arrested, not if s/he merely wishes to express disbelief in the speaker’s claim of never having met the neighbor.
Dynamic semantics is a suitable framework for analyzing what goes on when such sentences are interpreted, since it naturally allows the modeling of separate streams of information. For instance, AnderBois et al. 2015 provide an account of sentences like (19) where the matrix sentence updates a local set of possible worlds. The updated set can be seen as a potential candidate for updating the common ground with. In contrast, the appositive directly updates the common ground. Rather than a proposed common ground update, it can be seen as an imposed update (see Nouwen 2007 for an alternative dynamic logic). The ideas of AnderBois et al. 2015 are partly inspired by similar ideas that were successfully applied in the realm of evidentials; see in particular Murray 2014.
Compositionality has always been an important concern in the use of logical systems in natural language semantics (see the entry on compositionality). Through the use of higher order logics (see the entries on second-order and higher-order logics and Church’s type theory), a thoroughly compositional account of, e.g., the quantificational system of natural language can be achieved, as is demonstrated in classical Montague grammar (Montague 1974a,b, 1973; compare the entry on logical form). We will review how the dynamic approach can be extended to higher order systems. The link between dynamic semantics and type theory is more like a liaison than a stable marriage: there is no intrinsic need for the connection. The connection is treated here to explain the historical influence of Montague grammar on dynamic semantics.
Most proposals for dynamic versions of Montague grammar develop what are in fact higher order versions of dynamic predicate logic (DPL). This holds for Groenendijk and Stokhof 1990; Chierchia 1992, 1995; Muskens 1994, 1995, 1996; Eijck 1997; Eijck and Kamp 1997; Kohlhase et al. 1996; and Kuschert 2000. These systems all inherit a feature (or bug) from the DPL approach: they make re-assignment destructive. DRT does not suffer from this problem: the discourse representation construction algorithms of Kamp 1981 and Kamp and Reyle 1993 are stated in terms of functions with finite domains, and carefully talk about “taking a fresh discourse referent” to extend the domain of a verifying function, for each new noun phrase to be processed.
In extensional Montague grammar “a man” translates as:
\[\lambda P\exists x(\textrm{man } x \wedge Px).\]

Here \(P\), of type \(e \rightarrow t\), is the variable for the VP slot: it is assumed that VPs denote sets of entities.
In the Dynamic Montague Grammar (DMG) of Groenendijk and Stokhof 1990, the translation of an indefinite NP introduces an anaphoric index. The translation of “a man” is
\[\lambda P\lambda a\lambda a' \cdot \exists x(\textrm{man } x \wedge Pu_i ((u_i \mid x)a) a').\]

Instead of the basic types \(e\) and \(t\) of classical extensional Montague grammar, DMG has basic types \(e\), \(t\) and \(m\) (\(m\) for marker). States pick out entities for markers, so they can be viewed as objects of type \(m \rightarrow e\). Abbreviating \(m \rightarrow e\) as \(s\) (for “state”), we call objects of type \(s \rightarrow s \rightarrow t\) state transitions. The variable \(P\) in the DMG translation of “a man” has type \(m \rightarrow s \rightarrow s \rightarrow t\), so VP meanings have been lifted from type \(e \rightarrow t\) to this type. Note that \(\rightarrow\) associates to the right, so \(m \rightarrow s \rightarrow s \rightarrow t\) is shorthand for \(m \rightarrow(s \rightarrow(s \rightarrow t))\). Indeed, DMG can be viewed as the result of systematic replacement of entities by markers and of truth values by state transitions. A VP meaning for “is happy” is a function that maps a marker to a state transition. The state transition for marker \(u_i\) will check whether the input state maps \(u_i\) to a happy entity, and whether the output context equals the input context. The variables \(a\), \(a'\) range over states, and the expression \((u_i \mid x)a\) denotes the result of resetting the value of \(u_i\) in \(a\) to \(x\), so the old value of \(u_i\) gets destroyed (destructive assignment). The anaphoric index \(i\) on reference marker \(u_i\) is introduced by the translation. In fact, the translation starts from the indexed indefinite noun phrase “a man\(_i\)”. The connection between Montagovian compositionality and dynamic semantics, as well as the basic Montagovian and dynamic ingredients, are much more transparent and streamlined in the typed Logic of Change proposed in Muskens 1991, 1995, 1996. Because of this, Muskens’s Compositional DRT is probably the de facto standard and starting point for current research in compositional dynamic semantics.
An alternative treatment is given in Incremental Typed Logic (ITL), an extension to typed logic of a “stack semantics” that is based on variable-free indexing and that avoids the destructive assignment problem. The basic idea of the stack semantics for DPL, developed in Vermeulen 1993, is to replace the destructive assignment of ordinary DPL, which throws away old values when resetting, by a stack-valued one that allows old values to be reused. Stack-valued assignments assign to each variable a stack of values, the top of the stack being the current value. Existential quantification pushes a new value onto the stack, but there is also the possibility of popping the stack to reuse a previously assigned value. Eijck’s 2000 ITL is in fact a typed version of stack semantics, using a single stack.
Assuming a domain of entities, contexts are finite lists of entities. If \(c\) is a context of length \(n\), then we refer to its elements as \(c[0], \ldots, c[n-1]\), and to its length as \(\lvert c\rvert\). We will refer to the type of contexts of length \(i\) as \([e]^i\). If \(c\) is a context in \([e]^i\), then objects of type \(\{0, \ldots ,i-1\}\) can serve as indices into \(c\). If \(c \in[e]^i\) and \(j \in \{0, \ldots ,i-1\}\), then \(c[j]\) is the object of type \(e\) that occurs at position \(j\) in the context. A key operation on contexts is extension with an element. If \(c :: [e]^i\) and \(x :: e\) (\(c\) is a context of length \(i\) and \(x\) is an entity), then \(c\mcaret x\) is the context of length \(i+1\) that has elements \(c[0], \ldots ,c[i-1], x\). Thus \(\mcaret\) is an operator of type \([e]^i \rightarrow e \rightarrow[e]^{i+1}\). Also note that types like \([e]^i\) are in fact polymorphic types, with \(i\) acting as a type variable. See Milner 1978.
In ITL there is no destructive assignment, and indefinite noun phrases do not carry indices in the syntax. The ITL translation of “a man” picks up an index from the context, as follows:
\[\lambda P\lambda c\lambda c' \cdot \exists x (\textrm{man } x \wedge P\lvert c\rvert (c\mcaret x)c').\]

Here \(P\) is a variable of type \(\{0, \ldots ,i\}\rightarrow[e]^{i+1} \rightarrow [e]^j \rightarrow t\), while \(c\) is a variable of type \([e]^i\) representing the input context of length \(i\), and \(c'\) is a variable of type \([e]^j\) representing the output context. Note that the type \(\{0, \ldots ,i\}\rightarrow [e]^{i+1} \rightarrow [e]^j \rightarrow t\) for \(P\) indicates that \(P\) first takes an index in the range \(\{0,\ldots ,i\}\), next a context fitting this range (a context of length \(i+1\)), next a context of a yet unknown length, and then gives a truth value. This is the type of unary predicates, lifted to the level of context changers, as follows: instead of using a variable ranging over objects of type \(e\), a lifted predicate uses a variable ranging over the size of an input context to form an expression that denotes a changer for that context.
The ITL translation of “a man” has type \[(\{0, \ldots ,i\} \rightarrow[e]^{i+1} \rightarrow [e]^j \rightarrow t) \rightarrow [e]^i \rightarrow [e]^j \rightarrow t.\] In \(P\lvert c\rvert (c\mcaret x)c'\), the \(P\) variable marks the slot for the VP interpretation; \(\lvert c\rvert\) gives the length of the input context to \(P\): it picks up the value \(i\), which is the position of the next available slot when the context is extended. This slot is filled by an object \(x\) denoting a man. Note that \((c\mcaret x)[\lvert c\rvert ] = (c\mcaret x)[i] = x\), so the index \(i\) serves to pick out that man from the context.
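The index-from-context mechanism can be illustrated with a tiny executable sketch. The entity names and the encoding of contexts as Python lists are illustrative assumptions:

```python
# An illustrative sketch of ITL-style contexts as finite lists of
# entities: extension c^x appends, and |c| is the index of the next
# free slot, which is how 'a man' picks up its index from context.

def extend(c, x):
    """c^x: the context of length len(c)+1 ending in x."""
    return c + [x]

def a_man(P, men):
    """lambda P lambda c lambda c'. Ex(man x & P |c| (c^x) c'):
    nondeterministically extend the input context with a man and
    hand the new index |c| to the VP meaning P."""
    def ccp(c):
        return [cp for x in men for cp in P(len(c))(extend(c, x))]
    return ccp

# VP 'is happy', lifted to a context changer: test the entity at the
# given index and pass the context through unchanged.
HAPPY = {'john'}
is_happy = lambda i: lambda c: [c] if c[i] in HAPPY else []

update = a_man(is_happy, men=['john', 'bill'])
print(update([]))          # [['john']]: only the happy man survives
print(update(['mary']))    # [['mary', 'john']]: old values are kept, not destroyed
```

The second call shows the non-destructive character of ITL: extending a context never overwrites an earlier position, in contrast with the DMG reset \((u_i \mid x)a\).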
To see that a dynamic higher order system is expressible in ITL, it is enough to show how to define the appropriate dynamic operations. Assume \(\phi\) and \(\psi\) have the type of context transitions, i.e., type \([e] \rightarrow[e] \rightarrow t\) (using \([e]\) for arbitrary contexts), and that \(c, c', c''\) have type \([e]\). Then we can define the dynamic existential quantifier, dynamic negation and dynamic composition as follows:
\[\begin{align*}\cal{E} & := \lambda cc'\cdot \exists x(c \mcaret x = c') \\ {\sim}\phi & := \lambda cc'\cdot (c = c' \wedge \neg \exists c'' \phi cc'') \\\phi ; \psi & := \lambda cc'\cdot \exists c''(\phi cc'' \wedge \psi c''c')\end{align*}\]

Dynamic implication \(\Rightarrow\) is defined in the usual way by means of \({\sim}(\phi; {\sim}\psi)\).
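These three operations can be rendered directly as relations on contexts. The finite toy domain and the bounded enumeration of contexts are illustrative assumptions needed to make the quantification over contexts executable:

```python
# A hedged Python rendering of the ITL dynamic operations: contexts
# are lists over a finite toy domain, and transitions are binary
# relations on contexts, encoded as two-argument boolean functions.

DOMAIN = ['a', 'b']

def all_contexts(max_len=3):
    """Enumerate contexts up to a bounded length (finite approximation
    of quantification over contexts)."""
    out, frontier = [[]], [[]]
    for _ in range(max_len):
        frontier = [c + [x] for c in frontier for x in DOMAIN]
        out += frontier
    return out

def E(c, cp):
    """Dynamic existential: cp extends c with exactly one entity."""
    return any(c + [x] == cp for x in DOMAIN)

def dneg(phi):
    """Dynamic negation: a test that phi has no successful output."""
    return lambda c, cp: c == cp and not any(phi(c, cpp)
                                             for cpp in all_contexts())

def comp(phi, psi):
    """Dynamic composition: relational composition of the transitions."""
    return lambda c, cp: any(phi(c, cpp) and psi(cpp, cp)
                             for cpp in all_contexts())

# E ; E relates a context to its two-step extensions:
print(comp(E, E)([], ['a', 'b']))   # True
# ~E is a test, and it fails everywhere here, since E always has an output:
print(dneg(E)([], []))              # False
```

Dynamic implication could then be added as `lambda phi, psi: dneg(comp(phi, dneg(psi)))`, mirroring the definition via \({\sim}(\phi; {\sim}\psi)\).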
ITL and Muskens-style Compositional DRT are not incompatible; see Bittner 2014 for an example. We will end this section by noting that the range of systems integrating Montagovian compositionality and dynamic semantics is far from being completely charted. A recent series of contributions is exploring new ways of combining and generalizing continuation-based and dynamic semantics; see de Groote 2006, Bumford and Barker 2013, Charlow 2014, Bumford 2015, and Martin 2016.
Hopefully, the above has given the reader a sense of dynamic semantics as a fruitful and flexible approach to meaning and information processing. Dynamic semantics comes with a set of flexible tools, and with a collection of “killer applications”, such as the compositional treatment of donkey sentences, the account of anaphoric linking, the account of presupposition projection, the account of epistemic updating, and fine-grained distinctions between different kinds of (non-at-issue) updates. Dynamic semantics is a very lively subfield of formal semantics, and the cross-linguistic range of phenomena for which dynamic approaches are being pursued is expanding at an increasing pace.
anaphora |assertion |common knowledge |compositionality |conditionals |contextualism, epistemic |discourse representation theory |externalism about the mind |implicature |indexicals |information: semantic conceptions of |logic: classical |logic: conditionals |logic: epistemic |logic: of belief revision |logic: second-order and higher-order |logical form |mental content: narrow |mind: computational theory of |model theory: first-order |pragmatics |reference |scientific realism |speech acts |Tarski, Alfred: truth definitions |type theory: Church’s type theory
The Stanford Encyclopedia of Philosophy iscopyright © 2023 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054