Consider the following sentence:
It has long been known that the sentence, (1), produces a paradox, the so-called liar’s paradox: it seems impossible consistently to maintain that (1) is true, and impossible consistently to maintain that (1) is not true: if (1) is true, then (1) says, truly, that (1) is not true, so that (1) is not true; on the other hand, if (1) is not true, then what (1) says is the case, i.e., (1) is true. (For details, see Section 1, below.) Given such a paradox, one might be sceptical of the notion of truth, or at least of the prospects of giving a scientifically respectable account of truth.
Alfred Tarski’s great accomplishment was to show how to give — contra this scepticism — a formal definition of truth for a wide class of formalized languages. Tarski did not, however, show how to give a definition of truth for languages (such as English) that contain their own truth predicates. He thought that this could not be done, precisely because of the liar’s paradox. More generally, Tarski reckoned that any language with its own truth predicate would be inconsistent, as long as it obeyed the rules of standard classical logic, and had the ability to refer to its own sentences. As we will see in our remarks on Theorem 2.1 in Section 2.3, Tarski was not quite right: there are consistent classical interpreted languages that refer to their own sentences and have their own truth predicates. (This point originates in Gupta 1982 and is strengthened in Gupta and Belnap 1993.)
Given the close connection between meaning and truth, it is widely held that any semantics for a language \(L\), i.e., any theory of meaning for \(L\), will be closely related to a theory of truth for \(L\): indeed, it is commonly held that something like a Tarskian theory of truth for \(L\) will be a central part of a semantics for \(L\). Thus, the impossibility of giving a Tarskian theory of truth for languages with their own truth predicates threatens the project of giving a semantics for languages with their own truth predicates.
We had to wait until the work of Kripke 1975 and of Martin & Woodruff 1975 for a systematic formal proposal of a semantics for languages with their own truth predicates. The basic thought is simple: take the offending sentences, such as (1), to be neither true nor false. Kripke, in particular, shows how to implement this thought for a wide variety of languages, in effect employing a semantics with three values, true, false and neither.[1] It is safe to say that Kripkean approaches have replaced Tarskian pessimism as the new orthodoxy concerning languages with their own truth predicates.
One of the main rivals to the three-valued semantics is the Revision Theory of Truth, or RTT, independently conceived by Hans Herzberger and Anil Gupta, and first presented in publication in Herzberger 1982a and 1982b, Gupta 1982 and Belnap 1982 — the first monographs on the topic are Yaqūb 1993 and the locus classicus, Gupta & Belnap 1993. The RTT is designed to model the kind of reasoning that the liar sentence leads to, within a two-valued context. (See Section 5.2 on the question of whether the RTT is genuinely two-valued.) The central idea is the idea of a revision process: a process by which we revise hypotheses about the truth-value of one or more sentences. The present article’s purpose is to outline the Revision Theory of Truth. We proceed as follows:
Let’s take a closer look at the sentence (1), given above:
It will be useful to make the paradoxical reasoning explicit. First, suppose that:
It seems an intuitive principle concerning truth that, for any sentence \(p\), we have the so-called T-biconditional
(Here we are using ‘iff’ as an abbreviation for ‘if and only if’.) In particular, we should have
Thus, from (2) and (4), we get
Then we can apply the identity,
to conclude that (1) is true. This all shows that if (1) is not true, then (1) is true. Similarly, we can also argue that if (1) is true then (1) is not true. So (1) seems to be both true and not true: hence the paradox. As stated above, the three-valued approach to the paradox takes the liar sentence, (1), to be neither true nor false. Exactly how, or even whether, this move blocks the above reasoning is a matter for debate.
The RTT is not designed to block reasoning of the above kind, but to model it — or most of it.[2] As stated above, the central idea is the idea of a revision process: a process by which we revise hypotheses about the truth-value of one or more sentences.
Consider the reasoning regarding the liar sentence, (1) above. Suppose that we hypothesize that (1) is not true. Then, with an application of the relevant T-biconditional, we might revise our hypothesis as follows:
| Hypothesis: | (1) is not true. |
| T-biconditional: | ‘(1) is not true’ is true iff (1) is not true. |
| Therefore: | ‘(1) is not true’ is true. |
| Known identity: | (1) = ‘(1) is not true’. |
| Conclusion: | (1) is true. |
| New revised hypothesis: | (1) is true. |
We could continue the revision process, by revising our hypothesis once again, as follows:
| New hypothesis: | (1) is true. |
| T-biconditional: | ‘(1) is not true’ is true iff (1) is not true. |
| Therefore: | ‘(1) is not true’ is not true. |
| Known identity: | (1) = ‘(1) is not true’. |
| Conclusion: | (1) is not true. |
| New new revised hypothesis: | (1) is not true. |
As the revision process continues, we flip back and forth between taking the liar sentence to be true and not true.
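The flip-flop just described can be simulated in a couple of lines. The sketch below is only illustrative (the function name and boolean encoding are ours, not part of the theory): a hypothesis for (1) is a single boolean, and one revision step applies the T-biconditional.

```python
# Minimal sketch of the revision process for the liar sentence (1).
# A hypothesis is one boolean: do we take (1) to be true?
def revise(hyp: bool) -> bool:
    # (1) says that (1) is not true, so the T-biconditional makes the
    # next hypothesis the negation of the current one.
    return not hyp

h, history = False, []
for _ in range(4):
    history.append(h)
    h = revise(h)
print(history)  # [False, True, False, True]: the hypothesis flips back and forth
```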
Informally, we might reason as follows. Either (7) is true or (7) is not true. Thus, either (8) is true or (9) is true. Thus, (7) is true. Thus (8) is true and (9) is not true, and (7) is still true. Iterating the process once again, we get (8) is true, (9) is not true, and (7) is true. More formally, consider any initial hypothesis, \(h_0\), about the truth values of (7), (8) and (9). Either \(h_0\) says that (7) is true or \(h_0\) says that (7) is not true. In either case, we can use the T-biconditional to generate our revised hypothesis \(h_1\): if \(h_0\) says that (7) is true, then \(h_1\) says that ‘(7) is true’ is true, i.e. that (8) is true; and if \(h_0\) says that (7) is not true, then \(h_1\) says that ‘(7) is not true’ is true, i.e. that (9) is true. So \(h_1\) says that either (8) is true or (9) is true. So \(h_2\) says that ‘(8) is true or (9) is true’ is true. In other words, \(h_2\) says that (7) is true. So no matter what hypothesis \(h_0\) we start with, two iterations of the revision process lead to a hypothesis that (7) is true. Similarly, three or more iterations of the revision process lead to the hypothesis that (7) is true, (8) is true and (9) is not true — regardless of our initial hypothesis. In Section 3, we will reconsider this example in a more formal context.
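This "regardless of the initial hypothesis" claim can be checked mechanically. In the sketch below (an encoding of our own, not from the text), a hypothesis is a triple of booleans for (7), (8) and (9), and one revision step applies the three T-biconditionals:

```python
from itertools import product

# (7) says "(8) is true or (9) is true"; (8) says "(7) is true";
# (9) says "(7) is not true".
def revise(h):
    seven, eight, nine = h
    return (eight or nine,   # (7): either (8) or (9) is true
            seven,           # (8): (7) is true
            not seven)       # (9): (7) is not true

results = []
for h0 in product([True, False], repeat=3):
    h = h0
    for _ in range(3):       # three iterations of the revision process
        h = revise(h)
    results.append(h)
print(set(results))  # {(True, True, False)}: the same stable verdict from every start
```

All eight initial hypotheses land, after three revisions, on the verdict the informal reasoning reached: (7) true, (8) true, (9) not true.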
One thing to note is that, in Example 1.1, the revision process yields stable truth values for all three sentences. The notion of a sentence stably true in all revision sequences will be a central notion for the RTT. The revision-theoretic treatment contrasts, in this case, with the three-valued approach: on most ways of implementing the three-valued idea, all three sentences, (7), (8) and (9), turn out to be neither true nor false.[3] In this case, the RTT arguably better captures the correct informal reasoning than does the three-valued approach: the RTT assigns to the sentences (7), (8) and (9) the truth-values that were assigned to them by the informal reasoning given at the beginning of the example.
The goal of the RTT is not to give a paradox-free account of truth. Rather, the goal of the RTT is to give an account of our often unstable and often paradoxical reasoning about truth. RTT seeks, more specifically, to give a two-valued account that assigns stable classical truth values to sentences when intuitive reasoning would assign stable classical truth values. We will present a formal semantics for a formal language: we want that language to have both a truth predicate and the resources to refer to its own sentences.
Let us consider a first-order language \(L\), with connectives \(\&\), \(\vee\), and \(\neg\), quantifiers \(\forall\) and \(\exists\), the equals sign \(=\), variables, and some stock of names, function symbols and relation symbols. We will say that \(L\) is a truth language if it has a distinguished predicate \(\boldsymbol{T}\) and quotation marks ‘ and ’, which will be used to form quote names: if \(A\) is a sentence of \(L\), then ‘\(A\)’ is a name. Let \(\textit{Sent}_L = \{A : A\) is a sentence of \(L\}\).
It will be useful to identify the \(\boldsymbol{T}\)-free fragment of a truth language \(L\): the first-order language \(L^-\) that has the same names, function symbols and relation symbols as \(L\), except the unary predicate \(\boldsymbol{T}\). Since \(L^-\) has the same names as \(L\), including the same quote names, \(L^-\) will have a quote name ‘\(A\)’ for every sentence \(A\) of \(L\). Thus \(\forall x\boldsymbol{T}x\) is not a sentence of \(L^-\), but ‘\(\forall x\boldsymbol{T}x\)’ is a name of \(L^-\) and \(\forall x(x =\) ‘\(\forall x\boldsymbol{T}x\)’) is a sentence of \(L^-\).
Other than the truth predicate, we will assume that our language is interpreted classically. More precisely, let a ground model for \(L\) be a classical model \(M = \langle D,I\rangle\) for \(L^-\), the \(\boldsymbol{T}\)-free fragment of \(L\), satisfying the following:
Clauses (1) and (2) simply specify what it is for \(M\) to be a classical model of the \(\boldsymbol{T}\)-free fragment of \(L\). Clauses (3) and (4) ensure that \(L\), when interpreted, can talk about its own sentences. Given a ground model, we will consider the prospects of providing a satisfying interpretation of \(\boldsymbol{T}\). The most obvious desideratum is that the ground model, expanded to include an interpretation of \(\boldsymbol{T}\), satisfy Tarski’s T-biconditionals, i.e., the biconditionals of the form
\[\boldsymbol{T}\lsquo A\rsquo \text{ iff } A\]
for each \(A \in \textit{Sent}_L\).
Some useful terminology: Given a ground model \(M\) for \(L\) and a name, function symbol or relation symbol \(X\), we can think of \(I(X)\) as the interpretation or, to borrow a term from Gupta and Belnap, the signification of \(X\). Gupta and Belnap characterize an expression’s or concept’s signification in a world \(w\) as “an abstract something that carries all the information about all the expression’s [or concept’s] extensional relations in \(w\).” If we want to interpret \(\boldsymbol{T}x\) as ‘\(x\) is true’, then, given a ground model \(M\), we would like to find an appropriate signification, or an appropriate range of significations, for \(\boldsymbol{T}\).
We might try to assign to \(\boldsymbol{T}\) a classical signification, by expanding \(M\) to a classical model \(M' = \langle D',I'\rangle\) for all of \(L\), including \(\boldsymbol{T}\). Also recall that we want \(M'\) to satisfy the T-biconditionals: for our immediate purposes, let us interpret these classically. Let us say that an expansion \(M'\) of a ground model \(M\) is Tarskian iff \(M'\) is a classical model and all of the T-biconditionals, interpreted classically, are true in \(M'\). We would like to expand ground models to Tarskian models. We consider three ground models in order to assess our prospects for doing this.
The proofs of (1) and (2) are beyond the scope of this article, but some remarks are in order.
Re (1): The fact that \(M_1\) can be expanded to a Tarskian model is not surprising, given the reasoning in Example 1.1, above: any initial hypothesis about the truth values of the three sentences in question leads, after three iterations of the revision process, to a stable hypothesis that \((\boldsymbol{T}\beta \vee \boldsymbol{T}\gamma)\) and \(\boldsymbol{T}\alpha\) are true, while \(\neg \boldsymbol{T}\alpha\) is false. The fact that \(M_1\) can be expanded to exactly one Tarskian model needs the so-called Transfer Theorem, Gupta and Belnap 1993, Theorem 2D.4.
Remark: In the introductory remarks, above, we claim that there are consistent classical interpreted languages that refer to their own sentences and have their own truth predicates. Clause (1) of Theorem 2.1 delivers an example. Let \(M_1 '\) be the unique Tarskian expansion of \(M_1\). Then the language \(L_1\), interpreted by \(M_1 '\), is an interpreted language that has its own truth predicate satisfying the T-biconditionals classically understood, obeys the rules of standard classical logic, and has the ability to refer to each of its own sentences. Thus Tarski was not quite right in his view that any language with its own truth predicate would be inconsistent, as long as it obeyed the rules of standard classical logic, and had the ability to refer to its own sentences.
Re (2): The only potentially problematic self-reference is in the sentence \(\boldsymbol{T}\tau\), the so-called truth teller, which says of itself that it is true. Informal reasoning suggests that the truth teller can consistently be assigned either classical truth value: if you assign it the value \(\mathbf{t}\) then no paradox is produced, since the sentence now truly says of itself that it is true; and if you assign it the value \(\mathbf{f}\) then no paradox is produced, since the sentence now falsely says of itself that it is true. Theorem 2.1 (2) formalizes this point, i.e., \(M_2\) can be expanded to one Tarskian model in which \(\boldsymbol{T}\tau\) is true and one in which \(\boldsymbol{T}\tau\) is false. The fact that \(M_2\) can be expanded to exactly two Tarskian models needs the Transfer Theorem, alluded to above. Note that the language \(L_2\), interpreted by either of these expansions, provides another example of an interpreted language that has its own truth predicate satisfying the T-biconditionals classically understood, obeys the rules of standard classical logic, and has the ability to refer to each of its own sentences.
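The contrast between the truth teller and the liar can be put in one line of code. In revision terms (a sketch under our own encoding; the function name is hypothetical), the truth teller’s revision step is the identity map, so both classical values are stable, whereas the liar’s step, shown earlier, is negation, so neither value is:

```python
# The truth teller: T tau, where tau names that very sentence.
def revise_tt(hyp: bool) -> bool:
    # Reasoning from hypothesis h, the value of T tau is just h(T tau):
    # revision hands back the input hypothesis unchanged.
    return hyp

print(revise_tt(True), revise_tt(False))  # True False: both assignments are stable
```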
Proof of (3). Suppose that \(M_3 ' = \langle D_3 ,I_3 '\rangle\) is a classical expansion of \(M_3\) to all of \(L_3\). Since \(M_3 '\) is an expansion of \(M_3\), \(I_3\) and \(I_3 '\) agree on all the names of \(L_3\). So
\[I_3 '(\lambda) = I_3 (\lambda) = \neg \boldsymbol{T}\lambda = I_3 (\lsquo \neg \boldsymbol{T}\lambda \rsquo) = I_3 '(\lsquo \neg \boldsymbol{T}\lambda \rsquo).\]
So the sentences \(\boldsymbol{T}\lambda\) and \(\boldsymbol{T}\)‘\(\neg \boldsymbol{T}\lambda\)’ have the same truth value in \(M_3 '\). So the T-biconditional
\[\boldsymbol{T}\lsquo \neg \boldsymbol{T}\lambda \rsquo \equiv \neg \boldsymbol{T}\lambda\]
is false in \(M_3 '\).
Remark: The language \(L_3\) interpreted by the ground model \(M_3\) formalizes the liar’s paradox, with the sentence \(\neg \boldsymbol{T}\lambda\) as the offending liar’s sentence. Thus, despite Theorem 2.1, Clauses (1) and (2), Clause (3) strongly suggests that in a semantics for languages capable of expressing their own truth concepts, \(\boldsymbol{T}\) cannot, in general, have a classical signification; and the ‘iff’ in the T-biconditionals will not be read as the classical biconditional. We take these suggestions up in Section 4, below.
In Section 1, we informally sketched the central thought of the RTT, namely, that we can use the T-biconditionals to generate a revision rule — a rule for revising a hypothesis about the extension of the truth predicate. Here we will formalize this notion, and work through an example from Section 1.
In general, let \(L\) be a truth language and \(M\) be a ground model for \(L\). An hypothesis is a function \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}\}\). A hypothesis will in effect be a hypothesized classical interpretation for \(\boldsymbol{T}\). Let’s work with an example that combines features from the ground models \(M_1\) and \(M_3\). We will state the example formally, but reason in a semiformal way, to transition from one hypothesized extension of \(\boldsymbol{T}\) to another.
It will be convenient to let
\[\begin{align}
&A \text{ be the sentence } \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
&B \text{ be the sentence } \boldsymbol{T}\alpha \\
&C \text{ be the sentence } \neg \boldsymbol{T}\alpha \\
&X \text{ be the sentence } \neg \boldsymbol{T}\lambda
\end{align}\]
Thus:
\[\begin{align}
D &= \textit{Sent}_L \\
I(\alpha) &= A \\
I(\beta) &= B \\
I(\gamma) &= C \\
I(\lambda) &= X
\end{align}\]
Suppose that the hypothesis \(h_0\) hypothesizes that \(A\) is false, \(B\) is true, \(C\) is false and \(X\) is false. Thus
\[\begin{align}
h_0 (A) &= \mathbf{f} \\
h_0 (B) &= \mathbf{t} \\
h_0 (C) &= \mathbf{f} \\
h_0 (X) &= \mathbf{f}
\end{align}\]
Now we will engage in some semiformal reasoning, on the basis of hypothesis \(h_0\). Among the four sentences \(A\), \(B\), \(C\) and \(X\), \(h_0\) puts only \(B\) in the extension of \(\boldsymbol{T}\). Thus, reasoning from \(h_0\), we conclude that:
\[\begin{align}
\neg \boldsymbol{T}\alpha &\text{ since the referent of } \alpha \text{ is not in the extension of } \boldsymbol{T} \\
\boldsymbol{T}\beta &\text{ since the referent of } \beta \text{ is in the extension of } \boldsymbol{T} \\
\neg \boldsymbol{T}\gamma &\text{ since the referent of } \gamma \text{ is not in the extension of } \boldsymbol{T} \\
\neg \boldsymbol{T}\lambda &\text{ since the referent of } \lambda \text{ is not in the extension of } \boldsymbol{T}.
\end{align}\]
The T-biconditionals for the four sentences \(A\), \(B\), \(C\) and \(X\) are as follows:
\[\begin{align}
\tag{T$_A$} A \text{ is true iff }& \boldsymbol{T}\beta \vee \boldsymbol{T}\gamma \\
\tag{T$_B$} B \text{ is true iff }& \boldsymbol{T}\alpha \\
\tag{T$_C$} C \text{ is true iff }& \neg \boldsymbol{T}\alpha \\
\tag{T$_X$} X \text{ is true iff }& \neg \boldsymbol{T}\lambda
\end{align}\]
Thus, reasoning from \(h_0\), we conclude that:
\[\begin{align}
&A \text{ is true} \\
&B \text{ is not true} \\
&C \text{ is true} \\
&X \text{ is true}
\end{align}\]
This produces our new hypothesis \(h_1\):
\[\begin{align}
h_1 (A) &= \mathbf{t} \\
h_1 (B) &= \mathbf{f} \\
h_1 (C) &= \mathbf{t} \\
h_1 (X) &= \mathbf{t}
\end{align}\]
Let’s revise our hypothesis once again. So now we will engage in some semiformal reasoning, on the basis of hypothesis \(h_1\). Hypothesis \(h_1\) puts \(A, C\) and \(X\), but not \(B\), in the extension of \(\boldsymbol{T}\). Thus, reasoning from \(h_1\), we conclude that:
\[\begin{align}
\boldsymbol{T}\alpha &\text{ since the referent of } \alpha \text{ is in the extension of } \boldsymbol{T} \\
\neg \boldsymbol{T}\beta &\text{ since the referent of } \beta \text{ is not in the extension of } \boldsymbol{T} \\
\boldsymbol{T}\gamma &\text{ since the referent of } \gamma \text{ is in the extension of } \boldsymbol{T} \\
\boldsymbol{T}\lambda &\text{ since the referent of } \lambda \text{ is in the extension of } \boldsymbol{T}.
\end{align}\]
Recall the T-biconditionals for the four sentences \(A\), \(B\), \(C\) and \(X\), given above. Reasoning from \(h_1\) and these T-biconditionals, we conclude that:
\[\begin{align}
&A \text{ is true} \\
&B \text{ is true} \\
&C \text{ is not true} \\
&X \text{ is not true}
\end{align}\]
This produces our new new hypothesis \(h_2\):
\[\begin{align}
h_2 (A) &= \mathbf{t} \\
h_2 (B) &= \mathbf{t} \\
h_2 (C) &= \mathbf{f} \\
h_2 (X) &= \mathbf{f}
\end{align}\] \(\Box\)

Let’s formalize the semiformal reasoning carried out in Example 3.1. First we hypothesized that certain sentences were, or were not, in the extension of \(\boldsymbol{T}\). Consider ordinary classical model theory. Suppose that our language has a predicate \(G\) and a name \(a\), and that we have a model \(M = \langle D,I\rangle\) which places the referent of \(a\) inside the extension of \(G\):
\[I(G)(I(a)) = \mathbf{t}\]
Then we conclude, classically, that the sentence \(Ga\) is true in \(M\). It will be useful to have some notation for the classical truth value of a sentence \(S\) in a classical model \(M\): we will write \(\textit{Val}_M (S)\). In this case, \(\textit{Val}_M (Ga) = \mathbf{t}\). In Example 3.1, we did not start with a classical model of the whole language \(L\), but only a classical model of the \(\boldsymbol{T}\)-free fragment of \(L\). But then we added a hypothesis, in order to get a classical model of all of \(L\). Let’s use the notation \(M + h\) for the classical model of all of \(L\) that you get when you extend \(M\) by assigning \(\boldsymbol{T}\) an extension via the hypothesis \(h\). Once you have assigned an extension to the predicate \(\boldsymbol{T}\), you can calculate the truth values of the various sentences of \(L\). That is, for each sentence \(S\) of \(L\), we can calculate
\[\textit{Val}_{M + h}(S)\]
In Example 3.1, we started with hypothesis \(h_0\) as follows:
\[\begin{align}
h_0 (A) &= \mathbf{f} \\
h_0 (B) &= \mathbf{t} \\
h_0 (C) &= \mathbf{f} \\
h_0 (X) &= \mathbf{f}
\end{align}\]
Then we calculated as follows:
\[\begin{align}
\textit{Val}_{M+h_0}(\boldsymbol{T}\alpha) &= \mathbf{f} \\
\textit{Val}_{M+h_0}(\boldsymbol{T}\beta) &= \mathbf{t} \\
\textit{Val}_{M+h_0}(\boldsymbol{T}\gamma) &= \mathbf{f} \\
\textit{Val}_{M+h_0}(\boldsymbol{T}\lambda) &= \mathbf{f}
\end{align}\]
And then we concluded as follows:
\[\begin{align}
\textit{Val}_{M+h_0}(A) &= \textit{Val}_{M+h_0}(\boldsymbol{T}\beta \lor \boldsymbol{T}\gamma) = \mathbf{t} \\
\textit{Val}_{M+h_0}(B) &= \textit{Val}_{M+h_0}(\boldsymbol{T}\alpha) = \mathbf{f} \\
\textit{Val}_{M+h_0}(C) &= \textit{Val}_{M+h_0}(\neg\boldsymbol{T}\alpha) = \mathbf{t} \\
\textit{Val}_{M+h_0}(X) &= \textit{Val}_{M+h_0}(\neg\boldsymbol{T}\lambda) = \mathbf{t}
\end{align}\]
These conclusions generated our new hypothesis, \(h_1\):
\[\begin{align}
h_1 (A) &= \mathbf{t} \\
h_1 (B) &= \mathbf{f} \\
h_1 (C) &= \mathbf{t} \\
h_1 (X) &= \mathbf{t}
\end{align}\]
Note that, in general,
\[h_1 (S) = \textit{Val}_{M+h_0}(S).\]
We are now prepared to define the revision rule given by a ground model \(M = \langle D,I\rangle\). In general, given an hypothesis \(h\), let \(M + h = \langle D,I'\rangle\) be the model of \(L\) which agrees with \(M\) on the \(\boldsymbol{T}\)-free fragment of \(L\), and which is such that \(I'(\boldsymbol{T}) = h\). So \(M + h\) is just a classical model for all of \(L\). For any model \(M + h\) of all of \(L\) and any sentence \(S\) of \(L\), let \(\textit{Val}_{M+h}(S)\) be the ordinary classical truth value of \(S\) in \(M + h\).
The revision rule \(\tau_M\) is then defined as follows: for any hypothesis \(h\) and any \(d \in D\), \(\tau_M (h)(d) = \mathbf{t}\) if \(d \in \textit{Sent}_L\) and \(\textit{Val}_{M+h}(d) = \mathbf{t}\), and \(\tau_M (h)(d) = \mathbf{f}\) otherwise. The ‘otherwise’ clause tells us that if \(d\) is not a sentence of \(L\), then, after one application of revision, we stick with the hypothesis that \(d\) is not true.[5] Note that, in Example 3.1, \(h_1 = \tau_M (h_0)\) and \(h_2 = \tau_M (h_1)\). We will often drop the subscripted ‘\(M\)’ when the context makes it clear which ground model is at issue.
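As a sketch of how \(\tau_M\) might be computed, the following code evaluates \(\textit{Val}_{M+h}\) recursively and then revises. The tuple encoding of sentences and all the variable names are our own assumptions, not the article’s; the run reproduces \(h_1 = \tau_M(h_0)\) from Example 3.1.

```python
# Sentences are nested tuples: ('T', name), ('not', S), ('or', S1, S2).
A = ('or', ('T', 'beta'), ('T', 'gamma'))   # T beta v T gamma
B = ('T', 'alpha')                          # T alpha
C = ('not', ('T', 'alpha'))                 # not T alpha
X = ('not', ('T', 'lambda'))                # not T lambda
I = {'alpha': A, 'beta': B, 'gamma': C, 'lambda': X}   # the denotations of the names

def val(S, h):
    """Classical truth value Val_{M+h}(S), where h is the hypothesized extension of T."""
    if S[0] == 'T':
        return h.get(I[S[1]], False)   # the 'otherwise' clause: non-sentences count as false
    if S[0] == 'not':
        return not val(S[1], h)
    if S[0] == 'or':
        return val(S[1], h) or val(S[2], h)
    raise ValueError(S)

def tau(h):
    """One revision step: the new hypothesis sends each sentence d to Val_{M+h}(d)."""
    return {S: val(S, h) for S in (A, B, C, X)}

h0 = {A: False, B: True, C: False, X: False}
h1 = tau(h0)
print(h1[A], h1[B], h1[C], h1[X])  # True False True True, matching h_1 in Example 3.1
```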
Let’s pick up Example 3.1 and see what happens when we iterate the application of the revision rule.
The following table indicates what happens with repeated applications of the revision rule \(\tau_M\) to the hypothesis \(h_0\) from Example 3.1. In this table, we will write \(\tau\) instead of \(\tau_M\):
| \(S\) | \(h_0 (S)\) | \(\tau(h_0)(S)\) | \(\tau^2 (h_0)(S)\) | \(\tau^3 (h_0)(S)\) | \(\tau^4 (h_0)(S)\) | \(\cdots\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
So \(h_0\) generates a revision sequence (see Definition 3.7, below). And \(A\) and \(B\) are stably true in that revision sequence (see Definition 3.6, below), while \(C\) is stably false. The liar sentence \(X\) is, unsurprisingly, neither stably true nor stably false: the liar sentence is unstable. A similar calculation would show that \(A\) is stably true, regardless of the initial hypothesis: thus \(A\) is categorically true (see Definition 3.8).
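These stability claims can be spot-checked by iterating the revision rule well past the stages shown in the table. The encoding below is again a sketch under our own assumptions, with hypotheses as 4-tuples of booleans for \(A, B, C, X\):

```python
# Revision per the T-biconditionals of Example 3.1:
# A is T beta v T gamma, B is T alpha, C is not T alpha, X is not T lambda.
def revise(h):
    a, b, c, x = h
    return (b or c, a, not a, not x)

h = (False, True, False, False)   # h_0 from Example 3.1
history = [h]
for _ in range(20):
    h = revise(h)
    history.append(h)

tail = history[2:]                # every stage from tau^2(h_0) onwards
print(all(a for a, _, _, _ in tail))        # True:  A stably true
print(all(b for _, b, _, _ in tail))        # True:  B stably true
print(not any(c for _, _, c, _ in tail))    # True:  C stably false
print(sorted({x for _, _, _, x in tail}))   # [False, True]: X takes both values, i.e. unstable
```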
Before giving a precise definition of a revision sequence, we give an example where we would want to carry the revision process beyond the finite stages, \(h, \tau(h), \tau^2 (h), \tau^3 (h)\), and so on.
More formally, let \(A_0\) be the sentence \(\boldsymbol{T}\alpha_0 \vee \neg \boldsymbol{T}\alpha_0\), and for each \(n \ge 0\), let \(A_{n+1}\) be the sentence \(\boldsymbol{T}\alpha_n\). Thus \(A_1\) is the sentence \(\boldsymbol{T}\alpha_0\), and \(A_2\) is the sentence \(\boldsymbol{T}\alpha_1\), and \(A_3\) is the sentence \(\boldsymbol{T}\alpha_2\), and so on. Our ground model \(M = \langle D,I\rangle\) is as follows:
\[\begin{align}
D &= \textit{Sent}_L \\
I(\alpha_n) &= A_n \\
I(G)(A) &= \mathbf{t} \text{ iff } A = A_n \text{ for some } n
\end{align}\]
Thus, the extension of \(G\) is the following set of sentences:
\[\{A_0, A_1, A_2, A_3 , \ldots \} = \{(\boldsymbol{T}\alpha_0 \vee \neg \boldsymbol{T}\alpha_0), \boldsymbol{T}\alpha_0, \boldsymbol{T}\alpha_1, \boldsymbol{T}\alpha_2, \boldsymbol{T}\alpha_3 , \ldots \}.\]
Finally let \(B\) be the sentence \(\forall x(Gx \supset \boldsymbol{T}x)\). Let \(h\) be any hypothesis for which we have, for each natural number \(n\),
\[h(A_n) = h(B) = \mathbf{f}.\]
The following table indicates what happens with repeated applications of the revision rule \(\tau_M\) to the hypothesis \(h\). In this table, we will write \(\tau\) instead of \(\tau_M\):
| \(S\) | \(h(S)\) | \(\tau(h)(S)\) | \(\tau^2 (h)(S)\) | \(\tau^3 (h)(S)\) | \(\tau^4 (h)(S)\) | \(\cdots\) |
| \(A_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
At the \(0^{\text{th}}\) stage, each \(A_n\) is outside the hypothesized extension of \(\boldsymbol{T}\). But from the \(n{+}1^{\text{th}}\) stage onwards, \(A_n\) is in the hypothesized extension of \(\boldsymbol{T}\). So, for each \(n\), the sentence \(A_n\) is eventually stably hypothesized to be true. Despite this, there is no finite stage at which all the \(A_n\)’s are hypothesized to be true: as a result the sentence \(B = \forall x(Gx \supset \boldsymbol{T}x)\) remains false at each finite stage. This suggests extending the process as follows:
| \(S\) | \(h(S)\) | \(\tau(h)(S)\) | \(\tau^2 (h)(S)\) | \(\tau^3 (h)(S)\) | \(\cdots\) | \(\omega\) | \(\omega +1\) | \(\omega +2\) | \(\cdots\) |
| \(A_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
Thus, if we allow the revision process to proceed beyond the finite stages, then the sentence \(B = \forall x(Gx \supset \boldsymbol{T}x)\) is stably true from the \(\omega{+}1^{\text{th}}\) stage onwards. \(\Box\)
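The finite-stage pattern of Example 3.4 can be checked with a small simulation. Since we cannot represent infinitely many sentences, the cutoff `N` below is our own assumption, and the run stops well before the truncation could distort the result:

```python
N = 8   # assumed truncation: only A_0 .. A_{N-1} are represented

def revise(h):
    new = {0: True}                         # A_0 is T alpha_0 v not T alpha_0, a tautology
    for n in range(1, N):
        new[n] = h[n - 1]                   # A_{n+1} is T alpha_n: true iff A_n was hypothesized true
    new['B'] = all(h[n] for n in range(N))  # B says every sentence in G's extension is true
    return new

h = {n: False for n in range(N)}
h['B'] = False
stages = [h]
for _ in range(N):
    stages.append(revise(stages[-1]))

# A_n first becomes (and stays) true at stage n+1 ...
print(all(stages[s][n] == (s >= n + 1) for s in range(N + 1) for n in range(N)))  # True
# ... but B is false at every finite stage computed.
print(any(stages[s]['B'] for s in range(N + 1)))  # False
```

In the full, untruncated model no finite stage makes every \(A_n\) true, so \(B\) must wait for the \(\omega{+}1^{\text{th}}\) stage, as the tables above show.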
In Example 3.4, the intuitive verdict is that not only should each \(A_n\) receive a stable truth value of \(\mathbf{t}\), but so should the sentence \(B = \forall x(Gx \supset \boldsymbol{T}x)\). The only way to ensure this is to carry the revision process beyond the finite stages. So we will consider revision sequences that are very long: not only will a revision sequence have an \(n^{\text{th}}\) stage for each finite number \(n\), but an \(\eta^{\text{th}}\) stage for every ordinal number \(\eta\). (The next paragraph is to help the reader unfamiliar with ordinal numbers.)
One way to think of the ordinal numbers is as follows. Start with the finite natural numbers:
\[0, 1, 2, 3, \ldots\]
Add a number, \(\omega\), greater than all of these but not the immediate successor of any of them:
\[0, 1, 2, 3, \ldots ,\omega\]
And then take the successor of \(\omega\), its successor, and so on:
\[0, 1, 2, 3, \ldots ,\omega , \omega +1, \omega +2, \omega +3, \ldots\]
Then add a number \(\omega +\omega\), or \(\omega \times 2\), greater than all of these (and again, not the immediate successor of any), and start over, reiterating this process over and over:
\[\begin{align}
&0, 1, 2, 3, \ldots \\
&\omega , \omega +1, \omega +2, \omega +3, \ldots \\
&\omega \times 2, (\omega \times 2)+1, (\omega \times 2)+2, (\omega \times 2)+3, \ldots \\
&\omega \times 3, (\omega \times 3)+1, (\omega \times 3)+2, (\omega \times 3)+3, \ldots \\
&\ \vdots
\end{align}\]
At the end of this, we add an ordinal number \(\omega \times \omega\), or \(\omega^2\):
\[\begin{align}
&0, 1, 2, \ldots ,\omega , \omega +1, \omega +2, \ldots ,\omega \times 2, (\omega \times 2)+1, \ldots \\
&\omega \times 3, \ldots ,\omega \times 4, \ldots ,\omega \times 5, \ldots ,\omega^2, \omega^2 +1, \ldots
\end{align}\]
The ordinal numbers have the following structure: every ordinal number has an immediate successor, known as a successor ordinal; and for any infinitely ascending sequence of ordinal numbers, there is a limit ordinal which is greater than all the members of the sequence and which is not the immediate successor of any member of the sequence. Thus the following are successor ordinals: \(5, 178, \omega +12, (\omega \times 5)+56, \omega^2 +8\); and the following are limit ordinals: \(\omega , \omega \times 2, \omega^2 , (\omega^2 +\omega)\), etc. Given a limit ordinal \(\eta\), a sequence \(S\) of objects is an \(\eta\)-long sequence if there is an object \(S_{\delta}\) for every ordinal \(\delta \lt \eta\). We will denote the class of ordinals as \(\textsf{On}\). Any sequence \(S\) of objects is an \(\textsf{On}\)-long sequence if there is an object \(S_{\delta}\) for every ordinal \(\delta\).
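The successor/limit distinction can be illustrated concretely for the ordinals below \(\omega^2\), each of which has a unique normal form \(\omega \times a + n\). The pair encoding below is our own illustrative device, not standard machinery:

```python
# Represent an ordinal below omega^2 as a pair (a, n), meaning omega*a + n.
def is_successor(o):
    a, n = o
    return n > 0              # omega*a + n (n > 0) immediately succeeds omega*a + (n-1)

def is_limit(o):
    a, n = o
    return n == 0 and a > 0   # omega, omega*2, omega*3, ...; (0, 0), i.e. zero, is neither

print(is_successor((0, 178)), is_limit((2, 0)), is_successor((1, 12)))  # True True True
```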
When assessing whether a sentence receives a stable truth value, the RTT considers sequences of hypotheses of length \(\textsf{On}\). So suppose that \(S\) is an \(\textsf{On}\)-long sequence of hypotheses, and let \(\zeta\) and \(\eta\) range over ordinals. Clearly, in order for \(S\) to represent the revision process, we need the \(\zeta{+}1^{\text{th}}\) hypothesis to be generated from the \(\zeta^{\text{th}}\) hypothesis by the revision rule. So we insist that \(S_{\zeta +1} = \tau_M(S_{\zeta})\). But what should we do at a limit stage? That is, how should we set \(S_{\eta}(d)\) when \(\eta\) is a limit ordinal? Clearly any object that is stably true [false] up to that stage should be true [false] at that stage. Thus consider Example 3.4. The sentence \(A_2\), for example, is stably true up to the \(\omega^{\text{th}}\) stage; so we set \(A_2\) to be true at the \(\omega^{\text{th}}\) stage. For objects that do not stabilize up to that stage, Gupta and Belnap 1993 adopt a liberal policy: when constructing a revision sequence \(S\), if the value of the object \(d \in D\) has not stabilized by the time you get to the limit stage \(\eta\), then you can set \(S_{\eta}(d)\) to be whichever of \(\mathbf{t}\) or \(\mathbf{f}\) you like. Before we give the precise definition of a revision sequence, we continue with Example 3.3 to see an application of this idea.
Example 3.5 (Example 3.3 continued)
Recall that \(L\) contains four non-quote names, \(\alpha , \beta , \gamma\) and \(\lambda\), and no predicates other than \(\boldsymbol{T}\). Also recall that \(M = \langle D,I\rangle\) is as follows:
The following table indicates what happens with repeated applications of the revision rule \(\tau_M\) to the hypothesis \(h_0\) from Example 3.3. For each ordinal \(\eta\), we will indicate the \(\eta^{\text{th}}\) hypothesis by \(S_{\eta}\) (suppressing the index \(M\) on \(\tau)\). Thus \(S_0 = h_0,\) \(S_1 = \tau(h_0),\) \(S_2 = \tau^2 (h_0),\) \(S_3 = \tau^3 (h_0),\) and \(S_{\omega},\) the \(\omega^{\text{th}}\) hypothesis, is determined in some way from the hypotheses leading up to it. So, starting with \(h_0\) from Example 3.3, our revision sequence begins as follows:
| \(S\) | \(S_0 (S)\) | \(S_1 (S)\) | \(S_2 (S)\) | \(S_3 (S)\) | \(S_4 (S)\) | \(\cdots\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
What happens at the \(\omega^{\text{th}}\) stage? \(A\) and \(B\) are stably true up to the \(\omega^{\text{th}}\) stage, and \(C\) is stably false up to the \(\omega^{\text{th}}\) stage. So at the \(\omega^{\text{th}}\) stage, we must have the following:
| \(S\) | \(S_0 (S)\) | \(S_1 (S)\) | \(S_2 (S)\) | \(S_3 (S)\) | \(S_4 (S)\) | \(\cdots\) | \(S_{\omega}(S)\) |
| \(A\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) |
| \(B\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) |
| \(C\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) |
| \(X\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) | ? |
But the entry for \(S_{\omega}(X)\) can be either \(\mathbf{t}\) or \(\mathbf{f}\). In other words, the initial hypothesis \(h_0\) generates at least two revision sequences. Every revision sequence \(S\) that has \(h_0\) as its initial hypothesis must have \(S_{\omega}(A) = \mathbf{t}, S_{\omega}(B) = \mathbf{t}\), and \(S_{\omega}(C) = \mathbf{f}\). But there is some revision sequence \(S\), with \(h_0\) as its initial hypothesis, and with \(S_{\omega}(X) = \mathbf{t}\); and there is some revision sequence \(S'\), with \(h_0\) as its initial hypothesis, and with \(S'_{\omega}(X) = \mathbf{f}. \Box\)
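The finite stages of such a revision process are easy to compute mechanically. The following Python sketch is purely illustrative: the two-sentence "language" and its revision rule are our own toy example (a truth-teller-like sentence and a liar-like sentence), not Gupta and Belnap's formal apparatus. It iterates the rule over the finite stages and reports which sentences have stabilized, which is exactly the information the limit policy consumes.

```python
# Illustrative toy revision rule (hypothetical, for exposition only):
#   tau(h)(liar)         = not h(liar)       -- the liar denies itself
#   tau(h)(truth_teller) = h(truth_teller)   -- the truth-teller affirms itself

def tau(h):
    """Revise a hypothesis h (a dict from sentence names to truth values)."""
    return {"liar": not h["liar"], "truth_teller": h["truth_teller"]}

def finite_stages(h0, n):
    """Return the sequence of hypotheses S_0, S_1, ..., S_{n-1}."""
    stages, h = [h0], dict(h0)
    for _ in range(n - 1):
        h = tau(h)
        stages.append(dict(h))
    return stages

def stable_value(stages, d):
    """Value of d if d is stable in this (finite) sequence, else None."""
    values = [s[d] for s in stages]
    for theta in range(len(values) - 1):
        tail = values[theta:]
        if all(v == tail[0] for v in tail):
            return tail[0]
    return None

S = finite_stages({"liar": False, "truth_teller": True}, 10)
print(stable_value(S, "truth_teller"))  # True: stable, so fixed at a limit stage
print(stable_value(S, "liar"))          # None: unstable, free choice at a limit stage
```

The truth-teller stabilizes at whatever value the initial hypothesis gave it, so the limit policy fixes its limit-stage value; the liar never stabilizes, so the liberal policy leaves its limit-stage value open.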
We are now ready to define the notion of a revision sequence:
Suppose that \(S\) is an \(\eta\)-long sequence of hypotheses for some limit ordinal \(\eta\). Then we say that \(d \in D\) is stably \(\mathbf{t}\) \([\mathbf{f}]\) in \(S\) iff for some ordinal \(\theta \lt \eta\) we have
\[ S_{\zeta}(d) = \mathbf{t}\ [\mathbf{f}], \text{ for every ordinal } \zeta\text{ such that } \zeta \ge \theta \text{ and } \zeta \lt \eta.\]If \(S\) is an \(\textsf{On}\)-long sequence of hypotheses and \(\eta\) is a limit ordinal, then \(S|_{\eta}\) is the initial segment of \(S\) up to but not including \(\eta\). Note that \(S|_{\eta}\) is an \(\eta\)-long sequence of hypotheses.
We now illustrate these concepts with an example. The example will also illustrate a new concept to be defined afterwards.
Let \(A_0\) be the sentence \(\exists x(Gx \amp \neg \boldsymbol{T}x)\). And for each \(n \ge 0\), let \(A_{n+1}\) be the sentence \(\boldsymbol{T}\alpha_n\). Consider the following ground model \(M = \langle D,I\rangle\):
\[\begin{align}D &= \textit{Sent}_L \\I(\beta) &= B \\I(\alpha_n) &= A_n \\I(G)(A) &= \mathbf{t} \text{ iff } A = A_n \text{ for some } n\end{align}\]Thus, the extension of \(G\) is the following set of sentences: \(\{A_0, A_1, A_2, A_3 , \ldots \} = \{\exists x(Gx \amp \neg \boldsymbol{T}x), \boldsymbol{T}\alpha_0, \boldsymbol{T}\alpha_1, \boldsymbol{T} \alpha_2, \boldsymbol{T}\alpha_3 , \ldots \}\). Let \(h\) be any hypothesis for which we have \(h(B) = \mathbf{f}\) and, for each natural number \(n\),
\[h(A_n) = \mathbf{f}.\]And let \(S\) be a revision sequence whose initial hypothesis is \(h\), i.e., \(S_0 = h\). The following table indicates some of the values of \(S_{\gamma}(C)\), for sentences \(C \in \{B, A_0, A_1, A_2, A_3 , \ldots \}\). In the top row, we indicate only the ordinal number representing the stage in the revision process.
| | 0 | 1 | 2 | 3 | \(\cdots\) | \(\omega\) | \(\omega{+}1\) | \(\omega{+}2\) | \(\omega{+}3\) | \(\cdots\) | \(\omega{\times}2\) | \((\omega{\times}2){+}1\) | \((\omega{\times}2){+}2\) | \(\cdots\) |
| \(B\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_0\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_1\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) |
| \(A_2\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_3\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(A_4\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\mathbf{f}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\mathbf{t}\) | \(\cdots\) |
| \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\vdots\) | \(\ddots\) |
It is worth contrasting the behaviour of the sentence \(B\) and the sentence \(A_0\). From the \(\omega{+}1^{\text{th}}\) stage on, \(B\) stabilizes as true. In fact, \(B\) is stably true in every revision sequence for \(M\). Thus, \(B\) is categorically true in \(M\). The sentence \(A_0\), however, never quite stabilizes: it is usually true, but within a few finite stages of a limit ordinal, the sentence \(A_0\) can be false. In these circumstances, we say that \(A_0\) is nearly stably true. (See Definition 3.10, below.) In fact, \(A_0\) is nearly stably true in every revision sequence for \(M. \Box\)
Example 3.9 illustrates not only the notion of stability in a revision sequence, but also the notion of near stability, which we define now:
Suppose that \(L\) is a truth language, and that \(M = \langle D,I\rangle\) is a ground model. Suppose that \(S\) is an \(\textsf{On}\)-long sequence of hypotheses. Then we say that \(d \in D\) is nearly stably \(\mathbf{t}\) \([\mathbf{f}]\) in \(S\) iff for some ordinal \(\theta\) we have
for every \(\zeta \ge \theta\), there is a natural number \(n\) suchthat, for every \(m \ge n, S_{\zeta +m}(d) = \mathbf{t}\)\([\mathbf{f}]\).
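On finite approximations of the first few \(\omega\)-blocks of stages, this definition can be checked mechanically. The following Python sketch is illustrative only: it encodes, as functions of a pair \((i, n)\) representing the ordinal \(\omega{\times}i + n\), the revision patterns of \(B\) and \(A_0\) from Example 3.9, and tests stability and near stability on the surveyed stages. The encodings and the finite cutoffs are our own simplifying assumptions, not part of the formal theory.

```python
# Stages below omega*K are represented as pairs (i, n), meaning omega*i + n.

# Pattern of B from Example 3.9: false at stages 0, 1, ..., omega; true after.
def value_B(i, n):
    return (i, n) > (1, 0)

# Pattern of A_0: false at stage 0 and at each stage omega*i + 1 (i >= 1).
def value_A0(i, n):
    return n >= 1 if i == 0 else n != 1

def stably_true(value, K, N):
    """Heuristic check on the finite approximation: stably true iff no
    falsehood occurs in the last omega-block surveyed."""
    falses = [(i, n) for i in range(K) for n in range(N) if not value(i, n)]
    return not falses or max(i for (i, _) in falses) < K - 1

def nearly_stably_true(value, K, N):
    """Nearly stably true iff, in each later omega-block i, the value is
    true from some finite offset n_i onward (checked up to N)."""
    return all(
        any(all(value(i, m) for m in range(n, N)) for n in range(N))
        for i in range(1, K))

print(stably_true(value_B, 5, 50), nearly_stably_true(value_B, 5, 50))    # True True
print(stably_true(value_A0, 5, 50), nearly_stably_true(value_A0, 5, 50))  # False True
```

\(B\)'s falsehoods all occur before a fixed stage, so it is stably (hence also nearly stably) true; \(A_0\)'s falsehoods recur just after every limit ordinal, so it is nearly stably true but not stably true.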
Gupta and Belnap 1993 characterize the difference between stability and near stability as follows: “Stability simpliciter requires an element [in our case a sentence] to settle down to a value \(\mathbf{x}\) [in our case a truth value] after some initial fluctuations, say up to [an ordinal \(\eta\)]… In contrast, near stability allows fluctuations after \(\eta\) also, but these fluctuations must be confined to finite regions just after limit ordinals” (p. 169). Gupta and Belnap 1993 introduce two theories of truth, \(\boldsymbol{T}^*\) and \(\boldsymbol{T}^{\#}\), based on stability and near stability. Theorems 3.12 and 3.13, below, illustrate an advantage of the system \(\boldsymbol{T}^{\#}\), i.e., the system based on near stability.
Gupta and Belnap 1993, Section 6C, note similar advantages of \(\boldsymbol{T}^{\#}\) over \(\boldsymbol{T}^*\). For example, \(\boldsymbol{T}^{\#}\) does, but \(\boldsymbol{T}^*\) does not, validate the following semantic principles:
\[\begin{align}\boldsymbol{T}\lsquo A \amp B\rsquo &\equiv \boldsymbol{T}\lsquo A\rsquo \amp \boldsymbol{T}\lsquo B\rsquo \\\boldsymbol{T}\lsquo A \vee B\rsquo &\equiv \boldsymbol{T}\lsquo A\rsquo \vee \boldsymbol{T}\lsquo B\rsquo\end{align}\]Gupta and Belnap remain noncommittal about which of \(\boldsymbol{T}^{\#}\) and \(\boldsymbol{T}^*\) (and a further alternative that they define, \(\boldsymbol{T}^c)\) is preferable.
The main formal notions of the RTT are the notion of a revision rule (Definition 3.2), i.e., a rule for revising hypotheses; and that of a revision sequence (Definition 3.7), a sequence of hypotheses generated in accordance with the appropriate revision rule. Using these notions, we can, given a ground model, specify when a sentence is stably (or nearly stably) true or stably (or nearly stably) false (Definitions 3.6 and 3.10, respectively) in a particular revision sequence. Thus we can define two theories of truth, \(\boldsymbol{T}^*\) and \(\boldsymbol{T}^{\#}\) (Definition 3.11), based on stability and near stability (respectively). The final idea is that each of these theories delivers a verdict on which sentences of the language are valid (Definition 3.11), given a ground model.
Recall the suggestions made at the end of Section 2:
In a semantics for languages capable of expressing their own truth concepts, \(\boldsymbol{T}\) will not, in general, have a classical signification; and the ‘iff’ in the T-biconditionals will not be read as the classical biconditional.
Gupta and Belnap fill out these suggestions in the following way.
First, they suggest that the signification of \(\boldsymbol{T}\), given a ground model \(M\), is the revision rule \(\tau_M\) itself. As noted in the preceding paragraph, we can give a fine-grained analysis of sentences’ statuses and interrelations on the basis of notions generated directly and naturally from the revision rule \(\tau_M\). Thus, \(\tau_M\) is a good candidate for the signification of \(\boldsymbol{T}\), since it does seem to be “an abstract something that carries all the information about all [of \(\boldsymbol{T}\)’s] extensional relations” in \(M\). (See Gupta and Belnap’s characterization of an expression’s signification, given in Section 2, above.)
Gupta and Belnap’s related suggestion concerning the ‘iff’ in the T-biconditionals is that, rather than being the classical biconditional, this ‘iff’ is the distinctive biconditional used to define a previously undefined concept. In 1993, Gupta and Belnap present the revision theory of truth as a special case of a revision theory of circularly defined concepts. Suppose that \(L\) is a language with a unary predicate \(F\) and a binary predicate \(R\). Consider a new concept expressed by a predicate \(G\), introduced through a definition like this:
\[Gx =_{df} \forall y(Ryx \supset Fx) \vee \exists y(Ryx \amp Gx).\]Suppose that we start with a domain of discourse, \(D\), and an interpretation of the predicate \(F\) and the relation symbol \(R\). Gupta and Belnap’s revision-theoretic treatment of concepts thus circularly introduced allows one to give categorical verdicts, for certain \(d \in D\), about whether or not \(d\) satisfies \(G\). Other objects will be unstable relative to \(G\): we will be able categorically to assert neither that \(d\) satisfies \(G\) nor that \(d\) does not satisfy \(G\). In the case of truth, Gupta and Belnap take the set of T-biconditionals of the form
\[\tag{10}\boldsymbol{T}\lsquo A\rsquo =_{df} A\]together to give the definition of the concept of truth. It is their treatment of ‘\(=_{df}\)’ (the ‘iff’ of definitional concept introduction), together with the T-biconditionals of the form (10), that determines the revision rule \(\tau_M\).
Recall the liar sentence, (1), from the beginning of this article:
In Section 1, we claimed that the RTT is designed to model, rather than block, the kind of paradoxical reasoning regarding (1). But we noted in footnote 2 that the RTT does avoid contradictions in these situations. There are two ways to see this. First, while the RTT does endorse the biconditional
(1) is true iff (1) is not true,
the relevant ‘iff’ is not the material biconditional, as explained above. Thus, it does not follow that both (1) is true and (1) is not true. Second, note that on no hypothesis can we conclude that both (1) is true and (1) is not true. If we keep it firmly in mind that revision-theoretical reasoning is hypothetical rather than categorical, then we will not infer any contradictions from the existence of a sentence such as (1), above.
Gupta and Belnap’s suggestions, concerning the signification of \(\boldsymbol{T}\) and the interpretation of the ‘iff’ in the T-biconditionals, dovetail nicely with two closely related intuitions articulated in Gupta & Belnap 1993. The first intuition, loosely expressed, is “that the T-biconditionals are analytic and fix the meaning of ‘true’” (p. 6). More tightly expressed, it becomes the “Signification Thesis” (p. 31): “The T-biconditionals fix the signification of truth in every world [where a world is represented by a ground model].”[7] Given the revision-theoretic treatment of the definitional ‘iff’, and given a ground model \(M\), the T-biconditionals (10) do, as noted, fix the suggested signification of \(\boldsymbol{T}\), i.e., the revision rule \(\tau_M\).
The second intuition is the supervenience of the signification of truth. This is a descendant of M. Kremer’s 1988 proposed supervenience of semantics. The idea is simple: which sentences fall under the concept truth should be fixed by (1) the interpretation of the nonsemantic vocabulary, and (2) the empirical facts. In non-circular cases, this intuition is particularly strong: the standard interpretation of “snow” and “white” and the empirical fact that snow is white are enough to determine that the sentence “snow is white” falls under the concept truth. The supervenience of the signification of truth is the thesis that the signification of truth, whatever it is, is fixed by the ground model \(M\). Clearly, the RTT satisfies this principle.
It is worth seeing how a theory of truth might violate this principle. Consider a truth-teller sentence, i.e., the sentence that says of itself that it is true:
As noted above, Kripke’s three-valued semantics allows three truth values, true \((\mathbf{t})\), false \((\mathbf{f})\), and neither \((\mathbf{n})\). Given a ground model \(M = \langle D,I\rangle\) for a truth language \(L\), the candidate interpretations of \(\boldsymbol{T}\) are three-valued interpretations, i.e., functions \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}, \mathbf{n}\}\). Given a three-valued interpretation of \(\boldsymbol{T}\), and a scheme for evaluating the truth value of composite sentences in terms of their parts, we can specify a truth value \(\textit{Val}_{M+h}(A) = \mathbf{t}, \mathbf{f}\) or \(\mathbf{n}\), for every sentence \(A\) of \(L\). The central theorem of the three-valued semantics is that, given any ground model \(M\), there is a three-valued interpretation \(h\) of \(\boldsymbol{T}\) so that, for every sentence \(A\), we have \(\textit{Val}_{M+h}(\boldsymbol{T}\lsquo A\rsquo) = \textit{Val}_{M+h}(A)\).[8] We will call such an interpretation of \(\boldsymbol{T}\) an acceptable interpretation. Our point here is this: if there’s a truth-teller, as in (11), then there is not only one acceptable interpretation of \(\boldsymbol{T}\); there are three: one according to which (11) is true, one according to which (11) is false, and one according to which (11) is neither. Thus, there is no single “correct” interpretation of \(\boldsymbol{T}\) given a ground model \(M\). Thus the three-valued semantics seems to violate the supervenience of semantics.[9]
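This multiplicity can be exhibited concretely for a toy "language" containing just a truth-teller and a liar (with strong Kleene negation). The following Python sketch uses our own hypothetical encoding of these two sentences; it enumerates the interpretations satisfying the fixed-point condition \(\textit{Val}_{M+h}(\boldsymbol{T}\lsquo A\rsquo) = \textit{Val}_{M+h}(A)\).

```python
# Illustrative sketch: truth values are 't', 'f', 'n'. The truth-teller tt
# is the sentence T'tt'; the liar is ~T'liar' (strong Kleene negation).
# An interpretation h is acceptable iff Val(T'A') = Val(A) for each A,
# where Val(T'A') = h(A).

def kleene_not(v):
    return {"t": "f", "f": "t", "n": "n"}[v]

def val(sentence, h):
    """Truth value of a sentence under hypothesis h."""
    if sentence == "tt":
        return h["tt"]                 # T'tt' reports h's verdict on tt
    if sentence == "liar":
        return kleene_not(h["liar"])   # ~T'liar'
    raise ValueError(sentence)

def acceptable(h):
    return all(h[A] == val(A, h) for A in h)

ok = [(u, v) for u in "tfn" for v in "tfn" if acceptable({"tt": u, "liar": v})]
print(ok)  # [('t', 'n'), ('f', 'n'), ('n', 'n')]
```

The liar is forced to \(\mathbf{n}\) in every acceptable interpretation, but the truth-teller may be \(\mathbf{t}\), \(\mathbf{f}\), or \(\mathbf{n}\): three acceptable interpretations, with nothing in the ground model to choose among them.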
The RTT does not assign a truth value to the truth-teller, (11). Rather, it gives an analysis of the kind of reasoning that one might engage in with respect to the truth-teller: If we start with a hypothesis \(h\) according to which (11) is true, then upon revision (11) remains true. And if we start with a hypothesis \(h\) according to which (11) is not true, then upon revision (11) remains not true. And that is all that the concept of truth leaves us with. Given this behaviour of (11), the RTT tells us that (11) is neither categorically true nor categorically false, but this is quite different from a verdict that (11) is neither true nor false.
We note an alternative interpretation of the revision-theoretic formalism. Yaqūb 1993 agrees with Gupta and Belnap that the T-biconditionals are definitional rather than material biconditionals, and that the concept of truth is therefore circular. But Yaqūb interprets this circularity in a distinctive way. He argues that,
since the truth conditions of some sentences involve reference to truth in an essential, irreducible manner, these conditions can only obtain or fail in a world that already includes an extension of the truth predicate. Hence, in order for the revision process to determine an extension of the truth predicate, an initial extension of the predicate must be posited. This much follows from circularity and bivalence. (1993, 40)
Like Gupta and Belnap, Yaqūb posits no privileged extension for \(\boldsymbol{T}\). And like Gupta and Belnap, he sees the revision sequences of extensions of \(\boldsymbol{T}\), each sequence generated by an initial hypothesized extension, as “capable of accommodating (and diagnosing) the various kinds of problematic and unproblematic sentences of the languages under consideration” (1993, 41). But, unlike Gupta and Belnap, he concludes from these considerations that “truth in a bivalent language is not supervenient” (1993, 39). He explains in a footnote: for truth to be supervenient, the truth status of each sentence must be “fully determined by nonsemantical facts”. Yaqūb does not explicitly use the notion of a concept’s signification. But Yaqūb seems committed to the claim that the signification of \(\boldsymbol{T}\) — i.e., that which determines the truth status of each sentence — is given by a particular revision sequence itself. And no revision sequence is determined by the nonsemantical facts, i.e., by the ground model, alone: a revision sequence is determined, at best, by a ground model and an initial hypothesis.[10]
To obtain a signification of \(\boldsymbol{T}\) and a notion of validity based on the concept of stability (or near stability) is by no means the only use we can make of revision sequences. For one thing, we could use revision-theoretic notions to make rather fine-grained distinctions among sentences: Some sentences are unstable in every revision sequence; others are stable in every revision sequence, though stably true in some and stably false in others; and so on. Thus, we can use revision-theoretic ideas to give a fine-grained analysis of the status of various sentences, and of the relationships of various sentences to one another. Hsiung (2017) explores this possibility further by generalizing the notion of a revision sequence to a revision mapping on a digraph, in order to extend this analysis to sets of sentences of a certain kind, called Boolean paradoxes. As shown in Hsiung 2022, this procedure can also be reversed, at least to some extent: Not only can we use revision sequences (or mappings) to classify paradoxical sentences by means of their revision-theoretic patterns, but we can also construct “new” paradoxes from given revision-theoretic patterns. Rossi (2019) combines the revision-theoretic technique with graph-theoretic tools and fixed-point constructions in order to represent a threefold classification of paradoxical sentences (liar-like, truth-teller-like, and revenge sentences) within a single model.
We have given only the barest exposition of the three-valued semantics, in our discussion of the supervenience of the signification of truth, above. Given a truth language \(L\) and a ground model \(M\), we defined an acceptable three-valued interpretation of \(\boldsymbol{T}\) as an interpretation \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}, \mathbf{n}\}\) such that \(\textit{Val}_{M+h}(\boldsymbol{T}\lsquo A\rsquo) = \textit{Val}_{M+h}(A)\) for each sentence \(A\) of \(L\). In general, given a ground model \(M\), there are many acceptable interpretations of \(\boldsymbol{T}\). Suppose that each of these is indeed a truly acceptable interpretation. Then the three-valued semantics violates the supervenience of the signification of \(\boldsymbol{T}\).
Suppose, on the other hand, that, for each ground model \(M\), we can isolate a privileged acceptable interpretation as the correct interpretation of \(\boldsymbol{T}\). Gupta and Belnap present a number of considerations against the three-valued semantics, so conceived. (See Gupta & Belnap 1993, Chapter 3.) One principal argument is that the central theorem, i.e., that for each ground model there is an acceptable interpretation, only holds when the underlying language is expressively impoverished in certain ways: for example, the three-valued approach fails if the language has a connective \({\sim}\) with the following truth table:
| \(A\) | \({\sim}A\) |
| \(\mathbf{t}\) | \(\mathbf{f}\) |
| \(\mathbf{f}\) | \(\mathbf{t}\) |
| \(\mathbf{n}\) | \(\mathbf{t}\) |
The only negation operator that the three-valued approach can handlehas the following truth table:
| \(A\) | \(\neg A\) |
| \(\mathbf{t}\) | \(\mathbf{f}\) |
| \(\mathbf{f}\) | \(\mathbf{t}\) |
| \(\mathbf{n}\) | \(\mathbf{n}\) |
But consider the liar that says of itself that it is ‘not’ true, in this latter sense of ‘not’. Gupta and Belnap urge the claim that this sentence “ceases to be intuitively paradoxical” (1993, 100). The claimed advantage of the RTT is its ability to describe the behaviour of genuinely paradoxical sentences: the genuine liar is unstable under semantic evaluation: “No matter what we hypothesize its value to be, semantic evaluation refutes our hypothesis.” The three-valued semantics can only handle the “weak liar”, i.e., a sentence that only weakly negates itself, but that is not guaranteed to be paradoxical: “There are appearances of the liar here, but they deceive.”
We’ve thus far reviewed two of Gupta and Belnap’s complaints against three-valued approaches, and now we raise a third: in the three-valued theories, truth typically behaves like a nonclassical concept even when there’s no vicious reference in the language. Without defining terms here, we note that one popular precisification of the three-valued approach is to take the correct interpretation of \(\boldsymbol{T}\) to be that given by the ‘least fixed point’ of the ‘strong Kleene scheme’: putting aside details, this interpretation always assigns the truth value \(\mathbf{n}\) to the sentence \(\forall x(\boldsymbol{T}x \vee \neg \boldsymbol{T}x)\), even when the ground model allows no circular, let alone vicious, reference. Gupta and Belnap claim an advantage for the RTT: according to the revision-theoretic approach, they claim, truth always behaves like a classical concept when there is no vicious reference.
Kremer 2010 challenges this claim by precisifying it as a formal claim against which particular revision theories (e.g. \(\boldsymbol{T}^*\) or \(\boldsymbol{T}^{\#}\), see Definition 3.11, above) and particular three-valued theories can be tested. As it turns out, on many three-valued theories, truth does in fact behave like a classical concept when there’s no vicious reference: for example, the least fixed point of a natural variant of the supervaluation scheme always assigns \(\boldsymbol{T}\) a classical interpretation in the absence of vicious reference. It is granted that, when there is no vicious reference, truth behaves like a classical concept if we adopt Gupta and Belnap’s theory \(\boldsymbol{T}^*\); however, so Kremer argues, this is not the case if we instead adopt Gupta and Belnap’s theory \(\boldsymbol{T}^{\#}\). This discussion is taken up further by Wintein 2014. A general assessment of the relative merits of the three-valued approaches and the revision-theoretic approaches, from a metasemantic point of view, is at the core of Pinder 2018.
A contrast presupposed by this entry is between allegedly two-valued theories, like the RTT, and allegedly three-valued or other many-valued rivals. One might think of the RTT itself as providing infinitely many semantic values, for example one value for every possible revision sequence. Or one could extract three semantic values for sentences: categorical truth, categorical falsehood, and uncategoricalness.
In reply, it must be granted that the RTT generates many statuses available to sentences. Similarly, three-valued approaches also typically generate many statuses available to sentences. The claim of two-valuedness is not a claim about statuses available to sentences, but rather a claim about the truth values presupposed in the whole enterprise.
We note three ways to amend the RTT. First, we might put constraints on which hypotheses are acceptable. For example, Gupta and Belnap 1993 introduce a theory of truth, \(\boldsymbol{T}^c\), based on consistent hypotheses: an hypothesis \(h\) is consistent iff the set \(\{A:h(A) = \mathbf{t}\}\) is a complete consistent set of sentences. The relative merits of \(\boldsymbol{T}^*, \boldsymbol{T}^{\#}\) and \(\boldsymbol{T}^c\) are discussed in Gupta & Belnap 1993, Chapter 6.
Second, we might adopt a more restrictive limit policy than Gupta and Belnap adopt. Recall the question asked in Section 3: How should we set \(S_{\eta}(d)\) when \(\eta\) is a limit ordinal? We gave a partial answer: any object that is stably true [false] up to that stage should be true [false] at that stage. We also noted that for an object \(d \in D\) that does not stabilize up to the stage \(\eta\), Gupta and Belnap 1993 allow us to set \(S_{\eta}(d)\) as either \(\mathbf{t}\) or \(\mathbf{f}\). In a similar context, Herzberger 1982a and 1982b assigns the value \(\mathbf{f}\) to the unstable objects. And Gupta originally suggested, in Gupta 1982, that unstable elements receive whatever value they received at the initial hypothesis \(S_0\).
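The three limit policies just mentioned differ only in what they assign to unstable elements. A minimal Python sketch of the contrast (the function and its argument names are our own illustrative framing, not notation from the literature):

```python
def limit_value(policy, stabilized, stable_val, initial_val, choice):
    """Value assigned to an object d at a limit stage.
    stabilized: whether d has stabilized below the limit;
    stable_val: d's stable value, if it has one;
    initial_val: d's value at the initial hypothesis S_0;
    choice: an arbitrary value, used only by the liberal policy."""
    if stabilized:                      # all three policies agree here
        return stable_val
    if policy == "gupta-belnap-1993":   # liberal: either value is allowed
        return choice
    if policy == "herzberger-1982":     # unstable objects are made false
        return "f"
    if policy == "gupta-1982":          # unstable objects revert to S_0
        return initial_val

# An unstable object (e.g. the liar) under two of the policies:
print(limit_value("herzberger-1982", False, None, "t", "t"))  # f
print(limit_value("gupta-1982", False, None, "t", "f"))       # t
```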
These first two ways of amending the RTT both, in effect, restrict the notion of a revision sequence, by putting constraints on which of our revision sequences really count as acceptable revision sequences. The constraints are, in some sense, local: the first constraint is achieved by putting restrictions on which hypotheses can be used, and the second constraint is achieved by putting restrictions on what happens at limit ordinals. A third option would be to put more global constraints on which putative revision sequences count as acceptable. Yaqūb 1993 suggests, in effect, a limit rule whereby acceptable verdicts on unstable sentences at some limit stage \(\eta\) depend on verdicts rendered at other limit stages. Yaqūb argues that these constraints allow us to avoid certain “artifacts”. For example, suppose that a ground model \(M = \langle D,I\rangle\) has two independent liars, by having two names \(\alpha\) and \(\beta\), where \(I(\alpha) = \neg \boldsymbol{T}\alpha\) and \(I(\beta) = \neg \boldsymbol{T}\beta\). Yaqūb argues that it is a mere “artifact” of the revision semantics, naively presented, that there are revision sequences in which the sentence \(\neg \boldsymbol{T}\alpha \equiv \neg \boldsymbol{T}\beta\) is stably true, since the two liars are independent. His global constraints are developed to rule out such sequences. (See Chapuis 1996 for further discussion.)
The first and the second way of amending the RTT are in some sense put together by Campbell-Moore (2019). Here, the notion of stability for objects is extended to sets of hypotheses: A set \(H\) of hypotheses is stable in a sequence \(S\) of hypotheses if for some ordinal \(\theta\) all hypotheses \(S_{\zeta}\), with \(\zeta \ge \theta\), belong to \(H\). With this, we can introduce the notion of a \(P\)-revision sequence: If \(P\) is a class of sets of hypotheses, a sequence \(S\) is a \(P\)-revision sequence just in case, at every limit ordinal \(\eta\), if a set of hypotheses \(H\) belongs to \(P\) and is stable in \(S|_{\eta}\), then \(S_{\eta}\) belongs to \(H\). It can be shown that, for a suitable choice of \(P\), all limit stages of \(P\)-revision sequences are maximal consistent hypotheses.
As indicated in our discussion, in Section 4, of the ‘iff’ in the T-biconditionals, Gupta and Belnap present the RTT as a special case of a revision theory of circularly defined concepts. To reconsider the example from Section 4: suppose that \(L\) is a language with a unary predicate \(F\) and a binary predicate \(R\), and consider a new concept expressed by a predicate \(G\), introduced through a definition, \(D\), like this:
\[Gx =_{df} A(x,G)\]where \(A(x,G)\) is the formula
\[\forall y(Ryx \supset Fx) \vee \exists y(Ryx \amp Gx).\]In this context, a ground model is a classical model \(M = \langle D,I\rangle\) of the language \(L\): we start with a domain of discourse, \(D\), and an interpretation of the predicate \(F\) and the relation symbol \(R\). We would like to extend \(M\) to an interpretation of the language \(L + G\). So, in this context, an hypothesis will be thought of as an hypothesized extension for the newly introduced concept \(G\). Formally, a hypothesis is simply a function \(h:D \rightarrow \{\mathbf{t}, \mathbf{f}\}\). Given a hypothesis \(h\), we take \(M+h\) to be the classical model \(M+h = \langle D,I'\rangle\), where \(I'\) interprets \(F\) and \(R\) in the same way as \(I\), and where \(I'(G) = h\). Given a hypothesized interpretation \(h\) of \(G\), we generate a new interpretation of \(G\) as follows: an object \(d \in D\) is in the new extension of \(G\) just in case the defining formula \(A(x,G)\) is true of \(d\) in the model \(M+h\). Formally, we use the ground model \(M\) and the definition \(D\) to define a revision rule, \(\delta_{D,M}\), mapping hypotheses to hypotheses, i.e., hypothetical interpretations of \(G\) to hypothetical interpretations of \(G\). In particular, for any formula \(B\) with one free variable \(x\), and \(d \in D\), we can define the truth value \(\textit{Val}_{M+h,d}(B)\) in the standard way. Then,
\[\delta_{D,M}(h)(d) = \textit{Val}_{M+h,d}(A)\]Given a revision rule \(\delta_{D,M}\), we can generalize the notion of a revision sequence, which is now a sequence of hypothetical extensions of \(G\) rather than \(\boldsymbol{T}\). We can generalize the notion of a sentence \(B\) being stably true, nearly stably true, etc., relative to a revision sequence. Gupta and Belnap introduce the systems \(\mathbf{S}^*\) and \(\mathbf{S}^{\#}\), analogous to \(\boldsymbol{T}^*\) and \(\boldsymbol{T}^{\#}\), as follows:[11]
Definition 5.1.
- A sentence \(B\) is valid on the definition \(D\) in the ground model \(M\) in the system \(\mathbf{S}^*\) (notation \(M \vDash_{*,D} B)\) iff \(B\) is stably true relative to each revision sequence for the revision rule \(\delta_{D,M}\).
- A sentence \(B\) is valid on the definition \(D\) in the ground model \(M\) in the system \(\mathbf{S}^{\#}\) (notation \(M \vDash_{\#,D} B)\) iff \(B\) is nearly stably true relative to each revision sequence for the revision rule \(\delta_{D,M}\).
- A sentence \(B\) is valid on the definition \(D\) in the system \(\mathbf{S}^*\) (notation \(\vDash_{*,D} B)\) iff for all classical ground models \(M\), we have \(M \vDash_{*,D} B\).
- A sentence \(B\) is valid on the definition \(D\) in the system \(\mathbf{S}^{\#}\) (notation \(\vDash_{\#,D} B)\) iff for all classical ground models \(M\), we have \(M \vDash_{\#,D} B\).
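For a finite ground model, the revision rule \(\delta_{D,M}\) is mechanically computable. The following Python sketch implements the defining formula of \(G\) exactly as written above, for a small hypothetical domain and hypothetical interpretations of \(F\) and \(R\); it illustrates how some objects receive categorical verdicts while others remain hypothesis-dependent.

```python
# Revision rule delta_{D,M} for the circular definition
#   Gx =df (for all y)(Ryx -> Fx) v (exists y)(Ryx & Gx),
# implemented literally for a small hypothetical ground model.

D = [0, 1, 2, 3]
F = {0: True, 1: False, 2: True, 3: False}   # hypothetical extension of F
R = {(0, 1), (0, 2), (1, 3)}                 # (y, x) in R means Ryx

def delta(h):
    """Revise a hypothesized extension h of G."""
    new = {}
    for x in D:
        has_pred = any((y, x) in R for y in D)
        first = (not has_pred) or F[x]   # (for all y)(Ryx -> Fx): vacuous, or Fx
        second = has_pred and h[x]       # (exists y)(Ryx & Gx)
        new[x] = first or second
    return new

h_f = {x: False for x in D}   # start with the empty hypothesis
h_t = {x: True for x in D}    # start with the full hypothesis
for _ in range(5):
    h_f, h_t = delta(h_f), delta(h_t)

print(h_f)  # {0: True, 1: False, 2: True, 3: False}
print(h_t)  # {0: True, 1: True, 2: True, 3: True}
```

Both sequences stabilize immediately: the objects 0 and 2 come out \(G\) on every hypothesis (a categorical verdict), while 1 and 3 behave like truth-tellers, retaining whatever verdict the initial hypothesis gave them.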
One of Gupta and Belnap’s principal open questions is whether there is a complete calculus for these systems: that is, whether, for each definition \(D\), either of the following two sets of sentences is recursively axiomatizable: \(\{B:\vDash_{*,D} B\}\) and \(\{B:\vDash_{\#,D} B\}\). Kremer 1993 proves that the answer is no: he shows that there is a definition \(D\) such that each of these sets of sentences is of complexity at least \(\Pi^{1}_2\), thereby putting a lower limit on the complexity of \(\mathbf{S}^*\) and \(\mathbf{S}^{\#}\). (Antonelli 1994a and 2002 shows that this is also an upper limit.)
Kremer’s proof exploits an intimate relationship between circular definitions understood revision-theoretically and circular definitions understood as inductive definitions: the theory of inductive definitions has been quite well understood for some time. In particular, Kremer proves that every inductively defined concept can be revision-theoretically defined. The expressive power and other aspects of the revision-theoretic treatment of circular definitions are the topic of much interesting work: see Welch 2001, Löwe 2001, Löwe and Welch 2001, and Kühnberger et al. 2005.
Alongside Kremer’s limitative result there is the positive observation that, for some semantic systems of definitions based on restricted kinds of revision sequences, sound and complete calculi do exist. For instance, Gupta and Belnap give some examples of calculi and revision-theoretic systems which use only finite revision sequences. Further investigation of proof-theoretic calculi capturing revision-theoretic semantic systems is carried out in Bruni 2013, Standefer 2016, Bruni 2019, and Fjellstad 2020.
The RTT is a clear example of a semantically motivated theory of truth. Quite a different tradition seeks to give a satisfying axiomatic theory of truth. Granted, we cannot retain all of classical logic and all of our intuitive principles regarding truth, especially if we allow vicious self-reference. But maybe we can arrive at satisfying axiom systems for truth that, for example, maintain consistency and classical logic, but give up only a little bit when it comes to our intuitive principles concerning truth, such as the T-biconditionals (interpreted classically); or maintain consistency and all of the T-biconditionals, but give up only a little bit of classical logic. Halbach 2011 comprehensively studies such axiomatic theories (mainly those that retain classical logic), and Horsten 2011 is in the same tradition. Both Chapter 14 of Halbach 2011 and Chapter 8 of Horsten 2011 study the relationship between the Friedman-Sheard theory FS and the revision semantics, with some interesting results. For more work on axiomatic systems and the RTT, see Horsten et al. 2012.
Field 2008 makes an interesting contribution to axiomatic theorizingabout truth, even though most of the positive work in the bookconsists of model building and is therefore semantics. In particular,Field is interested in producing a theory as close to classical logicas possible, which retains all T-biconditionals (the conditionalitself will be nonclassical) and which at the same time can express,in some sense, the claim that such and such a sentence is defective.Field uses tools from multivalued logic, fixed-point semantics, andrevision theory to build models showing, in effect, that a veryattractive axiomatic system is consistent. Field’s constructionis an intricate interplay between using fixed-point constructions forsuccessively interpreting T, and revision sequences for successivelyinterpreting the nonclassical conditional — the finalinterpretation being determined by a sort of super-revision-theoreticprocess.
The connection between revision and Field’s theory is exploredfurther in Standefer 2015b and in Gupta and Standefer 2017.
Given Gupta and Belnap’s general revision-theoretic treatment of circular definitions — of which their treatment of truth is a special case — one would expect revision-theoretic ideas to be applied to other concepts. Antonelli 1994b applies these ideas to non-well-founded sets: a non-well-founded set \(X\) can be thought of as circular, since, for some \(X_0 ,\ldots ,X_n\) we have \(X \in X_0 \in \ldots \in X_n \in X\). Chapuis 2003 applies revision-theoretic ideas to rational decision making. This connection is further developed by Bruni 2015 and by Bruni and Sillari 2018. For a discussion of revision theory and abstract objects see Wang 2011. For a discussion of revision theory and vagueness, see Asmus 2013.
Standefer (2015a) studies the connection between the circular definitions of revision theory and a particular modal logic RT (for “Revision Theory”). Campbell-Moore et al. 2019 and Campbell-Moore 2021 use revision sequences to model probabilities and credences, respectively. Cook 2019 employs a revision-theoretic analysis to find a new possible solution of Benardete’s version of the Zeno paradox.
In recent times, there has been increasing interest in bridging thegap between classic debates on the nature of truth —deflationism, the correspondence theory, minimalism, pragmatism, andso on — and formal work on truth, motivated by the liar’sparadox. The RTT is tied to pro-sententialism by Belnap 2006;deflationism, by Yaqūb 2008; and minimalism, by Restall 2005.
We must also mention Gupta 2006. In this work, Gupta argues that an experience provides the experiencer, not with a straightforward entitlement to a proposition, but rather with a hypothetical entitlement: as explicated in Berker 2011, if subject S has experience \(e\) and is entitled to hold view \(v\) (where S’s view is the totality of S’s concepts, conceptions, and beliefs), then S is entitled to believe a certain class of perceptual judgements, \(\Gamma(v)\). (Berker uses “propositions” instead of “perceptual judgements” in his formulation.) But this generates a problem: how is S entitled to hold a view? There seems to be a circular interdependence between entitlements to views and entitlements to perceptual judgements. Here, Gupta appeals to a general form of revision theory — generalizing beyond both the revision theory of truth and the revision theory of circularly defined concepts (Section 5.4, above) — to give an account of how “hypothetical perceptual entitlements could yield categorical entitlements” (Berker 2011).
We close with an open question about \(\mathbf{T}^*\) and \(\mathbf{T}^{\#}\). Recall Definition 3.11, above, which defines when a sentence \(A\) of a truth language \(L\) is valid in the ground model \(M\) by \(\mathbf{T}^*\) or by \(\mathbf{T}^{\#}\). We will say that \(A\) is valid by \(\mathbf{T}^*\) [alternatively, by \(\mathbf{T}^{\#}\)] iff \(A\) is valid in the ground model \(M\) by \(\mathbf{T}^*\) [alternatively, by \(\mathbf{T}^{\#}\)] for every ground model \(M\). Our open question is this: What is the complexity of the set of sentences valid by \(\mathbf{T}^*\) [\(\mathbf{T}^{\#}\)]?
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054