Proof-theoretic semantics is an alternative to truth-condition semantics. It is based on the fundamental assumption that the central notion in terms of which meanings are assigned to certain expressions of our language, in particular to logical constants, is that of proof rather than truth. In this sense proof-theoretic semantics is semantics in terms of proof. Proof-theoretic semantics also means the semantics of proofs, i.e., the semantics of entities which describe how we arrive at certain assertions given certain assumptions. Both aspects of proof-theoretic semantics can be intertwined, i.e., the semantics of proofs is itself often given in terms of proofs.
Proof-theoretic semantics has several roots, the most specific one being Gentzen’s remarks that the introduction rules in his calculus of natural deduction define the meanings of logical constants, while the elimination rules can be obtained as a consequence of this definition (see section 2.2.1). More broadly, it belongs to what Prawitz called general proof theory (see section 1.1). Even more broadly, it is part of the tradition according to which the meaning of a term should be explained by reference to the way it is used in our language.
Within philosophy, proof-theoretic semantics has mostly figured under the heading “theory of meaning”. This terminology follows Dummett, who claimed that the theory of meaning is the basis of theoretical philosophy, a view which he attributed to Frege. The term “proof-theoretic semantics” was proposed by Schroeder-Heister (1991; used already in the 1980s in lectures in Stockholm) in order not to leave the term “semantics” to denotationalism alone—after all, “semantics” is the standard term for investigations dealing with the meaning of linguistic expressions. Furthermore, unlike “theory of meaning”, the term “proof-theoretic semantics” covers philosophical and technical aspects alike. In 1999, the first conference with this title took place in Tübingen, the second and third in 2013 and 2019, respectively. Many others have followed at various places. The first textbook with this title appeared in 2015 (Francez 2015).
This entry also includes supplementary documents that are linked into the text.
The term “general proof theory” was coined by Prawitz. In general proof theory, “proofs are studied in their own right in the hope of understanding their nature”, in contradistinction to Hilbert-style “reductive proof theory”, which is the “attempt to analyze the proofs of mathematical theories with the intention of reducing them to some more elementary part of mathematics such as finitistic or constructive mathematics” (Prawitz, 1972, p. 123). In a similar way, Kreisel (1971) asks for a re-orientation of proof theory. He wants to explain “recent work in proof theory from a neglected point of view. Proofs and their representations by formal derivations are treated as principal objects of study, not as mere tools for analyzing the consequence relation.” (Kreisel, 1971, p. 109) Whereas Kreisel focuses on the dichotomy between a theory of proofs and a theory of provability, Prawitz concentrates on the different goals proof theory may pursue. However, both stress the necessity of studying proofs as fundamental entities by means of which we acquire demonstrative (especially mathematical) knowledge. This means in particular that proofs are epistemic entities which should not be conflated with formal proofs or derivations. They are rather what derivations denote when they are considered to be representations of arguments. (However, in the following we often use “proof” synonymously with “derivation”, leaving it to the reader to determine whether formal proofs or proofs as epistemic entities are meant.) In discussing Prawitz’s (1971) survey, Kreisel (1971, p. 111) explicitly speaks of a “mapping” between derivations and mental acts, and considers it a task of proof theory to elucidate this mapping, including the investigation of the identity of proofs, a topic that Prawitz and Martin-Löf had put on the proof-theoretic agenda.
This means that in general proof theory we are not solely interested in whether B follows from A, but in the way by means of which we arrive at B starting from A. In this sense general proof theory is intensional and epistemological in character, whereas model theory, which is interested in the consequence relation and not in the way of establishing it, is extensional and metaphysical.
Proof-theoretic semantics is inherently inferential, as it is inferential activity which manifests itself in proofs. It thus belongs to inferentialism (a term coined by Brandom; see his 1994; 2000), according to which inferences and the rules of inference establish the meaning of expressions, in contradistinction to denotationalism, according to which denotations are the primary sort of meaning. Inferentialism and the ‘meaning-as-use’ view of semantics is the broad philosophical framework of proof-theoretic semantics. This general philosophical and semantical perspective merged with constructive views which originated in the philosophy of mathematics, especially in mathematical intuitionism. Most forms of proof-theoretic semantics are intuitionistic in spirit, which means in particular that principles of classical logic such as the law of excluded middle or the double negation law are rejected or at least considered problematic. This is partly due to the fact that the main tool of proof-theoretic semantics, the calculus of natural deduction, is biased towards intuitionistic logic, in the sense that the straightforward formulation of its elimination rules is the intuitionistic one. There classical logic is only available by means of some rule of indirect proof, which, at least to some extent, destroys the symmetry of the reasoning principles (see section 3.5). If one adopts the standpoint of natural deduction, then intuitionistic logic is a natural logical system. Also the BHK (Brouwer-Heyting-Kolmogorov) interpretation of the logical signs plays a significant role. This interpretation is not a unique approach to semantics, but comprises various ideas which are often more informally than formally described. Of particular importance is its functional view of implication, according to which a proof of \(A \rightarrow B\) is a constructive function which, when applied to a proof of A, yields a proof of B.
This functional perspective underlies many conceptions of proof-theoretic semantics, in particular those of Lorenzen, Prawitz and Martin-Löf (see sections 2.1.1, 2.2.2, 2.2.3).
According to Dummett (1975), the logical position of intuitionism corresponds to the philosophical position of anti-realism. The realist view of a recognition-independent reality is the metaphysical counterpart of the view that all sentences are either true or false independent of our means of recognizing it. Major parts of proof-theoretic semantics follow Dummett and his association with anti-realism.
Gentzen’s calculus of natural deduction and its rendering by Prawitz is the background to most approaches to proof-theoretic semantics. Natural deduction is based on at least three central ideas:
In Gentzen’s natural deduction system for first-order logic, derivations are written in tree form and based on the well-known rules. For example, implication has the following introduction and elimination rules
\[ \begin{prooftree}\AxiomC{$[A]$} \noLine \UnaryInfC{$B$} \RightLabel{$\,\to$I} \UnaryInfC{$A\to B$} \end{prooftree} \hspace{6em} \begin{prooftree}\AxiomC{$A\to B$} \AxiomC{$A$} \RightLabel{$\,\to$E} \BinaryInfC{$B$} \end{prooftree} \]
where the brackets indicate the possibility to discharge occurrences of the assumption A. The open assumptions of a derivation are those assumptions on which the end-formula depends. A derivation is called closed if it has no open assumption; otherwise it is called open. If we deal with quantifiers, we have to consider open individual variables (sometimes called “parameters”), too. Metalogical features crucial for proof-theoretic semantics, and for the first time systematically investigated and published by Prawitz (1965), include:
Reduction: For every detour consisting of an introduction immediately followed by an elimination there is a reduction step removing this detour.
Normalization: By successive applications of reductions, derivations can be transformed into normal forms which contain no detours.
For implication the standard reduction step removing detours is the following:
\[ \begin{prooftree}\AxiomC{$[A]$} \noLine\UnaryInfC{$\vdots$} \noLine\UnaryInfC{$B$} \UnaryInfC{$A\to B$} \AxiomC{$|$} \noLine\UnaryInfC{$A$} \BinaryInfC{$B$} \end{prooftree} \hspace{1em} \text{reduces to} \hspace{1em} \begin{prooftree}\AxiomC{$|$} \noLine\UnaryInfC{$A$} \noLine\UnaryInfC{$\vdots$} \noLine\UnaryInfC{$B$} \end{prooftree} \]
A simple but very important corollary of normalization is the following: Every closed derivation in intuitionistic logic can be reduced to a derivation using an introduction rule in the last step. We also say that intuitionistic natural deduction satisfies the “introduction form property”. In proof-theoretic semantics this result figures prominently under the heading “fundamental assumption” (Dummett, 1991, p. 254). The “fundamental assumption” is a typical example of a philosophical re-interpretation of a technical proof-theoretic result.
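Via the Curry-Howard correspondence, an →I step immediately followed by an →E step is a β-redex on proof terms, and the reduction step above is β-reduction. The following minimal sketch illustrates this reading; the term representation and function names are illustrative, not part of the entry:

```python
from dataclasses import dataclass

# Proof terms for the implicational fragment:
# Var(x) is an assumption, Lam encodes ->I (discharging x), App encodes ->E.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def subst(t, x, s):
    """Replace free occurrences of Var(x) in t by s (variable capture ignored)."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def reduce_detour(t):
    """Remove one detour: ->I immediately followed by ->E (a beta-redex)."""
    if isinstance(t, App) and isinstance(t.fun, Lam):
        return subst(t.fun.body, t.fun.var, t.arg)
    return t

# The detour (lambda x. x) applied to p reduces to the direct proof p.
print(reduce_detour(App(Lam("x", Var("x")), Var("p"))))  # Var(name='p')
```

Normalization then amounts to applying such reductions repeatedly until no redex remains.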
Further Reading:
For the general orientation of proof-theoretic semantics, see the special issue of Synthese edited by Kahle and Schroeder-Heister (2006), the reader edited by Piecha and Schroeder-Heister (2016b), the textbook by Francez (2015), and the surveys by Wansing (2000) and by Schroeder-Heister (2016).
For the philosophical position and development of proof theory, see the entry on Hilbert’s program, the entry on the development of proof theory, as well as Prawitz (1971).
For intuitionism, see the entry on intuitionistic logic, the entry on intuitionism in the philosophy of mathematics, and the entry on the development of intuitionistic logic.
For anti-realism, see the entry on challenges to metaphysical realism as well as Tennant (1987; 1997) and Tranchini (2010; 2012a).
For Gentzen-style proof theory and the theory of natural deduction, see the entry on natural deduction systems in logic as well as, in addition to Gentzen’s (1934/35) original presentation, Jaśkowski’s (1934) theory of suppositions, Prawitz’s (1965) classical monograph, Tennant (1978; 2017), Troelstra and Schwichtenberg (2000), Negri and von Plato (2001), Mancosu, Galvan and Zach (2021), and Andrzej Indrzejczak’s entry on natural deduction in the Internet Encyclopedia of Philosophy.
The semantics of implication lies at the heart of proof-theoretic semantics. In contradistinction to classical truth-condition semantics, implication is a logical constant in its own right. It also has the characteristic feature that it is tied to the concept of consequence. It can be viewed as expressing consequence at the sentential level due to modus ponens and to what in Hilbert-style systems is called the deduction theorem, i.e. the equivalence of \(\Gamma ,A \vdash B\) and \(\Gamma \vdash A \rightarrow B\).
A very natural understanding of an implication \(A \rightarrow B\) is reading it as expressing the inference rule which allows one to pass over from A to B. Licensing the step from A to B on the basis of \(A \rightarrow B\) is exactly what modus ponens says. And the deduction theorem can be viewed as the means of establishing a rule: Having shown that B can be deduced from A justifies the rule that from A we may pass over to B. A rule-based semantics of implication along such lines underlies several conceptions of proof-theoretic semantics, notably those by Lorenzen, von Kutschera and Schroeder-Heister.
Lorenzen, in his Introduction to Operative Logics and Mathematics (1955), starts with logic-free (atomic) calculi, which correspond to production systems or grammars. He calls a rule admissible in such a system if it can be added to it without enlarging the set of its derivable atoms. The implication arrow \(\rightarrow\) is interpreted as expressing admissibility. An implication \(A \rightarrow B\) is considered to be valid if, when read as a rule, it is admissible (with respect to the underlying calculus). For iterated implications \((=\) rules) Lorenzen develops a theory of admissibility statements of higher levels. Certain statements such as
\[A \rightarrow A \quad\text{ or }\quad ((A \rightarrow B), (B \rightarrow C)) \rightarrow(A \rightarrow C) \]
hold independently of the underlying calculus. They are called universally admissible [“allgemeinzulässig”], and constitute a system of positive implicational logic. In a related way, laws for universal quantification \(\forall\) are justified using admissibility statements for rules with schematic variables.
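For a finite atomic calculus, Lorenzen’s notion of admissibility can be checked mechanically: compute the set of derivable atoms as a least fixed point, once with and once without the candidate rule, and compare. The following is a small sketch; the toy calculus and function names are invented for illustration:

```python
def derivable(axioms, rules):
    """Least set of atoms closed under rules of the form (premises, conclusion)."""
    atoms = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= atoms and conclusion not in atoms:
                atoms.add(conclusion)
                changed = True
    return atoms

def admissible(rule, axioms, rules):
    """A rule is admissible iff adding it yields no new derivable atoms."""
    return derivable(axioms, rules + [rule]) == derivable(axioms, rules)

# Toy calculus: axiom 'a' and the rule a => b, so {'a', 'b'} is derivable.
axioms, rules = {"a"}, [(("a",), "b")]
print(admissible((("b",), "a"), axioms, rules))  # True: derives nothing new
print(admissible(((), "c"), axioms, rules))      # False: would derive 'c'
```

The rule b ⇒ a is admissible here without being derivable in the calculus, which is exactly the gap Lorenzen’s interpretation of the arrow exploits.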
For the justification of the laws for the logical constants \(\land , \lor , \exists\) and \(\bot\), Lorenzen uses an inversion principle (a term he coined). In a very simplified form, without taking variables in rules into account, the inversion principle says that everything that can be obtained from every defining condition of A can be obtained from A itself. For example, in the case of disjunction, let A and B each be a defining condition of \(A\lor B\), as expressed by the primitive rules \(A \rightarrow A\lor B\) and \(B \rightarrow A\lor B\). Then the inversion principle says that \(A\lor B \rightarrow C\) is admissible assuming \(A \rightarrow C\) and \(B \rightarrow C\), which justifies the elimination rule for disjunction. The remaining connectives are dealt with in a similar way. In the case of \(\bot\), the absurdity rule \(\bot \rightarrow A\) is obtained from the fact that there is no defining condition for \(\bot\).
In what he calls “Gentzen semantics”, von Kutschera (1968) gives, as Lorenzen does, a semantics of logically complex implication-like statements \(A_1,\ldots,A_n \rightarrow B\) with respect to calculi K which govern the reasoning with atomic sentences. The fundamental difference to Lorenzen is the fact that \(A_1,\ldots,A_n \rightarrow B\) now expresses a derivability rather than an admissibility statement.
In order to turn this into a semantics of the logical constants of propositional logic, von Kutschera argues as follows: When giving up bivalence, we can no longer use classical truth-value assignments to atomic formulas. Instead we can use calculi which prove or refute atomic sentences. Moreover, since calculi not only generate proofs or refutations but arbitrary derivability relations, the idea is to start directly with derivability in an atomic system and extend it with rules that characterize the logical connectives. For that von Kutschera gives a sequent calculus with rules for the introduction of n-ary propositional connectives in the succedent and antecedent, yielding a sequent system for generalized propositional connectives. Von Kutschera then goes on to show that the generalized connectives so defined can all be expressed by the standard connectives of intuitionistic logic (conjunction, disjunction, implication, absurdity).
Further Reading:
For extensions of expressive completeness in the style of von Kutschera: Wansing (1993a).
When developing a general schema for inference rules for arbitrary logical constants, Schroeder-Heister (1984) proposed, as a semantical requirement, that a logically complex formula expresses the content or common content of systems of rules. A rule R is either a formula A or has the form \(R_1,\ldots,R_n \Rightarrow A\), where \(R_1,\ldots,R_n\) are rules. These so-called “higher-level rules” generalize the idea that rules may discharge assumption formulas to the case where they may discharge assumption rules (i.e., rules used as assumptions). For the standard connectives this means that \(A\land B\) expresses the content of the pair \((A,B)\); \(A \rightarrow B\) expresses the content of the rule \(A \Rightarrow B\); \(A\lor B\) expresses the common content of A and B; and absurdity \(\bot\) expresses the common content of the empty family of rule systems. For arbitrary n-ary propositional connectives this leads to a natural deduction system with generalized introduction and elimination rules. These general connectives are shown to be definable in terms of the standard ones, which establishes the expressive completeness of the standard intuitionistic connectives.
In his Investigations into Logical Deduction, Gentzen makes some (frequently quoted) programmatic remarks on the semantic relationship between introduction and elimination inferences in natural deduction.
The introductions represent, as it were, the ‘definitions’ of the symbols concerned, and the eliminations are no more, in the final analysis, than the consequences of these definitions. This fact may be expressed as follows: In eliminating a symbol, we may use the formula with whose terminal symbol we are dealing only ‘in the sense afforded it by the introduction of that symbol’. (Gentzen, 1934/35, p. 80)
This cannot mean, of course, that the elimination rules are deducible from the introduction rules in the literal sense of the word; in fact, they are not. It can only mean that they can be justified by them in some way.
By making these ideas more precise it should be possible to display the E-inferences as unique functions of their corresponding I-inferences, on the basis of certain requirements. (ibid., p. 81)
So the idea underlying Gentzen’s programme is that we have “definitions” in the form of introduction rules and some sort of semantic reasoning which, by using “certain requirements”, validates the elimination rules.
By adopting Lorenzen’s term and adapting its underlying idea (see section 2.1.1) to the context of natural deduction, Prawitz (1965) formulated an “inversion principle” to make Gentzen’s remarks more precise:
Let \(\alpha\) be an application of an elimination rule that has B as consequence. Then, deductions that satisfy the sufficient condition […] for deriving the major premiss of \(\alpha\), when combined with deductions of the minor premisses of \(\alpha\) (if any), already “contain” a deduction of B; the deduction of B is thus obtainable directly from the given deductions without the addition of \(\alpha\). (p. 33)
Here the sufficient conditions are given by the premisses of the corresponding introduction rules. Thus the inversion principle says that a derivation of the conclusion of an elimination rule can be obtained without an application of the elimination rule if its major premiss has been derived using an introduction rule in the last step, which means that a combination
\[\begin{prooftree}\AxiomC{$\vdots$} \RightLabel{I-inference} \UnaryInfC{$A$} \AxiomC{$\{\mathcal{D}_i\}$} \RightLabel{E-inference} \BinaryInfC{$B$} \end{prooftree}\]
of steps, where \(\{\mathcal{D}_i\}\) stands for a (possibly empty) list of deductions of minor premisses, can be avoided.
The relationship between introduction and elimination rules is often described as “harmony” (Dummett 1973, pp. 396–397), or as governed by a “principle of harmony” (Tennant, 1978, p. 74). This terminology is not uniform and sometimes not even fully clear. It essentially expresses what is also meant by “inversion”. Even if “harmony” is a term which suggests a symmetric relationship, it is frequently understood as expressing a conception based on introduction rules as, e.g., in Read’s (2010) “general elimination harmony” (although occasionally one includes elimination-based conceptions as well). Sometimes harmony is supposed to mean that connectives are strongest or weakest in a certain sense given their introduction or their elimination rules. This idea underlies Tennant’s (1978) harmony principle, and also Popper’s and Koslow’s structural characterizations (see section 2.4). The specific relationship between introduction and elimination rules as formulated in an inversion principle excludes alleged inferential definitions such as that of the connective tonk, which combines an introduction rule for disjunction with an elimination rule for conjunction, and which has given rise to a still ongoing debate on the format of inferential definitions. Formal characterizations of harmony have used the translation of inference rules into second-order propositional logic (Girard’s system F) and have become a central topic of intensional proof-theoretic semantics (Tranchini, 2023).
Further Reading:
For logical connectives in general, see the entry on sentence connectives in formal logic.
For inversion principles, see Schroeder-Heister (2007).
For variants of proof-theoretic harmony, see Francez (2015), Schroeder-Heister (2016) and Tranchini (2023).
Proof-theoretic validity is the leading approach in proof-theoretic semantics. As a technical concept it was developed by Prawitz (1971; 1973; 1974), by turning a proof-theoretic validity notion based on ideas by Tait (1967), originally used to prove strong normalization, into a semantical concept. Dummett provided much philosophical underpinning to this notion (see Dummett, 1991). The objects which are primarily valid are proofs as representations of arguments. In a secondary sense, single rules can be valid if they lead from valid proofs to valid proofs. In this sense, validity is a global rather than a local notion. It applies to arbitrary derivations over a given atomic system, which defines derivability for atoms. Calling a proof canonical if it uses an introduction rule in the last step, proof-theoretic validity is based on the following three ideas:
Ad 1: The definition of validity is based on Gentzen’s idea that introduction rules are ‘self-justifying’ and give the logical constants their meaning. This self-justifying feature is only used for closed proofs, which are considered primary over open ones.
Ad 2: Noncanonical proofs are justified by reducing them to canonical ones. Thus reduction procedures (detour reductions) as used in normalization proofs play a crucial role. As they justify arguments, they are also called “justifications” by Prawitz. This definition again only applies to closed proofs, corresponding to the introduction form property of closed normal derivations in natural deduction (see section 1.3).
Ad 3: Open proofs are justified by considering their closed instances. These closed instances are obtained by replacing their open assumptions with closed proofs of them, and their open variables with closed terms. For example, a proof of B from A is considered valid if every closed proof, obtained by replacing the open assumption A with a closed valid proof of A, is valid. In this way, open assumptions are considered to be placeholders for closed proofs, which is why we may speak of a substitutional interpretation of open proofs.
This yields the following definition of proof-theoretic validity:
Formally, this definition has to be relativized to the atomic system considered (see section 2.6), and to a set of available justifications (proof reductions). Furthermore, proofs are here understood as candidates of valid proofs (“arguments” in Prawitz’s terminology), which means that the rules from which they are composed are not fixed. They look like proof trees, but their individual steps can have an arbitrary (finite) number of premisses and can eliminate arbitrary assumptions. The definition of validity singles out those proof structures which are ‘real’ proofs on the basis of the given reduction procedures.
Validity with respect to every choice of an atomic system can be viewed as a generalized notion of logical validity. In fact, if we consider the standard reductions of intuitionistic logic, then all derivations in intuitionistic logic are valid independently of the atomic system considered. This is semantical correctness. We may ask if the converse holds, viz. whether for any proof that is valid for every atomic system there is a corresponding derivation in intuitionistic logic. That intuitionistic logic is complete in this sense is known as Prawitz’s conjecture (see Prawitz, 1973; 2013). There are certain doubts concerning the validity of this conjecture for systems that go beyond implicational logic. In any case it will depend on the precise formulation of the notion of validity, in particular on its handling of atomic systems (see section 2.8).
For a more formal definition and detailed examples demonstrating validity, as well as some remarks on Prawitz’s conjecture, see the
Supplement on Examples of Proof-theoretic Validity.
Further Reading:
For a detailed exposition of further developments of Prawitz’s theory of proof-theoretic validity, in particular his theory of grounds, see Piccolomini d’Aragona (2023).
For a computational interpretation of proof-theoretic validity as a method for tactical proof, see Gheorghiu and Pym (2022, other internet resources).
Martin-Löf’s type theory (Martin-Löf, 1984) is a leading approach in constructive logic and mathematics. Philosophically, it shares with Prawitz the three fundamental assumptions of standard proof-theoretic semantics, mentioned in section 2.2.2: the priority of closed canonical proofs, the reduction of closed non-canonical proofs to canonical ones, and the substitutional view of open proofs. However, Martin-Löf’s type theory has at least two characteristic features which go beyond other approaches in proof-theoretic semantics:
The first idea goes back to the Curry-Howard correspondence (see de Groote, 1995; Sørensen and Urzyczyn, 2006), according to which the fact that a formula A has a certain proof can be codified as the fact that a certain term t is of type A, whereby the formula A is identified with the type A. This can be formalized in a calculus for type assignment, whose statements are of the form \(t {:} A\). A proof of \(t {:} A\) in this system can be read as showing that t is a proof of A. Martin-Löf (1995; 1998) has put this into a philosophical perspective by distinguishing this two-fold sense of proof in the following way. First we have proofs of statements of the form \(t {:} A\). These statements are called judgements; their proofs are called demonstrations. Within such judgements the term t represents a proof of the proposition A. A proof in the latter sense is also called a proof object. When demonstrating a judgement \(t {:} A\), we demonstrate that t is a proof (object) for the proposition A. Within this two-layer system the demonstration layer is the layer of argumentation. Unlike proof objects, demonstrations have epistemic significance; their judgements carry assertoric force. The proof layer is the layer at which meanings are explained: The meaning of a proposition A is explained by telling what counts as a proof (object) for A. The distinction made between canonical and non-canonical proofs is a distinction at the propositional and not at the judgemental layer. This implies a certain explicitness requirement. When I have proved something, I must not only have a justification for my proof at my disposal as in Prawitz’s notion of validity, but at the same time have to be certain that this justification fulfills its purpose. This certainty is guaranteed by a demonstration. Mathematically, this two-fold sense of proof develops its real power only when types may depend on terms. Dependent types are a basic ingredient of Martin-Löf’s type theory and related approaches.
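The judgement t : A has a direct programming-language reading: a well-typed program t is a proof object of the proposition A, with function types playing the role of implication. A hedged sketch of this reading in Python’s type notation, with illustrative names not taken from the entry:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# A proof object for A -> (B -> A): the constant (K) combinator.
def k(a: A) -> Callable[[B], A]:
    return lambda b: a

# ->E (modus ponens): from a proof of A -> B and a proof of A, get a proof of B.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

print(k(3)(True))                         # 3
print(modus_ponens(lambda n: n + 1, 41))  # 42
```

Here the type annotation is the proposition and the function body is the proof object; a type checker plays the role of the demonstration layer, verifying that t indeed inhabits A.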
The second idea makes Martin-Löf’s approach strongly differ from all other definitions of proof-theoretic validity. The crucial difference, for example, to Prawitz’s procedure is that it is not metalinguistic in character, where “metalinguistic” means that propositions and candidates of proofs are specified first and then, by means of a definition in the metalanguage, it is fixed which of them are valid and which are not. Rather, propositions and proofs come into play only in the context of demonstrations. For example, if we assume that something is a proof of an implication \(A \rightarrow B\), we need not necessarily show that both A and B are well-formed propositions outright, but, in addition to knowing that A is a proposition, we only need to know that B is a proposition provided that A has been proved. Being a proposition is expressed by a specific form of judgement, which is established in the same system of demonstration which is used to establish that a proof of a proposition has been achieved.
In Martin-Löf’s theory, proof-theoretic semantics receives a strongly ontological component. A recent debate deals with the question of whether proof objects have a purely ontological status or whether they codify knowledge, even if they are not epistemic acts themselves. More recently, Martin-Löf has embedded his approach into a theory of interaction between assertions, challenges and obligations (Martin-Löf 2019), which gives his proof-theoretic semantics a pragmatic aspect and also relates it to dialogical semantics (see section 3.9).
Martin-Löf’s type theory has found its most significant mathematical application in homotopy theory, leading to homotopy type theory (HoTT) and the univalent foundations programme (The Univalent Foundations Program 2013). The latter is often (sometimes controversially) considered a novel foundational approach to mathematics in the intuitionistic spirit, as an alternative to classical set-theoretic approaches.
Further Reading:
For the history and philosophy of Martin-Löf’s type theory see Sommaruga (2000). For type theory in general, for homotopy type theory, and for the univalent foundations programme, see the entry on intuitionistic type theory and the entry on type theory.
Proof-theoretic semantics normally focuses on logical constants. This focus is practically never questioned, apparently because it is considered so obvious. In proof theory, little attention has been paid to atomic systems, although there has been Lorenzen’s early work (see section 2.1.1), where the justification of logical rules is embedded in a theory of arbitrary rules, and Martin-Löf’s (1971) theory of iterated inductive definitions, where introduction and elimination rules for atomic formulas are proposed. The rise of logic programming has widened this perspective. From the proof-theoretic point of view, logic programming is a theory of atomic reasoning with respect to clausal definitions of atoms. Definitional reflection is an approach to proof-theoretic semantics that takes up this challenge and attempts to build a theory whose range of application goes beyond logical constants.
In logic programming we are dealing with program clauses of the form
\[ A \Leftarrow B_1 , \ldots ,B_m \]
which define atomic formulas. Such clauses can naturally be interpreted as describing introduction rules for atoms. From the point of view of proof-theoretic semantics the following two points are essential:
(1) Introduction rules (clauses) for logically compound formulas are not distinguished in principle from introduction rules (clauses) for atoms. Interpreting logic programming proof-theoretically motivates an extension of proof-theoretic semantics to arbitrary atoms, which yields a semantics with a much wider realm of applications.
(2) Program clauses are not necessarily well-founded. For example, the head of a clause may occur in its body. Well-founded programs are just a particular sort of programs. The use of arbitrary clauses without further requirements in logic programming is a motivation to pursue the same idea in proof-theoretic semantics. One would admit any sort of introduction rules and not just those of a special form, and in particular not necessarily ones which are well-founded. This carries the idea of definitional freedom, which is a cornerstone of logic programming, over to semantics, again widening the realm of application of proof-theoretic semantics.
The idea of considering introduction rules as meaning-giving rules for atoms is closely related to the theory of inductive definitions in its general form, according to which inductive definitions are systems of rules (see Aczel, 1977).
The theory of definitional reflection (Hallnäs, 1991; 2006; Hallnäs and Schroeder-Heister, 1990/91; Schroeder-Heister, 1993) takes up the challenge from logic programming and gives a proof-theoretic semantics not just for logical constants but for arbitrary expressions for which a clausal definition can be given. Formally, this approach starts with a list of clauses which is the definition considered. Each clause has the form
\[ A \Leftarrow \Delta \]where the head \(A\) is an atomic formula (atom). In the simplest case, the body \(\Delta\) is a list of atoms \(B_1,\ldots,B_m\), in which case a definition looks like a definite logic program. We often consider an extended case where \(\Delta\) may also contain some structural implication ‘\(\Rightarrow\)’, and sometimes even some structural universal quantification, which essentially is handled by restricting substitution. If the definition of \(A\) has the form
\[ \mathbb{D}_A\ \left\{ \begin{array}{rcl}A & \Leftarrow & \Delta_1 \\ & \vdots & \\ A & \Leftarrow & \Delta_n \end{array} \right. \]then \(A\) has the following introduction and elimination rules:
\[ \begin{prooftree}\AxiomC{$\Delta_1$} \UnaryInfC{$A$} \end{prooftree} \;\cdots\; \begin{prooftree}\AxiomC{$\Delta_n$} \UnaryInfC{$A$} \end{prooftree} \] \[ \begin{prooftree}\AxiomC{$A$} \AxiomC{$[\Delta_1]$} \noLine\UnaryInfC{$C$} \AxiomC{$\cdots$} \AxiomC{$[\Delta_n]$} \noLine\UnaryInfC{$C$} \QuaternaryInfC{$C$} \end{prooftree} \]The introduction rules, also called rules of definitional closure, express reasoning ‘along’ the clauses. The elimination rule is called the principle of definitional reflection, as it reflects upon the definition as a whole. If \(\Delta_1,\ldots,\Delta_n\) exhaust all possible conditions to generate \(A\) according to the given definition, and if each of these conditions entails the very same conclusion \(C\), then \(A\) itself entails this conclusion. If the clausal definition is viewed as an inductive definition, this principle can be viewed as expressing the extremal clause in inductive definitions: nothing else beyond the clauses given defines \(A\). Obviously, definitional reflection is a generalized form of the inversion principles discussed. It develops its genuine power in definitional contexts with free variables that go beyond purely propositional reasoning, and in contexts which are not well-founded. An example of a non-wellfounded definition is the definition of an atom \(R\) by its own negation:
\[ \mathbb{D}_R \;\{\; R \Leftarrow (R \Rightarrow \bot) \]Defining an atom by its negation is a standard example in logic programming. In the context of definitional reflection it was suggested by Hallnäs. It is here discussed in detail in the
Supplement on Definitional Reflection and Paradoxes.
Further Reading:
For non-wellfoundedness and paradoxes see the entry on self-reference and the entry on Russell’s paradox.
There is a large field of ideas and results concerning what might be called the “structural characterization” of logical constants, where “structural” is here meant both in the proof-theoretic sense of “structural rules” and in the sense of a framework that bears a certain structure, where this framework is again proof-theoretically described. Some of its authors use a semantical vocabulary and at least implicitly suggest that their topic belongs to proof-theoretic semantics. Others explicitly deny these connotations, emphasizing that they are interested in a characterization which establishes the logicality of a constant. The question “What is a logical constant?” can be answered in proof-theoretic terms, even if the semantics of the constants themselves is truth-conditional: namely, by requiring that the (perhaps truth-conditionally defined) constants show a certain inferential behaviour that can be described in proof-theoretic terms. However, as some of the authors consider their characterization at the same time as a semantics, it is appropriate that we mention some of these approaches here.
The most outspoken structuralist with respect to logical constants, who explicitly understands himself as such, is Koslow. In his Structuralist Theory of Logic (1992) he develops a theory of logical constants, in which he characterizes them by certain “implication relations”, where an implication relation roughly corresponds to a finite consequence relation in Tarski’s sense (which again can be described by certain structural rules of a sequent-style system). Koslow develops a structural theory in the precise metamathematical sense, which does not specify the domain of objects in any way beyond the axioms given. If a language or any other domain of objects equipped with an implication relation is given, the structural approach can be used to single out logical compounds by checking their implicational properties.
In his early papers on the foundations of logic, Popper (1947a; 1947b; Binder et al., eds., 2022) gives inferential characterizations of logical constants in proof-theoretic terms. He uses a calculus of sequents and characterizes logical constants by certain derivability conditions for such sequents. His terminology clearly suggests that he intends a proof-theoretic semantics of logical constants, as he speaks of “inferential definitions” and the “trivialization of mathematical logic” achieved by defining constants in the way described. Although his presentation is not free from conceptual imprecision and error, he anticipated many ideas now common in proof-theoretic semantics, such as the delineation of logical constants by means of certain minimality or maximality conditions with respect to introduction or elimination rules.
Important contributions to the logicality debate that characterize logical constants inferentially in terms of sequent calculus rules are those by Kneale (1956) and Hacking (1979). A thorough account of logicality is proposed by Došen (1980; 1989) in his theory of logical constants as “punctuation marks”, expressing structural features at the logical level. He understands logical constants as following certain double-line rules for sequents which can be read in both directions. For example, conjunction and disjunction obey (in classical logic, with multiple-formulae succedents) the double-line rules
\[ \xlongequal[{\Large \Gamma \,\vdash\, A \,\land\, B,\, \Delta}] {{\Large \Gamma \,\vdash\, A,\, \Delta\quad \Gamma \,\vdash\, B,\, \Delta}} \hspace{2em} \xlongequal[{\Large \Gamma,\, A \,\lor\, B \,\vdash\, \Delta}] {{\Large \Gamma,\, A \,\vdash\, \Delta\quad \Gamma,\, B \,\vdash\, \Delta}} \]Došen is able to give characterizations which include systems of modal logic. He explicitly considers his work as a contribution to the logicality debate and not to any conception of proof-theoretic semantics. Sambin et al., in their Basic Logic (Sambin, Battilotti, and Faggian, 2000), explicitly understand what Došen calls double-line rules as fundamental meaning-giving rules. The double-line rules for conjunction and disjunction are read as implicit definitions of these constants, which by some procedure can be turned into the explicit sequent-style rules we are used to. So Sambin et al. use the same starting point as Došen, but interpret it not as a structural description of the behaviour of constants, but semantically, as their implicit definition.
There are several other approaches to a uniform proof-theoretic characterization of logical constants, all of which at least touch upon issues of proof-theoretic semantics. Among such theories are Belnap’s Display Logic (Belnap, 1982), Wansing’s Logic of Information Structures (Wansing, 1993b), and generic proof editing systems and their implementations such as the Edinburgh Logical Framework (Harper, Honsell, and Plotkin, 1987) and its many successors, which allow the specification of a variety of logical systems. Since the rise of linear and, more generally, substructural logics there are various approaches dealing with logics that differ with respect to restrictions on their structural rules (see section 3.10). A recent movement away from singling out a particular logic as the true one towards a more pluralist stance (see, e.g., Beall and Restall, 2006), which is interested in what different logics have in common without any preference for a particular logic, can be seen as a shift away from semantical justification towards structural characterization.
Further Reading:
For Popper’s theory of logical constants see Binder et al. (2022) and Piecha (2023).
For logical constants and their logicality see the entry on logical constants.
For linear and substructural logics see the entry on linear logic and the entry on substructural logics.
For pluralism in logic see the entry on logical pluralism.
There is a considerable literature on category theory in relation to proof theory, and, following seminal work by Lawvere, Lambek and others (see Lambek and Scott, 1986), category theory itself can be viewed as a kind of abstract proof theory. If one looks at an arrow \(A \rightarrow B\) in a category as a kind of abstract proof of \(B\) from \(A\), we have a representation which goes beyond pure derivability of \(B\) from \(A\) (as the arrow has its individuality), but does not deal with the particular syntactic structure of this proof. For intuitionistic systems, proof-theoretic semantics in categorial form probably comes closest to what denotational semantics is in the classical case.
One of the most highly developed approaches to categorial proof theory is due to Došen. He has not only advanced the application of categorial methods in proof theory (e.g., Došen and Petrić, 2004), but also shown how proof-theoretic methods can be used in category theory itself (Došen, 2000). Most important for categorial logic in relation to proof-theoretic semantics is that in categorial logic, arrows always come together with an identity relation, which in proof theory corresponds to the identity of proofs. In this way, ideas and results of categorial proof theory pertain to what may be called intensional proof-theoretic semantics (see section 3.7), that is, the study of proofs as entities in their own right, not just as vehicles to establish consequences (Došen, 2006; 2016).
As there is a strong relationship between arrows in the categorial sense and proofs in the sense of deductive reasoning, there are categorial analogues to practically all concepts dealt with in the deductive realm, often with novel methods, ideas and results that supplement deductive approaches in a very productive way. Categorial proof-theoretic semantics is a very broad research area that runs in parallel to the deductive perspective adopted in this entry ― the deductive perspective is by far the more common one in philosophy. In the field of intuitionistic logic, categorial logic covers highly advanced topics: beyond standard quantifier logic, dependent type theories in the tradition of Martin-Löf (see section 2.2.3), including homotopy type theory. For many other logical systems categorial semantics have been developed, including classical logic and various substructural logics such as linear logic.
A philosophically highly significant general feature of categorial proof theory is its inherently hypothetical character. This is due to the fact that it starts from hypothetical statements \(A \rightarrow B\) (arrows) rather than categorical statements \(A\) (“categorical” as opposed to “hypothetical”, not “categorial”, which is used here for “pertaining to category theory”). In this way it overcomes a paradigm of standard, in particular validity-based, proof-theoretic semantics (see section 3.6).
Further Reading:
For categorial approaches in general see the entry on category theory and the entry on structuralism in the philosophy of mathematics.
For categorial proof-theoretic semantics see Pym, Ritter, and Robinson (2023, other internet resources).
Proof-theoretic semantics is predominantly concerned with logical constants. Even if one agrees with this orientation — the theory of definitional reflection (see section 2.3.2) is an exception to it — we must nonetheless fix the status of atomic sentences and their proofs. In model-theoretic semantics the truth of a logically complex sentence is defined with respect to a structure, which is called a “model” of this sentence. This structure determines which atomic sentences are true and which are not. Whether a complex sentence is true hinges on its atomic constituents. This is quite analogous in proof-theoretic semantics. Whether a proof of a complex sentence is valid depends on the atomic base of the system under consideration. Here an atomic base is normally considered a calculus for the generation of atomic sentences: proofs of atomic sentences in the atomic base are per se valid. Similar to the model-theoretic definition of logical truth as truth with respect to all structures, in proof-theoretic semantics one would define logically valid proofs as proofs which are valid with respect to all atomic bases.
There are various options as to which atomic systems should be considered candidates for atomic bases. (1) The simplest would be Herbrand models as used in the theory of logic programming, which are just collections of atomic sentences. Proof-theoretically, these would be calculi consisting of axioms only, without proper rules of inference. (2) In the writings of Lorenzen (see section 2.1.1) and Prawitz (see section 2.2.2), atomic systems are considered lists of production rules, that is, calculi with axioms and rules of the form \(A_1,\ldots,A_n \Rightarrow B\) allowing one to generate \(B\) from \(A_1,\ldots,A_n\). (3) More involved systems may contain atomic rules that allow one to discharge assumptions, quite in analogy to logical rules such as implication introduction. (4) Generalizing this idea, one may admit rules for atoms that discharge rules themselves, that is, rules of higher levels (see section 2.1.3). (5) Even further advanced would be atomic systems that incorporate rules of definitional reflection (see section 2.3.2). — Many other sorts of rules for atoms constituting atomic systems may be imagined.
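For illustration (our examples, in the entry’s proof-tree notation): a production rule of type (2) with premisses \(a_1, a_2\) and conclusion \(b\), and a type-(3) rule that infers \(c\) from a derivation of \(b\) while discharging the assumption \(a\):

```latex
\[ \begin{prooftree}\AxiomC{$a_1$} \AxiomC{$a_2$} \BinaryInfC{$b$} \end{prooftree}
   \hspace{2em}
   \begin{prooftree}\AxiomC{$[a]$} \noLine\UnaryInfC{$b$} \UnaryInfC{$c$} \end{prooftree} \]
```

The second rule behaves like an atomic analogue of implication introduction: \(c\) may be asserted once \(b\) has been derived from the (then discharged) assumption \(a\).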
For proof-theoretic semantics, the critical point here is that certain formal results depend on which types of atomic systems are admitted. This holds in particular for the issue of semantical completeness (see section 2.8). The investigation of which atomic systems, when taken as atomic bases of proof-theoretic semantics, lead to which formal system at the logical level has just begun (see Sandqvist 2015b; Piecha and Schroeder-Heister, 2016a; Stafford 2023, other internet resources). It should be noted that this is an intensional feature of proof-theoretic semantics: we are not just interested in which atoms are generated by the various forms of atomic systems, but in how this is done, that is, by which type of rules this is achieved. This is a crucial difference from model-theoretic semantics, which is purely extensional in this respect. For further intensional features of proof-theoretic semantics see section 3.7.
Instead of defining the validity of proofs of sentences, one can define the proof-theoretic validity of sentences directly. In the definition of validity of logically complex sentences one would then discard the reference to proofs of these sentences. This does not mean that one is giving up the proof-theoretic approach, as one would still refer to proofs in the atomic base. Proofs come in at the atomic level, but not at the logical level. This approach is technically much simpler to elaborate than the one based on the validity of proofs (see section 2.2.2), but still sufficient to discuss basic topics such as semantical completeness (see section 2.8). In fact, from a proof of a sentence \(A\) which is valid in the sense of section 2.2.2, the validity of \(A\) in the sense of sentence-based semantics can be inferred; and conversely, from the validity of \(A\) in sentence-based semantics a (though degenerate) valid proof of \(A\) can be constructed. In a sense, sentence-based semantics is related to the proof-theoretic semantics inherent in Martin-Löf’s system (see section 2.2.3), where in the case of elementary logic (where dependent types do not play any role), the proof information \(t\) of a judgement \(t{:}A\) can be discarded in favour of the simplified judgement \(A\ \textit{true}\).
One possible definition of the validity of sentences of minimal propositional logic, with respect to an atomic system \(S\) as its atomic base, would be the following:
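The clauses of such a definition can be sketched as follows (our reconstruction, in the style of Piecha and Schroeder-Heister (2016a); notation and details vary in the literature), writing \(\Vdash_S A\) for the validity of \(A\) with respect to \(S\), and \(\vdash_S a\) for the derivability of the atom \(a\) in \(S\):

```latex
\[ \begin{array}{lcl}
\Vdash_S a & \text{iff} & \vdash_S a \quad (a \text{ atomic})\\[0.3em]
\Vdash_S A \land B & \text{iff} & \Vdash_S A \text{ and } \Vdash_S B\\[0.3em]
\Vdash_S A \lor B & \text{iff} & \Vdash_S A \text{ or } \Vdash_S B\\[0.3em]
\Vdash_S A \rightarrow B & \text{iff} & A \Vdash_S B\\[0.3em]
\Gamma \Vdash_S A & \text{iff} & \text{for every extension } S' \text{ of } S\text{: if }
  \Vdash_{S'} B \text{ for all } B \in \Gamma \text{, then } \Vdash_{S'} A
\end{array} \]
```

Logical validity of \(A\) would then be validity with respect to every atomic base \(S\).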
Note that in the statement of hypothetical validity, reference is made to arbitrary extensions of the atomic base considered. As in the formal definition of the validity of proofs, considering base-extensions is used to ensure monotonicity and to avoid vacuous validity. In the literature, following Sandqvist (2015a), the term “base-extension semantics” is often used in a more specific sense: namely, for a special form of sentence-based semantics in which disjunction receives a deviant interpretation.
Validity in sentence-based semantics depends on the choice of atomic systems. This choice is what makes sentence-based semantics proof-theoretic. As indicated in section 2.6, the type of atomic system counts, as does the extension relation considered for implications and consequence statements.
Proof-theoretic semantics is intuitionistic in spirit, at least by its origin. Correspondingly, Prawitz (1971) conjectured that those consequence statements \(\Gamma \vDash A\) which are justified by proofs valid with respect to any atomic base are exactly the derivability statements \(\Gamma \vdash A\) of the formal system of intuitionistic logic. Within the framework of sentence semantics, which for the issue of validity in relation to semantical completeness is easier to handle than the framework of valid proofs, this conjecture turned out to be false. Harrop’s rule
\[ \begin{prooftree}\AxiomC{$\neg A \to (B \lor C)$} \UnaryInfC{$(\neg A \to B) \lor (\neg A \to C)$} \end{prooftree} \]which is not derivable (but only admissible) in intuitionistic logic, can be validated in this framework (Piecha and Schroeder-Heister, 2019). However, when the extension structure of atomic bases is modified beyond the set-theoretical superset relation, the picture changes and Kripke’s completeness proof for intuitionistic logic becomes applicable to the proof-theoretic framework (Goldfarb 2016; Stafford and Nascimento 2023). Another way to avoid incompleteness is to give disjunction a non-standard interpretation, for example by requiring that the minor premisses and conclusion of the rule of \(\lor\)-elimination can always be considered atomic (see Sandqvist, 2015a).
Further Reading:
For Prawitz’s completeness conjecture see the Supplement on Examples of Proof-theoretic Validity.
For the role of Harrop’s rule in the context of admissibility statements for intuitionistic logic, see the entry on intuitionistic logic.
Most approaches to proof-theoretic semantics consider introduction rules as basic, meaning-giving, or self-justifying, whereas the elimination inferences are justified as valid with respect to the given introduction rules. This conception has at least three roots: the first is a verificationist theory of meaning, according to which the assertibility conditions of a sentence constitute its meaning. The second is the idea that we must distinguish between what gives the meaning and what are the consequences of this meaning, as not all inferential knowledge can consist of applications of definitions. The third is the primacy of assertion over other speech acts such as assuming or denying, which is implicit in all approaches considered so far.
One might investigate how far one gets by considering elimination rules rather than introduction rules as a basis of proof-theoretic semantics. Some ideas towards a proof-theoretic semantics based on elimination rather than introduction rules have been sketched by Dummett (1991, Ch. 13), albeit in a very rudimentary form. A more precise definition of validity based on elimination inferences is due to Prawitz (1971; 2007; see also Schroeder-Heister 2015). Its essential idea is that a closed proof is considered valid if the result of applying an elimination rule to its end formula is a valid proof or reduces to one. For example, a closed proof of an implication \(A \rightarrow B\) is valid if, for any given closed valid proof of \(A\), the result of applying modus ponens
\[\begin{prooftree}\AxiomC{$A \to B$} \AxiomC{$A$} \BinaryInfC{$B$} \end{prooftree}\]to these two proofs is a valid proof of \(B\), or reduces to such a proof. This conception keeps two of the three basic ingredients of Prawitz-style proof-theoretic semantics (see section 2.2.2): the role of proof reduction and the substitutional view of assumptions. Only the canonicity of proofs ending with introductions is turned into the canonicity of proofs ending with eliminations.
Standard proof-theoretic semantics is assertion-centred in that assertibility conditions determine the meaning of logical constants. Corresponding to the intuitionistic way of proceeding, the negation \(\neg A\) of a formula \(A\) is normally understood as implying absurdity, \(A \rightarrow \bot\), where \(\bot\) is a constant which cannot be asserted, i.e., for which no assertibility condition is defined. This is an ‘indirect’ way of understanding negation. In the literature there has been discussion of what, following von Kutschera (1969), might be called ‘direct’ negation. By that one understands a one-place primitive operator of negation which cannot be, or at least is not, reduced to implying absurdity. It is not classical negation either. It rather obeys rules which dualize the usual rules for the logical constants. Sometimes it is called the “denial” of a sentence, sometimes also “strong negation” (see Odintsov, 2008). Typical rules for the denial \({\sim}A\) of \(A\) are
\[ \begin{prooftree}\AxiomC{${\sim}A$} \AxiomC{${\sim}B$} \BinaryInfC{${\sim}(A \lor B)$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{${\sim}A$} \UnaryInfC{${\sim}(A \land B)$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{${\sim}B$} \UnaryInfC{${\sim}(A \land B)$} \end{prooftree} \]Essentially, the denial rules for an operator correspond to the assertion rules for the dual operator. Several logics of denial have been investigated, in particular Nelson’s (1949) logics of “constructible falsity”, motivated by Nelson with respect to a certain realizability semantics. The main focus has been on his systems later called N3 and N4, which differ with respect to the treatment of contradiction (N4 is N3 without ex contradictione quodlibet). Using denial, any approach to proof-theoretic semantics can be dualized by just exchanging assertion and denial and turning from logical constants to their duals. In doing so, one obtains a system based on refutation \((=\) proof of denial) rather than proof. It can be understood as applying a Popperian view to proof-theoretic semantics while keeping its intuitionistic spirit. Formally, it leads to systems such as dual-intuitionistic logic, giving them a proof-theoretic semantics.
Another approach would be not just to dualize assertion-centered proof-theoretic semantics in favour of a denial-centered refutation-theoretic semantics, but to see the relation between rules for assertion and for denial as governed by an inversion principle or principle of definitional reflection of its own. This would be a principle of what might be called “assertion-denial harmony”. Whereas in standard proof-theoretic semantics, inversion principles control the relationship between assertions and assumptions (or consequences), such a principle would now regulate the relationship between assertion and denial. Given certain defining conditions of \(A\), it would say that the denial of every defining condition of \(A\) leads to the denial of \(A\) itself. For conjunction and disjunction this leads to the common pairs of assertion and denial rules
\[ \begin{prooftree}\AxiomC{$A$} \UnaryInfC{$A \lor B$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{$B$} \UnaryInfC{$A \lor B$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{${\sim}A$} \AxiomC{${\sim}B$} \BinaryInfC{${\sim}(A \lor B)$} \end{prooftree} \] \[ \begin{prooftree}\AxiomC{$A$} \AxiomC{$B$} \BinaryInfC{$A \land B$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{${\sim}A$} \UnaryInfC{${\sim}(A \land B)$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{${\sim}B$} \UnaryInfC{${\sim}(A \land B)$} \end{prooftree} \]This idea can easily be generalized to definitional reflection, yielding a reasoning system in which assertion and denial are intertwined. It has parallels to the deductive relations between the forms of judgement studied in the traditional square of opposition (Schroeder-Heister, 2012a; Zeilberger, 2008). It should be emphasized that the denial operator is here an external sign indicating a form of judgement and not a logical operator. This means in particular that it cannot be iterated.
The idea that both assertion and denial conditions give meaning to a sentence and thus to logical operations also figures under the term “bilateralism”, a term coined by Rumfitt.
Further Reading:
For the proof-theoretic semantics of negation and denial see Tranchini (2012b), Wansing (2001; 2017), as well as the special issue on assertion and denial edited by Carrara, Chiffi and De Florio (2017), and the entry on negation, the entry on connexive logic and the entry on contradiction.
For the idea of bilateralism see Rumfitt (2000), Kürbis (2016; 2019), Drobyshevich (2021) and the special issue on bilateralism and proof-theoretic semantics edited by Sara Ayhan (2023).
For Popperianism and proof-theoretic semantics see Binder et al. (2022) and Kapsner (2014).
Gentzen’s sequent calculus exhibits a symmetry between right and left introduction rules which suggests looking for a harmony principle that makes this symmetry significant for proof-theoretic semantics. At least three lines have been pursued to deal with this phenomenon. (i) Either the right-introduction or the left-introduction rules are considered to be introduction rules. The opposite rules (left-introductions and right-introductions, respectively) are then justified using the corresponding elimination rules. This means that the methods discussed before are applied to whole sequents rather than to formulas within sequents. Unlike these formulas, the sequents are not logically structured. This approach builds on definitional reflection, which applies harmony and inversion to rules for arbitrarily structured entities rather than for logical compounds only. It has been pursued by de Campos Sanz and Piecha (2009). (ii) The right- and left-introduction rules are derived from a characterization in the sense of Došen’s double-line rules (section 2.4), which is then read as a definition of some sort. The top-down direction of a double-line rule is already a right- or a left-introduction rule. The other one can be derived from the bottom-up direction by means of certain principles. This is the basic meaning-theoretic ingredient of Sambin et al.’s Basic Logic (Sambin, Battilotti, and Faggian, 2000). (iii) The right- and left-introduction rules are seen as expressing an interaction between sequents using the rule of cut. Given either the right- or the left-rules, the complementary rules express that everything that interacts with its premisses in a certain way does so with its conclusion as well. This idea of interaction is a generalized symmetric principle of definitional reflection. It can be considered a generalization of the inversion principle, using the notion of interaction rather than the derivability of consequences (see Schroeder-Heister, 2013).
All three approaches apply to the sequent calculus in its classical form, with possibly more than one formula in the succedent of a sequent, including structurally restricted versions as investigated in linear and other logics. Therefore this topic touches very much on the issue of substructural logics and of substructural approaches to proof-theoretic semantics (see section 3.10).
Even if, as in definitional reflection, we are considering definitional rules for atoms, their defining conditions do not normally decompose these atoms. A proof-theoretic approach that takes the internal structure of atomic sentences into account has been proposed by Więckowski (2008; 2011; 2016; 2021). He uses introduction and elimination rules for atomic sentences, where these atomic sentences are not just reduced to other atomic sentences, but to subatomic expressions representing the meaning of predicates and individual names. Apart from its foundational significance, this can be seen as a first step towards natural language applications of proof-theoretic semantics. A further step in this direction has been undertaken by Francez, who developed a proof-theoretic semantics for several fragments of English (see Francez, Dyckhoff, and Ben-Avi, 2010; Francez and Dyckhoff, 2010; Francez and Ben-Avi, 2015; Francez, 2022). Specifically, Francez is able to use the proof-theoretic approach to deal with scope ambiguity and many other issues of semantic meaning variation. Besides computer science, natural language grammar and natural language semantics will play a key role in the practical application of proof-theoretic semantics.
Further Reading:
For the proof-theoretic semantics of natural language see Francez (2015, Part II).
Proof-theoretic semantics is intuitionistically biased. This is due to the fact that natural deduction, as its preferred framework, has certain features which make it particularly suited for intuitionistic logic. In classical natural deduction the ex falso quodlibet
\[\begin{prooftree}\AxiomC{$\bot$} \UnaryInfC{$A$} \end{prooftree}\]is replaced with the rule of classical reductio ad absurdum
\[\begin{prooftree}\AxiomC{$[A\to \bot]$} \noLine\UnaryInfC{$\bot$} \UnaryInfC{$A$} \end{prooftree}\]In allowing one to discharge \(A \rightarrow \bot\) in order to infer \(A\), this rule undermines the subformula principle. Furthermore, in containing both \(\bot\) and \(A \rightarrow \bot\), it refers to two different logical constants in a single rule, so there is no separation of logical constants any more. Finally, as an elimination rule for \(\bot\) it does not follow the general pattern of introductions and eliminations. As a consequence, it destroys the introduction form property that every closed derivation can be reduced to one which uses an introduction rule in the last step.
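For illustration (a standard derivation, with \(\neg X\) abbreviating \(X \rightarrow \bot\)): the excluded middle is obtained by classical reductio, discharging the assumption \(\neg(A \lor \neg A)\), which is not a subformula of the conclusion, and the derivation does not end with an introduction:

```latex
\[ \begin{prooftree}
\AxiomC{$[\neg(A \lor \neg A)]^1$}
\AxiomC{$[\neg(A \lor \neg A)]^1$}
\AxiomC{$[A]^2$}
\UnaryInfC{$A \lor \neg A$}
\BinaryInfC{$\bot$}
\RightLabel{$^2$}
\UnaryInfC{$\neg A$}
\UnaryInfC{$A \lor \neg A$}
\BinaryInfC{$\bot$}
\RightLabel{$^1$}
\UnaryInfC{$A \lor \neg A$}
\end{prooftree} \]
```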
Classical logic fits very well with the multiple-succedent sequent calculus. There we do not need any additional principles beyond those assumed in the intuitionistic case. Just the structural feature of admitting more than one formula in the succedent suffices to obtain classical logic. As there are plausible approaches to establishing a harmony between right-introduction and left-introduction rules in the sequent calculus (see section 3.3), classical logic appears to be perfectly justified. However, this is only convincing if reasoning is appropriately framed as a multiple-conclusion process, even though this does not correspond to our standard practice, where we focus on single conclusions. One could try to develop an appropriate intuition by arguing that reasoning towards multiple conclusions delineates the area in which truth lies rather than establishing a single proposition as true. However, this intuition is hard to maintain and cannot be formally captured without serious difficulties. Philosophical approaches such as those by Shoesmith and Smiley (1978) and proof-theoretic approaches such as proof-nets (see Girard, 1987; the entry on linear logic) are attempts in this direction.
A fundamental reason for the failure of the introduction form property in classical logic is the indeterminism inherent in the laws for disjunction. \(A\lor B\) can be inferred from \(A\) as well as from \(B\). Therefore, if the disjunction laws were the only way of inferring \(A\lor B\), the derivability of \(A\lor \neg A\), which is a key principle of classical logic, would entail that of either \(A\) or \(\neg A\), which is absurd. A way out of this difficulty is to abolish indeterministic disjunction and use instead its classical de Morgan equivalent \(\neg(\neg A \land \neg B)\). This leads essentially to a logic without proper disjunction. In the quantifier case, there would be no proper existential quantifier either, as \(\exists x A\) would be understood in the sense of \(\neg \forall x\neg A\). If one is prepared to accept this restriction, then certain harmony principles can be formulated for classical logic.
As Karl Popper was the first to observe (see Binder et al., 2022), the simple combination of rules for classical and intuitionistic connectives collapses into the classical system. More sophisticated theories that combine classical and intuitionistic systems in a faithful way are found in ecumenical systems (see Pimentel and Pereira 2021). Approaches to classical logic originally developed in computer science, motivated by algorithmic considerations, in particular in the context of proof search, are Michel Parigot’s \(\lambda\mu\)-calculus (Parigot 1992) and the application of the focusing method to classical logic (Liang and Miller, 2009; 2024). Both approaches rest on the idea that by certain indexing techniques a single formula in the multiple-formula succedent of a classical sequent can be distinguished and dealt with individually.
Further Reading
For classical logic in general see the entry on classical logic.
Standard approaches to proof-theoretic semantics, especially Prawitz’s validity-based approach (section 2.2.2), take closed derivations as basic. The validity of open derivations is defined as the transmission of validity from closed derivations of the assumptions to a closed derivation of the assertion, where the latter is obtained by substituting a closed derivation for an open assumption. Therefore, if one calls closed derivations ‘categorical’ and open derivations ‘hypothetical’, one may characterize this approach as following two fundamental ideas: (I) the primacy of the categorical over the hypothetical, (II) the transmission view of consequence. These two assumptions (I) and (II) may be viewed as dogmas of standard semantics. “Standard semantics” here not only means standard proof-theoretic semantics, but also classical model-theoretic semantics, where these dogmas are assumed as well. There one starts with the definition of truth, which is the categorical concept, and defines consequence, the hypothetical concept, as the transmission of truth from conditions to consequent. From this point of view, constructive semantics, including proof-theoretic semantics, exchanges the concept of truth for a concept of construction or proof, and interprets “transmission” in terms of a constructive function or procedure, but otherwise leaves the framework untouched.
There is nothing wrong in principle with these dogmas. However, there are phenomena that are difficult to deal with in the standard framework. Such a phenomenon is non-wellfoundedness, especially circularity, where we may have consequences without transmission of truth and provability (see section 3.8). Another phenomenon is that of substructural distinctions, where it is crucial to include the structuring of assumptions from the very beginning (see section 3.10). Moreover, and this is most crucial, we might define things in a certain way without knowing in advance whether our definition or chain of definitions is well-founded or not. We do not first involve ourselves in the metalinguistic study of the definition we start with, but would like to start reasoning immediately. This problem does not arise if we restrict ourselves to the case of logical constants, where the defining rules are trivially well-founded. But it arises immediately when we consider more complicated cases that go beyond logical constants.
This makes it worthwhile to proceed in the other direction and start with the hypothetical concept of consequence, i.e., characterize consequence directly without reducing it to the categorical case. Philosophically this means that the categorical concept is a limiting concept of the hypothetical one. In the classical case, truth would be a limiting case of consequence, namely consequence without hypotheses. This program is closely related to the approach of categorial proof theory (see section 2.5), which is based on the primacy of hypothetical entities (“arrows”). Formally, it would give preference to the sequent calculus over natural deduction, since the sequent calculus allows the manipulation of the assumption side of a sequent by means of left-introduction rules. In a term-annotated system, we would not annotate formulas, but hypothetical statements by terms. If a hypothetical statement is expressed by a sequent in the form \(A \vdash B\), we would not annotate it as \(x{:}A \vdash t(x){:}B\) as in the Curry-Howard correspondence, but in a way like \(f{:}(A \vdash B)\). That is, the full hypothetical statement would be annotated, which makes the parallel to the categorial approach, where f would be the arrow \(f : A \rightarrow B\), obvious.
Further Reading
For hypothetical reasoning and intensional proof-theoretic semantics see Došen (2003; 2016) and Schroeder-Heister (2012c, 2016).
As mentioned in the first section (1.1), proof-theoretic semantics is intensional in spirit, as it is interested in proofs and not just provability. For proof-theoretic semantics it is not only relevant whether B follows from A, but also in which way we can establish that B follows from A. In other words, the identity of proofs is an important issue. However, though this is prima facie obvious and proof-theoretic semanticists would normally agree with this abstract claim, the practice in proof-theoretic semantics is often different, and the topic of the identity of proofs is much neglected. It very frequently happens that rules which are equally powerful are identified. For example, when principles of harmony are discussed and one considers the standard introduction rule for conjunction \[\begin{prooftree}\AxiomC{$A$} \AxiomC{$B$} \BinaryInfC{$A\land B$} \end{prooftree}\]
many proof-theoretic semanticists would consider it irrelevant whether one chooses the pair of projections
\[ \begin{prooftree}\AxiomC{$A \land B$} \UnaryInfC{$A$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{$A\land B$} \UnaryInfC{$B$} \end{prooftree} \]or the pair
\[ \begin{prooftree}\AxiomC{$A \land B$} \UnaryInfC{$A$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{$A\land B$} \AxiomC{$A$} \BinaryInfC{$B$} \end{prooftree} \] as the elimination rules for conjunction. The second pair of rules would often be considered to be just a more complicated variant of the pair of projections. However, from an intensional point of view, these two pairs of rules are not identical. Identifying them corresponds to identifying \(A \land B\) and \(A \land (A \rightarrow B)\), which is only extensionally, but not intensionally, correct. As Došen has frequently argued (e.g., Došen 1997; 2006), formulas such as \(A \land B\) and \(A \land (A \rightarrow B)\) are equivalent, but not isomorphic. Here “isomorphic” means that when proving one formula from the other and vice versa, we obtain, by combining these two proofs, the identity proof. This is not the case in this example.
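The extensional equivalence at issue can be checked mechanically. The following Lean sketch (an illustration added here, not part of the original text) proves \(A \land B \leftrightarrow A \land (A \rightarrow B)\):

```lean
-- A ∧ B and A ∧ (A → B) are interderivable (even in minimal logic).
theorem conj_equiv (A B : Prop) : A ∧ B ↔ A ∧ (A → B) :=
  ⟨fun ⟨a, b⟩ => ⟨a, fun _ => b⟩,  -- forward: ignore the argument, return b
   fun ⟨a, f⟩ => ⟨a, f a⟩⟩         -- backward: apply f to the witness a
```

The failure of isomorphism shows up in the proof terms: composing them in the direction \(A \land (A \rightarrow B) \rightarrow A \land B \rightarrow A \land (A \rightarrow B)\) sends \(\langle a, f\rangle\) to \(\langle a, \lambda\_.\,f\,a\rangle\), which is not the identity unless \(f\) is a constant function, so the round trip is merely an equivalence, not an isomorphism.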
Pursuing this idea leads to principles of harmony and inversion which are different from the standard ones. As harmony and inversion lie at the heart of proof-theoretic semantics, many of its issues are touched. Taking the topic of intensionality seriously means reshaping many fields of proof-theoretic semantics (see Schroeder-Heister, 2022). This has repercussions in various neighbouring fields such as the treatment of paradoxes. The first monograph on intensional proof-theoretic semantics, with particular emphasis on the topic of paradoxes, has been written by Tranchini (2023).
Since the identity of proofs is a basic topic of categorial proof theory (see section 2.5), the latter will require more attention in proof-theoretic semantics than is currently the case.
The logical, mathematical and semantical paradoxes, which play a crucial role in logic and the philosophy of mathematics, receive a novel rendering in the framework of proof-theoretic semantics. If one formulates paradoxes in the context of introduction and elimination rules, one can apply to them the proof-theoretic machinery of proof-reduction. Suppose introduction and elimination rules for set terms are given as follows (in a kind of naive set theory):
\[ \begin{prooftree}\AxiomC{$A(t)$} \UnaryInfC{$t \in \{x:A(x)\}$} \end{prooftree} \hspace{1.5em} \begin{prooftree}\AxiomC{$t \in \{x:A(x)\}$} \UnaryInfC{$A(t)$} \end{prooftree} \] Then, for r as \(\{x: x \not\in x\}\), we can infer \(r \in r\) from \(r \not\in r\) and vice versa, which is a natural deduction variant of Russell’s paradox. Now Prawitz (1965, Appendix B) observed that the derivation of absurdity arising from Russell’s paradox is non-normalizable, a feature that Tennant (1982) was able to demonstrate for a vast range of paradoxes. From the standpoint of proof-theoretic semantics, a non-normalizable proof may well be considered invalid, at least not valid in the sense in which a normalizable proof is valid. This means that we obtain a proof-theoretic criterion to tell whether we have a meaningful proof or not, and proofs of the paradoxes would not be meaningful in this sense. The Prawitz-Tennant analysis of paradoxes has opened up the field of the proof-theory of paradoxes, which goes way beyond proof-theoretic semantics in the narrower sense. It is closely related to the aspect of intensionality (see section 3.7) and also to the idea of definitional reflection (see section 2.3.2 and the Supplement on Definitional Reflection and Paradoxes).
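A standard reconstruction (added here for illustration) makes the non-normalizability visible. Writing \(r \not\in r\) for \(r \in r \rightarrow \bot\), the left branch below is a derivation \(\Pi\) of \(r \not\in r\) using the set rules, and applying it to \(r \in r\), obtained from a second copy of \(\Pi\) by the introduction rule, yields a closed derivation of absurdity:

```latex
\[
\begin{prooftree}
\AxiomC{$[r \in r]^1$}
\UnaryInfC{$r \not\in r$}
\AxiomC{$[r \in r]^1$}
\BinaryInfC{$\bot$}
\RightLabel{$1$}
\UnaryInfC{$r \not\in r$}
\AxiomC{$\Pi$}
\noLine
\UnaryInfC{$r \not\in r$}
\UnaryInfC{$r \in r$}
\BinaryInfC{$\bot$}
\end{prooftree}
\]
```

Reducing the detour at \(r \not\in r\) substitutes the right-hand derivation of \(r \in r\) for the discharged assumptions, and reducing the resulting introduction/elimination pair for \(\in\) regenerates the original derivation, so normalization never terminates.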
A particularly interesting paradox from the point of view of proof-theoretic semantics is Ekman’s paradox. It results from codifying an implicational variant of Russell’s paradox in minimal propositional logic. It turns out that, depending on how reductions of detour derivations are formulated, certain derivations in minimal propositional logic are not normalizable. This sheds a strong light on the concept of proof reduction, which is absolutely essential in natural-deduction style proof-theoretic semantics: much depends on its careful definition. In fact, the concept of proof reduction constitutes the concept of proof identity and goes way beyond the extensional idea that “detours” in proofs are removed.
Further Reading
For Russell’s paradox in general see the entry on Russell’s paradox.
For the treatment of paradoxes in proof-theoretic semantics, including Ekman’s paradox, see Tranchini (2023).
While deduction is forward-directed, passing from sentences already established to further sentences by means of justified rules, reduction proceeds backwards, trying to find an argument for a given claim. It thus belongs to proof search. Various methods of proof search have been discovered and discussed in computer science: resolution and tableau systems are prominent examples. However, from a semantical perspective, reductive approaches are not conceptually secondary to deduction but are theories in their own right. One might even suggest that reductive methods are primary from the semantical point of view, as the search for an argument, or the possible ways of searching for it, constitute the meaning of a given claim. This idea has been adopted, for example, in dialogical or game-theoretical semantics, but can and should be incorporated in proof-theoretic semantics in general. This is a major desideratum of proof-theoretic semantics, which so far is predominantly based on deduction rather than reduction.
Further Reading
For the idea of reductive logic and its semantics see the monograph by David Pym and Eike Ritter (2004).
For dialogical logic see the entry on dialogical logic as well as Thomas Piecha’s entry on dialogical logic in the Internet Encyclopedia of Philosophy.
For game-theoretical approaches in logic see the entry on logic and games.
As soon as one treats proof-theoretic semantics in the framework of the sequent calculus and thus focuses on hypothetical rather than categorical claims, the area of logics with restricted structural rules, called “substructural logics”, demands semantical consideration. Here the way assumptions are introduced and kept in one’s discourse represents a much more fine-grained structure than assumptions in natural deduction, where they are normally considered to be sets of statements. Due to the possibility of structuring the succedent of a sequent in an analogous way, proof-theoretic semantics loses its intuitionistic bias without necessarily becoming classical. Within the framework of substructural logic, deviant logical systems such as relevant logic or resource-sensitive logics can easily be modelled. A particularly interesting topic is the logic of bunched implications, which distinguishes different but co-existing ways of structuring assumptions and conclusions (see Pym, 2002). Substructural logic is well-established as a general tool to describe logical systems within a single encompassing structural framework. Whether substructural logic eo ipso represents a semantical framework is still an open question.
Further Reading
For substructural logic in general see the entry on substructural logics.
Standard proof-theoretic semantics has been occupied almost exclusively with logical constants. Logical constants play a central role in reasoning and inference, but they are definitely not the exclusive, and perhaps not even the most typical, sort of entities that can be defined inferentially. A framework is needed that deals with inferential definitions in a wider sense and covers both logical and extra-logical inferential definitions alike. The idea of definitional reflection with respect to arbitrary definitional rules (see section 2.3.2) and also natural language applications (see section 3.4) point in this direction, but farther-reaching conceptions can be imagined. Furthermore, the concentration on harmony, inversion principles, definitional reflection and the like is somewhat misleading, as it might suggest that proof-theoretic semantics consists of only that. It should be emphasized that already when it comes to arithmetic, stronger principles are needed in addition to inversion. However, in spite of these limitations, proof-theoretic semantics has already achieved very substantial results that can compete with more widespread approaches to semantics.
Further Reading
For contributions on a variety of aspects of proof-theoretic semantics see the reader edited by Piecha and Wehmeier (2024, open access).
category theory | connectives: sentence connectives in formal logic | contradiction | Curry’s paradox | Hilbert, David: program in the foundations of mathematics | logic, history of: intuitionistic logic | logic: and games | logic: classical | logic: connexive | logic: dialogical | logic: intuitionistic | logic: linear | logic: substructural | logical constants | logical pluralism | mathematics, philosophy of: intuitionism | mathematics, philosophy of: structuralism | natural deduction systems in logic | negation | paradoxes: and contemporary logic | proof theory: development of | realism: challenges to metaphysical | Russell’s paradox | self-reference | truth: revision theory of | type theory | type theory: intuitionistic
I would like to thank the editors, especially Ed Zalta, and the anonymous reviewers for many helpful comments and suggestions.
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054