
Some Remarks on Proof-Theoretic Semantics

  • Chapter
  • Open Access

Part of the book series: Trends in Logic (TREN, volume 43)

Abstract

This is a tripartite work. The first part is a brief discussion of what it is to be a logical constant, rejecting a view that allows a particular self-referential “constant” \(\bullet\) to be such a thing in favour of a view that leads to strong normalisation results. The second part is a commentary on the flattened version of Modus Ponens, and its relationship with rules of type theory. The third part is a commentary on work (joint with Nissim Francez) on “general elimination rules” and harmony, with a retraction of one of the main ideas of that work, i.e. the use of “flattened” general elimination rules for situations with discharge of assumptions. We begin with some general background on general elimination rules.

The material was presented at the proof-theoretic semantics conference in Tübingen in March 2013; an early version was presented at the proof-theoretic semantics (Arché) workshop in St Andrews in May 2009.




1 Background on General Elimination Rules

Standard natural deduction rules for Int (intuitionistic predicate logic) in the style of Gentzen [9] and Prawitz [24] are presumed to be familiar. The theory of cut-elimination for sequent calculus rules is very clear: whether a derivation in a sequent calculus is cut-free or not is easily defined, according to the presence or absence of instances of the Cut rule. For natural deduction, normality is a less clear concept: there are several inequivalent definitions (including variations such as “full normality”) in the literature. For implicational logic it is easy; but rules such as the elimination rule for disjunction cause minor problems with the notion of “maximal formula occurrence” (should one include the permutative conversions or not?), and more problems when minor premisses have vacuous discharge of assumptions.

One proposed solution, albeit partial, is the uniform use of general elimination rules, i.e. GE-rules. These can be motivated in terms of Prawitz’s inversion principleFootnote 1: “the conclusion obtained by an elimination does not state anything more than what must have already been obtained if the major premiss of the elimination was inferred by an introduction” [25, p. 246]. Normality is now the simple idea [39] that the major premiss of each elimination step should be an assumption; see also [13, 36].

The standard elimination rules for disjunction, absurdity and existential quantification are already GE rules:

(with y fresh in \(\exists E\)) and the same pattern was proposed (as a GE-rule) in the early 1980s for conjunction

by various authors, notably Prawitz [26, 27], Martin-Löf [15] and Schroeder-Heister [30], inspired in part by type theory (where conjunction is a special case of the \(\Sigma\)-type constructor, with \(A \wedge B =_{\textit{def}} \Sigma (A,B)\) whenever B(x) is independent of x) and (perhaps) in part by linear logic [10] (where conjunction appears in two flavours: multiplicative \(\otimes\) and additive &).

To this one can add GE-rules for implicationFootnote 2 and universal quantification:

Rules of the first kind are conveniently called “flattened” [29] (in comparison with Schroeder-Heister’s “higher-level” rules, for which see [30, 32]). López-Escobar [13] distinguishes between the premiss A of \(\supset\!\!GE\) as a “minor” premiss and that of C (assuming B) as a “transfer” premiss.Footnote 3

One thus has a calculus of rules in natural deduction style for Int; such calculi, and their normalisation results, have been studied by von Plato [39], by López-Escobar [13] and by Tennant [36]. With the definition (given above) that a deduction is normal iff the major premiss of every elimination step is an assumption, the main results are:

  1. Weak Normalisation (WN): every deduction can be replaced by a normal deduction of the same conclusion from the same assumptions [13, 20, 36, 39].

  2. Strong Normalisation (SN), for the implicational fragment: an obvious set of rules for reducing non-normal deductions is strongly normalising, i.e. every reduction sequence terminates [12, 13, 36, 37].

  3. SN, for the full language: a straightforward extension of the proof of [37] for implicationFootnote 4; also, the proofs for implication “directly carry over” [12] to a system with conjunctions and disjunctions. An argument (using the ordinary elimination rule for implication) is given in [35] for the rules for implication and existential quantification, with the virtue of illustrating in detail how to handle GE rules where the Tait–Martin-Löf method of induction on types familiar from [11] is not available. See also [13].

  4. Some straightforward arguments for normalisation (by induction on the structure of the deduction) [40].

  5. A 1-1 correspondence with intuitionistic sequent calculus derivations [20, 39].

  6. Some interpolation properties [17].

  7. Extension of the normalisation results to classical logic [41].

Despite the above results, there are some disadvantages:

  1. Poor generalisation of the GE rule for implication to the type-theoretic constant \(\varPi\), of which \(\supset\) can be treated as a special case [15]: details below in Sect. 3.

  2. Too many deductions, as in sequent calculus. Sequent calculus has (for each derivable sequent) rather too many derivations in comparison to natural deduction, since derivations often have many permutations, each of which is, when translated to ordinary natural deduction, replaced by an identity of deductions; focused (aka “permutation-free”) sequent calculi [5, 6] have advantages here. The GE-rules have the same feature, which interferes with rather than assists in root-first proof search.

  3. (For some complex constants, if one adopts the methodology beyond the basic intuitionistic ones) a “disharmonious mess” [4]: details below in Sect. 4.4.

  4. No SN results (yet, in general) for GE-rules for arbitrarily complex constants.

2 Is Bullet a Logical Constant?

Read [28] has, following a suggestion of Schroeder-Heister (with Appendix B of Prawitz’s [26] and Ekman’s paradoxFootnote 5 [7] in mind), proposed as a logical constant a nullary operator \(\bullet\) (aka R, for “Russell”) with the single (but impure) introduction rule

The GE-rule justified by this (along the same lines as for implication) is then

which, given the usual \(\bot E\) rule and the unnecessary duplication of premisses, can be simplified to

So, by this \(\bullet E\) rule, the premiss of the \(\bullet I\) rule is deducible, hence \(\bullet\) is deducible, hence \(\bot\) is deducible.

There is however a weakness (other than just that it leads to inconsistency) in the alleged justification of \(\bullet\) as a logical constant: it is a circularity. We follow Martin-Löf [15, 16] and Dummett [2] in accepting that we understand a proposition when we understand what it means to have a canonical proof of it, i.e. what forms a canonical proof can take. In the case of \(\bullet\), there is a circularity: the introduction rule gives us a canonical proof only once we have a proof of \(\bot\) from the assumption of \(\bullet\), i.e. have a method for transforming arbitrary proofs of \(\bullet\) into proofs of \(\bot\). The reference here to “arbitrary proofs of \(\bullet\)” is the circularity.

There are similar ideas about type formers, and it is instructive to consider another case, an apparent circularity: the formation rule (in [15]) for the type N of natural numbers. That is a type that we understand when we know what its canonical elements are; these are 0 and, when we have an element n of N, the term s(n). The reference back to “an element n of N” looks like a circularity of the same kind; but it is rather different: we don’t need to grasp all elements of N to construct a canonical element by means of the rule, just one of them, namely n.

A formal treatment of this issue has long been available in the type theory literature, e.g. Mendler [18], Luo [14], Coq [1]. We will try to give a simplified version of the ideas. With the convention that propositions are interpreted as types (of their proofs), we take type theory as a generalisation of logic, with ideas and restrictions in the former being applicable to the latter. The simplest recursive case (N) has just been considered and the recursion explained as harmless (despite Dummett’s reservations, expressed as his “complexity condition” [2]). What about more general definitions?

The definition of the type N can be expressed as saying that N is the least fixed point of the operator \(\varPhi _N =_{\textit{def}} \lambda X .(1 + X)\), i.e. \(N =_{\textit{def}} \mu X .(1+X)\). Similarly, the type of lists of natural numbers is \(\mu L .(1 + N \times L)\), and the type of binary trees with leaves in A and node labels in B is \(\mu T.A + (T \times B \times T)\). A unary operator definition \(\varPhi =_{\textit{def}} \lambda X .\ldots\) is said to be positive iff the only occurrences of the type variable X in the body \(\ldots\) are positive, where an occurrence of X in the expression \(A \rightarrow B\) is positive (resp. negative) iff it is a positive (resp. negative) occurrence in B or a negative (resp. positive) occurrence in A; a variable occurs positively in itself, and occurs positively (resp. negatively) in \(A+B\) and in \(A \times B\) just where it occurs positively (resp. negatively) in A or in B. A definition of a type as the least fixed point of an operator is then positive iff the operator definition is positive.

Read’s \(\bullet\), then, is defined as \(\mu X .(X \rightarrow \bot)\). This is not a positive definition; the negativity of the occurrence of X in the body \(X \rightarrow \bot\) is a symptom of the circular idea that \(\bullet\) can be grasped once we already have a full grasp of what the proofs of \(\bullet\) might be.
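The positivity requirement is mechanically checkable. As an illustrative sketch (ours, not the chapter’s; the names N, zero, succ and bullet are chosen for this example), Coq’s positivity checker accepts the definition of N but rejects Read’s \(\bullet\):

```coq
(* The type N, a positive (indeed strictly positive) definition: accepted. *)
Inductive N : Set :=
  | zero : N
  | succ : N -> N.

(* Read's bullet, i.e. mu X.(X -> bot): the defined type occurs to the
   left of an arrow in its own constructor, so the positivity checker
   rejects the definition. *)
Fail Inductive bullet : Prop :=
  | bullet_i : (bullet -> False) -> bullet.
```

The `Fail` vernacular asserts that the following command is rejected; here Coq reports a non-(strictly-)positive occurrence of `bullet`.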

In practice, a stronger requirement is imposed, that the definition be strictly positive, i.e. the only occurrences of the type variable X in the body \(\ldots\) are strictly positive, where an occurrence of X in the expression \(A \rightarrow B\) is strictly positive iff it is a strictly positive occurrence in B; a variable occurs strictly positively in itself, and occurs strictly positively in \(A+B\) and in \(A \times B\) just where it occurs strictly positively in A or in B. A definition of a type as the least fixed point of an operator is then strictly positive iff the operator definition is strictly positive.
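The gap between the two requirements can be illustrated (again, our own sketch): an occurrence of X in \((X \rightarrow A) \rightarrow B\) is positive but not strictly positive, and Coq, which enforces strict positivity, rejects a type defined as the least fixed point of such an operator:

```coq
(* mu X.((X -> A) -> B): the occurrence of the defined type is positive
   (it is negative in X -> A, hence positive in the whole body) but not
   strictly positive, so Coq rejects the definition. *)
Fail Inductive T (A B : Prop) : Prop :=
  | t_i : ((T A B -> A) -> B) -> T A B.
```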

With such definitions, it can be shown that strong normalisation (of a suitable set of reductions) holds [18, Chap. 3]; similar accounts appear in [1, 14].

3 The GE-rule for Implication and the Type-Theoretic Dependent Product Type

The present author commented [3] that the general (aka “flattened” [29]) E-rule for implication didn’t look promising because it didn’t generalise to type theory. Here (after 27 years) are the details of this problem: Recall [15] that in the dependently-typed context

the rule

withFootnote 6 semantics \(split((a,b), c) \rightarrow c(a,b)\) is a generalisation of the rule

Now, ordinary (but with witnesses) Modus Ponens

has, in the dependently-typed context

the generalisation (in which\(ap2(f,a)\) is often just written as\( f\ a\) or\((f\ a)\)):

(with \(\varPi (A,B)\) written as \(A\!\supset\!B\) whenever B(x) is independent of x); but the “flattened” GE rule

with semantics\(ap3(\lambda (g), a,c) \rightarrow c(g(a))\) doesn’t appear to generalise:

in which, note the question-mark: what should go there? In the context y : B(a), the only ingredient is y, which won’t do; it has the wrong type. Addition of an assumption such as x : A (and making c depend on it, as in c(x, y)) doesn’t help.
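The contrast can be sketched in Coq notation (our own sketch; the name ap3 follows the text). The simply-typed flattened rule is definable without difficulty, whereas a dependent motive for it cannot even be stated without the application the rule is supposed to provide:

```coq
(* Simply-typed flattened GE-rule for implication: unproblematic. *)
Definition ap3 {A B C : Type} (f : A -> B) (a : A) (c : B -> C) : C :=
  c (f a).

(* A dependent generalisation would need a transfer premiss of type
   forall y : B a, ...(y)..., where the conclusion type should be stated
   at f a -- but f a is exactly what the rule is meant to deliver, and
   in the context y : B a only y is available.  Compare sigT_rect, the
   dependent eliminator for the Sigma-type, where no such obstacle arises. *)
```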

One solution is the system of higher-level rules of Schroeder-Heister [30]. Our own preference, to be advocated after a closer look at flattened GE-rules, is for implication (and universal quantification) to be taken as primitive, with Modus Ponens and the \(\varPi E\) rule taken as their elimination rules, with justifications as in [15].

4 GE-Rules in General

The widespread idea that the “grounds for asserting a proposition” collectively form some kind of structure which can be used to construct the assumptions in the minor premiss(es)Footnote 7 of a GE-rule is attractive, as illustrated by the idea that, where two formulae A, B are used as the grounds for asserting \(A \wedge B\), one may make the pair A, B the assumptions of the minor premiss of \(\wedge GE\). An example of this is López-Escobar’s [13], which gives I-rules and then GE-rules for implicationFootnote 8 and disjunction, with the observation [13, p. 417] that:

Had the corresponding I-rule had three “options” with, say, 2, 3 and 5 premises respectively, then there would have been \(2 \times 3 \times 5\) E-rules corresponding to that logical atom.Footnote 9 Also had there been an indirectFootnote 10 premise, say \(\nabla \mathfrak {D} / \mathfrak {E}\), in one of the options then it would contribute a minor premise with conclusion \(\mathfrak {E}\) and a transfer premise with discharged sentence \(\mathfrak {D}\) to the appropriateFootnote 11 E-rule.

In practice, there is an explosion of possibilities, which we analyse in order as follows:

  1. a logical constant, such as \(\bot\), \(\wedge\), \(\vee\), \(\equiv\) or \(\oplus\) (exclusive or), can be introduced by zero or more rules;

  2. each of these rules can have zero or more premisses, e.g. \(\top I\) has zero, \(\supset\! I\) and each \(\vee I_i\) have one, \(\wedge I\) has two;

  3. each such premiss may discharge zero or more assumptions (as in \(\supset\! I\));

  4. each such premiss may abstract over one or more variables, as in \(\forall I\);

  5. and a premiss may take a term as a parameter (as in \(\exists I\)).

It is not suggested that this list is exhaustive: conventions such as those of substructural logic about avoiding multiple or vacuous discharge will extend it, as would recursion; but it is long enough to illustrate the explosion. The paper [8] attemptedFootnote 12 to deal with all these possibilities and carry out a programme of mechanically generating GE-rules from a set of I-rules, with results about harmony.

4.1 Several I-Rules

Where a logical constant (such as \(\vee\)) is introduced by several alternativeFootnote 13 rules, one can formulate an appropriate GE-rule as having several minorFootnote 14 premisses, one for each of the I-rules, giving a case analysis. This is very familiar from the case of \(\vee\) and the usual \(\vee E\) rule:

so an appropriate generalisation for \(n \ge 0\) alternative I-rules is to ensure that “the GE-rule” has n minor premisses. This works well for \(\bot\), with no I-rules: the \(\bot E\)-rule, as in [9, 24], has no minor premisses.Footnote 15

4.2 I-Rule Has Several Premisses

Now there are two possibilities following the general idea that the conclusion of a GE-rule is arbitrary. Let us consider the intuitionistic constant\(\wedge \) (with its only I-rule having two premisses) as an example. The first possibility is as illustrated earlier: the rule

The second is to have two GE-rules:

and it is routine to show that the ordinary GE-rule for \(\wedge\) is derivable in a system including these two rules, and vice versa. Tradition goes for the first possibility; examples below show, however, that this doesn’t always work and that the second may be required.
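The routine interderivability claim can be checked in Coq (an illustrative sketch of ours; the names andGE, andGE1, andGE2 are hypothetical):

```coq
(* The single GE-rule for conjunction, with arbitrary conclusion C. *)
Definition andGE {A B C : Prop} (p : A /\ B) (c : A -> B -> C) : C :=
  match p with conj a b => c a b end.

(* The two separate GE-rules are derivable from it ... *)
Definition andGE1 {A B C : Prop} (p : A /\ B) (c : A -> C) : C :=
  andGE p (fun a _ => c a).
Definition andGE2 {A B C : Prop} (p : A /\ B) (c : B -> C) : C :=
  andGE p (fun _ b => c b).

(* ... and conversely the single rule is derivable from the two. *)
Definition andGE' {A B C : Prop} (p : A /\ B) (c : A -> B -> C) : C :=
  andGE1 p (fun a => andGE2 p (fun b => c a b)).
```

Note that the converse derivation uses the major premiss twice, as one would expect from the corresponding natural deduction argument.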

4.3 Premiss of I-Rule Discharges Some Assumptions

Natural deduction’s main feature is that assumptions can be discharged, as illustrated by the I-rule for \(\supset\) and the E-rule for \(\vee\). This raises difficulties for the construction of the appropriate GE-rules: Prawitz [26] got it wrong (corrected in [27]); Schroeder-Heister [30] gave an answer in the form of a system of rules of higher level, allowing discharge not just of assumptions but of rules (which may themselves discharge ...); but, although much cited, use of this system seems to be modest. As already discussed, an alternative was mentioned (disparagingly) in [3] and (independently) adopted more widely by others [13, 36, 39], the “flattened” GE-rule for \(\supset\) being

Let us now consider the position where two premisses discharge an assumption (just one each is enough): consider the logical constant \(\equiv\) with one I-rule, namely

According to our methodology, we have two possibilities for the GE-rule: first, have the minor premiss of the rule with the two assumptions B, A being discharged, and some device to ensure that there are other premisses with A and B as conclusions. There seems to be no way of doing this coherently, i.e. with A somehow tied to the discharge of B and vice versa. The alternative is to have two GE-rules, along the lines discussed above for \(\wedge\), and these are clearly

by means of which it is clear that, from the assumption of\(A \equiv B\), one can construct a proof of\(A \equiv B\) using the introduction rule as the last step, implying the “local completeness” of this set of rules in a sense explored by Pfenning and Davies [22]:

We are thus committed in general to the use of the second rather than the first possibility of GE-rules—the use of two such rules rather than one—when there are two premisses in an I-rule.
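The pair of GE-rules and the local-completeness expansion can be rendered in Coq (our own sketch; equivc and its rules are hypothetical names for the constant \(\equiv\) of the text):

```coq
(* Equivalence with a single introduction rule taking two premisses,
   each discharging an assumption. *)
Inductive equivc (A B : Prop) : Prop :=
  | equivc_i : (A -> B) -> (B -> A) -> equivc A B.

(* The two GE-rules, one per premiss of the I-rule. *)
Definition equivcGE1 {A B C : Prop}
  (p : equivc A B) (a : A) (c : B -> C) : C :=
  match p with equivc_i f _ => c (f a) end.
Definition equivcGE2 {A B C : Prop}
  (p : equivc A B) (b : B) (c : A -> C) : C :=
  match p with equivc_i _ g => c (g b) end.

(* Local completeness: from equivc A B, rebuild a proof whose last
   step is the introduction rule. *)
Definition equivc_expand {A B : Prop} (p : equivc A B) : equivc A B :=
  equivc_i A B (fun a => equivcGE1 p a (fun b => b))
               (fun b => equivcGE2 p b (fun a => a)).
```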

4.4 GE Harmony: A Counter-Example

Francez and the present author [8]Footnote 16 developed these ideas (looking also at the analogues of universal and existential quantification) by defining the notion of “GE-harmony” (E-rules are GE-rules obtained according to a formal procedure, of which parts are as described above) and showing that it implied “local intrinsic harmony” (local soundness, i.e. reductions, and local completeness, as illustrated above for \(\equiv\)). The classification in [8] corresponds roughly but not exactly to the different possibilities enumerated above (\(1 \ldots 5\)): “non-combining” (zero or one premiss[es]) or “combining” (more than one premiss) corresponds to possibility 2; “hypothetical” (a premiss with assumption discharge) or “categorical” (no such discharge) corresponds to possibility 3; “parametrized” (a premiss depends on a free variable) corresponds roughly to a mixture of 4 and 5; “conditional” (e.g. there is a freshness condition) corresponds roughly to 4.

Let us now consider a combination of such ideas: two I-rules, each of which discharges an assumption, e.g. the pair

What is/are the appropriate GE-rule(s)? It/they might be just

but that only captures, as it were, the first of the two I-rules (and implies that\((A \odot B) \supset (A \supset B)\), surely not what should be the case); so we have to try also

but then these two need to be combined somehow. If into a single rule,Footnote 17 it would be something like

which is weird; with only the second and last of these premisses, already C can be deduced. The meaning of \(A \odot B\) is thus surely not being captured, whether we go for two GE-rules or just one.

A similar example was given in 1968 by von Kutschera [38, p. 15], with two I-rules for an operator F based on the informal definition \(F(A,B,C) \equiv (A \supset B) \vee (C \supset B)\), but with the flattened E-rule failing to capture the definition adequately.

4.5 Another [Counter-]Example

Following Zucker and Tragesser [42, p. 506], Olkhovikov and Schroeder-Heister [21] have given as a simpler example the ternary constant \(\star\) with two introduction rules:

and the “obvious” GE ruleFootnote 18 thereby justified is:

which is clearly wrong, there being nothing to distinguish it from \(\star (A,C,B)\). Their main point is to show by a semantic argument that there is no non-obvious GE rule for \(\star\), thus defending the “idea of higher-level rules” [30].

4.6 In Other Words

The “flattening” methodology, when either the constant being defined has several introduction rules or one or more of such rules has several premisses, can lead

  1. to a number \(({>} 1)\) of GE rules, none of which on its own suffices, and

  2. to a “disharmonious mess”, i.e. a failure to capture the correct meaning.

Already there are enough problems, before we start considering the cases where the premiss abstracts over several variables, instantiates a variable as a term or recurses on the constant being defined.

The solution of Schroeder-Heister [30] is to allow rules to discharge rules. We prefer, however, to propose instead that one should adopt the standard solution from (e.g.) Coq [1]: to reject the idea that the rule for handling implication (and other situations where assumptions are discharged) be treated as illustrated above, and instead to take implication (and its generalisation, universal quantification), together with an inductive definition mechanism, as primitive, with traditional “special” elimination rules (e.g. Modus Ponens) but to allow GE rules elsewhere (e.g. for \(\wedge\) and its generalisations \(\Sigma\) and \(\exists\)). This deals with \(\equiv\); likewise, it deals with \(\odot\) as if it were

More precisely, we note that with an introduction rule given in Coq by the inductive definition

figure a

we obtain as a theorem

figure b

and similarly for\(\odot \) we have the inductive definition

figure c

and we can obtain as a theorem

figure d

Not only can we obtain such theorems, but Coq will calculate them (and several variants) from the definitions automatically. Further details of this approach can be found in [23]. For example, existential quantification can be defined thus (we give also the obtained theorem representing the elimination rule):

figure e

Short shrift is given to \(\bullet\):

figure f
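As an illustration of how Coq calculates such theorems (our own example, with the hypothetical names andc and odotc, not the author’s figures), a strictly positive inductive definition automatically yields the corresponding elimination principle:

```coq
(* A conjunction-like constant with one I-rule of two premisses. *)
Inductive andc (A B : Prop) : Prop :=
  | andc_i : A -> B -> andc A B.

(* The automatically generated principle is the flattened GE-rule. *)
Check (andc_ind
  : forall A B P : Prop, (A -> B -> P) -> andc A B -> P).

(* The constant odot of Sect. 4.4, with two I-rules each discharging
   an assumption: here the generated principle has one minor premiss
   per I-rule, with no "disharmonious mess". *)
Inductive odotc (A B : Prop) : Prop :=
  | odotc_i1 : (A -> B) -> odotc A B
  | odotc_i2 : (B -> A) -> odotc A B.

Check (odotc_ind
  : forall A B P : Prop,
    ((A -> B) -> P) -> ((B -> A) -> P) -> odotc A B -> P).
```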

This pushes the problem (of constructing and justifying elimination rules given a set of introduction rules, and establishing properties like harmony, local completeness and stability) elsewhere: into the same problem for a mechanism of inductive definitions and for the rules regarded as primitive: introduction and (non-general) elimination rules for implication and universal quantification. Apart from the issue of stability, we regard the latter as unproblematic, and the former as relatively straightforward (once we can base the syntax on implication and universal quantification).

To a large extent this approach may be regarded as just expressing first-order connectives using second-order logic, and not very different from Schroeder-Heister’s higher-level rules. The important point is that there are difficulties (we think unsurmountable) with trying to do it all without such higher-order mechanisms.

5 Conclusion

The main conclusion is this: although the idea that the “grounds for asserting a proposition” are easily collected together as a unit is attractive, the different ways in which it can be done (disjunctive, conjunctive, with assumption discharge, with variable abstraction or parameterisation, ..., recursion) generate (if the GE rules pattern is followed) many problems for the programme of mechanically generating one (or more) elimination rules for a logical constant, other than in simple cases. There are difficulties with the mechanical approach in [8]; there are similar difficulties in [13]. Without success of such a programme, it is hard to see what “GE harmony” can amount to, except as carried out in (e.g.) Coq [1] where strictly positive inductive type definitions lead automatically to rules for reasoning by induction and case analysis over objects of the types thus defined, and with strong normalisation results. A similar conclusion is to be found in [33].

Notes

  1. See [19, 31] for discussions of this principle, including its antecedents in the work of Lorenzen.

  2. Reference [3] has an early occurrence of this.

  3. On the other hand, Francez and Dyckhoff [8] call A the “support” and, more in line with tradition than López-Escobar [13], the remaining premiss the “minor premiss”.

  4. Personal communication from Jan von Plato, May 2009.

  5. See [34] for a recent discussion.

  6. The notation c is used to abbreviate \(\lambda xy.c(x,y)\). Similar abbreviations are used below. c(a, b) is then just c applied to a and then to b.

  7. In the sense of “all but the major premiss”.

  8. He also gives primitive rules for negation; in our view this is best treated as a defined notion, since even its I-rule is impure, i.e. mentions the constant \(\bot\).

  9. In our terminology, “logical constant”.

  10. \(\nabla \mathfrak {D} / \mathfrak {E}\) is the notation of [13] for “\(\mathfrak {E}\) [assuming \(\mathfrak {D}\)]”.

  11. Surely, in this, \(\mathfrak {D}\) and \(\mathfrak {E}\) are the wrong way round.

  12. In ignorance, alas, of [13].

  13. López-Escobar [13] calls these “options”.

  14. “Transfer” premisses in the terminology of [13].

  15. López-Escobar [13] does it differently.

  16. Written in 2007–9, several years before publication.

  17. As the formula in [13] implies, since \(1 \times 1 = 1\).

  18. Again, [13] implies there is just one GE rule.

References

  1. Bertot, Y., Casteran, P.: Interactive Theorem Proving and Program Development. Springer, Heidelberg (2004)

  2. Dummett, M.A.E.: The Logical Basis of Metaphysics. Duckworth, London (1991)

  3. Dyckhoff, R.: Implementing a simple proof assistant. In: Derrick, J., Lewis, H.A. (eds.) Workshop on Programming for Logic Teaching, Proceedings, vol. 23.88, pp. 49–59. Centre for Theoretical Computer Science, University of Leeds (1987). http://rd.host.cs.st-andrews.ac.uk/publications/LeedsPaper.pdf

  4. Dyckhoff, R.: Generalised elimination rules and harmony. In: Arché Seminar, St Andrews, 26 May 2009. http://rd.host.cs.st-andrews.ac.uk/talks/2009/GE.pdf

  5. Dyckhoff, R., Pinto, L.: Cut-elimination and a permutation-free sequent calculus for intuitionistic logic. Studia Logica 60, 107–118 (1998)

  6. Dyckhoff, R., Pinto, L.: Proof search in constructive logics. In: Cooper, S.B., Truss, J.K. (eds.) Proceedings of Logic Colloquium 97, i.e. Sets and Proofs, pp. 53–65. CUP (1999)

  7. Ekman, J.: Propositions in propositional logic provable only by indirect proofs. Math. Log. Q. 44, 69–91 (1998)

  8. Francez, N., Dyckhoff, R.: A note on harmony. J. Philos. Log. 41, 613–628 (2012)

  9. Gentzen, G.: Untersuchungen über das logische Schließen. Math. Zeitschrift 39, 176–210 (1934)

  10. Girard, J.Y.: Linear logic. Theor. Comput. Sci. 50, 1–102 (1987)

  11. Girard, J.Y., Lafont, Y., Taylor, P.: Proofs and Types. Cambridge University Press, Cambridge (1989)

  12. Joachimski, F., Matthes, R.: Short proofs of normalization for the simply-typed lambda-calculus, permutative conversions and Gödel’s T. Arch. Math. Log. 42, 59–87 (2003)

  13. López-Escobar, E.G.K.: Standardizing the N systems of Gentzen. In: Models, Algebras, and Proofs, pp. 411–434. Dekker, New York (1999)

  14. Luo, Z.H.: Computation and Reasoning. Oxford University Press, New York (1994)

  15. Martin-Löf, P.: Intuitionistic Type Theory. Bibliopolis, Naples (1986)

  16. Martin-Löf, P.: On the meanings of the logical constants and the justifications of the logical laws (the Siena Lectures). Nord. J. Philos. Log. 1, 11–60 (1996)

  17. Matthes, R.: Interpolation for natural deduction with generalized eliminations. In: Kahle, R., Schroeder-Heister, P., Stärk, R. (eds.) Proof Theory in Computer Science. LNCS, vol. 2183, pp. 153–169. Springer, New York (2001)

  18. Mendler, P.F. (better known as Nax P. Mendler): Inductive definition in type theory. Ph.D. thesis, Cornell (1987). http://www.nuprl.org/documents/Mendler/InductiveDefinition.pdf

  19. Moriconi, E., Tesconi, L.: On inversion principles. Hist. Philos. Log. 29, 103–113 (2008)

  20. Negri, S., von Plato, J.: Structural Proof Theory. CUP, Cambridge (2000)

  21. Olkhovikov, G.K., Schroeder-Heister, P.: On flattening elimination rules. Rev. Symb. Log., p. 13 (2014). http://dx.doi.org/10.1017/S1755020313000385

  22. Pfenning, F., Davies, R.: A judgmental reconstruction of modal logic. Math. Struct. Comput. Sci. 11, 511–540 (2001)

  23. Pierce, B., et al.: Software Foundations. http://www.cis.upenn.edu/~bcpierce/sf/Logic.html 28 July 2013. Accessed 27 Jan 2014

  24. Prawitz, D.: Natural Deduction. Almqvist and Wiksell, Stockholm (1965)

  25. Prawitz, D.: Ideas and results in proof theory. In: Fenstad, J.E. (ed.) Second Scandinavian Logic Symposium Proceedings, pp. 235–307. North-Holland, Amsterdam (1971)

  26. Prawitz, D.: Proofs and the meaning and completeness of the logical constants. In: Essays on Mathematical and Philosophical Logic. Reidel, Dordrecht (1979)

  27. Prawitz, D.: Beweise und die Bedeutung und Vollständigkeit der logischen Konstanten. Conceptus 16, 31–44 (1982) (translation and revision of [26])

  28. Read, S.L.: Harmony and autonomy in classical logic. J. Philos. Log. 29, 123–154 (2000)

  29. Read, S.L.: General-elimination harmony and higher-level rules. In: Wansing, H. (ed.) Dag Prawitz on Proofs and Meaning, pp. 293–312. Springer, New York (2015)

  30. Schroeder-Heister, P.: A natural extension of natural deduction. J. Symb. Log. 49, 1284–1300 (1984)

  31. Schroeder-Heister, P.: Generalized definitional reflection and the inversion principle. Log. Univ. 1, 355–376 (2007)

  32. Schroeder-Heister, P.: Generalized elimination inferences, higher-level rules, and the implications-as-rules interpretation of the sequent calculus. In: Pereira, L.C., Haeusler, E.H., de Paiva, V. (eds.) Advances in Natural Deduction: A Celebration of Dag Prawitz’s Work, pp. 1–29. Springer, New York (2014)

  33. Schroeder-Heister, P.: Harmony in proof-theoretic semantics: a reductive analysis. In: Wansing, H. (ed.) Dag Prawitz on Proofs and Meaning, pp. 329–358. Springer, New York (2015)

  34. Schroeder-Heister, P., Tranchini, L.: Ekman’s paradox (2014). Notre Dame Journal of Formal Logic (submitted)

  35. Schwichtenberg, H., Wainer, S.: Proofs and Computations. Cambridge University Press, Cambridge (2012)

  36. Tennant, N.: Ultimate normal forms for parallelized natural deductions. Log. J. IGPL 10, 299–337 (2002)

  37. Tesconi, L.: Strong normalization for natural deduction with general elimination rules. Ph.D. thesis, Pisa (2004)

  38. von Kutschera, F.: Die Vollständigkeit des Operatorensystems für die intuitionistische Aussagenlogik im Rahmen der Gentzensemantik. Archiv für mathematische Logik und Grundlagenforschung 11, 3–16 (1968)

  39. von Plato, J.: Natural deduction with general elimination rules. Arch. Math. Log. 40, 541–567 (2001)

  40. von Plato, J.: Explicit composition and its application in proofs of normalization (2015) (submitted)

  41. von Plato, J., Siders, A.: Normal derivability in classical natural deduction. Rev. Symb. Log. 5, 205–211 (2012)

  42. Zucker, J., Tragesser, R.S.: The adequacy problem for inferential logic. J. Philos. Log. 7, 501–516 (1978)


Author information

Authors and Affiliations

  1. University of St Andrews, Fife, UK

    Roy Dyckhoff


Corresponding author

Correspondence to Roy Dyckhoff.

Editor information

Editors and Affiliations

  1. Department of Computer Science, University of Tübingen, Tübingen, Germany

    Thomas Piecha

  2. Department of Computer Science, University of Tübingen, Tübingen, Germany

    Peter Schroeder-Heister

Rights and permissions

Open Access This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


Copyright information

© 2016 The Author(s)

About this chapter

Cite this chapter

Dyckhoff, R. (2016). Some Remarks on Proof-Theoretic Semantics. In: Piecha, T., Schroeder-Heister, P. (eds) Advances in Proof-Theoretic Semantics. Trends in Logic, vol 43. Springer, Cham. https://doi.org/10.1007/978-3-319-22686-6_5
