A good argument is one whose conclusions follow from its premises; its conclusions are consequences of its premises. But in what sense do conclusions follow from premises? What is it for a conclusion to be a consequence of premises? Those questions, in many respects, are at the heart of logic (as a philosophical discipline). Consider the following argument:
There are many different things one can say about this argument, but many agree that if we do not equivocate (if the terms mean the same thing in the premises and the conclusion) then the argument is valid, that is, the conclusion follows deductively from the premises. This does not mean that the conclusion is true. Perhaps the premises are not true. However, if the premises are true, then the conclusion is also true, as a matter of logic. This entry is about the relation between premises and conclusions in valid arguments.
Contemporary analyses of the concept of consequence—of the follows from relation—take it to be both necessary and formal, with such answers often being explicated via proofs or models (or, in some cases, both). Our aim in this article is to provide a brief characterisation of some of the notions that play a central role in contemporary accounts of logical consequence.
We should note that we only highlight a few of the philosophical aspects of logical consequence, leaving out almost all technical details, and also leaving out a large number of philosophical debates about the topic. Our rationale for doing as much is that one will get the technical details, and the particular philosophical issues that motivated them, from looking at specific logics—specific theories of logical consequence (e.g., relevant logics, substructural logics, non-monotonic logics, dynamic logics, modal logics, theories of quantification, and so on). (Moreover, debates about almost any feature of language—structure versus form of sentences, propositions, context sensitivity, meaning, even truth—are relevant to debates about logical consequence, making an exhaustive discussion practically impossible.) Our aim here is simply to touch on a few of the very basic issues that are central to logical consequence.
Some arguments are such that the (joint) truth of the premises is necessarily sufficient for the truth of the conclusions. In the sense of logical consequence central to the current tradition, such “necessary sufficiency” distinguishes deductive validity from inductive validity. In inductively valid arguments, the (joint) truth of the premises is very likely (but not necessarily) sufficient for the truth of the conclusion. An inductively valid argument is such that, as it is often put, its premises make its conclusion more likely or more reasonable (even though the conclusion may well be untrue given the joint truth of the premises). The argument
is not deductively valid because the premises are not necessarily sufficient for the conclusion. Smoothy may well be a black swan.
Distinctions can be drawn between different inductive arguments. Some inductive arguments seem quite reasonable, and others are less so. There are many different ways to attempt to analyse inductive consequence. We might consider the degree to which the premises make the conclusion more likely (a probabilistic reading), or we might check whether the most normal circumstances in which the premises are true render the conclusion true as well. (This leads to some kinds of default or non-monotonic inference.) The field of inductive consequence is difficult and important, but we shall leave that topic here and focus on deductive validity.
(See the entries on inductive logic and non-monotonic logic for more information on these topics.)
The constraint of necessity is not sufficient to settle the notion of deductive validity, for the notion of necessity may also be fleshed out in a number of ways. To say that a conclusion necessarily follows from the premises is to say that the argument is somehow exceptionless, but there are many different ways to make that idea precise.
A first stab at the notion might use what we now call metaphysical necessity. Perhaps an argument is valid if it is (metaphysically) impossible for the premises to be true and the conclusion to be untrue; valid if—holding fixed the interpretations of premises and conclusion—in every possible world in which the premises hold, so does the conclusion. This constraint is plausibly thought to be a necessary condition for logical consequence (if it could be that the premises are true and the conclusion isn’t, then there is no doubt that the conclusion does not follow from the premises); however, on most accounts of logical consequence, it is not a sufficient condition for validity. Many admit the existence of a posteriori necessities, such as the claim that water is H\(_2\)O. If that claim is necessary, then the argument:
is necessarily truth preserving, but it seems a long way from being deductively valid. It was a genuine discovery that water is H\(_2\)O, one that required significant empirical investigation. While there may be genuine discoveries of valid arguments that we had not previously recognised as such, it is another thing entirely to think that these discoveries require empirical investigation.
An alternative line on the requisite sort of necessity turns to conceptual necessity. On this line, the conclusion of (3) is not a consequence of its premise given that it is not a conceptual truth that water is H\(_2\)O. The concept water and the concept \(H_2O\) happen to pick out the same property, but this agreement is determined partially by the world.
A similar picture of logic takes consequence to be a matter of what is analytically true, and it is not an analytic truth that water is H\(_2\)O. The word “water” and the formula “H\(_2\)O” agree in extension (and necessarily so) but they do not agree in meaning.
If metaphysical necessity is too coarse a notion to determine logical consequence (since it may be taken to render too many arguments deductively valid), an appeal to conceptual or analytic necessity might seem to be a better route. The trouble, as Quine argued, is that the distinction between analytic and synthetic (and similarly, conceptual and non-conceptual) truths is not as straightforward as we might have thought at the beginning of the 20th Century. (See the entry on the analytic/synthetic distinction.) Furthermore, many arguments seem to be truth-preserving on the basis of analysis alone:
One can understand that the conclusion follows from the premises, on the basis of one’s understanding of the concepts involved. One need not know anything about the identity of Peter, Greg’s cousin. Still, many have thought that (4) is not deductively valid, despite its credentials as truth-preserving on analytic or conceptual grounds. It is not quite as general as it could be because it is not as formal as it could be. The argument succeeds only because of the particular details of the family concepts involved.
A further possibility for carving out the distinctive notion of necessity grounding logical consequence is the notion of apriority. Deductively valid arguments, whatever they are, can be known to be so without recourse to experience, so they must be knowable a priori. A constraint of apriority certainly seems to rule argument (3) out as deductively valid, and rightly so. However, it will not do to rule out argument (4). If we take arguments like (4) to turn not on matters of deductive validity but something else, such as an a priori knowable definition, then we must look elsewhere for a characterisation of logical consequence.
The strongest and most widespread proposal for finding a narrower criterion for logical consequence is the appeal to formality. The step in (4) from “Peter is Greg’s mother’s brother’s son” to “Peter is my cousin” is a material consequence and not a formal one, because to make the step from the premise to the conclusion we need more than the structure or form of the claims involved: we need to understand their contents too.
What could the distinction between form and content mean? We mean to say that consequence is formal if it depends on the form and not the substance of the claims involved. But how is that to be understood? We will give at most a sketch, which, again, can be filled out in a number of ways.
The obvious first step is to notice that all presentations of the rules of logical consequence rely on schemes. Aristotle’s syllogistic is a proud example.
Ferio: No \(F\) is \(G\). Some \(H\) is \(G\). Therefore some \(H\) is not \(F\).
Inference schemes, like the one above, display the structure of validarguments. Perhaps to say that an argument is formally valid is to saythat it falls under some general scheme of which every instance isvalid, such as Ferio.
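The scheme-based test of validity can be illustrated computationally (a toy sketch of our own, not part of the entry’s apparatus): enumerate every interpretation of \(F\), \(G\) and \(H\) over a small finite domain and confirm that none makes Ferio’s premises true and its conclusion false. A check over one finite domain does not, of course, prove validity over all domains; it merely exhibits the “no counterexample among instances” idea in miniature.

```python
from itertools import chain, combinations

domain = {0, 1, 2}

def subsets(s):
    # All subsets of s, as frozensets.
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

counterexamples = []
for F in subsets(domain):
    for G in subsets(domain):
        for H in subsets(domain):
            no_F_is_G    = F.isdisjoint(G)      # No F is G
            some_H_is_G  = not H.isdisjoint(G)  # Some H is G
            some_H_not_F = bool(H - F)          # Some H is not F
            if no_F_is_G and some_H_is_G and not some_H_not_F:
                counterexamples.append((F, G, H))

print(counterexamples)  # [] -- no interpretation makes the premises true and the conclusion false
```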
That, too, is an incomplete specification of formality. The material argument (4) is an instance of:
every instance of which is valid. We must say more to explain why some schemes count as properly formal (and hence a sufficient ground for logical consequence) and others do not. A general answer will articulate the notion of logical form, which is an important issue in its own right (involving the notion of logical constants, among other things). Instead of exploring the details of different candidates for logical form, we will mention different proposals about the point of the exercise.
What is the point in demanding that validity be underwritten by a notion of logical form? There are at least three distinct proposals for the required notion of formality, and each provides a different kind of answer to that question.
We might take the formal rules of logic to be totally neutral with respect to particular features of objects. Laws of logic, on this view, must abstract away from particular features of objects. Logic is formal in that it is totally general. One way to characterise what counts as a totally general notion is by way of permutations. Tarski proposed (1986) that an operation or predicate on a domain counted as general (or logical) if it was invariant under permutations of objects. (A permutation of a collection of objects assigns for each object a unique object in that collection, such that no object is assigned more than once. A permutation of \(\{a, b, c, d\}\) might, for example, assign \(b\) to \(a\), \(d\) to \(b\), \(c\) to \(c\) and \(a\) to \(d\).) A \(2\)-place predicate \(R\) is invariant under permutation if for any permutation \(p\), whenever \(Rxy\) holds, \(Rp(x)p(y)\) holds too. You can see that the identity relation is permutation invariant—if \(x = y\) then \(p(x) = p(y)\)—but the mother-of relation is not. We may have permutations \(p\) such that even though \(x\) is the mother of \(y\), \(p(x)\) is not the mother of \(p(y)\). We may use permutation to characterise logicality for more than predicates too: we may say that a one-place sentential connective ‘\(\bullet\)’ is permutation invariant if and only if, for all \(A\), \(p(\bullet A)\) is true if and only if \(\bullet p(A)\) is true. Defining this rigorously requires establishing how permutations operate on sentences, and this takes us beyond the scope of this article. Suffice it to say, an operation such as negation passes the test of invariance, but an operation such as ‘JC believes that’ fails.
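Tarski’s invariance test is easy to make concrete in code. The sketch below is our own illustration (the three-element domain and the toy mother_of extension are invented for the example): it checks whether a binary relation’s extension is preserved under every permutation of the domain, and confirms that identity passes while mother-of fails.

```python
from itertools import permutations

domain = ['a', 'b', 'c']

# Extensions of two binary relations on the domain (mother_of is a toy example).
identity  = {(x, x) for x in domain}
mother_of = {('a', 'b')}   # a is the mother of b

def invariant(relation):
    """True if the relation's extension is preserved under every permutation of the domain."""
    for perm in permutations(domain):
        p = dict(zip(domain, perm))
        permuted = {(p[x], p[y]) for (x, y) in relation}
        if permuted != relation:
            return False
    return True

print(invariant(identity))   # True: identity is permutation invariant
print(invariant(mother_of))  # False: e.g. swapping a and c moves the extension
```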
A closely related analysis of formality is that formal rules are totally abstract. They abstract away from the semantic content of thoughts or claims, to leave only semantic structure. The terms ‘mother’ and ‘cousin’ enter essentially into argument (5). On this view, expressions such as propositional connectives and quantifiers do not add new semantic content to expressions, but instead add only ways to combine and structure semantic content. Expressions like ‘mother’ and ‘cousin’, by contrast, add new semantic content.
Another way to draw the distinction (or perhaps to draw a different distinction) is to take the formal rules of logic to be constitutive norms for thought, regardless of its subject matter. It is plausible to hold that no matter what we think about, it makes sense to conjoin, disjoin and negate our thoughts to make new thoughts. It might also make sense to quantify. The behaviour, then, of logical vocabulary may be used to structure and regulate any kind of theory, and the norms governing logical vocabulary apply totally universally. The norms of valid argument, on this picture, are those norms that apply to thought irrespective of the particular content of that thought.[1]
Twentieth Century technical work on the notion of logical consequence has centered on two different mathematical tools, proof theory and model theory. Each of these can be seen as explicating different aspects of the concept of logical consequence, backed by different philosophical perspectives.
We have characterized logical consequence as necessary truth preservation in virtue of form. This idea can be explicated formally. One can use mathematical structures to account for the range of possibilities over which truth needs to be preserved. The formality of logical consequence can be explicated formally by giving a special role to the logical vocabulary, taken as constituting the forms of sentences. Let us see how model theory attends to both these tasks.
The model-centered approach to logical consequence takes the validity of an argument to be absence of counterexample. A counterexample to an argument is, in general, some way of manifesting the manner in which the premises of the argument fail to lead to a conclusion. One way to do this is to provide an argument of the same form for which the premises are clearly true and the conclusion is clearly false. Another way to do this is to provide a circumstance in which the premises are true and the conclusion is false. In the contemporary literature, the intuitive idea of a counterexample is developed into a theory of models.
The exact structure of a model will depend on the kind of language at hand (extensional/intensional, first/higher-order, etc.). A model for an extensional first order language consists of a non-empty set which constitutes the domain, and an interpretation function, which assigns to each nonlogical term an extension over the domain—any extension agreeing with its semantic type (individual constants are assigned elements of the domain, function symbols are assigned functions from the domain to itself, one-place first-order predicates are assigned subsets of the domain, etc.).
The contemporary model-theoretic definition of logical consequence traces back to Tarski (1936). It builds on the definition of truth in a model given by Tarski in (1935). Tarski defines a true sentence in a model recursively, by giving truth (or satisfaction) conditions on the logical vocabulary. A conjunction, for example, is true in a model if and only if both conjuncts are true in that model. A universally quantified sentence \(\forall xFx\) is true in a model if and only if each instance is true in the model. (Or, on the Tarskian account of satisfaction, if and only if the open sentence \(Fx\) is satisfied by every object in the domain of the model. For detail on how this is accomplished, see the entry on Tarski’s truth definitions.) Now we can define logical consequence as preservation of truth over models: an argument is valid if in any model in which the premises are true (or in any interpretation of the premises according to which they are true), the conclusion is true too.
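The recursive truth definition and the definition of consequence as truth preservation over models can be sketched for a tiny monadic fragment. This is a toy illustration of our own: the formula encoding is invented, and quantification over all models is approximated by enumerating finite models up to a small bound, something a real model theory cannot do in general.

```python
from itertools import chain, combinations, product

def truth(model, formula, env=None):
    """Tarski-style recursive truth definition for a tiny monadic language."""
    env = env or {}
    op = formula[0]
    if op == 'and':
        return truth(model, formula[1], env) and truth(model, formula[2], env)
    if op == 'all':
        _, var, body = formula
        return all(truth(model, body, {**env, var: d}) for d in model['domain'])
    # Atomic case: (predicate, variable).
    pred, var = formula
    return env[var] in model[pred]

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def valid(premises, conclusion, max_size=3):
    """No counter-model among finite models up to max_size: a bounded
    stand-in for quantifying over *all* models."""
    for n in range(1, max_size + 1):
        dom = set(range(n))
        for P, Q in product(subsets(dom), repeat=2):
            model = {'domain': dom, 'P': P, 'Q': Q}
            if all(truth(model, p) for p in premises) and not truth(model, conclusion):
                return False
    return True

# "Everything is P and Q; therefore everything is P" -- no counter-model.
prem = [('all', 'x', ('and', ('P', 'x'), ('Q', 'x')))]
print(valid(prem, ('all', 'x', ('P', 'x'))))  # True
# "Everything is P; therefore everything is Q" -- a counter-model exists.
print(valid([('all', 'x', ('P', 'x'))], ('all', 'x', ('Q', 'x'))))  # False
```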
The model-theoretic definition is one of the most successful mathematical explications of a philosophical concept to date. It promises to capture both the necessity of logical consequence—by looking at truth over all models—and the formality of logical consequence—by varying the interpretations of the nonlogical vocabulary across models: an argument is valid no matter what the nonlogical vocabulary means. Yet, models are just sets, which are merely mathematical objects. How do they account for the range of possibilities, or circumstances, required? John Etchemendy (1990) offers two perspectives for understanding models. On the representational approach, each model is taken to represent a possible world. If an argument preserves truth over models, we are then guaranteed that it preserves truth over possible worlds, and if we accept the identification of necessity with truth in all possible worlds, we have the necessary truth preservation of logical consequence. The problem with this approach is that it identifies logical consequence with metaphysical consequence, and it gives no account of the formality of logical consequence. On the representational approach, there is no basis for a distinction between the logical and the nonlogical vocabulary, and there is no explanation of why the interpretations of the nonlogical vocabulary are maximally varied. The second perspective on models is afforded by the interpretational approach, by which each model assigns extensions to the nonlogical vocabulary from the actual world: what varies between models is not the world depicted but the meaning of the terms. Here, the worry is that necessity isn’t captured. For instance, on the usual division of the vocabulary into logical and nonlogical, identity is considered a logical term, and can be used to form statements about the cardinality of the domain (e.g., “there are at least two things”) which are true under every reinterpretation, but perhaps are not necessarily true.
On this approach, there is no basis for considering models with domains other than the universe of what actually exists, and specifically, there is no explanation of model theory’s use of domains of different sizes. Each approach, as described here, is flawed with respect to our analysis of logical consequence as necessary and formal. The interpretational approach, by looking only at the actual world, fails to account for necessity, and the representational approach fails to account for formality (for details, see Etchemendy 1990, Sher 1996, and Shapiro 1998, and for refinements see Etchemendy 2008). A possible response to Etchemendy would be to blend the representational and the interpretational perspectives, viewing each model as representing a possible world under a re-interpretation of the nonlogical vocabulary (Shapiro 1998; see also Sher 1996 and Hanson 1997 for alternative responses).
One of the main challenges set by the model-theoretic definition of logical consequence is to distinguish between the logical and the nonlogical vocabulary. The logical vocabulary is defined in all models by the recursive clauses (such as those mentioned above for conjunction and the universal quantifier), and in that sense its meaning is fixed. The choice of the logical vocabulary determines the class of models considered when evaluating validity, and thus it determines the class of the logically valid arguments. Now, while each formal language is typically defined with a choice of a logical vocabulary, one can ask for a more principled characterization of logical vocabulary. Tarski left the question of a principled distinction open in his 1936, and only gave the lines of a relativistic stance, by which different choices of the logical vocabulary may be admissible. Others have proposed criteria for logicality, demanding that logical constants be appropriately formal, general or topic neutral (for references and details, see the entry on logical constants). Note that a choice of the logical vocabulary is a special case of setting constraints on the class of models to be used. It has been suggested that the focus on criteria for the logical vocabulary misses this point, and that more generally the question is which semantic constraints should be adopted, limiting the admissible models for a language (Sagi 2014a, Zinke 2017).
Another challenge faced by the model-theoretic account is due to the limitations of its set-theoretic basis. Recall that models are sets. The worry is that truth-preservation over models might not guarantee necessary truth preservation—moreover, it might not even guarantee material truth preservation (truth preservation in the actual world). The reason is that each model domain is a set, but the actual world presumably contains all sets, and as a collection which includes all sets is too “large” to be a set (it constitutes a proper class), the actual world is not accounted for by any model (see Shapiro 1987).
One way of dealing with this worry is to employ external means, such as proof theory, in support of the model-theoretic definition. This is done by Georg Kreisel in his “squeezing argument”, which we present in section 3.3. Kreisel’s argument crucially depends on the language in question having a sound and complete proof system. Another option is to use set-theoretic reflection principles. Generally speaking, reflection principles state that whatever is true of the universe of sets is already true in an initial segment thereof (which is always a set). If reflection principles are accepted, then, at least as concerns the relevant language, one can argue that an argument is valid if and only if there is no counter set-model (see Kreisel 1967, Shapiro 1987, Kennedy & Väänänen 2017).
Finally, the explanation of logical consequence in terms of truth in models is typically preferred by “Realists”, who take truth of sentences to be independent of what can be known. Explaining logical consequence in terms of truth in models is rather close to explaining logical consequence in terms of truth, and the analysis of truth-in-a-model is sometimes taken to be an explication of truth in terms of correspondence, a typically Realist notion. Some, however, view logical consequence as having an indispensable epistemic component, having to do with the way we establish the conclusion on the basis of the premises. “Anti-realists”, who eschew taking truth (or at least, correspondence-truth) as an explanatory notion, will typically prefer explaining logical consequence in terms of proof—to which we turn next.
On the proof-centered approach to logical consequence, the validity of an argument amounts to there being a proof of the conclusions from the premises. Exactly what proofs are is a big issue, but the idea is fairly plain (at least if you have been exposed to some proof system or other). Proofs are made up of small steps, the primitive inference principles of the proof system. The 20th Century has seen very many different kinds of proof systems, from so-called Hilbert proofs, with simple rules and complex axioms, to natural deduction systems, with few (or even no) axioms and very many rules.
The proof-centered approach highlights epistemic aspects of logical consequence. A proof does not merely attest to the validity of the argument: it provides the steps by which we can establish this validity. And so, if a reasoner has grounds for the premises of an argument, and they infer the conclusion via a series of applications of valid inference rules, they thereby obtain grounds for the conclusion (see Prawitz 2012). One can go further and subscribe to inferentialism, the view by which the meaning of expressions is determined by their role in inference. The idea is that our use of a linguistic expression is regulated by rules, and mastering the rules suffices for understanding the expression. This gives us a preliminary restriction on what semantic values of expressions can be: they cannot make any distinctions not accounted for by the rules. One can then go even further, and reject any kind of meaning that goes beyond the rules—adopting the later Wittgensteinian slogan “meaning is use”. This view is favored by anti-realists about meaning, since meaning on this view is fully explained by what is knowable.
The condition of necessity on logical consequence obtains a new interpretation in the proof-centered approach. The condition can be reformulated thus: in a valid argument, the truth of the conclusion follows from the truth of the premises by necessity of thought (Prawitz 2005). Let us parse this formulation. Truth is understood constructively: sentences are true in virtue of potential evidence for them, and the facts described by true sentences are thus conceived as constructed in terms of potential evidence. (Note that one can completely forgo reference to truth, and instead speak of assertibility or acceptance of sentences.) Now, the necessity of thought by which an argument is valid is explained by the meaning of the terms involved, which compels us to accept the truth of the conclusion given the truth of the premises. Meanings of expressions, in turn, are understood through the rules governing their use: the usual truth conditions give way to proof conditions of formulas containing an expression.
One can thus provide a proof-theoretic semantics for a language (Schroeder-Heister 1991). When presenting his system of natural deduction, Gentzen remarked that the introduction rules for the logical expressions represent their “definitions,” and the elimination rules are consequences of those definitions (Gentzen 1933). For example, the introduction rule for conjunction dictates that a conjunction \(A \amp B\) may be inferred from both conjuncts \(A\) and \(B\), and this rule captures the meaning of the connective. Conversely, the elimination rule for conjunction says that from \(A \amp B\) one may infer both \(A\) and \(B\). The universal quantifier rules tell us that from the universally quantified claim \(\forall xFx\) we can infer any instance \(Fa\), and we can infer \(\forall xFx\) from the instance \(Fa\), provided that no other assumption has been made involving the name \(a\). Under certain requirements, one can show that the elimination rule is validated by the introduction rule.
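To see how intro and elim rules function as the primitive steps of a proof, here is a hypothetical mini-checker of our own devising (not a standard system) covering just the conjunction rules. A derivation is a list of steps; each step names a rule and the earlier steps it draws on, and the checker verifies that every step is a correct application.

```python
# A derivation is a list of (formula, rule, indices-of-earlier-steps).
# Formulas: plain strings for atoms, ('and', left, right) for conjunctions.
def check(derivation, assumptions):
    derived = []
    for formula, rule, refs in derivation:
        prems = [derived[i] for i in refs]
        if rule == 'assume':
            ok = formula in assumptions
        elif rule == 'and_intro':      # from A and B, infer A & B
            ok = formula == ('and', prems[0], prems[1])
        elif rule == 'and_elim_l':     # from A & B, infer A
            ok = prems[0][0] == 'and' and formula == prems[0][1]
        elif rule == 'and_elim_r':     # from A & B, infer B
            ok = prems[0][0] == 'and' and formula == prems[0][2]
        else:
            ok = False
        if not ok:
            return False
        derived.append(formula)
    return True

# From the assumption A & B, derive B & A.
A, B = 'A', 'B'
deriv = [
    (('and', A, B), 'assume', []),
    (B, 'and_elim_r', [0]),
    (A, 'and_elim_l', [0]),
    (('and', B, A), 'and_intro', [1, 2]),
]
print(check(deriv, assumptions=[('and', A, B)]))  # True
```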
One of the main challenges for the proof-centered approach is that of distinguishing between rules that are genuinely meaning-determining and those that are not. Some rules for connectives, if added to a system, would lead to triviality. Prior (1960) offered the following rules for a connective “\(\tonk\)”. Its introduction rule says that from \(A\) one can infer \(A \tonk B\), and its elimination rule says that from \(A \tonk B\) one can infer \(B\). With the introduction of these rules, the system becomes trivial so long as at least one thing is provable, since from any assumption \(A\) one can derive any conclusion \(B\). Some constraints have to be posed on inference rules, and much of the subsequent literature has been concerned with these constraints (Belnap 1962, Dummett 1991, Prawitz 1974).
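Prior’s point can be made vivid by mechanizing the two-step tonk derivation (a throwaway sketch; the example sentences are invented): apply tonk-introduction to the premise, then tonk-elimination to the result, and any goal at all is “derived”.

```python
def derive_anything(premise, goal):
    """Prior's tonk rules take us from any premise to any goal in two steps."""
    step1 = premise                # given
    step2 = ('tonk', step1, goal)  # tonk-introduction: from A, infer A tonk B
    step3 = step2[2]               # tonk-elimination: from A tonk B, infer B
    return step3

print(derive_anything('snow is white', 'pigs fly'))  # 'pigs fly'
```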
To render the notions of proof and validity more systematized, Prawitz has introduced the notion of a canonical proof. A sentence might be proved in several different ways, but it is the direct, or canonical, proof that is constitutive of its meaning. A canonical proof is a proof whose last step is an application of an introduction rule, and its immediate subproofs are canonical (unless they have free variables or undischarged assumptions—for details see Prawitz 2005). A canonical proof is conceived as giving direct evidence for the sentence proved, as it establishes the truth of the sentence by the rule constitutive of the meaning of its connectives. For more on canonical proofs and the ways other proofs can be reduced to them, see the entry on proof-theoretic semantics.
We have indicated how the condition of necessity can be interpreted in the proof-centered approach. The condition of formality can be accounted for as well. Note that on the present perspective as well, there is a division of the vocabulary into logical and nonlogical. This division can be used to define substitutions of an argument. A substitution of an argument is an argument obtained from the original one by replacing the nonlogical terms with terms of the same syntactic category in a uniform manner. A definition of validity that respects the condition of formality will entail that an argument is valid if and only if all its substitutions are valid, and in the present context, this is a requirement that there is a proof of all its substitutions. This condition is satisfied in any proof system where rules are given only for the logical vocabulary. Of course, in the proof-centered approach as well, there is a question of distinguishing the logical vocabulary (see the entry on logical constants).
Finally, it should be noted that a proof-theoretic semantics can be given for classical logic as well as a variety of non-classical logics. However, due to the epistemic anti-realist attitude that lies at the basis of the proof-centered approach, its proponents have typically advocated intuitionistic logic (see Dummett 1991).
For more on the proof-centered perspective and on proof-theoretic semantics, see the entry on proof-theoretic semantics.
The proof-theoretic and model-theoretic perspectives have been considered as providing rival accounts of logical consequence. However, one can also view “logical consequence” and “validity” as expressing cluster concepts: “A number of different, closely related notions go by those names. They invoke matters of modality, meaning, effectiveness, justification, rationality, and form” (Shapiro 2014). One can also note that the division between the model-theoretic and the proof-theoretic perspectives is a modern one, and it was only made possible when tools for metamathematical investigations were developed. Frege’s Begriffsschrift, for instance, which predates the development of those tools, is formulated as an axiomatic proof system, but the meanings of the connectives are given via truth conditions.
Once there are two different analyses of a relation of logical consequence, one can ask about possible interactions, and we’ll do that next. One can also ask what general features such a relation has independently of its analysis as proof-theoretic or model-theoretic. One way of answering this question goes back to Tarski, who introduced the notion of consequence operations. For our purposes, we note only some features of such operations. Let \(Cn(X)\) be the consequences of \(X\). (One can think of the operator \(Cn\) as deriving from a prior consequence relation which, when taking \(X\) as ‘input (or premise)’ set, tells you what follows from \(X\). But one can also see the ‘process’ in reverse, and a key insight is that consequence relations and corresponding operations are, in effect, interdefinable. See the entry on algebraic propositional logic for details.) Among some of the minimal conditions one might impose on a consequence relation are the following two (from Tarski):

1. \(X \subseteq Cn(X)\)
2. \(Cn(Cn(X)) = Cn(X)\)
If you think of \(X\) as a set of claims, then the first condition tells you that the consequences of a set of claims include the claims themselves. The second condition demands that the consequences of \(X\) just are the consequences of the consequences of \(X\). Both of these conditions can be motivated from reflection on the model-theoretic and proof-theoretic approaches; and there are other such conditions too. (For a general discussion, see the entry on algebraic propositional logic.) But as with many foundational issues (e.g., ‘what are the essential features of consequence relations in general?’), even such minimal conditions are contentious in philosophical logic and the philosophy of logic. For example, some might take condition (2) to be objectionable on the grounds that, for reasons of vagueness (or more), important consequence relations over natural languages (however formalized) are not generally transitive in ways reflected in (2). (See Tennant 1994, Cobreros et al. 2012, and Ripley 2013 for philosophical motivations against transitive consequence.) But we leave these issues for more advanced discussion.
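Both Tarski conditions can be checked concretely for a toy semantic consequence operator (our sketch; the four-sentence stock over two atoms is invented for the example). Define \(Cn(X)\) as the set of stock sentences true in every valuation satisfying all of \(X\); inclusion and idempotence then fall out.

```python
from itertools import product

atoms = ['p', 'q']
valuations = [dict(zip(atoms, vs)) for vs in product([True, False], repeat=len(atoms))]

# A small fixed stock of sentences, each given by its truth function.
sentences = {
    'p':       lambda v: v['p'],
    'q':       lambda v: v['q'],
    'p and q': lambda v: v['p'] and v['q'],
    'p or q':  lambda v: v['p'] or v['q'],
}

def Cn(X):
    """Semantic consequences of X, restricted to the stock of sentences above."""
    models_of_X = [v for v in valuations if all(sentences[s](v) for s in X)]
    return {s for s, f in sentences.items() if all(f(v) for v in models_of_X)}

X = {'p and q'}
print(sorted(Cn(X)))          # ['p', 'p and q', 'p or q', 'q']
print(X <= Cn(X))             # inclusion:   True
print(Cn(Cn(X)) == Cn(X))     # idempotence: True
```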
While the philosophical divide between Realists and Anti-realists remains vast, proof-centered and model-centered accounts of consequence have been united (at least with respect to extension) in many cases. The great soundness and completeness theorems for different proof systems (or, from the other angle, for different model-theoretic semantics) show that, in an important sense, the two approaches often coincide, at least in extension. A proof system is sound with respect to a model-theoretic semantics if every argument that has a proof in the system is model-theoretically valid. A proof system is complete with respect to a model-theoretic semantics if every model-theoretically valid argument has a proof in the system. While soundness is a principal condition on any proof system worth its name, completeness cannot always be expected. Admittedly, these definitions are biased towards the model-theoretic perspective: the model-theoretic semantics sets the standard for what is “sound” and “complete”. Leaving terminological issues aside, if a proof system is both sound and complete with respect to a model-theoretic semantics (as, significantly, in the case of first order predicate logic), then the proof system and the model-theoretic semantics agree on which arguments are valid.
Completeness results can also support the adequacy of the model-theoretic account, as in Kreisel’s “squeezing argument”. We have noted a weakness of the model-theoretic account: all models are sets, and so it might be that no model represents the actual world. Kreisel has shown that if we have a proof system that is “intuitively sound” and is complete with respect to the model-theoretic semantics, we won’t be missing any models: every intuitively invalid argument will have a counter-model. Let \(L\) be a first order language. Let \(Val\) denote the set of intuitively valid arguments in \(L\). Kreisel takes intuitive validity to be preservation of truth across all structures (whether sets or not). His analysis privileges the modal analysis of logical consequence—but note that the weakness we are addressing is that considering set-theoretic structures might not be enough. Let \(V\) denote the set of model-theoretic validities in \(L\): arguments that preserve truth over models. Let \(D\) be the set of deductively valid arguments, by some accepted proof system for first order logic. Now, any such proof system is “intuitively sound”, meaning that what is deductively valid by the system is intuitively valid. This gives us \(D \subseteq Val\). And obviously, by the definitions we’ve given, \(Val \subseteq V\), since an argument that preserves truth over all structures will preserve truth over set-structures.
By the completeness result for first order logic, we have: \(V \subseteq D\). Putting the three inclusions together (the “squeeze”), we get that all three sets must be equal, and in particular: \(V = Val\). In this way, we’ve proven that if there is some structure that is a counterexample to a first order argument, then there is a set-theoretic one.
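The three inclusions can be displayed as a single chain, each link supplied by one of the observations just made (intuitive soundness, the definition of intuitive validity, and the completeness theorem, respectively):

```latex
% The "squeeze": three inclusions force the three sets to coincide.
D \;\subseteq\; Val \;\subseteq\; V \;\subseteq\; D
\qquad\text{hence}\qquad
D = Val = V
```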
Another arena for the interaction between the proof-theoretic and the model-theoretic perspectives has to do with the definition of the logical vocabulary. For example, one can hold a “moderate” inferentialist view which defines the meanings of logical connectives through their semantics (i.e., truth conditions) but demands that the meaning of a connective be determined by inference rules. Carnap has famously shown that the classical inference rules allow non-standard interpretations of the logical expressions (Carnap 1943). Much recent work in the field has been devoted to the exact nature and extent of Carnap’s categoricity problem (Raatikainen 2008, Murzi and Hjortland 2009, Woods 2012, Garson 2013, Peregrin 2014, Bonnay and Westerståhl 2016; see also the entry on sentence connectives in formal logic).
Finally, we should note that while model theory and proof theory are the most prominent contenders for the explication of logical consequence, there are alternative frameworks for formal semantics, such as algebraic semantics, game-theoretic semantics, and dynamic semantics (see Wansing 2000).
There has also been dissent, even in Aristotle’s day, as to the “shape” of logical consequence. In particular, there is no settled consensus on the number of premises or conclusions appropriate to “tie together” the consequence relation.
In Aristotle’s syllogistic, a syllogism relates two or more premises and a single conclusion. In fact, Aristotle focuses on arguments with exactly two premises (the major premise and the minor premise), but nothing in his definition forbids arguments with three or more premises. Surely, such arguments should be permitted: if, for example, we have one syllogism from two premises \(A\) and \(B\) to a conclusion \(C\), and we have another from the premises \(C\) and \(D\) to the conclusion \(E\), then in some sense, the longer argument from premises \(A, B\) and \(D\) to conclusion \(E\) is a good one. It is found by chaining together the two smaller arguments. If the two original arguments are formally valid, then so too is the longer argument from three premises. On the other hand, on a common reading of Aristotle’s definition of syllogism, one-premise arguments are ruled out—but this seems arbitrary, as even Aristotle’s own “conversion” inferences are thus excluded.
For such reasons, many have taken the relation of logical consequence to pair an arbitrary (possibly infinite) collection of premises with a single conclusion. This account has the added virtue of having the special case of an empty collection of premises. Arguments to a conclusion from no premises whatsoever are those in which the conclusion is true by logic alone. Such “conclusions” are logical truths (sometimes tautologies) or, on the proof-centered approach, theorems.
Perhaps there is a reason to allow the notion of logical consequence to apply even more broadly. In Gentzen’s proof theory for classical logic, a notion of consequence is defined to hold between multiple premises and multiple conclusions. The argument from a set \(X\) of premises to a set \(Y\) of conclusions is valid if the truth of every member of \(X\) guarantees (in the relevant sense) the truth of some member of \(Y\). There is no doubt that this is formally perspicuous, but the philosophical applicability of the multiple-premise, multiple-conclusion sense of logical consequence remains an open philosophical issue. In particular, those Anti-realists who take logical consequence to be defined in terms of proof (such as Michael Dummett) reject a multiple conclusion analysis of logical consequence. For an Anti-realist, who takes good inference to be characterised by the way warrant is transmitted from premise to conclusion, it seems that a multiple conclusion analysis of logical consequence is out of the question. In a multiple conclusion argument from \(A\) to \(B, C\), any warrant we have for \(A\) does not necessarily transmit to \(B\) or to \(C\): the only conclusion we are warranted to draw is the disjunction \(B\) or \(C\), so it seems that for an analysis of consequence in terms of warrant we need to understand some logical vocabulary (in this case, disjunction) in order to understand the consequence relation. This is unacceptable if we hope to use logical consequence as a tool to define that logical vocabulary. No such problems appear to arise in a single conclusion setting. (However, see Restall (2005) for a defence of multiple conclusion consequence for Anti-realists; and see Beall (2011) for a defence of certain sub-classical multiple-conclusion logics in the service of non-classical solutions to paradox.)
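In Gentzen’s sequent notation, the multiple-conclusion reading can be displayed as follows (a schematic rendering of the definition given above, with “guarantees” left informal):

```latex
% A sequent X |- Y is valid iff the truth of all members of X
% guarantees the truth of at least one member of Y.
X \vdash Y
\quad\text{iff}\quad
\text{if every } A \in X \text{ is true, then some } B \in Y \text{ is true}
```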
Another line along which the notion has been broadened (or along which some have sought to broaden it) involves recent work on substructural logic. The proposal here is that we may consider doing without some of the standard rules governing the way that premises (or conclusions) of an argument may be combined. Structural rules deal with the shape or structure of an argument in the sense of the way that the premises and conclusions are collected together, and not the way that those statements are constructed. The structural rule of weakening, for example, states that if an argument from some collection of premises \(X\) to a conclusion \(C\) is valid, then the argument from \(X\) together with another premise \(A\) to the conclusion \(C\) is also valid. This rule has seemed problematic to some (chiefly on the grounds that the extra premise \(A\) need not be used in the derivation of the conclusion \(C\) and hence, that \(C\) does not follow from the premises \(X, A\) in the appropriate sense). Relevant logics are designed to respect this thought, and do without the structural rule of weakening. (For the proof-theoretic picture, see Negri and von Plato (2001).)
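In sequent notation, weakening (on the left) can be written with the given sequent above the line and the resulting sequent below it:

```latex
% Weakening (left): an extra, possibly irrelevant, premise A
% may always be added without disturbing validity.
\frac{X \vdash C}{X, A \vdash C}\;(\text{weakening})
```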
Other structural rules are also called into question. Another possible application of substructural logic is found in the analysis of paradoxes such as Curry’s paradox. A crucial move in the reasoning in Curry’s paradox and other paradoxes like it seems to require the step reducing two applications of an assumption to a single one (which is then discharged). According to some, this step is problematic, and so they must distinguish an argument from \(A\) to \(B\) and an argument from \(A, A\) to \(B\). The rule of contraction is rejected.
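The rule at issue, contraction, can be written in the same sequent style:

```latex
% Contraction (left): two uses of the assumption A collapse to one.
% Rejecting this rule blocks the problematic step in Curry-style
% paradoxes, and forces a distinction between the arguments
% A |- B and A, A |- B.
\frac{X, A, A \vdash B}{X, A \vdash B}\;(\text{contraction})
```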
In yet other examples, the order in which premises are used is important, and an argument from \(A, B\) to \(C\) is to be distinguished from an argument from \(B, A\) to \(C\). (For more details, consult the entry on substructural logics.) There is no doubt that the formal systems of substructural logics are elegant and interesting, but the case for the philosophical importance and applicability of substructural logics is not closed.
We have touched only on a few central aspects of the notion of logical consequence, leaving further issues, debates, and, in particular, details to emerge from particular accounts (accounts that are well-represented in this encyclopedia). But even a quick glance at the related links section (below) will attest to a fairly large number of different logical theories, different accounts of what (logically) follows from what. And that observation raises a question with which we will close: is there one notion of logical consequence that is the target of all such theories, or are there many?
We all agree that there are many different formal techniques for studying logical consequence, and very many different formal systems that each propose different relations of logical consequence. But given a particular argument, is the question as to whether it is deductively valid an all-or-nothing affair? The orthodoxy, logical monism, answers affirmatively. There is one relation of deductive consequence, and different formal systems do a better or worse job of modelling that relation. (See, for example, Priest 1999 for a defence of monism.) The logical contextualist or relativist says that the validity of an argument depends on the subject matter or the frame of reference or some other context of evaluation. (For example, a use of the law of the excluded middle might be valid in a classical mathematics textbook, but not in an intuitionistic mathematics textbook, or in a context where we reason about fiction or vague matters.) The logical pluralist, on the other hand, says that of one and the same argument, in one and the same context, there are sometimes different things one should say with respect to its validity. For example, perhaps one ought to say that the argument from a contradictory collection of premises to an unrelated conclusion is valid in the sense that, in virtue of its form, it is not the case that the premises are true and the conclusion untrue (so it is valid in one precise sense), but that nonetheless, in another sense, the form of the argument does not ensure that the truth of the premises leads to the truth of the conclusion. The monist or the contextualist holds that in the case of the one argument a single answer must be found for the question of its validity. The pluralist denies this. The pluralist holds that the notion of logical consequence itself may be made more precise in more than one way, just as the original idea of a “good argument” bifurcates into deductive and inductive validity (see Beall and Restall 2000 for a defence of pluralism).
There are many (many) other works on this topic, but the bibliographies of the following will serve as a suitable resource for exploring the field.
Aristotle, General Topics: logic | Bolzano, Bernard | Carnap, Rudolf | Frege, Gottlob: theorem and foundations for arithmetic | logic, normative status of | logic: algebraic propositional | logic: classical | logic: inductive | logic: intuitionistic | logic: non-monotonic | logic: substructural | logical constants | logical form | logical pluralism | logical truth | model theory | proof theory | Russell, Bertrand | schema | semantics: proof-theoretic
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054