Stanford Encyclopedia of Philosophy

Defeasible Reasoning

First published Fri Jan 21, 2005; substantive revision Fri Jun 13, 2025

Reasoning is defeasible when the corresponding argument is rationally compelling but not deductively valid. The truth of the premises of a good defeasible argument provides support for the conclusion, even though it is possible for the premises to be true and the conclusion false. In other words, the relationship of support between premises and conclusion is a tentative one, potentially defeated by additional information. Philosophers have studied the nature of defeasible reasoning since Aristotle’s analysis of dialectical reasoning in the Topics and the Posterior Analytics, but the subject has been studied with unique intensity over the last forty years, largely due to the interest it attracted from the artificial intelligence movement in computer science. There have been two approaches to the study of reasoning: treating it either as a branch of epistemology (the study of knowledge) or as a branch of logic. In recent work, the term defeasible reasoning has typically been limited to inferences involving rough-and-ready, exception-permitting generalizations, that is, inferring what has or will happen on the basis of what normally happens. This narrower sense of defeasible reasoning, which will be the subject of this article, excludes from the topic the study of other forms of non-deductive reasoning, including inference to the best explanation, abduction, analogical reasoning, and scientific induction. This exclusion is to some extent artificial, but it reflects the fact that the formal study of these other forms of non-deductive reasoning remains quite rudimentary.


1. History

Defeasible reasoning has been the subject of study by both philosophers and computer scientists (especially those involved in the field of artificial intelligence). The philosophical history of the subject goes back to Aristotle, while the field of artificial intelligence has greatly intensified interest in it over the last forty years.

1.1 Philosophy

According to Aristotle, deductive logic (especially in the form of the syllogism) plays a central role in the articulation of scientific understanding, deducing observable phenomena from definitions of natures that hold universally and without exception. However, in the practical matters of everyday life, we rely upon generalizations that hold only “for the most part”, under normal circumstances, and the application of such common sense generalizations involves merely dialectical reasoning, reasoning that is defeasible and falls short of deductive validity. Aristotle lays out a large number and great variety of examples of such reasoning in his work entitled the Topics.

Investigations in logic after Aristotle (from later antiquity through the twentieth century) seem to have focused exclusively on deductive logic. This continued to be true as the predicate logic was developed by Peirce, Frege, Russell, Whitehead, and others in the late nineteenth and early twentieth centuries. With the collapse of logical positivism in the mid-twentieth century (and the abandonment of attempts to treat the physical world as a logical construction from facts about sense data), new attention was given to the relationship between sense perception and the external world. Roderick Chisholm (Chisholm 1957; Chisholm 1966) argued that sensory appearances give good, but defeasible, reasons for believing in corresponding facts about the physical world. If I am “appeared to redly” (have the sensory experience as of being in the presence of something red), then, Chisholm argued, I may presume that I really am in the presence of something red. This presumption can, of course, be defeated, if, for example, I learn that my environment is relevantly abnormal (for instance, all the ambient light is red).

H. L. A. Hart (Hart 1951), at a 1949 meeting of the Aristotelian Society, noted the centrality of defeasible reasoning in the law, especially within the Anglo-American common law tradition. Hart pointed out that judges must take into account exceptional circumstances in which a legal principle cannot be applied at all or must be applied in a weakened form. Hart refers explicitly to conditions that can defeat (Hart 1951, p. 175) the claim that a contract exists, even when the standard definition of ‘contract’ is satisfied. A defeasible logic is needed because a judge is required to make a judgment on the basis of an incomplete set of facts: those facts that are presented to the judge by the two parties as germane to the claim.

The idea of defeasibility also showed up in work on the formal theory of argumentation, including Stephen Toulmin’s The Uses of Argument (Toulmin 1964). Toulmin, building on Hart’s observations, argues for the importance of distinguishing between warrants and rebuttals (Toulmin 1964, 101ff, 143ff). The formal theory of argumentation (van Eemeren et al. 2020; Prakken and Vreeswijk 2002) has proved a fruitful ground for the development of models of defeasible reasoning.

John L. Pollock developed Chisholm’s idea into a theory of prima facie reasons and defeaters of those reasons (Pollock 1967, 1970, 1974, 1987, 1995, 2010). Pollock distinguished between two kinds of defeaters of a defeasible inference: rebutting defeaters (which give one a prima facie reason for believing the denial of the original conclusion) and undercutting defeaters (which give one a reason for doubting that the usual relationship between the premises and the conclusion holds in the given case). According to Pollock, a conclusion is warranted, given all of one’s evidence, if it is supported by an ultimately undefeated argument whose premises are drawn from that evidence.

1.2 Artificial Intelligence

As the subdiscipline of artificial intelligence took shape in the 1960s, pioneers like John M. McCarthy and Patrick J. Hayes soon discovered the need to represent and implement the sort of defeasible reasoning that had been identified by Aristotle and Chisholm. McCarthy and Hayes (McCarthy and Hayes 1969) developed a formal language they called the “situation calculus,” for use by expert systems attempting to model changes and interactions among a domain of objects and actors. McCarthy and Hayes encountered what they called the frame problem: the problem of deciding which conditions will not change in the wake of an event. They required a defeasible principle of inertia: the presumption that any given condition will not change, unless required to do so by actual events and dynamic laws. In addition, they encountered the qualification problem: the need for a presumption that an action can be successfully performed, once a short list of essential prerequisites has been met. McCarthy (McCarthy 1977, 1038–1044) suggested that the solution lay in a logical principle of circumscription: the presumption that the actual situation is as unencumbered with abnormalities and oddities (including unexplained changes and unexpected interferences) as is consistent with our knowledge of it (McCarthy 1982; McCarthy 1986). In effect, McCarthy suggests that it is warranted to believe whatever is true in all the minimal (or otherwise preferred) models of one’s initial information set.

In the early 1980s, several systems of defeasible reasoning were proposed by others in the field of artificial intelligence: Ray Reiter’s default logic (Reiter 1980; Etherington and Reiter 1983, 104–108), McDermott and Doyle’s Non-Monotonic Logic I (McDermott and Doyle 1982), Robert C. Moore’s Autoepistemic Logic (Moore 1985), and Hector Levesque’s formalization of the “all I know” operator (Levesque 1990). These early proposals involved the search for a kind of fixed point or cognitive equilibrium. Special rules (called default rules by Reiter) permit drawing certain conclusions so long as these conclusions are consistent with what one knows, including all that one knows on the basis of these very default rules. In some cases, no such fixed point exists, and, in others, there are multiple, mutually inconsistent fixed points. In addition, these systems were procedural or computational in nature, in contrast to the semantic characterization of warranted conclusions (in terms of preferred models) in McCarthy’s circumscription system. Later work in artificial intelligence has tended to follow McCarthy’s lead in this respect.

2. Applications and Motivation

Philosophers and theorists of artificial intelligence have found a wide variety of applications for defeasible reasoning. In some cases, the defeasibility seems to be grounded in some aspect of the subject or the context of communication, and in other cases in facts about the objective world. The first includes defeasible rules as communicative or representational conventions and autoepistemic reasoning (reasoning about one’s own knowledge and lack of knowledge). The latter, the objective sources of defeasibility, include defeasible obligations, defeasible laws of nature, induction, abduction, and Ockham’s razor (the presumption that the world is as uncomplicated as possible).

2.1 Defeasibility as a Convention of Communication

Much of John McCarthy’s early work in artificial intelligence concerned the interpretation of stories and puzzles (McCarthy and Hayes 1969; McCarthy 1977). McCarthy found that we often make assumptions based on what is not said. So, for example, in a puzzle about safely crossing a river by canoe, we assume that there are no bridges or other means of conveyance available. Similarly, when using a database to store and convey information, the information that, for example, no flight is scheduled at a certain time is represented simply by not listing such a flight. Inferences based on these conventions are defeasible, however, because the conventions can themselves be explicitly abrogated or suspended.
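The database convention just described is what AI researchers call the closed-world assumption: what the database does not list is presumed false. A minimal sketch in Python (the schedule data and function name are invented for illustration):

```python
# Closed-world assumption: a flight not listed in the database is
# presumed not to exist. The schedule below is an invented example.
schedule = {("NYC", "BOS"): ["09:00", "17:00"]}

def flight_at(origin, dest, time):
    """Closed-world query: absence from the database counts as falsity."""
    return time in schedule.get((origin, dest), [])

print(flight_at("NYC", "BOS", "09:00"))  # True: listed
print(flight_at("NYC", "BOS", "12:00"))  # False: not listed, presumed absent
```

The “no flight at 12:00” conclusion is defeasible: adding a row to `schedule` retracts it, which is exactly the explicit abrogation of the convention mentioned above.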

Nicholas Asher and his collaborators (Lascarides and Asher 1993, Asher and Lascarides 2003, Vieu, Bras, Asher, and Aurnague 2005, Txurruka and Asher 2008) have argued that defeasible reasoning is useful in unpacking the pragmatics of conversational implicature.

2.2 Autoepistemic Reasoning

Robert C. Moore (Moore 1985) pointed out that we sometimes infer things about the world based on our not knowing certain things. So, for instance, I might infer that I do not have a sister, since, if I did, I would certainly know it, and I do not in fact know that I have a sister. Such an inference is, of course, defeasible, since if I subsequently learn that I have a sister after all, the basis for the original inference is nullified.

2.3 Semantics for Generics and the Progressive

Generic terms (like birds in Birds fly) are expressed in English by means of bare common noun phrases (without determiner). Adverbs like normally and typically are also indicators of generic predication. As Asher and Pelletier (Asher and Pelletier 1997) have argued, the semantics for such sentences seems to involve intentionality: a generic sentence can be true even if the majority of the kind, or even all of the kind, fail to conform to the generalization. It can be true that birds fly even if, as a result of a freakish accident, all surviving birds are abnormally flightless. A promising semantic theory for the generic is to represent generic predication by means of a defeasible rule or conditional.

The progressive verb involves a similar kind of intentionality (Asher 1992). If Jones is crossing the street, then it would normally be the case that Jones will succeed in crossing the street. However, this inference is clearly defeasible: Jones might be hit by a truck midway across and never complete the crossing.

2.4 Defeasible Reasons

Jonathan Dancy (Dancy 1993, 2004) has developed and defended an anti-Humean conception of practical reasoning, according to which it is the facts themselves, and not our desires, aversions, or other attitudes towards those facts, that constitute reasons for acting. These facts consist of particulars’ having properties, and those properties provide in each such case some reason for acting--as, for example, someone’s need can provide a reason for meeting that need. However, each general property can provide a reason only defeasibly: not only can a reason be overwhelmed by contrary considerations, but a property’s valence for action can be completely neutralized or even reversed by further considerations. For example, even if giving pleasure is in general a reason in favor of acting in a certain way, the fact that some action would give pleasure to those pleased by the suffering of others is a reason against and not for so acting. Dancy has introduced (in Dancy 2004) the concepts of intensifiers and attenuators, applying to facts that strengthen or weaken the force of reasons. In the extreme case, a fact can disable a reason altogether, corresponding to what Joseph Raz had described as an exclusionary reason (Raz 1975), and to John Pollock’s idea of an undercutting defeater.

To the extent that our practical reasoning is guided at all by general rules or principles (something that Dancy explicitly denies), the reasoning must be defeasible, as John Horty has argued (Horty 2007b). From this perspective, Dancy’s thesis of moral particularism corresponds to the potential defeasibility of all general reasons (see Lance and Little 2004, 2007). Defeasible logic can enable general rules to play an indispensable role despite the reasons holism that Dancy has uncovered. John Brunero (Brunero 2022) has worked out some of the principles of defeasible reasoning implicit in Dancy’s program. See also Asarnow 2016 and Way 2017. In contrast, Markos Valaris (Valaris 2020) has recently argued that the formality of defeasible logic ignores the deeper significance of Dancy’s particularism.

In addition, defeasible reasoning can be used to illuminate moral and legal dilemmas, cases in which general rules come into conflict (see Horty 1994, 2003, 2012). This can be done without attributing logical inconsistency to the conflicting rules and without treating the conflict as merely apparent, i.e., as due to an incomplete representation of the rules.

2.5 Defeasible Obligations

Philosophers have, for quite some time, been interested in defeasible obligations, which give rise to defeasible inferences about what we are, all things considered, obliged to do. David Ross, in 1930, discussed the phenomena of prima facie obligations (Ross 1930, 1939). The existence of a prima facie obligation gives one good, but defeasible, grounds for believing that one ought to fulfill that obligation. When formal deontic logic was developed by Chisholm and others in the 1960s (Chisholm 1963), the use of classical logic gave rise to certain paradoxes, such as Chisholm’s paradox of contrary-to-duty imperatives. These paradoxes can be resolved by recognizing that the inference from imperative to actual duty is a defeasible one (Asher and Bonevac 1996, 1998; Nute 1997). Daniel Bonevac (Bonevac 2018, 2019) has demonstrated the virtues of combining a modal representation of obligation and permission with defeasible logic. Bonevac shows how to realize John Broome’s theory of normative requirements (Broome 1999, 2001) in a logical setting. In his 2019 article on “Free Choice Reasons” (Bonevac 2019), Bonevac tackles the paradoxes involved in imperfect and generic obligations, first formulated by the medieval logician Jean Buridan. It is possible to owe a horse to another farmer, without its being the case that there is any particular horse which is owed. Bonevac suggests that the best solution to the paradox involves using more than two truth-values (as in the case of Kleene’s strong three-valued truth tables), a suggestion that has also been used in reasoning about perception in Barwise-Perry situation theory (Barwise and Perry 1983) and in Koons’s account of causation (Koons 2000, 31–44). See section 4.

2.6 Law

Prakken’s (1997) book provides an extensive treatment of the contributions of techniques from nonmonotonic logic to the formal modeling of legal reasoning. See also Prakken and Sartor (1996, 1998), Hage et al. (1993), Hage (1997), Lodder (1999), Bench-Capon et al. (2004, 2009), and Sartor (2018). Kevin Ashley’s HYPO system (Ashley 1990) employs defeasible reasoning in the study of case-based reasoning in the law.

See Komath on defeasible reasoning in Islamic law (Komath 2024).

2.7 Defeasible Laws of Nature and Scientific Programs

Philosophers David M. Armstrong and Nancy Cartwright have argued that the actual laws of nature are oaken rather than iron (to use Armstrong’s terms) (Armstrong 1983; Armstrong 1997, 230–231; Cartwright 1983). Oaken laws admit of exceptions: they have tacit ceteris paribus (other things being equal) or ceteris absentibus (other things being absent) conditions. As Cartwright points out, an inference based on such a law of nature is always defeasible, since we may discover that additional phenomenological factors must be added to the law in question in special cases.

There are several reasons to think that deductive logic is not an adequate tool for dealing with this phenomenon. In order to apply deduction to the laws and the initial conditions, the laws must be represented in a form that admits of no exceptions. This would require explicitly stating each potentially relevant condition in the antecedent of each law-stating conditional. This is impractical, not only because it makes the statement of each and every law extremely cumbersome, but also because we know that there are many exceptional cases that we have not yet encountered and may not be able to imagine. Defeasible laws enable us to express what we really know to be the case, rather than forcing us to pretend that we can make an exhaustive list of all the possible exceptions.

Tohmé, Delrieux, and Bueno (2011) have argued that defeasible reasoning is crucial to the understanding of scientific research programs.

2.8 Defeasible Principles in Metaphysics and Epistemology

Many classical philosophical arguments, especially those in the perennial philosophy that endured from Plato and Aristotle to the end of scholasticism, can be fruitfully reconstructed by means of defeasible logic. Metaphysical principles, like the laws of nature, may hold in normal cases, while admitting of occasional exceptions. The principle of causality, for example, which plays a central role in the cosmological argument for God’s existence, can plausibly be construed as a defeasible generalization (Koons 2000, 2001).

As discussed above (in section 1.1), prima facie reasons and defeaters of those reasons play a central role in contemporary epistemology, not only in relation to perceptual knowledge, but also in relation to every other source of knowledge: memory, imagination (as an indicator of possibility) and testimony, at the very least. In each case, an impression or appearance provides good but defeasible evidence of a corresponding reality.

Ivan Hu (Hu 2020) has applied defeasible logic to provide a novel and attractive account of the reasoning in the sorites paradoxes.

2.9 Occam’s Razor and the Assumption of a “Closed World”

Prediction always involves an element of defeasibility. If one predicts what will, or what would, under some hypothesis, happen, one must presume that there are no unknown factors that might interfere with those factors and conditions that are known. Any prediction can be upset by such unanticipated interventions. Prediction thus proceeds from the assumption that the situation as modeled constitutes a closed world: that nothing outside that situation could intrude in time to upset one’s predictions. In addition, we seem to presume that any factor that is not known to be causally relevant is in fact causally irrelevant, since we are constantly encountering new factors and novel combinations of factors, and it is impossible to verify their causal irrelevance in advance. This closed-world assumption is one of the principal motivations for McCarthy’s logic of circumscription (McCarthy 1982; McCarthy 1986).

3. Varieties of Approaches

We can treat the study of defeasible reasoning either (i) as a branch of epistemology (the theory of knowledge), or (ii) as a branch of logic. In the epistemological approach, defeasible reasoning can be studied as a form of inference, that is, as a process by which we add to our stock of knowledge. Alternatively, we could treat defeat as a relation between arguments in a disputational discourse. In either version, the epistemological approach is concerned with the obtaining, maintaining, and transmitting of warrant, with the question of when an inference, starting with justified or warranted beliefs, produces a new belief that is also warranted, given potential defeaters. This approach focuses explicitly on the norms of belief persistence and change.

In contrast, a logical approach to defeasible reasoning fastens on a relationship between propositions or possible bodies of information. Just as deductive logic consists of the study of a certain consequence relation between propositions or sets of propositions (the relation of valid implication), so defeasible (or nonmonotonic) logic consists of the study of a different kind of consequence relation. Deductive consequence is monotonic: if a set of premises logically entails a conclusion, then any superset (any set of premises that includes all of the first set) will also entail that conclusion. In contrast, defeasible consequence is nonmonotonic. A conclusion follows defeasibly or nonmonotonically from a set of premises just in case it is true in nearly all of the models that verify the premises, or in the most normal models that do.
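The “most normal models” definition can be made concrete with a toy, circumscription-style sketch. The encoding below is invented for illustration: a single abnormality atom `ab` marks exceptions to the defeasible rule “birds fly”, and a conclusion follows just in case it holds in all minimally abnormal models of the premises.

```python
from itertools import product

ATOMS = ["bird", "penguin", "flies", "ab"]

# Background theory: birds that are not abnormal fly; penguins are
# abnormal, flightless birds. (Invented encoding for illustration.)
def background(m):
    return (not (m["bird"] and not m["ab"]) or m["flies"]) and \
           (not m["penguin"] or (m["ab"] and m["bird"] and not m["flies"]))

def models(premise):
    """All truth-value assignments satisfying the background and the premise."""
    out = []
    for vals in product([True, False], repeat=len(ATOMS)):
        m = dict(zip(ATOMS, vals))
        if background(m) and premise(m):
            out.append(m)
    return out

def follows(premise, conclusion):
    """Defeasible consequence: truth in all most-normal (minimally
    abnormal) models of the premise."""
    ms = models(premise)
    preferred = [m for m in ms if not m["ab"]] or ms
    return all(conclusion(m) for m in preferred)

# "Tweety is a bird" defeasibly yields "Tweety flies" ...
print(follows(lambda m: m["bird"], lambda m: m["flies"]))                   # True
# ... but the consequence is nonmonotonic: a stronger premise defeats it.
print(follows(lambda m: m["bird"] and m["penguin"], lambda m: m["flies"]))  # False
```

The second query exhibits the failure of monotonicity directly: enlarging the premise set destroys a conclusion that the smaller set supported.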

The two approaches are related. In particular, a logical theory of defeasible consequence will have epistemological consequences. It is presumably true that an ideally rational thinker will have a set of beliefs that are closed under defeasible, as well as deductive, consequence. However, a logical theory of defeasible consequence would have a wider scope of application than a merely epistemological theory of inference. Defeasible logic would provide a mechanism for engaging in hypothetical reasoning, not just reasoning from actual beliefs.

Conversely, as David Makinson and Peter Gärdenfors have pointed out (Makinson and Gärdenfors 1991, 185–205; Makinson 2005), an epistemological theory of belief change can be used to define a set of nonmonotonic consequence relations (one relation for each initial belief state). We can define the consequence relation \(\alpha \dproves \beta\), for a given set of beliefs \(T\), as holding just in case the result of adding belief \(\alpha\) to \(T\) would include belief in \(\beta\). However, on this approach, there would be many distinct nonmonotonic consequence relations, instead of a single perspective-independent one.
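This construction can be sketched in code under a deliberately drastic simplification of belief revision (expand by \(\alpha\) when it is consistent with \(T\); otherwise give up all of \(T\) and keep \(\alpha\) alone). Formulas are encoded as Python predicates on valuations; the encoding is invented for illustration, not the Makinson–Gärdenfors definition itself.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment, checked by truth tables over a finite atom set."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def consistent(premises, atoms):
    return not entails(premises, lambda v: False, atoms)

def nm_consequence(T, alpha, beta, atoms):
    """alpha |~ beta relative to belief set T: does revising T by alpha
    yield belief in beta? Revision is simplified to expansion when alpha
    is consistent with T, and to giving up all of T otherwise."""
    base = T + [alpha] if consistent(T + [alpha], atoms) else [alpha]
    return entails(base, beta, atoms)

ATOMS = ["p", "q"]
T = [lambda v: not v["p"] or v["q"]]   # the single background belief: p -> q

# p |~ q holds relative to T ...
print(nm_consequence(T, lambda v: v["p"], lambda v: v["q"], ATOMS))  # True
# ... but strengthening the premise to p & not-q forces p -> q to be
# given up, so q no longer follows: the relation is nonmonotonic.
print(nm_consequence(T, lambda v: v["p"] and not v["q"],
                     lambda v: v["q"], ATOMS))                       # False
```

A different choice of \(T\) yields a different relation, which is the point of the final sentence above: the consequence relation is perspective-dependent.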

And, as Phan Minh Dung has argued (Dung 1995), formal argumentation can also be used to provide a basis for defining a nonmonotonic consequence relation. A formal argument structure \(F\) is an ordered pair \(\langle A, B\rangle\), where \(A\) is a set of arguments, and \(B\) is a binary relation on \(A\) (the attack relation). Then we can say that an argument structure \(F\) has \(p\) as a consequence just in case \(p\) is the conclusion of some argument in the optimal extension of \(F\) (which can be defined in a variety of ways--see section 4.4).
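One standard choice of extension is Dung’s grounded extension, the least fixed point of the framework’s characteristic function. A sketch (the three-argument framework is an invented example):

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function: an argument is
    acceptable once every one of its attackers is attacked by the set."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    def defended(arg, s):
        return all(any((d, att) in attacks for d in s) for att in attackers[arg])
    s = set()
    while True:
        new = {a for a in arguments if defended(a, s)}
        if new == s:
            return s
        s = new

# Invented example: b attacks a, c attacks b, and c is unattacked.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

Here `c` is in because nothing attacks it, and `a` is reinstated because its only attacker `b` is itself defeated by `c`. A proposition then counts as a consequence just in case it is the conclusion of some argument in the extension; other semantics (preferred, stable) pick out different extensions.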

4. Epistemological Approaches

There have been four versions of the epistemological approach, each of which attempts to define how a cognitively ideal agent arrives at warranted conclusions, given an initial input. The first two of these, John L. Pollock’s theory of defeasible reasoning and the theory of semantic inheritance networks, are explicitly computational in nature. They take as input a complex, structured state, representing the data available to the agent, and they define a procedure by which new conclusions can be warranted. The third approach, based on the theory of belief change (the AGM model) developed by Alchourrón, Gärdenfors, and Makinson (Alchourrón, Gärdenfors, and Makinson 1982), instead lays down a set of conditions that an ideal process of belief change ought to satisfy. The AGM model can be used to define a nonmonotonic consequence relation that is temporary and local. This can represent reasoning that is hypothetically or counterfactually defeasible, in the sense that what “follows” from a conjunctive proposition \((p \amp q)\) need not be a superset of what “follows” from \(p\) alone. The fourth approach is that of formal argumentation theory, in which defeat is treated as a relation between arguments within a dialogue.

4.1 Formal Epistemology

John Pollock’s approach to defeasible reasoning consists of enumerating a set of rules that are constructive and effectively computable, and that aim at describing how an ideal cognitive agent builds up a rich set of beliefs, beginning with a relatively sparse data set (consisting of beliefs about immediate sensory appearances, apparent memories, and such things). The inferences involved are not, for the most part, deductive. Instead, Pollock defines, first, what it is for one belief to be a prima facie reason for believing another proposition. In addition, Pollock defines what it is for one belief, say in \(p\), to be a defeater for \(q\) as a prima facie reason for \(r\). In fact, Pollock distinguishes two kinds of defeaters: rebutting defeaters, which are themselves prima facie reasons for believing the negation of the conclusion, and undercutting defeaters, which provide a reason for doubting that \(q\) provides any support, in the actual circumstances, for \(r\) (Pollock 1987, 484). A belief is ultimately warranted in relation to a data set (or epistemic basis) just in case it is supported by some ultimately undefeated argument proceeding from that epistemic basis.

In Pollock 1995, Pollock uses a directed graph to represent the structure of an ideal cognitive state. Each directed link in the network represents the first node’s being a prima facie reason for the second. The new theory includes an account of hypothetical, as well as categorical, reasoning, since each node of the graph includes a (possibly empty) set of hypotheses. Somewhat surprisingly, Pollock assumes a principle of monotonicity with respect to hypotheses: a belief that is warranted relative to a set of hypotheses is also warranted with respect to any superset of hypotheses. Pollock also permits conditionalization and reasoning by cases.

An argument is self-defeating if it supports a defeater for one of its own defeasible steps. Here is an interesting example: (1) Robert says that the elephant beside him looks pink. (2) Robert’s color vision becomes unreliable in the presence of pink elephants. Ordinarily, belief 1 would support the conclusion that the elephant is pink, but this conclusion undercuts the argument, thanks to belief 2. Thus, the argument that the elephant is pink is self-defeating. Pollock argues that all self-defeating arguments should be rejected, and that they should not be allowed to defeat other arguments. In addition, a set of nodes can experience mutual destruction or collective defeat if each member of the set is defeated by some other member, and no member of the set is defeated by an undefeated node that is outside the set.

In formalizing the undercutting defeater, Pollock introduces a new connective, \(\otimes\), where \(p \otimes q\) means that it is not the case that \(p\) wouldn’t be true unless \(q\) were true. Pollock uses rules, rather than conditional propositions, to express the prima facie relation. If he had, instead, introduced a special connective \(\Rightarrow\), with \(p \Rightarrow q\) meaning that \(p\) would be a prima facie reason for \(q\), then undercutting defeaters could be represented by means of negating this conditional. To express the fact that \(r\) is an undercutting defeater of \(p\) as a prima facie reason for \(q\), we could state both that \((p \Rightarrow q)\) and \(\neg((p \amp r) \Rightarrow q)\).

In the case of conflicting prima facie reasons, Pollock rejects the principle of specificity, a widely accepted principle according to which the defeasible rule with the more specific antecedent takes priority over conflicting rules with less specific antecedents. Pollock does, however, accept a special case of specificity in the area of statistical syllogisms with projectible properties (Pollock 1995, 64–66). So, if I know that most \(A\)s are \(B\)s, and that most \(AC\)s are not \(B\)s, then I should, upon learning that individual \(b\) is both \(A\) and \(C\), give priority to the \(AC\) generalization over the \(A\) generalization (concluding that \(b\) is not a \(B\)).
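The statistical-syllogism case can be illustrated with a small sketch, where generalizations are encoded as antecedent-property sets and a strictly more specific antecedent wins (the encoding is invented for illustration, not Pollock’s own formalism):

```python
# Defeasible generalizations as (antecedent property set, conclusion,
# polarity). Invented encoding for illustration.
RULES = [
    ({"A"}, "B", True),         # most As are Bs
    ({"A", "C"}, "B", False),   # most ACs are not Bs
]

def conclude(facts, target):
    """Among applicable conflicting rules, a strictly more specific
    antecedent (a superset of each rival's) takes priority."""
    applicable = [r for r in RULES if r[0] <= facts and r[1] == target]
    if not applicable:
        return None
    best = max(applicable, key=lambda r: len(r[0]))
    if all(r[0] <= best[0] for r in applicable):
        return best[2]
    return None   # incomparable conflict: suspend judgment

# b is both A and C: the AC generalization wins, so conclude not-B.
print(conclude({"A", "C"}, "B"))  # False
# With only A known, the A generalization applies unopposed.
print(conclude({"A"}, "B"))       # True
```

When neither antecedent subsumes the other, the sketch suspends judgment, which mirrors the collective-defeat treatment of unresolvable conflicts.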

Pollock’s theory of warrant is intended to provide normative rules for belief, of the form: if you have warranted beliefs that are prima facie reasons for some further belief, and you have no ultimately undefeated defeaters for those reasons, then that further belief is warranted and should be believed. For more details of Pollock’s theory, see the following supplementary document:

John Pollock’s System

Wolfgang Spohn (Spohn 2002) has argued that Pollock’s system is normatively defective because, in the end, Pollock has no normative standard to appeal to, other than ad hoc intuitions about how a reasonable person would respond to this or that cognitive situation. Spohn suggests that, with respect to the state of development of the study of defeasible reasoning, Pollock’s theory corresponds to C. I. Lewis’s early investigations into modal logic. Lewis suggested a number of possible axiom systems, but lacked an adequate semantic theory that could provide an independent check on the correctness or completeness of any given list (of the kind that was later provided by Kripke and Kanger). Analogously, Spohn argues that Pollock’s system is in need of a unifying normative standard. This very same criticism can be lodged, with equal justice, against a number of other theories of defeasible reasoning, including semantic inheritance networks and default logic.

Stipe Pandžić has investigated the connections between defeasible logic and justification logic, a modal logic in which we can state that certain propositions are reasons for accepting other propositions (see also Artemov 2008 and Fitting 2014).

4.2 Semantic Inheritance Networks

The system of semantic inheritance networks, developed by Horty, Thomason, and Touretzky (1990), is similar to Pollock’s system. Both represent cognitive states by means of directed graphs, with links representing defeasible inferences. The semantic inheritance network theory has an intentionally narrower scope: the initial nodes of the network represent particular individuals, and all non-initial nodes represent kinds, categories or properties. A link from an initial (individual) node to a category node represents simply predication: that Felix (initial node) is a cat (category node), for example. Links between category nodes represent defeasible or generic inclusion: that birds (normally or usually) are flying things. To be more precise, there are both positive (“is a”) and negative (“is not a”) links. The negative links are usually represented by means of a slash through the body of the arrow.

Semantic inheritance networks differ from Pollock’s system in two important ways. First, they cannot represent one fact’s constituting an undercutting defeater of an inference, although they can represent rebutting defeaters. For example, they do not allow an inference from the apparent color of an elephant to its actual color to be undercut by the information that my color vision is unreliable, unless I have information about the actual color of the elephant that contradicts its apparent color. Secondly, they do incorporate the principle of specificity (the principle that rules with more specific antecedents take priority in case of conflict) into the very definition of a warranted conclusion. In fact, in contrast to Pollock, the semantic inheritance approach gives priority to rules whose antecedents are weakly or defeasibly more specific. That is, if the antecedent of one rule is defeasibly linked to the antecedent of a second rule, the first rule gains priority. For example, if Quakers are typically pacifists, then, when reasoning about a Quaker pacifist, rules pertaining to Quakers would override rules pertaining to pacifists. For the details of semantic inheritance theory, see the following supplementary document:

Semantic Inheritance Networks.
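The flavor of such a network can be conveyed by a small Python sketch. This is a deliberately simplified reading, not the full Horty-Thomason-Touretzky definitions: at each node, a direct link is consulted before longer paths are chained, and the node names are illustrative.

```python
class InheritanceNet:
    """Toy semantic inheritance network: positive ("is a") and
    negative ("is not a") links between nodes (assumes an acyclic net)."""

    def __init__(self):
        self.pos = {}  # node -> nodes reached by a positive link
        self.neg = {}  # node -> nodes reached by a negative link

    def link(self, a, b, positive=True):
        (self.pos if positive else self.neg).setdefault(a, set()).add(b)

    def supports(self, a, b):
        """+1 if the net supports 'a is a b', -1 for 'a is not a b',
        None if neither; direct links from a node preempt longer paths."""
        if b in self.neg.get(a, set()):
            return -1
        if b in self.pos.get(a, set()):
            return +1
        for c in self.pos.get(a, set()):   # chain through generic inclusions
            result = self.supports(c, b)
            if result is not None:
                return result
        return None

net = InheritanceNet()
net.link("Tweety", "penguin")                  # Tweety is a penguin
net.link("penguin", "bird")                    # penguins are birds
net.link("bird", "flier")                      # birds normally fly
net.link("penguin", "flier", positive=False)   # penguins normally do not fly
```

Here the direct negative link from penguin preempts the longer positive path through bird, so the query “is Tweety a flier?” comes out negative, while “is Tweety a bird?” comes out positive.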

David Makinson (Makinson 1994) has pointed out that semantic network theory is very sensitive to the form in which defeasible information is represented. There is a great difference between a direct link between two nodes and a path between the two nodes that is supported by the graph as a whole. The notion of preemption gives special powers to explicitly given premises over conclusions: direct links always take priority over longer paths. Consequently, inheritance networks lack two desirable metalogical properties, Cut and Cautious Monotony (which will be covered in more detail in the section on Logical Approaches).

  • Cut: If \(G\) is a subgraph of \(G'\), and every link in \(G'\) corresponds to a path supported by \(G\), then every path supported by \(G'\) is also supported by \(G\).
  • Cautious Monotony: If \(G\) is a subgraph of \(G'\), and every link in \(G'\) corresponds to a path supported by \(G\), then every path supported by \(G\) is also supported by \(G'\).

Cumulativity (Cut plus Cautious Monotony) corresponds to reasoning by lemmas or subconclusions. The Horty-Thomason-Touretzky system does satisfy special cases of Cut and Cautious Monotony: if \(A\) is an atomic statement (a link from an individual to a category), then if graph \(G\) supports \(A\), then for any statement \(B\), \(G \cup \{A\}\) supports \(B\) if and only if \(G\) supports \(B\).

Another form of inference that is not supported by semantic inheritance networks is reasoning by cases or by dilemma. In addition, semantic networks do not license modus-tollens-like inferences: from the fact that birds normally fly and Tweety does not fly, we are not licensed to infer that Tweety is not a bird. (This feature is also lacking in Pollock’s system.)

Osta-Vélez and Gärdenfors (Osta-Vélez and Gärdenfors 2021) investigate the origins of defaults in cognitive psychology. They argue for the superiority of approaching this problem from the perspective of models (including semantic networks and conceptual graphs) rather than propositional formulas. They describe how to derive the ordering of defaults from the structure of conceptual spaces (Gärdenfors 2000, 2024).

4.3 Belief Revision Theory

Alchourrón, Gärdenfors, and Makinson (1982) developed a formal theory of belief revision and contraction, drawing largely on Willard van Orman Quine’s model of the web of belief (Quine and Ullian 1970). The cognitive agent is modelled as believing a set of propositions that are ordered by their degree of entrenchment. This model provides the basis for a set of normative constraints on belief contraction (subtracting a belief) and belief revision (adding a new belief that is inconsistent with the original set). When a belief is added that is logically consistent with the original belief set, the agent is supposed to believe the logical closure of the original set plus the new belief. When a belief is added that is inconsistent with the original set, the agent retreats to the most entrenched of the maximal subsets of the set that are consistent with the new belief, adding the new proposition to that set and closing under logical consequence. For the axioms of the AGM model, see the following supplementary document:

AGM Postulates
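The entrenchment idea can be illustrated with a toy Python sketch. This is a sketch only, under strong simplifying assumptions: beliefs are literals rather than arbitrary formulas, no logical closure is computed, and the entrenchment ranking and belief names are invented for the example.

```python
def neg(lit):
    """Negation of a literal, written with a '~' prefix."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    """A set of literals is consistent iff it contains no pair p, ~p."""
    return all(neg(l) not in lits for l in lits)

def revise(K, entrenchment, a):
    """AGM-flavored revision sketch: start from the new belief a, then
    add back the old beliefs from most to least entrenched, skipping
    any that would introduce an inconsistency."""
    result = {a}
    for b in sorted(K, key=entrenchment, reverse=True):
        if consistent(result | {b}):
            result.add(b)
    return result

K = {"bird", "flies"}
rank = {"bird": 2, "flies": 1}.get   # 'bird' is more entrenched than 'flies'
```

Revising by a consistent belief simply adds it, while revising by "~flies" forces the retraction of the less entrenched "flies" and keeps "bird".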

AGM belief revision theory can be used as the basis for a system of defeasible reasoning or nonmonotonic logic, as Gärdenfors and Makinson have recognized (Makinson and Gärdenfors 1991). If \(K\) is an epistemic state, then a nonmonotonic consequence relation \(\dproves\) can be defined as follows: \(A \dproves B\) iff \(B \in K * A\). Unlike Pollock’s system or semantic inheritance networks, this defeasible consequence relation depends upon a background epistemic state. Thus, the belief revision approach gives rise not to a single nonmonotonic consequence relation but to a family of relations: each background state \(K\) gives rise to its own characteristic consequence relation.

One significant limitation of the belief-revision approach is that there is no representation in the object-language of a defeasible or default rule or conditional (that is, of a conditional of the form If p, then normally q or That p would be a prima facie reason for accepting that q). In fact, Gärdenfors (Gärdenfors 1978; Gärdenfors 1986) proved that no conditional satisfying the Ramsey test can be added to the AGM system without trivializing the revision relation.[1] (A conditional \(\Rightarrow\) satisfies the Ramsey test just in case, for every epistemic state \(K\), \(K\) includes \((A \Rightarrow B)\) iff \(K * A\) includes \(B\).)

Since the AGM system cannot include conditional beliefs, it cannot elucidate the question of what logical relationships hold between conditional defaults.

The lack of a representation of conditional beliefs is closely connected to another limitation of the AGM system: its inability to model repeated or iterated belief revision. The input to a belief change is an epistemic state, consisting both of a set of propositions believed and an entrenchment relation on that set. The output of an AGM revision, in contrast, consists simply of a set of beliefs. The system provides no guidance on the question of what would be the result of revising an epistemic state in two or more steps. If the entrenchment relation could be explicitly represented by means of conditional propositions, then it would be possible to define the new entrenchment relation that would result from a single belief revision, making iterated belief revision representable. A number of proposals along these lines have been made. The difficulty lies in defining exactly what would constitute a minimal change in the relative entrenchment or epistemic ranking of a set of beliefs. To this point, no clear consensus has emerged on this question. (See Spohn 1988; Nayak 1994; Wobcke 1995; Bochman 2001.)

On the larger question of the relation between belief revision and defeasible reasoning, there are two possibilities: that a theory of defeasible reasoning should be grounded in a theory of belief revision, or that a theory of belief revision should be grounded in a theory of defeasible reasoning. The second view has been defended by John Pollock (Pollock 1987; Pollock 1995) and by Hans Rott (Rott 1989). On this second view, we must make a sharp distinction between basic or foundational beliefs on the one hand and inferred or derived beliefs on the other. We can then model belief change on the assumption that new beliefs are added to the foundation (and are logically consistent with the existing set of those beliefs). Beliefs can be added which are inconsistent with previously inferred beliefs, and the new belief state consists simply in the closure of the new foundational set under the relation of defeasible consequence. On such an approach, default conditionals can be explicitly represented among the agent’s beliefs. Gärdenfors’s triviality result is then avoided by rejecting one of the assumptions of the theorem, preservation:

Preservation:
If \(\neg A \not\in K\), then \(K \subseteq K * A\).

From the perspective that uses defeasible reasoning to define belief revision, there is no good reason to accept Preservation. One can add a belief that is consistent with what one already believes and thereby lose beliefs, since the new information might be an undercutting defeater of some defeasible inference that had previously been successful.

4.4 Formal Argumentation Theory

Phan Minh Dung (Dung 1995) initiated a new and fruitful approach to defeasible reasoning, one that focuses on the structure of arguments. Dung defines an argument structure as an ordered pair \(⟨ A, B ⟩\), in which \(A\) is a set of arguments and \(B\) is a binary relation on \(A\), representing the attacks relation. In other words, if \(⟨ x, y ⟩ \in B\), then argument \(x\) is represented as attacking argument \(y\) in some way. An argument is a sequence of propositions, with the last proposition designated as its conclusion. (The premises of an argument can be null, in which case we can treat the argument as equivalent to the assertion of the single proposition it contains.)

Dung’s approach doesn’t distinguish between the various ways in which one argument can attack another, such as rebutting, undermining, or undercutting, although this additional information can be added by differentiating several types of attack relations. One argument rebuts another when their conclusions are contradictories. An argument undermines a second argument when the conclusion of the first contradicts one of the premises of the second. And an argument undercuts another when its conclusion provides reason for doubting that the premises of the second are, in the actual circumstances, reliable indicators of the truth of the conclusion. In many applications, these distinctions can be ignored. However, Henry Prakken (Prakken 2010) makes use of all three forms of attack in his ASPIC+ system.

Central to Dung’s approach is the idea of an admissible set of arguments relative to an argument structure. A set of arguments \(A\) is admissible if and only if it is conflict-free (no argument in \(A\) attacks another argument in \(A\)) and every argument that attacks something in \(A\) is itself attacked by something in \(A\). In other words, \(A\) can defeat everything that defeats one of its members. Dung’s approach incorporates the principle: “the one who laughs last, laughs best.”

A preferred extension of an argument structure is a maximal admissible set of the structure. Every structure possesses at least one preferred extension. A stable extension of a structure is a conflict-free set \(S\) that attacks each argument that does not belong to \(S\). Every stable extension is a preferred extension, but not vice versa. Some structures do not have stable extensions. Leendert van der Torre and Srdjan Vesic (van der Torre and Vesic 2018) outline the full range of definitions of extensions, along with the principles of rationality they embody.

The characteristic function \(F_{AS}\) of an argument structure \(AS\) is defined as follows:

\(F_{AS}\)(\(S\)) = {\(A\): \(A\) is acceptable with respect to \(S\)}

The grounded extension of an argument structure is the least fixed point of its characteristic function. An extension \(S\) is complete if it contains every argument that is admissible with respect to \(S\). The grounded extension is the minimal complete extension of a structure. If a structure is well-founded (with no infinite regress of attack relations), then the structure has a unique complete extension, which is grounded, preferred, and stable (Dung 1995, 331).
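For a finite structure, the grounded extension can be computed by iterating the characteristic function from the empty set until a fixed point is reached. A minimal sketch (the argument names are illustrative):

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of F(S) = {a : every attacker of a is
    itself attacked by some member of S}, reached by iteration."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def F(S):
        return {a for a in arguments
                if all(any((d, b) in attacks for d in S)
                       for b in attackers[a])}

    S = set()
    while F(S) != S:
        S = F(S)
    return S

# a attacks b, b attacks c: a is unattacked and defends c against b.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
```

Here grounded_extension(args, attacks) is {"a", "c"}: a is unattacked, b is defeated by a, and c is reinstated. For a two-argument cycle of mutual attack, the grounded extension is empty, reflecting the skeptical character of this semantics.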

These various notions of optimal extension can be used to define when a proposition has been proved or refuted in a particular structure, depending on whether the proposition or its negation belongs to the optimal extension of the structure.

Gerald Vreeswijk (Vreeswijk 1997) has built upon Dung’s framework by introducing preference relations among arguments. The relation of relative conclusive force could be determined by such factors as the presence or absence of defeasible rules, the occurrence of a premise of one argument as the conclusion of another, the number of steps in the argument, or the use in the arguments of defeasible rules with varying degrees of reliability. The HERMES system of Karacapilidis and Papadias (Karacapilidis and Papadias 2001) implements numeric weights for reasons for and against a conclusion.

Henry Prakken (Prakken 2010) has expanded Dung’s model by including support as well as attack relations between arguments. Bart Verheij’s DefLog system (Verheij 2003, 2005) makes use of a conditional to represent support and a negation operator to represent attack. Anthony Hunter, Sylwia Polberg, and Matthias Thimm (Hunter et al. 2020) have recently used epistemic graphs to represent both positive and negative interactions among arguments.

Others have used probabilities to measure the comparative strength of arguments: Dung and Thang (2010), Verheij (2012), and Hunter (2013).

Baroni et al. (2022) have begun exploring the relationship between formal argumentation theory and AGM belief revision theory.

5. Logical Approaches

Logical approaches to defeasible reasoning treat the subject as a part of logic: the study of nonmonotonic consequence relations (in contrast to the monotonicity of classical logic). These relations are defined on propositions, not on the beliefs of an agent, so the focus is not on epistemology per se, although a theory of nonmonotonic logic will certainly have implications for epistemology.

5.1 Relations of Logical Consequence

A consequence relation is a mathematical relation that models what follows logically from what. Consequence relations can be defined in a variety of ways, such as Hilbert, Tarski, and Scott relations. A Hilbert consequence relation is a relation between pairs of formulas, a Tarski relation is a relation between sets of formulas (possibly infinite) and individual formulas, and a Scott relation is a relation between two sets of formulas. In the case of Hilbert and Tarski relations, \(A \vDash B\) or \(\Gamma \vDash B\) means that the formula \(B\) follows from the formula \(A\) or from the set of formulas \(\Gamma\). In the case of Scott consequence relations, \(\Gamma \vDash \Delta\) means that the joint truth of all the members of \(\Gamma\) implies (in some sense) the truth of at least one member of \(\Delta\). To this point, studies of nonmonotonic logic have defined nonmonotonic consequence relations in the style of Hilbert or Tarski, rather than Scott.

A (Tarski) consequence relation is monotonic just in case it satisfies the following condition, for all formulas \(p\) and all sets \(\Gamma\) and \(\Delta\):

Monotonicity:
If \(\Gamma \vDash p\), then \(\Gamma \cup \Delta \vDash p\).

Any consequence relation that fails this condition is nonmonotonic. A relation of defeasible consequence clearly must be nonmonotonic, since a defeasible inference can be defeated by adding additional information that constitutes a rebutting or undercutting defeater.

5.2 Metalogical Desiderata

Once monotonicity is given up, the question arises: why call the relation of defeasible consequence a logical consequence relation at all? What properties do defeasible consequence and classical logical consequence have in common that would justify treating them as sub-classes of the same category? What justifies calling nonmonotonic consequence logical?

To count as logical, there are certain minimal properties that a relation must satisfy. First, the relation ought to permit reasoning by lemmas or subconclusions. That is, if a proposition \(p\) already follows from a set \(\Gamma\), then it should make no difference to add \(p\) to \(\Gamma\) as an additional premise. Relations that satisfy this condition are called cumulative. Cumulative relations satisfy the following two conditions (where “\(C(\Gamma)\)” represents the set of defeasible consequences of \(\Gamma\)):

Cut:
If \(\Gamma \subseteq \Delta \subseteq C(\Gamma)\), then \(C(\Delta) \subseteq C(\Gamma)\).

Cautious Monotony:
If \(\Gamma \subseteq \Delta \subseteq C(\Gamma)\), then \(C(\Gamma) \subseteq C(\Delta)\).

In addition, a defeasible consequence relation ought to be supraclassical: if \(p\) follows from \(q\) in classical logic, then it ought to be included in the defeasible consequences of \(q\) as well. A formula \(q\) ought to count as an (at least) defeasible consequence of itself, and anything included in the content of \(q\) (any formula \(p\) that follows from \(q\) in classical logic) ought to count as a defeasible consequence of \(q\) as well. Moreover, the defeasible consequences of a set \(\Gamma\) ought to depend only on the content of the formulas in \(\Gamma\), not on how that content is represented. Consequently, the defeasible consequence relation ought to treat \(\Gamma\) and the classical logical closure of \(\Gamma\) (which we’ll represent as “\(Cn(\Gamma)\)”) in exactly the same way. A consequence relation that satisfies these two conditions is said to satisfy full absorption (see Makinson 1994, 47).

Full Absorption:
\(Cn(C(\Gamma)) = C(\Gamma) = C(Cn(\Gamma))\)

Finally, a genuinely logical consequence relation ought to enable us to reason by cases. So, it should satisfy a principle called distribution: if a formula \(p\) follows defeasibly from both \(q\) and \(r\), then it ought to follow from their disjunction. (To require the converse principle would be to reinstate monotonicity.) The relevant principle is this:

Distribution:
\(C(\Gamma) \cap C(\Delta) \subseteq C(Cn(\Gamma) \cap Cn(\Delta))\).

Consequence relations that are cumulative, strongly absorptive, and distributive satisfy a number of other desirable properties, including conditionalization: if a formula \(p\) is a defeasible consequence of \(\Gamma \cup \{q\}\), then the material conditional \((q \rightarrow p)\) is a defeasible consequence of \(\Gamma\) alone. In addition, such logics satisfy the property of loop: if \(p_1 \dproves p_2, \ldots, p_{n-1} \dproves p_n, p_n \dproves p_1\) (where “\(\dproves\)” represents the defeasible consequence relation), then the defeasible consequences of \(p_i\) and \(p_j\) are exactly the same, for any \(i\) and \(j\).[2]

There are three further conditions that have been much discussed in the literature, but whose status remains controversial: disjunctive rationality, rational monotony, and consistency preservation.

Disjunctive Rationality:
If \(\Gamma \cup \{p\} \notdproves r\), and \(\Gamma \cup \{q\} \notdproves r\), then \(\Gamma \cup \{(p \vee q)\} \notdproves r\).

Rational Monotony:
If \(\Gamma \dproves A\), then either \(\Gamma \cup \{B\} \dproves A\) or \(\Gamma \dproves \neg B\).

Consistency Preservation:
If \(\Gamma\) is classically consistent, then so is \(C(\Gamma)\) (the set of defeasible consequences of \(\Gamma\)).

All three properties seem desirable, but they set a very high standard for the defeasible reasoner.

5.3 Default Logic

Ray Reiter’s default logic (Reiter 1980; Etherington and Reiter 1983) was part of the first generation of defeasible systems developed in the field of artificial intelligence. The relative ease of computing default extensions has made it one of the more popular systems.

Reiter’s system is based on the use of default rules. A default rule consists of three formulas: the prerequisite, the justification, and the consequent. If one accepts the prerequisite of a default rule, and the justification is consistent with all one knows (including what one knows on the basis of the default rules themselves), then one is entitled to accept the consequent. The most popular use of default logic relies solely on normal defaults, in which the justification and the consequent are identical. Thus, a normal default of the form \((p; q \therefore q)\) allows one to infer \(q\) from \(p\), so long as \(q\) is consistent with one’s endpoint (the extension of the default theory).

A default theory consists of a set of formulas (the facts), together with a set of default rules. An extension of a default theory is a fixed point of a particular inferential process: an extension \(E\) must be a consistent theory (a consistent set closed under classical consequence) that contains all of the facts of the default theory \(T\), and, in addition, for each normal default \((p \Rightarrow q)\), if \(p\) belongs to \(E\), and \(q\) is consistent with \(E\), then \(q\) must belong to \(E\) also.

Since the consequence relation is defined by a fixed-point condition, there are default theories that have no extension at all, and other theories that have multiple, mutually inconsistent extensions. For example, the theory consisting of the fact \(p\) and the pair of defaults \((p; (q \amp r) \therefore q)\) and \((q; \neg r \therefore \neg r)\) has no extension. If the first default is applied, then the second must be, and if the second default is not applied, the first must be. However, the consequent of the second default contradicts the justification of the first, so the first cannot be applied if the second is. There are many default theories that have multiple extensions. Consider the theory consisting of the facts \(q\) and \(r\) and the pair of defaults \((q; p \therefore p)\) and \((r; \neg p \therefore \neg p)\). One or the other, but not both, of these defaults must be applied.

Furthermore, there is no guarantee that if \(E\) and \(E'\) are both extensions of theory \(T\), then the intersection of \(E\) and \(E'\) is also an extension (the intersection of two fixed points need not be itself a fixed point). Default logic is usually interpreted as a credulous system: a system of logic that allows the reasoner to select any extension of the theory and believe all of the members of that extension, even though many of the resulting beliefs will involve propositions that are missing from other extensions (and may even be contradicted in some of those extensions).
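For small normal default theories, the extensions can be enumerated by brute force: try every order of applying the defaults and keep the distinct fixed points. A toy sketch, under strong simplifying assumptions (facts and consequents are literals with '~' for negation, and no real logical closure is computed):

```python
from itertools import permutations

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def consistent(lits):
    return all(neg(l) not in lits for l in lits)

def extensions(facts, defaults):
    """defaults: list of (prerequisite, consequent) pairs, each read as
    the normal default (prerequisite; consequent :. consequent)."""
    found = []
    for order in permutations(defaults):
        E = set(facts)
        changed = True
        while changed:
            changed = False
            for p, q in order:
                if p in E and q not in E and consistent(E | {q}):
                    E.add(q)
                    changed = True
        # keep E only if every applicable default was applied or blocked
        closed = all(q in E or not consistent(E | {q})
                     for p, q in defaults if p in E)
        if closed and E not in found:
            found.append(E)
    return found

# The Nixon diamond: Quakers are normally pacifists,
# Republicans are normally non-pacifists.
facts = {"quaker", "republican"}
defaults = [("quaker", "pacifist"), ("republican", "~pacifist")]
```

Here extensions(facts, defaults) returns two mutually inconsistent extensions, one containing "pacifist" and the other "~pacifist"; a credulous reasoner may adopt either one wholesale.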

Default logic fails many of the tests for a logical relation that were introduced in the previous section. It satisfies Cut and Full Absorption, but it fails Cautious Monotony (and thus fails to be cumulative). In addition, it fails Distribution, a serious limitation that rules out reasoning by cases. For example, if one knows that Smith is either Amish or Quaker, and both Quakers and Amish are normally pacifists, one cannot infer that Smith is a pacifist. Default logic also fails to represent Pollock’s undercutting defeaters. Finally, default logic does not incorporate any form of the principle of Specificity, the principle that defaults with more specific prerequisites ought, in cases of conflict, to take priority over defaults with less specific prerequisites. Recently, John Horty (Horty 2007a, 2007b) has examined the implications of adding priorities among defaults (in the form of a partial ordering), which would permit the recognition of specificity and other grounds for preferring one default to another. In addition, Horty allows for defeasible reasoning about these priorities (the relative weights of various defaults) by means of higher-order default rules. Such defeasible reasoning about relative weights enables Horty to give an account of Pollock’s undercutting defeaters: an undercutting defeater is a triggered default rule that lowers the weight of the undercut rule below some threshold, with the result that the undercut rule can no longer be triggered.

5.4 Nonmonotonic Logic I and Autoepistemic Logic

In both McDermott and Doyle’s Nonmonotonic Logic I and Moore’s Autoepistemic Logic (McDermott and Doyle 1982; Moore 1985; Konolige 1994), a modal operator \(M\) (representing a kind of epistemic possibility) is used. Default rules take the following form: \(((p \amp Mq) \rightarrow q)\); that is, if \(p\) is true and \(q\) is “possible” (in the relevant sense), then \(q\) is also true. In both cases, the extension of a theory is defined, as in Reiter’s default logic, by means of a fixed-point operation. \(Mp\) represents the fact that \(\neg p\) does not belong to the extension. For example, in Moore’s case, a set \(\Delta\) is a stable expansion of a theory \(\Gamma\) just in case \(\Delta\) is the set of classical consequences of the set \(\Gamma \cup \{Mp: \neg p \not\in \Delta \} \cup \{\neg Mp: \neg p \in \Delta \}\). As in the case of Reiter’s default logic, some theories will lack a stable expansion, or have more than one. In addition, these systems fail to incorporate Specificity.

5.5 Circumscription

In circumscription (McCarthy 1982; McCarthy 1986; Lifschitz 1988), one or more predicates of the language are selected for minimization. The nonmonotonic consequences of a theory \(T\) then consist of all the formulas that are true in every model of \(T\) that minimizes the extensions of the selected predicates. One model \(M\) of \(T\) is preferred to another, \(M'\), if and only if, for each designated predicate \(F\), the extension of \(F\) in \(M\) is a subset of the extension of \(F\) in \(M'\), and, for some such predicate, the extension in \(M\) is a proper subset of the extension in \(M'\).

The relation of circumscriptive consequence has all the desirable meta-logical properties. It is cumulative (satisfying Cut and Cautious Monotony), strongly absorptive, and distributive. In addition, it satisfies Consistency Preservation, although not Rational Monotony.

The most critical problem in applying circumscription is that of deciding which predicates to minimize (there is, in addition, a further technical question about which predicates to treat as fixed and which as variable in extension). Most often, what is done is to introduce a family of abnormality predicates \(ab_1, ab_2\), etc. A default rule can then be written in the form \(\forall x((F(x) \amp \neg ab_i (x)) \rightarrow G(x))\), where “\(\rightarrow\)” is the ordinary material conditional of classical logic. To derive the consequences of a theory, all of the abnormality predicates are simultaneously minimized. This simple approach fails to satisfy the principle of Specificity, since each default is given its own, independent abnormality predicate, and each is therefore treated with the same priority. It is possible to add special rules for prioritizing the circumscription, but these are, of necessity, ad hoc and exogenous, rather than a natural result of the definition of the consequence relation.

Circumscription does have the capacity to represent the existence of undercutting defeaters. Suppose that satisfying predicate \(F\) provides a prima facie reason for supposing something to be a \(G\), and suppose that we use the abnormality predicate \(ab_1\) in representing this default rule. We can state that the predicate \(H\) provides an undercutting defeater to this inference by simply adding the rule \(\forall x (H(x) \rightarrow ab_1 (x))\), stating that all \(H\)s are abnormal in respect number 1.
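In a finite propositional setting, circumscriptive consequence can be computed by brute force: enumerate the models of the theory, keep those whose set of true abnormality atoms is subset-minimal, and collect what holds in all of them. A minimal sketch (the atom names and the encoding of the theory as a Python predicate are illustrative):

```python
from itertools import product

def circumscribe(atoms, ab_atoms, theory):
    """Atoms true in every model of `theory` that is subset-minimal in
    its abnormalities (theory: predicate on {atom: bool} assignments)."""
    models = [m for m in
              (dict(zip(atoms, vals))
               for vals in product([False, True], repeat=len(atoms)))
              if theory(m)]
    ab_set = lambda m: {a for a in ab_atoms if m[a]}
    minimal = [m for m in models
               if not any(ab_set(n) < ab_set(m) for n in models)]
    return {a for a in atoms if all(m[a] for m in minimal)}

# Fact: bird.  Default rule: bird & ~ab1 -> flies.
atoms = ["bird", "flies", "ab1"]
theory = lambda m: m["bird"] and (not (m["bird"] and not m["ab1"])
                                  or m["flies"])
```

Minimizing \(ab_1\) forces it false in every minimal model, so "flies" comes out as a circumscriptive consequence; conjoining an undercutting rule that forces \(ab_1\) true would reinstate models in which "flies" fails.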

5.6 Preferential Logics

Circumscription is a special case of a wider class of defeasible logics, the preferential logics (Shoham 1987). In preferential logics, \(\Gamma \dproves p\) iff \(p\) is true in all of the most preferred models of \(\Gamma\). In the case of circumscription, the most preferred models are those that minimize the extension of certain predicates, but many other kinds of preference relations can be used instead, so long as the preference relations are transitive and irreflexive (a strict partial order). A structure consisting of a set of models of a propositional or first-order language, together with a preference order on those models, is called a preferential structure. The symbol \(\prec\) shall represent the preference relation: \(M \prec M'\) means that \(M\) is strictly preferred to \(M'\). A most preferred model is one that is minimal in the ordering.

In order to give rise to a cumulative logic (one that satisfies Cut and Cautious Monotony), we must add an additional condition to the preferential structures, a Limit Assumption (also known as the condition of stopperedness or smoothness):

Limit Assumption: Given a theory \(T\) and \(M\), a non-minimal model of \(T\), there exists a model \(M'\) which is preferred to \(M\) and which is a minimal model of \(T\).

The Limit Assumption is satisfied if the preferential structure does not contain any infinite descending chains of more and more preferred models with no minimal member. This is a difficult condition to motivate as natural, but without it, we can find preferential structures that give rise to nonmonotonic consequence relations that fail to be cumulative.
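Over a finite set of models, where the Limit Assumption holds trivially, the preferential definition can be sketched directly (the model representation and the particular preference relation below are illustrative choices, not part of the general theory):

```python
def defeasibly_follows(models, prec, sat_premises, sat_conclusion):
    """Gamma |~ p iff p holds in every prec-minimal model of Gamma.
    prec(m, n) is True when m is strictly preferred to n."""
    gamma_models = [m for m in models if sat_premises(m)]
    minimal = [m for m in gamma_models
               if not any(prec(n, m) for n in gamma_models)]
    return all(sat_conclusion(m) for m in minimal)

# Models as truth assignments; prefer models with fewer abnormalities.
models = [
    {"bird": True, "flies": True, "ab": False},
    {"bird": True, "flies": False, "ab": True},
]
prefers_normal = lambda m, n: m["ab"] < n["ab"]   # False < True
```

With the premise "bird", the unique minimal model is the normal one, so "flies" follows defeasibly; strengthening the premises with "ab" defeats the inference, illustrating nonmonotonicity.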

Once we have added the Limit Assumption, it is easy to show that any consequence relation based upon a preferential model is not only cumulative but also supraclassical, strongly absorptive, and distributive. Let’s call such logics preferential. In fact, Kraus, Lehmann, and Magidor (Kraus, Lehmann, and Magidor 1990; Makinson 1994, 77; Makinson 2005b) proved the following representation theorem for preferential logics:

Representation Theorem for Preferential Logics: if \(\dproves\) is a cumulative, supraclassical, strongly absorptive, and distributive consequence relation (i.e., a preferential relation), then there is a preferential structure \(\mathcal{M}\) satisfying the Limit Assumption such that for all finite theories \(T\), the set of \(\dproves\)-consequences of \(T\) is exactly the set of formulas true in every preferred model of \(T\) in \(\mathcal{M}\).[3]

There are preferential logics that fail to satisfy consistency preservation, as well as disjunctive rationality and rational monotony:

Disjunctive Rationality:
If \(\Gamma \cup \{p\} \notdproves r\), and \(\Gamma \cup \{q\} \notdproves r\), then \(\Gamma \cup \{(p \vee q)\} \notdproves r\).

Rational Monotony:
If \(\Gamma \dproves p\), then either \(\Gamma \cup \{q\} \dproves p\) or \(\Gamma \dproves \neg q\).

Kraus, Lehmann, and Magidor have found a very natural condition that corresponds to Rational Monotony: that of ranked models. (No condition on preference structures has been found that ensures disjunctive rationality without also ensuring rational monotony.) A preferential structure \(\mathcal{M}\) satisfies the Ranked Models condition just in case there is a function \(r\) that assigns an ordinal number to each model in such a way that \(M \prec M'\) iff \(r(M) \lt r(M')\). Let’s say that a preferential consequence relation is a rational relation just in case it satisfies Rational Monotony, and that a preferential structure is a rational structure just in case it satisfies the ranked models condition. Kraus, Lehmann, and Magidor (Kraus, Lehmann, and Magidor 1990; Makinson 1994, 71–81) also proved the following representation theorem:

Representation Theorem for Rational Logics: if \(\dproves\) is a rational consequence relation (i.e., a preferential relation that satisfies Rational Monotony), then there is a preferential structure \(\mathcal{M}\) satisfying the Limit Assumption and the Ranked Models condition such that for all finite theories \(T\), the set of \(\dproves\)-consequences of \(T\) is exactly the set of formulas true in every preferred model of \(T\) in \(\mathcal{M}\).

Freund proved an analogous representation result for preferential logics that satisfy disjunctive rationality, replacing the ranking condition with a weaker condition of filtered models: a filtered model is one such that, for every formula, if two worlds non-minimally satisfy the formula, then there is a world less than both of them that also satisfies the formula (Freund 1993).

Sten Lindström, in a paper written in 1991 (Rott 2022) but first published in 2022 (Lindström 2022), applied principles drawn from social choice theory (Sen 1986) to the problem of ranking models. Lindström’s theory is a generalization of preferential logic, since the conditional logic relies on a set-selection function instead of an ordering on worlds. This is a fascinating proposal, deserving of further investigation. For the postulates drawn from social choice theory, see the following supplementary document:

Social Choice Postulates

5.7 Logics of Extreme Probabilities

Lehmann and Magidor (Lehmann and Magidor 1992) noticed an interesting coincidence: the metalogical conditions for preferential consequence relations correspond exactly to the axioms for a logic of conditionals developed by Ernest W. Adams (Adams 1975).[4] Adams’s logic was based on a conditional, \(\Rightarrow\), intended to represent a relation of very high conditional probability: \((p \Rightarrow q)\) means that the conditional probability \(Pr(q/p)\) is extremely close to 1. Adams used the standard delta-epsilon definition of the calculus to make this idea precise. Let us suppose that a theory \(T\) consists of a set of conditional-free formulas (the facts) and a set of probabilistic conditionals. A conclusion \(p\) follows defeasibly from \(T\) if and only if every probability function satisfies the following condition:

For every \(\delta\), there is an \(\varepsilon\) such that, if every fact in \(T\) is assigned a probability at least as high as \(1 - \varepsilon\), and every conditional in \(T\) is assigned a conditional probability at least as high as \(1 - \varepsilon\), then the probability of the conclusion \(p\) is at least \(1 - \delta\).

The resulting defeasible consequence relation is a preferential relation. (It need not, however, be consistency-preserving.) This consequence relation also corresponds to a relation, 0-entailment, defined by Judea Pearl (Pearl 1990), as the common core to all defeasible consequence relations.

Lehmann and Magidor (1992) proposed a variation on Adams’s idea. Instead of using the delta-epsilon construction, they made use of nonstandard measure theory, that is, a theory of probability functions that can take values that are infinitesimals (infinitely small numbers). In addition, instead of defining the consequence relation by quantifying over all probability functions, Lehmann and Magidor assume that we can select a single probability function (representing something like the ideally rational, or objective, probability). On their construction, a conclusion \(p\) follows from \(T\) just in case the probability of \(p\) is infinitely close to 1, on the assumption that the probabilities assigned to members of \(T\) are infinitely close to 1. Lehmann and Magidor proved that the resulting consequence relation is always not only preferential: it is also rational. The logic defined by Lehmann and Magidor also corresponds exactly to the theory of Popper functions, another extension of probability theory designed to handle cases of conditioning on propositions with infinitesimal probability (see Harper 1976; van Fraassen 1995; Hawthorne 1998). For a brief discussion of Popper functions, see the following supplementary document:

Popper Functions

Arló Costa and Parikh, using van Fraassen’s account (van Fraassen, 1995) of primitive conditional probabilities (a variant of Popper functions), proved a representation result for both finite and infinite languages (Arló Costa and Parikh, 2005). For infinite languages, they assumed an axiom of countable additivity for probabilities.

Kraus, Lehmann, and Magidor proved that, for every preferential consequence relation \(\dproves\) that is probabilistically admissible,[5] there is a unique rational consequence relation \(\dproves^*\) that minimally extends it (that is, the intersection of all the rational consequence relations extending \(\dproves\) is itself a rational consequence relation). This relation, \(\dproves^*\), is called the rational closure of \(\dproves\). To find the rational closure of a preferential relation, one can perform the following operation on a preferential structure that supports that relation: assign to each model in the structure the smallest number possible, respecting the preference relation. Judea Pearl also proposed the very same idea under the name 1-entailment or System \(Z\) (Pearl 1990).
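The ranking construction behind System \(Z\) can be sketched computationally. The following Python toy (the representation of defaults and all helper names are my own, not drawn from the literature) partitions a set of defaults by tolerance, ranks worlds by the most serious default they violate, and tests 1-entailment on the familiar penguin example:

```python
from itertools import product

# A toy sketch of Pearl's System Z (1-entailment).  A default (a => c)
# is a pair of predicates on worlds; a world is a dict from atoms to
# booleans.  This encoding is illustrative only.

ATOMS = ["B", "F", "P"]          # bird, flies, penguin

defaults = [
    (lambda w: w["B"], lambda w: w["F"]),        # birds fly
    (lambda w: w["P"], lambda w: w["B"]),        # penguins are birds
    (lambda w: w["P"], lambda w: not w["F"]),    # penguins don't fly
]

worlds = [dict(zip(ATOMS, v)) for v in product([True, False], repeat=len(ATOMS))]

def verifies(w, d):              # antecedent and consequent both true
    return d[0](w) and d[1](w)

def falsifies(w, d):             # antecedent true, consequent false
    return d[0](w) and not d[1](w)

def tolerated(d, delta):
    """d is tolerated by delta iff some world verifies d while
    materially satisfying every default in delta."""
    return any(verifies(w, d) and not any(falsifies(w, e) for e in delta)
               for w in worlds)

def z_ranks(delta):
    """Z-partition: rank 0 = defaults tolerated by the whole set,
    rank 1 = those tolerated by the remainder, and so on."""
    ranks, remaining, r = {}, list(delta), 0
    while remaining:
        layer = [d for d in remaining if tolerated(d, remaining)]
        if not layer:
            raise ValueError("default set is inconsistent")
        for d in layer:
            ranks[id(d)] = r
        remaining = [d for d in remaining if id(d) not in ranks]
        r += 1
    return ranks

ranks = z_ranks(defaults)

def world_rank(w):
    v = [ranks[id(d)] for d in defaults if falsifies(w, d)]
    return 1 + max(v) if v else 0

def entails(premise, conclusion):
    """1-entailment: the conclusion holds in every minimal-rank
    world satisfying the premise."""
    pw = [w for w in worlds if premise(w)]
    m = min(world_rank(w) for w in pw)
    return all(conclusion(w) for w in pw if world_rank(w) == m)

# Specificity falls out automatically: the penguin default wins.
print(entails(lambda w: w["P"], lambda w: not w["F"]))   # True
print(entails(lambda w: w["B"], lambda w: w["F"]))       # True
```

Here the penguin defaults land at rank 1 (neither is tolerated by the whole set), while "birds fly" sits at rank 0; the most normal penguin-worlds are therefore ones that violate only the bird default.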

A critical advantage of the Lehmann-Magidor-Pearl 1-entailment system over Adams’s epsilon-entailment lies in the way in which 1-entailment handles irrelevant information. Suppose, for example, that we know that birds fly \((B \Rightarrow F)\), Tweety is a bird \((B)\), and Nemo is a whale \((W)\). These premises do not epsilon-entail \(F\) (that Tweety flies), since there is no guarantee that a probability function assigns a high probability to \(F\), given the conjunction of \(B\) and \(W\). In contrast, 1-entailment does give us the conclusion \(F\).

Moreover, 1-entailment satisfies a condition of weak independence of defaults: conditionals with logically unrelated antecedents can “fire” independently of each other: one conditional can warrant a conclusion even though we are given an explicit exception to the other. Consider, for example, the following case: birds fly \((B \Rightarrow F)\), Tweety is a bird that doesn’t fly \((B \amp \neg F)\), whales are large \((W \Rightarrow L)\), and Nemo is a whale \((W)\). These premises 1-entail that Nemo is large \((L)\). In addition, 1-entailment automatically satisfies the principle of Specificity: conditionals with more specific antecedents are always given priority over those with less specific antecedents.

There is another form of independence, strong independence, that even 1-entailment fails to satisfy. If we are given one exception to a rule involving a given antecedent, then we are unable to use any conditional with the same antecedent to derive any conclusion whatsoever. Suppose, for example, that we know that birds fly \((B \Rightarrow F)\), Tweety is a bird that doesn’t fly \((B \amp \neg F)\), and birds lay eggs \((B \Rightarrow E)\). Even under 1-entailment, the conclusion that Tweety lays eggs \((E)\) fails to follow. This failure to satisfy Strong Independence is also known as the Drowning Problem (since all conditionals with the same antecedent are “drowned” by a single exception).
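The Drowning Problem can be checked concretely with a small System-Z-style ranking (the encoding and helper names below are mine). With both defaults tolerated by the whole set, both sit at rank 0, and the minimal worlds compatible with a non-flying bird disagree about egg-laying, so the egg-laying conclusion is not entailed:

```python
from itertools import product

# A brute-force illustration of the Drowning Problem (sketch code).
# Defaults: B => F (birds fly) and B => E (birds lay eggs).  Both are
# tolerated by the whole set, so both get Z-rank 0; a world then has
# rank 0 if it violates no default and rank 1 otherwise.

worlds = [dict(zip("BFE", v)) for v in product([True, False], repeat=3)]

defaults = [
    (lambda w: w["B"], lambda w: w["F"]),   # birds fly
    (lambda w: w["B"], lambda w: w["E"]),   # birds lay eggs
]

def rank(w):
    return 1 if any(a(w) and not c(w) for a, c in defaults) else 0

premise = lambda w: w["B"] and not w["F"]       # Tweety: a non-flying bird
pw = [w for w in worlds if premise(w)]
m = min(rank(w) for w in pw)
minimal = [w for w in pw if rank(w) == m]

# The minimal premise-worlds disagree about E, so "Tweety lays eggs"
# is not 1-entailed: the exception to B => F drowns B => E as well.
print(any(w["E"] for w in minimal), all(w["E"] for w in minimal))   # True False
```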

A consensus is growing that the Drowning Problem should not be “solved” (see Pelletier and Elio 1994; Wobcke 1995, 85; Bonevac, 2003, 461–462). Consider the following variant on the problem: birds fly, Tweety is a bird that doesn’t fly, and birds have strong forelimb muscles. Here it seems we should refrain from concluding that Tweety has strong forelimb muscles, since there is reason to doubt that the strength of wing muscles is causally (and hence, probabilistically) independent of capacity for flight. Once we know that Tweety is an exceptional bird, we should refrain from applying other conditionals with Tweety is a bird as their antecedents, unless we know that these conditionals are independent of flight, that is, unless we know that the conditional with the stronger antecedent, Tweety is a non-flying bird, is also true.

Nonetheless, several proposals have been made for securing strong independence and solving the Drowning Problem. Geffner and Pearl (Geffner and Pearl 1992) proposed a system of conditional entailment, a variant of circumscription, in which the preference relation on models is defined in terms of the sets of defaults that are satisfied. This enables Geffner and Pearl to satisfy both the Specificity principle and Strong Independence. Another proposal is the maximum entropy approach (Pearl 1988, 490–496; Goldszmidt, Morris, and Pearl, 1993; Pearl 1990). A theory \(T\), consisting of defaults \(\Delta\) and facts \(F\), entails \(p\) just in case the probability of \(p\), conditional on \(F\), approaches 1 as the probabilities associated with \(\Delta\) approach 1, using the entropy-maximizing[6] probability function that respects the defaults in \(\Delta\). The maximum-entropy approach satisfies both Specificity and Strong Independence.

Every attempt to solve the Drowning Problem (including conditional entailment and the maximum-entropy approach) comes at the cost of sacrificing cumulativity. Securing strong independence makes the systems very sensitive to the exact form in which the default information is stored. Consider, for example, the following case: Swedes are (normally) fair, Swedes are (normally) tall, and Jon is a short Swede. Conditional entailment and maximum-entropy entailment would permit the conclusion that Jon is fair in this case. However, if we replace the first two default conditionals with the single default Swedes are normally both tall and fair, then the conclusion no longer follows, despite the fact that the new conditional is logically equivalent to the conjunction of the two original conditionals.

Applying the logic of extreme probabilities to real-world defeasible reasoning generates an obvious problem, however. We know perfectly well that, in the case of the default rules we actually use, the conditional probability of the conclusion on the premises is nowhere near 1. For example, the probability that an arbitrary bird can fly is certainly not infinitely close to 1. This problem resembles that of using idealizations in science, such as frictionless planes and ideal gases. It seems reasonable to think that, in deploying the machinery of defeasible logic, we indulge in the degree of make-believe necessary to make the formal models applicable. Nonetheless, this is clearly a problem warranting further attention.

5.8 Fully Expressive Languages: Conditional Logics, Higher-Order Probabilities, and Negated Defaults

With relatively few exceptions, the logical approaches to defeasible reasoning developed so far put severe restrictions on the logical form of propositions included in a set of premises. In particular, they require the default conditional operator, \(\Rightarrow\), to have wide scope in every formula in which it appears. Default conditionals are not allowed to be nested within other default conditionals, or within the scope of the usual Boolean operators of propositional logic (negation, conjunction, disjunction, material conditional). This is a very severe restriction and one that is quite difficult to defend. For example, in representing undercutting defeaters, it would be very natural to use a negated default conditional of the form \(\neg((p \amp q) \Rightarrow r)\) to signify that \(q\) defeats \(p\) as a prima facie reason for \(r\). In addition, it seems plausible that one might gain disjunctive default information: for example, that either customers are gullible or salesmen are wily.

Asher and Pelletier (Asher and Pelletier 1997) have argued that, when translating generic sentences in natural language, it is essential that we be allowed to nest default conditionals. For example, consider the following English sentences:

Close friends are (normally) people who (normally) trust one another.

People who (normally) rise early (normally) go to bed early.

In the first case, a conditional is nested within the consequent of another conditional:

\(\forall x \forall y (\textit{Friend}(x,y) \Rightarrow \forall z(\textit{Time}(z) \Rightarrow \textit{Trust}(x,y,z)))\)

In the second case, we seem to have conditionals nested within both the antecedent and the consequent of a third conditional, something like:

\(\forall x (\textit{Person}(x) \rightarrow\)
\(\ \ (\forall y(\textit{Day}(y) \Rightarrow \textit{Rise-early}(x,y))\Rightarrow \forall z (\textit{Day}(z) \Rightarrow\textit{Bed-early}(x,z))))\)

This nesting of conditionals can be made possible by borrowing and modifying the semantics of the subjunctive or counterfactual conditional, developed by Robert Stalnaker and David K. Lewis (Lewis 1973). For an axiomatization of Lewis’s conditional logic, see the following supplementary document:

David Lewis’s Conditional Logic

The only modification that is essential is to drop the condition of Centering (both strong and weak), a condition that makes modus ponens (affirming the antecedent) logically valid. If the conditional \(\Rightarrow\) is to represent a default conditional, we do not want modus ponens to be valid: we do not want \((p \Rightarrow q)\) and \(p\) to entail \(q\) classically (i.e., monotonically). If Centering is dropped, the resulting logic can be made to correspond exactly to either a preferential or a rational defeasible entailment relation. For example, the condition of Rational Monotony is the exact counterpart of the CV axiom of Lewis’s logic:

CV:
\((p \Rightarrow q) \rightarrow [((p \amp r) \Rightarrow q) \vee (p \Rightarrow \neg r)]\)
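The validity of CV in ranked models can be spot-checked computationally. The sketch below (illustrative only; the representation is mine) evaluates the default conditional in ranked models without Centering, reading \((p \Rightarrow q)\) as "every minimal-rank \(p\)-world is a \(q\)-world", and confirms that instances of CV hold under randomly generated rankings:

```python
import random
from itertools import product

# A ranked-model evaluator for the default conditional, with Centering
# dropped: (p => q) is true iff every minimal-rank p-world is a q-world.
# We then spot-check the CV scheme
#     (p => q) -> [((p & r) => q) v (p => ~r)]
# against randomly generated rankings.  (Illustrative sketch only.)

WORLDS = list(product([True, False], repeat=3))    # valuations of p, q, r
P = lambda w: w[0]
Q = lambda w: w[1]
R = lambda w: w[2]

def arrow(rank, ant, cons):
    ant_worlds = [w for w in WORLDS if ant(w)]
    if not ant_worlds:
        return True                                 # vacuously true
    m = min(rank[w] for w in ant_worlds)
    return all(cons(w) for w in ant_worlds if rank[w] == m)

random.seed(0)
for _ in range(1000):
    rank = {w: random.randrange(4) for w in WORLDS}
    cv = (not arrow(rank, P, Q)) \
        or arrow(rank, lambda w: P(w) and R(w), Q) \
        or arrow(rank, P, lambda w: not R(w))
    assert cv, "CV failed -- impossible in a ranked model"
print("CV held in 1000 random ranked models")
```

The underlying reason CV never fails here: if all minimal \(p\)-worlds satisfy \(q\), then either they all satisfy \(\neg r\) (so \(p \Rightarrow \neg r\)), or some minimal \(p\)-world satisfies \(r\), in which case the minimal \((p \amp r)\)-worlds are among the minimal \(p\)-worlds and hence satisfy \(q\).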

Something like this was first proposed by James Delgrande (Delgrande 1987), and the idea has been most thoroughly developed by Nicholas Asher and his collaborators (Asher and Morreau 1991; Asher 1995; Asher and Bonevac 1996; Asher and Mao 2001) under the name Commonsense Entailment.[7] Commonsense Entailment is a preferential (although not a rational) consequence relation, and it automatically satisfies the Specificity principle. It permits the arbitrary nesting of default conditionals within other logical operators, and it can be used to represent undercutting defeaters, through the use of negated defaults (Asher and Mao 2001). Negated defaults, also known as strong negation, also figure in the work of Yi Mao (Mao 2003), M. J. Maher (Maher 2024), and G. Antoniou et al. (Antoniou et al. 2000).

The models of Commonsense Entailment differ significantly from those of preferential logic and the logic of extreme probabilities. Instead of having structures that contain sets of models of a standard, default-free language, a model of the language of Commonsense Entailment includes a set of possible worlds, together with a function that assigns a standard interpretation (a model of the default-free language) to each world. In addition, there is a function \(*\) that assigns to each pair consisting of a world \(w\) and a set of worlds (proposition) \(A\) a set of worlds \({*}(w,A)\). The set \({*}(w,A)\) is the set of most normal \(A\)-worlds, from the perspective of \(w\). A default conditional \((p \Rightarrow q)\) is true in a world \(w\) (in such a model) just in case all of the most normal \(p\)-worlds (from \(w\)’s perspective) are worlds in which \(q\) is also true. Since we can assign truth-conditions to each such conditional, we can define the truth of nested conditionals, whether the conditionals are nested within Boolean operators or within other conditionals. Moreover, we can define both a classical, monotonic consequence relation for this class of models and a defeasible, nonmonotonic relation (in fact, the nonmonotonic consequence relation can be defined in a variety of ways). We can then distinguish between a default conditional’s following with logical necessity from a default theory and its following defeasibly from that same theory. Contraposition, for example — inferring \((\neg q \Rightarrow \neg p)\) from \((p \Rightarrow q)\) — is not logically valid for default conditionals, but it might be a defeasibly correct inference.[8]
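A toy model of this kind can make the truth-conditions concrete. In the hypothetical sketch below (the particular worlds, valuation, and normality order are stipulated by me for illustration), each conditional receives a truth value at each world via the selection function, so conditionals can be nested and negated freely:

```python
# A toy selection-function model: a set of worlds, a valuation, and a
# function star(w, A) picking the most normal A-worlds from w's
# perspective.  The model is hypothetical; the point is only that
# nested and negated conditionals receive truth values world by world.

WORLDS = [0, 1, 2, 3]
VAL = {"a": {0, 1}, "b": {0, 2}, "c": {0}}      # stipulated valuation

def atom(name):
    return lambda w: w in VAL[name]

def star(w, A):
    """Most normal A-worlds from w: here, simply the lowest-numbered
    A-world (a world-independent normality order, for simplicity)."""
    return {min(A)} if A else set()

def arrow(p, q):
    """(p => q) as a proposition: true at w iff every most-normal
    p-world (from w's perspective) satisfies q."""
    return lambda w: all(q(v) for v in star(w, {u for u in WORLDS if p(u)}))

a, b, c = atom("a"), atom("b"), atom("c")

nested = arrow(a, arrow(b, c))           # a => (b => c): a nested conditional
negated = lambda w: not arrow(b, a)(w)   # ~(b => a): a negated default
print([nested(w) for w in WORLDS], [negated(w) for w in WORLDS])
```

Because `arrow` returns an ordinary proposition (a function from worlds to truth values), it composes with negation, disjunction, and further conditionals without any special machinery.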

The one critical drawback to Commonsense Entailment, when compared to the logic of extreme probabilities, is that it lacks a single, clear standard of normativity. The truth-conditions of the default conditional and the definition of nonmonotonic consequence can be fine-tuned to match many of our intuitions, but at the end of the day, the theory of Commonsense Entailment offers no simple answer to the question of what its conditional or its consequence relation are supposed (ideally) to represent.

Logics of extreme probability (beginning with the work of Ernest Adams) did not permit the nesting of default conditionals for this reason: the conditionals were supposed to represent something like subjective conditional probabilities of the agent, to which the agent was supposed to have perfect introspective access. Consequently, it made no sense to nest these conditionals within disjunctions (as though the agent couldn’t tell which disjunct represented his actual probability assignment) or within other conditionals (since the subjective probability of a subjective probability is always trivial: either exactly 1 or exactly 0). However, there is no reason why the logic of extreme probabilities couldn’t be given a different interpretation, with \((p \Rightarrow q)\) representing something like the objective probability of \(q\), conditional on \(p\), is infinitely close to 1. In this case, it makes perfect sense to nest such statements of objective conditional probability within Boolean operators (either the probability of \(q\) on \(p\) is close to 1, or the probability of \(r\) on \(s\) is close to 1), or within operators of objective probability (the objective probability that the objective probability of \(p\) is close to 1 is itself close to 1). What is required in the latter case is a theory of higher-order probabilities.

Fortunately, such a theory of higher-order probabilities is available(see Skyrms 1980; Gaifman 1988). The central principle of this theoryis Miller’s principle. For a description of the models of thelogic of extreme, higher-order probability, see the followingsupplementary document:

Models of Higher-Order Probability

The following proposition is logically valid in this logic,representing the presence of a defeasible modus ponens rule:

\(((p \amp (p \Rightarrow q)) \Rightarrow q)\)

This system can be the basis for a family of rational nonmonotonicconsequence relations that include the Adams\(\varepsilon\)-entailment system as a proper part (see Koons 2000,298–319).

5.9 Objections to Nonmonotonic Logic

5.9.1 Confusing Logic and Epistemology?

In an early paper (Israel 1980), David Israel raised a number of objections to the very idea of nonmonotonic logic. First, he pointed out that the nonmonotonic consequences of a finite theory are typically not semi-decidable (recursively enumerable). This remains true of most current systems, but it is also true of second-order logic, infinitary logic, and a number of other systems that are now accepted as logical in nature.

Secondly, and more to the point, Israel argued that the concept of nonmonotonic logic evinces a confusion between the rules of logic and rules of inference. In other words, Israel accused defenders of nonmonotonic logic of confusing a theory of defeasible inference (a branch of epistemology) with a theory of genuine consequence relations (a branch of logic). Inference is nonmonotonic, but logic (according to Israel) is essentially monotonic.

The best response to Israel is to point out that, like deductive logic, a theory of nonmonotonic or defeasible consequence has a number of applications besides that of guiding actual inference. Defeasible logic can be used as part of a theory of scientific explanation, and it can be used in hypothetical reasoning, as in planning. It can be used to interpret implicit features of stories, even fantastic ones, so long as it is clear which actual default rules to suspend. Thus, defeasible logic extends far beyond the boundaries of the theory of epistemic justification. Moreover, as we have seen, nonmonotonic consequence relations (especially the preferential ones) share a number of very significant formal properties with classical consequence, warranting the inclusion of them all in a larger family of logics. From this perspective, classical deductive logic is simply a special case: the study of indefeasible consequence.

5.9.2 Problems with the Deduction Theorem

In a recent paper, Charles Morgan (Morgan 2000) has argued that nonmonotonic logic is impossible. Morgan offers a series of impossibility proofs. All of Morgan’s proofs turn on the fact that nonmonotonic logics cannot support a generalized deduction theorem, i.e., something of the following form:

\(\Gamma \cup \{p\} \dproves q\) iff \(\Gamma \dproves (p \Rightarrow q)\)

Morgan is certainly right about this.

However, there are good grounds for thinking that a system of nonmonotonic logic should fail to include a generalized deduction theorem. The very nature of defeasible consequence ensures that it must be so. Consider, for example, the left-to-right direction: suppose that \(\Gamma \cup \{p\} \dproves q\). Should it follow that \(\Gamma \dproves (p \Rightarrow q)\)? Not at all. It may be that, normally, if \(p\) then \(\neg q\), but \(\Gamma\) may contain defaults and information that defeat and override this inference. For instance, it might contain the fact \(r\) and the default \(((r \amp p) \Rightarrow q)\). Similarly, consider the right-to-left direction: suppose that \(\Gamma \dproves (p \Rightarrow q)\). Should it follow that \(\Gamma \cup \{p\} \dproves q\)? Again, clearly not. \(\Gamma\) might contain both \(r\) and a default \(((p \amp r) \Rightarrow \neg q)\), in which case \(\Gamma \cup \{p\} \dproves \neg q\).

It would be reasonable, however, to demand that a system of nonmonotonic logic satisfy the following special deduction theorem:

\(\{p\} \dproves q\) iff \(\varnothing \dproves (p \Rightarrow q)\)

This is certainly possible. The special deduction theorem holds trivially if we define \(\{p\} \dproves q\) as \(\varnothing \vDash (p \Rightarrow q)\); that is, \(\{p\}\) defeasibly entails \(q\) if and only if (by definition) \((p \Rightarrow q)\) is a theorem of the classical conditional logic.[9]

5.9.3 Lack of Compactness

When we draw classical conclusions from infinite theories, we can rely confidently on the property of logical compactness: whatever follows logically from an infinite set also follows from some finite subset. However, defeasible logics are not in general compact. Consequently, we can find ourselves uncertain about whether a proposition is really a defeasible consequence of a theory, even given a convincing argument for it. Éric Martin (Martin 2019) has explored a notion of weak compactness which may be of help.

6. Causation and Defeasible Reasoning

6.1 The Need for Explicit Causal Information

Hanks and McDermott, computer scientists at Yale, demonstrated that the existing systems of nonmonotonic logic were unable to give the right solution to a simple problem about predicting the course of events (Hanks and McDermott 1987). The problem became known as the Yale shooting problem. Hanks and McDermott assume some sort of law of inertia: that normally the properties of things do not change. In the Yale shooting problem, there are two relevant properties: being loaded (a property of a gun) and being alive (a property of the intended victim of the shooting). Let’s assume that in the initial situation, \(s_0\), the gun is loaded and the victim is alive, Loaded\((s_0)\) and Alive\((s_0)\), and that two actions are performed in sequence: Wait and Shoot. Let’s call the situation that results from a moment of waiting \(s_1\), and the situation that follows both waiting and then shooting \(s_2\). There are then three instances of the law of inertia that are relevant:

  • Alive\((s_0) \Rightarrow\) Alive\((s_1)\)
  • Loaded\((s_0) \Rightarrow\) Loaded\((s_1)\)
  • Alive\((s_1) \Rightarrow\) Alive\((s_2)\)

We need to make one final assumption: that shooting the victim with aloaded gun results in death (not being alive):

  • (Alive\((s_1)\) & Loaded\((s_1)\)) \(\rightarrow \neg\) Alive\((s_2)\)

Intuitively, we should be able to derive the defeasible conclusion that the victim is still alive after waiting, but dead after waiting and shooting: Alive\((s_1) \amp \neg\) Alive\((s_2)\). However, none of the nonmonotonic logics described above give us this result, since each of the three instances of the law of inertia can be violated: by the victim’s inexplicably dying while we are waiting, by the gun’s miraculously becoming unloaded while we are waiting, or by the victim’s dying as a result of the shooting. Nothing introduced into nonmonotonic logic up to this point provides us with a basis for preferring the second exception to the law of inertia to the first or third. What’s missing is a recognition of the importance of causal structure to defeasible consequence.[10]
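The anomaly can be verified by brute force. The sketch below (my own encoding, not that of any particular system) fixes Alive\((s_0)\) and Loaded\((s_0)\), enumerates the assignments consistent with the causal law, and minimizes the set of violated inertia defaults under set inclusion; the intended model and the anomalous ones come out as incomparable minima:

```python
from itertools import product

# A brute-force illustration of the Yale shooting anomaly (sketch).
# We enumerate assignments to Alive(s1), Loaded(s1), Alive(s2), impose
# the causal law as a hard constraint, and keep the models whose set
# of violated inertia defaults is minimal under set inclusion.

# Defaults, as (name, test-for-violation) pairs:
defaults = [
    ("Alive(s0) => Alive(s1)",   lambda a1, l1, a2: not a1),
    ("Loaded(s0) => Loaded(s1)", lambda a1, l1, a2: not l1),
    ("Alive(s1) => Alive(s2)",   lambda a1, l1, a2: a1 and not a2),
]

def hard(a1, l1, a2):    # shooting a live victim with a loaded gun kills
    return not (a1 and l1) or not a2

models = []
for a1, l1, a2 in product([True, False], repeat=3):
    if hard(a1, l1, a2):
        violated = frozenset(n for n, v in defaults if v(a1, l1, a2))
        models.append(((a1, l1, a2), violated))

# Keep models whose violation set is minimal under set inclusion:
minimal = [(m, v) for m, v in models
           if not any(v2 < v for _, v2 in models)]

for m, v in sorted(minimal, key=str):
    print(m, sorted(v))
```

Running this yields four incomparable minimal models: the intended one (alive, loaded, then dead, violating only the third inertia default) alongside anomalous ones in which the gun mysteriously unloads or the victim dies while we wait. Nothing in bare minimization prefers the intended model.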

There are several even simpler examples that illustrate the need to include explicitly causal information in the input to defeasible reasoning. Consider, for instance, this problem of Judea Pearl’s (Pearl 1988): if the sprinkler is on, then normally the sidewalk is wet, and, if the sidewalk is wet, then normally it is raining. However, we should not infer that it is raining from the fact that the sprinkler is on. (See Lifschitz 1990 and Lin and Reiter 1994 for additional examples of this kind.) Similarly, if we also know that if the sidewalk is wet, then it is slippery, we should be able to infer that the sidewalk is slippery if the sprinkler is on and it is not raining.

The distinction between causal and evidential rules has been used inthe argumentative-narrative model of reasoning with evidence developedby Floris Bex and his colleagues (Bex et al. 2010; Bex 2011).

6.2 Causally Grounded Independence Relations

Hans Reichenbach, in his analysis of the interaction of causality and probability (Reichenbach 1956), observed that the immediate causes of an event probabilistically screen off from that event any other event that is not causally posterior to it. This means that, given the immediate causal antecedents of an event, the occurrence of that event is rendered probabilistically independent of any information about non-posterior events. When this insight is applied to the nonmonotonic logic of extreme probabilities, we can use causal information to identify which defaults function independently of others: that is, we can decide when the fact that one default conditional has an exception is irrelevant to the question of whether a second conditional is also violated (see Koons 2000, 320–323). In effect, we have a selective version of Independence of Defaults that is grounded in causal information, enabling us to dissolve the Drowning Problem.

For example, in the case of Pearl’s sprinkler, since rain is causally prior to the sidewalk’s being wet, the causal structure of the situation does not ensure that the rain is probabilistically independent of whether the sprinkler is on, given the fact that the sidewalk is wet. That is, we have no grounds for thinking that the probability of rain, conditional on the sidewalk’s being wet, is identical to the probability of rain, conditional on the sidewalk’s being wet and the sprinkler’s being on (presumably, the former is higher than the latter). This failure of independence prevents us from using the (Wet \(\Rightarrow\) Rain) default, in the presence of the additional fact that the sprinkler is on.

In the case of the Yale shooting problem, the state of the gun’s being loaded in the aftermath of waiting, Loaded\((s_1)\), has as its only causal antecedent the fact that the gun is loaded in \(s_0\). The fact of Loaded\((s_0)\) screens off the fact that the victim is alive in \(s_0\) from the conclusion Loaded\((s_1)\). Similarly, the fact that the victim is alive in \(s_0\) screens off the fact that the gun is loaded in \(s_0\) from the conclusion that the victim is still alive in \(s_1\). In contrast, the fact that the victim is alive at \(s_1\) does not screen off the fact that the gun is loaded at \(s_1\) from the conclusion that the victim is still alive at \(s_2\). Thus, we can assign higher priority to the law of inertia with respect to both Loaded and Alive at \(s_0\), and we can conclude that the victim is alive and the gun is loaded at \(s_1\). The causal law for shooting then gives us the desired conclusion, namely, that the victim is dead at \(s_2\).

6.3 Causal Circumscription

Our knowledge of causal relatedness is itself very partial. In particular, it is difficult for us to verify conclusively that any two randomly selected facts are or are not causally related. It seems that in practice we apply something like Occam’s razor, assuming that two randomly selected facts are not causally related unless we have positive reason for thinking otherwise. This invites the use of something like circumscription, minimizing the extension of the predicate causes. (This is in fact exactly what Fangzhen Lin does in his 1995 papers [Lin 1995].)

Once we have a set of tentative conclusions about the causal structure of the world, we can use Reichenbach’s insight to localize the problem of reasoning by default in the presence of known abnormality. If a known abnormality is screened off from a default rule’s consequent by a constituent of its antecedent, then the rule may legitimately be deployed.

Since circumscription is itself a nonmonotonic logical system, there are at least two independent sources of nonmonotonicity, or defeasibility: the minimization or circumscription of causal relevance, and the application of defeasible causal laws and laws of inertia.

A number of researchers in artificial intelligence have recently deployed one version of circumscription (namely, the stable models of Gelfond and Lifschitz [1988]) to problems of causal reasoning, building on an idea of Norman McCain and Hudson Turner’s [McCain and Turner 1997]. McCain and Turner employ causal rules that specify when an atomic fact is adequately caused and when it is exogenous and not in need of causal explanation. They then assume a principle of universal causation, permitting only those models that provide adequate causal explanations for all non-exempt atomic facts, while in effect circumscribing the extension of the causally explained. This approach has been extended and applied by Giunchiglia, Lee, Lifschitz, McCain and Turner [2004], Ferraris [2007], and Ferraris, Lee, Lierler, Lifschitz and Yang [2012]. Joohyung Lee and Yi Wang (Lee and Wang 2016) have focused on introducing relative weights to the rules.

7. Implementations

7.1 Reasoning about Probabilities

Sections 5.7 and 5.8 above discussed probabilistic semantics for defeasible logics. It is also possible to reason defeasibly about propositions that explicitly involve numerical probabilities. We can reason defeasibly about propositions that assign specific probabilities to other propositions or that assert numerical relations (identities, inequalities) among such probabilities.

Chitta Baral, Michael Gelfond, and Nelson Rushton (Baral et al. 2009) have developed a declarative language, P-log, which combines defeasible logic with Bayesian probability nets. They use Answer Set Prolog to provide the logical foundations. They make use of a version of the principle of indifference. Belief revision occurs by means of Bayesian conditioning. Baral et al. demonstrate that P-log can reason correctly about the Monty Hall problem and Simpson’s paradox. See Gelfond and Kahl 2014, pp. 235–270 for the syntax and semantics of P-log.

Joohyung Lee and Yi Wang (Lee and Wang 2016) take a somewhat different approach, using the log-linear models of Markov Logic (Richardson and Domingos 2006), which they argue are a natural way to add probabilistic information to stable semantics for logic programming languages. A Markov Logic network is a way of finding a probability distribution for a Markov chain that is stationary, i.e., stable with respect to updating. Their approach incorporates the ProbLog model of Fierens et al. 2015 as a special case.

Anthony Hunter has developed a strategy for using argumentation theory to reason with incomplete and even inconsistent information about probabilities (Hunter 2020). Once inconsistencies have been eliminated by belief contraction, Hunter relies on the maximum entropy distribution that is consistent with the remaining constraints to define the optimal probability function.

7.2 Software Implementations

ArguMed by Verheij (Verheij 2005) computes a version of stable semantics. Chris Reed and Glenn Rowe (Reed and Rowe 2004) have developed Araucaria, an application for analyzing and diagramming legal arguments. Prakken’s ASPIC+ system (Prakken 2010) can be used in analyzing formal argumentative structures.

Michael Gelfond and Yulia Kahl (Gelfond and Kahl 2014, 131–151) discuss how to develop algorithms for efficiently computing answer sets for logic programming. They describe inference engines that can act as answer-set programming solvers.

Benzmüller et al. (2018) includes a wide variety of implementations of defeasible reasoning machines.

7.3 Efficiency in Updating

An important practical problem that arises from applying formal models of defeasible reasoning is that of updating in light of new or retracted information. Must we re-compute the nonmonotonic consequences from scratch each time updating is required?

Beishui Liao and his collaborators (Liao et al. 2011) addressed the issue of the computational dynamics of argument systems by investigating under which conditions an argument system can be divided into modules, so that the implications of new information can be efficiently computed by updating only the affected module. They discovered that such modularity is possible for any semantics that has the property of directionality. A semantics is directional if and only if, for every argument structure AS, the intersection of any extension prescribed for AS with an unattacked set \(U\) is identical to one of the extensions prescribed for the restriction of AS to \(U\), and vice versa. See also Baroni et al. 2018.

Bibliography

  • Adams, Ernest W., 1975,The Logic of Conditionals,Dordrecht: Reidel.
  • Alchourrón, C., Gärdenfors, P. and Makinson, D., 1982,“On the logic of theory change: contraction functions and theirassociated revision functions”,Theoria, 48:14–37.
  • Antoniou, G., Billington, D., Governatori, G., & Maher, M. J.,2000, “A Flexible Framework for Defeasible Logics,”Proceedings of the Seventeenth National Conference on ArtificialIntelligence (AAAI-2000), AAAI Press, 405–410.
  • Arló Costa, Horacio and Parikh, Rohit, 2005,“Conditional Probability and Defeasible Inference”,Journal of Philosophical Logic, 34: 97–119.
  • Armstrong, David M., 1983,What is a law of nature?, NewYork: Cambridge University Press.
  • –––, 1997,A world of states ofaffairs, Cambridge: Cambridge University Press.
  • Asarnow, S., 2016, “Rational Internalism”,Ethics, 127: 147–78.
  • Asher, Nicholas, 1992, “A Truth Conditional, DefaultSemantics for Progressive”,Linguistics and Philosophy,15: 469–508.
  • –––, 1995, “Commonsense Entailment: alogic for some conditionals”, inConditionals in ArtificialIntelligence, G. Crocco, L. Farinas del Cerro, and A. Hertzig(eds.), Oxford: Oxford University Press.
  • Asher, Nicholas and Daniel Bonevac, 1996, “Prima FacieObligations”,Studia Logica, 57: 19–45.
  • Asher, Nicholas, and Alex Lascarides, 2003,Logics ofConversation, Cambridge: Cambridge University Press.
  • Asher, N. and Y. Mao, 2001, “Negated Defaults in Commonsense Entailment”, Bulletin of the Section of Logic, 30: 4–60.
  • Asher, Nicholas, and Michael Morreau, 1991, “CommonsenseEntailment: A Modal, Nonmonotonic Theory of Reasoning”, inProceedings of the Twelfth International Joint Conference onArtificial Intelligence, John Mylopoulos and Ray Reiter (eds.),San Mateo, Calif.: Morgan Kaufmann.
  • Asher, N., and F.J. Pelletier, 1997, “Generics andDefaults”, inHandbook of Logic and Language, J. vanBentham and A. ter Meulen (eds.), Amsterdam: Elsevier.
  • Ashley, Kevin D., 1990,Modeling legal argument: Reasoningwith cases and hypotheticals, Cambridge, MA: MIT Press.
  • Baker, A. B., 1988, “A simple solution to the Yale shootingproblem”, inProceedings of the First InternationalConference on Knowledge Representation and Reasoning, Ronald J.Brachman, Hector Levesque and Ray Reiter (eds.), San Mateo, Calif.:Morgan Kaufmann.
  • Bamber, Donald, 2000, “Entailment with Near Surety of ScaledAssertions of High Conditional Probability”,Journal ofPhilosophical Logic, 29: 1–74.
  • Baroni, Pietro, Massimiliano Giacomin, and Beishui Liao, 2018,“Locality and Modularity in Abstract Argumentation”, in P.Baroni, D. Gabbay, Massimiliano Giacomin, and Leendert van der Torre(eds.),The Handbook of Formal Argumentation, London: CollegePublications, 937–979.
  • Baroni, Pietro, Eduardo Fermé, Massimiliano Giacomin, andGuillermo Ricardo Simari, 2022, “Belief Revision andComputational Argumentation: A Critical Comparison”,Journalof Logic, Language and Information, 31: 555–589.
  • Barwise, Jon and John Perry, 1983,Situations andAttitudes. MIT Press.
  • Bench-Capon, T. J. M., H. Prakken, and G Sartor, 2009,“Argumentation in legal reasoning”, in I. Rahwan and G. R.Simari (eds.),Argumentation in Artificial Intelligence,Dordrecht: Springer, pp. 363–382.
  • Benzmüller, Christoph, Francesco Ricca, Xavier Parent, andDumitru Roman, 2018,Rules and Reasoning: Second InternationalJoint Conference RuleML+RR 2018, Cham: Springer.
  • Bex, F. J., 2011,Arguments, stories and criminal evidence: Aformal hybrid theory, Dordrecht: Springer.
  • Bex, F. J., P. van Koppen, H. Prakken, and B. Verheij, 2010,“A hybrid formal theory of arguments, stories and criminalevidence”,Artificial Intelligence and Law, 18(2):123–152.
  • Bochman, Alexander, 2001,A Logical Theory of NonmonotonicInference and Belief Change, Berlin: Springer.
  • Bodanza, Gustavo A. and F. Tohmé, 2005, “LocalLogics, Non-Monotonicity and Defeasible Argumentation”,Journal of Logic, Language and Information, 14:1–12.
  • Bonevac, Daniel, 2003,Deduction: Introductory SymbolicLogic, Malden, Mass.: Blackwell, 2nd edition.
  • –––, 1998, “Against ConditionalObligation”,Noûs, 32: 37–53.
  • –––, 2018, “Defaulting on Reasons”,Noûs, 52(2): 229–259.
  • –––, 2019, “Free choice reasons”,Synthese, 196: 735–760.
  • Broome, John, 2001, “Normative Practical Reasoning”, Proceedings of the Aristotelian Society Supplement, 75: 175–93.
  • –––, 1999, “Normative Requirements”,Ratio, 12: 398–419; reprinted in J. Dancy (ed.),Normativity. Oxford: Blackwell, 2000.
  • Brunero, John, 2022, “Reasons and Defeasible Reasoning”, The Philosophical Quarterly, 72(1): 41–64.
  • Carnap, Rudolf, 1962,Logical Foundations of Probability,Chicago: University of Chicago Press.
  • Carnap, Rudolf and Richard C. Jeffrey, 1980,Studies ininductive logic and probability, Berkeley: University ofCalifornia Press.
  • Cartwright, Nancy, 1983,How the laws of physics lie,Oxford: Clarendon Press.
  • Chisholm, Roderick, 1957,Perceiving, Princeton:Princeton University Press.
  • –––, 1963, “Contrary-to-Duty Imperativesand Deontic Logic”,Analysis, 24: 33–36.
  • –––, 1966,Theory of Knowledge,Englewood Cliffs: Prentice-Hall.
  • Dancy, Jonathan, 1993,Moral Reasons, Malden, MA:Wiley-Blackwell.
  • –––, 2004,Ethics without Principles,Oxford: Clarendon Press.
  • Delgrande, J. P., 1987, “A first-order conditional logic forprototypical properties”,Artificial Intelligence, 33:105–130.
  • Dung, Phan Minh, 1995, “On the acceptability of argumentsand its fundamental role in non-monotonic reasoning logic programmingand n-person games”,Artificial Intelligence, 77:321–357.
  • Dung, Phan Minh and Phan Minh Thang, 2010, “Towards (probabilistic) argumentation for jury-based dispute resolution”, in P. Baroni, F. Cerutti, M. Giacomin, and G. R. Simari (eds.), Computational Models of Argument: Proceedings of COMMA 2010, Amsterdam: IOS Press, 171–182.
  • –––, 2018, “Fundamental properties ofattack relations in structured argumentation with priorities”,Artificial Intelligence, 255: 1–42.
  • Etherington, D. W. and R. Reiter, 1983, “On InheritanceHierarchies and Exceptions”, inProceedings of the NationalConference on Artificial Intelligence, Los Altos, Calif.: MorganKaufmann.
  • Ferraris, Paolo, 2007, “A Logic Programming Characterizationof Causal Theories”,Proceedings of the TwentiethInternational Joint Conference on Artificial Intelligence, SanFrancisco, Calif.: Morgan Kaufmann.
  • Ferraris, Paolo, with J. Lee, Y. Lierler, V. Lifschitz, and F.Yang, 2012, “Representing first-order causal theories by logicprograms”,Theory and Practice of Logic Programming,12(3): 383–412.
  • Fierens, D, G. van den Broeck, J. Renkens, D. Shterionov, B.Guttman, I. Thon, G. Janssens, and L. de Readt, 2015, “Inferenceand learning in probabilistic logic using weighted Booleanformulas”,Theory and Practice of Logic Programming,15(03): 358–401.
  • Fitting, M., 2014, “Possible world semantics for first-orderlogic of proofs”,Annals of Pure and Applied Logic,165(1): 225–240. doi:10.1016/j.apal.2013.07.011
  • Freund, M., with D. Lehmann, and D. Makinson, 1990,“Canonical extensions to the infinite case of finitarynonmonotonic inference relations”, inProceedings of theWorkshop on Nonmonotonic Reasoning, G. Brewka and H. Freitag(eds.), Sankt Augustin: Gesellschaft für Mathematic undDatenverarbeitung mbH.
  • Freund, M., 1993, “Injective models and disjunctiverelations”,Journal of Logic and Computation, 3:231–347.
  • Gabbay, D. M., 1985, “Theoretical foundations fornon-monotonic reasoning in expert systems”, inLogics andModels of Concurrent Systems, K. R. Apt (ed.), Berlin:Springer-Verlag.
  • Gaifman, Haim, 1988, “A theory of higher-orderprobabilities”, inCausation, Chance and Credence,Brian Skyrms and William Harper (eds.), London, Ontario: University ofWestern Ontario Press.
  • Gärdenfors, P., 1978, “Conditionals and Changes ofBelief”,Acta Fennica, 30: 381–404.
  • –––, 1986, “Belief revisions and theRamsey test for conditionals”,Philosophical Review,95: 81–93.
  • –––, 2000,Conceptual spaces: The geometryof thought, Cambridge, MA: MIT press.
  • –––, 2014,The geometry of meaning:Semantics based on conceptual spaces, Cambridge, MA: MITPress.
  • Geffner, H. A., 1992,Default Reasoning: Causal andConditional Theories, Cambridge, MA: MIT Press.
  • Geffner, H. A., and J. Pearl, 1992, “Conditional entailment:bridging two approaches to default reasoning”,ArtificialIntelligence, 53: 209–244.
  • Gelfond, Michael and Yulia Kahl, 2014,KnowledgeRepresentation, Reasoning, and the Design of Intelligent Agents: TheAnswer-Set Programming Approach, Cambridge: Cambridge UniversityPress.
  • Gelfond, Michael and Leone, N., 2002, “Logic programming and knowledge representation—the A-Prolog perspective”, Artificial Intelligence, 138: 3–38.
  • Gelfond, Michael and Lifschitz, Vladimir, 1988, “The stablemodel semantics for logic programming”,Logic Programming:Proceedings of the Fifth International Conference and Symposium,Robert A. Kowalski and Kenneth A. Bowen (eds.), Cambridge, Mass.: TheMIT Press, pp. 1070–1080.
  • Gilio, Angelo, 2005, “Probabilistic Logic under Coherence,Conditional Interpretations, and Default Reasoning”,Synthese, 146: 139–152.
  • Ginsberg, M. L., 1987,Readings in NonmonotonicReasoning, San Mateo, Calif.: Morgan Kaufmann.
  • Giunchiglia, E., with J. Lee, V. Lifschitz, N. McCain, and H.Turner, 2004, “Nonmonotonic Causal Theories”,Artificial Intelligence, 153: 49–104.
  • Goldszmidt, M. and J. Pearl, 1992, “Rank-Based Systems: ASimple Approach to Belief Revision, Belief Update, and Reasoning aboutEvidence and Action”, inProceedings of the ThirdInternational Conference on Principles of Knowledge Representation andReasoning, San Mateo, Calif.: Morgan Kaufmann.
  • Goldszmidt, M., with P. Morris, and J. Pearl, 1993, “Amaximum entropy approach to nonmonotonic reasoning”,IEEETransactions on Pattern Analysis and Machine Intelligence, 15:220–232.
  • Grove, A., 1988, “Two modellings for theory change”,Journal of Philosophical Logic, 17: 157–170.
  • Hage, J. C., 1997,Reasoning with rules: An essay on legalreasoning and its underlying logic, Dordrecht: KluwerAcademic.
  • –––, 2000, “Dialectical models inartificial intelligence and law”,Artificial Intelligenceand Law, 8: 137–172.
  • Hanks, Steve and Drew McDermott, 1987, “Nonmonotonic Logicand Temporal Projection”,Artificial Intelligence, 33:379–412.
  • Hansson, B., 1969, “An analysis of some deonticlogics”,Noûs, 3: 373–398.
  • Hansson, S. O. and Makinson, D., 1997, “Applying normativerules with restraint”, inLogic and Scientific Methods,M. Dalla Chiara (ed.), Dordrecht: Kluwer.
  • Harper, W. L., 1976, “Rational Belief Change, PopperFunctions and Counterfactuals”, inFoundations ofProbability Theory, Statistical Inference, and Statistical Theories ofScience, Volume I, Dordrecht: Reidel.
  • Hart, H. L. A., 1949, “The ascription of responsibility andrights”,Proceedings of the Aristotelian Society,49(1): 171–194.
  • Hawthorne, James, 1998, “On the Logic of NonmonotonicConditionals and Conditional Probabilities: Predicate Logic”,Journal of Philosophical Logic, 27: 1–34.
  • Horty, J. F., with R.H. Thomason, and D.S. Touretzky, 1990,“A sceptical theory of inheritance in nonmonotonic semanticnetworks”,Artificial Intelligence, 42:311–348.
  • Horty, John, 1994, “Moral dilemmas and nonmonotoniclogic”,Journal of Philosophical Logic, 23:35–65.
  • –––, 2003, “Reasoning with moralconflicts”,Noûs, 37: 557–605.
  • –––, 2007a, “Defaults withPriorities”,Journal of Philosophical Logic, 36:367–413.
  • –––, 2007b, “Reasons as defaults”,Philosophers’ Imprints, 7: 1–28.
  • –––, 2012,Reasons as Defaults, Oxford:Oxford University Press.
  • Hu, Ivan, 2020, “Defeasible Tolerance and the Sorites”, Journal of Philosophy, 117(4): 181–218.
  • Hunter, Anthony, 2013, “A probabilistic approach tomodelling uncertain logical arguments”,InternationalJournal of Approximate Reasoning, 54(1): 47–81.
  • –––, 2020, “Reasoning with Inconsistent Knowledge using the Epistemic Approach to Probabilistic Argumentation”, Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning (KR’20), Palo Alto: AAAI Press.
  • Israel, David, 1986, “What’s Wrong with Non-monotonicLogic”, inProceedings of the First National Conference onArtificial Intelligence, Palo Alto: AAAI Press.
  • Karacapilidis, N., and D. Papadias, 2001, “Computersupported argumentation and collaborative decision making: The HERMESsystem”,Information Systems, 26: 259–277.
  • Komath, Muhammed, 2024, “Defeasible Reasoning in IslamicLegal Theory”,Informal Logic, 44(3):431–467.
  • Konolige, Kurt, 1994, “Autoepistemic Logic”, inHandbook of Logic in Artificial Intelligence and LogicProgramming, Volume III: Nonmonotonic Reasoning and UncertainReasoning, D. M. Gabbay, C. J. Hogger, and J. A. Robinson (eds.),Oxford: Clarendon Press.
  • Koons, Robert C., 2000,Realism Regained: An Exact Theory ofCausation, Teleology and the Mind, New York: Oxford UniversityPress.
  • –––, 2001, “Defeasible Reasoning, SpecialPleading and the Cosmological Argument: Reply to Oppy”,Faith and Philosophy, 18: 192–203.
  • Kraus, S., with D. Lehmann, and M. Magidor, 1990,“Nonmonotonic Reasoning, Preferential Models and CumulativeLogics”,Artificial Intelligence, 44:167–207.
  • Kyburg, Henry E., 1983,Epistemology and Inference,Minneapolis: University of Minnesota Press.
  • –––, 1990,Knowledge Representation andDefeasible Reasoning, Dordrecht: Kluwer.
  • Lance, Mark and Margaret Little, 2004, “Defeasibility andthe normative grasp of context”,Erkenntnis, 61:435–55.
  • –––, 2007, “Where the laws are”,Oxford Studies in Metaethics, 2: 149–171.
  • Lascarides, Alex and Nicholas Asher, 1993, “TemporalInterpretation, Discourse Relations and Commonsense Entailment”,Linguistics and Philosophy, 16: 437–493.
  • Lee, Joohyung and Yi Wang, 2016, “Weighted Rules under theStable Model Semantics”, inProceedings of the 15thInternational Conference on Principles of Knowledge Representation andReasoning (KR 2016), Palo Alto: AAAI Press, pp.145–154.
  • Lehmann, D., and M. Magidor, 1992, “What does a conditionalknowledge base entail?”,Artificial Intelligence, 55:1–60.
  • Levesque, H., 1990, “A study in autoepistemic logic”,Artificial Intelligence, 42: 263–309.
  • Lewis, David K., 1973,Counterfactuals, Cambridge, Mass.:Harvard University Press.
  • Liao, Beishui, Li Jin, and Robert C. Koons, 2011, “Dynamicsof argumentation systems: A division-based method”,Artificial Intelligence, 175(11): 1790–1814.
  • Liao, Beishui, Nir Oren, Leendert van der Torre, and SerenaVillata, 2019, “Prioritized Norms in FormalArgumentation”,Journal of Logic and Computation,29(2): 215–240.
  • Lifschitz, V., 1988, “Circumscriptive theories: alogic-based framework for knowledge representation”,Journalof Philosophical Logic, 17: 391–441.
  • –––, 1989, “Benchmark Problems for FormalNonmonotonic Reasoning”, inNon-Monotonic Reasoning, M.Reinfrank, J. de Kleer, M. L. Ginsberg and E. Sandewall (eds.),Berlin: Springer-Verlag.
  • –––, 1990, “Frames in the space ofsituations”,Artificial Intelligence, 46:365–376.
  • Lin, Fangzhen, 1995, “Embracing causality in specifying theindirect effects of actions”,Proceedings of the FourteenthInternational Joint Conference on Artificial Intelligence, SanMateo, Calif.: Morgan Kaufmann, pp. 1985–1993.
  • Lin, Fangzhen, and Robert Reiter, 1994, “State constraintsrevisited”,Journal of Logic and Computation, 4:655–678.
  • Lindström, Sten, 2022, “A semantic approach tononmonotonic reasoning: Inference operations and choice”,Theoria, 88: 494–528. doi:10.1111/thoe.12405
  • Lukasiewicz, Thomas, 2005, “Nonmonotonic ProbabilisticReasoning under Variable-Strength Inheritance with Overriding”,Synthese, 146: 153–169.
  • McCain, Norman and Hudson Turner, 1997, “Causal theories ofaction and change”, inProceedings of the FourteenthNational Conference on Artificial Intelligence (AAAI-97),460–465. The MIT Press.
  • McCarthy, John M. and Patrick J. Hayes, 1969, “SomePhilosophical Problems from the Standpoint of ArtificialIntelligence”, inMachine Intelligence 4, B. Meltzerand D. Mitchie (eds.), Edinburgh: Edinburgh University Press.
  • –––, 1977, “Epistemological Problems ofArtificial Intelligence”, inProceedings of the 5thInternational Joint Conference on Artificial Intelligence,Pittsburgh: Computer Science Department, Carnegie-MellonUniversity.
  • –––, 1980, “Circumscription — A Form of Non-Monotonic Reasoning”, Artificial Intelligence, 13: 27–39, 171–177.
  • –––, 1986, “Application of Circumscriptionto Formalizing Common-Sense Knowledge”,ArtificialIntelligence, 28: 89–111.
  • McDermott, Drew and Jon Doyle, 1980, “Non-Monotonic Logic I”, Artificial Intelligence, 13: 41–72.
  • Maher, M. J., 2024, “Which are the true defeasible logics?”, Journal of Applied Non-Classical Logics, first online 12 Aug 2024. doi:10.1080/11663081.2024.2386918
  • Makinson, David, 1994, “General Patterns in NonmonotonicReasoning”, inHandbook of Logic in Artificial Intelligenceand Logic Programming, Volume III: Nonmonotonic Reasoning andUncertain Reasoning, D. M. Gabbay, C. J. Hogger, and J. A.Robinson (eds.), Oxford: Clarendon Press.
  • –––, 2005,Bridges from Classical toNonmonotonic Logic, London: King’s CollegePublications.
  • Makinson, David and Gärdenfors, Peter, 1991, “Relationsbetween the logic of theory change and Nonmonotonic Logic”, inLogic of Theory Change, A. Fuhrmann and M. Morreau (eds.),Berlin: Springer-Verlag.
  • Makinson, David and van der Torre, L., 2000, “Input/outputlogics”,Journal of Philosophical Logic, 29:155–85.
  • Mao, Yi, 2003,A Formalism for Nonmonotonic Reasoning EncodedGenerics, Ph.D. dissertation, University of Texas at Austin.
  • Martin, Éric, 2019, “Nonmonotonicity in the Frameworkof Parametric Logic.”Studia Logica, 107:1025–1077. doi:10.1007/s11225-018-9831-7
  • Morgan, Charles, 2000, “The Nature of NonmonotonicReasoning”,Minds and Machines, 10: 321–360.
  • Moore, Robert C., 1985, “Semantic Considerations onNonmonotonic Logic”,Artificial Intelligence, 25:75–94.
  • Morreau, M., and N. Asher, 1995, “What some genericsentences mean”, inThe Generic Book, J. Pelletier(ed.), Chicago: University of Chicago Press.
  • Nayak, A. C., 1994, “Iterated belief change based onepistemic entrenchment”,Erkenntnis, 41:353–390.
  • Nute, Donald, 1988, “Conditional Logic”, inHandbook of Philosophical Logic, Volume II: Extensions ofClassical Logic, D. Gabbay and F. Guenthner (eds.), Dordrecht: D.Reidel.
  • –––, 1997,Defeasible Deontic Logic,Dordrecht: Kluwer.
  • Osta-Vélez, Matías and Peter Gärdenfors, 2021, “Nonmonotonic Reasoning, Expectation Orderings, and Conceptual Spaces”, Journal of Logic, Language, and Information, 31: 77–97. doi:10.1007/s10849-021-09347-6
  • Pandžić, Stipe, 2022, “A logic of defeasibleargumentation: Constructing arguments in justification logic”,Argument & Computation, 13: 3–47.doi:10.3233/AAC-200536
  • Pearl, Judea, 1988,Probabilistic Reasoning in IntelligentSystems: Networks of Plausible Inference, San Mateo, CA: MorganKaufmann.
  • –––, 1990, “System Z: A Natural Orderingof Defaults with Tractable Applications to Default Reasoning”,Proceedings of the Third Conference on Theoretical Aspects ofReasoning about Knowledge, Rohit Parikh (ed.), San Mateo, Calif.:Morgan Kaufmann.
  • Pelletier, F. J. and R. Elio, 1994, “On Relevance in Nonmonotonic Reasoning: Some Empirical Studies”, in R. Greiner & D. Subramanian (eds.), Relevance: AAAI 1994 Fall Symposium Series, Palo Alto: AAAI Press.
  • Pollock, John L., 1967, “Criteria and our knowledge of thematerial world”,Philosophical Review, 76:28–62.
  • –––, 1970, “The structure of epistemicjustification”,American Philosophical Quarterly(Monograph Series), 4: 62–78.
  • –––, 1974,Knowledge and Justification,Princeton: Princeton University Press.
  • –––, 1987, “Defeasible Reasoning”,Cognitive Science, 11: 481–518.
  • –––, 1995,Cognitive Carpentry,Cambridge, Mass.: MIT Press.
  • –––, 2010, “Defeasible reasoning anddegrees of justification”,Argument & Computation,1(1): 7–22.
  • Prakken, Henry, 2010, “An abstract framework forargumentation with structured arguments”,Argument andComputation, 1: 93–124.
  • Prakken, Henry and Giovanni Sartor, 1995, “On the relationbetween legal language and legal argument: assumptions, applicability,and dynamic priorities”, inProceedings of the FifthInternational Conference on Artificial Intelligence and the Law(ICAIL-95), New York: The ACM Press.
  • –––, 1996, “A dialectical model ofassessing conflicting arguments in legal reasoning”,Artificial Intelligence and the Law, 4: 331–368.
  • Prakken, H., and Vreeswijk, G. A. W., 2002, “Logics fordefeasible argumentation”, in D. Gabbay and F. Guenthner (eds.),Handbook of philosophical logic (2nd edition, Volume 4),Dordrecht: Kluwer, pp. 219–318.
  • Quine, Willard van Orman, and J.S. Ullian, 1982,The Web ofBelief, New York: Random House.
  • Raz, Joseph, 1975,Practical Reasoning and Norms, London:Hutchinson and Company.
  • Reed, Christopher A. and Rowe, G. W. A., 2004, “Araucaria:Software for argument analysis, diagramming and representation”,International Journal on Artificial Intelligence Tools,13(4): 961–979.
  • Reiter, Ray, 1980, “A logic for default reasoning”,Artificial Intelligence, 13: 81–137.
  • Richardson, M. and P. Domingos, 2006, “Markov logicnetworks”,Machine Learning, 62(1–3):107–136.
  • Ross, David, 1930,The Right and the Good, Oxford: OxfordUniversity Press.
  • –––, 1939,Foundations of Ethics,Oxford: Clarendon Press.
  • Rott, Hans, 1989, “Conditionals and Theory Change:Revisions, Expansions and Additions”,Synthese, 81:91–113.
  • –––, 2022, “Introduction to StenLindström’s ‘A semantic approach to nonmonotonicreasoning: Inference operations and choice’”,Theoria, 88: 491–493. doi:10.1111/theo.12406
  • Schlechta, Karl, 1997,Nonmonotonic Logics: Basic Concepts,Results and Techniques, Berlin: Springer-Verlag.
  • Sen, A.K., 1986, “Social choice theory”, in Arrow,K.J. & Intriligator, M.D. (eds.),Handbook of MathematicalEconomics (Volume 3), Amsterdam: North-Holland, pp.1073–1181.
  • Shoham, Yoav, 1987, “A Semantical Approach to NonmonotonicLogic”, inProceedings of the Tenth International Conferenceon Artificial Intelligence, John McDermott (ed.), Los Altos,Calif.: Morgan Kaufmann.
  • Skyrms, Brian, 1980, “Higher order degrees of belief”,inProspects for Pragmatism, Hugh Mellor (ed.), Cambridge:Cambridge University Press.
  • Spohn, Wolfgang, 1988, “Ordinal ConditionalFunctions”, inCausation, Decision, Belief Change andStatistics, Volume III, W. L. Harper and B. Skyrms (eds.),Dordrecht: Kluwer.
  • –––, 2002, “A Brief Comparison ofPollock’s Defeasible Reasoning and Ranking Functions”,Synthese, 13: 39–56.
  • Tohmé, Fernando, with Claudio Delrieux and OtávioBueno, 2011, “Defeasible Reasoning + Partial Models: A FormalFramework for the Methodology of Research Programs”,Foundations of Science, 16: 47–65.
  • Toulmin, Stephen E., 1964,The Uses of Argument,Cambridge: Cambridge University Press.
  • Txurruka, I. and N. Asher, 2008, “A discourse-based approachto Natural Language Disjunction (revisited)”, in M. Aunargue, K.Korta and J. Lazzarabal (eds.),Language, Representation andReasoning, University of the Basque Country Press.
  • Valaris, Markos, 2020, “Reasoning, defeasibility, and thetaking condition”,Philosophers’ Imprint, 20(28):1–16.
  • van der Torre, Leendert and Srdjan Vesic, 2018, “ThePrinciple-Based Approach to Abstract Argumentation Semantics”,in P. Baroni, D. Gabbay, Massimiliano Giacomin, and Leendert van derTorre (eds.),The Handbook of Formal Argumentation, London:College Publications, 797–838.
  • van Eemeren, Frans H., Bart Garssen, Erik C. W. Krabbe, A. Francisca Snoeck Henkemans, Bart Verheij, and Jean H. M. Wagemans, 2020, Handbook of Argumentation Theory, Dordrecht: Springer Netherlands.
  • van Fraassen, Bas, 1973, “Values and the heart’scommand”,The Journal of Philosophy, 70: 5–19.
  • –––, 1995, “Fine-grained opinion,probability, and the logic of folk belief”,Journal ofPhilosophical Logic, 24: 349–377.
  • Verheij, B., 2003, “DefLog: On the logical interpretation ofprima facie justified assumptions”,Journal of Logic andComputation, 13(3): 319–346.
  • –––, 2005,Virtual arguments: On the designof argument assistants for lawyers and other arguers, The Hague:T. M. C. Asser Press.
  • –––, 2012, “Jumping to conclusions: Alogico-probabilistic foundation for defeasible rule-basedarguments”, in L. Fariñas del Cerro, A. Herzig & J.Mengin (eds.),Logics in Artificial Intelligence. 13th Europeanconference, JELIA 2012, Dordrecht: Springer, 411–423.
  • –––, 2017, “Proof with and withoutprobabilities: Correct evidential reasoning with presumptivearguments, coherent hypotheses and degrees of uncertainty”,Artificial Intelligence and Law, 25 (1): 127–154.
  • Vieu, L., with M. Bras, N. Asher, and M. Aurnague, 2005,“Locating adverbials in discourse”,Journal of FrenchLanguage Studies, 15(2): 173–193.
  • Vreeswijk, Gerard, 1997, “Abstract argumentation systems”, Artificial Intelligence, 90: 225–279.
  • Way, J., 2017, “Reasons as Premises of GoodReasoning”,Pacific Philosophical Quarterly, 98:251–70.
  • Wobcke, Wayne, 1995, “Belief Revision, Conditional Logic andNonmonotonic Reasoning”,Notre Dame Journal of FormalLogic, 36: 55–103.

Copyright © 2025 by
Robert Koons<koons@austin.utexas.edu>


The Stanford Encyclopedia of Philosophy iscopyright © 2025 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

