Stanford Encyclopedia of Philosophy

Non-monotonic Logic

First published Tue Dec 11, 2001; substantive revision Sat Nov 23, 2024

Non-monotonic logic (NML) is a family of formal logics designed to model and better understand defeasible reasoning. Reasoners draw conclusions in a defeasible manner when they retain the right to retract these inferences upon the acquisition of further information. Numerous instances can be found, ranging from inductive generalizations to reasoning based on the best explanation, as well as inferences grounded on expert opinion. Defeasible inferences are prevalent in everyday reasoning, expert reasoning such as medical diagnosis, and scientific reasoning.

Defeasible reasoning can follow complex patterns, patterns that elude classical logic (CL), intuitionistic logic (IL), or other monotonic logics that characterize deductive reasoning. Monotonic logics are not designed for and therefore do not allow for a retraction of inferences. A prime objective within the domain of NML is to model defeasible reasoning with a degree of formal precision and a normative grip comparable to what CL or IL furnish for mathematical reasoning. Alternatively, some work in NML aims at providing descriptively adequate models of defeasible inference. Formal precision is achieved by adopting formal methods, such as proof-theoretic and semantic ones. In the context of monotonic logics, the principle of truth-preservation, according to which true premises lead to true conclusions, provides a normative standard. Due to the possibility of retraction, this standard is violated in NMLs. Instead, NML aims at obtaining conclusions that, in view of some premises, hold plausibly, likely, probably, or stereotypically. Defeasible inference may follow different reasoning styles, some of which are, for instance, more risk-taking than others. Depending on the reasoning style, different principles for grounding a normative framework for NMLs have been proposed in the literature.

1. Dealing with the dynamics of defeasible reasoning

Defeasible reasoning is dynamic in that it allows for a retraction of inferences. Take, for instance, reasoning on the basis of normality or typicality assumptions. Although we infer that Tweety flies on the basis of the information that Tweety is a bird and the background knowledge that birds usually fly, we have good reasons to retract this inference when learning that Tweety is a penguin or a kiwi.

Another example is abductive reasoning (Aliseda 2017). Given the observation that the streets are wet, we may infer the explanation that it has been raining recently. However, recalling that today the streets are cleaned and that the rooftops are dry, we will retract this inference.

As a last example, take probabilistic reasoning where we infer ‘X is a B’ from ‘X is an A and most As are Bs’. Clearly, we may learn that X is an exceptional A with respect to being a B, and in view of this we retract our previous inference.

Our previous examples are instances of ampliative reasoning. Ampliative reasoning is based on inferences for which the truth of the premises does not, as in deductive reasoning, guarantee or necessitate the truth of the conclusion. Instead, the premises support the conclusion defeasibly, e.g., the conclusion may hold in most/typical/etc. cases in which the premises hold.

Reasoning may also turn defeasible when applied to an inconsistent stock of information, possibly obtained via different sources. Classical logic is not a viable option in such situations, since it allows us to derive any consequence. A more cautious approach is to consider maximal consistent subsets (with respect to set inclusion) of the given information (Rescher and Manor 1970). For instance, where \(p\), \(q\), and \(s\) are logical atoms and \(\Gamma = \{ p \wedge q, \neg p \wedge q, s\}\), the maximal consistent subsets of \(\Gamma\) are \(\Gamma_1 = \{p \wedge q, s\}\) and \(\Gamma_2 = \{\neg p \wedge q, s\}\). Various consequence relations have been defined on the basis of consistent subsets. We give two examples (see Benferhat et al. (1997) for more).

Let \(\Sigma\) be a possibly inconsistent set of formulas and let \(\mathrm{MCS}(\Sigma)\) be the set of maximal consistent subsets of \(\Sigma\). The set \(\mathrm{Free}(\Sigma)\) of innocent bystanders in \(\Sigma\) is obtained by intersecting all members of \(\mathrm{MCS}(\Sigma)\).

  • Free consequence. \(\phi\) is a free consequence of \(\Sigma\), denoted by \(\Sigma \nc_{\mathrm{free}} \phi\), if and only if it is (classically) entailed by the set \(\mathrm{Free}(\Sigma)\) of all the innocent bystanders.
  • Inevitable consequence. \(\phi\) is an inevitable consequence of \(\Sigma\), denoted by \(\Sigma \nc_{\mathrm{ie}} \phi\), if and only if it is (classically) entailed by each member of \(\mathrm{MCS}(\Sigma)\).

In our example, the innocent bystander \(s\) is both a free and an inevitable consequence, while \(q\) is merely an inevitable consequence. In order to highlight the defeasible character of this type of reasoning, consider the situation in which we additionally obtain the information \(\neg s\). We now have the two additional maximal consistent subsets \(\Gamma_{3} = \{p \wedge q, \neg s\}\) and \(\Gamma_{4} = \{\neg p \wedge q, \neg s\}\), in view of which \(s\) is neither a free nor an inevitable consequence. E.g., although \(\Gamma \nc_{\mathrm{free}} s\), we have \(\Gamma \cup \{\neg s\} \nnc_{\mathrm{free}} s\).
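The two consequence relations lend themselves to a small brute-force computation. The following sketch (the encoding of formulas as Python predicates over valuations is our own illustration, not a standard formalism) recovers the example above:

```python
from itertools import combinations, product

# Brute-force propositional reasoning over a fixed set of atoms.
# A formula is encoded as a function from a valuation (dict) to bool.
ATOMS = ["p", "q", "s"]

def models(formulas):
    """All valuations satisfying every formula in the collection."""
    vals = [dict(zip(ATOMS, bits))
            for bits in product([True, False], repeat=len(ATOMS))]
    return [v for v in vals if all(f(v) for f in formulas)]

def entails(formulas, phi):
    return all(phi(v) for v in models(formulas))

def mcs(sigma):
    """Maximal consistent subsets (w.r.t. set inclusion) of sigma."""
    consistent = [set(c) for n in range(len(sigma) + 1)
                  for c in combinations(sigma, n) if models(c)]
    return [s for s in consistent if not any(s < t for t in consistent)]

def free_consequence(sigma, phi):
    """phi is entailed by the intersection of all maximal consistent subsets."""
    return entails(set.intersection(*mcs(sigma)), phi)

def inevitable_consequence(sigma, phi):
    """phi is entailed by each maximal consistent subset."""
    return all(entails(m, phi) for m in mcs(sigma))

# Gamma = {p & q, ~p & q, s}
p_and_q  = lambda v: v["p"] and v["q"]
np_and_q = lambda v: (not v["p"]) and v["q"]
s_atom   = lambda v: v["s"]
gamma = [p_and_q, np_and_q, s_atom]
```

Here `free_consequence(gamma, s_atom)` holds, \(q\) is inevitable but not free, and adding \(\neg s\) to `gamma` defeats \(s\) as a consequence, mirroring the non-monotonic behavior described above.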

This kind of dynamics is characteristic of non-monotonic logics; in fact, it is the reason for their name. They violate the Monotony property, which holds for deductive reasoning, for instance, for the relation of logical consequence \(\vDash\) of CL (see the entry on classical logic for details on the relation \(\vDash\)):

  • Monotony: If \(\Sigma \nc \phi\) then also \(\Sigma \cup \Sigma' \nc \phi\).

Monotony states that consequence sets are robust under the addition of information: if \(\phi\) is a consequence of \(\Sigma\), then it is also a consequence of any set containing \(\Sigma\) as a subset. Most scholarly attention has been paid to the type of dynamics underlying defeasible reasoning that results in violations of Monotony due to the retraction of inferences in view of conflicting new information. It has been called the synchronic (Pollock 2008) or the external dynamics (Batens 2004) of defeasible reasoning.

Clearly, most forms of defeasible reasoning are externally dynamic and hence most logics for defeasible reasoning violate Monotony: they have non-monotonic consequence relations for which consequences may not persist when new information is obtained. This feature is so central that the formal logical study of defeasible reasoning is often taken to be coextensive with the domain of NML, where non-monotonicity is understood as a property of the consequence relation of a logic.

In addition to the external dynamics, defeasible reasoning also comes with diachronic (Pollock 2008) or internal dynamics (Batens 2004). This kind of dynamics occurs when inferences are retracted, not as a result of obtaining new and defeating information, but when inconsistencies are detected in the process of analyzing the already available information.

Let us return to non-monotonicity as a property of the consequence relation. Given that Monotony is abandoned in NMLs, we are naturally led to the question which formal properties are to replace Monotony, and whether some weakened form of robustness of the consequence set under the addition of new information is also to be expected in the realm of defeasible reasoning. We state here two of the most central properties considered in the literature.

  • Cautious Monotony: If \(\Sigma \nc \psi\) and \(\Sigma \nc \phi\), then also \(\Sigma \cup \{\phi\} \nc \psi\).
  • Rational Monotony: If \(\Sigma \nc \psi\) and \(\Sigma \nnc \neg \phi\), then also \(\Sigma \cup \{\phi\} \nc \psi\).

Both Cautious and Rational Monotony are restricted forms of Monotony.

Cautious Monotony is the converse of Cut:

  • Cut: If \(\Sigma \nc \phi\) and \(\Sigma \cup \{\phi\} \nc \psi\), then \(\Sigma \nc \psi\).

Cautious Monotony (resp. Cut) states that adding a consequence \(\phi\) to the premise set \(\Sigma\) does not lead to any decrease (resp. increase) in inferential power. Taken together, these principles express that inference is a cumulative enterprise: we can keep drawing consequences that can, in turn, be used as additional premises, without affecting the set of conclusions. In Gabbay (1985) it has been shown that some basic and intuitive assumptions about non-monotonic derivations give rise to consequence relations which satisfy Cautious Monotony, Cut, and Reflexivity (if \(\phi \in \Sigma\) then \(\Sigma \nc \phi\)). Consequently, these properties are commonly taken to be central principles of NML.[1]

Rational Monotony states that conclusions of \(\Sigma\) are not lost when adding formulas \(\phi\) to \(\Sigma\) that are not contradicted by \(\Sigma\). Rational Monotony is a more controversial property than Cautious Monotony. For instance, Stalnaker (1994) gave a counter-example to it (see the supplementary document), in view of which it is clear that Rational Monotony is not a desirable property of defeasible reasoning in all application contexts.

Weaker properties than Rational Monotony have been proposed, such as Disjunctive Rationality (Makinson 1994). It states that \( \psi \) does not follow from \( \Sigma \cup \{\phi_1 \vee \phi_2 \} \) if it follows neither from \( \Sigma \cup \{\phi_1 \} \) nor from \( \Sigma \cup \{ \phi_2 \} \). We state the contraposition formally:

  • Disjunctive Rationality: If \( \Sigma \cup \{ \phi_1 \vee \phi_2 \} \nc \psi \) then \( \Sigma \cup \{ \phi_1 \} \nc \psi \) or \( \Sigma \cup \{ \phi_2 \} \nc \psi \).

2. Dealing with conflicts

An intertwined yet distinct issue from the formal properties of a non-monotonic consequence relation is the handling of conflicts between potential defeasible conclusions.

There are two types of conflict handling in defeasible reasoning. One involves conflict resolution principles such as the Specificity Principle or other measures of argument strength. The other deals with irresolvable conflicts through credulous and skeptical reasoning types.

2.1 Conflict resolution

There are two different kinds of conflict that can arise in the context of non-monotonic inference: (i) conflicts between defeasible conclusions and “hard facts,” some of which are possibly newly learned; and (ii) conflicts between defeasible conclusions (many formalisms, for instance, provide some form of defeasible inference rules, and such rules may have conflicting conclusions). When a conflict (of either kind) arises, steps must be taken to preserve or restore consistency.

Conflicts of type (i) must be resolved in favor of the hard facts by retracting the conflicting defeasible conclusion. More interesting are mechanisms to resolve conflicts of type (ii). In order to analyze these, we will make use of schematic inference graphs (similar to the ones used in Inheritance Networks, see below). For instance, our previous example featuring the bird Tweety is illustrated as follows:

[Figure: inference graph for the Tweety example]

We use the following conventions: double arrows signify deductive or strict (i.e., non-defeasible) inferences, single arrows signify defeasible inferences, and strikethrough arrows signify that the negation of the formula pointed to is implied. So, we can read the diagram as follows: penguins are birds (no exceptions); birds usually fly; and penguins usually don’t fly.

We have a conflict between the following two arguments (where the arguments are sequences of inferences): Penguin \(\Rightarrow\) Bird \(\rightarrow\) flies (highlighted in blue) and Penguin \(\rightarrow\) not-flies (highlighted in red). Both arguments include a final defeasible inference. What is important to note is that a penguin is a specific type of bird since Penguin \(\Rightarrow\) Bird (while not Bird \(\Rightarrow\) Penguin). According to the Specificity Principle, an inference with a more specific antecedent overrides a conflicting defeasible inference with a less specific antecedent. Concerning the penguin Tweety, we thus infer that she does not fly on the basis of Penguin \(\rightarrow\) not-flies. We resolve the conflict in favor of the inference based on the more specific information.

Logicians distinguish between strong and weak specificity: according to strong specificity, \(A \not\rightarrow C\) overrides \(A \Rightarrow B \rightarrow C\); according to weak specificity, \(A \not\rightarrow C\) overrides \(A \rightarrow B \rightarrow C\). Note that the difference concerns the nature of the link between A and B.

A preference for one defeasible inference A \(\rightarrow\) B over another conflicting one C \(\rightarrow\) D may also depend on other factors. For example, in an epistemic context we can compare the strengths of A \(\rightarrow\) B and C \(\rightarrow\) D by appealing to the reliability of the source from which the respective conditional knowledge originates. In the context of legal reasoning, we may have the principles lex superior resp. lex posterior, according to which the higher-ranked, respectively the later issued, law dominates.

Given a way to compare the strength of defeasible inference steps using a preference relation \(\prec\), the question remains of how to compare the strength of conflicting sequences of inferences (that is, arguments). We give some examples. For a systematic survey and classification of preference handling mechanisms in NML the interested reader is referred to Delgrande et al. (2004) and to Beirlaen et al. (2018).

According to the Weakest Link Principle (Pollock 1991) an argument is preferred over another conflicting argument if its weakest defeasible link is stronger than the weakest defeasible link in the conflicting argument. Take, for example, the situation in the following figure:

[Figure: two conflicting arguments and their preference ordering]

On the left we see an inference graph with two conflicting arguments (highlighted in blue and red). On the right we see the preference ordering. The argument \(A \rightarrow B \rightarrow E\) is stronger than \(C \rightarrow D \not\rightarrow E\) since for its weakest link \(D \not\rightarrow E\) we have \(D \not\rightarrow E \prec A \rightarrow B\) and \(D \not\rightarrow E \prec B \rightarrow E\).

Another approach to preferences is procedural and greedy (Liao et al. 2018). Basically, it instructs us always to apply the rule with the highest priority first. (There may be various such rules with highest priority, but for the sake of simplicity, we neglect this possibility in what follows.) Consider the following example, which is frequently discussed in the literature. We have the rules

[Figure: the rules of the order puzzle]

where \(A \rightarrow B \prec A \not\rightarrow C \prec B \rightarrow C\) and we take \(A\) to be given. We can apply the defeasible rules \(A \rightarrow B\) and \(A \not\rightarrow C\). If we operate in a “greedy” way, we may apply \(A \not\rightarrow C\) to derive \(\neg C\) before applying \(A \rightarrow B\), since \(A \rightarrow B \prec A \not\rightarrow C\). Now only \(A \rightarrow B\) is applicable, so we derive \(B\). Although the antecedent of \(B \rightarrow C\) is already derived, we cannot apply this rule for the sake of consistency. Brewka and Eiter (2000) argue against this greedy approach and in favor of deriving \(B\) and \(C\). Delgrande and Schaub (2000) argue that the example presents an incoherent set of rules. This is put into question in Horty (2007), where a consistent deontic reading in terms of conditional imperatives is presented which also challenges the procedural approach by favoring the conclusions \(B\) and \(C\).
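The greedy procedure can be sketched as follows (the rule encoding and the numeric priorities are our own illustration; a literal prefixed with `~` stands for a negation, so \(A \not\rightarrow C\) becomes a rule concluding `~C`):

```python
def negate(lit):
    """~X for X, and X for ~X."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def greedy_closure(facts, rules):
    """Repeatedly fire the highest-priority applicable rule, skipping any
    rule whose conclusion would contradict what is already derived.
    Rules are triples (premise, conclusion, priority)."""
    derived, used = set(facts), set()
    while True:
        applicable = [r for r in rules
                      if r not in used
                      and r[0] in derived
                      and negate(r[1]) not in derived]
        if not applicable:
            return derived
        best = max(applicable, key=lambda r: r[2])
        used.add(best)
        derived.add(best[1])

# The order puzzle: A -> B  <  A -/-> C  <  B -> C, with A given.
rules = [("A", "B", 1), ("A", "~C", 2), ("B", "C", 3)]
```

`greedy_closure({"A"}, rules)` returns `{"A", "~C", "B"}`: the higher-priority \(A \not\rightarrow C\) fires first, and \(B \rightarrow C\) is then blocked for the sake of consistency.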

Ford (2004) pointed out that the order of strict and defeasible links in arguments matters. For instance, she argues that an argument of the form \(A \rightarrow B \Rightarrow D\) may be stronger than an argument of the form \(A \Rightarrow C \not\rightarrow D\) (where \(A \rightarrow B\) reads “Most As are Bs” and \(A \Rightarrow C\) reads “All As are Cs”). The reason is that in the former case it is not possible that no A is a D, while in the second case it is possible that no A is a not-D. This is illustrated in the following figure:

[Figures: set-theoretic distributions for Ford’s two argument forms]

The left diagram demonstrates that there are distributions in which no As are not-Ds although \(A \Rightarrow C\) and \(C \not\rightarrow D\) hold. The diagram on the right features a distribution for \(A \rightarrow B \Rightarrow D\). Whenever the extension of A is non-empty there will be As that are Ds.

2.2 Reasoning with unresolved conflicts

We now discuss questions arising from conflicts that cannot be resolved by any principle. Inferences can be drawn cautiously (skeptically) or boldly (credulously). The skeptical and credulous reasoning styles represent different interpretations of defeasible knowledge and lead to different warranted conclusions. Both are concerned with defeasible inferences that are acceptable to a rational reasoner. While a credulous reasoner considers as consequences those conclusions of inferences that are permissible for a rational reasoner to hold, the skeptical reasoner considers those that a rational reasoner must hold.

To better understand this distinction, let us have a look at a well-known example from the literature, the so-called “Nixon diamond.” Suppose that our knowledge base contains (defeasible) information to the effect that a given individual, Nixon, is both a Quaker and a Republican. Quakers, by and large, are pacifists, whereas Republicans, by and large, are not. This is illustrated in the following figure:

[Figure: the Nixon diamond]

The question is what defeasible conclusions are warranted on the basis of this body of knowledge, and in particular whether we should infer that Nixon is a pacifist or that he is not a pacifist.

Neither the skeptical nor the credulous reasoner has logical grounds to prefer either conclusion (“Nixon is a pacifist”; “Nixon is not a pacifist”). While the credulous reasoner accepts both conclusions, the skeptical reasoner refrains from either.

Since both conclusions are based on contested inferences, neither must be accepted by a rational reasoner, which is why a skeptical reasoning style refrains from both. From the perspective of credulous reasoning, the conclusion according to which Nixon is a pacifist is acceptable, since it is permissible for a rational reasoner to hold. It is permissible since it can be defended from the counterargument based on Nixon being a Republican (highlighted in red). In order to defend it, a rational reasoner refers to the fact that Nixon is also a Quaker (the corresponding argument is highlighted in blue). (This is particularly convincing for a reasoner who considers being a Quaker as more diagnostic of being a pacifist than being a Republican is of not being a pacifist.) For the same reason, the conclusion that Nixon is not a pacifist is credulously acceptable.

A rationale behind the credulous reasoning type is to provide an overview of possible conclusions given the conflicting defeasible inferences, in order to subsequently make a choice among them. This is especially interesting in practical reasoning contexts in which the choice determines a course of action and in which extralogical considerations (based on preferences, values, etc.) further narrow down the choice.

In contrast, the rationale behind the skeptical reasoning type is to determine uncontested defeasible conclusions. The purpose may be of a more epistemological nature, such as the updating of the agent’s belief or knowledge base with the chosen conclusions.

2.3 Some advanced issues for skeptical reasoning

We now discuss some further issues that arise in the context of conflicting arguments. The first issue is illustrated in the following figure:

[Figure: the Tweety example extended with the property of having wings]

Consider the argument Penguin \(\Rightarrow\) Bird \(\rightarrow\) has wings (highlighted in blue). Since penguins do not fly, we know that penguins are exceptional birds, at least with respect to the property of flying. A very cautious reasoner may take this to be a reason to be skeptical about attributing other typical properties of birds to penguins. NMLs that do not allow one to derive has wings are said to suffer from the Drowning Problem (Benferhat et al. 1993).

The question whether the exceptional status of Penguin relative to flies should spread also to other typical properties of Bird may depend on relevance relations among these properties. For instance, Koons (2017) proposes that causal relations play a role: whereas has strong forelimb muscles is causally related to flies and hence should not be attributed to penguins, the situation is different with is cold-blooded. Similarly, Pelletier and Elio (1994) argue that explanatory relations play a significant role in the way in which reasoners treat exceptional information in non-monotonic inference.

Another much discussed issue (e.g., Ginsberg 1994, Makinson and Schlechta 1991, Horty 2002) concerns the question whether a conclusion that is derivable via multiple conflicting arguments should be derived. Such conclusions are called Floating Conclusions. The following figure illustrates this with an extended version of the Nixon Diamond.

[Figure: extended Nixon diamond with the floating conclusion “is political”]

The floating conclusion in question concerns is political. It is derivable via the arguments Nixon \(\Rightarrow\) Quaker \(\rightarrow\) Dove \(\rightarrow\) is political (highlighted in blue) and via Nixon \(\Rightarrow\) Republican \(\rightarrow\) Hawk \(\rightarrow\) is political (highlighted in red). Both arguments are super-arguments of the conflicting Nixon \(\Rightarrow\) Quaker \(\rightarrow\) Dove and Nixon \(\Rightarrow\) Republican \(\rightarrow\) Hawk.[2] Horty (1994) argues that floating conclusions are acceptable in reasoning contexts in which “the value of drawing conclusions is high relative to the costs involved if some of those conclusions turn out not to be correct” while they should be avoided “when the cost of error rises” (p. 123).

We conclude our discussion with so-called Zombie Arguments (Makinson and Schlechta 1991, Touretzky et al. 1991). Recall that a skeptical reasoner does not commit to a conflicting argument. Makinson and Schlechta (1991) argue that super-arguments of such conflicted arguments, although not themselves acceptable, nevertheless still have the power to undermine the commitment of a reasoner to an otherwise unconflicted argument. We see an example in the following figure:

[Figure: a zombie argument]

We observe two conflicting arguments concerning \(D\): \(A \rightarrow B \rightarrow D\) and \(A \rightarrow C \not\rightarrow D\). Thus, we have ambiguous information concerning \(D\). We observe further that \(F\) is a consequence of the (unconflicted) argument \(A \rightarrow C \rightarrow E \rightarrow F\). It conflicts with the zombie argument \(A \rightarrow B \rightarrow D \not\rightarrow F\) (highlighted in red), which is a super-argument of the conflicted \(A \rightarrow B \rightarrow D\). The name refers to undead beings, since an argument such as \(A \rightarrow B \rightarrow D \not\rightarrow F\) to which we have no commitment may still have the power to influence our commitment to an otherwise unconflicted argument such as \(A \rightarrow C \rightarrow E \rightarrow F\). In the literature we can distinguish two approaches to such scenarios. In ambiguity-propagating formalisms the ambiguity in \(D\) propagates to \(F\), while in ambiguity-blocking formalisms it does not, and so \(F\) is considered a consequence in view of the uncontested argument \(A \rightarrow C \rightarrow E \rightarrow F\) (Stein 1992).

3. Some Non-monotonic logics

Pioneering work in the field of NML began with the realization that CL is inadequate to give a mathematically precise characterization of defeasible reasoning. This insight was accompanied by the effort to reproduce the success of CL in the representation of mathematical, or formal, reasoning. Among the pioneers of the field in the late 1970s were J. McCarthy, D. McDermott & J. Doyle, and R. Reiter (see Ginsberg (1987) for a collection of early papers in the field and Gabbay et al. (1994) for a collection of excellent survey papers). In 1980, the Artificial Intelligence Journal published an issue (vol. 13, 1980) dedicated to these new formalisms, an event that has come to be regarded as the “coming of age” of NML.

In this section an overview will be provided of some important systems of NML. Since the evolutionary tree of NMLs has grown extraordinarily rich, we will restrict the focus to presenting the basic ideas behind some of the most influential and well-known approaches.

3.1 The closed-world assumption

Since one of the goals of non-monotonic logic is to provide an adequate account of defeasible reasoning, it is important to rely on a rich supply of examples to guide and hone intuitions. Database theory was one of the earliest sources of such examples, especially as regards the closed world assumption. Suppose a travel agent with access to a flight database needs to answer a client’s query about the best way to get from Oshkosh to Minsk. The agent queries the database, finds no such entry, and responds that there are no direct flights. How does the travel agent know this?

Our agent’s reasoning is based on the tacit assumption that the database is complete, and that since the database does not list any direct flights between the two cities, there are none. A useful way to think about this process is as a kind of minimization, i.e., an attempt to minimize the extension of the predicate (“flight-between,” in this case).

The idea of minimizing predicates is at the core of one of the earliest non-monotonic formalisms, McCarthy’s circumscription (McCarthy 1980). Circumscription makes explicit the intuition that, all other things being equal, extensions of certain predicates should be minimal. Consider principles such as “all normal birds fly”. Implicit in this principle is the idea that specimens should not be considered abnormal unless there is positive information to that effect. McCarthy’s idea was to represent this formally using second-order logic (SOL). In SOL, in contrast to first-order logic (FOL), one may explicitly quantify over predicates, forming sentences such as \(\exists P \forall x Px\) (“there is a universal predicate”) or \(\forall P (Pa \equiv Pb)\) (“a and b are indiscernible”).

In circumscription, given predicates P and Q, we abbreviate \(\forall x(Px \supset Qx)\) as \(P \le Q\); similarly, we abbreviate \(P \le Q \wedge \neg(Q \le P)\) as \(P < Q\). If \(A(P)\) is a formula containing occurrences of a predicate P, then the circumscription of P in A is the second-order sentence \(A^{\star}(P)\):

\(A(P) \wedge \neg\exists Q[A(Q) \wedge Q < P]\)

\(A^{\star}(P)\) expresses that P satisfies A, and that no smaller predicate does. Let \(Px\) be the predicate “x is abnormal,” and let \(A(P)\) be the sentence “All birds that are not abnormal fly.” Then the sentence “Tweety is a bird,” together with \(A^{\star}(P)\), implies “Tweety flies,” for the circumscription axiom forces the extension of P to be empty, so that “Tweety is normal” is automatically true.
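The effect of circumscribing the abnormality predicate can be imitated in a propositional brute-force sketch (our own illustration of minimal-model reasoning, not McCarthy’s second-order formulation): among all models of the theory, keep those whose abnormality part is minimal under set inclusion, and check what holds in all of them.

```python
from itertools import product

ATOMS = ["bird", "ab", "flies"]

def is_model(v):
    """Theory: Tweety is a bird, and birds that are not abnormal fly,
    i.e. bird AND (not bird OR ab OR flies)."""
    return v["bird"] and ((not v["bird"]) or v["ab"] or v["flies"])

models = [dict(zip(ATOMS, bits))
          for bits in product([True, False], repeat=len(ATOMS))
          if is_model(dict(zip(ATOMS, bits)))]

def abnormal_part(v):
    """The set of abnormality atoms true in the model."""
    return {a for a in ["ab"] if v[a]}

# Minimal models: no other model has a strictly smaller abnormality part.
minimal = [m for m in models
           if not any(abnormal_part(n) < abnormal_part(m) for n in models)]
```

Every minimal model makes `flies` true and `ab` false: minimizing abnormality forces Tweety to be normal and hence to fly.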

In terms of consequence relations, circumscription allows us to define, for each predicate P, a non-monotonic relation \(A(P) \nc \phi\) that holds precisely when \(A^{\star}(P) \vDash \phi\). (This basic form of circumscription has been generalized to minimize a predicate’s extension while allowing others to vary.) From an application perspective, circumscription presents a significant computational limitation, inherent to the nature of the second-order language in which it is formulated (see the entry on Second-order and Higher-order Logic for details). The issue with SOL, as opposed to FOL, is the absence of a complete inference procedure: the cost of the enhanced expressive capacity of SOL is the lack of the complete axiomatizations available in FOL. Consequently, it is impossible to effectively enumerate all the validities of SOL, rendering the determination of \(A(P) \nc \phi\) infeasible.

Another influential mechanism realizing the closed world assumption is Negation as Failure (or Default Negation). It can nicely be explained if we take a look at Logic Programming. A logic program consists of a list of rules such as:

\(\tau ~~ \leftarrow ~~ \phi_1, \dotsc, \phi_n, \mathit{not}~ \psi_1,\dotsc, \mathit{not}~ \psi_m\)

In basic logic programs \(\tau\) is a logical atom and \(\phi_1, \dotsc, \phi_n, \psi_1, \dotsc, \psi_m\) are logical literals (i.e., atoms or negated atoms). The logical form of rules in such programs has been generalized in various ways (e.g., Alferes et al. 1995) and many ways of interpreting rules have been proposed. To understand the meaning of the default negation not we consider a concrete example of a rule, namely:

flies \(~~\leftarrow~~\) bird, not penguin

Such rules read as expected, but with a small twist. As usual, the rule licenses its conclusion if the formulas in the antecedent (right-hand side) hold. The twist is that the falsity of (default) negated formulas such as penguin need not be positively established: their falsity is assumed in the absence of a proof of the opposite. In our example, if penguin cannot be proved then not penguin is considered to hold (“by default”). A logic program for our Tweety example may consist of the rule above and

not-flies \(~~\leftarrow~~\) penguin
bird \(~~\leftarrow~~\) penguin

Suppose first that all we know is bird. The latter two rules will not be triggered. The first rule will be applicable: bird is the case and we have no proof of penguin, whence not penguin is assumed. This allows us to infer flies. Now suppose we know penguin. In this case the first rule is not applicable, since the default negation of penguin is false, but the latter two rules are triggered and we derive bird and not-flies.
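The evaluation just described can be sketched as a simple two-pass procedure for this (stratified) program. This is an illustrative simplification of how negation as failure can be computed, with rules encoded as `(head, positive premises, negated premises)`:

```python
def closure(facts, rules, assumed_false):
    """Forward chaining: a rule fires when its positive premises are
    derived and all its default-negated premises are assumed false."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if (head not in derived
                    and all(p in derived for p in pos)
                    and all(n in assumed_false for n in neg)):
                derived.add(head)
                changed = True
    return derived

def naf_closure(facts, rules):
    """Pass 1: derive what follows from the negation-free rules alone.
    Pass 2: assume the failure (falsity) of everything not so derived."""
    positive = closure(facts, [r for r in rules if not r[2]], set())
    atoms = {r[0] for r in rules} | {a for r in rules for a in r[1] + r[2]}
    return closure(facts, rules, atoms - positive)

# The Tweety program:
#   flies     <- bird, not penguin
#   not-flies <- penguin
#   bird      <- penguin
program = [("flies", ["bird"], ["penguin"]),
           ("not-flies", ["penguin"], []),
           ("bird", ["penguin"], [])]
```

From the fact `bird` alone we obtain `flies` (since `penguin` is not provable, it is assumed false); from `penguin` we obtain `bird` and `not-flies`, but not `flies`.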

3.2 Inheritance networks

In the context of a taxonomically organized body of knowledge, it is natural to assume that subclasses inherit properties from their superclasses: dogs have lungs because they are mammals and mammals have lungs. However, there may be exceptions. For example, mammals, by and large, do not fly; since bats are mammals, in the absence of any information to the contrary, we are justified in inferring that bats do not fly. But if we learn that bats are exceptional mammals in that they do fly, the conclusion that they do not fly is retracted, and the conclusion that they fly is drawn instead. There are more complicated scenarios. For example, baby bats are exceptional bats in that they do not fly. Our example gives rise to potentially conflicting inferences. When we infer that Stellaluna, being a baby bat, does not fly, we are resolving all these potential conflicts based on the Specificity Principle.

Non-monotonic inheritance networks were developed for the purpose of capturing taxonomic examples such as the one above. Such networks are collections of nodes and directed (‘is-a’) links that represent taxonomic information. When exceptions are allowed, the network is interpreted defeasibly. The following figure gives a network representing this state of affairs:

[Figure: an inheritance network for the bat example]

In such a network, links of the form \(A \rightarrow B\) represent the fact that, typically and for the most part, As are Bs, and therefore the information about As is more specific than the information about Bs. More specific information overrides more generic information. Research on non-monotonic inheritance focuses on the different ways in which one can make this idea precise.

The main issue in defeasible inheritance is to characterize the set of assertions that are supported by a given network. It is of course not enough to devise a representational formalism; one also needs to specify how the formalism is to be interpreted. This characterization is accomplished through the notion of an extension of a given network. There are two competing characterizations of extension, one that follows the credulous strategy and one that follows the skeptical one. Both utilize the notion of the degree of a path through the network, defined as the length of the longest sequence of links connecting its endpoints. Extensions are then constructed by considering paths in ascending order of their degrees. The specifics will not be detailed here, as many analogous issues are addressed in the context of Default Logic (discussed subsequently), with Horty (1994) offering a comprehensive survey. It is also noteworthy that the notion of degree is applicable solely to acyclic networks, thus posing particular challenges when networks contain cycles (refer to Antonelli (1997) for an analysis of inheritance in cyclic networks).

Although the language of non-monotonic networks is limited by design (in that only links of the form ‘is-a’ — and their negations — can be represented), such networks provide a remarkably useful setting in which to test and refine one’s intuitions and methods for handling defeasible information. Such intuitions and methods are then applied to more expressive formalisms.
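The Specificity Principle can be made concrete with a small Python sketch. The taxonomy, the property table, and the lookup strategy below are our own illustrative encoding, not one of the inheritance-network semantics from the literature: the query simply walks the ‘is-a’ chain from the most specific class upward, so more specific information wins.

```python
# Toy specificity-based resolution for the bats example (illustrative only;
# all names and the tiny taxonomy are our own encoding).
ISA = {'baby_bat': 'bat', 'bat': 'mammal'}        # is-a links
DEFAULTS = {                                      # defeasible properties
    'mammal': {'flies': False, 'has_lungs': True},
    'bat': {'flies': True},                       # bats: exceptional mammals
    'baby_bat': {'flies': False},                 # baby bats: exceptional bats
}

def lookup(cls, prop):
    """Walk from the most specific class upward; the first class that
    says anything about the property wins (the Specificity Principle)."""
    while cls is not None:
        if prop in DEFAULTS.get(cls, {}):
            return DEFAULTS[cls][prop]
        cls = ISA.get(cls)
    return None

print(lookup('baby_bat', 'flies'))   # False: Stellaluna does not fly
print(lookup('bat', 'flies'))        # True
print(lookup('bat', 'has_lungs'))    # True: inherited from mammal
```

Retraction is visible here as well: adding the `'baby_bat'` entry changes the answer that the `'bat'` entry alone would have licensed.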

3.3 Argument-based approaches

In argument-based approaches to defeasible reasoning the notion of a path through an inheritance network is generalized to the notion of an argument. Abstracting from the specifics and subtleties of formalisms proposed in the literature[3], an argument can be thought of in the following way. Given a language \(\mathcal{L}\), a set of \(\mathcal{L}\)-formulas \(\Gamma\), a set of strict rules SRules of the form \(\phi_1, \dotsc, \phi_n \Rightarrow \psi\) (where \(\phi_i\) and \(\psi\) are \(\mathcal{L}\)-formulas) and a set of defeasible rules DRules of the form \(\phi_1, \dotsc, \phi_n \rightarrow \psi\) (where \(\phi_i\) and \(\psi\) are \(\mathcal{L}\)-formulas), an argument \((\Theta, \tau)\) for \(\tau\) is a proof of \(\tau\) from some \(\Theta \subseteq \Gamma\) using the rules in SRules and DRules.

A central notion in argument-based approaches is an argumentative attack. We can, for instance, distinguish between rebuts and undercuts. A rebut of an argument \((\Theta, \tau)\) is an argument that establishes that \(\tau\) is not the case. An undercut of \((\Theta, \tau)\) establishes that \(\Theta\) does not support \(\tau\). For instance, the argument that concludes that an object is red from the fact that it looks red is undercut by means of the observation that that object is illuminated by red light (Pollock 1995). Note that in order to undercut an argument for \(\tau\) one need not establish that \(\tau\) does not hold.

Argument-based approaches are based on the idea of argument acceptability. In Dung (1995), an argument is acceptable if it can be defended against argumentative attacks. This means that acceptability can be decided purely on the basis of the attack relations between arguments, abstracting away from the specific structure of the arguments themselves. It gives rise to a two-phase methodology in which arguments and attacks are first generated on the basis of a knowledge base, and then acceptability is determined on the basis of the attack relation between arguments. Given that Args represents the set of arguments generated from \(\Gamma\) by the rules in SRules and DRules, we define an attack relation \({\leadsto} \subseteq \mathit{Args} \times \mathit{Args}\) as follows: \(a \leadsto b\) if and only if \(a\) attacks (e.g., rebuts or undercuts) \(b\). This leads to the formulation of a directed graph, termed the argumentation framework, where the arguments are depicted as nodes, and the arrows denote the attack relations. (Note that, at the graph level, arguments are treated abstractly and their concrete structures are not represented within the graph.)

Various argumentation semantics have been proposed for argumentation frameworks, specifying criteria for selecting sets of arguments that represent stances of rational agents. Clearly, a selected set of arguments \(S \subseteq \mathit{Args}\) should satisfy the following requirements:

  • \(S\) should be conflict-free, i.e., for all \(a, b \in S\), \(a\) does not attack \(b\).
  • \(S\) should be able to defend itself against all attackers. More precisely, \(S\) is admissible if and only if \(S\) is conflict-free and for every \(a \in \mathit{Args}\) that attacks some \(b \in S\) there is a \(c \in S\) that attacks \(a\).

For instance, given the following graph,

[Figure: an argumentation framework with nodes a, b, c, d]

\(\{a\}\) is admissible, while \(\{a,b\}\) is not conflict-free and \(\{d\}\) is not admissible.

Given these basic requirements, we can define, for instance, the following semantics, which is frequently adopted in the literature:

  • A preferred set of arguments \(S\) is an admissible set of arguments that is maximal in Args relative to set-inclusion.

In our example, \(\{a,d\}\) and \(\{b,d\}\) are the two preferred sets. This shows that the preferred semantics does not determine a unique selection. In order to define a consequence relation, we can proceed either according to the credulous rationale or according to the skeptical rationale. Considering an abstract argumentation framework based on \(\Gamma\), SRules, and DRules, we give two examples of how a consequence set can be characterized by means of the skeptical approach (Prakken 2010):
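These claims can be checked mechanically. The Python sketch below enumerates conflict-free, admissible, and preferred sets by brute force. Since the example graph is given only as a figure, the attack relation used here is an assumed reconstruction that is consistent with all the claims in the text; if the actual figure differs, only this relation needs changing.

```python
from itertools import combinations

# Assumed attack relation, chosen to be consistent with the text's claims:
# a and b attack each other, both attack c, and c attacks d.
args = {'a', 'b', 'c', 'd'}
attacks = {('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'c'), ('c', 'd')}

def conflict_free(S):
    return not any((x, y) in attacks for x in S for y in S)

def admissible(S):
    # S defends each member against every attacker
    return conflict_free(S) and all(
        any((c, x) in attacks for c in S)
        for x in args for y in S if (x, y) in attacks)

def subsets(xs):
    xs = sorted(xs)
    for r in range(len(xs) + 1):
        yield from (set(c) for c in combinations(xs, r))

admissible_sets = [S for S in subsets(args) if admissible(S)]
preferred = [S for S in admissible_sets
             if not any(S < T for T in admissible_sets)]  # maximal ones

print(admissible({'a'}))                     # True
print(conflict_free({'a', 'b'}))             # False
print(admissible({'d'}))                     # False
print(sorted(sorted(S) for S in preferred))  # [['a', 'd'], ['b', 'd']]
```

Both skeptical notions from the text can then be read off the list `preferred`.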

  • \(\tau\) is a consequence if in each preferred set \(S\) of arguments there is an argument \(a\) for \(\tau\).
  • \(\tau\) is a consequence if there is an argument \(a\) for \(\tau\) that is in every preferred set of arguments \(S\).

The second approach is more cautious. Intuitively, it demands that there is a specific argument for \(\tau\) contained in each rational stance that a reasoner can take given \(\Gamma\), DRules, and SRules. The first option does not bind the acceptability of \(\tau\) to a specific argument: it is sufficient if according to each rational stance there is some argument for \(\tau\).

Argumentation-based approaches have been considered a promising unifying framework in NML (Dung 1995). Many other systems of NML have been translated into logical argumentation, such as approaches based on consistent subsets (see Arieli et al. 2019 for an overview) and approaches based on default logic (van Berkel et al. 2024) and its prioritized variants (D’Agostino and Modgil 2018, Liao et al. 2018, Pardo and Straßer 2024).

3.4 Default logic

In Default Logic, the main representational tool is that of a default rule, or simply a default. A default is a defeasible inference rule of the form

\((\gamma: \theta) / \tau\)

where \(\gamma, \theta\), and \(\tau\) are sentences in a given language, respectively called the pre-requisite, the justification, and the conclusion of the default. The interpretation of the default is that if \(\gamma\) is known and there is no evidence that \(\theta\) is false, then \(\tau\) can be inferred.

The application of a default rule should not result in inconsistency with pre-existing beliefs. The complexity in satisfying this consistency requirement arises from interactions among rules. Specifically, the application of a certain rule may lead to the failure of the consistency condition for another, possibly the same, rule. For example, if \(\theta\) equals \(\neg\tau\), then applying the rule could be considered self-defeating, as the application of the rule itself leads to the failure of the consistency condition.

Reiter’s default logic (Reiter 1980) uses the notion of an extension to make precise the idea that the consistency condition must be met both before and after the rule is applied. Given a set \(\Delta\) of defaults, an extension for \(\Delta\) represents a set of inferences that can be drawn reasonably and consistently using defaults from \(\Delta\).

An extension is defined relative to a default theory. The latter is a pair \((\Gamma, \Delta)\), where \(\Delta\) is a (finite) set of defaults, and \(\Gamma\) is a set of sentences (a world description). The idea is that \(\Gamma\) represents the strict or background information, while \(\Delta\) specifies the available defeasible information. We say that \(\Xi\) is an extension for a default theory \((\Gamma, \Delta)\) if and only if

\(\Xi = \Xi_0 \cup \Xi_1 \cup \ldots \cup \Xi_n \cup \ldots\)

where \(\Xi_0 = \Gamma\), and

\(\Xi_{i+1} = \mathrm{Cn}(\Xi_i) \cup \bigl\{\tau \mid (\gamma : \theta) / \tau \in \Delta \text{ where } \neg\theta \notin \Xi \text{ and } \gamma \in \Xi_i \bigr\}\)

where \(\mathrm{Cn}(\cdot)\) is the consequence relation of CL. It is important to notice the occurrence of the limit \(\Xi\) in the definition of \(\Xi_{i+1}\): the condition above is not a recursive definition of the garden variety, but a truly circular characterization of extensions.

This circularity can be made explicit by giving an equivalent definition of extension as solution of fixed-point equations. Given a default theory \((\Gamma, \Delta)\), let \(\mathsf{S}\) be an operator defined on sets of sentences such that for any such set \(\Phi\), \(\mathsf{S}(\Phi)\) is the smallest set that satisfies the following three requirements:

  • it contains \(\Gamma\) (\(\Gamma \subseteq \mathsf{S}(\Phi)\)),
  • it is deductively closed (that is, if \(\mathsf{S}(\Phi) \vDash \phi\) then \(\phi \in \mathsf{S}(\Phi)\)),
  • and it is closed under the default rules in \(\Delta\): whenever for a default \((\gamma : \theta) / \tau \in \Delta\) both \(\gamma \in \mathsf{S}(\Phi)\) and \(\neg\theta \notin \Phi\), then \(\tau \in \mathsf{S}(\Phi)\).

It can be shown that \(\Xi\) is an extension of \((\Gamma,\Delta)\) if and only if \(\Xi\) is a fixed-point of \(\mathsf{S}\), that is, if \(\mathsf{S}(\Xi) = \Xi\).

Neither one of the two characterizations of extension for default logic (i.e. the fixpoint definition and the pseudo-iterative one) provides a way to “construct” extensions by means of an iterative process. Essentially, one has to “guess” a set of sentences \(\Xi\), and then verify that it satisfies the definition of an extension.

For any given default theory, extensions need not exist, and even when they exist, they need not be unique. We start with an example of the former situation. Let \(\Gamma = \emptyset\) and let \(\Delta\) comprise the single default[4]

\((\top : \theta) / \neg \theta\)

If \(\Xi\) were an extension, then the justification \(\theta\) of the default above would be consistent with \(\Xi\) or not, and either case is impossible. For if \(\theta\) were consistent with \(\Xi\), then the default would be applied to derive \(\neg\theta\), in contradiction with the consistency of \(\Xi\) with \(\theta\). Similarly, if \(\Xi\) were inconsistent with \(\theta\), then \(\Xi \vDash \neg \theta\) and hence, by deductive closure, \(\neg\theta \in \Xi\). Our default would not be applicable, in which case \(\Xi = \Xi_1 = \mathrm{Cn}(\emptyset)\). But then \(\neg\theta \notin \Xi\), a contradiction.

For normal default theories, i.e., theories consisting only of normal defaults (defaults of the form \((\gamma : \theta) / \theta\)), extensions always exist.

Lukaszewicz (1988) presents a modified definition of extension that avoids the previous two problems: it is defined in an iterative way and it warrants the existence of an extension. In a nutshell, the idea is to keep track of the justifications used in the procedure. A default is applied in case its precondition is implied by the current beliefs and its conclusion is consistent with the given beliefs, its own justification, and each of the justifications of previously applied defaults. Additionally, for normal theories, Lukaszewicz’s definition of extensions is equivalent to the definitions from above.

Let us now consider an example of a default theory with multiple extensions. Let \(\Gamma = \emptyset\), and suppose that \(\Delta\) comprises the following two defaults

\((\top : \theta) / \neg\tau\) and \((\top: \tau)/\neg\theta\)

This theory has exactly two extensions, one in which the first default is applied and one in which the second one is applied. It is easy to see that at least one default has to be applied in any extension and that both defaults cannot be applied in the same extension.
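For small propositional default theories, the “guess and verify” method can be carried out mechanically. The Python sketch below does this for prerequisite-free defaults; the formula encoding and function names are our own, and classical consequence is computed by truth tables.

```python
from itertools import product

# Guess-and-check computation of Reiter extensions for prerequisite-free
# default theories over a small propositional language. Formulas are atoms
# (strings) or tuples ('not', f), ('and', f, g), ('or', f, g).

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    if isinstance(f, str): return v[f]
    if f[0] == 'not': return not ev(f[1], v)
    if f[0] == 'and': return ev(f[1], v) and ev(f[2], v)
    if f[0] == 'or':  return ev(f[1], v) or ev(f[2], v)

def entails(gamma, phi):
    """Classical entailment, decided by truth tables."""
    av = sorted(set(atoms(phi)).union(*(atoms(g) for g in gamma)))
    return all(ev(phi, dict(zip(av, vs)))
               for vs in product([False, True], repeat=len(av))
               if all(ev(g, dict(zip(av, vs))) for g in gamma))

def extensions(gamma, defaults):
    """defaults: (justification, conclusion) pairs, prerequisite = T.
    E = Cn(gamma + conclusions of chosen defaults) is an extension iff a
    default is chosen exactly when its justification is consistent with E."""
    exts = []
    for mask in range(2 ** len(defaults)):
        chosen = [d for i, d in enumerate(defaults) if mask >> i & 1]
        base = gamma + [c for (_, c) in chosen]
        if all((d in chosen) == (not entails(base, ('not', d[0])))
               for d in defaults):
            exts.append(base)
    return exts

# (T : theta)/not-theta is self-defeating: no extension exists.
print(len(extensions([], [('theta', ('not', 'theta'))])))           # 0
# (T : theta)/not-tau and (T : tau)/not-theta: exactly two extensions.
print(len(extensions([], [('theta', ('not', 'tau')),
                          ('tau', ('not', 'theta'))])))             # 2
```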

The fact that default theories can have zero, one, or multiple extensions raises the issue of what inferences one is justified in drawing from a given default theory. The problem can be presented as follows: Given a default theory \((\Gamma, \Delta)\), what sentences can be regarded as defeasible consequences of the theory?

Sometimes it may be useful to map out all the consequences that can be drawn from all the extensions, for instance, in order to identify extensions that give rise to undesired consequences (in view of extralogical considerations). The credulous approach does just that: \((\Gamma, \Delta) \nc \phi\) if and only if \(\phi\) belongs to at least one extension of the theory. Note that, in case of multiple extensions, the credulous consequence set will not be closed under CL and it may be inconsistent.

Alternatively, one can adopt the skeptical strategy according to which \((\Gamma, \Delta) \nc \phi\) if and only if \(\phi\) is contained in every extension of \((\Gamma, \Delta)\).

Skeptical consequence, as based on Reiter’s notion of extension, fails to be cautiously monotonic (Makinson 1994). To see this, consider the default theory \((\emptyset, \Delta)\), where \(\Delta\) comprises the following two defaults:

\((\top: \theta)/\theta\) and \((\theta \vee \gamma : \neg \theta) /\neg \theta\)

This theory has only one extension, coinciding with the deductive closure of \(\{\theta\}\). Hence, according to skeptical consequence we have \((\emptyset, \Delta) \nc \theta\), as well as \((\emptyset, \Delta) \nc \theta \vee \gamma\) (by the deductive closure of extensions).

Now consider the theory \((\{\theta \vee \gamma\}, \Delta)\). It has two extensions: one is the same as before, and another one coinciding with the deductive closure of \(\{\neg\theta\}\), and hence not containing \(\theta\). It follows that the intersection of the extensions no longer contains \(\theta\), so that \((\{\theta \vee \gamma\}, \Delta) \nc \theta\) fails, against Cautious Monotony. (Notice that the same example establishes a counter-example for Cut for the credulous strategy: for this we consider the extension of \((\{\theta \vee \gamma\}, \Delta)\) that contains \(\neg\theta\).)
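This counterexample can also be verified mechanically. The sketch below extends the guess-and-check idea to defaults with prerequisites: for each guessed extension it reruns the quasi-inductive construction relative to that guess and keeps the guesses that reproduce themselves. The encoding (formulas as nested tuples, truth-table entailment) is our own simplification for finite propositional theories.

```python
from itertools import product

# Guess-and-check Reiter extensions, now with prerequisites. Formulas are
# atoms (strings) or tuples ('not', f), ('and', f, g), ('or', f, g);
# TOP is a tautology standing in for the trivial prerequisite.
TOP = ('or', 'theta', ('not', 'theta'))

def atoms(f):
    return {f} if isinstance(f, str) else set().union(*(atoms(g) for g in f[1:]))

def ev(f, v):
    if isinstance(f, str): return v[f]
    if f[0] == 'not': return not ev(f[1], v)
    if f[0] == 'and': return ev(f[1], v) and ev(f[2], v)
    if f[0] == 'or':  return ev(f[1], v) or ev(f[2], v)

def entails(gamma, phi):
    av = sorted(set(atoms(phi)).union(*(atoms(g) for g in gamma)))
    return all(ev(phi, dict(zip(av, vs)))
               for vs in product([False, True], repeat=len(av))
               if all(ev(g, dict(zip(av, vs))) for g in gamma))

def extensions(gamma, defaults):
    """defaults: (prerequisite, justification, conclusion) triples."""
    exts = []
    for mask in range(2 ** len(defaults)):
        # guess E = Cn(base), base = gamma + the chosen conclusions
        base = gamma + [c for i, (_, _, c) in enumerate(defaults) if mask >> i & 1]
        # rerun the quasi-inductive construction relative to the guess
        T, changed = list(gamma), True
        while changed:
            changed = False
            for (pre, jus, con) in defaults:
                if (con not in T and entails(T, pre)
                        and not entails(base, ('not', jus))):
                    T.append(con); changed = True
        # extension iff the construction reproduces exactly the guess
        if all(entails(T, f) for f in base) and all(entails(base, f) for f in T):
            exts.append(base)
    return exts

d1 = (TOP, 'theta', 'theta')                                     # (T : theta)/theta
d2 = (('or', 'theta', 'gamma'), ('not', 'theta'), ('not', 'theta'))
print(len(extensions([], [d1, d2])))                             # 1
print(len(extensions([('or', 'theta', 'gamma')], [d1, d2])))     # 2
```

The single extension of \((\emptyset, \Delta)\) contains \(\theta\), while adding the weaker premise \(\theta \vee \gamma\) produces a second extension without it, exactly as described above.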

In Antonelli (1999) a notion of general extension for default logic is introduced, showing that this notion yields a well-behaved relation of defeasible consequence \(\nc\) that satisfies Supraclassicality (if \(\Gamma \vDash \phi\) then \(\Gamma \nc \phi\)) and the three central requirements for NMLs from Gabbay (1985): Reflexivity, Cut, and Cautious Monotony. The idea behind general extensions can be explained in a particularly simple way in the case of pre-requisite free default theories (called “categorical” in Antonelli (1999)). A general extension for such a default theory is a pair \((\Gamma^+, \Gamma^-)\) of sets of defaults (or conclusions thereof) that simultaneously satisfies two fixed-point equations. The set \(\Gamma^+\) comprises (conclusions of) defaults that are explicitly triggered, and the set \(\Gamma^-\) comprises (conclusions of) defaults that are explicitly ruled out. The intuition is that defaults that are not ruled out can still prevent other defaults from being triggered (although they might not be triggered themselves). We thus obtain a “3-valued” approach (not unlike that of Kripke’s theory of truth (Kripke 1975)), in virtue of which general extensions are now endowed with a non-trivial (i.e., not “flat”) algebraic structure. It is then possible to pick out, in a principled way, a particular general extension (for instance, the unique least one, which is always guaranteed to exist) on which to base a notion of defeasible consequence.

A different set of issues arises in connection with the behavior of default logic from the point of view of computation. For a given semi-decidable set \(\Phi\) of sentences, the set of all \(\phi\) that are a consequence of \(\Phi\) in FOL is itself semi-decidable (see the entry on computability and complexity). In the case of default logic, to formulate the corresponding problem one extends (in the obvious way) the notion of (semi-)decidability to sets of defaults. The problem, then, is to decide, given a default theory \((\Gamma,\Delta)\) and a sentence \(\phi\), whether \((\Gamma,\Delta) \nc \phi\), where \(\nc\) is defined, say, skeptically. Such a problem is not even semi-decidable, since, in order to determine whether a default is triggered by a pair of sets of sentences, one has to perform a consistency check, and such checks are not computable.

Default logic as defined above does not prioritize among defaults. In Poole (1985) we find an approach in which the Specificity Principle is considered. In Horty (2007) defaults are ordered by a strict partial order \(\prec\) where \((\gamma : \theta)/\tau \prec (\gamma' : \theta') / \tau'\) means that \((\gamma' : \theta') / \tau'\) has priority over \((\gamma : \theta) / \tau\). The ordering \(\prec\) may express — depending on the application — specificity relations, the comparative reliability of the conditional knowledge expressed by defaults, etc. An ordered default theory is then a triple \((\Gamma, \Delta, \prec)\) where \((\Gamma, \Delta)\) is a default theory. We give an example, but omit a more technical explanation. Take \(\Gamma\) = {bird, penguin}, \(\Delta\) = {bird \(\rightarrow\) flies, penguin \(\rightarrow\) ¬flies} and bird \(\rightarrow\) flies \(\prec\) penguin \(\rightarrow\) ¬flies, where \(\phi \rightarrow \psi\) represents the normal default \((\phi: \psi) / \psi\). We have two extensions of \((\Gamma,\Delta)\), one in which bird \(\rightarrow\) flies is applied and one in which penguin \(\rightarrow\) ¬flies is applied. Since the latter default is \(\prec\)-preferred over the former, only the latter extension is an extension of \((\Gamma,\Delta,\prec)\).

3.5 Autoepistemic logic

An NML closely related to default logic is Moore’s Autoepistemic Logic (Moore 1985). It models the reasoning of an ideal agent reflecting on her own beliefs. For instance, sometimes the absence of a belief in \(\phi\) may give an agent a reason to infer \(\neg\phi\). Moore gives the following example: If I had an older brother, I would know about it. Since I do not know of one, I believe that I do not have an older brother.

An autoepistemic theory consists of the beliefs of an agent including her reflective beliefs about her beliefs. For the latter an autoepistemic belief operator \(\mathsf{B}\) is used. In our example such a theory may contain \(\neg \mathsf{B} \mathit{brother} \supset \neg \mathit{brother}\). Autoepistemic logic specifies ideal properties of such theories. Following Stalnaker (1993), an autoepistemic theory \(\Gamma\) should be stable:

  • \(\Gamma\) should be closed under classical logic: if \(\Gamma \vDash \phi\) then \(\phi \in \Gamma\);
  • \(\Gamma\) should be introspectively adequate:
    • if \(\phi \in \Gamma\) then \(\mathsf{B}\phi \in \Gamma\);
    • if \(\phi \notin \Gamma\) then \(\neg \mathsf{B} \phi \in\Gamma\).

If \(\Gamma\) is consistent, stability implies that

  • \(\phi \in \Gamma\) if and only if \(\mathsf{B} \phi \in\Gamma\), and
  • \(\phi \notin \Gamma\) if and only if \(\neg \mathsf{B} \phi \in\Gamma\).

Let us, for instance, consider the two types of consistent stable sets that extend \(\Xi_1 = \{ \neg \mathsf{B} \mathit{brother} \supset \neg \mathit{brother} \}\):

  1. \(\Gamma_1\) for which \(\mathit{brother} \in \Gamma_1\) and also, by introspection, \(\mathsf{B} \mathit{brother} \in \Gamma_1\).
  2. \(\Gamma_2\) for which \(\mathit{brother} \notin \Gamma_2\). Hence, \(\neg \mathsf{B} \mathit{brother} \in \Gamma_2\) and, by closure, \(\neg \mathit{brother} \in \Gamma_2\). Thus, by introspection, also \(\mathsf{B}\neg \mathit{brother} \in \Gamma_2\).

The second option seems more intuitive: given only \(\neg \mathsf{B} \mathit{brother} \supset \neg \mathit{brother}\), the belief in \(\mathit{brother}\) seems, intuitively speaking, ungrounded in the context of \(\Xi_1\). To make this formally precise, Moore defines

  • \(\Gamma\) is grounded in a set of assumptions \(\Xi\) if for each \(\psi \in \Gamma\), \(\Xi \cup \{ \mathsf{B}\phi \mid \phi \in \Gamma\} \cup \{\neg \mathsf{B}\phi \mid \phi \notin \Gamma\} \vDash \psi\).

Stable theories that are grounded in a set of assumptions \(\Xi\) are called stable expansions of \(\Xi\) or autoepistemic extensions of \(\Xi\). Stable expansions \(\Gamma\) of \(\Xi\) can equivalently be characterized as fixed points:

\(\Gamma = \mathrm{Cn}\bigl( \Xi \cup \{ \mathsf{B}\phi \mid \phi \in\Gamma\} \cup \{\neg \mathsf{B} \phi \mid \phi \notin \Gamma\}\bigr)\)

where \(\mathrm{Cn}(\cdot)\) is the consequence function of CL.

There is no stable theory that is grounded in \(\Xi_1\) and that contains \(\mathit{brother}\) and / or \(\mathsf{B} \mathit{brother}\) like our \(\Gamma_1\) above. The reason is that we fail to derive \(\mathit{brother}\) from \(\Xi_1 \cup \{\mathsf{B} \psi \mid \psi \in \Gamma_1\} \cup \{\neg \mathsf{B} \psi \mid \psi \notin \Gamma_1\}\). The only stable expansion of \(\Xi_1\) contains \(\neg \mathsf{B} \mathit{brother}\) and \(\neg \mathit{brother}\).

Just as in default logic, in autoepistemic logic some sets of assumptions have no stable expansions while some have multiple stable expansions. We demonstrate the former case with \(\Xi_2 = \{\neg \mathsf{B} \phi \supset \phi\}\). Suppose there is a stable expansion \(\Gamma\) of \(\Xi_2\). First note that there is no way to ground \(\phi\) in view of \(\Xi_2\). Hence \(\phi \notin \Gamma\), which means that \(\neg \mathsf{B} \phi \in \Gamma\). But then \(\phi \in \Gamma\) by modus ponens, which is a contradiction.

A potentially problematic property of Moore’s notion of groundedness is that it allows for a type of epistemic bootstrapping. Take \(\Xi_3 = \{\mathsf{B}\phi \supset \phi\}\). Suppose that an agent adopts the belief that she believes \(\phi\), i.e., \(\mathsf{B}\phi\). Now she can use \(\mathsf{B}\phi \supset \phi\) to derive \(\phi\). Hence, on the basis of \(\Xi_3\) she can ground her belief in \(\phi\) on her belief that she believes \(\phi\). In fact, there is a stable expansion of \(\Xi_3\) that contains \(\phi\) and \(\mathsf{B} \phi\). In view of this, weaker forms of groundedness have been proposed in Konolige (1988) that do not allow for this form of bootstrapping and according to which we only have the expansion of \(\Xi_3\) that contains neither \(\phi\) nor \(\mathsf{B}\phi\).
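For one-atom examples like \(\Xi_1\), \(\Xi_2\), and \(\Xi_3\), stable expansions can be checked mechanically by treating \(\mathsf{B}\)-formulas as fresh propositional atoms. The Python sketch below restricts attention to the literals of a single objective atom \(p\): this simplification suffices for the three examples but is not the general definition, and all the names in it are our own.

```python
from itertools import product

# Formulas: atoms (strings) or tuples ('not', f), ('imp', f, g).
# 'Bp' and 'Bnp' are fresh atoms standing for B p and B ~p.

def ev(f, v):
    if isinstance(f, str): return v[f]
    if f[0] == 'not': return not ev(f[1], v)
    if f[0] == 'imp': return (not ev(f[1], v)) or ev(f[2], v)

def entails(gamma, phi, av):
    return all(ev(phi, dict(zip(av, vs)))
               for vs in product([False, True], repeat=len(av))
               if all(ev(g, dict(zip(av, vs))) for g in gamma))

ATOMS = ['p', 'Bp', 'Bnp']
NOT_P = ('not', 'p')

def expansions(xi):
    """Candidate objective theories: {}, {p}, {~p}. A candidate is (the
    objective part of) a stable expansion iff adding the corresponding
    B-literals to xi reproduces exactly the candidate's consequences."""
    out = []
    for cand in [[], ['p'], [NOT_P]]:
        prem = list(xi)
        prem.append('Bp' if 'p' in cand else ('not', 'Bp'))
        prem.append('Bnp' if NOT_P in cand else ('not', 'Bnp'))
        decided = [l for l in ['p', NOT_P] if entails(prem, l, ATOMS)]
        if decided == cand:
            out.append(cand)
    return out

xi1 = [('imp', ('not', 'Bp'), NOT_P)]  # ~B p -> ~p  (the brother example)
xi2 = [('imp', ('not', 'Bp'), 'p')]    # ~B p -> p   (no expansion)
xi3 = [('imp', 'Bp', 'p')]             # B p -> p    (bootstrapping)
print(expansions(xi1))      # [[('not', 'p')]]: only the grounded option
print(len(expansions(xi2))) # 0
print(len(expansions(xi3))) # 2: with and without the bootstrapped p
```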

The centrality of autoepistemic logic in NML is emphasized by the fact that several tight links to other seminal NMLs have been established. Let us mention three such links.

First, there are close connections between autoepistemic logic and logic programming. For instance, Gelfond’s and Lifschitz’s stable model semantics for logic programming with negation as failure, which serves as the foundation for the answer set programming paradigm in computer science, has been equivalently expressed by means of autoepistemic logic (Gelfond and Lifschitz 1988). The result is achieved by translating clauses in logic programming

\(\phi ~~ \leftarrow ~~ \phi_1, \dotsc, \phi_n, \mathit{not}~\psi_1,\dotsc, \mathit{not}~\psi_m\)

in such a way that negation as failure (\(\mathit{not}~\psi\)) gets the meaning “it is not believed that \(\psi\)” (\(\neg \mathsf{B}\psi\)):

\((\phi_1 \wedge \ldots \wedge \phi_{n} \wedge \neg \mathsf{B} \psi_1 \wedge \ldots \wedge \neg \mathsf{B} \psi_m) \supset \phi\)

A second link has been established in Konolige (1988) with default logic, which has been shown translatable to and equi-expressive with Konolige’s strongly grounded variant of autoepistemic logic. Default rules \((\gamma : \theta) / \tau\) are translated by interpreting consistency conditions \(\theta\) by \(\neg \mathsf{B} \neg \theta\), which can be read as “\(\theta\) is consistent with the given beliefs”:

\((\mathsf{B} \gamma \wedge \neg \mathsf{B} \neg \theta) \supset\tau\)

Especially the latter link is rather remarkable since the subject matter of the given formalisms is quite different. While default logic deals with defeasible rules, i.e. rules that allow for exceptions (such as ‘Birds fly’), the non-monotonicity of autoepistemic logic is rooted in the indexicality of its autoepistemic belief operator (Moore 1984, Konolige 1988): it refers to the autoepistemic theory into which it is embedded, and hence its meaning may change when we add beliefs to the theory.

Various modal semantics have been proposed for autoepistemic logic (see the entry on modal logic for more background on modal logics). For instance, Moore (1984) proposes an S5-based Kripkean possible world semantics and Lin and Shoham (1990) propose bi-modal preferential semantics (see Section Selection semantics below) for both autoepistemic logic and default logic. In Konolige (1988) it has been observed that the fixed point characterization of stable expansions can be alternatively phrased only on the basis of the set of formulas \(\Gamma_0\) in \(\Gamma\) that do not contain occurrences of the modal operator \(\mathsf{B}\):

\( \Gamma = \mathrm{Cn}_{\mathsf{K45}}\bigl( \Xi \cup \{\mathsf{B}\phi \mid \phi \in \Gamma_0\} \cup \{\neg \mathsf{B}\phi\mid \phi \notin \Gamma_0\}\bigr)\)

where \(\mathrm{Cn}_{\mathsf{K45}}(\cdot)\) is the consequence function of the modal logic \(\mathsf{K45}\).

3.6 Selection semantics

In CL a formula \(\phi\) is entailed by \(\Gamma\) (in signs \(\Gamma \vDash \phi\)) if and only if \(\phi\) is valid in all classical models of \(\Gamma\). An influential idea in NML is to define non-monotonic entailment not in terms of all classical models of \(\Gamma\), but rather in terms of a selection of these models (Shoham 1987). Intuitively, the idea is to read

\(\Gamma \nc \phi\) as “\(\phi\) holds in the most normal/natural/etc. models of \(\Gamma\).”

3.6.1 The core properties

In the seminal paper Kraus et al. (1990) this idea is investigated systematically. The authors introduce various semantic structures, among them preferential ones. A preferential structure \(S\) consists of

  • a set \(\Omega\) of states
  • and an irreflexive and transitive relation \(\prec\) on \(\Omega\).

In the context of preferential structures one may think of states in terms of labelled models \(M_s\) of classical propositional logic, where each label \(s\) is attached to a unique model \(M\) but a model may occur under various labels in \(\Omega\).[5] For ease of demonstration we will henceforth just talk about ‘models in \(\Omega\)’ and no longer refer to states or labelled models.

Intuitively, \(M \prec M'\) if \(M\) is more normal than \(M'\). The relation \(\prec\) can be employed to formally realize the idea of defining a consequence relation in view of the most normal models, namely by focusing on \(\prec\)-minimal models. Formally, where \([\psi]\) is the set of all models of \(\psi\) in \(\Omega\), \(\nc^S\) is defined as follows:

\(\psi \nc^S \phi\) if and only if \(\phi\) holds in all \(\prec\)-minimal models in \([\psi]\).

In order to warrant that there are such minimal models, \(\prec\) is considered to be smooth (also sometimes called stoppered), i.e., for each \(M \in [\psi]\) either \(M\) is \(\prec\)-minimal in \([\psi]\) or there is a \(\prec\)-minimal \(M' \in [\psi]\) such that \(M' \prec M\).
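A minimal-model consequence relation of this kind is easy to prototype. In the Python sketch below, states are valuations over three atoms and the normality order (via the `level` function) is our own illustrative choice; the sketch exhibits the expected non-monotonicity: birds fly, but penguin-birds do not.

```python
from itertools import product

# A tiny preferential structure: states are the eight valuations over
# (bird, penguin, flies). The normality levels are an illustrative choice.
worlds = list(product([0, 1], repeat=3))  # (b, p, f)

def level(w):
    b, p, f = w
    if not p and (f or not b): return 0  # most normal: no penguins, birds fly
    if p and not f: return 1             # penguin worlds that do not fly
    return 2                             # remaining abnormal worlds

def prec(w1, w2):                        # w1 is strictly more normal than w2
    return level(w1) < level(w2)

def minimal(prop):
    ws = [w for w in worlds if prop(w)]
    return [w for w in ws if not any(prec(u, w) for u in ws)]

def nc(phi, psi):                        # phi |~ psi: psi holds in all
    return all(psi(w) for w in minimal(phi))  # prec-minimal phi-worlds

bird    = lambda w: w[0] == 1
penguin = lambda w: w[1] == 1
flies   = lambda w: w[2] == 1

print(nc(bird, flies))                                 # True
print(nc(lambda w: bird(w) and penguin(w),
         lambda w: not flies(w)))                      # True: monotony fails
```

Strengthening the premise from `bird` to `bird and penguin` shifts the set of minimal models, which is exactly how retraction is modelled semantically.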

Preferential structures enjoy a central role in NML since they characterize preferential consequence relations, i.e., non-monotonic consequence relations \(\nc\) that fulfill the following central properties, also referred to as the core properties or the conservative core of non-monotonic reasoning systems or as the KLM-properties (in reference to the authors of Kraus, Lehmann, Magidor 1990):

  • Reflexivity: \(\phi \nc \phi\).
  • Cut: If \(\phi \wedge \psi \nc \tau\) and \(\phi \nc\psi\), then \(\phi \nc \tau\).
  • Cautious Monotony: If \(\phi \nc \psi\) and \(\phi \nc\tau\) then \(\phi \wedge \psi \nc \tau\).
  • Left Logical Equivalence: If \(\vDash \phi \equiv \psi\) and \(\phi \nc \tau\), then \(\psi \nc \tau\).
  • Right Weakening: If \(\vDash \phi \supset \psi\) and \(\tau\nc \phi\), then \(\tau\nc \psi\).
  • OR: If \(\phi \nc \psi\) and \(\tau \nc \psi\), then \(\phi\vee \tau \nc \psi\).

The first three properties are those we have already seen above. According to Left Logical Equivalence, classically equivalent formulas have the same non-monotonic consequences. Where \(\psi\) is classically entailed by \(\phi\), Right Weakening expresses that whenever \(\phi\) is a non-monotonic consequence of \(\tau\) then so is \(\psi\).

Without further commentary we state two important derived rules:

  • AND: If \(\phi \nc \psi\) and \(\phi \nc \tau\) then \(\phi\nc \psi \wedge \tau\).
  • S: If \(\phi \wedge \psi \nc \tau\) then \(\phi \nc \psi\supset \tau\).

In Kraus et al. (1990) it is shown that a consequence relation \(\nc\) is preferential if and only if \(\nc = \nc^S\) for some preferential structure \(S\).

Given a set of conditional assertions \(\mathbf{K}\) of the type \(\phi \nc \psi\) one may be interested in investigating what other conditional assertions follow. The following two strategies lead to the same results. The first option is to intersect all preferential consequence relations \(\nc\) that extend \(\mathbf{K}\) (in the sense that the conditional assertions in \(\mathbf{K}\) hold for \(\nc\)), obtaining the Preferential Closure \(\nc^{\mathrm{P}}\) of \(\mathbf{K}\). The second option is to use the six defining properties of preferential consequence relations as deduction rules for syntactic units of the form \(\phi \nc \psi\). This way we obtain the deductive system P with its consequence relation \(\vdash^{\mathrm{P}}\) for which:

\(\mathbf{K} \vdash^{\mathrm{P}} \phi \nc \psi\) if and only if \(\phi\nc^{\mathrm{P}} \psi\).

3.6.2 Various semantics

One of the most remarkable facts in the study of NMLs is that various other semantics have been proposed — often independently and based on very different considerations — that also adequately characterize preferential consequence relations. This underlines the central status of the core properties in the formal study of defeasible reasoning.

Many of these approaches use structures \(S = (\Omega, \Pi)\) where \(\Omega\) is again a set of classical models and \(\Pi\) is a mapping with the domain \(\wp(\Omega)\) (the set of subsets of \(\Omega\)) and an ordered co-domain \((D, <)\). The exact nature of \((D, <)\) depends on the given approach and we give some examples below. The basic common idea is to let \(\phi \nc^S \psi\) in case \(\Pi([\phi \wedge \psi])\) is preferable to \(\Pi([\phi \wedge \neg \psi])\) in view of \(<\). The following list gives some proposals, which we discuss further below:

  • Possibility measure \(\pi: \wp(\Omega) \rightarrow [0,1]\) (possibilistic structures): \(\phi \nc^S \psi\) iff \(\pi([\phi]) = 0\) or \(\pi([\phi \wedge \psi]) > \pi([\phi \wedge \neg\psi])\).
  • Ordinal ranking function \(\kappa: \wp(\Omega) \rightarrow \{0,1,\dotsc,\infty\}\) (ordinal ranking structures): \(\phi \nc^S \psi\) iff \(\kappa([\phi]) = \infty\) or \(\kappa([\phi \wedge \psi]) < \kappa([\phi \wedge \neg\psi])\).
  • Plausibility measure \(\mathrm{Pl}: \wp(\Omega) \rightarrow D\) (plausibility structures): \(\phi \nc^S \psi\) iff \(\mathrm{Pl}([\phi]) = \bot\) or \(\mathrm{Pl}([\phi \wedge \psi]) > \mathrm{Pl}([\phi \wedge \neg \psi])\).

Possibility measures assign real numbers in the interval [0,1] to propositions to measure their possibility, where 0 corresponds to impossible states and 1 to necessary states (Dubois and Prade 1990). Ordinal ranking functions rank propositions via natural numbers closed with ∞ (Goldszmidt and Pearl 1992, Spohn 1988). We may think of \(\kappa([\phi])\) as the level of surprise that we would face if \(\phi\) holds, where ∞ represents maximal surprise. Finally, plausibility measures (Friedman and Halpern 1996, Rott 2013) associate propositions with elements in a partially ordered domain with the bottom element \(\bot\) and the top element \(\top\) in order to compare their plausibilities. Given some simple constraints on \(\mathrm{Pl}\) (such as: If \(\mathrm{Pl}(X) = \mathrm{Pl}(Y) = \bot\) then \(\mathrm{Pl}(X \cup Y) = \bot\)) we speak of qualitative plausibility structures. The following statements are all equivalent, which emphasizes the centrality of the core properties in the study of NMLs:

  • \(\mathbf{K} \vdash^{\mathrm{P}} \phi \nc \psi\)
  • \(\phi \nc^S \psi\) for all preferential structures \(S\) for which \(\nc^S\) extends \(\mathbf{K}\)
  • \(\phi \nc^S \psi\) for all possibilistic structures \(S\) for which \(\nc^S\) extends \(\mathbf{K}\)
  • \(\phi \nc^S \psi\) for all ordinal ranking structures \(S\) for which \(\nc^S\) extends \(\mathbf{K}\)
  • \(\phi \nc^S \psi\) for all qualitative plausibility structures \(S\) for which \(\nc^S\) extends \(\mathbf{K}\)

Yet another well-known semantics that characterizes preferential consequence relations makes use of conditional probabilities. The idea is that \(\phi \nc \psi\) holds in a structure if the conditional probability \(P(\psi \mid \phi)\) is closer to 1 than any arbitrarily small \(\epsilon\), whence the name \(\epsilon\)-semantics (Adams 1975, Pearl 1989).

The following example demonstrates that the intuitive idea of using a threshold value such as ½ instead of infinitesimals and of interpreting \(\phi \nc \psi\) as \(P(\psi \mid \phi)\) > ½ does not work in a straightforward way. Let \(\alpha\) abbreviate “being a vegan”, \(\beta\) abbreviate “being an environmentalist”, and \(\gamma\) abbreviate “avoiding the use of palm oil”. Further, let our knowledge base comprise the statements “αs are usually βs” and “\((\alpha \wedge \beta)\)s are usually γs”. The following Euler diagram illustrates the conditional probabilities in a possible probability distribution for the given statements.

[Figure: Euler diagram of the conditional probabilities (venn.png)]

We have for instance: \(P(\beta \mid \alpha)\) = ¾, \(P(\gamma \mid \alpha \wedge \beta)\) = ⅔ and \(P(\gamma \mid \alpha)\) = ½. Hence, under the proposed reading of \(\nc\), we get \(\alpha \nc \beta\) and \(\alpha \wedge \beta \nc \gamma\) while we do not have \(\alpha \nc \gamma\). This means that Cut is violated. Similarly, it can be shown that other properties such as Or are violated under this reading (e.g., Pearl 1988).
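The arithmetic can be checked mechanically. The sketch below uses toy world-weights of our own choosing that realize the diagram's conditional probabilities; it verifies that the threshold reading validates the two premises of Cut but not its conclusion:

```python
from fractions import Fraction

# Worlds over (vegan, environmentalist, avoids-palm-oil) with hypothetical
# weights chosen by us to match the diagram's conditional probabilities.
weights = {
    (1, 1, 1): 4,   # vegan environmentalists avoiding palm oil
    (1, 1, 0): 2,   # vegan environmentalists using palm oil
    (1, 0, 0): 2,   # vegans who are neither
}

def P(event, given):
    """Conditional probability P(event | given) over the weighted worlds."""
    num = sum(w for world, w in weights.items() if event(world) and given(world))
    den = sum(w for world, w in weights.items() if given(world))
    return Fraction(num, den)

alpha = lambda w: w[0] == 1          # being a vegan
beta = lambda w: w[1] == 1           # being an environmentalist
gamma = lambda w: w[2] == 1          # avoiding palm oil
a_and_b = lambda w: alpha(w) and beta(w)

print(P(beta, alpha))     # 3/4 > 1/2: alpha |~ beta holds
print(P(gamma, a_and_b))  # 2/3 > 1/2: alpha & beta |~ gamma holds
print(P(gamma, alpha))    # 1/2, not > 1/2: alpha |~ gamma fails, so Cut is violated
```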

In view of these difficulties it is remarkable that there is a probabilistic account according to which \(\phi \nc \psi\) is interpreted as \(P(\psi \mid \phi)\) > ½ and that nevertheless characterizes preferential consequence relations (Benferhat et al. 1999). The idea is to restrict the focus to structures \(S = (\Omega, P, \prec)\) that have specific probability distributions known as atomic bound systems or big-stepped probabilities. First, a linear order \(\prec\) is imposed on the set of classical models in \(\Omega\) over which the probability measure \(P\) is defined. Second, \(P\) is required to respect this order by “taking big steps”, i.e., in such a way that for any \(M \in \Omega\), \(P(\{M\}) > \sum\{P(\{M'\}) \mid M' \prec M\}\). This warrants that \(\phi \nc \psi\) holds if and only if the unique \(\prec\)-maximal model in \([\phi]\) validates \(\psi\) (in signs, \(\max[\phi] \models \psi\)).[6] Here is how this helps us with the problematic example above: \(\alpha \nc \beta\) means that \(\max[\alpha] \models \beta\) and hence that \(\max[\alpha] = \max[\alpha \wedge \beta]\). \(\alpha \wedge \beta \nc \gamma\) means that \(\max[\alpha \wedge \beta] \models \gamma\) and hence \(\max[\alpha] \models \gamma\) which implies \(\alpha \nc \gamma\).
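The big-step condition can be illustrated with a small brute-force sketch (a toy distribution of our own devising over two atoms): for such a distribution, the threshold reading \(P(\psi \mid \phi)\) > ½ agrees with evaluating \(\psi\) at the unique \(\prec\)-maximal model of \([\phi]\).

```python
from fractions import Fraction
from itertools import product

# A toy big-stepped distribution over the four models of atoms p, q,
# listed from <-smallest to <-largest.  Each weight exceeds the sum of
# all smaller ones: 2 > 1, 4 > 1+2, 8 > 1+2+4.
order = [(0, 0), (0, 1), (1, 0), (1, 1)]   # (p, q) valuations
weight = {m: 2 ** i for i, m in enumerate(order)}

def P(phi, psi):
    """Conditional probability P(phi | psi)."""
    num = sum(weight[m] for m in order if phi(m) and psi(m))
    den = sum(weight[m] for m in order if psi(m))
    return Fraction(num, den)

def nc(phi, psi):
    """phi |~ psi under the threshold reading P(psi | phi) > 1/2."""
    return P(psi, phi) > Fraction(1, 2)

def max_model(phi):
    """The unique <-maximal model satisfying phi."""
    return max((m for m in order if phi(m)), key=lambda m: weight[m])

p = lambda m: m[0] == 1
q = lambda m: m[1] == 1
top = lambda m: True

# For a big-stepped distribution the two readings coincide:
for phi, psi in product([p, q, top], repeat=2):
    assert nc(phi, psi) == psi(max_model(phi))
print(nc(p, q), max_model(p))   # the maximal p-model (1, 1) satisfies q
```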

In Gilio (2002) an approach is presented in which conditionals \(\alpha \nc \beta\) are characterized by imprecise conditional probabilities \(0 \le \tau_1 \le P(\beta \mid \alpha) \le \tau_2 \le 1\) with a lower bound \(\tau_1\) and an upper bound \(\tau_2\) on the conditional probabilities. In this approach it is possible to determine how the probability bounds degrade in view of applications of specific inference rules. In Ford (2004) this is used to distinguish argument strengths (see above).

3.6.3 Beyond the core properties

Preferential consequence relations do not in general validate

  • Rational Monotony: If \(\phi \nc \tau\) and \(\phi \nnc\neg \psi\), then \(\phi \wedge \psi \nc \tau\).

A preferential consequence relation \(\nc\) that satisfies Rational Monotony is called a rational consequence relation. These are characterized by ranked structures, which are preferential structures \(S = (\Omega, \prec)\) for which \(\prec\) is modular, that is, for all \(M, M', M'' \in \Omega\), if \(M \prec M'\) then \(M'' \prec M'\) or \(M \prec M''\). One can picture this as follows: models in \(\Omega\) are arranged in linearly ordered levels, and the level of a model reflects its degree of normality (its rank).

The seminal Kraus et al. (1990) inspired a huge variety of subsequent work. We briefly highlight some contributions.

While in Kraus et al. (1990) the standard of deduction was classical propositional logic, Arieli and Avron (2000) also consider nonclassical monotonic core logics and variants with multiple conclusions. Various publications investigate preferential and rational consequence relations in a first-order language (e.g., Lehmann and Magidor 1990, Delgrande 1998, Friedman et al. 2000). Given the problematic status of Rational Monotony (see our discussion in Section 1), alternative semantics have been investigated that validate weaker properties such as Disjunctive Rationality (if \(\phi_1 \vee \phi_2 \nc \psi\) then \(\phi_1 \nc \psi\) or \(\phi_2 \nc \psi\)) (e.g., in Freund 1993, Rott 2014, Booth and Varzinczak 2021).

As we have seen, the properties of preferential or rational consequence relations may also serve as deductive rules for syntactic units of the form \(\phi \nc \psi\) under the usual readings such as “If \(\phi\) then usually \(\psi\).” This approach can be generalized to gain conditional logics by allowing for formulas where a conditional assertion \(\phi \nc \psi\) is within the scope of classical connectives such as \(\wedge, \vee, \neg\), etc. (see e.g., Delgrande 1987, Asher and Morreau 1991, Friedman and Halpern 1996, Giordano et al. 2009, Egre et al. forthcoming, and Parent 2021 for an overview on conditional deontic logics).

We state one remarkable result obtained in Boutilier (1990) that closely links the study of NMLs to the study of modal logics in the Kripkean tradition. Suppose we translate the deduction rules of system P into Hilbert-style axiom schemes such that, for instance, Cautious Monotony becomes

\(\vdash \bigl( ( \phi \nc \psi) \wedge (\phi \nc \tau) \bigr) \supset\bigl( ( \phi \wedge \psi) \nc \tau \bigr)\)

Conditional assertions are shown to be expressible in standard Kripkean modal frames in such a way that system P (under this translation) corresponds to a fragment of the well-known modal logic S4. An analogous result is obtained for the modal logic S4.3 and the system that results from strengthening system P with an axiom scheme for Rational Monotony.

Various contributions are specifically devoted to problems related to relevance. Consider some of the conditional assertions in the Nixon Diamond: \(\mathbf{K}\) = {Quaker \(\nc\) Pacifist, Republican \(\nc\) ¬Pacifist}. It seems desirable to derive e.g. Quaker \(\wedge\) worker \(\nc\) Pacifist since in view of \(\mathbf{K}\), being a worker is irrelevant to the assertion Quaker \(\nc\) Pacifist. Intuitively speaking, Quaker \(\nc\) ¬worker should fail, in which case Rational Monotony yields Quaker \(\wedge\) worker \(\nc\) Pacifist in view of Quaker \(\nc\) Pacifist. Hence, prima facie we may want to proceed as follows: let \(\nc^{R}\) be the result of intersecting all rational consequence relations \(\nc\) that extend \(\mathbf{K}\) (in the sense that the conditional assertions in \(\mathbf{K}\) hold for \(\nc\)). Unfortunately it is not the case that Quaker \(\wedge\) worker \(\nc^{R}\) Pacifist. The reason is simply that there are rational consequence relations for which Quaker \(\nc\) ¬worker holds, whence Rational Monotony does not yield the desired Quaker \(\wedge\) worker \(\nc\) Pacifist. Moreover, it has been shown that \(\nc^R\) is identical to the preferential closure \(\nc^P\).

In Lehmann and Magidor (1992) a Rational Closure for conditional knowledge bases such as \(\mathbf{K}\) is proposed that yields the desired consequences. We omit the technical details. The basic idea is to assign natural numbers, so-called ranks, to formulas that indicate how exceptional they are relative to the given knowledge base \(\mathbf{K}\). Then the ranks of formulas are minimized, which means that each formula is interpreted as normally as possible. A conditional assertion \(\phi \nc \psi\) is in the Rational Closure of \(\mathbf{K}\) if the rank of \(\phi \wedge \psi\) is strictly less than the rank of \(\phi \wedge \neg\psi\) (or \(\phi\) has no rank). The rank of a formula \(\alpha\) is calculated iteratively. Let \(m(\mathbf{K}') = \{ \phi \supset \psi \mid \phi \nc \psi \in \mathbf{K}'\}\) denote the material counterpart of a given set of conditional assertions \(\mathbf{K}'\). Let \(\mathbf{K}_0 = \mathbf{K}\). \(\alpha\) has rank 0 if it is consistent with \(m(\mathbf{K}_0)\). Let \(\mathbf{K}_{i+1}\) be the set of all members \(\phi \nc \psi\) of \(\mathbf{K}_i\) for which the rank of \(\phi\) is not less than or equal to \(i\). \(\alpha\) has rank \(i+1\) if it does not have a rank smaller than or equal to \(i\) and it is consistent with \(m(\mathbf{K}_{i+1})\).

In our example Quaker \(\wedge\) worker has rank 0, just like Quaker. This is strictly less than rank 1 of Quaker \(\wedge\) ¬Pacifist and Quaker \(\wedge\) worker \(\wedge\) ¬Pacifist. This means that the desired Quaker \(\wedge\) worker \(\nc\) Pacifist is in the Rational Closure of our \(\mathbf{K}\).

A system equivalent to Rational Closure has been independently proposed under the name system Z based on \(\epsilon\)-semantics in Pearl (1990). A weaker version of Rational Closure has been proposed in Booth and Varzinczak (2021): it invalidates Rational Monotony, but validates Disjunctive Rationality.

One may consider it a drawback of Rational Closure that it suffers from the Drowning Problem (see above). To see this, consider the following conditional knowledge base \(\mathbf{KL}\):

[Figure: the conditional knowledge base KL (logic-course.png)]

We have the following ranks:

formula | \(l\) | \(c\) | \(r\) | \(f\) | \(l \wedge f\) | \(l \wedge \neg f\)
rank | 1 | 0 | 0 | 0 | 1 | 1

Since the ranks of \(l \wedge f\) and \(l \wedge \neg f\) are equal, \(l \nc f\) is not in the Rational Closure of \(\mathbf{KL}\).

This problem has been tackled in Lehmann (1995) with the formalism Lexicographic Closure. Roughly, the idea is to compare models by measuring the severity of violations of assertions in the given conditional knowledge base to which they give rise. \(\phi \nc \psi\) is entailed by \(\mathbf{K}\) if the best models of \(\phi \wedge \psi\) are better than the best models of \(\phi \wedge \neg\psi\). A model \(M\) violates a conditional assertion \(\phi \nc \psi\) if it validates \(\phi\) but not \(\psi\). The severity of a violation is proportional to the rank of \(\phi\). For instance, in our example we have the following models of \(l \wedge f\) resp. of \(l \wedge \neg f\):

\(l\) | \(f\) | \(r\) | \(c\) | model | violation: rank
1 | 1 | 1 | 1 | \(M_1\) | \(l \nc \neg r\): 1
1 | 1 | 1 | 0 | \(M_2\) | \(l \nc \neg r\): 1, \(l \nc c\): 1
1 | 1 | 0 | 1 | \(M_3\) | \(c \nc r\): 0
1 | 1 | 0 | 0 | \(M_4\) | \(l \nc c\): 1
1 | 0 | 1 | 1 | \(M_5\) | \(l \nc \neg r\): 1, \(c \nc f\): 0
1 | 0 | 1 | 0 | \(M_6\) | \(l \nc \neg r\): 1, \(l \nc c\): 1
1 | 0 | 0 | 1 | \(M_7\) | \(c \nc r\): 0, \(c \nc f\): 0
1 | 0 | 0 | 0 | \(M_8\) | \(l \nc c\): 1

The best model of \(l \wedge f\) is \(M_3\) since it does not violate any conditional assertion of rank higher than 0, while all other models of \(l \wedge f\) do. For the same reason \(M_7\) is the best model of \(l \wedge \neg f\). Moreover, \(M_3\) violates fewer conditional assertions of rank 0 than \(M_7\) and is thus the preferred interpretation. Hence, \(l \nc f\) is in the Lexicographic Closure. Altogether, what avoids the drowning problem in Lehmann’s approach is a combination of specificity handling and a quantitative rationale according to which interpretations are preferred that violate fewer conditionals.
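The lexicographic comparison can be sketched by brute force. The code below is our own encoding of \(\mathbf{KL}\) with the ranks from the table above; `severity` lists violation counts from the most specific rank down, so tuple comparison realizes the lexicographic order.

```python
from itertools import product

# KL (hypothetical reconstruction): l |~ -r and l |~ c have rank 1,
# c |~ r and c |~ f have rank 0.  Atoms: l, f, r, c.
KB = [  # (antecedent, consequent, rank)
    (lambda m: m["l"], lambda m: not m["r"], 1),
    (lambda m: m["l"], lambda m: m["c"],     1),
    (lambda m: m["c"], lambda m: m["r"],     0),
    (lambda m: m["c"], lambda m: m["f"],     0),
]
ATOMS = ["l", "f", "r", "c"]
MODELS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=4)]
MAX_RANK = 1

def severity(m):
    """Violation counts per rank, most specific rank first."""
    counts = [0] * (MAX_RANK + 1)
    for a, c, r in KB:
        if a(m) and not c(m):
            counts[r] += 1
    return tuple(counts[r] for r in range(MAX_RANK, -1, -1))

def best(phi):
    """A best (least severe) model of phi."""
    return min((m for m in MODELS if phi(m)), key=severity)

def lex_entails(phi, psi):
    return severity(best(lambda m: phi(m) and psi(m))) < \
           severity(best(lambda m: phi(m) and not psi(m)))

l_and_f = lambda m: m["l"] and m["f"]
print(best(l_and_f))   # M3: l, f, c true and r false
print(lex_entails(lambda m: m["l"], lambda m: m["f"]))  # True: drowning avoided
```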

To highlight the latter, consider a knowledge base \(\mathbf{K}\)consisting of

  • Party \(\nc\) Anne,
  • Party \(\nc\) Phil, and
  • Party \(\nc\) Frank.

We expect each of Anne, Phil, and Frank to come to the party. Suppose, moreover, we know that Party \(\wedge\) ((¬Anne \(\wedge\) ¬Phil) \(\vee\) ¬Frank). In view of this fact, not all three assertions are valid. At best, either the former two hold or the latter. If we try to validate as many conditional assertions as possible, we will prefer the situation in which only the latter is violated. This occurs in the Lexicographic Closure of \(\mathbf{K}\) which contains

  • Party \(\wedge\) ((¬Anne \(\wedge\) ¬Phil) \(\vee\) ¬Frank) \(\nc\) ¬Frank and
  • Party \(\wedge\) ((¬Anne \(\wedge\) ¬Phil) \(\vee\) ¬Frank) \(\nc\) Anne \(\wedge\) Phil.

Similar quantitative considerations play a role in the Maximum Entropy Approach in Goldszmidt et al. (1993) which is based on \(\epsilon\)-semantics. The basic idea is similar to Lexicographic Closure: \(\phi \nc \psi\) is entailed by \(\mathbf{K}\) if \(\min(\{\kappa(M) \mid M \models \phi \wedge \psi\}) < \min (\{\kappa(M) \mid M \models \phi \wedge \neg\psi\})\), where \(\kappa(M)\) outputs a weighted sum of violations in \(M\) and where weights are attributed to violations in view of specificity considerations. Similar to Lexicographic Closure, violations of more specific conditionals weigh more heavily than violations of more general conditionals. As a consequence, just like in Lexicographic Closure, \(l \nc f\) is entailed in our example and so the drowning problem is avoided. A difference to Lexicographic Closure is, for instance, that \(l \wedge \neg f \nc c\) is not entailed according to the Maximum Entropy Approach. The reason is that the violation of several less specific assertions may in sum counter-balance the violation of a more specific assertion. For instance, while model \(M_7\) is preferred over model \(M_8\) in the lexicographic approach, in the Maximum Entropy Approach both models turn out to be equally good and the best models of \(l \wedge \neg f\).

3.7 Assumption-based approaches

In a family of approaches, defeasible inferences are encoded as material inferences with explicit assumptions:

[†]    \((\phi \wedge \mathsf{as}) \supset \psi\)

expresses that \(\phi\) defeasibly implies \(\psi\) on the assumption that \(\mathsf{as}\) holds. The assumptions are interpreted as valid “as much as possible”. There are various ways to give a more precise meaning to this.

One such approach is offered by Adaptive Logics (Batens 2007, Straßer 2014). An adaptive logic AL is given by a triple consisting of

  • a lower limit logic LLL with a consequence relation \(\vdash\) that is reflexive, monotonic, satisfies Cut, and is compact (if \(\Gamma \vdash \phi\) then there is a finite \(\Gamma' \subseteq \Gamma\) such that \(\Gamma' \vdash \phi\));
  • a set of abnormalities \(\Omega\) containing formulas which are to be interpreted “as false as much as possible”; and
  • an adaptive strategy in view of which the phrase “as false as much as possible” will be disambiguated.

LLL defines the deductive core of AL: all LLL-valid inferences are AL-valid. AL strengthens LLL by allowing for defeasible inferences by means of the following scheme:

[‡]    Where \(\mathsf{ab}\) is an abnormal formula in \(\Omega\): if \(\phi \vdash \psi \vee \mathsf{ab}\) then \(\psi\) follows defeasibly from \(\phi\) on the assumption that \(\mathsf{ab}\) is false (or, equivalently, that \(\neg \mathsf{ab}\) is true).

Where \(\mathsf{as} = \neg \mathsf{ab}\), the antecedent of [‡] is equivalent to \(\vdash (\phi \wedge \mathsf{as}) \supset \psi\), which is [†] above (under the classical meaning of \(\neg, \wedge\), and \(\vee\)).

Examples of concrete classes of adaptive logics are:

  • Inconsistency-Adaptive Logics (Batens 1980, Batens 1999, Priest 1991): In domains in which contradictions may occur it is useful to work with a paraconsistent core logic LLL that has a negation \(\sim\) that is non-explosive (\(p, {\sim} p \nvdash q\)) and for which disjunctive syllogism doesn’t hold (\(\phi \vee \psi, {\sim} \phi \nvdash \psi\)). Let \(\Omega\) consist of \(\sim\)-contradictions in logical atoms (\(p \wedge {\sim} p\)). Then we have \(p \vee q, {\sim} p \vdash q \vee \mathsf{ab}\) where \(\mathsf{ab} = p \wedge {\sim} p\). This means that disjunctive syllogism can be applied on the assumption that there is no \(\sim\)-contradiction in \(p\). This way LLL is significantly strengthened in a non-monotonic way.
  • Adaptive Logics of Inductive Generalizations: Let LLL be CL and consider an \(\Omega\) that contains, for instance, formulas of the form \(\exists x P(x) \wedge \neg \forall x P(x)\). Then we have, for example, \(P(a) \vdash \forall x P(x) \vee \mathsf{ab}\) where \(\mathsf{ab} = \exists x P(x) \wedge \neg \forall x P(x)\). This means we inductively generalize from \(P(a)\) to \(\forall x P(x)\) based on the assumption that the abnormality \(\mathsf{ab}\) does not hold. (This is a simplified account compared to the more subtle logics presented in Batens (2011).)
  • There are numerous adaptive logics for other defeasible reasoning forms such as abductive reasoning, normative reasoning, belief revision, diagnosis, etc.

Adaptive logics implement the idea behind [‡] syntactically in terms of a dynamic proof theory. A dynamic proof from the premise set \(\{P(a), \neg P(b)\}\) for an adaptive logic of inductive generalizations may look as follows:

1. | \(P(a)\) | PREM |
✓2. | \(\forall x P(x)\) | 1; RC | \(\{\exists x P(x) \wedge \neg \forall x P(x)\}\)
3. | \(\neg P(b)\) | PREM |
4. | \(\exists x P(x) \wedge \neg \forall x P(x)\) | 1,3; RU |

The last column of each line of the proof contains its condition, that is, a set of abnormalities \(\mathrm{COND} \subseteq \Omega\) that encode the assumptions used to derive the formula in the second column of the line: each \(\mathsf{ab} \in \mathrm{COND}\) is assumed to be false. The generic rule RU (rule unconditional) represents any inference that is licensed by LLL. The generic rule RC (rule conditional) follows scheme [‡] where \(\mathsf{ab}\) is pushed to the condition of the line. Sometimes there are good reasons to consider assumptions violated, in which case the corresponding lines are ✓-marked as retracted. This is clearly the case if the condition of a line is \(\mathrm{COND}\) while, for some \(\{\mathsf{ab}_1, \dotsc, \mathsf{ab}_n\} \subseteq \mathrm{COND}\), \(\mathsf{ab}_1 \vee \mathsf{ab}_2 \vee \ldots \vee \mathsf{ab}_n\) is derived without defeasible steps (i.e., on the condition ∅). This means that (at least) one of the formulas in \(\mathrm{COND}\) is false. Thus, line 2 is marked as retracted when we reach line 4 of the proof. In view of this behavior, dynamic proofs are internally dynamic: even if no new premises are introduced, the addition of new lines may cause the retraction of previous lines of a proof.

Not all cases of retraction are of this straightforward kind. Various adaptive strategies come with marking mechanisms to handle more complicated cases, such as the one in the following proof:

1. | \(P(a)\) | PREM |
2. | \(Q(a)\) | PREM |
3. | \(\neg P(b) \vee \neg Q(c)\) | PREM |
4. | \(\forall x P(x) \vee \forall x Q(x)\) | 1; RC | \(\{\exists x P(x) \wedge \neg \forall x P(x)\}\)
5. | \(\forall x P(x) \vee \forall x Q(x)\) | 2; RC | \(\{\exists x Q(x) \wedge \neg \forall x Q(x)\}\)
6. | \((\exists x P(x) \wedge \neg \forall x P(x)) \vee (\exists x Q(x) \wedge \neg \forall x Q(x))\) | 1–3; RU |

The formula \(\forall x P(x) \vee \forall x Q(x)\) is derived at lines 4 and 5 on two respective conditions: \(\{\exists x P(x) \wedge \neg \forall x P(x)\}\) and \(\{\exists x Q(x) \wedge \neg \forall x Q(x)\}\). Neither \(\exists x P(x) \wedge \neg \forall x P(x)\) nor \(\exists x Q(x) \wedge \neg \forall x Q(x)\) is derivable on the condition ∅. However, both are part of the (minimal) disjunction of abnormalities derived at line 6. According to one strategy, the minimal abnormality strategy, the premises are interpreted in such a way that as many abnormalities as possible are considered to be false, which leaves us with two interpretations: one in which \(\exists x P(x) \wedge \neg \forall x P(x)\) holds while \(\exists x Q(x) \wedge \neg \forall x Q(x)\) is false, and another one in which \(\exists x Q(x) \wedge \neg \forall x Q(x)\) holds while \(\exists x P(x) \wedge \neg \forall x P(x)\) is false. In both interpretations either the assumption of line 4 or the assumption of line 5 holds, which means that in either interpretation the defeasible inference to \(\forall x P(x) \vee \forall x Q(x)\) goes through. Thus, according to the minimal abnormality strategy neither line 4 nor line 5 is marked (we omit the technical details). Another strategy, the reliability strategy, is more cautious. According to it, any line that involves an abnormality in its condition that is part of a minimal disjunction of abnormalities derived on the condition ∅ is marked. This means that in our example, lines 4 and 5 are marked.

Lines may get marked at specific stages of a proof just to get unmarked at later stages and vice versa. This mirrors the internal dynamics of defeasible reasoning. In order to define a consequence relation, a stable notion of derivability is needed. A formula derived at an unmarked line l in an adaptive proof from a premise set \(\Gamma\) is considered a consequence of \(\Gamma\) if any extension of the proof in which l is marked is further extendable so that the line is unmarked again.

Such a consequence relation can equivalently be expressed semantically in terms of preferential semantics (see Section Selection semantics). Given an LLL-model \(M\) we can identify its abnormal part \(Ab(M) = \{\mathsf{ab} \in \Omega \mid M \models \mathsf{ab}\}\), the set of all abnormalities that hold in \(M\). The selection semantics for minimal abnormality can be phrased as follows. We order the LLL-models by \(M \prec M'\) if and only if \(Ab(M) \subset Ab(M')\). Then we define that \(\phi\) is a semantic consequence of \(\Gamma\) if and only if for all \(\prec\)-minimal LLL-models \(M\) of \(\Gamma\), \(M \models \phi\). (We omit selection semantics for other strategies.)
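The ⊂-minimal selection can be sketched abstractly, representing models only by their abnormal parts. The abnormality names `ab1` and `ab2` are ours, echoing the two abnormalities from the proof above.

```python
# Models are represented only by their abnormal parts, as frozensets of
# abnormality names (a deliberately abstract sketch).
def minimal(models):
    """Select the models whose abnormal part is subset-minimal."""
    return [M for M in models if not any(N < M for N in models)]

# In the proof above the premises force ab1-or-ab2, so no model has an
# empty abnormal part; the candidate parts are {ab1}, {ab2} and {ab1, ab2}.
models = [frozenset({"ab1"}), frozenset({"ab2"}), frozenset({"ab1", "ab2"})]
for M in minimal(models):
    print(sorted(M))   # ['ab1'] and ['ab2']: two minimally abnormal interpretations
```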

A system similar to adaptive logics makes use of maximally consistent sets. In Makinson (2003) it was developed under the name default assumptions. Given a set of assumptions \(\Xi\),

\(\Gamma \nc[\Xi] \phi\) if and only if \(\Xi' \cup \Gamma \vDash \phi\) for all \(\Xi' \subseteq \Xi\) that are maximally consistent with \(\Gamma\) (i.e., \(\Xi' \cup \Gamma\) is consistent and there is no \(\Xi'' \subseteq \Xi\) for which \(\Xi' \subset \Xi''\) and \(\Xi'' \cup \Gamma\) is consistent).

If we take \(\Xi\) to be \(\{\neg \mathsf{ab} \mid \mathsf{ab} \in \Omega\}\) then we have an equivalent characterization of adaptive logics with the minimal abnormality strategy and the set of abnormalities \(\Omega\) (Van De Putte 2013).
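Makinson's definition can be prototyped by brute-force satisfiability over the atoms involved. The example below is our own toy scenario (assumptions \(p\) and \(q\) against the fact \(\neg(p \wedge q)\)), not one from the literature.

```python
from itertools import product, combinations

# Formulas are predicates on valuations (dicts atom -> bool); consistency
# and entailment are decided by brute force -- a sketch, not efficient.
ATOMS = ["p", "q"]

def consistent(formulas):
    vals = (dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS)))
    return any(all(f(v) for f in formulas) for v in vals)

def entails(formulas, phi):
    vals = (dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=len(ATOMS)))
    return all(phi(v) for v in vals if all(f(v) for f in formulas))

def default_consequence(Xi, Gamma, phi):
    """Gamma |~[Xi] phi: phi follows from every maximal Gamma-consistent subset of Xi."""
    maximal = []
    for k in range(len(Xi), -1, -1):          # large subsets first
        for sub in combinations(Xi, k):
            if consistent(list(sub) + Gamma) and \
               not any(set(sub) < set(m) for m in maximal):
                maximal.append(sub)
    return all(entails(list(m) + Gamma, phi) for m in maximal)

# Assumptions p and q against the fact -(p and q): each assumption alone is
# maximally consistent, so p-or-q follows but p alone does not.
Xi = [lambda v: v["p"], lambda v: v["q"]]
Gamma = [lambda v: not (v["p"] and v["q"])]
print(default_consequence(Xi, Gamma, lambda v: v["p"] or v["q"]))   # True
print(default_consequence(Xi, Gamma, lambda v: v["p"]))             # False
```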

Conditional Entailment is another assumption-based approach (Geffner and Pearl 1992). Suppose that we start with a theory \(T = (\Delta, \Gamma, \Lambda)\) where \(\Delta = \{\phi_i \rightarrow \psi_i \mid i \in I\}\) consists of default rules, \(\Gamma\) consists of necessary facts, while \(\Lambda\) consists of contingent facts. It is translated into an assumption-based theory \(T' = (\Delta', \Gamma', \Lambda)\) as follows:

  • For each \(\phi_i \rightarrow \psi_i \in \Delta\) we introduce a designated assumption constant \(\pi_i\) and encode \(\phi_i \rightarrow \psi_i\) by \(\phi_i \wedge \pi_i \supset \psi_i\). The former is just our scheme [†] while the latter makes sure that assumptions are —by default— considered to hold.
  • The set of defaults \(\Delta'\) is \(\{\phi_i \rightarrow \pi_i \mid i \in I\}\) and our background knowledge becomes \(\Gamma' = \Gamma \cup \{\phi_i \wedge \pi_i \supset \psi_i \mid i \in I\}\).

Just as in the adaptive logic approach, models are compared with respect to their abnormal parts. For a classical model \(M\) of \(\Gamma' \cup \Lambda\), \(Ab(M)\) is the set of all \(\pi_i\) for which \(M \models \neg \pi_i\). One important aspect that distinguishes conditional entailment from adaptive logics and default assumptions is the fact that it implements the Specificity Principle. For this, the assumptions are ordered by means of a (smooth) relation <. The models are then compared as follows:

\(M \lessdot M'\) if and only if \(Ab(M) \neq Ab(M')\) and for all\(\pi_i \in Ab(M)\setminus Ab(M')\) there is a \(\pi_j \in Ab(M')\setminus Ab(M)\) for which \(\pi_i < \pi_j\).

The idea is: \(M\) is preferred over \(M'\) if every assumption \(\pi_i\) that does not hold in \(M\) but that holds in \(M'\) is ‘compensated for’ by a more specific assumption \(\pi_j\) that holds in \(M\) but that does not hold in \(M'\).

For this to work, < has to include specificity relations among assumptions. Such orders < are called admissible relative to the background knowledge \(\Gamma'\) if they satisfy the following property: for every set of assumptions \(\Pi \subseteq \{\pi_i \mid i \in I\}\), if \(\Pi\) violates some default \(\phi_j \rightarrow \pi_j\) in view of the given background knowledge \(\Gamma'\) (in signs, \(\Pi, \phi_j, \Gamma' \models \neg \pi_j\)) then there is a \(\pi_k \in \Pi\) for which \(\pi_k < \pi_j\).

This is best understood when put into action. Take the case with Tweety the penguin. We have \(\Delta' = \{ p \rightarrow \pi_1, b \rightarrow \pi_2\}\), \(\Gamma' = \{ p \supset b, p \wedge \pi_1 \supset \neg f, b \wedge \pi_2 \supset f\}\), and \(\Lambda = \{p\}\). Let \(\Pi = \{\pi_2\}\). Then \(\Pi, \Gamma', p \models \neg \pi_1\) and thus, for < to be admissible, \(\pi_2 < \pi_1\). We have two types of models of \(\Gamma' \cup \Lambda\): models \(M_1\) for which \(M_1 \models \pi_1\) and therefore \(M_1 \models \neg f\), and models \(M_2\) for which \(M_2 \models \pi_2\) and thus \(M_2 \models f\). Note that \(M_1 \lessdot M_2\) since for the only assumption in \(Ab(M_1)\) —namely \(\pi_2\)— there is \(\pi_1 \in Ab(M_2) \setminus Ab(M_1)\) and \(\pi_2 < \pi_1\).
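The comparison \(M \lessdot M'\) can be traced on the Tweety theory. In this sketch (our own encoding) models are summarized by their abnormal parts, and \(\pi_2 < \pi_1\) is the admissibility constraint just derived.

```python
# Specificity order on assumption constants: pi2 < pi1 means pi1
# (penguins don't fly) is more specific than pi2 (birds fly).
stronger = {("pi2", "pi1")}

def preferred(AbM, AbN):
    """M <. N: every assumption violated only in M is compensated for by a
    more specific assumption violated only in N."""
    if AbM == AbN:
        return False
    return all(any((pi, pj) in stronger for pj in AbN - AbM)
               for pi in AbM - AbN)

Ab_M1 = {"pi2"}   # pi1 holds: Tweety does not fly
Ab_M2 = {"pi1"}   # pi2 holds: Tweety flies
print(preferred(Ab_M1, Ab_M2))   # True: the non-flying models are preferred
print(preferred(Ab_M2, Ab_M1))   # False
```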

Analogous to adaptive logics with the minimal abnormality strategy,conditional entailment is defined via \(\lessdot\)-minimal models:

\((\Delta', \Gamma', \Lambda) \nc \phi\) if and only if for eachadmissible < (relative to \(\Gamma'\)) and all \(\lessdot\)-minimalmodels \(M\) of \(\Gamma' \cup \Lambda\), \(M \models \phi\).

In our example, \(\neg f\) is conditionally entailed since all \(\lessdot\)-minimal models have the same abnormal part as \(M_1\).

A consequence of expressing defeasible inferences via material implication in assumption-based approaches is that defeasible inferences are contrapositable. Clearly, if \(\phi \wedge \pi \supset \psi\) then \(\neg \psi \wedge \pi \supset \neg \phi\). As a consequence, formalisms such as default logic have a more greedy style of applying default rules. We demonstrate this with conditional entailment. Consider a theory consisting of the defaults \(p_1 \rightarrow p_2\), \(p_2 \rightarrow p_3\), \(p_3 \rightarrow p_4\) and the factual information \(\Lambda = \{p_1, \neg p_4\}\) (where \(p_i\) are logical atoms). In assumption-based approaches such as conditional entailment the defeasible rules will be encoded as \(p_1 \wedge \pi_1 \supset p_2\), \(p_2 \wedge \pi_2 \supset p_3\), and \(p_3 \wedge \pi_3 \supset p_4\). It can easily be seen that < = ∅ is an admissible ordering, which means that for instance a model \(M\) with \(Ab(M) = \{\pi_1\}\) is \(\lessdot\)-minimal. In such a model we have \(M \models \neg p_3\) and \(M \models \neg p_2\) by reasoning backwards via contraposition from \(\neg p_4 \wedge \pi_3\) to \(\neg p_3\) and from \(\neg p_3 \wedge \pi_2\) to \(\neg p_2\). This means that neither \(p_2\) nor \(p_3\) is conditionally entailed.
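The backward propagation via contraposition can be replayed by brute force (our own encoding). With \(<\) empty, \(M \lessdot M'\) reduces to proper inclusion of abnormal parts, so the \(\lessdot\)-minimal models are those with subset-minimal \(Ab\).

```python
from itertools import product

# Atoms p1..p4 plus assumption constants pi1..pi3 (our own encoding).
ATOMS = ["p1", "p2", "p3", "p4", "pi1", "pi2", "pi3"]

def holds(v):
    """Background knowledge Gamma' (material defaults) plus the facts Lambda."""
    return (v["p1"] and not v["p4"]
            and (not (v["p1"] and v["pi1"]) or v["p2"])
            and (not (v["p2"] and v["pi2"]) or v["p3"])
            and (not (v["p3"] and v["pi3"]) or v["p4"]))

models = []
for bits in product([True, False], repeat=len(ATOMS)):
    v = dict(zip(ATOMS, bits))
    if holds(v):
        models.append(v)

def Ab(v):
    return frozenset(a for a in ("pi1", "pi2", "pi3") if not v[a])

# <.-minimal models = models with subset-minimal abnormal parts.
parts = {Ab(v) for v in models}
minimal = [v for v in models if not any(a < Ab(v) for a in parts)]

print(sorted(sorted(p) for p in {Ab(v) for v in minimal}))  # [['pi1'], ['pi2'], ['pi3']]
print(all(v["p2"] for v in minimal))  # False: p2 is not conditionally entailed
print(all(v["p3"] for v in minimal))  # False: p3 is not conditionally entailed
```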

The situation is different in default logic where both \(p_2\) and \(p_3\) are derivable. The reasoning follows a greedy policy in applying default rules: whenever a rule is applicable (that is, its antecedent holds according to the currently held beliefs \(\Xi\) and its consequent does not contradict \(\Xi\)) it is applied. There is disagreement among scholars on whether and when contraposition is a desirable property for defeasible inferences (e.g., Caminada 2008, Prakken 2012).

4. Non-monotonic logic and human reasoning

In view of the fact that test subjects seem to perform very poorly in various paradigmatic reasoning tests (e.g., Wason’s Selection Task (Wason 1966) or the Suppression Task (Byrne 1989)), main streams in the psychology of reasoning have traditionally ascribed to logic at best a subordinate role in human reasoning. In recent years this assessment has been criticized as the result of evaluating performances in tests against the standard of classical logic, whereas other standards based on probabilistic considerations or on NMLs have been argued to be more appropriate.

This resulted in the rise of a new probabilistic paradigm (Oaksford and Chater 2007, Pfeifer and Douven 2014) where probability theory provides a calculus for rational belief update. Although the program is sometimes phrased in decidedly anti-logicist terms,[7] logic is here usually understood as monotonic and deductive. The relation to NML is less clear and it has been argued that there are close connections especially to probabilistic accounts of NML (Over 2009, Pfeifer and Kleiter 2009). Politzer and Bonnefon (2009) warn against the premature acceptance of the probabilistic paradigm in view of the rich variety of alternatives such as possibility measures, plausibility measures, etc.

Another argument for the relevance of NML is advocated in Stenning and Van Lambalgen (2008) who distinguish between reasoning to and reasoning from an interpretation. In the former process, agents establish a logical form that is relative to both the specific context in which the reasoning takes place and to the agent’s goals. When establishing a logical form, agents choose—inter alia—a formal language, a semantics (e.g., extensional vs. intensional), a notion of validity, etc. Once a logical form is established, agents engage in law-like rule-based inferences which are based on this form. It is argued that in the majority of cases in standard reasoning tasks, subjects use non-monotonic logical forms that are based on closed world assumptions.

Non-monotonic logicians often state that their motivation stems from observing the defeasible structure of actual commonsense reasoning. Empirical studies have been explicitly cited both as inspiration for working on NMLs and as standards against which to evaluate NMLs. However, it has also been noted that logicians often rely too much on their own intuitions without critically assessing them against the background of empirical studies (Pelletier and Elio 1997).

Various studies investigate their test subjects’ tendency to reason according to specific inference principles of NMLs. Most studies support the descriptive adequacy of the rules of system P. However, there are some open or controversial issues. For instance, while some studies report results suggestive of the adequacy of weakened monotony principles such as Cautious Monotony (Schurz 2005, Neves et al. 2002, Pfeifer and Kleiter 2005) and Rational Monotony (Neves et al. 2002), Benferhat et al. (2005) report mixed results. Specificity considerations play a role in the reasoning process of test subjects in Schurz (2005), whereas according to Ford and Billington (2000) they do not. Benferhat et al. (2005) are specifically interested in the question of whether the responses of their test subjects corresponded better to Lexicographic Closure or Rational Closure. Although the results were not fully conclusive, they still suggest a preference for the former.

Pelletier and Elio (1994) investigate various relevant factors that influence subjects’ reasoning about exceptions to defaults or inheritance relations. Their study makes use of the benchmark problems for defeasible reasoning proposed in Lifschitz (1989). For example, it is observed that the exceptional status of an object A with respect to some default is more likely to spread to other objects if they share properties with A that may play a role in explaining the exceptional status. For example, when confronted with a student club that violates the default that student clubs only allow for student members, subjects are more likely to ascribe this exceptional status also to another club if they learn that both clubs have been struggling to maintain minimum membership requirements.

The question of the descriptive adequacy of NMLs for human reasoning is also related to questions concerning the nature and limits of the cognitive modules in view of which agents are capable of logical reasoning. For instance, the question arises whether such modules could be realized on a neurological level. Concerning this question there are successful representations of NMLs in terms of neural networks (see Stenning and Van Lambalgen (2008), Garcez et al. (2009), Hölldobler and Kalinke (1994) for logic programming with closed world assumptions, Besold et al. (2017) for input/output logic, and Leitgeb (2001) for NMLs in the tradition of Selection semantics).

Human reasoning is constrained by limited computational resources, available information, and time, leading it to rely on heuristic shortcuts. Some scholars argue that nonmonotonic reasoning serves as an adaptive evolutionary strategy for humans (Schurz 2005, Gabbay and Woods 2008). With respect to the limitations of human reasoning capabilities, the logics discussed here are idealized, often assuming logical closure, among other things. Recently, research into resource-bounded models of defeasible inference has been gaining momentum (e.g., D’Agostino and Modgil 2018, Beierle and Kutsch 2019, Ehab and Ismail 2021).

5. Conclusion

There are three main issues connected with the development of logical frameworks that can adequately represent defeasible reasoning: (i) material adequacy; (ii) formal properties; and (iii) complexity. A non-monotonic formalism is materially adequate to the extent to which it captures examples of defeasible reasoning and to the extent to which it has intuitive properties. The question of formal properties has to do with the degree to which the formalism gives rise to a consequence relation that satisfies desirable theoretic properties such as the above-mentioned Reflexivity, Cut, and Cautious Monotony. The third set of issues has to do with the computational complexity of the most basic questions concerning the framework.
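For reference, these three properties admit precise formulations for a defeasible consequence relation \(\mid\sim\); the versions below follow the standard presentation of Kraus, Lehmann, and Magidor (1990):

```latex
\begin{align*}
&\text{Reflexivity:} && A \mid\sim A\\
&\text{Cut:} && \text{if } A \mid\sim B \text{ and } A \wedge B \mid\sim C,\ \text{then } A \mid\sim C\\
&\text{Cautious Monotony:} && \text{if } A \mid\sim B \text{ and } A \mid\sim C,\ \text{then } A \wedge B \mid\sim C
\end{align*}
```

Cautious Monotony requires that adding an already defeasibly derived conclusion to the premises does not invalidate any other conclusions; Cut is its converse, ensuring that such additions do not yield any new inferential power.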

There is a potential tension between (i) and (ii): the desire to capture a broad range of intuitions can lead to ad hoc solutions that can sometimes undermine the desirable formal properties of the framework. In general, the development of NMLs and related formalisms has been driven, since its inception, by consideration (i) and has relied on a rich and well-chosen array of examples. Of course, there is some question as to whether any single framework can aspire to be universal in this respect.

More recently, researchers have started paying attention to consideration (ii), looking at the extent to which NMLs have generated well-behaved relations of logical consequence. As Makinson (1994) points out, practitioners of the field have encountered mixed success. In particular, one abstract property, Cautious Monotony, appears at the same time to be crucial and elusive for many of the frameworks to be found in the literature. This is a fact that is perhaps to be traced back, at least in part, to the above-mentioned tension between the requirement of material adequacy and the need to generate a well-behaved consequence relation.

The complexity issue appears to be the most difficult among the ones that have been highlighted. The basic reasoning tasks for NMLs appear to be stubbornly harder than the corresponding problems for classical logic. This is clear in the case of default logic, given the ubiquitous consistency checks. But in addition to consistency checks, there are other, often overlooked, sources of complexity that are purely combinatorial. Other forms of non-monotonic reasoning, besides default logic, are far from immune to these combinatorial roots of intractability. Although some important work has been done trying to make various non-monotonic formalisms more tractable, this is perhaps the problem on which progress has been slowest in coming.

Bibliography

  • Adams, Ernest W., 1975. The Logic of Conditionals. Dordrecht: D. Reidel Publishing Co.
  • Alferes, Jose Julio, Damasio, Carlos Viegas, & Pereira, Luis Moniz, 1995. A Logic Programming System for Non-monotonic Reasoning.Journal of Automated Reasoning, 14(1): 93–147.
  • Aliseda, Atocha, 2017. The logic of abduction: An introduction. In Springer Handbook of Model-Based Science, Magnani, Lorenzo and Bertolotti, Tommaso (eds.), pp. 219–230.
  • Antonelli, Gian Aldo, 1997. Defeasible inheritance on cyclic networks. Artificial Intelligence, 92(1): 1–23.
  • Antonelli, Gian Aldo, 1999. A directly cautious theory of defeasible consequence fordefault logic via the notion of general extension.ArtificialIntelligence, 109(1): 71–109.
  • Arieli, Ofer, & Avron, Arnon, 2000. General Patterns for Nonmonotonic Reasoning: From Basic Entailments to Plausible Relations. Logic Journal of the IGPL, 8: 119–148.
  • Arieli, Ofer, Borg, AnneMarie, and Heyninck, Jesse, 2019. A review of the relations between logical argumentation andreasoning with maximal consistency.Annals of Mathematics andArtificial Intelligence, 87(3), 187–226.
  • Arieli, Ofer, Borg, AnneMarie, Heyninck, Jesse and Straßer, Christian, 2021. Logic-based approaches to formal argumentation.Journal ofApplied Logics-IfCoLog Journal, 8(6): 1793–1898.
  • Arieli, Ofer and Straßer, Christian, 2015. Sequent-Based Logical Argumentation. Argument and Computation, 6(1): 73–99.
  • Asher, Nicholas, & Morreau, Michael, 1991. Commonsense entailment: A modal theory of nonmonotonicreasoning. InLogics in AI, Berlin: Springer, pp.1–30.
  • Batens, Diderik, 1980. Paraconsistent extensional propositional logics. Logique et Analyse, 90–91: 195–234.
  • –––, 1999. Inconsistency-Adaptive Logics. InLogic at Work. EssaysDedicated to the Memory of Helena Rasiowa, Heidelberg, New York:Physica Verlag (Springer), pp. 445–472.
  • –––, 2004. The need for adaptive logics in epistemology. InLogic,epistemology, and the unity of science. Berlin: Springer, pp.459–485.
  • –––, 2007. A Universal Logic Approach to Adaptive Logics.LogicaUniversalis, 1: 221–242.
  • –––, 2011. Logics for qualitative inductive generalization.StudiaLogica, 97(1): pp. 61–80.
  • Beierle, Christoph and Kutsch, Steven, 2019. Computation and comparison of nonmonotonic skeptical inferencerelations induced by sets of ranking models for the realization ofintelligent agents.Applied Intelligence, 49:28–42.
  • Beirlaen, Mathieu, Heyninck, Jesse, Pardo, Pere, & Straßer, Christian, 2018. Argument strength in formal argumentation.IfCoLog Journalof Logics and their Applications, 5(3): 629–675.
  • Benferhat, Salem, Cayrol, Claudette, Dubois, Didier, Lang, Jerome, & and Prade, Henri, 1993. Inconsistency management and prioritized syntax-basedentailment. InProceedings of IJCAI 1993, pp.640–645.
  • Benferhat, Salem, Dubois, Didier, & Prade, Henri, 1997. Some Syntactic Approaches to the Handling of Inconsistent Knowledge Bases: A Comparative Study. Part I: The Flat Case. Studia Logica, 58: 17–45.
  • –––, 1999. Possibilistic and standard probabilistic semantics ofconditional knowledge bases.Journal of Logic andComputation, 9(6): 873–895.
  • Benferhat, Salem, Bonnefon, Jean F., & da Silva Neves, Rui, 2005. An overview of possibilistic handling of default reasoning,with experimental studies.Synthese, 146(1–2):53–70.
  • van Berkel, Kees, Straßer, Christian and Zhou, Zheng, 2024. Towards an Argumentative Unification of Default Reasoning.Proceedings of COMMA, Chris Reed, Matthias Thimm, andTjitze Rienstra (eds.), Amsterdam: IOS Press, 313–324.
  • Besnard, Philippe and Hunter, Anthony, 2009. Argumentation based on classical logic. In Argumentation in Artificial Intelligence, I. Rahwan and G. Simari (eds.), Berlin: Springer.
  • Besold, Tarek R., Garcez, Artur d’Avila, Stenning, Keith, van der Torre, Leendert, Lambalgen, Michiel van, 2017. Reasoning in Non-Probabilistic Uncertainty: Logic Programmingand Neural-Symbolic Computing As Examples.Minds andMachines, 27(1): 37–77.
  • Bochman, Alexander, 2018. Argumentation, Nonmonotonic Reasoning and Logic. InHandbook of Formal Argumentation Vol. 1. Eds. P. Baroni, D.Gabbay, M. Giacomin, & L. van der Torre. pp.2887–2926
  • Booth, Richard and Varzinczak, Ivan, 2021. Conditional Inference Under Disjunctive Rationality.Proceedings of the AAAI Conference on Artificial Intelligence.,35(7): 6227–6234.
  • Boutilier, Craig, 1990. Conditional Logics of Normality as Modal Systems.AAAI(Volume 90), pp. 594–599.
  • Brewka, Gerhard, & Eiter, Thomas, 2000. Prioritizing Default Logic. InIntellectics andComputational Logic, Applied Logic Series (Volume 19), Dordrecht:Kluwer, 27–45.
  • Byrne, Ruth M.J., 1989. Suppressing valid inferences with conditionals.Cognition, 31(1): 61–83.
  • Caminada, M., 2008. On the issue of contraposition of defeasible rules.Frontiers in Artificial Intelligence and Applications, 172:109–115.
  • D’Agostino, Marcello and Modgil, Sanjay, 2018. A Study of Argumentative Characterisations of Preferred Subtheories.Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), J. Lang (ed.), International Joint Conferences on Artificial Intelligence, 1788–1794.
  • D’Agostino, Marcello and Modgil, Sanjay, 2018. Classical logic, argument and dialectic.ArtificialIntelligence, 262: 15–51.
  • Delgrande, James, Schaub, Torsten, Tompits, Hans, & Wang, Kewen, 2004. A classification and survey of preference handling approaches in nonmonotonic reasoning.Computational Intelligence, 20(2): 308–334.
  • Delgrande, James P., 1987. A first-order conditional logic for prototypical properties.Artificial Intelligence, 33(1): 105–130.
  • –––, 1998. On first-order conditional logics.ArtificialIntelligence, 105(1): 105–137.
  • Delgrande, James P, & Schaub, Torsten, 2000. Expressing preferences in default logic.ArtificialIntelligence, 123(1): 41–87.
  • Dubois, Didier, & Prade, Henri, 1990. An introduction to possibilistic and fuzzy logics. InReadings in Uncertain Reasoning. San Francisco: MorganKaufmann Publishers Inc., pp. 742–761.
  • Dung, Phan Minh, 1995. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence, 77: 321–358.
  • Dung, P.M., Kowalski, R.A., & Toni, F., 2009. Assumption-based argumentation.Argumentation in ArtificialIntelligence, 199–218.
  • Ehab, Nourhan and Ismail, Haythem O., 2021. Log\(_A\)G: An algebraic non-monotonic logic for reasoning withgraded propositions.Annals of Mathematics and ArtificialIntelligence, 89: 103–158.
  • Egré, Paul, Rossi, Lorenzo and Sprenger, Jan, forthcoming. Certain and Uncertain Inference with Indicative Conditionals.Australasian Journal of Philosophy.
  • Elio, Renée, & Pelletier, Francis Jeffry, 1994. On Relevance in Nonmonotonic Reasoning: Some Empirical Studies.In Russ, Greiner, & Devika, Subramanian (eds.),Relevance:AAAI 1994 Fall Symposium Series. Palo Alto: AAAI Press, pp.64–67.
  • Ford, Marilyn, 2004. System LS: A Three-Tiered Nonmonotonic Reasoning System.Computational Intelligence, 20(1): 89–108.
  • Ford, Marilyn, & Billington, David, 2000. Strategies in human nonmonotonic reasoning.ComputationalIntelligence, 16(3): 446–468.
  • Freund, Michael, 1993. Injective Models and Disjunctive Relations.Journal ofLogic and Computation, 3(3), 231–247.
  • Friedman, Nir, & Halpern, Joseph Y., 1996. Plausibility measures and default reasoning.Journal of theACM, 48: 1297–1304.
  • Friedman, Nir, Halpern, Joseph Y., & Koller, Daphne, 2000. First-order conditional logic for default reasoning revisited.ACM Trans. Comput. Logic, 1(October): 175–207.
  • Gabbay, Dov M., 1985. Theoretical foundations for non-monotonic reasoning in expertsystems.Logics and models of concurrent systems. New York:Springer-Verlag, pp. 439–457.
  • Gabbay, Dov, Hogger, C., & Robinson, J. (eds.), 1994.Handbook of Logic in Artificial Intelligence and LogicProgramming (Volume 3), Oxford and New York: Oxford UniversityPress.
  • Gabbay, Dov and Woods, John, 2008. Resource-origins of nonmonotonicity.Studia Logica,88: 85–112.
  • Garcez, Artur S. d’Avila, Lamb, Luís C., Gabbay, Dov, 2009. Neural-symbolic cognitive reasoning. Springer.
  • Geffner, Hector, & Pearl, Judea, 1992. Conditional entailment: bridging two approaches to default reasoning. Artificial Intelligence, 53(2–3): 209–244.
  • Gelfond, Michael, & Lifschitz, Vladimir, 1988. The stable model semantics for logic programming. InICLP/SLP (Volume 88), pp. 1070–1080.
  • Gilio, Angelo, 2002. Probabilistic reasoning under coherence in System P.Annalsof Mathematics and Artificial Intelligence, 34(1–3):5–34.
  • Ginsberg, Matthew L., 1994.Essentials of Artificial Intelligence. San Francisco:Morgan Kaufmann Publishers Inc.
  • ––– (ed.), 1987.Readings in nonmonotonic reasoning. San Francisco:Morgan Kaufmann.
  • Giordano, Laura, Gliozzi, Valentina, Olivetti, Nicola, & Pozzato, Gian Luca, 2009. Analytic tableaux calculi for KLM logics of nonmonotonicreasoning.ACM Transactions on Computational Logic (TOCL),10(3): 18.
  • Goldszmidt, Moisés, & Pearl, Judea, 1992. Rank-based Systems: A Simple Approach to Belief Revision,Belief Update, and Reasoning about Evidence and Actions. InProceedings of the Third International Conference on KnowledgeRepresentation and Reasoning. San Francisco: Morgan Kaufmann, pp.661–672.
  • Goldszmidt, Moisés, Morris, Paul, & Pearl, Judea, 1993. A Maximum Entropy Approach to Nonmonotonic Reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(3): 220–232.
  • Hölldobler, Steffen and Kalinke, Yvonne, 1994. Towards a new massively parallel computational model for logic programming. In Proceedings of the Workshop on Combining Symbolic and Connectionist Processing ECAI, pp. 68–77.
  • Horty, John F., 1994. Some direct theories of nonmonotonic inheritance. In Gabbay,Dov M., Hogger, Christopher J., & Robinson, J. A. (eds.),Handbook of Logic in Artificial Intelligence and LogicProgramming, Volume 3: Nonmonotonic Reasoning and UncertainReasoning. Oxford: Oxford University Press, pp.111–187.
  • –––, 2002. Skepticism and floating conclusions. Artificial Intelligence, 135(1–2): 55–72.
  • –––, 2007. Defaults with Priorities.Journal of PhilosophicalLogic, 36: 367–413.
  • Konolige, Kurt, 1988. On the relation between default and autoepistemic logic. Artificial Intelligence, 35(3): 343–382.
  • Koons, Robert, 2017. Defeasible Reasoning. In The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2017/entries/reasoning-defeasible/>.
  • Kraus, Sarit, Lehmann, Daniel, & Magidor, Menachem, 1990. Nonmonotonic Reasoning, Preferential Models and Cumulative Logics. Artificial Intelligence, 44: 167–207.
  • Kripke, Saul, 1975. Outline of a Theory of Truth.Journal of Philosophy,72: 690–716.
  • Lehmann, Daniel J., 1995. Another Perspective on Default Reasoning.Annals ofMathematics and Artificial Intelligence, 15(1): 61–82.
  • Lehmann, Daniel J., & Magidor, Menachem, 1990. Preferential logics: the predicate calculus case. InProceedings of the 3rd conference on Theoretical aspects ofreasoning about knowledge, San Francisco: Morgan KaufmannPublishers Inc., pp. 57–72.
  • –––, 1992. What does a conditional knowledge base entail?ArtificialIntelligence, 55(1): 1–60.
  • Leitgeb, Hannes, 2001. Nonmonotonic reasoning by inhibition nets. Artificial Intelligence, 128(1–2): 161–201.
  • Liao, Beishui, Oren, Nir, van der Torre, Leendert, & Villata, Serena, 2018. Prioritized norms in formal argumentation.Journal of Logicand Computation, 29(2): 215–240.
  • Lifschitz, Vladimir, 1989. Benchmark problems for formal nonmonotonic reasoning. InNon-Monotonic Reasoning. Berlin: Springer, pp.202–219.
  • Lin, Fangzhen, & Shoham, Yoav, 1990. Epistemic semantics for fixed-points non-monotonic logics. InProceedings of the 3rd Conference on Theoretical Aspects ofReasoning About Knowledge (TARK’90), Pacific Grove, CA:Morgan Kaufmann Publishers Inc, pp. 111–120.
  • Lukaszewicz, Witold, 1988. Considerations on default logic: an alternative approach.Computational intelligence, 4(1): 1–16.
  • Makinson, David, 1994. General patterns in nonmonotonic reasoning. In:Handbook ofLogic in Artificial Intelligence and Logic Programming, vol. III,D. Gabbay, C. Hogger, J.A. Robinson (eds.), pp. 35–110, Oxford:Oxford University Press.
  • –––, 2003. Bridges between classical and nonmonotonic logic.LogicJournal of IGPL, 11(1): 69–96.
  • Makinson, David, & Gärdenfors, Peter, 1991. Relations between the logic of theory change and nonmonotoniclogic.The logic of theory change, Berlin: Springer, pp.183–205.
  • Makinson, David, & Schlechta, Karl, 1991. Floating conclusions and zombie paths: two deep difficulties in the “directly skeptical” approach to defeasible inheritance nets. Artificial Intelligence, 48(2): 199–209.
  • McCarthy, J., 1980. Circumscription – A Form of Non-Monotonic Reasoning. Artificial Intelligence, 13: 27–39.
  • Modgil, Sanjay and Prakken, Henry, 2013. A general account of argumentation with preferences.Artificial Intelligence, 195, 361–397.
  • Moore, Robert C., 1984. Possible-World Semantics for Autoepistemic Logic. InProceedings of the Workshop on non-monotonic reasoning, AAAI,pp. 344–354.
  • –––, 1985. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25(1): 75–94.
  • Neves, Rui Da Silva, Bonnefon, Jean-François, & Raufaste, Eric, 2002. An empirical test of patterns for nonmonotonic inference.Annals of Mathematics and Artificial Intelligence,34(1–3): 107–130.
  • Nute, D., 1994. Defeasible logics. InHandbook of Logic in ArtificialIntelligence and Logic Programming (Vol. 3). Oxford: OxfordUniversity Press, pp. 353–395.
  • Oaksford, Mike, & Chater, Nick, 2007.Bayesian rationality the probabilistic approach to humanreasoning. Oxford: Oxford University Press.
  • –––, 2009. Précis of Bayesian rationality: The probabilisticapproach to human reasoning.Behavioral and Brain Sciences,32(01): 69–84.
  • Orlowska, Ewa (ed.), 1999.Logic at Work. Essays Dedicated to the Memory of HelenaRasiowa. Heidelberg, New York: Physica Verlag (Springer).
  • Over, David E., 2009. New paradigm psychology of reasoning.Thinking &Reasoning, 15(4): 431–438.
  • Pardo, Pere and Straßer, Christian, 2024. Modular orders on defaults in formal argumentation.Journalof Logic and Computation, 34(4), 665–697.
  • Parent, Xavier, 2021.Preference semantics for Hansson-type dyadic deontic logic: a survey of results.Handbook of deontic logic and normative systems (Volume 2), D. Gabbayet al. (eds.), London: College Publications, 7–70.
  • Pearl, Judea, 1988.Probabilistic reasoning in intelligent systems: networks ofplausible inference. San Francisco: Morgan Kaufmann.
  • –––, 1989. Probabilistic semantics for nonmonotonic reasoning: a survey.InProceedings of the first international conference on Principlesof knowledge representation and reasoning. San Francisco: MorganKaufmann Publishers, pp. 505–516.
  • –––, 1990. System Z: a natural ordering of defaults with tractableapplications to nonmonotonic reasoning. InTARK ’90:Proceedings of the 3rd conference on Theoretical aspects of reasoningabout knowledge. San Francisco: Morgan Kaufmann Publishers, pp.121–135.
  • Pelletier, Francis Jeffry, & Elio, Renée, 1997. What should default reasoning be, by default?ComputationalIntelligence, 13(2): 165–187.
  • Pfeifer, Niki, & Douven, Igor, 2014. Formal Epistemology and the New Paradigm Psychology ofReasoning.Review of Philosophy and Psychology, 5(2):199–221.
  • Pfeifer, Niki, & Kleiter, Gernot D., 2005. Coherence and nonmonotonicity in human reasoning.Synthese, 146(1–2): 93–109.
  • Pfeifer, Niki, & Kleiter, Gernot D., 2009. Mental probability logic.Behavioral and BrainSciences, 32(01): 98–99.
  • Politzer, Guy, & Bonnefon, Jean-François, 2009. Let us not put the probabilistic cart before the uncertaintybull.Behavioral and Brain Sciences, 32(01):100–101.
  • Pollock, John, 1991. A Theory of Defeasible Reasoning.International Journal ofIntelligent Systems, 6: 33–54.
  • –––, 1995.Cognitive Carpentry, Cambridge, MA: Bradford/MITPress.
  • –––, 2008. Defeasible Reasoning. InReasoning: Studies of HumanInference and its Foundations, J. E. Adler and L. J. Rips (eds.),Cambridge: Cambridge University Press, pp. 451–470.
  • Poole, David, 1985. On the Comparison of Theories: Preferring the Most SpecificExplanation. InIJCAI (Volume 85), pp. 144–147.
  • Prakken, Henry, 2010. An abstract framework for argumentation with structuredarguments.Argument & Computation, 1(2):93–124.
  • –––, 2012. Some reflections on two current trends in formal argumentation.In A. Artikis, et al. (ed.),Logic Programs, Norms andAction. Dordrecht: Springer, pp. 249–272.
  • –––, 2018. Historical overview of formal argumentation. InHandbookof formal argumentation (Vol. 1). Eds. P. Baroni, D. Gabbay, M.Giacomin, & L. van der Torre. London: College Publications.pp.43–141
  • Prakken, Henry, & Vreeswijk, Gerard A. W., 2002.Logics for Defeasible Argumentation (Vol. 4),Dordrecht: Kluwer, pp. 219–318.
  • Priest, Graham, 1991. Minimally inconsistent LP.Studia Logica, 50(2):321–331.
  • Reiter, Raymond, 1980. A Logic for Default Reasoning. Artificial Intelligence, 13: 81–132.
  • Rescher, Nicholas & Manor, Ruth, 1970. On inference from inconsistent premises.Theory andDecision, 1: 179–217
  • Rott, Hans, 2013. Two concepts of plausibility in default reasoning.Erkenntnis, 79(S6): 1219–1252.
  • Rott, Hans, 2014. Four floors for the theory of theory change: The case of imperfect discrimination.Logics in Artificial Intelligence (14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, September 24–26, 2014, Proceedings), E. Fermé and J. Leite (eds.), Cham: Springer, 368–382.
  • Schurz, Gerhard, 2005. Non-monotonic reasoning from an evolution-theoreticperspective: Ontic, logical and cognitive foundations.Synthese, 146(1–2): 37–51.
  • Shoham, Yoav, 1987. A Semantical Approach to Nonmonotonic Logics. In Ginsberg, M.L. (ed.),Readings in Non-Monotonic Reasoning. Los Altos, CA:Morgan Kaufmann, pp. 227–249.
  • Simari, Guillermo R, & Loui, Ronald P., 1992. A mathematical treatment of defeasible reasoning and itsimplementation.Artificial intelligence, 53(2):125–157.
  • Spohn, Wolfgang, 1988. Ordinal conditional functions: A dynamic theory of epistemicstates. InCausation in Decision, Belief Change andStatistics, W. L. Harper, & B. Skyrms (Eds.), Springer, pp.105–134.
  • Stalnaker, Robert, 1993. A note on non-monotonic modal logic.ArtificialIntelligence, 64(2): 183–196.
  • –––, 1994. What is a nonmonotonic consequence relation?FundamentaInformaticae, 21(1): 7–21.
  • Stein, Lynn Andrea, 1992. Resolving ambiguity in nonmonotonic inheritance hierarchies.Artificial Intelligence, 55(2): 259–310.
  • Stenning, Keith, & Van Lambalgen, Michiel, 2008.Human reasoning and cognitive science. Cambridge, MA:MIT Press.
  • Straßer, Christian, 2014.Adaptive Logic and Defeasible Reasoning. Applications inArgumentation, Normative Reasoning and Default Reasoning (Trendsin Logic, Volume 38). Dordrecht: Springer.
  • Touretzky, David S., Thomason, Richmond H., & Horty, John F., 1991. A Skeptic’s Menagerie: Conflictors, Preemptors, Reinstaters, and Zombies in Nonmonotonic Inheritance. In IJCAI, pp. 478–485.
  • Van De Putte, Frederik, 2013. Default Assumptions and Selection Functions: A GenericFramework for Non-monotonic Logics. InAdvances in ArtificialIntelligence and Its Applications, Dordrecht: Springer, pp.54–67.
  • Verheij, Bart, 1996.Rules, Reasons, Arguments. Formal Studies of Argumentationand Defeat, Ph.D. thesis, Universiteit Maastricht.
  • Wason, P. C., 1966. Reasoning. in B. Foss (ed.),New horizons in psychologyI, Harmondsworth: Penguin, pp. 135–151.

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2024 by
Christian Straßer <Christian.Strasser@ruhr-uni-bochum.de>
G. Aldo Antonelli

Open access to the SEP is made possible by a world-wide funding initiative.


The Stanford Encyclopedia of Philosophy iscopyright © 2024 byThe Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054
