Non-monotonic Logic
http://plato.stanford.edu/archives/fall2015/entries/logic-nonmonotonic/
From the Fall 2015 Edition of the Stanford Encyclopedia of Philosophy (ISSN 1095-5054).
First published Tue Dec 11, 2001; substantive revision Tue Dec 2, 2014.
Christian Strasser and G. Aldo Antonelli
Principal Editor: Edward N. Zalta. Senior Editor: Uri Nodelman. Associate Editor: Colin Allen. Faculty Sponsor: R. Lanier Anderson.

The term “non-monotonic logic” (in short, NML) covers a family of formal frameworks devised to capture and represent defeasible inference, i.e., that kind of inference in which reasoners draw conclusions tentatively, reserving the right to retract them in the light of further information. Examples are numerous, ranging from inductive generalizations to abduction to inferences on the basis of expert opinion, etc. We find defeasible inferences in everyday reasoning, in expert reasoning (e.g. medical diagnosis), and in scientific reasoning.

Defeasible reasoning, just like deductive reasoning, can follow complex patterns. However, such patterns are beyond the reach of classical logic (CL), intuitionistic logic (IL), or other logics that characterize deductive reasoning, since these logics, by their very nature, do not allow for a retraction of inferences. The challenge tackled in the domain of NMLs is to provide for forms of defeasible reasoning what CL or IL provide for mathematical reasoning: namely, a formally precise account that is materially adequate, where material adequacy concerns the question of how broad a range of examples is captured by the framework, and the extent to which the framework can do justice to our intuitions on the subject (at least the most entrenched ones).
3.3 Default logic
3.4 Autoepistemic logic
3.5 Selection semantics
3.6 Assumption-based approaches
4. Non-monotonic logics and human reasoning
5. Conclusion
Bibliography
Academic Tools
Other Internet Resources
Related Entries

1. Dealing with the dynamics of defeasible reasoning

Defeasible reasoning is dynamic in that it allows for a retraction of inferences. Take, for instance, reasoning on the basis of normality or typicality assumptions. While we infer that Tweety flies on the basis of the information that Tweety is a bird and the background knowledge that birds usually fly, we have good reasons to retract this inference when learning that Tweety is a penguin or a kiwi.

Another example is abductive reasoning. Given the observation that the streets are wet, we may infer the explanation that it has been raining recently. However, recalling that this very day the streets are cleaned and that the roof tops are dry, we will retract this inference.

As a last example, take probabilistic reasoning, where we infer “X is a B” from “X is an A and most As are Bs”. Clearly, we may learn that X is an exceptional A with respect to being a B.

Our previous examples are instances of ampliative reasoning. It is based on inferences for which the truth of the premises does not warrant the truth of the conclusion as in deductive reasoning, but for which the conclusion nevertheless holds in most/typical/etc. cases in which the premises hold.

Defeasible reasoning may also have a corrective character in that the inferences are deductive or truth-preserving while nevertheless being retractable. Suppose we reason classically on the basis of a complex body of premises Γ (e.g., a mathematical or scientific theory, or a code of law). If we do not know whether Γ is consistent (i.e., whether a contradiction is derivable), we may adopt a careful rationale for drawing inferences. Let us call a formula φ in Γ free if it does not belong to a minimally inconsistent subset of Γ (i.e., an inconsistent subset of Γ which has no proper subset that is also inconsistent). We may reason as follows: an inference in CL is retracted as soon as we find out that it relies on premises that are not free. This way we only accept consequences that are derivable on the basis of the consistent part of our premise set. The resulting consequences are known as the free consequences (Benferhat et al. (1997)). For instance, where p, q, s, t are logical atoms and Γ = {p ∧ q, ¬p, s ∧ t}, the inference to q from p ∧ q will be retracted since p ∧ q is not free in Γ, while the inference to s from s ∧ t goes through. In contrast to ampliative reasoning, each inference is in accordance with CL and hence deductive. However, given an inconsistent theory, not all deductive inferences will be accepted.

Most scholarly attention has been paid to what has been called the synchronic (Pollock (2008)) or the external dynamics (Batens (2004)) of defeasible reasoning. Here, inferences are retracted as a result of gaining new information. Formally this can be expressed in terms of the consequence relation of a logic for defeasible reasoning. Consider the following property, which is characteristic of deductive reasoning and holds, for instance, for the relation of logical consequence ⊨ of CL (see the entry on classical logic for details on the relation ⊨):

Monotony: If Γ ⊨ φ then Γ ∪ Γ′ ⊨ φ.
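Returning to the free-consequence construction above: in a setting where every premise is a conjunction of literals, the free premises of the example Γ can be computed by brute force. The sketch below is illustrative only; the encoding (a premise as a frozenset of its literals, “~” for negation) and the function names are my own.

```python
from itertools import combinations

def neg(lit):
    """Complement of a literal; "~" marks negation."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def inconsistent(premises):
    """A collection of conjunctions of literals is inconsistent iff the
    pooled literals contain a complementary pair."""
    lits = set().union(*premises)
    return any(neg(l) in lits for l in lits)

def minimally_inconsistent_subsets(gamma):
    """Enumerate subsets by increasing size; a set is minimal iff no
    previously found (smaller) inconsistent set is a proper subset."""
    mis = []
    for n in range(1, len(gamma) + 1):
        for sub in combinations(gamma, n):
            if inconsistent(sub) and not any(set(m) < set(sub) for m in mis):
                mis.append(sub)
    return mis

def free_premises(gamma):
    """A premise is free iff it lies outside every minimally
    inconsistent subset of gamma."""
    mis = minimally_inconsistent_subsets(gamma)
    return [p for p in gamma if not any(p in m for m in mis)]

# Γ = {p ∧ q, ¬p, s ∧ t}; each premise is the frozenset of its literals.
gamma = [frozenset({"p", "q"}), frozenset({"~p"}), frozenset({"s", "t"})]
free = free_premises(gamma)        # only s ∧ t: {p ∧ q, ¬p} is minimally
                                   # inconsistent, so neither member is free
consequences = set().union(*free)  # {'s', 't'}: s goes through, q does not
```

As in the text, the inference to q is retracted (its premise p ∧ q is not free), while s is a free consequence.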
There are two different kinds of conflicts that can arise within a given non-monotonic framework: (i) conflicts between defeasible conclusions and “hard facts,” some of which possibly newly learned; and (ii) conflicts between one potential defeasible conclusion and another (many formalisms, for instance, provide some form of defeasible inference rules, and such rules might have conflicting conclusions). When a conflict (of either kind) arises, steps have to be taken to preserve or restore consistency.

Conflicts of type (i) are to be resolved in favor of the hard facts in the sense that the conflicting defeasible conclusion is to be retracted. More interesting are mechanisms to resolve conflicts of type (ii). In order to analyze these, we will make use of schematic inference graphs (similar to the ones used in Inheritance Networks, see below). For instance, our previous example featuring the bird Tweety is illustrated as follows:

We have a conflict between the following two arguments (where arguments are sequences of inferences): Penguin ⇒ Bird → flies and Penguin → not-flies. Both arguments include a final defeasible inference. What is important to notice is that a penguin is a specific type of bird, since Penguin ⇒ Bird (while not Bird ⇒ Penguin). According to the Specificity Principle, an inference with a more specific antecedent overrides a conflicting defeasible inference with a less specific antecedent. Concerning the penguin Tweety we thus infer that she doesn't fly on the basis of Penguin → not-flies rather than that she flies on the basis of Penguin ⇒ Bird → flies.

Logicians distinguish between strong and weak specificity: according to strong specificity, A ↛ C overrides A ⇒ B → C; according to weak specificity, A ↛ C overrides A → B → C. Note that the difference concerns the nature of the link between A and B.
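The specificity comparison for the Tweety conflict can be sketched in Python. The representation below (strict links as pairs, an argument as an antecedent/conclusion pair) is my own illustration, not a formalism from the literature.

```python
# Strict ("is-a") links of the Tweety example: every penguin is a bird.
STRICT = {("penguin", "bird")}

def strictly_below(a, b, strict=STRICT):
    """True if b is reachable from a via strict links, i.e. a denotes a
    strictly more specific class than b."""
    seen, todo = {a}, [a]
    while todo:
        x = todo.pop()
        for s, t in strict:
            if s == x and t not in seen:
                if t == b:
                    return True
                seen.add(t)
                todo.append(t)
    return False

def resolve(arg1, arg2):
    """Arguments are (antecedent, conclusion) pairs; by the Specificity
    Principle the more specific antecedent wins, if comparable."""
    if strictly_below(arg1[0], arg2[0]):
        return arg1
    if strictly_below(arg2[0], arg1[0]):
        return arg2
    return None  # incomparable: the conflict remains unresolved

winner = resolve(("penguin", "not-flies"), ("bird", "flies"))
# winner == ("penguin", "not-flies"): Tweety doesn't fly
```

Since Penguin ⇒ Bird but not Bird ⇒ Penguin, the penguin-based argument overrides the bird-based one, matching the resolution described above.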
In an epistemic context we may compare the strengths of A → B and C → D by appealing to the reliability of the source from which the respective conditional knowledge stems. In the context of legal reasoning we may have the principles lex superior resp. lex posterior, according to which the higher ranked resp. the later issued law dominates.

Given a way to compare the strength of defeasible inference steps by means of a preference relation ≺, there is still the question of how to compare the strength of conflicting sequences of inferences, viz. arguments. We give some examples. For a systematic survey and classification of preference handling mechanisms in NML the interested reader is referred to Delgrande et al. (2004).

According to the Weakest Link Principle (Pollock (1991)) an argument is preferred over another conflicting argument if its weakest defeasible link is stronger than the weakest defeasible link in the conflicting argument. Take, for example, the situation in the following figure:

…such rules with highest priority, but for the sake of simplicity we neglect this possibility in what follows.) Take the following example that is frequently discussed in the literature. We have the rules A → B, A ↛ C and B → C, where A → B ≺ A ↛ C ≺ B → C, and we take A to be given. We can apply A → B and A ↛ C. Since A → B ≺ A ↛ C we apply A ↛ C first and derive ¬C. Now only A → B is applicable. So we derive B. Although the antecedent of B → C is already derived, we cannot apply this rule for the sake of consistency. Brewka and Eiter (2000) argue against the procedural approach and hold that it is more intuitive to derive B and C. Delgrande and Schaub (2000) argue that the example presents an incoherent set of rules. This is put in question in Horty (2007), where a consistent deontic reading in terms of conditional imperatives is presented which also challenges the procedural approach by favoring the conclusions B and C.

Ford (2004) pointed out that the order of strict and defeasible links in arguments matters. For instance, she argues that an argument of the form A → B ⇒ D is stronger than an argument of the form A ⇒ C ↛ D (where A → B reads “Most As are Bs” and A ⇒ C reads “All As are Cs”). The reason is that in the former case it is not possible that no A is a D, while in the second case it is possible that no A is a not-D. This is illustrated in the following figure:
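The figures are not reproduced in this extraction, but the Weakest Link Principle itself is easy to operationalize. In the sketch below the numeric strengths and the argument encoding are made-up assumptions for illustration.

```python
# An argument is a list of links; each link is (label, strength, defeasible).
def weakest_link(argument):
    """Strength of the weakest defeasible link; strict links carry no
    strength and are ignored by the comparison."""
    strengths = [s for _, s, defeasible in argument if defeasible]
    return min(strengths) if strengths else float("inf")

def prefer(arg1, arg2):
    """Return the preferred of two conflicting arguments, or None on a tie."""
    w1, w2 = weakest_link(arg1), weakest_link(arg2)
    if w1 != w2:
        return arg1 if w1 > w2 else arg2
    return None

# Tweety-style conflict with hypothetical strengths: the strict link
# Penguin => Bird does not count, so a1's weakest defeasible link (0.6)
# loses against a2's (0.9).
a1 = [("Penguin => Bird", None, False), ("Bird -> flies", 0.6, True)]
a2 = [("Penguin -> not-flies", 0.9, True)]
# prefer(a1, a2) is a2: Tweety doesn't fly
```

Note that only defeasible links enter the comparison; an argument consisting solely of strict links is unbeatable under this principle.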
We now discuss some further issues that arise in the context of conflicting
arguments. The first issue is illustrated in the following figure:
The question whether the exceptional status of Penguin relative to flies should spread also to other properties of Bird may depend on specific relevance relations among these properties. For instance, Koons (2014) proposes that causal relations play a role: whereas has strong forelimb muscles is causally related to flies and hence should not be attributed to penguins, the situation is different with is cold-blooded. Similarly, Pelletier and Elio (1994) argue that explanatory relations play a significant role in the way in which reasoners treat exceptional information in nonmonotonic inference.

Another much discussed issue (e.g., Ginsberg (1994), Makinson and Schlechta (1991), Horty (2002)) concerns the question whether a conclusion that is derivable via two conflicting arguments should be derived. Such conclusions are called Floating Conclusions. The following figure illustrates this with an extended version of the Nixon Diamond.

…reasoning contexts in which “the value of drawing conclusions is high relative to the costs involved if some of those conclusions turn out not to be correct” while they should be avoided “when the cost of error rises” (p. 123).

We conclude our discussion with so-called Zombie-Arguments (Makinson and Schlechta (1991), Touretzky et al. (1991)). Recall that a skeptical reasoner does not commit to a conflicting argument. Makinson and Schlechta (1991) argue that super-arguments of such conflicted arguments, although not acceptable, nevertheless still have the power to undermine the commitment of a reasoner to an otherwise unconflicted argument. We see an example in the following figure:
extraordinarily rich, we will restrict the focus on presenting the basic ideas
behind some of the most influential and well-known approaches.
In second-order logic (SOL), in contrast to first-order logic (FOL), one is allowed to explicitly quantify over predicates, forming sentences such as ∃P∀xPx (“there is a universal predicate”) or ∀P(Pa ≡ Pb) (“a and b are indiscernible”).

In circumscription, given predicates P and Q, we abbreviate ∀x(Px ⊃ Qx) as P ≤ Q; similarly, we abbreviate P ≤ Q ∧ ¬(Q ≤ P) as P < Q. If A(P) is a formula containing occurrences of a predicate P, then the circumscription of P in A is the second-order sentence A*(P):

A(P) ∧ ¬∃Q[A(Q) ∧ Q < P]

A*(P) says that P satisfies A, and that no smaller predicate does. Let Px be the predicate “x is abnormal,” and let A(P) be the sentence “All birds that are not abnormal fly.” Then the sentence “Tweety is a bird,” together with A*(P), implies “Tweety flies,” for the circumscription axiom forces the extension of P to be empty, so that “Tweety is normal” is automatically true.

In terms of consequence relations, circumscription allows us to define, for each predicate P, a non-monotonic relation A(P) |~ φ that holds precisely when A*(P) ⊨ φ. (This basic form of circumscription has been generalized, for, in practice, one needs to minimize the extension of a predicate, while allowing the extension of certain other predicates to vary.)

From the point of view of applications, however, circumscription has a major computational shortcoming, which is due to the nature of the second-order language in which circumscription is formulated. The problem is that SOL, contrary to FOL, lacks a complete inference procedure: the price one pays for the greater expressive power of SOL is that there are no complete axiomatizations, as we have for FOL. It follows that there is no way to list, in an effective manner, all SOL validities, and hence to determine whether A(P) |~ φ.

Another influential mechanism realizing the closed world assumption is Negation as Failure (or Default Negation). It can nicely be explained if we take a look at Logic Programming. A logic program consists of a list of rules such as:

τ ← φ1, …, φn, not ψ1, …, not ψm

In basic logic programs τ is a logical atom and φ1, …, φn, ψ1, …, ψm are logical literals. This has been generalized in various ways (e.g., Alferes et al. (1995)) and many ways of interpreting rules have been proposed, but for us it is sufficient to stick to this simple form and an explanation on an intuitive level. A concrete example of a rule is:

flies ← bird, not penguin

Such rules read as expected: if the formulas in the antecedent (right hand side) hold, then the conclusion (left hand side) holds. Default negation is realized as follows: if penguin cannot be proved, then not penguin is considered to hold. A logic program for our Tweety example may consist of the rule above and

not-flies ← penguin
bird ← penguin

Suppose first all we know is bird. The latter two rules will not be triggered, and the first rule will be interpreted with the closed world assumption, so that flies can be inferred. Now suppose we know penguin. In this case the first rule is not applicable, but the latter two are, and we derive bird and not-flies.
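The closed-world evaluation of this little program can be sketched in Python. The two-phase scheme below is my own simplification and is adequate only for stratified programs like this one: negation-as-failure is checked against the atoms derivable from the negation-free rules (hyphens in atom names are replaced by underscores to keep them valid identifiers).

```python
# Rules as (head, positive_body, negated_body).
RULES = [
    ("flies", {"bird"}, {"penguin"}),     # flies <- bird, not penguin
    ("not_flies", {"penguin"}, set()),    # not-flies <- penguin
    ("bird", {"penguin"}, set()),         # bird <- penguin
]

def fixpoint(facts, rules, blocked):
    """Apply rules until nothing new is derivable; a rule fires if its
    positive body is derived and no negated atom is in `blocked`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and pos <= derived and not (neg & blocked):
                derived.add(head)
                changed = True
    return derived

def evaluate(facts, rules=RULES):
    # Phase 1: what is definitely provable, ignoring rules with negation.
    definite = fixpoint(facts, [r for r in rules if not r[2]], set())
    # Phase 2: "not p" holds iff p is not definitely provable.
    return fixpoint(facts, rules, definite)

# evaluate({"bird"})    -> {"bird", "flies"}
# evaluate({"penguin"}) -> {"penguin", "bird", "not_flies"}
```

With only bird given, penguin is not provable, so the closed world assumption licenses flies; with penguin given, the first rule is blocked and bird and not_flies are derived, exactly as in the text.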
3.2 Inheritance networks and argument-based approaches
Whenever we have a taxonomically organized body of knowledge, we presuppose that subclasses inherit properties from their superclasses: dogs have lungs because they are mammals, and mammals have lungs. However, there can be exceptions, which can interact in complex ways, as in the following example: mammals, by and large, don't fly; since bats are mammals, in the absence of any information to the contrary, we are justified in inferring that bats do not fly. But if we learn that bats are exceptional mammals, in that they do fly, the conclusion that they don't fly is retracted, and the conclusion that they fly is drawn instead. Things can be more complicated still, for in turn, baby bats are exceptional bats, in that they do not fly (does that make them unexceptional mammals?). Here we have potentially conflicting inferences. When we infer that Stellaluna, being a baby bat, does not fly, we are resolving all these potential conflicts based on the Specificity Principle.

Non-monotonic inheritance networks were developed for the purpose of capturing taxonomic examples such as the one above. Such networks are collections of nodes and directed (“is-a”) links representing taxonomic information. When exceptions are allowed, the network is interpreted defeasibly. The following figure gives a network representing this state of affairs:

In such a network, links of the form A → B represent the fact that, typically and for the most part, As are Bs, and that therefore information about As is more specific than information about Bs. More specific information overrides more generic information. Research on non-monotonic inheritance focuses on the different ways in which one can make this idea precise.

The main issue in defeasible inheritance is to characterize the set of assertions that are supported by a given network. It is of course not enough to devise a representational formalism; one also needs to specify how the formalism is to be interpreted. Such a characterization is accomplished through the notion of an extension of a given network. There are two competing characterizations of extension for this kind of network, one that follows the credulous strategy and one that follows the skeptical one. Both proceed by first defining the degree of a path through the network as the length of the longest sequence of links connecting its endpoints; extensions are then constructed by considering paths in ascending order of their degrees. We are not going to review the details, since many of the same issues arise in connection with Default Logic (which is discussed below), but Horty (1994) provides an extensive survey. It is worth mentioning that since the notion of degree makes sense only in the case of acyclic networks, special issues arise when networks contain cycles (see Antonelli (1997) for a treatment of inheritance on cyclic networks).
In argument-based approaches to defeasible reasoning the notion of a path through an inheritance network is generalized to the notion of an argument. Abstracting from the specifics and subtleties of formalisms proposed in the literature (for an excellent survey see Prakken and Vreeswijk (2002))[3], an argument can be thought of in the following way. Given a language L, a set of L-formulas Γ, a set of strict rules SRules of the form φ1, …, φn ⇒ ψ (where φi and ψ are L-formulas), and a set of defeasible rules DRules of the form φ1, …, φn → ψ (where φi and ψ are L-formulas), an argument (Θ,τ) for τ is a proof of τ from some Θ ⊆ Γ using the rules in SRules and DRules.

A central notion in argument-based formalisms is argumentative attack. We can, for instance, distinguish between rebuts and undercuts. A rebut of an argument (Θ,τ) is an argument that establishes that τ is not the case, viz. an argument for ¬τ. An undercut of (Θ,τ) establishes that Θ does not support τ. For instance, the argument that concludes that an object is red from the fact that it looks red is undercut by means of the observation that that object is illuminated by red light (Pollock (1995)). Note that in order to undercut an argument for τ one need not establish that τ doesn't hold.

On an intuitive level, the basic idea is that the question whether an argument is acceptable concerns the question whether it is defended from its argumentative attacks. Dung (1995) proposed a way to address this question purely on the basis of the attack relations between arguments while abstracting from the concrete structure of the given arguments. Where Args is the set of the arguments that are generated from Γ by the rules in SRules and DRules, we define an attack relation ↝ ⊆ Args × Args as follows: a ↝ b if and only if a attacks (e.g., rebuts or undercuts) b. This gives rise to a directed graph, the abstract argumentation framework, where arguments are nodes and arrows represent the attack relation. Note that at the level of the directed graph arguments are treated in an abstract way: the concrete structure of the arguments is not presented in the graph.

Various argumentation semantics have been proposed for such graphs, specifying criteria for selecting sets of arguments that represent stances of rational agents. Clearly, a selected set of arguments S ⊆ Args should satisfy the following requirements:

S should be conflict-free, i.e., for all a, b ∈ S, a does not attack b.
S should be able to defend itself from all attackers. More precisely, a conflict-free S is admissible if for every a ∈ Args that attacks some b ∈ S there is a c ∈ S that attacks a.

For instance, given the following graph

{a} is admissible, while {a, b} is not conflict-free and {d} is not admissible.

Given these basic requirements we can define, for instance, the following semantics that frequently appears in the literature:

A preferred set of arguments S is an admissible set of arguments that is maximal in Args relative to set-inclusion.

In our example {a, d} and {b, d} are the two preferred sets. This shows that the preferred semantics does not determine a unique selection. We can proceed either according to the credulous rationale or according to the skeptical rationale in order to define a consequence relation.
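Since the original figure is not reproduced, the sketch below uses a reconstructed attack graph (an assumption on my part) that yields exactly the judgments stated above: a and b attack each other, both attack c, and c attacks d.

```python
from itertools import combinations

ARGS = {"a", "b", "c", "d"}
ATTACKS = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c"), ("c", "d")}

def conflict_free(s):
    return not any((x, y) in ATTACKS for x in s for y in s)

def admissible(s):
    """s is conflict-free and defends each member from every attacker."""
    return conflict_free(s) and all(
        any((c, a) in ATTACKS for c in s)
        for b in s for a in ARGS if (a, b) in ATTACKS
    )

def preferred_sets(args):
    """Maximal (w.r.t. set inclusion) admissible subsets, by brute force."""
    subsets = [set(c) for n in range(len(args) + 1)
               for c in combinations(sorted(args), n)]
    adm = [s for s in subsets if admissible(s)]
    return [s for s in adm if not any(s < t for t in adm)]

PREFERRED = preferred_sets(ARGS)
# {a} is admissible, {a, b} is not conflict-free, {d} is not admissible,
# and PREFERRED contains exactly {a, d} and {b, d}.
```

Brute-force enumeration is fine for toy frameworks like this; deciding these semantics in general is computationally hard, which is why dedicated solvers exist.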
Considering an abstract argumentation framework based on Γ, SRules, and DRules, we give two examples of how a consequence set can be characterized by means of the skeptical approach (Prakken (2010)):

τ is a consequence if in each preferred set S of arguments there is an argument a for τ.
τ is a consequence if there is an argument a for τ that is in every preferred set of arguments S.

Clearly, the second approach is more cautious. Intuitively, it demands that there is a specific argument for τ that is contained in each rational stance a reasoner can take given Γ, DRules, and SRules. The first option doesn't bind the acceptability of τ to a specific argument: it is sufficient if according to each rational stance there is some argument for τ.

In Default Logic, the main representational tool is that of a default rule, or simply a default. A default is a defeasible inference rule of the form

(γ : θ) / τ

where γ, θ, τ are sentences in a given language, respectively called the pre-requisite, the justification and the conclusion of the default. The interpretation of the default is that if γ is known, and there is no evidence that θ might be false, then τ can be inferred.

As is clear, application of the rule requires that a consistency condition be satisfied. What makes meeting the condition complicated is the fact that rules can interact in complex ways. In particular, it is possible that an application of some rule might cause the consistency condition to fail for some, not necessarily distinct, rule. For instance, if θ is ¬τ then application of the rule is in a sense self-defeating, in that an application of the rule itself causes the consistency condition to fail.

Reiter's default logic (Reiter (1980)) uses the notion of an extension to make precise the idea that the consistency condition has to be met both before and after the rule is applied. Given a set Δ of defaults, an extension for Δ represents a set of inferences that can be reasonably and consistently drawn using defaults from Δ. Such inferences are those that are warranted on the basis of a maximal set of defaults whose consistency condition is met both before and after their being triggered.

More in particular, an extension is defined relative to a default theory. The latter is a pair (Γ, Δ), where Δ is a (finite) set of defaults, and Γ is a set of sentences (a world description). The idea is that Γ represents the strict or background information, whereas Δ specifies the defeasible information. We say that Ξ is an extension for a default theory (Γ, Δ) if and only if

Ξ = ⋃n Ξn

where Ξ0 = Γ, and

Ξn+1 = Cn(Ξn) ∪ {τ ∣ (γ : θ) / τ ∈ Δ where ¬θ ∉ Ξ and γ ∈ Ξn}

where Cn(•) is the consequence relation of CL. It is important to notice the occurrence of the limit Ξ in the definition of Ξn+1: the condition above is not a garden-variety recursive definition, but a truly circular characterization of extensions.

This circularity can be made explicit by giving an equivalent definition of extension as a solution of fixpoint equations. Given a default theory (Γ, Δ), let S be an operator defined on sets of sentences such that for any such set Φ, S(Φ) is the smallest set that satisfies the following three requirements:

it contains Γ (Γ ⊆ S(Φ)),
it is deductively closed (i.e., if S(Φ) ⊨ φ then φ ∈ S(Φ)),
and it is closed under the default rules in Δ: whenever for a default (γ : θ) / τ ∈ Δ both γ ∈ S(Φ) and ¬θ ∉ Φ, then τ ∈ S(Φ).

Then one can show that Ξ is an extension for (Γ, Δ) if and only if Ξ is a fixed point of S, i.e., if S(Ξ) = Ξ.

Neither one of the two characterizations of extension for default logic (i.e., the fixpoint definition and the pseudo-iterative one) provides us with a way to “construct” extensions by means of anything resembling an iterative process. Essentially, one has to “guess” a set of sentences Ξ, and then verify that it satisfies the definition of an extension.

For any given default theory, extensions need not exist, and even when they exist, they need not be unique. We start with an example of the former situation: let Γ = ∅ and let Δ comprise the single default[4]

(⊤ : θ) / ¬θ

If Ξ were an extension, then the justification θ of the default above would either be consistent with Ξ or not, and either case is impossible. For if θ were consistent with Ξ, then the default would be applied to derive ¬θ, contradicting the consistency of Ξ with θ. Similarly, if Ξ were inconsistent with θ, then Ξ ⊨ ¬θ and hence, by deductive closure, ¬θ ∈ Ξ. Our default would not be applicable, in which case Ξ = Ξ1 = Cn(∅). But then ¬θ ∉ Ξ, a contradiction.

For normal default theories, i.e., theories that consist only of normal defaults of the form (γ : θ) / θ, extensions always exist.

Lukaszewicz (1988) presents a modified definition of extension that avoids the previous two problems: it is defined in an iterative way and it warrants the existence of an extension. In a nutshell, the idea is to keep track of the used justifications in the procedure. A default is applied in case its precondition is implied by the current beliefs and its conclusion is consistent with the given beliefs, its own justification, and each of the justifications of previously applied defaults. For normal theories Lukaszewicz's definition of extensions is equivalent to the definitions from above.

Let us now consider an example of a default theory with multiple extensions. Let Γ = ∅, and suppose Δ comprises the following two defaults

(⊤ : θ) / ¬τ and (⊤ : τ) / ¬θ

This theory has exactly two extensions, one in which the first default is applied and one in which the second one is. It is easy to see that at least one default has to be applied in any extension, and that both defaults cannot be applied in the same extension.

The fact that default theories can have zero, one, or multiple extensions raises the issue of what inferences one is warranted in drawing from a given default theory. The problem can be presented as follows: given a default theory (Γ, Δ), what sentences can be regarded as defeasible consequences of the theory?

Sometimes it may be useful to map out all the consequences that can be drawn from all the extensions, for instance, in order to identify extensions that give rise to undesired consequences (in view of extralogical considerations). The credulous approach does just that: (Γ, Δ) |~ φ if and only if φ belongs to some extension of the theory. Clearly, in case of multiple extensions the consequence set will not be closed under CL and it may be inconsistent.

Alternatively, one can adopt the skeptical strategy, according to which (Γ, Δ) |~ φ if and only if φ is contained in every extension of (Γ, Δ).
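For theories whose sentences are just literals (so that Cn is essentially trivial), the “guess and verify” idea can be run mechanically: guess which defaults are applied, form the candidate Ξ, and check the pseudo-iterative condition. The encoding below ("T" for ⊤, "~" for negation, a default as a (γ, θ, τ) triple) is my own simplification.

```python
from itertools import combinations

def neg(l):
    return l[1:] if l.startswith("~") else "~" + l

def is_extension(gamma, defaults, xi):
    """Quasi-inductive check: iterate the Ξn construction, using the
    candidate xi itself for the consistency condition ¬θ ∉ Ξ."""
    e, prev = set(gamma), None
    while e != prev:
        prev = set(e)
        for pre, just, concl in defaults:  # a default is (γ, θ, τ)
            if (pre == "T" or pre in prev) and neg(just) not in xi:
                e.add(concl)
    return e == xi

def extensions(gamma, defaults):
    """Guess a set of applied defaults, then verify the candidate."""
    found = []
    for n in range(len(defaults) + 1):
        for chosen in combinations(defaults, n):
            xi = set(gamma) | {c for _, _, c in chosen}
            if xi not in found and is_extension(gamma, defaults, xi):
                found.append(xi)
    return found

# The self-defeating default (⊤ : θ) / ¬θ has no extension:
#   extensions(set(), [("T", "t", "~t")]) -> []
# The pair (⊤ : θ) / ¬τ and (⊤ : τ) / ¬θ has exactly the two
# extensions {"~u"} and {"~t"}:
#   extensions(set(), [("T", "t", "~u"), ("T", "u", "~t")])
```

Both behaviors discussed in the text are reproduced: no extension for the self-defeating default, and two mutually exclusive extensions for the conflicting pair, after which the credulous and skeptical consequence sets diverge.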
Skeptical consequence, as based on Reiter's notion of extension, fails to be cautiously monotonic (Makinson (1994)). To see this, consider the default theory (∅, Δ), where Δ comprises the following two defaults:

(⊤ : θ) / θ and (θ ∨ γ : ¬θ) / ¬θ

This theory has only one extension, coinciding with the deductive closure of {θ}. Hence, according to skeptical consequence we have (∅, Δ) |~ θ, as well as (∅, Δ) |~ θ ∨ γ (by the deductive closure of extensions).

Now consider the theory ({θ ∨ γ}, Δ). We have two extensions: one the same as before, but also another one coinciding with the deductive closure of {¬θ}, and hence not containing θ. It follows that the intersection of the extensions no longer contains θ, so that ({θ ∨ γ}, Δ) |~ θ fails, against Cautious Monotony. (Notice that the same example establishes a counter-example for Cut for the credulous strategy, when we pick the extension of ({θ ∨ γ}, Δ) that contains ¬θ.)

In Antonelli (1999) a notion of general extension for default logic is introduced, showing that this notion yields a well-behaved relation of defeasible consequence that satisfies Supraclassicality (if Γ ⊨ φ then Γ |~ φ) and the three central requirements for NMLs, Reflexivity, Cut, and Cautious Monotony, from Gabbay (1985). The idea behind general extensions can be explained in a particularly simple way in the case of pre-requisite free default theories (called “categorical” in Antonelli (1999)). A general extension for such a default theory is a pair (Γ+, Γ−) of sets of defaults (or conclusions thereof) that simultaneously satisfies two fixpoint equations. The set Γ+ comprises (conclusions of) defaults that are explicitly triggered, and the set Γ− comprises (conclusions of) defaults that are explicitly ruled out. The intuition is that defaults that are not ruled out can still prevent other defaults from being triggered (although they might not be triggered themselves). We thus obtain a “3-valued” approach (not unlike that of Kripke's theory of truth (Kripke (1975))), in virtue of which general extensions are now endowed with a non-trivial (i.e., not “flat”) algebraic structure. It is then possible to pick out, in a principled way, a particular general extension (for instance, the unique least one, which is always guaranteed to exist) on which to base a notion of defeasible consequence.

A different set of issues arises in connection with the behavior of default logic from the point of view of computation. For a given semi-decidable set Φ of sentences, the set of all φ that are a consequence of Φ in FOL is itself semi-decidable (see the entry on computability and complexity). In the case of default logic, to formulate the corresponding problem one extends (in the obvious way) the notion of (semi-)decidability to sets of defaults. The problem, then, is to decide, given a default theory (Γ, Δ) and a sentence φ, whether (Γ, Δ) |~ φ, where |~ is defined, say, skeptically. Such a problem is not even semi-decidable, since, in order to determine whether a default is triggered by a pair of sets of sentences, one has to perform a consistency check, and such checks are not computable.

Default logic as defined above does not prioritize among defaults. In Poole (1985) we find an approach in which the Specificity Principle is considered. In Horty (2007) defaults are ordered by a strict partial order ≺, where (γ : θ) / τ ≺ (γ′ : θ′) / τ′ means that (γ′ : θ′) / τ′ has priority over (γ : θ) / τ. The ordering ≺ may express, depending on the application, specificity relations, the comparative reliability of the conditional knowledge expressed by defaults, etc. An ordered default theory is then a triple (Γ, Δ, ≺) where (Γ, Δ) is a default theory. We give an example but omit a more technical explanation. Take Γ = {bird, penguin}, Δ = {bird → flies, penguin → ¬flies} and ≺ = {(bird → flies, penguin → ¬flies)}, where φ → ψ represents the normal default (φ : ψ) / ψ. We have two extensions of (Γ, Δ), one in which bird → flies is applied and one in which penguin → ¬flies is applied. Since the latter default is ≺-preferred over the former, only the latter extension is an extension of (Γ, Δ, ≺).
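For this ordered theory of normal defaults, the preferred extension can be computed by a greedy sweep that tries defaults from strongest to weakest: a stronger default that fires first blocks a weaker conflicting one. This is a simplified sketch of my own, not Horty's full definition, but for the example above it coincides with the ≺-preferred extension.

```python
def neg(l):
    return l[1:] if l.startswith("~") else "~" + l

def prioritized_extension(gamma, defaults):
    """Greedy construction for *normal* defaults (φ : ψ)/ψ, encoded as
    (prerequisite, conclusion) pairs listed strongest first. A default
    fires only if its conclusion is consistent with what is already
    derived, so stronger defaults pre-empt weaker conflicting ones."""
    xi = set(gamma)
    changed = True
    while changed:
        changed = False
        for pre, concl in defaults:
            if pre in xi and concl not in xi and neg(concl) not in xi:
                xi.add(concl)
                changed = True
    return xi

# penguin -> ¬flies has priority over bird -> flies:
ordered = [("penguin", "~flies"), ("bird", "flies")]
# prioritized_extension({"bird", "penguin"}, ordered)
#   -> {"bird", "penguin", "~flies"}
```

The penguin default fires first and adds ~flies; the bird default is then blocked by the consistency check, so only the ≺-preferred extension survives, as stated in the text.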
3.4 Autoepistemic logic

Another formalism closely related to default logic is Moore's Autoepistemic Logic (Moore (1985)). It models the reasoning of an ideal agent reflecting on her own beliefs. For instance, sometimes the absence of a belief in φ may give an agent a reason to infer ¬φ. Moore gives the following example: If I had an older brother, I would know about it. Since I don't, I believe not to have an older brother.

An autoepistemic theory consists of the beliefs of an agent including her reflective beliefs about her beliefs. For the latter an autoepistemic belief operator B is used. In our example such a theory may contain ¬Bbrother ⊃ ¬brother. Autoepistemic logic specifies ideal properties of such theories. Following Stalnaker (1993), an autoepistemic theory Γ should be stable:

Γ should be closed under classical logic: if Γ ⊨ φ then φ ∈ Γ;
Γ should be introspectively adequate:
if φ ∈ Γ then Bφ ∈ Γ;
if φ ∉ Γ then ¬Bφ ∈ Γ.

Given Γ is consistent, stability implies that…

…¬brother ∈ Γ2. Thus, by introspection also B¬brother ∈ Γ2.

The second option seems more intuitive since, given only ¬Bbrother ⊃ ¬brother, the belief in brother seems, intuitively speaking, ungrounded in view of Ξ1. To make this formally precise, Moore defines:

Γ is grounded in a set of assumptions Ξ if for each ψ ∈ Γ, Ξ ∪ {Bφ ∣ φ ∈ Γ} ∪ {¬Bφ ∣ φ ∉ Γ} ⊨ ψ.

Stable theories that are grounded in a set of assumptions Ξ are called stable expansions of Ξ or autoepistemic extensions of Ξ. Stable expansions Γ of Ξ can equivalently be characterized as fixed points:

Γ = Cn(Ξ ∪ {Bφ ∣ φ ∈ Γ} ∪ {¬Bφ ∣ φ ∉ Γ})

where Cn(•) is the consequence function of CL.

Clearly, there is no stable theory that is grounded in Ξ1 and that contains brother and/or Bbrother like our Γ1 above. The reason is that we fail to derive brother from Ξ1 ∪ {Bψ ∣ ψ ∈ Γ1} ∪ {¬Bψ ∣ ψ ∉ Γ1}. The only stable extension of Ξ1 contains ¬Bbrother and ¬brother.

Just as in default logic, some sets of assumptions have no stable extensions while some have multiple stable extensions. We demonstrate the former case with Ξ2 = {¬Bφ ⊃ φ}. Suppose there is a stable extension Γ of Ξ2. First note that there is no way to ground φ in view of Ξ2. Hence
φ ∈ Γ if and only if Bφ ∈ Γ, and
φ ∉ Γ which means that ¬Bφ ∈ Γ. But then φ ∈ Γ by modus ponens and
φ ∉ Γ if and only if ¬Bφ ∈ Γ.
since ¬Bφ ⊃ φ ∈ Γ,—a contradiction.
Let us, for instance, consider the two types of consistent stable sets that
Moore's notion of groundedness allows for a type of epistemic
extend Ξ1 = {¬Bbrother ⊃ ¬brother}:
bootstrapping. Take Ξ3 = {Bφ ⊃ φ}. Suppose an agent adopts the belief
1. Γ1 for which brother ∈ Γ1 and by introspection also Bbrother ∈ Γ1. that she believes φ, i.e. Bφ. Now she can use Bφ ⊃ φ to derive φ. Hence,
2. Γ2 for which brother ∉ Γ2. Hence, ¬Bbrother ∈ Γ2 and by closure, on the basis of Ξ3, she can ground her belief in φ on her belief that she
believes φ. Indeed, there is a stable extension of Ξ3 containing φ and Bφ.
In view of this, weaker forms of groundedness have been proposed in defeasible rules, i.e., rules that allow for exceptions (such as 'Birds fly'),
Konolige (1988) that do not allow for this form of bootstrapping and the non-monotonicity of autoepistemic logic is rooted in the indexicality
according to which we only have the extension of Ξ3 that contains neither of its autoepistemic belief operator (Moore (1984), Konolige (1988)): it
φ nor Bφ. refers to the autoepistemic theory into which it is embeded and hence its
meaning may change when we add beliefs to the theory.
The centrality of autoepistemic logic in NML is emphasized by the fact
that several tight links to other seminal formalism have been established. Various modal semantics have been proposed for autoepistemic logic (see
Let us mention three such links. the entry on modal logic for more background on modal logics). For
instance, Moore (1984) proposes an S5-based Kripkean possible world
First, there are close connections between autoepistemic logic and logic semantics and Lin and Shoham (1990) propose bi-modal preferential
programming. For instance, Gelfond's and Lifschitz's stable model semantics (see Section Selection semantics below) for both autoepistemic
semantics for logic programming with negation as failure which serves as logic and default logic. In Konolige (1988) it has been observed that the
the foundation for the answer set programming paradigm in computer fixed point characterization of stable expansions can be alternatively
science has been equivalently expressed by means of autoepistemic logic phrased only on the basis of the set of formulas Γ0 in Γ that do not contain
(Gelfond and Lifschitz (1988)). The result is achieved by translating occurrences of the modal operator B:
clauses in logic programming
Γ = Cn[K45](Ξ ∪ {Bφ ∣ φ ∈ Γ0} ∪ {¬Bφ ∣ φ ∉ Γ0})
φ ← φ1, …, φn, not ψ1, …, not ψm
where Cn[K45](•) is the consequence function of the modal logic K45.
in such a way that negation as failure (not ψ) gets the meaning “it is not
believed that ψ” (¬Bψ): 3.5 Selection semantics
(φ1 ∧ … ∧ φn ∧ ¬Bψ1 ∧ … ∧ ¬Bψm) ⊃ ψ In CL a formula φ is entailed by Γ (in signs Γ ⊨ φ) if and only if φ is valid
in all classical models of Γ. An influential idea in NML is to define non-
A second link has been established in Konolige (1988) with default logic
monotonic entailment not in terms of all classical models of Γ, but rather
which has been shown inter-translatable and equi-expressive with
in terms of a selection of these models (Shoham (1987)). Intuitively the
Konolige's strongly grounded variant of autoepistemic logic. Default rules
idea is to read
(γ : θ) / τ are translated by interpreting consistency conditions θ by ¬B¬θ
which can be read as “θ is consistent with the given beliefs”: Γ φ
(Bγ ∧ ¬B¬θ) ⊃ τ as
Especially the latter link is rather remarkable since the subject matter of “φ holds in the most normal/natural/etc. models of Γ.”
the given formalisms is quite different. While default logic deals with
In the seminal paper Kraus et al. (1990) this idea is investigated systems or as the KLM-properties (in reference to the authors of Kraus,
systematically. Lehmann, Magidor (1990)):
Intuitively, M ≺ Mʹ′ if M is more normal than Mʹ′. The relation ≺ can be AND: If φ ψ and φ τ then φ ψ ∧ τ.
employed to formally realize the idea of defining a consequence relation in OR: If φ ψ and τ ψ then φ ∨ τ ψ.
view of the most normal models, namely by focusing on ≺-minimal
models. Formally, where [ψ] is the set of all models of ψ in Ω, S is In Kraus et al. (1990) it is shown that a consequence relation is
defined as follows: preferential if and only if = S for some preferential structure S.
ψ S φ if and only if φ holds in all ≺-minimal models in [ψ]. Given a set of conditional assertions K of the type φ ψ one may be
interested in investigating what other conditional assertions follow. The
In order to warrant that there are such minimal states, ≺ is considered to be following two strategems lead to the same results. The first option is to
smooth (also sometimes called stuttered), i.e., for each M either M is ≺- intersect all preferential consequence relations that extend K (in the
minimal or there is a ≺-minimal Mʹ′ such that Mʹ′ ≺ M. sense that the conditional assertions in K hold for ) obtaining the
Preferential Closure P of K. The second option is to use the five defining
Preferential structures enjoy a central role in NML since they characterize properties of preferential consequence relations as deduction rules for
preferential consequence relations, i.e., non-monotonic consequence syntactic units of the form φ ψ. This way we obtain the deductive
relations that fulfill the following central properties, also referred to as system P with its consequence relation ⊢P for which:
the core properties or the conservative core of non-monotonic reasoning
K ⊢P φ ψ if and only if φ Pψ Possibility measures assign real numbers in the interval [0,1] to
propositions in order to measure their possibility, where 0 corresponds to
3.5.2 Various semantics impossible states and 1 to necessary states (Dubois and Prade (1990)).
Ordinal ranking functions rank propositions via natural numbers closed
One of the most remarkable facts in the study of NMLs is that various with ∞ (Goldszmidt and Pearl (1992)). One may think of κ([φ]) as the
other semantics have been proposed —often independently and based on level of surprise we would face were φ to hold, where ∞ represents
very different considerations— that also adequately characterize maximal surprise. Finally, plausibility measures (Friedman and Halpern
preferential consequence relations. This underlines the central status of the (1996)) associate propositions with elements in a partially ordered domain
core properties in the formal study of defeasible reasoning. with bottom element ⊥ and top element ⊤ in order to compare their
plausibilities. Given some simple constraints on Pl (such as: If Pl(X) =
Many of these approaches use structures S = (Ω, Π) where Ω is again a set
Pl(Y) = ⊥ then Pl(X ∪ Y) = ⊥) we speak of qualitative plausibility
of classical models and Π is a mapping with the domain ℘(Ω) (the set of
structures. The following statements are all equivalent which emphasizes
subsets of Ω) and an ordered co-domain (D, <). The exact nature of (D, <)
the centrality of the core properties in the study of NMLs:
depends on the given approach and we give some examples below. The
basic common idea is to let φ S ψ in case Π([φ∧ψ]) is preferable to K ⊢P φ ψ
Π([φ∧¬ψ]) in view of <. The following table lists some proposals which φ S ψ for all preferential structures S for which S extends K
we discuss some more below: φ S ψ for all possibilistic structures S for which S extends K
φ S ψ for all ordinal ranking structures S for which S extends K
Π φ S ψ iff Structures φ S ψ for all qualitative plausibility structures S for which S
possibility measure π([φ]) = 0 or possibilistic structures extends K
π([φ ∧ ψ]) > π([φ ∧ Yet another well-known semantics that characterizes preferential
π: ℘(Ω) → [0,1]
¬ψ]) consequence relations makes use of conditional probabilities. The idea is
that φ ψ holds in a structure if the conditional probability P(ψ|φ) is
ordinal ranking ordinal ranking
κ([φ]) = ∞ or closer to 1 than an arbitrary ε, whence the name ε-semantics (Adams
function structures
(1975), Pearl (1989)).
κ: ℘(Ω) → {0,1, κ([φ ∧ ψ]) < κ([φ ∧
…,∞} ¬ψ]) The following example demonstrates that the intuitive idea of using a
threshold value such as ½ instead of infinitesimals and to interpret φ ψ
plausibility measure Pl([φ]) = ⊥ or plausibility structures
as P(ψ∣φ) > ½ does not work in a straightforward way. Let α abbreviate
Pl([φ ∧ ψ]) > Pl([φ ∧ “being a Pennsylvania Dutch”, β abbreviate “being a native speaker of
Pl: ℘(Ω) → D
¬ψ]) German”, and γ abbreviate “being born in Germany”. Further, let our
knowledge base comprise the statements “αs are usually βs,” “α∧β's are the problematic example above: α β means that max[α] ⊧ β and hence
usually γs”. The following Euler diagram illustrates the conditional that max[α] = max[α∧β]. α∧β γ means that max[α∧β] ⊧ γ and hence
probabilities in a possible probability distribution for the given statements. max[α] ⊧ γ which implies α γ.
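The failure of the naive ½-threshold reading can be checked with a small computation. The following Python sketch uses a concrete distribution of our own choosing (the article's Euler diagram is not reproduced here): both “αs are usually βs” and “α∧βs are usually γs” clear the threshold, yet “αs are usually γs” does not, even though, as the surrounding text notes, the preferential reading via most-normal models does support α |~ γ.

```python
from fractions import Fraction as F

# Hypothetical distribution over truth-value triples (alpha, beta, gamma);
# alpha = "Pennsylvania Dutch", beta = "native speaker of German",
# gamma = "born in Germany". The numbers are our own illustrative choice,
# and for simplicity all probability mass sits on alpha-worlds.
P = {
    (True, True,  True):  F(36, 100),
    (True, True,  False): F(24, 100),
    (True, False, True):  F(0, 100),
    (True, False, False): F(40, 100),
}

def prob(pred):
    """Probability of the set of worlds satisfying `pred`."""
    return sum(q for world, q in P.items() if pred(world))

def cond(pred_b, pred_a):
    """Conditional probability P(B | A)."""
    return prob(lambda w: pred_a(w) and pred_b(w)) / prob(pred_a)

alpha = lambda w: w[0]
beta  = lambda w: w[1]
gamma = lambda w: w[2]

# Both premise conditionals clear the 1/2 threshold ...
assert cond(beta, alpha) > F(1, 2)                            # 3/5
assert cond(gamma, lambda w: alpha(w) and beta(w)) > F(1, 2)  # 3/5
# ... but the threshold reading of alpha |~ gamma fails:
assert cond(gamma, alpha) < F(1, 2)                           # 9/25
```

Exact rationals via the fractions module keep the threshold comparisons free of floating-point artifacts.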
considered. Various publications investigate the preferential and rational consequence relations in a first-order language (e.g., Lehmann and Magidor (1990), Delgrande (1998), Friedman et al. (2000)).

As we have seen, the properties of preferential or rational consequence relations may also serve as deductive rules for syntactic units of the form φ |~ ψ under the usual readings such as “If φ then usually ψ.” This approach can be generalized to gain conditional logics by allowing for formulas where a conditional assertion φ |~ ψ is within the scope of classical connectives such as ∧, ∨, ¬, etc. (e.g., Delgrande (1987), Asher and Morreau (1991), Friedman and Halpern (1996), Giordano et al. (2009)). We state one remarkable result obtained in Boutilier (1990) that closely links the study of NMLs to the study of modal logics in the Kripkean tradition. Suppose we translate the deduction rules of system P into Hilbert-style axiom schemes such that, for instance, Cautious Monotonicity becomes

⊢ ((φ |~ ψ) ∧ (φ |~ τ)) ⊃ ((φ ∧ ψ) |~ τ)

It is shown that conditional assertions can be expressed in standard Kripkean modal frames in such a way that system P (under this translation) corresponds to a fragment of the well-known modal logic S4. An analogous result is obtained for the modal logic S4.3 and the system that results from strengthening system P with an axiom scheme for Rational Monotonicity.

Various contributions are specifically devoted to problems related to relevancy. Consider some of the conditional assertions in the Nixon Diamond: K = {Quaker |~ Pacifist, Republican |~ ¬Pacifist}. It seems desirable to derive e.g. Quaker ∧ worker |~ Pacifist since in view of K, being a worker is irrelevant to the assertion Quaker |~ Pacifist. Intuitively speaking, Quaker |~ ¬worker should fail, in which case Rational Monotonicity yields Quaker ∧ worker |~ Pacifist in view of Quaker |~ Pacifist. Hence, prima facie we may want to proceed as follows: let |~R be the result of intersecting all rational consequence relations that extend K (in the sense that the conditional assertions in K hold for |~). Unfortunately it is not the case that Quaker ∧ worker |~R Pacifist. The reason is simply that there are rational consequence relations for which Quaker |~ ¬worker, whence Rational Monotonicity does not yield the desired Quaker ∧ worker |~ Pacifist. Moreover, it has been shown that |~R is identical to the preferential closure |~P.

In Lehmann and Magidor (1992) a Rational Closure for conditional knowledge bases such as K is proposed that yields the desired consequences. We omit the technical details. The basic idea is to assign natural numbers, i.e., ranks, to formulas which indicate how exceptional they are. Then the ranks of formulas are minimized, which means that each formula is interpreted as normally as possible. A conditional assertion φ |~ ψ is in the Rational Closure of K if the rank of φ is strictly less than the rank of φ ∧ ¬ψ. In our example Quaker ∧ worker will get the same rank as Quaker, which will be strictly less than the ranks of Quaker ∧ ¬Pacifist and Quaker ∧ worker ∧ ¬Pacifist. This means that the desired Quaker ∧ worker |~ Pacifist is in the Rational Closure of our K.

A system equivalent to Rational Closure has been independently proposed under the name system Z based on ε-semantics in Pearl (1990).

One may consider it a drawback of Rational Closure that it suffers from the Drowning Problem (see above). This problem has been tackled in Lehmann (1995) with the formalism Lexicographic Closure. For instance, Penguin ∧ Bird |~ has wings is in the Lexicographic Closure but not in the Rational Closure of K = {Penguin |~ Bird, Penguin |~ ¬flies, Bird |~ flies, Bird |~ has wings}.

Lexicographic Closure also implements a quantitative rationale according to which in cases of conflicts as many conditional assertions as possible are validated. Suppose our knowledge base K consists of …

… violations in view of specificity considerations. E.g., in our example, the violation of two less specific assertions counter-balances the violation of the more specific assertion.
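Pearl's system Z construction, which as noted above is equivalent to Rational Closure, can be made concrete for the penguin knowledge base. The following Python sketch is our own illustrative encoding (worlds as truth assignments over four atoms, conditionals as pairs of predicates); it ranks the conditionals by tolerance, ranks worlds by their worst violated conditional, and then checks Rational Closure membership, reproducing both Penguin |~ ¬flies and the Drowning Problem for Penguin |~ has wings.

```python
from itertools import product

ATOMS = ("p", "b", "f", "w")  # penguin, bird, flies, has wings
WORLDS = [dict(zip(ATOMS, bits)) for bits in product((True, False), repeat=4)]

# The knowledge base K from the text, as (antecedent, consequent) pairs.
K = [
    (lambda v: v["p"], lambda v: v["b"]),      # Penguin |~ Bird
    (lambda v: v["p"], lambda v: not v["f"]),  # Penguin |~ ¬flies
    (lambda v: v["b"], lambda v: v["f"]),      # Bird |~ flies
    (lambda v: v["b"], lambda v: v["w"]),      # Bird |~ has wings
]

def verified(rule, v):
    ante, cons = rule
    return ante(v) and cons(v)

def violated(rule, v):
    ante, cons = rule
    return ante(v) and not cons(v)

def tolerated(rule, rules):
    # A rule is tolerated by `rules` if some world verifies the rule
    # while violating no rule in `rules`.
    return any(verified(rule, v) and not any(violated(r, v) for r in rules)
               for v in WORLDS)

def z_ranks(rules):
    # System Z partition: rank 0 holds the rules tolerated by the whole
    # set, rank 1 those tolerated by the remainder, and so on.
    ranks, level = [None] * len(rules), 0
    while any(r is None for r in ranks):
        rest = [rules[i] for i, r in enumerate(ranks) if r is None]
        layer = [i for i, r in enumerate(ranks)
                 if r is None and tolerated(rules[i], rest)]
        if not layer:
            raise ValueError("inconsistent conditional knowledge base")
        for i in layer:
            ranks[i] = level
        level += 1
    return ranks

def world_rank(v, rules, ranks):
    # A world is as normal as its worst violated conditional allows.
    viol = [ranks[i] for i, r in enumerate(rules) if violated(r, v)]
    return 1 + max(viol) if viol else 0

def formula_rank(pred, rules, ranks):
    vals = [world_rank(v, rules, ranks) for v in WORLDS if pred(v)]
    return min(vals) if vals else float("inf")

def rc_entails(ante, cons, rules):
    # φ |~ ψ is in the Rational Closure iff the rank of φ ∧ ψ is
    # strictly smaller than the rank of φ ∧ ¬ψ.
    ranks = z_ranks(rules)
    return (formula_rank(lambda v: ante(v) and cons(v), rules, ranks)
            < formula_rank(lambda v: ante(v) and not cons(v), rules, ranks))

p, b = (lambda v: v["p"]), (lambda v: v["b"])
no_f, w = (lambda v: not v["f"]), (lambda v: v["w"])

assert rc_entails(p, no_f, K)   # Penguin |~ ¬flies is in the Rational Closure
assert not rc_entails(p, w, K)  # Drowning: Penguin |~ has wings is not
```

Lexicographic Closure refines this construction by counting, per rank, how many conditionals a world violates, which is what recovers Penguin ∧ Bird |~ has wings in the example above.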
3. ¬P(b)                PREM     ∅
4. ∃xP(x) ∧ ¬∀xP(x)     1,3; RU  ∅

The formula ∀xP(x) ∨ ∀xQ(x) is derived at lines 4 and 5 on two respective conditions: {∃xP(x) ∧ ¬∀xP(x)} and {∃xQ(x) ∧ ¬∀xQ(x)}. Neither ∃xP(x) ∧ ¬∀xP(x) nor ∃xQ(x) ∧ ¬∀xQ(x) is derivable on the condition ∅. However, both are part of the (minimal) disjunction of abnormalities derived at line 6. According to the minimal abnormality strategy the premises are interpreted in such a way that as many abnormalities as possible are considered to be false, which leaves us with two interpretations: one in which ∃xP(x) ∧ ¬∀xP(x) holds while ∃xQ(x) ∧ ¬∀xQ(x) is false, and another one in which ∃xQ(x) ∧ ¬∀xQ(x) holds while ∃xP(x) ∧ ¬∀xP(x) is false. In both interpretations either the assumption of line 4 or the assumption of line 5 holds, which means that in either interpretation the defeasible inference to ∀xP(x) ∨ ∀xQ(x) goes through. Thus, according to the minimal abnormality strategy neither line 4 nor line 5 is marked (we omit the technical details). The reliability strategy is more cautious. According to it any line that involves an abnormality in its condition that is part of a minimal disjunction of abnormalities derived on the condition ∅ is marked. This means that in our example, lines 4 and 5 are marked.

Lines may get marked at specific stages of a proof just to get unmarked at later stages and vice versa. This mirrors the internal dynamics of defeasible reasoning. In order to define a consequence relation, a stable notion of derivability is needed. A formula derived at an unmarked line l in an adaptive proof from a premise set Γ is considered a consequence of Γ if any extension of the proof in which l is marked is further extendable so that the line is unmarked again.

Such a consequence relation can equivalently be expressed semantically in terms of a preferential semantics (see Section Selection semantics). Given an LLL-model M we can identify its abnormal part Ab(M) = {υ ∈ Ω ∣ M ⊧ υ} to be the set of all abnormalities that hold in M. The selection semantics for minimal abnormality can be phrased as follows. We order the LLL-models by M ≺ M′ if and only if Ab(M) ⊂ Ab(M′). Then we define that φ is a semantic consequence of Γ if and only if for all ≺-minimal LLL-models M of Γ, M ⊧ φ. (We omit selection semantics for other strategies.)

A similar formulation makes use of maximally consistent sets. In Makinson (2003) it was developed under the name default assumptions. Given a set of assumptions Ξ,

Γ |~[Ξ] φ if and only if Ξ′ ∪ Γ ⊨ φ for all Ξ′ ⊆ Ξ that are maximally consistent with Γ (i.e., Ξ′ ∪ Γ is consistent and there is no Ξ″ ⊆ Ξ such that Ξ′ ⊂ Ξ″ and Ξ″ ∪ Γ is consistent).

If we take Ξ to be {¬υ ∣ υ ∈ Ω} then we have an equivalent characterization of adaptive logics with the minimal abnormality strategy and the set of abnormalities Ω (Van De Putte (2013)).

Conditional Entailment is another assumption-based approach (Geffner and Pearl (1992)). Suppose we start with a theory T = (Δ, Γ, Λ) where Δ = {φi → ψi ∣ i ∈ I} consists of default rules, Γ consists of necessary facts, while Λ consists of contingent facts. It is translated into an assumption-based theory T′ = (Δ′, Γ′, Λ) as follows:

For each φi → ψi ∈ Δ we introduce a designated assumption constant πi and encode φi → ψi by φi ∧ πi ⊃ ψi and the default φi → πi. The former is just our scheme [†] while the latter makes sure that assumptions are, by default, considered to hold.
The set of defaults Δ′ is {φi → πi ∣ i ∈ I} and our background knowledge becomes Γ′ = Γ ∪ {φi ∧ πi ⊃ ψi ∣ i ∈ I}.

Just as in the adaptive logic approach, models are compared with respect to their abnormal parts. For a classical model M of Γ′ ∪ Λ, Ab(M) is the set of all πi for which M ⊧ ¬πi. One important aspect that distinguishes conditional entailment from adaptive logics and default assumptions is the fact that it implements the Specificity Principle. For this, assumptions are ordered by means of a (smooth) relation <. Models are then compared as follows:

M ⋖ M′ if and only if Ab(M) ≠ Ab(M′) and for all πi ∈ Ab(M)∖Ab(M′) there is a πj ∈ Ab(M′)∖Ab(M) such that πi < πj.

The idea is: M is preferred over M′ if every assumption πi that doesn't hold in M but that holds in M′ is 'compensated for' by a more specific assumption πj that holds in M but doesn't hold in M′.

For this to work, < has to include specificity relations among assumptions. Such orders < are called admissible relative to the background knowledge Γ′ if they satisfy the following property: for every set of assumptions Π ⊆ {πi ∣ i ∈ I}, if Π violates some default φj → πj in view of the given background knowledge Γ′ (in signs, Π, φj, Γ′ ⊨ ¬πj) then there is a πk ∈ Π for which πk < πj.

This is best understood when put into action. Take the case with Tweety the penguin. We have Δ′ = {p → π1, b → π2}, Γ′ = {p ⊃ b, p ∧ π1 ⊃ ¬f, b ∧ π2 ⊃ f} and Λ = {p}. Take Π = {π2}; then Π, Γ′, p ⊨ ¬π1 and thus for < to be admissible, π2 < π1. We have two types of models of Γ′ ∪ Λ: models M1 for which M1 ⊧ π1 and therefore M1 ⊧ ¬f, and models M2 for which M2 ⊧ π2 and thus M2 ⊧ f. Note that M1 ⋖ M2 since for the only assumption in Ab(M1), namely π2, there is π1 ∈ Ab(M2)∖Ab(M1) for which π2 < π1.

Analogous to adaptive logics with the minimal abnormality strategy, conditional entailment is defined via ⋖-minimal models:

(Δ′, Γ′, Λ) |~ φ if and only if for each admissible < (relative to Γ′) and all ⋖-minimal models M of Γ′ ∪ Λ, M ⊧ φ.

In our example, ¬f is conditionally entailed since all ⋖-minimal models have the same abnormal part as M1.

A consequence of expressing defeasible inferences via material implication in assumption-based approaches is that defeasible inferences are contrapositable. Clearly, if φ ∧ π ⊃ ψ then ¬ψ ∧ π ⊃ ¬φ. As a consequence, formalisms such as default logic have a more greedy style of applying default rules. We demonstrate this with conditional entailment. Consider a theory consisting of the defaults p1 → p2, p2 → p3, p3 → p4 and the factual information Λ = {p1, ¬p4} (where the pi are logical atoms). In assumption-based approaches such as conditional entailment the defeasible rules will be encoded as p1 ∧ π1 ⊃ p2, p2 ∧ π2 ⊃ p3, and p3 ∧ π3 ⊃ p4. It can easily be seen that < = ∅ is an admissible ordering, which means that for instance a model M with Ab(M) = {π1} is ⋖-minimal. In such a model we have M ⊧ ¬p3 and M ⊧ ¬p2 by reasoning backwards via contraposition from ¬p4 ∧ π3 to ¬p3 and from ¬p3 ∧ π2 to ¬p2. This means that neither p2 nor p3 is conditionally entailed.

The situation is different in default logic where both p2 and p3 are derivable. The reasoning follows a greedy policy in applying default rules: whenever a rule is applicable (i.e., its antecedent holds by the currently held beliefs Ξ and its consequent doesn't contradict Ξ) it is applied. There is disagreement among scholars whether and when contraposition is a desirable property for defeasible inferences (e.g., Caminada (2008), Prakken (2012)).

4. Non-monotonic logic and human reasoning

In view of the fact that test subjects seem to perform very poorly in various paradigmatic reasoning tests (e.g., Wason's Selection Task (Wason
(1966)) or the Suppression Task (Byrne (1989))), main streams in the psychology of reasoning have traditionally ascribed to logic at best a subordinate role in human reasoning. In recent years this assessment has been criticized as the result of evaluating performances in tests against the standard of classical logic, whereas other standards based on probabilistic considerations or on NMLs have been argued to be more appropriate.

This resulted in the rise of a new probabilistic paradigm (Oaksford and Chater (2007), Pfeifer and Douven (2014)) where probability theory provides a calculus for rational belief update. Although the program is sometimes phrased in decidedly anti-logicist terms,[7] logic is here usually understood as monotonic and deductive. The relation to NML is less clear and it has been argued that there are close connections especially to probabilistic accounts of NML (Over (2009), Pfeifer and Kleiter (2009)). Politzer and Bonnefon (2009) warn against the premature acceptance of the probabilistic paradigm in view of the rich variety of alternatives such as possibility measures, plausibility measures, etc. (see also above).

Another argument for the relevance of NML is advocated in Stenning and Van Lambalgen (2008), who distinguish between reasoning to and reasoning from an interpretation. In the former process agents establish a logical form that is relative both to the specific context in which the reasoning takes place and to the agent's goals. When establishing a logical form agents choose, inter alia, a formal language, a semantics (e.g., extensional vs. intensional), a notion of validity, etc. Once a logical form is established, agents engage in lawlike rule-based inferences which are based on this form. It is argued that in the majority of cases in standard reasoning tasks, subjects use non-monotonic logical forms that are based on closed world assumptions.

Nonmonotonic logicians often state that their motivation stems from observing the defeasible structure of actual commonsense reasoning. Empirical studies have been explicitly cited as both inspiration for working on NMLs and as standards against which to evaluate NMLs. However, it has also been noted that logicians often rely too much on their own intuitions without critically assessing them against the background of empirical studies (Pelletier and Elio (1997)).

Various studies investigate their test subjects' tendency to reason according to specific inference principles of NMLs. Most studies support the descriptive adequacy of the rules of system P. There are, however, some open or controversial issues. For instance, while some studies report results suggestive of the adequacy of weakened monotonicity principles such as Cautious Monotonicity (Schurz (2005), Neves et al. (2002), Pfeifer and Kleiter (2005)) and Rational Monotonicity (Neves et al. (2002)), Benferhat et al. (2005) report mixed results. Specificity considerations play a role in the reasoning process of test subjects in Schurz (2005), whereas according to Ford and Billington (2000) they do not. Benferhat et al. (2005) are specifically interested in the question whether the responses of their test subjects corresponded better to Lexicographic Closure or to Rational Closure. While the results were not fully conclusive they still suggest a preference for the former.

Pelletier and Elio (1994) investigate various relevant factors that influence subjects' reasoning about exceptions of defaults or inheritance relations. Their study makes use of the benchmark problems for defeasible reasoning proposed in Lifschitz (1989). It is, for instance, observed that the exceptional status of an object A with respect to some default is more likely to spread to other objects if they share properties with A that may play a role in explaining the exceptional status. For example, when confronted with a student club that violates the default that student clubs only allow for student members, subjects are more likely to ascribe this exceptional status also to another club if they learn that both clubs have been struggling to maintain minimum membership requirements.
The question of the descriptive adequacy of NMLs to human reasoning is also related to questions concerning the nature and limits of cognitive modules in view of which agents are capable of logical reasoning. For instance, the question arises whether such modules could be realized on a neurological level. Concerning the former question there are successful representations of NMLs in terms of neural networks (see Stenning and Van Lambalgen (2008) for logic programming with closed world assumptions and Leitgeb (2001) for NMLs in the tradition of Selection semantics).

5. Conclusion

There are three major issues connected with the development of logical frameworks that can adequately represent defeasible reasoning: (i) material adequacy; (ii) formal properties; and (iii) complexity. A non-monotonic formalism is materially adequate to the extent to which it captures examples of defeasible reasoning and to the extent to which it has intuitive properties. The question of formal properties has to do with the degree to which the formalism gives rise to a consequence relation that satisfies desirable theoretic properties such as the above mentioned Reflexivity, Cut, and Cautious Monotony. The third set of issues has to do with the computational complexity of the most basic questions concerning the framework.

There is a potential tension between (i) and (ii): the desire to capture a broad range of intuitions can lead to ad hoc solutions that can sometimes undermine the desirable formal properties of the framework. In general, the development of NMLs and related formalisms has been driven, since its inception, by consideration (i) and has relied on a rich and well-chosen array of examples. Of course, there is some question as to whether any single framework can aspire to be universal in this respect.

More recently, researchers have started paying attention to consideration (ii), looking at the extent to which NMLs have generated well-behaved relations of logical consequence. As Makinson (1994) points out, practitioners of the field have encountered mixed success. In particular, one abstract property, Cautious Monotony, appears at the same time to be crucial and elusive for many of the frameworks to be found in the literature. This is a fact that is perhaps to be traced back, at least in part, to the above-mentioned tension between the requirement of material adequacy and the need to generate a well-behaved consequence relation.

The complexity issue appears to be the most difficult among the ones that have been singled out. NMLs appear to be stubbornly intractable with respect to the corresponding problem for classical logic. This is clear in the case of default logic, given the ubiquitous consistency checks. But besides consistency checks, there are other, often overlooked, sources of complexity that are purely combinatorial. Other forms of non-monotonic reasoning, besides default logic, are far from immune from these combinatorial roots of intractability. Although some important work has been done trying to make various non-monotonic formalisms more tractable, this is perhaps the problem on which progress has been slowest in coming.

Bibliography

Adams, Ernest W., 1975. The Logic of Conditionals. Dordrecht: D. Reidel Publishing Co.
Alferes, Jose Julio, Damasio, Carlos Viegas, & Pereira, Luis Moniz, 1995. A Logic Programming System for Nonmonotonic Reasoning. Journal of Automated Reasoning, 14(1): 93–147.
Antonelli, Gian Aldo, 1999. A directly cautious theory of defeasible consequence for default logic via the notion of general extension. Artificial Intelligence, 109(1): 71–109.
–––, 1997. Defeasible inheritance on cyclic networks. Artificial Intelligence, 92(1): 1–23.
Arieli, Ofer, & Avron, Arnon, 2000. General Patterns for Nonmonotonic Reasoning: From Basic Entailments to Plausible Relations. Logic Journal of the IGPL, 8: 119–148.
Asher, Nicholas, & Morreau, Michael, 1991. Commonsense entailment: A modal theory of nonmonotonic reasoning. In Logics in AI, Berlin: Springer, pp. 1–30.
Batens, Diderik, 1980. Paraconsistent extensional propositional logics. Logique et Analyse, 90–91: 195–234.
Batens, Diderik, 1999. Inconsistency-Adaptive Logics. In Logic at Work. Essays Dedicated to the Memory of Helena Rasiowa, Heidelberg, New York: Physica Verlag (Springer), pp. 445–472.
Batens, Diderik, 2004. The need for adaptive logics in epistemology. In Logic, Epistemology, and the Unity of Science, Berlin: Springer, pp. 459–485.
Batens, Diderik, 2007. A Universal Logic Approach to Adaptive Logics. Logica Universalis, 1: 221–242.
Batens, Diderik, 2011. Logics for qualitative inductive generalization. Studia Logica, 97(1): 61–80.
Benferhat, Salem, Dubois, Didier, & Prade, Henri, 1997. Some Syntactic Approaches to the Handling of Inconsistent Knowledge Bases: A Comparative Study. Part I: The Flat Case. Studia Logica, 58: 17–45.
Benferhat, Salem, Dubois, Didier, & Prade, Henri, 1999. Possibilistic and standard probabilistic semantics of conditional knowledge bases. Journal of Logic and Computation, 9(6): 873–895.
Benferhat, Salem, Bonnefon, Jean F., & da Silva Neves, Rui, 2005. An overview of possibilistic handling of default reasoning, with experimental studies. Synthese, 146(1–2): 53–70.
Boutilier, Craig, 1990. Conditional Logics of Normality as Modal Systems. In AAAI (Volume 90), pp. 594–599.
Brewka, Gerhard, & Eiter, Thomas, 2000. Prioritizing Default Logic. In Intellectics and Computational Logic, Applied Logic Series (Volume 19), Dordrecht: Kluwer, pp. 27–45.
Byrne, Ruth M.J., 1989. Suppressing valid inferences with conditionals. Cognition, 31(1): 61–83.
Caminada, M., 2008. On the issue of contraposition of defeasible rules. Frontiers in Artificial Intelligence and Applications, 172: 109–115.
Delgrande, James, Schaub, Torsten, Tompits, Hans, & Wang, Kewen, 2004. A classification and survey of preference handling approaches in nonmonotonic reasoning. Computational Intelligence, 20(2): 308–334.
Delgrande, James P., 1987. A first-order conditional logic for prototypical properties. Artificial Intelligence, 33(1): 105–130.
Delgrande, James P., 1998. On first-order conditional logics. Artificial Intelligence, 105(1): 105–137.
Delgrande, James P., & Schaub, Torsten, 2000. Expressing preferences in default logic. Artificial Intelligence, 123(1): 41–87.
Gabbay, Dov, Hogger, C., & Robinson, J. (eds.), 1994. Handbook of Logic in Artificial Intelligence and Logic Programming (Volume 3), Oxford and New York: Oxford University Press.
Dubois, Didier, & Prade, Henri, 1990. An introduction to possibilistic and fuzzy logics. In Readings in Uncertain Reasoning, San Francisco: Morgan Kaufmann Publishers Inc., pp. 742–761.
Dung, Phan Minh, 1995. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games. Artificial Intelligence, 77: 321–358.
Dung, P.M., Kowalski, R.A., & Toni, F., 2009. Assumption-based argumentation. In Argumentation in Artificial Intelligence, pp. 199–218.
Ford, Marilyn, 2004. System LS: A Three-Tiered Nonmonotonic Reasoning System. Computational Intelligence, 20(1): 89–108.
Ford, Marilyn, & Billington, David, 2000. Strategies in human nonmonotonic reasoning. Computational Intelligence, 16(3): 446–468.
Friedman, Nir, & Halpern, Joseph Y., 1996. Plausibility measures and default reasoning. Journal of the ACM, 48: 1297–1304.
Goldszmidt, Moisés, Morris, Paul, & Pearl, Judea, 1993. A Maximum Entropy Approach to Nonmonotonic Reasoning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(3): 220–232.
Horty, John, 2007. Defaults with Priorities. Journal of Philosophical Logic
Friedman, Nir, Halpern, Joseph Y., & Koller, Daphne, 2000. First-order Logic, 36: 367–413.
conditional logic for default reasoning revisited. ACM Trans. Horty, John F., 1994. Some direct theories of nonmonotonic inheritance. In
Comput. Logic, 1(October): 175–207. Gabbay, Dov M., Hogger, Christopher J., & Robinson, J. A. (eds.),
Gabbay, Dov M., 1985. Theoretical foundations for non-monotonic Handbook of Logic in Artificial Intelligence and Logic Programming,
reasoning in expert systems. Logics and models of concurrent Volume 3: Nonmonotonic Reasoning and Uncertain Reasoning.
systems. New York: Springer-Verlag, pp. 439–457. Oxford: Oxford University Press, pp. 111–187.
Geffner, Hector, & Pearl, Judea, 1992. Conditional entailment: bridging Horty, John F., 2002. Skepticism and floating conclusions. Artifical
two approaches to default reasoning. Artifical Intelligence, 53(2–3): Intelligence, 135(1–2): 55–72.
209–244. Jeffry, Pelletier Francis, & Renee, Elio, 1994. On Relevance in
Gelfond, Michael, & Lifschitz, Vladimir, 1988. The stable model Nonmonotonic Reasoning: Some Empirical Studies. In Russ, Greiner,
semantics for logic programming. In ICLP/SLP (Volume 88), pp. & Devika, Subramanian (eds.), Relevance: AAAI 1994 Fall
1070–1080. Symposium Series. Palo Alto: AAAI Press, pp. 64–67.
Gilio, Angelo, 2002. Probabilistic reasoning under coherence in System P. Konolige, Kurt, 1988. On the relation between default and autoepistemic
Annals of Mathematics and Artificial Intelligence, 34(1–3): 5–34. logic. Artifical Intelligence, 35(3): 343–382.
Ginsberg, Matt, 1994. Essentials of Artificial Intelligence. San Francisco: Koons, Robert, 2014. Defeasible Reasoning. In The Stanford
Morgan Kaufmann Publishers Inc. Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta
Ginsberg, Matthew L. (ed.), 1987. Readings in nonmonotonic reasoning. (ed.), URL =
San Francisco: Morgan Kaufmann. <http://plato.stanford.edu/archives/spr2014/entries/reasoning-
Giordano, Laura, Gliozzi, Valentina, Olivetti, Nicola, & Pozzato, defeasible/>.
Gian Luca, 2009. Analytic tableaux calculi for KLM logics of Kraus, Sarit, Lehmann, Daniel, & Magidor, Menachem, 1990.
nonmonotonic reasoning. ACM Transactions on Computational Logic Nonmonotonic Reasoning, Preferential Models and Cumulative
(TOCL), 10(3): 18. Logics. Artifical Intelligence, 44: 167–207.
Goldszmidt, Moisés, & Pearl, Judea, 1992. Rank-based Systems: A Simple Kripke, Saul, 1975. Outline of a Theory of Truth. Journal of Philosophy,
Approach to Belief Revision, Belief Update, and Reasoning about 72: 690–716.
Evidence and Actions. In Proceedings of the Third International Lehmann, Daniel J., 1995. Another Perspective on Default Reasoning.
Conference on Knowledge Representation and Reasoning. San Annals of Mathematics and Artificial Intelligence, 15(1): 61–82.
Francisco: Morgan Kaufmann, pp. 661–672. Lehmann, Daniel J., & Magidor, Menachem, 1990. Preferential logics: the
predicate calculus case. In Proceedings of the 3rd conference on Reasoning. Artifical Intelligence, 13: 27–29.
Theoretical aspects of reasoning about knowledge, San Francisco: Moore, Robert C., 1984. Possible-World Semantics for Autoepistemic
Morgan Kaufmann Publishers Inc., pp. 57–72. Logic. In Proceedings of the Workshop on non-monotonic reasoning,
Lehmann, Daniel J., & Magidor, Menachem, 1992. What does a AAAI, pp. 344–354.
conditional knowledge base entail? Artificial Intelligence, 55(1): 1– Moore, Robert C., 1985. Semantical considerations on nonmonotonic
60. logic. Artifical Intelligence, 25(1): 75–94.
Leitgeb, Hannes, 2001. Nonmonotonic reasoning by inhibition nets. Neves, Rui Da Silva, Bonnefon, Jean-François, & Raufaste, Eric, 2002.
Artifical Intelligence, 128(May): 161–201. An empirical test of patterns for nonmonotonic inference. Annals of
Lifschitz, Vladimir, 1989. Benchmark problems for formal nonmonotonic Mathematics and Artificial Intelligence, 34(1–3): 107–130.
reasoning. In Non-Monotonic Reasoning. Berlin: Springer, pp. 202– Nute, D., 1994. Defeasible logics. In Handbook of Logic in Artificial
219. Intelligence and Logic Programming (Vol. 3). Oxford: Oxford
Lin, Fangzhen, & Shoham, Yoav, 1990. Epistemic semantics for fixed- University Press, pp. 353–395.
points non-monotonic logics. In Proceedings of the 3rd Conference Oaksford, Mike, & Chater, Nick, 2007. Bayesian rationality the
on Theoretical Aspects of Reasoning About Knowledge (TARK'90), probabilistic approach to human reasoning. Oxford: Oxford
Pacific Grove, CA: Morgan Kaufmann Publishers Inc, pp. 111–120. University Press.
Lukaszewicz, Witold, 1988. Considerations on default logic: an alternative Oaksford, Mike, & Chater, Nick, 2009. Précis of Bayesian rationality: The
approach. Computational intelligence, 4(1): 1–16. probabilistic approach to human reasoning. Behavioral and Brain
Makinson, David, 1994. General patterns in nonmonotonic reasoning. In: Sciences, 32(01): 69–84.
Handbook of Logic in Artificial Intelligence and Logic Programming, Orlowska, Ewa (ed.), 1999. Logic at Work. Essays Dedicated to the
vol. III, D. Gabbay, C. Hogger, J.A. Robinson (eds.), pp. 35–110, Memory of Helena Rasiowa. Heidelberg, New York: Physica Verlag
Oxford: Oxford University Press. (Springer).
Makinson, David, 2003. Bridges between classical and nonmonotonic Over, David E., 2009. New paradigm psychology of reasoning. Thinking
logic. Logic Journal of IGPL, 11(1): 69–96. & Reasoning, 15(4): 431–438.
Makinson, David, & Gärdenfors, Peter, 1991. Relations between the logic Pearl, Judea, 1988. Probabilistic reasoning in intelligent systems:
of theory change and nonmonotonic logic. The logic of theory networks of plausible inference. San Francisco: Morgan Kaufmann.
change, Berlin: Springer, pp. 183–205. Pearl, Judea, 1989. Probabilistic semantics for nonmonotonic reasoning: a
Makinson, David, & Schlechta, Karl, 1991. Floating conclusions and survey. In Proceedings of the first international conference on
zombie paths: two deep difficulties in the “directly skeptical” Principles of knowledge representation and reasoning. San
approach to defeasible inheritance nets. Artifical Intelligence, 48(2): Francisco: Morgan Kaufmann Publishers, pp. 505–516.
199–209. Pearl, Judea, 1990. System Z: a natural ordering of defaults with tractable
McCarthy, J., 1980. Circumscription – A Form of Non-Monotonic applications to nonmonotonic reasoning. In TARK '90: Proceedings
of the 3rd conference on Theoretical aspects of reasoning about Argumentation (Vol. 4), Dordrecht: Kluwer, pp. 219–318.
knowledge. San Francisco: Morgan Kaufmann Publishers, pp. 121– Reiter, Raymond, 1980. A Logic for Default Reasoning. Artifical
135. Intelligence, 13: 81–132.
Pelletier, Francis Jeffry, & Elio, Renée, 1997. What should default Schurz, Gerhard, 2005. Non-monotonic reasoning from an evolution-
reasoning be, by default? Computational Intelligence, 13(2): 165– theoretic perspective: Ontic, logical and cognitive foundations.
187. Synthese, 146(1–2): 37–51.
Pfeifer, Niki, & Douven, Igor, 2014. Formal Epistemology and the New Shoham, Yoav, 1987. A Semantical Approach to Nonmonotonic Logics. In
Paradigm Psychology of Reasoning. Review of Philosophy and Ginsberg, M. L. (ed.), Readings in Non-Monotonic Reasoning. Los
Psychology, 5(2): 199–221. Altos, CA: Morgan Kaufmann, pp. 227–249.
Pfeifer, Niki, & Kleiter, Gernot D., 2005. Coherence and nonmonotonicity Simari, Guillermo R, & Loui, Ronald P., 1992. A mathematical treatment
in human reasoning. Synthese, 146(1–2): 93–109. of defeasible reasoning and its implementation. Artificial intelligence,
Pfeifer, Niki, & Kleiter, Gernot D., 2009. Mental probability logic. 53(2): 125–157.
Behavioral and Brain Sciences, 32(01): 98–99. Stalnaker, Robert, 1993. A note on non-monotonic modal logic. Artificial
Politzer, Guy, & Bonnefon, Jean-François, 2009. Let us not put the Intelligence, 64(2): 183–196.
probabilistic cart before the uncertainty bull. Behavioral and Brain Stalnaker, Robert, 1994. What is a nonmonotonic consequence relation?
Sciences, 32(01): 100–101. Fundamenta Informaticae, 21(1): 7–21.
Pollock, John, 1991. A Theory of Defeasible Reasoning. International Stenning, Keith, & Van Lambalgen, Michiel, 2008. Human reasoning and
Journal of Intelligent Systems, 6: 33–54. cognitive science. Cambridge, MA: MIT Press.
Pollock, John, 1995. Cognitive Carpentry, Cambridge, MA: Bradford/MIT Straßer, Christian, 2014. Adaptive Logic and Defeasible Reasoning.
Press. Applications in Argumentation, Normative Reasoning and Default
Pollock, John, 2008. Defeasible Reasoning. In Reasoning: Studies of Reasoning (Trends in Logic, Volume 38). Dordrecht: Springer.
Human Inference and its Foundations, J. E. Adler and L. J. Rips Touretzky, David S., Thomason, Richmond H., & Horty, John F., 1991. A
(eds.), Cambridge: Cambridge University Press, pp. 451–470. Skeptic's Menagerie: Conflictors, Preemptors, Reinstaters, and
Poole, David, 1985. On the Comparison of Theories: Preferring the Most Zombies in Nonmonotonic Inheritance. In IJCAI, pp. 478–485 of.
Specific Explanation. In IJCAI (Volume 85), pp. 144–147. Van De Putte, Frederik, 2013. Default Assumptions and Selection
Prakken, H., 2010. An abstract framework for argumentation with Functions: A Generic Framework for Non-monotonic Logics. In
structured arguments. Argument & Computation, 1(2): 93–124. Advances in Artificial Intelligence and Its Applications, Dordrecht:
Prakken, H., 2012. Some reflections on two current trends in formal Springer, pp. 54–67.
argumentation. In A. Artikis, et al. (ed.), Logic Programs, Norms and Verheij, Bart, 1996. Rules, Reasons, Arguments. Formal Studies of
Action. Dordrecht: Springer, pp. 249–272. Argumentation and Defeat, Ph.D. thesis, Universiteit Maastricht.
Prakken, Henry, & Vreeswijk, Gerard A. W., 2002. Logics for Defeasible Wason, P. C., 1966. Reasoning. in B. Foss (ed.), New horizons in
psychology I, Harmondsworth: Penguin, pp. 135–151. that Verdi and Bizet are compatriots (C(v,b)). This leads us no longer to
endorse either the proposition that Verdi is Italian (because he could be
Academic Tools French), or that Bizet is French (because he could be Italian); but we
would still draw the defeasible consequence that Satie is French, since
How to cite this entry. nothing that we have learned conflicts with it. Thus,
Preview the PDF version of this entry at the Friends of the SEP
C(v,b) F(s)
Society.
Look up this entry topic at the Indiana Philosophy Ontology Now consider the proposition C(v,s) that Verdi and Satie are compatriots.
Project (InPhO). Before learning that C(v,b) we would be inclined to reject the proposition
Enhanced bibliography for this entry at PhilPapers, with links C(v,s) because we endorse I(v) and F(s), but after learning that Verdi and
to its database. Bizet are compatriots, we can no longer endorse I(v), and therefore no
longer reject C(v,s). Then the following fails:
Other Internet Resources
C(v,b) ¬C(v,s).
[Please contact the author with suggestions.]
However, if we also added C(v,s) to our stock of beliefs, we would lose
the inference to F(s): in the context of C(v,b), the proposition C(v,s) is
Related Entries
equivalent to the statement that all three composers have the same
artificial intelligence | artificial intelligence: logic and | computability and nationality. This leads us to suspend our assent to the proposition F(s). In
complexity | logic: classical | logic: modal | model theory: first-order other words, and contrary to Rational Monotony, the following also fails:
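The Verdi–Bizet–Satie example above can be reproduced computationally. The sketch below is illustrative only: it assumes a Shoham-style preferential semantics in which the preferred worlds are those violating a subset-minimal set of the defaults I(v), F(b), F(s), and the encoding (the names `entails`, `violations`, `compat`, `french`) is ours, not the entry's own formalism.

```python
from itertools import product

# Defaults: Verdi is Italian, Bizet is French, Satie is French.
DEFAULTS = {"v": "I", "b": "F", "s": "F"}
# All assignments of a nationality (Italian/French) to the three composers.
WORLDS = [dict(zip("vbs", w)) for w in product("IF", repeat=3)]

def violations(world):
    """The set of defaults this world violates."""
    return {c for c, nat in DEFAULTS.items() if world[c] != nat}

def entails(constraints, query):
    """Preferential entailment: the query holds in every world that satisfies
    the constraints while violating a subset-minimal set of defaults."""
    models = [w for w in WORLDS if all(c(w) for c in constraints)]
    minimal = [w for w in models
               if not any(violations(u) < violations(w) for u in models)]
    return all(query(w) for w in minimal)

compat = lambda x, y: lambda w: w[x] == w[y]   # C(x,y): same nationality
french = lambda x: lambda w: w[x] == "F"       # F(x)

# C(v,b) |~ F(s) holds: both minimal worlds keep Satie French.
print(entails([compat("v", "b")], french("s")))                  # True
# C(v,b) |~ ¬C(v,s) fails: one minimal world makes Verdi French too.
print(entails([compat("v", "b")], lambda w: w["v"] != w["s"]))   # False
# Rational Monotony (if K |~ φ and K does not defeasibly yield ¬ψ,
# then K ∪ {ψ} |~ φ) would license C(v,b), C(v,s) |~ F(s); it fails:
print(entails([compat("v", "b"), compat("v", "s")], french("s")))  # False
```

Subset-minimality, rather than counting violated defaults, is what makes the last inference fail: the all-French world violates only I(v), the all-Italian world only {F(b), F(s)}, and neither violation set contains the other, so both remain among the preferred worlds.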
7. For instance, “We thus argue that human rationality, and the coherence
of human thought, is defined not by logic, but by probability.” (emphasis
added, Oaksford and Chater (2009), p. 69)