Figure 7.6 Sentences are physical configurations of the agent, and reasoning is a process
of constructing new physical configurations from old ones. Logical reasoning should en-
sure that the new configurations represent aspects of the world that actually follow from the
aspects that the old configurations represent.
to the real-world relationship whereby some aspect of the real world is the case⁶ by virtue
of other aspects of the real world being the case. This correspondence between world and
representation is illustrated in Figure 7.6.
GROUNDING The final issue to consider is grounding—the connection between logical reasoning
processes and the real environment in which the agent exists. In particular, how do we know
that KB is true in the real world? (After all, KB is just “syntax” inside the agent’s head.)
This is a philosophical question about which many, many books have been written. (See
Chapter 26.) A simple answer is that the agent’s sensors create the connection. For example,
our wumpus-world agent has a smell sensor. The agent program creates a suitable sentence
whenever there is a smell. Then, whenever that sentence is in the knowledge base, it is
true in the real world. Thus, the meaning and truth of percept sentences are defined by the
processes of sensing and sentence construction that produce them. What about the rest of the
agent’s knowledge, such as its belief that wumpuses cause smells in adjacent squares? This
is not a direct representation of a single percept, but a general rule—derived, perhaps, from
perceptual experience but not identical to a statement of that experience. General rules like
this are produced by a sentence construction process called learning, which is the subject
of Part V. Learning is fallible. It could be the case that wumpuses cause smells except on
February 29 in leap years, which is when they take their baths. Thus, KB may not be true in
the real world, but with good learning procedures, there is reason for optimism.
7.4 Propositional Logic: A Very Simple Logic
PROPOSITIONAL LOGIC We now present a simple but powerful logic called propositional logic. We cover the syntax
of propositional logic and its semantics—the way in which the truth of sentences is deter-
mined. Then we look at entailment—the relation between a sentence and another sentence
that follows from it—and see how this leads to a simple algorithm for logical inference. Ev-
erything takes place, of course, in the wumpus world.
6 As Wittgenstein (1922) put it in his famous Tractatus: “The world is everything that is the case.”
7.4.1 Syntax
ATOMIC SENTENCES The syntax of propositional logic defines the allowable sentences. The atomic sentences
PROPOSITION SYMBOL consist of a single proposition symbol. Each such symbol stands for a proposition that can
be true or false. We use symbols that start with an uppercase letter and may contain other
letters or subscripts, for example: P, Q, R, W1,3, and North. The names are arbitrary but
are often chosen to have some mnemonic value—we use W1,3 to stand for the proposition
that the wumpus is in [1,3]. (Remember that symbols such as W1,3 are atomic, i.e., W, 1,
and 3 are not meaningful parts of the symbol.) There are two proposition symbols with fixed
meanings: True is the always-true proposition and False is the always-false proposition.
COMPLEX SENTENCES Complex sentences are constructed from simpler sentences, using parentheses and logical
LOGICAL CONNECTIVES connectives. There are five connectives in common use:
NEGATION ¬ (not). A sentence such as ¬W1,3 is called the negation of W1,3 . A literal is either an
LITERAL atomic sentence (a positive literal) or a negated atomic sentence (a negative literal).
∧ (and). A sentence whose main connective is ∧, such as W1,3 ∧ P3,1 , is called a con-
CONJUNCTION junction; its parts are the conjuncts. (The ∧ looks like an “A” for “And.”)
DISJUNCTION ∨ (or). A sentence using ∨, such as (W1,3 ∧ P3,1 )∨ W2,2 , is a disjunction of the disjuncts
(W1,3 ∧ P3,1 ) and W2,2 . (Historically, the ∨ comes from the Latin “vel,” which means
“or.” For most people, it is easier to remember ∨ as an upside-down ∧.)
IMPLICATION ⇒ (implies). A sentence such as (W1,3 ∧ P3,1 ) ⇒ ¬W2,2 is called an implication (or con-
PREMISE ditional). Its premise or antecedent is (W1,3 ∧ P3,1 ), and its conclusion or consequent
CONCLUSION is ¬W2,2 . Implications are also known as rules or if–then statements. The implication
RULES symbol is sometimes written in other books as ⊃ or →.
BICONDITIONAL ⇔ (if and only if). The sentence W1,3 ⇔ ¬W2,2 is a biconditional. Some other books
write this as ≡.
OPERATOR PRECEDENCE: ¬, ∧, ∨, ⇒, ⇔
Figure 7.7 gives a formal grammar of propositional logic; see page 1060 if you are not
familiar with the BNF notation. The BNF grammar by itself is ambiguous; a sentence with
several operators can be parsed by the grammar in multiple ways. To eliminate the ambiguity
we define a precedence for each operator. The “not” operator (¬) has the highest precedence,
which means that in the sentence ¬A ∧ B the ¬ binds most tightly, giving us the equivalent
of (¬A) ∧ B rather than ¬(A ∧ B). (The notation for ordinary arithmetic is the same: −2 + 4
is 2, not −6.) When in doubt, use parentheses to make sure of the right interpretation. Square
brackets mean the same thing as parentheses; the choice of square brackets or parentheses is
solely to make it easier for a human to read a sentence.
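To make the grammar concrete before turning to semantics, here is one possible encoding (a hypothetical sketch in Python, not code from this book): proposition symbols are strings, and complex sentences are tuples whose first element names the connective. Because the nesting is explicit, no precedence rules are needed at all.

# A hypothetical encoding of propositional sentences: a proposition
# symbol is a Python string; a complex sentence is a tagged tuple.
def Not(p):        return ('not', p)
def And(a, b):     return ('and', a, b)
def Or(a, b):      return ('or', a, b)
def Implies(a, b): return ('implies', a, b)
def Iff(a, b):     return ('iff', a, b)

# (W1,3 ∧ P3,1) ⇒ ¬W2,2, with the structure spelled out explicitly:
sentence = Implies(And('W13', 'P31'), Not('W22'))
print(sentence)    # ('implies', ('and', 'W13', 'P31'), ('not', 'W22'))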
7.4.2 Semantics
Having specified the syntax of propositional logic, we now specify its semantics. The se-
mantics defines the rules for determining the truth of a sentence with respect to a particular
TRUTH VALUE model. In propositional logic, a model simply fixes the truth value—true or false—for ev-
ery proposition symbol. For example, if the sentences in the knowledge base make use of the
proposition symbols P1,2 , P2,2 , and P3,1 , then one possible model is
m1 = {P1,2 = false, P2,2 = false, P3,1 = true} .
With three proposition symbols, there are 2³ = 8 possible models—exactly those depicted
in Figure 7.5. Notice, however, that the models are purely mathematical objects with no
necessary connection to wumpus worlds. P1,2 is just a symbol; it might mean “there is a pit
in [1,2]” or “I’m in Paris today and tomorrow.”
The semantics for propositional logic must specify how to compute the truth value of
any sentence, given a model. This is done recursively. All sentences are constructed from
atomic sentences and the five connectives; therefore, we need to specify how to compute the
truth of atomic sentences and how to compute the truth of sentences formed with each of the
five connectives. Atomic sentences are easy:
• True is true in every model and False is false in every model.
• The truth value of every other proposition symbol must be specified directly in the
model. For example, in the model m1 given earlier, P1,2 is false.
For complex sentences, we have five rules, which hold for any subsentences P and Q in any
model m (here “iff” means “if and only if”):
• ¬P is true iff P is false in m.
• P ∧ Q is true iff both P and Q are true in m.
• P ∨ Q is true iff either P or Q is true in m.
• P ⇒ Q is true unless P is true and Q is false in m.
• P ⇔ Q is true iff P and Q are both true or both false in m.
TRUTH TABLE The rules can also be expressed with truth tables that specify the truth value of a complex
sentence for each possible assignment of truth values to its components. Truth tables for the
five connectives are given in Figure 7.8. From these tables, the truth value of any sentence s
can be computed with respect to any model m by a simple recursive evaluation. For example,
P      Q      ¬P     P ∧ Q   P ∨ Q   P ⇒ Q   P ⇔ Q
false  false  true   false   false   true    true
false  true   true   false   true    true    false
true   false  false  false   true    false   false
true   true   false  true    true    true    true
Figure 7.8 Truth tables for the five logical connectives. To use the table to compute, for
example, the value of P ∨ Q when P is true and Q is false, first look on the left for the row
where P is true and Q is false (the third row). Then look in that row under the P ∨Q column
to see the result: true.
the sentence ¬P1,2 ∧ (P2,2 ∨ P3,1 ), evaluated in m1 , gives true ∧ (false ∨ true) = true ∧
true = true. Exercise 7.3 asks you to write the algorithm PL-TRUE?(s, m), which computes
the truth value of a propositional logic sentence s in a model m.
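One possible answer to the exercise is sketched below (hypothetical code, not the book's reference implementation), using the tuple encoding from the earlier sketch: a model is a dictionary mapping each symbol to a Boolean.

def pl_true(s, m):
    # Truth value of sentence s in model m. Sentences are 'True',
    # 'False', a proposition-symbol string, or a tagged tuple.
    if s == 'True':
        return True
    if s == 'False':
        return False
    if isinstance(s, str):               # any other proposition symbol
        return m[s]
    op, *args = s
    if op == 'not':
        return not pl_true(args[0], m)
    if op == 'and':
        return pl_true(args[0], m) and pl_true(args[1], m)
    if op == 'or':
        return pl_true(args[0], m) or pl_true(args[1], m)
    if op == 'implies':                  # false only when true => false
        return (not pl_true(args[0], m)) or pl_true(args[1], m)
    if op == 'iff':                      # true when both sides agree
        return pl_true(args[0], m) == pl_true(args[1], m)
    raise ValueError('unknown connective: %r' % (op,))

m1 = {'P12': False, 'P22': False, 'P31': True}
# ¬P1,2 ∧ (P2,2 ∨ P3,1) evaluates to true in m1, as in the text:
assert pl_true(('and', ('not', 'P12'), ('or', 'P22', 'P31')), m1)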
The truth tables for “and,” “or,” and “not” are in close accord with our intuitions about
the English words. The main point of possible confusion is that P ∨ Q is true when P is true
or Q is true or both. A different connective, called “exclusive or” (“xor” for short), yields
false when both disjuncts are true.⁷ There is no consensus on the symbol for exclusive or;
some choices are ∨̇ or ≠ or ⊕.
The truth table for ⇒ may not quite fit one’s intuitive understanding of “P implies Q”
or “if P then Q.” For one thing, propositional logic does not require any relation of causation
or relevance between P and Q. The sentence “5 is odd implies Tokyo is the capital of Japan”
is a true sentence of propositional logic (under the normal interpretation), even though it is
a decidedly odd sentence of English. Another point of confusion is that any implication is
true whenever its antecedent is false. For example, “5 is even implies Sam is smart” is true,
regardless of whether Sam is smart. This seems bizarre, but it makes sense if you think of
“P ⇒ Q” as saying, “If P is true, then I am claiming that Q is true. Otherwise I am making
no claim.” The only way for this sentence to be false is if P is true but Q is false.
The biconditional, P ⇔ Q, is true whenever both P ⇒ Q and Q ⇒ P are true. In
English, this is often written as “P if and only if Q.” Many of the rules of the wumpus world
are best written using ⇔. For example, a square is breezy if a neighboring square has a pit,
and a square is breezy only if a neighboring square has a pit. So we need a biconditional,
B1,1 ⇔ (P1,2 ∨ P2,1 ) ,
where B1,1 means that there is a breeze in [1,1].
Figure 7.9 A truth table constructed for the knowledge base given in the text. KB is true
if R1 through R5 are true, which occurs in just 3 of the 128 rows (the ones underlined in the
right-hand column). In all 3 rows, P1,2 is false, so there is no pit in [1,2]. On the other hand,
there might (or might not) be a pit in [2,2].
(α ∧ β) ≡ (β ∧ α) commutativity of ∧
(α ∨ β) ≡ (β ∨ α) commutativity of ∨
((α ∧ β) ∧ γ) ≡ (α ∧ (β ∧ γ)) associativity of ∧
((α ∨ β) ∨ γ) ≡ (α ∨ (β ∨ γ)) associativity of ∨
¬(¬α) ≡ α double-negation elimination
(α ⇒ β) ≡ (¬β ⇒ ¬α) contraposition
(α ⇒ β) ≡ (¬α ∨ β) implication elimination
(α ⇔ β) ≡ ((α ⇒ β) ∧ (β ⇒ α)) biconditional elimination
¬(α ∧ β) ≡ (¬α ∨ ¬β) De Morgan
¬(α ∨ β) ≡ (¬α ∧ ¬β) De Morgan
(α ∧ (β ∨ γ)) ≡ ((α ∧ β) ∨ (α ∧ γ)) distributivity of ∧ over ∨
(α ∨ (β ∧ γ)) ≡ ((α ∨ β) ∧ (α ∨ γ)) distributivity of ∨ over ∧
Figure 7.11 Standard logical equivalences. The symbols α, β, and γ stand for arbitrary
sentences of propositional logic.
7.5 Propositional Theorem Proving
So far, we have shown how to determine entailment by model checking: enumerating models
and showing that the sentence must hold in all models. In this section, we show how entail-
THEOREM PROVING ment can be done by theorem proving—applying rules of inference directly to the sentences
in our knowledge base to construct a proof of the desired sentence without consulting models.
If the number of models is large but the length of the proof is short, then theorem proving can
be more efficient than model checking.
Before we plunge into the details of theorem-proving algorithms, we will need some
LOGICAL EQUIVALENCE additional concepts related to entailment. The first concept is logical equivalence: two sen-
tences α and β are logically equivalent if they are true in the same set of models. We write
this as α ≡ β. For example, we can easily show (using truth tables) that P ∧ Q and Q ∧ P
are logically equivalent; other equivalences are shown in Figure 7.11. These equivalences
play much the same role in logic as arithmetic identities do in ordinary mathematics. An
alternative definition of equivalence is as follows: any two sentences α and β are equivalent
if and only if each of them entails the other:
α ≡ β if and only if α |= β and β |= α.
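Because equivalence is defined by agreement in every model, it can be checked mechanically for small sentences by enumerating all assignments. The sketch below is hypothetical (the helper symbols_in is our own invention) and reuses pl_true from the earlier sketch.

from itertools import product

def symbols_in(s, acc=None):
    # Collect the proposition symbols occurring in a tuple-encoded sentence.
    acc = set() if acc is None else acc
    if isinstance(s, str):
        if s not in ('True', 'False'):
            acc.add(s)
    else:
        for part in s[1:]:
            symbols_in(part, acc)
    return acc

def equivalent(a, b):
    # a ≡ b iff a and b have the same truth value in every model.
    syms = sorted(symbols_in(a) | symbols_in(b))
    return all(pl_true(a, dict(zip(syms, vals))) ==
               pl_true(b, dict(zip(syms, vals)))
               for vals in product([False, True], repeat=len(syms)))

# De Morgan: ¬(P ∧ Q) ≡ (¬P ∨ ¬Q)
assert equivalent(('not', ('and', 'P', 'Q')),
                  ('or', ('not', 'P'), ('not', 'Q')))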
VALIDITY The second concept we will need is validity. A sentence is valid if it is true in all models. For
TAUTOLOGY example, the sentence P ∨ ¬P is valid. Valid sentences are also known as tautologies—they
are necessarily true. Because the sentence True is true in all models, every valid sentence
is logically equivalent to True. What good are valid sentences? From our definition of
DEDUCTION THEOREM entailment, we can derive the deduction theorem, which was known to the ancient Greeks:
For any sentences α and β, α |= β if and only if the sentence (α ⇒ β) is valid.
(Exercise 7.5 asks for a proof.) Hence, we can decide if α |= β by checking that (α ⇒ β) is
true in every model—which is essentially what the inference algorithm in Figure 7.10 does.
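In code, the deduction theorem turns entailment into a validity check, which is essentially the truth-table method; another hypothetical sketch, reusing pl_true, symbols_in, and product from above:

def valid(s):
    # A sentence is valid iff it is true in all models of its symbols.
    syms = sorted(symbols_in(s))
    return all(pl_true(s, dict(zip(syms, vals)))
               for vals in product([False, True], repeat=len(syms)))

def tt_entails(kb, alpha):
    # Deduction theorem: KB |= alpha iff (KB => alpha) is valid.
    return valid(('implies', kb, alpha))

assert tt_entails(('and', 'P', 'Q'), 'P')    # (P ∧ Q) |= P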
Not all inference rules work in both directions like this. For example, we cannot run Modus
Ponens in the opposite direction to obtain α ⇒ β and α from β.
Let us see how these inference rules and equivalences can be used in the wumpus world.
We start with the knowledge base containing R1 through R5 and show how to prove ¬P1,2 ,
that is, there is no pit in [1,2]. First, we apply biconditional elimination to R2 to obtain
R6 : (B1,1 ⇒ (P1,2 ∨ P2,1 )) ∧ ((P1,2 ∨ P2,1 ) ⇒ B1,1 ) .
Then we apply And-Elimination to R6 to obtain
R7 : ((P1,2 ∨ P2,1 ) ⇒ B1,1 ) .
Logical equivalence for contrapositives gives
R8 : (¬B1,1 ⇒ ¬(P1,2 ∨ P2,1 )) .
Now we can apply Modus Ponens with R8 and the percept R4 (i.e., ¬B1,1 ), to obtain
R9 : ¬(P1,2 ∨ P2,1 ) .
Finally, we apply De Morgan’s rule, giving the conclusion
R10 : ¬P1,2 ∧ ¬P2,1 .
That is, neither [1,2] nor [2,1] contains a pit.
We found this proof by hand, but we can apply any of the search algorithms in Chapter 3
to find a sequence of steps that constitutes a proof. We just need to define a proof problem as
follows:
• INITIAL STATE: the initial knowledge base.
• ACTIONS: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
• RESULT: the result of an action is to add the sentence in the bottom half of the inference rule.
• GOAL: the goal is a state that contains the sentence we are trying to prove.
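Written as code, this formulation is direct. The sketch below is hypothetical and deliberately minimal: the only inference rules provided are Modus Ponens and And-Elimination, states are frozensets of tuple-encoded sentences, and plain breadth-first search stands in for the Chapter 3 algorithms.

from collections import deque

def actions(kb):
    # All (rule name, conclusion) pairs whose premises appear in kb.
    acts = []
    for s in kb:
        if isinstance(s, tuple) and s[0] == 'implies' and s[1] in kb:
            acts.append(('Modus Ponens', s[2]))
        if isinstance(s, tuple) and s[0] == 'and':
            acts.append(('And-Elimination', s[1]))
            acts.append(('And-Elimination', s[2]))
    return acts

def result(kb, action):
    return kb | {action[1]}              # add the rule's conclusion

def proof_search(kb0, alpha):
    # Breadth-first search from the initial KB to a state containing alpha.
    frontier, seen = deque([frozenset(kb0)]), set()
    while frontier:
        kb = frontier.popleft()
        if alpha in kb:                  # goal test
            return True
        if kb in seen:
            continue
        seen.add(kb)
        for a in actions(kb):
            frontier.append(frozenset(result(kb, a)))
    return False

kb0 = {('implies', ('and', 'P', 'Q'), 'R'), ('and', 'P', 'Q')}
print(proof_search(kb0, 'R'))            # True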
Thus, searching for proofs is an alternative to enumerating models. In many practical cases
finding a proof can be more efficient because the proof can ignore irrelevant propositions, no
matter how many of them there are. For example, the proof given earlier leading to ¬P1,2 ∧
¬P2,1 does not mention the propositions B2,1 , P1,1 , P2,2 , or P3,1 . They can be ignored
because the goal proposition, P1,2 , appears only in sentence R2 ; the other propositions in R2
appear only in R4 and R2 ; so R1 , R3 , and R5 have no bearing on the proof. The same would
hold even if we added a million more sentences to the knowledge base; the simple truth-table
algorithm, on the other hand, would be overwhelmed by the exponential explosion of models.
MONOTONICITY One final property of logical systems is monotonicity, which says that the set of en-
tailed sentences can only increase as information is added to the knowledge base.8 For any
sentences α and β,
if KB |= α then KB ∧ β |= α .
8 Nonmonotonic logics, which violate the monotonicity property, capture a common property of human rea-
soning: changing one’s mind. They are discussed in Section 12.6.
For example, suppose the knowledge base contains the additional assertion β stating that there
are exactly eight pits in the world. This knowledge might help the agent draw additional con-
clusions, but it cannot invalidate any conclusion α already inferred—such as the conclusion
that there is no pit in [1,2]. Monotonicity means that inference rules can be applied whenever
suitable premises are found in the knowledge base—the conclusion of the rule must follow
regardless of what else is in the knowledge base.
CLAUSE of the other). Thus, the unit resolution rule takes a clause—a disjunction of literals—and a
literal and produces a new clause. Note that a single literal can be viewed as a disjunction of
UNIT CLAUSE one literal, also known as a unit clause.
RESOLUTION The unit resolution rule can be generalized to the full resolution rule,
ℓ1 ∨ · · · ∨ ℓk,        m1 ∨ · · · ∨ mn
───────────────────────────────────────────────────────────────
ℓ1 ∨ · · · ∨ ℓi−1 ∨ ℓi+1 ∨ · · · ∨ ℓk ∨ m1 ∨ · · · ∨ mj−1 ∨ mj+1 ∨ · · · ∨ mn
where ℓi and mj are complementary literals. This says that resolution takes two clauses and
produces a new clause containing all the literals of the two original clauses except the two
complementary literals. For example, we have
P1,1 ∨ P3,1,        ¬P1,1 ∨ ¬P2,2
─────────────────────────────────
P3,1 ∨ ¬P2,2 .
There is one more technical aspect of the resolution rule: the resulting clause should contain
FACTORING only one copy of each literal.9 The removal of multiple copies of literals is called factoring.
For example, if we resolve (A ∨ B) with (A ∨ ¬B), we obtain (A ∨ A), which is reduced to
just A.
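In code, factoring comes for free if a clause is represented as a set of literals, since set union collapses duplicate literals automatically. The encoding below is hypothetical: a literal is a string, with a leading '-' marking negation, and a clause is a frozenset of literals.

def complement(lit):
    # '-P' for 'P', and 'P' for '-P'.
    return lit[1:] if lit.startswith('-') else '-' + lit

def resolve(c1, c2):
    # All clauses obtainable by resolving clauses c1 and c2 on one
    # complementary pair; frozenset union performs the factoring.
    resolvents = []
    for lit in c1:
        if complement(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {complement(lit)}))
    return resolvents

# P1,1 ∨ P3,1 resolved with ¬P1,1 ∨ ¬P2,2 gives P3,1 ∨ ¬P2,2:
print(resolve(frozenset({'P11', 'P31'}), frozenset({'-P11', '-P22'})))
# (A ∨ B) resolved with (A ∨ ¬B) gives, after factoring, just A:
print(resolve(frozenset({'A', 'B'}), frozenset({'A', '-B'})))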
The soundness of the resolution rule can be seen easily by considering the literal ℓi that
is complementary to literal mj in the other clause. If ℓi is true, then mj is false, and hence
m1 ∨ · · · ∨ mj−1 ∨ mj+1 ∨ · · · ∨ mn must be true, because m1 ∨ · · · ∨ mn is given. If ℓi is
false, then ℓ1 ∨ · · · ∨ ℓi−1 ∨ ℓi+1 ∨ · · · ∨ ℓk must be true because ℓ1 ∨ · · · ∨ ℓk is given. Now
ℓi is either true or false, so one or other of these conclusions holds—exactly as the resolution
rule states.
What is more surprising about the resolution rule is that it forms the basis for a family
of complete inference procedures. A resolution-based theorem prover can, for any sentences
α and β in propositional logic, decide whether α |= β. The next two subsections explain
how resolution accomplishes this.
A resolution algorithm
Inference procedures based on resolution work by using the principle of proof by contradic-
tion introduced on page 250. That is, to show that KB |= α, we show that (KB ∧ ¬α) is
unsatisfiable. We do this by proving a contradiction.
A resolution algorithm is shown in Figure 7.12. First, (KB ∧ ¬α) is converted into
CNF. Then, the resolution rule is applied to the resulting clauses. Each pair that contains
complementary literals is resolved to produce a new clause, which is added to the set if it is
not already present. The process continues until one of two things happens:
• there are no new clauses that can be added, in which case KB does not entail α; or,
• two clauses resolve to yield the empty clause, in which case KB entails α.
The empty clause—a disjunction of no disjuncts—is equivalent to False because a disjunction
is true only if at least one of its disjuncts is true. Another way to see that an empty clause
represents a contradiction is to observe that it arises only from resolving two complementary
unit clauses such as P and ¬P .
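The loop can be sketched as follows (hypothetical code, reusing resolve from the sketch above; the conversion of KB ∧ ¬α to CNF is assumed to have been done already):

def pl_resolution(clauses):
    # clauses: CNF(KB ∧ ¬α) as a collection of frozensets of literals.
    # Returns True iff KB |= α, i.e., iff the empty clause is derivable.
    clauses = set(clauses)
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci == cj:
                    continue
                for resolvent in resolve(ci, cj):
                    if not resolvent:        # empty clause: contradiction
                        return True
                    new.add(resolvent)
        if new.issubset(clauses):            # no new clauses: no entailment
            return False
        clauses |= new

# CNF of (B1,1 ⇔ (P1,2 ∨ P2,1)) ∧ ¬B1,1 ∧ ¬¬P1,2, as in Figure 7.13:
cnf = [frozenset({'-B11', 'P12', 'P21'}), frozenset({'-P12', 'B11'}),
       frozenset({'-P21', 'B11'}), frozenset({'-B11'}), frozenset({'P12'})]
print(pl_resolution(cnf))                    # True, so KB |= ¬P1,2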
We can apply the resolution procedure to a very simple inference in the wumpus world.
When the agent is in [1,1], there is no breeze, so there can be no pits in neighboring squares.
The relevant knowledge base is
KB = R2 ∧ R4 = (B1,1 ⇔ (P1,2 ∨ P2,1 )) ∧ ¬B1,1
and we wish to prove α which is, say, ¬P1,2 . When we convert (KB ∧ ¬α) into CNF, we
obtain the clauses shown at the top of Figure 7.13. The second row of the figure shows
clauses obtained by resolving pairs in the first row. Then, when P1,2 is resolved with ¬P1,2 ,
we obtain the empty clause, shown as a small square. Inspection of Figure 7.13 reveals that
many resolution steps are pointless. For example, the clause B1,1 ∨ ¬B1,1 ∨ P1,2 is equivalent
to True ∨ P1,2 which is equivalent to True. Deducing that True is true is not very helpful.
Therefore, any clause in which two complementary literals appear can be discarded.
Figure 7.12 A simple resolution algorithm for propositional logic. The function
PL-RESOLVE returns the set of all possible clauses obtained by resolving its two inputs.
[Figure 7.13: top row, the clauses of CNF(KB ∧ ¬α): ¬P2,1 ∨ B1,1, ¬B1,1 ∨ P1,2 ∨ P2,1, ¬P1,2 ∨ B1,1, ¬B1,1, and P1,2. Second row: the clauses obtained by resolving pairs from the top row, ending with the empty clause.]
Figure 7.13 Partial application of PL-RESOLUTION to a simple inference in the wumpus
world. ¬P1,2 is shown to follow from the first four clauses in the top row.
Completeness of resolution
To conclude our discussion of resolution, we now show why PL-RESOLUTION is complete.
RESOLUTION CLOSURE To do this, we introduce the resolution closure RC(S) of a set of clauses S, which is the set
of all clauses derivable by repeated application of the resolution rule to clauses in S or their
derivatives. The resolution closure is what PL-RESOLUTION computes as the final value of
the variable clauses. It is easy to see that RC(S) must be finite, because there are only finitely
many distinct clauses that can be constructed out of the symbols P1, . . . , Pk that appear in S.
(Notice that this would not be true without the factoring step that removes multiple copies of
literals.) Hence, PL-RESOLUTION always terminates.
The completeness theorem for resolution in propositional logic is called the ground
GROUND RESOLUTION THEOREM resolution theorem:
If a set of clauses is unsatisfiable, then the resolution closure of those clauses
contains the empty clause.
This theorem is proved by demonstrating its contrapositive: if the closure RC(S) does not
contain the empty clause, then S is satisfiable. In fact, we can construct a model for S with
suitable truth values for P1 , . . . , Pk . The construction procedure is as follows:
For i from 1 to k,
– If a clause in RC (S) contains the literal ¬Pi and all its other literals are false under
the assignment chosen for P1 , . . . , Pi−1 , then assign false to Pi .
– Otherwise, assign true to Pi .
This assignment to P1 , . . . , Pk is a model of S. To see this, assume the opposite—that, at
some stage i in the sequence, assigning symbol Pi causes some clause C to become false.
For this to happen, it must be the case that all the other literals in C must already have been
falsified by assignments to P1, . . . , Pi−1. Thus, C must now look like either (false ∨ false ∨
· · · ∨ false ∨ Pi) or like (false ∨ false ∨ · · · ∨ false ∨ ¬Pi). If just one of these two is in RC(S), then
the algorithm will assign the appropriate truth value to Pi to make C true, so C can only be
falsified if both of these clauses are in RC(S). Now, since RC(S) is closed under resolution,
it will contain the resolvent of these two clauses, and that resolvent will have all of its literals
already falsified by the assignments to P1 , . . . , Pi−1 . This contradicts our assumption that
the first falsified clause appears at stage i. Hence, we have proved that the construction never
falsifies a clause in RC(S); that is, it produces a model of RC(S) and thus a model of S
itself (since S is contained in RC(S)).
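The construction procedure itself is easy to render as code (another hypothetical sketch, using the same string-literal clause encoding as before):

def construct_model(closure, symbols):
    # Given a resolution closure containing no empty clause, build a
    # model by assigning the symbols in order, as in the proof above.
    model = {}
    def lit_false(lit):
        # A literal is falsified only once its symbol has been assigned.
        sym = lit.lstrip('-')
        if sym not in model:
            return False
        return model[sym] if lit.startswith('-') else not model[sym]
    for p in symbols:                        # fixed order P1, ..., Pk
        forced = any('-' + p in c and
                     all(lit_false(l) for l in c if l != '-' + p)
                     for c in closure)
        model[p] = not forced                # assign false only when forced
    return model

# S = {P ∨ Q, ¬P ∨ Q}; its closure adds the factored resolvent Q:
closure = [frozenset({'P', 'Q'}), frozenset({'-P', 'Q'}), frozenset({'Q'})]
print(construct_model(closure, ['P', 'Q']))  # {'P': True, 'Q': True}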
Figure 7.14 A grammar for conjunctive normal form, Horn clauses, and definite clauses.
A clause such as A ∧ B ⇒ C is still a definite clause when it is written as ¬A ∨ ¬B ∨ C,
but only the former is considered the canonical form for definite clauses. One more class is
the k-CNF sentence, which is a CNF sentence where each clause has at most k literals.
FORWARD-CHAINING 2. Inference with Horn clauses can be done through the forward-chaining and backward-
BACKWARD-CHAINING chaining algorithms, which we explain next. Both of these algorithms are natural,
in that the inference steps are obvious and easy for humans to follow. This type of
inference is the basis for logic programming, which is discussed in Chapter 9.
3. Deciding entailment with Horn clauses can be done in time that is linear in the size of
the knowledge base—a pleasant surprise.
Figure 7.15 The forward-chaining algorithm for propositional logic. The agenda keeps
track of symbols known to be true but not yet “processed.” The count table keeps track of
how many premises of each implication are as yet unknown. Whenever a new symbol p from
the agenda is processed, the count is reduced by one for each implication in whose premise
p appears (easily identified in constant time with appropriate indexing.) If a count reaches
zero, all the premises of the implication are known, so its conclusion can be added to the
agenda. Finally, we need to keep track of which symbols have been processed; a symbol that
is already in the set of inferred symbols need not be added to the agenda again. This avoids
redundant work and prevents loops caused by implications such as P ⇒ Q and Q ⇒ P .
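A sketch matching this description (hypothetical code, not the book's PL-FC-ENTAILS, but following the agenda-and-count scheme of the caption): a definite clause is a (premises, conclusion) pair, and a known fact is a clause with no premises.

from collections import deque

def pl_fc_entails(clauses, q):
    # count[i]: number of premises of clause i not yet known to be true.
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(concl for prem, concl in clauses if not prem)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:                    # already processed: skip
            continue
        inferred.add(p)
        for i, (prem, concl) in enumerate(clauses):
            if p in prem:
                count[i] -= 1
                if count[i] == 0:            # all premises now known
                    agenda.append(concl)
    return False

# The Horn clauses of Figure 7.16(a), with the facts A and B:
kb = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
      (['A', 'P'], 'L'), (['A', 'B'], 'L'), ([], 'A'), ([], 'B')]
print(pl_fc_entails(kb, 'Q'))                # True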
It is easy to see that forward chaining is sound: every inference is essentially an appli-
cation of Modus Ponens. Forward chaining is also complete: every entailed atomic sentence
will be derived. The easiest way to see this is to consider the final state of the inferred table
FIXED POINT (after the algorithm reaches a fixed point where no new inferences are possible). The table
contains true for each symbol inferred during the process, and false for all other symbols.
We can view the table as a logical model; moreover, every definite clause in the original KB is
true in this model. To see this, assume the opposite, namely that some clause a1 ∧. . .∧ak ⇒ b
is false in the model. Then a1 ∧ . . . ∧ ak must be true in the model and b must be false in
the model. But this contradicts our assumption that the algorithm has reached a fixed point!
We can conclude, therefore, that the set of atomic sentences inferred at the fixed point defines
a model of the original KB. Furthermore, any atomic sentence q that is entailed by the KB
must be true in all its models and in this model in particular. Hence, every entailed atomic
sentence q must be inferred by the algorithm.
DATA-DRIVEN Forward chaining is an example of the general concept of data-driven reasoning—that
is, reasoning in which the focus of attention starts with the known data. It can be used within
an agent to derive conclusions from incoming percepts, often without a specific query in
mind. For example, the wumpus agent might TELL its percepts to the knowledge base using
P ⇒ Q
L ∧ M ⇒ P
B ∧ L ⇒ M
A ∧ P ⇒ L
A ∧ B ⇒ L
A
B
[The AND–OR graph of panel (b) is not reproduced here.]
Figure 7.16 (a) A set of Horn clauses. (b) The corresponding AND – OR graph.
an incremental forward-chaining algorithm in which new facts can be added to the agenda to
initiate new inferences. In humans, a certain amount of data-driven reasoning occurs as new
information arrives. For example, if I am indoors and hear rain starting to fall, it might occur
to me that the picnic will be canceled. Yet it will probably not occur to me that the seventeenth
petal on the largest rose in my neighbor’s garden will get wet; humans keep forward chaining
under careful control, lest they be swamped with irrelevant consequences.
The backward-chaining algorithm, as its name suggests, works backward from the
query. If the query q is known to be true, then no work is needed. Otherwise, the algorithm
finds those implications in the knowledge base whose conclusion is q. If all the premises of
one of those implications can be proved true (by backward chaining), then q is true. When
applied to the query Q in Figure 7.16, it works back down the graph until it reaches a set of
known facts, A and B, that forms the basis for a proof. The algorithm is essentially identical
to the AND-OR-GRAPH-SEARCH algorithm in Figure 4.11. As with forward chaining, an
efficient implementation runs in linear time.
GOAL-DIRECTED REASONING Backward chaining is a form of goal-directed reasoning. It is useful for answering
specific questions such as “What shall I do now?” and “Where are my keys?” Often, the cost
of backward chaining is much less than linear in the size of the knowledge base, because the
process touches only relevant facts.
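A recursive sketch of backward chaining (hypothetical code, using the same (premises, conclusion) encoding and the kb list from the forward-chaining sketch); the spine argument blocks loops such as P ⇒ Q and Q ⇒ P:

def pl_bc_entails(clauses, q, spine=frozenset()):
    # Prove q by finding a clause whose conclusion is q and recursively
    # proving all of its premises; a fact ([], q) succeeds immediately.
    if q in spine:
        return False                         # already trying to prove q
    return any(concl == q and
               all(pl_bc_entails(clauses, p, spine | {q}) for p in prem)
               for prem, concl in clauses)

print(pl_bc_entails(kb, 'Q'))                # True, touching only relevant clauses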
7.6 Effective Propositional Model Checking
In this section, we describe two families of efficient algorithms for general propositional
inference based on model checking: One approach based on backtracking search, and one
on local hill-climbing search. These algorithms are part of the “technology” of propositional
logic. This section can be skimmed on a first reading of the chapter.