Module 3 Notes_Part2
• Text Book 1: Chapter 4 – 4.1, 4.2; Chapter 7 – 7.1, 7.2, 7.3, 7.4, 7.5
• Now that we have defined the semantics for propositional logic, we can
construct a knowledge base for the wumpus world.
• For now, we need the following symbols for each [x, y] location: Px,y is true if there is a pit in [x, y], and Bx,y is true if the agent perceives a breeze in [x, y].
• The knowledge base contains the following sentences:
• R1 : ¬P1,1 (there is no pit in [1,1]).
• R2 : B1,1 ⇔ (P1,2 ∨ P2,1) (a square is breezy if and only if there is a pit in a neighboring square).
• R3 : B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1).
• The percepts for the first two squares visited give:
• R4 : ¬B1,1 .
• R5 : B2,1 .
• Checking all possible models of the KB, we find that ¬P1,2 holds in every one of the three models in which the KB is true; hence there is no pit in [1,2].
• On the other hand, P2,2 is true in two of the three models and false in one, so we cannot yet tell whether there is a pit in [2,2].
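• As a concrete illustration of model checking, here is a minimal Python sketch of truth-table enumeration in the spirit of the textbook's TT-ENTAILS? algorithm. The encoding (symbols as strings such as "P12", sentences as Boolean functions over a model dictionary) is our own assumption for illustration, not the book's code:

    from itertools import product

    def tt_entails(symbols, kb, alpha):
        """Check KB |= alpha by enumerating every model (truth-table method)."""
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if kb(model) and not alpha(model):  # a model of KB where alpha fails
                return False
        return True

    # Wumpus KB R1..R5 over pit symbols Pxy and breeze symbols Bxy.
    def kb(m):
        return (not m["P11"]                                        # R1
                and m["B11"] == (m["P12"] or m["P21"])              # R2
                and m["B21"] == (m["P11"] or m["P22"] or m["P31"])  # R3
                and not m["B11"]                                    # R4
                and m["B21"])                                       # R5

    symbols = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]
    print(tt_entails(symbols, kb, lambda m: not m["P12"]))  # True: no pit in [1,2]
    print(tt_entails(symbols, kb, lambda m: not m["P22"]))  # False: [2,2] undecided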
• If the number of models is large but the length of the proof is short, then
theorem proving can be more efficient than model checking.
• A useful related fact: α |= β if and only if the sentence (α ∧ ¬β) is unsatisfiable. This is the basis of proof by refutation (proof by contradiction).
• This section covers inference rules that can be applied to derive a proof—
a chain of conclusions that leads to the desired goal.
• The best-known rule is called Modus Ponens (Latin for "mode that affirms") and is written

      α ⇒ β,   α
      -----------
           β
• The notation means that, whenever any sentences of the form α ⇒ β and α
are given, then the sentence β can be inferred.
• The Modus Ponens rule is one of the most important rules of inference. It states that if P and P → Q are true, then we can infer that Q is true.
• Example: from "If I am sleepy then I go to bed" (P → Q) and "I am sleepy" (P), we can infer "I go to bed" (Q).
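• A minimal sketch of this rule as code (the pair encoding of an implication is our own illustrative assumption):

    def modus_ponens(implication, fact):
        """Given ("P", "Q") meaning P -> Q, and the fact P, infer Q."""
        premise, conclusion = implication
        return conclusion if fact == premise else None

    print(modus_ponens(("P", "Q"), "P"))  # 'Q'
    print(modus_ponens(("P", "Q"), "R"))  # None: the rule does not apply

• Another useful inference rule is And-Elimination, which says that, from a conjunction, any of the conjuncts can be inferred: from α ∧ β, infer α.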
• By considering the possible truth values of α and β, one can show easily
that Modus Ponens and And-Elimination are sound once and for all.
• These rules can then be used in any particular instances where they
apply, generating sound inferences without the need for enumerating
models.
• Let us see how these inference rules and equivalences can be used in the
wumpus world. We start with the knowledge base containing R1 through
R5 and show how to prove ¬P1,2, that is, there is no pit in [1,2].
• First, we apply biconditional elimination to R2 to obtain
• R6 : (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1) .
• Then we apply And-Elimination to R6 to obtain
• R7 : (P1,2 ∨ P2,1) ⇒ B1,1 .
• Logical equivalence for contrapositives gives
• R8 : ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1) .
• Now we can apply Modus Ponens with R8 and the percept R4 (i.e., ¬B1,1) to obtain
• R9 : ¬(P1,2 ∨ P2,1) .
• Finally, we apply De Morgan's rule, giving
• R10 : ¬P1,2 ∧ ¬P2,1 . That is, neither [1,2] nor [2,1] contains a pit.
• We found this proof by hand, but we can apply any of the search algorithms in Chapter 3 to find a sequence of steps that constitutes a proof. We just need to define a proof problem as follows (a search sketch follows the list):
• INITIAL STATE: the initial knowledge base.
• ACTIONS: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
• RESULT: the result of an action is to add the sentence in the bottom half
of the inference rule.
• GOAL: the goal is a state that contains the sentence we are trying to
prove.
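• Here is a minimal breadth-first sketch of that proof problem. The state representation (a frozenset of sentences) and the single hypothetical Modus Ponens rule are our own illustrative assumptions:

    from collections import deque

    def proof_search(initial_kb, rules, goal):
        """BFS over knowledge-base states; each action adds one derived sentence."""
        start = frozenset(initial_kb)
        frontier, seen = deque([start]), {start}
        while frontier:
            state = frontier.popleft()
            if goal in state:               # GOAL: state contains the target sentence
                return state
            for rule in rules:              # ACTIONS: every applicable inference rule
                for new in rule(state):     # RESULT: add the rule's conclusion
                    child = state | {new}
                    if child not in seen:
                        seen.add(child)
                        frontier.append(child)
        return None

    # One rule: Modus Ponens over implications encoded as ("=>", p, q).
    def mp(state):
        return {s[2] for s in state
                if isinstance(s, tuple) and s[0] == "=>" and s[1] in state}

    kb0 = {("=>", "P", "Q"), ("=>", "Q", "R"), "P"}
    print(proof_search(kb0, [mp], "R") is not None)  # True: P, P=>Q, Q=>R |- R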
• In many practical cases finding a proof can be more efficient because the
proof can ignore irrelevant propositions, no matter how many of them
there are.
• One final property of logical systems is monotonicity, which says that the
set of entailed sentences can only increase as information is added to the
knowledge base.
• In symbols: if KB |= α, then KB ∧ β |= α .
• This knowledge might help the agent draw additional conclusions, but it
cannot invalidate any conclusion α already inferred—such as the
conclusion that there is no pit in [1,2].
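• A quick check of monotonicity, reusing the tt_entails, kb, and symbols definitions from the sketch above (the added fact β = P3,1 is an arbitrary choice for illustration):

    # Adding a conjunct beta to the KB cannot invalidate the earlier conclusion.
    def kb_plus_beta(m):
        return kb(m) and m["P31"]   # beta: suppose we additionally learn P3,1

    print(tt_entails(symbols, kb_plus_beta, lambda m: not m["P12"]))  # still True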
• We have argued that the inference rules covered so far are sound, but we
have not discussed the question of completeness for the inference
algorithms that use them.
• Let us consider the steps leading up to Figure 7.4(a): the agent returns
from [2,1] to [1,1] and then goes to [1,2], where it perceives a stench,
but no breeze. We add the following facts to the knowledge base:
• R11 : ¬B1,2 .
• R12 : B1,2 ⇔ (P1,1 ∨ P2,2 ∨ P1,3) .
• By the same process that led to R10 earlier, we can now derive the
absence of pits in [2,2] and [1,3] (remember that [1,1] is already
known to be pitless):
• R13 : ¬P2,2 .
• R14 : ¬P1,3 .
• We can also apply biconditional elimination to R3, followed by Modus Ponens with R5, to obtain the fact that there is a pit in [1,1], [2,2], or [3,1]:
• R15 : P1,1 ∨ P2,2 ∨ P3,1 .
• Now comes the first application of the resolution rule: the literal ¬P2,2 in R13 resolves with the literal P2,2 in R15 to give the resolvent
• R16 : P1,1 ∨ P3,1 .
• In English: if there’s a pit in one of [1,1], [2,2], and [3,1] and it’s not in [2,2], then it’s in [1,1] or [3,1].
• Similarly, the literal ¬P1,1 in R1 resolves with the literal P1,1 in R16 to
give R17 : P3,1 .
• In English: if there’s a pit in [1,1] or [3,1] and it’s not in [1,1], then
it’s in [3,1].
• These last two inference steps are examples of the unit resolution inference rule,

      ℓ1 ∨ ··· ∨ ℓk,   m
      ---------------------------------
      ℓ1 ∨ ··· ∨ ℓi−1 ∨ ℓi+1 ∨ ··· ∨ ℓk

  where each ℓ is a literal and ℓi and m are complementary literals (i.e., one is the negation of the other).
• The unit resolution rule can be generalized to the full resolution rule,

      ℓ1 ∨ ··· ∨ ℓk,   m1 ∨ ··· ∨ mn
      ----------------------------------------------------------------
      ℓ1 ∨ ··· ∨ ℓi−1 ∨ ℓi+1 ∨ ··· ∨ ℓk ∨ m1 ∨ ··· ∨ mj−1 ∨ mj+1 ∨ ··· ∨ mn

  where ℓi and mj are complementary literals.
• There is one more technical aspect of the resolution rule: the resulting clause should contain only one copy of each literal. The removal of multiple copies of literals is called factoring.
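• A minimal sketch of the resolution step in Python. Representing a clause as a frozenset of (symbol, polarity) literals is our own choice; it makes factoring automatic, since a set keeps only one copy of each literal:

    def resolve(ci, cj):
        """Return all resolvents of two clauses (frozensets of literals).
        A literal is ('P', True) for P and ('P', False) for not-P."""
        resolvents = []
        for (sym, pos) in ci:
            if (sym, not pos) in cj:  # complementary pair found
                resolvents.append((ci - {(sym, pos)}) | (cj - {(sym, not pos)}))
        return resolvents

    # R15 resolved with R13: (P1,1 ∨ P2,2 ∨ P3,1) with ¬P2,2 yields R16.
    R15 = frozenset({("P11", True), ("P22", True), ("P31", True)})
    R13 = frozenset({("P22", False)})
    print(resolve(R15, R13))  # [frozenset({('P11', True), ('P31', True)})]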
A resolution algorithm
• The process continues until one of two things happens:
  – there are no new clauses that can be added, in which case KB does not entail α; or,
  – two clauses resolve to yield the empty clause, in which case KB entails α.
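• A minimal sketch of that loop, reusing the resolve helper above. Converting (KB ∧ ¬α) into CNF clauses is assumed to have been done by hand:

    def pl_resolution(clauses):
        """clauses: CNF of (KB and not-alpha). Returns True iff KB entails alpha."""
        clauses = set(clauses)
        while True:
            new = set()
            for ci in clauses:
                for cj in clauses:
                    if ci != cj:
                        for r in resolve(ci, cj):
                            if not r:            # empty clause: contradiction
                                return True      # KB entails alpha
                            new.add(r)
            if new <= clauses:                   # fixed point: no new clauses
                return False                     # KB does not entail alpha
            clauses |= new

    # Does {P, P => Q} entail Q?  CNF of (KB and not-Q): {P}, {¬P ∨ Q}, {¬Q}.
    cnf = [frozenset({("P", True)}),
           frozenset({("P", False), ("Q", True)}),
           frozenset({("Q", False)})]
    print(pl_resolution(cnf))  # True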
Completeness of resolution
• The completeness proof uses the resolution closure RC(S) of a set of clauses S, which is the set of all clauses derivable by repeated application of the resolution rule to clauses in S or their derivatives.
• It is easy to see that RC(S) must be finite, because there are only finitely many distinct clauses that can be constructed out of the symbols P1, ..., Pk that appear in S.
• (Notice that this would not be true without the factoring step that removes
multiple copies of literals.)
• If the closure RC(S) does not contain the empty clause, then S is satisfiable; we can construct a model for S by assigning truth values to P1, ..., Pk as follows:
• For i from 1 to k:
  – If a clause in RC(S) contains the literal ¬Pi and all its other literals are false under the assignment chosen for P1, ..., Pi−1, then assign false to Pi.
  – Otherwise, assign true to Pi.
• We claim that this assignment is a model of S. Suppose the opposite: that at some stage i, assigning a value to Pi causes some clause C to become false.
• For this to happen, all the other literals in C must already have been falsified by the assignments to P1, ..., Pi−1.
• Thus, C must now look like either (false ∨ false ∨ ··· ∨ false ∨ Pi) or like (false ∨ false ∨ ··· ∨ false ∨ ¬Pi).
• If just one of these two is in RC(S), then the algorithm will assign the
appropriate truth value to Pi to make C true, so C can only be
falsified if both of these clauses are in RC(S).
• But if both of these clauses are in RC(S), then so is their resolvent (RC(S) is closed under resolution), and that resolvent has all of its literals already falsified by the assignments to P1, ..., Pi−1.
• This contradicts our assumption that the first falsified clause appears at stage i. Hence the construction never falsifies a clause in RC(S), so it produces a model of S.
• A definite clause is a disjunction of literals of which exactly one is positive; a Horn clause is a disjunction of literals of which at most one is positive.
• So all definite clauses are Horn clauses, as are clauses with no positive literals; these are called goal clauses.
• Horn clauses are closed under resolution: if you resolve two Horn clauses, you get back a Horn clause (illustrated in the sketch below).
• Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal; for example, (¬L1,1 ∨ ¬Breeze ∨ B1,1) can be written as (L1,1 ∧ Breeze) ⇒ B1,1.
• In Horn form, the premise is called the body and the conclusion is called the head.
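• A quick illustration of the closure property, reusing the resolve helper from the resolution sketch above:

    def is_horn(clause):
        """A Horn clause has at most one positive literal."""
        return sum(1 for (_, pos) in clause if pos) <= 1

    c1 = frozenset({("P", False), ("Q", True)})  # ¬P ∨ Q, i.e. P ⇒ Q (definite)
    c2 = frozenset({("Q", False), ("R", True)})  # ¬Q ∨ R, i.e. Q ⇒ R (definite)
    for r in resolve(c1, c2):
        print(r, is_horn(r))                     # ¬P ∨ R is again Horn: True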
• Inference with Horn clauses can be done through the forward-chaining and backward-chaining algorithms.
• Both of these algorithms are natural, in that the inference steps are obvious and easy for humans to follow.
• Deciding entailment with Horn clauses can be done in time that is linear
in the size of the knowledge base—a pleasant surprise.
• For example, if L1,1 and Breeze are known and (L1,1 ∧ Breeze) ⇒
B1,1 is in the knowledge base, then B1,1 can be added.
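• A minimal Python sketch in the spirit of the textbook's PL-FC-ENTAILS? algorithm; the (body, head) clause encoding is our own assumption:

    from collections import deque

    def fc_entails(clauses, facts, q):
        """Forward chaining over definite clauses.
        clauses: list of (body, head); body is a list of symbols."""
        count = {i: len(body) for i, (body, _) in enumerate(clauses)}
        inferred = set()
        agenda = deque(facts)
        while agenda:
            p = agenda.popleft()
            if p == q:
                return True
            if p in inferred:
                continue
            inferred.add(p)
            for i, (body, head) in enumerate(clauses):
                if p in body:
                    count[i] -= 1
                    if count[i] == 0:     # every premise proved: fire the rule
                        agenda.append(head)
        return False

    # The example above: (L1,1 ∧ Breeze) ⇒ B1,1 with L1,1 and Breeze known.
    clauses = [(["L11", "Breeze"], "B11")]
    print(fc_entails(clauses, ["L11", "Breeze"], "B11"))  # True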
• It is easy to see how forward chaining works in the graph. The known
leaves (here, A and B) are set, and inference propagates up the graph as
far as possible.
• Forward chaining is sound (every inference is an application of Modus Ponens) and complete (every entailed atomic sentence will be derived).
• The easiest way to see this is to consider the final state of the inferred table (after the algorithm reaches a fixed point where no new inferences are possible).
• The table contains true for each symbol inferred during the process, and
false for all other symbols.
• We can view the table as a logical model; moreover, every definite clause in the original KB is true in this model.
• To see this, suppose the opposite: some clause a1 ∧ ··· ∧ ak ⇒ b is false in the model. Then a1 ∧ ··· ∧ ak must be true in the model and b must be false, so the algorithm could still fire that clause and add b.
• But this contradicts our assumption that the algorithm has reached a fixed point! We can conclude, therefore, that the set of atomic sentences inferred at the fixed point defines a model of the original KB. Any atomic sentence q entailed by the KB must be true in this model in particular, so it must appear in the inferred set.
• For example, the wumpus agent might TELL its percepts to the
knowledge base using an incremental forward-chaining algorithm in
which new facts can be added to the agenda to initiate new
inferences.
• Forward chaining is an example of data-driven reasoning, in which the focus of attention starts with the known data. For example, if I am indoors and hear rain starting to fall, it might occur to me that the picnic will be canceled.
• Yet it will probably not occur to me that the seventeenth petal on the largest rose in my neighbor’s garden will get wet; humans keep forward chaining under careful control.
• Backward chaining, as its name suggests, works backward from the query q: if q is known to be true, no work is needed; otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q.
• If all the premises of one of those implications can be proved true (by backward chaining), then q is true.
• When applied to the query Q in Figure 7.16, it works back down the
graph until it reaches a set of known facts, A and B, that forms the basis
for a proof.
• Often, the cost of backward chaining is much less than linear in the
size of the knowledge base, because the process touches only relevant
facts.
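• A minimal recursive sketch of backward chaining, using the same hypothetical (body, head) encoding as the forward-chaining sketch; only clauses whose head matches the current goal are ever examined:

    def bc_entails(clauses, facts, q, seen=frozenset()):
        """Prove q by proving the body of some clause whose head is q.
        `seen` guards against looping on cyclic rule sets."""
        if q in facts:
            return True
        if q in seen:
            return False
        for body, head in clauses:
            if head == q and all(
                    bc_entails(clauses, facts, p, seen | {q}) for p in body):
                return True
        return False

    # Working back from query Q through P ⇒ Q and (A ∧ B) ⇒ P to facts A, B.
    clauses = [(["P"], "Q"), (["A", "B"], "P")]
    print(bc_entails(clauses, {"A", "B"}, "Q"))  # True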