
MODULE 4

Informed (Heuristic) Search, Heuristic Functions, Logical Agents
Using problem-specific knowledge to aid searching

• Without incorporating knowledge into searching, one can have no bias (i.e. a preference) on the search space: search everywhere!

• Without a bias, one is forced to look everywhere to find the answer. Hence, the complexity of uninformed search is intractable.
Using problem-specific knowledge to aid searching

• With knowledge, one can search the state space as if given “hints” when exploring a maze.
• Heuristic information in search = hints
• Leads to a dramatic speed-up in efficiency.

[Figure: a search tree over nodes A-O, with a callout: “Search only in this subtree!!”]
More formally, why do heuristic functions work?

• In any search problem where there are at most b choices at each node and the goal lies at depth d, a naive search algorithm would, in the worst case, have to examine around O(b^d) nodes before finding a solution (exponential time complexity).

• Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from b to (ideally) a low constant b* such that 1 ≤ b* < b.
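As a concrete illustration of the effective branching factor: if a run of a search generated N nodes to find a solution at depth d, then b* is the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes, i.e. N + 1 = 1 + b* + (b*)^2 + … + (b*)^d. A minimal Python sketch (not from the slides; the function name and the bisection approach are illustrative) that solves this numerically:

def effective_branching_factor(total_nodes: int, depth: int) -> float:
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.

    total_nodes: number of nodes generated by the search (N)
    depth:       depth d of the solution
    """
    def series_sum(b: float) -> float:
        return sum(b ** i for i in range(depth + 1))  # 1 + b + ... + b^d

    lo, hi = 1.0, float(total_nodes)   # b* lies between 1 and N
    for _ in range(100):               # bisection to high precision
        mid = (lo + hi) / 2
        if series_sum(mid) < total_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: 52 nodes generated to reach a solution at depth 5
print(round(effective_branching_factor(52, 5), 2))  # roughly 1.92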
Informed Search Strategies

• Best First Search

An implementation of Best First Search:

function BEST-FIRST-SEARCH(problem, eval-fn)
    returns a solution sequence, or failure
    queuing-fn <- a function that sorts nodes by eval-fn
    return GENERIC-SEARCH(problem, queuing-fn)
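The pseudocode above can be realised with a priority queue. A minimal Python sketch, assuming the problem is given as a successors(state) function yielding (next_state, step_cost) pairs; all names here are illustrative rather than from the slides:

import heapq

def best_first_search(start, goal, successors, eval_fn):
    """Generic best-first search: always expand the frontier node with the
    lowest eval_fn value."""
    frontier = [(eval_fn(start), start, [start])]  # (priority, state, path)
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt, _cost in successors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (eval_fn(nxt), nxt, path + [nxt]))
    return None  # failure

Greedy search and A* below are both instances of this scheme; only the evaluation function changes.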
Informed Search Strategies

• Greedy Search
• eval-fn: f(n) = h(n)
Greedy Search

[Figure: search graph; Start = A, Goal = I]
Edge costs: A-B 75, A-C 118, A-E 140, C-D 111, E-G 80, E-F 99, G-H 97, F-I 211, H-I 101

Heuristic h(n) (straight-line distance to the goal):
A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0

f(n) = h(n) = straight-line distance heuristic
Greedy Search: Tree Search

Expanding the frontier node with the smallest h-value at each step:

Step 1: expand Start A: children C [329], B [374], E [253]
Step 2: expand E [253]: children G [193], F [178], A [366]
Step 3: expand F [178]: children E [253], I [0]
Step 4: expand I [0]: Goal reached.

Path found: A - E - F - I
Sum of h-values along the path: h(E) + h(F) + h(I) = 253 + 178 + 0 = 431
Actual path cost: dist(A-E-F-I) = 140 + 99 + 211 = 450
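The expansion above is easy to reproduce in code. A compact sketch, assuming the slide's graph is encoded as an adjacency dict GRAPH and the straight-line-distance table as H; the frontier is ordered by h(n) alone:

import heapq

# Graph and heuristic from the slides
GRAPH = {
    'A': [('B', 75), ('C', 118), ('E', 140)],
    'B': [('A', 75)],
    'C': [('A', 118), ('D', 111)],
    'D': [('C', 111)],
    'E': [('A', 140), ('G', 80), ('F', 99)],
    'F': [('E', 99), ('I', 211)],
    'G': [('E', 80), ('H', 97)],
    'H': [('G', 97), ('I', 101)],
    'I': [('F', 211), ('H', 101)],
}
H = {'A': 366, 'B': 374, 'C': 329, 'D': 244, 'E': 253,
     'F': 178, 'G': 193, 'H': 98, 'I': 0}

def greedy_search(start, goal):
    """Greedy best-first: order the frontier by h(n) alone."""
    frontier = [(H[start], start, [start])]
    explored = set()                      # systematic repeated-state check
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child, _cost in GRAPH[node]:
            if child not in explored:
                heapq.heappush(frontier, (H[child], child, path + [child]))
    return None

print(greedy_search('A', 'I'))  # ['A', 'E', 'F', 'I'] (not the cheapest path)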
Greedy Search: Optimal?

[Same graph and h(n) table as above; f(n) = h(n)]

Greedy search returns A - E - F - I with actual cost 450, but the path A - E - G - H - I costs 140 + 80 + 97 + 101 = 418. Greedy search is not optimal.
Greedy Search: Complete?

[Same graph, but with h(C) changed from 329 to 250 **]
A 366, B 374, C 250 **, D 244, E 253, F 178, G 193, H 98, I 0

f(n) = h(n) = straight-line distance heuristic
Greedy Search: Tree Search

Step 1: expand Start A: children C [250], B [374], E [253]
Step 2: expand C [250]: child D [244]
Step 3: expand D [244]: child C [250]
Step 4: expand C [250]: child D [244] …

Infinite branch! Without systematic checking of repeated states, tree search oscillates between C and D forever and never reaches the goal.
Greedy Search: Time and Space Complexity?

• Greedy search is not optimal.

• Greedy search is incomplete without systematic checking of repeated states.

• In the worst case, the time and space complexity of greedy search are both O(b^m), where m is the maximum depth of the search space.
Informed Search Strategies

• A* Search
• eval-fn: f(n) = g(n) + h(n)

A* (A Star)

• Greedy Search minimizes a heuristic h(n), the estimated cost from a node n to the goal state. Greedy Search is efficient, but it is neither complete nor optimal.

• Uniform Cost Search minimizes the cost g(n) from the initial state to n. UCS is optimal and complete, but not efficient.

• New strategy: combine Greedy Search and UCS to get an efficient algorithm which is complete and optimal.
A* (A Star)

• A* uses an evaluation function which combines g(n) and h(n): f(n) = g(n) + h(n)

• g(n) is the exact cost to reach node n from the initial state.

• h(n) is an estimate of the remaining cost to reach the goal.

[Diagram: Start --g(n)--> n --h(n)--> Goal, with f(n) = g(n) + h(n)]
A* Search

[Same graph and h(n) table as above]

f(n) = g(n) + h(n)
g(n): the exact cost to reach node n from the initial state
A* Search: Tree Search

Expanding the frontier node with the smallest f = g + h at each step:

Step 1: expand Start A: children C [447], E [393], B [449]
        (e.g. f(E) = g(E) + h(E) = 140 + 253 = 393)
Step 2: expand E [393]: children G [413], F [417]
Step 3: expand G [413]: child H [415]
Step 4: expand H [415]: child I [418]
Step 5: expand F [417]: child I [450]
Step 6: expand I [418]: Goal reached.

Path found: A - E - G - H - I with cost 418.
A* with h not Admissible

• h() overestimates the cost to reach the goal state

A* Search: h not admissible!

[Same graph, but h(H) is raised from 98 to 138, overestimating the true cost H-I = 101]
A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 138 (overestimated), I 0

f(n) = g(n) + h(n)
g(n): the exact cost to reach node n from the initial state
A* Search: Tree Search

Step 1: expand Start A: children C [447], E [393], B [449]
Step 2: expand E [393]: children G [413], F [417]
Step 3: expand G [413]: child H [455]   (f(H) = (140 + 80 + 97) + 138 = 455)
Step 4: expand F [417]: child I [450]
Step 5: expand C [447]: child D [473]
Step 6: expand I [450]: Goal reached.

A* returns A - E - F - I (cost 450) instead of the cheaper A - E - G - H - I (cost 418).

A* is not optimal!!!
A* Algorithm

• A* with systematic checking for repeated states …
A* Algorithm
1. Initialize the search queue Q to empty.
2. Place the start state s in Q with f value h(s).
3. If Q is empty, return failure.
4. Take the node n from Q with the lowest f value.
   (Keep Q sorted by f values and pick the first element.)
5. If n is a goal node, stop and return the solution.
6. Generate the successors of node n.
7. For each successor n’ of n:
   a) Compute f(n’) = g(n) + cost(n,n’) + h(n’).
   b) If n’ is new (never generated before), add n’ to Q.
   c) If n’ is already in Q with a higher f value, replace it with the current f(n’) and place it in sorted order in Q.
   End for
8. Go back to step 3.
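A sketch of the algorithm above in Python, reusing the GRAPH and H tables from the greedy sketch. Instead of replacing queue entries in place (step 7c), this version keeps the best-known g per state and skips stale queue entries, which has the same effect:

import heapq

def a_star(start, goal, graph, h):
    """A* sketch: expand by lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float('inf')):
            continue                              # stale queue entry
        for child, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2                # better route found
                heapq.heappush(frontier,
                               (g2 + h[child], g2, child, path + [child]))
    return None, float('inf')

# With GRAPH and H from the greedy sketch above:
# a_star('A', 'I', GRAPH, H) returns (['A', 'E', 'G', 'H', 'I'], 418)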
A* Search: Analysis

• A* is complete, except if there are infinitely many nodes with f < f(G).
• A* is optimal if the heuristic h is admissible.
• Time complexity depends on the quality of the heuristic, but is still exponential in the worst case.
• For space complexity, A* keeps all nodes in memory: worst-case O(b^d) space complexity. An iterative-deepening variant (IDA*, below) reduces the memory requirement.
A* Search

• Complete
  Yes, unless there are infinitely many nodes with f <= f(G)
• Time
  Exponential; the better the heuristic, the better the time
  Best case: h is perfect, O(d)
  Worst case: h = 0, O(b^d), same as BFS
• Space
  Keeps all nodes in memory (saved for repeated-state checking)
  This is O(b^d) or worse; A* usually runs out of space before it runs out of time
• Optimal
  Yes: a node with f-value f_{i+1} cannot be expanded until all nodes with f-value f_i are finished
Iterative Deepening A*: IDA*

• Use f(N) = g(N) + h(N) with an admissible and consistent h

• Each iteration is a depth-first search with a cutoff on the f-value of expanded nodes
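A minimal IDA* sketch along these lines (illustrative, not from the slides): each iteration is a depth-first search cut off at the current f-limit, and the limit then grows to the smallest f-value that exceeded it:

def ida_star(start, goal, graph, h):
    """IDA*: repeated depth-first searches with an increasing f-limit."""
    def dfs(node, g, limit, path):
        f = g + h[node]
        if f > limit:
            return f, None            # report the overflowing f-value
        if node == goal:
            return f, path
        smallest = float('inf')
        for child, cost in graph[node]:
            if child not in path:     # avoid cycles along the current path
                t, found = dfs(child, g + cost, limit, path + [child])
                if found:
                    return t, found
                smallest = min(smallest, t)
        return smallest, None

    limit = h[start]
    while True:
        limit, found = dfs(start, 0, limit, [start])
        if found:
            return found
        if limit == float('inf'):
            return None               # no solution

# With GRAPH and H from the greedy sketch above:
# ida_star('A', 'I', GRAPH, H) returns ['A', 'E', 'G', 'H', 'I']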
Consistent Heuristic

• The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N’ of N:

  h(N) ≤ c(N,N’) + h(N’)    (triangle inequality)

[Diagram: N --c(N,N’)--> N’, with h(N) and h(N’) the estimated distances to the goal]

• A consistent heuristic is admissible.
Recursive Best-First Search (RBFS)

• f(n) = g(n) + h(n)
• Requires only linear space
• Uses an f-limit variable to keep track of the f-value of the best alternative path
Recursive Best-First Search (RBFS)

• Complete
  Yes, if b is finite
• Time
  Difficult to characterise
• Space
  Linear
• Optimal
  Yes, if h(n) is admissible
Heuristic Functions

• A heuristic function is a function f(n) that gives an estimate of the “cost” of getting from node n to the goal state, so that the node with the least cost among all possible choices can be selected for expansion first.

• Three approaches to defining f:

  f measures the value of the current state (its “goodness”)

  f measures the estimated cost of getting to the goal from the current state:
    f(n) = h(n), where h(n) is an estimate of the cost to get from n to a goal

  f measures the estimated cost of getting to the goal state from the current state plus the cost of the existing path to it. Often, in this case, we decompose f:
    f(n) = g(n) + h(n), where g(n) is the cost to get to n (from the initial state)
When to Use Search Techniques

• The search space is small, and
  there are no other available techniques, or
  it is not worth the effort to develop a more efficient technique

• The search space is large, and
  there are no other available techniques, and
  there exist “good” heuristics
Heuristic Functions

• To use A*, the heuristic function must never overestimate the number of steps to the goal

• h1 = the number of misplaced tiles

• h2 = the sum of the Manhattan distances of the tiles from their goal positions
Heuristic Functions

[Figure: an example 8-puzzle configuration]
• h1 = 7
• h2 = 4+0+3+3+1+0+2+1 = 14
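Both heuristics are straightforward to compute. A sketch, assuming states are 9-tuples read row by row with 0 as the blank; the goal layout and sample state below are illustrative (the slide's exact configuration is not shown, so the printed values need not match the slide's h1 = 7):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank; goal layout assumed

def h1(state):
    """Misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """Sum of Manhattan distances of tiles from their goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = GOAL.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

# A sample scrambled state (for illustration only):
state = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1(state), h2(state))  # 6 14 for this state and goal convention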
Dominance

• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for the search

• Take a look at figure 4.8!
Relaxed Problems

• A relaxed problem is a problem with fewer restrictions on the actions
• The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem

• Key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem
Relaxed Problems

• Example: 8-puzzle
• Consider only getting tiles 1, 2, 3, and 4 into place

• If the rules are relaxed so that a tile can move anywhere, then h1(n) gives the length of the shortest solution
• If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the length of the shortest solution
Conclusions

• Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These new techniques are called informed (heuristic) search strategies.

• Even though heuristics improve the performance of informed search algorithms, they are still time-consuming, especially for large problem instances.
Logical Agents

• Humans can know “things” and “reason”
  Representation: how are the things stored?
  Reasoning: how is the knowledge used?
    To solve a problem…
    To generate more knowledge…

• Knowledge and reasoning are important to artificial agents because they enable successful behaviors that are difficult to achieve otherwise
  Useful in partially observable environments

• Agents can benefit from knowledge in very general forms, combining and recombining information
Knowledge-Based Agents

• The central component of a knowledge-based agent is a knowledge base (KB)
  A set of sentences in a formal language
  Sentences are expressed using a knowledge representation language

• Two generic functions:
  TELL: add new sentences (facts) to the KB
    “Tell it what it needs to know”
  ASK: query what is known from the KB
    “Ask what to do next”
Knowledge bases

• Knowledge base = set of sentences in a formal language

• Declarative approach to building an agent (or other system):
  TELL it what it needs to know
  Then it can ASK itself what to do; answers should follow from the KB

• Agents can be viewed at the knowledge level
  i.e., what they know, regardless of how implemented

Knowledge-Based Agents

[Figure: a generic knowledge-based agent]
Wumpus World

[Figure: the 4x4 Wumpus World cave, with pits, breezes, stenches, gold, and the agent]
Wumpus World

• Performance measure
  Gold +1000, Death -1000
  Step -1, Use arrow -10

• Environment
  Squares adjacent to the Wumpus are smelly
  Squares adjacent to a pit are breezy
  Glitter iff gold is in the same square
  Shooting kills the Wumpus if you are facing it
  Shooting uses up the only arrow
  Grabbing picks up the gold if in the same square
  Releasing drops the gold in the same square

• Actuators
  Left turn, right turn, forward, grab, release, shoot

• Sensors
  Breeze, glitter, and smell

• See pages 197-198 for more details!
Wumpus World

• Characterization of Wumpus World
  Observable?
    Partial: only local perception
  Deterministic?
    Yes: outcomes are specified
  Static?
    Yes: the Wumpus and pits do not move
  Discrete?
    Yes
  Single agent?
    Yes
Wumpus World

[Figures: the agent explores the cave step by step, updating its knowledge after each percept]
Logic

• Knowledge bases consist of sentences in a formal language
• Syntax: sentences are well formed
• Semantics: the “meaning” of the sentence, i.e. the truth of each sentence with respect to each possible world (model)

• Example:
  x + 2 >= y is a sentence
  x2 + y > is not a sentence
  x + 2 >= y is true iff x + 2 is no less than y
  x + 2 >= y is true in a world where x = 7, y = 1
  x + 2 >= y is false in a world where x = 0, y = 6
Logic

• Entailment means that one thing follows logically from another:
  α |= β

• α |= β iff in every model in which α is true, β is also true
  If α is true, then β must be true
  The truth of β is “contained” in the truth of α
Logic

• Example: a KB containing
  “Cleveland won”
  “Dallas won”
  entails
  “Either Cleveland won or Dallas won”

• Example:
  x + y = 4 entails 4 = x + y
Logic

[Figure: entailment illustrated with models: KB |= α iff every model of KB is also a model of α]
Propositional logic: Syntax

• Propositional logic is the simplest logic; it illustrates the basic ideas
• Atomic sentences = single proposition symbols
  E.g., P, Q, R
  Special cases: True = always true, False = always false

• Complex sentences:
  If S is a sentence, ¬S is a sentence (negation)
  If S1 and S2 are sentences, S1 ∧ S2 is a sentence (conjunction)
  If S1 and S2 are sentences, S1 ∨ S2 is a sentence (disjunction)
  If S1 and S2 are sentences, S1 ⇒ S2 is a sentence (implication)
  If S1 and S2 are sentences, S1 ⇔ S2 is a sentence (biconditional)
Wumpus world sentences

Let Pi,j be true if there is a pit in [i, j].
Let Bi,j be true if there is a breeze in [i, j].

Start: ¬P1,1, ¬B1,1, B2,1

• “Pits cause breezes in adjacent squares”:
  B1,1 ⇔ (P1,2 ∨ P2,1)
  B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)

• The KB can be expressed as the conjunction of all of these sentences

• Note that these sentences are rather long-winded!
  E.g., the breeze “rule” must be stated explicitly for each square
  First-order logic will allow us to define more general patterns.
Truth tables for connectives

P  Q  |  ¬P  P∧Q  P∨Q  P⇒Q  P⇔Q
F  F  |  T   F    F    T    T
F  T  |  T   F    T    T    F
T  F  |  F   F    T    F    F
T  T  |  F   T    T    T    T

• An implication is always true when the premise is false.
• Why? P ⇒ Q means “if P is true then I am claiming that Q is true; otherwise I make no claim.”
• The only way for this to be false is if P is true and Q is false.
Wumpus models

• KB = all possible Wumpus worlds consistent with the observations and the “physics” of the Wumpus world.

[Figure: listing of possible worlds for the Wumpus KB]
• α1 = “square [1,2] is safe”
• KB = detect nothing in [1,1], detect breeze in [2,1]
Wumpus World Sentences

• Let Pi,j be true if there is a pit in [i,j]
• Let Bi,j be true if there is a breeze in [i,j]

• ¬P1,1
• ¬B1,1
• B2,1

• “Pits cause breezes in adjacent squares”: a square is breezy if and only if there is an adjacent pit
  B1,1 ⇔ (P1,2 ∨ P2,1)
  B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
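Entailment such as KB |= α1 can be checked by enumerating models, as in the truth-table method. A toy sketch over just the five pit symbols, with α1 simplified to "no pit in [1,2]"; the symbol names and encoding are illustrative:

from itertools import product

# Pit symbols for the five relevant squares (a tiny slice of the world)
SYMBOLS = ['P11', 'P12', 'P21', 'P22', 'P31']

def kb(m):
    """The sentences above: no pit in [1,1], no breeze in [1,1],
    breeze in [2,1], with B11/B21 expanded via the breeze rules."""
    b11 = m['P12'] or m['P21']               # B11 <=> (P12 v P21)
    b21 = m['P11'] or m['P22'] or m['P31']   # B21 <=> (P11 v P22 v P31)
    return (not m['P11']) and (not b11) and b21

def alpha1(m):
    return not m['P12']                      # "[1,2] is safe" (no pit)

# KB |= alpha1 iff alpha1 holds in every model of KB
models = [dict(zip(SYMBOLS, vals)) for vals in product([False, True], repeat=5)]
print(all(alpha1(m) for m in models if kb(m)))   # True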
A Simple Knowledge Base

• R1: ¬P1,1
• R2: B1,1 ⇔ (P1,2 ∨ P2,1)
• R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• R4: ¬B1,1
• R5: B2,1

• The KB consists of sentences R1 through R5; it can be written as the single sentence R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5
A Simple Knowledge Base

• Every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input (co-NP-complete).
Equivalence, Validity, Satisfiability

[Table: standard logical equivalences (commutativity, associativity, double-negation elimination, contraposition, implication elimination, De Morgan, distributivity)]
Equivalence, Validity, Satisfiability

• A sentence is valid if it is true in all models
  e.g. True, A ∨ ¬A, A ⇔ A, (A ∧ (A ⇒ B)) ⇒ B
• Validity is connected to inference via the Deduction Theorem:
  KB |- α iff (KB ⇒ α) is valid
• A sentence is satisfiable if it is true in some model
  e.g. A ∨ B, C
• A sentence is unsatisfiable if it is true in no models
  e.g. A ∧ ¬A
• Satisfiability is connected to inference via the following:
  KB |= α iff (KB ∧ ¬α) is unsatisfiable
  (proof by contradiction)
Reasoning Patterns

• Inference rules
  Patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal.

• Modus Ponens
  Given: S1 ⇒ S2 and S1, derive S2

• And-Elimination
  Given: S1 ∧ S2, derive S1
  Given: S1 ∧ S2, derive S2

• De Morgan’s Laws
  Given: ¬(A ∧ B), derive ¬A ∨ ¬B
  Given: ¬(A ∨ B), derive ¬A ∧ ¬B
Reasoning Patterns

• And-Elimination
    α ∧ β
    ─────
      α
  From a conjunction, any of the conjuncts can be inferred.
  Example: from (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred.

• Modus Ponens
    α ⇒ β,  α
    ─────────
        β
  Whenever sentences of the form α ⇒ β and α are given, the sentence β can be inferred.
  Example: from (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive), Shoot can be inferred.
Example Proof By Deduction

• Knowledge
  S1: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32)    rule
  S2: ¬B22                              observation

• Inferences
  S3: (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22)   [S1, biconditional elim]
  S4: (P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22                                        [S3, and elim]
  S5: ¬B22 ⇒ ¬(P21 ∨ P23 ∨ P12 ∨ P32)                                      [S4, contrapositive]
  S6: ¬(P21 ∨ P23 ∨ P12 ∨ P32)                                             [S2, S5, Modus Ponens]
  S7: ¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32                                            [S6, De Morgan]
Evaluation of Deductive Inference

• Sound
  Yes, because the inference rules themselves are sound. (This can be proven using a truth-table argument.)

• Complete
  If we allow all possible inference rules, we are searching in an infinite space, hence not complete
  If we limit the inference rules, we run the risk of leaving out the necessary one…

• Monotonic
  If we have a proof, adding information to the KB will not invalidate the proof
Resolution

• Resolution allows a complete inference mechanism (search-based) using only one rule of inference
• Resolution rule:
  Given: P1 ∨ P2 ∨ P3 ∨ … ∨ Pn, and ¬P1 ∨ Q1 ∨ … ∨ Qm
  Conclude: P2 ∨ P3 ∨ … ∨ Pn ∨ Q1 ∨ … ∨ Qm
  Complementary literals P1 and ¬P1 “cancel out”
• Why it works:
  Consider the two cases: P1 is true, and P1 is false
Resolution in Wumpus World

• There is a pit at [2,1] or [2,3] or [1,2] or [3,2]:
  P21 ∨ P23 ∨ P12 ∨ P32
• There is no pit at [2,1]:
  ¬P21
• Therefore (by resolution) the pit must be at [2,3] or [1,2] or [3,2]:
  P23 ∨ P12 ∨ P32
Proof using Resolution

• To prove a fact P, repeatedly apply resolution until either:
  No new clauses can be added (KB does not entail P)
  The empty clause is derived (KB does entail P)

• This is proof by contradiction: if we prove that KB ∧ ¬P derives a contradiction (the empty clause) and we know KB is true, then ¬P must be false, so P must be true!

• To apply resolution mechanically, facts need to be in Conjunctive Normal Form (CNF)

• To carry out the proof, we need a search mechanism that will enumerate all possible resolutions.
CNF Example

1. Start: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32)

2. Eliminate ⇔, replacing it with two implications:
   (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22)

3. Replace each implication (A ⇒ B) by ¬A ∨ B:
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬(P21 ∨ P23 ∨ P12 ∨ P32) ∨ B22)

4. Move ¬ “inwards” (unnecessary parentheses removed):
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ ((¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32) ∨ B22)

5. Apply the distributive law:
   (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬P21 ∨ B22) ∧ (¬P23 ∨ B22) ∧ (¬P12 ∨ B22) ∧ (¬P32 ∨ B22)

(The final result has 5 clauses.)
Resolution Example

• Given B22 and ¬P21 and ¬P23 and ¬P32, prove P12 (negated goal: ¬P12)

  (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32)   resolve with ¬P12
  (¬B22 ∨ P21 ∨ P23 ∨ P32)         resolve with ¬P21
  (¬B22 ∨ P23 ∨ P32)               resolve with ¬P23
  (¬B22 ∨ P32)                     resolve with ¬P32
  (¬B22)                           resolve with B22
  [empty clause]
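A toy sketch of propositional resolution by refutation, with clauses encoded as frozensets of (symbol, polarity) literals; the encoding and names are illustrative:

from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; a literal is e.g. ('P12', True)."""
    out = []
    for sym, val in c1:
        if (sym, not val) in c2:   # complementary pair cancels out
            out.append((c1 - {(sym, val)}) | (c2 - {(sym, not val)}))
    return out

def entails(kb_clauses, goal_literal):
    """Refutation: add the negated goal and saturate with resolution."""
    sym, val = goal_literal
    clauses = set(kb_clauses) | {frozenset([(sym, not val)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True          # empty clause derived
                new.add(r)
        if new <= clauses:
            return False                 # no new clauses: not entailed
        clauses |= new

# Breeze-rule clause plus the given unit clauses from the example:
KB = [frozenset([('B22', False), ('P21', True), ('P23', True),
                 ('P12', True), ('P32', True)]),
      frozenset([('B22', True)]),
      frozenset([('P21', False)]),
      frozenset([('P23', False)]),
      frozenset([('P32', False)])]
print(entails(KB, ('P12', True)))  # True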
Evaluation of Resolution

• Resolution is sound
  Because the resolution rule is truth-preserving in all cases
• Resolution is complete
  Provided a complete search method is used to find the proof: if a proof can be found, it will be found
  Note: you must know what you are trying to prove in order to prove it!
• Resolution is exponential
  The number of clauses that we must search grows exponentially…
Horn Clauses

• A Horn clause is a CNF clause with at most one positive literal
  The positive literal is called the head
  The negative literals are called the body
  Prolog: head :- body1, body2, body3 …
  English: “To prove the head, prove body1, …”
  Implication: if (body1 ∧ body2 ∧ …) then head
• Horn clauses form the basis of forward and backward chaining
• The Prolog language is based on Horn clauses
• Deciding entailment with Horn clauses is linear in the size of the knowledge base
Reasoning with Horn Clauses

• Forward chaining
  For each new piece of data, generate all new facts, until the desired fact is generated
  Data-directed reasoning

• Backward chaining
  To prove the goal, find a clause that contains the goal as its head, and prove the body recursively
  (Backtrack when you choose the wrong clause)
  Goal-directed reasoning

(Code sketches of both procedures appear below.)
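A sketch of forward chaining over definite clauses, tracking a count of unsatisfied premises per rule so that each rule fires as soon as its body is fully known; the rule encoding and the example KB are illustrative:

from collections import deque

def forward_chaining(rules, facts, query):
    """Forward chaining over definite clauses.
    rules: list of (premises, conclusion), e.g. (['P', 'Q'], 'R') for P ∧ Q ⇒ R
    facts: initially known symbols."""
    count = [len(premises) for premises, _ in rules]   # unsatisfied premises
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(rules):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:                      # rule fires
                    agenda.append(conclusion)
    return False

# Example KB: P ⇒ Q, L ∧ M ⇒ P, B ∧ L ⇒ M, A ∧ P ⇒ L, A ∧ B ⇒ L; facts A, B
rules = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
         (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(forward_chaining(rules, ['A', 'B'], 'Q'))  # True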
Forward Chaining

• Fire any rule whose premises are satisfied in the KB
• Add its conclusion to the KB, until the query is found
Forward Chaining

• AND-OR graph
  Multiple links joined by an arc indicate conjunction: every link must be proved
  Multiple links without an arc indicate disjunction: any link can be proved
Forward Chaining

[Figures: step-by-step forward chaining on the example AND-OR graph, propagating known facts until the query is proved]
Backward Chaining

• Idea: work backwards from the query q
• To prove q by BC:
  Check if q is known already, or
  Prove by BC all premises of some rule concluding q

• Avoid loops
  Check if the new subgoal is already on the goal stack

• Avoid repeated work: check if the new subgoal
  has already been proved true, or
  has already failed
Backward Chaining

[Figures: step-by-step backward chaining on the same AND-OR graph, expanding subgoals from the query down to known facts]
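A matching backward-chaining sketch: prove the goal by finding a rule whose head is the goal and recursively proving its premises, with a stack check to avoid loops (memoization of proved and failed subgoals is omitted for brevity; names are illustrative):

def backward_chaining(rules, facts, goal, stack=frozenset()):
    """Prove `goal` by recursively proving the body of some rule whose
    head is `goal`."""
    if goal in facts:
        return True
    if goal in stack:                      # loop check: subgoal already open
        return False
    for premises, conclusion in rules:
        if conclusion == goal:
            if all(backward_chaining(rules, facts, p, stack | {goal})
                   for p in premises):
                return True                # every premise proved
    return False

# Same example KB as the forward-chaining sketch:
rules = [(['P'], 'Q'), (['L', 'M'], 'P'), (['B', 'L'], 'M'),
         (['A', 'P'], 'L'), (['A', 'B'], 'L')]
print(backward_chaining(rules, {'A', 'B'}, 'Q'))  # True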
Forward Chaining vs. Backward Chaining

• Forward chaining is data-driven
  Automatic, unconscious processing
  E.g. object recognition, routine decisions
  May do lots of work that is irrelevant to the goal

• Backward chaining is goal-driven
  Appropriate for problem solving
  E.g. “Where are my keys?”, “How do I start the car?”

• The complexity of BC can be much less than linear in the size of the KB