Unit 2
Introduction
Topics: problem solving, state space, the difference between AI search and traditional search methods, and methods of problem solving.
Production system
A production system helps AI programs carry out the search process more efficiently. It consists of: a start state and a final (goal) state; knowledge representation schemes; production rules, whose left side (the current state) determines the applicability of the rule and whose right side (the new state) describes the action to be performed when the rule is applied; and control strategies.
Example: the water jug problem, with the production rules formulated to solve it.
Problem statement: there are two jugs, a 5-gallon jug and a 3-gallon jug, with no measuring marks on them. There is an endless supply of water from a tap. The task is to get exactly 4 gallons of water into the 5-gallon jug.
Solution
The state space for this problem can be described as the set of ordered pairs of integers (X, Y), where X is the number of gallons of water in the 5-gallon jug and Y the number in the 3-gallon jug.
Start state is (0,0)
Goal state is (4, N) for any value N ≤ 3 (the contents of the 3-gallon jug do not matter).
Possible operations:
Fill the 5-gallon jug from the tap; empty the 5-gallon jug by pouring the water down the drain.
Fill the 3-gallon jug from the tap; empty the 3-gallon jug by pouring the water down the drain.
Pour water from the 5-gallon jug into the 3-gallon jug until the 3-gallon jug is full (or the 5-gallon jug is empty).
Pour water from the 3-gallon jug into the 5-gallon jug until the 5-gallon jug is full (or the 3-gallon jug is empty).
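These operations can be explored mechanically. The sketch below is an illustration of mine (not part of the original notes): it runs a breadth-first search over the (X, Y) state space and prints a shortest sequence of states that puts 4 gallons in the 5-gallon jug.

```python
from collections import deque

def water_jug_bfs(cap_big=5, cap_small=3, target=4):
    """Breadth-first search over (big, small) jug states; returns a path of states."""
    def successors(state):
        x, y = state  # x gallons in the 5-gallon jug, y in the 3-gallon jug
        return {
            (cap_big, y),                      # fill the 5-gallon jug
            (x, cap_small),                    # fill the 3-gallon jug
            (0, y),                            # empty the 5-gallon jug
            (x, 0),                            # empty the 3-gallon jug
            # pour 5g -> 3g until the 3-gallon jug is full or the 5-gallon jug is empty
            (x - min(x, cap_small - y), y + min(x, cap_small - y)),
            # pour 3g -> 5g until the 5-gallon jug is full or the 3-gallon jug is empty
            (x + min(y, cap_big - x), y - min(y, cap_big - x)),
        }

    start = (0, 0)
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == target:              # goal: 4 gallons in the 5-gallon jug
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(water_jug_bfs())  # [(0, 0), (5, 0), (2, 3), (2, 0), (0, 2), (5, 2), (4, 3)]
```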
Solution path
A solution path is a path through the state space from a start state to a goal state. The state space consists of four components: a set S containing the start states of the problem; a set G containing the goal states of the problem; a set of nodes; and a set of arcs connecting the nodes.
Problem Formulation
Formulating a problem means specifying the initial state, the operators that transform one state of the world into another, and the goal state, together with a path from the initial state to the goal state.
Missionaries and Cannibals
Three missionaries and three cannibals must cross a river using a boat that can carry at most two people, without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place.
Possible operators (boat loads)
2M0C, 1M1C, 0M2C, 1M0C, 0M1C (i.e. the boat carries two missionaries, one missionary and one cannibal, two cannibals, one missionary alone, or one cannibal alone).
Solution
The state space for this problem can be described as the set of ordered pairs of the left and right banks of the river, (L, R), where each bank is represented as a list [nM, mC, B]: n = number of missionaries (M), m = number of cannibals (C), and B indicates whether the boat is on that bank.
Start state: ([3M, 3C, 1B], [0M, 0C, 0B]), or [331:000]; 1B means the boat is present on that bank and 0B means it is absent.
Goal state: ([0M, 0C, 0B], [3M, 3C, 1B]), or [000:331].
The step-by-step crossings (shown graphically in the original slides) give a complete solution in 11 crossings.
More generally, any state has the form ([n1 M, m1 C, _], [n2 M, m2 C, _]) with the constraints: n1 ≥ m1 or n1 = 0; n2 ≥ m2 or n2 = 0; n1 + n2 = 3; m1 + m2 = 3; the boat can be on either side. The start state is ([3M, 3C, 1B], [0M, 0C, 0B]) and the goal state is ([0M, 0C, 0B], [3M, 3C, 1B]), where 1B means the boat is present on that bank and 0B means it is absent.
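Assuming the state encoding above, reduced to a triple (missionaries on the left bank, cannibals on the left bank, boat side), a short sketch of the legality test and a breadth-first search over the crossings could look like this (the helper names are my own):

```python
from collections import deque

MOVES = [(2, 0), (1, 1), (0, 2), (1, 0), (0, 1)]   # possible boat loads (M, C)

def is_safe(m_left, c_left):
    """Missionaries are never outnumbered on either bank (unless none are there)."""
    m_right, c_right = 3 - m_left, 3 - c_left
    return (m_left == 0 or m_left >= c_left) and (m_right == 0 or m_right >= c_right)

def solve():
    start, goal = (3, 3, 1), (0, 0, 0)             # (M left, C left, boat: 1 = left bank)
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        m, c, boat = path[-1]
        direction = -1 if boat == 1 else 1          # the boat carries people away from its bank
        for dm, dc in MOVES:
            nm, nc = m + direction * dm, c + direction * dc
            state = (nm, nc, 1 - boat)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc) and state not in visited:
                visited.add(state)
                frontier.append(path + [state])
    return None

print(len(solve()) - 1, "crossings")                # 11 crossings, as in the slides
```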
The production rules and the resulting solution path are listed graphically in the original slides.
The 8-Puzzle
The 8-puzzle consists of a 3x3 board with eight numbered tiles arranged on it and one empty cell. At any point, a tile adjacent to the empty cell can slide into it, creating a new empty cell. The task is to rearrange the tiles so that the goal state is reached from the start state.
(The initial state and the goal state are specific 3x3 tile arrangements shown in the original slides.)
Operators: slide blank up, slide blank down, slide blank left, slide blank right
Solution (for the boards above): slide blank down, slide blank left, slide blank up, slide blank right, slide blank down. Path cost: 5 steps to reach the goal. (The intermediate boards are shown in the original slides.)
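The four operators can be sketched in code as swapping the blank with an adjacent tile (the flat tuple board encoding below is my own choice):

```python
# 3x3 board stored row by row as a tuple of 9 entries; 0 marks the blank cell.
OFFSET = {"up": -3, "down": 3, "left": -1, "right": 1}

def slide_blank(state, move):
    """Return the state after sliding the blank, or None if the move is illegal."""
    i = state.index(0)
    row, col = divmod(i, 3)
    if (move == "up" and row == 0) or (move == "down" and row == 2) \
            or (move == "left" and col == 0) or (move == "right" and col == 2):
        return None
    j = i + OFFSET[move]
    board = list(state)
    board[i], board[j] = board[j], board[i]        # swap the blank with the adjacent tile
    return tuple(board)

# Applying two moves to an arbitrary illustrative board:
state = (1, 2, 3, 4, 0, 5, 6, 7, 8)
for m in ["down", "left"]:
    state = slide_blank(state, m)
print(state)                                       # (1, 2, 3, 4, 7, 5, 0, 6, 8)
```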
Control Strategies
There are two directions in which a search can proceed:
Data-driven search, i.e. forward chaining (e.g. the 8-puzzle).
Goal-driven search, i.e. backward chaining (e.g. NIM game playing).
Uninformed (blind) searches: breadth-first, depth-first, depth-limited, iterative deepening, and bidirectional search.
Informed (heuristic) searches, guided by an evaluation function: branch and bound, hill climbing, greedy best-first, A*, IDA*, and beam search.
Constraint satisfaction.
Blind search
Types of blind search:
Breadth-first search
Depth-first search
Depth-first iterative deepening (DFID)
Bidirectional search
Breadth-first search expands all states one step away from the start state, then all states two steps away from the start state, and so on, until a goal state is reached. It is implemented using two lists:
OPEN list: the states that are yet to be expanded.
CLOSED list: the states that have already been expanded.
Application 1: Given the following state space (tree search), give the sequence of visited nodes when using BFS (assume that node O is the goal state):
(The example tree is shown in the original slides: root A with children B, C, D, E and further descendants F, G, H, I, J, K, ... down to the goal O.)
BFS visits the nodes level by level, left to right: A; then B, C, D, E; then F, G, H, I, J, K, and so on, until the goal node O is reached.
BFS is guaranteed to find a goal state if there is one.
The (maximum) branching factor b is defined as the maximum number of nodes created when a node is expanded. The length of a path to the shallowest goal is the depth d. The maximum length of any path in the state space is m.
Complete? Yes. Optimal? Yes (with unit step costs). Time complexity: O(b^d). Space complexity: O(b^d).
Algorithm
Input: START and GOAL states
Local variables: OPEN, CLOSED, STATE-X, SUCCs, FOUND
Output: Yes or No
Method:
Initialize OPEN with START, CLOSED = empty, FOUND = false;
while (OPEN is not empty and FOUND = false) do
{
  Remove the first state from OPEN and call it STATE-X;
  Put STATE-X at the front of the CLOSED list (maintained as a stack);
  if STATE-X = GOAL then FOUND = true
  else
  {
    Perform the EXPAND operation on STATE-X, producing a list of SUCCs;
    Remove from SUCCs any states that are already in the CLOSED list;
    Append SUCCs at the end of the OPEN list; /* queue */
  }
} /* end of while */
if FOUND = true then return Yes else return No
Stop
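A runnable Python rendering of this OPEN/CLOSED procedure might look as follows (a sketch; the successor function `expand` and the small tree at the bottom are assumptions for illustration):

```python
from collections import deque

def bfs(start, goal, expand):
    """Breadth-first search using an OPEN queue and a CLOSED set of expanded states."""
    open_list = deque([start])
    closed = set()
    while open_list:
        state_x = open_list.popleft()          # remove the first state from OPEN
        closed.add(state_x)                    # record STATE-X as expanded
        if state_x == goal:
            return "Yes"
        for succ in expand(state_x):           # EXPAND operation
            if succ not in closed and succ not in open_list:
                open_list.append(succ)         # append successors at the end of OPEN
    return "No"

# Usage on a small explicit tree (node names are illustrative only):
tree = {"A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G"], "D": [], "E": [], "F": [], "G": []}
print(bfs("A", "G", lambda n: tree.get(n, [])))    # Yes
```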
Application 2: Given the following state space (tree search), give the sequence of visited nodes when using DFS (assume that node O is the goal state):
(The same tree as in Application 1 is used.)
DFS visits the nodes depth first, left to right: A, B, F, G, K, L, ..., backtracking whenever a branch is exhausted, until the goal node O is reached.
Main idea: expand the node at the deepest level (breaking ties left to right).
Complete? No (yes on finite trees with no loops). Optimal? No. Time complexity: O(b^d), where d is the maximum depth. Space complexity: O(d), where d is the maximum depth (only the current path needs to be stored).
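For comparison, a minimal DFS sketch with the same assumed `expand` interface, keeping OPEN as a stack and a visited set to avoid loops on graphs:

```python
def dfs(start, goal, expand):
    """Depth-first search: always expand the deepest (most recently generated) node."""
    open_list = [start]                 # stack: last in, first out
    visited = set()
    while open_list:
        state = open_list.pop()
        if state == goal:
            return "Yes"
        if state in visited:
            continue
        visited.add(state)
        # push children in reverse so the leftmost child is expanded first
        for succ in reversed(expand(state)):
            if succ not in visited:
                open_list.append(succ)
    return "No"
```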
DFID combines the benefits of BFS and DFS: Like DFS the memory requirements are very modest (O(d)). Like BFS, it is complete when the branching factor is finite.
In general, iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.
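A sketch of DFID built on a recursive depth-limited search (the limit grows by one per iteration; `expand` is again an assumed successor function, and the depth cap is arbitrary):

```python
def depth_limited(state, goal, expand, limit):
    """Recursive depth-limited DFS; True if the goal lies within `limit` moves of `state`."""
    if state == goal:
        return True
    if limit == 0:
        return False
    return any(depth_limited(s, goal, expand, limit - 1) for s in expand(state))

def dfid(start, goal, expand, max_depth=50):
    """Depth-first iterative deepening: limits 0, 1, 2, ... re-searching from the root."""
    for limit in range(max_depth + 1):
        if depth_limited(start, goal, expand, limit):
            return limit                 # depth at which the goal was found
    return None
```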
(Traces from the original slides, on the same example tree; each iteration restarts the depth-limited search from the root with a larger limit.)
Limit = 0: visit A; failure.
Limit = 1: visit A, B, C, D, E; failure.
Limit = 2: visit A, B, F, G, C, H, D, ...; failure.
Limit = 3: visit A, B, F, G, K, L, C, ...; failure.
Limit = 4: visit A, B, F, G, K, L, ... and continue until the goal is found.
Bidirectional search
Two searches are run simultaneously, one forward from the initial state and one backward from the goal state, in the hope that they meet in the middle.
Time complexity: O(b^(d/2)), where d is the depth of the solution. Space complexity: O(b^(d/2)).
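A sketch of bidirectional breadth-first search on an undirected graph (the `neighbours(node)` interface and the small example graph are assumptions of mine):

```python
from collections import deque

def bidirectional(start, goal, neighbours):
    """Two breadth-first searches, from the start and from the goal, expanded one layer
    at a time until their frontiers meet; returns the number of edges on a shortest path."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    q_f, q_b = deque([start]), deque([goal])

    def expand_layer(queue, dist, other_dist):
        for _ in range(len(queue)):                 # expand exactly one BFS layer
            node = queue.popleft()
            for n in neighbours(node):
                if n in other_dist:                 # the two searches meet in the middle
                    return dist[node] + 1 + other_dist[n]
                if n not in dist:
                    dist[n] = dist[node] + 1
                    queue.append(n)
        return None

    while q_f and q_b:
        for args in ((q_f, dist_f, dist_b), (q_b, dist_b, dist_f)):
            met = expand_layer(*args)
            if met is not None:
                return met
    return None

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # illustrative chain graph
print(bidirectional(1, 5, lambda n: adj[n]))               # 4
```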
Blind searches are not of much use in real-life applications. We need intelligent searches that take relevant problem information into account and find a solution faster.
Travelling Salesman Problem (TSP)
Statement: in the TSP, one is required to find the shortest route that visits every city exactly once and returns to the starting point. Assume that there are n cities and that the distance between each pair of cities is given.
A brute-force approach generates the routes one by one and continues until all the paths have been explored; it would require (n-1)! paths to be examined, so as the number of cities grows, the time required grows combinatorially. No problem knowledge can be applied in such a search, so we need intelligent methods that make use of problem knowledge and improve the search time.
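To make the (n-1)! growth concrete, here is a small brute-force sketch (the 4-city distance matrix is illustrative, chosen by me):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Examine all (n-1)! tours that start and end at city 0; return the shortest one."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # (n-1)! orderings of the other cities
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

dist = [[0, 10, 15, 20],        # 4 cities: only 3! = 6 tours need to be examined,
        [10, 0, 35, 25],        # but 10 cities already give 9! = 362,880 tours
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))    # ((0, 1, 3, 2, 0), 80)
```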
Heuristic search
A heuristic is a technique that helps decide which among several alternatives will be the most effective for achieving some goal. It improves the efficiency of the search process: it no longer guarantees to find the best solution, but it almost always finds a very good solution.
Branch and bound
A cost function g(X) is designed that assigns a cost to the path from the start node to the current node X, obtained by applying a sequence of operators. While generating the search space, the least-cost path found so far is always chosen for extension.
(The original slides work through a branch-and-bound trace on a small weighted graph, repeatedly extending the partial path with the least cost g until the goal state is reached.)
Since the least-cost path is always chosen for extension, the path that first reaches the goal is certain to be optimal, but the method is not guaranteed to find the solution quickly. If g(X) = 1 for every operator, branch and bound degenerates to simple breadth-first search.
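A sketch of this least-cost-path extension using a priority queue ordered on g (the tiny weighted graph and the `successors` interface are illustrative assumptions):

```python
import heapq

def branch_and_bound(start, goal, successors):
    """Always extend the cheapest partial path; the first path to reach the goal is optimal.
    `successors(state)` is assumed to yield (next_state, step_cost) pairs."""
    frontier = [(0, [start])]                       # min-heap of (g, path)
    best_g = {start: 0}
    while frontier:
        g, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return g, path                          # least-cost path found
        for nxt, cost in successors(path[-1]):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 1)], "G": []}
print(branch_and_bound("S", "G", lambda s: graph[s]))   # (4, ['S', 'A', 'B', 'G'])
```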
Hill Climbing
The hill climbing search algorithm (also known as greedy local search) uses a loop that continually moves in the direction of increasing (or decreasing) value, that is, uphill (or downhill).
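As a sketch, the loop can be written as follows (here maximising an arbitrary objective; the one-dimensional example objective and neighbour function are my own):

```python
def hill_climbing(start, value, neighbours):
    """Keep moving to the best neighbour as long as it improves the current value."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current              # no uphill move left: local optimum (or plateau)
        current = best

# Illustrative 1-D example: maximise f(x) = -(x - 7)^2 over integer states.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, f, step))        # 7
```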
(The original slides illustrate hill climbing on a cost-versus-states landscape: the current solution moves step by step towards better values and may come to rest in a local minimum instead of the global minimum.)
Hill climbing may reach a state that is not a solution but from which no move improves the solution. This happens when we have reached a:
Local maximum: a state that is better than all of its neighbours but not better than some other states farther away.
Plateau: a flat region of the search space in which neighbouring states have the same value, so there is no uphill direction to follow.
Ridge: a long, thin region of high ground with low ground on either side; when looking in one of the four directions, finding the right direction can be very tricky, as one can fall off on either side.
Beam search
Beam search narrows the width of breadth-first search: it is a heuristic search algorithm that explores a graph by expanding only the most promising nodes, kept in a limited set (the beam). At each level of the tree it generates all successors of the states at the current level and sorts them in increasing order of heuristic cost, keeping only the best w of them, where w is the beam width. The greater the beam width, the fewer states are pruned; with an infinite beam width, no states are pruned and beam search is identical to breadth-first search.
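A sketch of level-by-level beam search with beam width w (the goal test, successor and heuristic functions, and the small tree below are assumed inputs of mine):

```python
def beam_search(start, is_goal, successors, h, width):
    """Keep only the `width` best nodes (smallest heuristic h) at each level of the tree."""
    level = [start]
    while level:
        candidates = []
        for node in level:
            if is_goal(node):
                return node
            candidates.extend(successors(node))
        candidates.sort(key=h)              # sort successors by increasing heuristic cost
        level = candidates[:width]          # prune: with an unbounded width this is BFS
    return None

tree = {"S": ["A", "B", "C"], "A": ["D"], "B": ["E", "G"], "C": [], "D": [], "E": [], "G": []}
hval = {"S": 9, "A": 6, "B": 3, "C": 8, "D": 5, "E": 2, "G": 0}
print(beam_search("S", lambda n: n == "G", lambda n: tree[n], lambda n: hval[n], width=2))  # G
```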
(The original slides step through a beam search example with beam width 2: at each depth the successors' heuristic values are compared and all but the two best nodes at that level are ignored, until the goal node, with heuristic value 0.0, is reached.)
In beam search, sorting is done only on the successor nodes of the current level, whereas in best-first search sorting is done on the entire open list. Beam search is not guaranteed to find an optimal solution, but it generally finds some solution faster than the other methods do.
Greedy Search
(Example from the original slides: a search tree with start state A and goal state I, with heuristic values h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0. At each step, greedy search expands the open node with the smallest heuristic value.)
(Trace: greedy search expands A, then E (h = 253), then F (h = 178), and then reaches the goal I (h = 0).)
Path cost (sum of heuristic values) along A-E-F-I = 253 + 178 + 0 = 431; actual distance of A-E-F-I = 140 + 99 + 211 = 450, which is not the cheapest route to the goal.
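A sketch of greedy best-first search, with the graph and heuristic values below reconstructed loosely from the example above (node names, edges and h(A) are partly my own assumptions):

```python
import heapq

def greedy_best_first(start, goal, neighbours, h):
    """Always expand the open node with the smallest heuristic value h(n)."""
    frontier = [(h(start), [start])]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbours(node):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), path + [nxt]))
    return None

graph = {"A": ["B", "C", "E"], "B": [], "C": ["D"], "D": ["C"], "E": ["F", "G"],
         "F": ["I"], "G": ["H"], "H": ["I"], "I": []}
hval = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}
print(greedy_best_first("A", "I", lambda n: graph[n], lambda n: hval[n]))   # ['A', 'E', 'F', 'I']
```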
(A second example in the original slides changes h(C) to 250: greedy search then oscillates between C (h = 250) and D (h = 244) along an infinite branch and never reaches the goal, showing that greedy search is not complete.)
A* Search
Evaluation function: f(n) = g(n) + h(n).
Greedy search minimizes h(n), the estimated cost from a node n to the goal state. Although greedy search can considerably cut the search time (it is efficient), it is neither optimal nor complete.
Uniform cost search minimizes g(n), the cost from the initial state to node n; it is optimal and complete, but it can be very inefficient.
A* combines the two: it minimizes f(n) = g(n) + h(n), the cost already incurred plus the estimated cost remaining to the goal.
A* Search example
(The same graph as before, with step costs A-B = 75, A-C = 118, A-E = 140, C-D = 111, E-F = 99, E-G = 80, F-I = 211, G-H = 97, H-I = 101, and heuristic values h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0; A is the start state and I the goal.)
g(n) is the exact cost to reach node n from the initial state; h(n) is the estimated cost from n to the goal; f(n) = g(n) + h(n).
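A sketch of A* on this kind of weighted graph; the graph encoding below is my reconstruction of the example (h(A) is assumed), with `neighbours(n)` yielding (next node, step cost) pairs:

```python
import heapq

def a_star(start, goal, neighbours, h):
    """Expand the open node with the smallest f = g + h; returns (cost, path)."""
    frontier = [(h(start), 0, [start])]            # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return g, path
        for nxt, cost in neighbours(path[-1]):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, path + [nxt]))
    return None

graph = {"A": [("B", 75), ("C", 118), ("E", 140)], "B": [], "C": [("D", 111)], "D": [],
         "E": [("F", 99), ("G", 80)], "F": [("I", 211)], "G": [("H", 97)], "H": [("I", 101)],
         "I": []}
hval = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}
print(a_star("A", "I", lambda n: graph[n], lambda n: hval[n]))   # (418, ['A', 'E', 'G', 'H', 'I'])
```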
(Trace from the original slides: from the start A, the frontier holds B with f = 75 + 374 = 449, C with f = 118 + 329 = 447, and E with f = 140 + 253 = 393. Expanding E adds G with f = 140 + 80 + 193 = 413 and F with f = 140 + 99 + 178 = 417. Expanding G adds H with f = 140 + 80 + 97 + 98 = 415, and expanding H adds the goal I with f = 140 + 80 + 97 + 101 + 0 = 418, which beats the alternative route through F (f = 450). The optimal path A-E-G-H-I, with cost 418, is returned.)
A* Search on the 8-puzzle
Let us consider the eight puzzle and solve it with the A* algorithm. The evaluation function is f(x) = g(x) + h(x), where g(x) is the depth of the state (the number of moves made so far) and h(x) is the number of misplaced tiles.
(The original slides expand states in order of f = g + h: the initial state has f = 0+4, i.e. four misplaced tiles; its successors have f-values such as 1+5 and 1+3; the search continues through states with f-values 2+3, 2+4, 3+2, 3+4, ... until a state with f = 4+1 leads to the goal state.)
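The h-values in this trace are consistent with counting misplaced tiles. A small sketch of that heuristic (the boards below are illustrative, chosen so that the initial f-value matches the 0+4 shown above):

```python
def misplaced_tiles(state, goal):
    """h(x): number of tiles (ignoring the blank, 0) not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def f_value(depth, state, goal):
    """A* evaluation for the 8-puzzle: f(x) = g(x) + h(x), with g(x) = depth."""
    return depth + misplaced_tiles(state, goal)

goal    = (1, 2, 3, 8, 0, 4, 7, 6, 5)    # boards stored row by row, 0 = blank
initial = (2, 8, 3, 1, 6, 4, 7, 0, 5)
print(f_value(0, initial, goal))         # 0 + 4 = 4
```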
IDA* (Iterative Deepening A*)
IDA* is a combination of DFID and the A* algorithm: each iteration is a depth-first search with a cutoff on the f-value of the expanded nodes, and the cutoff is increased between iterations.
(Trace from the original slides: nodes whose f-value exceeds the current cutoff, e.g. 1+5 = 6, 1+7 = 8, 1+8 = 9, are pruned; the cutoff is then raised and the depth-first search repeated, until the goal is reached with the same optimal solution that A* finds.)
Properties of heuristic functions: underestimation, overestimation, admissibility, monotonic functions.
Underestimation / Overestimation
If we can guarantee that h never overestimates the actual cost from the current state to the goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists.
If h overestimates, we may find a worse solution: the optimal solution can be overlooked.
A* algorithm
When h(n) = actual cost to goal: only nodes on the correct path are expanded; the optimal solution is found.
When h(n) < actual cost to goal: additional nodes are expanded, but the optimal solution is still found.
When h(n) > actual cost to goal: the optimal solution can be overlooked.
Admissibility
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to the goal state, whenever such a path exists. We have seen that if the heuristic function h underestimates the actual cost from the current state to the goal state, then it is bound to give an optimal solution. So we can say that A* always terminates with the optimal path when h is an admissible heuristic function.
Monotonic function
A heuristic function h is monotone if:
1. For all states Xi and Xj such that Xj is a successor of Xi, h(Xi) - h(Xj) ≤ cost(Xi, Xj), the actual cost of going from Xi to Xj.
2. h(Goal) = 0.
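A useful consequence (a standard result; the derivation below is my own sketch, not taken from the slides): a monotone heuristic never overestimates the remaining cost, so it is also admissible.

```latex
% Let X_0, X_1, \ldots, X_k be an optimal path from X_0 to a goal state X_k.
% Monotonicity gives h(X_i) - h(X_{i+1}) \le \mathrm{cost}(X_i, X_{i+1}) for every step.
% Summing over i = 0, \ldots, k-1 telescopes the left-hand side:
\[
  h(X_0) - h(X_k) \;\le\; \sum_{i=0}^{k-1} \mathrm{cost}(X_i, X_{i+1}) = h^{*}(X_0).
\]
% Since h(X_k) = h(\mathrm{Goal}) = 0, this gives h(X_0) \le h^{*}(X_0):
% h underestimates the true remaining cost, hence h is admissible.
```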
Conclusions
Heuristic search uses domain-specific knowledge so that one can intelligently explore only the relevant part of the search space, that is, the part with a good chance of containing the goal state. These techniques are called informed (heuristic) search strategies.
Even though heuristics improve the performance of informed search algorithms, the algorithms remain time-consuming, especially for large problem instances.
Constraint satisfaction
Constraint satisfaction problems (CSPs) are mathematical problems defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time.
Examples of constraint satisfaction problems: the eight queens puzzle, the map colouring problem, Sudoku, Futoshiki, Kakuro (Cross Sums), Numbrix, Hidato, and many other logic puzzles.
Varieties of constraints
Unary constraints involve a single variable, e.g., SA ≠ green.
Binary constraints involve pairs of variables, e.g., SA ≠ WA.
Higher-order constraints involve 3 or more variables, e.g., the column constraints in cryptarithmetic.
Example: Map-Coloring
(The original slides show the map-coloring CSP: regions such as WA and SA must be assigned colors so that no two adjacent regions get the same color.)
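A small backtracking sketch for the map-coloring CSP; the Australia adjacency list below is my own encoding, consistent with the SA and WA constraints mentioned above:

```python
NEIGHBOURS = {                                    # regions that share a border (assumed map)
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLOURS = ["red", "green", "blue"]

def colour_map(assignment=None):
    """Backtracking search: assign colours so no two neighbouring regions share a colour."""
    assignment = assignment or {}
    unassigned = [r for r in NEIGHBOURS if r not in assignment]
    if not unassigned:
        return assignment                         # every region coloured consistently
    region = unassigned[0]
    for colour in COLOURS:
        if all(assignment.get(n) != colour for n in NEIGHBOURS[region]):
            result = colour_map({**assignment, region: colour})
            if result is not None:
                return result
    return None                                   # no consistent colour here: backtrack

print(colour_map())
```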
Example: Cryptarithmetic (TWO + TWO = FOUR)
Variables: F, T, U, W, R, O and the carry variables X1, X2, X3.
Constraints: alldiff(F, T, U, W, R, O);
O + O = R + 10·X1;
X1 + W + W = U + 10·X2;
X2 + T + T = O + 10·X3;
X3 = F; T ≠ 0; F ≠ 0.
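These constraints encode the sum TWO + TWO = FOUR. A brute-force sketch that checks them directly (illustrative only; a real CSP solver would propagate constraints rather than enumerate):

```python
from itertools import permutations

def solve_two_two_four():
    """Yield digit assignments with TWO + TWO = FOUR, all letters distinct, T and F nonzero."""
    letters = "FTUWRO"
    for digits in permutations(range(10), len(letters)):   # distinct digits by construction
        val = dict(zip(letters, digits))
        if val["T"] == 0 or val["F"] == 0:
            continue
        two = 100 * val["T"] + 10 * val["W"] + val["O"]
        four = 1000 * val["F"] + 100 * val["O"] + 10 * val["U"] + val["R"]
        if two + two == four:
            yield val, two, four

print(next(solve_two_two_four()))    # first solution found: TWO = 765, FOUR = 1530
```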
Real-world CSPs
Many real-world problems, such as assignment, timetabling and scheduling problems, can be formulated as CSPs by choosing appropriate variables and constraints.