Unit II - AI Notes - Search Methods

The document discusses problem solving in AI as a search process, emphasizing the importance of problem definition, analysis, and knowledge representation. It outlines state-space search techniques, including examples like the water jug problem, and contrasts uninformed and informed search methods. Additionally, it highlights the role of heuristics in improving search efficiency for complex problems.

AI/Unit-I(b)

Problems and Search


Problem Solving as Search
Problem solving involves:
 Problem definition – This must include precise specifications of what the initial situation(s) will be and what final situations constitute an acceptable solution to the problem.
 Problem analysis – Identify the various possible techniques for solving the problem.
 Knowledge representation – Represent the task knowledge that is necessary to solve the problem.
 Problem solving – Select the best technique.
Problem Definition
To solve the problem of playing a game (restricted here to two-person table or board games), we require the rules of the game and the conditions for winning, as well as a means of representing positions in the game. The opening position can be defined as the initial state and a winning position as a goal state; there can be more than one goal state. Legal moves allow transfer from the initial state to other states leading to a goal state. However, the rules are far too copious in most games, especially chess, where the number of possible positions exceeds the number of particles in the universe. Thus the rules cannot in general be enumerated exhaustively, and computer programs cannot easily handle them all. Storage also presents a problem, though searching can be helped by hashing.
The number of rules used must be minimised, and the set can be produced by expressing each rule in as general a form as possible. Representing games in this way leads to a state space representation, which is natural for well-organised games with some structure. This representation allows a formal definition of a problem as the movement from a set of initial positions to one of a set of target positions. The solution then involves known techniques and a systematic search. This is quite a common method in AI.
 Well-organised problems (e.g. games) can be described by a set of rules.
 Rules can be generalised into a state space representation:
- a formal definition of the problem;
- movement from an initial state to one of a set of target states;
- movement achieved via a systematic search.

In general, search techniques are used to find a sequence of actions that will get us from some
initial state to some goal state(s).
The actions may be:
 Physical actions (such as move from town A to town B, or put block C on the table)
 Or may be more abstract actions, like theorem proving steps (we may be searching for a
sequence of steps that will allow us to prove X from the set of facts S).
There are two basic approaches to using search in problem solving.
 In the first (state-space search) a node in the search tree is a state of the world. Successor
nodes in the search tree will be possible new states.
 In the second (problem reduction) a node in the search tree is a goal (or set of goals) to be
satisfied. Successor nodes will be the different sub goals that can be used to satisfy that
goal. These two approaches will be discussed in turn (though we'll emphasize state-space
search).

Page 1 of 22

Problem as State Space Search

State space search techniques are often illustrated by showing how they can be used in
solving puzzles of the sort you find in intelligence tests. One such puzzle is the water jug
problem:
“You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marks on it. There is a tap that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?”
Given such a problem, we have to decide
 How to represent the problem state (e.g., amount of water in each jug),
 What are the initial and final states in this representation, and
 How to represent the actions available in the problem, in terms of how they change the
problem state.
This particular puzzle is based on a simple problem-solving domain where the problem state can be represented as a pair of numbers giving the amount of water in each jug (e.g., [4, 3] means that there are 4 gallons in the 4-gallon jug and 3 in the 3-gallon jug). The initial state is [0, 0] and the final state is [2, X], where X can take any value. There are only a small number of available actions (e.g., fill the 4-gallon jug), and these can be simply represented as rules or operators, which show how the problem state changes under each action:

1. [x, y] → [4, y] if x < 4 — Fill the 4-gallon jug
2. [x, y] → [x, 3] if y < 3 — Fill the 3-gallon jug
3. [x, y] → [0, x + y] if x + y ≤ 3 and x > 0 — Pour all water from the 4-gallon jug into the 3-gallon jug
4. [x, y] → [x + y, 0] if x + y ≤ 4 and y > 0 — Pour all water from the 3-gallon jug into the 4-gallon jug
5. [x, y] → [4, x + y − 4] if x + y > 4 — Fill the 4-gallon jug from the 3-gallon jug
6. [x, y] → [x + y − 3, 3] if x + y > 3 — Fill the 3-gallon jug from the 4-gallon jug
7. [x, y] → [x, 0] if y > 0 — Empty the 3-gallon jug on the ground
8. [x, y] → [0, y] if x > 0 — Empty the 4-gallon jug on the ground
9. [x, y] → [x, y − d] if y > 0 — Pour some water out of the 3-gallon jug
10. [x, y] → [x − d, y] if x > 0 — Pour some water out of the 4-gallon jug
11. [0, 2] → [2, 0] — Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. [2, y] → [0, y] — Empty the 2 gallons in the 4-gallon jug on the ground


Rules such as [x, y] → [x, 0] mean that we can get from a state where there are x gallons in the first jug and y in the second to a state where there are x gallons in the first and none in the second, using the given action. A condition such as “if x + y ≤ 3” means that we can only apply the rule when that condition holds. We'll also only consider actions that cause a change in the current state.
We have to look at all our actions, and find ones that apply given the current state. We
can then use the rules above to find the state resulting from that action. A particular node in the
search tree, rather than being just the name of a town, will now be the representation of a
particular state (e.g., [2, 3]).

One solution to the water jug problem is (rule numbers refer to the table above):

Gallons in the 4-gallon jug   Gallons in the 3-gallon jug   Rule applied
0                             0                             (start)
0                             3                             2
3                             0                             4
3                             3                             2
4                             2                             5
0                             2                             8
2                             0                             4 or 11

Another solution is to fill the 4-gallon jug, fill the 3-gallon jug from the 4-gallon jug, empty the 3-gallon jug, empty the 4-gallon jug into the 3-gallon jug, fill the 4-gallon jug, and fill the 3-gallon jug from the 4-gallon jug again.
In general, you have to decide on a representation of the state of the world and of the available actions in the domain, and then systematically go through all possible sequences to find one that takes you from your initial state to a target state.
In the water jug problem there is no real need to use a heuristic search technique: the domain is sufficiently constrained that you can go through all possibilities pretty quickly. In fact, it's hard to think what a good evaluation function would be. If you want 2 gallons in the jug, does having 1 or 3 gallons mean you are close to a solution? Many problems have the property that you seem to have to undo some of the apparent progress you have made in order to reach the solution (e.g., empty out the jug when you've got 1 gallon in it).
There are lots of other problems that have been solved using similar techniques, and which are often found in AI textbooks. The “Missionaries and cannibals” problem, the “Monkeys and bananas” problem and the “8-puzzle” are three that are often quoted.
However, search techniques are used in many more serious AI applications, such as:
 Language understanding, where we may search through possible syntactic structures;
 Robotics, where we may search for a good path for the robot to take;
 Vision, where we may search for meaningful interpretations of features of the object to
be recognized.

To solve real problems you generally have to put much more effort into representing the problem. You have to decide exactly:
 What a representation of a world state should be,
 What the initial and goal states are, and
 What the available operators/actions are, that allow you to transform one state into
another.
 You also need to decide whether or not state space search is appropriate, or whether, for
example, problem reduction, described above, might be better.
State Space Representation Example
Games: tic-tac-toe

The Travelling Salesman Problem


Find the shortest path
- Visit all cities
- Return home
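For a small, hypothetical distance matrix the problem can even be solved by brute force, which also shows why the state space explodes: there are (n − 1)! distinct tours to try. A minimal sketch (the matrix and function name are our own):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively try every tour that starts and ends at city 0.

    `dist` is a symmetric matrix of pairwise distances. Feasible only
    for a handful of cities -- exactly why heuristic search is needed."""
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)              # visit all cities, return home
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost
```

With 10 cities this already examines 362,880 tours; with 20 it is hopeless, motivating the heuristic methods later in these notes.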


Traveling Salesman Partial State Space


The Jug Puzzle

Figure : Partial state-space tree for a jug puzzle, rooted at (0,0), with successor states such as (3,0) and (0,4) generated by the fill, empty, and pour operators.


Searching/Search Methods:
There are two basic category of search methods:
 Uninformed/Blind search – A blind or uninformed search uses no information other than the initial state, the goal state, and a test for a solution; it can only expand nodes according to their position in the search order.
 Informed/Heuristic search – Uses domain-specific information to decide where to search next.
Uninformed/Blind Search
Depth-First Search (DFS):
1. Set L to be a list of the initial nodes in the problem.
2. If L is empty, fail; otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node.
4. Otherwise remove n from L and add all of n's children to the front of L, labelling each child with its path from the initial node.
5. Return to step 2.
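The numbered steps above can be rendered directly in Python; this is a minimal sketch (the example tree in the usage below is hypothetical) that keeps L as a list of (node, path) pairs:

```python
def depth_first_search(initial, goal_test, children):
    """List-based DFS: new children go on the FRONT of L (step 4)."""
    L = [(initial, [initial])]
    visited = set()                      # guard against revisiting (needed for graphs)
    while L:
        node, path = L.pop(0)            # step 2: pick the first node
        if goal_test(node):
            return path                  # step 3: goal found, return its path
        if node in visited:
            continue
        visited.add(node)
        # step 4: children to the front, each labelled with its path
        L = [(c, path + [c]) for c in children(node) if c not in visited] + L
    return None                          # step 2: L empty -> fail
```

For example, on the tree A → {B, C}, B → {D, E}, C → {F}, searching for F visits A, B, D, E, C, F in that order, returning the path A, C, F.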

Note: All numbers in Fig refer to order visited in search.


Advantages of DFS:
1. Depth-first search requires less memory, since only the nodes on the current path are stored.
2. It may find a solution without examining much of the search space at all.
Disadvantages of DFS:
1. DFS may follow a single unfruitful path for a very long time.
2. DFS may find a long path to a solution in one part of the tree when a shorter path exists in some other unexplored part of the tree.
Breadth-First Search (BFS):
1. Set L to be a list of the initial nodes in the problem.
2. If L is empty, fail; otherwise pick the first node n from L.
3. If n is a goal state, quit and return the path from the initial node.
4. Otherwise remove n from L and add all of n's children to the end of L, labelling each child with its path from the initial node.
5. Return to step 2.
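The same list-based skeleton works here; the only change from depth-first search is that children are appended to the END of L (step 4), which is what guarantees the shallowest solution is found first. A minimal sketch with a hypothetical example graph:

```python
from collections import deque

def breadth_first_search(initial, goal_test, children):
    """List-based BFS: new children go on the END of L (step 4)."""
    L = deque([(initial, [initial])])
    visited = {initial}
    while L:
        node, path = L.popleft()         # step 2: pick the first node
        if goal_test(node):
            return path                  # step 3
        for c in children(node):         # step 4: append children to the end
            if c not in visited:
                visited.add(c)
                L.append((c, path + [c]))
    return None                          # step 2: L empty -> fail
```

On a graph where G is reachable in two steps via C but in three via B and D, BFS returns the two-step path.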


Note: All numbers in Fig refer to order visited in search.


Advantages of BFS:
1. BFS will not get trapped exploring a blind alley.
2. If there is a solution, BFS is guaranteed to find it. Furthermore, if there are multiple solutions, a minimal solution (one requiring the minimum number of steps) will be found.
Disadvantages of BFS:
1. BFS requires more memory, since all of the tree generated so far must be stored.
2. All nodes at level n must be examined before any node at level n+1 can be examined.
Heuristic Search
A heuristic is a method that:
 might not always find the best solution,
 but usually finds a good solution in reasonable time;
 by sacrificing completeness it increases efficiency;
 is useful for tough problems that could not be solved any other way, where exact solutions would take a very long (or infinite) time to compute.
The classic example of heuristic search methods is the travelling salesman problem.
The basic idea of heuristic search is that, rather than trying all possible search paths, you
try and focus on paths that seem to be getting you nearer your goal state. Of course, you
generally can't be sure that you are really near your goal state - it could be that you'll have to take
some amazingly complicated and circuitous sequence of steps to get there. But we might be able
to have a good guess. Heuristics are used to help us make that guess.
To use heuristic search you need an evaluation function that scores a node in the search
tree according to how close to the target/goal state it seems to be. This will just be a guess, but it
should still be useful. For example, for finding a route between two towns, a possible evaluation function might be the ‘as the crow flies’ distance between the town being considered and the target town. It may turn out that this does not accurately reflect the actual (by road) distance - maybe there aren't any good roads from this town to your target town. However, it provides a quick way of guessing that helps in the search.
In order to find the most promising nodes, a heuristic function f' is needed, where f' is an approximation to f and is made up of two parts, g and h':
 g is the cost of going from the initial state to the current node; here g is taken simply to be the number of arcs traversed, each treated as having unit weight.
 h' is an estimate of the additional cost of getting from the current node to the goal state.
The function f' is thus an estimate of the cost of getting from the initial state to the goal state through the current node. Both g and h' are positive valued:
f'(n) = g(n) + h'(n), where n is a node.
We'll assume that we are searching trees rather than graphs (i.e., there aren't any loops etc).
However, all the algorithms can be simply extended for graph search by using a closed list of
nodes as described above (though this is unnecessary for hill climbing).
Note that there are many minor variants of these algorithms, which are described in many
different ways. Don't expect all the algorithms in different textbooks to look identical.

Heuristic Search methods


Generate and Test Algorithm
1. Generate a possible solution, which can be either a point in the problem space or a path from the initial state.
2. Test to see whether this possible solution is a real solution by comparing the state reached with the set of goal states.
3. If it is a real solution, return it; otherwise repeat from step 1.
This method is basically a depth-first search, as complete solutions must be created before testing. It is often called the British Museum method, as it is like looking for an exhibit at random. A heuristic is needed to sharpen up the search. The most straightforward way to implement generate and test is as a depth-first search tree with backtracking.
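In code, generate-and-test is just a loop over a candidate generator. A minimal sketch; the mini puzzle in the usage (find digits S, U, M with S + U = M, all distinct, S nonzero) is our own illustrative example, not from the notes:

```python
import itertools

def generate_and_test(candidates, is_solution):
    """Generate-and-test in its plainest form: enumerate candidates,
    return the first one the tester accepts, or None if none passes."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None

# Toy use: assign distinct digits to (S, U, M) such that S + U == M and S != 0.
found = generate_and_test(
    itertools.permutations(range(10), 3),
    lambda t: t[0] != 0 and t[0] + t[1] == t[2],
)
```

Note that nothing guides the generator toward promising candidates — exactly the weakness the heuristic methods below address.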
Hill climbing
Here the generate-and-test method is augmented by a heuristic function which measures the closeness of the current state to the goal state.
Algorithm:
1. Evaluate the initial state; if it is a goal state, quit; otherwise make it the current state.
2. Select a new operator for this state and generate a new state.
3. Evaluate the new state:
(i) If it is a goal state, return it and quit.
(ii) If it is not a goal state but is better than the current state, make it the current state.
(iii) If it is not better than the current state, continue in the loop.
Note: Hill climbing may fail to find a solution: the algorithm may terminate by reaching a state from which no better states can be generated.
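The loop above can be sketched in a few lines of Python; `evaluate` and `neighbours` are assumed to be supplied by the problem (the integer example in the usage is our own):

```python
def hill_climb(initial, evaluate, neighbours):
    """Simple hill climbing: move to the first neighbour that improves on
    the current state; stop when none does (possibly only a local maximum)."""
    current = initial
    while True:
        improved = False
        for candidate in neighbours(current):
            if evaluate(candidate) > evaluate(current):
                current = candidate     # step 3(ii): better state becomes current
                improved = True
                break
        if not improved:
            return current              # no better successor: goal or local maximum
```

For example, maximising f(x) = −(x − 3)² over the integers with neighbours x ± 1, starting from 0, climbs 0 → 1 → 2 → 3 and stops at the peak.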
Problems with Hill climbing:
Sometimes the algorithm terminates not by finding a goal state but by reaching a state from which no better states can be generated. This will happen if the program has reached one of the following:
Local maximum: a state that is better than all its neighbours but not better than some other states further away.
One solution to this problem is to backtrack to an ancestor node and try a secondary path choice.
Plateau: a flat area of the search space in which a whole set of neighbouring states have the same value, so it is not possible to determine the best direction in which to move by making local comparisons.
One solution to this problem is to make a big jump in some direction to try to get to a new section of the search space.
Ridge: a special kind of local maximum; an area of the search space that is higher than the surrounding areas and that itself has a slope.
One solution to this problem is to apply two or more rules before doing the test, which corresponds to moving in several directions at once.


Best First Search


A combination of depth-first and breadth-first search. Depth-first is good because a solution can be found without computing all nodes, and breadth-first is good because it does not get trapped in dead ends.
In best-first search, at each step the most promising node is chosen. If one of the chosen nodes generates nodes that are less promising, it is possible to choose another node at the same level, and in effect the search changes from depth to breadth. If, on analysis, these are no better, the previously unexpanded nodes are not forgotten: the search reverts to the descendants of the first choice and proceeds, backtracking as it were.
This process is very similar to hill climbing, but in hill climbing, once a move is chosen the others are rejected and never reconsidered, while in best-first search they are saved so that they can be revisited if an impasse occurs on the apparently best path. Also, in best-first search the best available node is selected even if its value is worse than the value of the node just explored, whereas in hill climbing progress stops if there are no better successor nodes.
Heuristics: As before, the most promising node is found using f'(n) = g(n) + h'(n), where g is the cost of going from the initial state to the current node (here, the number of unit-weight arcs traversed) and h' is an estimate of the additional cost of getting from the current node to the goal state.
Best First Search Algorithm:
1. Place the starting node S on the queue.
2. If the queue is empty, return failure and stop.
3. If the first node on the queue is a goal node G, return success and stop. Otherwise:
4. Remove the first element from the queue, generate its children, and compute their costs. Place the children on the queue (at either end) and sort all queue elements into ascending order of cost.
5. Return to step 2.
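Using a priority queue makes step 4's "sort the queue" implicit: the heap always yields the node with the lowest f' score. A minimal sketch (function names and the example graph below are ours):

```python
import heapq

def best_first_search(start, goal_test, children, f):
    """Best-first search: always expand the open node with the lowest f'
    score, keeping every generated node around so other branches can be
    revisited if the current path hits an impasse."""
    counter = 0                          # tie-breaker so heapq never compares nodes
    queue = [(f(start), counter, start, [start])]
    seen = {start}
    while queue:
        _, _, node, path = heapq.heappop(queue)   # most promising node first
        if goal_test(node):
            return path
        for c in children(node):
            if c not in seen:
                seen.add(c)
                counter += 1
                heapq.heappush(queue, (f(c), counter, c, path + [c]))
    return None
```

With scores S:3, A:1, B:2, G:0 on the graph S → {A, B}, A → {G}, B → {G}, the search expands A before B and returns the path S, A, G.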


Note: In hill climbing, one move is selected and all the others are rejected, never to be reconsidered.
In best-first search, one move is selected, but the others are kept around so that they can be revisited later.

Solving 8-puzzle problem by best-first search method


A* Search
The best-first algorithm is a simplified form of the A* algorithm.
The A* search algorithm operates on an OR graph, which avoids the problem of node duplication, and assumes that each node has a parent link (giving the best node from which it came) and links to all its successors. In this way, if a better path to a node is found, the improvement can be propagated down to its successors.
This method of using an OR graph requires two lists of nodes:
 OPEN is a priority queue of nodes that have been evaluated by the heuristic function but have not yet been expanded into successors. The most promising nodes are at the front.
 CLOSED contains nodes that have already been expanded; these must be stored because a graph is being searched rather than a tree.
Heuristics: As before, f'(n) = g(n) + h'(n), where g is the cost of going from the initial state to the current node and h' is an estimate of the additional cost of getting from the current node to the goal state.
Algorithm:
1. Place the starting node S on OPEN.
2. If OPEN is empty, stop and return failure.
3. Pick the node BEST from OPEN such that f' = g + h' is minimal.
4. If BEST is a goal node, quit and return the path from the initial node to BEST. Otherwise:
5. Expand BEST, generating all of its successors BEST*, and place BEST on CLOSED. For every successor BEST*:
 If BEST* is not already on OPEN or CLOSED, attach a back-pointer to BEST, compute f'(BEST*) and place it on OPEN.
 If BEST* is already on OPEN or CLOSED, redirect its back-pointer to whichever parent gives the lowest g(BEST*).
 If BEST* was on CLOSED and its back-pointer was changed, remove it from CLOSED and place it on OPEN.
6. Return to step 2.
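A compact Python sketch of the idea. Note one deliberate deviation from the listing above, which we name plainly: instead of an explicit CLOSED list with back-pointer rewiring, stale heap entries are simply skipped, which has the same effect for this sketch. The example graph and heuristic in the usage are hypothetical:

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* sketch. successors(n) yields (child, step_cost) pairs; h(n) is the
    heuristic estimate h'. OPEN is a heap ordered by f' = g + h'; back-pointers
    live in `parent` and are updated whenever a cheaper path is found."""
    g = {start: 0}
    parent = {start: None}
    open_heap = [(h(start), start)]          # entries are (f', node)
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if f > g[node] + h(node):
            continue                         # stale entry: a cheaper path was found
        if goal_test(node):
            path, n = [], node
            while n is not None:             # follow back-pointers to the start
                path.append(n)
                n = parent[n]
            return path[::-1], g[node]
        for child, cost in successors(node):
            tentative = g[node] + cost
            if tentative < g.get(child, float("inf")):
                g[child] = tentative         # better path: update cost and
                parent[child] = node         # redirect the back-pointer
                heapq.heappush(open_heap, (tentative + h(child), child))
    return None
```

On a graph with edges S→A (1), S→B (4), A→B (2), A→G (5), B→G (1) and an admissible h, the search returns the optimal path S, A, B, G of cost 4.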

Several interesting observations can be made about this algorithm.

1. The role of the g function: it lets us choose which node to expand next on the basis not only of how good the node itself looks, but also of how good the path to it was. If we want to find the path involving the fewest steps, we set the cost of going from a node to its successor to a constant (usually 1).
2. h', the estimator of h: if h' is a perfect estimator of h, then A* will converge immediately to the goal with no search. On the other hand, if h' is always 0, the search is controlled by g; if g is also 0, the search is random; if g is always 1, the search is breadth-first.
What happens when h' is neither perfect nor 0? If h' never overestimates h, then the A* algorithm is guaranteed to find an optimal (as determined by g) path to a goal, if one exists. This can easily be seen from a few examples.

Figure : h' Underestimates h


Consider the situation shown in the figure above. Assume that the cost of all arcs is 1. Initially, all nodes except A are on OPEN (although the figure shows the situation two steps later, after B and E have been expanded). For each node, f' is indicated as the sum of h' and g. In this example, node B has the lowest f', 4, so it is expanded first. Suppose it has only one successor, E, which also appears to be three moves away from a goal. Now f'(E) is 5, the same as f'(C). Suppose we resolve this tie in favour of the path we are currently following; then we expand E next. Suppose it too has a single successor, F, also judged to be three moves from a goal. We are clearly using up moves and making no progress, but f'(F) = 6, which is greater than f'(C), so we expand C next. Thus we see that by underestimating h(B) we have wasted some effort, but eventually we discover that B was farther away than we thought and we go back and try another path.


Now consider the situation shown in the figure below. Again we expand B on the first step and E on the second; at the next step we expand F, and finally we generate G, for a solution path of length 4. But suppose there is a direct path from D to a solution, giving a path of length 2. We will never find it: by overestimating h'(D) we make D look so bad that we may find some other, worse solution without ever expanding D. In general, if h' might overestimate h, we cannot be guaranteed of finding the cheapest solution path unless we expand the entire graph until all paths are longer than the best solution found. This is the practical significance of the theorem: if h' never overestimates h, then A* is admissible.

Figure : h' Overestimates h


3. Relationship between trees and graphs: the algorithm was stated in its most general form, as it applies to graphs. It can, of course, be simplified to apply to trees by not bothering to check whether a new node is already on OPEN or CLOSED.
Admissibility condition:
An algorithm is admissible if it is guaranteed to return an optimal solution whenever one exists.
Graceful decay of admissibility:
If h' rarely overestimates h by more than d, then the A* algorithm will rarely find a solution whose cost is more than d greater than the cost of the optimal solution.
Completeness condition:
An algorithm is complete if it always terminates with a solution whenever one exists.
Note: The A* algorithm is both complete and admissible.


Problem Reduction

If we are looking for a sequence of actions to achieve some goal, then


 One way to do it is to use state-space search, where each node in the search space is a state of the world, and you are searching for a sequence of actions that gets you from an initial state to a final state.
The simple state-space search techniques described above can all be represented using a tree where each successor node represents an alternative action to be taken. The graph structure being searched is referred to as an OR graph.
 Another way is to consider the different ways that the goal state can be decomposed into
simpler sub goals.
For example, when planning a trip to Goa you probably don't want to search through
all the possible sequences of actions that might get you to Goa. You're more likely to
decompose the problem into simpler ones - such as
“Getting to the station, then getting a train/bus to Goa.”
There may of course be more than one possible way of decomposing the problem - an
alternative strategy might be
 To get to the airport,
 Fly to Bombay, and
 Get the Bus/Taxi from Bombay to Goa.
These different possible plans would have different costs (and benefits) associated with
them, and you might have to choose the best plan.
And-Or Graphs:
To represent problem reduction techniques we need to use an AND-OR graph/tree:
 AND nodes, whose successors must all be achieved, and
 OR nodes, where one of the successors must be achieved (i.e., they are alternatives).
This allows us to represent both cases where ALL of a set of sub-goals must be satisfied to achieve some goal, and cases where there are alternative sub-goals, any one of which could achieve the goal.
E.g.

Figure : A Simple AND-OR Graph


In this example we have alternating levels of AND nodes and OR nodes. Successors of
AND nodes represent goals to be jointly achieved. Successors of OR nodes represent different
ways of achieving a goal. Some AND-OR trees have levels that involve both AND and OR
nodes, but this tends to be less clear.
There are many possible ways of searching AND-OR graphs.
1. One way that is suggested by Rich & Knight for searching AND-OR trees is:


2. Another way is to effectively transform them back into OR graphs, where each node represents a whole set of goals to be satisfied. In terms of our search algorithms, each item on the LIST is then a set of goals.
 Finding a successor to an item on the LIST involves picking a non-primitive goal
(say G) from this set of goals and finding possible sub goals for that goal.
 If the node (corresponding to G) was an AND node then there is a single
successor which is a set of goals with the goal G replaced by its sub goals.
 If the node was an OR node then there will be several possible successor nodes, each being a set of goals with G replaced by a possible successor.
 A final ‘goal state’ in these terms will be a set of directly executable or primitive
goals/actions, and our node lists will be lists of lists of goals, where each sub list
will represent a partially developed possible plan.
Things get slightly more complex when you want to use heuristic search. Then we have to
evaluate the ‘goodness’ of a whole set of goals. If each goal has an associated (approximate) cost,
then the cost of a set of goals will just be the sum of these costs.

Note:
1. The above is simpler, but will lead to non-optimal performance. In particular, it will
result in some redundant work.
2. AND-OR graph (or tree) search techniques are needed if we want to use backward
chaining to prove something given a set of rules of the form “IF A AND B AND ...
THEN C”. The problem of proving (say) C is being reduced to the problems of proving A
and proving B etc.
3. And-Or Graphs are useful for certain problems where :
 The solution involves decomposing the problem into smaller problems.

 We then solve these smaller problems.


4. Arcs are used to indicate that several successor nodes must all be satisfied before the parent node is achieved.
5. For finding solutions in an AND-OR graph, the plain best-first algorithm is inadequate: it cannot deal with the AND links well. It can, however, be used as a basis (with modification) to handle sets of nodes linked by AND arcs.

AO* Algorithm
1. Let GRAPH consist only of the node representing the initial state. (Call this node INIT.)
Compute h'(INIT).
2. Until INIT is labeled SOLVED or until INIT's h' value becomes greater than FUTILITY, repeat the following procedure:
a.) Trace the labeled arcs from INIT and select for expansion one of the as yet
unexpanded nodes that occurs on this path. Call the selected node NODE.
b.) Generate the successors of NODE. If there are none, then assign FUTILITY as the h’
value of NODE. This is equivalent to saying that NODE is not solvable. If there are
successors, then for each one (called SUCCESSOR) that is not also an ancestor of NODE
do the following:
i.) Add SUCCESSOR to GRAPH.
ii.) If SUCCESSOR is a terminal node, label it SOLVED and assign it an h’ value
of 0.
iii.) If SUCCESSOR is not a terminal node, compute its h' value.
c.) Propagate the newly discovered information up the graph as follows. Let S be the set of nodes that have been labeled SOLVED or whose h' values have been changed, and so need to have values propagated back to their parents. Initialize S to NODE. Until S is empty, repeat the following procedure:
i.) If possible, select from S a node none of whose descendants in GRAPH occurs in
S. If there is no such node, select any node from S. Call this node CURRENT, and
remove it from S.
ii.) Compute the cost of each of the arcs emerging from CURRENT. The cost of
each arc is equal to the sum of the h’ values of each of the nodes at the end of the arc
plus whatever the cost of the arc itself is. Assign as CURRENT's new h’ value the
minimum of the costs just computed for the arcs emerging from it.
iii.) Mark the best path out of CURRENT by marking the arc that had the minimum
cost as computed in the previous step.
iv.) Mark CURRENT SOLVED if all of the nodes connected to it through the newly labeled arc have been labeled SOLVED.
v.) If CURRENT has been labeled SOLVED or if the cost of CURRENT was just
changed, then its new status must be propagated back up the graph. So add all of the
ancestors of CURRENT to S.

State space search vs problem reduction


Most problems could, in principle, be formulated in either state space or problem
reduction terms. However, usually one way of formulating the problem will be more natural and
more efficient. The appropriate technique depends both on the nature of the solution to the
problem, and on the most natural way to go about solving it.

Page 18 of 22
AI/Unit-I(b)

Generally speaking, state space search may be good when the solution to a problem is
naturally expressed in terms of either a final state, or a path from an initial state to a final state.
We also need to be able to define rules for transforming one state into another, based on
available actions in the domain.
Problem reduction may be better if it is easy to decompose a problem into independent
subproblems - we would have to define rules to do this. It may also allow a more natural
explanation of the decision-making process that led to a solution, and may
result in less search than state-space approaches (so it may be better for very complex problems).
Constraint Satisfaction
Many problems in AI can be viewed as problems of constraint satisfaction, in which:
 The goal is to find a solution that satisfies a set of constraints.
 Heuristics are used not to estimate the distance to the goal but to decide which node to expand
next.
 Examples of this technique are design problems, graph labelling, robot path planning and
cryptarithmetic puzzles.
Constraint satisfaction is a search procedure that operates in a space of constraint sets. The
initial state contains the constraints that are given in the problem. A goal state is any state that has
been constrained “enough”, where ‘enough’ must be defined for each problem.
E.g. for a cryptarithmetic problem, enough means that each letter has been assigned a unique
numeric value.
Constraint satisfaction is a two-step process:
1. Constraints are discovered and propagated as far as possible throughout the system. Then, if
there is still not a solution, search begins.
2. A guess about something is made and added as a new constraint. Propagation can then
begin with this new constraint.

Algorithm:
1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must
have values assigned to them in a complete solution. Then do until an inconsistency is detected
or until OPEN is empty:
a.) Select an object OB from OPEN. Strengthen as much as possible the set of constraints
that apply to OB.
b.) If this set is different from the set that was assigned the last time OB was examined or
if this is the first time OB has been examined, then add to OPEN all objects that share
any constraints with OB.
c.) Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report the
solution.
3. If the union of the constraints discovered above defines a contradiction, then return failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to
proceed. To do this, loop until a solution is found or all possible solutions have been eliminated:
a) Select an object whose value is not yet determined and select a way of strengthening the
constraints on that object.
b) Recursively invoke constraint satisfaction with the current set of constraints augmented
by the strengthening constraint just selected.
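The guess-and-recurse part of the algorithm (step 4) can be sketched as a small backtracking search. This is a minimal illustration under assumptions of its own: objects are variables with finite domains, "strengthening a constraint" is reduced to binding a variable and checking consistency, and full constraint propagation (step 1) is omitted.

```python
def solve(domains, constraints):
    """domains: dict var -> iterable of candidate values.
    constraints: list of (vars, predicate); a predicate is checked
    once every variable it mentions has been assigned a value.
    Returns one satisfying assignment, or None on contradiction."""
    def consistent(assignment):
        for vars_, pred in constraints:
            if all(v in assignment for v in vars_):
                if not pred(*(assignment[v] for v in vars_)):
                    return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(domains):
            return assignment                   # solution found (step 2)
        # Step 4a: pick an object whose value is not yet determined.
        var = next(v for v in domains if v not in assignment)
        for value in domains[var]:              # guess a value for it
            assignment[var] = value
            if consistent(assignment):          # keep only consistent guesses
                result = backtrack(assignment)  # step 4b: recurse
                if result is not None:
                    return result
            del assignment[var]
        return None                             # contradiction (step 3)

    return backtrack({})

# Toy problem: X + Y = 10 with X < Y, both digits.
print(solve({"X": range(10), "Y": range(10)},
            [(("X", "Y"), lambda x, y: x + y == 10 and x < y)]))
# -> {'X': 1, 'Y': 9}
```

A real constraint-satisfaction solver would interleave propagation (shrinking the domains) with each guess, which is what makes the technique efficient; this sketch relies on guessing alone.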


Example: SEND + MORE = MONEY


SEND + MORE = MONEY is a classical “crypto-arithmetic” puzzle: the variables S, E,
N, D, M, O, R, Y represent digits between 0 and 9, and the task is to find values for them such
that the following arithmetic operation is correct:
    S E N D
  + M O R E
  ---------
  M O N E Y
Moreover, all variables must take unique values, and all the numbers must be well-formed
(which implies that M > 0 and S > 0).
Initially,
 M=1, since two single digit numbers plus a carry cannot total more than 19.
 S=8 or 9, since S+M+C3>9 (to generate a carry) and M=1, so S+1+C3>9, i.e. S+C3>8,
and C3 is at most 1.
 O=0, since S+M(=1)+C3(<=1) must be at least 10 to generate a carry and can be at most
11, so the digit written in the O position is 0 or 1. But M is already 1, so O must be 0.
 N=E or E+1, depending on the value of C2. But N cannot have the same value as E. So
N=E+1 and C2 is 1.
 In order for C2 to be 1, the sum of N+R+C1 must be greater than 9, so N+R must be
greater than 8.
 N+R cannot be greater than 18, even with a carry-in, so E cannot be 9.
At this point, let us assume that no more constraints can be generated. Then to make progress
from here, we must guess:
Suppose E is assigned the value 2; the constraint propagator now observes that:
 N=3, since N=E+1.
 R=8 or 9, since R+N(=3)+C1(0 or 1) must end in 2, i.e. equal 2 or 12. But since N is
already 3, the sum of these nonnegative numbers cannot be less than 3. Thus R+3+(0 or 1)=12
and R=8 or 9.
 2+D=Y or 2+D=10+Y, from the sum in the rightmost column.
Again, assuming no further constraints can be generated, a guess is required, and propagation
continues in this way until a full assignment is reached.
Solution to this problem is:
O=0, M=1, Y=2, E=5, N=6, D=7, R=8, S=9

    9567
  + 1085
  ------
   10652
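The hand propagation above can be cross-checked by brute force. The following Python sketch (not from the notes) simply tries every assignment of distinct digits to the eight letters, with no constraint propagation at all, and confirms the solution:

```python
from itertools import permutations

def send_more_money():
    """Brute-force search: try all assignments of distinct digits to
    S,E,N,D,M,O,R,Y with S and M nonzero (well-formed numbers)."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue
        send  = 1000*s + 100*e + 10*n + d
        more  = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return dict(S=s, E=e, N=n, D=d, M=m, O=o, R=r, Y=y)

print(send_more_money())
# -> {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
```

In contrast to the derivation above, this examines up to 10P8 = 1,814,400 permutations; constraint propagation prunes almost all of them, which is exactly the point of the technique.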

Some other examples of crypt-arithmetic problems, with their solutions, are:

   CROSS + ROADS  = DANGER      (96233 + 62513  = 158746)
  DONALD + GERALD = ROBERT      (526485 + 197485 = 723970)


Figure: Solving a crypt-arithmetic problem

Why are these topics important?

 Search and knowledge representation form the basis for many AI techniques.
 We have studied search in some detail.
Here are a few pointers as to where specific search methods are used in this course.

S. No  Area/Field                 Search Methods
1      Knowledge representation   Best-first search (A*), constraint satisfaction and
                                  means-ends analysis, used in rule-based and
                                  knowledge systems.
2      Uncertain reasoning        Depth-first, breadth-first and constraint
                                  satisfaction methods.
3      Distributed reasoning      Best-first search (A*) and constraint satisfaction.
4      Planning                   Best-first search (A*), AO*, constraint satisfaction
                                  and means-ends analysis.
5      Understanding              Constraint satisfaction.
6      Learning                   Constraint satisfaction, means-ends analysis.
7      Common sense               Constraint satisfaction.
8      Vision                     Depth-first, breadth-first, heuristics, simulated
                                  annealing and constraint satisfaction are all used
                                  extensively.
9      Robotics                   Constraint satisfaction and means-ends analysis,
                                  used in planning robot paths.

Summary of search techniques

