
Problem Solving in AI: Search Algorithms

• Most AI problem solving techniques are implemented as a search for a solution
The nature of problem solving

• Human problem solving involves searching for a solution
• The 'solution' is a structured object built, i.e. synthesised, along the way
• Three generic problem types are:
  State space transformation
  • Manipulate a current situation in an allowable way, starting from some initial position, until a satisfactory end position is reached
  • Transformation is achieved by bringing an operator to bear on the current state
  • There is usually a choice of operators at each point
The nature of problem solving ct'd

  Problem reduction representation
  • The problem is solved by breaking it down at each stage into one or more simpler problems
  • Each state is a sub-problem or sub-goal of the original problem
  • There may or may not be constraints on the order in which these must be solved
  • Eventually primitive problems are reached; the solution to these is immediate, i.e. it does not require further decomposition
  • This is how Prolog solves problems, i.e. queries
  Constraint satisfaction
  • Obtain a solution consistent with a number of given constraints
• Although Prolog naturally uses problem reduction it can, through the definition of appropriate predicates, handle the other types
State space transformation

• Uses the notion that each intermediate problem position is a 'state of the world'
• The set of all possible positions is the state space
• There are three ingredients in the statement of such a problem:
  1. The set of initial states
  2. The set of operators for moving from one state to another
  3. The set of goal states, or the conditions that a state must satisfy to be a goal state
• The problem is solved by finding a path, i.e. a sequence of states, from an initial state to a goal state
Search spaces, graphs and trees

• A problem is represented as a graph, with the nodes characterising situations and the arcs (edges) indicating the permitted transformations
• This graph is usually implicit, since it is too vast to generate wholly
• Arcs are usually directed (giving the direction of permitted travel)
  • If there is a directed arc from a to b, then a is a predecessor or parent of b, and b is a successor of a
  • Additionally, arcs may be labelled (say with the costs of traversal)
• The average number of successors of a node is the branching factor
  • This is a key indicator in considering the computational complexity of the search
Search spaces, graphs and trees ct'd

• A directed acyclic graph (DAG) is one containing no loops or cycles
• An important type of DAG is a tree
  • This has a special node called the root
  • For each node (other than the root) there is one parent, i.e. there is a single arc directed from the parent to the node
  • A node having no successor is called a leaf
  • The maximum distance, in terms of arcs, from the root to a leaf is the depth of the tree
• As the search progresses an explicit graph (or, more usually, a tree) is built - this is the search graph or search tree
  • It can be used to keep track of the development of the solution
• The 'solution' may require finding a path with particular properties (e.g. least cost)
A simple problem

Start: a
Goal: f, j

Successors:
a → b, c    b → d, e
c → f, g    d → h
e → i, j    f → k
Search tree

a
  b
    d
      h
    e
      i
      j
  c
    f
      k
    g

• Goals at depth 2 and 3
• Average branching factor < 2
A basic search algorithm

1. Set L, the open list, to be the list of start nodes
2. If L is empty, fail. Otherwise pick a node, n, from L
3. If n is a goal node, stop and return it together with the path from the initial node to n
4. Otherwise remove n from L and add all of n's successors to L, labelling each with its path from the initial node. Return to step 2.

• Specifying a search algorithm involves giving a recipe for which successor nodes to try (a code sketch of these steps is given below)
  • and, optionally, for recording what has already been tried
• Examining the successors of a node is called expanding the node
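The following is a minimal sketch of this skeleton in Python (the names basic_search, successors and is_goal are illustrative, not from the original slides). The open list L holds each node together with the path used to reach it, and the add_to_front flag is left open because where successors are placed is exactly what the later slides vary.

def basic_search(start_nodes, successors, is_goal, add_to_front=False):
    """Sketch of the basic search algorithm (steps 1-4 above)."""
    L = [(n, [n]) for n in start_nodes]               # step 1: the open list
    while L:                                          # step 2: fail when L is empty
        n, path = L.pop(0)                            # pick a node n from L
        if is_goal(n):                                # step 3: goal test
            return path                               # n plus its path from the start
        children = [(s, path + [s]) for s in successors(n)]
        L = children + L if add_to_front else L + children   # step 4: expand n
    return None                                       # no goal reachable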
Types of search

• This general procedure does not specify how a node is chosen for expansion,
  • i.e. in what order such nodes are chosen
• This specification is all-important, leading to different actual algorithms
• Search can be blind or heuristic
  • A blind search method follows a fixed prescription regardless of the particular characteristics of the problem
    • This is unrealistic except for toy problems
  • A heuristic search method uses knowledge, in the form of heuristics appropriate to the problem in hand, to guide the search
    • thus narrowing the space actually covered
Blind search

• Depth-first (with or without a depth bound)
  • Dive deep into the search tree
  • Nodes for expansion are selected from the front of L and successors to a node are also added to the front of L
  • Use of a depth bound, M, prevents excessive pursuit of a goal to substantial depths when there may be a shallow goal on another branch
• Breadth-first
  • Move across the search tree, then down
  • Nodes for expansion are selected from the front of L and successors to a node are added to the end of L (both strategies are sketched in code below)
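Continuing the hypothetical basic_search sketch from earlier and using the successor table of the simple problem above, the only change between the two blind strategies is where successors are placed in the open list:

succ = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'],
        'd': ['h'], 'e': ['i', 'j'], 'f': ['k']}
goal = lambda n: n in ('f', 'j')

# Depth-first: successors go to the front of L; visits a, b, d, h, e, i, j
print(basic_search(['a'], lambda n: succ.get(n, []), goal, add_to_front=True))
# -> ['a', 'b', 'e', 'j']

# Breadth-first: successors go to the end of L; visits a, b, c, d, e, f
print(basic_search(['a'], lambda n: succ.get(n, []), goal, add_to_front=False))
# -> ['a', 'c', 'f']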
Example of depth-first search

(Search tree as before, rooted at a)

• Nodes are visited in the order: a, b, d, h, e, i, j
• Solution path is: a, b, e, j
Example of breadth-first search

(Search tree as before, rooted at a)

• Nodes are visited in the order: a, b, c, d, e, f
• Solution path is: a, c, f
Complexity of algorithms

• Search algorithms have properties of
  • time complexity,
    • i.e. how long they take to complete the search
  • memory (or space) complexity,
    • i.e. how much storage space they need for intermediate results
• Calculations are made in general or average terms, as the actual requirements will differ from problem to problem
• Complexities are specified in terms of parameters of the problem, e.g. the branching factor, using O notation (meaning 'order of')
  • O(x^2) means the complexity varies as x^2
Complexities for depth and breadth-first

Method          Time                           Memory
Depth-first     b^d / 2  ~ O(b^d)              d(b - 1) + 1  ~ O(db)
Breadth-first   (b^d / 2)(1 + 1/b)  ~ O(b^d)   b^d  ~ O(b^d)

• Both have the same time complexity
  • exponential in the depth
  • It is impossible to improve on this as a general result for any blind search algorithm
• The memory requirement for breadth-first is much worse
  • again, exponential in the depth
  • because alternative paths are stored
Other blind search algorithms

Depth-limited
• A variation on depth-first
• A maximum depth parameter is specified for the search
  • This acts as a 'glass floor': when the maximum depth is reached, backtracking occurs and the search moves across instead of continuing down
Iterative deepening
• A sequence of depth-limited searches, beginning with max depth = 1
• If the goal is not found, the search is repeated with max depth = 2, and so on ...
• Has the same complexity as depth-first yet finds shallow goals first, like breadth-first (see the sketch below)
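A minimal sketch of both ideas, assuming the same style of successor function as before (the names depth_limited and iterative_deepening are illustrative):

def depth_limited(node, successors, is_goal, max_depth, path=None):
    """Depth-first search that backtracks once max_depth is reached."""
    path = path or [node]
    if is_goal(node):
        return path
    if max_depth == 0:                      # hit the depth bound: back up
        return None
    for s in successors(node):
        found = depth_limited(s, successors, is_goal, max_depth - 1, path + [s])
        if found:
            return found
    return None

def iterative_deepening(start, successors, is_goal, limit=50):
    """Repeat depth-limited search with max depth = 1, 2, 3, ..."""
    for max_depth in range(1, limit + 1):
        found = depth_limited(start, successors, is_goal, max_depth)
        if found:
            return found
    return None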
Avoiding loops in the search

• If a node has more than one parent then expansion may lead to re-visiting nodes already considered
  • making a loop or repetition in the path
• Here it is useful to maintain another list to store the nodes which have already been visited
  • This is called the closed list
• After a node has been expanded, it is placed on the closed list
• A successor node is added to L (the open list) provided:
  • it is not in L already, and it is not in the closed list (see the sketch below)
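A sketch of the same basic search with a closed list added (again with illustrative names):

def search_with_closed_list(start_nodes, successors, is_goal, add_to_front=False):
    L = [(n, [n]) for n in start_nodes]              # open list
    closed = set()                                   # nodes already expanded
    while L:
        n, path = L.pop(0)
        if is_goal(n):
            return path
        closed.add(n)                                # n has now been expanded
        already_open = {m for m, _ in L}
        children = [(s, path + [s]) for s in successors(n)
                    if s not in closed and s not in already_open]   # the proviso
        L = children + L if add_to_front else L + children
    return None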
Sample problem: missionaries and cannibals

• Three missionaries and three cannibals are on one side of a river
• Using a boat which can hold at most two people, they must all get to the other side
• If the missionaries are ever outnumbered on one side of the river they will be eaten by the cannibals

• A state can be represented as (Left, Boat, Right), where
  • Left represents the people on the left bank,
  • Boat represents those on the boat (together with the bank it is at), and
  • Right those on the right bank
• Each of Left, Boat and Right is a pair (M, C), where M is the number of missionaries and C the number of cannibals
• Initial state: ( (3,3), (0,0)-left, (0,0) )
• Goal state: ( (0,0), (0,0)-right, (3,3) ) (a sketch of a successor function follows below)
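As an illustration, here is a sketch of a successor (operator) function for this problem. It uses a simplified state (M_left, C_left, boat_side), treating each crossing as a single move rather than using the three-component representation above, and the name mc_successors is illustrative:

def mc_successors(state):
    m_left, c_left, boat = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]   # (M, C) taken in the boat
    direction = -1 if boat == 'left' else 1            # people leave the boat's bank
    for dm, dc in moves:
        m, c = m_left + direction * dm, c_left + direction * dc
        if not (0 <= m <= 3 and 0 <= c <= 3):
            continue
        # missionaries must not be outnumbered on either bank (unless absent)
        if (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c):
            yield (m, c, 'right' if boat == 'left' else 'left')

# e.g. the three legal states reachable from the initial state:
print(list(mc_successors((3, 3, 'left'))))
# -> [(3, 2, 'right'), (3, 1, 'right'), (2, 2, 'right')]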
Sample problem: water containers

• Given both a seven-litre and a five-litre container, initially empty, the goal is to find a sequence of actions which leaves four litres of liquid in the seven-litre container
• There are three kinds of actions which can alter the state of the containers:
  i.   A container can be filled
  ii.  A container can be emptied
  iii. Liquid can be poured from one container into the other, until the first is empty or the second is full
• A state can be represented as (C1, C2), where C1 and C2 are the current volumes in the first (seven-litre) and second (five-litre) containers
• Initial state is (0, 0)
• Goal state is (4, _), i.e. the volume of the second container is irrelevant (see the sketch below)
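A sketch of the three kinds of operator for this problem, using the (C1, C2) representation above (container_successors is an illustrative name). It can be combined with the earlier closed-list search, since this state graph contains cycles:

def container_successors(state):
    c1, c2 = state
    cap1, cap2 = 7, 5
    pour12 = min(c1, cap2 - c2)          # amount that can move from container 1 to 2
    pour21 = min(c2, cap1 - c1)          # amount that can move from container 2 to 1
    candidates = {
        (cap1, c2), (c1, cap2),          # fill either container
        (0, c2), (c1, 0),                # empty either container
        (c1 - pour12, c2 + pour12),      # pour 1 -> 2 until 1 empty or 2 full
        (c1 + pour21, c2 - pour21),      # pour 2 -> 1 until 2 empty or 1 full
    }
    return [s for s in candidates if s != state]

# e.g. find a path from (0, 0) to any state with four litres in the 7-litre container:
print(search_with_closed_list([(0, 0)], container_successors, lambda s: s[0] == 4))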
Heuristic search

• A heuristic, h', is an inspired guess or 'rule of thumb' about where to go next in the search
  • It is often a measure of distance from (or closeness to) the goal - called an evaluation function
  • It uses knowledge of the particular problem
• Nodes not yet visited are evaluated using the measure and the one with the best score is chosen
• Heuristics offer no guarantees
  • They are used because they tend to give better results
Types of heuristic search

• Hill climbing
  • From the current node choose (if possible) a successor node with the best evaluation, and one which improves upon that of the node itself
  • This is a local choice of 'best', leading to a depth-first approach
• Best-first
  • Choose the globally best node on the open list to expand next (see the sketch below)
  • Depth-first and breadth-first are special cases
• Heuristic search will only improve on blind search if the heuristic contains real information about how near the goal is
• This will enable it to:
  • find a short, direct path to the goal, and
  • find this path with little meandering along the way
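A sketch of best-first search, keeping the open list ordered by the heuristic h' so that the globally best-scoring node is expanded next (best_first is an illustrative name; lower scores are taken to be better):

import heapq

def best_first(start, successors, is_goal, h):
    open_list = [(h(start), start, [start])]       # entries are (score, node, path)
    while open_list:
        _, n, path = heapq.heappop(open_list)      # globally best node so far
        if is_goal(n):
            return path
        for s in successors(n):
            heapq.heappush(open_list, (h(s), s, path + [s]))
    return None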
Example of hill-climbing search

(Search tree as before, with heuristic values:
 h'(a)=1.6; h'(b)=0.7, h'(c)=0.8; h'(d)=1.8, h'(e)=0.9, h'(f)=0, h'(g)=2.7;
 h'(h)=4.9, h'(i)=3.7, h'(j)=0, h'(k)=6.2)

• Nodes are visited in the order: a, b, e, j
• Solution path is: a, b, e, j
Example of best-first search

(Same tree and h' values as above)

• Nodes are visited in the order: a, b, c, f
• Solution path is: a, c, f
The A* algorithm

• This is a version of best-first which applies to a special class of evaluation function: f(n) = g(n) + h'(n)
• g(n) is the 'cost' incurred in reaching n
  • Costs are attached to the arcs of the graph
  • If a solution with the shortest path length is required, each arc cost = 1
  • g(n) is the actual cost of reaching n along the path found so far
    • There may be a path (not yet found) to n with lower cost
    • Thus g(n) may over-estimate the minimum cost of reaching n
• h'(n) is a heuristic that estimates the cost that will be incurred in reaching the goal from n
The A* algorithm ct'd

• If the estimate h'(n) always under-estimates the true cost, then h' is said to be admissible
• In this case A* will always find a least-cost path to the goal
  • assuming such a path exists
• Ideally h'(n) should be a tight estimate
  • It should be close to the true cost without going over
  • The tighter the estimate, the more direct or focused the search will be
    • there will be less meandering on the way
• If h'(n) equals the true cost (as determined from the arc costs), then the search progresses directly to the goal (see the sketch below)
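A sketch of A*, differing from the best-first sketch only in ordering the open list by f(n) = g(n) + h'(n); a_star is an illustrative name and unit arc costs are assumed by default:

import heapq

def a_star(start, successors, is_goal, h, arc_cost=lambda a, b: 1):
    open_list = [(h(start), 0, start, [start])]        # entries are (f, g, node, path)
    while open_list:
        f, g, n, path = heapq.heappop(open_list)       # node with least f = g + h'
        if is_goal(n):
            return path, g                             # the path and its cost
        for s in successors(n):
            g_new = g + arc_cost(n, s)                 # actual cost of reaching s this way
            heapq.heappush(open_list, (g_new + h(s), g_new, s, path + [s]))
    return None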
Example of A* search

(Same tree and h' values as above; cost per arc = 1)

• The h' values are admissible, e.g. at b the actual cost of reaching the goal (j) is 1 + 1 = 2, but h'(b) is only 0.7
• At b, f(b) = g(b) + h'(b) = 1 + 0.7 = 1.8
Sample problem: the 8-puzzle

• Requires the eight numbered tiles in a 3x3 grid to be rearranged (an example is shown below)
• An individual move consists of a horizontal or vertical shift of a tile into the space
• Can generalise the problem to n*n - 1 tiles in an n*n grid

Start state     Goal state
 2 1 6           1 2 3
 4 _ 8           8 _ 4
 7 5 3           7 6 5
Heuristic for the 8-puzzle

• A good heuristic is provided by the total, over the numbered tiles, of the Manhattan distance (md) of each tile from its final position
  • md = sum of the horizontal and vertical distances to the correct position
• E.g. in the diagram, tile number 6 has to travel a distance of 1 to the left and 2 down: md = 1 + 2
• This can be improved by adding a sequence misalignment factor (see the sketch below):
  • A tile in the centre scores 1
  • A tile on a non-central square scores 0 if it is followed in the clockwise direction by the correct tile in the goal configuration; otherwise its score is 2
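A sketch of the Manhattan-distance part of this heuristic (the sequence factor is omitted). A state is taken to be a tuple of nine entries read row by row, with 0 standing for the blank space; this representation is an assumption for the sketch rather than one from the slides:

def manhattan(state, goal):
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:                                      # the blank does not count
            continue
        row, col = divmod(index, 3)                        # current position
        goal_row, goal_col = divmod(goal.index(tile), 3)   # final position
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

# The start and goal configurations shown above (0 is the blank):
start = (2, 1, 6, 4, 0, 8, 7, 5, 3)
goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
print(manhattan(start, goal))   # tile 6 alone contributes 1 + 2 = 3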
