
Chapter 3 - Problem Solving by Searching(I)

Chapter 3 discusses problem-solving through searching, focusing on the types of agents, problem and goal formulation, and various search strategies. It outlines the steps involved in searching, including problem formulation, goal formulation, and the execution of action sequences to achieve desired outcomes. The chapter also categorizes problems based on knowledge of states and actions, providing examples such as the vacuum world and various puzzles.


Chapter 3

Solving problems by searching

1
Objectives

Identify the type of agent that solves problems by searching
Problem formulation and goal formulation
Types of problems based on environment type
Discuss various search strategies
Discuss game-playing theory

2
Type of agent that solve problem by searching
 Such an agent is not a reflex or model-based reflex agent, because it
needs to achieve some target (goal)
 It can be a goal-based, utility-based or learning agent
 An intelligent agent knows that to achieve a certain goal, the state of
the environment must change sequentially, and the change should
be towards the goal

3
Steps to undertake during searching
 Problem formulation:
 Involves:
 Abstracting the real environment configuration into state information using a
preferred data structure
 Describing the initial state according to that data structure
 The set of actions possible in a given state at a specific point in the process
 The cost of the action at each state
 Deciding what actions and states to consider, given a goal

 For the vacuum world problem, the problem formulation involves:
 State is described as a list of 3 elements, where the first element describes
information about block A, the second describes information about block B and
the last gives the location of the agent
 Initial state: [dirty, dirty, A]
 Actions: Suck, moveRight, moveLeft
 Determine which of the above actions are valid in a given state
 Cost can be determined in many ways
4
Steps to undertake during searching

 Goal formulation: refers to understanding the objective of the
agent based on the state description of the final environment
 It is based on the current situation and the agent's performance
measure, and is the first step in problem solving
 A goal can be considered as a set of world states: exactly those
states in which the goal is satisfied
 The agent's task is to find out how to act, now and in the
future, so that it reaches a goal state
 For example, for the vacuum world problem, the goal can
be formulated as
[clean, clean, agent at any block]
5
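The vacuum-world formulation and goal above can be sketched as plain Python data and functions. This is an illustrative sketch; the names (`INITIAL`, `result`, `goal_test`) are not from the slides.

```python
# Hypothetical sketch of the vacuum-world formulation described above.
# A state is (status of block A, status of block B, agent location).
INITIAL = ("dirty", "dirty", "A")
ACTIONS = ("Suck", "moveRight", "moveLeft")

def result(state, action):
    """Transition model: the state reached by doing `action` in `state`."""
    a, b, loc = state
    if action == "Suck":
        return ("clean", b, loc) if loc == "A" else (a, "clean", loc)
    if action == "moveRight":
        return (a, b, "B")
    if action == "moveLeft":
        return (a, b, "A")
    raise ValueError(action)

def goal_test(state):
    # Goal: [clean, clean, agent at any block]
    return state[0] == "clean" and state[1] == "clean"
```

Applying the sequence Suck, moveRight, Suck from the initial state reaches a goal state.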
 A solution is a sequence of world states in which the final state
satisfies the goal, or an action sequence in which the last
action results in the goal state
 Solution quality is measured by the path cost function, and
an optimal solution has the lowest path cost among all
solutions
 Each action changes one state to the next state of the world
 A search algorithm takes a problem as input and returns a solution
in the form of an action or state sequence
 To achieve the goal, the action sequence must be executed
accordingly
 The general "formulate-search-execute" algorithm of search is
given below:
6
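The slide's own pseudocode figure is an image; as a rough sketch under the assumptions that `formulate` maps a state to a problem and `search` returns an action list (or None), the formulate-search-execute loop might look like this:

```python
class ProblemSolvingAgent:
    """Minimal formulate-search-execute loop (a sketch, not the slide's figure).
    `formulate` maps a state to a problem; `search` maps a problem to a list
    of actions or None."""
    def __init__(self, formulate, search):
        self.formulate = formulate
        self.search = search
        self.seq = []                # remaining actions of the current plan

    def __call__(self, state):
        if not self.seq:             # no plan left: formulate, then search
            problem = self.formulate(state)
            self.seq = self.search(problem) or []
        return self.seq.pop(0) if self.seq else None
```

Each call returns the next action of the current plan, re-planning when the plan is exhausted.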
Problem-solving agents

7
Agent Program

8
Example: Road map of Ethiopia
[Figure: road map of Ethiopia. Cities: Aksum, Mekele, Gondar, Lalibela, Bahr Dar, Dessie, Debre Markos, Dire Dawa, Jima, Addis Ababa, Nazarez, Nekemt, Gambela, Awasa; edges are roads labeled with distances between 80 and 430 km]
9
Example: Road map of Ethiopia
 Current position of the agent: Awasa
 Needs to arrive at: Gondar
 Formulate goal:
 be in Gondar
 Formulate problem:
 states: various cities
 actions: drive between cities
 Find solution:
 sequence of cities, e.g., Awasa, Nazarez, Addis Ababa, Dessie, Gondar

10
Types of Problems: Knowledge of States and Actions
 What happens when knowledge of the states or actions is incomplete?
 Four types of problems exist in real situations:
1. Single-state problem
 The environment is deterministic, fully observable, static and discrete
 Out of the possible state space, the agent knows exactly which state it will be
in; the solution is a sequence
 Complete world-state knowledge and action knowledge
2. Sensorless problem (conformant problem)
 The environment is non-observable, deterministic, static and discrete
 It is also called a multi-state problem
 Incomplete world-state knowledge and action knowledge
 The agent may have no idea where it is; the solution is a sequence
 If the agent has no sensors at all, then (as far as it knows) it could be in one of
several possible initial states, and each action might therefore lead to one of
several possible successor states
11
Types of Problems
3. Contingency problem
 The environment is nondeterministic and/or partially observable
 It is not possible to know the effect of the agent's actions in advance
 Percepts provide new information about the current state
 It is impossible to define a complete sequence of actions that
constitutes a solution in advance, because information about the
intermediate states is unknown
4. Exploration problem
 The environment is partially observable
 It is also called an unknown state space
 The agent must learn the effects of its actions
 When the states and actions of the environment are unknown,
the agent must act to discover them
 It can be viewed as an extreme case of the contingency problem
 State space and effects of actions unknown
12
Example: vacuum world
 Single-state
 Starting state is known, say #5
 What is the solution?

13
Example: vacuum world
 The vacuum cleaner always knows where it is and where
the dirt is. The solution then reduces to searching for a
path from the initial state to the goal state
 Single-state, start in #5.
Solution? [Right, Suck]

14
Example: vacuum world
 Sensorless
 Suppose that the vacuum agent knows
all the effects of its actions, but has no
sensors
 It doesn't know what the current
state is
 It doesn't know where it or
the dirt is
 So the current start is any of the
following: {1,2,3,4,5,6,7,8}

 What is the solution?

15
Example: vacuum world
 Sensorless solution
 Right goes to {2,4,6,8}
Solution?
 [Right, Suck, Left, Suck]

16
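The belief-state reasoning above can be sketched in code. As an assumption of this sketch, states are explicit (dirt_A, dirt_B, location) tuples rather than the figure's 1-8 numbering; the belief state is simply the set of states the agent might be in.

```python
from itertools import product

# Belief-state update for the sensorless vacuum world (a sketch).
def result(state, action):
    a, b, loc = state
    if action == "Suck":
        return (False, b, loc) if loc == "A" else (a, False, loc)
    return (a, b, "B" if action == "Right" else "A")

def predict(belief, action):
    """Image of a belief state under an action (no sensing)."""
    return {result(s, action) for s in belief}

# Initially the agent could be in any of the 8 states.
all_states = set(product([True, False], [True, False], ["A", "B"]))
belief = all_states
for act in ["Right", "Suck", "Left", "Suck"]:
    belief = predict(belief, act)
```

After Right alone the belief shrinks to 4 states (all with the agent at B, matching {2,4,6,8}); after the full plan it collapses to the single all-clean state.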
Sensorless example : vacuum world

17
Example: vacuum world
 Contingency
 Nondeterministic: Suck may
dirty a clean carpet
 Partially observable: we have
only partial information
 Let's assume the current percept
is: [L, Clean]
 i.e. start in #5 or #7

 What is the solution?

18
Example: vacuum world
 Contingency solution
[Right, if dirt then Suck]
 Suppose that the Suck action
sometimes deposits dirt on the
carpet, but only if the carpet is
already clean
 The agent must have sensors, and it
must combine decision making,
sensing and execution
 [Right, Suck, Left, Suck] is not a correct
plan, because one room might be
clean originally, then become
dirty. No fixed plan always works
19
Exploration
Example 1:
 Assume the agent is somewhere outside the blocks and wants to
clean them. How does it get to the blocks? There is no clear
information about their location
 What will be the solution?
 The solution is exploration

Example 2:
 The agent is at some point in the world and wants to reach a city
called CITY, whose location is unknown to the agent
 The agent doesn't have any map
 What will be the solution?
 The solution is exploration

20
Some more problems that can be solved by searching
 We have seen two such problems: the road map problem and
the vacuum cleaner world problem

 The following are some more problems:
 The three mice and three cats problem
 The three cannibals and three missionaries problem
 The water jug problem
 The colored block world problem

21
3 cats and 3 mice puzzle
 Three cats and three mice come to a crocodile-infested river. There is a
boat on their side that can be used by one or two "persons". If the cats
outnumber the mice at any time, the cats eat the mice. How can they use
the boat to cross the river so that all mice survive?
 State description
 [#of cats on the left side,
#of mice on the left side,
boat location,
#of cats on the right side,
#of mice on the right side]
 Initial state
 [3,3,Left,0,0]
 Goal
 [0,0,Right,3,3]
22
3 cats and 3 mice puzzle
 Action
 A legal action is a move that transports up to two animals at a time
using the boat from the boat's location to the other side, provided the
action doesn't violate the constraint (the cats must never outnumber
the mice on a side where mice are present, i.e. never #mice < #cats)
 We can represent the actions as
 Move_Ncats_M_mice_lr if the boat is on the left side, or
 Move_Ncats_M_mice_rl if the boat is on the right side
 The set of possible actions, before applying the constraint, is:

23
3 cats and 3 mice puzzle
 Question
 Draw the state space of the problem
 Provide one possible solution

24
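One way to answer the question is to let breadth-first search (introduced later in this chapter) find a solution. As a sketch, the state below keeps only the left-bank counts and the boat side, a compressed form of the slide's five-element list; the right-bank counts are implied.

```python
from collections import deque

# Solve the cats-and-mice puzzle by breadth-first search (a sketch).
# State: (cats_left, mice_left, boat_side).
def safe(c, m):
    return m == 0 or c <= m            # cats never outnumber mice on a bank

def successors(state):
    c, m, boat = state
    for dc, dm in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # boat holds 1-2
        if boat == "L":
            nc, nm, nb = c - dc, m - dm, "R"
        else:
            nc, nm, nb = c + dc, m + dm, "L"
        if 0 <= nc <= 3 and 0 <= nm <= 3 and safe(nc, nm) and safe(3 - nc, 3 - nm):
            yield (nc, nm, nb)

def bfs(start, goal):
    frontier, parents = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        for n in successors(s):
            if n not in parents:
                parents[n] = s
                frontier.append(n)
    return None

solution = bfs((3, 3, "L"), (0, 0, "R"))
```

Breadth-first search returns a shortest plan: 11 crossings (12 states), the same minimum as the classic missionaries-and-cannibals puzzle this one mirrors.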
3 cannibals and 3 missionaries problem
 Three missionaries and three cannibals come to the bank of a river
they wish to cross
 There is a boat that will hold only two, and anyone in the group is able
to row
 If there are ever more missionaries than cannibals on any side of the
river, the cannibals will get converted
 How can they use the boat to cross the river without any conversion?
 State description
 [#of cannibals on the left side,
#of missionaries on the left side,
boat location,
#of cannibals on the right side,
#of missionaries on the right side]
 Initial state
 [3,3,Left,0,0]
 Goal
 [0,0,Right,3,3]
25
3 cannibals and 3 missionaries problem
 Action
 A legal action is a move that transports up to two people at a time
using the boat from the boat's location to the other side, provided the
action doesn't violate the constraint (the missionaries must never
outnumber the cannibals, i.e. never #cannibals < #missionaries)
 We can represent the actions as
 Move_Ncannibals_M_missionaries_lr if the boat is on the left side, or
 Move_Ncannibals_M_missionaries_rl if the boat is on the right side
 The set of possible actions, before applying the constraint, is:

26
3 cannibals and 3 missionaries problem
 Question
 Draw the state space of the problem
 Provide one possible solution

27
Water Jug problem
 We have one 3-liter jug, one 5-liter jug and an unlimited supply of
water. The goal is to get exactly one liter of water in either of the
jugs. Either jug can be emptied, filled or poured into the other
 State description
 [Amount of water in the 5-liter jug,
Amount of water in the 3-liter jug]
 Initial state
 [0,0]
 Goal
 [1,ANY] or [ANY,1]

28
Water Jug problem
 Action
 Fill the 3-liter jug with water (F3)
 Fill the 5-liter jug with water (F5)
 Empty the 5-liter jug (E5)
 Empty the 3-liter jug (E3)
 Pour all the 3-liter jug's water into the 5-liter jug (P35)
 Pour all the 5-liter jug's water into the 3-liter jug (P53)
 Pour the 3-liter jug's water into the 5-liter jug until the 5-liter
jug is completely full (P_part35)
 Pour the 5-liter jug's water into the 3-liter jug until the 3-liter
jug is completely full (P_part53)
29
Water Jug problem
 Question
 Draw the complete state space diagram
 Find one possible solution as an action and state sequence

Initial state [0,0]

Action State
F3 [0,3]
P35 [3,0]
F3 [3,3]
P_part35 [5,1]
30
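The transition model above can be sketched in Python. Note one simplification made in this sketch: a pour guarded by `min` covers both the "pour all" and "pour until full" actions, so a single P35/P53 suffices.

```python
# Sketch of the water-jug transition model; state is (5-liter, 3-liter).
def result(state, action):
    five, three = state
    if action == "F5": return (5, three)
    if action == "F3": return (five, 3)
    if action == "E5": return (0, three)
    if action == "E3": return (five, 0)
    if action == "P35":                      # pour 3-liter into 5-liter
        amount = min(three, 5 - five)
        return (five + amount, three - amount)
    if action == "P53":                      # pour 5-liter into 3-liter
        amount = min(five, 3 - three)
        return (five - amount, three + amount)
    raise ValueError(action)

state = (0, 0)
for act in ["F3", "P35", "F3", "P35"]:       # the sample plan above
    state = result(state, act)
```

Replaying the sample plan ends in (5, 1): exactly one liter in the 3-liter jug, satisfying the goal.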
The colored block world problem
 Problem description

 The Green and Red problem
 Assume there are two containers and two boxes colored red and
green
 The area of each container is sufficient to hold one box and
a robot, but not both boxes side by side
 It is possible to keep both boxes one on top of the other, as
shown in the example
 Initially, the boxes can be in either container, or both in one
container
 The robot can transfer one box at a time to achieve the required
goal specification
 The basic operations that the robot can perform to achieve the
objective are as follows:
31
The colored block world problem
1. Flip: This is a miraculous action that the robot can perform. It
will invert the arrangement of the two blocks if they are one
on top of the other, irrespective of the location of the agent
2. Hold: This orders the agent to hold the top block in the
container where the agent is located
3. Drop: This orders the robot to drop what it holds, if anything,
in the same container as the agent's location
4. Move left: This orders the robot to move from the right to the
left container if it is on the right side
5. Move right: This orders the robot to move from the left to the
right container if it is on the left side

32
The colored block world problem
Data structure
The data structure used to describe a state is as follows:

[ Color of the left bottom,
Color of the left top,
Robot location,
Color of the right bottom,
Color of the right top ]

33
Well-defined problems and solutions (single state)
 A well-defined problem is a problem in which
 The start state of the problem,
 Its goal state,
 The possible actions (operators that can be applied to move
from state to state), and
 The constraints upon the possible actions to avoid invalid
moves (this defines legal and illegal moves)
are known in advance

34
Well-defined problems and solutions (single state)
 A problem which is not well defined is called ill-defined
 An ill-defined problem presents a dilemma in planning to
reach the goal
 The goal may not be precisely formulated
 Examples
 Cooking dinner
 Writing a term paper
 All the problems we consider in this course are well
defined

35
Well-defined problems and solutions
A problem is defined by five items:
1. initial state that the agent starts in, e.g., "at Awasa"
2. actions or successor function S(x) = set of action–state pairs
 e.g., S(Awasa) = {<Awasa → Addis Ababa, Addis Ababa>, <Awasa → Nazarez,
Nazarez>, … }
 Note: <A → B, B> indicates the action is A → B and the next state is B
3. A description of what each action does: the transition model, specified by a function
RESULT(s, a) that returns the state that results from doing action a in state s, e.g.,
RESULT(In(Awasa), Go(Addis Ababa)) = In(Addis Ababa)
 The initial state, actions and transition model implicitly define the state space of the
problem: the set of all states reachable from the initial state by any action sequence
 The state space forms a directed network or graph in which the nodes are states
and the links between nodes are actions
 A path in the state space is a sequence of states connected by a sequence of
actions
4. goal test, which determines whether a given state is a goal state. It can be
 explicit, e.g., x = "at Gonder"
 implicit, e.g., CheckGoal(x)
36


Well-defined problems and solutions (single state)
5. path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 c(x,a,y) is the step cost, assumed to be ≥ 0 (the cost of applying action
a in state x, which leads to next state y)
 assigns a numeric cost to each path
 The constraints: actions that don't change states are invalid
 A solution is a sequence of actions leading from the initial state
to a goal state, or a sequence of states in which the last state is the
goal state: a path from an initial to a goal state
 Search costs: time and storage requirements to find a
solution
 Total costs: search costs + path costs
37
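The five items can be bundled into one small container. This class is a sketch, not an API from the slides; the `path_cost` helper just sums the step costs along a state/action sequence.

```python
# A minimal Problem container matching the five items above (a sketch).
class Problem:
    def __init__(self, initial, actions, result, goal_test, step_cost):
        self.initial = initial          # 1. initial state
        self.actions = actions          # 2. actions applicable in a state
        self.result = result            # 3. transition model RESULT(s, a)
        self.goal_test = goal_test      # 4. goal test
        self.step_cost = step_cost      # 5. step cost c(x, a, y)

    def path_cost(self, states, actions):
        """Additive path cost along a state/action sequence."""
        return sum(self.step_cost(s, a, s2)
                   for s, a, s2 in zip(states, actions, states[1:]))
```

A toy route-finding instance shows the intended usage.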
Selecting a state space
 A real-world problem cannot be directly represented in the agent
architecture, since it is absurdly complex
 The state space must be abstracted for problem solving
 (Abstract) state = set of real states
 (Abstract) action = complex combination of real actions
 e.g., "Awasa → Addis Ababa" represents a complex set of
possible routes, detours, rest stops, etc.
 (Abstract) solution = set of real paths that are solutions in the real
world
 Each abstract action should be "easier" than the original problem
38
Vacuum world state space graph

 Initial states?
 states?
 actions?
 goal test?
 path cost?

39
Vacuum world state space graph

 Initial states? Any state can be designated as the initial state
 states? Information on dirt and robot location (one of the 8
states)
 actions? Left, Right, Suck
 goal test? no dirt at any location
 path cost? 1 per action
40
Example: The 8-puzzle

 Initial states?
 states?
 actions?
 goal test?
 path cost?
41
Example: The 8-puzzle

 Initial states? Any state can be designated as the initial state
 states? locations of tiles (eight tiles and one blank space)
 actions? move blank space left, right, up, down
 goal test? checks whether the state matches the goal configuration
shown in the figure above
 path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
42
Example: robotic assembly

 states?: real-valued coordinates of robot joint angles and parts
of the object to be assembled
 actions?: continuous motions of robot joints
 goal test?: complete assembly
 path cost?: time to execute

43
8 queens problem

 N queens problem formulation 1


• States: Any arrangement of 0 to 8 queens on the board
• Initial state: 0 queens on the board
• Successor function: Add a queen in any square
• Goal test: 8 queens on the board, none are attacked
44
Infrastructure for search algorithms

 Search algorithms require a data structure to keep track of the
search tree that is being constructed
 For each node n of the tree, we have a structure that contains
four components:
 n.STATE: the state in the state space to which the node
corresponds;
 n.PARENT: the node in the search tree that generated this node;
 n.ACTION: the action that was applied to the parent to generate
the node;
 n.PATH-COST: the cost, traditionally denoted by g(n), of the path
from the initial state to the node, as indicated by the parent
pointers.

45
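The four-component node structure above might be implemented as follows. This is a minimal sketch; the `solution` helper, which follows the PARENT pointers back to the root, is an addition for illustration.

```python
# The four-field node structure described above, plus solution extraction.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, g(n)

    def solution(self):
        """Follow PARENT pointers back to the root; return the action list."""
        actions, n = [], self
        while n.parent is not None:
            actions.append(n.action)
            n = n.parent
        return actions[::-1]
```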
Implementation issue: states vs nodes
 A state is a (representation of) a physical configuration
 A node is a data structure constituting part of a search tree;
nodes correspond to states in the state space of the problem
 It includes:
 state,
 parent node,
 action,
 depth and
 one or more costs [like path cost g(x), heuristic cost h(x), evaluation
function cost f(x)]

46
Searching For Solution (Tree search algorithms)
 Given a state space, and a network of states connected via actions
 The network structure is usually a graph
 A tree is a network in which there is exactly one path from
the root to any node
 Given a state S and the valid actions in S,
 the set of next states generated by executing each action is called the
successors of S
 Searching for a solution is a simulated exploration of the state space
by generating successors of already-explored states

47
Implementation issue: states vs. nodes
 Nodes are the data structures from which the search tree is
constructed
 Each has a parent, a state, and various bookkeeping fields
 Arrows point from child to parent
 A state corresponds to a configuration of the world
 Thus, nodes are on particular paths, as defined by
PARENT pointers, whereas states are not
 Example:

48
 A queue is the appropriate data structure to store the
fringe (frontier)
 The search algorithm can expand the next node according to
some preferred strategy
 The operations on a queue are:
 EMPTY?(queue) returns true only if there are no more elements in the
queue
 POP(queue) removes the first element of the queue and returns it
 INSERT(element, queue) inserts an element and returns the resulting
queue
 Queues are characterized by the order in which they store the
inserted nodes: FIFO queue, LIFO queue and priority queue
 Note: the set of all leaf nodes available for expansion at
any given point is called the fringe, frontier or open list
49
Searching for solutions (tree search algorithms)

• The Successor-Fn generates all the successor states and the actions
that move the current state into each successor state
• The Expand function creates new nodes, filling in the various fields
of the node using the information given by the Successor-Fn and
the input parameters
50
Tree search example
[Figure: partial search tree for the route-finding problem, rooted at Awasa. Awasa expands to Nazarez and Addis Ababa; those expand to Gambela, Dire Dawa, Debre Markos, Jima, Nekemt, Dessie and back to Awasa; further expansion through Dessie, Bahr Dar and Lalibela eventually reaches Gondar]
51
Implementation: general tree search

52
Search strategies
 A search strategy is defined by picking the order of node expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: how long does it take to find a solution?
(number of nodes generated)
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree (i.e., max. number of
successors of any node)
 d: depth of the least-cost solution (i.e., the number of steps along the path
from the root)
 m: maximum depth of the state space (may be ∞)
 Search cost is used to assess the effectiveness of a search algorithm. It
depends on time complexity but can also include memory usage;
alternatively we can use the total cost, which combines the search cost
and the path cost of the solution found
53
Search strategies cont’d …
 E.g., for the problem of finding a route from Awasa to
Gonder, the search cost is the amount of time taken by the
search and the solution cost is the total length of the path in
kilometers. To compute the total cost, we have to add
milliseconds and kilometers
 There is no "official exchange rate" between the two, but it
might be reasonable in this case to convert kilometers into
milliseconds by using an estimate of the car's average speed
(because time is what the agent cares about)
 Generally, search strategies can be classified into
two groups: uninformed and informed search strategies
54
Uninformed search (blind search) strategies
 Uninformed search strategies use only the information available in the
problem definition
 They have no information about the number of steps or the path cost
from the current state to the goal
 They can only distinguish the goal state from other states
 They are still important because there are problems with no additional
information
 Six such search strategies will be discussed; each depends
on the order of expansion of successor nodes.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
55
Breadth-first search (1)
 Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their
successors, and so on
 Expand the shallowest unexpanded node
 Finds the shallowest goal state
 Breadth-first search always has the shallowest path to every node on the fringe
 Implementation:
 The fringe (open list) is a FIFO queue, i.e., new successors go at the end

56
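A breadth-first search over a successor function, with a FIFO frontier and the goal test applied at generation time as noted above, might look like this sketch (paths are stored directly on the frontier for simplicity):

```python
from collections import deque

# Breadth-first search with a FIFO frontier (a sketch).
def breadth_first_search(start, goal_test, successors):
    if goal_test(start):
        return [start]
    frontier = deque([[start]])          # queue of paths, shallowest first
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for s in successors(path[-1]):
            if s in explored:
                continue
            if goal_test(s):             # goal test when generated
                return path + [s]
            explored.add(s)
            frontier.append(path + [s])
    return None
```

On a small graph it returns a shallowest path to the goal.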
Breadth-first search (2): Properties
 Time complexity: let b be the maximal branching factor and d
the depth of a solution path. Then the maximal number of
nodes generated is b + b² + b³ + … + b^d = O(b^d)
 (Note: if the algorithm were to apply the goal test to nodes
when selected for expansion rather than when generated, the
whole layer of nodes at depth d would be expanded before the
goal was detected and the time complexity would be O(b^(d+1)))
 Space complexity: every node generated is kept in memory.
The space needed for the fringe is O(b^d) and for the
explored set O(b^(d-1))
 This exponential memory requirement is a major problem for real problems
 Complete? Yes (if b is finite)
 Optimal? Yes (if cost = constant (k) per step)
57
Uniform-cost search (1)
 Expands the node n with the lowest path cost g(n)
 Equivalent to breadth-first if step costs are all equal
 Implementation:
 This is done by storing the fringe as a priority queue ordered by
g
 It differs from breadth-first search in that:
 The queue is ordered by path cost
 The goal test is applied to a node when it is selected for expansion
rather than when it is first generated
 A test is added in case a better path is found to a node currently
on the fringe

58
Uniform-cost search (2)
 Uniform-cost search does not care about the number of steps a
path has, but only about their total cost
 When all step costs are the same, uniform-cost search is similar
to breadth-first search, except that the latter stops as soon as it
generates a goal, whereas uniform-cost search examines all the
nodes at the goal's depth to see if one has a lower cost; thus
uniform-cost search does strictly more work by expanding nodes
at depth d unnecessarily
 Consider the problem of moving from node S to G, where S connects
to A (cost 1), B (cost 5) and C (cost 15), and G is reachable from
A (cost 10), B (cost 5) and C (cost 5):
 Expanding S puts A (g=1), B (g=5) and C (g=15) on the fringe
 Expanding A adds G with g = 1 + 10 = 11
 Expanding B adds G with g = 5 + 5 = 10, the cheaper path;
G is then selected for expansion with total cost 10
Uniform-cost search (3): Properties
 It finds the cheapest solution if the cost of a path never decreases as
we go along the path
 i.e. g(successor(n)) ≥ g(n) for every node n
 Complete? Yes (if step costs are positive)
 Time? # of nodes with g ≤ cost of the optimal solution
 Let ε be the minimum step cost in the search tree, C* the total cost of the
optimal solution and b the branching factor
 In the worst case, every step costs ε, so the tree explored up to cost C*
has depth ⌊C*/ε⌋
 Hence the time complexity is O(b^(1+⌊C*/ε⌋))
 Space? # of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋))
 Optimal? Yes: nodes are expanded in increasing order of g(n)

60
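Uniform-cost search with a priority queue ordered by g can be sketched as below; the small graph reproduces the S-A-B-C-G example from the previous slide, where the cheapest route S → B → G costs 10.

```python
import heapq

# Uniform-cost search: priority queue ordered by g, goal test on expansion.
def uniform_cost_search(start, goal, graph):
    frontier = [(0, start, [start])]        # (g, state, path)
    best = {start: 0}
    while frontier:
        g, s, path = heapq.heappop(frontier)
        if s == goal:                       # goal test when selected
            return g, path
        for nbr, cost in graph.get(s, []):
            g2 = g + cost
            if nbr not in best or g2 < best[nbr]:
                best[nbr] = g2              # better path found: replace
                heapq.heappush(frontier, (g2, nbr, path + [nbr]))
    return None

GRAPH = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [("G", 5)]}
```

Even though expanding A first puts G on the fringe with g=11, the cheaper path through B (g=10) is popped first.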
Depth-first search (1)

 Expand the deepest unexpanded node
 Implementation:
 fringe = LIFO queue (stack), i.e., put successors at the front

61
Depth-first search (2): Properties
 Complete? No: fails in infinite-depth spaces and spaces with loops
 Modify to avoid repeated states along the path (see graph search)
 Yes: in finite spaces
 Time? O(b^m): terrible if m is much larger than d (m is the maximum length of a path in the
state space)
 but if solutions are dense, it may be much faster than breadth-first
 Space? O(bm), i.e., linear space!
 Tree-based search needs to store only the nodes along the path from the root to the leaf
node. Once a node has been expanded, it can be removed from memory as soon as
all its descendants have been fully explored
 Optimal? No
 Advantage: time may be much less than breadth-first search if solutions are dense
 Disadvantage: time is terrible if m is much larger than d
 Note
 Depth-first search can perform infinite cyclic excursions
 It needs a finite, non-cyclic search space (or repeated-state checking)
62
62
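A depth-first tree search with a LIFO frontier, plus the repeated-state check along the current path that the note recommends, might be sketched as:

```python
# Depth-first search with a LIFO frontier (a sketch); checking the current
# path avoids the cyclic excursions the note warns about.
def depth_first_search(start, goal_test, successors):
    frontier = [[start]]                 # stack of paths (LIFO)
    while frontier:
        path = frontier.pop()
        state = path[-1]
        if goal_test(state):
            return path
        for s in successors(state):
            if s not in path:            # repeated-state check along the path
                frontier.append(path + [s])
    return None
```

On a graph with a cycle (S-A-S) the path check prevents the infinite excursion.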
Depth-limited search (1)
 We have seen that breadth-first search is complete, which can be taken as its
advantage, though its space complexity is the worst
 Similarly, depth-first search is best in terms of space
complexity, even if it is the worst in terms of completeness and time
complexity compared to breadth-first search
 Hence, we can find an algorithm that incorporates both benefits and
avoids the limitations

 Such an algorithm is called depth-limited search, and its improved version
is called iterative deepening search
 These two strategies are explored in the following sections

63
Depth-limited search (2)
 The problem of depth-first search in infinite state spaces can be alleviated by
supplying depth-first search with a predetermined depth limit l
 That is, nodes at depth l are treated as if they have no successors
 The depth limit solves the infinite-path problem
 Depth-first search with depth limit l truncates all nodes having depth greater
than l from the search space and applies depth-first search to the rest of the
structure
 It returns a solution if one exists within the limit; if there is no solution,
 it returns cutoff if l < m, and failure otherwise
Recursive implementation:

64
Depth-limited search (3)
 Complete? No (if l < d, that is, the shallowest goal is beyond the depth limit)
 Time? O(b^l)
 Space? O(bl)
 Optimal? No (even if l > d)
 Choosing l < d introduces an additional source of incompleteness: the
shallowest goal is beyond the depth limit. (This is likely when d is
unknown.)
 Depth-first search can be viewed as a special case of depth-limited search
with l = ∞
 Sometimes, depth limits can be based on knowledge of the problem
 Notice that depth-limited search can terminate with two kinds of failure: the
standard failure value indicates no solution; the cutoff value indicates no
solution within the depth limit

65
Iterative deepening search
 Depth-limited search never returns a solution if all solution nodes
exist at depth > l. This limitation can be avoided by applying
iterative deepening search
 It is a general strategy, often used in combination with depth-
first tree search, that finds the best depth limit
 It does this by gradually increasing the limit: first 0, then
1, then 2, and so on, until a goal is found
 This will occur when the depth limit reaches d, the depth of
the shallowest goal node
 Iterative deepening combines the benefits of depth-first
and breadth-first search
 Like depth-first search, its memory requirements are modest:
O(bd), to be precise
 Like breadth-first search, it is complete when the branching
factor is finite and optimal when the path cost is a nondecreasing
function of the depth of the node
66
 Iterative deepening search is analogous to
breadth-first search in that it explores a complete
layer of new nodes at each iteration before going
on to the next layer
67
Iterative deepening search l =0

68
Iterative deepening search l =3

69
Properties of iterative deepening search
 Complete? Yes
 Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
 Space? O(bd)
 Optimal? Yes, if step cost = 1

70
Bidirectional search
• Simultaneously search forward from the initial state and
backward from the goal state, and terminate when the two searches
meet in the middle
• Reconstruct the solution by tracking backward towards the root
and forward towards the goal from the point of
intersection
• This algorithm is efficient when there are only one or two
goal-state nodes in the search space
• Space and time complexity: O(b^(d/2)) for each direction
• The algorithm is complete and optimal (for uniform step costs) if
both searches are breadth-first; other combinations may sacrifice
completeness, optimality, or both

71
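A bidirectional breadth-first search might be sketched as follows, assuming the `neighbors` function works in both directions (i.e., the graph is undirected or predecessors are available):

```python
from collections import deque

# Bidirectional BFS (a sketch): expand from both ends, stop when frontiers meet.
def bidirectional_search(start, goal, neighbors):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def expand(frontier, parents, other):
        state = frontier.popleft()
        for n in neighbors(state):
            if n not in parents:
                parents[n] = state
                if n in other:
                    return n             # the two frontiers meet here
                frontier.append(n)
        return None

    while frontier_f and frontier_b:
        meet = expand(frontier_f, parents_f, parents_b) \
               or expand(frontier_b, parents_b, parents_f)
        if meet:
            # stitch: forward half (reversed) + backward half
            fwd, s = [], meet
            while s is not None:
                fwd.append(s)
                s = parents_f[s]
            fwd.reverse()
            s = parents_b[meet]
            while s is not None:
                fwd.append(s)
                s = parents_b[s]
            return fwd
    return None
```

Each side explores to roughly half the solution depth, which is where the O(b^(d/2)) bound comes from.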
Summary of algorithms

Repeated states
 Failure to detect repeated states can turn a linear problem into an
exponential one!

72
Graph search

73
