
Chapter 3 AI

This document discusses problem solving by searching. It identifies the type of agent that solves problems by searching as a goal-based, utility-based, or learning agent. The document outlines the steps to problem formulation, goal formulation, and searching for a solution. It describes different types of problems based on the environment, including single-state, sensorless, contingency, and exploration problems. Examples of problems that can be solved by searching include the road map problem, vacuum world problem, and various puzzles. Well-defined problems have a known start state, goal state, possible actions, and constraints, while ill-defined problems have an unclear goal or formulation.

Uploaded by

MULUKEN TILAHUN
Copyright: © All Rights Reserved

Chapter 3

Solving problems by searching

1
Objectives

o Identify the type of agent that solves problems by searching

o Problem formulation and goal formulation
o Types of problems based on the environment type
o Discuss various search strategies

2
Type of agent that solves problems by searching

o Such an agent is not a reflex or model-based reflex agent, because it needs to achieve some target (goal).
o It can be a goal-based, utility-based, or learning agent.
o An intelligent agent knows that to achieve a certain goal, the state of the environment must change sequentially, and the change should be towards the goal.

3
Steps to undertake during searching
o Problem formulation:
o Involves:
o Abstracting the real environment configuration into state information using a preferred data structure
o Describing the initial state using that data structure
o The set of actions possible in a given state at a specific point in the process
o The cost of each action at each state
o Deciding what actions and states to consider, given a goal
o For the vacuum world problem, the problem formulation involves:
o The state is described as a list of 3 elements: the first element describes block A, the second describes block B, and the last gives the location of the agent
o e.g., [dirty, dirty, A]
o Actions: Suck, moveRight, moveLeft
o Determine which of the above actions are valid in a given state
o Cost can be determined in many ways
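As a concrete sketch of this formulation: the snippet below encodes the state as a 3-element tuple (tuples are hashable, so they can later serve as dictionary keys); the function names `actions`, `result` and `is_goal` are illustrative choices, not from the slides.

```python
# Sketch of the vacuum-world formulation above (illustrative names).
# State: (status of block A, status of block B, agent location).

def actions(state):
    """The actions valid in a given state."""
    acts = ["Suck"]
    acts.append("moveRight" if state[2] == "A" else "moveLeft")
    return acts

def result(state, action):
    """Transition model: the state that results from an action."""
    a, b, loc = state
    if action == "Suck":
        return ("clean", b, loc) if loc == "A" else (a, "clean", loc)
    if action == "moveRight":
        return (a, b, "B")
    if action == "moveLeft":
        return (a, b, "A")

def is_goal(state):
    """Goal test: both blocks clean, agent anywhere."""
    return state[0] == "clean" and state[1] == "clean"
```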
4
Cont’d
o Goal formulation: refers to understanding the objective of the agent based on the state description of the final environment.
o Based on the current situation and the agent's performance measure, it is the first step in problem solving
o A goal can be considered as a set of world states: exactly those states in which the goal is satisfied.
o The agent's task is to find out how to act, now and in the future, so that it reaches a goal state.
o For example, for the vacuum world problem, the goal can be formulated as [clean, clean, agent at any block].
5
o A solution is a sequence of world states in which the final state satisfies the goal, or an action sequence in which the last action results in the goal state.
o Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
o Each action changes one state of the world to the next
o A search algorithm takes a problem as input and returns a solution in the form of an action or state sequence.
o To achieve the goal, the action sequence must be executed accordingly.
o The general "formulate-search-execute" algorithm of search is given below:

6
Problem-solving agents

7
Agent Program

8
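The formulate-search-execute agent program on these slides (the original figure is not reproduced here) can be sketched roughly as follows; all names and the closure-based structure are illustrative assumptions:

```python
def simple_problem_solving_agent(initial_state, formulate_goal,
                                 formulate_problem, search):
    """Sketch of the formulate-search-execute loop: formulate a goal and
    a problem, search once for a plan, then execute it step by step."""
    goal = formulate_goal(initial_state)
    problem = formulate_problem(initial_state, goal)
    plan = list(search(problem))     # search returns an action sequence

    def agent(percept):
        # Execute phase: return the next action, or None when done.
        return plan.pop(0) if plan else None

    return agent
```

For instance, an agent built with a search routine that returns `["Suck", "moveRight", "Suck"]` would emit those three actions on its first three calls, then `None`.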
Example: Road map of Ethiopia

(Figure: a weighted road map of Ethiopian cities, including Aksum, Mekele, Gondar, Lalibela, Bahir Dar, Dessie, Debre Markos, Dire Dawa, Jima, Addis Ababa, Nazaret, Gambela, Nekemt and Awasa, with driving distances labelled on the edges.)
9
Cont’d
o Current position of the agent: Awasa
o Needs to arrive at: Gondar
o Formulate goal:
o be in Gondar
o Formulate problem:
o states: various cities
o actions: drive between cities
o Find solution:
o sequence of cities, e.g., Awasa, Nazaret, Addis Ababa, Dessie,
Gondar
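One possible encoding of this formulation is a weighted adjacency dictionary; the distances below are illustrative placeholders loosely read from the figure, not authoritative:

```python
# Illustrative fragment of the road map as an undirected weighted graph.
# Edge costs are placeholders, not the slide's actual distances.
road_map = {
    "Awasa": {"Nazaret": 230},
    "Nazaret": {"Awasa": 230, "Addis Ababa": 100},
    "Addis Ababa": {"Nazaret": 100, "Dessie": 400},
    "Dessie": {"Addis Ababa": 400, "Gondar": 330},
    "Gondar": {"Dessie": 330},
}

def path_cost(path):
    """Sum of the step costs along a path of cities."""
    return sum(road_map[a][b] for a, b in zip(path, path[1:]))

solution = ["Awasa", "Nazaret", "Addis Ababa", "Dessie", "Gondar"]
```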

10
Types of Problems: Knowledge of States and Actions
o What happens when knowledge of the states or actions is incomplete?
o Four types of problems exist in real situations:
1. Single-state problem
o The environment is deterministic, fully observable, static and discrete
o Out of the possible state space, the agent knows exactly which state it will be in; the solution is a sequence
o Complete world state knowledge and action knowledge
2. Sensorless problem (conformant problem)
o The environment is non-observable, deterministic, static and discrete
o It is also called a multi-state problem
o Incomplete world state knowledge and action knowledge
o The agent may have no idea where it is; the solution is a sequence
o If the agent has no sensors at all, then (as far as it knows) it could be in one of several possible initial states, and each action might therefore lead to one of several possible successor states
11
Cont’d
3. Contingency problem
o The environment is nondeterministic and/or partially observable
o It is not possible to know the effect of the agent's actions in advance
o Percepts provide new information about the current state
o It is impossible to define a complete sequence of actions that constitutes a solution in advance, because information about the intermediate states is unknown

4. Exploration problem
o The environment is partially observable
o It is also called an unknown state space problem
o The agent must learn the effects of actions.
o When the states and actions of the environment are unknown, the agent must act to discover them.
o It can be viewed as an extreme case of the contingency problem
o State space and effects of actions are unknown

12
Example: vacuum world
o Single-state
o The starting state is known, say state #5.
o What is the solution?

13
Cont’d
o The vacuum cleaner always knows where it is and where
the dirt is. The solution then is reduced to searching for a
path from the initial state to the goal state
o Single-state, start in #5.
Solution? [Right, Suck]

14
Cont’d
o Sensorless
o Suppose that the vacuum agent knows all the effects of its actions, but has no sensors.
o It doesn't know what the current state is
o It doesn't know where it or the dirt is
o So the current state is any of the following: {1,2,3,4,5,6,7,8}

 What is the Solution?

15
Cont’d
o Sensorless solution
o [Right] goes to {2,4,6,8}
o Solution? [Right, Suck, Left, Suck]

16
Cont’d
o Contingency
o Nondeterministic: Suck may
dirty a clean carpet
o Partially observable:
o Hence we have partial
information
o Let’s assume the current percept
is: [L, Clean]
o i.e. start in #5 or #7

o What is the Solution?

17
Cont’d
o Contingency solution:
o [Right, if dirt then Suck]
o Suppose that the Suck action sometimes deposits dirt on the carpet, but only if the carpet is already clean.
o The agent must have sensors, and it must combine decision making, sensing and execution.
o [Right, Suck, Left, Suck] is not a correct plan, because one room might be clean originally, then become dirty. No fixed plan always works.

18
Cont’d
Exploration
Example 1:
o Assume the agent is somewhere outside the blocks and wants to clean the blocks. How can it get to the blocks? There is no clear information about their location
o What will be the solution?
o The solution is exploration
Example 2:
o The agent is at some point in the world and wants to reach a city called CITY, which is unknown to the agent.
o The agent doesn't have any map
o What will be the solution?
o The solution is exploration

19
Some more problems that can be solved by searching
o We have seen two such problems: The road map problem and
the vacuum cleaner world problem

o The following are some more problems


o The three mice and three cats problem
o The three cannibals and three missionaries problem
o The water jug problem
o The colored blocks world problem

20
Well-defined problems and solutions (single state)
o A well-defined problem is one in which the following are known in advance:
o The start state of the problem
o Its goal state
o The possible actions (operators that can be applied to move from state to state)
o The constraints upon the possible actions to avoid invalid moves (this defines legal and illegal moves)

21
Cont’d

o A problem which is not well defined is called ill-defined

o An ill-defined problem presents a dilemma in planning to reach the goal
o The goal may not be precisely formulated
o Examples:
o Cooking dinner
o Writing a term paper
o All the problems we consider in this course are well-defined

22
Cont’d
A problem is defined by five items:
1. The initial state that the agent starts in, e.g., "at Awasa"
2. Actions, or a successor function S(x) = set of action–state pairs
o e.g., S(Awasa) = {<Awasa → Addis Ababa, Addis Ababa>, <Awasa → Nazaret, Nazaret>, … }
o Note: <A → B, B> indicates the action is A → B and the next state is B
3. A description of what each action does: the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s, e.g., RESULT(In(Awasa), Go(Addis Ababa)) = In(Addis Ababa)
o The initial state, actions and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions
o The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions
o A path in the state space is a sequence of states connected by a sequence of actions
4. A goal test, which determines whether a given state is a goal state. It can be
o explicit, e.g., x = "at Gondar"
o implicit, e.g., CheckGoal(x)

23


Cont’d
5. The constraints and path cost (additive)
o e.g., sum of distances, number of actions executed, etc.
o c(x, a, y) is the step cost, assumed to be ≥ 0 (the cost of applying action a in state x, which leads to the next state y)
o The path cost function assigns a numeric cost to each path
o Constraints rule out invalid actions (actions that don't change the state)
o A solution is a sequence of actions leading from the initial state to a goal state, or a sequence of states in which the last state is the goal state: a path from an initial state to a goal state
o Search costs: time and storage requirements to find a solution
o Total costs: search costs + path costs
24
Selecting a state space
o Real-world problems cannot be represented directly in the agent architecture, since the real world is absurdly complex
 state space must be abstracted for problem solving
o (Abstract) state = set of real states
o (Abstract) action = complex combination of real actions
o e.g., "Awasa → Addis Ababa" represents a complex set of possible routes, detours, rest stops, etc.
o (Abstract) solution = set of real paths that are solutions in the real
world
o Each abstract action should be "easier" than the original problem

25
Vacuum world state space graph

o Initial states?
o states?
o actions?
o goal test?
o path cost?

26
Cont’d

o Initial states? Any state can be designated as the initial state


o states? Information on dirt and robot location (one of the 8
states)
o actions? Left, Right, Suck
o goal test? no dirt at all locations
o path cost? 1 per action
27
Example: The 8-puzzle

o Initial states?
o states?
o actions?
o goal test?
o path cost?
28
Cont’d

o Initial states? Any state can be designated as the initial state

o states? locations of the tiles (eight tiles and one blank space)
o actions? move blank space left, right, up, down
o goal test? checks whether the state matches the goal configuration
shown in figure above
o path cost? 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
29
Searching For Solution (Tree search algorithms)

o The Successor-Fn generates all the successor states and the actions that move the current state into each successor state
o The Expand function creates new nodes, filling in the various fields of the node using the information given by the Successor-Fn and the input parameters
30
Tree search example

(Figure: a search tree rooted at Awasa. Expanding Awasa yields Nazaret and Addis Ababa; expanding those yields their neighbouring cities (Gambela, Dire Dawa, Debre Markos, Jima, Nekemt, Dessie, Bahr Dar, Lalibela, Gondar, including repeats of already-visited cities), and so on until Gondar is reached.)
31
Implementation: General tree search

32
Search strategies

33
Search strategies
o A search strategy is defined by picking the order of node expansion
o Strategies are evaluated along the following dimensions:
o completeness: does it always find a solution if one exists?
o time complexity: how long does it take to find a solution? Measured by the number of nodes generated
o space complexity: maximum number of nodes in memory
o optimality: does it always find a least-cost solution?
o Time and space complexity are measured in terms of
o b: maximum branching factor of the search tree (i.e., the maximum number of successors of any node)
o d: depth of the least-cost solution (i.e., the number of steps along the path from the root)
o m: maximum depth of the state space (may be ∞)
o Search cost is used to assess the effectiveness of a search algorithm. It depends on time complexity but can also include memory usage; alternatively, we can use the total cost, which combines the search cost and the path cost of the solution found.
34
Uninformed search (blind search) strategies
o Uninformed search strategies use only the information available in the problem definition
o They have no information about the number of steps or the path cost from the current state to the goal
o They can only distinguish a goal state from a non-goal state
o They are still important because some problems provide no additional information.
o Six such search strategies will be discussed; each is defined by the order in which successor nodes are expanded.
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search
35
Breadth-first search
o Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on.
o Expand the shallowest unexpanded node
o Finds the shallowest goal state
o Breadth-first search always has the shallowest path to every node on the fringe
o Implementation:
o The fringe (open list) is a FIFO queue, i.e., new successors go at the end
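A minimal sketch of this idea, with the fringe as a FIFO queue; `successors` returns the neighbouring states, and all names are illustrative:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    """BFS sketch: the fringe is a FIFO queue, so the shallowest node is
    expanded first. Returns the path of states to the goal, or None."""
    fringe = deque([[start]])       # each entry is a path of states
    explored = {start}
    while fringe:
        path = fringe.popleft()     # FIFO: take the shallowest path
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                fringe.append(path + [nxt])
    return None
```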

36
Cont’d

37
Cont’d
o Time complexity: let b be the maximal branching factor and d the depth of a solution path. Then the maximal number of nodes generated is 1 + b + b^2 + … + b^d = O(b^d)
o (Note: if the algorithm were to apply the goal test to nodes when selected for expansion rather than when generated, the whole layer of nodes at depth d would be expanded before the goal was detected, and the time complexity would be O(b^(d+1)))
o Space complexity: every node generated is kept in memory. Therefore the space needed for the fringe is O(b^d) and for the explored set O(b^(d-1)).
o Storing this many nodes on the way to the goal is a major problem for real problems
o Complete? Yes (if b is finite)   Optimal? Yes (if cost = constant k per step)
38
Uniform-cost search
o Expands the node n with the lowest path cost g(n)
o Equivalent to breadth-first if step costs all equal
o Implementation:
o Done by storing the fringe as a priority queue ordered by g
o It differs from breadth-first search in that:
o The queue is ordered by path cost
o The goal test is applied to a node when it is selected for expansion rather than when it is first generated
o A test is added in case a better path is found to a node currently on the fringe
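A sketch of uniform-cost search along these lines, using a priority queue ordered by g(n) and applying the goal test on expansion; the `best` table plays the role of the better-path test (names are illustrative):

```python
import heapq

def uniform_cost_search(start, goal, successors):
    """UCS sketch: fringe is a priority queue ordered by path cost g(n).
    successors(s) yields (next_state, step_cost) pairs."""
    fringe = [(0, start, [start])]
    best = {start: 0}               # cheapest known path cost per state
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:           # goal test on expansion, not generation
            return g, path
        if g > best.get(state, float("inf")):
            continue                # stale entry: a better path was found
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best.get(nxt, float("inf")):
                best[nxt] = new_g
                heapq.heappush(fringe, (new_g, nxt, path + [nxt]))
    return None
```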

39
Cont’d

40
Cont’d
o Uniform-cost search does not care about the number of steps a path has, but only about their total cost
o When all step costs are the same, uniform-cost search is similar to breadth-first search, except that the latter stops as soon as it generates a goal, whereas uniform-cost search examines all the nodes at the goal's depth to see if one has a lower cost; thus uniform-cost search does strictly more work by expanding nodes at depth d unnecessarily
o Consider the problem of moving from node S to G, with edges S→A = 1, S→B = 5, S→C = 15, A→G = 10 and B→G = 5
o (Figure: three snapshots of the search tree rooted at S. First S is expanded into A (g = 1), B (g = 5) and C (g = 15); then A, the cheapest node, is expanded, reaching G with g = 11; finally B is expanded, reaching G with g = 10, the cheaper path.)

41
Cont’d
o It finds the cheapest solution if the cost of a path never decreases as we go along the path
o i.e., g(successor(n)) ≥ g(n) for every node n.
o Complete? Yes (if step costs are positive)
o Time? # of nodes with g ≤ cost of the optimal solution
o Let ε be the minimum step cost in the search tree, C* the total cost of the optimal solution, and b the branching factor.
o In the worst case, every step has cost ε, so the explored tree has depth floor(C*/ε)
o Hence the total number of nodes is b^ceiling(C*/ε), and the time complexity is O(b^ceiling(C*/ε))
o Space? # of nodes with g ≤ cost of the optimal solution, O(b^ceiling(C*/ε))
o Optimal? Yes: nodes are expanded in increasing order of g(n)

42
Depth-first search

o Expand deepest unexpanded node


o Implementation:
o fringe = LIFO queue, i.e., put successors at front
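A sketch with the fringe as a LIFO stack; the check against the current path is a simple repeated-state guard to avoid cyclic excursions (names are illustrative):

```python
def depth_first_search(start, goal, successors):
    """DFS sketch: the fringe is a LIFO stack, so the deepest node is
    expanded first. Returns a path to the goal, or None."""
    fringe = [[start]]              # each entry is a path of states
    while fringe:
        path = fringe.pop()         # LIFO: take the most recently added path
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in path:     # avoid cycles along the current path
                fringe.append(path + [nxt])
    return None
```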

43
Cont’d
o Complete? No: fails in infinite-depth spaces and spaces with loops
o Modify to avoid repeated states along the path (see graph search)
 Yes: in finite spaces
o Time? O(b^m): terrible if m is much larger than d (m is the maximum length of any path in the state space)
 but if solutions are dense, may be much faster than breadth-first
o Space? O(bm), i.e., linear space!
 A tree search needs to store only the nodes along the path from the root to the current leaf. Once a node has been expanded, it can be removed from memory as soon as all its descendants have been fully explored
o Optimal? No
o Advantage: time may be much less than breadth-first search if solutions are dense
o Disadvantage: time is terrible if m is much larger than d
o Note:
o Depth-first search can perform infinite cyclic excursions
o It needs a finite, non-cyclic search space (or repeated-state checking)
44
Depth-limited search
o We have seen that breadth-first search is complete, which can be taken as its advantage, though its space complexity is the worst
o Similarly, depth-first search is best in terms of space complexity, even if it is the worst in terms of completeness and time complexity compared to breadth-first search
o Hence, we can look for an algorithm that combines both benefits and avoids the limitations
o Such an algorithm is called depth-limited search, and its improved version is called the iterative deepening search strategy.
o These two strategies are explored in the following sections

45
Cont’d
o The problem of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l
o That is, nodes at depth l are treated as if they have no successors
o The depth limit solves the infinite-path problem
o Depth-first search with depth limit l truncates all nodes having depth greater than l from the search space and applies depth-first search to the rest of the structure
o It returns a solution if one exists; if there is no solution, it returns cutoff if l < m, and failure otherwise
Recursive implementation:
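The recursive implementation referred to above (the slide's original figure is not reproduced) might look like this; the `"cutoff"` sentinel distinguishes hitting the limit from genuine failure:

```python
def depth_limited_search(state, goal, successors, limit):
    """Recursive DLS sketch: returns a path to the goal, "cutoff" if the
    depth limit was hit, or None (failure) if no solution exists."""
    if state == goal:
        return [state]
    if limit == 0:
        return "cutoff"             # treat this node as having no successors
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None
```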

46
Cont’d
o Complete? No (if l < d, that is, the shallowest goal is beyond the depth limit)
o Time? O(b^l)
o Space? O(bl)
o Optimal? No (even if l > d)
o Choosing l < d introduces an additional source of incompleteness: the shallowest goal is beyond the depth limit. (This is likely when d is unknown.)
o Depth-first search can be viewed as a special case of depth-limited search with l = ∞
o Sometimes, depth limits can be based on knowledge of the problem
o Notice that depth-limited search can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.

47
Iterative deepening search
o Depth-limited search never returns a solution if all solution nodes lie at depth greater than the limit l. This limitation can be avoided by applying the iterative deepening search strategy
o It is a general strategy, often used in combination with depth-first tree search, that finds the best depth limit
o It does this by gradually increasing the limit (first 0, then 1, then 2, and so on) until a goal is found
o This occurs when the depth limit reaches d, the depth of the shallowest goal node
o Iterative deepening combines the benefits of depth-first and breadth-first search.
o Like depth-first search, its memory requirements are modest: O(bd), to be precise.
o Like breadth-first search, it is complete when the branching factor is finite and optimal when the path cost is a nondecreasing function of the depth of the node.
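The strategy can be sketched as follows, with an inner depth-limited search re-run at increasing limits (names are illustrative; for brevity this sketch does not distinguish cutoff from failure):

```python
def iterative_deepening_search(start, goal, successors, max_depth=50):
    """IDS sketch: run depth-limited search with limit 0, 1, 2, ...
    until a solution is found or max_depth is exhausted."""
    def dls(state, limit):
        if state == goal:
            return [state]
        if limit == 0:
            return None
        for nxt in successors(state):
            result = dls(nxt, limit - 1)
            if result is not None:
                return [state] + result
        return None

    for limit in range(max_depth + 1):   # gradually increase the depth limit
        result = dls(start, limit)
        if result is not None:
            return result
    return None
```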
48
Cont’d
o Iterative deepening search is analogous to breadth-first
search in that it explores a complete layer of new nodes at
each iteration before going on to the next layer

49
Iterative deepening search l =0

50
Iterative deepening search l =3

51
Properties of iterative deepening search
o Complete? Yes
o Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
o Space? O(bd)
o Optimal? Yes, if step cost = 1

52
Bidirectional search
o Simultaneously search forward from the initial state and backward from the goal state, and terminate when the two searches meet in the middle
o Reconstruct the solution by tracking backward towards the root and forward towards the goal from the point of intersection
o This algorithm is efficient when there are only one or two explicit goal states in the search space
o Space and time complexity: O(b^(d/2)), since each search need only proceed to half the solution depth
o The algorithm is complete and optimal (for uniform step costs) if both searches are breadth-first; other combinations may sacrifice completeness, optimality, or both.

53
Summary of algorithms

54
Graph search

55
Solving problems by searching – Informed Search

56
Informed search algorithms
o We now consider informed search that uses problem-specific
knowledge beyond the definition of the problem itself
o This information helps to find solutions more efficiently than
an uninformed strategy
o An evaluation function f(n) determines how promising a node
n in the search tree appears to be for the task of reaching the
goal
o Informed search is a strategy that uses information about the cost that may be incurred in reaching the goal state from the current state.
o The information may not be accurate, but it helps the agent make better decisions
o This information is called heuristic information
57
Cont’d
o A key component of an evaluation function is a heuristic
function h(n), which estimates the cost of the cheapest path
from node n to a goal node
o In connection with a search problem, "heuristics" refers to a certain (but loose) upper or lower bound on the cost of the best solution
o Goal states are nevertheless identified: for a corresponding node n it is required that h(n) = 0 (i.e., if n is a goal node, then h(n) = 0)
o E.g., a certain lower bound, bringing no information, would be to set h(n) = 0 for all n
o Heuristic functions are the most common form in which
additional knowledge is imparted to the search algorithm
58
Cont’d
o Blind search strategies rely only on exact information (initial
state, operators, goal predicate)
o They don’t make use of additional information about the
nature of the problem that might make the search more
efficient
o If we have a (vague) idea in which direction to look for the
solution, why not use this information to improve
the search?

59
Cont’d
o Heuristics - rules about the nature of the problem, which are
grounded in experience and whose purpose is to direct the
search towards the goal so that it becomes more efficient
o A heuristic function h : S → R+ assigns to each state an estimate of the distance between that state and the goal state
o The smaller the value h(s), the closer is s to the goal state. If s
is the goal state, then h(s) = 0
o Search strategies that use heuristics to narrow down the search
are called heuristic (informed, directed) search strategies

60
Informed search
o There are several algorithms that belong to this group. Some of these are:
 Best-first search
1. Greedy best-first search
2. A* search
o Best-first search is an instance of the general TREE-
SEARCH or GRAPH-SEARCH algorithm in which a node
is selected for expansion based on an evaluation function, f(n).
o Traditionally, the node with the lowest evaluation is selected
for expansion, because the evaluation measures distance to
the goal
o Best-first search can be implemented within our general
search framework via a priority queue, a data structure that
will maintain the fringe in ascending order of f values
61
Best-first search
o Idea: use an evaluation function f(n) for each node
o Estimate of "desirability“ using heuristic and path cost
o Expand most desirable unexpanded node
o The information gives a clue about which node to be expanded
first
o This will be done during queuing
o The best node according to the evaluation function may not actually be the best
Implementation:
o Order the nodes in the fringe in decreasing order of desirability (i.e., increasing order of the cost evaluation function)

62
Cont’d
o Note that hSLD cannot be computed from the problem description itself.
o It takes a certain amount of experience to know that it is
correlated with actual road distances, and therefore it is a
useful heuristic

63
Cont’d
o It tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly
o It evaluates nodes using just the heuristic function: f(n) = h(n)
o = estimate of cost from n to the goal
o That means the agent prefers the action which is assumed to be best after every action
o e.g., hSLD(n) = straight-line distance from n to Gondar
o Greedy best-first search expands the node that appears to be closest to the goal (it tries to minimize the estimated cost to reach the goal), disregarding the total path cost accumulated so far.
o Greedy best-first search resembles depth-first search in the way it prefers to follow a single path all the way to the goal, but it will back up when it hits a dead end
o The chosen path may not be optimal, and the algorithm doesn't backtrack to correct this. Hence, a greedy algorithm is not optimal.
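A sketch of greedy best-first search, ordering the fringe purely by h(n); note that, as stated above, the accumulated path cost g is ignored entirely (names are illustrative):

```python
import heapq

def greedy_best_first_search(start, goal, successors, h):
    """Greedy best-first sketch: expand the node with the smallest
    heuristic value f(n) = h(n), ignoring path cost so far."""
    fringe = [(h(start), start, [start])]
    explored = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in explored:
            continue                # skip repeated states to avoid loops
        explored.add(state)
        for nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(fringe, (h(nxt), nxt, path + [nxt]))
    return None
```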
64
Open List
o The open list (also: frontier) organizes the leaves of a search
tree.
o set of nodes that are currently candidates for expansion
o It must support two operations efficiently:
o determine and remove the next node to expand
o insert a new node that is a candidate node for expansion
o Some implementations support modifying an open list entry
when a shorter path to the corresponding state is found.
o Methods of an open list
o open.is_empty(): test if the open list is empty
o open.pop(): remove and return the next node to expand
o open.insert(n): insert node n into the open list

65
Closed List
o The closed list remembers expanded states to avoid
duplicated expansions of the same state.
o set of already expanded nodes (and their states)
o It must support two operations efficiently:
o insert a node whose state is not yet in the closed list
o test if a node with a given state is in the closed list; if yes,
return it
o Methods of a closed list
o closed.insert(n) insert node n into closed; if a node with this
state already exists in closed, replace it
o closed.lookup(s) test if a node with state s exists in the closed
list; if yes, return it; otherwise, return none
66
Cont’d
o Properties
o Complete? Yes, if repetition is controlled; otherwise it can get stuck in loops
o Time? O(b^m), but a good heuristic can give dramatic improvement
o Space? O(b^m), keeps all nodes in memory
o Optimal? No

67
A* search
o Idea: avoid expanding paths that are already expensive
o Evaluation function f(n) = g(n) + h(n), where
o g(n) = cost so far to reach n: the path cost from the start node to node n
o h(n) = estimated cost from n to the goal
o f(n) = estimated total cost of the path through n to the goal
o It tries to minimize the estimated total path cost to reach the goal at every node n.
o The algorithm combines best-first and uniform-cost search
o Path costs are updated when nodes are expanded (similarly to uniform-cost search)
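A sketch of A* along these lines; `successors(s)` yields (state, step cost) pairs and `h` is the heuristic function (the interface is an illustrative assumption):

```python
import heapq

def a_star_search(start, goal, successors, h):
    """A* sketch: expand the node with the lowest f(n) = g(n) + h(n).
    successors(s) yields (next_state, step_cost) pairs."""
    fringe = [(h(start), 0, start, [start])]   # entries: (f, g, state, path)
    best_g = {start: 0}             # cheapest known path cost per state
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                # stale entry: a better path was found
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(fringe,
                               (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None
```

With an admissible heuristic this sketch returns the cheapest path, as the optimality property on the next slide states.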
Example one
Indicate the flow of search to move from Awasa to Gondar using A*

68
Cont’d

69
Cont’d
o Example 2: Given the Heuristic
following tree structure, R  G -------------- 100
show the content of the open A  G -------------- 60
list and closed list generated B  G -------------- 80
by A* best first search C  G -------------- 70
algorithm D  G -------------- 65
R
35
70 E  G -------------- 40
40
F  G -------------- 45
A B C
25 10 62 45 H  G ---------------10
18 21
I  G ---------------- 20
D E F G1 H G2
J  G ---------------- 8
15 20 5
G1,G2,G3  G ------------ 0
I G3 J

70
Properties of A* search
o Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
o Optimal? Yes (provided that the heuristic is admissible)
o Time?
o In the best case (if the heuristic equals the actual cost), it is proportional to the depth of the solution node (i.e., it is linear): O(d)
o In the worst case, it is the number of nodes with f-value ≤ the f-value of the solution node: O(b^ceiling(C*/ε)), where C* is the f-value of the solution node
o This shows that A* search is computationally efficient compared to the greedy best-first search strategy

71
Cont’d
o Space? Keeps all nodes in memory (exponential), i.e., O(b^m)
o This is again a limitation, as we saw while discussing greedy, breadth-first and depth-first search
o Hence, it is advisable to have a modified version of the algorithm that reduces the space complexity
o There are two such modifications:
1. Iterative deepening A* (IDA*) search
2. Simplified Memory-Bounded A* (SMA*) search

72
Thank you!!!

73
