The document discusses graph search algorithms for planning problems. It introduces A* search, which finds optimal paths in graphs by maintaining estimates of the cost to reach each node from the start (g-values) and heuristically estimating the cost to reach the goal (h-values). The search expands nodes with the lowest total cost (f-value = g + h). This allows A* to iteratively improve paths until finding the optimal solution.


A* and Weighted A* Search

Maxim Likhachev
Carnegie Mellon University

Planning as Graph Search Problem

1. Construct a graph representing the planning problem (future lectures)
2. Search the graph for a (hopefully, close-to-optimal) path (three next lectures)

The two steps above are often interleaved

Examples of Graph Construction

Cell decomposition
- X-connected grids
- lattice-based graphs (replicate an action template online)

Skeletonization of the environment/C-Space
- Visibility graphs
- Voronoi diagrams
- Probabilistic roadmaps

(All will be covered later.)

Examples of Search-based Planning

1. Construct a graph representing the planning problem
2. Search the graph for a (hopefully, close-to-optimal) path
The two steps are often interleaved

Example: motion planning for autonomous vehicles in 4D (<x, y, orientation, velocity>),
running Anytime Incremental A* (Anytime D*) on a multi-resolution lattice
[Likhachev & Ferguson, IJRR '09]
(part of the efforts by the Tartan Racing team from CMU for the Urban Challenge 2007 race)

Examples of Search-based Planning

Example: 8-dim foothold planning for quadrupeds using R* graph search

Searching Graphs for a Least-cost Path

Once a graph is constructed (from skeletonization, uniform cell decomposition,
adaptive cell decomposition, a lattice, or whatever else), we need to search it
for a least-cost path.

[figure: example graph with states sstart, s1, s2, s3, s4, sgoal and edge costs]

Searching Graphs for a Least-cost Path

Many searches work by computing optimal g-values for relevant states:
g(s) – an estimate of the cost of a least-cost path from sstart to s
Optimal values satisfy:

  g(s) = min over s'' in pred(s) of [ g(s'') + c(s'', s) ]     (why?)

[figure: example graph with g(sstart)=0, g(s2)=1, g(s1)=3, g(s4)=2, g(s3)=5,
 g(sgoal)=5; the cost c(s1, sgoal) = 2 of the edge from s1 to sgoal is highlighted]


Searching Graphs for a Least-cost Path

A least-cost path is a greedy path computed by backtracking: start with sgoal
and from any state s move to the predecessor state s' such that

  s' = argmin over s'' in pred(s) of [ g(s'') + c(s'', s) ]

[figure: same example graph with g-values]
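The backtracking rule can be sketched in Python on the slides' example graph (the edge cost c(s3, sgoal) = 1 is an assumption of mine; the figure does not show it):

```python
def backtrack(g, cost, pred, s_goal):
    # Walk back from s_goal: at each step move to the predecessor s''
    # minimizing g(s'') + c(s'', s), as in the argmin rule above.
    path = [s_goal]
    s = s_goal
    while pred[s]:  # s_start has an empty predecessor list
        s = min(pred[s], key=lambda p: g[p] + cost[(p, s)])
        path.append(s)
    return list(reversed(path))

# Example graph from the slides (c(s3, sgoal) assumed):
cost = {('sstart', 's2'): 1, ('s2', 's1'): 2, ('s2', 's4'): 1,
        ('s1', 'sgoal'): 2, ('s4', 's3'): 3, ('s3', 'sgoal'): 1}
pred = {'sstart': [], 's2': ['sstart'], 's1': ['s2'], 's4': ['s2'],
        's3': ['s4'], 'sgoal': ['s1', 's3']}
g = {'sstart': 0, 's2': 1, 's1': 3, 's4': 2, 's3': 5, 'sgoal': 5}

print(backtrack(g, cost, pred, 'sgoal'))  # ['sstart', 's2', 's1', 'sgoal']
```

Note that backtracking needs only the g-values and edge costs; no parent pointers have to be stored during the search.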

A* Search
Computes optimal g-values for relevant states. At any point of time:
  g(s) – the cost of a shortest path from sstart to s found so far
  h(s) – an (under)estimate of the cost of a shortest path from s to sgoal
    (the heuristic function; one popular heuristic function – Euclidean distance)

[figure: a path from sstart through s to sgoal, split into its g(s) and h(s) portions]

A* Search

The heuristic function must be:
- admissible: for every state s, h(s) ≤ c*(s, sgoal), the minimal cost from s to sgoal
- consistent (satisfy the triangle inequality):
  h(sgoal) = 0 and, for every s ≠ sgoal, h(s) ≤ c(s, succ(s)) + h(succ(s))

Admissibility follows from consistency, and often consistency follows from admissibility.
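As a quick sanity check (my own example, not from the slides), consistency of the Euclidean heuristic on a unit-cost 4-connected grid can be verified by brute force:

```python
import math
import itertools

def euclid(c, goal):
    return math.hypot(c[0] - goal[0], c[1] - goal[1])

# Check on a 5x5 unit-cost 4-connected grid: consistency requires
# h(goal) = 0 and h(s) <= c(s, s') + h(s') for every successor s'.
N, goal = 5, (4, 4)
cells = list(itertools.product(range(N), range(N)))

def successors(c):
    x, y = c
    return [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
            if 0 <= nx < N and 0 <= ny < N]

consistent = (euclid(goal, goal) == 0 and
              all(euclid(s, goal) <= 1 + euclid(t, goal)  # every edge costs 1
                  for s in cells for t in successors(s)))
print(consistent)
```

The check passes because straight-line distance obeys the triangle inequality and no edge cost is smaller than the straight-line distance between the cells it connects.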

A* Search
Computes optimal g-values for relevant states.

Main function
  g(sstart) = 0; all other g-values are infinite; OPEN = {sstart};   (OPEN – the set of candidates for expansion)
  ComputePath();
  publish solution;

ComputePath function
  while (sgoal is not expanded)
    remove s with the smallest [f(s) = g(s) + h(s)] from OPEN;
    expand s;

For every expanded state, g(s) is optimal (if heuristics are consistent).

[figure: example graph with h(sstart)=3, h(s2)=2, h(s1)=1, h(s4)=2, h(s3)=1,
 h(sgoal)=0; g(sstart)=0 and all other g-values infinite]


A* Search
Computes optimal g-values for relevant states.

ComputePath function
  while (sgoal is not expanded)
    remove s with the smallest [f(s) = g(s) + h(s)] from OPEN;
    insert s into CLOSED;                  (CLOSED – the set of states that have already been expanded)
    for every successor s' of s such that s' is not in CLOSED
      if g(s') > g(s) + c(s, s')           (tries to decrease g(s') using the found path from sstart to s)
        g(s') = g(s) + c(s, s');
        insert s' into OPEN;
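A minimal Python version of the ComputePath pseudocode, using a binary heap for OPEN (the data reproduces the slides' example graph; the edge cost c(s3, sgoal) = 1 is an assumption, since it is not readable from the figure):

```python
import heapq

def a_star(successors, cost, h, s_start, s_goal):
    # Follows the slides' ComputePath: repeatedly remove the state with the
    # smallest f = g + h from OPEN, move it to CLOSED, relax its successors.
    g = {s_start: 0}
    OPEN = [(h[s_start], s_start)]
    CLOSED = set()
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:   # stale entry: s was re-inserted earlier with a better g
            continue
        CLOSED.add(s)
        if s == s_goal:   # s_goal expanded: g(s_goal) is optimal for consistent h
            return g[s]
        for t in successors.get(s, ()):
            if t not in CLOSED and g.get(t, float('inf')) > g[s] + cost[(s, t)]:
                g[t] = g[s] + cost[(s, t)]
                heapq.heappush(OPEN, (g[t] + h[t], t))
    return float('inf')

# The slides' example graph (c(s3, sgoal) assumed):
successors = {'sstart': ['s2'], 's2': ['s1', 's4'], 's1': ['sgoal'],
              's4': ['s3'], 's3': ['sgoal']}
cost = {('sstart', 's2'): 1, ('s2', 's1'): 2, ('s2', 's4'): 1,
        ('s1', 'sgoal'): 2, ('s4', 's3'): 3, ('s3', 'sgoal'): 1}
h = {'sstart': 3, 's2': 2, 's1': 1, 's4': 2, 's3': 1, 'sgoal': 0}

print(a_star(successors, cost, h, 'sstart', 'sgoal'))  # 5
```

Instead of a decrease-key operation, this sketch pushes duplicate OPEN entries and skips any popped state that is already CLOSED, which is a common idiom with Python's heapq.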

A* Search: example run of ComputePath

  CLOSED = {}                        OPEN = {sstart}     next state to expand: sstart
    (g(s2) > g(sstart) + c(sstart, s2), so g(s2) becomes 1 and s2 enters OPEN)
  CLOSED = {sstart}                  OPEN = {s2}         next state to expand: s2
  CLOSED = {sstart, s2}              OPEN = {s1, s4}     next state to expand: s1     (g(s1) = 3, g(s4) = 2)
  CLOSED = {sstart, s2, s1}          OPEN = {s4, sgoal}  next state to expand: s4     (g(sgoal) = 5)
  CLOSED = {sstart, s2, s1, s4}      OPEN = {s3, sgoal}  next state to expand: sgoal  (g(s3) = 5)
  CLOSED = {sstart, s2, s1, s4, sgoal}  OPEN = {s3}      done

Final values on the example graph:
  g(sstart) = 0, g(s2) = 1, g(s1) = 3, g(s4) = 2, g(s3) = 5, g(sgoal) = 5
  (h-values: h(sstart) = 3, h(s2) = 2, h(s1) = 1, h(s4) = 2, h(s3) = 1, h(sgoal) = 0)

A* Search

After the search:
- for every expanded state, g(s) is optimal
- for every other state, g(s) is an upper bound (why?)
- we can now compute a least-cost path by backtracking


A* Search

- Is guaranteed to return an optimal path (in fact, optimal g-values for every
  expanded state): optimal in terms of the solution
- Performs a provably minimal number of state expansions required to guarantee
  optimality: optimal in terms of the computations

A* Search

Sketch of proof by induction for h = 0:
- assume all previously expanded states have optimal g-values
- the next state to expand is s: f(s) = g(s) is minimal among states in OPEN
- OPEN separates expanded states from never-seen states
- thus, any path to s via a state in OPEN or an unseen state would be no better
  than g(s) (assuming positive costs)

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
(ComputePath as before; each iteration of the while-loop is one expansion of s)

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
Sketch of proof of optimality by induction for consistent h:
1. assume all previously expanded states have optimal g-values
2. the next state to expand is s: f(s) = g(s) + h(s) is minimal among states in OPEN
3. assume g(s) is suboptimal
4. then there must be at least one state s' on an optimal path from sstart to s
   such that it is in OPEN but wasn't expanded
5. g(s') + h(s') ≥ g(s) + h(s)     (since s was selected for expansion)
6. but g(s') + c*(s', s) < g(s)  =>
   g(s') + c*(s', s) + h(s) < g(s) + h(s)  =>
   g(s') + h(s') < g(s) + h(s)    (by consistency, h(s') ≤ c*(s', s) + h(s)),
   contradicting step 5
7. thus it must be the case that g(s) is optimal

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
Dijkstra's: expands states in the order of f = g values (pretty much)

Intuitively: f(s) is an estimate of the cost of a least-cost path from start to goal via s:
  g(s) – the cost of a shortest path from sstart to s found so far
  h(s) – an (under)estimate of the cost of a shortest path from s to sgoal

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
Dijkstra's: expands states in the order of f = g values (pretty much)
Weighted A*: expands states in the order of f = g + ε·h values, ε > 1 = bias
towards states that are closer to the goal
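A sketch of weighted A* in Python, on a 4-connected unit-cost grid with the Manhattan heuristic (the grid and names are mine, not from the slides); the only change from plain A* is the priority f = g + eps*h:

```python
import heapq

def weighted_a_star(grid, start, goal, eps):
    # Identical to A* except that OPEN is ordered by f = g + eps*h;
    # eps = 1 gives plain A*.
    R, C = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan
    g = {start: 0}
    OPEN = [(eps * h(start), start)]
    CLOSED = set()
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue
        CLOSED.add(s)
        if s == goal:
            return g[s], len(CLOSED)  # solution cost, number of expansions
        x, y = s
        for t in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= t[0] < R and 0 <= t[1] < C and grid[t[0]][t[1]] == 0
                    and t not in CLOSED and g.get(t, float('inf')) > g[s] + 1):
                g[t] = g[s] + 1
                heapq.heappush(OPEN, (g[t] + eps * h(t), t))
    return None, len(CLOSED)

# Two walls force a detour: the optimal cost is 16, Manhattan distance only 8.
grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]

c1, n1 = weighted_a_star(grid, (0, 0), (4, 4), 1)    # plain A*: cost 16
c2, n2 = weighted_a_star(grid, (0, 0), (4, 4), 2.0)  # cost at most 2 * 16
```

Comparing n1 and n2 for larger maps illustrates the speed/optimality trade-off: the inflated heuristic biases expansions toward the goal, while the returned cost stays within the ε bound.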

Effect of the Heuristic Function

Dijkstra's: expands states in the order of f = g values
What are the states expanded?

[figure: states expanded by Dijkstra's between sstart and sgoal]

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
What are the states expanded?

[figure: states expanded by A* between sstart and sgoal]

Effect of the Heuristic Function

A* Search: expands states in the order of f = g+h values
For large problems this results in A* quickly running out of memory (memory: O(n))

[figure: states expanded by A* between sstart and sgoal]

Effect of the Heuristic Function

Weighted A* Search: expands states in the order of f = g + ε·h values, ε > 1 =
bias towards states that are closer to the goal
What states are expanded? – a research question
Key to finding the solution fast: shallow minima of the h(s) − h*(s) function

[figure: states expanded by weighted A* between sstart and sgoal]

Effect of the Heuristic Function

Weighted A* Search:
- trades off optimality for speed
- ε-suboptimal: cost(solution) ≤ ε · cost(optimal solution)
- in many domains, it has been shown to be orders of magnitude faster than A*
- the research problem becomes developing a heuristic function that has shallow
  local minima

Effect of the Heuristic Function

Weighted A* Search (continued): is it guaranteed to expand no more states than A*?

Effect of the Heuristic Function

Constructing anytime search based on weighted A*:
- find the best path possible given some amount of time for planning
- do it by running a series of weighted A* searches with decreasing ε:

  ε = 2.5: 13 expansions, solution = 11 moves
  ε = 1.5: 15 expansions, solution = 11 moves
  ε = 1.0: 20 expansions, solution = 10 moves
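The anytime scheme can be sketched by simply rerunning weighted A* with a decreasing ε schedule (grid example and names are mine; this naive version restarts every search from scratch, which is exactly the inefficiency the slides point out):

```python
import heapq

def wastar(grid, start, goal, eps):
    # One weighted A* search: f = g + eps*h, Manhattan h, 4-connected unit grid.
    R, C = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    g, OPEN, CLOSED = {start: 0}, [(eps * h(start), start)], set()
    while OPEN:
        _, s = heapq.heappop(OPEN)
        if s in CLOSED:
            continue
        CLOSED.add(s)
        if s == goal:
            return g[s]
        for t in ((s[0]+1, s[1]), (s[0]-1, s[1]), (s[0], s[1]+1), (s[0], s[1]-1)):
            if (0 <= t[0] < R and 0 <= t[1] < C and grid[t[0]][t[1]] == 0
                    and t not in CLOSED and g.get(t, float('inf')) > g[s] + 1):
                g[t] = g[s] + 1
                heapq.heappush(OPEN, (g[t] + eps * h(t), t))
    return None

def anytime_wastar(grid, start, goal, schedule=(2.5, 1.5, 1.0)):
    # Naive anytime scheme from the slides: rerun weighted A* with decreasing
    # eps, keeping the best solution found so far.
    best, history = float('inf'), []
    for eps in schedule:
        c = wastar(grid, start, goal, eps)
        if c is not None:
            best = min(best, c)
        history.append((eps, best))
    return best, history

grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
best, history = anytime_wastar(grid, (0, 0), (4, 4))
```

The best-so-far cost is non-increasing over the schedule, and the final ε = 1.0 search certifies optimality of the last answer.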

Effect of the Heuristic Function

Running the series of weighted A* searches is inefficient because:
- many state values remain the same between search iterations
- we should be able to reuse the results of previous searches

Effect of the Heuristic Function

ARA* (will be explained in a later lecture):
- an efficient version of the above that reuses state values across search iterations
- will be covered after we learn about the incremental version of A*

Effect of the Heuristic Function

Useful properties to know:
- h1(s), h2(s) consistent; then h(s) = max(h1(s), h2(s)) is consistent
- if A* uses ε-consistent heuristics:
    h(sgoal) = 0 and h(s) ≤ ε·c(s, succ(s)) + h(succ(s)) for all s ≠ sgoal,
  then A* is ε-suboptimal:
    cost(solution) ≤ ε · cost(optimal solution)
- weighted A* is A* with ε-consistent heuristics
- h1(s), h2(s) consistent; then h(s) = h1(s) + h2(s) is ε-consistent


Effect of the Heuristic Function

Exercises:
- prove that weighted A* is A* with ε-consistent heuristics
- for h(s) = h1(s) + h2(s) with h1, h2 consistent: what is ε, and why?

Examples of Heuristic Function

For grid-based navigation:
- Euclidean distance
- Manhattan distance: h(x,y) = abs(x − xgoal) + abs(y − ygoal)
- Diagonal distance: h(x,y) = max(abs(x − xgoal), abs(y − ygoal))
- more informed distances?

For robot arm planning:
- end-effector distance
- any others?

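The three grid heuristics translate directly into Python (the admissibility notes in the comments are mine, not from the slides):

```python
import math

# Manhattan distance is admissible on 4-connected grids; diagonal distance on
# 8-connected grids with unit-cost diagonal moves; Euclidean distance on both.
def euclidean(x, y, xg, yg):
    return math.hypot(x - xg, y - yg)

def manhattan(x, y, xg, yg):
    return abs(x - xg) + abs(y - yg)

def diagonal(x, y, xg, yg):
    return max(abs(x - xg), abs(y - yg))

print(manhattan(0, 0, 3, 4), diagonal(0, 0, 3, 4), euclidean(0, 0, 3, 4))
# 7 4 5.0
```

Note the ordering diagonal ≤ euclidean ≤ manhattan for any pair of cells, which is why Manhattan distance is the most informed of the three where it is admissible.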

Examples of Heuristic Function

For autonomous door opening:
- what heuristic function?

Memory Issues

- A* performs a provably minimal number of expansions (O(n)) for finding a
  provably optimal solution
- memory requirements of A* (O(n)) can be improved, though
- memory requirements of weighted A* are often, but not always, better

Memory Issues

Alternatives:
Depth-First Search (without coloring all expanded states):
- explores one possible path at a time, avoiding loops and keeping in memory
  only the best path discovered so far
- complete and optimal (assuming finite state-spaces)
- Memory: O(bm), where b – max branching factor, m – max path length
- Complexity: O(b^m), since it will repeatedly re-expand states

Memory Issues

Example:
- graph: a 4-connected grid of 40 by 40 cells; start: the center of the grid
- A* expands up to 800 states; DFS may expand way over 4^20 > 10^12 states

Memory Issues

What if the goal is only a few steps away in a huge state-space?

Memory Issues

Alternatives: IDA* (Iterative Deepening A*)
1. set fmax = 1 (or some other small value)
2. execute the (previously explained) DFS, but do not expand states with f > fmax
3. if DFS returns a path to the goal, return it
4. otherwise set fmax = fmax + 1 (or a larger increment) and go to step 2

Memory Issues

IDA* (continued):
- complete and optimal in any state-space (with positive costs)
- Memory: O(bl), where b – max branching factor, l – length of the optimal path
- Complexity: O(k·b^l), where k is the number of times DFS is called
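A compact Python sketch of IDA* on the slides' example graph (c(s3, sgoal) = 1 is assumed; also, instead of fmax = fmax + 1, this common variant raises the bound to the smallest f value that exceeded it):

```python
def ida_star(successors, cost, h, start, goal):
    # DFS with an f = g + h cutoff, restarted with a growing bound.
    # Memory stays linear in the current path length; states may be re-expanded.
    path = [start]

    def dfs(s, g, bound):
        f = g + h[s]
        if f > bound:
            return f          # report the f value that exceeded the bound
        if s == goal:
            return True
        smallest = float('inf')
        for t in successors.get(s, ()):
            if t in path:     # avoid loops along the current path
                continue
            path.append(t)
            r = dfs(t, g + cost[(s, t)], bound)
            if r is True:
                return True
            smallest = min(smallest, r)
            path.pop()
        return smallest

    bound = h[start]
    while True:
        r = dfs(start, 0, bound)
        if r is True:
            return path       # current DFS stack is the solution path
        if r == float('inf'):
            return None       # goal unreachable
        bound = r             # smallest f value that exceeded the old bound

# The slides' example graph (c(s3, sgoal) assumed):
successors = {'sstart': ['s2'], 's2': ['s1', 's4'], 's1': ['sgoal'],
              's4': ['s3'], 's3': ['sgoal']}
cost = {('sstart', 's2'): 1, ('s2', 's1'): 2, ('s2', 's4'): 1,
        ('s1', 'sgoal'): 2, ('s4', 's3'): 3, ('s3', 'sgoal'): 1}
h = {'sstart': 3, 's2': 2, 's1': 1, 's4': 2, 's3': 1, 'sgoal': 0}

print(ida_star(successors, cost, h, 'sstart', 'sgoal'))
```

On this graph the bound grows 3 → 4 → 5, and the third DFS reaches sgoal with the optimal cost of 5.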
