
A* search algorithm

In computer science, A* (pronounced "A-star") is a computer algorithm that is widely used in pathfinding and graph traversal, which is the process of finding a path between multiple points, called "nodes". It enjoys widespread use due to its performance and accuracy. However, in practical travel-routing systems, it is generally outperformed by algorithms which can pre-process the graph to attain better performance,[1] although other work has found A* to be superior to other approaches.[2]

[Infobox: Class: Search algorithm. Data structure: Graph. Worst-case performance: O(b^d). Worst-case space complexity: O(b^d).]

Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.[3] It can be seen as an extension of Edsger Dijkstra's 1959 algorithm. A* achieves better performance by using heuristics to guide its search.

Contents
History
Description
Pseudocode
Example
Implementation details
Special cases
Properties
Termination and Completeness
Admissibility
Optimal Efficiency
Bounded relaxation
Complexity
Applications
Relations to other algorithms
Variants
See also
Notes
References
Further reading
External links

History
A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm[4] for Shakey's path planning.[5] Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node; it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[5] Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its edge costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.[6]

The original 1968 A* paper[3] contained a theorem that no A*-like algorithm[7] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later[8] claiming that consistency was not required, but this was shown to be false in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[9]
[Figure: A* was invented by researchers working on Shakey the Robot's path planning.]

Description

A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied.

At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes

    f(n) = g(n) + h(n)

where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. A* terminates when the path it chooses to extend is a path from start to goal or when there are no paths eligible to be extended. The heuristic function is problem-specific. If the heuristic function is admissible, meaning that it never overestimates the actual cost to get to the goal, A* is guaranteed to return a least-cost path from start to goal.

Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost
nodes to expand. This priority queue is known as the open set or fringe. At each step of the algorithm, the node with
the lowest f(x) value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these
neighbors are added to the queue. The algorithm continues until a goal node has a lower f value than any node in the
queue (or until the queue is empty).[a] The f value of the goal is then the cost of the shortest path, since h at the goal is
zero in an admissible heuristic.
The algorithm described so far gives us only the length of the shortest path. To find the actual sequence of steps, the
algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is
run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.

As an example, when searching for the shortest route on a map, h(x) might represent the straight-line distance to the
goal, since that is physically the smallest possible distance between any two points.

If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d
denotes the length of that edge), then h is called monotone, or consistent. With a consistent heuristic, A* is
guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running
Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).
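To illustrate the equivalence, here is a minimal sketch of the reduced-cost reweighting, assuming a graph represented as a dict mapping each node to a dict of neighbor to edge length (the representation and function name are assumptions made for this example):

def reduced_costs(graph, h):
    # d'(x, y) = d(x, y) + h(y) - h(x). If h is consistent, every reduced
    # cost is non-negative, which is exactly the condition Dijkstra's
    # algorithm needs on the reweighted graph.
    return {(x, y): d + h(y) - h(x)
            for x, nbrs in graph.items()
            for y, d in nbrs.items()}

If any value in the result is negative, h is not consistent on that graph.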

Pseudocode
The following pseudocode describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the
    // cheapest path from start to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the cost of the cheapest path from start to n currently known.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n).
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current, neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := gScore[neighbor] + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a
cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the
heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet
the path to it is guaranteed to be optimal so the test ‘tentative_gScore < gScore[neighbor]’ will always fail if the node
is reached again.
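As a concrete illustration, the pseudocode above translates into a short runnable Python program. The sketch below uses the standard heapq module and, instead of a decrease-priority operation, pushes duplicate heap entries and skips stale ones when they are popped, so a node can re-enter the open set exactly as the remark describes. The graph interface, a neighbors(n) callable yielding (neighbor, edge weight) pairs, is an assumption made for this example.

import heapq

def reconstruct_path(came_from, current):
    """Walk the predecessor links back from the goal to the start."""
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return path[::-1]

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (m, d(n, m)) pairs; h(n) estimates the cost from n to the goal."""
    open_heap = [(h(start), start)]            # heap entries are (fScore, node)
    came_from = {}
    g_score = {start: 0}
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if current == goal:
            return reconstruct_path(came_from, current)
        if f > g_score[current] + h(current):  # stale entry: a cheaper path was found later
            continue
        for neighbor, d in neighbors(current):
            tentative_g = g_score[current] + d
            if tentative_g < g_score.get(neighbor, float("inf")):
                # This path to neighbor is better than any previous one. Record it!
                came_from[neighbor] = current
                g_score[neighbor] = tentative_g
                heapq.heappush(open_heap, (tentative_g + h(neighbor), neighbor))
    return None                                # open set exhausted: no path exists

For instance, on a small hypothetical graph:

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 2, "B": 1, "C": 1, "D": 0}.get       # an admissible, consistent heuristic
print(a_star("A", "D", lambda n: graph[n].items(), h))  # ['A', 'B', 'C', 'D']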

Example
An example of the A* algorithm in action, where nodes are cities connected with roads and h(x) is the straight-line distance to the target point. (Key: green: start; blue: goal; orange: visited.)

[Figure: Illustration of A* search finding a path from a start node to a goal node in a robot motion planning problem. The empty circles represent the nodes in the open set, i.e., those that remain to be explored, and the filled ones are in the closed set. The color of each closed node indicates its distance from the start: the greener, the farther. One can first see A* moving in a straight line in the direction of the goal, then, when hitting the obstacle, exploring alternative routes through the nodes of the open set.]

The A* algorithm also has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm searches for a path between Washington, D.C., and Los Angeles.
Implementation details
There are a number of simple optimizations or implementation details that can significantly affect the performance of
an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant
effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave
like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution).
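One way to obtain the LIFO behaviour is to store a decreasing insertion stamp next to each priority, so that among equal f values the most recently pushed entry pops first. A minimal Python sketch (the class name and interface are our own):

import heapq
from itertools import count

class LifoTieBreakHeap:
    """Min-heap on f that breaks ties LIFO, so A* behaves like depth-first
    search among equal-cost paths, as described above."""
    def __init__(self):
        self._heap = []
        self._stamp = count()

    def push(self, f, node):
        # The negated stamp makes newer entries compare smaller, hence popped
        # first whenever two entries share the same f value.
        heapq.heappush(self._heap, (f, -next(self._stamp), node))

    def pop(self):
        f, _, node = heapq.heappop(self._heap)
        return f, node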

When a path is required at the end of the search, it is common to keep with each node a reference to that node's
parent. At the end of the search these references can be used to recover the optimal path. If these references are being
kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry
corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a
node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are
changed to correspond to the lower cost path. A standard binary heap based priority queue does not directly support
the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to
their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively,
a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.
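A sketch in Python of the hash-table-augmented binary heap described above; push_or_decrease either inserts an item or lowers the priority of an existing entry in O(log n) time (the names and interface are our own, not a standard library API):

class IndexedHeap:
    """Binary min-heap augmented with a position map, so the priority of an
    element already in the heap can be decreased in O(log n)."""
    def __init__(self):
        self.heap = []   # list of (priority, item)
        self.pos = {}    # item -> index in self.heap

    def _swap(self, i, j):
        self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
        self.pos[self.heap[i][1]] = i
        self.pos[self.heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.heap[i][0] < self.heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            child = 2 * i + 1
            if child >= n:
                return
            if child + 1 < n and self.heap[child + 1][0] < self.heap[child][0]:
                child += 1
            if self.heap[i][0] <= self.heap[child][0]:
                return
            self._swap(i, child)
            i = child

    def push_or_decrease(self, item, priority):
        if item in self.pos:                  # decrease-priority path
            i = self.pos[item]
            if priority < self.heap[i][0]:
                self.heap[i] = (priority, item)
                self._sift_up(i)
        else:                                 # ordinary insert
            self.heap.append((priority, item))
            self.pos[item] = len(self.heap) - 1
            self._sift_up(len(self.heap) - 1)

    def pop_min(self):
        self._swap(0, len(self.heap) - 1)
        priority, item = self.heap.pop()
        del self.pos[item]
        if self.heap:
            self._sift_down(0)
        return priority, item

As noted above, a Fibonacci heap improves decrease-priority to constant amortized time, at the cost of a considerably more involved implementation.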

Special cases
Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where h(x) = 0 for all x.[10][11] General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After each single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its h value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an h value at each node.
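In code, the first special case is immediate. Reusing the a_star sketch from the Pseudocode section, Dijkstra's algorithm is obtained by passing the zero heuristic:

def dijkstra(start, goal, neighbors):
    # Dijkstra's algorithm: A* with h(n) = 0 for every node n.
    # Assumes the a_star function sketched in the Pseudocode section.
    return a_star(start, goal, neighbors, lambda n: 0)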

Properties

Termination and Completeness


On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero (d(x, y) > ε for some fixed ε > 0), A* is guaranteed to terminate only if there exists a solution.

Admissibility
A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function
used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows:

When A* terminates its search, it has found a path from start to goal whose actual cost is lower than the estimated cost of any path from start to goal through any open node (the node's f value). When the heuristic is admissible, those estimates are optimistic (not quite; see the next paragraph), so A* can safely ignore those nodes because they cannot possibly lead to a cheaper solution than the one it already has. In other words, A* will never overlook the possibility of a lower-cost path from start to goal, and so it will continue to search until no such possibilities exist.

The actual proof is a bit more involved, because the f values of open nodes are not guaranteed to be optimistic even if the heuristic is admissible: the g values of open nodes are not guaranteed to be optimal, so the sum f(n) = g(n) + h(n) is not guaranteed to be optimistic. The proof instead uses the fact that, at any point during the search, some open node on an optimal path has an optimal g value, and the f value of that node is therefore optimistic (at most the optimal solution cost).

Optimal Efficiency
Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem p in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving p is a subset (possibly equal) of the set of nodes expanded by A′ in solving p. The definitive study of the optimal efficiency of A* is due to Rina
Dechter and Judea Pearl.[9] They considered a variety of definitions of Alts and P in combination with A*'s heuristic
being merely admissible or being both consistent and admissible. The most interesting positive result they proved is
that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all
"non-pathological" search problems. Roughly speaking, their notion of a non-pathological problem is what we now
mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case,
Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A*
on some non-pathological problems.

Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of
A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be
expanded by A* many times, an exponential number of times in the worst case.[12] In such circumstances Dijkstra's
algorithm could outperform A* by a large margin.
Bounded relaxation
While the admissibility criterion guarantees an optimal solution path, it
also means that A* must examine all equally meritorious paths to find the
optimal path. To compute approximate shortest paths, it is possible to
speed up the search at the expense of optimality by relaxing the
admissibility criterion. Oftentimes we want to bound this relaxation, so that
we can guarantee that the solution path is no worse than (1 + ε) times the
optimal solution path. This new guarantee is referred to as ε-admissible.

There are a number of ε-admissible algorithms:

Weighted A*/Static Weighting.[13] If ha(n) is an admissible heuristic function, in the weighted version of the A* search one uses hw(n) = ε ha(n), ε > 1, as the heuristic function and performs the A* search as usual (which eventually happens faster than using ha, since fewer nodes are expanded). The path found by the search algorithm can then have a cost of at most ε times that of the least-cost path in the graph.[14] A sketch of this wrapper appears after this list.

[Figure: An A* search using a heuristic that is 5.0 (= ε) times a consistent heuristic obtains a suboptimal path.]

Dynamic Weighting[15] uses the cost function f(n) = g(n) + (1 + ε w(n)) h(n), where w(n) = 1 − d(n)/N if d(n) ≤ N and w(n) = 0 otherwise, and where d(n) is the depth of the search and N is the anticipated length of the solution path.

Sampled Dynamic Weighting[16] uses sampling of nodes to better estimate and debias the heuristic error.

A*ε[17] uses two heuristic functions. The first is used to build the FOCAL list of candidate nodes, and the second, hF, is used to select the most promising node from the FOCAL list.

Aε[18] selects nodes with the function A f(n) + B hF(n), where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function C f(n) + D hF(n), where C and D are constants.

AlphA*[19] attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function f_α(n) = (1 + w_α(n)) f(n), where w_α(n) = λ if g(π(n)) ≤ g(ñ) and w_α(n) = Λ otherwise, with constants λ ≤ Λ; here π(n) is the parent of n, and ñ is the most recently expanded node.
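The static-weighting variant from the first bullet is a one-line wrapper around an ordinary A* implementation. A sketch reusing the a_star function from the Pseudocode section (the default eps is an arbitrary choice):

def weighted_a_star(start, goal, neighbors, h, eps=1.5):
    # Inflate an admissible heuristic h by eps > 1. The resulting search is
    # epsilon-admissible: the returned path costs at most eps times the
    # optimal cost. Assumes the a_star function sketched earlier.
    return a_star(start, goal, neighbors, lambda n: eps * h(n))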

Complexity
The time complexity of A* depends on the heuristic. In the worst case of an unbounded search space, the number of nodes expanded is exponential in the depth of the solution (the shortest path) d: O(b^d), where b is the branching factor (the average number of successors per state).[20] This assumes that a goal state exists at all, and is reachable from the start state; if it is not, and the state space is infinite, the algorithm will not terminate.

The heuristic function has a major effect on the practical performance of A* search, since a good heuristic allows A* to prune away many of the b^d nodes that an uninformed search would expand. Its quality can be expressed in terms of the effective branching factor b*, which can be determined empirically for a problem instance by measuring the number of nodes expanded, N, and the depth of the solution, d, then solving[21]

    N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d

Good heuristics are those with a low effective branching factor (the optimal being b* = 1).
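The equation for b* has no closed-form solution, but it can be solved numerically. A bisection sketch (the bracket and tolerance are our choices; it assumes N ≥ d so a root lies in the bracket):

def effective_branching_factor(n_expanded, depth, tol=1e-9):
    # Solve N + 1 = 1 + b + b**2 + ... + b**depth for b by bisection.
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_expanded + 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_expanded + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

For example, effective_branching_factor(52, 5) returns roughly 1.92, matching the textbook example of a search that expands 52 nodes to find a solution at depth 5.[21]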

The time complexity is polynomial when the search space is a tree, there is a single goal state, and the heuristic function h meets the following condition:

    |h(x) − h*(x)| = O(log h*(x))

where h* is the optimal heuristic, the exact cost to get from x to the goal. In other words, the error of h will not grow faster than the logarithm of the "perfect heuristic" h* that returns the true distance from x to the goal.[14][20]

Applications
A* is commonly used for the pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[3] It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.[22] Other cases include informational search with online learning.[23]

Relations to other algorithms


What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n),
into account.

Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic h(n) = 0 for all nodes;[10][11] in turn, both Dijkstra and A* are special cases of dynamic programming.[24] A* itself is a special case of a generalization of branch and bound.[25]

Variants
Anytime A*[26] or Anytime Repairing A* (ARA*)[27]
Anytime Dynamic A*
Block A*
D*
Field D*
Fringe
Fringe Saving A* (FSA*)
Generalized Adaptive A* (GAA*)
Incremental heuristic search
Informational search[23]
Iterative deepening A* (IDA*)
Jump point search
Lifelong Planning A* (LPA*)
New Bidirectional A* (NBA*)[28]
Simplified Memory bounded A* (SMA*)
Realtime A*[29]
Theta*
Time-Bounded A* (TBA*)[30]
A* can also be adapted to a bidirectional search algorithm. Special care needs to be taken for the stopping
criterion.[31]
See also
Breadth-first search
Depth-first search
Any-angle path planning, search for paths that are not limited to move along graph edges but rather can take on
any angle

Notes
a. Goal nodes may be passed over multiple times if there remain other nodes with lower f values, as they may lead
to a shorter path to a goal.

References
1. Delling, D.; Sanders, P.; Schultes, D.; Wagner, D. (2009). "Engineering Route Planning Algorithms". Algorithmics
of Large and Complex Networks: Design, Analysis, and Simulation. Lecture Notes in Computer Science. 5515.
Springer. pp. 117–139. CiteSeerX 10.1.1.164.8916 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.
1.164.8916). doi:10.1007/978-3-642-02094-0_7 (https://doi.org/10.1007%2F978-3-642-02094-0_7). ISBN 978-3-
642-02093-3.
2. Zeng, W.; Church, R. L. (2009). "Finding shortest paths on real road networks: the case for A*" (https://zenodo.or
g/record/979689). International Journal of Geographical Information Science. 23 (4): 531–543.
doi:10.1080/13658810801949850 (https://doi.org/10.1080%2F13658810801949850).
3. Hart, P. E.; Nilsson, N. J.; Raphael, B. (1968). "A Formal Basis for the Heuristic Determination of Minimum Cost
Paths". IEEE Transactions on Systems Science and Cybernetics SSC4. 4 (2): 100–107.
doi:10.1109/TSSC.1968.300136 (https://doi.org/10.1109%2FTSSC.1968.300136).
4. Doran, J. E.; Michie, D. (1966-09-20). "Experiments with the Graph Traverser program" (http://rspa.royalsocietyp
ublishing.org/content/294/1437/235). Proc. R. Soc. Lond. A. 294 (1437): 235–259. doi:10.1098/rspa.1966.0205 (h
ttps://doi.org/10.1098%2Frspa.1966.0205). ISSN 0080-4630 (https://www.worldcat.org/issn/0080-4630).
5. Nilsson, Nils J. (2009-10-30). The Quest for Artificial Intelligence (https://ai.stanford.edu/~nilsson/QAI/qai.pdf)
(PDF). Cambridge: Cambridge University Press. ISBN 9780521122931.
6. Edelkamp, Stefan; Jabbar, Shahid; Lluch-Lafuente, Alberto (2005). "Cost-Algebraic Heuristic Search" (http://www.
aaai.org/Papers/AAAI/2005/AAAI05-216.pdf) (PDF). Proceedings of the Twentieth National Conference on
Artificial Intelligence (AAAI): 1362–1367.
7. “A*-like” means the algorithm searches by extending paths originating at the start node one edge at a time, just
as A* does. This excludes, for example, algorithms that search backward from the goal or in both directions
simultaneously. In addition, the algorithms covered by this theorem must be admissible and “not more informed”
than A*.
8. Hart, Peter E.; Nilsson, Nils J.; Raphael, Bertram (1972-12-01). "Correction to 'A Formal Basis for the Heuristic
Determination of Minimum Cost Paths' " (https://www.ics.uci.edu/~dechter/publications/r0.pdf) (PDF). ACM
SIGART Bulletin (37): 28–29. doi:10.1145/1056777.1056779 (https://doi.org/10.1145%2F1056777.1056779).
ISSN 0163-5719 (https://www.worldcat.org/issn/0163-5719).
9. Dechter, Rina; Judea Pearl (1985). "Generalized best-first search strategies and the optimality of A*". Journal of
the ACM. 32 (3): 505–536. doi:10.1145/3828.3830 (https://doi.org/10.1145%2F3828.3830).
10. De Smith, Michael John; Goodchild, Michael F.; Longley, Paul (2007), Geospatial Analysis: A Comprehensive
Guide to Principles, Techniques and Software Tools (https://books.google.com/books?id=SULMdT8qPwEC&pg=
PA344), Troubadour Publishing Ltd, p. 344, ISBN 9781905886609.
11. Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language (https://boo
ks.google.com/books?id=9_AXCmGDiz8C&pg=PA214), Apress, p. 214, ISBN 9781430232377.
12. Martelli, Alberto (1977). "On the Complexity of Admissible Search Algorithms". Artificial Intelligence. 8 (1): 1–13.
doi:10.1016/0004-3702(77)90002-9 (https://doi.org/10.1016%2F0004-3702%2877%2990002-9).
13. Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence. 5: 219–236.
14. Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving (https://archive.org/d
etails/heuristicsintell00pear). Addison-Wesley. ISBN 978-0-201-05594-8.
15. Pohl, Ira (August 1973). "The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic
weighting and computational issues in heuristic problem solving" (https://www.cs.auckland.ac.nz/courses/compsci
709s2c/resources/Mike.d/Pohl1973WeightedAStar.pdf) (PDF). Proceedings of the Third International Joint
Conference on Artificial Intelligence (IJCAI-73). 3. California, USA. pp. 11–17.
16. Köll, Andreas; Hermann Kaindl (August 1992). "A new approach to dynamic weighting". Proceedings of the Tenth
European Conference on Artificial Intelligence (ECAI-92). Vienna, Austria. pp. 16–17.
17. Pearl, Judea; Jin H. Kim (1982). "Studies in semi-admissible heuristics". IEEE Transactions on Pattern Analysis
and Machine Intelligence (PAMI). 4 (4): 392–399. doi:10.1109/TPAMI.1982.4767270 (https://doi.org/10.1109%2F
TPAMI.1982.4767270).
18. Ghallab, Malik; Dennis Allard (August 1983). "Aε – an efficient near admissible heuristic search algorithm" (https:/
/web.archive.org/web/20140806200328/http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048.pdf)
(PDF). Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). 2. Karlsruhe,
Germany. pp. 789–791. Archived from the original (http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-2/PDF/048
.pdf) (PDF) on 2014-08-06.
19. Reese, Bjørn (1999). "AlphA*: An ε-admissible heuristic search algorithm" (https://web.archive.org/web/20160131
214618/http://home1.stofanet.dk/breese/astaralpha-submitted.pdf.gz). Archived from the original (http://home1.st
ofanet.dk/breese/astaralpha-submitted.pdf.gz) on 2016-01-31. Retrieved 2014-11-05.
20. Russell, Stuart; Norvig, Peter (2003) [1995]. Artificial Intelligence: A Modern Approach (2nd ed.). Prentice Hall.
pp. 97–104. ISBN 978-0137903955.
21. Russell, Stuart; Norvig, Peter (2009) [1995]. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall.
p. 103. ISBN 978-0-13-604259-4.
22. Klein, Dan; Manning, Christopher D. (2003). A* parsing: fast exact Viterbi parse selection. Proc. NAACL-HLT.
23. Kagan E. and Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning" (http://www.eng.t
au.ac.il/~bengal/GTA.pdf) (PDF). IIE Transactions, 46:2, 164-184.
24. Ferguson, Dave; Likhachev, Maxim; Stentz, Anthony (2005). A Guide to Heuristic-based Path Planning (https://w
ww.cs.cmu.edu/afs/cs.cmu.edu/Web/People/maxim/files/hsplanguide_icaps05ws.pdf) (PDF). Proc. ICAPS
Workshop on Planning under Uncertainty for Autonomous Systems.
25. Nau, Dana S.; Kumar, Vipin; Kanal, Laveen (1984). "General branch and bound, and its relation to A∗ and AO∗" (
https://www.cs.umd.edu/~nau/papers/nau1984general.pdf) (PDF). Artificial Intelligence. 23 (1): 29–58.
doi:10.1016/0004-3702(84)90004-3 (https://doi.org/10.1016%2F0004-3702%2884%2990004-3).
26. Hansen, Eric A., and Rong Zhou. "Anytime Heuristic Search. (http://www.jair.org/media/2096/live-2096-3136-jair.p
df?q=anytime:)" J. Artif. Intell. Res.(JAIR) 28 (2007): 267-297.
27. Likhachev, Maxim; Gordon, Geoff; Thrun, Sebastian. "ARA*: Anytime A* search with provable bounds on sub-
optimality (http://robots.stanford.edu/papers/Likhachev03b.pdf)". In S. Thrun, L. Saul, and B. Schölkopf, editors,
Proceedings of Conference on Neural Information Processing Systems (NIPS), Cambridge, MA, 2003. MIT
Press.
28. Pijls, Wim; Post, Henk "Yet another bidirectional algorithm for shortest paths (https://repub.eur.nl/pub/16100/ei200
9-10.pdf)" In Econometric Institute Report EI 2009-10/Econometric Institute, Erasmus University Rotterdam.
Erasmus School of Economics.
29. Korf, Richard E. "Real-time heuristic search. (https://pdfs.semanticscholar.org/2fda/10f6079156c4621fefc8b7cad7
2c1829ee94.pdf)" Artificial intelligence 42.2-3 (1990): 189-211.
30. Björnsson, Yngvi; Bulitko, Vadim; Sturtevant, Nathan (July 11–17, 2009). TBA*: time-bounded A* (http://web.cs.d
u.edu/~sturtevant/papers/TBA.pdf) (PDF). IJCAI 2009, Proceedings of the 21st International Joint Conference on
Artificial Intelligence. Pasadena, California, USA: Morgan Kaufmann Publishers Inc. pp. 431–436.
31. "Efficient Point-to-Point Shortest Path Algorithms" (http://www.cs.princeton.edu/courses/archive/spr06/cos423/Ha
ndouts/EPP%20shortest%20path%20algorithms.pdf) (PDF). from Princeton University
Further reading
Nilsson, N. J. (1980). Principles of Artificial Intelligence (https://archive.org/details/principlesofarti00nils). Palo
Alto, California: Tioga Publishing Company. ISBN 978-0-935382-01-3.

External links
Clear visual A* explanation, with advice and thoughts on path-finding (http://theory.stanford.edu/~amitp/GamePro
gramming/)
Variation on A* called Hierarchical Path-Finding A* (HPA*) (https://web.archive.org/web/20090917155722/http://w
ww.cs.ualberta.ca/~mmueller/ps/hpastar.pdf)
