Lab Report: Department of Mechatronics & Control Engineering
(Artificial Intelligence)
Submitted To:-
Submitted By:-
The most natural output of a DFS is a spanning tree – a tree made up of all vertices and some of the edges of
an undirected graph. In this formation, the edges of the graph are divided into three classes: forward edges, pointing from a node to
one of its descendants; back edges, pointing from a node to an earlier node; and cross edges, which do neither of
these.
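As a minimal illustration of the spanning tree a DFS produces, the following sketch collects the tree edges of a depth-first traversal (the graph, vertex labels, and function name are hypothetical examples, not part of the report):

```python
def dfs_spanning_tree(graph, start):
    """Return the list of tree edges found by a depth-first search."""
    visited = {start}
    tree_edges = []
    stack = [start]
    while stack:
        node = stack.pop()
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                tree_edges.append((node, neighbour))  # tree edge of the DFS
                stack.append(neighbour)
    return tree_edges

# A small undirected graph given as an adjacency dict (illustrative).
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'C'],
    'C': ['A', 'B'],
}
print(dfs_spanning_tree(graph, 'A'))  # [('A', 'B'), ('A', 'C')]
```

Note that the spanning tree contains all three vertices but only two of the three edges; the remaining edge (B, C) would be classified as a back edge.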
The space complexity of IDDFS is O(b·d), where b is the branching factor and d is the depth of the shallowest goal. Since
iterative deepening visits states multiple times, it may seem wasteful, but it turns out to be not so costly, since in a
tree most of the nodes are in the bottom level, so it does not matter much if the upper levels are visited multiple
times.[1]
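As a sketch of the idea, iterative deepening can be written as a loop of depth-limited searches with an increasing depth limit (a minimal illustration; the graph and function names are our own assumptions):

```python
def depth_limited_search(graph, node, goal, limit):
    """Depth-first search that gives up below the given depth limit."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_search(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(graph, start, goal, max_depth=20):
    # Re-run the depth-limited search with limits 0, 1, 2, ...
    # Upper levels are re-expanded each round, but most nodes of a
    # tree sit near the bottom, so the overhead stays small.
    for depth in range(max_depth + 1):
        path = depth_limited_search(graph, start, goal, depth)
        if path is not None:
            return path
    return None

tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F']}
print(iddfs(tree, 'A', 'F'))  # ['A', 'C', 'E', 'F']
```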
The main advantage of IDDFS in game tree searching is that the earlier searches tend to improve the commonly used
heuristics, such as the killer heuristic and alpha-beta pruning, so that a more accurate estimate of the score of various
nodes at the final depth search can occur, and the search completes more quickly since it is done in a better order.
For example, alpha-beta pruning is most efficient if it searches the best moves first.
A second advantage is the responsiveness of the algorithm. Because early iterations use small values for d, they
execute extremely quickly. This allows the algorithm to supply early indications of the result almost immediately,
followed by refinements as d increases. When used in an interactive setting, such as in a chess-playing program, this
facility allows the program to play at any time with the current best move found in the search it has completed so far.
This is not possible with a traditional depth-first search.
The time complexity of IDDFS in well-balanced trees works out to be the same as depth-first search: O(b^d).
In an iterative deepening search, the nodes on the bottom level are expanded once, those on the next to bottom level
are expanded twice, and so on, up to the root of the search tree, which is expanded d + 1 times.[1] So the total
number of expansions in an iterative deepening search is

N(IDS) = (d + 1) + d·b + (d − 1)·b^2 + … + 3·b^(d−2) + 2·b^(d−1) + b^d
All together, an iterative deepening search from depth 1 to depth d expands only about 11% more
nodes than a single breadth-first or depth-limited search to depth d, when b = 10. The higher the
branching factor, the lower the overhead of repeatedly expanded states, but even when the
branching factor is 2, iterative deepening search only takes about twice as long as a complete
breadth-first search. This means that the time complexity of iterative deepening is still O(b^d), and
the space complexity is O(b·d). In general, iterative deepening is the preferred search method when
there is a large search space and the depth of the solution is not known.
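The ~11% figure for b = 10 can be checked directly from the expansion counts described above, where a node at depth i is expanded d + 1 − i times (a small illustrative computation; the function names are our own):

```python
def iddfs_expansions(b, d):
    # Root expanded d + 1 times, depth-1 nodes d times, ...,
    # depth-d nodes once.
    return sum((d + 1 - i) * b**i for i in range(d + 1))

def bfs_expansions(b, d):
    # A single breadth-first (or depth-limited) search expands every
    # node down to depth d exactly once.
    return sum(b**i for i in range(d + 1))

iddfs_n = iddfs_expansions(10, 5)  # 123456
bfs_n = bfs_expansions(10, 5)      # 111111
print(iddfs_n / bfs_n - 1)         # overhead of about 0.111, i.e. ~11%
```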
Bidirectional search
Bidirectional search is a graph search algorithm that finds a shortest path from an initial vertex to a goal vertex in
a directed graph. It runs two simultaneous searches: one forward from the initial state, and one backward from the
goal, stopping when the two meet in the middle. The reason for this approach is that in many cases it is faster: for
instance, in a simplified model of search problem complexity in which both searches expand a tree with branching
factor b, and the distance from start to goal is d, each of the two searches has complexity O(b^(d/2)) (in Big O notation),
and the sum of these two search times is much less than the O(b^d) complexity that would result from a single search
from the beginning to the goal.
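On an unweighted graph, the idea can be sketched as two breadth-first searches that stop when their frontiers meet (a minimal sketch under that assumption; the graph and names are illustrative, and a production implementation needs more care about where the frontiers intersect):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return the length of a shortest path in an unweighted graph."""
    if start == goal:
        return 0
    dist_fwd, dist_bwd = {start: 0}, {goal: 0}
    q_fwd, q_bwd = deque([start]), deque([goal])
    while q_fwd and q_bwd:
        # Extra logic the text mentions: choose which tree to extend;
        # here we simply extend the smaller frontier.
        if len(q_fwd) <= len(q_bwd):
            queue, dist, other = q_fwd, dist_fwd, dist_bwd
        else:
            queue, dist, other = q_bwd, dist_bwd, dist_fwd
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour in other:           # the two searches meet
                return dist[node] + 1 + other[neighbour]
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return None                              # no path exists

# A simple path graph 1-2-3-4-5 (illustrative).
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bidirectional_search(graph, 1, 5))  # 4
```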
This speedup does not come without a price: a bidirectional search algorithm must include additional logic to decide
which search tree to extend at each step, increasing the difficulty of implementation. The goal state must be known
(rather than having a general goal criterion that may be met by many different states), the algorithm must be able to
step backwards from goal to initial state (which may not be possible without extra work), and the algorithm needs an
efficient way to find the intersection of the two search trees. Additionally, the branching factor of backwards steps may
differ from that for forward steps. The additional complexity of performing a bidirectional search means that the A*
search algorithm is often a better choice if we have a reasonable heuristic.
As in A* search, bi-directional search can be guided by a heuristic estimate of the remaining distance to the goal (in
the forward tree) or from the start (in the backward tree). An admissible heuristic will also produce a shortest solution,
as was proven originally for A*.
A node to be expanded is selected from the frontier that has the least number of open nodes and which is most
promising. Termination happens when such a node resides also in the other frontier. A descendant node's f-value
must take into account the g-values of all open nodes at the other frontier. Hence node expansion is more costly than
for A*. The collection of nodes to be visited can be smaller, as outlined above; thus one trades extra computation for
reduced space. The 1977 reference showed that the bi-directional algorithm found solutions where A* had run out
of space. Shorter paths were also found when non-admissible heuristics were used. These tests were done on the
15-puzzle used by Ira Pohl.
Heuristic Searches
A* Search
A* uses a best-first search and finds the least-cost path from a given initial node to one goal node (out of one or
more possible goals).
It uses a distance-plus-cost heuristic function (usually denoted f(x)) to determine the order in which the search visits
nodes in the tree. The distance-plus-cost heuristic is a sum of two functions:
the path-cost function, which is the cost from the starting node to the current node (usually denoted g(x))
and an admissible "heuristic estimate" of the distance to the goal (usually denoted h(x)).
The h(x) part of the f(x) function must be an admissible heuristic; that is, it must not overestimate the distance to the
goal. Thus, for an application like routing, h(x) might represent the straight-line distance to the goal, since that is
physically the smallest possible distance between any two points or nodes.
If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph
(where d denotes the length of that edge), then h is called monotone, or consistent. In such a case, A* can be
implemented more efficiently – roughly speaking, no node needs to be processed more than once (see closed
set below) – and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) := d(x, y) − h(x) + h(y).
As A* traverses the graph, it follows a path of the lowest known cost, keeping a sorted priority queue of alternate path
segments along the way. If, at any point, a segment of the path being traversed has a higher cost than another
encountered path segment, it abandons the higher-cost path segment and traverses the lower-cost path segment
instead. This process continues until the goal is reached.
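The process above can be sketched with a priority queue ordered by f(x) = g(x) + h(x), using the straight-line distance mentioned earlier as the admissible heuristic (a minimal sketch; the graph, coordinates, and names are illustrative assumptions):

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """graph: {node: [(neighbour, edge_cost), ...]}; coords: {node: (x, y)}."""
    def h(node):
        # Straight-line distance to the goal: admissible for these costs.
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    g = {start: 0.0}                         # best known path cost so far
    open_heap = [(h(start), start, [start])]  # ordered by f = g + h
    closed = set()
    while open_heap:
        f, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g[node]
        if node in closed:                   # with consistent h, skip repeats
            continue
        closed.add(node)
        for neighbour, cost in graph[node]:
            tentative = g[node] + cost
            if tentative < g.get(neighbour, math.inf):
                g[neighbour] = tentative
                heapq.heappush(open_heap, (tentative + h(neighbour),
                                           neighbour, path + [neighbour]))
    return None, math.inf                    # goal unreachable

# A small weighted graph on unit-square coordinates (illustrative).
coords = {'S': (0, 0), 'A': (1, 0), 'B': (0, 1), 'G': (1, 1)}
graph = {'S': [('A', 1), ('B', 3)], 'A': [('G', 1)], 'B': [('G', 1)], 'G': []}
path, cost = a_star(graph, coords, 'S', 'G')
print(path, cost)  # ['S', 'A', 'G'] 2.0
```

Here the more expensive route through B is abandoned as soon as its f-value exceeds that of the route through A, exactly the behaviour described above.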