
Optimal path finding

Brute force approach


A brute force approach is an approach that generates all the possible solutions to a given problem in order to find a satisfactory one. The brute force algorithm tries out all the possibilities until a satisfactory solution is found.

Such an algorithm can be of two types:

o Optimizing: In this case, the best solution is found. To find it, the algorithm may either enumerate all possible solutions, or, if the value of the best solution is already known, stop as soon as that best solution is found. For example: finding the best path for the travelling salesman problem, where the best path is the one that visits all the cities at minimum travelling cost.
o Satisficing: It stops searching as soon as a satisfactory solution is found. For example, finding a travelling salesman path which is within 10% of optimal.

Brute force algorithms often require exponential time. Various heuristics and optimizations can be used:

o Heuristic: A rule of thumb that helps you decide which possibilities to look at first.
o Optimization: Certain possibilities are eliminated without exploring all of them.

Let's understand the brute force search through an example.

Suppose we have converted the problem into the form of the tree shown below:
Brute force search considers each and every state of the tree, where each state is represented as a node. From the starting position we have two choices, i.e., state A and state B: we can generate either state A or state B. In the case of state B, we again have two states, i.e., states E and F.

In brute force search, each state is considered one by one. As we can observe in the above tree, the brute force search takes 12 steps to find the solution.

On the other hand, backtracking, which uses depth-first search, considers a state's successors only when that state can still lead to a feasible solution. Consider the above tree: start from the root node, then move to node A and then to node C. If node C does not provide a feasible solution, then there is no point in considering states G and H, so we backtrack from node C to node A. Then we move from node A to node D. Since node D does not provide a feasible solution, we discard this state and backtrack from node D to node A.

We then move to node B, from node B to node E, and from node E to node K. Since K is a solution, it takes 10 steps to find it. In this way, we eliminate a greater number of states in a single iteration. Therefore, we can say that backtracking is faster and more efficient than the brute force approach.
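Below is a minimal sketch, in Python, of the contrast described above. The tree, the goal and the feasibility test are illustrative assumptions (the original figure is not reproduced), so the exact step counts depend on how steps are counted.

# A minimal sketch (tree, goal and feasibility test are illustrative assumptions)
# contrasting brute force enumeration with backtracking.
tree = {
    "root": ["A", "B"], "A": ["C", "D"], "B": ["E", "F"],
    "C": ["G", "H"], "D": ["I", "J"], "E": ["K", "L"], "F": ["M", "N"],
}
goal = "K"

def brute_force(start):
    # Consider each and every state one by one until the goal happens to be reached.
    visited, frontier = [], [start]
    while frontier:
        current = frontier.pop(0)
        visited.append(current)
        if current == goal:
            return visited
        frontier.extend(tree.get(current, []))
    return visited

def backtrack(node, feasible, path=None):
    # Depth-first search that abandons states which cannot lead to a solution.
    path = (path or []) + [node]
    if node == goal:
        return path
    if not feasible(node):       # prune: do not generate this state's successors
        return None
    for child in tree.get(node, []):
        found = backtrack(child, feasible, path)
        if found:
            return found
    return None

print(len(brute_force("root")))                                    # states examined by brute force
print(backtrack("root", feasible=lambda n: n not in {"C", "D"}))   # ['root', 'B', 'E', 'K']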
Advantages of a brute-force algorithm
The following are the advantages of the brute-force algorithm:

o This algorithm finds all the possible solutions, and it also guarantees that it
finds the correct solution to a problem.
o This type of algorithm is applicable to a wide range of domains.
o It is mainly used for solving simpler and small problems.
o It can be considered a comparison benchmark to solve a simple problem and
does not require any particular domain knowledge.

Disadvantages of a brute-force algorithm


The following are the disadvantages of the brute-force algorithm:

o It is an inefficient algorithm as it requires exploring each and every state.
o It is a very slow way to find the correct solution, as it examines each state without considering whether it can lead to a feasible solution.
o The brute force algorithm is neither constructive nor creative as compared to other algorithms.

Branch and bound


What is Branch and bound?

Branch and bound is one of the techniques used for problem solving. It is similar to backtracking since it also uses the state space tree. It is used for solving optimization problems, typically stated as minimization problems. If we are given a maximization problem, we can still use the branch and bound technique by simply converting the problem into a minimization problem.

Let's understand through an example.

Jobs = {j1, j2, j3, j4}

P = {10, 5, 8, 3}
d = {1, 2, 1, 2}

The above are the given jobs, their profits and their deadlines. We can write the solutions in two ways, as given below:

Suppose we want to perform jobs j1 and j4; then the solution can be represented in two ways:

The first way of representing the solutions is the subsets of jobs.

S1 = {j1, j4}

The second way of representing the solution is that first job is done, second and
third jobs are not done, and fourth job is done.

S2 = {1, 0, 0, 1}

The solution s1 is the variable-size solution while the solution s2 is the fixed-size
solution.

First, we will see the subset method, which uses the variable-size representation.

First method:

In this case, we first consider the first job, then the second job, then the third job and finally the last job.

As we can observe in the above figure, a breadth-first search is performed rather than a depth-first search. Here we move breadth-wise when exploring the solutions. In backtracking we go depth-wise, whereas in branch and bound we go breadth-wise.

Now one level is completed. Once we take the first job, we can consider either j2, j3 or j4. If we follow the route that says we are doing jobs j1 and j4, then we will not consider jobs j2 and j3.

Now we will consider node 3. In this case, we are doing job j2, so we can consider either job j3 or j4. Here, we have discarded job j1.

Now we will expand node 4. Since here we are doing job j3, we will consider only job j4.

Now we will expand node 6, and here we will consider jobs j3 and j4.
Now we will expand node 7, and here we will consider job j4.
Now we will expand node 9, and here we will consider job j4.

The last node, i.e., node 12, is left to be expanded. Here, we consider job j4.

The above is the state space tree for the solution s1 = {j1, j4}.
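A minimal sketch of this breadth-wise expansion is given below. The job names come from the example, but the node numbering of the figure is not reproduced; the FIFO-queue layout is an assumption used for illustration.

from collections import deque

# Each state is the subset of jobs chosen so far; children add only jobs with a
# higher index so every subset is generated exactly once, level by level.
jobs = ["j1", "j2", "j3", "j4"]

def breadth_wise_states():
    queue = deque([()])            # start from the empty selection (the root)
    order = []
    while queue:
        state = queue.popleft()    # FIFO queue -> level-by-level (breadth-wise) order
        order.append(state)
        last = jobs.index(state[-1]) if state else -1
        for j in jobs[last + 1:]:
            queue.append(state + (j,))
    return order

for state in breadth_wise_states():
    print(state)                   # includes ('j1', 'j4'), i.e. the solution s1 = {j1, j4}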

Second method:

We will see another way to solve the problem to achieve the solution s1.

First, we consider the node 1 shown as below:

Now, we will expand node 1. After expansion, the state space tree would appear as:

On each expansion, the node will be pushed onto the stack, as shown below.
The next expansion is based on the node that appears on the top of the stack. Since node 5 appears on the top of the stack, we will expand node 5: pop node 5 from the stack. Since node 5 works on the last job, i.e., j4, there is no further scope for expansion.

The next node that appears on the top of the stack is node 4. Pop node 4 and expand it. On expansion, job j4 will be considered and node 6 will be pushed onto the stack, as shown below.

The next node to be expanded is node 6. Pop node 6 and expand it. Since node 6 works on the last job, i.e., j4, there is no further scope for expansion.

The next node to be expanded is node 3. Since node 3 works on job j2, node 3 will be expanded into two nodes, i.e., 7 and 8, working on jobs j3 and j4 respectively. Nodes 7 and 8 will be pushed onto the stack, as shown below.

The next node that appears on the top of the stack is node 8. Pop node 8 and expand it. Since node 8 works on job j4, there is no further scope for expansion.

The next node that appears on the top of the stack is node 7. Pop node 7 and expand it. Since node 7 works on job j3, it will be further expanded into node 9, which works on job j4, and node 9 will be pushed onto the stack.

The next node that appears on the top of the stack is node 9. Since node 9 works on job j4, there is no further scope for expansion.

The next node that appears on the top of the stack is node 2. Since node 2 works on job j1, it can be further expanded, into three nodes named 10, 11 and 12 working on jobs j2, j3 and j4 respectively. These newly generated nodes will be pushed onto the stack, as shown below.

In this method, we explored all the nodes using a stack, which follows the LIFO principle.
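A minimal sketch of this stack-based (LIFO) exploration is given below. Again, the node numbering of the figure is not reproduced; the point being illustrated is that an explicit stack makes the most recently generated node the next one to be expanded.

# Same job-subset states as before, but explored with an explicit stack (LIFO).
jobs = ["j1", "j2", "j3", "j4"]

def lifo_states():
    stack = [()]                   # start from the empty selection (the root)
    order = []
    while stack:
        state = stack.pop()        # LIFO: expand the node on top of the stack
        order.append(state)
        last = jobs.index(state[-1]) if state else -1
        for j in jobs[last + 1:]:
            stack.append(state + (j,))
    return order

print(lifo_states()[:6])           # the most recently pushed branches come out first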

Third method

There is one more method that can be used to find the solution: least cost branch and bound. In this technique, nodes are explored based on their cost. The cost function is defined from the given problem, and once the cost function is defined, we can compute the cost of each node.

Let's first consider the node 1 having cost infinity shown as below:

Now we will expand the node 1. The node 1 will be expanded into four nodes
named as 2, 3, 4 and 5 shown as below:
Let's assume that the costs of nodes 2, 3, 4 and 5 are 25, 12, 19 and 30 respectively.

Since this is least cost branch and bound, we will explore the node having the least cost. In the above figure, we can observe that the node with the minimum cost is node 3. So, we will explore node 3, having cost 12.

Since node 3 works on job j2, it will be expanded into two nodes, named 6 and 7, as shown below. Node 6 works on job j3 while node 7 works on job j4. The cost of node 6 is 8 and the cost of node 7 is 7. Now we have to select the node having the minimum cost. Node 7 has the minimum cost, so we will explore node 7. Since node 7 already works on the last job, j4, there is no further scope for expansion.
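A minimal sketch of least cost branch and bound is given below. The node costs are the ones quoted above (25, 12, 19, 30, then 8 and 7); the cost function itself is not defined in the text, so the costs are supplied as plain data.

import heapq

# Nodes are kept in a priority queue keyed by cost; the cheapest node is expanded next.
children = {
    1: [(25, 2), (12, 3), (19, 4), (30, 5)],   # expanding node 1
    3: [(8, 6), (7, 7)],                       # expanding node 3 (job j2)
}

def least_cost_search(root):
    frontier = [(float("inf"), root)]          # node 1 starts with cost infinity
    while frontier:
        cost, node = heapq.heappop(frontier)   # always pick the node with the least cost
        print(f"exploring node {node} (cost {cost})")
        for child_cost, child in children.get(node, []):
            heapq.heappush(frontier, (child_cost, child))

least_cost_search(1)   # explores node 1, then node 3 (12), then node 7 (7), ...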

Dijkstra’s Algorithm:

Dijkstra’s algorithm is a popular algorithm for solving single-source shortest path problems on graphs with non-negative edge weights, i.e., for finding the shortest distance between two vertices of a graph. It was conceived by Dutch computer scientist Edsger W. Dijkstra in 1956.
The algorithm maintains a set of visited vertices and a set of unvisited vertices. It
starts at the source vertex and iteratively selects the unvisited vertex with the
smallest tentative distance from the source. It then visits the neighbors of this
vertex and updates their tentative distances if a shorter path is found. This process
continues until the destination vertex is reached, or all reachable vertices have
been visited.

Need for Dijkstra’s Algorithm (Purpose and Use-Cases)


The need for Dijkstra’s algorithm arises in many applications where finding the shortest path between two points is crucial.
For example, it can be used in routing protocols for computer networks and by map systems to find the shortest path between a starting point and a destination.
Can Dijkstra’s Algorithm work on both Directed and Undirected graphs?
Yes, Dijkstra’s algorithm can work on both directed and undirected graphs, as it is designed to work on any type of graph as long as the edge weights are non-negative.

 In a directed graph, each edge has a direction, indicating the direction of travel between the vertices connected by the edge. In this case, the algorithm follows the direction of the edges when searching for the shortest path.
 In an undirected graph, the edges have no direction, and the algorithm can traverse both forward and backward along the edges when searching for the shortest path.

Algorithm for Dijkstra’s Algorithm:
1. Mark the source node with a current distance of 0 and the rest with infinity.
2. Set the unvisited node with the smallest current distance as the current node.
3. For each neighbour N of the current node, add the current distance of the current node to the weight of the edge connecting it to N. If this sum is smaller than the current distance of N, set it as the new current distance of N.
4. Mark the current node as visited.
5. Go to step 2 if any nodes are still unvisited. (A code sketch of these steps follows.)
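The following is a minimal sketch of these steps in Python. The adjacency-dictionary representation {node: [(neighbour, weight), ...]} is an assumption for illustration, not something specified above; a binary heap makes step 2 (pick the unvisited node with the smallest distance) efficient.

import heapq

def dijkstra(graph, source):
    dist = {node: float("inf") for node in graph}   # step 1: all distances infinity...
    dist[source] = 0                                 # ...except the source
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)                   # step 2: smallest current distance
        if u in visited:
            continue
        visited.add(u)                               # step 4: mark as visited
        for v, w in graph[u]:                        # step 3: relax each neighbour
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist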
How does Dijkstra’s Algorithm work?
Let’s see how Dijkstra’s Algorithm works with an example given below:
Dijkstra’s Algorithm will generate the shortest path from Node 0 to all other
Nodes in the graph.
Consider the below graph:

Dijkstra’s Algorithm
The algorithm will generate the shortest path from node 0 to all the other nodes
in the graph.
For this graph, we will assume that the weight of the edges represents the
distance between two nodes.
As we can see, we obtain the shortest path from
Node 0 to Node 1, from
Node 0 to Node 2, from
Node 0 to Node 3, from
Node 0 to Node 4, from
Node 0 to Node 6.
Initially we have the following setup:
 The distance from the source node to itself is 0. In this example the source node is 0.
 The distance from the source node to all other nodes is unknown, so we mark all of them as infinity.
Example: 0 -> 0, 1 -> ∞, 2 -> ∞, 3 -> ∞, 4 -> ∞, 5 -> ∞, 6 -> ∞.
 We’ll also have an array of unvisited elements that keeps track of unvisited or unmarked nodes.
 The algorithm completes when all the nodes have been marked as visited and their distances added to the path. Unvisited Nodes: 0 1 2 3 4 5 6.
Step 1: Start from Node 0 and mark it as visited (in the image below the visited node is marked red).

Dijkstra’s Algorithm
Step 2: Check the adjacent nodes. Now we have two choices (either choose Node 1 with distance 2 or choose Node 2 with distance 6) and we pick the node with the minimum distance. In this step Node 1 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
Distance: Node 0 -> Node 1 = 2

Dijkstra’s Algorithm

Step 3: Then move forward and check the adjacent node, which is Node 3. Mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7
Dijkstra’s Algorithm

Step 4: Again we have two choices of adjacent nodes (either Node 4 with distance 10 or Node 5 with distance 15), so we choose the node with the minimum distance. In this step Node 4 is the minimum-distance adjacent node, so mark it as visited and add up the distance.
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2 + 5 + 10 = 17
Dijkstra’s Algorithm

Step 5: Again, move forward and check the adjacent node, which is Node 6. Mark it as visited and add up the distance. Now the distance will be:
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10 + 2 = 19

Dijkstra’s Algorithm

So, the shortest distance from the source vertex to Node 6 is 19, which is the optimal one.
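As a usage sketch, the edges mentioned in the walkthrough can be fed to the dijkstra function from the sketch above. Only the edges actually named in the text are included, and they are assumed to be undirected, so the original figure may contain more edges than this.

graph = {
    0: [(1, 2), (2, 6)],
    1: [(0, 2), (3, 5)],
    2: [(0, 6)],
    3: [(1, 5), (4, 10), (5, 15)],
    4: [(3, 10), (6, 2)],
    5: [(3, 15)],
    6: [(4, 2)],
}
print(dijkstra(graph, 0))   # node 6 comes out at 2 + 5 + 10 + 2 = 19, matching the walkthrough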

A* Algorithm-

 A* Algorithm is one of the best and most popular techniques used for pathfinding and graph traversals.
 A lot of games and web-based maps use this algorithm for finding the shortest path efficiently.
 It is essentially a best-first search algorithm.
Working-
A* Algorithm works as-

 It maintains a tree of paths originating at the start node.


 It extends those paths one edge at a time.
 It continues until its termination criterion is satisfied.

A* Algorithm extends the path that minimizes the following function-


f(n) = g(n) + h(n)
Here,
 ‘n’ is the last node on the path
 g(n) is the cost of the path from start node to node ‘n’
 h(n) is a heuristic function that estimates cost of the cheapest path from node ‘n’ to
the goal node

Algorithm-

 The implementation of A* Algorithm involves maintaining two lists: OPEN and CLOSED.
 OPEN contains those nodes that have been evaluated by the heuristic function but have not been expanded into successors yet.
 CLOSED contains those nodes that have already been visited.

The algorithm is as follows-

Step-01:

 Define a list OPEN.


 Initially, OPEN consists solely of a single node, the start node S.

Step-02:

If the list is empty, return failure and exit.


Step-03:

 Remove node n with the smallest value of f(n) from OPEN and move it to list
CLOSED.
 If node n is a goal state, return success and exit.

Step-04:

Expand node n.

Step-05:

 If any successor to n is the goal node, return success and the solution by tracing the
path from goal node to S.
 Otherwise, go to Step-06.

Step-06:

For each successor node,
 Apply the evaluation function f to the node.
 If the node has not been in either list, add it to OPEN.

Step-07:

Go back to Step-02.
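A minimal sketch of the OPEN/CLOSED scheme above is given below. The adjacency-dictionary graph and the heuristic function h are assumptions for illustration; a full implementation may also re-open already-closed nodes when the heuristic is admissible but not consistent.

import heapq

def a_star(graph, start, goal, h):
    open_heap = [(h(start), 0, start, [start])]   # OPEN entries: (f, g, node, path)
    closed = set()
    while open_heap:                              # Step-02: fail if OPEN is empty
        f, g, n, path = heapq.heappop(open_heap)  # Step-03: node with smallest f(n)
        if n == goal:
            return path, g
        if n in closed:
            continue
        closed.add(n)                             # move n to CLOSED
        for m, cost in graph.get(n, []):          # Step-04: expand node n
            if m not in closed:
                g2 = g + cost
                heapq.heappush(open_heap, (g2 + h(m), g2, m, path + [m]))   # Step-06
    return None, float("inf")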

PRACTICE PROBLEMS BASED ON A* ALGORITHM-

Problem-01:

Given an initial state of an 8-puzzle problem and the final state to be reached, find the most cost-effective path to reach the final state from the initial state using the A* Algorithm.
Consider g(n) = depth of the node and h(n) = number of misplaced tiles.

Solution-

 A* Algorithm maintains a tree of paths originating at the initial state.
 It extends those paths one edge at a time.
 It continues until the final state is reached.
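A small sketch of the heuristic named in this problem, h(n) = number of misplaced tiles, is given below. The 3x3 tuple encoding (0 = blank) and the sample start state are assumptions, since the original puzzle figure is not reproduced here.

def misplaced_tiles(state, goal):
    # Count tiles that are not on their goal square; the blank is not counted.
    return sum(
        1
        for row, goal_row in zip(state, goal)
        for tile, goal_tile in zip(row, goal_row)
        if tile != 0 and tile != goal_tile
    )

goal_state = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
start_state = ((1, 2, 3), (4, 0, 6), (7, 5, 8))   # assumed example state
print(misplaced_tiles(start_state, goal_state))   # -> 2 (tiles 5 and 8 are misplaced)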

Admissible A*

In AIML (Artificial Intelligence and Machine Learning), particularly in the context of pathfinding and search algorithms, A* (A-star) is a popular informed search algorithm used for finding the shortest path between nodes in a graph. It is widely used due to its efficiency and optimality under certain conditions, especially when combined with an admissible heuristic function. Let's break down what an admissible A* algorithm entails and provide an example to illustrate its application.

A* Algorithm Overview

A* search algorithm combines elements of both uniform cost search and greedy
best-first search:
1. Cost Evaluation: A* evaluates nodes by combining the cost to reach the node, g(n), and an estimate of the cost from the node to the goal, h(n), often referred to as the heuristic function:

f(n) = g(n) + h(n)

o g(n): Cost of the path from the start node to node n.
o h(n): Heuristic estimate of the cost from node n to the goal.

2. Priority Queue: Nodes are expanded based on the total estimated cost f(n). At each step, A* expands the node with the lowest f(n) value, prioritizing nodes that are closer to the goal.
3. Optimality: A* guarantees finding the shortest path from the start node to the goal node if:
o The heuristic function h(n) is admissible (never overestimates the actual cost to reach the goal).
o The graph does not have cycles with negative edge costs (or a proper adjustment is made to handle negative costs).

Admissible Heuristic Function

An admissible heuristic function in the context of A* search is crucial because it ensures that the algorithm finds an optimal solution. Specifically, an admissible heuristic h(n) satisfies the following condition for all nodes n:

h(n) ≤ h*(n)

where h*(n) is the true cost from node n to the nearest goal state (the actual minimum cost).

Example: Pathfinding on a Grid

Consider a grid-based pathfinding problem where the goal is to find the shortest
path from a start node (S) to a goal node (G), navigating through obstacles
represented as walls (X). Let's illustrate A* with an admissible heuristic
(Manhattan distance):

 Grid Representation:

S-----
-X--X-
-X----
---X--
-X-G--

 Heuristic Function:
o Use the Manhattan distance h(n), which is the sum of the absolute differences in the x and y coordinates between the current node n and the goal node G:
h(n) = |current_x - goal_x| + |current_y - goal_y|

 A* Execution:
o Start from the initial node S.
o Expand nodes based on f(n) = g(n) + h(n), where g(n) is the cost to reach node n from the start.
o Use a priority queue to expand nodes with the lowest f(n) value first.
o Continue until reaching the goal node G.
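Below is a minimal, self-contained sketch of this execution on the grid above, using the Manhattan distance heuristic. Four-directional moves of unit cost are an assumption; the original text does not state the movement rules.

import heapq

GRID = ["S-----",
        "-X--X-",
        "-X----",
        "---X--",
        "-X-G--"]

def find(ch):
    # Locate a character in the grid and return its (x, y) coordinates.
    for y, row in enumerate(GRID):
        if ch in row:
            return (row.index(ch), y)

start, goal = find("S"), find("G")

def manhattan(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def neighbours(p):
    # Free cells reachable with a single up/down/left/right move.
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= ny < len(GRID) and 0 <= nx < len(GRID[ny]) and GRID[ny][nx] != "X":
            yield (nx, ny)

def grid_a_star():
    open_heap = [(manhattan(start), 0, start, [start])]   # (f, g, cell, path)
    closed = set()
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)        # lowest f(n) first
        if cell == goal:
            return path
        if cell in closed:
            continue
        closed.add(cell)
        for nxt in neighbours(cell):
            if nxt not in closed:
                heapq.heappush(open_heap, (g + 1 + manhattan(nxt), g + 1, nxt, path + [nxt]))

print(grid_a_star())   # one shortest sequence of (x, y) cells from S to G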

Benefits and Considerations

 Efficiency: A* with an admissible heuristic is efficient and often outperforms other search algorithms in terms of finding optimal paths.
 Implementation: Ensure the heuristic function is admissible to guarantee optimality.
 Applications: Widely used in robotics, game AI, and route planning applications where finding the shortest path is critical.

In conclusion, A* with an admissible heuristic function is a powerful algorithm for solving shortest path problems efficiently and optimally in AIML applications. Choosing and implementing an appropriate admissible heuristic is key to ensuring the algorithm's effectiveness in various domains requiring pathfinding and optimization.

Iterative Deepening A* Algorithm (IDA*)


The Iterative Deepening A* (IDA*) algorithm is a heuristic search algorithm that combines the greatest qualities of depth-first search and A* search. It is an optimal search method for finding the shortest route between the start state and the goal state in a graph or tree. The IDA* algorithm uses less memory than the A* algorithm because it simply keeps track of the present node and its associated cost, rather than the full search space that has been examined. This article will examine the IDA* algorithm's operation, as well as its benefits and drawbacks, and practical applications.

Introduction to Heuristic Search Algorithms


In order to identify the best solution to a problem, heuristic search
algorithms explore the search area in a methodical manner. They are
employed in a number of fields, including robotics, video games, and natural
language processing. A heuristic search algorithm uses a heuristic function
to evaluate the distance between the current state and the goal state in
order to identify the shortest route from the start state to the goal state.
There are various heuristic search algorithms, including A* search, Uniform
Cost Search (UCS), Depth-First Search (DFS), and Breadth-First Search (BFS).

A* Search Algorithm
A* search algorithm is a well-known heuristic search method that calculates
the distance between the current state and the objective state using a
heuristic function. The A* search method adds the actual cost from the start
node to the current node and the predicted cost from the current node to the
target node to determine the cost of each node. A heuristic function that
estimates the distance between the current node and the desired node is
used to determine the estimated cost. The algorithm then chooses the node
with the lowest cost, grows it, and keeps doing this until it reaches the
destination node.

As long as the heuristic function is admissible and consistent, the A* search algorithm is guaranteed to find the shortest path to the destination node. This makes it an optimal search method. A heuristic function is admissible if it never overestimates the distance to the destination node. By the triangle inequality, a consistent heuristic function is one in which the estimated cost from the current node to the goal node is less than or equal to the actual cost of moving to the next node plus the estimated cost from that next node to the goal node.
Iterative Deepening A* Algorithm
In terms of memory utilisation, the IDA* algorithm outperforms the A* search algorithm. The A* search method keeps the whole examined search space in memory, which can be memory-intensive for large search spaces. In contrast, the IDA* method saves only the current path and its associated cost, not the whole searched area.

The IDA* method uses depth-first search to explore the search space. It starts with a threshold value equal to the heuristic function's estimated cost from the start node to the destination node. It then performs a depth-first search from the start node, expanding only nodes whose overall cost is less than or equal to the threshold value. If the goal node is found, the method terminates with the best answer. If the threshold value is exceeded, the algorithm raises the threshold to the minimum cost of the nodes that were not expanded, and then repeats the procedure until the goal node is located.

The IDA* method is complete and optimal in the sense that it always finds the best solution if one exists and stops searching if none is discovered. The technique uses less memory since it keeps only the current path and its associated cost, not the full search space that has been investigated. Routing, scheduling, and gaming are a few examples of real-world applications where the IDA* method is often employed.

Steps for Iterative Deepening A* Algorithm (IDA*)


The IDA* algorithm includes the following steps:


o Start with an initial cost limit.

The algorithm begins with an initial cost limit, which is usually set to the
heuristic cost estimate of the optimal path to the goal node.

o Perform a depth-first search (DFS) within the cost limit.

The algorithm performs a DFS search from the starting node until it reaches
a node with a cost that exceeds the current cost limit.

o Check for the goal node.


If the goal node is found during the DFS search, the algorithm returns the
optimal path to the goal.

o Update the cost limit.

If the goal node is not found during the DFS search, the algorithm updates the cost limit to the minimum cost of any node that exceeded the current limit during the search.

o Repeat the process until the goal is found.

The algorithm repeats the process, increasing the cost limit each time until
the goal node is found.
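A minimal sketch of these steps is given below. The adjacency-dictionary graph {node: [(neighbour, edge_cost), ...]} and the heuristic table h are assumptions for illustration.

def ida_star(graph, h, start, goal):
    limit = h[start]                               # initial cost limit = heuristic estimate
    while True:
        result = _dfs(graph, h, start, goal, g=0, limit=limit, path=[start])
        if isinstance(result, list):               # goal found: result is the path
            return result
        if result == float("inf"):                 # nothing left to expand: no solution
            return None
        limit = result                             # raise the limit to the smallest excess f

def _dfs(graph, h, node, goal, g, limit, path):
    f = g + h[node]
    if f > limit:
        return f                                   # report the f value that exceeded the limit
    if node == goal:
        return path
    minimum = float("inf")
    for nxt, cost in graph.get(node, []):
        if nxt not in path:                        # avoid cycles on the current path
            result = _dfs(graph, h, nxt, goal, g + cost, limit, path + [nxt])
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
    return minimum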

Example Implementation
Let's look at a graph example to see how the Iterative Deepening A* (IDA*) technique works. Assume we have the graph below, where the figures in parentheses represent the cost of travelling between the nodes.
We want to find the optimal path from node A to node F using the IDA* algorithm. The first step is to set an initial cost limit. Let's use the heuristic estimate of the optimal path, which is 7 (the sum of the costs from A to C to F).

1. Set the cost limit to 7.


2. Start the search at node A.
3. Expand node A and generate its neighbors, B and C.
4. Evaluate the heuristic cost of the paths from A to B and A to C, which are 5
and 10 respectively.
5. Since the cost of the path to B is less than the cost limit, continue the search
from node B.
6. Expand node B and generate its neighbors, D and E.
7. Evaluate the heuristic cost of the paths from A to D and A to E, which are 10
and 9 respectively.
8. Since the cost of the path to D exceeds the cost limit, backtrack to node B.
9. Evaluate the heuristic cost of the path from A to C, which is 10.
10.Since the cost of the path to C is less than the cost limit, continue the search
from node C.
11.Expand node C and generate its neighbor, F.
12.Evaluate the heuristic cost of the path from A to F, which is 7.
13.Since the cost of the path to F is less than the cost limit, return the optimal
path, which is A - C - F.

We're done, since the optimal route was discovered within the initial cost limit. If the best path could not be discovered within the cost limit, we would have raised the cost limit to the lowest cost of any node that exceeded it during the search and then repeated the procedure until the goal node was located.

A strong and adaptable search algorithm, the IDA* method may be used to
identify the best course of action in a variety of situations. It effectively
searches huge state spaces and, if there is an optimal solution, finds it by
combining the benefits of DFS and A* search.
Advantages
o Completeness: The IDA* method is a complete search algorithm, which
means that, if an optimum solution exists, it will be discovered.
o Memory effectiveness: The IDA* method only keeps one path in memory at a
time, making it memory efficient.
o Flexibility: Depending on the application, the IDA* method may be employed
with a number of heuristic functions.
o Performance: The IDA* method sometimes outperforms other search algorithms like uniform-cost search (UCS) or breadth-first search (BFS).
o The IDA* algorithm is incremental, which means that it may be stopped at any time and continued at a later time without losing any progress.
o As long as the heuristic function used is admissible, the IDA* method will always discover the best solution, if one exists; that is, the algorithm will always find the cheapest route to the target node.

Disadvantages
o Ineffective for huge search spaces: one of IDA*'s biggest drawbacks is its potential to be very inefficient for vast search spaces. Since IDA* re-expands the same nodes using a depth-first search on each iteration, this can result in repetitive calculations.
o Can get caught in local minima: the tendency of IDA* to become stuck in local minima is another drawback.
o Heavily reliant on the quality of the heuristic function: the effectiveness of the heuristic function used strongly influences IDA*'s performance.
o Although IDA* is memory-efficient in that it only saves one path at a time, there are some situations where it may still require a substantial amount of memory.
o Confined to problems with clear goal states: IDA* is intended for problems where the goal state is clearly defined and identifiable.
Recursive Best-First Search (RBFS)
Recursive Best-First Search (RBFS) is an algorithm used for finding the shortest path in a graph or tree. It combines the benefits of both depth-first search (DFS) and best-first search, aiming to find the optimal path while keeping memory usage relatively low compared to pure best-first search.

Overview of RBFS

RBFS maintains a two-tier structure during its execution:

1. Current Path Cost: Tracks the cost of the path being explored.
2. F-Limit: The threshold value beyond which RBFS will not explore paths
(similar to the concept of "bound" in A* search).

Algorithm Steps

1. Initialization:
o Start with an initial state and initialize f_limit to a large value (initially
infinity).
o Invoke the RBFS function with the initial state, initial f_limit, and a
reference to the current path cost set to zero.

2. Recursive RBFS Function:


o For a given state:
 Generate its successors (children states).
 If no successors exist (it's a leaf node), return a tuple (None,
infinity).
 If the goal state is found among the successors, return
(goal_state, 0).
 Calculate f_values (estimated total cost to reach the goal) for
each successor.
 Sort the successors based on f_values.
 Loop through each successor:
 If the f_value exceeds f_limit, update f_limit and
continue.
 Recursively call RBFS on the successor with updated
parameters (f_limit, current path cost).
 If the recursive call returns a non-goal state and its
f_value is better than the current best, update the current
best.
 If the recursive call returns a goal state, propagate the
goal state upwards.
o After processing all successors, return the best state found and its
f_value.

3. Termination:
o When the RBFS function returns a goal state, propagate it upwards until the initial call to RBFS receives the final result. (A code sketch of this scheme follows.)
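Below is a minimal sketch of this recursive scheme, close to the textbook formulation of RBFS. The adjacency-dictionary graph {node: [(child, cost), ...]}, the heuristic table h and the assumption of positive edge costs are all illustrative choices, not something given in the text.

import math

def rbfs_search(graph, h, start, goal):
    def rbfs(node, g, f_node, f_limit, path):
        if node == goal:
            return path, f_node
        children = []
        for child, cost in graph.get(node, []):
            child_g = g + cost
            child_f = max(child_g + h[child], f_node)    # inherit the parent's backed-up f
            children.append([child_f, child_g, child])
        if not children:
            return None, math.inf                        # dead end
        while True:
            children.sort(key=lambda c: c[0])            # best (lowest f) child first
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]                      # fail; back up the best f value
            alternative = children[1][0] if len(children) > 1 else math.inf
            result, best[0] = rbfs(best[2], best[1], best[0],
                                   min(f_limit, alternative), path + [best[2]])
            if result is not None:
                return result, best[0]

    return rbfs(start, 0, h[start], math.inf, [start])[0]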

Example

Consider a grid-based pathfinding scenario where you have a grid with obstacles,
and you need to find the shortest path from a start point to a goal point. Each
movement from one cell to an adjacent cell has a cost associated with it.

Let's say our initial grid looks like this:

S---
-X--
---X
-XG-

 S is the start point.


 G is the goal point.
 X represents obstacles.
 - represents free spaces.

Assume movement costs:

 Horizontal/Vertical movement cost: 1


 Diagonal movement cost: 1.5

Here’s how RBFS might explore this grid:

1. Start at S with an initial f_limit set to infinity.


2. Generate successors for S (possible moves: right, down).
3. Explore each successor recursively, updating f_limit as needed.
4. Continue until reaching G, updating the best path found so far.

The RBFS algorithm would navigate through the grid, exploring paths
incrementally while keeping track of the current best path cost (f_limit). It balances
between exploring new paths and backtracking when necessary to ensure it finds
the optimal path efficiently.
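As a small companion sketch, the grid above and the stated movement costs could be encoded as successor and heuristic functions like the ones below. The encoding and the straight-line heuristic are assumptions; these functions can be plugged into the RBFS sketch given earlier with minor changes.

import math

GRID = ["S---",
        "-X--",
        "---X",
        "-XG-"]

FREE = {(x, y) for y, row in enumerate(GRID) for x, ch in enumerate(row) if ch != "X"}
GOAL = next((x, y) for y, row in enumerate(GRID) for x, ch in enumerate(row) if ch == "G")

def successors(cell):
    # Yield (neighbour, move_cost): orthogonal moves cost 1, diagonal moves cost 1.5.
    x, y = cell
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nxt = (x + dx, y + dy)
            if (dx or dy) and nxt in FREE:
                yield nxt, 1.0 if dx == 0 or dy == 0 else 1.5

def h(cell):
    # Straight-line distance to G; admissible here because no move costs less than its length.
    return math.hypot(cell[0] - GOAL[0], cell[1] - GOAL[1])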

Conclusion

Recursive Best-First Search is a powerful algorithm for finding optimal paths in graphs or grids. By using recursion and maintaining a limit (f_limit) on the paths explored, RBFS efficiently navigates towards the goal state while prioritizing the most promising paths first, akin to best-first search but with the advantage of using less memory and allowing deeper exploration when necessary.
Problem-02:
