
K.CHAITANYA DEEPTHI, ASST. PROF, CSE DEPT

UNIT-III (PART-2)

Dynamic Programming: General Method, All Pairs Shortest Paths, Single Source Shortest
Paths – General Weights (Bellman-Ford Algorithm), Optimal Binary Search Trees,
0/1 Knapsack, String Editing, Travelling Salesperson Problem.

Dynamic Programming

Dynamic programming is a technique that breaks a problem into sub-problems and saves their
results for future use so that we do not need to compute them again. The property that the
sub-problems are optimized in order to optimize the overall solution is known as the optimal
substructure property. The main use of dynamic programming is to solve optimization problems.

Dynamic Programming Definition: It is a programming technique in which the solution is
obtained from a sequence of decisions.

Optimization problems: An optimization problem is one in which we are trying to find the
minimum or the maximum solution of a problem. Dynamic programming guarantees finding
the optimal solution of a problem if one exists.

Dynamic Programming:

The General Method:

Dynamic programming, like the greedy method, is a powerful algorithm design technique that

can be used when the solution to a problem may be viewed as the result of a sequence of decisions.

In the greedy method we make irrevocable decisions one at a time, using a greedy criterion.

However, in dynamic programming we examine the decision sequence to see whether an

optimal decision sequence contains optimal decision subsequences.

1. Dynamic programming is an algorithm design method that can be used when the solution

to a problem can be viewed as the result of a sequence of decisions.

2. Dynamic programming is a technique that breaks the problems into sub-problems and

saves the result for future purposes so that we do not need to compute the result again

3. Dynamic Programming is used when the sub problems are not independent.

4. Dynamic Programming is the most powerful design technique for solving optimization
problems.

5. Divide & conquer algorithms partition the problem into disjoint sub problems, solve the sub
problems recursively, and then combine their solutions to solve the original problem.

6. Divide and conquer may do more work than necessary when the sub problems are not disjoint,
i.e. when they share the same sub problems, because it solves the same sub problem multiple times.

7. Dynamic Programming solves each sub problems just once and stores the result in a table so
that it can be repeatedly retrieved if needed again.

8. Dynamic Programming is a Bottom-up approach- we solve all possible small problems and then
combine to obtain solutions for bigger problems.

9. Dynamic Programming is a paradigm of algorithm design in which an optimization
problem is solved by combining sub-problem solutions and appealing to the
"Principle of Optimality".

10. Two main properties of a problem suggest that the given problem can be solved using
Dynamic Programming. These properties are overlapping sub-problems and optimal Sub
structure.

Overlapping Sub-Problems: Similar to the divide-and-conquer approach, dynamic programming
also combines solutions to sub-problems. It is mainly used where the solution of one sub-problem
is needed repeatedly. The computed solutions are stored in a table so that they do not have to be
re-computed. Hence, this technique is needed where overlapping sub-problems exist.

Optimal Sub-Structure:

A given problem has Optimal Substructure Property, if the optimal solution of the given problem
can be obtained using optimal solutions of its sub-problems.

Elements of Dynamic Programming:

There are basically three elements that characterize a dynamic programming algorithm:

Substructure: Decompose the given problem into smaller sub problems. Express the solution

of the original problem in terms of the solution for smaller problems.

Table Structure: After solving the sub-problems, store the results to the sub problems in a

table. This is done because sub problem solutions are reused many times, and we do not want

to repeatedly solve the same problem over and over again.

Bottom-up Computation: Using table, combine the solution of smaller sub problems to solve

large sub problems and eventually arrives at a solution to complete problem.

Bottom-up means:

Start with the smallest sub problems. By combining their solutions, obtain the solutions to
sub-problems of increasing size, until the original problem is solved.

Stages: The problem can be divided into several sub problems, which are called stages. A stage is
a small portion of a given problem. For example, in the shortest path problem, the stages were
defined by the structure of the graph.

States: Each stage has several states associated with it. The states for the shortest path problem
were the nodes reached.

Decision: At each stage, there can be multiple choices out of which one of the best decisions
should be taken. The decision taken at every stage should be optimal; this is called a stage
decision.

Optimal policy: It is a rule which determines the decision at each stage; a policy is called an
optimal policy if it is globally optimal. This is known as Bellman's principle of optimality.

 Given the current state, the optimal choices for each of the remaining states do not
depend on the previous states or decisions.
 In the shortest path problem, it was not necessary to know how we got to a node, only
that we did.

 There exists a recursive relationship that identifies the optimal decisions for stage j, given
that stage j+1 has already been solved. The final stage must be solved by itself.

Steps of Dynamic Programming Algorithm work on the following principles:

It can be broken into four steps:

1. Characterize the structure of an optimal solution.

2. Recursively define the value of the optimal solution. Like divide and conquer, divide the
problem into two or more optimal parts recursively. This helps to determine what the solution will
look like.

3. Compute the value of the optimal solution from the bottom up (starting with the smallest sub
problems)

4. Construct the optimal solution for the entire problem from the computed values of smaller sub
problems.

How Does Dynamic Programming (DP) Work?


 Identify Sub problems: Divide the main problem into smaller sub problems.
 Store Solutions: Solve each sub problem and store the solution in a table or array.
 Build Up Solutions: Use the stored solutions to build up the solution to the main problem.
 Avoid Redundancy: By storing solutions, DP ensures that each sub problem is solved only
once, reducing computation time.
Techniques to solve Dynamic Programming Problems:
1. Top-Down (Memoization):
This approach follows the memoization technique. It consists of recursion and caching. In
computation, recursion represents the process of calling functions repeatedly, whereas cache
refers to the process of storing intermediate results.
Memoization: Memoization is a specific form of caching that is used in dynamic
programming. The purpose of caching is to improve the performance of our programs and keep
data accessible that can be used later. It basically stores the previously calculated result of the

sub problem and reuses the stored result whenever the same sub problem appears again. This
removes the extra effort of calculating the same problem again and again; a problem that occurs
repeatedly in this way is recursive in nature.
2. Bottom-Up Approach: This approach uses the tabulation technique to implement the
dynamic programming solution. It addresses the same problems as before, but without
recursion. In this approach, iteration replaces recursion. Hence, there is no stack overflow error
or overhead of recursive procedures.

Differentiate between Divide & Conquer Method / Greedy Method vs Dynamic Programming



Examples of Dynamic Programming (DP):


Example 1: Consider the problem of finding the Fibonacci sequence:
Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
Brute Force Approach:
To find the nth Fibonacci number using a brute force approach, you would recursively add the
(n-1)th and (n-2)th Fibonacci numbers. This works, but it is inefficient for large values of n,
because the same Fibonacci numbers are recomputed many times.
Dynamic Programming Approach:

Fibonacci Series using Dynamic Programming


 Sub problems: F(0), F(1), F(2), F(3), …
 Store Solutions: Create a table to store the values of F(n) as they are calculated.
 Build Up Solutions: For F(n), look up F(n-1) and F(n-2) in the table and add them.

 Avoid Redundancy: The table ensures that each sub problem (e.g., F(2)) is solved only
once.
By using DP, we can efficiently calculate the Fibonacci sequence without having to recompute
sub problems.
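The approach above can be sketched in Python, showing both the top-down (memoization) and bottom-up (tabulation) techniques described earlier. This is a minimal sketch; the function names are illustrative:

```python
from functools import lru_cache

# Top-down (memoization): recursion plus caching of sub-problem results.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest sub-problems up.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)   # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10))  # 55
print(fib_tab(10))   # 55
```

Either way, each sub problem F(i) is computed only once, so the running time is O(n) instead of the exponential time of plain recursion.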

Example:
Given 3 numbers {1, 3, 5}, the task is to tell the total number of ways we can form a number N
using the sum of the given three numbers. (allowing repetitions and different arrangements).
The total number of ways to form 6 is: 8
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
Following are the steps to solve the Dynamic Programming problem:
 We choose a state for the given problem.
 N will be used as the determining factor for the state because it can be used to identify any
subproblem.
 The DP state will resemble state(N), where the state(N) is the total number of arrangements
required to create N using the elements 1, 3, and 5. Identify the relationship of the transition
between any two states.
 We must now calculate the state (N).
How to Compute the state?
We can only use 1, 3, or 5 to form a given number N. Let us assume that we know the result
for N = 1, 2, 3, 4, 5, 6.
Let us say we know the result for:
state (n = 1), state (n = 2), state (n = 3) ……… state (n = 6)
Now, we wish to know the result of the state (n = 7). See, we can only add 1, 3, and 5. Now we
can get a sum total of 7 in the following 3 ways:

1) Adding 1 to all possible combinations of state (n = 6)


Eg: [ (1+1+1+1+1+1) + 1]
[ (1+1+1+3) + 1]
[ (1+1+3+1) + 1]
[ (1+3+1+1) + 1]
[ (3+1+1+1) + 1]
[ (3+3) + 1]
[ (1+5) + 1]
[ (5+1) + 1]
2) Adding 3 to all possible combinations of state (n = 4);
[(1+1+1+1) + 3]
[(1+3) + 3]
[(3+1) + 3]
3) Adding 5 to all possible combinations of state(n = 2)
[ (1+1) + 5]
(Note how it is sufficient to add only on the right side – all the add-from-the-left-side cases are
covered, either in the same state or another; e.g. [1+(1+1+1+3)] is not needed in state (n=6)
because it is covered by state (n=4) [(1+1+1+1) + 3].)
Now, think carefully and satisfy yourself that the above three cases are covering all possible
ways to form a sum total of 7;
Therefore, we can say that result for
state(7) = state (6) + state (4) + state (2)
OR
state(7) = state (7-1) + state (7-3) + state (7-5)
In general,
state(n) = state(n-1) + state(n-3) + state(n-5)
Time Complexity: O(3^n) for the plain recursion, as at every stage we need to take three
decisions and the height of the recursion tree will be of the order of n. (With the DP table, each
state is computed only once, so the complexity drops to O(n).)

Auxiliary Space: O(n)
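The recurrence state(n) = state(n-1) + state(n-3) + state(n-5) can be computed bottom-up in O(n) time. A minimal Python sketch, taking state(0) = 1 (the empty sum) as the base case; the function name is illustrative:

```python
def count_ways(n, parts=(1, 3, 5)):
    # state[i] = number of ordered ways to form i using the given parts
    state = [0] * (n + 1)
    state[0] = 1  # one way to form 0: use nothing
    for i in range(1, n + 1):
        for p in parts:
            if i - p >= 0:
                state[i] += state[i - p]  # state(i) += state(i - p)
    return state[n]

print(count_ways(6))  # 8, matching the eight arrangements listed above
```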



Applications of dynamic programming:

1. 0/1 knapsack problem: In the 0/1 knapsack problem, each item is either placed completely
in the knapsack or not placed at all.

For example, suppose we have two items weighing 2 kg and 3 kg, respectively. If we pick the
2 kg item, we must take it completely; we cannot take a fraction of it (items are not divisible).
This is a 0/1 knapsack problem, in which we either pick an item completely or leave it. The
0/1 knapsack problem is solved by dynamic programming.

2. All pair Shortest path problem: In the all pairs shortest path problem, we are to find a
shortest path between every pair of vertices in a directed graph G. That is, for every pair of
vertices (i, j), we are to find a shortest path from i to j as well as one from j to i. These two
paths are the same when G is undirected.

3. Reliability design problem: The reliability design problem is the design of a system
composed of several devices connected in series or in parallel. Reliability means the
probability that the device works successfully.
Let’s say we have to set up a system consisting of devices D1, D2, D3, …, Dn, where each
device has some cost C1, C2, C3, …, Cn. If there are four devices and each device has a
reliability of 0.9, then the entire system has a reliability equal to the product of the reliabilities
of all the devices, i.e., ∏ri = (0.9)^4.
4. Longest common subsequence (LCS): Longest means that the subsequence should be the
biggest one. Common means that some of the characters are shared between the two strings.
A subsequence is formed by taking some of the characters from a string in their original
left-to-right order.

5. Mathematical optimization problem.

6. Time-sharing: It schedules the job to maximize CPU usage.



ALL PAIRS SHORTEST PATHS


In the all pairs shortest path problem, we are to find a shortest path between every pair of

vertices in a directed graph G. That is, for every pair of vertices (i, j), we are to find a shortest
path from i to j as well as one from j to i. These two paths are the same when G is undirected.

When no edge has a negative length, the all-pairs shortest path problem may be solved by using
Dijkstra's greedy single-source algorithm n times, once with each of the n vertices as the source
vertex.

The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of

a shortest path from i to j. The matrix A can be obtained by solving n single-source problems
using the algorithm ShortestPaths. Since each application of this procedure requires O(n^2) time,
the matrix A can be obtained in O(n^3) time.

The dynamic programming solution, called the Floyd-Warshall algorithm, runs in O(n^3) time.
The Floyd-Warshall algorithm works even when the graph has negative length edges (provided
there are no negative length cycles).

The Floyd-Warshall algorithm is a dynamic programming algorithm used to discover the

shortest paths in a weighted graph, including graphs with negative edge weights (but no
negative weight cycles). The algorithm works by computing the shortest path between every
pair of vertices in the graph, allowing intermediate vertices one at a time and keeping track of
the best-known route so far.

Complexity Analysis: A dynamic programming algorithm based on this recurrence involves

calculating n+1 matrices, each of size n x n. Therefore, the algorithm has a complexity of O(n^3).
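The recurrence can be sketched as a minimal Python implementation. The 3-vertex digraph at the end is hypothetical, used only for illustration:

```python
INF = float('inf')

def floyd_warshall(w):
    """All-pairs shortest paths. w is an n x n matrix where w[i][j] is the
    edge length from i to j (INF if there is no edge, 0 on the diagonal)."""
    n = len(w)
    a = [row[:] for row in w]      # A^0 is the weight matrix itself
    for k in range(n):             # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                # A^k(i, j) = min(A^(k-1)(i, j), A^(k-1)(i, k) + A^(k-1)(k, j))
                if a[i][k] + a[k][j] < a[i][j]:
                    a[i][j] = a[i][k] + a[k][j]
    return a

w = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
print(floyd_warshall(w))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

The three nested loops make the O(n^3) running time immediate.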

Example: 1

Given a weighted digraph G = (V, E) with weight function w, determine the length of the

shortest path between all pairs of vertices in G. Here we assume that there are no cycles with
zero or negative cost.

EXAMPLE-2:

So A3 is the all-pairs shortest path matrix.



Example 3:

SINGLE SOURCE SHORTEST PATHS

Given a source vertex s from a set of vertices V in a weighted directed graph where its edge

weights w(u, v) can be negative, find the shortest path weights d(s, v) from source s for all
vertices v present in the graph. If the graph contains a negative-weight cycle, report it.

Dijkstra’s algorithm doesn’t work for graphs with negative weights; Bellman-Ford works for
such graphs. Bellman-Ford is also simpler than Dijkstra and suits distributed systems well. But
the time complexity of Bellman-Ford is O(VE), which is more than that of Dijkstra.

1. It is a shortest path problem where the shortest path from a given source vertex to all other
remaining vertices is computed.

2. Dijkstra’s Algorithm and Bellman Ford Algorithm are the famous algorithms used for solving
single-source shortest path problem.

DIJKSTRA’S ALGORITHM

1. An algorithm that is used for finding the shortest distance, or path, from starting node to target
node in a weighted graph is known as Dijkstra’s Algorithm.

2. This algorithm makes a tree of the shortest path from the starting node, the source, to all other

nodes (points) in the graph.

3. Dijkstra's algorithm makes use of the weights of the edges to find the path that minimizes the
total distance (weight) between the source node and all other nodes. This algorithm is also known
as the single-source shortest path algorithm.

4. It is important to note that Dijkstra’s algorithm is only applicable when all weights are positive

because, during the execution, the weights of the edges are added to find the shortest path.

5. Therefore, if any of the edge weights in the graph are negative, the algorithm may not work

properly. However, algorithms like the BELLMAN-FORD ALGORITHM can be used in such
cases.

6. Generally, Dijkstra’s algorithm works on the principle of relaxation, where an estimate of

the correct distance is gradually replaced by better values until the shortest distance is reached.
The estimated distance to every node is always an overestimate of the true distance, and it is
replaced by the minimum of its previous value and the length of a newly found path.

How to Implement the Dijkstra Algorithm?


1. Before proceeding to the step-by-step process for implementing the algorithm, let us consider
some essential characteristics of Dijkstra’s algorithm:

2. Basically, Dijkstra’s algorithm begins from the selected node, the source node, and it

examines the entire graph to determine the shortest paths between that node and all the other
nodes in the graph.

3. The algorithm keeps track of the currently known shortest distance from each node
to the source.

4. It updates these values whenever it identifies a shorter path.

5. Once the algorithm has determined the shortest path between the source node and another
node, the node is marked as “visited” and can be added to the path.

6. This process continues until all the nodes in the graph have been added to the path. In this
way, a path is created that connects the source node to all the other nodes along the shortest
plausible path to reach each node.
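The steps above can be sketched in Python using a min-heap as the priority queue. This is a minimal sketch; the graph at the end is hypothetical, shaped like the worked example that follows:

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping vertex -> list of (neighbour, weight) pairs,
    with all weights non-negative. Returns a dict of shortest distances."""
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]           # (distance, vertex) min-heap
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)           # u's shortest distance is now final
        for v, w in graph[u]:
            if d + w < dist[v]:  # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical graph: C is the source, as in the worked example below.
graph = {
    'C': [('A', 1), ('B', 7), ('D', 2)],
    'A': [('B', 3)],
    'B': [('E', 1)],
    'D': [('B', 5), ('E', 7)],
    'E': [],
}
print(dijkstra(graph, 'C'))  # {'C': 0, 'A': 1, 'B': 4, 'D': 2, 'E': 5}
```

Note how B ends at 4 (via A), not 7 (the direct edge): the relaxation step replaced the first estimate with the shorter path, exactly as described above.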

Working Example of Dijkstra's Algorithm:



2. Now the neighbours of node C will be checked, i.e., nodes A, B, and D. We start with B: we
add the minimum distance of the current node (0) to the weight of the edge (7) that links node C
to node B and get 0 + 7 = 7.

Now, this value is compared with the minimum distance of B (infinity); the smaller value
becomes the new minimum distance of B. In this case, 7 is less than infinity, so 7 is marked as
the value at node B.

Now, we will select the new current node: it must be unvisited and have the lowest

minimum distance, i.e. the node with the least value and no check mark. Here, node A is the

unvisited node with minimum distance 1, marked as the current node with a red dot.

We repeat the algorithm, checking the neighbours of the current node while ignoring the visited

nodes, so only node B will be checked.

For node B, we add 1 and 3 (the weight of the edge connecting node A to B) and obtain 4. This

value, 4, will be compared with the minimum distance of B, 7, and the lower value, 4, is marked

at B.

After this, node A is marked as visited with a green check mark. Node D is selected as the
current node, since it is unvisited and has the smallest current distance. We repeat the algorithm
and check nodes B and E.

So, we are done as no unvisited node is left. The minimum distance of each node is now
representing the minimum distance of that node from node C.

Recall that Dijkstra’s algorithm is only applicable when all edge weights are positive, because
during the execution the weights of the edges are added to find the shortest path. If any edge
weights are negative, the algorithm may not work properly; in such cases the Bellman-Ford
algorithm can be used.

Bellman-Ford Algorithm:


The Bellman-Ford algorithm is a single-source shortest path algorithm. It is used to find the
shortest distance from a single vertex to all the other vertices of a weighted graph. There are
various other algorithms used to find the shortest path, like Dijkstra’s algorithm. If the weighted
graph contains negative weight values, the Dijkstra algorithm is not guaranteed to produce the
correct answer. In contrast, the Bellman-Ford algorithm guarantees the correct answer even if
the weighted graph contains negative weight values (as long as no negative cycle is reachable
from the source).

Rule of this algorithm:

We will go on relaxing all the edges (n - 1) times where,

n = number of vertices

The Bellman-Ford algorithm provides a solution for single-source shortest path problems.

 Bellman ford algorithm is used to indicate whether the graph has negative weight cycles

or not.

 Bellman Ford Algorithm can be applied for directed and weighted graphs.

Why do we need to be careful with negative weights?

Negative weight edges can create negative weight cycles i.e. a cycle that will reduce the total

path distance by coming back to the same point.



An Example of Bellman-Ford Algorithm:

Consider the weighted graph below.



(OR)

Algorithm to Find Negative Cycle in a Directed Weighted Graph Using Bellman-Ford:


 Initialize a distance array dist[] with dist[v] = INFINITY for each vertex ‘v’.
 Assume any vertex (let’s say ‘0’) as the source and assign dist[source] = 0.
 Relax all the edges (u, v, weight) N-1 times as per the below condition:
o dist[v] = minimum(dist[v], dist[u] + weight)
 Now, Relax all the edges one more time i.e. the Nth time and based on the below two
cases we can detect the negative cycle:
o Case 1 (Negative cycle exists): For any edge(u, v, weight), if dist[u] + weight
< dist[v]
o Case 2 (No Negative cycle) : case 1 fails for all the edges.
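The steps above can be sketched as a minimal Python implementation. The edge list at the end is a hypothetical graph containing a negative cycle (1 → 2 → 1 has total weight -5), used only for illustration:

```python
def bellman_ford(n, edges, src):
    """n vertices 0..n-1; edges is a list of (u, v, weight) triples.
    Returns (dist, has_negative_cycle)."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    # Relax all the edges n-1 times.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Relax once more (the nth time): any further improvement
    # means a negative cycle is reachable from the source.
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return dist, True
    return dist, False

edges = [(0, 1, 4), (1, 2, -6), (2, 1, 1), (2, 3, 5)]
print(bellman_ford(4, edges, 0))  # second value is True: negative cycle detected
```

With no negative cycle, the distance array stabilizes after at most n-1 rounds of relaxation, which is exactly why one extra round serves as the detector.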
Working of Bellman-Ford Algorithm to Detect the Negative cycle in the graph:
Let’s suppose we have a graph which is given below and we want to find whether there
exists a negative cycle or not using Bellman-Ford.

Step 1: Initialize a distance array Dist[] to store the shortest distance for each vertex from
the source vertex. Initially distance of source will be 0 and Distance of other vertices will
be INFINITY.

Step 2: Start relaxing the edges, during 1st Relaxation:


 Current Distance of B > (Distance of A) + (Weight of A to B) i.e. Infinity > 0 + 5
o Therefore, Dist[B] = 5

Step 3: During 2nd Relaxation:


 Current Distance of D > (Distance of B) + (Weight of B to D) i.e. Infinity > 5 + 2
o Dist[D] = 7
 Current Distance of C > (Distance of B) + (Weight of B to C) i.e. Infinity > 5 + 1
o Dist[C] = 6

Step 4: During 3rd Relaxation:


 Current Distance of F > (Distance of D ) + (Weight of D to F) i.e. Infinity > 7 + 2
o Dist[F] = 9
 Current Distance of E > (Distance of C ) + (Weight of C to E) i.e. Infinity > 6 + 1
o Dist[E] = 7

Step 5: During 4th Relaxation:


 Current Distance of D > (Distance of E) + (Weight of E to D) i.e. 7 > 7 + (-1)
o Dist[D] = 6
 Current Distance of E > (Distance of F ) + (Weight of F to E) i.e. 7 > 9 + (-3)
o Dist[E] = 6

Step 6: During 5th Relaxation:


 Current Distance of F > (Distance of D) + (Weight of D to F) i.e. 9 > 6 + 2
o Dist[F] = 8
 Current Distance of D > (Distance of E ) + (Weight of E to D) i.e. 6 > 6 + (-1)
o Dist[D] = 5
 Since the graph has 6 vertices, the shortest distances for all the vertices should have
been calculated by the 5th relaxation.

Step 7: Now the final relaxation, i.e. the 6th relaxation, should indicate the presence of a
negative cycle if there are any changes in the distance array from the 5th relaxation.
During the 6th relaxation, the following changes can be seen:
 Current Distance of E > (Distance of F) + (Weight of F to E) i.e. 6 > 8 + (-3)
o Dist[E] = 5
 Current Distance of F > (Distance of D) + (Weight of D to F) i.e. 8 > 5 + 2
o Dist[F] = 7
Since we observe changes in the distance array, we can conclude the presence of a negative
cycle in the graph.

NOTE: Adjacency matrix representation

 The entry for a vertex and itself is marked as 0.

 If two vertices are related, i.e. they are nearest neighbours joined by an edge, mark the
weight/cost of that edge as the entry between the two vertices; this is the value used when
identifying shortest path distances.

 If there is no relationship between the vertices (no edge), the entry is marked as ∞.
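Under these conventions, building the weight matrix that an all-pairs shortest path algorithm consumes can be sketched as follows (a minimal sketch; the edge list is hypothetical):

```python
INF = float('inf')

def weight_matrix(n, edges):
    """Build the n x n weight matrix: 0 on the diagonal, the edge
    cost where an edge exists, and INF where there is no edge."""
    w = [[INF] * n for _ in range(n)]
    for i in range(n):
        w[i][i] = 0                  # a vertex to itself is marked 0
    for u, v, cost in edges:
        w[u][v] = cost               # edge weight between neighbours
    return w

print(weight_matrix(3, [(0, 1, 4), (1, 2, 2), (2, 0, 3)]))
```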
