UNIT-III (PART-2): DYNAMIC PROGRAMMING
K.CHAITANYA DEEPTHI, ASST. PROF, CSE DEPT
Dynamic Programming: General Method, All Pairs Shortest Paths, Single Source Shortest
Paths - General Weights (Bellman-Ford Algorithm), Optimal Binary Search Trees,
0/1 Knapsack, String Editing, Travelling Salesperson Problem.
Dynamic Programming
Dynamic programming is a technique that breaks a problem into sub-problems and saves their
results for future use so that we do not need to compute them again. The fact that an optimal
overall solution can be built from optimal solutions of the sub-problems is known as the
optimal substructure property. The main use of dynamic programming is to solve
optimization problems.
Optimization problems: Optimization problems are those in which we are trying to find the
minimum or the maximum solution of a problem. Dynamic programming guarantees finding
the optimal solution of a problem if such a solution exists.
Dynamic Programming:
Dynamic programming, like the greedy method, is a powerful algorithm design technique that
can be used when the solution to a problem may be viewed as the result of a sequence of
decisions. In the greedy method, however, we make irrevocable decisions one at a time, using
a greedy criterion.
1. Dynamic programming is an algorithm design method that can be used when the solution
to a problem can be viewed as the result of a sequence of decisions.
2. Dynamic programming breaks a problem into sub-problems and saves their results for
future use so that we do not need to compute them again.
3. Dynamic programming is used when the sub-problems are not independent.
4. Dynamic programming is the most powerful design technique for solving optimization
problems.
5. Divide and conquer algorithms partition a problem into disjoint sub-problems, solve the
sub-problems recursively, and then combine their solutions to solve the original problem.
6. When the sub-problems are not disjoint, i.e. they share the same sub-sub-problems, divide
and conquer may do more work than necessary, because it solves the same sub-problem
multiple times.
7. Dynamic programming solves each sub-problem just once and stores the result in a table so
that it can be retrieved whenever it is needed again.
8. Dynamic programming is a bottom-up approach: we solve all possible small problems and
then combine their solutions to obtain solutions for bigger problems.
9. Two main properties of a problem suggest that it can be solved using dynamic
programming: overlapping sub-problems and optimal substructure.
Optimal Sub-Structure:
A given problem has Optimal Substructure Property, if the optimal solution of the given problem
can be obtained using optimal solutions of its sub-problems.
There are basically three elements that characterize a dynamic programming algorithm:
Substructure: Decompose the given problem into smaller sub-problems. Express the solution
of the original problem in terms of the solutions of these sub-problems.
Table Structure: After solving the sub-problems, store their results in a table. This is done
because sub-problem solutions are reused many times, and we do not want to recompute them.
Bottom-up Computation: Using the table, combine the solutions of smaller sub-problems to
solve larger sub-problems, and eventually the original problem.
Bottom-up means:
Start with the smallest sub-problems. Combining their solutions, obtain solutions to
sub-problems of increasing size, until the original problem is solved.
Stages: The problem can be divided into several sub problems, which are called stages. A stage is
a small portion of a given problem. For example, in the shortest path problem, they were defined
by the structure of the graph.
States: Each stage has several states associated with it. The states for the shortest path
problem were the nodes reached.
Decision: At each stage, there can be multiple choices out of which one of the best decisions
should be taken. The decision taken at every stage should be optimal; this is called a stage
decision.
Optimal policy: It is a rule which determines the decision at each stage; a policy is called an
optimal policy if it is globally optimal. This is known as Bellman's principle of optimality:
Given the current state, the optimal choices for each of the remaining states do not
depend on the previous states or decisions.
In the shortest path problem, it was not necessary to know how we got to a node, only that
we did.
There exists a recursive relationship that identifies the optimal decisions for stage j, given
that stage j+1 has already been solved. The final stage must be solved by itself.
A dynamic programming solution is developed in four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of the optimal solution. Like divide and conquer, divide the
problem into two or more optimal parts recursively. This helps to determine what the solution
will look like.
3. Compute the value of the optimal solution from the bottom up (starting with the smallest
sub-problems).
4. Construct the optimal solution for the entire problem from the computed values of smaller
sub-problems.
1. Top-Down Approach (Memoization): This approach uses recursion to implement the
dynamic programming solution; it stores the result of each solved sub-problem and uses the
stored result whenever the same sub-problem occurs again. This removes the extra effort of
calculating the same sub-problem again and again. And we already know that if the same
problem occurs again and again, then that problem is recursive in nature.
2. Bottom-Up Approach (Tabulation): This approach uses the tabulation technique to
implement the dynamic programming solution. It addresses the same problems as before, but
without recursion. In this approach, iteration replaces recursion. Hence, there is no stack
overflow error or overhead of recursive procedures.
Avoid Redundancy: The table ensures that each sub-problem (e.g., F(2)) is solved only
once.
By using DP, we can efficiently calculate the Fibonacci sequence without having to recompute
sub-problems.
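The two approaches above can be illustrated with the Fibonacci numbers. The following is a minimal Python sketch (the function names are chosen here for illustration and are not from the notes):

```python
from functools import lru_cache

# Top-down (memoization): recursion plus a cache of solved sub-problems.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest sub-problems upward,
# with no recursion and hence no risk of stack overflow.
def fib_tab(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10))  # 55
print(fib_tab(10))   # 55
```

Either way, each sub-problem such as F(2) is computed exactly once, so the running time drops from exponential to linear in n.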
Example:
Given 3 numbers {1, 3, 5}, the task is to tell the total number of ways we can form a number N
using the sum of the given three numbers. (allowing repetitions and different arrangements).
The total number of ways to form 6 is: 8
1+1+1+1+1+1
1+1+1+3
1+1+3+1
1+3+1+1
3+1+1+1
3+3
1+5
5+1
Following are the steps to solve the Dynamic Programming problem:
We choose a state for the given problem.
N will be used as the determining factor for the state because it can be used to identify any
subproblem.
The DP state will resemble state(N), where the state(N) is the total number of arrangements
required to create N using the elements 1, 3, and 5. Identify the relationship of the transition
between any two states.
We must now calculate the state (N).
How to Compute the state?
We can only use 1, 3, or 5 to form a given number N. Let us assume that we know the results
for N = 1, 2, 3, 4, 5, 6.
Let us say we know the result for:
state (n = 1), state (n = 2), state (n = 3) ……… state (n = 6)
Now, we wish to know the result of state(n = 7). Since we can only add 1, 3, or 5, we can get a
sum total of 7 in the following 3 ways:
1. By adding 1 to all the arrangements counted by state(n = 6)
2. By adding 3 to all the arrangements counted by state(n = 4)
3. By adding 5 to all the arrangements counted by state(n = 2)
Therefore, state(7) = state(6) + state(4) + state(2).
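The arrangements counted by state(n) satisfy the recurrence state(n) = state(n-1) + state(n-3) + state(n-5), since every arrangement ends in a 1, a 3, or a 5. A minimal Python tabulation sketch (the identifiers are chosen here, not from the notes):

```python
def count_ways(n, parts=(1, 3, 5)):
    # state[i] = number of ordered ways to form i as a sum of the given parts
    state = [0] * (n + 1)
    state[0] = 1  # one way to form 0: use no numbers at all
    for i in range(1, n + 1):
        for p in parts:
            if i - p >= 0:
                state[i] += state[i - p]  # arrangements ending with part p
    return state[n]

print(count_ways(6))  # 8, matching the arrangements listed above
```

Each state(i) is computed once from smaller states, so the table is filled in O(n) time for a fixed set of parts.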
1. 0/1 knapsack problem: The 0/1 knapsack problem means that each item is either placed
in the knapsack completely or not at all.
For example, suppose we have two items having weights 2 kg and 3 kg, respectively. If we
pick the 2 kg item, we cannot take only 1 kg of it (the item is not divisible); we have to
pick the 2 kg item completely. This is a 0/1 knapsack problem, in which we either pick an
item completely or leave it. The 0/1 knapsack problem is solved by dynamic programming.
2. All pair Shortest path problem: In the all pairs shortest path problem, we are to find a
shortest path between every pair of vertices in a directed graph G. That is, for every pair of
vertices (i, j), we are to find a shortest path from i to j as well as one from j to i. These two
paths are the same when G is undirected.
3. Reliability design problem: The reliability design problem is the design of a system
composed of several devices connected in series or in parallel. Reliability means the
probability of success of the device.
Let's say we have to set up a system consisting of devices D1, D2, D3, ..., Dn, where each
device has some cost C1, C2, C3, ..., Cn. If each of, say, 4 devices has a reliability of 0.9,
then the entire system has a reliability equal to the product of the reliabilities of all the
devices, i.e., ∏ri = (0.9)^4.
4. Longest common subsequence (LCS): Longest means that the subsequence should be the
longest one possible. Common means that the characters of the subsequence appear in both
strings. A subsequence consists of characters taken from a string in their original left-to-right
order, though not necessarily contiguously.
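The LCS length can be computed with the standard dynamic programming table, where each entry depends only on smaller prefixes of the two strings. A minimal Python sketch (the function name is chosen here for illustration):

```python
def lcs_length(x, y):
    m, n = len(x), len(y)
    # dp[i][j] = length of a longest common subsequence of x[:i] and y[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                # matching characters extend an LCS of the shorter prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop one character from either string
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```

Each of the m x n sub-problems is solved once, giving O(mn) time and space.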
ALL PAIRS SHORTEST PATHS
In the all pairs shortest path problem, we are to find a shortest path between every pair of
vertices in a directed graph G. That is, for every pair of vertices (i, j), we are to find a shortest
path from i to j as well as one from j to i. These two paths are the same when G is undirected.
When no edge has a negative length, the all-pairs shortest path problem may be solved by using
Dijkstra's greedy single source algorithm n times, once with each of the n vertices as the source
vertex.
The all pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of
a shortest path from i to j. The matrix A can be obtained by solving n single-source problems
using the algorithm ShortestPaths. Since each application of this procedure requires O(n^2)
time, the matrix A can be obtained in O(n^3) time.
The dynamic programming solution, called the Floyd-Warshall algorithm, runs in O(n^3) time.
The Floyd-Warshall algorithm works even when the graph has negative length edges (provided
there are no negative length cycles). It works by calculating n+1 matrices, each of size n x n.
Therefore, the algorithm has a complexity of O(n^3).
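The n+1 matrices need not be stored separately; one matrix can be updated in place as each vertex k is allowed as an intermediate. A minimal Python sketch of Floyd-Warshall (the cost matrix shown is a small hypothetical 3-vertex digraph, not one of the worked examples below):

```python
INF = float('inf')

def floyd_warshall(cost):
    # cost[i][j]: length of edge (i, j); INF if absent, 0 on the diagonal
    n = len(cost)
    A = [row[:] for row in cost]  # A starts as the edge-cost matrix
    for k in range(n):            # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                # keep the shorter of: current path, or path through k
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A  # A[i][j] = length of a shortest i-to-j path

cost = [[0, 4, 11],
        [6, 0, 2],
        [3, INF, 0]]
print(floyd_warshall(cost))  # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

The three nested loops make the O(n^3) running time explicit; negative edge weights are handled correctly as long as there is no negative length cycle.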
EXAMPLE-1:
Given a weighted digraph G = (V, E) with a weight function w, determine the length of the
shortest path between all pairs of vertices in G. Here we assume that there are no cycles of
zero or negative cost.
EXAMPLE-2:
EXAMPLE-3:
SINGLE SOURCE SHORTEST PATHS - GENERAL WEIGHTS (BELLMAN-FORD ALGORITHM)
Given a source vertex s from a set of vertices V in a weighted directed graph whose edge
weights w(u, v) can be negative, find the shortest path weights d(s, v) from source s to all
vertices v present in the graph. If the graph contains a negative-weight cycle, report it.
Dijkstra's algorithm does not work for graphs with negative weights; Bellman-Ford works for
such graphs. Bellman-Ford is also simpler than Dijkstra and suits distributed systems well.
But the time complexity of Bellman-Ford is O(VE), which is more than that of Dijkstra.
1. It is a shortest path problem where the shortest path from a given source vertex to all other
remaining vertices is computed.
2. Dijkstra’s Algorithm and Bellman Ford Algorithm are the famous algorithms used for solving
single-source shortest path problem.
DIJKSTRA’S ALGORITHM
1. An algorithm that is used for finding the shortest distance, or path, from a starting node to
a target node in a weighted graph is known as Dijkstra's algorithm.
2. This algorithm makes a tree of the shortest paths from the starting node, the source, to all
other nodes in the graph.
3. Dijkstra's algorithm makes use of the weights of the edges to find the path that minimizes
the total distance (weight) between the source node and all other nodes. This algorithm is also
known as the single-source shortest path algorithm.
4. It is important to note that Dijkstra's algorithm is only applicable when all weights are
positive because, during the execution, the weights of the edges are added to find the shortest
path.
5. Therefore, if any of the edge weights of the graph are negative, the algorithm may not
work properly. However, algorithms like the Bellman-Ford algorithm can be used in such
cases.
Working of Dijkstra's algorithm:
1. Dijkstra's algorithm begins from the selected source node and examines the entire graph to
determine the shortest path between that node and all the other nodes in the graph.
2. The algorithm maintains a record of the currently known shortest distance from each node
to the source node, and updates it whenever a shorter path is found.
3. Once the algorithm has determined the shortest path between the source node and another
node, that node is marked as visited.
4. This process continues until all the nodes in the graph have been added to the path; this
way, a path gets created that connects the source node to all the other nodes following the
shortest plausible path to reach each node.
2. Now the neighbors of node C will be checked, i.e., nodes A, B, and D. We start with B:
here we add the minimum distance of the current node (0) to the weight of the edge (7) that
links node C to node B, and get 0 + 7 = 7.
Now, this value is compared with the current minimum distance of B (infinity); the smaller
value becomes the new minimum distance of B. In this case, 7 is less than infinity, so we
mark 7 as the minimum distance of node B.
Now, we select the new current node such that the node is unvisited and has the lowest
minimum distance, i.e., the node with the least distance and no check mark. Here, node A is
unvisited with minimum distance 1, and is marked as the current node with a red dot.
We repeat the algorithm, checking the neighbors of the current node while ignoring the
visited nodes.
For node B, we add 1 to 3 (the weight of the edge connecting node A to B) and obtain 4.
This value, 4, is compared with the minimum distance of B, 7, and the lower value, 4, is
marked at B.
After this, node A is marked as visited with a green check mark. The current node is selected
as node D, since it is unvisited and has the smallest current distance. We repeat the algorithm
and check nodes B and E.
So, we are done, as no unvisited node is left. The minimum distance of each node now
represents the minimum distance of that node from node C.
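The visit/relax process above can be sketched with a priority queue. This is a minimal Python illustration; the graph below only loosely follows the walkthrough (edges C-A of weight 1, C-B of weight 7, C-D of weight 2, A-B of weight 3, and assumed edges B-E and D-E), so treat it as a hypothetical example:

```python
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]} with non-negative weights
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    visited = set()
    heap = [(0, source)]          # (distance, node) pairs
    while heap:
        d, u = heapq.heappop(heap)  # unvisited node with least distance
        if u in visited:
            continue
        visited.add(u)            # u's shortest distance is now final
        for v, w in graph[u]:
            if d + w < dist[v]:   # a shorter path to v was found
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    'C': [('A', 1), ('B', 7), ('D', 2)],
    'A': [('C', 1), ('B', 3)],
    'B': [('C', 7), ('A', 3), ('E', 1)],
    'D': [('C', 2), ('E', 7)],
    'E': [('B', 1), ('D', 7)],
}
print(dijkstra(graph, 'C'))  # B ends at 4 (via A), as in the walkthrough
```

As in the walkthrough, B's tentative distance 7 (direct edge) is later improved to 4 through node A.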
It is important to note that Dijkstra's algorithm is only applicable when all weights are
positive because, during the execution, the weights of the edges are added to find the shortest
path. Therefore, if any of the edge weights of the graph are negative, the algorithm may not
work properly. However, algorithms like the Bellman-Ford algorithm can be used in such
cases.
BELLMAN-FORD ALGORITHM
The Bellman-Ford algorithm provides a solution for single source shortest path problems.
The Bellman-Ford algorithm is used to indicate whether or not the graph has negative
weight cycles.
The Bellman-Ford algorithm can be applied to directed and weighted graphs.
Negative weight edges can create negative weight cycles, i.e., a cycle that will reduce the
total path distance by coming back to the same point.
Step 1: Initialize a distance array Dist[] to store the shortest distance of each vertex from the
source vertex. Initially, the distance of the source will be 0 and the distance of the other
vertices will be INFINITY.
Step 7: The final relaxation, i.e., the 6th relaxation, should indicate the presence of a
negative cycle if it changes the distance array obtained after the 5th relaxation.
During the 6th relaxation, the following changes can be seen:
Current distance of E > (distance of F) + (weight of F to E), i.e., 6 > 8 + (-3)
o Dist[E] = 5
Current distance of F > (distance of D) + (weight of D to F), i.e., 8 > 5 + 2
o Dist[F] = 7
Since we observe changes in the distance array, we can conclude the presence of a negative
cycle in the graph.
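The scheme above, |V|-1 relaxation passes followed by one extra pass to detect a negative cycle, can be sketched in Python as follows (the edge list at the bottom is a small hypothetical graph, not the worked example above):

```python
def bellman_ford(vertices, edges, source):
    # edges: list of (u, v, w) triples; w may be negative
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):   # |V|-1 relaxation passes
        for u, v, w in edges:
            if dist[u] != float('inf') and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: any further change proves a negative weight cycle
    for u, v, w in edges:
        if dist[u] != float('inf') and dist[u] + w < dist[v]:
            return None                   # negative cycle reachable from source
    return dist

edges = [('A', 'B', 4), ('A', 'C', 5), ('B', 'C', -3), ('C', 'D', 2)]
print(bellman_ford(['A', 'B', 'C', 'D'], edges, 'A'))
# {'A': 0, 'B': 4, 'C': 1, 'D': 3}
```

The nested loops make the O(VE) running time visible: each of the |V|-1 passes relaxes every edge once. Returning None mirrors the "report it" requirement for a negative-weight cycle.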
Example: