Chapter Four: Dynamic Programming

Chapter 4 discusses dynamic programming (DP) as a technique for solving optimization problems defined by recurrences with overlapping sub-problems. It contrasts DP with divide-and-conquer and greedy methods, highlighting its efficiency through memoization and tabulation. Key applications of DP include the 0/1 Knapsack Problem, All Pairs Shortest Path Problems, and the Travelling Salesman Problem.


Chapter 4

Dynamic Programming and Traversal Techniques
Basic Topics of Chapter 4

• Introduction to dynamic programming

• Divide and Conquer vs Dynamic Programming

• Greedy method vs Dynamic Programming

• Principle of Optimality

• 0/1 Knapsack

• Multistage graphs, all pairs shortest path

• Travelling salesman problem, game tree

• Depth First Search

• Disconnected components and Depth First Search

What is Dynamic Programming?
 Dynamic Programming (DP) is a general algorithm design technique for solving problems defined by recurrences with overlapping sub-problems.

 DP is used to solve optimization problems.

 Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems. "Programming" here means "planning".

 Like the divide-and-conquer method, DP solves problems by combining the solutions of sub-problems.


What is Dynamic Programming?...
 But DP is a way of improving on inefficient divide-and-conquer algorithms.

 By "inefficient", we mean that the same recursive call is made over and over.

 In DP, if the same sub-problem is solved several times, we can use a table to store the result of a sub-problem the first time it is computed, and thus never have to re-compute it again.

 DP is applicable when the sub-problems are dependent, that is, when sub-problems share sub-problems.


What is Dynamic Programming?...
Main idea:
1. Break the complex problem down into simpler sub-problems.
2. Find the optimal solution to these sub-problems.
3. Store the results of the sub-problems (memoization).
4. Reuse them so that the same sub-problem is not calculated more than once.
5. Finally, calculate the result of the complex problem,
 thereby avoiding the work of re-computing the answer every time.
Simple Example:
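The five steps above can be sketched in Python, with the Fibonacci numbers standing in for the "complex problem" (the function name `fib_memo` is illustrative, not from the slides):

```python
# A minimal sketch of the five-step main idea, assuming Fibonacci
# as the example problem.
def fib_memo(n, memo=None):
    if memo is None:
        memo = {}                 # step 3: table for sub-problem results
    if n in memo:                 # step 4: reuse a stored result
        return memo[n]
    if n <= 1:                    # base cases F(0) = 0, F(1) = 1
        result = n
    else:                         # steps 1-2: break into sub-problems
        result = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    memo[n] = result              # store before returning (step 3)
    return result                 # step 5: result of the complex problem

print(fib_memo(10))  # 55
```

Each Fib(k) is computed exactly once; later requests hit the table instead of recursing again.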
Dynamic Programming Techniques

• Two main properties of a problem suggest that the given problem can be solved using Dynamic Programming.
• These properties are:
a) Overlapping sub-problems
b) Optimal substructure
a. Overlapping Sub-Problems
• Similar to Divide-and-Conquer approach, Dynamic Programming also
combines solutions to sub-problems.
• It is mainly used where the solution of one sub-problem is needed
repeatedly.
• The computed solutions are stored in a table, so that these don’t have
to be re-computed.
• Hence, this technique is needed where overlapping sub-problem
exists.
• For example, Binary Search does not have overlapping sub-problems, whereas the recursive program for Fibonacci numbers has many overlapping sub-problems.
b. Optimal Sub-Structure
• A given problem has Optimal Substructure Property, if the optimal
solution of the given problem can be obtained using optimal solutions
of its sub-problems.
• For example, the Shortest Path problem has the following optimal
sub-structure property −
• If a node x lies in the shortest path from a source node u to
destination node v, then the shortest path from u to v is the
combination of the shortest path from u to x, and the shortest path
from x to v.
• The standard shortest path algorithms, like Floyd-Warshall (all pairs) and Bellman-Ford (single source), are typical examples of Dynamic Programming.
Divide and Conquer vs DP

 Divide-and-Conquer:

– Divide and conquer combines the solutions of the sub-problems to obtain the solution of the main problem.

– It involves three steps at each level of recursion: Divide, Conquer, Combine.

– It can be inefficient because the same common sub-problems have to be solved many times, hence more time consumption.

– Here the sub-problems are independent of each other.

– It is recursive.

– It is a top-down approach.

– For example: Merge Sort, Binary Search, etc.


Divide-and-Conquer vs DP…
 Dynamic Programming:

• DP uses the results of the sub-problems to find the optimum solution of the main problem.

• DP does not solve the sub-problems independently.

 i.e. it solves each sub-problem only once, and the answers are stored in a table for future use.

• It is typically non-recursive.

• More efficient, as compared to divide and conquer.

• Computes the value of an optimal solution, typically in a bottom-up approach,
 i.e. "programming" refers to a tabular method.

• For example: Matrix Chain Multiplication, Optimal Binary Search Tree.


Greedy Method vs Dynamic Programming

 Both are used for optimization problems, which require minimum or maximum results.
 In the Greedy Method: always follow a selection function until you find the optimal result, whether it is maximum or minimum.
 Used by Prim's algorithm for MST and Dijkstra's algorithm for finding shortest paths.
 In Dynamic Programming we try to find all possible solutions and then pick the best solution, which is optimal.
 It is time consuming as compared to the greedy method.
 It uses recursive formulas for solving problems.
 It follows the principle of optimality.
Greedy Method vs Dynamic Programming…

Greedy Method:
• It gives a local "optimal solution".
• At each point in time it makes a "local optimization".
• More efficient as compared to dynamic programming.
• Example: Fractional Knapsack.

Dynamic Programming:
 It gives a "global optimal solution".
 It makes decisions by "smart recursion".
 Less efficient as compared to the greedy technique.
 Example: 0/1 Knapsack.
Principle of Optimality
 Dynamic Programming works on the principle of optimality.
 The principle of optimality says that a problem can be solved by a sequence of decisions to get the optimal solution.
 It often uses memoization.

 What is Memoization?
 Memoization is a technique for improving the performance of recursive algorithms.
 It ensures that a method does not execute more than once for the same inputs, by storing the results in a data structure (usually a hashtable, HashMap, or array).
 Let's understand with the help of the Fibonacci example. Here is a sample Fibonacci series.
Example: Fibonacci Sequence

 The Fibonacci sequence is the sequence of numbers in which every next item is the sum of the previous two items.
 Each number of the Fibonacci sequence is called a Fibonacci number.
Example:
0, 1, 1, 2, 3, 5, 8, 13, 21, ... is a Fibonacci sequence.
How the Fibonacci sequence works recursively
Example: Fibonacci number
• Computing the nth Fibonacci number recursively (top-down), as in the recursion tree of Fib(5).
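The overlapping calls that the Fib(5) recursion tree illustrates can be counted with a small instrumented sketch (the `calls` counter is a hypothetical helper, not part of the lecture):

```python
# Naive top-down recursion, instrumented to count how often each
# sub-problem Fib(k) is recomputed when evaluating Fib(5).
calls = {}

def fib_naive(n):
    calls[n] = calls.get(n, 0) + 1   # record every call for this n
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(5)
print(calls)  # Fib(1) alone is computed 5 times for Fib(5)
```

This is exactly the waste that storing results in a table removes: with memoization, each Fib(k) would be computed once.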
Dynamic Programming: Memoization and Tabulation

• There are two major approaches to solving a problem with dynamic programming.

i. Tabulation: a bottom-up approach.

– It starts by solving the lowest-level sub-problem.

– The solution then lets us solve the next sub-problem, and so forth. We iteratively solve all sub-problems in this way until we've solved them all, thus finding the solution to the original problem.

ii. Memoization: a top-down approach.

– It starts with the highest-level sub-problems (the ones closest to the original problem), and recursively calls the next sub-problem, and the next.
Memoization vs Tabulation
• Both tabulation and memoization store the answers to sub-problems as they are solved.
• They both operate on the same trade-off: sacrifice space for time savings by caching answers to solved sub-problems.
• However, they differ subtly in the way that they use these stored values:
– Since tabulation is bottom-up, every sub-problem must be answered to generate the final answer, so the order in which we choose to solve sub-problems is important.
– Tabulation has to look through the entire search space; memoization does not.
– Tabulation requires careful ordering of the sub-problems; memoization doesn't care much about the order of recursive calls.
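As a sketch of the contrast, here is the Fibonacci computation done by tabulation, solving every sub-problem from the bottom up (illustrative code, not from the slides):

```python
# Bottom-up (tabulation) Fibonacci: solve F(0), F(1), ..., F(n) in
# order, so each entry only needs the two already-filled entries.
def fib_tab(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)          # table[i] will hold F(i)
    table[1] = 1
    for i in range(2, n + 1):      # lowest-level sub-problems first
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(10))  # 55
```

Note the careful ordering: the loop must run from 2 upward, whereas the memoized top-down version lets recursion discover the order on its own.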
Applications of Dynamic Programming Approach

• 0/1 Knapsack Problem


• All Pairs Shortest Path Problems
• Travelling Salesman Problem
• Matrix Chain Multiplication
• Longest Common Subsequence
• Optimal Binary Search Tree
• Reliability Design
0/1 Knapsack Problem: Dynamic Programming
• We have discussed that the Greedy approach gives an optimal solution for Fractional Knapsack.
• However, this section covers the 0/1 Knapsack problem and its analysis.

• In 0/1 Knapsack, items cannot be broken, which means the thief should take an item as a whole or leave it.
• This is the reason behind calling it 0/1 Knapsack.

• Hence, in the case of 0/1 Knapsack, the value of xi can be either 0 or 1, where the other constraints remain the same.

• 0/1 Knapsack cannot be solved by the Greedy approach; it is solved by DP.

• The Greedy approach does not ensure an optimal solution here, although in some instances it may happen to give one.
0/1 Knapsack Problem: Example #1

• Problem:
For the given set of items and knapsack capacity = 5 kg, find the optimal solution for the 0/1 knapsack problem making use of the dynamic programming approach.
• Consider:
• n = 4, m = 5 kg
• (w1, w2, w3, w4) = (2, 3, 4, 5)
• (p1, p2, p3, p4) = (3, 4, 5, 6)
Solution
• Given:
• Knapsack capacity (m) = 5 kg
• Number of items (n) = 4
• Step-01:
• Draw a table, say 'T', with (n+1) = 4 + 1 = 5 rows and (m+1) = 5 + 1 = 6 columns.
• Fill all the boxes of the 0th row and 0th column with 0.
Solution…
• Step-02: Start filling the table row-wise, top to bottom, from left to right, using the formula:

• T(i,j) = max{ T(i-1, j), T(i-1, j-w[i]) + p[i] }
• Finding T(1,1):
• We have,
i = 1
j = 1
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2
Substituting the values, we get:
T(1,1) = max { T(1-1, 1), 3 + T(1-1, 1-2) }
T(1,1) = max { T(0,1), 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1), since j - w[i] < 0 }
T(1,1) = 0
Solution…
• Similarly, compute all the entries.
• After all the entries are computed and filled in the table, we get the following table:

       j=0  j=1  j=2  j=3  j=4  j=5
i=0     0    0    0    0    0    0
i=1     0    0    3    3    3    3
i=2     0    0    3    4    4    7
i=3     0    0    3    4    5    7
i=4     0    0    3    4    5    7

• The last entry represents the maximum possible value that can be put into the knapsack.
• So, the maximum possible value that can be put into the knapsack = 7.
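The row-by-row fill described in Step-02 can be sketched as follows; this is an illustrative implementation of the recurrence T(i,j) = max{ T(i-1,j), T(i-1, j-w[i]) + p[i] }, not the lecture's own code:

```python
# 0/1 Knapsack by tabulation: T[i][j] is the best value using the
# first i items with capacity j, per the Step-02 recurrence.
def knapsack(weights, profits, capacity):
    n = len(weights)
    T = [[0] * (capacity + 1) for _ in range(n + 1)]   # 0th row/column = 0
    for i in range(1, n + 1):                          # row-wise, top to bottom
        for j in range(1, capacity + 1):               # left to right
            T[i][j] = T[i - 1][j]                      # case: skip item i
            if weights[i - 1] <= j:                    # case: item i fits
                T[i][j] = max(T[i][j],
                              T[i - 1][j - weights[i - 1]] + profits[i - 1])
    return T[n][capacity]

# Example #1: weights (2,3,4,5), profits (3,4,5,6), capacity 5
print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```

The final entry T[4][5] reproduces the maximum value 7 found above.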
Solution…
• Identifying the items to be put into the knapsack:
• Step-03: trace back through the table from the last entry.
• We mark the rows labelled "1" and "2".
• Thus, the items that must be put into the knapsack to obtain the maximum value 7 are Item-1 and Item-2.
0/1 Knapsack Problem: Exercise
• A thief enters a house to rob it. He can carry a maximum weight of 8 kg in his bag. There are 4 items in the house with the following weights and values.
• Weights = {3, 4, 6, 5}
• Profits (values) = {2, 3, 1, 4}
• Which items should the thief take if he either takes an item completely or leaves it completely?
All Pairs Shortest Path
• The all pairs shortest path algorithm, also known as the Floyd-Warshall algorithm, is used to find the shortest paths between all pairs of vertices in a given weighted graph.
• It is solved by dynamic programming:
• the idea is that the problem should be solved by a sequence of decisions.
• This is similar to the single source shortest path problem solved by Dijkstra's algorithm,
• but Dijkstra finds the shortest paths from only one source vertex.
All pairs shortest path problem Algorithm
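A minimal sketch of the Floyd-Warshall recurrence follows; the 4-vertex weight matrix below is an assumed example, not the graph from the lecture slides:

```python
# Floyd-Warshall: d[i][j] is improved by allowing each vertex k in
# turn as an intermediate stop, a classic DP sequence of decisions.
INF = float('inf')

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]          # copy the weight matrix
    for k in range(n):                    # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Assumed example graph: INF marks a missing edge, 0 on the diagonal.
graph = [[0,   3,   INF, 7],
         [8,   0,   2,   INF],
         [5,   INF, 0,   1],
         [2,   INF, INF, 0]]
for row in floyd_warshall(graph):
    print(row)
```

After the k-loop finishes, entry (i, j) holds the length of the shortest path from vertex i to vertex j, assuming no negative-cost cycles, as the example below also requires.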
Example: All Pairs Shortest Path Problem
• Given a weighted digraph G = (V, E) with weights, determine the length of the shortest path between all pairs of vertices in G. Here we assume that there are no zero- or negative-cost cycles.
Solution…
Exercise #2
• Given a weighted digraph G = (V, E) with weights, determine the length of the shortest path between all pairs of vertices in G. Here we assume that there are no zero- or negative-cost cycles.
Reading Assignment
• Travelling Salesman Problem
• Depth First Search
• Matrix Chain Multiplication
• Longest Common Subsequence
• Reliability Design
