ADA
Chapter – 4
Dynamic Programming
Introduction
Dynamic Programming is typically applied to optimization problems.
Such problems can have many possible solutions. Each solution has a value, and we wish to find an optimal (best) solution, i.e. one with the optimal (maximum or minimum) value.
Dynamic programming is a technique for solving problems with overlapping subproblems.
In this method each subproblem is solved only once.
The result of each subproblem is recorded in a table, from which the solution of the main problem can be obtained.
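As an illustration of overlapping subproblems (a minimal Python sketch; the Fibonacci example is not from the slides), naive recursion solves the same subproblems over and over, while recording each result in a table solves every subproblem only once:

```python
# Naive recursion recomputes the same subproblems many times;
# dynamic programming records each result in a table (memoization)
# so every subproblem is solved only once.

def fib(n, table=None):
    if table is None:
        table = {}
    if n in table:              # answer already recorded in the table
        return table[n]
    if n < 2:
        result = n
    else:
        result = fib(n - 1, table) + fib(n - 2, table)
    table[n] = result           # record the result for later reuse
    return result

print(fib(40))  # → 102334155
```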
Steps of Dynamic Programming
1. Characterize the structure of an optimal solution (i.e. develop a mathematical notation that can express the solution for the given problem).
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. From the computed information, construct an optimal solution.
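The four steps can be sketched on the Making Change problem (a hypothetical Python sketch; the function name and coin values are illustrative, not from the slides):

```python
def min_coins(coins, amount):
    # Steps 1-2: let dp[a] = minimum number of coins needed to make amount a;
    # recursively, dp[a] = 1 + min(dp[a - c]) over all coins c <= a.
    INF = float('inf')
    dp = [0] + [INF] * amount
    choice = [None] * (amount + 1)      # bookkeeping for step 4
    # Step 3: compute the values bottom-up, from smaller amounts to larger.
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
                choice[a] = c
    # Step 4: walk the recorded choices to construct one optimal solution.
    picked = []
    a = amount
    while a > 0 and choice[a] is not None:
        picked.append(choice[a])
        a -= choice[a]
    return dp[amount], picked

print(min_coins([1, 3, 4], 6))  # → (2, [3, 3])
```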
Principle of optimality
A dynamic programming algorithm obtains the solution using the ‘principle of optimality’.
The principle of optimality states that “in an optimal sequence of decisions (or choices), each subsequence must also be optimal”.
When the principle of optimality does not apply, it is almost impossible to obtain the solution using dynamic programming.
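For example, in shortest-path problems the principle holds: if a shortest path from u to w passes through v, its u-to-v portion must itself be a shortest u-to-v path, otherwise splicing in a shorter one would improve the whole path. The Floyd–Warshall recurrence exploits exactly this; a small sketch (the graph data is assumed for illustration, not from the slides):

```python
# dist[i][j] holds the optimal value of the subproblem "shortest i-to-j
# distance"; the recurrence builds optimal routes from optimal subsequences.
INF = float('inf')
# adjacency matrix of a small directed graph (illustrative data)
dist = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
n = len(dist)
for k in range(n):            # allow vertex k as an intermediate stop
    for i in range(n):
        for j in range(n):
            # an optimal i->j route through k is an optimal i->k route
            # followed by an optimal k->j route (principle of optimality)
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]
print(dist[0])  # → [0, 3, 5, 6]
```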
Problems that can be solved using
Dynamic Programming
1) Calculating the Binomial Coefficient
2) Making Change Problem
3) 0/1 Knapsack problem
4) Matrix chain multiplication
5) Assembly-Line Scheduling
6) All-Pairs Shortest Paths
7) Longest Common Subsequence and many more
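As a taste of the first problem on this list, the binomial coefficient C(n, k) can be computed from Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k), filling a table bottom-up (an illustrative sketch):

```python
def binomial(n, k):
    # dp table based on Pascal's rule: C(i, j) = C(i-1, j-1) + C(i-1, j),
    # with base cases C(i, 0) = C(i, i) = 1.
    dp = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                dp[i][j] = 1
            else:
                dp[i][j] = dp[i - 1][j - 1] + dp[i - 1][j]
    return dp[n][k]

print(binomial(5, 2))  # → 10
```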
Divide and Conquer vs. Dynamic Programming
1) Divide and Conquer: the problem is divided into subproblems, all subproblems are solved independently, and finally their solutions are combined to get the solution of the given problem. Dynamic Programming: all the possible solutions to a problem are obtained, and the solution with the optimal value becomes the solution of the problem.
2) Divide and Conquer uses a top-down approach; Dynamic Programming uses a bottom-up approach.
3) Divide and Conquer uses recursion; Dynamic Programming uses iteration.
4) In Divide and Conquer the subproblems are independent of each other; in Dynamic Programming the subproblems are dependent and overlapping.
5) Divide and Conquer splits inputs at specific points; Dynamic Programming splits inputs at each and every possible point.
6) Divide and Conquer repeats a lot of work, so it is less efficient; Dynamic Programming repeats no work, so it is more efficient.
7) The key to Divide and Conquer is ‘division’ into subproblems; the key to Dynamic Programming is ‘remembering’.
8) Examples of Divide and Conquer: Binary Search, Merge Sort, Quick Sort. Examples of Dynamic Programming: Making Change, Matrix Chain Multiplication, 0/1 Knapsack.
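Row 6 of the comparison (repetition of work) can be observed directly by counting recursive calls (an illustrative sketch; the Fibonacci example and call counters are assumptions, not from the slides):

```python
def fib_naive(n, counter):
    # divide-and-conquer style: every call recurses, repeating work
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_memo(n, table, counter):
    # dynamic-programming style: remembered results cut off the recursion
    counter[0] += 1
    if n in table:
        return table[n]
    table[n] = n if n < 2 else (fib_memo(n - 1, table, counter)
                                + fib_memo(n - 2, table, counter))
    return table[n]

naive = [0]
fib_naive(20, naive)
memo = [0]
fib_memo(20, {}, memo)
print(naive[0], memo[0])  # the remembering version makes far fewer calls
```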
Greedy Algorithms vs. Dynamic Programming
1) Greedy algorithms do not guarantee an optimal solution; dynamic programming guarantees an optimal solution.
2) Greedy algorithms focus on a locally optimal solution; dynamic programming focuses on a globally optimal solution.
3) So greedy algorithms do not consider all the possible solutions of a problem; dynamic programming considers all the possible solutions.
4) In greedy algorithms the subproblems do not overlap; in dynamic programming the subproblems overlap.
5) Greedy algorithms do less work; dynamic programming does more work.
6) Examples of greedy algorithms: Minimum Spanning Tree, Huffman Code, Fractional Knapsack. Examples of dynamic programming: Making Change, Matrix Chain Multiplication, 0/1 Knapsack.
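Row 1 can be seen concretely on the Making Change problem with coin set {1, 3, 4}, a standard illustrative case (the sketch below is an assumption, not from the slides): the greedy largest-coin-first rule is not optimal for amount 6, while the dynamic-programming table is.

```python
def greedy_change(coins, amount):
    # greedy: always take the largest coin that fits (locally optimal)
    picked = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked

def dp_change(coins, amount):
    # dynamic programming: dp[a] = fewest coins for amount a,
    # considering every possible last coin
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        dp[a] = min((dp[a - c] + 1 for c in coins if c <= a), default=INF)
    return dp[amount]

print(greedy_change([1, 3, 4], 6))  # → [4, 1, 1] : three coins
print(dp_change([1, 3, 4], 6))      # → 2 : two coins (3 + 3)
```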