Dynamic Programming - 1
Sharath Raghvendra
Associate Professor, Dept. of Computer Science,
North Carolina State University
Spring 2025
Agenda
Today’s topic…
Dynamic Programming
After that…
NP-Completeness
Dynamic Programming
Computing Fibonacci numbers (we saw this example in our first lecture; a short sketch follows this list)
Principles of Dynamic Programming
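To recall the Fibonacci example, here is a minimal sketch (Python, not code from the lecture; function names are illustrative) contrasting the naive recursion with a bottom-up table:

    def fib_naive(n):
        # Exponential time: recomputes the same subproblems over and over.
        if n <= 1:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    def fib_dp(n):
        # Dynamic programming: store each subproblem's answer once, O(n) time.
        if n <= 1:
            return n
        F = [0] * (n + 1)
        F[1] = 1
        for i in range(2, n + 1):
            F[i] = F[i - 1] + F[i - 2]
        return F[n]

    print(fib_dp(10))   # 55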
Two jobs are compatible if their time intervals do not overlap, i.e., one job finishes no later than the other starts.
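As a quick illustration, a compatibility check might look as follows (a sketch assuming jobs are given as (start, finish) pairs; this representation is an assumption, not the slides' notation):

    def compatible(job_a, job_b):
        # Jobs are (start, finish) pairs; compatible iff one finishes
        # no later than the other starts.
        s1, f1 = job_a
        s2, f2 = job_b
        return f1 <= s2 or f2 <= s1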
Dynamic Programming – “Value” of OPT
Each time a new array element of M[] is filled, at most two further recursive calls are made.
How many array elements are there?
There are at most n array elements, and each results in at most two recursive calls. Therefore, the total number of recursive calls is at most 2n.
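A minimal memoized sketch of this recursion (Python; the 1-indexed arrays v and p follow the usual weighted-interval-scheduling notation, and the input format is an assumption, not the slides' code):

    def compute_opt(j, v, p, M):
        # M[j] caches OPT(j); each entry is computed at most once.
        if j == 0:
            return 0
        if M[j] is not None:
            return M[j]
        # Either skip job j, or take it plus the best solution on jobs 1..p(j).
        M[j] = max(compute_opt(j - 1, v, p, M),
                   v[j] + compute_opt(p[j], v, p, M))
        return M[j]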
Instead of recursion, we can use a simple iterative procedure that fills M[] from left to right.
Iterative Procedure
Example: p(1)=0, p(2)=0, p(3)=1, p(4)=1, p(5)=3
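A bottom-up sketch of the iterative procedure (Python; the 1-indexed arrays v and p are assumed input conventions, not the slides' code):

    def iterative_compute_opt(v, p):
        # v[1..n] are job values, p[1..n] the p(j) indices; jobs sorted by finish time.
        n = len(v) - 1
        M = [0] * (n + 1)              # M[0] = 0 handles the empty prefix
        for j in range(1, n + 1):
            # Either skip job j, or take it together with OPT(p(j)).
            M[j] = max(M[j - 1], v[j] + M[p[j]])
        return M

    # Illustrative values combined with the example's p(j) (the job values are made up):
    v = [0, 2, 4, 4, 7, 2]
    p = [0, 0, 0, 1, 1, 3]
    print(iterative_compute_opt(v, p))   # -> [0, 2, 4, 6, 9, 9]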
Computing the Optimal Solution from the Value
Recall the running example: p(1)=0, p(2)=0, p(3)=1, p(4)=1, p(5)=3
Find Solution
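The solution itself can be recovered by tracing back through the filled table M (a Python sketch of the usual Find-Solution recursion, under the same assumed arrays as above, not the slides' exact code):

    def find_solution(j, v, p, M):
        # Walk back through the filled table M to recover the chosen jobs.
        if j == 0:
            return []
        # Job j is in some optimal solution iff taking it is at least as good as skipping it.
        if v[j] + M[p[j]] >= M[j - 1]:
            return find_solution(p[j], v, p, M) + [j]
        return find_solution(j - 1, v, p, M)

    # With the illustrative arrays from the earlier sketch:
    v = [0, 2, 4, 4, 7, 2]
    p = [0, 0, 0, 1, 1, 3]
    M = [0, 2, 4, 6, 9, 9]
    print(find_solution(5, v, p, M))   # -> [1, 4]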
Execution Time
Sorting all jobs by finish time takes O(n log n) time
Computing p(j) for every job j takes O(n log n) time
Time to fill M and compute the optimal value: O(n)
Time to backtrack and find a solution: O(n)
Total running time: O(n log n)
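One way to achieve the O(n log n) bound for computing all p(j) is binary search over the sorted finish times (a sketch assuming jobs are given as (start, finish) pairs already sorted by finish time; the slides do not prescribe this particular implementation):

    import bisect

    def compute_p(jobs):
        # jobs: list of (start, finish), sorted by finish time.
        finishes = [f for (_, f) in jobs]
        p = [0] * (len(jobs) + 1)      # 1-indexed; p[0] unused
        for j, (s, _) in enumerate(jobs, start=1):
            # Largest index i < j whose finish time is <= start of job j.
            p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
        return p

    # One instance consistent with the example's p-values (the slides' actual intervals
    # are not reproduced here):
    print(compute_p([(0, 4), (2, 6), (5, 8), (5, 11), (9, 13)]))   # -> [0, 0, 0, 1, 1, 3]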