Dynamic Programming - 1

The document outlines a lecture on Dynamic Programming, focusing on its principles, applications, and comparison with other algorithmic approaches such as Greedy and Divide and Conquer. It discusses the historical context of Dynamic Programming, its optimal substructure property, and provides examples like computing Fibonacci numbers and weighted interval scheduling. The document also covers memoization techniques to improve execution time and the overall complexity of dynamic programming solutions.


CSC505: Design and Analysis of Algorithms

Sharath Raghvendra
Associate Professor, Dept. of Computer Science,
North Carolina State University
Spring 2025
Agenda

- Today's topic: Dynamic Programming
- After that: NP-Completeness
Dynamic Programming

- Goal: Design efficient polynomial-time algorithms for minimization/maximization problems.
- Greedy Approach
  - Pro: Natural and easy algorithms
  - Con: Many greedy algorithms exist for the same problem; not all may work, and sometimes none do
- Divide and Conquer Approach
  - Pro: Simple strategy: divide the problem into smaller sub-problems and merge their solutions
  - Con: Usually only speeds up solutions that are already polynomial time
- Dynamic Programming
  - More powerful than the other two approaches
  - Solves problems by combining solutions to smaller sub-problems
  - Leads to substantial improvements in execution time
History

- Developed by Bellman in the 1950s
- "Dynamic Programming" was so named for political reasons
  - The Secretary of Defense was hostile to mathematical research
  - Bellman sought an impressive name to avoid confrontation
Applications of Dynamic Programming

- Operations Research: Best-known algorithms for routing problems in networks (shortest paths with negative costs, traveling salesman problem, etc.)
- The diff command in Unix uses dynamic programming (we will cover this on Wednesday)
- Sequence alignment for biological applications
- And many more…
- Computing Fibonacci numbers (we saw this example in our first lecture)
Principles of Dynamic Programming

1. Identify the optimal sub-structure property.
2. Use it to define a polynomial number of sub-problems, one of which is the final solution.
3. There is a natural ordering of these sub-problems from smallest to largest, such that we can obtain the solution to a larger sub-problem by combining solutions to smaller ones.
4. The optimal value of the smallest sub-problem can be computed in a straightforward way. The optimal value of a larger sub-problem can then be computed from the optimal values of smaller sub-problems in an iterative fashion.
Computing Fibonacci numbers?

- Recollect computing Fibonacci numbers (a sketch follows below)
  - Problem: Compute Fibonacci[n]
  - Sub-problems ordered by size: Fibonacci[0] … Fibonacci[n-1]
  - Compute the solution to a larger sub-problem from solutions to smaller ones: Fibonacci[i] = Fibonacci[i-1] + Fibonacci[i-2]
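A minimal Python sketch of this bottom-up computation (the function and variable names are illustrative, not from the slides):

def fibonacci(n):
    # Table of sub-problem solutions: fib[i] = i-th Fibonacci number.
    fib = [0] * (n + 1)
    if n >= 1:
        fib[1] = 1
    # Solve sub-problems in order from smallest to largest.
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]

Each sub-problem is solved exactly once, so the whole computation takes O(n) additions.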
Weighted Interval Scheduling – Problem

- Given n jobs, each with a start time, a finish time, and a value, find a maximum-value subset of mutually compatible jobs.
- Two jobs are compatible if their time intervals do not overlap.
Dynamic Programming – "Value" of OPT

- Let OPT be the optimal solution
- We will compute the value of the optimal solution, not the optimal solution itself. This is just a number.
- Typical in DP algorithms:
  - First focus on computing the optimal "total value". Example: what is the highest total value of a set of compatible intervals?
  - In the process, you fill in a table along the way.
  - Use this table to backtrack and recover the solution itself.
Principles of Dynamic Programming (recap)

(The four principles listed above, which we now apply to weighted interval scheduling.)
Sub-Problems

- Sort all jobs in increasing order of finish time
- Index the jobs based on this order
Optimal Substructure for this problem?

- What is the optimal solution for this instance (the five-job example shown on the slide)?
  - {v1, v3, v5}
- What is the optimal solution to the problem instance with only the first three jobs {1, 2, 3}?
Sub-Problems

- Let OPT(j) be the value of an optimal solution using only jobs 1 through j; j varies from 1 to n
- Therefore, we have created n sub-problems (a polynomial number). OPT(n) is the final solution we are interested in.
- There is a natural ordering on sub-problems: OPT(i) is a "smaller sub-problem" than OPT(j) if i < j.
- OPT(1) is trivial to compute.
- Objective: Express the solution to OPT(j) as a combination of optimal solutions to smaller sub-problems. We discuss this next.
Approach

- For each job j, let p(j) be the largest index i < j such that job i is compatible with job j (p(j) = 0 if no such job exists)
- Important to remember: p(j) < j
Optimal Solution

- We use p(j) to recursively describe the optimal solution
- Let OPT be an optimal solution and let j be its last interval (the interval with the largest index)
Recurrence Relation

- Case 1: Job j is not in OPT. Then OPT(j) = OPT(j-1).
- Case 2: Job j is in OPT. Then no job between p(j)+1 and j-1 can be in OPT, so OPT(j) = v_j + OPT(p(j)).
- Combining the cases: OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)), with OPT(0) = 0.
Principles of Dynamic Programming

1. Define a polynomial number of sub-problems, one of which is the final solution.
   - We define n sub-problems: OPT(j) = optimal value over jobs (1 … j), for j = 1 to n. OPT(n) is the final solution.
2. There is a natural ordering of problems from smallest to largest, such that we can obtain the solution for a larger problem by combining solutions to smaller ones.
   - For our problem, consider OPT(j) in increasing order of j.
3. The optimal value of a problem can be computed from the optimal values of its sub-problems (optimal substructure property).
Recursive Implementation

- What is its execution time?
- Exponential time. Why? (A sketch follows below.)
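A minimal Python sketch of the plain recursive implementation, assuming 1-indexed arrays v (job values) and p (as defined above), with index 0 unused; the names are illustrative:

def compute_opt(j, v, p):
    # Base case: OPT(0) = 0, no jobs to schedule.
    if j == 0:
        return 0
    # Either job j is in the optimal solution (take v[j] plus the best
    # over the compatible jobs 1..p(j)), or it is not (best over 1..j-1).
    return max(v[j] + compute_opt(p[j], v, p),
               compute_opt(j - 1, v, p))

Every call spawns two further recursive calls, and the same sub-problems are solved over and over, so the recursion tree can grow exponentially.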
Worst-Case Example

- On instances where p(j) = j - 2 for every j, the recursion satisfies T(n) = T(n-1) + T(n-2) + O(1), so it grows like the Fibonacci numbers, i.e., exponentially in n.
Memoize

- Problem: We are repeatedly solving the same sub-problems, leading to exponential running time
- Solution: Save the answer for each sub-problem as you compute it. When you compute OPT(j), save it in a global array M.
Memoize Code
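A minimal sketch of the memoized version, assuming the same 1-indexed v and p arrays; the table M is passed explicitly here (rather than kept as a global), with M[j] = None meaning "not yet computed":

def m_compute_opt(j, v, p, M):
    if j == 0:
        return 0
    # Return the saved answer if this sub-problem was already solved.
    if M[j] is not None:
        return M[j]
    # Otherwise solve it once via the recurrence and save the result.
    M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
               m_compute_opt(j - 1, v, p, M))
    return M[j]

To use it, initialize M = [None] * (n + 1) and call m_compute_opt(n, v, p, M).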
Execution Time

- Each time a new array element of M[] is filled, at most two recursive calls are made.
- How many array elements are there?
- There are at most n array elements, each resulting in at most two recursive calls. Therefore, the total number of recursive calls is at most 2n.
- Instead of recursion, we can come up with a simple iterative procedure.
Iterative Procedure

- When we compute M[j], we only need the values M[0], …, M[j-1], all of which are already filled (a sketch follows below)
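A minimal iterative sketch that fills M in increasing order of j, under the same assumptions on v and p:

def iterative_compute_opt(n, v, p):
    # M[0] = 0 is the base case; M[j] will hold the value OPT(j).
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        # Only M[0..j-1] are needed, and all are already filled.
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    return M

The final answer is M[n]; the full table is kept so we can backtrack later.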


Example

Consider the five-job instance from the slides, with:
p(1)=0, p(2)=0, p(3)=1, p(4)=1, p(5)=3

(The slides step through filling M[1] … M[5] on this instance.)
Computing Optimal Solution from Value

- We have to obtain the optimal solution from the optimal value. How do we decide which intervals are selected and which are not?
- We will use the lookup table M to find which branch of the recurrence was taken.
Optimal Solution from Value

- When M[j] was filled, job j belongs to an optimal solution for jobs 1 … j exactly when v_j + M[p(j)] ≥ M[j-1]; otherwise the optimum came from M[j-1].
Find Solution
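A sketch of the backtracking procedure: re-examine the comparison made when M[j] was filled to decide whether job j was selected (same illustrative names and 1-indexed arrays as above):

def find_solution(j, v, p, M):
    # Returns the indices of an optimal set of compatible jobs among 1..j.
    if j == 0:
        return []
    # If taking job j is at least as good as skipping it, job j is in
    # some optimal solution; continue from p(j). Otherwise skip to j-1.
    if v[j] + M[p[j]] >= M[j - 1]:
        return find_solution(p[j], v, p, M) + [j]
    return find_solution(j - 1, v, p, M)

Calling find_solution(n, v, p, M) after the table is filled recovers an optimal schedule in O(n) time, since j strictly decreases on every call.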
Execution Time

- Sorting all jobs by finish time takes O(n log n) time
- Computing p(j) for every job j takes O(n log n) time
- Time to fill M and compute the optimal value: O(n)
- Time to backtrack and find a solution: O(n)
