Lec11 Dynamic Programming

Here are the key steps to construct the optimal solution:
1. Use the values of l* and li[j] computed in Step 3 to trace back through the optimal path.
2. Start from the last station, on line l* (1 or 2).
3. Use the li[j] values to determine whether the optimal path at station j came from station j-1 on the same line or from a transfer from the other line.
4. Trace back recursively until reaching the starting point, to construct the complete optimal solution.
This allows constructing the optimal solution in O(n) time, using the information computed in the dynamic programming table from Step 3.


Introduction to Algorithms

Chapter 15: Dynamic Programming


Dynamic Programming
n Well known algorithm design techniques:
q Brute-Force (iterative) algorithms
q Divide-and-conquer algorithms

n Another strategy for designing algorithms is dynamic


programming.
q Used when problem breaks down into recurring small
subproblems

n Dynamic programming is typically applied to


optimization problems. In such problem there can be
many solutions. Each solution has a value, and we
wish to find a solution with the optimal value.
Divide-and-conquer
n Divide-and-conquer method for algorithm design:

n Divide: If the input size is too large to deal with in a


straightforward manner, divide the problem into two or
more disjoint subproblems

n Conquer: conquer recursively to solve the subproblems

n Combine: Take the solutions to the subproblems and


“merge” these solutions into a solution for the original
problem
Divide-and-conquer - Example
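The example figure from the original slide is not reproduced here. As a minimal
illustrative sketch of the three steps (merge sort, a standard divide-and-conquer
algorithm chosen for illustration; not necessarily the example used on the slide):

    def merge_sort(a):
        """Divide-and-conquer sort: divide, conquer recursively, combine."""
        if len(a) <= 1:                      # base case: already sorted
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])           # Divide + Conquer: two disjoint halves
        right = merge_sort(a[mid:])
        merged, i, j = [], 0, 0              # Combine: merge the sorted halves
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 8, 1, 9, 3]))    # [1, 2, 3, 5, 8, 9]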
Dynamic programming
n Dynamic programming is a way of improving on inefficient divide-
and-conquer algorithms.

n By “inefficient”, we mean that the same recursive call is made


over and over.

n If same subproblem is solved several times, we can use table to


store result of a subproblem the first time it is computed and thus
never have to recompute it again.

n Dynamic programming is applicable when the subproblems are


dependent, that is, when subproblems share subsubproblems.

n “Programming” refers to a tabular method


Difference between DP and Divide-and-Conquer
- Using divide-and-conquer to solve these problems is inefficient because the
  same common subproblems have to be solved many times.

- DP solves each of them once, and their answers are stored in a table for
  future use.
Elements of Dynamic Programming (DP)
DP is used to solve problems with the following characteristics:

- Simple subproblems
  - We should be able to break the original problem into smaller subproblems
    that have the same structure.

- Optimal substructure of the problems
  - The optimal solution to the problem contains within it optimal solutions
    to its subproblems.

- Overlapping sub-problems
  - There exist some places where we solve the same subproblem more than once.
Steps to Designing a Dynamic Programming Algorithm
1. Characterize optimal substructure

2. Recursively define the value of an optimal solution

3. Compute the value bottom up

4. (If needed) Construct an optimal solution


Fibonacci Numbers
n Fn= Fn-1+ Fn-2 n≥2
n F0 =0, F1 =1
n 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, …

n Straightforward recursive procedure is slow!

n Let’s draw the recursion tree


Fibonacci Numbers
Fibonacci Numbers
n We can calculate Fn in linear time by remembering
solutions to the solved subproblems – dynamic
programming

n Compute solution in a bottom-up fashion

n In this case, only two values need to be


remembered at any time
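A minimal bottom-up sketch (added for illustration), keeping only the last
two values as the slide suggests:

    def fib_bottom_up(n):
        """Bottom-up DP: compute F0, F1, ..., Fn in order; O(n) time, O(1) space."""
        prev, curr = 0, 1                        # F0, F1
        if n == 0:
            return prev
        for _ in range(2, n + 1):
            prev, curr = curr, prev + curr       # slide the two-value window forward
        return curr

    print([fib_bottom_up(i) for i in range(11)])
    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]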
Ex1: Assembly-line scheduling
- An automobile factory with two assembly lines.
  - Each line has the same number n of stations, numbered j = 1, 2, ..., n.

  - We denote the jth station on line i (where i is 1 or 2) by Si,j.

  - The jth station on line 1 (S1,j) performs the same function as the jth
    station on line 2 (S2,j).

  - The time required at each station varies, even between stations at the
    same position on the two different lines, as each assembly line has
    different technology.

  - The time required at station Si,j is ai,j.

  - There is also an entry time ei for the chassis to enter assembly line i
    and an exit time xi for the completed auto to exit assembly line i.
Ex1: Assembly-line scheduling
- After going through station Si,j, the chassis can either
  - stay on the same line
    - the next station is Si,j+1
    - no transfer cost, or
  - transfer to the other line
    - the next station is S3-i,j+1
    - the transfer cost from Si,j to S3-i,j+1 is ti,j (j = 1, …, n–1)
    - there is no ti,n, because the assembly line is complete after Si,n
Ex1: Assembly-line scheduling
• (The time between adjacent stations is nearly 0.)
Problem Definition
n Problem: Given all these costs, what stations
should be chosen from line 1 and from line 2 for
minimizing the total time for car assembly.

n “Brute force” is to try all possibilities.


q requires to examine Ω(2n) possibilities

q Trying all 2n subsets is infeasible when n is large.

q Simple example : 2 stations à (2n) possibilities =4


Step 1: Optimal Solution Structure
n optimal substructure : choosing the best path
to Sij.

n The structure of the fastest way through the


factory (from the starting point)

n The fastest possible way to get through Si,1 (i =


1, 2)
q Only one way: from entry starting point to Si,1
q take time is entry time (ei)
Step 1: Optimal Solution Structure
n The fastest possible way to get through Si,j (i = 1, 2) (j = 2, 3, ...,
n). Two choices:

q Stay in the same line: Si,j-1 à Si,j


q Time is Ti,j-1 + ai,j
n If the fastest way through Si,j is through Si,j-1, it must have taken
a fastest way through Si,j-1

q Transfer to other line: S3-i,j-1 à Si,j


q Time is T3-i,j-1 + t3-i,j-1 + ai,j
n Same as above
Step 1: Optimal Solution Structure
n An optimal solution to a problem
q finding the fastest way to get through Si,j
n contains within it an optimal solution to sub-problems
q finding the fastest way to get through either Si,j-1 or S3-i,j-1
n Fastest way from starting point to Si,j is either:
q The fastest way from starting point to Si,j-1 and then directly from
Si,j-1 to Si,j
or
q The fastest way from starting point to S3-i,j-1 then a transfer from
line 3-i to line i and finally to Si,j

n àOptimal Substructure.
Example
Step 2: Recursive Solution
n Define the value of an optimal solution recursively in
terms of the optimal solution to sub-problems

n Sub-problem here
q finding the fastest way through station j on both lines (i=1,2)
q Let fi [j] be the fastest possible time to go from starting point
through Si,j

n The fastest time to go all the way through the factory: f*

n x1 and x2 are the exit times from lines 1 and 2,


respectively
Step 2: Recursive Solution
n The fastest time to go through Si,j
q e1 and e2 are the entry times for lines 1 and 2
Example
Example
Step 2: Recursive Solution
n To help us keep track of how to construct an
optimal solution, let us define
q li[j ]: line # whose station j-1 is used in a fastest way
through Si,j (i = 1, 2, and j = 2, 3,..., n)

n we avoid defining li[1] because no station


precedes station 1 on either lines.

n We also define
q l*: the line whose station n is used in a fastest way
through the entire factory
Step 2: Recursive Solution
n Using the values of l* and li[j] shown in Figure (b) in next
slide, we would trace a fastest way through the factory
shown in part (a) as follows

q The fastest total time comes from choosing stations


q Line 1: 1, 3, & 6 Line 2: 2, 4, & 5
Step 2: Recursive Solution
Step 3: Optimal Solution Value

- Use the substitution method to solve the recursion directly → O(2^n) →
  exponential.

[The remaining Step 3 slides, shown only as figures in the original, compute
the fi[j] values bottom-up in a table; see the sketch below.]
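A hedged bottom-up sketch of Step 3 (added for illustration; the function and
variable names and the 0-indexed input layout are assumptions, not the slides'
notation). It fills f and l in one left-to-right pass, so it runs in Θ(n) time
instead of the exponential recursion above. The sample data is taken from CLRS
Figure 15.1, which matches the trace on the earlier slide (line 1: stations
1, 3, 6; line 2: stations 2, 4, 5).

    def fastest_way(a, t, e, x, n):
        """Bottom-up DP for assembly-line scheduling.
        a[i][j]: time at the (j+1)-th station of line i+1; t[i][j]: transfer
        time after that station; e[i], x[i]: entry/exit times of line i+1.
        Returns (f_star, l_star, l), where l[i][j] records which line the
        fastest way through station j+1 of line i+1 came from (for Step 4)."""
        f = [[0] * n for _ in range(2)]
        l = [[0] * n for _ in range(2)]
        f[0][0] = e[0] + a[0][0]                 # base case: first station of each line
        f[1][0] = e[1] + a[1][0]
        for j in range(1, n):
            for i in range(2):
                other = 1 - i
                stay = f[i][j - 1] + a[i][j]
                transfer = f[other][j - 1] + t[other][j - 1] + a[i][j]
                if stay <= transfer:
                    f[i][j], l[i][j] = stay, i           # came from the same line
                else:
                    f[i][j], l[i][j] = transfer, other   # came from the other line
        if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:     # add exit times at the end
            return f[0][n - 1] + x[0], 0, l
        return f[1][n - 1] + x[1], 1, l

    # Sample data (assumed, from CLRS Figure 15.1):
    a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
    t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
    e, x = [2, 4], [3, 2]
    f_star, l_star, l = fastest_way(a, t, e, x, 6)
    print(f_star, l_star + 1)                    # 38 1  (fastest time 38, ending on line 1)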
Step 4: Optimal Solution
n Constructing the fastest way through the
factory
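A hedged sketch of that trace-back (added for illustration; it continues from
the fastest_way sketch above and follows the steps listed in the summary at
the top of this document): start from line l* at station n, then repeatedly
consult l to find which line the previous station was on, for O(n) total work.

    def print_stations(l, l_star, n):
        """Walk the l table backwards from the last station to the first,
        printing the chosen line for each station (in reverse order)."""
        i = l_star
        print(f"line {i + 1}, station {n}")
        for j in range(n - 1, 0, -1):        # 0-based columns n-1, ..., 1
            i = l[i][j]                      # line used at the previous station
            print(f"line {i + 1}, station {j}")

    # With l and l_star from the fastest_way sketch above, this prints:
    #   line 1, station 6
    #   line 2, station 5
    #   line 2, station 4
    #   line 1, station 3
    #   line 2, station 2
    #   line 1, station 1
    print_stations(l, l_star, 6)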
