Lecture 2 DP

Outline – Chapter 4

1. Dynamic Programming (Introduction)


i) Multistage Graph
ii) Optimal Binary Search Tree (OBST)
iii) 0/1 Knapsack Problem
iv) Travelling Salesman Problem
2. Greedy Algorithms (Introduction)
i) Job Sequencing
ii) Optimal Merge Patterns



Dynamic Programming
• Dynamic Programming is an algorithm
design method that can be used when
the solution to a problem may be
viewed as the result of a sequence of
decisions



The General Method
• Dynamic programming
– An algorithm design method that can be used when the solution
can be viewed as the result of a sequence of decisions
• Some problems are solvable by the Greedy method, under a condition
– Condition: an optimal sequence of decisions can be found by
making the decisions one at a time and never making an
erroneous decision
• For many other problems
– It is not possible to make stepwise decisions (based only on local
information) in the manner of the Greedy method



The General Method
• Enumeration vs. dynamic programming
– Enumeration
• Enumerating all possible decision sequences and picking
out the best → prohibitive time and storage requirements
– Dynamic programming
• Drastically reducing the time and storage by avoiding some
decision sequences that cannot possibly be optimal
• Making explicit appeal to the principle of optimality
• Definition [Principle of optimality] The principle of optimality
states that an optimal sequence of decisions has the property that
whatever the initial state and decision are, the remaining
decisions must constitute an optimal decision sequence with
regard to the state resulting from the first decision.
Principle of optimality
• Principle of optimality: Suppose that in solving a
problem, we have to make a sequence of
decisions D1, D2, …, Dn. If this sequence is
optimal, then the last k decisions, 1 ≤ k ≤ n, must
be optimal.
• e.g. the shortest path problem
If i, i1, i2, …, j is a shortest path from i to j, then i1,
i2, …, j must be a shortest path from i1 to j
• In summary, if a problem can be described by a
multistage graph, then it can be solved by
dynamic programming.
The General Method
• Greedy method vs. dynamic programming
– Greedy method
• Only one decision sequence is ever generated
– Dynamic programming
• Many decision sequences may be generated
• But sequences containing suboptimal subsequences are
discarded, because they cannot be optimal by the
principle of optimality



The General Method
• Notation and formulation for the principle
– Notation and formulation
• S0: initial problem state
• n decisions di, 1≤i≤n have to be made to solve the
problem and D1={r1,r2,…,rj} is the set of possible
decision values for d1
• Si is the problem state after ri is chosen, and Γi is an
optimal sequence wrt Si
• Then, when the principle of optimality holds, an
optimal sequence wrt S0 is the best of the decision
sequences ri,Γi, 1≤i≤j
Dynamic Programming
• Forward approach and backward approach:
– Note that if the recurrence relations are formulated
using the forward approach, then the relations are
solved backwards, i.e., beginning with the last
decision
– On the other hand, if the relations are formulated
using the backward approach, they are solved
forwards.
• To solve a problem by using dynamic programming:
– Find out the recurrence relations.
– Represent the problem by a multistage graph.



Steps for Dynamic Programming
1) The problem can be divided into stages, with a decision
required at each stage
2) Each stage has a number of states associated with it
3) The decision at one stage transforms one state into a
state of the next stage
4) Given the current state, the optimal decision for each of
the remaining stages does not depend on previous states
or decisions
5) There exists a recursive relationship that identifies the
optimal solution for stage j, given that stage j+1 has
already been solved
6) The final stage must be solvable by itself.
The shortest path

• To find a shortest path in a multi-stage graph


[Figure: a small multistage graph from S to T through intermediate
vertices A and B]

• Apply the greedy method:
the shortest path from S to T:
1+2+5 = 8



The shortest path in multistage graphs
• e.g.

[Figure: 4-stage graph with edges S→A=1, S→B=2, S→C=5;
A→D=4, A→E=11; B→D=9, B→E=5, B→F=16; C→F=2;
D→T=18, E→T=13, F→T=2]

• The greedy method cannot be applied to this case:
(S, A, D, T): 1+4+18 = 23.
• The real shortest path is:
(S, C, F, T): 5+2+2 = 9.


Dynamic programming approach
[Figure: S reaches T through A, B, C with first-edge costs 1, 2, 5 and
remaining distances d(A, T), d(B, T), d(C, T)]

• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
• d(A, T) = min{4+d(D, T), 11+d(E, T)}
= min{4+18, 11+13} = 22.
Dynamic programming

• d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.
• d(C, T) = min{2+d(F, T)} = 2+2 = 4.
• d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
= min{1+22, 2+18, 5+4} = 9.
• The above way of reasoning is called backward reasoning.
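The backward-reasoning computation above can be written as a short
memoized recursion. Below is a minimal sketch in C++ (not from the
original slides): the example graph is stored as an adjacency list,
and the illustrative helper d() caches each subresult so nothing is
recomputed.

    // Sketch: memoized d(v) = min over edges <v,u> of c(v,u) + d(u),
    // on the example graph above. Names here are illustrative.
    #include <algorithm>
    #include <climits>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    std::map<std::string, std::vector<std::pair<std::string,int>>> adj = {
        {"S", {{"A",1},{"B",2},{"C",5}}},
        {"A", {{"D",4},{"E",11}}},
        {"B", {{"D",9},{"E",5},{"F",16}}},
        {"C", {{"F",2}}},
        {"D", {{"T",18}}}, {"E", {{"T",13}}}, {"F", {{"T",2}}}
    };
    std::map<std::string,int> memo;  // caches d(v, T)

    int d(const std::string& v) {    // shortest distance from v to T
        if (v == "T") return 0;
        auto it = memo.find(v);
        if (it != memo.end()) return it->second;  // already solved
        int best = INT_MAX;
        for (auto& [u, c] : adj[v])  // min over outgoing edges <v,u>
            best = std::min(best, c + d(u));
        return memo[v] = best;
    }

    int main() { std::cout << d("S") << "\n"; }  // prints 9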
Backward approach (forward reasoning)
• d(S, A) = 1
d(S, B) = 2
d(S, C) = 5
• d(S,D)=min{d(S, A)+d(A, D),d(S, B)+d(B, D)}
= min{ 1+4, 2+9 } = 5
d(S,E)=min{d(S, A)+d(A, E),d(S, B)+d(B, E)}
= min{ 1+11, 2+5 } = 7
d(S,F)=min{d(S, B)+d(B, F),d(S, C)+d(C, F)}
= min{ 2+16, 5+2 } = 7



• d(S, T) = min{d(S, D)+d(D, T), d(S, E)+d(E, T), d(S, F)+d(F, T)}
= min{ 5+18, 7+13, 7+2 }
= 9



Multistage Graphs
• Definition: multistage graph G(V,E)
– A directed graph in which the vertices are
partitioned into k≥2 disjoint sets Vi, 1≤i≤k
– If <u,v> ∈ E, then u ∈ Vi and v ∈ Vi+1 for some i,
1≤i<k
– |V1|= |Vk|=1, and s(source) ∈ V1 and t(sink) ∈ Vk
– c(i,j)=cost of edge <i,j>
• Definition: Multistage Graph Problem
– Find a minimum-cost path from s to t
– e.g., (5-stage graph)
Multistage Graphs

[Figure: the 5-stage example graph with vertices 1–12, s = 1, t = 12.
Edge costs, reconstructed from the computations that follow: 1→2=9,
1→3=7, 1→4=3, 1→5=2; 2→6=4, 2→7=2, 2→8=1; 3→6=2, 3→7=7; 4→8=11;
5→7=11, 5→8=8; 6→9=6, 6→10=5; 7→9=4, 7→10=3; 8→10=5, 8→11=6;
9→12=4, 10→12=2, 11→12=5]


Multistage Graphs
• DP formulation
– Every s-to-t path is the result of a sequence of k-2 decisions
– The principle of optimality holds (Why?)
– p(i, j) = a minimum-cost path from vertex j in Vi to vertex t
– cost(i, j) = cost of path p(i, j)

cost(i, j) = min { c(j, l) + cost(i+1, l) : l ∈ Vi+1, <j, l> ∈ E }

– cost(k-1, j) = c(j, t) if <j, t> ∈ E, ∞ otherwise
– Then compute cost(k-2, j) for all j ∈ Vk-2,
then cost(k-3, j) for all j ∈ Vk-3, …,
and finally cost(1, s)
Multistage Graphs

(k=5)
Stage 5
cost(5,12) = 0.0
Stage 4
cost(4,9) = min {4+cost(5,12)} = 4
cost(4,10) = min {2+cost(5,12)} = 2
cost(4,11) = min {5+cost(5,12)} = 5
Stage 3
cost(3,6) = min {6+cost(4,9), 5+cost(4,10)} = 7
cost(3,7) = min {4+cost(4,9), 3+cost(4,10)} = 5
cost(3,8) = min {5+cost(4,10), 6+cost(4,11)} = 7



Multistage Graphs

– Stage 2
• cost(2,2) = min {4+cost(3,6), 2+cost(3,7), 1+cost(3,8)} = 7
• cost(2,3) = min {2+cost(3,6), 7+cost(3,7)} = 9
• cost(2,4) = min {11+cost(3,8)} = 18
• cost(2,5) = min {11+cost(3,7), 8+cost(3,8)} = 15
– Stage 1
• cost(1,1) = min {9+cost(2,2), 7+cost(2,3), 3+cost(2,4), 2+cost(2,5)} = 16
– Important note: cost(3,6), cost(3,7), and cost(3,8) are computed
once and reused, avoiding recomputation in computing cost(2,2)
Multistage Graphs
void Fgraph(graph G, int k, int n, int p[])
// The input is a k-stage graph G = (V,E) with n vertices indexed in order
// of stages. E is a set of edges and c[i][j] is the cost of <i, j>
// (taken as infinite when <i, j> is not an edge). p[1:k] is a
// minimum-cost path.
{
    float cost[MAXSIZE]; int d[MAXSIZE], r;
    cost[n] = 0.0;
    for (int j = n-1; j >= 1; j--) { // Compute cost[j].
        // Let r be a vertex such that <j, r> is an edge of G
        // and c[j][r] + cost[r] is minimum.
        r = j+1;
        for (int l = j+1; l <= n; l++)
            if (c[j][l] + cost[l] < c[j][r] + cost[r]) r = l;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;
    }
    // Find a minimum-cost path.
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k-1; j++) p[j] = d[p[j-1]];
}
Multistage Graphs
• Backward approach

bcost(i, j) = min { bcost(i-1, l) + c(l, j) : l ∈ Vi-1, <l, j> ∈ E }

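A backward-approach routine can be sketched in the same style as
Fgraph above. The following is a minimal sketch under the same
assumptions (vertices indexed in order of stages, c[i][j] infinite
when <i, j> is not an edge); it is a reconstruction, not code from
the original slides. bcost is built from the source outward, and the
path is recovered from the sink back.

    void Bgraph(graph G, int k, int n, int p[])
    // Same conventions as Fgraph; p[1:k] is a minimum-cost path.
    {
        float bcost[MAXSIZE]; int d[MAXSIZE], r;
        bcost[1] = 0.0;
        for (int j = 2; j <= n; j++) { // Compute bcost[j].
            // Let r be a vertex such that <r, j> is an edge of G
            // and bcost[r] + c[r][j] is minimum.
            r = 1;
            for (int l = 1; l < j; l++)
                if (bcost[l] + c[l][j] < bcost[r] + c[r][j]) r = l;
            bcost[j] = bcost[r] + c[r][j];
            d[j] = r;
        }
        // Find a minimum-cost path, tracing back from the sink.
        p[1] = 1; p[k] = n;
        for (int j = k-1; j >= 2; j--) p[j] = d[p[j+1]];
    }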


0/1 knapsack problem
• n objects with weights W1, W2, …, Wn,
profits P1, P2, …, Pn, and capacity M

maximize Σ1≤i≤n Pi xi
subject to Σ1≤i≤n Wi xi ≤ M,
xi = 0 or 1, 1 ≤ i ≤ n

• e.g. (M = 10)

i   Wi   Pi
1   10   40
2    3   20
3    5   30


The multistage graph solution
• The 0/1 knapsack problem can be described
by a multistage graph.
[Figure: the multistage graph for this instance; stage i fixes xi,
edges for xi=1 carry profits 40, 20, 30 and edges for xi=0 carry 0,
and nodes are labeled with the partial assignment, e.g. 011, 010,
001, 000]


The Dynamic Programming Approach
• The longest path represents the optimal solution:
x1=0, x2=1, x3=1
Σ Pi xi = 20+30 = 50
• Let fi(Q) be the value of an optimal solution to
objects 1, 2, 3, …, i with capacity Q.
• fi(Q) = max{ fi-1(Q), fi-1(Q-Wi)+Pi }
• The optimal solution is fn(M).

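The recurrence fi(Q) = max{ fi-1(Q), fi-1(Q-Wi)+Pi } can be evaluated
bottom-up with a one-dimensional table. A minimal C++ sketch on the
example instance (illustrative code, not from the slides):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        const std::vector<int> W = {10, 3, 5};   // weights W1..Wn
        const std::vector<int> P = {40, 20, 30}; // profits P1..Pn
        const int M = 10;                        // knapsack capacity
        // f[Q] holds fi(Q); the row for object i is built from the row
        // for i-1 by scanning Q downwards, so each object is used at
        // most once (the 0/1 constraint).
        std::vector<int> f(M + 1, 0);            // f0(Q) = 0 for all Q
        for (size_t i = 0; i < W.size(); i++)
            for (int Q = M; Q >= W[i]; Q--)
                f[Q] = std::max(f[Q], f[Q - W[i]] + P[i]);
        std::cout << f[M] << "\n";               // prints 50 = fn(M)
    }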


Review: Dynamic Programming
• Dynamic programming is another strategy for
designing algorithms
• Use when problem breaks down into recurring
small subproblems



Review: Dynamic Programming
• Summary of the basic idea:
– Optimal substructure: optimal solution to problem
consists of optimal solutions to subproblems
– Overlapping subproblems: few subproblems in total,
many recurring instances of each
– Solve bottom-up, building a table of solved
subproblems that are used to solve larger ones
• Variations:
– “Table” could be 3-dimensional, triangular, a tree, etc.
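As an illustration of these ideas (not an example from the slides),
binomial coefficients computed by Pascal's rule exhibit optimal
substructure and overlapping subproblems, and the bottom-up table of
solved subproblems happens to be triangular:

    #include <iostream>
    #include <vector>

    long long binom(int n, int k) {
        // C[i][j] = C(i, j); row i has i+1 entries, so the table is
        // triangular rather than rectangular.
        std::vector<std::vector<long long>> C(n + 1);
        for (int i = 0; i <= n; i++) {
            C[i].resize(i + 1);
            C[i][0] = C[i][i] = 1;                 // base cases
            for (int j = 1; j < i; j++)
                C[i][j] = C[i-1][j-1] + C[i-1][j]; // Pascal's rule
        }
        return C[n][k];
    }

    int main() { std::cout << binom(10, 4) << "\n"; }  // prints 210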



Greedy Algorithms



Overview
• Like dynamic programming, used to solve
optimization problems.
• Problems exhibit optimal substructure (like DP).
• Problems also exhibit the greedy-choice
property.
– When we have a choice to make, make the one that
looks best right now.
– Make a locally optimal choice in hope of getting a
globally optimal solution.



Greedy Strategy
• The choice that seems best at the moment is the
one we go with.
– Prove that when there is a choice to make, one
of the optimal choices is the greedy choice.
Therefore, it’s always safe to make the greedy
choice.
– Show that all but one of the subproblems
resulting from the greedy choice are empty.



Elements of Greedy Algorithms
• Greedy-choice Property.
– A globally optimal solution can be arrived at by
making a locally optimal (greedy) choice.
• Optimal Substructure.



Greedy Algorithms
• A greedy algorithm always makes the choice
that looks best at the moment
– My everyday examples:
• Walking to the Corner
• Playing a bridge hand
– The hope: a locally optimal choice will lead to a
globally optimal solution
– For some problems, it works
• Dynamic programming can be overkill; greedy
algorithms tend to be easier to code
Review: The Knapsack Problem
• More formally, the 0-1 knapsack problem:
– The thief must choose among n items, where the
ith item is worth vi dollars and weighs wi pounds
– Carrying at most W pounds, maximize value
• Note: assume vi, wi, and W are all integers
• “0-1” b/c each item must be taken or left in entirety
• A variation, the fractional knapsack problem:
– Thief can take fractions of items
– Think of items in 0-1 problem as gold ingots, in
fractional problem as buckets of gold dust
Review: The Knapsack Problem
And Optimal Substructure
• Both variations exhibit optimal substructure
• To show this for the 0-1 problem, consider the
most valuable load weighing at most W pounds
– If we remove item j from the load, what do we know
about the remaining load?
– A: remainder must be the most valuable load
weighing at most W - wj that thief could take from
museum, excluding item j



Solving The Knapsack Problem
• The optimal solution to the fractional knapsack
problem can be found with a greedy algorithm
– How?
• The optimal solution to the 0-1 problem cannot
be found with the same greedy strategy
– Greedy strategy: take in order of dollars/pound
– Example: 3 items weighing 10, 20, and 30 pounds,
knapsack can hold 50 pounds
• Suppose item 2 is worth $100. Assign values to the other
items so that the greedy strategy will fail

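For reference, a minimal sketch of the greedy strategy for the
fractional problem: take items in decreasing order of dollars per
pound, splitting the last one. Only the weights 10, 20, 30 and the
50-pound capacity come from the example above; the dollar values here
are made up to exercise the code.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Item { double value, weight; };

    double fractionalKnapsack(std::vector<Item> items, double W) {
        // Sort by value density (dollars per pound), best first.
        std::sort(items.begin(), items.end(),
                  [](const Item& a, const Item& b) {
                      return a.value / a.weight > b.value / b.weight;
                  });
        double total = 0;
        for (const auto& it : items) {
            if (W <= 0) break;
            double take = std::min(it.weight, W); // all, or a fraction
            total += it.value * (take / it.weight);
            W -= take;
        }
        return total;
    }

    int main() {
        std::vector<Item> items = {{60, 10}, {100, 20}, {120, 30}};
        std::cout << fractionalKnapsack(items, 50) << "\n";  // prints 240
    }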


Greedy Choice Property
• Dynamic programming? Memoize? Yes, but…
• Activity selection problem also exhibits the
greedy choice property:
– Locally optimal choice → globally optimal sol'n
– Theorem 17.1: if S is an activity selection problem
sorted by finish time, then there is an optimal solution
A ⊆ S such that {1} ⊆ A
• Sketch of proof: if there is an optimal solution B that does not
contain {1}, we can always replace the first activity in B with {1}
(Why?). Same number of activities, thus optimal.

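A sketch of the greedy activity selector that this theorem justifies:
with activities sorted by finish time, repeatedly take the first
activity compatible with the last one chosen. The sample start/finish
times below are illustrative, not from the slides.

    #include <iostream>
    #include <vector>

    struct Activity { int start, finish; };

    std::vector<int> selectActivities(const std::vector<Activity>& acts) {
        // Assumes acts is already sorted by finish time.
        std::vector<int> chosen = {0};          // greedy choice: activity 1
        int lastFinish = acts[0].finish;
        for (int i = 1; i < (int)acts.size(); i++)
            if (acts[i].start >= lastFinish) {  // compatible with last chosen
                chosen.push_back(i);
                lastFinish = acts[i].finish;
            }
        return chosen;
    }

    int main() {
        std::vector<Activity> acts =
            {{1,4},{3,5},{0,6},{5,7},{3,9},{5,9},{6,10},{8,11}};
        for (int i : selectActivities(acts)) std::cout << i << " ";  // 0 3 7
        std::cout << "\n";
    }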


Review: The Knapsack Problem
• The famous knapsack problem:
– A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has
a good eye for the value of these objects, and knows
that each will fetch hundreds or thousands of dollars
on the clandestine art collector’s market. But, the
thief has only brought a single knapsack to the scene
of the robbery, and can take away only what he can
carry. What items should the thief take to maximize
the haul?



The Knapsack Problem: Greedy Vs. Dynamic

• The fractional problem can be solved greedily


• The 0-1 problem cannot be solved with a
greedy approach
– As you have seen, however, it can be solved with
dynamic programming


