Day4.3 Algorithms Part2

Algorithms

12/08/2022 1
Divide and Conquer

Selection Algorithms

Order Statistics
• The ith order statistic in a set of n elements is
the ith smallest element
• The minimum is thus the 1st order statistic
• The maximum is the nth order statistic
• The median is the n/2 order statistic
– If n is even, there are 2 medians
• How can we calculate order statistics?
• What is the running time?
Order Statistics
• How many comparisons are needed to find the
minimum element in a set? The maximum?
• Can we find the minimum and maximum with
less than twice the cost?
• Yes:
– Walk through the elements in pairs
• Compare the two elements of each pair to each other
• Compare the larger to the current maximum, the smaller to the current minimum
– Total cost: 3 comparisons per 2 elements = 3n/2
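The pairwise scan described above can be sketched in C++ (a sketch; the function name minMax is ours, and a nonempty input is assumed):

```cpp
#include <utility>
#include <vector>

// Find the minimum and maximum together, processing elements in pairs:
// one comparison inside the pair, then one against the running minimum
// and one against the running maximum — 3 comparisons per 2 elements,
// about 3n/2 in total instead of the naive 2n. Assumes a is nonempty.
std::pair<int, int> minMax(const std::vector<int>& a) {
    int n = (int)a.size();
    int lo, hi, i;
    if (n % 2 == 1) {                 // odd length: seed with the first element
        lo = hi = a[0];
        i = 1;
    } else {                          // even length: seed with the first pair
        if (a[0] < a[1]) { lo = a[0]; hi = a[1]; }
        else             { lo = a[1]; hi = a[0]; }
        i = 2;
    }
    for (; i + 1 < n; i += 2) {
        int small = a[i], big = a[i + 1];
        if (big < small) std::swap(small, big); // 1 comparison inside the pair
        if (small < lo) lo = small;             // smaller vs. current minimum
        if (big > hi)   hi = big;               // larger vs. current maximum
    }
    return {lo, hi};
}
```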

Finding Order Statistics:
The Selection Problem
• A more interesting problem is selection:
finding the ith smallest element of a set
• We will show:
– A practical randomized algorithm with O(n)
expected running time
– A cool algorithm of theoretical interest only with
O(n) worst-case running time

Randomized Selection
• Key idea: use partition() from quicksort
– But, only need to examine one subarray
– This savings shows up in running time: O(n)
q = RandomizedPartition(A, p, r)

After partitioning: elements in A[p..q−1] are ≤ A[q], and elements in A[q+1..r] are ≥ A[q]
Randomized Selection
RandomizedSelect(A, p, r, i)
    if (p == r) then return A[p];
    q = RandomizedPartition(A, p, r);
    k = q - p + 1;    // rank of A[q] within A[p..r]
    if (i == k) then return A[q];
    if (i < k) then
        return RandomizedSelect(A, p, q-1, i);
    else
        return RandomizedSelect(A, q+1, r, i-k);
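A runnable C++ rendering of this pseudocode, using a Lomuto-style partition around a random pivot (a sketch; helper names are ours):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Lomuto partition around a randomly chosen pivot, as in quicksort.
int randomizedPartition(std::vector<int>& A, int p, int r) {
    int pivotIdx = p + std::rand() % (r - p + 1);
    std::swap(A[pivotIdx], A[r]);
    int x = A[r], i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] <= x) std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    return i + 1;
}

// Return the i-th smallest element (1-based rank) of A[p..r].
// Only one side of the partition is ever recursed into.
int randomizedSelect(std::vector<int>& A, int p, int r, int i) {
    if (p == r) return A[p];
    int q = randomizedPartition(A, p, r);
    int k = q - p + 1;                      // rank of the pivot within A[p..r]
    if (i == k) return A[q];
    if (i < k) return randomizedSelect(A, p, q - 1, i);
    return randomizedSelect(A, q + 1, r, i - k);
}
```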
Randomized Selection
• Analyzing RandomizedSelect()
– Worst case: partition always 0:n-1
T(n) = T(n-1) + O(n) = ???
= O(n²) (arithmetic series)
• No better than sorting!
– “Best” case: suppose a 9:1 partition
T(n) = T(9n/10) + O(n) = ???
= O(n) (Master Theorem, case 3)
• Better than sorting!
• What if this had been a 99:1 split?

Randomized Selection
• Average case
– For upper bound, assume ith element always falls
in larger side of partition:
T(n) ≤ (1/n) Σ_{k=0}^{n−1} T(max(k, n−k−1)) + Θ(n)
     ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} T(k) + Θ(n)
(each T(k) with k ≥ n/2 appears at most twice in the first sum)
– Let’s show that T(n) = O(n) by substitution

Randomized Selection
• Assume T(k) ≤ ck for k < n, c sufficiently large

T(n) ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} T(k) + Θ(n)                    The recurrence we started with

     ≤ (2/n) Σ_{k=⌊n/2⌋}^{n−1} ck + Θ(n)                      Substitute ck for T(k)

     = (2c/n) [ Σ_{k=1}^{n−1} k − Σ_{k=1}^{n/2−1} k ] + Θ(n)  “Split” the sum

     = (2c/n) [ ½(n−1)n − ½(n/2 − 1)(n/2) ] + Θ(n)            Expand arithmetic series

     = c(n−1) − (c/2)(n/2 − 1) + Θ(n)                         Multiply it out
Randomized Selection
• Assume T(n) ≤ cn for sufficiently large c:

T(n) ≤ c(n−1) − (c/2)(n/2 − 1) + Θ(n)      The recurrence so far

     = cn − c − cn/4 + c/2 + Θ(n)           Multiply it out

     = cn − cn/4 − c/2 + Θ(n)               Combine −c + c/2

     = cn − (cn/4 + c/2 − Θ(n))             Rearrange the arithmetic

     ≤ cn   (if c is big enough)            What we set out to prove
Greedy Algorithms

Customers at a Petrol Pump
• There are n customers waiting at a petrol
pump to be served
• Customer j requires c(j) units of time to be
served.
• Total waiting time for customer j before being
served
= Total serving time of customers served
before j

Example

• <c(1),c(2),c(3),c(4),c(5)> = <25,21,14,10,5>
• Consider the schedule: <5,14,10,25,21>
• Cumulative waiting times = <0, 5, 19, 29, 54>
• Sum of waiting times = 5 + 19 + 29 + 54 = 107
• Now consider: <5,10,14,25,21>
• Cumulative waiting times = <0, 5, 15, 29, 54>
• Sum of waiting times = 5 + 15 + 29 + 54 = 103
• If a costlier job is serviced earlier in the schedule,
it delays more jobs than if served later
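The waiting-time arithmetic in this example can be checked with a short C++ sketch (function names are ours); serving customers in nondecreasing service time is the greedy schedule analyzed on the following slides:

```cpp
#include <algorithm>
#include <vector>

// Sum of waiting times for a given service order: each customer waits
// for the total service time of everyone served before them.
long long totalWaiting(const std::vector<int>& order) {
    long long wait = 0, elapsed = 0;
    for (int c : order) {
        wait += elapsed;    // this customer waits for all earlier service
        elapsed += c;       // then is served for c units
    }
    return wait;
}

// Greedy schedule: serve customers in nondecreasing service time.
long long minTotalWaiting(std::vector<int> times) {
    std::sort(times.begin(), times.end());
    return totalWaiting(times);
}
```

For the slide's data, `totalWaiting({5,14,10,25,21})` gives 107 and `totalWaiting({5,10,14,25,21})` gives 103, matching the example.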
Customers at a Petrol Pump
• Condition: c(1) ≥ c(2) ≥ …… ≥ c(n)
• Goal: To minimize the sum total of the waiting time
of all customers
• Claim: The total waiting time is minimized if the
customers are processed in the order
• S = <s(1),s(2),…..s(n)> = <c(n), c(n-1),…..,c(1)>
• Hence s(1) ≤ s(2) ≤ s(3) ≤ … ≤ s(n)

Customers at a Petrol Pump
• Proof: Let the optimal schedule for processing the
customers be
• T = <t(1), t(2), ….t(n)>
• Let i be the smallest index such that t(i) ≠ s(i)
• s(1) = t(1), s(2) = t(2), …. s(i-1) = t(i-1)

Customers at a Petrol Pump
• Since t(i) ≠ s(i),
• t(i) = s(k) for some k > i and
• t(j) = s(i) for some j > i
• Since s(1) ≤ s(2) ≤ … ≤ s(n) and i < k, we get t(j) = s(i) ≤ s(k) = t(i)

Customers at a Petrol Pump
• Let T’ be the schedule obtained from T by swapping
t(i) and t(j)
• Since t(j) moves up j−i places and t(i) moves down j−i places
in the schedule
• Cost(T’) = Cost(T) + t(j)*(j-i) - t(i)*(j-i)
• Cost(T’) = Cost(T) – (j-i)*(t(i)-t(j))
• Since j > i and t(j) ≤ t(i), Cost(T’) ≤ Cost(T)

Customers at a Petrol Pump
• S = <s(1),s(2),…..s(n)>
• T = <t(1), t(2), ….t(n)>
• s(1) = t(1), s(2) = t(2), …. s(i-1) = t(i-1), s(i) ≠ t(i)
• T’ = <t’(1), t’(2), ….t’(n)>
• s(1) = t’(1), s(2) = t’(2), …. s(i-1) = t’(i-1), s(i) = t’(i)
• Cost (T’) ≤ Cost(T)

Customers at a Petrol Pump
• S = <s(1),s(2),…..s(n)>
• T = <t(1), t(2), ….t(n)>
• T matches S up to position i−1
• T’ matches S up to position i

Customers at a Petrol Pump

• Cost (T’) ≤ Cost(T)


• Thus we can find schedules T = Ti-1, T’ = Ti, Ti+1, ….
Tn where Tj matches S up to position j
• Cost(S) = Cost(Tn) ≤ Cost(Tn-1) ≤ … ≤ Cost(Ti) ≤
Cost(Ti-1) = Cost(T)
• Thus Cost(S) ≤ Cost(T)
• But since T is an optimal schedule, Cost(T) ≤
Cost(S)
• Hence Cost(S) = Cost(T)
Greedy algorithms
• A greedy algorithm always makes the choice that
looks best at the moment
– Everyday examples:
• Driving in a city
• Playing cards
• Investing in stocks
• Choosing a university
– The hope: a locally optimal choice will lead to a globally
optimal solution
– For some problems, it works
• greedy algorithms tend to be easier to code

Activity Selection Problem
(aka Interval Scheduling)

• Input: A set of activities S = {a1,…, an}


• Each activity has a start time and a finish
time
– ai=(si, fi)
• Two activities are compatible if and only if
their interval does not overlap
• Output: a maximum-size subset of
mutually compatible activities

The Activity Selection Problem
• Here is a set of start and finish times
• What is the maximum number of activities that can be
completed?
• {a3, a9, a11} can be completed
• But so can {a1, a4, a8, a11}, which is a larger set
• But it is not unique; consider {a2, a4, a9, a11}

Interval Representation

[Figure: the activities drawn as intervals on a time axis from 0 to 15, built up over four slides]
Early Finish Greedy
• Select the activity with the earliest finish
• Eliminate the activities that could not be
scheduled
• Repeat!

[Figure: the earliest-finish greedy choice traced on the interval diagram, one selection per slide, on the same 0–15 time axis]
Assuming activities are sorted by
finish time
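Assuming activities are sorted by finish time, the greedy selection is a single linear pass. A C++ sketch (names are ours; it sorts internally so the assumption is made explicit; start times are assumed nonnegative):

```cpp
#include <algorithm>
#include <vector>

struct Activity { int start, finish; };

// Earliest-finish-time greedy: sort by finish time, then repeatedly
// take the first activity that starts at or after the finish time of
// the last activity chosen. Returns the size of the selected set.
int maxCompatible(std::vector<Activity> acts) {
    std::sort(acts.begin(), acts.end(),
              [](const Activity& a, const Activity& b) {
                  return a.finish < b.finish;
              });
    int count = 0;
    int lastFinish = 0; // assumes nonnegative start times
    for (const Activity& a : acts)
        if (a.start >= lastFinish) { // compatible with everything chosen so far
            ++count;
            lastFinish = a.finish;
        }
    return count;
}
```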

Why is it Greedy?
• Greedy in the sense that it leaves as much
opportunity as possible for the remaining
activities to be scheduled
• The greedy choice is the one that maximizes
the amount of unscheduled time remaining

Greedy-Choice Property
• Show there is an optimal solution that begins with a
greedy choice (with activity 1, which has the earliest finish
time)
• Suppose A ⊆ S is an optimal solution
– Order the activities in A by finish time. The first activity in A is k
• If k = 1, the schedule A begins with a greedy choice
• If k ≠ 1, show that there is an optimal solution B to S that begins with the
greedy choice, activity 1
– Let B = A – {k} ∪ {1}
• f1 ≤ fk → activities in B are disjoint (compatible)
• B has the same number of activities as A
• Thus, B is optimal

Knapsack Problem
• One wants to pack n items in a luggage
– The ith item is worth vi rupees and weighs wi kgs
– Maximize the value but cannot exceed W
– vi , wi, W are integers
• 0-1 knapsack → each item is taken or not taken
• Fractional knapsack → fractions of items can be
taken

Greedy Algorithm for Fractional
Knapsack problem
• The fractional knapsack problem can be solved by the greedy
strategy
– Compute the value per weight vi/wi for each item
– Obeying a greedy strategy, take as much as possible of the
item with the greatest value per weight.
– If the supply of that item is exhausted and there is still more
room, take as much as possible of the item with the next
value per weight, and so forth until there is no more room
– O(n lg n) (we need to sort the items by value per weight)
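A minimal C++ sketch of this greedy strategy (names are ours; positive item weights are assumed):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Item { double value, weight; };

// Greedy fractional knapsack: sort by value per weight, take whole
// items while they fit, then a fraction of the next item. O(n log n)
// for the sort plus a linear pass. Assumes all weights are positive.
double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) {
                  return a.value / a.weight > b.value / b.weight;
              });
    double total = 0.0;
    for (const Item& it : items) {
        if (capacity <= 0) break;
        double take = std::min(it.weight, capacity); // whole item, or the fraction that fits
        total += it.value * (take / it.weight);
        capacity -= take;
    }
    return total;
}
```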
– Greedy Algorithm?
– Correctness?

0-1 knapsack is harder!
• 0-1 knapsack cannot be solved by the greedy
strategy
– The greedy choice may be unable to fill the knapsack to capacity, and the
empty space lowers the effective value per weight of the packing
– We must compare the solution to the sub-problem in
which the item is included with the solution to the sub-
problem in which the item is excluded before we can make
the choice
– Dynamic Programming

The Knapsack Problem:
Greedy Vs. Dynamic
• The fractional problem can be solved greedily
• The 0-1 problem cannot be solved with a
greedy approach
– However, it can be solved with dynamic
programming

Algorithms

Module : Dynamic Programming

Divide and Conquer Vs.
Dynamic Programming

• Divide-and-Conquer: a top-down approach.


• Many smaller instances are computed more
than once.

• Dynamic programming: a bottom-up


approach.
• Solutions for smaller instances are stored in
a table for later use.
Why Dynamic Programming?
• Sometimes the natural way of dividing an
instance suggested by the structure of the
problem leads us to consider several
overlapping subinstances.
• If we solve each of these independently, a
large number of identical subinstances result
• If we pay no attention to this duplication, it is
likely that we will end up with an inefficient
algorithm.

Bottom-up technique
• Avoid calculating the same thing twice,
usually by keeping a table of known results,
which we fill up as subinstances are solved.
• Dynamic programming is a bottom-up
technique.
• Memoization is a variant of dynamic
programming that offers the efficiency of
dynamic programming (by avoiding solving
common subproblems more than once) but
maintains a top-down flow

Dynamic Programming: History

• Invented by American mathematician Richard


Bellman in the 1950s to solve optimization
problems.
• “Programming” here means “planning”.

Fibonacci Numbers
• Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

• Recurrence Relation of Fibonacci numbers

Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) for n ≥ 2
• Computing the nth Fibonacci number recursively (top-
down): f(n)

f(n-1) + f(n-2)

f(n-2) + f(n-3) f(n-3) + f(n-4)


...

Fibonacci: Simple Recursive Vs. Memoized
int fib(int n) {
    if (n <= 1) return n;               // stopping conditions
    else return fib(n-1) + fib(n-2);    // recursive step
}

// fibList must arrive initialized to a negative sentinel (e.g. -1) in every slot
int fibDyn(int n, vector<int>& fibList) {
    int fibValue;
    if (fibList[n] >= 0)                // check for a previously computed result
        return fibList[n];
    // otherwise execute the recursive algorithm to obtain the result
    if (n <= 1)                         // stopping conditions
        fibValue = n;
    else                                // recursive step
        fibValue = fibDyn(n-1, fibList) + fibDyn(n-2, fibList);
    fibList[n] = fibValue;              // store the result and return its value
    return fibValue;
}

Example: Fibonacci numbers
Computing the nth fibonacci number using bottom-up
iteration:
• f(0) = 0
• f(1) = 1
• f(2) = 0+1 = 1
• f(3) = 1+1 = 2
• f(4) = 1+2 = 3
• f(5) = 2+3 = 5



• f(n-2) = f(n-3)+f(n-4)
• f(n-1) = f(n-2)+f(n-3)
• f(n) = f(n-1) + f(n-2)
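The bottom-up iteration above only ever needs the last two values, which gives an O(n)-time, O(1)-space sketch in C++ (the name fibIter is ours):

```cpp
// Bottom-up Fibonacci: compute f(2), f(3), ..., f(n) in order, keeping
// only the last two values. No subproblem is ever recomputed.
long long fibIter(int n) {
    if (n == 0) return 0;
    long long prev = 0, curr = 1;   // f(0), f(1)
    for (int i = 2; i <= n; ++i) {
        long long next = prev + curr;   // f(i) = f(i-1) + f(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}
```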

Recursive calls for fib(5)

fib(5)

fib(4) fib(3)

fib(3) fib(2) fib(2) fib(1)

fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)

fib(1) fib(0)

fib(5) Using Dynamic Programming

fib(5)

6 fib(3)
fib(4)
5
fib(3) fib(2) fib(2) fib(1)
4

fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)


3

fib(1) fib(0)

1 2
Statistics (function calls)
         fib           fibDyn
N = 20   21,891        39
N = 40   331,160,281   79
Top down vs. Bottom up
• Top down dynamic programming moves
through recursive process and stores results as
algorithm computes and looks up result when
already precomputed.
• Bottom up dynamic programming evaluates
by computing all function values in order,
starting at lowest and using previously
computed values.

Principle of Optimality

• Principle of optimality
– the optimal solution to any nontrivial instance of
a problem is a combination of optimal solutions
to some of its sub-instances.

Binomial Coefficient
• Binomial coefficient:

  C(n, k) = n! / (k! (n − k)!)   for 0 ≤ k ≤ n

• The factorials make this formula impractical to compute directly (n! overflows quickly)

• Instead, use the following formula:
Binomial Using Divide & Conquer
• Binomial formula:

  C(n, k) = C(n−1, k−1) + C(n−1, k)   for 0 < k < n
  C(n, k) = 1                         for k = 0 or k = n   (C(n, 0) and C(n, n))

Bottom-Up
• Recursive property:
C[i][j] = C[i−1][j−1] + C[i−1][j]   for 0 < j < i

C[i][j] = 1                         for j = 0 or j = i

Pascal’s Triangle
      0   1   2   3   4   …   j−1   j   …
  0   1
  1   1   1
  2   1   2   1
  3   1   3   3   1
  4   1   4   6   4   1
  …
  i           C[i][j] = C[i−1][j−1] + C[i−1][j]
  …
  n
Binomial Coefficient
• Record the values in a table of n+1 rows and k+1 columns

        0   1   2   3   …   k−1          k
  0     1
  1     1   1
  2     1   2   1
  3     1   3   3   1
  …
  k     1                                1
  …
  n−1   1           C(n−1, k−1)   C(n−1, k)
  n     1                            C(n, k)
Binomial Coefficient
ALGORITHM Binomial(n, k)
//Computes C(n, k) by the dynamic programming algorithm
//Input: A pair of nonnegative integers n ≥ k ≥ 0
//Output: The value of C(n, k)
for i ← 0 to n do
    for j ← 0 to min(i, k) do
        if j = 0 or j = i
            C[i, j] ← 1
        else C[i, j] ← C[i−1, j−1] + C[i−1, j]
return C[n, k]
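The same table-filling idea in runnable C++ (a sketch; assumes 0 ≤ k ≤ n):

```cpp
#include <algorithm>
#include <vector>

// Bottom-up binomial coefficient via Pascal's rule:
// C[i][j] = C[i-1][j-1] + C[i-1][j], with C[i][0] = C[i][i] = 1.
// Theta(nk) additions; no factorials are ever computed.
long long binomial(int n, int k) {
    std::vector<std::vector<long long>> C(n + 1,
        std::vector<long long>(k + 1, 0));
    for (int i = 0; i <= n; ++i)
        for (int j = 0; j <= std::min(i, k); ++j)
            C[i][j] = (j == 0 || j == i)
                          ? 1                             // table boundary
                          : C[i - 1][j - 1] + C[i - 1][j]; // Pascal's rule
    return C[n][k];
}
```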

A(n, k) = Σ_{i=1}^{k} Σ_{j=1}^{i−1} 1 + Σ_{i=k+1}^{n} Σ_{j=1}^{k} 1
        = Σ_{i=1}^{k} (i − 1) + Σ_{i=k+1}^{n} k
        = (k − 1)k / 2 + k(n − k)  ∈  Θ(nk)

All pairs shortest paths

• Find shortest paths between all pairs, including pairs with no direct edge
• In a weighted graph, find shortest paths between every pair of vertices
• Same idea: construct the solution through a series of matrices D(0),
D(1), …, using an initial subset of the vertices as intermediaries
• Example: [Figure: a small weighted digraph]
Shortest Path
• Optimization problem – more than one candidate for the
solution
• Solution is the candidate with optimal value
• Solution 1 – brute force
– Find all possible paths, compute minimum
– Efficiency?
• Solution 2 – dynamic programming
– Algorithm that determines only lengths of shortest paths
– Modify to produce shortest paths as well

Example
W – graph as adjacency matrix:

     1  2  3  4  5
 1   0  1  ∞  1  5
 2   9  0  3  2  ∞
 3   ∞  ∞  0  4  ∞
 4   ∞  ∞  2  0  3
 5   3  ∞  ∞  ∞  0

D – result of Floyd’s algorithm:

     1   2   3   4   5
 1   0   1   3   1   4
 2   8   0   3   2   5
 3  10  11   0   4   7
 4   6   7   2   0   3
 5   3   4   6   4   0
Meanings

• D(0)[2][5] = length[v2, v5]= ∞


• D(1)[2][5] = minimum(length[v2,v5], length[v2,v1,v5])
= minimum (∞, 14) = 14
• D(2)[2][5] = D(1)[2][5] = 14
• D(3)[2][5] = D(2)[2][5] = 14
• D(4)[2][5] = minimum(length[v2,v1,v5], length[v2,v4,v5],
length[v2,v1,v4,v5], length[v2,v3,v4,v5])
= minimum (14, 5, 13, 10) = 5
• D(5)[2][5] = D(4)[2][5] = 5

Floyd’s Algorithm

[Figure: a 4-node weighted digraph on vertices a, b, c, d]

d_ij^(k) = min{ d_ij^(k−1),  d_ik^(k−1) + d_kj^(k−1) }   for k ≥ 1,   d_ij^(0) = w_ij
Computing D
• D(0) = W
• Now compute D(1)
• Then D(2)
• Etc.

Floyd’s Algorithm

• ALGORITHM Floyd (W[1 … n, 1 … n])

for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            W[i, j] ← min{W[i, j], W[i, k] + W[k, j]}
return W

• Efficiency = ?

Θ(n³)
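A runnable C++ sketch of the triple loop (names are ours; a large sentinel stands in for ∞, guarded so the additions cannot overflow the comparison):

```cpp
#include <vector>

const int INF = 1000000000; // stands in for "no edge" (the ∞ entries)

// Floyd–Warshall: after iteration k, dist[i][j] is the length of the
// shortest i-to-j path using only intermediate vertices from {0..k}.
// Three nested loops over n vertices: Theta(n^3) time.
std::vector<std::vector<int>> floydWarshall(std::vector<std::vector<int>> dist) {
    int n = (int)dist.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] < INF && dist[k][j] < INF && // skip missing edges
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];   // relax through k
    return dist;
}
```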
Example
Example: Apply Floyd’s algorithm to find the t All-
pairs shortest-path problem of the digraph defined by
the following weight matrix

0 2 ∞ 1 8
6 0 3 2 ∞
∞ ∞ 0 4 ∞
∞ ∞ 2 0 3
3 ∞ ∞ ∞ 0

Floyd's Algorithm using a single D
Floyd
1. D ← W // initialize D array to W
2. P ← 0 // initialize P array to all zeros
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             if (D[i, j] > D[i, k] + D[k, j])
7.                 then D[i, j] ← D[i, k] + D[k, j]
8.                      P[i, j] ← k
Printing intermediate nodes on
shortest path from q to r
path(index q, r)
    if (P[q, r] != 0)
        path(q, P[q, r])
        println("v" + P[q, r])
        path(P[q, r], r)
        return
    else return    // no intermediate nodes

Before calling path, check D[q, r] < ∞
Matrix-chain Multiplication
• Suppose we have a sequence or chain A1, A2,
…, An of n matrices to be multiplied
– That is, we want to compute the product
A1A2…An

• There are many possible ways


(parenthesizations) to compute the product

Matrix-chain Multiplication …contd

• Example: consider the chain A1, A2, A3, A4 of


4 matrices
– Let us compute the product A1A2A3A4
• There are 5 possible ways:
1. (A1(A2(A3A4)))
2. (A1((A2A3)A4))
3. ((A1A2)(A3A4))
4. ((A1(A2A3))A4)
5. (((A1A2)A3)A4)
Example
• Consider multiplication of four matrices:
A x B x C x D
(20 x 2) (2 x 30) (30 x 12) (12 x 8)
• Matrix multiplication is associative
A(B (CD)) = (AB) (CD)
• Five different orders for multiplying 4 matrices
1. A(B (CD)) = 30*12*8 + 2*30*8 + 20*2*8 = 3,680
2. (AB) (CD) = 20*2*30 + 30*12*8 + 20*30*8 = 8,880
3. A ((BC) D) = 2*30*12 + 2*12*8 + 20*2*8 = 1,232
4. ((AB) C) D = 20*2*30 + 20*30*12 + 20*12*8 = 10,320
5. (A (BC)) D = 2*30*12 + 20*2*12 + 20*12*8 = 3,120

Matrix-chain Multiplication …contd

• To compute the number of scalar


multiplications necessary, we must know:
– Algorithm to multiply two matrices
– Matrix dimensions

• Can you write the algorithm to multiply two


matrices?

Algorithm to Multiply 2 Matrices
Input: Matrices Ap×q and Bq×r (with dimensions p×q and q×r)
Result: Matrix Cp×r resulting from the product A·B

MATRIX-MULTIPLY(Ap×q , Bq×r)
1. for i ← 1 to p
2. for j ← 1 to r
3. C[i, j] ← 0
4. for k ← 1 to q
5. C[i, j] ← C[i, j] + A[i, k] · B[k, j]
6. return C

Scalar multiplication in line 5 dominates time to compute


CNumber of scalar multiplications = pqr
Matrix-chain Multiplication …contd

• Example: Consider three matrices A(10×100),
B(100×5), and C(5×50)
• There are 2 ways to parenthesize
– ((AB)C) = D(10×5) · C(5×50)
  • AB → 10·100·5 = 5,000 scalar multiplications
  • DC → 10·5·50 = 2,500 scalar multiplications      Total: 7,500
– (A(BC)) = A(10×100) · E(100×50)
  • BC → 100·5·50 = 25,000 scalar multiplications
  • AE → 10·100·50 = 50,000 scalar multiplications   Total: 75,000
Matrix-chain Multiplication …contd

• Matrix-chain multiplication problem


– Given a chain A1, A2, …, An of n matrices, where
for i = 1, 2, …, n, matrix Ai has dimension p(i−1) × p(i)
– Parenthesize the product A1A2…An such that the
total number of scalar multiplications is
minimized
• Brute force method of exhaustive search
takes time exponential in n

Dynamic Programming Approach
• The structure of an optimal solution
– Let us use the notation Ai..j for the matrix that
results from the product Ai Ai+1 … Aj
– An optimal parenthesization of the product
A1A2…An splits the product between Ak and Ak+1
for some integer k where 1 ≤ k < n
– First compute matrices A1..k and Ak+1..n ; then
multiply them to get the final matrix A1..n

Dynamic Programming Approach
…contd

– Key observation: parenthesizations of the


subchains A1A2…Ak and Ak+1Ak+2…An must also be
optimal if the parenthesization of the chain
A1A2…An is optimal (why?)

– That is, the optimal solution to the problem


contains within it the optimal solution to
subproblems

Dynamic Programming Approach
…contd

• Recursive definition of the value of an


optimal solution
– Let m[i, j] be the minimum number of scalar
multiplications necessary to compute Ai..j
– Minimum cost to compute A1..n is m[1, n]
– Suppose the optimal parenthesization of Ai..j
splits the product between Ak and Ak+1 for some
integer k where i ≤ k < j

Dynamic Programming Approach
…contd

– Ai..j = (Ai Ai+1…Ak)·(Ak+1Ak+2…Aj)= Ai..k · Ak+1..j


– Cost of computing Ai..j = cost of computing Ai..k +
cost of computing Ak+1..j + cost of multiplying Ai..k
and Ak+1..j
– Cost of multiplying Ai..k and Ak+1..j is p(i−1) · p(k) · p(j)

– m[i, j] = m[i, k] + m[k+1, j] + p(i−1) p(k) p(j)   for i ≤ k < j

– m[i, i] = 0   for i = 1, 2, …, n

Dynamic Programming Approach
…contd

– But… optimal parenthesization occurs at one


value of k among all possible i ≤ k < j
– Check all these and select the best one

m[i, j] = 0                                                       if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p(i−1) p(k) p(j) }   if i < j

Dynamic Programming Approach
…contd

• To keep track of how to construct an optimal


solution, we use a table s
• s[i, j ] = value of k at which Ai Ai+1 … Aj is split
for optimal parenthesization
• Algorithm: next slide
– First computes costs for chains of length l=1
– Then for chains of length l=2,3, … and so on
– Computes the optimal cost bottom-up

Algorithm
int minmult (int n, const int d[], index S[][])
{
    index i, j, k, diagonal;
    int M[1..n][1..n];
    for (i = 1; i <= n; i++)
        M[i][i] = 0;
    for (diagonal = 1; diagonal <= n-1; diagonal++)
        for (i = 1; i <= n-diagonal; i++)
        {   j = i + diagonal;
            M[i][j] = minimum over i <= k <= j-1 of
                      (M[i][k] + M[k+1][j] + d[i-1]*d[k]*d[j]);
            S[i][j] = a value of k that gave the minimum;
        }
    return M[1][n];
}

Algorithm Visualization
M[i, j] = min_{i ≤ k < j} { M[i, k] + M[k+1, j] + d(i) d(k+1) d(j+1) }

• The bottom-up answer construction fills in the M array by diagonals
• M[i, j] gets values from previous entries in the i-th row and j-th column
• Filling in each entry in the M table takes O(n) time
• Total run time: O(n³)
• Getting the actual parenthesization can be done by remembering “k” for each M entry
Constructing The Optimal Solution
• Our algorithm computes the minimum-cost
table m and the split table s
• The optimal solution can be constructed
from the split table S
– Each entry s[i, j ]=k shows where to split the
product Ai Ai+1 … Aj for the minimum cost

Example
• Show how to multiply this Matrix Dimension
matrix chain optimally
A1 30×35

• Solution on the board A2 35×15


– Minimum cost 15,125
A3 15×5
– Optimal parenthesization
((A1(A2A3))((A4 A5)A6)) A4 5×10
A5 10×20
A6 20×25

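The diagonal-by-diagonal fill can be checked in C++; on the six-matrix chain above (dimension vector 30, 35, 15, 5, 10, 20, 25) it reproduces the minimum cost 15,125 (a sketch; names are ours, and the 1-indexed convention of the minmult slide is used):

```cpp
#include <climits>
#include <vector>

// Bottom-up matrix-chain order: d[i-1] x d[i] is the dimension of
// matrix A_i; M[i][j] = minimum scalar multiplications for Ai..Aj.
// Filled by increasing chain length (the "diagonal"); O(n^3) time.
long long matrixChainCost(const std::vector<int>& d) {
    int n = (int)d.size() - 1; // number of matrices in the chain
    std::vector<std::vector<long long>> M(n + 1,
        std::vector<long long>(n + 1, 0));
    for (int len = 2; len <= n; ++len)          // chain length
        for (int i = 1; i + len - 1 <= n; ++i) {
            int j = i + len - 1;
            M[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k) {       // split between A_k and A_{k+1}
                long long cost = M[i][k] + M[k + 1][j] +
                                 (long long)d[i - 1] * d[k] * d[j];
                if (cost < M[i][j]) M[i][j] = cost;
            }
        }
    return M[1][n];
}
```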
The 0/1 Knapsack Problem
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with weight at
most W.
• If we are not allowed to take fractional amounts, then this is the 0/1
knapsack problem.
– In this case, we let T denote the set of items we take

– Objective: maximize  Σ_{i∈T} b_i

– Constraint:  Σ_{i∈T} w_i ≤ W
Example
• Given: A set S of n items, with each item i having
– bi - a positive benefit
– wi - a positive weight
• Goal: Choose items with maximum total benefit but with weight at
most W.
Items:     1     2     3     4     5
Weight:    4 in  2 in  2 in  6 in  2 in
Benefit:   $20   $3    $6    $25   $80

“Knapsack” capacity: 9 in
Solution: item 5 (2 in), item 3 (2 in), item 1 (4 in)
First Attempt
• Sk: Set of items numbered 1 to k.
• Define B[k] = best selection from Sk.
• Problem: does not have subproblem optimality:
– Consider S={(3,2),(5,4),(8,5),(4,3),(10,9)} benefit-weight pairs

Best for S4:

Best for S5:

Second Attempt
• Sk: Set of items numbered 1 to k.
• Define B[k,w] = best selection from Sk with weight exactly
equal to w
• Good news: this does have subproblem optimality:

  B[k, w] = B[k−1, w]                                if w_k > w
  B[k, w] = max{ B[k−1, w], B[k−1, w−w_k] + b_k }    otherwise

• i.e., best subset of Sk with weight exactly w is either the


best subset of Sk-1 with weight w or the best subset of Sk-1
with weight w-wk plus item k.

Dynamic Programming Algorithm
• Fill up a table of B[i,j] values
• In what order should the computation
proceed?

The 0/1 Knapsack Algorithm
• Recall definition of B[k, w]:

  B[k, w] = B[k−1, w]                                if w_k > w
  B[k, w] = max{ B[k−1, w], B[k−1, w−w_k] + b_k }    otherwise

• Since B[k,w] is defined in


terms of B[k-1,*], we can
reuse the same array
• Running time: O(nW).
• Not a polynomial-time
algorithm
• This is a pseudo-polynomial
time algorithm
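A C++ sketch of the table fill, using the one-array reuse mentioned above (iterating w downward so each item is counted at most once; names are ours):

```cpp
#include <algorithm>
#include <vector>

// 0/1 knapsack, bottom-up: B[w] holds the best benefit achievable with
// capacity w using the items processed so far. Since B[k][*] depends
// only on B[k-1][*], one array suffices when w is swept downward.
// O(nW) time — pseudo-polynomial, since W is a number, not a size.
int knapsack01(const std::vector<int>& benefit,
               const std::vector<int>& weight, int W) {
    std::vector<int> B(W + 1, 0);
    for (std::size_t k = 0; k < benefit.size(); ++k)
        for (int w = W; w >= weight[k]; --w)   // downward: item k used at most once
            B[w] = std::max(B[w], B[w - weight[k]] + benefit[k]);
    return B[W];
}
```

On the earlier example (weights 4, 2, 2, 6, 2 inches; benefits $20, $3, $6, $25, $80; capacity 9 in) this selects benefit 20 + 6 + 80 = 106, matching items 1, 3, and 5.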


Analysis
• An optimal solution to the Knapsack
problem can be found in
Θ(nW) time and Θ(nW) space.

This is a pseudo-polynomial time algorithm

Algorithms

Backtracking and Branch and Bound

Outline
• Problem space/ State space
• Exhaustive search
– Depth-first search
– Breadth-first search
• Backtracking
• Branch-and-bound

Problem Space

or State Space

Problem Space

• General problem statement


– Given a problem instance P,
find answer A=(a1, a2, …, an) such that
the criterion C(A, P) is satisfied.

• Problem space of P is the set of all possible


answers A.

Example: Shortest path
• Given a graph G=(V,E), and nodes u and v, find the
shortest path between u and v.

• General problem statement


– Given a graph G and nodes u and v,
find the path (u, n1, n2, …, nk, v), and
(u, n1, n2,…, nk, v) is the shortest path between u and v.

• Problem space
– Set of all possible path between u and v.
– {(u, n1, n2, …, nk, v) | ni is in V, for 1 ≤ i ≤ k}.

Example: 0/1 Knapsack
• Given a set S of n objects, where the object i has value vi
and weight wi , and a knapsack with weight capacity C,
find the maximum of value of objects in S which can be
put in the knapsack.

• General problem statement


– Given vi and wi, for 1 ≤ i ≤ n,
find the set K such that for each i in K, 1 ≤ i ≤ n,
Σ_{i∈K} vi is maximum while Σ_{i∈K} wi ≤ C.

• Problem space
– Any subset of S.

Example: n Queens
• Given an nxn board, find the n squares in the
board to put n queens so that no pair can attack.

• General problem statement


– Find (p1, p2, …, pn) where pi = (xi, yi) is a square on the
board, such that there is no pair (xi, yi) and (xj, yj) with
xi = xj, yi = yj, or |xi − xj| = |yi − yj| (same row, column, or diagonal).

• Problem space
– A set of any n positions on the board.

Exhaustive Search

Exhaustive Search
• Generate every possible answer
• Test each answer with the constraint to find the
correct answer
• Inefficient because the number of answers in the
problem space can be exponential.
• Examples:
– Shortest path
• n! paths to be considered, where n is the number of nodes.
– 0/1 Knapsack
• 2n selections, where n is the number of objects.


State-Space Tree
Let (a1, a2, …, an) be a possible answer.
• Suppose ai is either 0 or 1, for 1 ≤ i ≤ n.

(?, …, ?)

(0, ?, …, ?) (1, ?, …, ?)

(0, 0, ?, …, ?) (0, 1, ?, …, ?) (1, 0,?, …, ?) (1, 1, ?, …, ?)

(0,0,0, …, ?) (0,0,1, …, ?) (0,1,0, …, ?) (0,1,1, …, ?)

State-Space Tree: Shortest Path
[Figure: a 5-node weighted digraph whose paths from node 1 generate the tree below]

(1)
(1,2)  (1,3)
(1,2,3)  (1,2,4)  (1,3,4)  (1,3,5)
(1,2,3,4)  (1,2,3,5)  (1,2,4,5)  (1,3,4,5)
(1,2,3,4,5)
Generating Possible Paths
• Let {1,2,3, …, n} be a set of nodes and E[i][j] is the weight
of the edge between node i and j.

path(p)
    last = last node in the path p
    for next = 1 to n
        do np = p
           if next is not in np and E[last][next] != 0
               then np = np || next
                    path(np)

State-Space Tree : 0/1 Knapsack
• Given a set of objects o1, …, o5.
{}

{1} {2} {3} {4} {5}

{1,2} {1,3} {1,4} {1,5}

{1,2,3} {1,2,4} {1,2,5} {1,3,4} {1,3,5} {1,4,5}

{1,2,3,4} {1,2,3,5} {1,2,4,5} {1,3,4, 5}

{1,2,3,4,5}
State-Space Tree : n Queen
[Figure: partial 4-queens boards forming the state-space tree, one queen placed per level]
Depth-first Search
• Traverse the tree from root until a leaf is reached.
• Then, traverse back up to visit the next
unvisited node.

depthFirst(v)
visited[v] = 1
for each node k adjacent to v
do if not visited[k]
then depthFirst(k)
0/1 Knapsack: Depth-first Search
Global: maxV=0 maxSack={}

DFknapsack(sack, unchosen)
for each object p in unchosen
do unchosen=unchosen-{p}
sack=sack U {p}
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV=max(maxV, val)
if maxV=val then maxSack=sack
return
else DFknapsack(sack, unchosen)
return

Breadth-first Search
• Traverse the tree from root until the nodes of the same
depth are all visited.
• Then, visit the nodes at the next level.
breadthFirst(v)
    Q = empty queue
    enqueue(Q, v)
    visited[v] = 1
    while not empty(Q)
        do u = dequeue(Q)
           for each node k adjacent to u
               do if not visited[k]
                      then visited[k] = true
                           enqueue(Q, k)

0/1 Knapsack: Breadth-first Search
BFknapsack
Q = empty queue maxV=0
sack = { }
unchosen = set of all objects
enqueue(Q, <sack, unchosen>)
while not empty (Q)
do <sack, unchosen> = dequeue(Q)
if unchosen is not empty
then for each object p in unchosen
do enqueue(Q,<sackU{p}, unchosen-{p}>)
else maxV = max(maxV, evaluate(sack))
if maxV=evaluate(sack) then maxSack = sack

Backtracking

Backtracking
• Reduce the search by cutting down some
branches in the tree

0/1 Knapsack: Backtracking
Each node shows the sack and its (current weight, current value):

Level 0:  {} (0,0)
Level 1:  {1} (2,5)   {2} (1,4)   {3} (3,8)   {4} (2,7)
Level 2:  {1,2} (3,9)   {1,3} (5,13)   {1,4} (4,12)   {2,3} (4,12)   {2,4} (3,11)   {3,4} (5,15)
Level 3:  {1,2,3} (6,17)   {1,2,4} (5,16)   {2,3,4} (6,19)

object  weight  value
  1       2       5
  2       1       4
  3       3       8
  4       2       7

Capacity = 5
0/1 Knapsack: Backtracking
BTknapsack(sack, unchosen)
for each object p in unchosen
do unchosen=unchosen-{p}
if p can be put in sack then sack = sack U {p}
► Backtracking occurs when p cannot be put in sack.
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV=max(maxV, val)
maxSack=sack
return
else BTknapsack(sack, unchosen)
return

Branch-and-Bound

Branch-and-bound
• Use for optimization problems
• An optimal solution is a feasible solution with
the optimal value of the objective function.
• Search in state-space trees can be pruned by
using bound.

State-Space Tree with Bound
[Figure: a state-space tree in which every node carries a state and its bound]
Branch and Bound
From a node N:
• If the bound of N is not better than the current
overall bound, terminate the search from N.
• Otherwise,
– If N is a leaf node
• If its bound is better than the current overall bound, update
the overall bound and terminate the search from N.
• Otherwise, terminate the search from N.
– Otherwise, search each child of N.

0/1 Knapsack: Branch-and-Bound
Global: OvBound=0
BBknapsack(sack, unchosen)
if bound(sack, unchosen)>OvBound
► Estimated bound can be better than the overall bound.
then for each object p in unchosen
do unchosen=unchosen-{p}
if p can be put in sack then sack = sack U {p}
► Backtracking occurs when p cannot be put in sack.
val=evaluate(sack)
if unchosen is empty
► A leaf is reached.
then maxV = max(maxV, val)
maxSack = sack
OvBound = max(evaluate(sack), OvBound)
return
else ► A leaf is not reached.
BBknapsack(sack, unchosen)
return
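A compact C++ sketch of this branch-and-bound search, with the estimated bound from the next slide's recipe — current value plus remaining capacity times the best value-to-weight ratio among unchosen objects (names are ours; the bound is optimistic, so pruning never discards the optimum):

```cpp
#include <algorithm>
#include <vector>

struct Obj { int weight, value; };

// Optimistic estimated bound: current value plus remaining capacity
// times the best value-per-weight ratio among the unchosen objects.
static double bound(int value, int room, const std::vector<Obj>& objs,
                    std::size_t next) {
    double bestRatio = 0.0;
    for (std::size_t i = next; i < objs.size(); ++i)
        bestRatio = std::max(bestRatio,
                             (double)objs[i].value / objs[i].weight);
    return value + room * bestRatio;
}

// Depth-first branch and bound over include/exclude decisions.
static void bbSearch(const std::vector<Obj>& objs, std::size_t i,
                     int room, int value, int& best) {
    if (value > best) best = value;            // update the overall bound
    if (i == objs.size() || bound(value, room, objs, i) <= best)
        return;                                // prune: cannot beat best so far
    if (objs[i].weight <= room)                // branch: take object i
        bbSearch(objs, i + 1, room - objs[i].weight,
                 value + objs[i].value, best);
    bbSearch(objs, i + 1, room, value, best);  // branch: skip object i
}

int bbKnapsack(const std::vector<Obj>& objs, int capacity) {
    int best = 0;
    bbSearch(objs, 0, capacity, 0, best);
    return best;
}
```

On the running example (weights 2, 1, 3, 2; values 5, 4, 8, 7; capacity 5) this reaches the overall bound 16 shown on the final slide.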

0/1 Knapsack: Estimated bound
• Estimated bound = current value
+ available capacity × best value-to-weight ratio among
the unchosen objects

Branch-and-bound
Each node shows the sack, its estimated bound, and its (current weight, current value):

Level 0:  {} bound 20, (0,0)
Level 1:  {1} bound 17, (2,5)   {2} bound 18, (1,4)   {3} bound 16, (3,8)   {4} bound 19, (2,7)
Level 2:  {1,2} bound 16, (3,9)   {1,3} bound 13, (5,13)   {2,3} bound 15.5, (4,12)   {2,4} bound 16.3, (3,11)   {3,4} bound 15, (5,15)
Level 3:  {1,2,4} bound 16, (5,16) → overall bound becomes 16

object  weight  value  ratio
  1       2       5    2.5
  2       1       4    4
  3       3       8    2.67
  4       2       7    3.5

Capacity = 5