Algorithms

The document describes the divide-and-conquer algorithm for finding the maximum-sum subarray of an array. It explains that the problem can be divided into finding the maximum subarray in the left half, in the right half, or one that crosses the midpoint; the maximum-sum subarray of the overall array is the best of these three. The algorithm runs in O(n log n) time, an improvement over the brute-force O(n^2) method of evaluating all subarrays.


Divide-and-Conquer Technique:

Finding Maximum & Minimum

Dr. Md. Abul Kashem Mia, Professor, CSE Dept, BUET


Divide-and-Conquer
 Divide-and-Conquer is a general algorithm design paradigm:
  Divide the problem into a number of subproblems that are smaller instances of the same problem
  Conquer the subproblems by solving them recursively
  Combine the solutions to the subproblems into the solution for the original problem
 The base case for the recursion is subproblems of constant size
 Analysis can be done using recurrence equations



Divide-and-Conquer

a problem of size n
  ↓ divide
subproblem 1 of size n/2        subproblem 2 of size n/2
  ↓ conquer
a solution to subproblem 1      a solution to subproblem 2
  ↓ combine
a solution to the original problem


Finding Maximum and Minimum

• Input: an array A[1..n] of n numbers


• Output: the maximum and minimum value
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
A 13 -13 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7

List (n elements): min, max
→ split into:
List 1 (n/2 elements): min1, max1
List 2 (n/2 elements): min2, max2

min = MIN(min1, min2)
max = MAX(max1, max2)
Finding Maximum and Minimum

The straightforward algorithm:

max ← min ← A[1];
for i ← 2 to n do
    if (A[i] > max) then max ← A[i];
    if (A[i] < min) then min ← A[i];

No. of comparisons: 2(n – 1)



Finding Maximum and Minimum
The Divide-and-Conquer algorithm:
procedure Rmaxmin(i, j, fmax, fmin);  // i, j are index parameters; fmax, fmin are output parameters
begin
    case
    i = j:     fmax ← fmin ← A[i];
    i = j − 1: if A[i] < A[j] then fmax ← A[j]; fmin ← A[i];
               else fmax ← A[i]; fmin ← A[j];
    else:      mid ← ⌊(i + j)/2⌋;
               call Rmaxmin(i, mid, gmax, gmin);
               call Rmaxmin(mid+1, j, hmax, hmin);
               fmax ← MAX(gmax, hmax);
               fmin ← MIN(gmin, hmin);
    end
end;
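As a concrete check of the pseudocode above, here is a direct Python transcription (a sketch; it uses 0-based indices instead of the slides' 1-based ones, and returns the pair instead of using output parameters):

```python
def rmaxmin(A, i, j):
    """Return (max, min) of A[i..j] (0-based, inclusive) by divide-and-conquer."""
    if i == j:                            # one element: no comparison
        return A[i], A[i]
    if i == j - 1:                        # two elements: one comparison
        return (A[j], A[i]) if A[i] < A[j] else (A[i], A[j])
    mid = (i + j) // 2
    gmax, gmin = rmaxmin(A, i, mid)       # left half
    hmax, hmin = rmaxmin(A, mid + 1, j)   # right half
    return max(gmax, hmax), min(gmin, hmin)

A = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(rmaxmin(A, 0, len(A) - 1))          # (60, -8)
```

This reproduces the result of the worked example on the next slide.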
Finding Maximum and Minimum
Example: find max and min in the array:
22, 13, -5, -8, 15, 60, 17, 31, 47 (n = 9)

Index: 1  2   3   4  5   6   7   8   9
Array: 22 13  -5  -8 15  60  17  31  47

Call tree, with each node labelled (i, j, fmax, fmin):

Rmaxmin(1, 9, 60, -8)
├── Rmaxmin(1, 5, 22, -8)
│   ├── Rmaxmin(1, 3, 22, -5)
│   │   ├── Rmaxmin(1, 2, 22, 13)
│   │   └── Rmaxmin(3, 3, -5, -5)
│   └── Rmaxmin(4, 5, 15, -8)
└── Rmaxmin(6, 9, 60, 17)
    ├── Rmaxmin(6, 7, 60, 17)
    └── Rmaxmin(8, 9, 47, 31)


Finding Maximum and Minimum

The recurrence for the worst-case running time T(n) is

T(n) = Θ(1)            if n = 1 or 2
T(n) = 2T(n/2) + Θ(1)  if n > 2

equivalently

T(n) = b              if n = 1 or 2
T(n) = 2T(n/2) + b    if n > 2

By solving the recurrence, we get T(n) = O(n).


Divide-and-Conquer Technique:
Maximum Sum Subarray problem





Maximum Sum Subarray Problem

• Input: an array A[1..n] of n numbers


– Assume that some of the numbers are negative,
because this problem is trivial when all numbers
are nonnegative
• Output: a nonempty subarray A[i..j] having the
largest sum S[i, j] = A[i] + A[i+1] + ... + A[j]

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
A 13 -3 -25 20 -3 -16 -23 18 20 -7 12 -5 -22 15 -4 7

maximum subarray



What is a maximum subarray?
Ans: the contiguous subarray with the largest sum.

Target array: 1 -4 3 2. All its subarrays and their sums:

[1] = 1          [1 -4] = -3       [1 -4 3] = 0      [1 -4 3 2] = 2
[-4] = -4        [-4 3] = -1       [-4 3 2] = 1
[3] = 3          [3 2] = 5  ← Max!
[2] = 2

What is the brute-force time?
Brute-Force Algorithm

All possible contiguous subarrays


 A[1..1], A[1..2], A[1..3], ..., A[1..(n-1)], A[1..n]
 A[2..2], A[2..3], ..., A[2..(n-1)], A[2..n]
 ...
 A[(n-1)..(n-1)], A[(n-1)..n]
 A[n..n]

How many of them in total? n(n+1)/2 = O(n^2)

Algorithm: For each subarray, compute the sum.


Find the subarray that has the maximum sum.
Brute-Force Algorithm

Example: 2 -6 -1 3 -1 2 -2
sum from A[1]: 2 -4 -5 -2 -3 -1 -3
sum from A[2]: -6 -7 -4 -5 -3 -5
sum from A[3]: -1 2 1 3 1
sum from A[4]: 3 2 4 2
sum from A[5]: -1 1 -1
sum from A[6]: 2 0
sum from A[7]: -2



Brute-Force Algorithm

Outer loop: index variable i to indicate start of subarray,


for 1 ≤ i ≤ n, i.e., A[1], A[2], ..., A[n]
 for i = 1 to n do ...

Inner loop: for each start index i, we need to go through


A[i..i], A[i..(i+1)], ..., A[i..n]
 use an index j for i ≤ j ≤ n, i.e., consider A[i..j]
 for j = i to n do ...



Brute-Force Algorithm
max = -∞
for i = 1 to n do
begin
    sum = 0
    for j = i to n do
    begin
        sum = sum + A[j]
        if sum > max
            then max = sum
    end
end

Time complexity? O(n^2)
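The same double loop in runnable form; a Python sketch (0-based indices) that also tracks the best (i, j) pair rather than just the sum:

```python
def max_subarray_brute(A):
    """O(n^2) brute force: extend each start i rightward, reusing the running sum."""
    best_sum = float("-inf")
    best_i = best_j = 0
    for i in range(len(A)):
        s = 0
        for j in range(i, len(A)):
            s += A[j]                    # sum of A[i..j] in O(1) from A[i..j-1]
            if s > best_sum:
                best_sum, best_i, best_j = s, i, j
    return best_i, best_j, best_sum

print(max_subarray_brute([2, -6, -1, 3, -1, 2, -2]))  # (3, 5, 4): subarray 3, -1, 2
```

This matches the maximum entry (4) in the triangle of sums on the previous slide.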



Divide-and-Conquer Algorithm
Possible locations of a maximum subarray A[i..j] of A[low..high], where mid = ⌊(low + high)/2⌋:
 entirely in A[low..mid] (low ≤ i ≤ j ≤ mid)
 entirely in A[mid+1..high] (mid < i ≤ j ≤ high)
 crossing the midpoint (low ≤ i ≤ mid < j ≤ high)

(figure: the three possible locations of subarrays of A[low..high], with mid and mid+1 marked)
Divide-and-Conquer Algorithm
FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
    left-sum = -∞             // find a maximum subarray of the form A[i..mid]
    sum = 0
    for i = mid downto low
        sum = sum + A[i]
        if sum > left-sum
            left-sum = sum
            max-left = i
    right-sum = -∞            // find a maximum subarray of the form A[mid+1..j]
    sum = 0
    for j = mid + 1 to high
        sum = sum + A[j]
        if sum > right-sum
            right-sum = sum
            max-right = j
    // return the indices and the sum of the two subarrays
    return (max-left, max-right, left-sum + right-sum)
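A Python transcription of FIND-MAX-CROSSING-SUBARRAY (a sketch; 0-based, inclusive bounds, whereas the slides use 1-based indexing):

```python
def find_max_crossing_subarray(A, low, mid, high):
    """Best subarray A[i..j] with low <= i <= mid < j <= high."""
    left_sum = float("-inf")
    s, max_left = 0, mid
    for i in range(mid, low - 1, -1):     # grow leftward from mid
        s += A[i]
        if s > left_sum:
            left_sum, max_left = s, i
    right_sum = float("-inf")
    s, max_right = 0, mid + 1
    for j in range(mid + 1, high + 1):    # grow rightward from mid + 1
        s += A[j]
        if s > right_sum:
            right_sum, max_right = s, j
    return max_left, max_right, left_sum + right_sum

A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7]
print(find_max_crossing_subarray(A, 0, 4, 9))   # (3, 8, 16): S[4..9] = 16 in 1-based terms
```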
Divide-and-Conquer Algorithm

(figure) A crossing subarray A[i..j] comprises the two subarrays A[i..mid] (running from i up to mid) and A[mid+1..j] (running from mid+1 up to j).


Divide-and-Conquer Algorithm
mid = 5
Index: 1  2   3   4   5   6   7   8  9  10
A:     13 -3 -25  20  -3 -16 -23  18 20  -7

Left sums, growing from mid leftward:
S[5..5] = -3
S[4..5] = 17  ← (max-left = 4)
S[3..5] = -8
S[2..5] = -11
S[1..5] = 2

Right sums, growing from mid+1 rightward:
S[6..6]  = -16
S[6..7]  = -39
S[6..8]  = -21
S[6..9]  = -1  ← (max-right = 9)
S[6..10] = -8

⇒ the maximum subarray crossing mid is S[4..9] = 17 + (-1) = 16
Divide-and-Conquer Algorithm
FIND-MAXIMUM-SUBARRAY(A, low, high)
    if high == low
        return (low, high, A[low])    // base case: only one element
    else mid = ⌊(low + high)/2⌋
        (left-low, left-high, left-sum) = FIND-MAXIMUM-SUBARRAY(A, low, mid)
        (right-low, right-high, right-sum) = FIND-MAXIMUM-SUBARRAY(A, mid + 1, high)
        (cross-low, cross-high, cross-sum) = FIND-MAX-CROSSING-SUBARRAY(A, low, mid, high)
        if left-sum ≥ right-sum and left-sum ≥ cross-sum
            return (left-low, left-high, left-sum)
        elseif right-sum ≥ left-sum and right-sum ≥ cross-sum
            return (right-low, right-high, right-sum)
        else return (cross-low, cross-high, cross-sum)

Initial call: FIND-MAXIMUM-SUBARRAY(A, 1, n)
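Putting the two procedures together, a self-contained Python sketch (0-based indices; the crossing scan is inlined so the function stands alone):

```python
def find_maximum_subarray(A, low, high):
    """Return (i, j, sum) of a maximum subarray of A[low..high] in O(n lg n)."""
    if low == high:
        return low, high, A[low]                     # base case: one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    # best subarray crossing the midpoint
    left_sum = right_sum = float("-inf")
    s, i = 0, mid
    for k in range(mid, low - 1, -1):
        s += A[k]
        if s > left_sum:
            left_sum, i = s, k
    s, j = 0, mid + 1
    for k in range(mid + 1, high + 1):
        s += A[k]
        if s > right_sum:
            right_sum, j = s, k
    cross = (i, j, left_sum + right_sum)
    return max(left, right, cross, key=lambda t: t[2])

A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_maximum_subarray(A, 0, len(A) - 1))       # (7, 10, 43): 18 + 20 - 7 + 12
```

On the 16-element example array this finds A[8..11] (1-based) with sum 43.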
Divide-and-Conquer Algorithm
Analyzing time complexity

FIND-MAX-CROSSING-SUBARRAY: Θ(n), where n = high − low + 1

FIND-MAXIMUM-SUBARRAY:

T(n) = Θ(1)            if n = 1
T(n) = 2T(n/2) + Θ(n)  if n > 1

⇒ T(n) = Θ(n lg n)   (similar to merge sort)


Conclusion: Divide-and-Conquer

 This divide-and-conquer algorithm is clearly substantially faster than the brute-force method. It required some cleverness, and the programming is a little more complicated, but the payoff is large.

 Divide-and-conquer is just one of several powerful techniques for algorithm design
 Divide-and-conquer algorithms can be analyzed using recurrences
 Can lead to more efficient algorithms



Divide-and-Conquer Technique:
Merge Sort, Quick Sort





Merge Sort and Quick Sort

Two well-known sorting algorithms adopt this divide-and-


conquer strategy

 Merge sort
 Divide step is trivial – just split the list into two equal parts

 Work is carried out in the conquer step by merging two


sorted lists

 Quick sort
 Work is carried out in the divide step using a pivot element

 Conquer step is trivial



Merge Sort: Algorithm
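The pseudocode on the original slide did not survive extraction; as a stand-in, here is a minimal Python sketch of the scheme described above (trivial divide, all work in the merge):

```python
def merge_sort(A):
    """Sort a list: split in half, sort each half recursively, merge."""
    if len(A) <= 1:
        return A                               # base case: already sorted
    mid = len(A) // 2
    left, right = merge_sort(A[:mid]), merge_sort(A[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):    # merge two sorted lists
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]          # one side may have leftovers

print(merge_sort([7, 2, 9, 4, 3, 8, 6, 1]))    # [1, 2, 3, 4, 6, 7, 8, 9]
```

Using `<=` in the merge keeps equal elements in their original order, so the sort is stable.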
Merge Sort: Example

Execution Example

The recursion partitions 7 2 9 4 | 3 8 6 1 down to single elements, then merges the sorted halves back up:

7 2 9 4 3 8 6 1            → 1 2 3 4 6 7 8 9
├── 7 2 9 4                → 2 4 7 9
│   ├── 7 2                → 2 7
│   │   ├── 7 → 7
│   │   └── 2 → 2
│   └── 9 4                → 4 9
│       ├── 9 → 9
│       └── 4 → 4
└── 3 8 6 1                → 1 3 6 8
    ├── 3 8                → 3 8
    │   ├── 3 → 3
    │   └── 8 → 8
    └── 6 1                → 1 6
        ├── 6 → 6
        └── 1 → 1

(the original slides animate this tree one step at a time: partition, recursive calls down to the base cases, then the merges back up)
Merge Sort: Running Time

The recurrence for the worst-case running time T(n) is

T(n) = Θ(1)            if n = 1
T(n) = 2T(n/2) + Θ(n)  if n > 1

equivalently

T(n) = b              if n = 1
T(n) = 2T(n/2) + bn   if n > 1

Solve this recurrence by
(1) iterative expansion
(2) using the recursion tree


Merge Sort: Running Time (Iterative Expansion)

T(n) = 2T(n/2) + bn
     = 2(2T(n/2^2) + b(n/2)) + bn
     = 2^2 T(n/2^2) + 2bn
     = 2^3 T(n/2^3) + 3bn
     = 2^4 T(n/2^4) + 4bn
     = ...
     = 2^i T(n/2^i) + ibn

Note that the base case, T(n) = b, occurs when 2^i = n, that is, i = log n.
So T(n) = bn + bn log n.
Thus, T(n) is O(n log n).
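The closed form can be sanity-checked numerically for powers of two; a small sketch that evaluates the recurrence directly:

```python
import math

def T(n, b=1):
    """Evaluate T(n) = 2T(n/2) + bn, T(1) = b, for n a power of 2."""
    return b if n == 1 else 2 * T(n // 2, b) + b * n

for n in (2, 8, 64, 1024):
    assert T(n) == n + n * int(math.log2(n))   # bn + bn log n, with b = 1
print(T(64))                                   # 448 = 64 + 64 * 6
```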
Merge Sort: Running Time (Recursion Tree)

 Draw the recursion tree for the recurrence relation and look for a pattern:

T(n) = b              if n = 1
T(n) = 2T(n/2) + bn   if n ≥ 2

depth   #nodes   size of each   time per level
0       1        n              bn
1       2        n/2            bn
i       2^i      n/2^i          bn
...     ...      ...            ...

Total time = bn + bn log n
(last level plus all previous levels)


Quick Sort: Algorithm

 Another divide-and-conquer algorithm


 The array A[p..r] is partitioned into two non-empty
subarrays A[p..q] and A[q+1..r]
 Invariant: All elements in A[p..q] are less than all elements in
A[q+1..r]
 The subarrays are recursively sorted by calls to quicksort
 Unlike merge sort, no combining step: two subarrays form
an already-sorted array



Quick Sort: Algorithm



Quick Sort: Algorithm (Partition)

 Clearly, all the actions take place in the partition()


function
 Rearranges the subarrays in place
 End result:
 Two subarrays
 All values in first subarray  all values in the second

 Returns the index of the “pivot” element separating the two


subarrays



Quick Sort: Algorithm (the partition step; pivot = A[r])

 From i + 1 to j is a window of elements > A[r]. The cursor j moves right one step at a time.

 If the cursor j "discovers" an element ≤ A[r], then this element is swapped with the front element of the window, effectively moving the window right one step; if it discovers an element > A[r], then the window simply grows by one element.
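The window description above matches the Lomuto-style partition from CLRS; a Python sketch (note one interface difference: this version returns the pivot's final index and excludes it from both recursive calls, while the slides' variant returns q with A[p..q] ≤ A[q+1..r]):

```python
def partition(A, p, r):
    """Lomuto partition of A[p..r]: pivot = A[r]; A[p..i] <= pivot, A[i+1..j-1] > pivot."""
    pivot = A[r]
    i = p - 1
    for j in range(p, r):                # j scans; the "> pivot" window is A[i+1..j-1]
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]      # swap with the front of the window
    A[i + 1], A[r] = A[r], A[i + 1]      # place the pivot between the two regions
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)           # pivot A[q] is already in its final place
        quicksort(A, q + 1, r)

A = [5, 3, 8, 1, 9, 2, 7]
quicksort(A, 0, len(A) - 1)
print(A)                                 # [1, 2, 3, 5, 7, 8, 9]
```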




Quick Sort: Analysis

 What will be the worst case for the algorithm?


 Partition is always unbalanced
 What will be the best case for the algorithm?
 Partition is perfectly balanced

 Which is more likely?


 The latter, by far, except...
 Will any particular input elicit the worst case?
 Yes: Already-sorted input



Quick Sort: Analysis

 In the worst case:
   T(1) = Θ(1)
   T(n) = T(n − 1) + Θ(n)
   Works out to T(n) = Θ(n^2)

 In the best case:
   T(1) = Θ(1)
   T(n) = 2T(n/2) + Θ(n)
   Works out to T(n) = Θ(n lg n)



Quick Sort: Analysis

 The real liability of quicksort is that it runs in O(n^2) on already-sorted input

 Book discusses two solutions:


 Randomize the input array, OR
 Pick a random pivot element

 How will these solve the problem?


 By ensuring that no particular input can be chosen to make quicksort run in O(n^2) time

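One way to realize the random-pivot fix; a functional (not in-place) sketch, named `rquicksort` here for illustration:

```python
import random

def rquicksort(A):
    """Quicksort with a uniformly random pivot: expected O(n lg n) on any input."""
    if len(A) <= 1:
        return A
    pivot = random.choice(A)             # random pivot defeats adversarial inputs
    return (rquicksort([x for x in A if x < pivot])
            + [x for x in A if x == pivot]
            + rquicksort([x for x in A if x > pivot]))

print(rquicksort(list(range(10, 0, -1))))   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Because the pivot is random, no fixed input (sorted or otherwise) forces the Θ(n^2) behavior; only an unlucky run of random choices can.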


Analyzing Quicksort: Average Case

 Assuming random input, the average-case running time is much closer to O(n lg n) than O(n^2)
 First, a more intuitive explanation/example:
 Suppose that partition() always produces a 9-to-1 split.
This looks quite unbalanced!
 The recurrence is thus:
T(n) = T(9n/10) + T(n/10) + n
 How deep will the recursion go? (draw it)



Analyzing Quicksort: Average Case

(figure: the recursion tree for T(n) = T(9n/10) + T(n/10) + n; its depth is log_{10/9} n = O(log n) and each level costs at most n, so even this unbalanced split gives O(n log n))



Analyzing Quicksort: Average Case
 Intuitively, a real-life run of quicksort will produce a mix
of “bad” and “good” splits
 Randomly distributed among the recursion tree
 Pretend for intuition that they alternate between best-case
(n/2 : n/2) and worst-case (n-1 : 1)
 What happens if we bad-split root node, then good-split the
resulting size (n-1) node?
 We end up with three subarrays, size 1, (n-1)/2, (n-1)/2
 Combined cost of splits = n + n -1 = 2n -1 = O(n)

 No worse than if we had good-split the root node!



Analyzing Quicksort: Average Case
 Intuitively, the O(n) cost of a bad split
(or 2 or 3 bad splits) can be absorbed
into the O(n) cost of each good split
 Thus running time of alternating bad and good splits is
still O(n lg n), with slightly higher constants
 How can we be more rigorous?



Solving Recurrences:
Master Theorem



Solving Recurrences

● Iteration Method
● Master Method
● Recursion Tree Method



Solving Recurrences: Iteration Method

 Given: a divide and conquer algorithm


An algorithm that divides the problem of size n into a subproblems, each of size n/b, where a ≥ 1, b > 1
 The a subproblems are solved recursively, each in time T(n/b)
 Let the cost of each stage (i.e., the work to divide the problem
+ combine solved subproblems) be described by cn
 Then, the recurrence is

T(n) = c             if n = 1
T(n) = aT(n/b) + cn  if n > 1


Solving Recurrences: Iteration Method

● The “iteration method”


■ Expand the recurrence
■ Work some algebra to express as a summation
■ Evaluate the summation
● We show this by expanding

T(n) = c             if n = 1
T(n) = aT(n/b) + cn  if n > 1
● Expanding the recurrence:

T(n) = aT(n/b) + cn
     = a(aT(n/b^2) + cn/b) + cn
     = a^2 T(n/b^2) + cn(a/b + 1)
     = a^3 T(n/b^3) + cn(a^2/b^2 + a/b + 1)
     = ...
     = a^k T(n/b^k) + cn(a^(k-1)/b^(k-1) + ... + a^2/b^2 + a/b + 1)

● For n/b^k = 1
 ■ n = b^k → k = log_b n
 ■ T(n) = a^k T(1) + cn(a^(k-1)/b^(k-1) + ... + a/b + 1)
        = a^k c + cn(a^(k-1)/b^(k-1) + ... + a/b + 1)
        = c a^k b^k / b^k + cn(a^(k-1)/b^(k-1) + ... + a/b + 1)
        = cn a^k/b^k + cn(a^(k-1)/b^(k-1) + ... + a/b + 1)      [since b^k = n]
        = cn(a^k/b^k + ... + a^2/b^2 + a/b + 1)

● What if a = b?
 ■ Every term of the sum is 1, and there are k + 1 terms:
   T(n) = cn(k + 1) = cn(log_b n + 1) = Θ(n log n)

● What if a < b?
 ■ Recall that x^k + x^(k-1) + ... + x + 1 = (x^(k+1) − 1)/(x − 1)
 ■ With x = a/b < 1 the sum is bounded by the constant 1/(1 − a/b), so
   a^k/b^k + ... + a/b + 1 = Θ(1)
 ■ T(n) = cn · Θ(1) = Θ(n)

● What if a > b?
 ■ Now x = a/b > 1, and the geometric sum is dominated by its largest term:
   a^k/b^k + ... + a/b + 1 = Θ(a^k/b^k)
 ■ T(n) = cn · Θ(a^k/b^k)
        = cn · Θ(a^(log_b n) / b^(log_b n)) = cn · Θ(a^(log_b n) / n)
   recall the logarithm fact: a^(log_b n) = n^(log_b a)
        = cn · Θ(n^(log_b a) / n)
        = Θ(n^(log_b a))

● So...

T(n) = Θ(n)            if a < b
T(n) = Θ(n log_b n)    if a = b
T(n) = Θ(n^(log_b a))  if a > b


The Master Theorem

 Given: a divide and conquer algorithm


 An algorithm that divides the problem of size n into a subproblems, each of size n/b, where a ≥ 1, b > 1
 The a subproblems are solved recursively, each in
time T(n/b)
 Let the cost of each stage (i.e., the work to divide the
problem + combine solved subproblems) be described
by the function f(n), where f is asymptotically positive
 T(n) is a monotonically increasing function
 Then, the Master Theorem gives us a cookbook for
the algorithm’s running time.



The Master Theorem: Pitfalls

 You cannot use the Master Theorem if


 T(n) is not monotone, e.g. T(n) = sin(n)
 f(n) is not a polynomial, e.g., T(n) = 2T(n/2) + 2^n
 b cannot be expressed as a constant, e.g.

 Note that the Master Theorem does not solve all


recurrence equations



The Master Theorem
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let
T(n) be defined on the nonnegative integers by the recurrence
T(n) = a T(n/b) + f(n).
Then T(n) has the following asymptotic bounds:
o If f(n) = O(n^(log_b a − e)) for some constant e > 0, then
  T(n) = Θ(n^(log_b a))
o If f(n) = Θ(n^(log_b a)), then
  T(n) = Θ(n^(log_b a) log n)
o If f(n) = Ω(n^(log_b a + e)) for some constant e > 0, and if
  a f(n/b) ≤ c f(n) for some constant c < 1 and all sufficiently large n, then
  T(n) = Θ(f(n))


Excuse me, what did it say ???

 Essentially, the Master theorem compares the function f (n)


with the function g(n) = nlogb(a).
Roughly, the theorem says:
 If f (n) << g(n) then T(n) = (g(n))
 If f (n)  g(n) then T(n) = (g(n) logb n)
 If f (n) >> g(n) then T(n) = (f (n))

 Now go back and memorize the theorem !!!



Idea of Master Theorem

Recursion tree:

depth 0: 1 node of size n, branching factor a
depth 1: a nodes of size n/b
depth 2: a^2 nodes of size n/b^2
...
height h = log_b n
#leaves = a^h = a^(log_b n) = n^(log_b a), each contributing T(1)
Idea of Master Theorem
Let us iteratively substitute the recurrence:

T(n) = aT(n/b) + f(n)
     = a(aT(n/b^2) + f(n/b)) + f(n)
     = a^2 T(n/b^2) + a f(n/b) + f(n)
     = a^3 T(n/b^3) + a^2 f(n/b^2) + a f(n/b) + f(n)
     = ...
     = a^(log_b n) T(1) + Σ_{i=0}^{(log_b n) − 1} a^i f(n/b^i)
     = n^(log_b a) T(1) + Σ_{i=0}^{(log_b n) − 1} a^i f(n/b^i)
Idea of Master Theorem
 Thus, we obtained
   T(n) = n^(log_b a) T(1) + Σ a^i f(n/b^i)
The proof proceeds by distinguishing three cases:
 The first term is dominant: f(n) = O(n^(log_b a − e))
 Each term of the summation is equally dominant: f(n) = Θ(n^(log_b a))
 The second term is dominant and can be bounded by a geometric series: f(n) = Ω(n^(log_b a + e))


Master Theorem: Three Common Cases
Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − e)) for some constant e > 0.
   • f(n) grows polynomially slower than n^(log_b a)
   Solution: T(n) = Θ(n^(log_b a)).

2. f(n) = Θ(n^(log_b a))
   • f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) log n).

3. f(n) = Ω(n^(log_b a + e)) for some constant e > 0.
   • f(n) grows polynomially faster than n^(log_b a)
   Solution: T(n) = Θ(f(n)).
Master Theorem: Examples
Ex. T(n) = 4T(n/2) + n
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n.
  CASE 1: f(n) = O(n^(2 − e)) for e = 1.
  ⇒ T(n) = Θ(n^2).

Ex. T(n) = 4T(n/2) + n^2
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^2.
  CASE 2: f(n) = Θ(n^2).
  ⇒ T(n) = Θ(n^2 log n).

Ex. T(n) = 4T(n/2) + n^3
  a = 4, b = 2 ⇒ n^(log_b a) = n^2; f(n) = n^3.
  CASE 3: f(n) = Ω(n^(2 + e)) for e = 1.
  ⇒ T(n) = Θ(n^3).
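For the common special case f(n) = n^k, the three cases can be mechanized; a sketch (the helper `master` is just for illustration, not a standard library function):

```python
import math

def master(a, b, k):
    """Classify T(n) = a*T(n/b) + n^k via the Master Theorem (polynomial f)."""
    crit = math.log(a, b)                    # critical exponent log_b a
    if math.isclose(k, crit):
        return f"Theta(n^{k:g} log n)"       # case 2: f matches n^(log_b a)
    if k < crit:
        return f"Theta(n^{crit:g})"          # case 1: leaves dominate
    return f"Theta(n^{k:g})"                 # case 3: root dominates

print(master(4, 2, 1))   # Theta(n^2)
print(master(4, 2, 2))   # Theta(n^2 log n)
print(master(4, 2, 3))   # Theta(n^3)
```

`math.isclose` guards the equality test against floating-point error in `log_b a`. Note the regularity condition of case 3 holds automatically for f(n) = n^k.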
Algorithms:
Greedy Method

Activity-Selection Problem


Greedy Algorithms: Principles

 A greedy algorithm always makes the


choice that looks best at the moment.
 A greedy algorithm works in phases.
At each phase:
 You take the best you can get right now,
without regard for future consequences.
 You hope that by choosing a local optimum
at each step, you will end up at a global
optimum.
 For some problems, it works.



An Activity Selection Problem
 Input: A set of activities S = {a1,…, an}
 Each activity ai has a start time si and a finish time fi,
where 0 ≤ si < fi < ∞
 If selected, activity ai takes place during the half-open
time interval [si, fi)

 Two activities are compatible if and only if their


intervals do not overlap

 Output: a maximum-size subset of mutually


compatible activities



The Activity Selection Problem
 Here are a set of start and finish times
i 1 2 3 4 5 6 7 8 9 10 11
si 1 3 0 5 3 5 6 8 8 2 12
fi 4 5 6 7 8 9 10 11 12 13 14

 What is the maximum number of activities that can be


completed?
• {a3, a9, a11} can be completed
• But so can {a1, a4, a8, a11} which is a larger set
• But it is not unique, consider {a2, a4, a9 , a11}



Interval Representation
 Here are a set of start and finish times
i 1 2 3 4 5 6 7 8 9 10 11
si 1 3 0 5 3 5 6 8 8 2 12
fi 4 5 6 7 8 9 10 11 12 13 14

(figures: the eleven activities drawn as intervals on a time line from 0 to 15)

{a3, a9, a11} can be completed
{a1, a4, a8, a11} can be completed
{a2, a4, a9, a11} can be completed
Early Finish Greedy
 Select the activity with the earliest finish
 Eliminate the activities that could not be scheduled
 Repeat!

(figures: the greedy selection animated on the time line, picking a1, then a4, then a8, then a11)
Assuming activities are sorted by finish time

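The early-finish rule in runnable form, using the slides' eleven activities (already sorted by finish time, as assumed above):

```python
def activity_selection(s, f):
    """Greedy early-finish selection; s, f are sorted by finish time (0-based)."""
    chosen = [0]                     # the greedy choice: earliest finish
    last_finish = f[0]
    for i in range(1, len(s)):
        if s[i] >= last_finish:      # compatible with everything chosen so far
            chosen.append(i)
            last_finish = f[i]
    return chosen

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print([i + 1 for i in activity_selection(s, f)])   # [1, 4, 8, 11], i.e. {a1, a4, a8, a11}
```

After the O(n log n) sort, the selection itself is a single O(n) pass.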


Why is it Greedy?
 Greedy in the sense that it leaves as much opportunity as
possible for the remaining activities to be scheduled
 The greedy choice is the one that maximizes the amount
of unscheduled time remaining

 We will show that this algorithm uses the following


properties
 The algorithm satisfies the greedy-choice property
 The problem has the optimal substructure property



Elements of Greedy Strategy

 A greedy algorithm makes a sequence of choices, each


of the choices that seems best at the moment is chosen
 NOT always produce an optimal solution
 Two ingredients that are exhibited by most problems
that lend themselves to a greedy strategy
 Greedy-choice property
 Optimal substructure



Greedy-Choice Property
 A globally optimal solution can be arrived at by
making a locally optimal (greedy) choice
 Make whatever choice seems best at the moment and
then solve the sub-problem arising after the choice is
made
 The choice made by a greedy algorithm may depend on
choices so far, but it cannot depend on any future
choices or on the solutions to sub-problems
 Of course, we must prove that a greedy choice at
each step yields a globally optimal solution



Optimal Substructures

 A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to sub-problems
 If an optimal solution A to S begins with activity 1, then A′ = A – {1} is optimal to S′ = {i ∈ S : si ≥ f1}



Greedy-Choice Property

 Show there is an optimal solution that begins with a greedy choice (with activity 1, which has the earliest finish time)
 Suppose A ⊆ S is an optimal solution
 Order the activities in A by finish time. The first activity in A
is k
 If k = 1, the schedule A begins with a greedy choice
 If k ≠ 1, show that there is an optimal solution B to S that begins with
the greedy choice, activity 1
 Let B = A – {k}  {1}
 f1 ≤ fk ⇒ the activities in B are disjoint (compatible)
 B has the same number of activities as A

 Thus, B is optimal



Optimal Substructure Property
 Once the greedy choice of activity 1 is made, the problem
reduces to finding an optimal solution for the activity-selection
problem over those activities in S that are compatible with
activity 1
 Optimal Substructure
 If A is optimal to S, then A′ = A – {1} is optimal to S′ = {i ∈ S : si ≥ f1}

 Why?

 If we could find a solution B′ to S′ with more activities than A′, adding activity 1 to B′ would yield a solution B to S with more activities than A
 contradicting the optimality of A
 After each greedy choice is made, we are left with an
optimization problem of the same form as the original problem
 By induction on the number of choices made, making the greedy
choice at every step produces an optimal solution



Algorithms:
Greedy Method

Knapsack Problem



Greedy Algorithms: Principles

 A greedy algorithm always makes the choice that looks best at the moment.
 A greedy algorithm works in phases.
At each phase:
 You take the best you can get right now,
without regard for future consequences.
 You hope that by choosing a local optimum
at each step, you will end up at a global
optimum.
 For some problems, it works.



The Knapsack Problem
The famous Knapsack Problem:
A thief breaks into a museum. Fabulous paintings, sculptures, and
jewels are everywhere. The thief has a good eye for the value of
these objects, and knows that each will fetch hundreds or thousands
of dollars on the clandestine art collector’s market. But, the thief
has only brought a single knapsack to the scene of the robbery, and
can take away only what he can carry. What items should the thief
take to maximize the haul?

Which items
should I take?



The Knapsack Problem

There are two versions of the problem:


(1) “0-1 knapsack problem”
Items are indivisible: you either take an item or
not. Solved with dynamic programming.

(2) “Fractional knapsack problem”


Items are divisible: you can take any fraction
of an item. Solved with a greedy algorithm.



The Knapsack Problem
 More formally, the 0-1 knapsack problem:
 The thief must choose among n items, where the ith item is worth vi dollars and weighs wi pounds
 Carrying at most W pounds, maximize value
 Note: assume vi, wi, and W are all integers
 “0-1” because each item must be taken or left in entirety

 A variation, the fractional knapsack problem:


 Thief can take fractions of items
 Think of items in 0-1 problem as gold ingots, in fractional
problem as buckets of gold dust.



Optimal Substructure Property
 Both problems exhibit the optimal substructure property.
 To show this for both the problems, consider the most valuable
load weighing at most W pounds
 Q: If we remove item j from the load, what do we know about
the remaining load?
 A: The remaining load must be the most valuable load weighing
at most W - wj that the thief could take from the n-1 original
items excluding item j.



Fractional Knapsack Problem
 Knapsack capacity: W

 There are n items: the i-th item has value vi and weight wi

 Goal:

 find xi such that 0 ≤ xi ≤ 1 for i = 1, 2, .., n,

 ∑ wixi ≤ W, and

 ∑ xivi is maximum



Fractional Knapsack - Example

[Figure: knapsack of capacity 50. Item 1: 10 pounds, $60 ($6/pound); Item 2: 20 pounds, $100 ($5/pound); Item 3: 30 pounds, $120 ($4/pound). Take all of Item 1 ($60) and all of Item 2 ($100), plus 20/30 of Item 3 ($80), for a total of $240.]


Fractional Knapsack Problem
Greedy strategy:
 Pick the item with the maximum value per pound vi/wi

 If the supply of that element is exhausted and the thief can


carry more: take as much as possible from the item with the
next greatest value per pound

 It is good to order items based on their value per pound


v1/w1 ≥ v2/w2 ≥ … ≥ vn/wn



Fractional Knapsack Problem
Alg.: Fractional-Knapsack (W, v[n], w[n])

1. while w > 0 and as long as there are items remaining

2.     pick item i with maximum vi/wi

3.     xi ← min (1, w/wi)

4.     remove item i from list

5.     w ← w – xi·wi
 w – the amount of space remaining in the knapsack (initially w = W)

 Running time: Θ(n) if items are already ordered; else Θ(n log n)
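The pseudocode translates directly into Python (a sketch; the `(value, weight)` pair representation and function name are my own):

```python
def fractional_knapsack(capacity, items):
    """Greedy fractional knapsack: take items in decreasing order of
    value per pound, taking a fraction of the last item if needed.
    items: list of (value, weight) pairs; returns the maximum total value."""
    total = 0.0
    remaining = capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        fraction = min(1.0, remaining / weight)   # x_i in [0, 1]
        total += fraction * value
        remaining -= fraction * weight
    return total
```

On the example above (items worth $60, $100, $120 weighing 10, 20, 30 pounds, W = 50) this returns $240.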



0-1 Knapsack problem
 Thief has a knapsack with maximum capacity W, and a set S
consisting of n items
 Each item i has some weight wi and benefit value vi (all wi , vi
and W are integer values)
 Problem: How to pack the knapsack to achieve maximum total
value of packed items?
 Goal:
find xi such that xi ∈ {0, 1} for i = 1, 2, .., n,
 ∑ wixi ≤ W and
 ∑ xivi is maximum



0-1 Knapsack - Greedy Strategy Fails

[Figure: knapsack capacity W = 50. Item 1: 10 pounds, $60 ($6/pound); Item 2: 20 pounds, $100 ($5/pound); Item 3: 30 pounds, $120 ($4/pound). The greedy strategy takes Items 1 and 2 for $160, but the optimal solution is Items 2 and 3 for $220.]


Dynamic Programming:
Introduction



Algorithmic Paradigms
 Greedy: Build up a global solution incrementally, myopically
by optimizing some local criterion.

 Divide-and-conquer: Break up a problem into disjoint (non-overlapping) sub-problems, solve the sub-problems recursively, and then combine their solutions to form a solution to the original problem. Brand-new subproblems are generated at each step of the recursion.

 Dynamic programming: Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems. Typically, the same subproblems are generated repeatedly when a recursive algorithm is run.



Dynamic Programming History
 Bellman. [1950s] Pioneered the systematic study of
dynamic programming.

 Etymology.
 Dynamic programming = planning over time.
 Secretary of Defense was hostile to mathematical research.
 Bellman sought an impressive name to avoid confrontation.

Reference: Bellman, R. E. Eye of the Hurricane, An Autobiography.



Dynamic Programming Applications
 Areas.
 Bioinformatics.
 Control theory.
 Information theory.
 Operations research.
 Computer science: theory, graphics, AI, compilers,
systems, ….



Properties of a Problem that can be
Solved with Dynamic Programming
 Simple Subproblems
 We should be able to break the original problem to smaller
subproblems that have the same structure

 Optimal Substructure of the Problems


 The solution to the problem must be a composition of subproblem
solutions

 Subproblem Overlap
 Optimal subproblems to unrelated problems can contain
subproblems in common

 No. of Subproblems is Small


 The total number of distinct subproblems is a polynomial in the
input size
Computing Fibonacci Numbers
 Fibonacci numbers:
 F0 = 0
 F1 = 1
 Fn = Fn - 1 + Fn - 2 for n > 1
Sequence is 0, 1, 1, 2, 3, 5, 8, 13, …

 Obvious recursive algorithm (Sometimes can be inefficient):


Fib(n):
if n = 0 or 1 then
return n
else
return ( Fib(n − 1) + Fib(n − 2) )



Recursion Tree for Fib(5)

Fib(5)

Fib(4) Fib(3)

Fib(3) Fib(2) Fib(2) Fib(1)

Fib(2) Fib(1) Fib(1) Fib(0) Fib(1) Fib(0)

Fib(1) Fib(0)



How Many Recursive Calls?
 If all leaves had the same depth, then there would be about 2n
recursive calls.
 But this is over-counting.
 However, with more careful counting it can be shown that it is Ω(1.6^n)
 Still exponential!

 Wasteful approach - repeat work unnecessarily


 Fib(2) is computed three times
 Instead, compute Fib(2) once, store result in a table, and
access it when needed



More Efficient Recursive Algorithm

 F[0] := 0; F[1] := 1; F[i] := NIL for 2 ≤ i ≤ n; F[n] := Fib(n);

 Fib(n):
if n = 0 or 1 then return F[n]
if F[n − 1] = NIL then F[n − 1] := Fib(n − 1)
if F[n − 2] = NIL then F[n − 2] := Fib(n − 2)
return ( F[n − 1] + F[n − 2] )

 Computes each F[i] only once, stores the result in a table, and accesses it when needed.
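In Python the same memoization can be had with the standard-library `functools.lru_cache` decorator (a sketch, not the slide's pseudocode verbatim):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each F(i) is computed once, then looked up."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

The cache plays the role of the table F, turning the exponential recursion into O(n) work.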



Example of Memoized Fib

[Figure: trace of the memoized Fib(5) with table F = (0, 1, NIL, NIL, NIL, NIL). Fib(2) returns 0+1 = 1 and fills in F[2] with 1; Fib(3) returns 1+1 = 2 and fills in F[3] with 2; Fib(4) returns 2+1 = 3 and fills in F[4] with 3; Fib(5) returns 3+2 = 5 and fills in F[5] with 5.]


Get Rid of the Recursion
 Recursion adds overhead
 extra time for function calls
 extra space to store information on the runtime stack about
each currently active function call
 Avoid the recursion overhead by filling in the table
entries bottom up, instead of top down.



Subproblem Dependencies
 Figure out which subproblems rely on which other
subproblems
 Example:

F0 F1 F2 F3 … Fn-2 Fn-1 Fn



Order for Computing Subproblems
 Then figure out an order for computing the subproblems
that respects the dependencies:
 when you are solving a subproblem, you have already
solved all the subproblems on which it depends
 Example: Just solve them in the order
F0 , F1 , F2 , F3 , …



DP Solution for Fibonacci

 Fib(n):
F[0] := 0; F[1] := 1;
for i := 2 to n do
 F[i] := F[i − 1] + F[i − 2]

return F[n]
 Can perform application-specific optimizations
 e.g., save space by only keeping last two numbers
computed
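The space optimization mentioned above looks like this (a minimal sketch; only the last two values are kept):

```python
def fib_bottom_up(n):
    """Bottom-up Fibonacci in O(n) time and O(1) space."""
    prev, curr = 0, 1              # F(0), F(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev                    # F(n)
```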



Dynamic Programming:
Longest Common Subsequence



Longest Common Subsequence
 Given two sequences
X =  x1, x2, …, xm 
Y =  y1, y2, …, yn 
A subsequence of a given sequence is just the given sequence
with zero or more elements left out.
 A common subsequence Z =  z1, z2, …, zk  of X and Y
 Z is a subsequence of both X and Y

 Example:
X = ABCBDAB
Y = BDCABA

Goal: Find the Longest Common Subsequence (LCS)



An Impractical LCS Algorithm
 Brute-force algorithm: For every subsequence of x, check if it is
a subsequence of y
 How many subsequences of x are there?

 What will be the running time of the brute-force algorithm?

 2^m subsequences of x to check against n elements of y

 Running time: O(n·2^m)



Optimal Substructure Property of LCS

 The LCS problem has an optimal substructure property


 solutions of subproblems are parts of the final solution
 Subproblems: LCS of pairs of prefixes of X and Y

 An LCS of two sequences contains within it an LCS of prefixes


of the two sequences.

 Given a sequence X = x1, x2, …, xm, we define the ith prefix of X as Xi = x1, x2, …, xi
Example:
X = ABCBDABBDCAB
X5 = ABCBD
X7 = ABCBDAB



Optimal Substructure Property of LCS
Theorem:
Let X =  x1, x2, …, xm  and Y =  y1, y2, …, yn  be any sequences,
and let Z =  z1, z2, …, zk  be any LCS of X and Y.
 If xm = yn, then zk = xm = yn and Zk – 1 is an LCS of Xm – 1 and Yn – 1.
 If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm – 1 and Y.
 If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn – 1.

 Proof ?

 The Theorem tells us that an LCS of two sequences contains within it an


LCS of prefixes of the two sequences.
Thus the LCS problem has an optimal substructure property.



Overlapping Subproblem Property of LCS
Theorem:
Let X =  x1, x2, …, xm  and Y =  y1, y2, …, yn  be any sequences,
and let Z =  z1, z2, …, zk  be any LCS of X and Y.
 If xm = yn, then zk = xm = yn and Zk – 1 is an LCS of Xm – 1 and Yn – 1.
 If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm – 1 and Y.
 If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn – 1.

 The Theorem tells us that


 To find an LCS of X and Y, we may need to find the LCSs of X and Yn – 1
and of Xm – 1 and Y. But each of these subproblems has the subsubproblem
of finding an LCS of Xm – 1 and Yn – 1.
Thus the LCS problem has the overlapping subproblem property.

 The number of distinct subproblems: O(mn).


Develop a Recursive Solution
 Define c[i, j] to be the length of an LCS of the sequences Xi and Yj .
 Goal: Find c[m, n]

 Basis: c[i, j] = 0 if either i = 0 or j = 0

 Recursion: How to define c[i, j] recursively ?

 Finding an LCS of X =  x1, x2, …, xm  and Y =  y1, y2, …, yn 


 If xm = yn, then we must find an LCS of Xm – 1 and Yn – 1.
 Appending xm = yn to this LCS yields an LCS of X and Y.
 If xm  yn, then we must solve two subproblems:
 Finding an LCS of Xm – 1 and Y
 Finding an LCS of X and Yn – 1

 Whichever of these two LCSs is longer is an LCS of X and Y.

 The recursive formula is

c[i, j] = c[i − 1, j − 1] + 1                 if i, j > 0 and x[i] = y[j],
c[i, j] = max{ c[i, j − 1], c[i − 1, j] }     if i, j > 0 and x[i] ≠ y[j]
Develop a Recursive Solution
 Case 1: xi = yj
 Recursively find LCS of Xi – 1 and Yj – 1 and append xi
 So c[i, j] = c[i – 1, j – 1] + 1 if i, j > 0, and xi = yj

Xi-1 xi
equal

Yj-1 yj



Develop a Recursive Solution
 Case 2: xi ≠ yj
 Recursively find LCS of Xi – 1 and Yj
 Recursively find LCS of Xi and Yj – 1
 Take the longer one
 So c[i, j] = max{c[i, j – 1], c[i – 1, j]} if i, j > 0, and xi ≠ yj

Xi-1 xi
not equal

Yj-1 yj



Dependencies among Subproblems
0 1 j-1 j n
0 c[i, j] depends on
1  c[i-1, j-1],
 c[i-1, j], and
 c[i, j-1]
i-1
i

m Goal

 An order for solving the subproblems (i.e., filling in the array) that
respects the dependencies is row major order:
 do the rows from top to bottom
 inside each row, go from left to right
Develop a Recursive Solution

 The algorithm calculates the values of each


entry of the array c[m, n].
 Each c[i, j] is calculated in constant time,
and there are m·n elements in the array.
 So the running time is O(m·n).
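The row-major table fill can be sketched in Python as follows (function name mine; strings stand in for the sequences):

```python
def lcs_length(x, y):
    """Fill c row by row: c[i][j] = length of an LCS of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c
```

The answer is the entry c[m][n], filled last.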



LCS Example

We’ll see how LCS algorithm works on the following example:


X = ABCG
Y = BDCAG

LCS(X, Y) = BCG

X = ABCG
Y = BDCAG



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi

1 A

2 B

3 C
4 G

X = ABCG; m = |X| = 4
Y = BDCAG; n = |Y| = 5
Allocate array c[0..4, 0..5]
LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0
2 B 0
3 C 0
4 G 0

for i = 0 to m c[i, 0] = 0
for j = 1 to n c[0, j] = 0
LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0
2 B 0
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0
2 B 0
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1
2 B 0
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0 1

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0 1 1 2 2

if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



LCS Example ABCG
j 0 1 2 3 4 5
BDCAG
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0 1 1 2 2 3
if ( xi == yj )
c[i, j] = c[i-1, j-1] + 1
else c[i, j] = max( c[i-1, j], c[i, j-1] )



Another LCS Example

[Figure: a second worked LCS table, not reproduced here.]


How to Find Actual LCS

 So far, we have just found the length of LCS, but not LCS itself.
 We can modify this algorithm to make it output an LCS of X
and Y.
 Each c[i, j] depends on c[i-1, j-1], or c[i-1, j] and c[i, j-1].
 For each c[i, j] we can say how it was acquired.

 For example, here c[i, j] = c[i − 1, j − 1] + 1 = 2 + 1 = 3



How to Find Actual LCS

 Remember that
c[i, j] = c[i − 1, j − 1] + 1                 if x[i] = y[j],
c[i, j] = max( c[i, j − 1], c[i − 1, j] )     otherwise

 So we can start from c[m, n] and go backwards


 Whenever c[i, j] = c[i-1, j-1]+1, remember x[i], because x[i] is a
part of LCS
 When i=0 or j=0 (i.e. we reached the beginning), output
remembered letters in reverse order
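A sketch of the whole procedure, table fill plus traceback (the tie-breaking choice when c[i−1][j] = c[i][j−1] is arbitrary; here we move up):

```python
def lcs(x, y):
    """Recover an actual LCS: build the length table,
    then walk backwards from c[m][n]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Trace back: a diagonal step means x[i-1] is part of the LCS.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))   # letters were remembered in reverse order
```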



Finding LCS: Example
j 0 1 2 3 4 5
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0 1 1 2 2 3



Finding LCS: Example
j 0 1 2 3 4 5
i yj B D C A G
0 xi 0 0 0 0 0 0
1 A 0 0 0 0 1 1
2 B 0 1 1 1 1 1
3 C 0 1 1 2 2 2
4 G 0 1 1 2 2 3
LCS (reversed order): G C B
LCS (straight order): B C G
Finding LCS: Algorithm

Trace backwards from c[m, n]



Dynamic Programming:
Coin Change Problem



Coin Change Problem

Given unlimited amounts of coins of denominations c1 > … > cd, give change for amount M with the least number of coins.

Example: c1 = 25, c2 =10, c3 = 5, c4 = 1 and M = 48


Greedy solution: 25*1 + 10*2 + 1*3 = c1 + 2c2 + 3c4

Greedy solution is
 optimal for any amount and “normal’’ set of denominations

 may not be optimal for arbitrary coin denominations



Coin Change Problem

Goal: Convert some amount of money M into given denominations, using the fewest possible number of coins

Input: An amount of money M, and an array of d denominations c = (c1, c2, …, cd), in decreasing order of value (c1 > c2 > … > cd)

Output: A list of d integers i1, i2, …, id such that c1i1 + c2i2 + … + cdid = M and i1 + i2 + … + id is minimal



Greedy Choice Principles
 Suppose you want to count out a certain
amount of money, using the fewest
possible coins.
 At each step, take the largest possible
coin that does not overshoot.

 Example: To make Tk. 157/-, you,


 Choose a Tk. 100/- note,
 Choose a Tk. 50/- note,
 Choose a Tk. 5/- coin,
 Choose a Tk. 2/- coin.
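The rule described above — always take the largest coin that does not overshoot — can be sketched as (function name and denomination list are my own):

```python
def greedy_change(amount, coins):
    """Greedy coin counting: take as many of the largest coin as fit,
    then move on to the next denomination."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c   # how many of this coin fit
        amount %= c            # remainder still to be paid
    return count
```

As the failure examples that follow show, this is fast but not always optimal.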



Greedy Choice Principles: Failure
 To find the minimum number of US coins to make any amount,
the greedy method always works
 At each step, just choose the largest coin that does not overshoot the
desired amount: 31¢ = (25+5+1)

 The greedy method would not work if we did not have 5¢ coins
 For 31 cents, the greedy method gives seven coins (25+1+1+1+1+1+1),
but we can do it with four (10+10+10+1)

 The greedy method also would not work if we had a 21¢ coin
 For 63 cents, the greedy method gives six coins (25+25+10+1+1+1), but
we can do it with three (21+21+21)

 The greedy algorithm always produces a solution, but not always an optimal solution
 How can we find the minimum number of coins for any given
coin set?
Coin Change Problem: Example
Given the denominations 1, 3, and 5, what is the
minimum number of coins needed to make change
for a given value?

Value 1 2 3 4 5 6 7 8 9 10
Min # of coins 1 1 1

Only one coin is needed to make change for the


values 1, 3, and 5



Coin Change Problem: Example
Given the denominations 1, 3, and 5, what is the
minimum number of coins needed to make change
for a given value?

Value 1 2 3 4 5 6 7 8 9 10
Min # of coins 1 2 1 2 1 2 2 2

However, two coins are needed to make change for


the values 2, 4, 6, 8, and 10.



Coin Change Problem: Example
Given the denominations 1, 3, and 5, what is the
minimum number of coins needed to make change
for a given value?

Value 1 2 3 4 5 6 7 8 9 10
Min # of coins 1 2 1 2 1 2 3 2 3 2

Lastly, three coins are needed to make change for the


values 7 and 9



Coin Change Problem: Recurrence

This example is expressed by the following


recurrence relation:

minNumCoins(M) = min of:
    minNumCoins(M − 1) + 1
    minNumCoins(M − 3) + 1
    minNumCoins(M − 5) + 1



Coin Change Problem: Recurrence
Given the denominations c: c1, c2, …, cd, the
recurrence relation is:

minNumCoins(M) = min of:
    minNumCoins(M − c1) + 1
    minNumCoins(M − c2) + 1
    …
    minNumCoins(M − cd) + 1



Coin Change Problem: A Recursive Algorithm

1. RecursiveChange(M, c, d)
2. if M = 0
3.     return 0
4. bestNumCoins ← infinity
5. for i ← 1 to d
6.     if M ≥ ci
7.         numCoins ← RecursiveChange(M – ci, c, d)
8.         if numCoins + 1 < bestNumCoins
9.             bestNumCoins ← numCoins + 1
10. return bestNumCoins
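A direct Python transcription of RecursiveChange (exponential time, for illustration only):

```python
def recursive_change(amount, coins):
    """Naive recursion: try every denomination and recurse on the remainder."""
    if amount == 0:
        return 0
    best = float("inf")
    for c in coins:
        if amount >= c:
            best = min(best, recursive_change(amount - c, coins) + 1)
    return best
```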



RecursiveChange is not Efficient

 It recalculates the optimal coin combination for a given amount of money repeatedly
 e.g., M = 77, c = (1, 3, 7): the optimal combination for 70 cents is computed 9 times!



The RecursiveChange Tree

[Figure: recursion tree for RecursiveChange(77) with c = (1, 3, 7). The root 77 has children 76, 74, and 70; their children are 75, 73, 69; 73, 71, 67; and 69, 67, 63; and so on. The subproblem for 70 cents reappears many times throughout the tree.]
We Can Do Better

 We are re-computing values in our algorithm more than once

 Save results of each computation for 0 to M

 This way, we can do a reference call to find an already computed value, instead of re-computing each time

 Running time M·d, where M is the value of money and d is the number of denominations



Coin Change Problem: Dynamic Programming

1. DPChange(M, c, d)
2. bestNumCoins[0] ← 0                          Running time: O(M·d)
3. for m ← 1 to M
4.     bestNumCoins[m] ← infinity
5.     for i ← 1 to d
6.         if m ≥ ci
7.             if bestNumCoins[m – ci] + 1 < bestNumCoins[m]
8.                 bestNumCoins[m] ← bestNumCoins[m – ci] + 1
9. return bestNumCoins[M]
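A runnable version of DPChange (a sketch using a 0-based Python list in place of the pseudocode's array):

```python
def dp_change(amount, coins):
    """Bottom-up coin change: best[m] = fewest coins that make amount m."""
    best = [0] + [float("inf")] * amount
    for m in range(1, amount + 1):
        for c in coins:
            if m >= c and best[m - c] + 1 < best[m]:
                best[m] = best[m - c] + 1
    return best[amount]
```

Unlike the greedy rule, this is optimal for any denomination set, e.g. it finds three 21¢ coins for 63¢.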



DPChange: Example

[Figure: the bestNumCoins array grows one entry per iteration for c = (1, 3, 7) and M = 9:
0
0 1
0 1 2
0 1 2 1
…
0 1 2 1 2 3 2 1 2 3
so the answer is bestNumCoins[9] = 3.]


Dynamic Programming:
0-1 Knapsack Problem



The Knapsack Problem
The famous Knapsack Problem:
A thief breaks into a museum. Fabulous paintings, sculptures, and
jewels are everywhere. The thief has a good eye for the value of
these objects, and knows that each will fetch hundreds or thousands
of dollars on the clandestine art collector’s market. But, the thief
has only brought a single knapsack to the scene of the robbery, and
can take away only what he can carry. What items should the thief
take to maximize the haul?

Which items
should I take?



The Knapsack Problem

There are two versions of the problem:


(1) “0-1 knapsack problem”
Items are indivisible: you either take an item or not.
Solved with dynamic programming.

(2) “Fractional knapsack problem”


Items are divisible: you can take any fraction of
an item. Solved with a greedy algorithm.



0-1 Knapsack problem
 Thief has a knapsack with maximum capacity W, and a set S
consisting of n items
 Each item i has some weight wi and benefit value vi (all wi , vi
and W are integer values)
 Problem: How to pack the knapsack to achieve maximum total
value of packed items?
 Goal:
find xi such that xi ∈ {0, 1} for i = 1, 2, .., n,
 ∑ wixi ≤ W and
 ∑ xivi is maximum



Optimal Substructure Property
 Both problems exhibit the optimal substructure property.
 To show this for both the problems, consider the most valuable
load weighing at most W pounds
 Q: If we remove item j from the load, what do we know about the
remaining load?
 A: The remaining load must be the most valuable load weighing
at most W - wj that the thief could take from the n-1 original items
excluding item j.



0-1 Knapsack - Greedy Strategy Fails

[Figure: knapsack capacity W = 50. Item 1: 10 pounds, $60 ($6/pound); Item 2: 20 pounds, $100 ($5/pound); Item 3: 30 pounds, $120 ($4/pound). The greedy strategy takes Items 1 and 2 for $160, but the optimal solution is Items 2 and 3 for $220.]


0-1 Knapsack: Brute-Force Approach

 Since there are n items, there are 2^n possible combinations of items.
 We go through all combinations and find the one with the
most total value and with total weight less or equal to W.
 Running time will be O(2^n).



0-1 Knapsack - Dynamic Programming

 P(i, w) – the maximum profit that can be obtained from items 1 to i, if the knapsack has size w
 Case 1: thief takes item i
P(i, w) = vi + P(i - 1, w-wi)
 Case 2: thief does not take item i
P(i, w) = P(i - 1, w)



Recursive Formula
P[i, w] = P[i − 1, w]                                 if w < wi
P[i, w] = max{ vi + P[i − 1, w − wi], P[i − 1, w] }   otherwise

 The best subset that has the total weight w, either contains item i
or not.
 First case: w <wi. Item i can’t be part of the solution, since if it
was, the total weight would be > w, which is unacceptable.
 Second case: w >= wi. Then the item i can be in the solution,
and we choose the case with greater value.



Dependencies among Subproblems

Item i was taken Item i was not taken

P(i, w) = max {vi + P(i - 1, w-wi), P(i - 1, w) }


[Figure: the (n+1) × (W+1) table, with row 0 and column 0 initialized to 0. Entry P(i, w) in row i is computed from P(i−1, w−wi) (first term) and P(i−1, w) (second term) in row i−1.]


Overlapping Subproblems

P(i, w) = max {vi + P(i - 1, w-wi), P(i - 1, w) }


[Figure: the same table with several entries of row i shown in grey: all the subproblems shown in grey may depend on the single entry P(i−1, w).]
Example: W = 5

Item   Weight   Value
  1      2       12
  2      1       10
  3      3       20
  4      2       15

P(i, w) = max {vi + P(i - 1, w - wi), P(i - 1, w)}

        w: 0    1    2    3    4    5
i = 0:     0    0    0    0    0    0
i = 1:     0    0   12   12   12   12
i = 2:     0   10   12   22   22   22
i = 3:     0   10   12   22   30   32
i = 4:     0   10   15   25   30   37

P(1, 1) = P(0, 1) = 0
P(1, 2) = max{12+0, 0} = 12, and likewise P(1, 3) = P(1, 4) = P(1, 5) = 12
P(2, 1) = max{10+0, 0} = 10        P(2, 2) = max{10+0, 12} = 12
P(2, 3) = max{10+12, 12} = 22      P(2, 4) = P(2, 5) = 22
P(3, 1) = P(2, 1) = 10             P(3, 2) = P(2, 2) = 12
P(3, 3) = max{20+0, 22} = 22       P(3, 4) = max{20+10, 22} = 30
P(3, 5) = max{20+12, 22} = 32
P(4, 1) = P(3, 1) = 10             P(4, 2) = max{15+0, 12} = 15
P(4, 3) = max{15+10, 22} = 25      P(4, 4) = max{15+12, 30} = 30
P(4, 5) = max{15+22, 32} = 37
Reconstructing the Optimal Solution
[Figure: the P table above with the traceback path highlighted, giving: Item 4, Item 2, Item 1]

• Start at P(n, W) = P(4, 5) = 37
• When you go left-up ⇒ item i has been taken
• When you go straight up ⇒ item i has not been taken
0-1 Knapsack Algorithm
for w = 0 to W
    P[0, w] = 0
for i = 1 to n
    P[i, 0] = 0
for i = 1 to n                           // Running time: O(n·W)
    for w = 1 to W
        if wi <= w                       // item i can be part of the solution
            if (vi + P[i-1, w-wi] > P[i-1, w])
                P[i, w] = vi + P[i-1, w-wi]
            else
                P[i, w] = P[i-1, w]
        else
            P[i, w] = P[i-1, w]          // wi > w
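A runnable sketch of the algorithm (items given as `(weight, value)` pairs; the function name is my own):

```python
def knapsack_01(capacity, items):
    """0-1 knapsack: P[i][w] = best value using items 1..i with capacity w."""
    n = len(items)
    P = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w_i, v_i = items[i - 1]
        for w in range(1, capacity + 1):
            if w_i <= w:   # item i can be part of the solution
                P[i][w] = max(v_i + P[i - 1][w - w_i], P[i - 1][w])
            else:
                P[i][w] = P[i - 1][w]
    return P[n][capacity]
```

On the worked example (W = 5, items of weight/value 2/12, 1/10, 3/20, 2/15) it returns 37, and on the greedy-failure instance it finds the optimal $220.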



Dynamic Programming:
Matrix Chain Multiplication



Matrix Chain Multiplication Problem

 Multiplying non-square matrices:
  A is p × q, B is q × r (the inner dimensions must be equal)
  AB is p × r, whose (i, j) entry is ∑k aik bkj
 Computing AB takes p·q·r scalar multiplications and p(q-1)r scalar additions (using the basic algorithm).
 Suppose we have a sequence of matrices to multiply. What is the best order?
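The p·q·r count can be checked with a small sketch of the basic algorithm (Python, with illustrative names): the innermost statement runs exactly once per scalar multiplication.

```python
def matmul(A, B):
    """Basic triple-loop product of a p x q matrix by a q x r matrix,
    counting scalar multiplications (p*q*r of them)."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "inner dimensions must be equal"
    C = [[0] * r for _ in range(p)]
    mults = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

A = [[1, 2], [3, 4], [5, 6]]   # 3 x 2
B = [[1, 0, 0], [0, 1, 0]]     # 2 x 3
C, mults = matmul(A, B)
print(mults)  # 18 = 3 * 2 * 3
```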



Matrix Chain Multiplication Problem

Given a sequence of matrices A1, A2, …, An, then


Compute C = A1 · A2 · … · An
 Different ways to compute C

 Matrix multiplication is associative


 So output will be the same
 However, time cost can be very different
 Example



Why Order Matters
 Suppose we have 4 matrices:
  A, 30 × 1
  B, 1 × 40
  C, 40 × 10
  D, 10 × 25

 ((AB)(CD)) : requires 41,200 multiplications

[ (30·1·40) + (40·10·25) + (30·40·25) = 41,200 ]

 (A((BC)D)) : requires 1,400 multiplications

[ (1·40·10) + (1·10·25) + (30·1·25) = 1,400 ]



Matrix Chain Multiplication Problem
Given a sequence of matrices A1, A2, …, An, where Ai is pi-1 × pi:
1) What is minimum number of scalar multiplications
required to compute A1· A2 ·… · An?
2) What order of matrix multiplications achieves this
minimum?

 Fully parenthesize the product in a way that minimizes


the number of scalar multiplications
(( )( ))(( )(( )(( )( ))))

No. of parenthesizations: ???



A Possible Solution
 Try all possibilities and choose the best one.
 Drawback is there are too many of them (exponential in
the number of matrices to be multiplied)
 The number of parenthesizations is

    P(n) = 1                                 if n = 1
    P(n) = ∑_{k=1}^{n-1} P(k) · P(n-k)       if n ≥ 2

 The solution to the recurrence is Ω(2^n)
No. of parenthesizations: Exponential

 Need to be more clever - try dynamic programming !
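The recurrence for P(n) can be evaluated directly; it generates the Catalan numbers, which grow exponentially. A short memoized sketch (illustrative only; the function name is invented):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    """Number of full parenthesizations of a chain of n matrices:
    choose the outermost split k, then parenthesize each side."""
    if n == 1:
        return 1
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))

print([num_parens(n) for n in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]
```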



Step 1: Optimal Substructure Property
 A problem exhibits optimal substructure if an optimal
solution to the problem contains within it optimal solutions to
subproblems.
 Whenever a problem exhibits optimal substructure, we have a
good clue that dynamic programming might apply.
 Consequently, we must take care to ensure that the range of
subproblems we consider includes those used in an optimal
solution.
 We must also take care to ensure that the total number of
distinct subproblems is a polynomial in the input size.



Step 1: Optimal Substructure Property
 Define Ai··j , i ≤ j, to be the matrix that results from evaluating
the product Ai · Ai+1 · … · Aj.
 If the problem is nontrivial, i.e., i < j, then to parenthesize
Ai · Ai+1 · … · Aj, split the product between Ak and Ak+1 for some
k, where i ≤ k < j.

 The cost of parenthesizing this way is


 The cost of computing the matrix Ai··k +
 The cost of computing the matrix Ak+1··j +
 The cost of multiplying them together
 The optimal substructure of this problem is:
An optimal parenthesization of Ai· Ai+1 ·… · Aj contains within it
optimal parenthesizations of Ai· Ai+1 ·… · Ak and Ak+1· Ak+2 ·… · Aj
Proof ?
Overlapping Subproblem Property
 Two subproblems of the same problem are independent if they
do not share resources.
 Two subproblems are overlapping if they are really the same
subproblem that occurs as a subproblem of different problems.
 A problem exhibits overlapping subproblem if the number of
subproblems is “small” in the sense that a recursive algorithm
solves the same subproblems over and over, rather than always
generating new subproblems.
 Dynamic programming algorithms take advantage of
overlapping subproblems by solving each subproblem once and
then storing the solution in a table where it can be looked up
when needed.



Overlapping Subproblem Property



Step 2: Develop a Recursive Solution
 Define m[i, j] to be the minimum number of multiplications
needed to compute Ai· Ai+1 ·… · Aj .
 Goal: Find m[1, n]
 Basis: m[i, i] = 0
 Recursion: How to define m[i, j] recursively ?

 Consider all possible ways to split Ai through Aj into two pieces.


 Compare the costs of all these splits:
 best case cost for computing the product of the two pieces
 plus the cost of multiplying the two products
 Take the best one
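The recursive definition can be written down directly as a naive, exponential-time sketch (Python, illustrative names; the memoized and bottom-up versions follow in later slides). On the earlier 4-matrix example it returns the optimal cost 1400:

```python
def m_cost(p, i, j):
    """Minimum scalar multiplications for A_i ... A_j, where
    A_i is p[i-1] x p[i]. Naive recursion: try every split k."""
    if i == j:
        return 0
    return min(m_cost(p, i, k) + m_cost(p, k + 1, j) + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

# A = 30x1, B = 1x40, C = 40x10, D = 10x25
print(m_cost([30, 1, 40, 10, 25], 1, 4))  # 1400
```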



Step 2: Develop a Recursive Solution

    m[i, j] = 0                                                    if i = j
    m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + pi-1·pk·pj } if i < j


Step 2: Develop a Recursive Solution
 Let T(n) be the time taken by Recursive-Matrix-Chain for n
matrices.

    T(1) ≥ 1
    T(n) ≥ 1 + ∑_{k=1}^{n-1} (T(k) + T(n-k) + 1)    for n > 1

For i = 1, 2, …, n-1, each term T(i) appears once as T(k) and
once as T(n-k). Thus, we have

    T(n) ≥ 2 ∑_{i=1}^{n-1} T(i) + n

which gives T(n) ≥ 2^{n-1}. Then T(n) = Ω(2^n)



A Recursive Solution (Memoization)
 A memoized recursive algorithm maintains an entry in a table
for the solution to each subproblem.



A Recursive Solution (Memoization)

Lookup-Chain(m, p, i, j)
    if m[i, j] < ∞
        return m[i, j]
    if i == j
        m[i, j] = 0
    else for k = i to j - 1
        q = Lookup-Chain(m, p, i, k) +
            Lookup-Chain(m, p, k + 1, j) + pi-1 · pk · pj
        if q < m[i, j]
            m[i, j] = q
    return m[i, j]

Running time: O(n³)
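A Python rendering of the memoized lookup (a sketch with invented names; a dict stands in for the ∞-initialized table m):

```python
def matrix_chain_memo(p):
    """Memoized matrix-chain cost; A_i has dimensions p[i-1] x p[i].
    Each of the O(n^2) subproblems is solved once: O(n^3) total."""
    memo = {}

    def lookup(i, j):
        if (i, j) in memo:                 # already solved: just look it up
            return memo[(i, j)]
        if i == j:
            best = 0
        else:
            best = min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
        memo[(i, j)] = best
        return best

    return lookup(1, len(p) - 1)

print(matrix_chain_memo([30, 35, 15, 5, 10, 20, 25]))  # 15125
```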



Step 3: Compute the Optimal Costs
Find Dependencies among Subproblems

m:     1     2     3     4     5
1      0     .     .     .     .   ← GOAL: m[1, 5]
2     n/a    0
3     n/a   n/a    0
4     n/a   n/a   n/a    0
5     n/a   n/a   n/a   n/a    0

Computing an entry (e.g., the goal, upper right) requires the entries to its left and below.



Step 3: Compute the Optimal Costs
Find Dependencies among Subproblems
m:     1     2     3     j     5
1      0
i     n/a    0
3     n/a   n/a    0
4     n/a   n/a   n/a    0
5     n/a   n/a   n/a   n/a    0

 Computing m(i, j) uses
  everything in the same row to the left:
m(i, i), m(i, i+1), …, m(i, j-1)
  and everything in the same column below:
m(i+1, j), m(i+2, j), …, m(j, j)
Step 3: Compute the Optimal Costs
Identify Order for Solving Subproblems
 Solve the subproblems (i.e., fill in the table entries) this way:
 go along the diagonal
 start just above the main diagonal
 end in the upper right corner (goal)

m:     1     2     3     4     5
1      0
2     n/a    0
3     n/a   n/a    0
4     n/a   n/a   n/a    0
5     n/a   n/a   n/a   n/a    0



Step 3: Compute the Optimal Costs
Identify Order for Solving Subproblems
 A1 (30  35)
 A2 (35  15)
 A3 (15  05)
 A4 (05  10)
 A5 (10  20)
 A6 (20  25)



Step 3: Compute the Optimal Costs
Identify Order for Solving Subproblems
 A1 (30 × 35), A2 (35 × 15), A3 (15 × 5), A4 (5 × 10), A5 (10 × 20), A6 (20 × 25)

m:     1      2      3      4      5      6
1      0  15750   7875   9375  11875  15125
2             0   2625   4375   7125  10500
3                    0    750   2500   5375
4                           0   1000   3500
5                                  0   5000
6                                         0
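Filling the table one diagonal at a time can be sketched as follows (Python, illustrative names; it reproduces the table above, e.g. m[1][6] = 15125 and m[2][5] = 7125):

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain DP: solve all chains of length 2,
    then 3, ..., up to n (one diagonal of the table at a time)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m

m = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6], m[2][5])  # 15125 7125
```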



Step 3: Compute the Optimal Costs
Pseudocode

Running time: O(n³)



Step 4: Construct an Optimal Solution
 It's fine to know the cost of the cheapest order, but what is
that cheapest order?
 Keep another array s and update it when computing the
minimum cost in the inner loop
 After m and s have been filled in, then call a recursive
algorithm on s to print out the actual order
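The split-table idea can be sketched by extending the bottom-up computation with an array s and a recursive printer (illustrative names throughout). For the six-matrix example the optimal order comes out as ((A1(A2A3))((A4A5)A6)):

```python
import math

def matrix_chain_order(p):
    """Bottom-up DP that also records s[i][j], the best split point."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k   # remember the winning split
    return m, s

def parenthesize(s, i, j):
    """Recurse on s to print the optimal multiplication order."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(parenthesize(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))
```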

