Divide and Conquer DAA

The document discusses various algorithms for divide and conquer techniques. It covers algorithms to find the sum of an array, calculate power of a number, and sort an array. For finding the sum, it examines brute force, divide and conquer, and decrease and conquer approaches. For calculating power, it analyzes brute force, divide and conquer, decrease and conquer, and decrease by a constant factor approaches. For sorting, it specifically describes the merge sort algorithm, including its recursive implementation and running time of O(n log n). It also covers quicksort, discussing its best, worst, and average cases.

Uploaded by

Nandeesh H S

Design and Analysis of

Algorithms (UE16CS251)

Unit II - Divide and Conquer


Mr. Channa Bankapur
channabankapur {@pes.edu, @gmail.com}
Divide-and-Conquer!

It is a well-known algorithm design technique:


1. Divide instance of a problem into two or more smaller instances.
2. Solve the smaller instances of the same problem.
3. Obtain a solution to the original instance by combining the
solutions of the smaller instances.
Q: Write an algorithm to find the sum of an array of n numbers
using Brute Force approach.

Algorithm Sum(A[0..n-1])
//Sum of the numbers in an array
//Input: Array A having n numbers
//Output: Sum of n numbers in the array A
sum ← 0
for i ← 0 to n-1
sum ← sum + A[i]
return sum

T(n) = n ∈ Θ(n)
Q: Write an algorithm to find the sum of an array of n numbers
using Divide-and-Conquer approach.

Algorithm Sum(A[0..n-1])
//Sum of the numbers in an array
//Input: Array A having n numbers
//Output: Sum of n numbers in the array A
if (n = 0)
return 0
if (n = 1)
return A[0]
return Sum(A[0..⌊(n-1)/2⌋]) +
Sum(A[⌊(n-1)/2⌋+1..n-1])

T(n) = 2T(n/2) + 1, T(1) = 1


= 2n - 1 ∈ Θ(n)
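The divide-and-conquer sum above can be sketched in Python; explicit lo..hi bounds mirror the subarray notation in the pseudocode (a sketch, not the slides' exact code):

```python
def dnc_sum(A, lo, hi):
    """Sum A[lo..hi] by splitting the range in half (divide and conquer)."""
    if lo > hi:          # empty range
        return 0
    if lo == hi:         # single element: trivial instance
        return A[lo]
    mid = (lo + hi) // 2
    # solve the two halves independently, then combine with one addition
    return dnc_sum(A, lo, mid) + dnc_sum(A, mid + 1, hi)
```

For example, dnc_sum([3, 1, 4, 1, 5], 0, 4) splits the array into [3, 1, 4] and [1, 5] and returns 14.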
Q: Write an algorithm to find the sum of an array of n numbers
using Decrease-and-Conquer approach.

Algorithm Sum(A[0..n-1])
//Sum of the numbers in an array
//Input: Array A having n numbers
//Output: Sum of n numbers in the array A
if (n = 0)
return 0
return Sum(A[0..(n-2)]) + A[n-1]

T(n) = T(n-1) + 1, T(1) = 1


= n ∈ Θ(n)

This approach is called Decrease-and-Conquer. It resonates
more with mathematical induction.
● Brute Force:
○ Sum(A[0..n-1]) = A[0] + A[1] + … + A[n-1]
○ T(n) ∈ Θ(n)

● Divide-and-Conquer:
○ Sum(A[0..n-1]) = Sum(A[0..n/2-1]) + Sum(A[n/2..n-1])
○ C(n) = 2C(n/2) + 1, C(1) = 1
T(n) ∈ Θ(n)

● Decrease-and-Conquer:
○ Sum(A[0..n-1]) = Sum(A[0..n-2]) + A[n-1]
○ C(n) = C(n-1) + 1, C(1) = 1
T(n) ∈ Θ(n)
Finding aⁿ using Brute Force approach.

Algorithm Power(a, n)
//Computes aⁿ = a*a*...*a (n times)
//Input: a ∈ R and n ∈ I+
//Output: aⁿ
p ← 1
for i ← 1 to n
p ← p * a
return p

C(n) = n
T(n) ∈ Θ(n)
Finding aⁿ using Divide-and-Conquer approach.

Algorithm Power(a, n)
//Computes aⁿ = a^⌊n/2⌋ * a^⌈n/2⌉
//Input: a ∈ R and n ∈ I+
//Output: aⁿ
if (n = 0) return 1
if (n = 1) return a
return Power(a, ⌊n/2⌋) * Power(a, ⌈n/2⌉)

C(n) = 2C(n/2) + 1
T(n) ∈ Θ(n)
Finding aⁿ using Decrease-and-Conquer approach.

Algorithm Power(a, n)
//Computes aⁿ = a^(n-1) * a
//Input: a ∈ R and n ∈ I+
//Output: aⁿ
if (n = 0) return 1
return Power(a, n-1) * a

C(n) = C(n-1) + 1
T(n) ∈ Θ(n)
Finding aⁿ using
Decrease-by-a-constant-factor-and-Conquer approach.

Algorithm Power(a, n)
//Computes aⁿ = (a^⌊n/2⌋)² * a^(n mod 2)
//Input: a ∈ R and n ∈ I+
//Output: aⁿ
if (n = 0) return 1
p ← Power(a, ⌊n/2⌋)
p ← p * p
if (n is odd) p ← p * a
return p

C(n) = C(n/2) + 2
T(n) ∈ Θ(log n)
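A Python sketch of this Θ(log n) scheme, following the pseudocode above (one recursive call on ⌊n/2⌋, one squaring, and one extra factor of a when n is odd):

```python
def power(a, n):
    """Compute a**n with Theta(log n) multiplications
    (decrease by a constant factor)."""
    if n == 0:
        return 1
    p = power(a, n // 2)   # a ** (n // 2)
    p = p * p              # (a ** (n // 2)) ** 2
    if n % 2 == 1:         # odd n needs one more factor of a
        p = p * a
    return p
```

For instance, power(2, 10) performs only about log₂10 recursive steps instead of 10 multiplications.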
Finding aⁿ using different approaches.
● Brute-Force approach in Θ(n)
○ aⁿ = a * a * … * a (n times)
● Divide-and-Conquer approach in Θ(n)
○ aⁿ = a^⌊n/2⌋ * a^⌈n/2⌉
● Decrease-by-a-constant-and-Conquer in Θ(n)
○ aⁿ = a^(n-1) * a
● Decrease-by-a-constant-factor-and-Conquer in Θ(log n)
○ aⁿ = (a^⌊n/2⌋)² * a^(n mod 2), a⁰ = 1
○ aⁿ = (a^(n/2))² when n is even,
aⁿ = a·(a^((n-1)/2))² when n is odd, and
a⁰ = 1
Divide-and-Conquer Examples:

● Sorting: Mergesort and Quicksort

● Search: Binary search

● Multiplication of large integers

● Matrix multiplication: Strassen’s algorithm

● Binary tree traversals


Idea of Merge Sort
Recursion tree of Merge Sort
Algorithm MergeSort(A[0..n-1])
//Sorts array A[0..n-1] by recursive Merge Sort
//Procedure Merge(A[0..n-1], m) merges two
// sorted subarrays A[0..m-1] and A[m..n-1]
// into a sorted array A[0..n-1].
if(n ≤ 1)return
m = ⌊n/2⌋
MergeSort(A[0..m-1])
MergeSort(A[m..n-1])
Merge(A[0..n-1], m)
Merge two sorted arrays into a sorted array:

Array1: 01, 05, 06
Array2: 02, 03, 04, 07, 08, 09
Merged: 01, 02, 03, 04, 05, 06, 07, 08, 09

Two sorted arrays concatenated:

01, 05, 06, 02, 03, 04, 07, 08, 09
After merging: 01, 02, 03, 04, 05, 06, 07, 08, 09
Example for merging two sorted arrays:
List1: 2, 4, 5, 6, 8, 9
List2: 1, 3, 7
Merged: 1, 2, 3, 4, 5, 6, 7, 8, 9
Algorithm Merge(A[0..n-1], m)
//Merges two sorted arrays A[0..m-1] and A[m..n-1] into
//the sorted array A[0..n-1]
i ← 0, j ← m, k ← 0
while(i < m and j < n) do
if(A[i] ≤ A[j]) B[k] ← A[i]; i ← i+1
else B[k] ← A[j]; j ← j+1
k ← k+1
if(j = n) Copy A[i..m-1] to B[k..n-1]
else Copy A[j..n-1] to B[k..n-1]
Copy B[0..n-1] to A[0..n-1]

Input Size: n
Basic Operation : Increment operation k←k+1
C(n) = n ∈ Θ(n)
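The Merge procedure and the recursive MergeSort that drives it can be sketched in Python, using an auxiliary buffer as in the pseudocode (a sketch, not the slides' exact code):

```python
def merge(A, m):
    """Merge the sorted runs A[0..m-1] and A[m..n-1] into sorted A[0..n-1]."""
    n = len(A)
    B = []                     # auxiliary buffer, as in the pseudocode
    i, j = 0, m
    while i < m and j < n:
        if A[i] <= A[j]:       # <= keeps the merge stable
            B.append(A[i]); i += 1
        else:
            B.append(A[j]); j += 1
    B.extend(A[i:m])           # leftover of the first run, if any
    B.extend(A[j:n])           # leftover of the second run, if any
    A[:] = B                   # copy B[0..n-1] back to A[0..n-1]

def merge_sort(A):
    """Sort list A in place by recursive merge sort."""
    n = len(A)
    if n <= 1:
        return
    m = n // 2
    left, right = A[:m], A[m:]
    merge_sort(left)
    merge_sort(right)
    A[:] = left + right        # two sorted halves side by side...
    merge(A, m)                # ...merged into one sorted array
```

Because ties are resolved in favor of the left run (A[i] <= A[j]), this version is stable.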
Analysis of Mergesort

Algorithm: MergeSort(A[0..n-1])
Input Size: n
Basic Operation: Basic operation in Merge()
C(n) = 2 C(n/2) + cn, C(1) = 0, where cn is the basic
operation count of Merge() with input size n.
C(n) = 2 C(n/2) + cn, C(1) = 0
= 2 [2 C(n/4) + cn/2] + cn
= 4 C(n/4) + cn + cn
= 4 [2 C(n/8) + cn/4] + 2·cn
= 2³ C(n/2³) + 3·cn
= 2^i C(n/2^i) + i·cn
C(n/2^i) is C(1) when n/2^i = 1 ⇒ n = 2^i ⇒ i = log₂n
C(n) = n·C(1) + (log₂n)·cn
= cn·log₂n ∈ Θ(n log n)
Algorithm: MergeSort(A[0..n-1])
Input Size: n
Basic Operation: Basic operation in Merge()
C(n) = 2 C(n/2) + cn + 1, C(1) = 1, where cn is the basic
operation count of Merge() with input size n.
C(n) = 2 C(n/2) + cn + 1, C(1) = 1
= 2 [2 C(n/4) + cn/2 + 1] + cn + 1
= 4 C(n/4) + cn + cn + 2 + 1
= 4 [2 C(n/8) + cn/4 + 1] + 2·cn + 2 + 1
= 2³ C(n/2³) + 3·cn + (2³ - 1)
= 2^i C(n/2^i) + i·cn + (2^i - 1)
C(n/2^i) is C(1) when n/2^i = 1 ⇒ n = 2^i ⇒ i = log₂n
C(n) = n·C(1) + (log₂n)·cn + (n - 1)
= 2n - 1 + cn·log₂n ∈ Θ(n log n)
Non-recursive (Bottom-up) Mergesort:
Mergesort:
● Input size n being not a power of 2?
● Scope for parallelism in this algo?
● What’s the basic operation in Merge Sort for Time
Complexity analysis?
● Is Mergesort an in-place sorting algo?
● Is Mergesort a stable sorting algo?
● Implementation of Mergesort in iterative bottom-up
approach skipping the divide stage.
● How far is Mergesort from the theoretical limit of any
comparison-based sorting algorithm?
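The iterative bottom-up variant asked about above can be sketched as follows: merge runs of width 1, 2, 4, … with no explicit divide stage. This is a sketch under the same merge logic as before; note it also handles n that is not a power of 2, since the last run of a pass may be shorter:

```python
def merge_runs(A, lo, mid, hi):
    """Merge the sorted runs A[lo:mid] and A[mid:hi] (half-open) back into A."""
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if A[i] <= A[j]:
            merged.append(A[i]); i += 1
        else:
            merged.append(A[j]); j += 1
    merged.extend(A[i:mid])
    merged.extend(A[j:hi])
    A[lo:hi] = merged

def bottom_up_merge_sort(A):
    """Iterative (bottom-up) merge sort: no divide stage, just widening merges."""
    n = len(A)
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)      # runs at the tail may be short
            hi = min(lo + 2 * width, n)
            merge_runs(A, lo, mid, hi)
        width *= 2
```

Each pass over the array costs Θ(n) and there are ⌈log₂n⌉ passes, matching the Θ(n log n) bound of the recursive version.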
Problem:
Partition an array into two parts where
the left part has elements ≤ pivot element and
the right part has the elements ≥ pivot element.

Eg: 35 33 42 10 14 19 27 44 26 31
Let pivot = 31.

Array partitioned on the pivot element:

14 26 27 19 10 31 42 33 44 35
Partition in the eyes of Ullas Aparanji :)
Algorithm Partition(A[0..n-1])
p ← A[0]
i ← 1, j ← n-1
while(i ≤ j)
while(i ≤ j and A[i] < p) i ← i + 1
while(i ≤ j and A[j] > p) j ← j - 1
if(i ≥ j) break //without this, i = j with A[i] = p would loop forever
swap A[i], A[j]
i ← i + 1
j ← j - 1
swap A[j], A[0]
return j
Idea of Quick Sort
Quick Sort
Algorithm QuickSort(A[0..n-1])
if(n ≤ 1) return
s ← Partition(A[0..n-1])
QuickSort(A[0..s-1])
QuickSort(A[s+1..n-1])
return
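Partition and QuickSort together can be sketched in Python, following the slides' two-pointer scheme with pivot A[lo] (a sketch; the explicit break guards against stalling when both pointers meet on a pivot-equal element):

```python
def partition(A, lo, hi):
    """Two-pointer partition of A[lo..hi] around pivot A[lo];
    returns the pivot's final index."""
    p = A[lo]
    i, j = lo + 1, hi
    while True:
        while i <= j and A[i] < p:   # scan right past small elements
            i += 1
        while i <= j and A[j] > p:   # scan left past large elements
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
        i += 1
        j -= 1
    A[lo], A[j] = A[j], A[lo]        # place the pivot between the two parts
    return j

def quicksort(A, lo=0, hi=None):
    """Sort A[lo..hi] in place by recursive partitioning."""
    if hi is None:
        hi = len(A) - 1
    if lo >= hi:                     # zero or one element: already sorted
        return
    s = quicksort_split = partition(A, lo, hi)
    quicksort(A, lo, s - 1)
    quicksort(A, s + 1, hi)
```

On the slides' example [35, 33, 42, 10, 14, 19, 27, 44, 26, 31], partition places the pivot 35 so that everything to its left is ≤ 35 and everything to its right is ≥ 35.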
Example
Examples of extreme
cases:

Split at the end


1 2 3 4 5 6 7 8 9

Split in the middle


4 2 1 3 6 5 7
Best case:
C(n) = 2 C(n/2) + 1 + cn, C(1) = 1
= 2^i C(n/2^i) + i·cn + (2^i - 1)
C(n) = 2n - 1 + cn·log₂n ∈ Θ(n log n)

Worst case:
C(n) = C(n-1) + 1 + cn, C(1) = 1
= C(n-i) + i + c(n + n-1 + … + n-i+1)
C(n) = 1 + (n-1) + c(n(n+1)/2 - 1) ∈ Θ(n²)

Avg case: C(n) ∈ O(n²)


C(n) ∈ Θ(n log n) ?
Concluding remarks on Quicksort:

Is Quicksort a stable sorting algorithm?


Recursion needs stack space, and skewed recursion needs more
stack space. How do we deal with it?

Binary Search:

Efficient algorithm for searching in a sorted array.


Search for key element K in an Array A having n elements.
Let m = ⌊(n-1)/2⌋
K vs A[0] . . . A[m] . . . A[n-1]

If K = A[m], stop (successful search);


otherwise, continue searching by the same method
in A[0..m-1] if K < A[m]
and in A[m+1..n-1] if K > A[m]
Binary Search:
Algorithm BinarySearchRec(A[0..n-1], K)
if(n ≤ 0)
return -1
m = ⌊n/2⌋
if(K = A[m])
return m
if(K < A[m])
return BinarySearchRec(A[0..m-1], K)
else
return BinarySearchRec(A[m+1..n-1], K)
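The recursive binary search can be sketched in Python using explicit lo/hi bounds instead of subarray slicing (a sketch; returns -1 on an unsuccessful search, as in the pseudocode):

```python
def binary_search(A, K, lo=0, hi=None):
    """Recursive binary search in sorted A; returns an index of K, or -1."""
    if hi is None:
        hi = len(A) - 1
    if lo > hi:                  # empty range: unsuccessful search
        return -1
    m = (lo + hi) // 2
    if K == A[m]:
        return m
    if K < A[m]:
        return binary_search(A, K, lo, m - 1)   # continue in the left half
    return binary_search(A, K, m + 1, hi)       # continue in the right half
```

Each call discards half of the remaining range, giving the C(n) = C(n/2) + 1 recurrence analysed on the next slide.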
Algorithm: BinarySearchRec(A[0..n-1], K)

Input Size: n
Basic Operation: (K = A[m])

Worst case:
C(n) = C(n/2) + 1, C(1) = 1
= C(n/4) + 1 + 1 = C(n/4) + 2
= C(n/8) + 3
= C(n/2⁴) + 4
= C(n/2^i) + i
C(n/2^i) is C(1) when n/2^i = 1 ⇒ n = 2^i ⇒ i = log₂n
C(n) = C(1) + log₂n
= 1 + log₂n ∈ Θ(log n)
Algorithm: BinarySearchRec(A[0..n-1], K)

Input Size: n
Basic Operation: (K = A[m])

Worst case:
C(n) = C(n/2) + 1, C(1) = 1
C(n) = 1 + log₂n ∈ Θ(log n)

Best case: C(n) = 1 ∈ Θ(1)

Avg case: C(n) ∈ O(log n) ∈ Θ(?)


We can prove C(n) ∈ Θ(log n)
Multiplication of Large Integers:
Consider the problem of multiplying two (large) n-digit integers
represented by arrays of their digits such as:
A = 12345678901357986429 B = 87654321284820912836

Brute-Force Strategy:
a₁a₂ … aₙ * b₁b₂ … bₙ produces n rows of partial products
d₁₀d₁₁d₁₂ … d₁ₙ
d₂₀d₂₁d₂₂ … d₂ₙ
… … … … … … …
dₙ₀dₙ₁dₙ₂ … dₙₙ
which are shifted and added.

Eg: 2135 * 4014
    8540        (2135 * 4)
   2135+        (2135 * 1, shifted)
  0000++        (2135 * 0, shifted)
 8540+++        (2135 * 4, shifted)
_________________________________________________
 8569890
Write a brute-force algorithm to multiply two arbitrarily
large (n-digit) integers.

12345678 * 32165487
        86419746
       98765424+
      49382712++
     61728390+++
    74074068++++
   12345678+++++
  24691356++++++
 37037034+++++++
_______________________________________________________________
 397104745215186

Basic Operation: single-digit multiplication
C(n) = n² one-digit multiplications
C(n) ∈ Θ(n²)
Multiplication of Large Integers by Divide-and-Conquer

Idea: To multiply A = 23 and B = 54.


A = (2·10¹ + 3), B = (5·10¹ + 4)
A * B = (2·10¹ + 3) * (5·10¹ + 4)
= 2*5 ·10² + (2*4 + 3*5) ·10¹ + 3*4

For a base value ‘x’,


A = (2·x + 3), B = (5·x + 4)
A * B = (2·x + 3) * (5·x + 4)
= 2*5 ·x² + (2*4 + 3*5) ·x + 3*4
Multiplication of Large Integers by Divide-and-Conquer:

Idea:
To find A * B where A = 2140 and B = 3514,
A = (21·10² + 40), B = (35·10² + 14)

So, A * B = (21·10² + 40) * (35·10² + 14)
= 21*35 ·10⁴ + (21*14 + 40*35) ·10² + 40*14

In general, if A = A₁A₂ and B = B₁B₂ (where A and B are
n-digit, A₁, A₂, B₁ and B₂ are n/2-digit numbers),
A * B = A₁*B₁·10ⁿ + (A₁*B₂ + A₂*B₁)·10^(n/2) + A₂*B₂
Trivial case: When n = 1, just multiply A*B directly.

∴ One multiplication of n-digit integers requires four
multiplications of n/2-digit integers, when n > 1.

Basic operation: single-digit multiplication

C(n) = 4C(n/2), C(1) = 1
∴ C(n) ∈ Θ(n²)
Multiplication of Large Integers by Karatsuba algorithm:

A * B = A₁*B₁·10ⁿ + (A₁*B₂ + A₂*B₁)·10^(n/2) + A₂*B₂

The idea is to decrease the number of multiplications from 4 to 3:


(A₁ + A₂) * (B₁ + B₂) = A₁*B₁ + (A₁*B₂ + A₂*B₁) + A₂*B₂
(A₁*B₂ + A₂*B₁) = (A₁ + A₂) * (B₁ + B₂) - A₁*B₁ - A₂*B₂,
which requires only 3 multiplications at the expense of a few extra
additions/subtractions. Note that we reuse A₁*B₁ and A₂*B₂ one
more time each.
A * B = A₁*B₁·10ⁿ +
[(A₁ + A₂) * (B₁ + B₂) - A₁*B₁ - A₂*B₂] ·10^(n/2) + A₂*B₂
Multiplication of Large Integers by Karatsuba algorithm:

A = A₁A₂, B = B₁B₂
It needs three n/2-digit multiplications:
P₁ = A₁ * B₁,
P₂ = A₂ * B₂ and
P₃ = (A₁+A₂) * (B₁+B₂)

A * B = A₁*B₁·10ⁿ +
((A₁ + A₂) * (B₁ + B₂) - A₁*B₁ - A₂*B₂) ·10^(n/2) + A₂*B₂
is equivalent to:
A * B = P₁·10ⁿ + (P₃ - P₁ - P₂) ·10^(n/2) + P₂
Multiplication of Large Integers by Karatsuba algorithm:

Algorithm Karatsuba(a[0..n-1], b[0..n-1])


if(n = 1) return a[0]*b[0]
if(n is odd) n ← n+1 with a leading 0 padded.
m = n/2
a1, a2 = split_at(a, m)
b1, b2 = split_at(b, m)
p1 = Karatsuba(a1[0..m-1], b1[0..m-1])
p2 = Karatsuba(a2[0..m-1], b2[0..m-1])
p3 = Karatsuba((a1+a2)[0..m], (b1+b2)[0..m])
return (p1·10ⁿ + (p3-p1-p2)·10^m + p2)[0..2n-1]
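A Python sketch of Karatsuba that operates on Python integers rather than digit arrays (an assumption for brevity; divmod by 10^m here plays the role of split_at in the pseudocode):

```python
def karatsuba(x, y):
    """Multiply non-negative integers with 3 recursive multiplications
    instead of 4 (Karatsuba's trick)."""
    if x < 10 or y < 10:                 # single-digit base case
        return x * y
    n = max(len(str(x)), len(str(y)))
    m = n // 2                           # split off m low-order digits
    high_x, low_x = divmod(x, 10 ** m)   # x = high_x * 10^m + low_x
    high_y, low_y = divmod(y, 10 ** m)
    p1 = karatsuba(high_x, high_y)                   # A1 * B1
    p2 = karatsuba(low_x, low_y)                     # A2 * B2
    p3 = karatsuba(high_x + low_x, high_y + low_y)   # (A1+A2) * (B1+B2)
    # A * B = P1*10^(2m) + (P3 - P1 - P2)*10^m + P2
    return p1 * 10 ** (2 * m) + (p3 - p1 - p2) * 10 ** m + p2
```

On the earlier example, karatsuba(2140, 3514) splits into (21, 40) and (35, 14) and recombines the three products exactly as the P₁, P₂, P₃ identity prescribes.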
Time complexity of the Karatsuba algorithm:

C(n) = 3C(n/2), C(1) = 1


C(n) = 3^(log₂n) = n^(log₂3) ≈ n^1.585
C(n) ∈ Θ(n^(log₂3)) ≈ Θ(n^1.585)

The number of single-digit multiplications needed to multiply
two 1024-digit (n = 1024 = 2¹⁰) numbers is:
Karatsuba algorithm requires 3^(log₂n) = 3¹⁰ = 59,049;
the classical D-n-C algorithm requires 4^(log₂n) = 4¹⁰ = 1,048,576.
Multiplication of Large Integers by Karatsuba algorithm:
Matrix Multiplication:

Multiplication of two 2×2 matrices requires


8 element-level multiplications and
4 element-level additions.
Matrix Multiplication by Divide-and-Conquer strategy:
Let A and B be two n×n matrices where n is a power of 2. (If
n is not a power of 2, the matrices can be padded with rows and
columns of zeros.) We can divide A, B and their product C into
four n/2 × n/2 submatrices each as follows:
Asymptotic Efficiency of
Strassen’s Matrix Multiplication:
Divide-n-Conquer algorithms:
Master Theorem of Divide-n-Conquer:
T(n) = a T(n/b) + c·f(n) where T(1) = c and f(n) ∈ Θ(n^d), d ≥ 0
If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))
Examples:
Array Sum: T(n) = 2T(n/2) + 1 ∈ ?
Mergesort: T(n) = 2T(n/2) + n ∈ ?
Bin Search: T(n) = T(n/2) + 1 ∈ ?
T(n) = 4T(n/2) ∈ ?, T(n) = 3T(n/2) ∈ ?
T(n) = 3T(n/2) + n ∈ ?, T(n) = 3T(n/2) + n² ∈ ?
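One way to check such exercises mechanically: a small helper (master_case is a hypothetical name, not a standard function) that compares a with b^d and reports the matching case of the theorem:

```python
from math import log

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the master theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"              # cost dominated by f(n)
    if a == b ** d:
        return f"Theta(n^{d} log n)"        # costs balanced across levels
    return f"Theta(n^{log(a, b):.3f})"      # cost dominated by the leaves
```

For example, master_case(2, 2, 1) classifies Mergesort's recurrence as Θ(n^1 log n), and master_case(1, 2, 0) classifies binary search's as Θ(n^0 log n), i.e. Θ(log n).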
Remarks on the Master Theorem:

T(n) = a T(n/b) + c·f(n) where T(1) = c and f(n) ∈ Θ(n^d), d ≥ 0


If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))

● When f(n) ∉ Θ(n^d)
● When division is not uniform
● Values of c and n₀
Binary Tree: a divide-and-conquer-ready data structure.
A null node is a binary tree, and a non-null node having a left
and a right binary tree is a binary tree.
Q: Write an algorithm to find the height of a binary tree,
where the height of a binary tree is the length of the longest
path from the root to a leaf.
E.g.: Height of a tree with only a root node = 0
Height of a null tree = -1
Input Size: n(T), number of nodes in T
Basic Operation: Addition
C(n(T)) = C(n(T_L)) + C(n(T_R)) + 1, C(0) = 0
= n(T) ∈ Θ(n(T))

Basic Operation: (T = ∅), the null-tree check
C(n(T)) = C(n(T_L)) + C(n(T_R)) + 1, C(0) = 1
= 2n(T) + 1 ∈ Θ(n(T))
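The height computation can be sketched in Python; the Node class here is an assumed minimal representation (an absent child is None, playing the role of the null tree):

```python
class Node:
    """Binary tree node; an absent child is None (the null tree)."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(t):
    """Length of the longest root-to-leaf path; the null tree has height -1."""
    if t is None:           # base case: T = null tree
        return -1
    return 1 + max(height(t.left), height(t.right))
```

With these conventions a lone root node has height 0, matching the example above.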
Binary Tree
Traversals:
● Pre-order
● In-order
● Post-order
Q: Write an algorithm to count the number of nodes in a binary
tree.

Algorithm CountNodes(T)
//Counts number of nodes in the binary tree
//Input: Binary tree T
//Output: Number of nodes in T

Q: Write an algorithm to count the number of leaf-nodes in a
binary tree.

Algorithm CountLeafNodes(T)
//Counts number of leaf-nodes in the binary tree
//Input: Binary tree T
//Output: Number of leaf-nodes in T
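Possible solution sketches for both exercises, using the same assumed minimal Node representation (an absent child is None):

```python
class Node:
    """Binary tree node; an absent child is None."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def count_nodes(t):
    """Total number of nodes; the null tree contributes 0."""
    if t is None:
        return 0
    return 1 + count_nodes(t.left) + count_nodes(t.right)

def count_leaf_nodes(t):
    """Number of leaf nodes, i.e. nodes with no children."""
    if t is None:
        return 0
    if t.left is None and t.right is None:
        return 1
    return count_leaf_nodes(t.left) + count_leaf_nodes(t.right)
```

Both follow the binary tree's divide-and-conquer structure: solve for the left and right subtrees, then combine with an addition.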

</ End of Divide-and-Conquer >
