
DESIGN AND ANALYSIS OF ALGORITHMS

UNIT-III DIVIDE & CONQUER


Divide and Conquer: General Method, applications – Binary Search, Quick Sort, Merge Sort,
Strassen's Matrix Multiplication.
-*-*-*-
This technique solves a problem by dividing it into one or more subproblems of smaller
size, recursively solving each subproblem, and then merging the solved subproblems to
produce a solution to the original problem.
This paradigm involves 3 steps at each level of recursion:

1) Divide: divide the problem into a number of subproblems.

2) Conquer: conquer the subproblems by solving them recursively. If a subproblem is
small enough, solve it directly.

3) Combine: combine the solutions of the subproblems into the solution for the original
problem.

Let P be a problem to be solved. Let it be divided into k smaller problems
P1, P2, P3, ……, Pk, each of the same type as the original problem. Let n be the size of P,
and let n1, n2, n3, …, nk be the sizes of P1, P2, …, Pk.

The general logic (Control abstraction) code for the divide and conquer strategy can
be expressed as

Algorithm DC(P)
{
    if P is too small then
        return solution of P;
    else
    {
        Divide P into smaller problems P1, P2, ..., Pk, k >= 1;
        // Apply DC to each of the subproblems
        return combine( DC(P1), DC(P2), DC(P3), ..., DC(Pk) );
    }
}
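The control abstraction can be sketched in Python as follows (an illustrative skeleton, not part of the original notes; the hooks `small`, `solve_directly`, `divide` and `combine` are hypothetical names that each concrete problem must supply):

```python
# Generic divide-and-conquer skeleton mirroring Algorithm DC(P).
def dc(problem, small, solve_directly, divide, combine):
    if small(problem):
        return solve_directly(problem)
    subproblems = divide(problem)          # P1, P2, ..., Pk (k >= 1)
    solutions = [dc(p, small, solve_directly, divide, combine)
                 for p in subproblems]
    return combine(solutions)

# Example use: summing a list by repeatedly halving it.
total = dc([3, 1, 4, 1, 5, 9],
           small=lambda p: len(p) <= 1,
           solve_directly=lambda p: p[0] if p else 0,
           divide=lambda p: [p[:len(p)//2], p[len(p)//2:]],
           combine=sum)
# total -> 23
```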

Let T(n) be the time required to find the solution for the given problem. Let g(n) be
the time required to solve the problem if n is small.
Let f(n) be the time required for dividing the problem P into smaller sub problems.
Then the computing time can be expressed as recursive relation.

T(n) = g(n)                                   if n is small
     = T(n1) + T(n2) + ... + T(nk) + f(n)     otherwise

The complexity of many divide and conquer algorithms is given by recurrence relation.

T(n) = T(1)             when n = 1
     = aT(n/b) + f(n)   when n > 1

Where a and b are known constants. We assume that T(1) is known and n is a power
of b, i.e. n = b^k.
One of the methods for solving any such recurrence relation is called the substitution
method. This method repeatedly makes substitutions for each occurrence of the
function T in the right hand side until all such occurrences disappear.
Dr.DSK DAA UNIT-III Divide & Conquer Page 1
Consider the case in which a=2 and b=2. Let T(1)=2 and f(n)=n. We have
T(n) = 2T(n/2)+n
T(n) = 2[2T(n/4)+n/2]+n
T(n) = 4T(n/4)+2n
T(n) = 4[2T(n/8)+n/4]+2n
T(n) = 8T(n/8)+3n
:
:
In general we see that T(n) = 2^i T(n/2^i) + i·n for any 1 <= i <= log2 n. In particular,
choosing i = log2 n gives T(n) = 2^(log2 n) T(n/2^(log2 n)) + n log2 n. Thus
T(n) = n·T(1) + n log2 n = n log2 n + 2n.
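The closed form can be checked numerically; the following small script (an illustrative check, assuming n is a power of 2) evaluates the recurrence directly and compares it with n log2 n + 2n:

```python
import math

# T(n) = 2 if n == 1, else 2*T(n/2) + n, for n a power of 2.
def T(n):
    return 2 if n == 1 else 2 * T(n // 2) + n

# Closed form derived by substitution: n*log2(n) + 2n.
for k in range(0, 11):
    n = 2 ** k
    assert T(n) == n * int(math.log2(n)) + 2 * n
```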
Example 1) Solve the following recurrence relation
T(n) = 2              if n = 1
     = 2T(n/2) + n    if n > 1

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n          (putting n/2 for n in T(n))
     = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n
     = 8T(n/8) + 3n
     :
     :
     = 2^k T(n/2^k) + kn
     = n·T(1) + n log2 n             (put n = 2^k ⇒ k = log2 n)
     = n log2 n + 2n                 (∵ T(1) = 2)

Example 2) Solve the following recurrence relation
T(n) = 2T(n/2) + 2 if n > 1, given T(1) = 0, T(2) = 1.

T(n) = 2T(n/2) + 2
     = 2[2T(n/4) + 2] + 2
     = 2^2 T(n/4) + 2^2 + 2
     = 2^2 [2T(n/8) + 2] + 2^2 + 2
     = 2^3 T(n/2^3) + 2^3 + 2^2 + 2
     :
     :
     = 2^(k-1) T(n/2^(k-1)) + 2^(k-1) + ... + 2^2 + 2
     = (n/2)·T(2) + (2^k - 2)        (put n = 2^k ⇒ k = log2 n)
     = (n/2)·(1) + n - 2             (∵ T(2) = 1 and 2^k = n)
     = 3n/2 - 2
T(n) = O(n)
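Again the result can be verified numerically (an illustrative check, assuming n is a power of 2):

```python
# T(n) = 2*T(n/2) + 2, with T(1) = 0 and T(2) = 1.
def T(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return 2 * T(n // 2) + 2

# Closed form: 3n/2 - 2, for n a power of 2 (n >= 2).
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == 3 * n // 2 - 2
```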

Applications :
Binary Search,
Finding Minimum and Maximum,
Merge Sort,
Quick Sort,
Strassen’s Matrix multiplication

Quick Sort
The technique involved in quick sort is:
1) Consider the first element of the array as the pivot element.
2) Place the pivot element in the array such that all elements before the pivot element
have values less than the pivot element and all elements after the pivot element have
values greater than the pivot element.
3) At this stage we obtain two subarrays with the pivot element positioned in between.
Apply the same technique to these subarrays and repeat the process until the final
solution is obtained.

The quick sort scheme, developed by C. A. R. Hoare, has the best average behavior among
all the sorting methods.

Let 'a' be an array and 'n' the number of elements in the array to be sorted. Choose an
element 'x' from a specific position within the array, i.e., x = a[1].
Suppose the elements of array 'a' are partitioned such that 'x' is placed into position 'j'
and the following conditions hold:

1. Each of the element in positions ‘1’ through ‘j-1’ is less than or equal to ‘x’.
2. Each of the elements in positions ‘j+1’ through ‘n’ is greater than or equal to ‘x’.

Notice that if these two conditions hold for a particular ‘x’ and ‘j’, ‘x’ is the jth smallest
element of array ‘a’, so that ‘x’ remains in position ‘j’ when the array is completely sorted. If
the foregoing process is repeated with the sub arrays a[1] through a[j-1] and a[j+1] through
a[n] and any sub arrays created by the process in successive iterations, the final result is a
sorted array.

Dr.DSK DAA UNIT-III Divide & Conquer Page 3


Let us illustrate the quick sort with an example.
25 57 48 37 12 92 86 33

First element is placed in its proper position.


12 25 57 48 37 92 86 33

At this point 25 is in its proper position in the array, a[2]; each element before that
position is less than or equal to 25 and each element after that position is greater than
or equal to 25. Since 25 is in its final position, the original problem is decomposed into
the problem of sorting two subarrays:
(12) and (57 48 37 92 86 33)
The array of one element is already sorted. To sort the second sub array the process is
repeated and the sub array is further sub divided. The entire array is now be viewed as
12 25 ( 57 48 37 92 86 33)
Where parentheses enclose the sub arrays that are yet to be sorted.
12 25 ( 48 37 33 ) 57 ( 92 86 )
and further representations are
12 25 ( 37 33 ) 48 57 ( 92 86 )
12 25 ( 33) 37 48 57 86 (92 )
12 25 33 37 48 57 86 92

Unsorted array
25 57 48 37 12 92 86 33
(12) 25 (57 48 37 92 86 33)
12 25 (48 37 33) 57 (92 86)
12 25 (37 33) 48 57 (92 86)
12 25 33 37 48 57 (92 86)
12 25 33 37 48 57 86 92 Sorted array

Note that the final array is sorted. By this time you should have noticed that quick sort
may be defined most conveniently as a recursive procedure.

The average-case efficiency of quick sort is O(n log n).


Assume that the file size is n and n is a power of 2, say n = 2^m, so that m = log2 n.

Also assume that the proper position for the pivot always falls on the exact middle element
of the subarray. In that case there will be approximately n comparisons (actually n-1
comparisons).

The file of size n will be split into 2 subfiles, each of size n/2.
The file of size n/2 will be split into 2 subfiles, each of size n/4.
The file of size n/4 will be split into 2 subfiles, each of size n/8.
The file of size n/8 will be split into 2 subfiles, each of size n/16.
And so on, until the subfiles have size 1.
After splitting m times there are n files of size 1. Thus the total number of comparisons
for the entire sort is approximately
= n + 2·(n/2) + 4·(n/4) + 8·(n/8) + ... + n·(n/n), up to m terms
= n·m
= n·log2 n
= O(n log n)



Algorithm for Quick sort

Algorithm quicksort(a, low, high)
{
    if (low < high)
    {
        pivotloc := partition(a, low, high);
        quicksort(a, low, pivotloc-1);
        quicksort(a, pivotloc+1, high);
    }
}

Algorithm partition(a, low, high)
{
    pivotkey := a[low];
    pivotloc := low;
    for i := low+1 to high do
    {
        if (a[i] < pivotkey)
        {
            swap(a, pivotloc+1, i);
            swap(a, pivotloc, pivotloc+1);
            pivotloc := pivotloc+1;
        }
    }
    return pivotloc;
}

Algorithm swap(a, i, j)
{
    temp := a[i];
    a[i] := a[j];
    a[j] := temp;
}
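A Python rendering of the pseudocode (an illustrative sketch with 0-based indices, mirroring the first-element-pivot partition above):

```python
def partition(a, low, high):
    # First element is the pivot; pivotloc tracks its current position.
    pivotkey = a[low]
    pivotloc = low
    for i in range(low + 1, high + 1):
        if a[i] < pivotkey:
            # Move a[i] next to the pivot, then step the pivot forward.
            a[pivotloc + 1], a[i] = a[i], a[pivotloc + 1]
            a[pivotloc], a[pivotloc + 1] = a[pivotloc + 1], a[pivotloc]
            pivotloc += 1
    return pivotloc

def quicksort(a, low, high):
    if low < high:
        pivotloc = partition(a, low, high)
        quicksort(a, low, pivotloc - 1)
        quicksort(a, pivotloc + 1, high)

data = [25, 57, 48, 37, 12, 92, 86, 33]
quicksort(data, 0, len(data) - 1)
# data is now [12, 25, 33, 37, 48, 57, 86, 92]
```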

In analyzing quick sort, we count only the number of element comparisons C(n). It is easy to
see that the frequency count of other operations is of the same order as C(n). The number of
element comparisons in each call of partition is at most high-low+1. Let r be the total number
of elements in all the calls to partition at any level of recursion; then at most r comparisons
are made at that level. In the worst case, when every partition is maximally unbalanced,
Cw(n) = 1 + 2 + 3 + ... + n = n(n+1)/2
Worst case time complexity: O(n^2)
Average case time complexity: O(n log n)

Binary Search
Divide and Conquer Method can be used to solve this problem.

Recursive Algorithm for Binary Search : Divide and Conquer Method


Algorithm Binsearch(a,low,high,x)
{
if (low=high) then
{ if (x=a[low]) then return low;
else return -1;
}
else
{
mid := (low+high)/2;
if (x=a[mid]) then return mid;
else
if (x<a[mid]) then
return binsearch(a,low,mid-1,x);
else
return binsearch(a,mid+1,high,x);
}
}
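A Python rendering of the recursive algorithm (an illustrative sketch with 0-based indices; the low > high guard replaces the low = high base case so that empty ranges are also handled):

```python
def binsearch(a, low, high, x):
    # Returns the index of x in sorted a[low..high], or -1 if absent.
    if low > high:          # empty range: unsuccessful search
        return -1
    mid = (low + high) // 2
    if x == a[mid]:
        return mid
    if x < a[mid]:
        return binsearch(a, low, mid - 1, x)
    return binsearch(a, mid + 1, high, x)

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
# Positions here are 0-based, one less than in the trace tables below.
found = binsearch(a, 0, len(a) - 1, 151)   # -> 13
missing = binsearch(a, 0, len(a) - 1, -14) # -> -1
```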



Conventional Method: Iterative algorithm for Binary Search
Algorithm binary_search(a,n,x)
// a is an array of size n
{
low := 1;
high := n;
pos := 0;
while ( low<=high)
{
mid:=(low+high)/2;
if (x = a[mid]) then
{ pos := mid; break; }
else
if (x > a[mid]) then
low := mid+1;
else
high := mid – 1;
}
return pos;
}

Example)
Position 1 2 3 4 5 6 7 8 9 10 11 12 13 14
value -15 -6 0 7 9 23 54 82 101 112 125 131 142 151

If x=151:
Low   High   Mid   Comparison (x : a[mid])
1     14     7     151 > 54
8     14     11    151 > 125
12    14     13    151 > 142
14    14     14    151 = 151   Found

If x=-14:
Low   High   Mid   Comparison (x : a[mid])
1     14     7     -14 < 54
1     6      3     -14 < 0
1     2      1     -14 > -15
2     2      2     -14 < -6
2     1                        Not found (low > high)

If x=9:
Low   High   Mid   Comparison (x : a[mid])
1     14     7     9 < 54
1     6      3     9 > 0
4     6      5     9 = 9   Found



Binary decision Tree for binary search when n=14

No element requires more than 4 comparisons to be found.


The number of elements in the array is taken as a power of 2, i.e. n = 2^m.
log2 n = log2(2^m) = m · log2 2 = m     (since log2 2 = 1)
so m = log2 n.
The maximum number of comparisons is <= m.
Time complexity of a successful search
Best case = O(1)
Average case = O(log n)
Worst Case = O (log n)
Time complexity of an unsuccessful search = O (log n)

MERGE SORT
Worst Case time complexity O(n log(n))
Assume that the elements are to be sorted in non-decreasing order, and that the array
contains n elements a[1], a[2], ..., a[n]. Imagine them split into two subarrays:

a[1],a[2] ….a[n/2]
a[n/2+1]……a[n]

Each set is individually sorted and the resulting sorted sequences are merged to produce a
single sorted sequence of n elements. Thus we have another ideal example of divide and
conquer strategy in which the splitting is into two equal sized sets and the combining
operation is the merging of two sorted sets into one.

Example
A[1:10] = (310, 285, 179, 652, 351, 423, 861, 254, 450, 520)
A[1:5]=( 310, 285, 179, 652, 351)
A[6:10]=( 423, 861, 254, 450, 520)



Tree of calls of Merge Sort

Now the merge begins. Note that no movement of data has yet taken place. A record of the
sub arrays is implicitly maintained by the recursive mechanism.

(310 | 285 | 179 | 652, 351 | 423, 861, 254, 450, 520)
(285, 310 | 179 | 652, 351 | 423, 861, 254, 450, 520)
(179, 285, 310 | 652, 351 | 423, 861, 254, 450, 520)
(179, 285, 310 | 351, 652 | 423, 861, 254, 450, 520)
(179, 285, 310, 351, 652 | 423, 861, 254, 450, 520)
(179, 285, 310, 351, 652 | 423 | 861 | 254 | 450, 520)
(179, 285, 310, 351, 652 | 423, 861 | 254 | 450, 520)
(179, 285, 310, 351, 652 | 254, 423, 861 | 450, 520)
(179, 285, 310, 351, 652 | 254, 423, 450, 520, 861)
Final
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)



Algorithm for Merge Sort

Algorithm mergesort(low, high)
// a[low:high] is a global array
{
    if (low < high) then
    {
        mid := (low+high)/2;
        mergesort(low, mid);
        mergesort(mid+1, high);
        // combine the solutions
        merge(low, mid, high);
    }
}

Algorithm merge(low, mid, high)
// merges the sorted subarrays a[low:mid] and a[mid+1:high] using auxiliary array b
{
    h := low; i := low; j := mid+1;
    while ((h <= mid) and (j <= high)) do
    {
        if (a[h] <= a[j]) then
        {
            b[i] := a[h];
            h := h+1;
        }
        else
        {
            b[i] := a[j];
            j := j+1;
        }
        i := i+1;
    }
    if (h > mid) then
        for k := j to high do
        {
            b[i] := a[k];
            i := i+1;
        }
    else
        for k := h to mid do
        {
            b[i] := a[k];
            i := i+1;
        }
    for k := low to high do
        a[k] := b[k];
}
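The pseudocode translates to Python as follows (an illustrative sketch with 0-based indices, using a local buffer `b` in place of the global auxiliary array):

```python
def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high] via buffer b.
    b = []
    h, j = low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])   # copy whichever run is unfinished
    b.extend(a[j:high + 1])
    a[low:high + 1] = b      # copy the merged result back

def mergesort(a, low, high):
    if low < high:
        mid = (low + high) // 2
        mergesort(a, low, mid)
        mergesort(a, mid + 1, high)
        merge(a, low, mid, high)

data = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
mergesort(data, 0, len(data) - 1)
# data is now [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```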

Tree of calls of Merge



Time Complexity of Merge Sort

If the time for the merging operation is proportional to n, say cn, then the computing time
for merge sort is described by the recurrence relation

T(n) = a              if n = 1
     = 2T(n/2) + cn   if n > 1

Where n is a power of 2 i.e n=2k. We can solve this equation by successive substitutions

T(n) = 2T(n/2)+cn -------------------------------------------1


Replace n with n/2
T(n/2) = 2T(n/4)+cn/2 --------------------------------------------2
Substituting T(n/2) in equation 1
T(n) = 2(2T(n/4)+cn/2)+cn
T(n) = 4T(n/4)+2cn ---------------------------------------------3
Replace n with n/4 in eq 1
T(n/4) = 2T(n/8)+cn/4 --------------------------------------------4
Substituting T(n/4) in equation 3
T(n) = 4(2T(n/8)+cn/4)+2cn
T(n) = 8T(n/8)+4cn ---------------------------------------------5
:
:
:
:
T(n) = 2^k T(1) + k·c·n        (n = 2^k)
     = n·a + c·n·log2 n        (k = log2 n)
     = an + cn log2 n
It is easy to see that if 2^k <= n < 2^(k+1), then T(n) <= T(2^(k+1)). Hence T(n) = O(n log n).

FINDING MINIMUM AND MAXIMUM


Let us consider another simple problem that can be solved by the divide and conquer
technique. The problem is to find the maximum and minimum items in a set of n elements.

Conventional method- Iterative Algorithm

Algorithm minmax(a,n,min,max)
{
min:=max:=a[1];
for i:= 2 to n do
{
if a[i] < min then min:=a[i];
if a[i] > max then max:=a[i];
}
}

In analyzing the time complexity of this algorithm we once again concentrate on the number
of element comparisons. In the conventional method there are 2(n-1) element comparisons in
the best, average and worst cases. The time complexity is O(n).



Recursive algorithm- Divide and Conquer Method
Algorithm minmax(lowpos,highpos,min,max)
{
if (lowpos = highpos) then min := max := a[lowpos];
else
if (lowpos=highpos-1) then
{
if (a[lowpos]<a[highpos]) then
{
min:=a[lowpos];
max:=a[highpos];
}
else
{
min := a[highpos];
max:= a[lowpos];
}
}
else
{
mid := (lowpos+highpos)/2;
minmax(lowpos,mid,min,max);
minmax(mid+1,highpos,min1,max1);
// combine
if (max < max1) then max := max1;
if (min > min1) then min := min1;
}
}

This is a recursive algorithm that finds the minimum and maximum of the set of elements
(a[1],a[2],a[3]….a[n]}. The situation of set sizes one ( lowpos = highpos ) and two
(lowpos=highpos-1) are handled separately. For sets containing more than two elements the
midpoint is determined and two new sub problems are generated. When the min and max of
these two sub problems are determined, the two max are compared and the two min are
compared to achieve the solutions for the entire set.

The number of comparisons needed by minmax() is T(n). The recurrence relation is
T(n) = T(n/2) + T(n/2) + 2    if n > 2
     = 1                      if n = 2
     = 0                      if n = 1

T(n) = 2T(n/2) + 2
     = 2[2T(n/4) + 2] + 2
     = 4T(n/4) + 4 + 2
     :
     :
     = 2^(k-1) T(2) + Σ 2^i          (summing over 1 <= i <= k-1)
     = 2^(k-1) + 2^k - 2
     = 3n/2 - 2                      (n = 2^k)



Note that 3n/2 - 2 is the best, average and worst case number of comparisons when n is a
power of 2.

Example ) Suppose we simulate the minmax on the following nine elements of array a:

Position 1 2 3 4 5 6 7 8 9
Array Value 22 13 -5 -8 15 60 17 31 47
minmax(1,9,min,max)
    minmax(1,5,min,max)
        minmax(1,3,min,max)
            minmax(1,2,min,max)
            minmax(3,3,min,max)
        minmax(4,5,min,max)
    minmax(6,9,min,max)
        minmax(6,7,min,max)
        minmax(8,9,min,max)

Tree of recursive calls of minmax (indentation shows the recursion)



STRASSEN’S MATRIX MULTIPLICATION
Let A and B be two n x n matrices. The product matrix C = AB is also an n x n matrix, where
C[i,j] = Σ A[i,k]·B[k,j], 1 <= k <= n
for all i and j between 1 and n.
To compute C[i,j] using this formula, we need n multiplications. As C has n^2 elements, the
time for the resulting matrix multiplication algorithm, which we refer to as the conventional
method, is O(n^3).
The divide and conquer strategy suggests another way to compute the product of two n x n
matrices. We assume that n is a power of 2, that is, there exists a non-negative integer k
such that n = 2^k. In case n is not a power of 2, enough rows and columns of zeroes can be
added to both A and B.
Imagine that A and B are each partitioned into four square submatrices, each of dimension
n/2 x n/2.
Then the product AB can be computed by using the formula for the product of 2 x 2 matrices:

| A11  A12 |   | B11  B12 |   | C11  C12 |
|          | x |          | = |          |
| A21  A22 |   | B21  B22 |   | C21  C22 |

C11 = A11 x B11 + A12 x B21
C12 = A11 x B12 + A12 x B22
C21 = A21 x B11 + A22 x B21
C22 = A21 x B12 + A22 x B22
To compute AB, we need to perform eight multiplications of n/2 x n/2 matrices and four
additions of n/2 x n/2 matrices. Since these matrices can be added in time cn^2 for some
constant c, the overall computing time T(n) is given by the recurrence relation
T(n) = b                if n <= 2
     = 8T(n/2) + cn^2   if n > 2
where b and c are constants. This recurrence again yields T(n) = O(n^3), so no improvement
over the conventional method.
To improve on the conventional method, we want fewer multiplications even at the cost of
more additions, because matrix multiplications are more expensive (O(n^3)) than matrix
additions (O(n^2)).
Volker Strassen discovered a way to compute the Cij's of the equations above using only 7
multiplications and 18 additions or subtractions. This method involves first computing the
seven n/2 x n/2 matrices P, Q, R, S, T, U and V, and then computing the Cij's:

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
The resulting recurrence relation for T(n) is
T(n) = b                if n <= 2
     = 7T(n/2) + an^2   if n > 2

T(n) = O(n^(log2 7)) = O(n^2.81)
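The seven products and their combinations can be sketched in Python for matrices stored as lists of lists (an illustrative recursive implementation assuming n is a power of 2; `add`, `sub` and `quad` are helper names introduced here, not part of the original notes):

```python
def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    # A, B: n x n matrices, n a power of 2.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    def quad(X):  # partition X into four n/2 x n/2 blocks
        return ([r[:m] for r in X[:m]], [r[m:] for r in X[:m]],
                [r[:m] for r in X[m:]], [r[m:] for r in X[m:]])
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# strassen(A, B) -> [[19, 22], [43, 50]]
```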



EXAMPLE-1:



EXAMPLE-2:



QUESTIONS FROM OLD QUESTION PAPERS

1) Write and explain the control abstraction algorithm of divide and conquer.
2) Design a divide and conquer algorithm for computing the number of levels in a binary tree.
3) Analyze the average case time complexity of quick sort.
4) Write and explain the divide and conquer merge sort algorithm. Also compute its time
complexity.
5) Suggest refinements to merge sort to make it in-place.
6) Explain Strassen's matrix multiplication concept with an example.
7) Write the deletion algorithm of binary search tree.
