
UNIT – II

General Method – Binary Search – Finding the Maximum and Minimum – Merge Sort – Quick
Sort – Selection Sort – Strassen’s Matrix Multiplications.
DIVIDE AND CONQUER
General method
 Given a function to compute on ‘n’ inputs the divide-and-conquer strategy suggests
splitting the inputs into ‘k’ distinct subsets, 1<k<=n, yielding ‘k’ sub problems.
 These sub problems must be solved, and then a method must be found to combine sub
solutions into a solution of the whole.
 If the sub problems are still relatively large, then the divide-and-conquer strategy can
possibly be reapplied.
 Often the sub problems resulting from a divide-and-conquer design are of the same type
as the original problem.
 For those cases the reapplication of the divide-and-conquer principle is naturally
expressed by a recursive algorithm.
 DAndC (Algorithm) is initially invoked as DAndC(P), where 'P' is the problem to be
solved.
 Small(P) is a Boolean-valued function that determines whether the input size is small
enough that the answer can be computed without splitting.
 If so, the function 'S' is invoked.
 Otherwise, the problem P is divided into smaller sub problems.
 These sub problems P1, P2 …Pk are solved by recursive application of D And C.
 Combine is a function that determines the solution to P using the solutions to the 'k' sub
problems.
 If the size of P is n and the sizes of the 'k' sub problems are n1, n2, …, nk, respectively,
then the computing time of DAndC is described by the recurrence relation
T(n) = g(n)                                  when n is small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n)      otherwise
where T(n) → is the time for DAndC on any input of size 'n',
g(n) → is the time to compute the answer directly for small inputs, and
f(n) → is the time for dividing P & combining the solutions to the sub problems.
Algorithm DAndC(P)
{
if small(P) then return S(P);
else
{
divide P into smaller instances P1, P2… Pk, k>=1;
Apply DAndC to each of these sub problems;
return combine (DAndC(P1), DAndC(P2),…….,DAndC(Pk));
}
}
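As a concrete, if trivial, instance of this control abstraction, the following C sketch computes the sum of an array by divide and conquer. The choice of problem and all names here are my own, purely to illustrate the divide / solve / combine pattern; this example is not part of the original notes.

#include <stdio.h>

/* Divide-and-conquer sum of a[lo..hi]:
   Small(P): one element -> return it directly (the role of S(P));
   otherwise divide into two halves, solve recursively, combine with +. */
static int dandc_sum(const int a[], int lo, int hi) {
    if (lo == hi)                  /* Small(P) */
        return a[lo];
    int mid = (lo + hi) / 2;       /* divide */
    return dandc_sum(a, lo, mid)   /* solve the two sub problems and combine */
         + dandc_sum(a, mid + 1, hi);
}

int main(void) {
    int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
    printf("%d\n", dandc_sum(a, 0, 7));   /* prints 31 */
    return 0;
}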
 The complexity of many divide-and-conquer algorithms is given by recurrences of the
form
T(n) = T(1)                n = 1
T(n) = aT(n/b) + f(n)      n > 1
→ Where a & b are known constants.
→ We assume that T(1) is known & 'n' is a power of b (i.e., n = b^k).
 One of the methods for solving any such recurrence relation is called the substitution
method.
 This method repeatedly makes substitutions for each occurrence of the function T in the
right-hand side until all such occurrences disappear.
Example:
1) Consider the case in which a=2 and b=2. Let T(1)=2 & f(n)=n. We have,
T(n) = 2T(n/2)+n
= 2 [2T(n/2/2)+n/2]+n
= [4T(n/4)+n]+n
= 4 T(n/4)+2n
= 4 [2T(n/4/2)+n/4]+2n
= 4 [2T(n/8)+n/4]+2n
= 8 T(n/8)+n+2n
= 8 T(n/8)+3n
*
*
In general, we see that T(n) = 2^i T(n/2^i) + i·n, for any i with 1 <= i <= log n.
→ Corresponding to the choice i = log n,
T(n) = 2^(log n) T(n/2^(log n)) + n log n
= n · T(n/n) + n log n
= n · T(1) + n log n
= 2n + n log n        [since T(1) = 2]
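As a quick check on this result, a short C sketch (illustrative only, not from the original notes) can evaluate the recurrence T(n) = 2T(n/2) + n with T(1) = 2 directly and compare it against the closed form 2n + n log2 n for powers of 2:

#include <stdio.h>
#include <math.h>

/* Evaluate T(n) = 2T(n/2) + n with T(1) = 2, for n a power of 2. */
static long T(long n) {
    if (n == 1) return 2;
    return 2 * T(n / 2) + n;
}

int main(void) {
    for (long n = 1; n <= 1024; n *= 2) {
        double closed = 2.0 * n + n * log2((double)n);
        printf("n=%4ld  recurrence=%6ld  closed form=%6.0f\n", n, T(n), closed);
    }
    return 0;
}

For every power of 2 the two columns agree, as the derivation above predicts.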

BINARY SEARCH
Algorithm BinSearch(a,n,x)
// Given an array a[1:n] of elements in non-decreasing order, n>=0,determine
// whether ‘x’ is present and if so, return ‘j’ such that x=a[j]; else return 0.
{
low:=1; high:=n;
while (low<=high) do
{
mid:=[(low+high)/2];
if (x<a[mid]) then high:=mid-1;
else if (x>a[mid]) then low:=mid+1;
else return mid;
}
return 0;
}
 The algorithm above describes this binary search method. The recursive version BinSrch
has 4 inputs a[], i, l & x.
 It is initially invoked as BinSrch(a,1,n,x).
 The non-recursive version, BinSearch, given above has 3 inputs a, n & x.
 The while loop continues processing as long as there are more elements left to check.
 At the conclusion of the procedure, 0 is returned if x is not present, or 'j' is returned such
that a[j]=x.
 We observe that low & high are integer variables such that each time through the loop
either x is found, or low is increased by at least one, or high is decreased by at least one.
 Thus we have 2 sequences of integers approaching each other, and eventually low
becomes greater than high & causes termination in a finite no. of steps if 'x' is not present.
Example:
1) Let us select the 14 entries.
-15,-6,0,7,9,23,54,82,101,112,125,131,142,151.
→Place them in a[1:14], and simulate the steps Binsearch goes through as it searches for
different values of ‘x’.
→Only the variables, low, high & mid need to be traced as we simulate the algorithm.
→We try the following values for x: 151, -14 and 9 for 2 successful searches & 1 unsuccessful
search.
 Table. Shows the traces of Bin search on these 3 steps.
x = 151     low   high   mid
              1     14     7
              8     14    11
             12     14    13
             14     14    14   → found

x = -14     low   high   mid
              1     14     7
              1      6     3
              1      2     1
              2      2     2
              2      1         → not found (low > high)

x = 9       low   high   mid
              1     14     7
              1      6     3
              4      6     5   → found
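The same search can be written directly in C. The sketch below is my own adaptation (0-indexed, returning -1 rather than 0 when x is absent) and reproduces the three searches traced above:

#include <stdio.h>

/* Iterative binary search on a sorted array a[0..n-1]. */
int bin_search(const int a[], int n, int x) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (x < a[mid])      high = mid - 1;
        else if (x > a[mid]) low = mid + 1;
        else                 return mid;     /* found: return index */
    }
    return -1;                               /* not found */
}

int main(void) {
    int a[] = {-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151};
    int keys[] = {151, -14, 9};
    for (int k = 0; k < 3; k++)
        printf("x = %4d -> index %d\n", keys[k], bin_search(a, 14, keys[k]));
    return 0;
}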

Maximum and Minimum
 Let us consider another simple problem that can be solved by the divide-and-conquer
technique.
 The problem is to find the maximum and minimum items in a set of 'n' elements.
 In analyzing the time complexity of this algorithm, we once again concentrate on the no.
of element comparisons.
 More importantly, when the elements in a[1:n] are polynomials, vectors, very large
numbers, or strings of characters, the cost of an element comparison is much higher than
the cost of the other operations.
 Hence, the time is determined mainly by the total cost of the element comparisons.
Algorithm straightMaxMin(a,n,max,min)
// set max to the maximum & min to the minimum of a[1:n]
{
max:=min:=a[1];
for i:=2 to n do
{
if(a[i]>max) then max:=a[i];
if(a[i]<min) then min:=a[i];
}
}
Algorithm: Straight forward Maximum & Minimum
 StraightMaxMin requires 2(n-1) element comparisons in the best, average & worst cases.
 An immediate improvement is possible by realizing that the comparison a[i]<min is
necessary only when a[i]>max is false.
 Hence we can replace the contents of the for loop by,
If(a[i]>max) then max:=a[i];
Else if (a[i]<min) then min:=a[i];
 Now the best case occurs when the elements are in increasing order.
→ The no. of element comparisons is (n-1).
 The worst case occurs when the elements are in decreasing order.
→ The no. of element comparisons is 2(n-1).
 The average no. of element comparisons is less than 2(n-1).
 On the average, a[i] is greater than max half the time, and so the average no. of comparisons is
3n/2 - 1 (a C sketch of this improved loop follows).
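The following C sketch of the improved loop is illustrative only; the comparison counter is something I have added to make the counts discussed above visible, and is not part of the original algorithm.

#include <stdio.h>

/* Improved straightforward max/min: a[i] < min is tested only
   when a[i] > max fails.  comparisons counts element comparisons. */
void straight_max_min(const int a[], int n, int *max, int *min, int *comparisons) {
    *max = *min = a[0];
    *comparisons = 0;
    for (int i = 1; i < n; i++) {
        (*comparisons)++;
        if (a[i] > *max) {
            *max = a[i];
        } else {
            (*comparisons)++;
            if (a[i] < *min) *min = a[i];
        }
    }
}

int main(void) {
    int increasing[] = {1, 2, 3, 4, 5};   /* best case: n-1 = 4 comparisons   */
    int decreasing[] = {5, 4, 3, 2, 1};   /* worst case: 2(n-1) = 8 comparisons */
    int mx, mn, c;
    straight_max_min(increasing, 5, &mx, &mn, &c);
    printf("increasing: max=%d min=%d comparisons=%d\n", mx, mn, c);
    straight_max_min(decreasing, 5, &mx, &mn, &c);
    printf("decreasing: max=%d min=%d comparisons=%d\n", mx, mn, c);
    return 0;
}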
 A divide- and conquer algorithm for this problem would proceed as follows:
→Let P=(n, a[i] ,……,a[j]) denote an arbitrary instance of the problem.
→Here ‘n’ is the no. of elements in the list (a[i],….,a[j]) and we are interested in finding
the maximum and minimum of the list.
 If the list has more than 2 elements, P has to be divided into smaller instances.
 For example , we might divide ‘P’ into the 2 instances,
 P1=([n/2],a[1],……..a[n/2]) & P2= (n-[n/2],a[[n/2]+1],…..,a[n])
 After having divided ‘P’ into 2 smaller sub problems, we can solve them by recursively
invoking the same divide-and-conquer algorithm.
Algorithm: Recursively Finding the Maximum & Minimum
Algorithm MaxMin (i,j,max,min)
//a[1:n] is a global array, parameters i & j are integers, 1<=i<=j<=n.The effect is
// to set max & min to the largest & smallest value in a[i:j], respectively.
{
if(i=j) then max:= min:= a[i];
else if (i=j-1) then // Another case of small(p)
{
if (a[i]<a[j]) then
{
max:=a[j];
min:=a[i];
}
else
{
max:=a[i];
min:=a[j];
}
}
else
{
// if P is not small, divide P into subproblems. Find where to split the set
mid:=[(i+j)/2];
//solve the subproblems
MaxMin(i,mid,max,min);
MaxMin(mid+1,j,max1,min1);
//combine the solutions
if (max<max1) then max:=max1;
if (min>min1) then min:=min1;
}
}
 The procedure is initially invoked by the statement,
MaxMin(1,n,x,y)
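A C rendering of this recursive algorithm is given below as an illustrative sketch; it uses 0-based indexing and a small result struct in place of the reference parameters max and min, and the names are my own rather than the notes':

#include <stdio.h>

typedef struct { int max, min; } MaxMin;

/* Recursive divide-and-conquer max/min on a[i..j] (inclusive). */
static MaxMin max_min(const int a[], int i, int j) {
    MaxMin r, left, right;
    if (i == j) {                      /* one element */
        r.max = r.min = a[i];
    } else if (i == j - 1) {           /* two elements: one comparison */
        if (a[i] < a[j]) { r.max = a[j]; r.min = a[i]; }
        else             { r.max = a[i]; r.min = a[j]; }
    } else {                           /* split, solve, combine */
        int mid = (i + j) / 2;
        left  = max_min(a, i, mid);
        right = max_min(a, mid + 1, j);
        r.max = (left.max > right.max) ? left.max : right.max;
        r.min = (left.min < right.min) ? left.min : right.min;
    }
    return r;
}

int main(void) {
    int a[] = {22, 13, -5, -8, 15, 60, 17, 31, 47};   /* the 9 elements simulated below */
    MaxMin r = max_min(a, 0, 8);
    printf("max = %d, min = %d\n", r.max, r.min);     /* max = 60, min = -8 */
    return 0;
}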
 Suppose we simulate MaxMin on the following 9 elements
A: [1] [2] [3] [4] [5] [6] [7] [8] [9]
22 13 -5 -8 15 60 17 31 47
 A good way of keeping track of recursive calls is to build a tree by adding a node each
time a new call is made.
 For this algorithm, each node has 4 items of information: i, j, max & min.
 Examining the figure, we see that the root node contains 1 & 9 as the values of i & j
corresponding to the initial call to MaxMin.
 This execution produces 2 new calls to MaxMin, where i & j have the values 1, 5 & 6, 9
respectively, & thus splits the set into 2 subsets of approximately the same size.
 From the tree, we can immediately see that the maximum depth of recursion is 4 (including
the 1st call).
 The numbers in the upper left corner of each node indicate the order in which max &
min are assigned values.
No. of element comparisons:
 If T(n) represents this number, then the resulting recurrence relation is
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2     n > 2
T(n) = 1                           n = 2
T(n) = 0                           n = 1
→ When 'n' is a power of 2, n = 2^k for some positive integer 'k', then
T(n) = 2T(n/2) + 2
= 2(2T(n/4)+2) + 2
= 4T(n/4) + 4 + 2
*
*
= 2^(k-1) T(2) + 2^(k-1) + 2^(k-2) + … + 2
= 2^(k-1) + 2^k - 2
= n/2 + n - 2
T(n) = (3n/2) - 2
*Note that (3n/2) - 2 is the best, average, and worst-case no. of comparisons when 'n' is a power
of 2.

MERGE SORT
 As another example of divide-and-conquer, we investigate a sorting algorithm that has the
nice property that in the worst case its complexity is O(n log n).
 This algorithm is called merge sort.
 We assume throughout that the elements are to be sorted in non-decreasing order.
 Given a sequence of 'n' elements a[1],…,a[n], the general idea is to imagine them split
into 2 sets a[1],…,a[n/2] and a[[n/2]+1],…,a[n].
 Each set is individually sorted, and the resulting sorted sequences are merged to produce
a single sorted sequence of ‘n’ elements.
 Thus, we have another ideal example of the divide-and-conquer strategy in which the
splitting is into 2 equal-sized sets & the combining operation is the merging of 2 sorted
sets into one.
Algorithm For Merge Sort:
Algorithm MergeSort(low,high)
//a[low:high] is a global array to be sorted.Small(P) is true if there is only one
// element to sort. In this case the list is already sorted.
{
if (low<high) then //if there are more than one element
{
//Divide P into subproblems
//find where to split the set
mid = [(low+high)/2];
//solve the subproblems.
mergesort (low,mid);
mergesort(mid+1,high);
//combine the solutions .
merge(low,mid,high);
}
}
Algorithm: Merging 2 sorted subarrays using auxiliary storage.
Algorithm merge(low,mid,high)
//a[low:high] is a global array containing two sorted subsets in a[low:mid]
//and in a[mid+1:high].The goal is to merge these 2 sets into a single set residing in
//a[low:high].b[] is an auxiliary global array.
{
h=low; i=low; j=mid+1;
while ((h<=mid) and (j<=high)) do
{
if (a[h]<=a[j]) then
{
b[i]=a[h];
h = h+1;
}
else
{
b[i]= a[j];
j=j+1;
}
i=i+1;
}
if (h>mid) then
for k=j to high do
{
b[i]=a[k];
i=i+1;
}
else
for k=h to mid do
{
b[i]=a[k];
i=i+1;
}
for k=low to high do a[k] = b[k];
}
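For reference, here is a compact C sketch of the same scheme, 0-indexed and using a caller-supplied auxiliary array b[], mirroring the pseudocode above. It is an illustration of the technique rather than the notes' own code:

#include <stdio.h>

/* Merge the sorted runs a[low..mid] and a[mid+1..high] using b[] as scratch. */
static void merge(int a[], int b[], int low, int mid, int high) {
    int h = low, i = low, j = mid + 1;
    while (h <= mid && j <= high)
        b[i++] = (a[h] <= a[j]) ? a[h++] : a[j++];
    while (h <= mid)  b[i++] = a[h++];       /* copy whichever run remains */
    while (j <= high) b[i++] = a[j++];
    for (int k = low; k <= high; k++) a[k] = b[k];
}

static void merge_sort(int a[], int b[], int low, int high) {
    if (low < high) {
        int mid = (low + high) / 2;          /* divide */
        merge_sort(a, b, low, mid);          /* solve the subproblems */
        merge_sort(a, b, mid + 1, high);
        merge(a, b, low, mid, high);         /* combine */
    }
}

int main(void) {
    int a[] = {310, 285, 179, 652, 351, 423, 861, 254, 450, 520};  /* the example below */
    int b[10];
    merge_sort(a, b, 0, 9);
    for (int k = 0; k < 10; k++) printf("%d ", a[k]);
    printf("\n");
    return 0;
}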
 Consider the array of 10 elements a[1:10] =(310, 285, 179, 652, 351, 423, 861, 254, 450,
520)
 Algorithm Mergesort begins by splitting a[] into 2 sub arrays each of size five (a[1:5] and
a[6:10]).
 The elements in a[1:5] are then split into 2 sub arrays of size 3 (a[1:3]) and 2 (a[4:5]).
 Then the items in a[1:3] are split into sub arrays of size 2 (a[1:2]) and 1 (a[3:3]).
 The 2 values in a[1:2] are split a final time into one-element sub arrays, and now the
merging begins.

(310| 285| 179| 652, 351| 423, 861, 254, 450, 520)
→Where vertical bars indicate the boundaries of sub arrays.
→Elements a[1] and a[2] are merged to yield,
(285, 310|179|652, 351| 423, 861, 254, 450, 520)
→Then a[3] is merged with a[1:2] and
(179, 285, 310| 652, 351| 423, 861, 254, 450, 520)
→Next, elements a[4] & a[5] are merged.
(179, 285, 310| 351, 652 | 423, 861, 254, 450, 520)
→And then a[1:3] & a[4:5]
(179, 285, 310, 351, 652| 423, 861, 254, 450, 520)
→Repeated recursive calls are invoked producing the following sub arrays.
(179, 285, 310, 351, 652| 423| 861| 254| 450, 520)
→Elements a[6] &a[7] are merged.
→Then a[8] is merged with a[6:7]
(179, 285, 310, 351, 652| 254,423, 861| 450, 520)
→Next a[9] &a[10] are merged, and then a[6:8] & a[9:10]
(179, 285, 310, 351, 652| 254, 423, 450, 520, 861 )
→At this point there are 2 sorted sub arrays & the final merge produces the fully sorted result.
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)
 If the time for the merging operations is proportional to ‘n’, then the computing time for
merge sort is described by the recurrence relation.
T(n) = a                  n = 1, 'a' a constant
T(n) = 2T(n/2) + cn       n > 1, 'c' a constant
→ When 'n' is a power of 2, n = 2^k, we can solve this equation by successive substitution.
T(n) = 2(2T(n/4) + cn/2) + cn
= 4T(n/4) + 2cn
= 4(2T(n/8) + cn/4) + 2cn
*
*
= 2^k T(1) + kcn
= an + cn log n
→ It is easy to see that if 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)). Therefore, T(n) = O(n log n).
QUICK SORT
 The divide-and-conquer approach can be used to arrive at an efficient sorting method
different from merge sort.
 In merge sort, the file a[1:n] was divided at its midpoint into sub arrays which were
independently sorted & later merged.
 In Quick sort, the division into 2 sub arrays is made so that the sorted sub arrays do not
need to be merged later.
 This is accomplished by rearranging the elements in a[1:n] such that a[i]<=a[j] for all i
between 1 & m and all j between (m+1) & n, for some m, 1<=m<=n.
 Thus the elements in a[1:m] & a[m+1:n] can be independently sorted.
 No merge is needed. This rearranging is referred to as partitioning.
 Function partition of Algorithm accomplishes an in-place partitioning of the elements of
a[m:p-1]
 It is assumed that a[p]>=a[m] and that a[m] is the partitioning element. If m=1 & p-1=n,
then a[n+1] must be defined and must be greater than or equal to all elements in a[1:n]
 The assumption that a[m] is the partition element is merely for convenience, other
choices for the partitioning element than the first item in the set are better in practice.
 The function interchange (a,i,j) exchanges a[i] with a[j].
Algorithm: Partition the array a[m:p-1] about a[m]
Algorithm Partition(a,m,p)
//within a[m],a[m+1],…..,a[p-1] the elements are rearranged in such a manner that if
//initially t=a[m],then after completion a[q]=t for some q between m and p-1,a[k]<=t for
//m<=k<q, and a[k]>=t for q<k<p. q is returned. Set a[p]=infinite.
{
v=a[m]; i=m; j=p;
repeat
{
repeat
i=i+1;
until(a[i]>=v);
repeat
j=j-1;
until(a[j]<=v);
if (i<j) then interchange(a,i,j);
}until(i>=j);
a[m]=a[j]; a[j]=v;
return j;
}

Algorithm Interchange(a,i,j)
//Exchange a[i] with a[j]
{
p=a[i];
a[i]=a[j];
a[j]=p;
}
Algorithm: Sorting by Partitioning
Algorithm Quicksort(p,q)
//Sort the elements a[p],….a[q] which reside in the global array a[1:n] into ascending
//order; a[n+1] is considered to be defined and must be >= all the elements in a[1:n]
{
if(p<q) then // If there are more than one element
{
// divide p into 2 subproblems
j=partition(a,p,q+1);
//’j’ is the position of the partitioning element.
//solve the subproblems.
quicksort(p,j-1);
quicksort(j+1,q);
//There is no need for combining solution.
}
}
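The following C sketch mirrors the pseudocode above; it is my own adaptation (0-indexed, with INT_MAX in the slot after the data playing the role of a[n+1]) and reproduces the nine-element example discussed next:

#include <stdio.h>
#include <limits.h>

static void interchange(int a[], int i, int j) {
    int t = a[i]; a[i] = a[j]; a[j] = t;
}

/* Partition a[m..p-1] about v = a[m]; a[p] must be >= every element in the range. */
static int partition(int a[], int m, int p) {
    int v = a[m], i = m, j = p;
    do {
        do { i++; } while (a[i] < v);
        do { j--; } while (a[j] > v);
        if (i < j) interchange(a, i, j);
    } while (i < j);
    a[m] = a[j];
    a[j] = v;
    return j;              /* position of the partitioning element */
}

static void quicksort(int a[], int p, int q) {   /* sorts a[p..q] */
    if (p < q) {
        int j = partition(a, p, q + 1);
        quicksort(a, p, j - 1);
        quicksort(a, j + 1, q);
    }
}

int main(void) {
    int a[10] = {65, 70, 75, 80, 85, 60, 55, 50, 45, INT_MAX};  /* sentinel in a[9] */
    quicksort(a, 0, 8);
    for (int k = 0; k < 9; k++) printf("%d ", a[k]);
    printf("\n");
    return 0;
}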
The function is initially invoked as Partition(a, 1, 10). The element a[1] = 65 is the
partitioning element, and it is found to be the fifth smallest element of the set. The
remaining elements are unsorted but partitioned about a[5] = 65.

Sorted list : 45 50 55 60 65 70 75 80 85

Time complexity
 The average and best-case computing time of quick sort is O(n log n).
 The worst case occurs when the partitioning element is always the smallest or largest element
of the sub array (for example, an already sorted input with the first element as the partitioning
element); the computing time is then O(n^2).
STRASSEN’S MATRIX MULTIPLICATION
 Let A and B be two n×n matrices. The product matrix C = AB is calculated by using the
formula
C(i,j) = Σk A(i,k) B(k,j), for all i and j between 1 and n.
 The time complexity of this conventional matrix multiplication is O(n^3).
 The divide-and-conquer method suggests another way to compute the product of two n×n matrices.
 We assume that n is a power of 2. If n is not a power of 2, then enough rows
and columns of zeros can be added to both A and B so that the resulting dimensions are
a power of two.
 If n = 2, then the formulas below are computed directly using scalar multiplication on
the elements of A & B.
 If n > 2, then the matrices are partitioned into four n/2 × n/2 sub matrices. Since 'n' is a power of 2,
these products can be recursively computed using the same formulas. The algorithm will
continue applying itself to smaller sub matrices until 'n' becomes suitably small (n = 2), so
that the product is computed directly.
 The formulas are
( A11  A12 )   ( B11  B12 )   ( C11  C12 )
( A21  A22 ) * ( B21  B22 ) = ( C21  C22 )
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

 To compute AB using these formulas, we need to perform 8 multiplications of n/2 × n/2
matrices and 4 additions of n/2 × n/2 matrices.
 Since two n/2 × n/2 matrices can be added in cn^2 time for some constant c, the overall
computing time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence
T(n) = b                   n <= 2        b & c are
T(n) = 8T(n/2) + cn^2      n > 2         constants
 That is, T(n) = O(n^3), so this scheme is no faster than the conventional method.
Example (using Strassen's formulas, which are given below, with every entry of A and B equal to 4)
A = B = ( 4  4 )
        ( 4  4 )
P = (4+4)(4+4) = 64
Q = (4+4)·4 = 32
R = 4·(4-4) = 0
S = 4·(4-4) = 0
T = (4+4)·4 = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0
C11 = 64+0-32+0 = 32
C12 = 0+32 = 32
C21 = 32+0 = 32
C22 = 64+0-32+0 = 32
So the answer C(i,j) is ( 32  32 )
                        ( 32  32 )
* Matrix multiplications are more expensive than matrix additions (O(n^3) versus O(n^2)), so we
can attempt to reformulate the equations for the Cij so as to have fewer multiplications, possibly
at the cost of more additions.
Strassen discovered a way to compute the Cij of the equations above using only 7 multiplications
and 18 additions or subtractions: the products P, Q, R, S, T, U and V below require 7 matrix
multiplications and 10 matrix additions or subtractions, and the Cij require a further 8 additions
or subtractions.
Strassen’s formulas are
P = (A11+A22)(B11+B22)
Q = (A21+A22)B11
R = A11(B12-B22)
S = A22(B21-B11)
T = (A11+A12)B22
U = (A21-A11)(B11+B12)
V = (A12-A22)(B21+B22)
C11 = P+S-T+V
C12 = R+T
C21 = Q+S
C22 = P+R-Q+U
T(n) = b                   n <= 2        a & b are
T(n) = 7T(n/2) + an^2      n > 2         constants
Finally we get T(n) = O(n^(log2 7)) ≈ O(n^2.81).
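A small C sketch of the 2×2 base case of Strassen's method is given below. It works on scalar entries only, illustrating the seven-product formulas above rather than a full recursive block implementation, and the function name is my own:

#include <stdio.h>

/* Multiply two 2x2 matrices using Strassen's 7 products;
   the result is written into c. */
static void strassen2x2(const int a[2][2], const int b[2][2], int c[2][2]) {
    int P = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    int Q = (a[1][0] + a[1][1]) * b[0][0];
    int R = a[0][0] * (b[0][1] - b[1][1]);
    int S = a[1][1] * (b[1][0] - b[0][0]);
    int T = (a[0][0] + a[0][1]) * b[1][1];
    int U = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    int V = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    c[0][0] = P + S - T + V;
    c[0][1] = R + T;
    c[1][0] = Q + S;
    c[1][1] = P + R - Q + U;
}

int main(void) {
    int a[2][2] = {{4, 4}, {4, 4}};
    int b[2][2] = {{4, 4}, {4, 4}};
    int c[2][2];
    strassen2x2(a, b, c);
    printf("%d %d\n%d %d\n", c[0][0], c[0][1], c[1][0], c[1][1]);  /* 32 32 / 32 32 */
    return 0;
}

Running it on the all-4s matrices reproduces the example worked above.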