General Method - Binary Search - Finding the Maximum and Minimum - Merge Sort - Quick Sort - Selection Sort - Strassen's Matrix Multiplication
DIVIDE AND CONQUER
General method
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests
splitting the inputs into 'k' distinct subsets, 1 < k <= n, yielding 'k' subproblems.
These subproblems must be solved, and then a method must be found to combine the
subsolutions into a solution of the whole.
If the subproblems are still relatively large, then the divide-and-conquer strategy can
possibly be reapplied.
Often the subproblems resulting from a divide-and-conquer design are of the same type
as the original problem.
For those cases, the reapplication of the divide-and-conquer principle is naturally
expressed by a recursive algorithm.
The algorithm DAndC below is initially invoked as DAndC(P), where P is the problem to be
solved.
Small(P) is a Boolean-valued function that determines whether the input size is small
enough that the answer can be computed without splitting.
If so, the function S is invoked.
Otherwise, the problem P is divided into smaller subproblems P1, P2, ..., Pk, which are
solved by recursive applications of DAndC.
Combine is a function that determines the solution to P using the solutions to the k
subproblems.
If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk, respectively,
then the computing time of DAndC is described by the recurrence relation

T(n) = g(n)                                    n small
     = T(n1) + T(n2) + ... + T(nk) + f(n)      otherwise

where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute
the answer directly for small inputs, and f(n) is the time for dividing P and combining
the solutions to the subproblems.
Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}
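The control abstraction above can be sketched in Python. The parameter names small, solve_directly, divide, and combine are illustrative stand-ins for Small, S, the divide step, and Combine; they are not part of the original pseudocode.

```python
# A minimal sketch of the DAndC control abstraction. The four helper
# functions are supplied by the caller; their names are illustrative.
def d_and_c(p, small, solve_directly, divide, combine):
    if small(p):
        return solve_directly(p)
    subproblems = divide(p)
    return combine(*(d_and_c(q, small, solve_directly, divide, combine)
                     for q in subproblems))

# Example instance: summing a list by splitting it in half.
total = d_and_c(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
    combine=lambda x, y: x + y,
)
# total is 23
```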
The complexity of many divide-and-conquer algorithms is given by recurrences of the
form

T(n) = T(1)              n = 1
     = aT(n/b) + f(n)    n > 1

where a and b are known constants. We assume that T(1) is known and that n is a power
of b (i.e., n = b^k).
One of the methods for solving such a recurrence relation is called the substitution
method.
This method repeatedly makes substitutions for each occurrence of the function T on the
right-hand side until all such occurrences disappear.
Example:
1) Consider the case in which a = 2 and b = 2. Let T(1) = 2 and f(n) = n. We have

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n
     = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n
     = 8T(n/8) + 3n
     .
     .
In general, T(n) = 2^i T(n/2^i) + i·n, for any i with 1 <= i <= log n.
Choosing i = log n gives

T(n) = 2^(log n) T(n/2^(log n)) + n log n
     = n·T(1) + n log n          [since 2^(log n) = n]
     = 2n + n log n
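As a quick sanity check, the recurrence T(n) = 2T(n/2) + n with T(1) = 2 can be evaluated directly in Python and compared against the closed form 2n + n log n just derived (a minimal sketch, assuming n is a power of 2):

```python
def T(n):
    # T(1) = 2; T(n) = 2T(n/2) + n, as in the example above.
    return 2 if n == 1 else 2 * T(n // 2) + n

# Compare with the closed form 2n + n*log2(n) for n = 1, 2, 4, ..., 1024.
for k in range(11):
    n = 2 ** k
    assert T(n) == 2 * n + n * k  # k = log2(n)
```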
BINARY SEARCH
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in non-decreasing order, n >= 0, determine
// whether x is present and, if so, return j such that x = a[j]; else return 0.
{
    low := 1; high := n;
    while (low <= high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
The recursive binary search algorithm BinSrch has four inputs: a[], i, l, and x.
It is initially invoked as BinSrch(a, 1, n, x).
BinSearch, given above, is a non-recursive version with three inputs: a, n, and x.
The while loop continues processing as long as there are more elements left to check.
At the conclusion of the procedure, 0 is returned if x is not present; otherwise j is
returned such that a[j] = x.
We observe that low and high are integer variables such that each time through the loop
either x is found, or low is increased by at least one, or high is decreased by at least
one.
Thus we have two sequences of integers approaching each other, and eventually low
becomes greater than high, causing termination in a finite number of steps if x is not
present.
Example:
1) Let us select 14 entries:
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151.
Place them in a[1:14] and simulate the steps BinSearch goes through as it searches for
different values of x.
Only the variables low, high, and mid need to be traced as we simulate the algorithm.
We try the following values for x: 151, -14, and 9, giving two successful searches and
one unsuccessful search.
The table below shows the traces of BinSearch for these three searches.
x = 151    low  high  mid
             1   14     7
             8   14    11
            12   14    13
            14   14    14   found

x = -14    low  high  mid
             1   14     7
             1    6     3
             1    2     1
             2    2     2
             2    1         not found

x = 9      low  high  mid
             1   14     7
             1    6     3
             4    6     5   found
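The traces above can be reproduced with a small Python version of BinSearch that records every (low, high, mid) triple it visits. This is a sketch: the list is 0-indexed internally, so a[mid - 1] stands in for the pseudocode's a[mid].

```python
def binsearch(a, x):
    """Iterative binary search on a sorted list, reported in 1-based terms.
    Returns (j, trace) where j is the 1-based index of x (0 if absent) and
    trace is the list of (low, high, mid) triples, as in the trace table."""
    low, high = 1, len(a)
    trace = []
    while low <= high:
        mid = (low + high) // 2
        trace.append((low, high, mid))
        if x < a[mid - 1]:
            high = mid - 1
        elif x > a[mid - 1]:
            low = mid + 1
        else:
            return mid, trace
    return 0, trace

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
# binsearch(a, 151) -> (14, [(1,14,7), (8,14,11), (12,14,13), (14,14,14)])
# binsearch(a, -14) -> (0,  [(1,14,7), (1,6,3), (1,2,1), (2,2,2)])
# binsearch(a, 9)   -> (5,  [(1,14,7), (1,6,3), (4,6,5)])
```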
MERGE SORT
As another example of divide-and-conquer, we investigate a sorting algorithm that has
the nice property that, in the worst case, its complexity is O(n log n).
This algorithm is called merge sort.
We assume throughout that the elements are to be sorted in non-decreasing order.
Given a sequence of n elements a[1], ..., a[n], the general idea is to split it into
two sets a[1], ..., a[⌊n/2⌋] and a[⌊n/2⌋+1], ..., a[n].
Each set is individually sorted, and the resulting sorted sequences are merged to
produce a single sorted sequence of n elements.
Thus we have another ideal example of the divide-and-conquer strategy, in which the
splitting is into two equal-sized sets and the combining operation is the merging of
two sorted sets into one.
Algorithm for Merge Sort:
Algorithm MergeSort(low, high)
// a[low:high] is a global array to be sorted. Small(P) is true if there is only
// one element to sort; in this case the list is already sorted.
{
    if (low < high) then // if there is more than one element
    {
        // Divide P into subproblems: find where to split the set.
        mid := ⌊(low + high)/2⌋;
        // Solve the subproblems.
        MergeSort(low, mid);
        MergeSort(mid+1, high);
        // Combine the solutions.
        Merge(low, mid, high);
    }
}
Algorithm: Merging two sorted subarrays using auxiliary storage.
Algorithm Merge(low, mid, high)
// a[low:high] is a global array containing two sorted subsets in a[low:mid]
// and in a[mid+1:high]. The goal is to merge these two sets into a single set
// residing in a[low:high]. b[] is an auxiliary global array.
{
    h := low; i := low; j := mid+1;
    while ((h <= mid) and (j <= high)) do
    {
        if (a[h] <= a[j]) then
        {
            b[i] := a[h]; h := h+1;
        }
        else
        {
            b[i] := a[j]; j := j+1;
        }
        i := i+1;
    }
    if (h > mid) then
        for k := j to high do
        {
            b[i] := a[k]; i := i+1;
        }
    else
        for k := h to mid do
        {
            b[i] := a[k]; i := i+1;
        }
    for k := low to high do a[k] := b[k];
}
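The two algorithms translate to Python as follows. This is a sketch that returns a new sorted list rather than using the text's global arrays a[] and b[] with in-place copying.

```python
def merge_sort(a):
    """Recursive merge sort returning a new list in non-decreasing order."""
    if len(a) <= 1:                # Small(P): one element is already sorted
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # solve the subproblems
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])        # copy whichever half has elements left
    merged.extend(right[j:])
    return merged

a = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
# merge_sort(a) -> [179, 254, 285, 310, 351, 423, 450, 520, 652, 861]
```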
Consider the array of ten elements a[1:10] = (310, 285, 179, 652, 351, 423, 861, 254,
450, 520).
Algorithm MergeSort begins by splitting a[] into two subarrays each of size five
(a[1:5] and a[6:10]).
The elements in a[1:5] are then split into two subarrays of size three (a[1:3]) and
two (a[4:5]).
Then the items in a[1:3] are split into subarrays of size two (a[1:2]) and one (a[3:3]).
The two values in a[1:2] are split a final time into one-element subarrays, and now the
merging begins.
(310| 285| 179| 652, 351| 423, 861, 254, 450, 520)
→Where vertical bars indicate the boundaries of sub arrays.
→Elements a[1] and a[2] are merged to yield,
(285, 310|179|652, 351| 423, 861, 254, 450, 520)
→Then a[3] is merged with a[1:2] and
(179, 285, 310| 652, 351| 423, 861, 254, 450, 520)
→Next, elements a[4] & a[5] are merged.
(179, 285, 310| 351, 652 | 423, 861, 254, 450, 520)
→And then a[1:3] & a[4:5]
(179, 285, 310, 351, 652| 423, 861, 254, 450, 520)
→Repeated recursive calls are invoked, producing the following subarrays.
(179, 285, 310, 351, 652| 423| 861| 254| 450, 520)
→Elements a[6] and a[7] are merged.
(179, 285, 310, 351, 652| 423, 861| 254| 450, 520)
→Then a[8] is merged with a[6:7].
(179, 285, 310, 351, 652| 254, 423, 861| 450, 520)
→Next, a[9] and a[10] are merged, and then a[6:8] and a[9:10].
(179, 285, 310, 351, 652| 254, 423, 450, 520, 861)
→At this point there are two sorted subarrays, and the final merge produces the fully sorted result.
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)
If the time for the merging operation is proportional to n, then the computing time for
merge sort is described by the recurrence relation

T(n) = a                n = 1, a a constant
     = 2T(n/2) + cn     n > 1, c a constant

When n is a power of 2, n = 2^k, we can solve this equation by successive substitutions:

T(n) = 2(2T(n/4) + cn/2) + cn
     = 4T(n/4) + 2cn
     = 4(2T(n/8) + cn/4) + 2cn
     = 8T(n/8) + 3cn
     .
     .
     = 2^k T(1) + kcn
     = an + cn log n

It is easy to see that if 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)). Therefore
T(n) = O(n log n).
QUICK SORT
The divide-and-conquer approach can be used to arrive at an efficient sorting method
different from merge sort.
In merge sort, the file a[1:n] was divided at its midpoint into subarrays that were
independently sorted and later merged.
In quicksort, the division into two subarrays is made so that the sorted subarrays do
not need to be merged later.
This is accomplished by rearranging the elements in a[1:n] such that a[i] <= a[j] for
all i between 1 and m and all j between m+1 and n, for some m, 1 <= m <= n.
Thus the elements in a[1:m] and a[m+1:n] can be independently sorted.
No merge is needed. This rearranging is referred to as partitioning.
The function Partition of the algorithm below accomplishes an in-place partitioning of
the elements of a[m:p-1].
It is assumed that a[p] >= a[m] and that a[m] is the partitioning element. If m = 1 and
p-1 = n, then a[n+1] must be defined and must be greater than or equal to all elements
in a[1:n].
The assumption that a[m] is the partition element is merely for convenience; other
choices for the partitioning element than the first item in the set are better in
practice.
The function Interchange(a, i, j) exchanges a[i] with a[j].
Algorithm: Partition the array a[m:p-1] about a[m].
Algorithm Partition(a, m, p)
// Within a[m], a[m+1], ..., a[p-1] the elements are rearranged in such a manner
// that if initially t = a[m], then after completion a[q] = t for some q between
// m and p-1, a[k] <= t for m <= k < q, and a[k] >= t for q < k < p. q is
// returned. Set a[p] = ∞.
{
    v := a[m]; i := m; j := p;
    repeat
    {
        repeat
            i := i+1;
        until (a[i] >= v);
        repeat
            j := j-1;
        until (a[j] <= v);
        if (i < j) then Interchange(a, i, j);
    } until (i >= j);
    a[m] := a[j]; a[j] := v;
    return j;
}
Algorithm Interchange(a, i, j)
// Exchange a[i] with a[j].
{
    p := a[i];
    a[i] := a[j];
    a[j] := p;
}
Algorithm: Sorting by partitioning.
Algorithm QuickSort(p, q)
// Sort the elements a[p], ..., a[q], which reside in the global array a[1:n],
// into ascending order; a[n+1] is considered to be defined and must be >= all
// the elements in a[1:n].
{
    if (p < q) then // if there is more than one element
    {
        // Divide P into two subproblems.
        j := Partition(a, p, q+1);
        // j is the position of the partitioning element.
        // Solve the subproblems.
        QuickSort(p, j-1);
        QuickSort(j+1, q);
        // There is no need for combining the solutions.
    }
}
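The two algorithms can be sketched in Python as follows. The array is 0-indexed, a math.inf sentinel plays the role of a[n+1], and the input shown is an assumed example chosen to be consistent with the worked example below (65 as the partitioning element, ending up fifth smallest).

```python
import math

def partition(a, m, p):
    """Rearrange a[m:p] about the pivot a[m] (a[p] acts as a sentinel >= all
    elements); return the pivot's final index. Mirrors the pseudocode above."""
    v, i, j = a[m], m, p
    while True:
        i += 1
        while a[i] < v:        # repeat i := i+1 until a[i] >= v
            i += 1
        j -= 1
        while a[j] > v:        # repeat j := j-1 until a[j] <= v
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]   # Interchange(a, i, j)
        else:
            break              # until i >= j
    a[m], a[j] = a[j], v       # place the pivot at its final position
    return j

def quicksort(a, p, q):
    """Sort a[p:q+1] in place; a must end with a sentinel >= every element."""
    if p < q:
        j = partition(a, p, q + 1)
        quicksort(a, p, j - 1)
        quicksort(a, j + 1, q)

# Assumed illustrative input; the last slot is the a[n+1] sentinel.
a = [65, 70, 75, 80, 85, 60, 55, 50, 45, math.inf]
quicksort(a, 0, 8)
# a[:9] -> [45, 50, 55, 60, 65, 70, 75, 80, 85]
```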
The function is initially invoked as Partition(a, 1, 10). The element a[1] = 65 is the
partitioning element, and it is found to be the fifth smallest element of the set. The
remaining elements are unsorted but partitioned about a[5] = 65.
Sorted list: 45, 50, 55, 60, 65, 70, 75, 80, 85
STRASSEN'S MATRIX MULTIPLICATION
The straightforward method of multiplying two n × n matrices uses n multiplications
(and n - 1 additions) for each of the n² entries of the product; that is, T(n) = O(n³).
Example:

A = B = | 4  4 |
        | 4  4 |

P = (4+4)(4+4) = 64
Q = (4+4)·4    = 32
R = 4·(4-4)    = 0
S = 4·(4-4)    = 0
T = (4+4)·4    = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0

C11 = 64+0-32+0 = 32
C12 = 0+32      = 32
C21 = 32+0      = 32
C22 = 64+0-32+0 = 32

So the answer C(i, j) is | 32  32 |
                         | 32  32 |
Since two n/2 × n/2 matrices can be added in cn² time for some constant c, the overall
computing time T(n) of the resulting divide-and-conquer algorithm is given by the
recurrence T(n) = 8T(n/2) + cn² for n > 2, which again yields T(n) = O(n³).
Matrix multiplications are more expensive than matrix additions, so we can attempt to
reformulate the equations for the Cij so as to have fewer multiplications and possibly
more additions. Strassen discovered a way to compute the Cij using only 7
multiplications and 18 additions or subtractions.
Strassen's formulas are:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

The resulting recurrence relation is

T(n) = b                n <= 2, b a constant
     = 7T(n/2) + an²    n > 2, a a constant

Solving, we get T(n) = O(n^(log₂ 7)) ≈ O(n^2.81).
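Strassen's seven products can be checked in Python on 2 × 2 matrices; the sketch below applies one level of the method and reproduces the all-fours example worked earlier.

```python
def strassen_2x2(A, B):
    """One level of Strassen's method on 2x2 matrices:
    7 multiplications instead of the usual 8."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    return [[P + S - T + V, R + T],
            [Q + S,         P + R - Q + U]]

# The worked example: both operands all fours.
# strassen_2x2([[4, 4], [4, 4]], [[4, 4], [4, 4]]) -> [[32, 32], [32, 32]]
```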