
Divide and Conquer

• Splitting the input into k distinct subsets yields k subproblems. These subproblems are solved, and the sub solutions are then combined to form the final solution.
• Subproblems are often of the same type as the original problem, so the method is naturally expressed by a recursive algorithm.
• Example: detecting a counterfeit coin. A bag holds 16 coins, of which one may be counterfeit; a counterfeit coin is lighter than a genuine one. A divide and conquer strategy solves this problem in comparatively few comparisons (4 instead of 8).
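The coin example above can be sketched in Python (an illustrative helper, not part of the original notes): each pan-balance comparison is simulated by comparing the total weights of two halves, so 16 coins need log2(16) = 4 comparisons.

```python
def find_light_coin(weights, lo=0, hi=None):
    """Locate the lighter counterfeit coin by repeatedly halving the bag.

    weights: list of coin weights, at most one of them lighter.
    Returns the index of the light coin, or -1 if all coins balance.
    Each halving costs one pan-balance comparison, so 16 coins take
    4 comparisons instead of the 8 needed by pairwise weighing.
    """
    if hi is None:
        hi = len(weights)
    if hi - lo == 1:
        return lo                      # one coin left: it is the light one
    mid = (lo + hi) // 2
    left, right = sum(weights[lo:mid]), sum(weights[mid:hi])
    if left == right:
        return -1                      # the two halves balance: no counterfeit
    elif left < right:
        return find_light_coin(weights, lo, mid)
    else:
        return find_light_coin(weights, mid, hi)

coins = [10] * 16
coins[11] = 9                          # plant a lighter counterfeit
print(find_light_coin(coins))          # -> 11
```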
• A control abstraction of an algorithm shows its flow of control; the primary operations are specified by other procedures whose precise meanings are left undefined.
– Algorithm DAndC(P)
{ if Small(P) then return S(P);
else
{ divide P into smaller instances P1, P2, …, Pk, k >= 1;
apply DAndC to each of these subproblems;
return Combine(DAndC(P1), DAndC(P2), …, DAndC(Pk));
}
}
– Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting.
– Combine is a function that combines the sub solutions.
– The computing time of DAndC is described by the recurrence relation:
• T(n) = g(n) when n is small
T(n1) + T(n2) + … + T(nk) + f(n) otherwise
• g(n) is the time to compute the answer directly for small inputs.
• f(n) is the time for dividing P and combining the solutions to the subproblems.
• Complexity of many divide and conquer algorithms is
given by recurrences of the form
– T(n) = T(1) n = 1
aT(n/b) + f(n) n > 1
– a and b are known constants.
– Assume that T(1) is known and n is a power of b, i.e. n = b^k.
– Such recurrence relations can be solved using the substitution method.
• Example: Let a = 2, b = 2, T(1) = 2 and f(n) = n. Then
– T(n) = 2T(n/2) + n
= 2[2T(n/4) + n/2] + n
= 4T(n/4) + 2n
= 4[2T(n/8) + n/4] + 2n
= 8T(n/8) + 3n
= …
= 2^k T(n/2^k) + kn
– In general the maximum value of k is log2 n, and 2^(log2 n) = n.
– So T(n) = nT(1) + n log2 n, i.e. n log2 n + 2n.
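The closed form just derived can be checked numerically. The sketch below (an illustration, not part of the notes) evaluates the recurrence directly and compares it with n log2 n + 2n at powers of 2.

```python
import math

def T(n):
    # Recurrence from the example: T(1) = 2, T(n) = 2*T(n/2) + n for n > 1
    if n == 1:
        return 2
    return 2 * T(n // 2) + n

# The substitution method gives the closed form n*log2(n) + 2n.
for k in range(6):
    n = 2 ** k
    assert T(n) == n * math.log2(n) + 2 * n

print(T(8))   # -> 40, which equals 8*log2(8) + 2*8
```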
Binary Search
• Let ai, 1 <= i <= n, be a list of elements sorted in increasing order, in which a given element x is to be found. j is to hold the position of x in the list when aj = x; otherwise j is set to zero.
• Let P = (n, ai, …, al, x) be an instance of the search problem. If Small(P) is true, i.e. n = 1, then S(P) returns i if x = ai, or zero if x ≠ ai; so g(1) = Θ(1). If n > 1, an index q between i and l can be picked such that: if aq = x then j = q; if x < aq then P becomes (q − i, ai, …, aq−1, x); and if x > aq then P becomes (l − q, aq+1, …, al, x). If q is chosen so that aq is the middle element, i.e. q = └(i + l)/2┘, the algorithm is the binary search algorithm. Combining of solutions is not required in this algorithm.
• Recursive Binary Search Algorithm
• Algorithm BinSrch(a, i, l, x)
{ if (l = i) then
{ if (x = a[i]) then return i;
else return 0;
}
else
{ mid := └(i + l)/2┘;
if (x = a[mid]) then return mid;
else
{ if (x < a[mid]) then
return BinSrch(a, i, mid-1, x);
else
return BinSrch(a, mid+1, l, x);
}
}
}
• Iterative Binary Search Algorithm
• Algorithm BinSrch(a, n, x)
{
low := 1; high := n;
while (low <= high) do
{ mid := └(low + high)/2┘;
if (x < a[mid]) then high := mid-1;
else if (x > a[mid]) then low := mid+1;
else return mid;
}
return 0;
}
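For reference, a runnable Python version of the iterative algorithm (0-indexed and returning -1 rather than 0 on failure, unlike the 1-indexed pseudocode above):

```python
def bin_search(a, x):
    """Iterative binary search on a sorted list a.
    Returns the index of x, or -1 when x is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1     # discard the upper half
        elif x > a[mid]:
            low = mid + 1      # discard the lower half
        else:
            return mid
    return -1

a = [2, 5, 7, 11, 13, 17]
print(bin_search(a, 11))       # -> 3 (successful search)
print(bin_search(a, 4))        # -> -1 (unsuccessful search)
```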

• Testing the Binary Search algorithm: to test all successful searches, x must take on the n values in a. To test all unsuccessful searches, x need only take on n + 1 different values. So the complexity of testing the Binary Search algorithm is 2n + 1 cases for each n.
• The space required for this algorithm includes space for the n elements of the array plus space for low, high, mid and x, i.e. a total of n + 4 locations.
• Complexity of Binary Search Algorithm:
– Comparisons in the Binary Search algorithm can be described in the form of a binary decision tree, e.g. for n = 14.
– If x is present, the algorithm will end at one of the circular nodes (internal nodes).
– If x is not present, the algorithm will terminate at one of the square nodes (external nodes).
– Theorem: If n is in the range [2^(k-1), 2^k), then BinSearch makes at most k element comparisons for a successful search and either k-1 or k comparisons for an unsuccessful search, i.e. the time complexity for a successful search is O(log n) and for an unsuccessful search is Θ(log n).
• For successful searches: Best Θ(1), Worst Θ(log n), Average Θ(log n); every unsuccessful search takes Θ(log n).
Merge Sort
• Its worst case complexity is O(n log n).
• The n elements a[1] … a[n] are split into two sublists a[1] … a[└n/2┘] and a[└n/2┘+1] … a[n]. Each sublist is individually sorted and the two are then merged to produce a single sorted list.
• Algorithm MergeSort(low, high)
{ if (low < high) then
{ mid := └(low + high)/2┘;
MergeSort(low, mid);
MergeSort(mid+1, high);
Merge(low, mid, high);
}
}
• Algorithm Merge(low, mid, high)
{ h := low; i := low; j := mid+1;
while ((h <= mid) and (j <= high)) do
{ if (a[h] <= a[j]) then
{ b[i] := a[h]; h := h+1;
}
else
{ b[i] := a[j]; j := j+1;
}
i := i+1;
}
if (h > mid) then
{ for k := j to high do
{ b[i] := a[k]; i := i+1;
}
}
else
{ for k := h to mid do
{ b[i] := a[k]; i := i+1;
}
}
for k := low to high do a[k] := b[k];
}
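The MergeSort and Merge routines above can be rendered as one compact runnable Python sketch (returning a new list rather than sorting in place, unlike the pseudocode):

```python
def merge_sort(a):
    """Recursive merge sort returning a new sorted list; O(n log n)."""
    if len(a) <= 1:
        return a                       # a list of 0 or 1 elements is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves, mirroring the Merge routine above.
    out, h, j = [], 0, 0
    while h < len(left) and j < len(right):
        if left[h] <= right[j]:
            out.append(left[h]); h += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[h:])               # copy whichever half has elements left
    out.extend(right[j:])
    return out

print(merge_sort([310, 285, 179, 652, 351, 423, 861, 254]))
```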
• Example:
– {179, 254, 285, 310, 351, 423}
• The computing time of Merge Sort is described by the recurrence relation:
– T(n) = a n = 1, a a constant
2T(n/2) + cn n > 1, c a constant
– When n is a power of 2, n = 2^k, then
T(n) = 2(2T(n/4) + cn/2) + cn
= 4T(n/4) + 2cn
= …
= 2^k T(1) + kcn
= an + cn log2 n
– If 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)), so T(n) = O(n log n).

Quick Sort
• Division into two subarrays is made so
that sorted subarrays do not need to be
merged later.
• The rearrangement of elements by
picking some element of a[] and then
reordering the other elements so that all
elements appearing before that element
are less and all the elements appearing
after that element are greater than that
element. This rearranging is called
partitioning. 11
• Algorithm Partition(a, m, p)
{ v := a[m]; i := m; j := p;
repeat
{ repeat
{ i := i+1;
} until (a[i] >= v);
repeat
{ j := j-1;
} until (a[j] <= v);
if (i < j) then Interchange(a, i, j);
} until (i >= j);
a[m] := a[j]; a[j] := v; return j;
}
• Algorithm Interchange(a, i, j)
{ p := a[i]; a[i] := a[j]; a[j] := p;
}
• Algorithm QuickSort(p, q)
{ if (p < q) then
{ j := Partition(a, p, q+1);
QuickSort(p, j-1);
QuickSort(j+1, q);
}
}
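The Partition and QuickSort routines translate almost line for line into Python; as in the pseudocode, a sentinel value at the right end is assumed so the inner scan cannot run off the array (a sketch, not part of the original notes):

```python
import sys

def partition(a, m, p):
    """Rearrange a[m..p-1] around the pivot v = a[m]; mirrors Partition above.
    Requires a sentinel a[p] >= every key, as in the pseudocode."""
    v, i, j = a[m], m, p
    while True:
        i += 1
        while a[i] < v:                # scan right for an element >= v
            i += 1
        j -= 1
        while a[j] > v:                # scan left for an element <= v
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]    # Interchange
        else:
            break
    a[m], a[j] = a[j], v               # place the pivot at its final slot
    return j

def quick_sort(a, p, q):
    """Sort a[p..q] in place; a[q+1] must hold a sentinel."""
    if p < q:
        j = partition(a, p, q + 1)
        quick_sort(a, p, j - 1)
        quick_sort(a, j + 1, q)

a = [65, 70, 75, 80, 85, 60, 55, 50, 45] + [sys.maxsize]   # sentinel at the end
quick_sort(a, 0, 8)
print(a[:9])                           # -> [45, 50, 55, 60, 65, 70, 75, 80, 85]
```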
• The worst case and average case time complexities of Quick sort are O(n²) and O(n log n) respectively.
• The space complexity of Quick sort is O(n), for the worst case depth of the recursion stack.
Selection Problem
• In this problem, n elements are given and it is required to determine the kth-smallest element. If the partitioning element is positioned at a[j], then j-1 elements are <= a[j] and n-j elements are >= a[j]. If k < j, then the kth-smallest element is in a[1 : j-1]; if k = j, then a[j] is the kth-smallest element; if k > j, then the kth-smallest element is the (k-j)th-smallest element in a[j+1 : n].
• Algorithm Select(a, n, k)
{ low := 1; up := n+1;
a[n+1] := ∞;
repeat
{ j := Partition(a, low, up);
if (k = j) then return a[j];
else if (k < j) then up := j;
else low := j+1;
} until (false);
}
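A self-contained Python sketch of the selection idea (using a simple Lomuto-style partition rather than the Partition routine above, so the block runs on its own; it mutates its argument):

```python
def select(a, k):
    """Return the k-th smallest element of list a (k is 1-based).
    After each partition, only the side containing position k-1
    is kept, so on average the work is O(n)."""
    low, up = 0, len(a) - 1
    while True:
        v = a[low]                     # pivot: first element of the range
        j = low
        for i in range(low + 1, up + 1):
            if a[i] < v:               # grow the "smaller than pivot" region
                j += 1
                a[i], a[j] = a[j], a[i]
        a[low], a[j] = a[j], a[low]    # pivot lands at its final index j
        if k - 1 == j:
            return a[j]
        elif k - 1 < j:
            up = j - 1                 # kth-smallest lies left of the pivot
        else:
            low = j + 1                # kth-smallest lies right of the pivot

print(select([65, 70, 75, 80, 85, 60, 55, 50, 45], 5))   # -> 65
```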
• Analysis of the Selection Algorithm:
– Partition requires O(n) time.
– On each successive call to Partition, either low increases by at least one or up decreases by at least one, so at most n calls to Partition can be made.
– So the worst case complexity of the Selection algorithm is O(n²). The worst case occurs when the largest element is to be found and the list is sorted.
– The best case is when the first element in the unsorted list is the kth-smallest element. The best case complexity is O(n).
– The average computing time of the Selection algorithm is O(n).
MATRIX MULTIPLICATION
using DIVIDE & CONQUER
• The product of two n x n matrices A and B is given by
C(i, j) = ∑ A(i, k) B(k, j), 1 ≤ k ≤ n,
for all i, j with 1 ≤ i ≤ n, 1 ≤ j ≤ n.
• The matrix C has n² elements.
• Computing each C(i, j) needs n multiplications, so the total time is O(n³).
Divide and Conquer strategy to compute the product
• Assume n is a power of 2, i.e. n = 2^k for some k ≥ 0.
• If n is not a power of 2, rows and columns of zeros can be added to both A and B to make it one.
• If n > 2, partition A and B into square sub matrices, each of dimensions n/2 x n/2.
• Each sub matrix is partitioned into sub matrices, and so on.
• These sub matrix products can be computed recursively by the same algorithm used for the n x n case.
• The algorithm continues until n becomes suitably small (n = 2).
• Then the product is computed directly as
C = C11 C12
    C21 C22
where
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
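The four block formulas can be exercised with a short recursive Python routine (an illustrative sketch for n a power of 2, not an optimized implementation; it performs 8 half-size products per level):

```python
def mat_mul(A, B):
    """Divide-and-conquer product of two n x n matrices, n a power of 2,
    using the four block formulas above."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]   # base case: 1 x 1 product
    h = n // 2
    def quad(M):                        # split M into blocks M11, M12, M21, M22
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):                      # entrywise sum of two blocks
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    C11 = add(mat_mul(A11, B11), mat_mul(A12, B21))
    C12 = add(mat_mul(A11, B12), mat_mul(A12, B22))
    C21 = add(mat_mul(A21, B11), mat_mul(A22, B21))
    C22 = add(mat_mul(A21, B12), mat_mul(A22, B22))
    # Stitch the four quadrants back into one n x n matrix.
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

A = [[1, 2, 1, 1], [2, 1, 1, 1], [1, 1, 1, 2], [1, 0, 0, 1]]
B = [[1, 0, 0, 0], [0, 1, 1, 1], [1, 1, 2, 1], [1, 2, 1, 1]]
print(mat_mul(A, B))   # -> [[3, 5, 5, 4], [4, 4, 4, 3], [4, 6, 5, 4], [2, 2, 1, 1]]
```

The example matrices are the same A and B used in the worked example that follows in the notes.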

Example:
Let A = 1 2 1 1     B = 1 0 0 0
        2 1 1 1         0 1 1 1
        1 1 1 2         1 1 2 1
        1 0 0 1         1 2 1 1

AB = 3 5 5 4
     4 4 4 3
     4 6 5 4
     2 2 1 1

A = A11 A12     B = B11 B12
    A21 A22         B21 B22
where
A11 = 1 2   A12 = 1 1   A21 = 1 1   A22 = 1 2
      2 1         1 1         1 0         0 1

B11 = 1 0   B12 = 0 0   B21 = 1 1   B22 = 2 1
      0 1         1 1         1 2         1 1
C11 = A11 B11 + A12 B21 = 3 5
                          4 4
C12 = A11 B12 + A12 B22 = 5 4
                          4 3
C21 = A21 B11 + A22 B21 = 4 6
                          2 2
C22 = A21 B12 + A22 B22 = 5 4
                          1 1
• Analysis of Matrix Multiplication using Divide and Conquer
– The computing time T(n) to multiply two n x n matrices is
T(n) = b n <= 2
8T(n/2) + cn² n > 2
– Multiplying two n x n matrices requires 8 multiplications and 4 additions of n/2 x n/2 matrices.
– The time complexity of matrix multiplication using divide and conquer is O(n³).

• Volker Strassen's Matrix Multiplication
– Matrix multiplication is more expensive than matrix addition, so the equations for the Cij can be reformulated to use fewer multiplications.
– Volker Strassen discovered a way to compute the Cij using only 7 multiplications and 18 additions or subtractions:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)
C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

• Here A11, A12, …, B11, B12, … are matrices of order n/2.
• For Strassen's matrix multiplication the computing time T(n) is given by:
T(n) = b n <= 2
7T(n/2) + an² n > 2
• The time complexity of this algorithm is O(n^log2 7) ≈ O(n^2.81).
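Strassen's seven products are easy to verify on plain 2 x 2 matrices, where the blocks are scalars; the same formulas apply unchanged when the entries are n/2 x n/2 blocks (a sketch for checking the identities, not a full implementation):

```python
def strassen_2x2(A, B):
    """Multiply two 2 x 2 matrices with Strassen's 7 products."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    # Recombine the products into the four entries of C.
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # -> [[19, 22], [43, 50]]
```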
