
Chapter 3 – Divide and Conquer Algorithm

• Introduction
• Recurrence relations
• Multiplying large Integers Problem
• Binary Search
• Merge Sort
• Quick Sort
• Matrix Multiplication
• Exponentiation
Introduction
Divide & conquer is a general algorithm design strategy with a general plan as follows:

• 1. DIVIDE: A problem's instance is divided into several smaller instances of the same problem, ideally of about the same size.

• 2. RECUR: Solve the sub-problems recursively.

• 3. CONQUER: If necessary, the solutions obtained for the smaller instances are combined to get a solution to the original instance.

Introduction
• The diagram below shows the general divide & conquer plan:

[Diagram] A problem of size n is divided into two sub-problems of size n/2; solving them gives the solution to sub-problem 1 and the solution to sub-problem 2, which are then combined into the solution to the original problem.



General Algorithm for Divide & Conquer
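The algorithm itself is not reproduced on this slide, so the Python sketch below stands in for it and shows the general control structure that the recurrence on the next slide refers to. The names divide_and_conquer, is_small, solve_directly, divide and combine are illustrative placeholders, not taken from the slides.

def divide_and_conquer(problem, is_small, solve_directly, divide, combine):
    # Base case: solve small instances outright.
    if is_small(problem):
        return solve_directly(problem)
    # DIVIDE: split into smaller instances of the same problem.
    subproblems = divide(problem)
    # RECUR: solve each sub-problem recursively.
    sub_solutions = [divide_and_conquer(p, is_small, solve_directly, divide, combine)
                     for p in subproblems]
    # CONQUER: combine the sub-solutions into a solution of the original instance.
    return combine(sub_solutions)

For example, summing a list can be phrased in this template:

total = divide_and_conquer(
    [3, 1, 4, 1, 5],
    is_small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum)                       # total == 14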

• The computing time of the above divide-and-conquer procedure is given by the recurrence relation

T(n) = g(n)                                   if n is small
T(n) = T(n1) + T(n2) + … + T(nr) + f(n)       if n is sufficiently large

where g(n) is the time to solve a small instance directly and f(n) is the time to divide the instance into sub-instances of sizes n1, n2, …, nr and to combine their solutions.

Introduction
• The real work is done piecemeal, in three different places: in the
partitioning of problems into sub problems; at the very tail end of
the recursion, when the sub problems are so small that they are
solved outright; and in the gluing together of partial answers.

• These are held together and coordinated by the algorithm's core recursive structure.

• As an introductory example, we'll see how this technique yields a new algorithm for multiplying numbers, one that is much more efficient than the method we all learned in elementary school!

Recurrence relations
• Divide-and-conquer algorithms often follow a generic pattern: they tackle a problem of size n by recursively solving, say, a sub-problems of size n/b and then combining these answers in O(n^d) time, for some a, b, d > 0 (in the multiplication algorithm, a = 3, b = 2, and d = 1).

• Their running time can therefore be captured by the equation

T(n) = a·T(⌈n/b⌉) + O(n^d)

• We next derive a closed-form solution to this general recurrence so that we no longer have to solve it explicitly in each new instance.
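The closed form derived on the following slides is not reproduced here; it is the standard master theorem, restated below in the slide's notation. For T(n) = a·T(⌈n/b⌉) + O(n^d) with a > 0, b > 1 and d ≥ 0:

T(n) = O(n^d)            if d > log_b a
T(n) = O(n^d · log n)    if d = log_b a
T(n) = O(n^(log_b a))    if d < log_b a

For the multiplication algorithm (a = 3, b = 2, d = 1) we have d < log₂ 3 ≈ 1.59, so T(n) = O(n^(log₂ 3)) ≈ O(n^1.59).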
Multiplying large Integers Problem
• Consider that we want to perform the multiplication 1234 * 981.

• We divide each integer into two portions, left and right (12 and 34 for 1234).

• 1234 = 10^2 * 12 + 34

• For 0981 * 1234 consider the following:

  » w = 09, x = 81, y = 12 and z = 34

• 0981 * 1234 = (10^2·w + x) * (10^2·y + z)
             = 10^4·wy + 10^2·(wz + xy) + xz
             = 1080000 + 127800 + 2754
             = 1210554

Multiplying large Integers Problem
• The above procedure still needs four half-size multiplications: wy, wz, xy and xz.

• The key observation is that there is no need to compute both wz and xy; all we really need is the sum of these two terms.

• Is it possible to obtain wz + xy at the cost of a single multiplication? Consider the product

• (w + x) * (y + z) = wy + (wz + xy) + xz


Multiplying large Integers Problem
• There is one mathematical formula that we are going to apply here:
• (wz + xy) = (w + x) * (y + z) – wy – xz

• p = wy = 09 * 12 = 108
• q = xz = 81 * 34 = 2754
• r = (w + x) * (y + z) = 90 * 46 = 4140

• 0981 * 1234 = 10^4·p + 10^2·(r − p − q) + q
             = 1080000 + 127800 + 2754
             = 1210554
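As a runnable illustration, here is a minimal Python sketch of this three-multiplication scheme (Karatsuba multiplication). The function name multiply, the decimal splitting and the base-case cutoff are choices made for this sketch, not taken from the slides.

def multiply(a, b):
    # Multiply non-negative integers with three half-size recursive multiplications.
    if a < 10 or b < 10:                     # small numbers: multiply directly
        return a * b
    half = max(len(str(a)), len(str(b))) // 2
    base = 10 ** half
    w, x = divmod(a, base)                   # a = w*base + x
    y, z = divmod(b, base)                   # b = y*base + z
    p = multiply(w, y)                       # wy
    q = multiply(x, z)                       # xz
    r = multiply(w + x, y + z)               # wy + (wz + xy) + xz
    return p * base * base + (r - p - q) * base + q

print(multiply(981, 1234))                   # 1210554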
Multiplying large Integers Problem
• If each of the three half-size multiplications is carried out recursively by the same scheme, let f(n) be the time they take and let g(n) be the time needed for the additions, shifts, and overhead operations. Then, for some constant c, the total time required is
  – T(n) = f(n) + c·g(n)

• Since the three half-size multiplications cost 3T(n/2) and the additions, shifts, and overhead are linear in n, the recurrence relation for the large-integer multiplication problem is
  – T(n) = 3T(n/2) + n

Multiplying large Integers Problem
• Calculate the following multiplication using divide and conquer:

• 1234*4321 = ?
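A possible worked solution with the same three-multiplication scheme (not shown on the slide):

w = 12, x = 34, y = 43, z = 21
p = wy = 12 * 43 = 516
q = xz = 34 * 21 = 714
r = (w + x) * (y + z) = 46 * 64 = 2944
1234 * 4321 = 10^4·p + 10^2·(r − p − q) + q = 5160000 + 171400 + 714 = 5332114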

Multiplying large Integers Problem
• At each successive level of recursion the sub-problems get halved in size.

• At the (log₂ n)-th level, the sub-problems get down to size 1, and so the recursion ends.

• Therefore, the height of the tree is log₂ n.

• The branching factor is 3: each problem recursively produces three smaller ones, with the result that at depth k in the tree there are 3^k sub-problems, each of size n/2^k.
Multiplying large Integers Problem
• For each sub-problem, a linear amount of work is done in identifying further sub-problems and combining their answers. Therefore the total time spent at depth k in the tree is:
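The expression itself is not reproduced on the slide; the standard count is 3^k sub-problems, each taking O(n/2^k) time:

3^k × O(n/2^k) = (3/2)^k × O(n)

Summing this over the levels k = 0, 1, …, log₂ n gives a geometric series dominated by its last term, so the total running time is O(3^(log₂ n)) = O(n^(log₂ 3)) ≈ O(n^1.59).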

Binary Search
Function Binary_Search (T[1…n], x)
{
    i ← 1; j ← n
    while i < j do
    {
        k ← ⌊(i + j) / 2⌋
        if (x < T[k]) then j ← k – 1
        if (x == T[k]) then return k
        if (x > T[k]) then i ← k + 1
    }
    return i        // x == T[i] if x is present in T
}
Binary Search
Function binrec(T[i…j], x)
{
    if (j < i) then
        return -1                        // x is not in T
    else
    {
        k ← ⌊(i + j) / 2⌋
        if (x == T[k]) then
            return k
        else if (x < T[k]) then
            return binrec(T[i…k-1], x)
        else
            return binrec(T[k+1…j], x)
    }
}
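A runnable Python version of the recursive search is sketched below. The name binary_search and the default-argument handling are choices of this sketch; the -1 "not found" convention mirrors the pseudocode above.

def binary_search(T, x, i=0, j=None):
    # Recursive binary search in the sorted list T[i..j]; returns an index of x or -1.
    if j is None:
        j = len(T) - 1
    if j < i:
        return -1                            # x is not present
    k = (i + j) // 2
    if x == T[k]:
        return k
    elif x < T[k]:
        return binary_search(T, x, i, k - 1)
    else:
        return binary_search(T, x, k + 1, j)

print(binary_search([2, 5, 7, 11, 13, 17], 11))   # 3
print(binary_search([2, 5, 7, 11, 13, 17], 4))    # -1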
Complexity of Binary Search Method
• When n= 64 BinarySearch is called to reduce size to n=32
• When n= 32 BinarySearch is called to reduce size to n=16
• When n= 16 BinarySearch is called to reduce size to n=8
• When n= 8 BinarySearch is called to reduce size to n=4
• When n= 4 BinarySearch is called to reduce size to n=2
• When n= 2 BinarySearch is called to reduce size to n=1
• Let us consider the more general case where n is a power of 2, say n = 2^k.
• From 2^k = n, taking the log of both sides gives k = log₂ n.
– The recurrence for binary search is T(n) = T(n/2) + 1 (the time to search in an array of 1 element is constant).
We conclude that the time complexity of the binary search method is O(log n).

Merge Sort
• The problem of sorting a list of numbers lends itself immediately to a divide-and-conquer strategy: split the list into two halves, recursively sort each half, and then merge the two sorted sub-lists.

Merge Sort
MERGE-SORT (A, p, r)
    IF p < r                            // check for the base case
        THEN q = FLOOR[(p + r)/2]       // divide
            MERGE-SORT (A, p, q)        // conquer the left half
            MERGE-SORT (A, q + 1, r)    // conquer the right half
            MERGE (A, p, q, r)          // combine the two sorted halves

Merge Sort
• The correctness of this algorithm is self-evident, as
long as a correct merge subroutine is specified.

• If we are given two sorted arrays x[1…k] and y[1…l], how do we efficiently merge them into a single sorted array z[1…k+l]?

• Well, the very first element of z is either x[1] or y[1], whichever is smaller. The rest of z[·] can then be constructed recursively.
Merge Sort
MERGE (A, p, q, r)
    n1 ← q − p + 1
    n2 ← r − q
    create arrays L[1 .. n1 + 1] and R[1 .. n2 + 1]
    for i ← 1 to n1
        do L[i] ← A[p + i − 1]          // copy left half into L
    for j ← 1 to n2
        do R[j] ← A[q + j]              // copy right half into R
    L[n1 + 1] ← ∞                       // sentinels
    R[n2 + 1] ← ∞
    i ← 1
    j ← 1
    for k ← p to r
        do if L[i] ≤ R[j]
            then A[k] ← L[i]
                 i ← i + 1
            else A[k] ← R[j]
                 j ← j + 1
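For comparison, a compact Python sketch of merge sort; it uses list slicing instead of the sentinel (∞) technique above, and the names merge and merge_sort are illustrative.

def merge(x, y):
    # Merge two sorted lists into one sorted list.
    z = []
    i = j = 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            z.append(x[i]); i += 1
        else:
            z.append(y[j]); j += 1
    return z + x[i:] + y[j:]                 # append whatever remains

def merge_sort(a):
    if len(a) <= 1:                          # base case: already sorted
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]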
Merge Sort
• The two recursive calls on half-size sub-arrays contribute 2T(n/2) to the running time.
• For the merge, a linear amount of work is done, so the time taken is O(n).
• Thus merges are linear, and the overall time taken by merge sort is

• T(n) = 2T(n/2) + O(n), which solves to O(n log n)

Quick Sort
• Given an array of n elements (e.g., integers):

• If the array only contains one element, return the array.
  Else
    – pick one element to use as the pivot
    – partition the elements into two sub-arrays:
        » elements less than or equal to the pivot
        » elements greater than the pivot
    – quick sort the two sub-arrays
    – return the results
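A minimal Python sketch of this outline, assuming the first element is chosen as the pivot (a production implementation would normally partition in place); the array from the next slide is used as the test input.

def quick_sort(a):
    if len(a) <= 1:                              # one (or zero) elements: already sorted
        return a
    pivot = a[0]                                 # pick a pivot (here: the first element)
    smaller = [v for v in a[1:] if v <= pivot]   # elements less than or equal to pivot
    larger = [v for v in a[1:] if v > pivot]     # elements greater than pivot
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([40, 20, 10, 80, 60, 50, 7, 30, 100]))
# [7, 10, 20, 30, 40, 50, 60, 80, 100]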
Quick Sort

• Example input array: 40 20 10 80 60 50 7 30 100

Quick Sort
• Quick Sort analysis
  – Best case running time: O(n log n)
      • T(n) ≤ 2T(n/2) + Θ(n)
  – Worst case running time: O(n^2)
      • T(n) = T(n-1) + T(0) + Θ(n) = T(n-1) + Θ(n)
      • This occurs when the partition splits the array into two sub-arrays:
          » one sub-array of size 0
          » the other sub-array of size n-1
          » and quicksort is then called recursively on each sub-array
  – Number of accesses per partition: Θ(n)

Quick Sort
• The average case running time is much closer to the best case.
• Suppose the partitioning algorithm produces a 9-to-1 proportional split; then the recurrence will be
• T(n) = T(9n/10) + T(n/10) + Θ(n)
Solving it,
T(n) = Θ(n log n)

Matrix Multiplication
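The defining formula is not reproduced on this slide; the standard definition it refers to is: for an n×m matrix A and an m×p matrix B, the product C = A·B is the n×p matrix with entries

C[i][j] = A[i][1]·B[1][j] + A[i][2]·B[2][j] + … + A[i][m]·B[m][j], for 1 ≤ i ≤ n and 1 ≤ j ≤ p.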

Iterative Matrix Multiplication
Input: an n×m matrix A and an m×p matrix B
Let C be a new n×p matrix
For i from 1 to n:
    For j from 1 to p:
        Let sum = 0
        For k from 1 to m:
            Set sum ← sum + A[i][k] × B[k][j]
        Set C[i][j] ← sum
Return C
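The same procedure as a runnable Python sketch; the function name mat_mult is illustrative, and matrices are represented as lists of rows.

def mat_mult(A, B):
    # Classic O(n·m·p) multiplication of an n×m matrix A and an m×p matrix B.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0
            for k in range(m):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

print(mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]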
Recursive Matrix Multiplication
Function MMult(A, B, n)
    If n = 1
        Output A × B
    Else
        Compute A11, B11, …, A22, B22     % the n/2 × n/2 quadrants of A and B
        X1 ← MMult(A11, B11, n/2)
        X2 ← MMult(A12, B21, n/2)
        X3 ← MMult(A11, B12, n/2)
        X4 ← MMult(A12, B22, n/2)
        X5 ← MMult(A21, B11, n/2)
        X6 ← MMult(A22, B21, n/2)
        X7 ← MMult(A21, B12, n/2)
        X8 ← MMult(A22, B22, n/2)
        C11 ← X1 + X2
        C12 ← X3 + X4
        C21 ← X5 + X6
        C22 ← X7 + X8
        Output C
    End If
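The slide does not state the cost of this scheme; the standard analysis is that each call performs 8 half-size multiplications plus O(n^2) work for the additions, giving

T(n) = 8T(n/2) + O(n^2)

which (a = 8, b = 2, d = 2, so d < log₂ 8 = 3) solves to T(n) = O(n^3), i.e., no asymptotic improvement over the iterative algorithm.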

Matrix Multiplication
• The preceding formula implies an O(n^3) algorithm for matrix multiplication: there are n^2 entries to be computed, and each takes O(n) time.

• For quite a while, this was widely believed to be the best running time possible, and it was even proved that in certain models of computation no algorithm could do better.

• It was therefore a source of great excitement when, in 1969, the German mathematician Volker Strassen announced a significantly more efficient algorithm, based upon divide-and-conquer.
Optimized Mmult Algorithm
Strassen(A, B)
    If n = 1
        Output A × B
    Else
        Compute A11, B11, …, A22, B22     % the n/2 × n/2 quadrants of A and B
        P1 ← Strassen(A11, B12 − B22)
        P2 ← Strassen(A11 + A12, B22)
        P3 ← Strassen(A21 + A22, B11)
        P4 ← Strassen(A22, B21 − B11)
        P5 ← Strassen(A11 + A22, B11 + B22)
        P6 ← Strassen(A12 − A22, B21 + B22)
        P7 ← Strassen(A11 − A21, B11 + B12)
        C11 ← P5 + P4 − P2 + P6
        C12 ← P1 + P2
        C21 ← P3 + P4
        C22 ← P1 + P5 − P3 − P7
        Output C
    End If
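Strassen's scheme needs only 7 half-size multiplications, at the cost of a constant number of extra matrix additions and subtractions, so the recurrence becomes

T(n) = 7T(n/2) + O(n^2)

which solves to T(n) = O(n^(log₂ 7)) ≈ O(n^2.81), asymptotically faster than the classical O(n^3) algorithm.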

Exponentiation
• Let a and n be two integers. We wish to compute the exponentiation x = a^n.
• For simplicity, we shall assume throughout this section that n > 0.
• If n is small, the obvious algorithm is adequate.
• Function exposeq(a, n)
  {
      r ← a
      for i ← 1 to n-1 do r ← a * r
      return r
  }

The complexity of this algorithm is O(n).


Exponentiation
• a^n = a                  if n = 1
        (a^(n/2))^2        if n is even
        a * a^(n-1)        otherwise

a^29 = a * a^28 = a * (a^14)^2 = …

This involves only three multiplications and four squarings instead of the 28 multiplications of the naive method.

Exponentiation
• Function expoDC(a, n)
  {
      if n = 1
          then return a
      if n is even
          then return [expoDC(a, n/2)]^2
      else
          return a * expoDC(a, n - 1)
  }

Recurrence relation: T(n) = T(n/2) + O(1) (an odd exponent becomes even after one step)

So, the complexity of the above algorithm is O(log n).
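The same algorithm as a runnable Python sketch; the function name power is illustrative.

def power(a, n):
    # Divide-and-conquer exponentiation: O(log n) multiplications for n > 0.
    if n == 1:
        return a
    if n % 2 == 0:
        half = power(a, n // 2)
        return half * half                   # (a^(n/2))^2
    return a * power(a, n - 1)               # odd exponent: peel off one factor

print(power(2, 29))                          # 536870912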


Any Questions?
