Unit 1 - Divide and Conquer

The document discusses Divide and Conquer algorithms, explaining their basic idea of breaking down a problem into smaller subproblems, solving them, and combining their solutions. It covers specific algorithms like Merge Sort and Quick Sort, detailing their procedures and complexities. Additionally, it presents examples and recurrence relations using the Master Theorem to analyze the performance of these algorithms.


Divide and Conquer

Divide and Conquer Algorithms


• Basic Idea

• Divide and Conquer Recurrence

• Merge Sort

• Quick Sort

• Binary Search
Basic Idea
• Small instances of a problem are solved by a direct approach.

• To solve a large instance, divide the instance of the problem into two or more smaller instances and solve the smaller instances of the problem.

• Obtain the solution to the original (larger) instance by combining these solutions.
Basic Idea
• Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved, typically recursively; sometimes a different algorithm is employed.
3. If necessary, the solutions to the subproblems are combined to get a solution to the original problem.
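The three-step plan above can be sketched as a generic recursion. This is an illustrative sketch only; the names `solve`, `is_small`, `divide`, and `combine` are hypothetical, and the concrete instantiation (summing a list by halving) is chosen just to make the skeleton runnable:

```python
def solve(instance):
    """Generic divide-and-conquer skeleton (illustrative sketch)."""
    if is_small(instance):            # small instances: solved directly
        return solve_directly(instance)
    subproblems = divide(instance)    # step 1: split into subproblems of the same type
    solutions = [solve(p) for p in subproblems]  # step 2: solve them, recursively
    return combine(solutions)         # step 3: combine into the overall solution

# One concrete instantiation: summing a list by repeatedly halving it.
def is_small(xs):        return len(xs) <= 1
def solve_directly(xs):  return xs[0] if xs else 0
def divide(xs):          return [xs[:len(xs) // 2], xs[len(xs) // 2:]]
def combine(sols):       return sum(sols)
```

Any divide-and-conquer algorithm in this unit (merge sort, quick sort, binary search) is a specialization of this skeleton with its own `divide` and `combine` steps.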
Basic Idea

                 a problem of size n
                /                   \
      subproblem 1             subproblem 2
       of size n/2              of size n/2
            |                        |
      a solution to            a solution to
      subproblem 1             subproblem 2
                \                   /
          a solution to the original problem
Example: Detecting a Counterfeit Coin
• Consider a bag containing 16 coins. One of the coins may be counterfeit, and a counterfeit coin is lighter than a genuine one. The task is to identify the counterfeit coin, if it exists. To support this work, a machine that compares the weights of two sets of coins is available.
• If we compare the coins in the order (1,2) (3,4) (5,6) (7,8) (9,10) (11,12) (13,14) (15,16), we have to make up to 8 comparisons just to determine whether a counterfeit coin is present.
• To conclude whether a counterfeit coin is present, the divide-and-conquer method gives the answer with just one comparison.
• Also, using the divide-and-conquer method, we can identify the counterfeit coin with only four comparisons.
• A 2-coin instance is a small instance: we can find the lighter coin directly by comparing the weights of the two coins.
• A 16-coin instance is a large instance:
  • Divide it into two 8-coin instances A and B.
  • By comparing the weights of these two instances, we determine whether there is a counterfeit coin.
  • If there is no counterfeit coin, the algorithm terminates; otherwise we continue with the subinstance containing the counterfeit coin. Suppose B has the lighter coin.
  • B is divided into two sets of 4 coins each, B1 and B2. Compare the weights of B1 and B2. Now one set is lighter; let it be B1.
  • Divide B1 into two sets of 2 coins each, B1a and B1b. Compare the weights of B1a and B1b. Now one set is lighter, and it carries the counterfeit coin.
  • Since we are left with two coins in that set, it is a small instance. Comparing the weights of those two coins, we determine the lighter one.
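The halving strategy above can be sketched in code. This is a minimal sketch under the example's assumptions (an even, power-of-two number of coins and at most one counterfeit, lighter coin); the weighing machine is modelled by comparing sums of weights:

```python
def weigh(weights, left, right):
    """Model of the weighing machine: compare total weights of two coin sets.
    Returns -1 if the left set is lighter, +1 if the right set is, 0 if equal."""
    a = sum(weights[i] for i in left)
    b = sum(weights[i] for i in right)
    return (a > b) - (a < b)

def find_light_coin(weights):
    """Return the index of the lighter counterfeit coin, or None if absent.
    Assumes a power-of-two coin count and at most one (lighter) counterfeit."""
    coins = list(range(len(weights)))
    while len(coins) > 1:
        half = len(coins) // 2
        left, right = coins[:half], coins[half:]
        result = weigh(weights, left, right)
        if result == 0:
            return None                       # halves balance: no counterfeit
        coins = left if result < 0 else right  # continue with the lighter half
    return coins[0]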
Control Abstraction
General Divide-and-Conquer Recurrence

Solving the general divide-and-conquer recurrence T(n) = aT(n/b) + f(n) by the Master Theorem:

If T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0, then:
  • If a < b^d, T(n) ∈ Θ(n^d)
  • If a = b^d, T(n) ∈ Θ(n^d log n)
  • If a > b^d, T(n) ∈ Θ(n^(log_b a))

(Analogous results hold for the O and Ω notations.)
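The three cases of the Master Theorem can be packaged into a small helper. This is an illustrative sketch, not part of the source material; `master_theorem` is a hypothetical name:

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master Theorem.
    Assumes a >= 1, b > 1, d >= 0; returns the asymptotic class as a string."""
    if a < b ** d:
        return f"Theta(n^{d})"           # case a < b^d
    if a == b ** d:
        return f"Theta(n^{d} log n)"     # case a = b^d
    return f"Theta(n^{math.log(a, b):g})"  # case a > b^d: exponent log_b a
```

For instance, `master_theorem(2, 2, 1)` reproduces the merge sort bound Θ(n log n).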
Examples: Solve the recurrence T(n) = aT(n/b) + f(n) using the Master Theorem

1) T(n) = T(n/2) + n
   Here a = 1, b = 2, d = 1.
   Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d),
   so T(n) ∈ Θ(n).

2) T(n) = 2T(n/2) + 1
   Here a = 2, b = 2, d = 0.
   Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a)),
   so T(n) ∈ Θ(n^(log_2 2)) = Θ(n).
Examples (contd.) T(n) = aT(n/b) + f(n)

3) T(n) = T(n/2) + 1
   Here a = 1, b = 2, d = 0.
   Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n),
   so T(n) ∈ Θ(n^0 log n) = Θ(log n).

4) T(n) = 4T(n/2) + n
   Here a = 4, b = 2, d = 1.
   Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a)),
   so T(n) ∈ Θ(n^(log_2 4)) = Θ(n^2).
Examples (contd.)

5) T(n) = 4T(n/2) + n^2
   Here a = 4, b = 2, d = 2.
   Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n),
   so T(n) ∈ Θ(n^2 log n).

6) T(n) = 4T(n/2) + n^3
   Here a = 4, b = 2, d = 3.
   Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d),
   so T(n) ∈ Θ(n^3).
Mergesort
• Merge sort is a perfect example of a successful application of the divide-and-conquer technique. It divides its input elements according to their position in the array: it sorts a given array A[0..n−1] by splitting it into two halves, A[0..⌊n/2⌋−1] and A[⌊n/2⌋..n−1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.
• Here Small(P) is an instance with only one element, i.e., n = 1.
• If n > 1, then:
  • Split array A[0..n−1] into two about equal halves and make copies of each half in arrays B and C.
  • Sort arrays B and C recursively.
  • Merge the sorted arrays B and C into array A.
Mergesort - Procedure (example: 8 3 2 9 7 1 5 4)

Split:   8 3 2 9 7 1 5 4
         8 3 2 9 | 7 1 5 4
         8 3 | 2 9 | 7 1 | 5 4
         8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
Merge:   3 8 | 2 9 | 1 7 | 4 5
         2 3 8 9 | 1 4 5 7
         1 2 3 4 5 7 8 9
Mergesort
Merging of two sorted arrays
 Two pointers (array indices) are initialized to point
to the first elements of the arrays being merged.
 The elements pointed to are compared, and the
smaller of them is moved to a new sorted array
being constructed;
 After that, the index of the smaller element is
incremented to point to its immediate successor in
the array it was copied from.
 This operation is repeated until one of the two
given arrays is exhausted, and then the remaining
elements of the other array are copied to the end of
the new sorted array.
Example: B[ ] = 6, 14, 26, 56, 74;  C[ ] = 16, 34, 46, 76, 84;  merged A[ ] = 6, 14, 16, 26, 34, 46, 56, 74, 76, 84
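The two-pointer merging procedure described above can be sketched directly in code (a minimal sketch; it builds a new array, which is why merge sort is not in place):

```python
def merge(B, C):
    """Merge two sorted lists B and C into a new sorted list A.

    Two indices start at the first elements; the smaller of the two
    elements pointed to is moved into A and its index is incremented.
    When one list is exhausted, the rest of the other is copied over.
    """
    A = []
    i = j = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A.append(B[i]); i += 1
        else:
            A.append(C[j]); j += 1
    A.extend(B[i:])   # at most one of these two
    A.extend(C[j:])   # extends is non-empty
    return A
```

On the example arrays, `merge([6, 14, 26, 56, 74], [16, 34, 46, 76, 84])` produces the merged array shown above.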
Merge Algorithm

(Example: merging 7, 9, 12, 15 with 6, 8, 11, 13)
Mergesort

Mergesort - Another approach

Merge - Algorithm
Tracing Example: a[ ] = {7, 5, 4, 3}
• Initially low = 1, high = 4
• Mergesort(1,4)
      mid = 2
      Mergesort(1,2)
          mid = 1
          Mergesort(1,1)
          Mergesort(2,2)
          merge(1,1,2)
      Mergesort(3,4)
          mid = 3
          Mergesort(3,3)
          Mergesort(4,4)
          merge(3,3,4)
      merge(1,2,4)
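The call trace above corresponds to a recursive mergesort of the following shape. This is a sketch using 0-based indexing (the trace uses 1-based indexing), with the merge step copying each half into auxiliary arrays as the slides describe:

```python
def merge_ranges(a, low, mid, high):
    """Merge sorted a[low..mid] and a[mid+1..high] back into a (0-based)."""
    b = a[low:mid + 1]        # auxiliary copies of the two halves:
    c = a[mid + 1:high + 1]   # this is why merge sort is not in place
    i = j = 0
    k = low
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    for x in b[i:]:
        a[k] = x; k += 1
    for x in c[j:]:
        a[k] = x; k += 1

def mergesort(a, low, high):
    """Sort a[low..high] recursively, mirroring the call trace above."""
    if low < high:
        mid = (low + high) // 2
        mergesort(a, low, mid)
        mergesort(a, mid + 1, high)
        merge_ranges(a, low, mid, high)

a = [7, 5, 4, 3]
mergesort(a, 0, len(a) - 1)
# a is now sorted: [3, 4, 5, 7]
```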
Time Complexity of Mergesort
• For the worst case, Cmerge(n) = n − 1, and we have the recurrence
  Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.
• Also, Cbest(n) = 2Cbest(n/2) + (n/2) for n > 1.
• Hence, according to the Master Theorem, Cworst(n) ∈ Θ(n log n) and Cbest(n) ∈ Ω(n log n).
• In total, the complexity of merge sort is:
  Cworst(n) = O(n log n)
  Caverage(n) = Θ(n log n)
  Cbest(n) = Ω(n log n)
Sort the word Example
Major drawback of Merge sort
• The merge sort algorithm is not in place.
  It uses 2n locations: an additional n locations are needed to hold the result of merging the two arrays.
• That is, the given list of n items to be sorted is placed in array A, and an additional array B of n items is used in the merge procedure to hold the result of the merge.
• Hence 2n locations are needed by the merge sort procedure, and therefore the merge sort algorithm is not in place.
Quick Sort
• Quick sort is another sorting algorithm based on the divide-and-conquer approach.
• Merge sort divides its input elements according to their position in the array, whereas quick sort divides them according to their value. For division it uses a partition algorithm.
• In the partition procedure, it selects one element A[s] from the input list A[0..n−1] and rearranges the array's elements so that all the elements to the left of A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it, so that A[s] reaches its final position:
  A[0] . . . A[s − 1]   A[s]   A[s + 1] . . . A[n − 1]
  (all ≤ A[s])                 (all ≥ A[s])
• Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays to the left and to the right of A[s] independently.
Quicksort - Algorithm
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r])   //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])
Partition Procedure
• We start by selecting a pivot, an element with respect to whose value we are going to divide the subarray. There are several different strategies for selecting a pivot.
• Let us take the subarray's first element as the pivot, i.e., p = A[l].
• We now scan the subarray, comparing the subarray's elements with the pivot.
• The left-to-right scan starts with the second element of the subarray, denoted below by index pointer i. Since we want elements smaller than the pivot to be in the left part of the subarray, this scan skips over elements that are smaller than the pivot and stops upon encountering an element greater than or equal to the pivot.
• The right-to-left scan starts with the last element of the subarray, denoted below by index pointer j. Since we want elements larger than the pivot to be in the right part of the subarray, this scan skips over elements that are larger than the pivot and stops upon encountering an element smaller than or equal to the pivot.
Partition Procedure - contd.
• After both scans stop, three situations may arise:
1. If the scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the left-to-right scan by incrementing i and the right-to-left scan by decrementing j.
2. If the scanning indices have crossed over, i.e., i > j, we partition the subarray by exchanging the pivot with A[j], and the split position s = j is returned.
3. Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, we partition the subarray by exchanging the pivot with A[j], and the split position s = i = j is returned.
Partitioning Algorithm
ALGORITHM Partition(A[l..r])
//Partitions a subarray by using the first element as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p or i > r
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j])   //undo last swap when i ≥ j
swap(p, A[j])      //i.e., swap(A[l], A[j])
return j
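The partition algorithm and quicksort can be sketched in Python. This sketch follows the same two-scan scheme but breaks out of the loop before the extra swap, so no "undo" step is needed; behavior on the running example matches the trace that follows:

```python
def partition(a, l, r):
    """Partition a[l..r] using a[l] as the pivot (sketch of the algorithm above).
    Returns the split position j, with a[l..j-1] <= a[j] <= a[j+1..r]."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # left-to-right scan: skip elements < pivot
            i += 1
        j -= 1
        while a[j] > p:              # right-to-left scan (a[l] = p acts as sentinel)
            j -= 1
        if i >= j:                   # indices crossed or met: partition is done
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # put the pivot into its final position
    return j

def quicksort(a, l, r):
    """Sort a[l..r] in place by quicksort."""
    if l < r:
        s = partition(a, l, r)       # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a, 0, len(a) - 1)
# a is now [1, 2, 3, 4, 5, 7, 8, 9]
```

On the input 5, 3, 1, 9, 8, 2, 4, 7, the first call to `partition` returns the split position 4, exactly as in the trace below.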
Tracing for Example: Input 5, 3, 1, 9, 8, 2, 4, 7

Pivot = 5                                         Function calls
5 3 1 9 8 2 4 7   i=3, j=6  swap(a[i],a[j])       Quicksort(0,7)
5 3 1 4 8 2 9 7   i=4, j=5  swap(a[i],a[j])
5 3 1 4 2 8 9 7   i=5, j=4  swap(pivot,a[j])
2 3 1 4 5 8 9 7   s=4                             Quicksort(0,3)
Pivot = 2
2 3 1 4           i=1, j=2  swap(a[i],a[j])
2 1 3 4           i=2, j=1  swap(pivot,a[j])
1 2 3 4           s=1                             Quicksort(0,0)
Pivot = 3                                         Quicksort(2,3)
3 4               i=3, j=2  swap(pivot,a[j])
3 4               s=2                             Quicksort(2,1)
                                                  Quicksort(3,3)
Pivot = 8
8 9 7             i=6, j=7  swap(a[i],a[j])       Quicksort(5,7)
8 7 9             i=7, j=6  swap(pivot,a[j])
7 8 9             s=6                             Quicksort(5,5)
                                                  Quicksort(7,7)
Output: 1 2 3 4 5 7 8 9
Quicksort- Tree of Recursive Calls
Example : Input 5 ,3 ,1 ,9 ,8 ,2 ,4 ,7
Complexity Analysis of Quick sort
• The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over, and n if they coincide. If all the splits happen in the middle of the corresponding subarrays, we have the best case.
• The number of key comparisons in the best case satisfies the recurrence
  Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
• According to the Master Theorem, Cbest(n) ∈ Ω(n log2 n); solving it exactly for n = 2^k yields Cbest(n) = n log2 n.
• In the worst case, all the splits will be skewed to the extreme: one of the two subarrays will be empty, and the size of the other will be just 1 less than the size of the subarray being partitioned. This situation occurs, in particular, for increasing arrays, i.e., for inputs for which the problem is already solved!
• If A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way to reach A[0], indicating the split at position 0. So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing sizes will continue until the last one, A[n − 2..n − 1], has been processed.
• The total number of key comparisons made will be
  Cworst(n) = (n + 1) + n + . . . + 3 = ((n + 1)(n + 2)/2) − 3 ∈ O(n^2).
• For the average case, it can be proved that Cavg(n) ≈ 2n ln n ≈ 1.39n log2 n ∈ Θ(n log2 n).
Analysis of Quicksort
• Best case: split in the middle, Ω(n log n)
• Worst case: sorted array, O(n^2)
• Average case: random arrays, Θ(n log n)

• Improvements:
  • better pivot selection: median-of-three partitioning
  • switch to insertion sort on small subfiles
  • elimination of recursion

• Quicksort is considered the method of choice for internal sorting of large files (n ≥ 10000).
Comparison of Merge sort with Quick sort
• Both merge sort and quick sort are based on the divide-and-conquer approach.
• Merge sort divides its input elements according to their position in the array, whereas quick sort divides them according to their value. For division, quick sort uses a partition algorithm.
• In merge sort, the division of the problem into two subproblems is immediate, and the entire work happens in combining their solutions. In quick sort, the entire work happens in the division stage, with no work required to combine the solutions to the subproblems.
• Complexity of merge sort: Cworst(n) = O(n log n), Cavg(n) = Θ(n log n), Cbest(n) = Ω(n log n)
• Complexity of quick sort: Cworst(n) = O(n^2), Cavg(n) = Θ(n log n), Cbest(n) = Ω(n log n)
• The quick sort algorithm possesses the in-place property, whereas the merge sort algorithm is not in place.
Binary Search
• Let P = (n, a_i...a_l, x) denote an arbitrary instance of the search problem, where n is the number of elements in the sorted list a_i...a_l and x is the search element.
• Divide-and-conquer can be used to solve this problem.
• Let Small(P) be true if n = 1, that is, i = l. In this case, S(P) will return the value i if x = a_i; otherwise it will return 0.
• If P has more than one element, it can be divided into a new subproblem.
• Pick an index q (in the range [i, l]) and compare x with a_q. If q is chosen such that a_q is the middle element (that is, q = (n + 1)/2), the resulting algorithm is called binary search.
• There are three possibilities:
  1. If x = a_q, the problem P is immediately solved.
  2. If x < a_q, x has to be searched for only in the sublist a_i...a_(q−1). Therefore, P reduces to (q − i, a_i...a_(q−1), x).
  3. If x > a_q, x has to be searched for only in the sublist a_(q+1)...a_l. Then P reduces to (l − q, a_(q+1)...a_l, x).
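The three possibilities above translate directly into an iterative binary search. This sketch uses 0-based indexing and returns -1 for "not found" (the slides' 1-based BinSearch returns 0 instead):

```python
def bin_search(a, x):
    """Iterative binary search in a sorted list a (0-based sketch).
    Returns the index of x in a, or -1 if x is not present."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x == a[mid]:
            return mid        # possibility 1: found, terminate successfully
        elif x < a[mid]:
            high = mid - 1    # possibility 2: search the left sublist
        else:
            low = mid + 1     # possibility 3: search the right sublist
    return -1                 # low > high: x is not present
```

On the 14-element list used in the decision-tree example below, `bin_search` finds 101 at index 8 (index 9 in the slides' 1-based numbering) in three comparisons.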
Binary Search
Theorem 1: Algorithm BinSearch(a, n, x) works correctly.
• If x = a[mid], the algorithm terminates successfully.
• Otherwise the range is narrowed, either by increasing low to mid + 1 or by decreasing high to mid − 1.
• This narrowing of the range does not affect the outcome of the search because the array is sorted.
• If low becomes greater than high, then x is not present, and hence the loop is exited.
Binary decision tree for binary search
• The number of element comparisons needed to find each of the 14 elements is shown below:
  Index:    [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]
  Element:  -15  -6   0   7   9  23  54  82 101 112  125  131  142  151
  Compares:   3   4   2   4   3   4   1   4   3   4    2    4    3    4
• Average number of comparisons for a successful search = 45/14 ≈ 3.21
• There are 15 possible ways of getting an unsuccessful search.
• Average number of comparisons for an unsuccessful search = (3 + 14 × 4)/15 ≈ 3.93
Binary decision tree for binary search for n = 14
• The nodes corresponding to the elements are called internal nodes. The left and right null links are replaced by extra nodes called external nodes.
• Note: for n internal nodes there are n + 1 external nodes.
• If x is present, the algorithm terminates at one of the internal nodes; if it is not present, it terminates at an external node.
Complexity Analysis of Binary search
• Comparison of an element with the key is the basic operation.
• The number of comparisons depends on the number of elements in the array and also on the search key value, so we distinguish best, worst, and average cases.
• The algorithm may terminate at an internal node (best or average case, on a successful search) or at an external node (worst case, on an unsuccessful search).
• If the algorithm terminates on comparing with the first mid element, it is the best case; that is, C_best = 1.
• Otherwise, after one comparison the list is divided into two sublists of size n/2, and the search procedure repeats on only one sublist.
• Hence the total number of comparisons C(n) satisfies
  C(n) = C(n/2) + 1 for n > 1, and C(n) = 1 for n = 1.
• We can solve this using the Master Theorem or by backward substitution. Here a = 1, b = 2, and d = 0. Since a = b^d, by the Master Theorem
  C(n) = Θ(n^d log n) = Θ(n^0 log n) ∈ Θ(log n).
• For binary search on n elements: C_best = 1, C_worst = log n, C_avg = log n.
Advantages of divide and conquer
• Parallelism: Divide-and-conquer algorithms tend to have a lot of inherent parallelism.

• Cache performance: Once a subproblem fits in the cache, the standard recursive solution reuses the cached data until the subproblem has been completely solved.

• It allows solving difficult and often impossible-looking problems.
