Unit 1 - Divide and Conquer
Merge Sort
Quick Sort
Binary Search
Basic Idea
Small instances of a problem are solved by a direct approach.
Basic Idea
Divide-and-conquer algorithms work according to the following general plan:
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved, typically recursively; sometimes a different algorithm is employed.
3. If necessary, the solutions to the subproblems are combined to get a solution to the original problem.
Basic Idea
[Diagram: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; solving each subproblem gives a solution to it, and the two solutions are combined into a solution to the original problem.]
Example: Detecting a Counterfeit Coin
Consider a bag containing 16 coins. One of the coins may be counterfeit, and a counterfeit coin is lighter than a genuine one. The task is to determine whether a counterfeit coin is present and, if so, to identify it.
To support this work, a machine that compares the weights of two sets of coins is given.
If we compare the coins in the order (1,2) (3,4) (5,6) (7,8) (9,10) (11,12) (13,14) (15,16), we have to make up to 8 comparisons just to decide whether a counterfeit coin is present.
With the divide-and-conquer method, a single comparison (one half of the coins against the other) decides whether a counterfeit coin is present.
Also, by using the divide-and-conquer method we can identify the counterfeit coin with only four comparisons.
A 2-coin instance is a small instance: we can find the lighter coin directly by comparing the weights of the two coins.
A 16-coin instance is a large instance.
Divide it into two 8-coin instances A and B.
By comparing the weights of these two instances, we determine whether a counterfeit coin is present.
If there is no counterfeit coin, the algorithm terminates; otherwise we continue with the subinstance containing the counterfeit coin.
Suppose B contains the lighter coin.
B is divided into two sets of 4 coins each, B1 and B2.
Compare the weights of B1 and B2. One of the sets is now lighter; let it be B1.
Divide B1 into two sets of 2 coins each, B1a and B1b.
Compare the weights of B1a and B1b. The lighter set is the one carrying the counterfeit coin.
Since we are left with two coins in that set, it is a small instance: comparing the weights of the two coins identifies the lighter one.
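The walkthrough above can be written as a short program. The sketch below is an illustration only (the helper name find_counterfeit and the sample weights are invented); it assumes the 16 coins are given as a list of weights and that a counterfeit coin, if present, is strictly lighter than a genuine one.

def find_counterfeit(weights):
    """Return the index of the lighter coin, or None if all coins weigh the same."""
    lo, hi = 0, len(weights)          # current candidate range [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = sum(weights[lo:mid])   # one weighing: left half vs right half
        right = sum(weights[mid:hi])
        if left == right:
            return None               # the halves balance: no counterfeit coin
        elif left < right:
            hi = mid                  # the lighter half keeps the counterfeit
        else:
            lo = mid
    return lo

coins = [10] * 16
coins[11] = 9                         # hypothetical counterfeit at index 11
print(find_counterfeit(coins))        # -> 11, found with 4 weighings

With no counterfeit coin, the very first weighing balances and the search stops after one comparison, matching the claim above.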
Control Abstraction
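A divide-and-conquer control abstraction typically has the following shape. The sketch below is a generic Python rendering for illustration only, with small, solve_directly, divide and combine standing in for the problem-specific routines that a concrete algorithm (mergesort, quicksort, ...) must supply.

def divide_and_conquer(problem, small, solve_directly, divide, combine):
    if small(problem):                        # small instance: solve directly
        return solve_directly(problem)
    subproblems = divide(problem)             # split into smaller instances
    subsolutions = [divide_and_conquer(p, small, solve_directly, divide, combine)
                    for p in subproblems]     # conquer each piece recursively
    return combine(subsolutions)              # merge the partial solutions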
General Divide-and-Conquer Recurrence
Solving the general divide-and-conquer recurrence T(n) = aT(n/b) + f(n) by the Master Theorem:
If f(n) ∈ Θ(n^d) with d ≥ 0, then
  a < b^d :  T(n) ∈ Θ(n^d)
  a = b^d :  T(n) ∈ Θ(n^d log n)
  a > b^d :  T(n) ∈ Θ(n^(log_b a))
Examples: Solve the recurrence T(n) = aT(n/b) + f(n) using the Master Theorem

1) T(n) = T(n/2) + n
   Here a = 1, b = 2, d = 1.
   Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d)
   T(n) ∈ Θ(n)

2) T(n) = 2T(n/2) + 1
   Here a = 2, b = 2, d = 0.
   Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a))
   T(n) ∈ Θ(n^(log_2 2))
   T(n) ∈ Θ(n)

3) T(n) = T(n/2) + 1
   Here a = 1, b = 2, d = 0.
   Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n)
   T(n) ∈ Θ(n^0 log n)
   T(n) ∈ Θ(log n)

4) T(n) = 4T(n/2) + n
   Here a = 4, b = 2, d = 1.
   Since a > b^d, by the Master Theorem T(n) ∈ Θ(n^(log_b a))
   T(n) ∈ Θ(n^(log_2 4))
   T(n) ∈ Θ(n^2)
Examples

5) T(n) = 4T(n/2) + n^2
   Here a = 4, b = 2, d = 2.
   Since a = b^d, by the Master Theorem T(n) ∈ Θ(n^d log n)
   T(n) ∈ Θ(n^2 log n)

6) T(n) = 4T(n/2) + n^3
   Here a = 4, b = 2, d = 3.
   Since a < b^d, by the Master Theorem T(n) ∈ Θ(n^d)
   T(n) ∈ Θ(n^3)
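The case analysis in these examples is mechanical, so it can be captured in a few lines of code. The helper below is an illustration only (its name and interface are invented); it compares a with b^d and reports which case of the simplified Master Theorem applies.

import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified Master Theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"                    # case a < b^d
    elif a == b ** d:
        return f"Theta(n^{d} log n)"              # case a = b^d
    else:
        return f"Theta(n^{math.log(a, b):g})"     # case a > b^d: n^(log_b a)

# The six examples above:
for a, b, d in [(1, 2, 1), (2, 2, 0), (1, 2, 0), (4, 2, 1), (4, 2, 2), (4, 2, 3)]:
    print(a, b, d, "->", master_case(a, b, d))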
Mergesort
Merge sort is a perfect example of a successful application of the divide-and-conquer technique.
It sorts a given array A[0..n − 1] by dividing its elements according to their position in the array: the two halves A[0..⌊n/2⌋ − 1] and A[⌊n/2⌋..n − 1] are sorted recursively, and then the two smaller sorted arrays are merged into a single sorted one.
Here small(P) is an instance with only one element, i.e., n = 1.
If n > 1 then:
  Split array A[0..n − 1] into two about-equal halves and make copies of each half in arrays B and C.
  Sort arrays B and C recursively.
  Merge the sorted arrays B and C back into array A.
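The three steps above can be rendered as a short Python sketch (an illustration, not the textbook pseudocode); the merge helper follows the two-pointer procedure described on the merging slide below.

def merge(B, C, A):
    """Merge sorted arrays B and C into A (two-pointer merge)."""
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:               # copy the smaller front element
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    # one of the inputs is exhausted; copy the remainder of the other
    A[k:] = B[i:] if i < len(B) else C[j:]

def mergesort(A):
    if len(A) > 1:                     # small(P): a one-element array is already sorted
        B = A[:len(A) // 2]            # copy of the first half
        C = A[len(A) // 2:]            # copy of the second half
        mergesort(B)
        mergesort(C)
        merge(B, C, A)                 # merge the sorted halves back into A

a = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(a)
print(a)                               # [1, 2, 3, 4, 5, 7, 8, 9]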
Mergesort - Procedure
8 3 2 9 7 1 5 4
8 3 2 9 | 7 1 5 4
8 3 | 2 9 | 7 1 | 5 4
8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
3 8 | 2 9 | 1 7 | 4 5
2 3 8 9 | 1 4 5 7
1 2 3 4 5 7 8 9
Mergesort
Merging of two sorted arrays
Two pointers (array indices) are initialized to point
to the first elements of the arrays being merged.
The elements pointed to are compared, and the
smaller of them is moved to a new sorted array
being constructed;
After that, the index of the smaller element is
incremented to point to its immediate successor in
the array it was copied from.
This operation is repeated until one of the two
given arrays is exhausted, and then the remaining
elements of the other array are copied to the end of
the new sorted array.
Example:  B[ ] = 6, 14, 26, 56, 74    C[ ] = 16, 34, 46, 76, 84    →    A[ ] = 6, 14, 16, 26, 34, 46, 56, 74, 76, 84
Merge Algorithm
Example: merge 7, 9, 12, 15 with 6, 8, 11, 13
Mergesort - Another approach
Merge - Algorithm
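The algorithms behind this approach are sketched below in Python (an illustration only): Mergesort(low, high) splits the index range at mid, and merge(low, mid, high) merges the two sorted runs through an auxiliary list. Indices are 1-based to match the trace that follows.

def mergesort(a, low, high):
    """Sort a[low..high] (inclusive indices) in place."""
    if low < high:
        mid = (low + high) // 2
        mergesort(a, low, mid)         # sort the left run a[low..mid]
        mergesort(a, mid + 1, high)    # sort the right run a[mid+1..high]
        merge(a, low, mid, high)       # merge the two sorted runs

def merge(a, low, mid, high):
    """Merge sorted runs a[low..mid] and a[mid+1..high] via an auxiliary list."""
    b = []                             # auxiliary storage for the merged run
    i, j = low, mid + 1
    while i <= mid and j <= high:
        if a[i] <= a[j]:
            b.append(a[i]); i += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[i:mid + 1])             # leftover of the left run (if any)
    b.extend(a[j:high + 1])            # leftover of the right run (if any)
    a[low:high + 1] = b                # copy the merged run back into a

# 1-based view of a[] = {7, 5, 4, 3}: pad index 0 so the trace indices match
a = [None, 7, 5, 4, 3]
mergesort(a, 1, 4)
print(a[1:])                           # [3, 4, 5, 7]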
Tracing Example: a[ ] = {7, 5, 4, 3}
Initially low = 1, high = 4
Mergesort(1,4)
  mid = 2
  Mergesort(1,2)
    mid = 1
    Mergesort(1,1)
    Mergesort(2,2)
    merge(1,1,2)
  Mergesort(3,4)
    mid = 3
    Mergesort(3,3)
    Mergesort(4,4)
    merge(3,3,4)
  merge(1,2,4)
Time Complexity of Mergesort
For the worst case, Cmerge(n) = n − 1, and we have the recurrence
Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.
Also Cbest(n) = 2Cbest(n/2) + n/2 for n > 1.
Hence, according to the Master Theorem, Cworst(n) ∈ Θ(n log n) and Cbest(n) ∈ Ω(n log n).
In total, the complexity of merge sort is
Cworst(n) ∈ O(n log n)
Caverage(n) ∈ Θ(n log n)
Cbest(n) ∈ Ω(n log n)
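As a cross-check, the worst-case recurrence can be evaluated directly and compared with the closed form n log2 n − n + 1, which solving the recurrence exactly for n = 2^k yields. The small script below is an illustration only.

def c_worst(n):
    """Worst-case comparison count of mergesort: C(n) = 2C(n/2) + n - 1, C(1) = 0."""
    return 0 if n == 1 else 2 * c_worst(n // 2) + n - 1

for k in range(1, 6):
    n = 2 ** k
    print(n, c_worst(n), n * k - n + 1)   # recurrence and closed form n*log2(n) - n + 1 agree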
Exercise: sort the letters of the word EXAMPLE using mergesort.
Major drawback of Merge sort
The merge sort algorithm is not in place.
It uses 2n locations: an additional n locations are needed to hold the result of merging the two arrays.
That is, the given list of n items is placed in array A, which is to be sorted, and an additional array B of n items is used in the merge procedure to hold the result of the merge.
Hence 2n locations are needed by the merge sort procedure, and the algorithm is not in place.
Quick Sort
Quicksort is another sorting algorithm based on the divide-and-conquer approach.
Merge sort divides its input elements according to their position in the array, whereas quicksort divides them according to their value. For the division it uses a partition procedure.
The partition procedure selects one element A[s] of the input list A[0..n − 1] and rearranges the array's elements so that all elements to the left of A[s] are less than or equal to A[s], and all elements to the right of A[s] are greater than or equal to it, so that A[s] reaches its final position.
A[0] . . . A[s − 1]   A[s]   A[s + 1] . . . A[n − 1]
   all are ≤ A[s]                all are ≥ A[s]
Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and we can continue sorting the two subarrays to the left and to the right of A[s] independently.
Quicksort - Algorithm
ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r])   //s is a split position
    Quicksort(A[l..s − 1])
    Quicksort(A[s + 1..r])
Partition Procedure
We start by selecting a pivot: an element with respect to whose value we are going to divide the subarray. There are several different strategies for selecting a pivot.
Let us take the subarray's first element as the pivot, i.e., p = A[l].
We now scan the subarray, comparing its elements with the pivot.
The left-to-right scan starts with the second element of the subarray, denoted below by index pointer i. Since we want elements smaller than the pivot to be in the left part of the subarray, this scan skips over elements that are smaller than the pivot and stops upon encountering an element greater than or equal to the pivot.
The right-to-left scan starts with the last element of the subarray, denoted below by index pointer j. Since we want elements larger than the pivot to be in the right part of the subarray, this scan skips over elements that are larger than the pivot and stops upon encountering an element smaller than or equal to the pivot.
Partition Procedure - contd.
After both scans stop, three situations may arise:
1. If the scanning indices i and j have not crossed, i.e., i < j, we simply exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j, respectively.
2. If the scanning indices have crossed over, i.e., i > j, we partition the subarray by exchanging the pivot with A[j], and the split position s = j is returned.
3. Finally, if the scanning indices stop while pointing to the same element, i.e., i = j, we partition the subarray by exchanging the pivot with A[j], and the split position s = i = j is returned.
[Diagram: after partition, elements with A[i] ≤ p lie to the left of the split position s and elements with A[i] ≥ p lie to the right.]
Partitioning Algorithm
ALGORITHM Partition(A[l..r])
//Partitions a subarray by using the first element as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] ≥ p or i > r
    repeat j ← j − 1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j])   //undo last swap when i ≥ j
swap(A[l], A[j])   //place the pivot at the split position
return j
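A runnable Python rendering of these two algorithms is sketched below (an illustration only; the inner scans are written as bounded while-loops, but otherwise it mirrors the pseudocode, including the undo-swap). Running it on the input 5, 3, 1, 9, 8, 2, 4, 7 reproduces the trace on the next slide, with the first split at s = 4.

def partition(a, l, r):
    """Partition a[l..r] using a[l] as the pivot (mirrors the pseudocode above)."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:      # left-to-right scan stops on a[i] >= p (or at r)
            i += 1
        j -= 1
        while a[j] > p:                # right-to-left scan stops on a[j] <= p
            j -= 1
        a[i], a[j] = a[j], a[i]
        if i >= j:
            break
    a[i], a[j] = a[j], a[i]            # undo the last swap when i >= j
    a[l], a[j] = a[j], a[l]            # put the pivot into its final (split) position
    return j

def quicksort(a, l, r):
    if l < r:
        s = partition(a, l, r)         # s is the split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a, 0, len(a) - 1)
print(a)                               # [1, 2, 3, 4, 5, 7, 8, 9]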
Tracing for Example: Input 5, 3, 1, 9, 8, 2, 4, 7

Pivot = 5                                              Function calls
5 3 1 9 8 2 4 7    i=3, j=6   swap(a[i], a[j])         Quicksort(0,7)
5 3 1 4 8 2 9 7    i=4, j=5   swap(a[i], a[j])
5 3 1 4 2 8 9 7    i=5, j=4   swap(pivot, a[j])
2 3 1 4 5 8 9 7    s=4                                 Quicksort(0,3)
Pivot = 2
2 3 1 4            i=1, j=2   swap(a[i], a[j])
2 1 3 4            i=2, j=1   swap(pivot, a[j])
1 2 3 4            s=1                                 Quicksort(0,0)
Pivot = 3                                              Quicksort(2,3)
3 4                i=3, j=2   swap(pivot, a[j])
3 4                s=2                                 Quicksort(2,1)
                                                       Quicksort(3,3)
Pivot = 8
8 9 7              i=6, j=7   swap(a[i], a[j])         Quicksort(5,7)
8 7 9              i=7, j=6   swap(pivot, a[j])
7 8 9              s=6                                 Quicksort(5,5)
                                                       Quicksort(7,7)
Output: 1 2 3 4 5 7 8 9
Quicksort - Tree of Recursive Calls
Example: Input 5, 3, 1, 9, 8, 2, 4, 7
Complexity Analysis of Quicksort
The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over and n if they coincide. If all the splits happen in the middle of the corresponding subarrays, we have the best case.
The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
According to the Master Theorem, Cbest(n) ∈ Ω(n log2 n); solving it exactly for n = 2^k yields Cbest(n) = n log2 n.
In the worst case, all the splits will be skewed to the extreme: one of the two subarrays will be empty, and the size of the other will be just 1 less than the size of the subarray being partitioned. This situation happens, in particular, for increasing arrays, i.e., for inputs for which the problem is already solved!
If A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way back to A[0], indicating the split at position 0. So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm is left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing sizes continues until the last one, A[n − 2..n − 1], has been processed.
The total number of key comparisons made will be
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ O(n^2).
For the average case one can prove Cavg(n) ≈ 2n ln n ≈ 1.39 n log2 n ∈ Θ(n log2 n).
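The arithmetic of the worst-case sum can be checked with a few lines (an illustration only):

def worst_case_comparisons(n):
    """Worst-case quicksort comparisons: (n+1) + n + ... + 3."""
    return sum(range(3, n + 2))

for n in (4, 8, 16, 100):
    print(n, worst_case_comparisons(n), (n + 1) * (n + 2) // 2 - 3)   # the two values agree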
Analysis of Quicksort
Best case (split in the middle): Ω(n log n)
Worst case (sorted array): O(n^2)
Average case (random arrays): Θ(n log n)
Improvements:
better pivot selection: median-of-three partitioning (a sketch follows below)
switch to insertion sort on small subfiles
elimination of recursion
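A minimal sketch of the first improvement, median-of-three pivot selection (the name and interface are illustrative; it is meant to be called just before the partition step so that the chosen pivot sits at A[l]):

def median_of_three(a, l, r):
    """Pick the median of a[l], a[mid], a[r] and move it to a[l] to serve as the pivot."""
    mid = (l + r) // 2
    # order the three sampled positions so that a[l] <= a[mid] <= a[r]
    if a[mid] < a[l]:
        a[l], a[mid] = a[mid], a[l]
    if a[r] < a[l]:
        a[l], a[r] = a[r], a[l]
    if a[r] < a[mid]:
        a[mid], a[r] = a[r], a[mid]
    a[l], a[mid] = a[mid], a[l]        # place the median at a[l], where partition expects the pivot
    return a[l]

Calling median_of_three(a, l, r) just before partition(a, l, r) makes the worst case of an already sorted array far less likely.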