CS1401 Design and Analysis of Algorithms Unit II: Decrease and Conquer and Divide-and-Conquer
Decrease and Conquer: Insertion Sort, Binary Search, Computing a Median and the Selection Problem. Divide and Conquer: Merge Sort, Quicksort, the Closest-Pair and Convex-Hull Problems by Divide and Conquer.
The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem. Once the relationship is established, it can be exploited either top down (recursively) or bottom up (nonrecursively).
There are three major variations of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable-size decrease
Decrease by a Constant:
In this variation, the size of an instance is reduced by the same constant on each iteration of
the algorithm. Typically, this constant is equal to one, although other constant size reductions
do happen occasionally. Below are example problems:
Insertion sort
Graph search algorithms: DFS, BFS
Topological sorting
Algorithms for generating permutations, subsets
Figure: Process of the decrease-(by one)-and-conquer technique: a problem of size n is reduced to a subproblem of size n − 1, and the solution to the subproblem is then extended to a solution to the original problem.
Figure: Process of the decrease-by-a-constant-factor (by half) technique: a problem of size n is reduced to a subproblem of size n/2, and the solution to the subproblem is then extended to a solution to the original problem.
Insertion Sort:
Following the decrease-by-one idea, assume that the smaller problem of sorting the subarray A[0..n − 2] has already been solved; all that remains is to insert A[n − 1] into its proper place among the sorted elements. This can be done by scanning the sorted subarray from right to left until the first element smaller than or equal to A[n − 1] is encountered, and inserting A[n − 1] right after that element. The resulting algorithm is called straight insertion sort or simply insertion sort.
Insertion sort's average-case performance is about twice as fast as its worst case; this, coupled with an excellent efficiency on almost-sorted arrays, makes insertion sort stand out among its principal competitors among elementary sorting algorithms.
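The right-to-left insertion step described above can be sketched in Python; this is a minimal version of our own (the function name is an assumption, not from the text):

```python
# Straight insertion sort: a decrease-by-one sketch.
def insertion_sort(a):
    """Sort list a in place; returns a for convenience."""
    for i in range(1, len(a)):
        v = a[i]                 # element to insert into sorted a[0..i-1]
        j = i - 1
        # scan the sorted part right to left for the insertion point
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]      # shift larger elements one slot right
            j -= 1
        a[j + 1] = v
    return a
```

On an almost-sorted input the inner while loop exits almost immediately, which is the source of the excellent near-sorted behavior noted above.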
Binary Search:
Binary search is a decrease-by-a-constant-factor algorithm for searching in a sorted array. It compares a search key K with the array's middle element A[m]. If they match, the algorithm stops; otherwise, the same operation is repeated recursively for the first half of the array if K < A[m], and for the second half if K > A[m].
Example (a sorted array of 13 elements):
3 14 27 31 39 42 55 70 74 81 85 93 98
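The halving search just described can be sketched as a nonrecursive Python function (a minimal sketch of ours; it returns −1 when the key is absent):

```python
# Binary search: decrease by a constant factor of two on each step.
def binary_search(a, k):
    """Return an index of k in sorted list a, or -1 if absent."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2
        if k == a[m]:
            return m             # key matches the middle element
        elif k < a[m]:
            r = m - 1            # continue in the first half
        else:
            l = m + 1            # continue in the second half
    return -1
```

Each iteration discards half of the remaining range, giving the O(log n) comparison count.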
Computing a Median and the Selection Problem:
The selection problem is the problem of finding the kth smallest element in a list of n numbers. An important special case asks for an element that is not larger than one half of the list's elements and not smaller than the other half. This middle value is called the median, and it is one of the most important notions in mathematical statistics.
Obviously, we can find the kth smallest element in a list by sorting the list first and then
selecting the kth element in the output of a sorting algorithm. The time of such an algorithm
is determined by the efficiency of the sorting algorithm used. Thus, with a fast sorting
algorithm such as mergesort (discussed in the next chapter), the algorithm’s efficiency is in
O(n log n).
Sorting the entire list, however, does more work than the problem requires. A more efficient idea is to partition the list around some value p, say, the value of its first element. A partition is a rearrangement of the list's elements so that the left part contains all the elements smaller than or equal to p, followed by the pivot p itself, followed by all the elements greater than or equal to p.
Assume the list is implemented as an array whose elements are indexed starting with 0, and let s be the partition's split position, i.e., the index of the array's element occupied by the pivot after partitioning. If s = k − 1, the pivot p itself is obviously the kth smallest element, which solves the problem. If s > k − 1, the kth smallest element in the entire array can be found as the kth smallest element in the left part of the partitioned array. And if s < k − 1, it can be found as the (k − s − 1)th smallest element in its right part. Thus, if the problem is not solved outright, its instance is reduced to a smaller one that can be solved by the same approach recursively. This algorithm is called quickselect.
St.Joseph’s College of Engineering Page |5
CS1401 DESIGN AND ANALYSIS OF ALGORITHMS
ALGORITHM Quickselect(A[l..r], k)
//Solves the selection problem by a recursive partition-based algorithm
//Input: Subarray A[l..r] of array A[0..n − 1] of orderable elements and
// integer k (1 ≤ k ≤ r − l + 1)
//Output: The value of the kth smallest element in A[l..r]
s ← LomutoPartition(A[l..r]) //or another partition algorithm
if s = l + k − 1 return A[s]
else if s > l + k − 1 Quickselect(A[l..s − 1], k)
else Quickselect(A[s + 1..r], k − 1 − s + l)
In fact, the same idea can be implemented without recursion as well. For the nonrecursive version, we need not even adjust the value of k but just continue until s = k − 1.
Example: apply Quickselect to find the median of the list 4, 1, 10, 8, 7, 12, 9, 2, 15; here n = 9 and k = ⌈9/2⌉ = 5. Partitioning the whole array around its first element yields the split position s = 2. Since s = 2 is smaller than k − 1 = 4, we proceed with the right part of the array. Partitioning it yields s = 4 = k − 1, and hence we can stop: the found median is 8, which is greater than 2, 1, 4, and 7 but smaller than 12, 9, 10, and 15.
Efficiency of Quickselect:
Partitioning an n-element array always requires n − 1 key comparisons. If the first partition produces the split position s = k − 1, it solves the selection problem without requiring more iterations, so the best-case efficiency is Cbest(n) = n − 1 ∈ Θ(n). Unfortunately, the algorithm can produce an extremely unbalanced partition of a given array, with one part being empty and the other containing n − 1 elements. In the worst case, this can happen on each of the n − 1 iterations, which implies that Cworst(n) = (n − 1) + (n − 2) + . . . + 1 ∈ Θ(n^2).
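The nonrecursive version described above can be rendered in Python with Lomuto partitioning (function names are ours; k is 1-based as in the pseudocode, so the loop continues until the pivot lands at global index k − 1):

```python
# Quickselect with Lomuto partitioning (variable-size decrease).
def lomuto_partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; return its final index."""
    p = a[l]
    s = l                        # s marks the end of the "< p" segment
    for i in range(l + 1, r + 1):
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]
    a[l], a[s] = a[s], a[l]      # put the pivot into its split position
    return s

def quickselect(a, k):
    """Return the kth smallest element of list a (1 <= k <= len(a))."""
    l, r = 0, len(a) - 1
    while True:
        s = lomuto_partition(a, l, r)
        if s == k - 1:           # pivot landed at global index k - 1
            return a[s]
        elif s > k - 1:
            r = s - 1            # continue in the left part
        else:
            l = s + 1            # continue in the right part
```

Note that the list is partially rearranged in place as a side effect, just as in the pseudocode.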
2.5 DIVIDE-AND-CONQUER
The divide-and-conquer technique (Figure) is most often applied by dividing a problem into two smaller subproblems, by far the most widely occurring case. In the most typical case of divide-and-conquer, a problem's instance of size n is divided into two instances of size n/2.
More generally, an instance of size n can be divided into b instances of size n/b, with a of
them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming that size
n is a power of b to simplify our analysis, we get the following recurrence for the running time
T (n):
T (n) = aT (n/b) + f (n) --- (1)
Where f (n) is a function that accounts for the time spent on dividing an instance of size n into
instances of size n/b and combining their solutions. Recurrence (1) is called the general
divide-and-conquer recurrence
Obviously, the order of growth of its solution T (n) depends on the values of the constants a and b and the order of growth of the function f (n). The efficiency analysis of many divide-and-conquer algorithms is greatly simplified by the following theorem.
Master Theorem: If f (n) ∈ Θ(n^d) with d ≥ 0 in recurrence (1), then
T (n) ∈ Θ(n^d) if a < b^d (case 1)
T (n) ∈ Θ(n^d log n) if a = b^d (case 2)
T (n) ∈ Θ(n^(log_b a)) if a > b^d (case 3)
Example 1: Let T (n) = T (n/2) + (1/2)n^2 + n. What are the parameters? a = 1, b = 2, d = 2. Which condition holds? Since 1 < 2^2, case 1 applies. Thus we conclude that T (n) ∈ Θ(n^d) = Θ(n^2).
Example 2: Let T (n) = 2T (n/4) + √n + 42. What are the parameters? a = 2, b = 4, d = 1/2. Which condition holds? Since 2 = 4^(1/2), case 2 applies. Thus we conclude that T (n) ∈ Θ(n^d log n) = Θ(√n log n).
Example 3: Let T (n) = 3T (n/2) + (3/4)n + 1. What are the parameters? a = 3, b = 2, d = 1. Which condition holds? Since 3 > 2^1, case 3 applies. Thus we conclude that T (n) ∈ Θ(n^(log_b a)) = Θ(n^(log2 3)). Note that log2 3 ≈ 1.5849, so T (n) ∈ Θ(n^1.5849).
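The three cases can be bundled into a small helper, entirely our own illustration, that classifies a recurrence T(n) = a·T(n/b) + Θ(n^d) and returns its order of growth as text:

```python
import math

# Master Theorem classifier for T(n) = a*T(n/b) + Theta(n^d),
# with a >= 1, b > 1, d >= 0 (an illustrative helper of our own).
def master_theorem(a, b, d):
    """Return the order of growth of T(n) as a string."""
    if a < b ** d:
        return f"Theta(n^{d})"                   # case 1
    elif a == b ** d:
        return f"Theta(n^{d} log n)"             # case 2
    else:
        return f"Theta(n^{math.log(a, b):.4f})"  # case 3: n^(log_b a)
```

Running it on the three examples above reproduces Θ(n^2), Θ(√n log n), and Θ(n^1.5850) respectively.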
2.6 MERGESORT
Mergesort sorts a given array A[0..n − 1] by dividing it into two halves, sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one. The merging of two sorted arrays can be done as follows. Two pointers (array indices)
are initialized to point to the first elements of the arrays being merged. The elements pointed
to are compared, and the smaller of them is added to a new array being constructed; after
that, the index of the smaller element is incremented to point to its immediate successor in
the array it was copied from. This operation is repeated until one of the two given arrays is
exhausted, and then the remaining elements of the other array are copied to the end of the
new array.
ALGORITHM Merge (B [0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B [0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ←0; j ←0; k←0
while i <p and j <q do
if B[i]≤ C[j ]
A[k]←B[i]; i ←i + 1
else A[k]←C[j]; j ←j + 1
k←k + 1
if i = p
copy C[j..q − 1] to A[k..p + q − 1]
else copy B[i..p − 1] to A[k..p + q − 1]
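A direct Python rendering of Merge, together with a standard recursive mergesort driver (the driver is a sketch of ours, since its pseudocode is not shown above):

```python
# Merge two sorted lists b and c into list a, as in ALGORITHM Merge.
def merge(b, c, a):
    """Merge sorted b and c into a (len(a) == len(b) + len(c))."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    if i == len(b):
        a[k:] = c[j:]            # copy the rest of c
    else:
        a[k:] = b[i:]            # copy the rest of b

def mergesort(a):
    """Sort list a in place by divide-and-conquer."""
    if len(a) > 1:
        b = a[:len(a) // 2]      # first half
        c = a[len(a) // 2:]      # second half
        mergesort(b)             # conquer each half recursively
        mergesort(c)
        merge(b, c, a)           # combine the two sorted halves
```

The extra lists b and c reflect the fact that mergesort, unlike quicksort, does its work in the combining step and is not in place.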
2.7 QUICKSORT
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer approach; it works by partitioning. A partition is an arrangement of the array's elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it.
Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and
we can continue sorting the two subarrays to the left and to the right of A[s] independently.
Difference between Mergesort and Quicksort:
Unlike mergesort, which divides its input elements according to their position in the
array, quicksort divides them according to their value
In mergesort, the division of the problem into two subproblems is immediate and the
entire work happens in combining their solutions; here, the entire work happens in the
division stage, with no work required to combine the solutions to the subproblems.
Figure: Example of quicksort operation. (a) Array’s transformations with pivots shown in bold.
(b) Tree of recursive calls to Quicksort with input values l and r of subarray bounds and split
position s of a partition obtained.
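A runnable Python sketch of quicksort with a Hoare-style two-scan partition around the first element (our own rendering; a small guard keeps the left scan inside the subarray):

```python
# Quicksort with two-scan (Hoare-style) partitioning around a[l].
def hoare_partition(a, l, r):
    """Partition a[l..r] around pivot a[l]; return the split position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # left-to-right scan
            i += 1
        j -= 1
        while a[j] > p:              # right-to-left scan
            j -= 1
        if i >= j:                   # scanning indices crossed
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # put pivot into split position j
    return j

def quicksort(a, l=0, r=None):
    """Sort list a[l..r] in place by divide-and-conquer."""
    if r is None:
        r = len(a) - 1
    if l < r:
        s = hoare_partition(a, l, r)
        quicksort(a, l, s - 1)       # sort the left subarray
        quicksort(a, s + 1, r)       # sort the right subarray
```

After each partition A[s] is in its final position, so no combining work is needed, in contrast with mergesort.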
Analysis of quicksort:
Basic operation: The number of key comparisons made before a partition is achieved is n + 1
if the scanning indices cross over and n if they coincide.
Best case: If all the splits happen in the middle of corresponding subarrays, we will have the best case. The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
According to the Master Theorem, Cbest(n) ∈ Θ(n log2 n); solving the recurrence exactly for n = 2^k yields Cbest(n) = n log2 n.
Worst case: All the splits will be skewed to the extreme: one of the two subarrays will be
empty, and the size of the other will be just 1 less than the size of the subarray being
partitioned. Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot,
the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way to reach
A[0], indicating the split at position 0: So, after making n + 1 comparisons to get to this
partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly
increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing
sizes will continue until the last one A[n − 2..n − 1] has been processed. The total number of
key comparisons made will be equal to
Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n^2)
Improvements:
Better pivot selection methods such as randomized quicksort that uses a random
element or the median-of-three method that uses the median of the leftmost,
rightmost, and the middle element of the array
Switching to insertion sort on very small subarrays (between 5 and 15 elements for
most computer systems) or not sorting small subarrays at all and finishing the
algorithm with insertion sort applied to the entire nearly sorted array
Modifications of the partitioning algorithm such as the three-way partition into
segments smaller than, equal to, and larger than the pivot
Efficiency:
The algorithm spends linear time both for dividing the problem into two problems of half the size and for combining the obtained solutions. Therefore, assuming as usual that n is a power of 2, we have the following recurrence for the running time of the algorithm:
T (n) = 2T (n/2) + f (n),
where f (n) ∈ Θ(n). Applying the Master Theorem (with a = 2, b = 2, and d = 1), we get T (n) ∈ Θ(n log n).
Quickhull is a divide-and-conquer algorithm for the convex-hull problem. Let S be a set of n > 1 points sorted in nondecreasing order of their x coordinates, with ties resolved by increasing order of the y coordinates, and let p1 and pn be the first and last points in this ordering. Let p1pn be the straight line through points p1 and pn directed from p1 to pn. This line separates the points of S into two sets: S1 is the set of points to the left of this line, and S2 is the set of points to the right of this line. The points of S on the line p1pn, other than p1 and pn, cannot be extreme points of the convex hull and hence are excluded from further consideration.
The boundary of the convex hull of S is made up of two polygonal chains: an “upper” boundary and a “lower” boundary. The “upper” boundary, called the upper hull, is a sequence of line segments with vertices at p1, some of the points in S1 (if S1 is not empty), and pn. The “lower” boundary, called the lower hull, is a sequence of line segments with vertices at p1, some of the points in S2 (if S2 is not empty), and pn. The fact that the convex hull of the entire set S is composed of the upper and lower hulls, which can be constructed independently and in a similar fashion, is a very useful observation exploited by several algorithms for this problem.
Quickhull constructs the upper hull as follows; the lower hull can be constructed in the same manner. If S1 is empty, the upper hull is simply the line segment with the endpoints at p1 and pn. If S1 is not empty, the algorithm identifies the point pmax in S1 that is the farthest from the line p1pn (Figure).
Efficiency:
Quickhull has the same Θ(n^2) worst-case efficiency as quicksort. In the average case, however, we should expect a much better performance.
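The upper-hull/lower-hull construction can be sketched in Python (our own rendering; points are (x, y) tuples, and a signed-area test decides which side of a directed line a point lies on and how far from it):

```python
# Quickhull: build the upper and lower hulls independently.
def quickhull(points):
    """Return the convex-hull vertices of a list of (x, y) points."""
    def side(p, q, r):
        # twice the signed area of triangle pqr; > 0 iff r is left of p->q
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def hull_side(p, q, pts):
        # hull vertices strictly to the left of the directed line p -> q
        if not pts:
            return []
        pmax = max(pts, key=lambda r: side(p, q, r))   # farthest from p->q
        left_of_p_pmax = [r for r in pts if side(p, pmax, r) > 0]
        left_of_pmax_q = [r for r in pts if side(pmax, q, r) > 0]
        return (hull_side(p, pmax, left_of_p_pmax) + [pmax]
                + hull_side(pmax, q, left_of_pmax_q))

    pts = sorted(set(points))        # sort by x, ties by y
    if len(pts) <= 2:
        return pts
    p1, pn = pts[0], pts[-1]
    s1 = [r for r in pts if side(p1, pn, r) > 0]       # left of p1 -> pn
    s2 = [r for r in pts if side(p1, pn, r) < 0]       # right of p1 -> pn
    return [p1] + hull_side(p1, pn, s1) + [pn] + hull_side(pn, p1, s2)
```

Points on the line p1pn other than the endpoints are dropped, matching the observation above that they cannot be extreme points.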
PART-B
1. Write the algorithm for quick sort. Provide a complete analysis of quicksort for the given set of numbers 12, 33, 23, 43, 44, 55, 64, 77 and 76. (Dec 2018)
2. Explain Merge sort algorithm with example. (May 2018) (Dec 2017)
3. What is divide and conquer strategy and explain the binary search with suitable example
problem. (May 2017)
4. Write an algorithm for quicksort and write its time complexity with the example list 5, 3, 1, 9, 8, 2, 4, 7. (May 2017)
5. Give the algorithm for quicksort. With an example show that quicksort is not a stable sorting algorithm. (Dec 2016)
6. State and explain the Merge sort algorithm and give the recurrence relation and efficiency.
(May 2016)
7. Write an algorithm to perform binary search and compute its run time complexity. (Dec
2015)