
CS1401 DESIGN AND ANALYSIS OF ALGORITHMS

UNIT II DECREASE AND CONQUER AND DIVIDE-AND-CONQUER

Decrease and Conquer - Insertion Sort - Binary Search - Computing a Median and the Selection Problem - Divide and Conquer - Merge Sort - Quicksort - The Closest-Pair and Convex-Hull Problems by Divide and Conquer

2.1 DECREASE AND CONQUER:

The decrease-and-conquer technique is based on exploiting the relationship between a
solution to a given instance of a problem and a solution to a smaller instance of the same
problem. Once the relationship is established, it can be exploited either top down (recursively)
or bottom up (nonrecursively).
There are three major variations of decrease-and-conquer:
1. decrease by a constant
2. decrease by a constant factor
3. variable size decrease

Decrease by a Constant:
In this variation, the size of an instance is reduced by the same constant on each iteration of
the algorithm. Typically, this constant is equal to one, although other constant size reductions
do happen occasionally. Below are example problems:
 Insertion sort
 Graph search algorithms: DFS, BFS
 Topological sorting
 Algorithms for generating permutations, subsets

Figure: Process - Decrease (by one) a Constant Technique. A problem of size n is reduced to a subproblem of size n − 1; the solution to the subproblem is then extended to a solution to the original problem.

As an example, consider the exponentiation problem of computing a^n where a ≠ 0 and n is a
nonnegative integer. The relationship between a solution to an instance of size n and an
instance of size n − 1 is obtained by the obvious formula a^n = a^(n−1) · a. So the function
f(n) = a^n can be computed either "top down" by using its recursive definition or "bottom up"
by multiplying 1 by a n times.
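As an illustration (not part of the original notes), the decrease-by-one rule a^n = a^(n−1) · a translates into a short recursive Python sketch:

```python
def power(a, n):
    """Compute a**n top down via the decrease-by-one rule a**n = a**(n-1) * a."""
    if n == 0:
        return 1                # base case: a**0 = 1
    return power(a, n - 1) * a  # reduce the instance size by one
```

Each call decreases n by exactly one, so the recursion performs n multiplications, i.e., Θ(n) work.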

St.Joseph’s College of Engineering Page |1



Decrease by a Constant Factor:


This technique suggests reducing a problem instance by the same constant factor on each
iteration of the algorithm. In most applications this constant factor is equal to two; a
reduction by a factor other than two is rare. Decrease-by-a-constant-factor algorithms are
very efficient, especially when the factor is greater than two, as in the fake-coin problem.
Below are example problems:
 Binary search
 Fake-coin problems
 Russian peasant multiplication

Figure: Decrease by Half and Conquer Technique. A problem of size n is reduced to a subproblem of size n/2; the solution to the subproblem is then extended to a solution to the original problem.


With the exponentiation problem, if the instance of size n is to compute a^n, the instance of half
its size is to compute a^(n/2), with the obvious relationship between the two: a^n = (a^(n/2))².
But since we consider here instances with integer exponents only, this does not work for odd n.
If n is odd, we have to compute a^(n−1) by using the rule for even-valued exponents and then
multiply the result by a.
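This decrease-by-half rule, with the extra case for odd exponents, can be sketched as follows (an illustrative sketch, not the textbook's code):

```python
def fast_power(a, n):
    """Compute a**n by decrease-by-a-constant-factor (factor two)."""
    if n == 0:
        return 1
    if n % 2 == 0:
        half = fast_power(a, n // 2)  # a**n = (a**(n/2))**2 for even n
        return half * half
    return fast_power(a, n - 1) * a   # odd n: apply the even rule to n - 1
```

Since n is at least halved every other call, this makes only Θ(log n) multiplications.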

Variable Size Decrease:


In this variation, the size-reduction pattern varies from one iteration of an algorithm to
another. For example, in the problem of finding the gcd of two numbers, though the value of
the second argument is always smaller on the right-hand side than on the left-hand side, it
decreases neither by a constant nor by a constant factor.
Below are example problems :
 Computing median and selection problem.
 Interpolation Search
 Euclid‘s algorithm

2.2 INSERTION SORT


Insertion sort is an application of the decrease-by-a-constant (decrease-by-one) technique
to sorting an array A[0..n − 1]. Here we assume that the smaller problem of sorting the
array A[0..n − 2] has already been solved to give us a sorted array of size n − 1:
A[0] ≤ ... ≤ A[n − 2].
The solution to the smaller problem is then extended to a solution to the original problem:
find the appropriate position for A[n − 1] among the sorted elements and insert it there.
This can be done by scanning the sorted subarray from right to left until the first element
smaller than or equal to A[n − 1] is encountered, and inserting A[n − 1] right after that
element. The resulting algorithm is called straight insertion sort, or simply insertion sort.

ALGORITHM InsertionSort(A[0..n − 1])
//Sorts a given array by insertion sort
//Input: An array A[0..n − 1] of n orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 1 to n − 1 do
    v ← A[i]
    j ← i − 1
    while j ≥ 0 and A[j] > v do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← v
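The pseudocode above can be rendered directly in Python; this sketch (the function name is ours) sorts in place:

```python
def insertion_sort(a):
    """Sort list a in place in nondecreasing order; returns a for convenience."""
    for i in range(1, len(a)):
        v = a[i]          # element to insert into the sorted prefix a[0..i-1]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]  # shift larger elements one position to the right
            j -= 1
        a[j + 1] = v
    return a
```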

Example for Insertion Sort:

Figure: Example for Insertion Sort


The basic operation of the algorithm is the key comparison A[j] > v. The number of key
comparisons in this algorithm obviously depends on the nature of the input. In the worst case,
A[j] > v is executed the largest number of times, i.e., for every j = i − 1, ..., 0. Since v = A[i], it
happens if and only if A[j] > A[i] for j = i − 1, ..., 0. (Note that we are using the fact that on the
ith iteration of insertion sort all the elements preceding A[i] are the first i elements in the
input, albeit in the sorted order.) Thus, for the worst-case input, we get A[0] > A[1] (for i = 1),
A[1] > A[2] (for i = 2), ..., A[n − 2] > A[n − 1] (for i = n − 1). The worst-case input is an array of
strictly decreasing values.
In the best case, the comparison A[j] > v is executed only once on every iteration of the
outer loop. It happens if and only if A[i − 1] ≤ A[i] for every i = 1, ..., n − 1, i.e., if the input
array is already sorted in nondecreasing order.
This very good performance in the best case of sorted arrays is not very useful by itself,
because we cannot expect such convenient inputs. However, almost-sorted files do arise
in a variety of applications, and insertion sort preserves its excellent performance on
such inputs.

A rigorous analysis of the algorithm’s average-case efficiency is based on investigating


the number of element pairs that are out of order. It shows that on randomly ordered
arrays, insertion sort makes on average half as many comparisons as on decreasing
arrays.

This twice-as-fast average-case performance, coupled with an excellent efficiency on almost-sorted
arrays, makes insertion sort stand out among its principal competitors among
elementary sorting algorithms.

2.3 BINARY SEARCH


Binary search is a remarkably efficient algorithm for searching in a sorted array. It works by
comparing a search key K with the array's middle element A[m]. If they match, the algorithm
stops; otherwise, the same operation is repeated recursively for the first half of the array if
K < A[m], and for the second half if K > A[m].

As an example, let us apply binary search to searching for K = 70 in the array

3 14 27 31 39 42 55 70 74 81 85 93 98

The iterations of the algorithm are given in the following table:

ALGORITHM BinarySearch(A[0..n − 1], K)


//Implements nonrecursive binary search
//Input: An array A[0..n − 1] sorted in ascending order and
//       a search key K
//Output: An index of the array's element that is equal to K
//        or −1 if there is no such element
l ← 0; r ← n − 1
while l ≤ r do
    m ← ⌊(l + r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m − 1
    else l ← m + 1
return −1
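A direct Python rendering of the pseudocode (illustrative, with Python's 0-based indexing):

```python
def binary_search(a, key):
    """Nonrecursive binary search: index of key in sorted list a, or -1."""
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2    # middle element, floor((l + r) / 2)
        if key == a[m]:
            return m
        if key < a[m]:
            r = m - 1       # continue in the left half
        else:
            l = m + 1       # continue in the right half
    return -1
```

On the example array above, searching for K = 70 returns index 7.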
The standard way to analyze the efficiency of binary search is to count the number of times
the search key is compared with an element of the array. Three-way comparisons are used
to perform the search task: after one comparison of K with A[m], the algorithm can
determine whether K is smaller than, equal to, or larger than A[m].
The worst-case inputs include all arrays that do not contain a given search key, as well as
some successful searches. Since after one comparison the algorithm faces the same situation
but for an array half the size, we get the following recurrence relation for Cworst(n):

Cworst(n) = Cworst(⌊n/2⌋) + 1 for n > 1, Cworst(1) = 1.

For the initial condition Cworst(1) = 1, we obtain

Cworst(2^k) = k + 1 = log₂ n + 1.

This solution for n = 2^k can be tweaked to get a formula valid for an arbitrary positive integer n:

Cworst(n) = ⌊log₂ n⌋ + 1 = ⌈log₂(n + 1)⌉.

First, this implies that the worst-case time efficiency of binary search is in Θ(log n).
Second, it is the answer we should have fully expected:
since the algorithm simply reduces the size of the remaining array by about half on each
iteration, the number of such iterations needed to reduce the initial size n to the final size 1
must be about log2 n. Third, to reiterate the point made in Section 2.1, the logarithmic
function grows so slowly that its values remain small even for very large values of n.
A sophisticated analysis shows that the average number of key comparisons made by binary
search is only slightly smaller than that in the worst case:
Cavg(n) ≈ log2 n.

2.4 COMPUTING THE MEDIAN AND THE SELECTION PROBLEMS


The selection problem is the problem of finding the kth smallest element in a list of n
numbers. This number is called the kth order statistic. Of course, for k = 1 or k = n, we can
simply scan the list in question to find the smallest or largest element, respectively. A more
interesting case of this problem is for k = ⌈n/2⌉, which asks to find an element that is not larger
than one half of the list's elements and not smaller than the other half. This middle value is
called the median, and it is one of the most important notions in mathematical statistics.
Obviously, we can find the kth smallest element in a list by sorting the list first and then
selecting the kth element in the output of a sorting algorithm. The time of such an algorithm
is determined by the efficiency of the sorting algorithm used. Thus, with a fast sorting
algorithm such as mergesort (discussed later in this unit), the algorithm's efficiency is in
O(n log n).
A faster approach avoids sorting the entire list and instead partitions it around some value p,
say, the value of its first element, called the pivot. In general, this is a rearrangement of the
list's elements so that the left part contains all the elements smaller than or equal to p,
followed by the pivot p itself, followed by all the elements greater than or equal to p.

Two ideas are central here:

1. Lomuto partitioning
2. The quickselect algorithm, which uses partitioning

In Lomuto partitioning, an array (more often a subarray A[l..r], 0 ≤ l ≤ r ≤ n − 1) under
consideration is viewed as composed of three contiguous segments. Listed in the order they
follow pivot p, they are: a segment with elements known to be smaller than p, a segment of
elements known to be greater than or equal to p, and a segment of elements yet to be
compared to p. Note that the segments can be empty; in particular, the first two are empty
before the algorithm starts.
Starting with i = l + 1, the algorithm scans the subarray A[l..r] left to right, maintaining this
structure until a partition is achieved. On each iteration, it compares the first element in the
unknown segment with the pivot p. If A[i] ≥ p, i is simply incremented to expand the
segment of the elements greater than or equal to p while shrinking the unprocessed
segment. If A[i] < p, it is the segment of the elements smaller than p that needs to be
expanded; this is done by incrementing s, the index of the last element in the first segment,
swapping A[s] with A[i], and then incrementing i. After the scan, the pivot is put in its final
position by exchanging A[l] and A[s].

Figure: Illustration of Lomuto Partitioning

Assume the list is implemented as an array whose elements are indexed starting with 0, and
let s be the partition's split position, i.e., the index of the array's element occupied by the
pivot after partitioning. If s = k − 1, pivot p itself is obviously the kth smallest element, which
solves the problem. If s > k − 1, the kth smallest element in the entire array can be found as
the kth smallest element in the left part of the partitioned array. And if s < k − 1, it can be
found as the (k − 1 − s)th smallest element in its right part. Thus, if we do not solve the
problem outright, we reduce its instance to a smaller one, which can be solved by the same
approach recursively. This algorithm is called quickselect.
St.Joseph’s College of Engineering Page |5
CS1401 DESIGN AND ANALYSIS OF ALGORITHMS

ALGORITHM Quickselect(A[l..r], k)
//Solves the selection problem by recursive partition-based algorithm
//Input: Subarray A[l..r] of array A[0..n − 1] of orderable elements and
//       integer k (1 ≤ k ≤ r − l + 1)
//Output: The value of the kth smallest element in A[l..r]
s ← LomutoPartition(A[l..r]) //or another partition algorithm
if s = l + k − 1 return A[s]
else if s > l + k − 1 Quickselect(A[l..s − 1], k)
else Quickselect(A[s + 1..r], k − 1 − s + l)

In fact, the same idea can be implemented without recursion as well. For the non recursive
version, we need not even adjust the value of k but just continue
until s = k − 1.
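A Python sketch of Lomuto partitioning and recursive quickselect under these conventions (k is 1-based relative to the subarray; all names are ours):

```python
def lomuto_partition(a, l, r):
    """Partition a[l..r] around pivot p = a[l]; return the pivot's final index."""
    p = a[l]
    s = l                          # a[l+1..s] holds the elements known to be < p
    for i in range(l + 1, r + 1):  # i scans the yet-unprocessed segment
        if a[i] < p:
            s += 1
            a[s], a[i] = a[i], a[s]
    a[l], a[s] = a[s], a[l]        # place the pivot between the two segments
    return s

def quickselect(a, l, r, k):
    """Return the kth smallest element (1 <= k <= r - l + 1) of a[l..r]."""
    s = lomuto_partition(a, l, r)
    if s == l + k - 1:
        return a[s]                # the pivot landed exactly on the kth position
    if s > l + k - 1:
        return quickselect(a, l, s - 1, k)
    return quickselect(a, s + 1, r, k - 1 - s + l)
```

Note that the list is rearranged in place as a side effect of partitioning.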

As an example, let us apply quickselect to find the median of the list 4, 1, 10, 8, 7, 12, 9, 2, 15,
i.e., its k = 5th smallest element. The first partition (around pivot 4) yields the split position
s = 2. Since s = 2 is smaller than k − 1 = 4, we proceed with the right part of the array.
After partitioning the right part around pivot 8, we get s = k − 1 = 4, and hence we can stop:
the found median is 8, which is greater than 2, 1, 4, and 7 but smaller than 12, 9, 10, and 15.

Efficiency of QuickSelect:
Partitioning an n-element array always requires n − 1 key comparisons. If the first partition
produces the split position s = k − 1, the selection problem is solved without requiring more
iterations, so in the best case Cbest(n) = n − 1 ∈ Θ(n).
Unfortunately, the algorithm can produce an extremely unbalanced partition of a given array,
with one part being empty and the other containing n − 1 elements. In the worst case, this
can happen on each of the n − 1 iterations. This implies that

Cworst(n) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 ∈ Θ(n²),

although the algorithm's average-case efficiency turns out to be linear.

2.5 DIVIDE-AND-CONQUER

Divide-and-conquer is the best-known general algorithm design technique. Divide-and-


conquer algorithms work according to the following general plan:
 A problem is divided into several subproblems of the same type, ideally of about equal
size.
 The subproblems are solved (typically recursively, though sometimes a different
algorithm is employed, especially when subproblems become small enough).
 If necessary, the solutions to the subproblems are combined to get a solution to the
original problem.

Figure: Divide and Conquer- Strategy

The figure depicts the case of dividing a problem into two smaller subproblems, by far the
most widely occurring case. In the most typical case of divide-and-conquer, a problem's
instance of size n is divided into two instances of size n/2.
More generally, an instance of size n can be divided into b instances of size n/b, with a of
them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.) Assuming that size
n is a power of b to simplify our analysis, we get the following recurrence for the running time
T (n):
T (n) = aT (n/b) + f (n) --- (1)
Where f (n) is a function that accounts for the time spent on dividing an instance of size n into
instances of size n/b and combining their solutions. Recurrence (1) is called the general
divide-and-conquer recurrence

Obviously, the order of growth of its solution T (n) depends on the values of the constants a
and b and the order of growth of the function f (n). The efficiency analysis of many divide-and-conquer
algorithms is greatly simplified by the Master Theorem: if f(n) ∈ Θ(n^d) with d ≥ 0 in
recurrence (1), then

T(n) ∈ Θ(n^d)        if a < b^d,
T(n) ∈ Θ(n^d log n)  if a = b^d,
T(n) ∈ Θ(n^(log_b a)) if a > b^d.

Example 1: Let T(n) = T(n/2) + (1/2)n² + n. What are the parameters?
a = 1, b = 2, d = 2
Which case applies? Since 1 < 2² = 4, the first case applies. Thus we conclude that
T(n) ∈ Θ(n^d) = Θ(n²).


Example 2: Let T(n) = 2T(n/4) + √n + 42. What are the parameters?
a = 2, b = 4, d = 1/2
Which case applies? Since 2 = 4^(1/2), the second case applies. Thus we conclude that
T(n) ∈ Θ(n^d log n) = Θ(√n log n).

Example 3: Let T(n) = 3T(n/2) + (3/4)n + 1. What are the parameters?
a = 3, b = 2, d = 1
Which case applies? Since 3 > 2¹, the third case applies. Thus we conclude that
T(n) ∈ Θ(n^(log_b a)) = Θ(n^(log₂ 3)). Note that log₂ 3 ≈ 1.5849, so T(n) ∈ Θ(n^1.5849).

2.6 MERGE SORT


Merge sort is a perfect example of a successful application of the divide-and-conquer
technique. It sorts a given array A[0..n − 1] by dividing it into two halves A[0..⌊n/2⌋ − 1] and
A[⌊n/2⌋..n − 1], sorting each of them recursively, and then merging the two smaller sorted
arrays into a single sorted one.

ALGORITHM Mergesort(A[0..n − 1])
//Sorts array A[0..n − 1] by recursive merge sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋ − 1] to B[0..⌊n/2⌋ − 1]
    copy A[⌊n/2⌋..n − 1] to C[0..⌈n/2⌉ − 1]
    Mergesort(B[0..⌊n/2⌋ − 1])
    Mergesort(C[0..⌈n/2⌉ − 1])
    Merge(B, C, A)

The merging of two sorted arrays can be done as follows. Two pointers (array indices)
are initialized to point to the first elements of the arrays being merged. The elements pointed
to are compared, and the smaller of them is added to a new array being constructed; after
that, the index of the smaller element is incremented to point to its immediate successor in
the array it was copied from. This operation is repeated until one of the two given arrays is
exhausted, and then the remaining elements of the other array are copied to the end of the
new array.
ALGORITHM Merge(B[0..p − 1], C[0..q − 1], A[0..p + q − 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p − 1] and C[0..q − 1] both sorted
//Output: Sorted array A[0..p + q − 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q − 1] to A[k..p + q − 1]
else
    copy B[i..p − 1] to A[k..p + q − 1]
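The two routines combine into a compact Python sketch; for brevity this version merges inside the sorting function and returns a new list rather than sorting in place:

```python
def merge_sort(a):
    """Return a new list with the elements of a in nondecreasing order."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2                 # split point, floor(n/2) as in the text
    b = merge_sort(a[:mid])           # sort the left half recursively
    c = merge_sort(a[mid:])           # sort the right half recursively
    merged, i, j = [], 0, 0
    while i < len(b) and j < len(c):  # merge the two sorted halves
        if b[i] <= c[j]:
            merged.append(b[i]); i += 1
        else:
            merged.append(c[j]); j += 1
    merged.extend(b[i:])              # copy whatever remains of either half
    merged.extend(c[j:])
    return merged
```

The `<=` in the merge step keeps equal elements in their original order, which is what makes merge sort stable.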


The operation of the algorithm on the list 8, 3, 2, 9, 7, 1, 5, 4 is illustrated in Figure.

Figure: Example of Merge sort operation

Analysis of Merge sort:


Assuming for simplicity that n is a power of 2, the recurrence relation for the number of
key comparisons C (n) is
C (n) = 2C (n/2) + Cmerge(n) for n > 1,
C (1) = 0.
Let us analyze Cmerge(n), the number of key comparisons performed during the merging stage.
At each step, exactly one comparison is made, after which the total number of elements in
the two arrays still needing to be processed is reduced by 1. In the worst case, neither of the
two arrays becomes empty before the other one contains just one element (e.g., smaller
elements may come from the alternating arrays). Therefore, for the worst case, Cmerge(n) = n
− 1, and we have the recurrence
Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.

Hence, according to the Master Theorem, Cworst(n) ∈ Θ(n log n).

2.7 QUICKSORT
Quicksort is the other important sorting algorithm based on the divide-and-conquer
partition approach. A partition is an arrangement of the array's elements so that all the
elements to the left of some element A[s] are less than or equal to A[s], and all the elements
to the right of A[s] are greater than or equal to it:

A[0] . . . A[s − 1]   A[s]   A[s + 1] . . . A[n − 1]
   (all are ≤ A[s])          (all are ≥ A[s])

Obviously, after a partition is achieved, A[s] will be in its final position in the sorted array, and
we can continue sorting the two subarrays to the left and to the right of A[s] independently.
Difference between Mergesort and Quicksort:

 Unlike mergesort, which divides its input elements according to their position in the
array, quicksort divides them according to their value
 In mergesort, the division of the problem into two subproblems is immediate and the
entire work happens in combining their solutions; here, the entire work happens in the
division stage, with no work required to combine the solutions to the subproblems.
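Since the analysis below refers to a partition whose scanning indices can cross over (a Hoare-style scheme with the first element as pivot), here is a hedged Python sketch of that scheme; it is one possible rendering, not the notes' exact pseudocode:

```python
def hoare_partition(a, l, r):
    """Partition a[l..r] around pivot a[l] with two opposite scans; return split index."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and a[i] < p:  # left-to-right scan stops on an element >= p
            i += 1
        j -= 1
        while a[j] > p:            # right-to-left scan stops on an element <= p
            j -= 1
        if i >= j:                 # the scanning indices have crossed over
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]        # put the pivot in its final position
    return j

def quicksort(a, l, r):
    """Sort a[l..r] in place."""
    if l < r:
        s = hoare_partition(a, l, r)
        quicksort(a, l, s - 1)     # sort the part before the split
        quicksort(a, s + 1, r)     # sort the part after the split
```

For example, `quicksort(a, 0, len(a) - 1)` sorts the whole list in place.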


Figure: Example of quicksort operation. (a) Array’s transformations with pivots shown in bold.
(b) Tree of recursive calls to Quicksort with input values l and r of subarray bounds and split
position s of a partition obtained.

Analysis of quicksort:
Basic operation: The number of key comparisons made before a partition is achieved is n + 1
if the scanning indices cross over and n if they coincide.
Best case: If all the splits happen in the middle of corresponding subarrays, we will have the
best case. The number of key comparisons in the best case satisfies the recurrence
Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

According to the Master Theorem, Cbest(n) ∈ Θ(n log₂ n); solving the recurrence exactly
for n = 2^k yields Cbest(n) = n log₂ n.

Worst case: All the splits will be skewed to the extreme: one of the two subarrays will be
empty, and the size of the other will be just 1 less than the size of the subarray being
partitioned. Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot,
the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way to reach
A[0], indicating the split at position 0. So, after making n + 1 comparisons to get to this
partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly
increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing
sizes will continue until the last one A[n − 2..n − 1] has been processed. The total number of
key comparisons made will be equal to

Cworst(n) = (n + 1) + n + . . . + 3 = (n + 1)(n + 2)/2 − 3 ∈ Θ(n²).
Improvements:
 Better pivot selection methods such as randomized quicksort that uses a random
element or the median-of-three method that uses the median of the leftmost,
rightmost, and the middle element of the array


 Switching to insertion sort on very small subarrays (between 5 and 15 elements for
most computer systems) or not sorting small subarrays at all and finishing the
algorithm with insertion sort applied to the entire nearly sorted array
 Modifications of the partitioning algorithm such as the three-way partition into
segments smaller than, equal to, and larger than the pivot

2.8 THE CLOSEST-PAIR PROBLEM


Let P be a set of n > 1 points in the Cartesian plane. We assume that the points are
distinct and the points are ordered in nondecreasing order of their x coordinate. It will also be
convenient to have the points sorted in a separate list in nondecreasing order of the y
coordinate; we will denote such a list Q.
If 2 ≤ n ≤ 3, the problem can be solved by the obvious brute-force algorithm. If n > 3,
we can divide the points into two subsets Pl and Pr of ⌈n/2⌉ and ⌊n/2⌋ points, respectively,
by drawing a vertical line through the median m of their x coordinates so that ⌈n/2⌉ points
lie to the left of or on the line itself, and ⌊n/2⌋ points lie to the right of or on the line. Then
we can solve the closest-pair problem recursively for subsets Pl and Pr. Let dl and dr be the
smallest distances between pairs of points in Pl and Pr, respectively, and let d = min{dl, dr}.
Note that d is not necessarily the smallest distance between all the point pairs because
points of a closer pair can lie on the opposite sides of the separating line. Therefore, as a step
combining the solutions to the smaller subproblems, we need to examine such points.
Obviously, we can limit our attention to the points inside the symmetric vertical strip of width
2d around the separating line, since the distance between any other pair of points is at least
d.
Let S be the list of points inside the strip of width 2d around the separating line,
obtained from Q and hence ordered in nondecreasing order of their y coordinate. We will scan
this list, updating the information about dmin, the minimum distance seen so far, if we
encounter a closer pair of points. Initially, dmin = d, and subsequently dmin ≤ d. Let p(x, y) be a
point on this list. For a point p′(x′, y′) to have a chance to be closer to p than dmin, the point
must follow p on list S and the difference between their y coordinates must be less than dmin.
Geometrically, this means that p′ must belong to the rectangle shown in the figure.
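The whole scheme (recursive halves, d = min{dl, dr}, then a scan of the 2d-wide strip in y order) can be sketched in Python. This simplified version re-sorts the strip by y at each level, which costs O(n log² n) overall rather than the O(n log n) of the version that maintains the presorted list Q; all names are ours:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def closest_pair(points):
    """Smallest distance between two of the given (x, y) points, assumed distinct."""
    pts = sorted(points)                      # presort by x coordinate

    def solve(p):
        n = len(p)
        if n <= 3:                            # brute force on tiny instances
            return min(dist(p[i], p[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        m = p[mid][0]                         # x coordinate of the separating line
        d = min(solve(p[:mid]), solve(p[mid:]))
        # points inside the strip of width 2d, in nondecreasing y order
        strip = sorted((q for q in p if abs(q[0] - m) < d), key=lambda q: q[1])
        for i in range(len(strip)):
            for j in range(i + 1, len(strip)):
                if strip[j][1] - strip[i][1] >= d:
                    break                     # later points are too far in y
                d = min(d, dist(strip[i], strip[j]))
        return d

    return solve(pts)
```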


Efficiency:
The algorithm spends linear time both for dividing the problem into two problems half
the size and combining the obtained solutions. Therefore, assuming as usual that n is a power
of 2, we have the following recurrence for the running time of the algorithm:

T(n) = 2T(n/2) + f(n),

where f(n) ∈ Θ(n). Applying the Master Theorem (with a = 2, b = 2, and d = 1), we get
T(n) ∈ Θ(n log n).

2.9 CONVEX-HULL PROBLEM


For the convex-hull problem, we consider a divide-and-conquer algorithm called quickhull
because of its resemblance to quicksort. Let S be a set of n > 1 points p1(x1, y1), ..., pn(xn, yn)
in the Cartesian plane. We assume that the points are sorted in nondecreasing order of their x
coordinates, with ties resolved by increasing order of the y coordinates of the points involved.
The leftmost point p1 and the rightmost point pn are two distinct extreme points of the set's
convex hull.

Figure: Upper and lower hulls of a set of points

Let p1pn be the straight line through points p 1 and pn directed from p1 to pn. This line
separates the points of S into two sets: S 1 is the set of points to the left of this line, and S 2 is
the set of points to the right of this line. The points of S on the line p1pn, other than p1 and pn,
cannot be extreme points of the convex hull and hence are excluded from further
consideration.


The boundary of the convex hull of S is made up of two polygonal chains: an "upper"
boundary and a "lower" boundary. The "upper" boundary, called the upper hull, is a sequence
of line segments with vertices at p1, some of the points in S1 (if S1 is not empty), and pn. The
"lower" boundary, called the lower hull, is a sequence of line segments with vertices at p1,
some of the points in S2 (if S2 is not empty), and pn. The fact that the convex hull of the entire
set S is composed of the upper and lower hulls, which can be constructed independently and
in a similar fashion, is a very useful observation exploited by several algorithms for this
problem.
Here is how quickhull proceeds to construct the upper hull; the lower hull can be constructed
in the same manner. If S1 is empty, the upper hull is simply the line segment with the endpoints
at p1 and pn. If S1 is not empty, the algorithm identifies the point pmax in S1 that is the farthest
from the line p1pn (see the figure).

Figure: The idea of quickhull


If there is a tie, the point that maximizes the angle ∠pmax p pn can be selected. (Note that
point pmax maximizes the area of the triangle with two vertices at p1 and pn and the third one
at some other point of S1.) Then the algorithm identifies all the points of set S1 that are to the
left of the line p1pmax; these are the points that will make up the set S1,1. The points of S1 to the
left of the line pmax pn will make up the set S1,2.
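The quickhull construction can be sketched with a cross-product test for "to the left of a directed line" and "farthest from the line" (a simplified illustration, with names of our choosing; points lying on hull edges are dropped, just as the text excludes points on the line p1pn):

```python
def quickhull(points):
    """Vertices of the convex hull of a set of (x, y) points (simplified sketch)."""
    def cross(a, b, c):
        # twice the signed area of triangle abc; > 0 iff c lies to the left
        # of the directed line from a to b
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def hull_side(a, b, pts):
        # hull chain from a toward b (b itself is contributed by the next call)
        if not pts:
            return [a]
        pmax = max(pts, key=lambda p: cross(a, b, p))   # farthest from line ab
        s1 = [p for p in pts if cross(a, pmax, p) > 0]  # left of a -> pmax
        s2 = [p for p in pts if cross(pmax, b, p) > 0]  # left of pmax -> b
        return hull_side(a, pmax, s1) + hull_side(pmax, b, s2)

    pts = sorted(set(points))   # by x, ties by y, as the text assumes
    if len(pts) < 3:
        return pts
    p1, pn = pts[0], pts[-1]
    upper = [p for p in pts if cross(p1, pn, p) > 0]  # S1: left of p1 -> pn
    lower = [p for p in pts if cross(p1, pn, p) < 0]  # S2: right of p1 -> pn
    return hull_side(p1, pn, upper) + hull_side(pn, p1, lower)
```

The same `hull_side` routine builds the lower hull simply by walking the dividing line in the opposite direction, from pn back to p1.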

Efficiency:
Quickhull has the same θ (n2) worst-case efficiency as quicksort. In the average case,
however, we should expect a much better performance.

UNIT II ANNA UNIVERSITY QUESTIONS:


PART-A

1. State Master’s Theorem. (May 2018)


2. Give the General plan of divide and conquer algorithms. (Dec 2017) (May 2016)
3. Write the advantages of insertion sort. (Dec 2017)
4. What is closest pair problem? (May 2017) (May 2016)
5. What is the worst case complexity of binary search? (Dec 2016)
6. Derive the complexity of binary search algorithm. (May 2015)

PART-B
1. Write the algorithm for quick sort. Provide a complete analysis of quicksort for the given set
of numbers 12, 33,23,43,44,55,64,77 and 76. (Dec 2018)
2. Explain Merge sort algorithm with example. (May 2018) (Dec 2017)

3. What is divide and conquer strategy and explain the binary search with suitable example
problem. (May 2017)
4. Write an algorithm for quicksort and derive its time complexity for the example list
5, 3, 1, 9, 8, 2, 4, 7. (May 2017)
5. Give the algorithm for quicksort. With an example, show that quicksort is not a stable
sorting algorithm. (Dec 2016)
6. State and explain the Merge sort algorithm and give the recurrence relation and efficiency.
(May 2016)
7. Write an algorithm to perform binary search and compute its run time complexity. (Dec
2015)
