
Unit - II

Divide-and-conquer
Dr. S. Nagendra Prabhu
Syllabus
• Introduction, divide-and-conquer,
• Binary Search and its algorithm analysis
• Merge sort and its algorithm analysis
• Quick sort and its algorithm analysis
• Strassen's Matrix multiplication
• Finding Maximum and minimum
• Algorithm for finding closest pair
• Convex Hull Problem
Introduction
• Divide and Conquer is an algorithmic design pattern. In
this approach, we take a problem on a large input, break
the input into smaller pieces, solve the problem on each
of the small pieces, and then merge the piecewise
solutions into a global solution. This mechanism of
solving the problem is called the Divide & Conquer
Strategy.
• A Divide and Conquer algorithm solves a problem
using the following three steps.
1.Divide the original problem into a set of
subproblems.
2.Conquer: Solve every subproblem individually,
recursively.
3.Combine: Put together the solutions of the
subproblems to get the solution to the whole
problem.
Introduction
• Divide / Break
• Conquer / Solve
• Merge / Combine
Examples : Divide and Conquer Technique

The following computer algorithms are based on the
Divide & Conquer approach:
1. Binary Search
2. Sorting (merge sort, quick sort)
3. Strassen's Matrix multiplication
4. Finding Maximum and minimum
5. Algorithm for finding closest pair
6. Convex Hull Problem
Divide and Conquer Technique

General Method:
 The Divide and Conquer Technique splits n inputs
into k subsets, 1 < k ≤ n, yielding k subproblems.
 These subproblems will be solved and then
combined by using a separate method to get a
solution to the whole problem.
 If the subproblems are still large, then the Divide and
Conquer Technique will be reapplied.
 Often subproblems resulting from a Divide and
Conquer Technique are of the same type as the
original problem.
Control Abstraction/general method for Divide
and Conquer Technique

Algorithm DAndC(p)
{
if Small(p) then return s(p);
else
{
divide p into smaller instances p1,p2,…….,pk, k≥1;
Apply DAndC to each of these subproblems;
return Combine(DAndC(p1), DAndC(p2),……,DAndC(pk));
}
}
General format
If the size of p is n and the sizes of the k subproblems are n1, n2,
…, nk, then the computing time of DAndC is described by the
recurrence relation

T(n) = g(n)                                  if n is small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n)      otherwise

 Where T(n) is the time for DAndC on any input of size n and g(n) is
the time to compute the answer directly for small inputs.
 The function f(n) is the time for dividing p and combining the
solutions to subproblems.
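 For example (anticipating the merge sort analysis later in this unit),
merge sort splits its input into k = 2 halves of size n/2 each, and merging
takes f(n) = cn time, so its recurrence is T(n) = 2T(n/2) + cn.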
Binary Search
• Divide / Break
• Conquer / Solve
• Merge / Combine
Binary Search
1. In Binary Search technique, we search an element
in a sorted array by recursively dividing the interval
in half.
2. Firstly, we take the whole array as an interval.
3. If the Pivot Element (the item to be searched) is
less than the item in the middle of the interval, we
discard the second half of the list and recursively
repeat the process for the first half of the list by
calculating the new middle and last element.
4. If the Pivot Element (the item to be searched) is
greater than the item in the middle of the interval,
we discard the first half of the list and work
recursively on the second half by calculating the
new beginning and middle element.
5. Repeat this process until the value is found or the
interval is empty.
BINARY SEARCH
• Binary search is implemented using following steps...
• Step 1 - Read the search element from the user.
• Step 2 - Find the middle element in the sorted list.
• Step 3 - Compare the search element with the middle element in the sorted list.
• Step 4 - If both are matched, then display "Given element is found!!!" and terminate the
function.
• Step 5 - If both are not matched, then check whether the search element is smaller or
larger than the middle element.
• Step 6 - If the search element is smaller than the middle element, repeat steps 2, 3, 4 and 5
for the left sublist of the middle element.
• Step 7 - If the search element is larger than the middle element, repeat steps 2, 3, 4 and 5 for
the right sublist of the middle element.
• Step 8 - Repeat the same process until we find the search element in the list or until
sublist contains only one element.
• Step 9 - If that element also doesn't match with the search element, then display
"Element is not found in the list!!!" and terminate the function.
How Binary Search Works?

• The following is our sorted array, and let us assume that
we need to search the location of value 31 using binary
search.
• First, we shall determine the middle of the array by using this
formula –
mid = beginning + (ending – beginning) / 2

• Here it is, 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.
How Binary Search Works?

• We change our low to mid + 1 and find the new mid value again.

low = mid + 1
mid = low + (high – low) / 2

• We conclude that the target value 31 is stored at location 5.


BINARY SEARCH
Algorithm BinSrch(a, low, high, x)
{
if( low = high)
{
if(x = a[high]) then return high;
else return 0;
}
else
{
mid := (low+high)/2;
if (x = a[mid]) then return mid;
else if(x < a[mid]) then
return BinSrch(a, low, mid-1, x);
else return BinSrch(a, mid+1, high, x);
}
}
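For reference, here is a minimal C version of the same recursive search
(an illustrative sketch, not part of the original slides); it assumes
0-based indexing and returns -1 when the element is not present, instead
of the 0 returned by the pseudocode above.

/* Recursive binary search on a sorted array a[low..high].
   Returns the index of x, or -1 if x is not present. */
int bin_search(int a[], int low, int high, int x)
{
    if (low > high)
        return -1;                 /* empty interval: not found */
    int mid = low + (high - low) / 2;
    if (x == a[mid])
        return mid;                /* found at the middle */
    else if (x < a[mid])
        return bin_search(a, low, mid - 1, x);   /* search left half */
    else
        return bin_search(a, mid + 1, high, x);  /* search right half */
}

For the 10-element array in the walkthrough above, bin_search(a, 0, 9, 31)
would return 5, the location found there.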
Binary Search - Analysis
1. Input: an array A of size n, already sorted
in ascending or descending order.
2. Output: an analysis of searching for an item
in the sorted array of size n.
3. Logic: Let T(n) = number of comparisons
of an item with n elements in a sorted array.
• Set BEG = 1 and END = n

• Compare the search item with the mid item.


Binary Search - Analysis
• At the last step only one element is left in the
interval, so only one final comparison is made
with that element before the search stops.
Binary Search - Analysis
Time Complexity of Binary Search
Successful searches:
best: O(1)    average: O(log n)    worst: O(log n)
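In outline (our own summary of the analysis), each unsuccessful comparison
halves the remaining interval, so the number of comparisons satisfies

T(n) = T(n/2) + 1,   T(1) = 1
     = T(n/4) + 2
     = …
     = T(1) + log2 n
     = O(log n)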
Merge Sort
2. Merge Sort
1. Base Case: solve the problem directly
if it is small enough (only one element).
2. Divide the problem into two or more
similar and smaller subproblems.
3. Recursively solve the subproblems.
4. Combine solutions to the subproblems.
Merge Sort
• Merge sort is yet another sorting
algorithm that falls under the category
of Divide and Conquer technique.
• It is one of the best sorting techniques
and a classic example of building a
recursive algorithm.
Example
Merge Sort: Idea

(figure) Divide A into two halves, FirstPart and SecondPart;
recursively sort FirstPart and SecondPart; then merge the two
sorted halves, and A is sorted.
Merge Sort: Algorithm
MergeSort(low, high)
// sorts the elements a[low],…,a[high] which reside in the global array
// a[1:n] into ascending order.
// Small(p) is true if there is only one element to sort. In this case the list is
// already sorted.
{
  if ( low < high ) then  // if there are more than one element
  {
    // divide the list and make the recursive calls
    mid := (low+high)/2;
    MergeSort(low, mid);
    MergeSort(mid+1, high);
    // combine the two sorted halves
    Merge(low, mid, high);
  }
}
Merge Algorithm
Merge(A, L, M, H)
// merges the two sorted subarrays A[L..M] and A[M+1..H] back into A[L..H]
{
  copy A[L..M] into array a and A[M+1..H] into array b;
  i := 1; j := 1;   // next unused element of a and of b
  for k := L to H do
  {
    if ( b is exhausted, or (a is not exhausted and a[i] < b[j]) ) then
    {
      A[k] := a[i];
      i++;
    }
    else
    {
      A[k] := b[j];
      j++;
    }
  }
}
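Putting the two routines together, a compact C sketch of merge sort (our own
illustrative implementation, not the slides' code); it sorts a[low..high] in
place, using a temporary buffer for the merge step:

#include <stdio.h>
#include <string.h>

#define MAX_N 1000

static int buf[MAX_N];                   /* scratch space for merging */

static void merge(int a[], int low, int mid, int high)
{
    int i = low, j = mid + 1, k = low;
    while (i <= mid && j <= high)        /* take the smaller head element */
        buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)  buf[k++] = a[i++]; /* copy leftovers of the left half  */
    while (j <= high) buf[k++] = a[j++]; /* copy leftovers of the right half */
    memcpy(&a[low], &buf[low], (high - low + 1) * sizeof(int));
}

static void merge_sort(int a[], int low, int high)
{
    if (low < high) {
        int mid = low + (high - low) / 2;
        merge_sort(a, low, mid);         /* sort left half  */
        merge_sort(a, mid + 1, high);    /* sort right half */
        merge(a, low, mid, high);        /* combine the two sorted halves */
    }
}

int main(void)
{
    int a[] = {3, 5, 15, 28, 6, 10, 14, 22};
    int n = sizeof a / sizeof a[0];
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* 3 5 6 10 14 15 22 28 */
    printf("\n");
    return 0;
}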
Merge-Sort: Merge Example

(figure, several slides) The slides trace the Merge step on the two sorted
halves L and R of array A. Starting with i = low, j = mid+1 and k = low,
index i scans L, index j scans R, and index k writes into the output array B;
at each step the smaller of L[i] and R[j] is copied and that index advances.
When one half is exhausted, the remaining elements of the other half are
copied, leaving B fully sorted.
Merge-Sort Analysis

(figure) The recursion tree: a problem of size n is split into two
subproblems of size n/2, then four of size n/4, and so on down to
subproblems of constant size; the tree has about log2 n levels.
Analysis of Merge Sort:
• Note the stopping condition T(1)=0 because at the last level there is only 1 element
that needs to be copied, and there is no comparison.
• So, we get,
Merge-Sort Time Complexity
If the time for the merging operation is proportional to n, then the
computing time for merge sort is described by the recurrence relation

T(n) = c1                   n = 1, c1 is a constant
T(n) = 2T(n/2) + c2n        n > 1, c2 is a constant

Assume n = 2^k, then

T(n) = 2T(n/2) + c2n
     = 2(2T(n/4) + c2n/2) + c2n
     = 4T(n/4) + 2c2n
     …
     = 2^k T(1) + k c2n
     = c1n + c2n log n = O(n log n)
Summary
• Merge-Sort
– Most of the work is done in combining the
solutions.
– Best case takes O(n log n) time
– Average case takes O(n log n) time
– Worst case takes O(n log n) time
Quick Sort
3. Quick Sort
• Divide:
• Pick any one of the following as pivot,
– First element
– Last element
– Middle element
– Random element
• Partition the remaining elements into
FirstPart, which contains all elements < pivot
SecondPart, which contains all elements > pivot

• Recursively sort FirstPart and SecondPart.


• Combine: no work is necessary since sorting is done in place.
Example: with the first element 4 chosen as the pivot, the array

4 2 7 8 1 9 3 6 5

is partitioned; the pivot divides a into two sublists x (elements smaller
than the pivot) and y (elements larger than the pivot).
The whole process

4 2 7 8 1 9 3 6 5

2 1 3 4 7 8 9 6 5

1 2 3 6 5 7 8 9

1 3 5 6 8 9

5 9
Quick Sort Algorithm :
Algorithm QuickSort(low,high)
//Sorts the elements a[low],…..,a[high] which resides
//in the global array a[1:n] into ascending order;
// a[n+1] is considered to be defined and must ≥ all the
// elements in a[1:n].
{
if( low< high ) // if there are more than one element
{ // divide p into two subproblems.
j :=Partition(low,high);
// j is the position of the partitioning element.
QuickSort(low,j-1);
QuickSort(j+1,high);
// There is no need for combining solutions.
}
}
Algorithm Partition(l,h)
{
pivot:= a[l] ; i:=l; j:= h+1;
while( i < j ) do
{
i++;
while( a[ i ] < pivot ) do
i++;
j--;
while( a[ j ] > pivot ) do
j--;

if ( i < j ) then Interchange(i, j);  // interchange ith and jth elements
}
Interchange(l, j); return j;          // interchange pivot and jth element
}
Algorithm interchange (x,y )
{
temp=a[x];
a[x]=a[y];
a[y]=temp;
}
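A compact C sketch of the same scheme (our own illustration, not the slides'
exact code), using the first element as pivot:

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[low..high] around pivot = a[low]; return the pivot's
   final position. */
static int partition(int a[], int low, int high)
{
    int pivot = a[low];
    int i = low, j = high + 1;
    while (1) {
        do { i++; } while (i <= high && a[i] < pivot);
        do { j--; } while (a[j] > pivot);
        if (i >= j) break;
        swap(&a[i], &a[j]);        /* put the out-of-place pair on the right sides */
    }
    swap(&a[low], &a[j]);          /* place the pivot at its final position j */
    return j;
}

static void quick_sort(int a[], int low, int high)
{
    if (low < high) {
        int j = partition(a, low, high);
        quick_sort(a, low, j - 1);     /* sort elements smaller than the pivot */
        quick_sort(a, j + 1, high);    /* sort elements larger than the pivot  */
    }
}

int main(void)
{
    int a[] = {4, 2, 7, 8, 1, 9, 3, 6, 5};
    int n = sizeof a / sizeof a[0];
    quick_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 6 7 8 9 */
    printf("\n");
    return 0;
}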
Best/good Case

(figure) In the best case the pivot splits the array evenly at every
level: one problem of size n, two of size n/2, four of size n/4, and so
on down to constant size, giving about log n levels of O(n) work each.

• Total time: O(nlogn)


• Best case
• Recurrence relation for best case in quick
sort 
• T(n) = 2 T(n/2) + n

• Prove O(nlogn)
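One standard way to verify the bound (our own expansion, assuming n = 2^k):

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     …
     = 2^k T(1) + k·n
     = n·T(1) + n·log2 n = O(n log n)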
QUICK SORT


Time complexity analysis
A worst/bad case

On an already sorted input, each partition removes only the pivot:

1 2 3 4 5 6 7 8 9
  2 3 4 5 6 7 8 9
    3 4 5 6 7 8 9
      4 5 6 7 8 9
        5 6 7 8 9
          6 7 8 9
            7 8 9
              8 9
                9

so there are n levels of work, for O(n2) time in total.
Worst/bad Case

(figure) The recursion chain has depth n: subproblem sizes n, n-1,
n-2, …, 3, 2, 1, with partitioning costs cn, c(n-1), c(n-2), …, 3c, 2c, c.

Happens only if
• the input is sorted
• the input is reversely sorted

Total time: O(n2)
T(n) = T(n-1) + n

• T(n-1) is the time taken for the remaining
elements, excluding the pivot element.
• n is the number of comparisons required to place
the pivot element in its exact position.
• It means there will be n comparisons at this level
if there are n items.
Solving Recurrence Relation
• Expand T(n) = T(n-1) + n repeatedly, with the size
decreasing by 1 at every step:
T(n) = T(n-1) + n = T(n-2) + (n-1) + n = …
     = T(1) + 2 + 3 + … + n = O(n2)
Summary
• Quick-Sort
– Most of the work done in partitioning
– Best case takes O(n log(n)) time
– Average case takes O(n log(n)) time
– Worst case takes O(n2) time
4.Strassen’s Matrix Multiplication
Basic Matrix Multiplication
Let A and B be two n×n matrices. The product C=AB is also an n×n matrix.

void matrix_mult () {
  for (i = 1; i <= N; i++) {
    for (j = 1; j <= N; j++) {
      C[i,j] = 0;
      for (k = 1; k <= N; k++) {
        C[i,j] = C[i,j] + A[i,k] * B[k,j];
      }
    }
  }
}
Time complexity of above algorithm is ???
Basic Matrix Multiplication
Let A and B be two n×n matrices. The product C=AB is also an n×n matrix.

void matrix_mult () {
  for (i = 1; i <= N; i++) {
    for (j = 1; j <= N; j++) {
      C[i,j] = 0;
      for (k = 1; k <= N; k++) {
        C[i,j] = C[i,j] + A[i,k] * B[k,j];
      }
    }
  }
}
Time complexity of above algorithm is
T(n)=O(n3)
How it works ?
• Following is simple Divide and Conquer
method to multiply two square
matrices.
1) Divide matrices A and B in 4 sub-
matrices of size N/2 x N/2 as shown in
the below diagram.
2) Calculate following values
recursively. ae + bg, af + bh, ce + dg
and cf + dh.
Then, with A, B and C each partitioned into four N/2 x N/2 blocks,

    C = | C11 C12 |    A = | A11 A12 |    B = | B11 B12 |
        | C21 C22 |        | A21 A22 |        | B21 B22 |

C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
• Each of these four equations specifies two multiplications of n/2×n/2
matrices and the addition of their n/2×n/2 products.
• We can derive the following recurrence relation for the time T(n) to multiply
two n×n matrices:

T(n) = c1                   if n <= 2
T(n) = 8T(n/2) + c2n^2      if n > 2

T(n) = O(n^3)
Strassen’s method

• Matrix multiplications are more expensive than matrix
additions or subtractions, and the straightforward method is O(n3).

• Strassen discovered a way to compute the
multiplication using only 7 multiplications and 18 additions
or subtractions.

• His method involves computing seven n/2×n/2 matrices
M1, M2, M3, M4, M5, M6, and M7; the cij's are then calculated using
these matrices.
Formulas for Strassen’s Algorithm
M1 = (A11 + A22) × (B11 + B22)
M2 = (A21 + A22) × B11
M3 = A11 × (B12 – B22)
M4 = A22 × (B21 – B11)
M5 = (A11 + A12) × B22
M6 = (A21 – A11) × (B11 + B12)
M7 = (A12 – A22) × (B21 + B22)
C11 = M1 + M4 - M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 + M3 - M2 + M6
| C11 C12 |   | A11 A12 |   | B11 B12 |
|         | = |         | * |         |
| C21 C22 |   | A21 A22 |   | B21 B22 |

              | M1 + M4 - M5 + M7      M3 + M5           |
            = |                                          |
              | M2 + M4                M1 + M3 - M2 + M6 |

The resulting recurrence relation for T(n) is

T(n) = c1                   n <= 2
T(n) = 7T(n/2) + c2n^2      n > 2

Expanding for n = 2^k:

T(n) = 7^k T(1) + c2n^2 [ 1 + 7/4 + (7/4)^2 + (7/4)^3 + … + (7/4)^(k-1) ]
     ≈ 7^(log2 n) c1 + c2 n^2 (7/4)^(log2 n)
     = c1 n^(log2 7) + c2 n^(log2 4) · n^(log2 7 - log2 4)
     = O(n^(log2 7)) ≈ O(n^2.81)
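As a small sanity check (our own illustrative code, not from the slides),
Strassen's seven products can be evaluated on a 2×2 example, where each
block A11…B22 is a single number; the result must agree with the ordinary
product:

#include <stdio.h>

/* Verify Strassen's seven-product formulas on a 2x2 example, where each
   block A11..B22 is a single scalar. Illustrative only. */
int main(void)
{
    double A11 = 1, A12 = 2, A21 = 3, A22 = 4;
    double B11 = 5, B12 = 6, B21 = 7, B22 = 8;

    double M1 = (A11 + A22) * (B11 + B22);
    double M2 = (A21 + A22) * B11;
    double M3 = A11 * (B12 - B22);
    double M4 = A22 * (B21 - B11);
    double M5 = (A11 + A12) * B22;
    double M6 = (A21 - A11) * (B11 + B12);
    double M7 = (A12 - A22) * (B21 + B22);

    double C11 = M1 + M4 - M5 + M7;   /* equals A11*B11 + A12*B21 */
    double C12 = M3 + M5;             /* equals A11*B12 + A12*B22 */
    double C21 = M2 + M4;             /* equals A21*B11 + A22*B21 */
    double C22 = M1 + M3 - M2 + M6;   /* equals A21*B12 + A22*B22 */

    printf("%g %g\n%g %g\n", C11, C12, C21, C22);   /* prints 19 22 / 43 50 */
    return 0;
}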


Finding Maximum and
minimum
Finding Maximum and
minimum
• Problem: Analyze the algorithm to
find the maximum and minimum
element from an array.
Finding Maximum and Minimum
• Algorithm straightforward
Finding the maximum and minimum elements in a set of (n) elements
Algorithm straightforward (a, n, max, min)

Input: array (a) with (n) elements


Output: max: max value, min: min value
max = min = a(1)
for i = 2 to n do
begin
if (a(i)>max) then max=a(i)
if (a(i)<min) then min=a(i)
end
End

Complexity
best= average =worst= 2(n-1) comparisons
(figure) Example: the array 56 34 12 1 76 34 23 8 16 is split recursively
into halves; each half returns its own (Min, Max) pair, and at every
combine step the two pairs are merged with one min-comparison and one
max-comparison, until the whole array yields Min 1 and Max 76.
Algorithm
• Algorithm: max_min(x, y)
if y – x ≤ 1 then
    return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
else
    mid := (x + y)/2
    (max1, min1) := max_min(x, mid)
    (max2, min2) := max_min(mid+1, y)
    return (max(max1, max2), min(min1, min2))
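A minimal C sketch of this recursive scheme (our own illustration; the
function and variable names are hypothetical):

#include <stdio.h>

/* Recursively find the minimum and maximum of numbers[x..y]. */
static void max_min(const int numbers[], int x, int y, int *mx, int *mn)
{
    if (y - x <= 1) {                       /* one or two elements left */
        *mx = numbers[x] > numbers[y] ? numbers[x] : numbers[y];
        *mn = numbers[x] < numbers[y] ? numbers[x] : numbers[y];
    } else {
        int mid = (x + y) / 2;
        int max1, min1, max2, min2;
        max_min(numbers, x, mid, &max1, &min1);       /* left half  */
        max_min(numbers, mid + 1, y, &max2, &min2);   /* right half */
        *mx = max1 > max2 ? max1 : max2;              /* combine: 1 comparison */
        *mn = min1 < min2 ? min1 : min2;              /* combine: 1 comparison */
    }
}

int main(void)
{
    int a[] = {56, 34, 12, 1, 76, 34, 23, 8, 16};
    int n = sizeof a / sizeof a[0], mx, mn;
    max_min(a, 0, n - 1, &mx, &mn);
    printf("min = %d, max = %d\n", mn, mx);   /* min = 1, max = 76 */
    return 0;
}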
Complexity Analysis
1. if size = 1, return the current element as both max and min  // base condition

2. else if size = 2, one comparison determines max and min  // base condition

3. else /* if size > 2, find mid and call the recursive function */

   recur for the max and min of the left half; recur for the max and min of the right half;

   one comparison determines the max of the two subarrays, update max;

   one comparison determines the min of the two subarrays, update min.

4. finally, return or print the min/max of the whole array.

The recurrence relation can be written as T(n) = 2T(n/2) + 2,

solving which gives T(n) = (3n/2) - 2, which is the exact number of comparisons; still,
the worst-case time complexity is T(n) = O(n), and the best-case time complexity
is O(1), when you have only one element in the array, which is the candidate
for both max and min.
• Let n be the number of items in the array.
• Let T(n) be the time required to apply the
algorithm on an array of size n. Here the problem
is divided into two subproblems of size n/2,
giving T(n) = 2T(n/2) + 2.
• Similarly, apply the same procedure
recursively on each subproblem.

• Recursion stops when the subproblem size
reaches 2 (or 1).

• Substituting this base case back into the expanded
recurrence gives T(n) = (3n/2) - 2, i.e. T(n) = O(n).
Algorithm for finding closest
pair
Algorithm for finding closest pair
• We are given an array of n points in the plane, and
the problem is to find out the closest pair of points
in the array.
• For example, in air-traffic control, you may want to
monitor planes that come too close together, since this
may indicate a possible collision.
• Recall the following formula for the distance between two
points p and q.
• Distance between two points p = (px, py) and q = (qx, qy):
dist(p, q) = sqrt( (px - qx)^2 + (py - qy)^2 )
Real-life Applications
» Track the closest pairs in air traffic control
to detect and prevent collision.

» It is easy to come up with a brute-force solution
(compare every pair of points).
Algorithm for finding closest pair
• Given a set of points, find the closest pair
(measured in Euclidean distance).
• a set of n points are given on the 2D
plane. In this problem, we have to find the
pair of points, whose distance is minimum.
• The Brute force solution is O(n2), compute
the distance between each pair and return
the smallest.
• We can calculate the smallest distance
in O(n log n) time (O(n log^2 n) for the simpler
version analyzed below) using the Divide and Conquer
strategy.
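For comparison, a minimal C sketch of the O(n2) brute force solution
(our own illustration; the sample points are arbitrary):

#include <stdio.h>
#include <math.h>
#include <float.h>

struct point { double x, y; };

static double dist(struct point p, struct point q)
{
    return sqrt((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y));
}

/* Brute force: compare every pair, O(n^2) distance computations. */
static double closest_pair_brute(const struct point p[], int n)
{
    double best = DBL_MAX;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            double d = dist(p[i], p[j]);
            if (d < best)
                best = d;
        }
    return best;
}

int main(void)
{
    struct point p[] = {{2, 3}, {12, 30}, {40, 50}, {5, 1}, {12, 10}, {3, 4}};
    printf("smallest distance = %f\n",
           closest_pair_brute(p, sizeof p / sizeof p[0]));
    return 0;
}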
Algorithm
Input: An array of n points P[]
Output: The smallest distance between two points in the given array.
As a pre-processing step, the input array is sorted by x coordinates.
1) Find the middle point in the sorted array, we can take P[n/2] as middle
point.
2) Divide the given array in two halves. The first subarray contains points
from P[0] to P[n/2]. The second subarray contains points from P[n/2+1] to
P[n-1].
3) Recursively find the smallest distances in both subarrays. Let the
distances be dl and dr. Find the minimum of dl and dr. Let the minimum be
d.
4) From the above 3 steps, we have an upper bound d on the minimum distance.
Now we need to consider the pairs such that one point of the pair is from the
left half and the other is from the right half. Consider the vertical line passing
through P[n/2] and find all points whose x coordinate is
closer than d to this middle vertical line. Build an array strip[] of all such
points.
5) Sort the array strip[] according to y coordinates. This step is O(nLogn). It
can be optimized to O(n) by recursively sorting and merging.

6) Find the smallest distance in strip[]. This is tricky. At first glance it seems
to be an O(n2) step, but it is actually O(n).

7) Finally, return the minimum of d and the distance calculated in step 6 above.


Algorithm for Finding Closest pair
Time Complexity
Let the time complexity of the above algorithm be T(n).
• Let us assume that we use a O(nLogn) sorting algorithm. The
above algorithm divides all points in two sets and recursively calls
for two sets.
• After dividing, it finds the strip in O(n) time, sorts the strip in
O(nLogn) time and finally finds the closest points in strip in O(n)
time.

So T(n) can be expressed as follows:

T(n) = 2T(n/2) + O(nLogn)


Using the Master theorem

T(n) = 2T(n/2) + O(n log n)
T(n) = aT(n/b) + f(n), where a >= 1 and b > 1

a = 2, b = 2, f(n) = n log n

1. If f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = ϴ(n^(log_b a)).

2. If f(n) = ϴ(n^(log_b a) log^k n) with k ≥ 0, then T(n) = ϴ(n^(log_b a) log^(k+1) n).

3. If f(n) = Ω(n^(log_b a + ε)) and f(n) satisfies the regularity condition, then T(n) = ϴ(f(n)).

Calculate n^(log_b a) = n^(log_2 2) = n^1.

Compare with f(n): f(n) = n log n = n^1 log^1 n.

Case 2 is satisfied with k = 1, hence the complexity is given as

T(n) = ϴ(n^(log_b a) log^(k+1) n) = ϴ(n log^2 n)


Convex Hull Problem
• The convex hull of a set of points is the smallest convex
polygon that contains all the points of the set.
Brute Force Approach
• The brute force method for determining the
convex hull is to construct a line connecting
two points and then verify whether all the other
points are on the same side of it or not.
• There are n(n – 1)/2 such lines with n
points, and each line is compared with the
remaining n – 2 points to see if they fall on
the same side.
• As a result, the brute force technique takes
O(n3) time. Can we use divide and conquer
more effectively?
Divide and Conquer Approach
• There exist multiple approaches to solve the convex hull
problem. Here, we will discuss how to solve it using the
divide and conquer approach.
1. Sort all of the points by their X coordinates. The tie is
broken by ranking points according to their Y coordinate.
2. Determine two extreme points A and B, where A represents
the leftmost point and B represents the rightmost point. A
and B would be the convex hull’s vertices. Lines AB and BA
should be added to the solution set.
3. Find the point C that is the farthest away from line AB.
4. Calculate the convex hull of the points on the line AC’s right
and left sides. Remove line AB from the original solution set
and replace it with AC and CB.
5. Process the points on the right side of line BA in the same
way.
6. Find the convex hull of the points on the left and right of the
line connecting the two farthest points of that specific
convex hull recursively.
Algorithm of Convex Hull
Algorithm for finding convex hull
using divide and conquer strategy is
provided below:
Algorithm of Convex Hull
Algorithm for subroutine FindHull is
described below :
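As a rough illustration of the FindHull idea described above (a sketch of
our own, not the slides' exact pseudocode; it tracks the points to the left
of each directed line, which covers the same two halves as the "right of AB /
right of BA" labelling in the steps, and prints hull edges as they are found):

#include <stdio.h>

struct point { double x, y; };

/* Signed area test: > 0 if r lies to the left of the directed line p -> q. */
static double cross(struct point p, struct point q, struct point r)
{
    return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

/* FindHull: among the points of s[] lying to the left of line a -> b,
   find the farthest point c, then recurse on a -> c and c -> b. */
static void find_hull(const struct point s[], int n,
                      struct point a, struct point b)
{
    int best = -1;
    double best_area = 0.0;
    for (int i = 0; i < n; i++) {
        double area = cross(a, b, s[i]);
        if (area > best_area) { best_area = area; best = i; }
    }
    if (best < 0) {                     /* no point outside: a -> b is a hull edge */
        printf("hull edge: (%g,%g) -> (%g,%g)\n", a.x, a.y, b.x, b.y);
        return;
    }
    struct point c = s[best];           /* farthest point from line a -> b */
    find_hull(s, n, a, c);              /* points left of a -> c */
    find_hull(s, n, c, b);              /* points left of c -> b */
}

int main(void)
{
    struct point p[] = {{0, 0}, {4, 0}, {4, 4}, {0, 4}, {2, 2}, {1, 3}};
    int n = sizeof p / sizeof p[0];

    /* Steps 1-2: extreme points A (leftmost) and B (rightmost). */
    struct point A = p[0], B = p[0];
    for (int i = 1; i < n; i++) {
        if (p[i].x < A.x) A = p[i];
        if (p[i].x > B.x) B = p[i];
    }
    find_hull(p, n, A, B);              /* one side of AB */
    find_hull(p, n, B, A);              /* other side (line BA) */
    return 0;
}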
Finding Complexity
• Dividing the points into two halves S1 and
S2 takes O(1) time by joining A and B. In the
average case, S1 and S2 each contain half of the
points. So, recursively computing the
convex hulls of S1 and S2 takes T(n/2) each.
Merging the two convex hulls is done in
linear time O(n), by finding the orthogonally
farthest point. The total running time after
preprocessing the points is therefore given by
• T(n) = 2T(n/2) + O(n) = 2T(n/2) + n,
which solves to T(n) = O(n log n).
Problem: Find the convex hull for
a given set of points using divide
and conquer approach.

• Step 1: According to the algorithm, find the leftmost
and rightmost points from the set P and
label them as A and B. Label all the points on the
right of AB as S1 and all the points on the right of
BA as S2.
Make a recursive call to FindHull(S1, A, B) and FindHull(S2, B, A)

• Solution = Solution – { AB } ∪ { AC, CB } = { AC, CB, BA }
• Label regions X0, X1 and X2 as shown in the above figure
• Make recursive calls: FindHull(X1, A, C) and FindHull(X2, C, B)
Refer to this link for Convex Hull
• https://codecrucks.com/convex-hull-using-divide-and-conquer/
