Daa Unit 2

Divide and Conquer Introduction

Divide and Conquer is an algorithmic design pattern. The idea is to take a problem on a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer Strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide: Break the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Examples: The following well-known computer algorithms are based on the Divide & Conquer approach (a generic code sketch follows the list):

Maximum and Minimum Problem

Binary Search

Sorting (merge sort, quick sort)

Tower of Hanoi.
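As an illustration of the three steps, here is a minimal Python sketch of the Maximum and Minimum problem from the list above (the function name and the sample array are our own, not from the text):

# Divide and conquer: find both the maximum and the minimum of a list.
def max_min(arr, low, high):
    # Stopping condition: one or two elements can be solved directly.
    if low == high:
        return arr[low], arr[low]
    if high == low + 1:
        return (arr[low], arr[high]) if arr[low] > arr[high] else (arr[high], arr[low])
    # Divide: split the index range in half.
    mid = (low + high) // 2
    # Conquer: solve each half recursively.
    left_max, left_min = max_min(arr, low, mid)
    right_max, right_min = max_min(arr, mid + 1, high)
    # Combine: merge the two partial answers.
    return max(left_max, right_max), min(left_min, right_min)

print(max_min([70, 250, 50, 80, 140, 12, 14], 0, 6))  # (250, 12)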

Fundamentals of Divide & Conquer Strategy:

There are two fundamentals of the Divide & Conquer Strategy:


Relational Formula

Stopping Condition

1. Relational Formula: It is the recurrence formula that we derive for the given problem. After
deriving the formula, we apply the D&C Strategy, i.e. we break the problem recursively and solve
the resulting subproblems.

2. Stopping Condition: When we break a problem using the Divide & Conquer Strategy, we
need to know how long to keep dividing. The condition at which we stop the recursion steps
of D&C is called the Stopping Condition.
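For example, for merge sort the relational formula and stopping condition can be written as follows (a standard illustration, not taken from the text above):

% Relational formula: two half-size subproblems plus O(n) merge work
T(n) = 2\,T\!\left(\frac{n}{2}\right) + O(n)

% Stopping condition: a single element is already sorted
T(1) = O(1)

% Solving the recurrence (e.g., by the master theorem) gives
T(n) = O(n \log n)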

Applications of Divide and Conquer Approach:

Following algorithms are based on the concept of the Divide and Conquer Technique:

Binary Search: The binary search algorithm is a searching algorithm, also called a
half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. If the values differ, the half that cannot contain the target
is eliminated, and the search continues on the other half. We again consider the middle element
of that half and compare it with the target value. The process keeps repeating until the target
value is found. If the remaining half becomes empty, we conclude that the target is not present
in the array.

Quicksort: It is one of the most efficient sorting algorithms, also known as partition-exchange
sort. It starts by selecting a pivot value from the array and then divides the rest of the array
elements into two sub-arrays. The partition is made by comparing each element with the pivot
value: elements smaller than the pivot go to one side and larger elements to the other, and the
sub-arrays are then sorted recursively.

Merge Sort: It is a comparison-based sorting algorithm. It starts by dividing an array into
sub-arrays and then recursively sorts each of them. Once sorted, the sub-arrays are merged
back together.

Closest Pair of Points: This is a problem from computational geometry: given n points in a
metric space, find the pair of points whose distance from each other is minimal.

Strassen's Algorithm: It is an algorithm for matrix multiplication, named after Volker
Strassen. It has proven to be much faster than the traditional algorithm when working on large
matrices.

Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer
approach and runs in O(n log n) time.

Karatsuba algorithm for fast multiplication: It is one of the fastest classical multiplication
algorithms, discovered by Anatoly Karatsuba in 1960 and published in 1962. It multiplies two
n-digit numbers by recursively reducing the problem to at most n^(log2 3) ≈ n^1.585
single-digit multiplications.
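A minimal Python sketch of Karatsuba's idea (our own illustration, assuming non-negative integers): it replaces four half-size multiplications with three.

# Karatsuba multiplication: three half-size multiplications instead of four.
def karatsuba(x, y):
    # Stopping condition: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)   # x = a*10^half + b
    c, d = divmod(y, 10 ** half)   # y = c*10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a + b)(c + d) - ac - bd = ad + bc, saving one multiplication
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * half) + ad_plus_bc * 10 ** half + bd

print(karatsuba(1234, 5678))  # 7006652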

Advantages of Divide and Conquer


o Divide and Conquer successfully solves some famously hard problems, such as the
Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems
for which you have no basic idea, but the divide and conquer approach lessens the effort:
it divides the main problem into subproblems and then solves them recursively. Divide and
conquer algorithms are often much faster than their naive counterparts.
o It uses cache memory efficiently: small subproblems can be solved entirely within the
cache instead of repeatedly accessing the slower main memory.
o It is more proficient than its counterpart, the brute force technique.
o Since these algorithms exhibit parallelism, they can be handled by systems with parallel
processing without much modification.

Disadvantages of Divide and Conquer


o Since most of its algorithms are designed using recursion, careful memory management
is necessary.
o An explicit stack may overuse the space.
o It may even crash the program if the recursion goes deeper than the call stack space
available.

Binary Search Algorithm


In this article, we will discuss the Binary Search Algorithm. Searching is the process of finding
a particular element in a list. If the element is present in the list, the search is called
successful, and the process returns the location of that element. Otherwise, the search is called
unsuccessful.

Linear Search and Binary Search are the two popular searching techniques. Here we will discuss
the Binary Search Algorithm.

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an
element in a list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach: the list is divided into two
halves, and the item is compared with the middle element of the list. If a match is found,
the location of the middle element is returned. Otherwise, we search in one of the two halves,
depending on the result of the comparison.
Working of Binary search
Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It will be
easy to understand the working of Binary search with an example.

There are two methods to implement the binary search algorithm -

o Iterative method
o Recursive method

The recursive method of binary search follows the divide and conquer approach.
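Here is a minimal Python sketch of both methods (our own illustration). Both assume the list is sorted in ascending order and return the index of key, or -1 if it is absent.

def binary_search_recursive(a, key, beg, end):
    if beg > end:                      # empty range: key is not present
        return -1
    mid = (beg + end) // 2             # middle element of the current range
    if a[mid] == key:
        return mid
    elif a[mid] < key:                 # key can only be in the right half
        return binary_search_recursive(a, key, mid + 1, end)
    else:                              # key can only be in the left half
        return binary_search_recursive(a, key, beg, mid - 1)

def binary_search_iterative(a, key):
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == key:
            return mid
        elif a[mid] < key:
            beg = mid + 1
        else:
            end = mid - 1
    return -1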

Consider a sorted array of 9 elements, and let the element to search be K = 56.

We use the following formula to calculate the mid of the array -

mid = (beg + end)/2

So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid index of the array.

We compare the middle element with K and repeat the process on the half that can contain it.
When the element being searched is found, the algorithm returns the index of the matched element.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average case, and worst
case. We will also see the space complexity of Binary search.

1. Time Complexity

Case            Time Complexity
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)

o Best Case Complexity - In binary search, the best case occurs when the element to search is
found in the first comparison, i.e., when the middle element itself is the element to be
searched. The best-case time complexity of binary search is O(1).
o Average Case Complexity - The average case time complexity of binary search
is O(log n).
o Worst Case Complexity - In binary search, the worst case occurs when we have to
keep reducing the search space until it has only one element. The worst-case time
complexity of binary search is O(log n).

2. Space Complexity - The space complexity of binary search is O(1) for the iterative method
and O(log n) for the recursive method, due to the recursion stack.

Quick Sort Algorithm


Sorting is a way of arranging items in a systematic manner. Quicksort is a widely used sorting
algorithm that makes O(n log n) comparisons on average when sorting an array of n elements. It is
a fast and highly efficient sorting algorithm. This algorithm follows the divide and conquer
approach. Divide and conquer is a technique of breaking a problem down into subproblems,
then solving the subproblems, and combining the results back together to solve the original
problem.

Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two
sub-arrays such that each element in the left sub-array is less than or equal to the pivot element
and each element in the right sub-array is larger than the pivot element.

Conquer: Recursively, sort two subarrays with Quicksort.

Combine: Once the sub-arrays are sorted in place, no extra work is needed to combine them; the whole array is already sorted.

Quicksort picks an element as pivot, and then it partitions the given array around the picked
pivot element. In quick sort, a large array is divided into two arrays in which one holds values
that are smaller than the specified value (Pivot), and another array holds the values that are
greater than the pivot.

After that, the left and right sub-arrays are also partitioned using the same approach. This continues
until only a single element remains in each sub-array.

Choosing the pivot


Picking a good pivot is necessary for a fast implementation of quicksort. However, it is difficult
to determine a good pivot in advance. Some of the ways of choosing a pivot are as follows (a code
sketch follows the list) -

o Pivot can be random, i.e. select a random pivot from the given array.
o Pivot can be either the rightmost element or the leftmost element of the given array.
o Select the median as the pivot element.
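Here is a minimal Python sketch (our own illustration). For simplicity it uses the Lomuto partition scheme with the rightmost element as the pivot; the worked example below instead traces a leftmost-pivot, two-pointer scheme, but both follow the same divide and conquer outline.

def partition(a, low, high):
    pivot = a[high]                        # rightmost element as pivot
    i = low - 1                            # boundary of the "<= pivot" region
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]  # place pivot in its final position
    return i + 1

def quicksort(a, low, high):
    if low < high:
        p = partition(a, low, high)        # Divide: pivot splits the array
        quicksort(a, low, p - 1)           # Conquer: sort the left sub-array
        quicksort(a, p + 1, high)          # Conquer: sort the right sub-array

arr = [24, 9, 29, 14, 19, 27]
quicksort(arr, 0, len(arr) - 1)
print(arr)  # [9, 14, 19, 24, 27, 29]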

Working of Quick Sort Algorithm


Now, let's see the working of the Quicksort Algorithm.

To understand the working of quick sort, let's take an unsorted array. It will make the concept
more clear and understandable.

Let the elements of the array be [24, 9, 29, 14, 19, 27].

In the given array, we consider the leftmost element as the pivot. So, in this case, a[left] = 24,
a[right] = 27 and a[pivot] = 24.

Since the pivot is at the left, the algorithm starts from the right and moves towards the left.

Now, a[pivot] < a[right], so the algorithm moves one position towards the left. Now,
a[left] = 24, a[right] = 19, and a[pivot] = 24.

Because a[pivot] > a[right], the algorithm swaps a[pivot] with a[right], and the pivot moves to
the right. Now, a[left] = 19, a[right] = 24, and a[pivot] = 24. Since the pivot is at the right,
the algorithm starts from the left and moves to the right.

As a[pivot] > a[left], the algorithm moves one position to the right. Now, a[left] = 9,
a[right] = 24, and a[pivot] = 24.

As a[pivot] > a[left], the algorithm moves one more position to the right. Now, a[left] = 29,
a[right] = 24, and a[pivot] = 24.

As a[pivot] < a[left], we swap a[pivot] and a[left]; now the pivot is at the left. Since the pivot
is at the left, the algorithm starts from the right and moves to the left. Now, a[left] = 24,
a[right] = 29, and a[pivot] = 24.

As a[pivot] < a[right], the algorithm moves one position to the left. Now, a[pivot] = 24,
a[left] = 24, and a[right] = 14.

As a[pivot] > a[right], we swap a[pivot] and a[right]; now the pivot is at the right. Now,
a[pivot] = 24, a[left] = 14, and a[right] = 24. The pivot is at the right, so the algorithm starts
from the left and moves to the right.

Now, a[pivot] = 24, a[left] = 24, and a[right] = 24. So pivot, left and right all point to the same
element. This marks the termination of the procedure.

Element 24, which is the pivot element, is now placed at its exact position.

Elements to the right of 24 are greater than it, and elements to the left of 24 are smaller than it.

Now, in a similar manner, the quicksort algorithm is separately applied to the left and right
sub-arrays. After the sorting is done, the array will be [9, 14, 19, 24, 27, 29].

Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We
will also see the space complexity of quicksort.

1. Time Complexity

Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n²)

o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the
middle element or near to the middle element. The best-case time complexity of quicksort
is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average case time complexity
of quicksort is O(n log n).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is
always the greatest or smallest element. For example, if the pivot element is always the last
element of the array, the worst case occurs when the given array is already sorted in
ascending or descending order. The worst-case time complexity of quicksort is O(n²).

2. Space Complexity - Quicksort sorts in place; it needs O(log n) auxiliary stack space on
average for the recursion and O(n) in the worst case.

Though the worst-case complexity of quicksort is higher than that of other sorting algorithms such
as merge sort and heap sort, it is faster in practice. The worst case rarely occurs, because the
choice of pivot can be varied. The worst case in quicksort can be avoided by choosing the right
pivot element, as the sketch below illustrates.
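One common way to avoid the worst case is to randomize the pivot choice. A short sketch, assuming the partition and quicksort functions from the sketch above:

import random

def randomized_partition(a, low, high):
    # Swap a random element into the pivot position before partitioning,
    # which makes the O(n^2) worst case very unlikely on any fixed input.
    r = random.randint(low, high)
    a[r], a[high] = a[high], a[r]
    return partition(a, low, high)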

Merge Sort Algorithm


In this article, we will discuss the merge sort algorithm. Merge sort is a sorting technique that
follows the divide and conquer approach. This article will be very helpful and interesting to
students, as they might face merge sort as a question in their examinations. Sorting algorithms
are widely asked in coding and technical interviews for software engineers, so it is important to
discuss the topic.

Merge sort is similar to the quicksort algorithm in that it uses the divide and conquer approach to
sort the elements. It is one of the most popular and efficient sorting algorithms. It divides the
given list into two equal halves, calls itself on the two halves, and then merges the two sorted
halves. We have to define a merge() function to perform the merging.

The sub-lists are divided again and again into halves until lists of single elements are reached.
Then we combine pairs of one-element lists into two-element lists, sorting them in the process.
The sorted two-element pairs are merged into four-element lists, and so on, until we get the
sorted list.

Now, let's see how merge sort works; a code sketch follows the worked example.

How Merge Sort Works?


To understand merge sort, we take the unsorted array [14, 33, 27, 10, 35, 19, 42, 44].

We know that merge sort first divides the whole array iteratively into equal halves until atomic
values are reached. Here, an array of 8 items is divided into two arrays of size 4.

This does not change the order of appearance of the items from the original array. Now we divide
these two arrays into halves.

We further divide these arrays until we reach atomic values, which can no longer be divided.
Now, we combine them in exactly the same manner as they were broken down.
We first compare the front elements of each pair of lists and combine them into another list in
sorted order. We see that 14 and 33 are already in sorted positions. We compare 27 and 10, and in
the target list of 2 values we put 10 first, followed by 27. We change the order of 19 and 35,
whereas 42 and 44 are placed sequentially.

In the next iteration of the combining phase, we compare lists of two data values and merge
them into lists of four data values, placing all in sorted order.

After the final merging, the list looks like this: [10, 14, 19, 27, 33, 35, 42, 44].

Now let's look at some programming aspects of merge sort.
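Here is a minimal Python sketch of merge sort (our own illustration), with a separate merge() helper as described above:

def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:            # take the smaller front element
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])                # one of these is already empty
    result.extend(right[j:])
    return result

def merge_sort(a):
    if len(a) <= 1:                        # atomic value: cannot be divided
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([14, 33, 27, 10, 35, 19, 42, 44]))
# [10, 14, 19, 27, 33, 35, 42, 44]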

Merge sort complexity


Now, let's see the time complexity of merge sort in best case, average case, and in worst case.
We will also see the space complexity of the merge sort.

1. Time Complexity

Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n log n)

o Best Case Complexity - It occurs when no reordering is required, i.e. the array is
already sorted. The best-case time complexity of merge sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average case time complexity
of merge sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means, suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of merge sort
is also O(n log n).

2. Space Complexity - Merge sort requires O(n) auxiliary space for the temporary arrays used
during merging.

Heap Sort Algorithm


In this article, we will discuss the heapsort algorithm. Heap sort processes the elements by
creating a min-heap or max-heap from the elements of the given array. A min-heap or max-heap
represents an ordering of the array in which the root element is the minimum or maximum
element of the array, respectively.

Heap sort repeatedly performs two main operations -

o Build a heap H, using the elements of the array.
o Repeatedly delete the root element of the heap formed in the first phase.

Before going further, recall briefly what a heap is: a heap is a complete binary tree in which every node satisfies the heap property - in a max-heap, each node is greater than or equal to its children; in a min-heap, each node is less than or equal to its children.

Working of Heap sort Algorithm


Now, let's see the working of the Heapsort Algorithm.

In heap sort, there are basically two phases involved in the sorting of elements. Using the
heap sort algorithm, they are as follows (a code sketch follows this list) -

o The first step includes the creation of a heap by adjusting the elements of the array.
o After the creation of the heap, we repeatedly remove the root element by swapping it with
the last element of the array, and then restore the heap property among the remaining
elements.
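Here is a minimal Python sketch of these two phases (our own illustration; the sample array simply reuses the values from the worked example below):

def heapify(a, n, i):
    # Sift a[i] down so the subtree rooted at i satisfies the max-heap
    # property, considering only the first n elements of the array.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    # Phase 1: build a max-heap from the array.
    for i in range(n // 2 - 1, -1, -1):
        heapify(a, n, i)
    # Phase 2: repeatedly swap the root (maximum) with the last element
    # of the shrinking heap and re-heapify.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)

arr = [81, 89, 9, 11, 14, 76, 54, 22]
heap_sort(arr)
print(arr)  # [9, 11, 14, 22, 54, 76, 81, 89]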

Now let's see the working of heap sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using heap sort. It will make the explanation
clearer and easier.

First, we have to construct a heap from the given array and convert it into a max heap.
After the conversion, the largest element, 89, is at the root.

Next, we have to delete the root element (89) from the max heap. To delete this node, we have to
swap it with the last node, i.e. (11). After deleting the root element, we again have to heapify it to
convert it into max heap.

After swapping the array element 89 with 11 and converting the heap into a max-heap, the next
largest element, 81, is at the root.

In the next step, again, we have to delete the root element (81) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (54). After deleting the root element, we again
have to heapify it to convert it into max heap.
After swapping the array element 81 with 54 and converting the heap into a max-heap, the next
largest element, 76, is at the root.

In the next step, we have to delete the root element (76) from the max heap again. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again
have to heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into a max-heap, the next
largest element, 54, is at the root.

In the next step, again we have to delete the root element (54) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (14). After deleting the root element, we again
have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into a max-heap, the next
largest element, 22, is at the root.

In the next step, again we have to delete the root element (22) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (11). After deleting the root element, we again
have to heapify it to convert it into max heap.

After swapping the array element 22 with 11 and converting the heap into a max-heap, the next
largest element, 14, is at the root.

In the next step, again we have to delete the root element (14) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again
have to heapify it to convert it into max heap.

After swapping the array element 14 with 9 and converting the heap into a max-heap, the next
largest element, 11, is at the root.

In the next step, again we have to delete the root element (11) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again
have to heapify it to convert it into max heap.
After swapping the array element 11 with 9, only the element 9 remains in the heap.

Now, the heap has only one element left. After deleting it, the heap will be empty.

After the completion of sorting, the array elements are [9, 11, 14, 22, 54, 76, 81, 89].

Now, the array is completely sorted.

Heap sort complexity


Now, let's see the time complexity of Heap sort in the best case, average case, and worst case.
We will also see the space complexity of Heapsort.

1. Time Complexity

Case            Time Complexity
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n log n)

o Best Case Complexity - It occurs when no reordering is required, i.e. the array is
already sorted. The best-case time complexity of heap sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order,
neither properly ascending nor properly descending. The average case time complexity
of heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means, suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of heap sort
is O(n log n).

2. Space Complexity - Heap sort sorts in place, so its auxiliary space complexity is O(1).

The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and
worst case), because the height of a complete binary tree having n elements is log n, as sketched
below.
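The bound can be seen as follows (a standard derivation, not from the text):

% Building the heap costs O(n); each of the n-1 root deletions triggers
% one heapify call on a heap of height at most \log_2 n.
T(n) = O(n) + (n - 1)\cdot O(\log n) = O(n \log n)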

Job Sequencing With Deadlines


 
The sequencing of jobs on a single processor with deadline constraints is called Job
Sequencing with Deadlines.
Here-
 You are given a set of jobs.
 Each job has a defined deadline and some profit associated with it.
 The profit of a job is given only when that job is completed within its deadline.
 Only one processor is available for processing all the jobs.
 The processor takes one unit of time to complete a job.
 
The problem is to schedule the jobs so as to maximize the total earned profit.
Approach to Solution
 
 A feasible solution would be a subset of jobs where each job of the subset gets completed
within its deadline.
 Value of the feasible solution would be the sum of profit of all the jobs contained in the subset.
 An optimal solution of the problem would be a feasible solution which gives the maximum
profit.
 
Greedy Algorithm-
 
Greedy Algorithm is adopted to determine how the next job is selected for an optimal solution.
The greedy algorithm described below always gives an optimal solution to the job sequencing
problem-
 
Step-01:
 
 Sort all the given jobs in decreasing order of their profit.
 
Step-02:
 
 Check the value of maximum deadline.
 Draw a Gantt chart where maximum time on Gantt chart is the value of maximum deadline.
 
Step-03:
 
 Pick up the jobs one by one.
 Put each job on the Gantt chart as far from 0 as possible, ensuring that it gets completed before
its deadline (see the sketch after this list).
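Here is a minimal Python sketch of this greedy procedure (our own illustration); the slots list plays the role of the Gantt chart cells:

def job_sequencing(jobs):
    # Each job is a (name, deadline, profit) tuple; one job takes one unit of time.
    # Step 1: sort jobs in decreasing order of profit.
    jobs = sorted(jobs, key=lambda job: job[2], reverse=True)
    # Step 2: the Gantt chart has max-deadline slots, all initially free.
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * max_deadline
    total_profit = 0
    # Step 3: place each job in the latest free slot before its deadline.
    for name, deadline, profit in jobs:
        for t in range(deadline - 1, -1, -1):
            if slots[t] is None:
                slots[t] = name
                total_profit += profit
                break
    return slots, total_profit

jobs = [("J1", 5, 200), ("J2", 3, 180), ("J3", 3, 190),
        ("J4", 2, 300), ("J5", 4, 120), ("J6", 2, 100)]
print(job_sequencing(jobs))  # (['J2', 'J4', 'J3', 'J5', 'J1'], 990)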
 
PRACTICE PROBLEM BASED ON JOB SEQUENCING WITH DEADLINES-
 
Problem
 
Given the jobs, their deadlines and associated profits as shown-
 

Jobs J1 J2 J3 J4 J5 J6

Deadlines 5 3 3 2 4 2

Profits 200 180 190 300 120 100

 
Answer the following questions-
1. Write the optimal schedule that gives maximum profit.
2. Are all the jobs completed in the optimal schedule?
3. What is the maximum earned profit?
 
Solution
 
Step-01:
 
Sort all the given jobs in decreasing order of their profit-

Jobs J4 J1 J3 J2 J5 J6

Deadlines 2 5 3 3 4 2

Profits 300 200 190 180 120 100

 
Step-02:
 
Value of maximum deadline = 5.
So, we draw a Gantt chart with 5 time slots.
 

 
Now,
 We take each job one by one in the order they appear in Step-01.
 We place the job on Gantt chart as far as possible from 0.
 
Step-03:
 
 We take job J4.
 Since its deadline is 2, we place it in the latest empty cell before deadline 2, i.e. time slot 2.
 

 
Step-04:
 
 We take job J1.
 Since its deadline is 5, we place it in the latest empty cell before deadline 5, i.e. time slot 5.
 

 
Step-05:
 
 We take job J3.
 Since its deadline is 3, we place it in the latest empty cell before deadline 3, i.e. time slot 3.
 

 
Step-06:
 
 We take job J2.
 Since its deadline is 3, we look for the latest empty cell before deadline 3.
 Since the second and third cells are already filled, we place job J2 in the first cell, i.e. time
slot 1.
 

 
Step-07:
 
 Now, we take job J5.
 Since its deadline is 4, we place it in the latest empty cell before deadline 4, i.e. time slot 4.
 

 
Now,
 The only job left is job J6, whose deadline is 2.
 All the slots before deadline 2 are already occupied.
 Thus, job J6 cannot be completed.
 
Now, the given questions may be answered as-
 
Part-01:
 
The optimal schedule is-
J2, J4, J3, J5, J1
This is the required order in which the jobs must be completed in order to obtain the maximum
profit.
 
Part-02:
 
 Not all the jobs are completed in the optimal schedule.
 This is because job J6 could not be completed within its deadline.
 
Part-03:
 
Maximum earned profit
= Sum of profit of all the jobs in optimal schedule
= Profit of job J2 + Profit of job J4 + Profit of job J3 + Profit of job J5 + Profit of job J1
= 180 + 300 + 190 + 120 + 200
= 990
