Ada Module 2 Notes
Prepared by: Dr. A. Kannagi
UNIT 2: Brute Force and Divide and Conquer Method
1. Introduction
2. Selection Sort
3. Bubble Sort
4. Sequential Search
5. Merge sort
6. Quick sort
7. Bucket sort
8. Radix sort.
1. Introduction
What is Brute Force Method?
The straightforward method of solving a given problem, based directly on the problem's
statement and definitions, is called the Brute Force method.
Ex : Selection Sort, Bubble Sort, Linear Search etc.
Pros and Cons of Brute Force Method :
Pros:
The brute force approach is a guaranteed way to find the correct solution by listing
all the possible candidate solutions for the problem.
It is a generic method and not limited to any specific domain of problems.
The brute force method is ideal for solving small and simpler problems.
It is known for its simplicity and can serve as a comparison benchmark.
Cons:
The brute force approach is inefficient; for real-world problem sizes, the order of
growth often reaches O(N!) or worse.
This method relies on the raw computing power of the machine rather than on good
algorithm design.
Brute force algorithms are slow.
Brute force algorithms are not constructive or creative compared to algorithms
designed using other paradigms.
Exhaustive Search:
A search problem involves some set of possibilities, and we are looking for one or more
of the possibilities that satisfy some property.
Example: Travelling Salesperson Problem
Method:
1. Generate a list of all potential solutions to the problem.
2. Evaluate potential solutions one by one, disqualifying infeasible ones and, for an
optimization problem, keeping track of the best one found so far.
The TSP can be modelled as a weighted graph, where cities are the graph's vertices,
paths are the graph's edges, and a path's distance is the edge's weight. It is a
minimization problem: starting and finishing at a specified vertex, visit every other
vertex exactly once. Often the model is a complete graph (i.e., each pair of vertices
is connected by an edge). If no path exists between two cities, adding an arbitrarily
long edge will complete the graph without affecting the optimal tour.
Algorithm
The travelling salesman problem takes a graph G(V, E) as input and produces another graph
as output (say G'), which records the path the salesman takes from one node to another.
The algorithm begins by sorting all the edges in the input graph G from the least distance to
the largest distance.
The first edge selected is the edge with the least distance, with one of its two vertices (say A
and B) being the origin node (say A).
Then, among the edges adjacent to the node other than the origin node (B), find the least-cost
edge and add it to the output graph.
Continue the process with further nodes, making sure there are no cycles in the output graph
and that the path reaches back to the origin node A.
However, if the origin is mentioned in the given problem, then the solution must always start
from that node. Let us look at an example problem to understand this better.
Example :
[Figure: a complete graph on four cities P, Q, R, S, with edge weights PQ = 3, PR = 6,
PS = 9, QR = 8, QS = 4, RS = 2.]
Here P, Q, R, and S are cities, and the number on each edge is the distance between the
two cities it connects.
Assume the salesperson starts from city P. The routes by which he can visit every city
exactly once and return to the start city P, together with their costs, are shown below:
P → Q → R → S → P (cost = 22)
P → Q → S → R → P (cost = 15)
P → R → Q → S → P (cost = 27)
P → R → S → Q → P (cost = 15)
P → S → Q → R → P (cost = 27)
P → S → R → Q → P (cost = 22)
Finally, since the salesperson wants the minimum cost (and hence the maximum profit),
the routes chosen are:
P → Q → S → R → P (cost = 15)
P → R → S → Q → P (cost = 15)
Note:
In general, for n cities the number of routes is f(n) = (n-1)!.
Since there are (n-1)! routes and evaluating the cost of each route takes n additions,
the time complexity is
f(n) = O(n!)
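The exhaustive search above can be sketched directly in Python. This is a minimal
illustration, not part of the original notes: the function name tsp_brute_force and the
distance dictionary are chosen here, with the weights encoding the P, Q, R, S example.

from itertools import permutations

def tsp_brute_force(dist, start):
    # Try every tour that starts and ends at `start`; keep the cheapest.
    cities = [c for c in dist if c != start]
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):          # (n-1)! candidate tours
        tour = (start,) + perm + (start,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_tour, best_cost

# Edge weights of the P, Q, R, S example above (complete graph).
d = {
    "P": {"Q": 3, "R": 6, "S": 9},
    "Q": {"P": 3, "R": 8, "S": 4},
    "R": {"P": 6, "Q": 8, "S": 2},
    "S": {"P": 9, "Q": 4, "R": 2},
}
print(tsp_brute_force(d, "P"))   # (('P', 'Q', 'S', 'R', 'P'), 15)

Because the loop enumerates all (n-1)! permutations, the running time grows as O(n!),
matching the note above.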
Exercise Problem: [the exercise figure from the original is omitted]
2. Selection Sort
Selection sort improves on bubble sort by making only a single swap for each pass through
the list. To do this, a selection sort searches for the largest value as it makes a pass and,
after completing the pass, places it in its proper location. As with a bubble sort, after the first
pass the largest item is in the right place; after the second pass, the next largest is in place. This
procedure continues and requires n-1 passes to sort n items, since the final item must be in place
after the (n-1)th pass. Equivalently, each pass can select the smallest remaining value and place
it at the front of the unsorted part, which is the view taken by the pseudocode below.
ALGORITHM: SELECTION SORT (A)
1. n ← length [A]
2. for j ← 1 to n-1
3. smallest ← j
4. for i ← j + 1 to n
5. if A [i] < A [smallest]
6. then smallest ← i
7. exchange (A [j], A [smallest])
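As a sketch, the pseudocode above translates almost line for line into Python (the
function name and the test array are illustrative, assuming 0-based indexing):

def selection_sort(a):
    # In-place selection sort following the pseudocode above.
    n = len(a)
    for j in range(n - 1):                     # positions to fill
        smallest = j
        for i in range(j + 1, n):              # scan the unsorted suffix
            if a[i] < a[smallest]:
                smallest = i
        a[j], a[smallest] = a[smallest], a[j]  # one swap per pass
    return a

print(selection_sort([7, 4, 3, 6, 5]))         # [3, 4, 5, 6, 7]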
How Selection Sort works
1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we compare the minimum with the second element. If the second element turns out to
be smaller than the minimum, we record it as the new minimum, and then move on to compare
the third element.
3. Otherwise, if the second element is greater than the minimum (which is our first element),
we do nothing, move on to the third element, and compare it with the minimum.
We repeat this process until we reach the last element.
4. After the completion of each iteration, the minimum found is swapped to the start of
the unsorted list.
5. For each iteration, indexing starts from the first element of the unsorted list. We
repeat Steps 1 to 4 until the list is sorted and all the elements are correctly positioned.
Consider the following example of an unsorted array that we will sort with the help of the
Selection Sort algorithm.
A [] = (7, 4, 3, 6, 5)
1st Iteration:
Set minimum = 7 (the element at a0).
o Compare a0 and a1: as a0 > a1, set minimum = 4.
o Compare a1 and a2: as a1 > a2, set minimum = 3. No smaller element follows, so 3 is
swapped with a0, giving (3, 4, 7, 6, 5).
2nd Iteration:
Set minimum = 4 (the element at a1).
o Compare a1 with a2, a3, a4: no smaller element is found, so the array is unchanged:
(3, 4, 7, 6, 5).
3rd Iteration:
Set minimum = 7 (the element at a2).
o Compare a2 and a3: as a2 > a3, set minimum = 6; then compare with a4 and set
minimum = 5. The 5 is swapped with a2, giving (3, 4, 5, 6, 7).
4th Iteration:
Set minimum = 6 (the element at a3).
o Compare a3 and a4: as a3 < a4, the array is unchanged and is now sorted: (3, 4, 5, 6, 7).
Therefore, the selection sort algorithm has a time complexity of O(n²) and a space
complexity of O(1), since it needs only a single extra temp variable for swapping.
Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time complexity
of O(n²), even for an already sorted array.
o Average Case Complexity: The average-case time complexity for the selection sort algorithm
is O(n²); this is the case in which the elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs when
we sort an array that is in descending order into ascending order.
In the selection sort algorithm, the time complexity is O(n²) in all three cases. This is because, in each
step, we are required to scan the remaining elements to find the minimum so that it can be placed in
its correct position.
3. BUBBLE SORT (Sinking Sort)
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two adjacent items at a time and swapping them
if they are in the wrong order. The pass through the list is repeated until no swaps are needed,
which means the list is sorted.
Algorithm
Step 1 ➤ Initialization
set l ← n, p ← 1
Step 2 ➤ Loop
Repeat through Step 4 while (p ≤ n-1)
set E ← 0 ➤ Initializing the exchange counter.
Step 3 ➤ Comparison, loop.
Repeat for i ← 1, 2, …… l-1.
if (A [i] > A [i + 1]) then
set A [i] ↔ A [i + 1] ➤ Exchanging values.
Set E ← E + 1
Step 4 ➤ Finish, or reduce the size.
if (E = 0) then
exit
else
set l ← l - 1, p ← p + 1.
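A minimal Python sketch of the algorithm above follows; the early exit corresponds to
the E = 0 test in Step 4 (the function name and sample values are illustrative):

def bubble_sort(a):
    # In-place bubble sort with the early-exit check from Step 4.
    n = len(a)
    for p in range(n - 1):                    # at most n-1 passes
        exchanged = 0                         # E in the algorithm above
        for i in range(n - 1 - p):            # unsorted region shrinks each pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                exchanged += 1
        if exchanged == 0:                    # no swaps: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))           # [1, 2, 4, 5, 8]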
How Bubble Sort Works
1. The bubble sort starts with the very first index and makes it a bubble element. Then it compares
the bubble element, which is currently our first index element, with the next element. If the
bubble element is greater and the second element is smaller, then both of them will swap.
After swapping, the second element will become the bubble element. Now we will compare the
second element with the third as we did in the earlier step and swap them if required. The same
process is followed until the last element.
2. We will follow the same process for the rest of the iterations. After each iteration, we will
notice that the largest element in the unsorted part has reached the last unsorted index.
For each iteration, the bubble sort will compare only up to the last unsorted element.
Once all the elements are sorted in ascending order, the algorithm terminates.
Consider the following example of an unsorted array that we will sort with the help of the
Bubble Sort algorithm.
[The example array and the pass-by-pass diagrams were figures in the original and are
omitted. In each pass, adjacent pairs (a0, a1), (a1, a2), … are compared and swapped
whenever the left element is larger (e.g., where a3 > a4 they are swapped; where
a0 < a1 the array remains as it is). After each pass, the largest remaining element has
sunk to the end of the unsorted part.]
Therefore, the bubble sort algorithm has a time complexity of O(n²) and a space complexity
of O(1), since it needs only a single extra temp variable for swapping.
Time Complexities:
o Best Case Complexity: The bubble sort algorithm has a best-case time complexity of O(n) for
an already sorted array (one pass with no swaps).
o Average Case Complexity: The average-case time complexity for the bubble sort algorithm
is O(n²), which happens when the elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs when we
sort an array that is in descending order into ascending order.
Advantages of Bubble Sort
1. Easily understandable.
2. Does not necessitate any extra memory.
3. The code can be written easily for this algorithm.
4. Minimal space requirement compared to other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well on large unsorted lists; it requires many comparisons and ends up
taking a lot of time.
2. It is mainly meant for academic purposes, not for practical implementations.
3. It requires on the order of n² steps to sort n elements.
4. Sequential Search (Linear Search)
Linear search, also called sequential search, compares the search key with the elements of
the array one by one, starting from the first element, until a match is found or the whole
array has been traversed. [The introductory figures and example array for this section are
omitted; in the example, the key is 41.]
The value of the key, i.e., 41, is not matched with the first element of the array, so we move
to the next element and follow the same process until the matching element is found.
Once the element being searched for is found, the algorithm returns the index of the matched element.
Linear Search complexity
Now, let's see the time complexity of linear search in the best case, average case, and worst case. We
will also see the space complexity of linear search.
1. Time Complexity
o Best Case Complexity - In Linear search, best case occurs when the element we are finding is
at the first position of the array. The best-case time complexity of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are
looking for is present at the end of the array, or is not present in the array at all, so that
the entire array must be traversed. The worst-case time complexity of linear search is O(n).
The time complexity of linear search is O(n) because every element in the array is compared only
once.
2. Space Complexity
The space complexity of linear search is O(1), since it needs only a constant amount of
extra space regardless of the array size.
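A minimal Python sketch of linear search follows; the key 41 is taken from the example
above, while the array values are assumed for illustration:

def linear_search(arr, key):
    # Compare the key with each element in turn; return its index or -1.
    for i, value in enumerate(arr):
        if value == key:
            return i          # found: O(1) best case at the first position
    return -1                 # not found: O(n) worst case, whole array scanned

print(linear_search([10, 25, 41, 7], 41))     # 2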
5. Divide and Conquer
In this method, a problem is divided into sub problems that are normally smaller instances
of the same type as the original problem.
Divide and Conquer method involves 3 steps:
A. Divide : The problem is divided into a number of sub problems.
B. Conquer : If a sub problem is small, it is solved directly using a straightforward
method. If a sub problem is still large, it is divided further into sub problems of the
same type and each is solved recursively.
C. Combine : The solutions of the sub problems are combined to get the solution of
the larger problem.
5.1 Merge Sort :
This sorting method follows the technique of divide-and-conquer.
The technique works on the principle that a given set of inputs is split into distinct
subsets and the required method is applied to each subset separately, i.e., the sub
problems are solved first and then the solutions are combined into a solution of the
whole.
Most of the time, the sub problems generated are of the same type as the original
problem; in such situations the divide-and-conquer technique is re-applied to each sub
problem, normally through a recursive procedure.
Thus smaller and smaller sub problems are generated, until each sub problem can be
solved without further splitting.
The Steps involved in Merge Sort :
1. Divide : Divide the given array consisting of n elements into two parts of n/2
elements each.
2. Conquer : Sort the left part of the array and right part of the array recursively using
merge sort.
3. Combine/Merge : Merge the sorted left part and sorted right part to get a single
sorted array.
Example :
N=8
Elements : 5, 2, 4, 7, 1, 3, 2, 6
[Figure: the divide-conquer-combine tree. The array is repeatedly halved (divide), the
halves are sorted recursively (conquer), and the sorted halves are merged (combine),
producing 1, 2, 2, 3, 4, 5, 6, 7.]
Algorithm :
Algorithm MergeSort(a, low, high)
//Purpose : Sort the given array between the lower bound and the upper bound
//Inputs : a is an array consisting of unsorted elements, with low and high
as lower bound and upper bound
//Output : a is an array consisting of sorted elements
{
if (low >= high) return //Zero or one element: nothing to partition
mid <- (low+high)/2 //Divide
MergeSort(a, low, mid) //Conquer the left part
MergeSort(a, mid+1, high) //Conquer the right part
Merge(a, low, mid, high) //Combine
}
Algorithm Merge(a, low, mid, high)
//Merges the sorted runs a[low..mid] and a[mid+1..high] via auxiliary array c
{
i <- low ; j <- mid+1 ; k <- low
while (i <= mid and j <= high)
if ( a[i] < a[j] ) then
c[k] <- a[i] ; i <- i+1 ; k <- k+1
else
c[k] <- a[j] ; j <- j+1 ; k <- k+1
while (i <= mid)
c[k] <- a[i] ; i <- i+1 ; k <- k+1
while (j <= high)
c[k] <- a[j] ; j <- j+1 ; k <- k+1
copy c[low..high] back to a[low..high]
}
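A Python sketch of the algorithm above follows (names are illustrative; a temporary
list plays the role of the auxiliary array c):

def merge_sort(a, low, high):
    # Sort a[low..high] in place, following the algorithm above.
    if low >= high:                       # zero or one element
        return
    mid = (low + high) // 2               # Divide
    merge_sort(a, low, mid)               # Conquer the left part
    merge_sort(a, mid + 1, high)          # Conquer the right part
    merge(a, low, mid, high)              # Combine

def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high].
    c, i, j = [], low, mid + 1
    while i <= mid and j <= high:
        if a[i] < a[j]:
            c.append(a[i]); i += 1
        else:
            c.append(a[j]); j += 1
    c.extend(a[i:mid + 1])                # leftover left run
    c.extend(a[j:high + 1])               # leftover right run
    a[low:high + 1] = c                   # copy back

data = [5, 2, 4, 7, 1, 3, 2, 6]           # the example array above
merge_sort(data, 0, len(data) - 1)
print(data)                               # [1, 2, 2, 3, 4, 5, 6, 7]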
6. Quick Sort
QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and
partitions the given array around the pivot. There are many different versions of quickSort
that pick the pivot in different ways.
Always pick the first element as a pivot.
Always pick the last element as a pivot.
Pick a random element as a pivot.
Pick the median as the pivot.
Quick Sort by picking the first element as the Pivot.
The key function in quick sort is partition. The goal of partition is to put the pivot at its
correct position in the sorted array, with all smaller (or equal) elements to its left and all
greater elements to its right, and to do all this in linear time.
Partition Algorithm:
There can be many ways to do partition; the following pseudo-code adapts the approach of
the CLRS book to a first-element pivot, scanning from the right.
We keep track of the boundary of the elements greater than the pivot with an index k,
which starts at the last element.
While traversing from right to left, if we find an element greater than the pivot, we
swap it with arr[k] and decrement k.
Otherwise, we leave the current element where it is.
After the scan, the pivot is swapped with arr[k], which is its correct position.
Pseudo Code for recursive QuickSort function:
// low –> Starting index, high –> Ending index
quickSort(arr[], low, high) {
if (low < high) {
// p is the index where the pivot settles after partitioning
p = partition(arr, low, high)
quickSort(arr, low, p - 1) // sort the left half
quickSort(arr, p + 1, high) // sort the right half
}
}
Pseudo code for partition() function
/* This function takes the first element as pivot, places the pivot element at its correct position
in the sorted array, and places all smaller (smaller than or equal to pivot) elements to the left of
the pivot and all greater elements to the right of the pivot */
partition (arr[], low, high) {
// first element as pivot
pivot = arr[low]
k = high
for (i = high; i > low; i--) {
// move elements greater than the pivot to the right end
if (arr[i] > pivot) {
swap(arr[i], arr[k])
k = k - 1
}
}
swap(arr[low], arr[k]) // place the pivot at its correct position
return k // k is the point of partition
}
Example (the step-by-step figures from the original are omitted):
First partition: the pivot of the full array settles at index 5, splitting the array into a
left segment and a right segment.
Second partition: partition the segment [6, 8], i.e., the sub array {10, 9, 15}. Here low = 6,
high = 8, pivot = 10 and k = 8. The correct position of 10 becomes index 7, and the left and
right parts each have only one element.
Third partition: partition the segment {6, 5, 7}. Here low = 2, high = 4, pivot = 6 and k = 4.
Applying the same process, the correct position of 6 is index 3, and the left and the right
parts again have only one element each.
The total array becomes sorted in this way. (The recursion-tree image from the original is
omitted.)
To recap the partition procedure: if arr[i] is greater than the pivot, arr[i] and arr[k] are
swapped and k is decremented. After the iteration finishes, the pivot is swapped with arr[k],
and k is returned as the point of partition.
quickSort is then called recursively for the left half and the right half around the partition index.
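A Python sketch of this quicksort variant (first element as pivot, scanning from the
right) follows. The sample array is assumed for illustration, since the original example
array appeared only in the omitted figures:

def partition(arr, low, high):
    # First element as pivot; elements greater than it collect at the right end.
    pivot = arr[low]
    k = high
    for i in range(high, low, -1):        # i = high, high-1, ..., low+1
        if arr[i] > pivot:
            arr[i], arr[k] = arr[k], arr[i]
            k -= 1
    arr[low], arr[k] = arr[k], arr[low]   # pivot lands at index k
    return k

def quick_sort(arr, low, high):
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)       # left half
        quick_sort(arr, p + 1, high)      # right half

data = [8, 6, 5, 7, 4, 3, 10, 9, 15]
quick_sort(data, 0, len(data) - 1)
print(data)                               # [3, 4, 5, 6, 7, 8, 9, 10, 15]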
Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case.
We will also see the space complexity of quicksort.
1. Time Complexity
o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the
middle element or near to the middle element. The best-case time complexity of quicksort
is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of quicksort
is O(n log n).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is always
the greatest or the smallest element. For example, if the pivot element is always the last element
of the array, the worst case occurs when the given array is already sorted in ascending or
descending order. The worst-case time complexity of quicksort is O(n²).
Though the worst-case complexity of quicksort is higher than that of other sorting algorithms such
as merge sort and heap sort, quicksort is faster in practice. The worst case rarely occurs, because
the choice of pivot can be varied; it can largely be avoided by choosing a good pivot element.
2. Space Complexity
The space complexity of quicksort is O(log n) on average for the recursion stack, and O(n)
in the worst case. Quicksort is not a stable sorting algorithm (Stable: NO).
7. Bucket Sort
Bucket sort scatters the elements into a number of buckets, sorts each bucket individually
with another sorting algorithm, and then gathers the buckets in order.
Algorithm
Bucket Sort(A[])
1. n = length[A]
2. Let B[0....n-1] be a new array
3. for i = 0 to n-1
4. make B[i] an empty list
5. for i = 1 to n
6. do insert A[i] into list B[⌊n·A[i]⌋] // assumes inputs in [0, 1)
7. for i = 0 to n-1
8. do sort list B[i] with insertion sort
9. Concatenate lists B[0], B[1],........, B[n-1] together in order
End
Scatter-gather approach
We can understand the bucket sort algorithm via the scatter-gather approach. Here, the given
elements are first scattered into buckets. After scattering, the elements in each bucket are sorted
using a stable sorting algorithm. At last, the sorted buckets are gathered (concatenated) in order.
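A Python sketch of the scatter-gather idea follows. As in the pseudocode above, it
assumes the n inputs are drawn from [0, 1), so that the bucket index ⌊n·x⌋ is valid;
Python's built-in stable sort stands in for insertion sort:

def bucket_sort(a):
    # Scatter into n buckets, sort each, then gather (inputs in [0, 1)).
    n = len(a)
    buckets = [[] for _ in range(n)]      # B[0..n-1], all empty
    for x in a:
        buckets[int(n * x)].append(x)     # scatter: B[floor(n*x)]
    result = []
    for b in buckets:
        result.extend(sorted(b))          # sort each bucket (stable)
    return result                         # gather in order

print(bucket_sort([0.42, 0.32, 0.33, 0.52, 0.37, 0.47, 0.51]))
# [0.32, 0.33, 0.37, 0.42, 0.47, 0.51, 0.52]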
Let's take an unsorted array to understand the process of bucket sort; it is easier to
understand bucket sort via an example.
Example: [the example array and its scatter/sort/gather figures from the original are omitted.]
Now, let's see the time complexity of bucket sort in best case, average case, and in worst case. We
will also see the space complexity of the bucket sort.
1. Time Complexity
o Best Case Complexity - It occurs when no sorting inside the buckets is required, i.e., the
elements are already sorted within them. In bucket sort, the best case occurs when the elements
are uniformly distributed across the buckets. If we use insertion sort to sort the bucket elements,
the overall complexity will be linear, i.e., O(n + k), where O(n) is for scattering the elements
into the buckets and O(k) is for sorting the bucket elements, since insertion sort is linear in its
best case. The best-case time complexity of bucket sort is O(n + k).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. Bucket sort runs in linear time as long as the
elements are uniformly distributed across the buckets. The average-case time complexity of
bucket sort is O(n + k).
o Worst Case Complexity - In bucket sort, the worst case occurs when the elements are within
a close range in the array, so they all have to be placed in the same bucket; some buckets then
contain far more elements than others. The complexity also worsens when the elements are in
reverse order. The worst-case time complexity of bucket sort is O(n²).
2. Space Complexity
The space complexity of bucket sort is O(n + k) for the buckets. Bucket sort is stable
(Stable: YES), provided the algorithm used inside the buckets is stable.
8. Radix Sort :
Radix sort is a linear sorting algorithm used for integers. In radix sort, sorting is
performed digit by digit, starting from the least significant digit and moving to the most
significant digit.
The process of radix sort works similarly to sorting students' names into alphabetical
order. In this case, there are 26 groups (radixes) because there are 26 letters in the English
alphabet. In the first pass, the names are grouped according to the ascending order of the
first letter of each name. In the second pass, they are grouped according to the ascending
order of the second letter, and the process continues until the list is sorted.
Now, let's see the algorithm of Radix sort.
Algorithm
radixSort(arr)
max = largest element in the given array
d = number of digits in the largest element (max)
create 10 buckets, one for each digit 0 - 9
for i ← 1 to d
sort the array elements using counting sort (or any stable sort) according to the digits at
the ith place
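A Python sketch of this digit-by-digit process follows, using ten bucket lists per pass
as a stand-in for counting sort (the effect is the same stable per-digit pass); the sample
values are assumed for illustration, keeping 736 as the largest element, as in the example
below:

def radix_sort(arr):
    # LSD radix sort for non-negative integers: one stable pass per digit.
    if not arr:
        return arr
    max_val = max(arr)                    # largest element determines d
    exp = 1                               # 1's, 10's, 100's place, ...
    while max_val // exp > 0:
        buckets = [[] for _ in range(10)] # one bucket per digit 0-9
        for x in arr:                     # stable scatter by current digit
            buckets[(x // exp) % 10].append(x)
        arr = [x for b in buckets for x in b]  # gather in digit order
        exp *= 10
    return arr

print(radix_sort([181, 289, 390, 121, 145, 736, 514, 888]))
# [121, 145, 181, 289, 390, 514, 736, 888]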
Working of Radix sort Algorithm
Now, let's see the working of Radix sort Algorithm.
The steps used in the sorting of radix sort are listed as follows -
o First, we have to find the largest element (say max) in the given array, and let 'x' be the
number of digits in max. We calculate 'x' because we have to go through all the significant
places of the elements.
o After that, go through each significant place one by one, using a stable sorting algorithm
to sort the digits at that significant place.
Now let's see the working of radix sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using radix sort. [The example array
appeared as a figure in the original and is omitted; its largest element is 736.]
In the given array, the largest element is 736, which has 3 digits. So, the loop will run up to
three times (i.e., up to the hundreds place): three passes are required to sort the array.
Now, first sort the elements on the basis of the digits at the unit place. Here, we are using the
counting sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at the 1's place. (Figure omitted.)
Pass 2:
In this pass, the list is sorted on the basis of the next significant digits, i.e., the digits at
the 10's place. (Figure omitted.)
Pass 3:
In this pass, the list is sorted on the basis of the next significant digits, i.e., the digits at
the 100's place. (Figure omitted.)
Now, the array is sorted in ascending order.
Radix sort complexity
Now, let's see the time complexity of Radix sort in best case, average case, and worst case. We will
also see the space complexity of Radix sort.
1. Time Complexity
o Best Case Complexity - It occurs when no rearranging is required, i.e., the array is already
sorted. The best-case time complexity of radix sort is Ω(n + k).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither
properly ascending nor properly descending. The average-case time complexity of radix sort
is θ(nk).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse
order: for example, you have to sort the array elements in ascending order, but the elements
are in descending order. The worst-case time complexity of radix sort is O(nk).
Radix sort is a non-comparative sorting algorithm, and this is what lets it beat comparison-based
sorting algorithms: its running time is linear in n for a fixed number of digits, which is better
than the O(n log n) complexity of comparison-based algorithms.
2. Space Complexity
The space complexity of radix sort is O(n + k), for the counting structures used in each pass.
Radix sort is a stable sorting algorithm (Stable: YES).