
DEPARTMENT OF CS & IT

ANALYSIS AND DESIGN OF


ALGORITHMS
(21BCA5C01)

Module 2: Brute Force and Divide and Conquer Method

Prepared by: Dr. A. Kannagi
UNIT 2: Brute Force and Divide and Conquer Method

1. The method - Analysis of exhaustive search - Traveling salesman problem

2. Selection Sort

3. Bubble Sort

4. Sequential Search

5. Merge sort

6. Quick sort

7. Bucket sort

8. Radix sort.

1. Introduction
What is Brute Force Method?
The straightforward method of solving a given problem directly from the problem's
statement and definitions is called the Brute Force method.
Ex: Selection Sort, Bubble Sort, Linear Search, etc.
Pros and Cons of Brute Force Method :
Pros:
 The brute force approach is a guaranteed way to find the correct solution by listing
all the possible candidate solutions for the problem.
 It is a generic method and not limited to any specific domain of problems.
 The brute force method is ideal for solving small and simpler problems.
 It is known for its simplicity and can serve as a comparison benchmark.
Cons:
 The brute force approach is inefficient. For realistic problem sizes, the order of
growth can reach O(N!) or worse.
 This method relies on the raw power of a computer system rather than on good
algorithm design.
 Brute force algorithms are slow.
 Brute force algorithms are not constructive or creative compared to algorithms that
are constructed using some other design paradigms.
Exhaustive Search:
A search problem involves a set of possibilities, and we are looking for one or more
of the possibilities that satisfy some property.
Example: Travelling Salesperson Problem
Method:
1. Generate a list of all potential solutions to the problem.
2. Evaluate the potential solutions one by one, disqualifying infeasible ones and, for an
optimization problem, keeping track of the best one found so far.

Travelling Salesman Problem :


 The Travelling Salesman Problem (TSP) is an optimization problem used to find the
shortest path to travel through the given number of cities.
 The travelling salesman problem states that, given a number of cities N and the
distances between the cities, the traveller has to visit every given city exactly once,
return to the city from which he started, and minimise the total length of the path.
 The TSP can be formulated as follows: choose a pathway that is optimal by a given
criterion, where the criterion is usually the minimal total distance between towns or
minimal travel expenses. The travelling salesman should visit a certain number of
towns exactly once each and return to the place of departure.
 TSP can be solved as shown below:
1. Get all the routes from one city to another city by taking various permutations.
2. Compute the route length for each permutation and select the shortest among
them.
 TSP can be modelled as an undirected weighted graph, such that cities are the
graph's vertices, paths are the graph's edges, and a path's distance is the edge's
weight. It is a minimization problem: starting and finishing at a specified vertex,
visit each other vertex exactly once. Often, the model is a complete graph
(i.e., each pair of vertices is connected by an edge). If no path exists between two
cities, adding an arbitrarily long edge will complete the graph without affecting the
optimal tour.
Algorithm
 The travelling salesman problem takes a graph G(V, E) as input and declares another graph
(say G') as output, which records the path the salesman takes from one node to another.
 The algorithm begins by sorting all the edges in the input graph G from the least distance to
the largest distance.
 The first edge selected is the edge with the least distance, with one of its two vertices (say A and
B) being the origin node (say A).
 Then, among the adjacent edges of the node other than the origin node (B), find the least-cost
edge and add it to the output graph.
 Continue the process with further nodes, making sure there are no cycles in the output graph
and that the path reaches back to the origin node A.
 However, if the origin is mentioned in the given problem, then the solution must always start
from that node. Let us look at an example problem to understand this better.
Example:

[Figure: a complete graph on four cities P, Q, R, S with edge weights
PQ = 3, PR = 6, PS = 9, QR = 8, QS = 4, RS = 2.]
Here P, Q, R, S are cities.
 The distances between the cities are represented as the numbers on the edges.

 Let us assume the salesperson starts from city P. The various routes by which he
can visit every city exactly once and return to the start city P, along with their
costs, are shown below:

P Q R S P (COST = 22)
P Q S R P (COST = 15)
P R Q S P (COST = 27)
P R S Q P (COST = 15)
P S Q R P (COST = 27)
P S R Q P (COST = 22)

Finally, to get the maximum profit we choose the routes with minimum cost:
P Q S R P (COST = 15)
P R S Q P (COST = 15)

Note :
In general, for n cities the number of routes is (n-1)!.
Hence the time complexity is given by
F(n) = O(n!)
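
To make the exhaustive search concrete, here is a small Python sketch (not part of the original notes). It assumes the edge weights read from the figure above (PQ = 3, PR = 6, PS = 9, QR = 8, QS = 4, RS = 2) and simply tries all (n-1)! routes that start and end at P:

from itertools import permutations

# Distance table for the example graph on cities P, Q, R, S.
dist = {
    ('P', 'Q'): 3, ('P', 'R'): 6, ('P', 'S'): 9,
    ('Q', 'R'): 8, ('Q', 'S'): 4, ('R', 'S'): 2,
}

def d(a, b):
    # The graph is undirected, so look the edge up in either order.
    return dist.get((a, b)) or dist.get((b, a))

def tsp_brute_force(cities, start):
    others = [c for c in cities if c != start]
    best_route, best_cost = None, float('inf')
    # Generate every permutation of the remaining cities: (n-1)! routes.
    for perm in permutations(others):
        route = (start,) + perm + (start,)
        cost = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

print(tsp_brute_force(['P', 'Q', 'R', 'S'], 'P'))
# -> (('P', 'Q', 'S', 'R', 'P'), 15), one of the two minimum-cost tours listed above.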
Exercise Problem :

2. Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass through
the list. To do this, a selection sort searches for the smallest (or, in some variants, the largest) value as
it makes a pass and, after finishing the pass, places it in its proper location. As with a bubble sort, after
the first pass the smallest item is in the right place; after the second pass, the next smallest is in place.
This procedure continues and requires n-1 passes to sort n items, since the final item must already be in
place after the (n-1)th pass.
ALGORITHM: SELECTION SORT (A)
1. n ← length [A]
2. for j ← 1 to n-1
3.     smallest ← j
4.     for i ← j + 1 to n
5.         if A[i] < A[smallest]
6.             then smallest ← i
7.     exchange (A[j], A[smallest])
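A runnable Python version of this pseudocode may help (a minimal sketch, not part of the original notes); it is applied to the array (7, 4, 3, 6, 5) that is traced step by step below:

def selection_sort(a):
    n = len(a)
    for j in range(n - 1):
        smallest = j
        # Find the index of the smallest element in the unsorted part a[j..n-1].
        for i in range(j + 1, n):
            if a[i] < a[smallest]:
                smallest = i
        # Move it to the front of the unsorted part with a single swap.
        a[j], a[smallest] = a[smallest], a[j]
    return a

print(selection_sort([7, 4, 3, 6, 5]))   # [3, 4, 5, 6, 7]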
How Selection Sort works
1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we compare the minimum with the second element. If the second element turns out to
be smaller than the minimum, we make it the new minimum and move on to the third element.
3. Else, if the second element is greater than the minimum, which is our first element, we
do nothing, move on to the third element, and compare it with the minimum.
We repeat this process until we reach the last element.
4. After the completion of each iteration, we swap the minimum to the start of
the unsorted list.
5. For each iteration, we start the indexing from the first element of the unsorted list. We
repeat steps 1 to 4 until the list gets sorted and all the elements are correctly positioned.
Consider the following example of an unsorted array that we will sort with the help of the
Selection Sort algorithm.
A[] = (7, 4, 3, 6, 5)

1st Iteration:
Set minimum = 7
o Compare a0 and a1
As a0 > a1, set minimum = 4.
o Compare a1 and a2
As a1 > a2, set minimum = 3.
o Compare a2 and a3
As a2 < a3, minimum remains 3.
o Compare a2 and a4
As a2 < a4, minimum remains 3.
Since 3 is the smallest element, we swap a0 and a2.

2nd Iteration:
Set minimum = 4
o Compare a1 and a2
As a1 < a2, minimum remains 4.
o Compare a1 and a3
As a1 < a3, minimum remains 4.
o Compare a1 and a4
Again a1 < a4, so minimum remains 4.
Since the minimum is already at its correct position, there is no swapping.

3rd Iteration:
Set minimum = 7
o Compare a2 and a3
As a2 > a3, set minimum = 6.
o Compare a3 and a4
As a3 > a4, set minimum = 5.
Since 5 is the smallest element among the leftover unsorted elements, we swap 7 and 5.

4th Iteration:
Set minimum = 6
o Compare a3 and a4
As a3 < a4, minimum remains 6.
Since the minimum is already at its correct position, there is no swapping, and the array is now
sorted: (3, 4, 5, 6, 7).

Complexity Analysis of Selection Sort


Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the first pass it will do n-1 comparisons; in the second pass,
n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2, which is O(n2).
Therefore, the selection sort algorithm has a time complexity of O(n2) and a space complexity of O(1),
since it needs only a constant amount of extra memory (a temp variable for swapping).
Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time complexity
of O(n2) for the already sorted array.
o Average Case Complexity: The average-case time complexity for the selection sort algorithm
is O(n2), which occurs when the existing elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs when
we sort an array that is in descending order into ascending order.

In the selection sort algorithm, the time complexity is O(n2) in all three cases. This is because, in each
step, we are required to find the minimum element so that it can be placed in its correct position, and
finding it requires scanning the entire unsorted part of the array.
3. BUBBLE SORT (Sinking Sort)
Bubble Sort, also known as Exchange Sort, is a simple sorting algorithm. It works by repeatedly
stepping through the list to be sorted, comparing two adjacent items at a time and swapping them if
they are in the wrong order. The pass through the list is repeated until no swaps are needed, which
means the list is sorted.
Algorithm
Step 1 ➤ Initialization
set l ← n, p ← 1
Step 2 ➤ Loop
Repeat through step 4 while (p ≤ n-1)
set E ← 0 ➤ initializing the exchange counter
Step 3 ➤ Comparison loop
Repeat for i ← 1 to l-1
if (A[i] > A[i + 1]) then
set A[i] ↔ A[i + 1] ➤ exchanging values
set E ← E + 1
Step 4 ➤ Finish, or reduce the size
if (E = 0) then
exit
else
set l ← l - 1, p ← p + 1
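A runnable Python version of the algorithm above (a sketch, with the pseudocode's l and E renamed for readability):

def bubble_sort(a):
    n = len(a)
    length = n                   # l: size of the still-unsorted part
    for p in range(1, n):        # at most n-1 passes
        exchanges = 0            # E: number of swaps in this pass
        for i in range(length - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]   # exchange out-of-order neighbours
                exchanges += 1
        if exchanges == 0:       # no swaps: the list is already sorted
            break
        length -= 1              # the largest element has sunk to the end
    return a

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]

The sample array is only illustrative. The early exit when a pass makes no exchanges is what gives bubble sort the O(n) best case noted in the complexity analysis below.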
How Bubble Sort Works
1. The bubble sort starts with the very first index and makes it a bubble element. Then it compares
the bubble element, which is currently our first index element, with the next element. If the
bubble element is greater and the second element is smaller, then both of them will swap.
After swapping, the second element will become the bubble element. Now we will compare the
second element with the third as we did in the earlier step and swap them if required. The same
process is followed until the last element.
2. We follow the same process for the rest of the iterations. After each iteration, we
notice that the largest element present in the unsorted part has reached the last index.
For each iteration, the bubble sort compares only up to the last unsorted element.
Once all the elements are sorted in ascending order, the algorithm terminates.
Consider the following example of an unsorted array that we will sort with the help of the
Bubble Sort algorithm.
Initially, the array is unsorted.

Pass 1:
o Compare a0 and a1
As a0 < a1, the array remains as it is.
o Compare a1 and a2
Now a1 > a2, so we swap them.
o Compare a2 and a3
As a2 < a3, the array remains as it is.
o Compare a3 and a4
Here a3 > a4, so we swap them again.

Pass 2:
o Compare a0 and a1
As a0 < a1, the array remains as it is.
o Compare a1 and a2
Here a1 < a2, so the array remains as it is.
o Compare a2 and a3
In this case, a2 > a3, so they get swapped.

Pass 3:
o Compare a0 and a1
As a0 < a1, the array remains as it is.
o Compare a1 and a2
Now a1 > a2, so they get swapped.

Pass 4:
o Compare a0 and a1
Here a0 > a1, so we swap them.

Hence the array is sorted, as no more swapping is required.


Complexity Analysis of Bubble Sort
Input: Given n input elements.
Output: Number of steps incurred to sort a list.
Logic: If we are given n elements, then in the first pass it will do n-1 comparisons; in the second pass,
n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2, which is O(n2).
Therefore, the bubble sort algorithm has a time complexity of O(n2) and a space complexity of O(1),
since it needs only a constant amount of extra memory (a temp variable for swapping).
Time Complexities:
o Best Case Complexity: The bubble sort algorithm has a best-case time complexity of O(n) for
the already sorted array.
o Average Case Complexity: The average-case time complexity for the bubble sort algorithm
is O(n2), which happens when the elements are in jumbled order, i.e., neither in ascending
order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs when we
sort an array that is in descending order into ascending order.
Advantages of Bubble Sort
1. Easily understandable.
2. Does not necessitate any extra memory.
3. The code can be written easily for this algorithm.
4. Minimal space requirement compared with other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well for large unsorted lists; it consumes more resources and ends up
taking a lot of time.
2. It is mostly meant for academic purposes, not for practical implementations.
3. It involves on the order of n2 steps to sort a list.

4. Sequential Search (Linear Search) :


 The simplest of all forms of searching is the linear search.
 This search is applicable to a table organized either as an array or as a linked list.
 Let us assume that A is an array of N elements, from A[0] through A[N - 1].
 Let us also assume that key is the search element. The search starts by sequentially
comparing the elements of the array, one after the other from the beginning to the end,
with the element to be searched.
 If the element is found, its position is identified; otherwise an appropriate message is
displayed.
Algorithm :

Algorithm : LinearSearch(key, a[], n)

//Purpose : This algorithm searches for the key element
//Inputs : n - number of elements present in the array
//         a - the array in which searching takes place
//         key - the item to be searched
//Output : The function returns the position of the key if found;
//         otherwise, the function returns -1, indicating the search is unsuccessful.

for i = 0 to n-1 do
    if (key = a[i])
        return i;
    end if
end for
return -1;
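A runnable Python version of this algorithm (a sketch; the sample array is illustrative, since the original example figure is not reproduced here):

def linear_search(key, a):
    # Compare key with each element from the beginning to the end.
    for i in range(len(a)):
        if a[i] == key:
            return i         # position of the key
    return -1                # search unsuccessful

arr = [70, 40, 30, 11, 57, 41, 25, 14, 52]
print(linear_search(41, arr))    # 5
print(linear_search(100, arr))   # -1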
Working of Linear search
Now, let's see the working of the linear search Algorithm.
To understand the working of the linear search algorithm, let's take an unsorted array; an example
makes the working easy to follow.
Let the elements of the array be as in the source figure (not reproduced here), and let the element
to be searched be key = 41.

Now, start from the first element and compare key with each element of the array.

The value of key, i.e., 41, does not match the first element of the array, so we move to the next
element and follow the same process until the matching element is found.

Once the element to be searched is found, the algorithm returns the index of the matched element.
Linear Search complexity
Now, let's see the time complexity of linear search in the best case, average case, and worst case. We
will also see the space complexity of linear search.
1. Time Complexity

Case Time Complexity

Best Case O(1)

Average Case O(n)

Worst Case O(n)

o Best Case Complexity - In Linear search, best case occurs when the element we are finding is
at the first position of the array. The best-case time complexity of linear search is O(1).
o Average Case Complexity - The average case time complexity of linear search is O(n).
o Worst Case Complexity - In linear search, the worst case occurs when the element we are
looking for is present at the end of the array, or is not present in the given array at all, so that
we have to traverse the entire array. The worst-case time complexity of linear search is O(n).
The time complexity of linear search is O(n) because every element in the array is compared only
once.
2. Space Complexity

Space Complexity O(1)

o The space complexity of linear search is O(1).

5. Sorting, Sets and Selection:


Sorting :
The process of rearranging the given elements in ascending order or descending order is
called sorting.
Divide and Conquer Method :
 It is a top-down approach for designing algorithms, which consists of dividing the
problem into smaller sub problems, hoping that the solutions of the sub problems are
easier to find.
 The solutions of all smaller problems are then combined to get a solution for the
original problem.
Divide and Conquer method involves 3 steps:
A. Divide : The problem is divided into a number of sub problems.
B. Conquer : If the sub problems are small in size, they are solved using a
straightforward method. If the sub problem size is large, they are divided into a
number of sub problems of the same type and size, and each sub problem is solved
recursively.
C. Combine : The solutions of the sub problems are combined to get the solution for
the larger problem.
5.1 Merge Sort :
 This sorting method follows the technique of divide-and-conquer.
 The technique works on the principle that a given set of inputs is split into distinct
subsets and the required method is applied to each subset separately, i.e., the sub
problems are first solved and then the solutions are combined into a solution of the
whole.
 Most of the time, the sub problems generated are of the same type as the
original problem.
 In such situations, re-application of the divide-and-conquer technique may be
necessary on each sub problem.
 This is normally achieved through a recursive procedure.
 Thus, smaller and smaller sub problems are generated until it is possible to
solve each sub problem without further splitting.
The Steps involved in Merge Sort :
1. Divide : Divide the given array consisting of n elements into two parts of n/2
elements each.
2. Conquer : Sort the left part of the array and right part of the array recursively using
merge sort.
3. Combine/Merge : Merge the sorted left part and sorted right part to get a single
sorted array.

Example :
N = 8
Elements : 5, 2, 4, 7, 1, 3, 2, 6
[Figure: the array is repeatedly halved (Divide), each half is sorted recursively (Conquer),
and the sorted halves are merged (Combine), giving 1, 2, 2, 3, 4, 5, 6, 7.]

Algorithm :

Algorithm MergeSort(a, low, high)
//Purpose : Sort the given array between the lower bound and the upper bound
//Inputs : a is an array consisting of unsorted elements, with low and high as
//         lower bound and upper bound
//Output : a is an array consisting of sorted elements

if (low >= high) return          // zero or one element: nothing to partition
mid <- (low + high)/2            // divide the array into two parts
MergeSort(a, low, mid)           // sort the left part of the array
MergeSort(a, mid+1, high)        // sort the right part of the array
Merge(a, low, mid, high)         // combine the two sorted parts

Algorithm Merge(a, low, mid, high)
i <- low; j <- mid+1; k <- low
while (i <= mid and j <= high)
    if (a[i] < a[j]) then
        c[k] <- a[i]
        i <- i+1; k <- k+1
    else
        c[k] <- a[j]
        j <- j+1; k <- k+1
    end if
end while
while (i <= mid)                 // copy the remaining elements of the left part
    c[k] <- a[i]; i <- i+1; k <- k+1
end while
while (j <= high)                // copy the remaining elements of the right part
    c[k] <- a[j]; j <- j+1; k <- k+1
end while
copy c[low..high] back into a[low..high]
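A runnable Python version of the pseudocode above (a minimal sketch, not part of the original notes), applied to the example array:

def merge(a, low, mid, high):
    c = []                           # auxiliary array for the merged result
    i, j = low, mid + 1
    while i <= mid and j <= high:    # repeatedly take the smaller head element
        if a[i] < a[j]:
            c.append(a[i]); i += 1
        else:
            c.append(a[j]); j += 1
    c.extend(a[i:mid + 1])           # copy any leftovers from the left part
    c.extend(a[j:high + 1])          # copy any leftovers from the right part
    a[low:high + 1] = c              # copy the merged run back into a

def merge_sort(a, low, high):
    if low >= high:                  # zero or one element: nothing to sort
        return
    mid = (low + high) // 2          # divide
    merge_sort(a, low, mid)          # conquer the left part
    merge_sort(a, mid + 1, high)     # conquer the right part
    merge(a, low, mid, high)         # combine the two sorted parts

arr = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(arr, 0, len(arr) - 1)
print(arr)   # [1, 2, 2, 3, 4, 5, 6, 7]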
6. Quick Sort
QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and
partitions the given array around the pivot. There are many different versions of quickSort
that pick the pivot in different ways:
 Always pick the first element as a pivot.
 Always pick the last element as a pivot.
 Pick a random element as a pivot.
 Pick the median as the pivot.
Quick Sort by picking the first element as the Pivot.
The key function in quick sort is partition. The goal of partition is to put the pivot at the
position it would occupy in the sorted array, with the smaller (or equal) elements to its left
and the larger elements to its right, and to do all this in linear time.
Partition Algorithm:
There can be many ways to do partition; the following pseudo-code takes the first element as
the pivot and scans the array from the right.
 We start from the rightmost element and keep track of the boundary of the elements
greater than the pivot with an index k.
 While traversing, if we find an element greater than the pivot, we swap the current
element with arr[k] and decrement k.
 Otherwise, we leave the current element in place.
Pseudo code for the recursive QuickSort function:
// low -> starting index, high -> ending index
quickSort(arr[], low, high) {
    if (low < high) {
        // pi is the partitioning index; arr[pi] is now at the right place
        pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);    // before pi
        quickSort(arr, pi + 1, high);   // after pi
    }
}

Pseudo code for the partition() function:
/* This function takes the first element as the pivot, places the pivot element at its correct
position in the sorted array, places all smaller (smaller than or equal to pivot) elements to the
left of the pivot, and places all greater elements to the right of the pivot */
partition (arr[], low, high) {
    pivot = arr[low]    // first element as pivot
    k = high
    for (i = high; i > low; i--)
    {
        if (arr[i] > pivot)
        {
            swap arr[i] and arr[k];
            k--;
        }
    }
    swap arr[k] and arr[low]
    return k            // the pivot's final index, consistent with the illustration below
}
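A runnable Python version of this scheme (a sketch, not part of the original notes). Note that partition returns k, the pivot's final index, so the two recursive calls exclude exactly the pivot:

def partition(arr, low, high):
    pivot = arr[low]                  # first element as pivot
    k = high                          # boundary of the greater-than-pivot region
    for i in range(high, low, -1):    # i = high down to low+1
        if arr[i] > pivot:
            arr[i], arr[k] = arr[k], arr[i]
            k -= 1
    arr[low], arr[k] = arr[k], arr[low]   # place the pivot at its final position
    return k

def quick_sort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quick_sort(arr, low, pi - 1)    # before pi
        quick_sort(arr, pi + 1, high)   # after pi

arr = [7, 6, 10, 5, 9, 2, 1, 15, 7]     # the array used in the illustration below
quick_sort(arr, 0, len(arr) - 1)
print(arr)   # [1, 2, 5, 6, 7, 7, 9, 10, 15]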
Illustration of partition() :
Consider: arr[] = { 7, 6, 10, 5, 9, 2, 1, 15, 7 }
First Partition: low = 0, high = 8, pivot = arr[low] = 7
Initialize the index of the rightmost element: k = high = 8.
 Traverse from i = high down to low + 1:
 If arr[i] is greater than the pivot:
 Swap arr[i] and arr[k].
 Decrement k.
 At the end, swap arr[low] and arr[k].

Now the correct position of pivot is index 5

First partition

Second Partition: low = 0, high = 4, pivot = arr[low] = 2


Similarly initialize k = high = 4;
The correct position of 2 becomes index 1; the left part has only one element and the right
part has {6, 5, 7}.

Partition of the left half

Next, partition happens on the segment [6, 8], i.e., the subarray {10, 9, 15}.
Here low = 6, high = 8, pivot = 10 and k = 8.

The correct position of 10 becomes index 7, and the left and right parts each have only one
element.

Partition of the right half

Third partition: Here we partition the segment {6, 5, 7}, with low = 2, high = 4, pivot = 6 and
k = 4.
Applying the same process, the correct position of 6 becomes index 3, and the left and right
parts each have only one element.

Third partition

The total array becomes sorted in this way. Check the image below for the recursion tree of the partitions.

Recursion tree for partitions

Follow the steps below to implement the approach.

 Use a recursive function (say quickSort) to implement the sort.
 Call the partition function to partition the array, and inside the partition function do
the following:
 Take the first element as the pivot and initialize an iterator k = high.
 Iterate in a for loop from i = high down to low+1:
 If arr[i] is greater than the pivot, swap arr[i] and arr[k] and
decrement k.
 After the iteration is over, swap the pivot with arr[k].
 Return k as the point of partition.
 Now recursively call quickSort for the left half and the right half of the partition index.

Quicksort complexity
Now, let's see the time complexity of quicksort in best case, average case, and in worst case.
We will also see the space complexity of quicksort.
1. Time Complexity

Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n2)

o Best Case Complexity - In Quicksort, the best-case occurs when the pivot element is the
middle element or near to the middle element. The best-case time complexity of quicksort
is O(n*logn).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of quicksort
is O(n*logn).
o Worst Case Complexity - In quick sort, worst case occurs when the pivot element is either
greatest or smallest element. Suppose, if the pivot element is always the last element of the
array, the worst case would occur when the given array is sorted already in ascending or
descending order. The worst-case time complexity of quicksort is O(n2).
Though the worst-case complexity of quicksort is more than other sorting algorithms such as Merge
sort and Heap sort, still it is faster in practice. Worst case in quick sort rarely occurs because by
changing the choice of pivot, it can be implemented in different ways. Worst case in quicksort can be
avoided by choosing the right pivot element.
2. Space Complexity

Space Complexity O(log n)

Stable NO

o The space complexity of quicksort is O(log n) on average, for the recursion stack; in the
worst case it can be O(n).


7. Bucket sort
The data items in bucket sort are distributed into a set of buckets. Bucket sort is a sorting
algorithm that separates the elements into multiple groups called buckets. Elements in bucket sort
are first uniformly divided into buckets, and then they are sorted by any other sorting
algorithm. After that, the elements are gathered in sorted order.
The basic procedure of performing the bucket sort is given as follows -
o First, partition the range into a fixed number of buckets.
o Then, toss every element into its appropriate bucket.
o After that, sort each bucket individually by applying a sorting algorithm.
o And at last, concatenate all the sorted buckets.
The advantages of bucket sort are -
o Bucket sort reduces the no. of comparisons.
o It is asymptotically fast because of the uniform distribution of elements.
The limitations of bucket sort are -
o It may or may not be a stable sorting algorithm.
o It is not useful if we have a large array because it increases the cost.
o It is not an in-place sorting algorithm, because some extra space is required to sort the buckets.
The best and average-case complexity of bucket sort is O(n + k), and the worst-case complexity of
bucket sort is O(n2), where n is the number of items.
Bucket sort is commonly used -
o With floating-point values.
o When input is distributed uniformly over a range.
The basic idea to perform the bucket sort is given as follows -
bucketSort(a[], n)
1. Create n empty buckets
2. Do for each array element a[i]
   2.1. Put a[i] into bucket[floor(n * a[i])] (assuming the inputs lie in the range [0, 1))
3. Sort the elements of the individual buckets by using insertion sort.
4. At last, gather or concatenate the sorted buckets.
End bucketSort

Algorithm
Bucket Sort(A[])
1. Let B[0....n-1] be a new array of buckets
2. n = length[A]
3. for i = 0 to n-1
4.     make B[i] an empty list
5. for i = 1 to n
6.     do insert A[i] into list B[floor(n * A[i])]
7. for i = 0 to n-1
8.     do sort list B[i] with insertion sort
9. Concatenate lists B[0], B[1], ........, B[n-1] together in order
End
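A runnable Python sketch of this algorithm (not part of the original notes), under the usual assumption that the inputs are floating-point values uniformly distributed in [0, 1); the sample values are illustrative:

def bucket_sort(a):
    n = len(a)
    buckets = [[] for _ in range(n)]      # step 1: n empty buckets
    for x in a:
        buckets[int(n * x)].append(x)     # step 2: scatter x into bucket floor(n*x)
    for b in buckets:
        b.sort()                          # step 3: sort each bucket (any stable sort)
    result = []
    for b in buckets:
        result.extend(b)                  # step 4: gather the buckets in order
    return result

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))
# -> [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]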
Scatter-gather approach
We can understand the Bucket sort algorithm via scatter-gather approach. Here, the given elements are
first scattered into buckets. After scattering, elements in each bucket are sorted using a stable sorting
algorithm. At last, the sorted elements will be gathered in order.
Let's take an unsorted array to understand the process of bucket sort; it is easier to understand
bucket sort via an example.
Example : [The worked example figure from the source is not reproduced here.]
Now, let's see the time complexity of bucket sort in best case, average case, and in worst case. We
will also see the space complexity of the bucket sort.

1. Time Complexity

Case Time Complexity

Best Case O(n + k)

Average Case O(n + k)

Worst Case O(n2)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. In Bucket sort, best case occurs when the elements are uniformly distributed in the
buckets. The complexity will be better if the elements are already sorted in the buckets.
If we use the insertion sort to sort the bucket elements, the overall complexity will be linear,
i.e., O(n + k), where O(n) is for making the buckets, and O(k) is for sorting the bucket elements
using algorithms with linear time complexity at best case.
The best-case time complexity of bucket sort is O(n + k).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. Bucket sort runs in the linear time, even when
the elements are uniformly distributed. The average case time complexity of bucket sort is O(n
+ K).
o Worst Case Complexity - In bucket sort, the worst case occurs when the elements in the array
are close together in range, so that they have to be placed in the same bucket and some buckets
end up with many more elements than others.
The complexity gets worse when the elements are in reverse order.
The worst-case time complexity of bucket sort is O(n2).
2. Space Complexity

Space Complexity O(n*k)

Stable YES

o The space complexity of bucket sort is O(n*k).

8. Radix Sort :
In this section, we will discuss the Radix sort algorithm. Radix sort is a linear sorting
algorithm used for integers. In radix sort, sorting is performed digit by digit, starting from the
least significant digit and moving to the most significant digit.
The process of radix sort works similarly to sorting students' names in alphabetical
order. In this case, there are 26 radixes, one for each letter of the English alphabet. In the first
pass, the names of the students are grouped according to the ascending order of the first letter of their
names. After that, in the second pass, their names are grouped according to the ascending order of the
second letter of their names. And the process continues until we obtain the sorted list.
Now, let's see the algorithm of Radix sort.
Algorithm
radixSort(arr)
max = largest element in the given array
d = number of digits in the largest element (or, max)
Now, create 10 buckets, one for each digit 0 - 9
for i -> 0 to d-1
    sort the array elements using counting sort (or any stable sort) according to the digit at
    the i-th place
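A runnable Python sketch of this algorithm (not part of the original notes), using counting sort as the stable per-digit sort; the sample array is illustrative and assumes non-negative integers:

def counting_sort_by_digit(a, exp):
    # Stable counting sort on the digit at place value exp (1, 10, 100, ...).
    n = len(a)
    output = [0] * n
    count = [0] * 10
    for x in a:
        count[(x // exp) % 10] += 1
    for d in range(1, 10):                # prefix sums give each digit's final region
        count[d] += count[d - 1]
    for x in reversed(a):                 # reverse traversal keeps the sort stable
        digit = (x // exp) % 10
        count[digit] -= 1
        output[count[digit]] = x
    a[:] = output

def radix_sort(a):
    exp = 1
    while max(a) // exp > 0:              # one pass per digit of the largest element
        counting_sort_by_digit(a, exp)
        exp *= 10

arr = [329, 457, 657, 839, 436, 720, 355]
radix_sort(arr)
print(arr)   # [329, 355, 436, 457, 657, 720, 839]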
Working of Radix sort Algorithm
Now, let's see the working of Radix sort Algorithm.
The steps used in the sorting of radix sort are listed as follows -
o First, we have to find the largest element (say max) in the given array. Suppose 'x' is
the number of digits in max. We calculate 'x' because we need to go through the significant
places of all the elements.
o After that, go through the significant places one by one. Here, we have to use a stable sorting
algorithm to sort the digits at each significant place.
Now let's see the working of radix sort in detail by using an example. To understand it more clearly,
let's take an unsorted array and try to sort it using radix sort. It will make the explanation clearer and
easier.

In the given array, the largest element is 736, which has 3 digits. So, the loop will run three
times (i.e., up to the hundreds place); that means three passes are required to sort the array.
Now, first sort the elements on the basis of the unit-place digits (i.e., x = 0). Here, we are using the
counting sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.

After the first pass, the array elements are -

Pass 2:
In this pass, the list is sorted on the basis of the next significant digits (i.e., the digits at the tens place).

After the second pass, the array elements are -

Pass 3:
In this pass, the list is sorted on the basis of the next significant digits (i.e., the digits at the hundreds place).

After the third pass, the array elements are -

Now, the array is sorted in ascending order.
Radix sort complexity
Now, let's see the time complexity of Radix sort in best case, average case, and worst case. We will
also see the space complexity of Radix sort.
1. Time Complexity

Case Time Complexity

Best Case Ω(n+k)

Average Case θ(nk)

Worst Case O(nk)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of Radix sort is Ω(n+k).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is not
properly ascending and not properly descending. The average case time complexity of Radix
sort is θ(nk).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order, but
its elements are in descending order. The worst-case time complexity of Radix sort is O(nk).
Radix sort is a non-comparative sorting algorithm. Its O(nk) running time is effectively linear
when the number of digits k is small, which is better than the O(n logn) bound of comparison-based
sorting algorithms.
2. Space Complexity

Space Complexity O(n + k)

Stable YES

o The space complexity of Radix sort is O(n + k).
