
CS3401- ALGORITHMS
UNIT 1
Algorithm Analysis
Algorithm analysis is the process of determining the time and space complexity of an algorithm,
which are measures of the algorithm's efficiency. Time complexity refers to the amount of time it
takes for an algorithm to run as a function of the size of the input, and is typically expressed using
big O notation. Space complexity refers to the amount of memory required by an algorithm as a
function of the size of the input, and is also typically expressed using big O notation.

To analyze the time complexity of an algorithm, we need to consider the number of operations
performed by the algorithm, and how the number of operations changes as the size of the input
increases. This can be done by counting the number of basic operations performed in the algorithm,
such as comparisons, assignments, and function calls. The number of basic operations is then used
to determine the algorithm's time complexity using big O notation.

To analyze the space complexity of an algorithm, we need to consider the amount of memory used
by the algorithm, and how the amount of memory used changes as the size of the input increases.
This can be done by counting the number of variables used by the algorithm, and how the number of
variables used changes as the size of the input increases. The amount of memory used is then used
to determine the algorithm's space complexity using big O notation.

It's important to note that analyzing the time and space complexity of an algorithm is a way to
evaluate its efficiency and the trade-off between time and space, but it is not a definitive measure
of the actual performance of the algorithm, as that depends on the specific implementation of the
algorithm, the computer and the input.

Time and Space Complexity


Time complexity is a measure of how long an algorithm takes to run as a function of the size of the
input. It is typically expressed using big O notation, which describes an upper bound on the growth
of the time required by the algorithm. For example, an algorithm with a time complexity of O(n) has
a running time that grows at most linearly with the input size (n).

There are different types of time complexities:

 O(1) or constant time: the algorithm takes the same amount of time to run regardless of the
size of the input.

 O(log n) or logarithmic time: the algorithm's running time increases logarithmically with the
size of the input.

 O(n) or linear time: the algorithm's running time increases linearly with the size of the input.

 O(n log n) or linearithmic time: the algorithm's running time grows in proportion to n log n, i.e.
slightly faster than linearly with the size of the input.

 O(n^2) or quadratic time: the algorithm's running time increases quadratically with the size
of the input.


 O(2^n) or exponential time: the algorithm's running time increases exponentially with the
size of the input.
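
For instance, here is a minimal C sketch of a logarithmic-time loop (countHalvings is a hypothetical function written only to illustrate the growth rate; it is not one of the syllabus examples):

#include <stdio.h>

/* Illustrative sketch: counts how many times n can be halved before reaching 1.
   The loop body runs about log2(n) times, so the running time is O(log n). */
int countHalvings(int n)
{
    int steps = 0;
    while (n > 1)
    {
        n = n / 2; // the problem size shrinks by half on every iteration
        steps++;
    }
    return steps;
}

int main(void)
{
    printf("%d\n", countHalvings(1024)); // prints 10, since 2^10 = 1024
    return 0;
}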

Space complexity, on the other hand, is a measure of how much memory an algorithm uses as a
function of the size of the input. Like time complexity, it is typically expressed using big O notation.
For example, an algorithm with a space complexity of O(n) uses more memory as the input size (n)
increases. Space complexities are generally categorized as:

 O(1) or constant space: the algorithm uses the same amount of memory regardless of the
size of the input.

 O(n) or linear space: the algorithm's memory usage increases linearly with the size of the
input.

 O(n^2) or quadratic space: the algorithm's memory usage increases quadratically with the
size of the input.

 O(2^n) or exponential space: the algorithm's memory usage increases exponentially with the
size of the input.
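
As a small illustration (a hypothetical sketch, not taken from the course material), the two functions below compute the same sum; the first uses O(1) extra space, while the second copies its input and therefore uses O(n) extra space:

#include <stdlib.h>

/* Illustrative sketch: O(1) extra space - only a single accumulator, regardless of n. */
long sumInPlace(const int a[], int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Illustrative sketch: O(n) extra space - allocates a copy of the input before summing it. */
long sumWithCopy(const int a[], int n)
{
    int *copy = malloc(n * sizeof *copy); // extra memory grows linearly with n
    if (copy == NULL)
        return 0; // allocation failed
    long total = 0;
    for (int i = 0; i < n; i++)
    {
        copy[i] = a[i];
        total += copy[i];
    }
    free(copy);
    return total;
}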

It is important to note that time and space complexity analysis is a way to evaluate the efficiency of
an algorithm and the trade-off between time and space, but it is not a definitive measure of the
actual performance of the algorithm, as it depends on the specific implementation of the algorithm,
the computer and the input.

Asymptotic notation and its properties


Asymptotic notation is a mathematical notation used to describe the behavior of an algorithm as the
size of the input (usually denoted by n) becomes arbitrarily large. The most commonly used
asymptotic notations are big O, big Ω, and big Θ.

 Big O notation (O(f(n))) provides an upper bound on the growth of a function, and is most often
used to express the worst-case time or space complexity of an algorithm. For example, an
algorithm with a time complexity of O(n^2) means that the running time of the algorithm grows
no faster than a constant multiple of n^2, where n is the size of the input.

 Big Ω notation (Ω(f(n))) provides a lower bound on the growth of a function, and is often used to
express the best-case time or space complexity of an algorithm. For example, an algorithm with a
space complexity of Ω(n) means that the memory usage of the algorithm is at least proportional
to n, where n is the size of the input.

 Big Θ notation (Θ(f(n))) provides a tight bound on the growth of a function: the function is
bounded both above and below by constant multiples of f(n). For example, an algorithm with a
time complexity of Θ(n log n) means that the running time of the algorithm is both O(n log n)
and Ω(n log n), where n is the size of the input.

It's important to note that asymptotic notation only describes the behavior of a function for
large values of n, and does not provide information about its exact behavior for small values of n.
Also, when a function is bounded above and below by the same f(n), the three notations coincide:
O(f(n)) = Ω(f(n)) = Θ(f(n)).


Additionally, these notations can be used to compare the efficiency of different algorithms, where a
lower order of the function is considered more efficient. For example, an algorithm with a time
complexity of O(n) is more efficient than an algorithm with a time complexity of O(n^2).

It's also worth mentioning that asymptotic notation is not only limited to time and space complexity
but can be used to express the behavior of any function, not just algorithms.

There are three asymptotic notations that are used to represent the time complexity of an algorithm.
They are:

 Θ Notation (theta)

 Big O Notation

 Ω Notation

Before learning about these three asymptotic notations, we should learn about the best, average,
and worst case of an algorithm.

Best case, Average case, and Worst case

An algorithm can take different amounts of time for different inputs. It may take 1 second for some input and 10
seconds for some other input.

For example: We have an array named "arr" and an integer "k". We need to find whether the integer
"k" is present in the array "arr" or not. If it is, then return 1; otherwise return 0. Try to
make an algorithm for this question.

The following information can be extracted from the above question:

 Input: Here our input is an integer array of size "n" and we have one integer "k" that we
need to search for in that array.

 Output: If the element "k" is found in the array, then we have to return 1; otherwise we have
to return 0.

Now, one possible solution for the above problem can be linear search i.e. we will traverse each and
every element of the array and compare that element with "k". If it is equal to "k" then return 1;
otherwise, keep on comparing with the remaining elements of the array, and if you reach the end of the
array without finding the element, then return 0.

/*
 * @type of arr: integer array
 * @type of n: integer (size of integer array)
 * @type of k: integer (integer to be searched)
 */
int searchK(int arr[], int n, int k)
{
    // for-loop to iterate over each element in the array
    for (int i = 0; i < n; ++i)
    {
        // check if the ith element is equal to "k" or not
        if (arr[i] == k)
            return 1; // return 1, if you find "k"
    }
    return 0; // return 0, if you didn't find "k"
}

/*
* [Explanation]
* i = 0 ------------> will be executed once
* i < n ------------> will be executed n+1 times
* i++ --------------> will be executed n times
* if(arr[i] == k) --> will be executed n times
* return 1 ---------> will be executed once(if "k" is there in the array)
* return 0 ---------> will be executed once(if "k" is not there in the array)
*/
Each statement in the code takes constant time, let's say "C", where "C" is some constant. So, whenever
we declare an integer it takes constant time, when we change the value of some integer or
other variable it takes constant time, and when we compare two variables it takes constant
time. So, if a statement takes "C" amount of time and it is executed "N" times, then it will take
C*N amount of time. Now, think of the following inputs to the above algorithm that we have just
written:

NOTE: Here we assume that each statement takes 1 second to execute.

 If the input array is [1, 2, 3, 4, 5] and you want to find if "1" is present in the array or not,
then the if-condition of the code will be executed 1 time and it will find that the element 1 is
there in the array. So, the if-condition will take 1 second here.

 If the input array is [1, 2, 3, 4, 5] and you want to find if "3" is present in the array or not,
then the if-condition of the code will be executed 3 times and it will find that the element 3
is there in the array. So, the if-condition will take 3 seconds here.

 If the input array is [1, 2, 3, 4, 5] and you want to find if "6" is present in the array or not,
then the if-condition of the code will be executed 5 times and it will find that the element 6
is not there in the array and the algorithm will return 0 in this case. So, the if-condition will
take 5 seconds here.

As we can see, for the same input array we get different running times for different values of "k". So,
this can be divided into three cases:

 Best case: This is the lower bound on the running time of an algorithm. We must know the case
that causes the minimum number of operations to be executed. In the above example, our
array was [1, 2, 3, 4, 5] and we are finding if "1" is present in the array or not. So here, after
only one comparison, we will find that the element is present in the array. So, this is the best
case of our algorithm.


 Average case: We calculate the running time for all possible inputs, sum all the calculated
values and divide the sum by the total number of inputs. We must know (or predict)
distribution of cases.

 Worst case: This is the upper bound on running time of an algorithm. We must know the
case that causes the maximum number of operations to be executed. In our example, the
worst case can be if the given array is [1, 2, 3, 4, 5] and we try to find if element "6" is
present in the array or not. Here, the if-condition of our loop will be executed 5 times and
then the algorithm will give "0" as output.

So, we learned about the best, average, and worst case of an algorithm. Now, let's get back to the
asymptotic notations, where we saw that we use three asymptotic notations to represent the
complexity of an algorithm i.e. Θ Notation (theta), Ω Notation, and Big O Notation.

NOTE: In the asymptotic analysis, we generally deal with large input size.

Θ Notation (theta)

The Θ Notation is used to express a tight bound on an algorithm i.e. it defines an upper bound and
a lower bound, and your algorithm's running time lies between these two levels. So, if a function is g(n), then the
theta representation is shown as Θ(g(n)) and the relation is shown as:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0

such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

The above expression can be read as theta of g(n) is defined as set of all the functions f(n) for which
there exists some positive constants c1, c2, and n0 such that c1*g(n) is less than or equal to f(n) and
f(n) is less than or equal to c2*g(n) for all n that is greater than or equal to n0.

For example:

if f(n) = 2n² + 3n + 1

and g(n) = n²

then for c1 = 2, c2 = 6, and n0 = 1, we can say that f(n) = Θ(n²)

Ω Notation

The Ω notation denotes the lower bound of an algorithm i.e. the time taken by the algorithm can't
be lower than this. In other words, this is the fastest time in which the algorithm will return a result.


It is the time taken by the algorithm when provided with its best-case input. So, if a function is g(n),
then the omega representation is shown as Ω(g(n)) and the relation is shown as:

Ω(g(n)) = { f(n): there exist positive constants c and n0

such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

The above expression can be read as omega of g(n) is defined as set of all the functions f(n) for which
there exist some constants c and n0 such that c*g(n) is less than or equal to f(n), for all n greater
than or equal to n0.

if f(n) = 2n² + 3n + 1

and g(n) = n²

then for c = 2 and n0 = 1, we can say that f(n) = Ω(n²)

Big O Notation

The Big O notation defines an upper bound for an algorithm i.e. your algorithm cannot take more time
than this. In other words, we can say that the big O notation denotes the maximum time taken
by an algorithm, or the worst-case time complexity of an algorithm. Hence, big O notation is the most
widely used notation for the time complexity of an algorithm. So, if a function is g(n), then the big O
representation of g(n) is shown as O(g(n)) and the relation is shown as:

O(g(n)) = { f(n): there exist positive constants c and n0

such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be read as Big O of g(n) is defined as a set of functions f(n) for which there
exist some constants c and n0 such that f(n) is greater than or equal to 0 and f(n) is smaller than or
equal to c*g(n) for all n greater than or equal to n0.

if f(n) = 2n² + 3n + 1

and g(n) = n²

then for c = 6 and n0 = 1, we can say that f(n) = O(n²)


Big O notation example of Algorithms

Big O notation is the most used notation to express the time complexity of an algorithm. In this
section, we will find the big O notation of various algorithms.

Example 1: Finding the sum of the first n numbers.

In this example, we have to find the sum of the first n numbers. For example, if n = 4, then our output
should be 1 + 2 + 3 + 4 = 10. If n = 5, then the output should be 1 + 2 + 3 + 4 + 5 = 15. Let's try various
solutions to this problem and compare them.

O(1) solution

// function taking input "n"
int findSum(int n)
{
    return n * (n + 1) / 2; // this will take some constant time c1
}

In the above code, there is only one statement and we know that a statement takes constant time
for its execution. The basic idea is that if a statement takes constant time, then it will take the
same amount of time for every input size, and we denote this as O(1).

O(n) solution

In this solution, we will run a loop from 1 to n and we will add these values to a variable named
"sum".

// function taking input "n"
int findSum(int n)
{
    int sum = 0;                 // ----> takes some constant time "c1"
    for (int i = 1; i <= n; ++i) // ----> the comparison and increment take place n times (c2*n), and the creation of i takes constant time
        sum = sum + i;           // ----> this statement is executed n times i.e. c3*n
    return sum;                  // ----> takes some constant time "c4"
}

/*
 * Total time taken = time taken by all the statements to execute
 * Here we have 3 constant-time statements i.e. "sum = 0", "i = 1", and "return sum",
   so we can add all the constants and replace them with some new constant "c"
 * Apart from this, we have two statements running n times i.e. "i <= n" (in fact n+1 times) and
   "sum = sum + i" i.e. c2*n + c3*n = c0*n
 * Total time taken = c0*n + c
 */

The big O notation of the above code is O(c0*n) + O(c), where c and c0 are constants. So, the overall
time complexity can be written as O(n) .

O(n²) solution

In this solution, we will increment the value of sum variable "i" times i.e. for i = 1, the sum variable
will be incremented once i.e. sum = 1. For i = 2, the sum variable will be incremented twice. So, let's
see the solution.

// function taking input "n"
int findSum(int n)
{
    int sum = 0;                     // ----> constant time
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            sum++;                   // ----> runs n * (n + 1) / 2 times in total
    return sum;                      // ----> constant time
}

/*
 * Total time taken = time taken by all the statements to execute
 * The statement that is executed most of the time is "sum++" i.e. n * (n + 1) / 2 times
 * So, the total complexity will be: c1*n² + c2*n + c3 [c1 is for the constant factor of n², c2 is for the
   constant factor of n, and c3 is for the rest of the constant time]
 */

The big O notation of the above algorithm is O(c1*n²) + O(c2*n) + O(c3). Since we keep only the highest
order of growth in big O, the expression reduces to O(n²).


So, until now, we have seen 3 solutions for the same problem. Now, which algorithm would you prefer to use
when finding the sum of the first "n" numbers? We would prefer the O(1) solution because the time
taken by the algorithm will be constant irrespective of the input size.

Recurrence Relation

A recurrence relation is a mathematical equation that describes the relation between the input size
and the running time of a recursive algorithm. It expresses the running time of a problem in terms of
the running time of smaller instances of the same problem.

A recurrence relation typically has the form T(n) = aT(n/b) + f(n) where:

 T(n) is the running time of the algorithm on an input of size n

 a is the number of recursive calls made by the algorithm

 b is the size of the input passed to each recursive call

 f(n) is the time required to perform any non-recursive operations

The recurrence relation can be used to determine the time complexity of the algorithm using
techniques such as the Master Theorem or Substitution Method.
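As a quick illustration of the Master Theorem (merge sort is used here only as an example; it is not analysed elsewhere in this section): merge sort satisfies T(n) = 2T(n/2) + O(n), so a = 2, b = 2 and f(n) = O(n) = O(n^(log_2 2)); by the second case of the Master Theorem, T(n) = O(n log n).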

For example, let's consider the problem of computing the nth Fibonacci number. A simple recursive
algorithm for solving this problem is as follows:

Fibonacci(n)

if n <= 1

return n

else

return Fibonacci(n-1) + Fibonacci(n-2)

The recurrence relation for this algorithm is T(n) = T(n-1) + T(n-2) + O(1), which describes the running
time of the algorithm in terms of the running time of the two smaller instances of the problem with
input sizes n-1 and n-2. This recurrence does not fit the T(n) = aT(n/b) + f(n) form required by the
Master Theorem, but by solving it (for example with the substitution method or a recursion tree) it can
be shown that the time complexity of this algorithm is O(2^n), which is very inefficient for large input sizes.

Searching
Searching is the process of fetching a specific element in a collection of elements. The collection can
be an array or a linked list. If you find the element in the list, the process is considered successful,
and it returns the location of that element.
A few prominent search strategies are extensively used to find a specific item in a list. However, the
algorithm chosen is determined by the list's organization.


1. Linear Search
2. Binary Search
3. Interpolation search

Linear Search
Linear search, often known as sequential search, is the most basic search technique. In this type of
search, we go through the entire list and try to fetch a match for a single element. If we
find a match, then the address of the matching target element is returned.
On the other hand, if the element is not found, then it returns a NULL value.
Following is a step-by-step approach employed to perform Linear Search Algorithm.

The procedures for implementing linear search are as follows:


Step 1: First, read the search element (Target element) in the array.
Step 2: In the second step compare the search element with the first element in the array.
Step 3: If both are matched, display "Target element is found" and terminate the Linear Search
function.
Step 4: If both are not matched, compare the search element with the next element in the array.
Step 5: In this step, repeat steps 3 and 4 until the search (Target) element is compared with the last
element of the array.
Step 6 - If the last element in the list does not match, the Linear Search Function will be terminated,
and the message "Element is not found" will be displayed.

Algorithm and Pseudocode of Linear Search Algorithm


Algorithm of the Linear Search Algorithm

Linear Search ( Array Arr, Value a ) // Arr is the name of the array, and a is the searched element.
Step 1: Set i to 0 // i is the index of the array, which starts from 0
Step 2: if i ≥ n then go to step 7 // n is the number of elements in the array
Step 3: if Arr[i] = a then go to step 6
Step 4: Set i to i + 1
Step 5: Go to step 2
Step 6: Print element a found at index i and go to step 8
Step 7: Print element not found
Step 8: Exit

Pseudocode of Linear Search Algorithm

Start
linear_search ( Array, value )
   For each element in the array
      If (searched element == value)
         Return the searched element's location
      end if
   end for
end

Example of Linear Search Algorithm

Consider an array of size 7 with elements 13, 9, 21, 15, 39, 19, and 27, whose indices start at 0 and end
at size minus one, i.e. 6.
Search element = 39

Step 1: The searched element 39 is compared to the first element of the array, which is 13.

The match is not found, so you move on to the next element and try to make a comparison.
Step 2: Now, search element 39 is compared to the second element of the array, 9.

Step 3: Now, search element 39 is compared with the third element, which is 21.

Again, the elements do not match, so you move on to the following element.
Step 4: Next, search element 39 is compared with the fourth element, which is 15.

Step 5: Next, search element 39 is compared with the fifth element, 39.

A perfect match is found, so display "element found at location 4".

The Complexity of Linear Search Algorithm


Three different cases arise while analysing the Linear Search Algorithm; they are as follows.
1. Best Case
2. Worst Case
3. Average Case
Best Case Complexity
 The element being searched could be found in the first position.
 In this case, the search ends with a single successful comparison.
 Thus, in the best-case scenario, the linear search algorithm performs O(1) operations.
Worst Case Complexity
 The element being searched may be at the last position in the array or not at all.
 In the first case, the search succeeds in ‘n’ comparisons.
 In the next case, the search fails after ‘n’ comparisons.
 Thus, in the worst-case scenario, the linear search algorithm performs O(n) operations.
Average Case Complexity
On average, the element to be searched may be anywhere in the array, so roughly n/2 comparisons are
needed; the average-case complexity of the Linear Search Algorithm is therefore O(n).
Space Complexity of Linear Search Algorithm
The linear search algorithm takes up no extra space; its auxiliary space complexity is O(1) (it is O(n)
only if the input array itself is counted).
Application of Linear Search Algorithm
The linear search algorithm has the following applications:
 Linear search can be applied to both single-dimensional and multi-dimensional arrays.
 Linear search is easy to implement and effective when the array contains only a few
elements.
 Linear Search is also efficient when the search is performed to fetch a single element from an
unordered list.
Code Implementation of Linear Search Algorithm

#include <stdio.h>

int main()
{
    int array[50], i, target, num;

    printf("How many elements do you want in the array: ");
    scanf("%d", &num);
    printf("Enter array elements: ");
    for (i = 0; i < num; ++i)
        scanf("%d", &array[i]);
    printf("Enter element to search: ");
    scanf("%d", &target);
    for (i = 0; i < num; ++i)
        if (array[i] == target)
            break;
    if (i < num)
        printf("Target element found at location %d", i);
    else
        printf("Target element not found in the array");
    return 0;
}

Binary Search

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an
element in some list using the binary search technique, we must ensure that the list is sorted.
Binary search follows the divide and conquer approach, in which the list is divided into two halves
and the item is compared with the middle element of the list. If a match is found, then the
location of the middle element is returned. Otherwise, we search in one of the two halves, depending
upon the result of the comparison.
NOTE: Binary search can be applied to sorted array elements. If the list elements are not
arranged in a sorted manner, we first have to sort them.

Algorithm
1. Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
2. Step 1: set beg = lower_bound, end = upper_bound, pos = -1
3. Step 2: repeat steps 3 and 4 while beg <= end
4. Step 3: set mid = (beg + end)/2
5. Step 4: if a[mid] = val
6.            set pos = mid
7.            print pos
8.            go to step 6
9.         else if a[mid] > val
10.           set end = mid - 1
11.        else
12.           set beg = mid + 1
13.        [end of if]
14. [end of loop]
15. Step 5: if pos = -1
16.            print "value is not present in the array"
17.        [end of if]
18. Step 6: exit
Procedure binary_search
A ← sorted array
n ← size of array
x ← value to be searched
Set lowerBound = 1
Set upperBound = n
while x not found
if upperBound < lowerBound
EXIT: x does not exist.
set midPoint = lowerBound + ( upperBound - lowerBound ) / 2
if A[midPoint] < x
set lowerBound = midPoint + 1
if A[midPoint] > x
set upperBound = midPoint - 1
if A[midPoint] = x
EXIT: x found at location midPoint
end while
end procedure

Working of Binary search


To understand the working of the Binary search algorithm, let's take a sorted array. It will be easy to
understand the working of Binary search with an example.
There are two methods to implement the binary search algorithm -
o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
Consider a sorted array of 9 elements, and let the element to search be K = 56.


We have to use the below formula to calculate the mid of the array -
1. mid = (beg + end)/2
So, in the given array -

beg = 0
end = 8
mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.


The middle element is compared with K; if it matches, the search stops, and otherwise the search continues in the half that can still contain K. Repeating this, the element to search is eventually found, and the algorithm returns the index of the matched element.
Binary Search complexity
Now, let's see the time complexity of Binary search in the best case, average case, and worst case.
We will also see the space complexity of Binary search.
1. Time Complexity

Case Time Complexity

Best Case O(1)

Average Case O(log n)

Worst Case O(log n)


o Best Case Complexity - In Binary search, the best case occurs when the element to search is
found in the first comparison, i.e., when the first middle element itself is the element to be
searched. The best-case time complexity of Binary search is O(1).
o Average Case Complexity - The average case time complexity of Binary search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we have to keep
reducing the search space till it has only one element. The worst-case time complexity of
Binary search is O(log n).
2. Space Complexity
Space Complexity O(1)
o The space complexity of binary search is O(1).

Implementation of Binary Search


Program: Write a program to implement Binary search in C language.
#include <stdio.h>

int binarySearch(int a[], int beg, int end, int val)
{
    int mid;
    if (end >= beg)
    {
        mid = (beg + end) / 2;
        /* if the item to be searched is present at the middle */
        if (a[mid] == val)
        {
            return mid + 1;
        }
        /* if the item to be searched is greater than the middle, then it can only be in the right subarray */
        else if (a[mid] < val)
        {
            return binarySearch(a, mid + 1, end, val);
        }
        /* if the item to be searched is smaller than the middle, then it can only be in the left subarray */
        else
        {
            return binarySearch(a, beg, mid - 1, val);
        }
    }
    return -1;
}

int main() {
    int a[] = {11, 14, 25, 30, 40, 41, 52, 57, 70}; // given array
    int val = 40;                                   // value to be searched
    int n = sizeof(a) / sizeof(a[0]);               // size of array
    int res = binarySearch(a, 0, n - 1, val);       // store result
    printf("The elements of the array are - ");
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\nElement to be searched is - %d", val);
    if (res == -1)
        printf("\nElement is not present in the array");
    else
        printf("\nElement is present at %d position of array", res);
    return 0;
}
Output

The elements of the array are - 11 14 25 30 40 41 52 57 70
Element to be searched is - 40
Element is present at 5 position of array
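
The program above uses the recursive method. For comparison, a minimal iterative sketch of the same idea (binarySearchIterative is a hypothetical helper written only as an illustration, assuming a 0-indexed sorted array) could look like this:

/* Illustrative sketch (not part of the original program): iterative binary search.
   Returns the index of val in a[0..n-1], or -1 if it is absent. */
int binarySearchIterative(int a[], int n, int val)
{
    int beg = 0, end = n - 1;
    while (beg <= end)
    {
        int mid = beg + (end - beg) / 2; // avoids overflow of (beg + end) for large indices
        if (a[mid] == val)
            return mid;        // found: return the 0-based index
        else if (a[mid] < val)
            beg = mid + 1;     // search the right half
        else
            end = mid - 1;     // search the left half
    }
    return -1;                 // not present
}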

Interpolation Search
Interpolation search is an improved variant of binary search. This search algorithm works on the
probing position of the required value. For this algorithm to work properly, the data collection
should be in a sorted form and equally distributed.
Binary search has a huge advantage of time complexity over linear search. Linear search has worst-
case complexity of Ο(n) whereas binary search has Ο(log n).


There are cases where the approximate location of the target data may be known in advance. For example, in the
case of a telephone directory, if we want to search for the telephone number of Morphius, linear search
and even binary search will seem slow, since we could directly jump to the memory space where the names
starting with 'M' are stored.
Position Probing in Interpolation Search
Interpolation search finds a particular item by computing the probe position. Initially, the probe
position is the position of the middle most item of the collection.

If a match occurs, then the index of the item is returned. To split the list into two parts, we use the
following method −
mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])

where −
A = list
Lo = Lowest index of the list
Hi = Highest index of the list
A[n] = Value stored at index n in the list
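For example, with the sorted list used in the program below, A = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44}, Lo = 0, Hi = 9 and X = 33, the probe position works out to mid = 0 + ((9 - 0) / (44 - 10)) * (33 - 10) = (9 / 34) * 23 ≈ 6, and A[6] is indeed 33, so the target is found in a single probe.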

If the item being searched is greater than the probed (middle) item, then the probe position is recalculated
in the sub-array to the right of the probed item. Otherwise, the item is searched for in the sub-array to the
left of the probed item. This process continues on the sub-array until the size of the sub-array reduces to
zero.
The runtime complexity of the interpolation search algorithm is Ο(log (log n)), as compared to the Ο(log n) of
binary search, in favorable situations.
Algorithm
As it is an improvement over the existing binary search algorithm, we mention the steps to search for the
'target' data value index using position probing −
Step 1 − Start searching data from the middle of the list.
Step 2 − If it is a match, return the index of the item, and exit.
Step 3 − If it is not a match, compute the probe position.
Step 4 − Divide the list using the probing formula and find the new middle.
Step 5 − If data is greater than middle, search in the higher sub-list.
Step 6 − If data is smaller than middle, search in the lower sub-list.
Step 7 − Repeat until a match is found.

Pseudocode
A → Array list
N → Size of A
X → Target Value

Procedure Interpolation_Search()

Set Lo → 0


Set Mid → -1
Set Hi → N-1

While X does not match

if Lo equals to Hi OR A[Lo] equals to A[Hi]


EXIT: Failure, Target not found
end if

Set Mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])

if A[Mid] = X
EXIT: Success, Target found at Mid
else
if A[Mid] < X
Set Lo to Mid+1
else if A[Mid] > X
Set Hi to Mid-1
end if
end if
End While

End Procedure

Implementation of interpolation in C

#include <stdio.h>
#define MAX 10

// array of items on which interpolation search will be conducted.
int list[MAX] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44 };

int find(int data) {
   int lo = 0;
   int hi = MAX - 1;
   int mid = -1;
   int comparisons = 1;
   int index = -1;
   while (lo <= hi) {
      printf("\nComparison %d \n", comparisons);
      printf("lo : %d, list[%d] = %d\n", lo, lo, list[lo]);
      printf("hi : %d, list[%d] = %d\n", hi, hi, list[hi]);

      comparisons++;
      // probe the mid point
      mid = lo + (((double)(hi - lo) / (list[hi] - list[lo])) * (data - list[lo]));
      printf("mid = %d\n", mid);
      // data found
      if (list[mid] == data) {
         index = mid;
         break;
      } else {
         if (list[mid] < data) {
            // if data is larger, data is in upper half
            lo = mid + 1;
         } else {
            // if data is smaller, data is in lower half
            hi = mid - 1;
         }
      }
   }

   printf("\nTotal comparisons made: %d", --comparisons);
   return index;
}

int main() {
   // find location of 33
   int location = find(33);

   // if element was found
   if (location != -1)
      printf("\nElement found at location: %d", (location + 1));
   else
      printf("Element not found.");
   return 0;
}
If we compile and run the above program, it will produce the following result −
Output
Comparison 1
lo : 0, list[0] = 10
hi : 9, list[9] = 44
mid = 6

Total comparisons made: 1


Element found at location: 7

Time Complexity
 Best case - O(1)
The best case occurs when the target is found at the very first probe position computed using
the formula. As only one comparison is performed, the time complexity is O(1).

 Worst case - O(n)
The worst case occurs when the given data set is exponentially distributed.

 Average case - O(log(log(n)))
If the data set is sorted and uniformly distributed, then it takes O(log(log(n))) time, as on
average about log(log(n)) comparisons are made.


Space Complexity
O(1) as no extra space is required.

Pattern Search
Pattern Searching algorithms are used to find a pattern or substring within another, bigger string.
There are different algorithms for this. The main goal in designing these types of algorithms is to reduce the time
complexity, since the traditional approach may take a lot of time to complete the pattern searching task
for a longer text.
Here we will see different algorithms to get a better performance of pattern matching.
In this section we are going to cover:
 Aho-Corasick Algorithm
 Anagram Pattern Search
 Bad Character Heuristic
 Boyer Moore Algorithm
 Efficient Construction of Finite Automata
 kasai’s Algorithm
 Knuth-Morris-Pratt Algorithm
 Manacher’s Algorithm
 Naive Pattern Searching
 Rabin-Karp Algorithm
 Suffix Array
 Trie of all Suffixes
 Z Algorithm

Naïve pattern searching is the simplest method among the pattern searching algorithms. It compares the pattern
with the characters of the main string at every position. This algorithm is helpful for smaller texts. It does
not need any pre-processing phase; we can find the substring by checking the pattern at each position of the
string. It also does not occupy extra space to perform the operation.
The time complexity of the Naïve Pattern Search method is O(m*n), where m is the size of the pattern and n is
the size of the main string.

Input and Output


Input:
Main String: “ABAAABCDBBABCDDEBCABC”, pattern: “ABC”
Output:
Pattern found at position: 4
Pattern found at position: 10
Pattern found at position: 18

Algorithm
naive_algorithm(pattern, text)
Input − The text and the pattern
Output − locations where the pattern is present in the text
Start
   pat_len := pattern size
   str_len := string size
   for i := 0 to (str_len - pat_len), do
      for j := 0 to pat_len - 1, do
         if text[i+j] ≠ pattern[j], then
            break
      if j == pat_len, then
         display the position i, as the pattern is found there
End

Implementation in C
#include <stdio.h>
#include <string.h>

int main() {
   char txt[] = "tutorialsPointisthebestplatformforprogrammers";
   char pat[] = "a";
   int M = strlen(pat);
   int N = strlen(txt);
   for (int i = 0; i <= N - M; i++) {
      int j;
      // compare the pattern with the text at shift i
      for (j = 0; j < M; j++)
         if (txt[i + j] != pat[j])
            break;
      if (j == M) // the whole pattern matched at shift i
         printf("Pattern matches at index %d\n", i);
   }
   return 0;
}
Output
Pattern matches at index 6
Pattern matches at index 25
Pattern matches at index 39

Rabin-Karp matching pattern

Rabin-Karp is another pattern searching algorithm. It is a string matching algorithm that was
proposed by Rabin and Karp to find a pattern in a more efficient way. Like the Naive Algorithm, it
also checks the pattern by moving the window one position at a time, but instead of checking all characters at
every shift, it first compares hash values. Only when the hash values match does it proceed to check the
characters one by one. In this way, on average there is only one full comparison per text subsequence, making it
a more efficient algorithm for pattern searching.
Preprocessing time - O(m)
The time complexity of the Rabin-Karp Algorithm is O(m+n) on average, but in the worst case it is O(mn).
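As a small worked illustration (the values d = 256 and q = 101 are chosen here only for this example), consider the text "abc" and a pattern of length M = 2. Then h = d^(M-1) mod q = 256 mod 101 = 54, and the hash of the first window "ab" is (256*97 + 98) mod 101 = 84 (97 and 98 are the ASCII codes of 'a' and 'b'). Sliding the window to "bc" uses the rolling update (d*(t - text[i]*h) + text[i+M]) mod q = (256*(84 - 97*54) + 99) mod 101 = 38, which matches the directly computed hash of "bc", (256*98 + 99) mod 101 = 38.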
Algorithm
rabinkarp_algo(text, pattern, prime)
Input − The main text and the pattern, and a prime number used to compute the hash values
Output − locations where the pattern is found
Start
   pat_len := pattern length
   str_len := string length
   patHash := 0, strHash := 0, h := 1
   maxChar := total number of characters in the character set
   for i := 1 to pat_len - 1, do
      h := (h*maxChar) mod prime
   for each character index i of the pattern, do
      patHash := (maxChar*patHash + pattern[i]) mod prime
      strHash := (maxChar*strHash + text[i]) mod prime
   for i := 0 to (str_len - pat_len), do
      if patHash = strHash, then
         for charIndex := 0 to pat_len - 1, do
            if text[i+charIndex] ≠ pattern[charIndex], then
               break
         if charIndex = pat_len, then
            print the location i, as the pattern is found at position i
      if i < (str_len - pat_len), then
         strHash := (maxChar*(strHash - text[i]*h) + text[i+pat_len]) mod prime
         if strHash < 0, then
            strHash := strHash + prime
End

Implementation In C

#include <stdio.h>
#include <string.h>

int main() {
   char txt[80], pat[80];
   int q;
   printf("Enter the container string\n");
   scanf("%s", txt);
   printf("Enter the pattern to be searched\n");
   scanf("%s", pat);
   int d = 256;
   printf("Enter a prime number\n");
   scanf("%d", &q);
   int M = strlen(pat);
   int N = strlen(txt);
   int i, j;
   int p = 0; // hash value of the pattern
   int t = 0; // hash value of the current text window
   int h = 1;
   for (i = 0; i < M - 1; i++)
      h = (h * d) % q; // h = d^(M-1) mod q
   for (i = 0; i < M; i++) {
      p = (d * p + pat[i]) % q;
      t = (d * t + txt[i]) % q;
   }
   for (i = 0; i <= N - M; i++) {
      if (p == t) { // hash values match, verify character by character
         for (j = 0; j < M; j++) {
            if (txt[i + j] != pat[j])
               break;
         }
         if (j == M)
            printf("Pattern found at index %d\n", i);
      }
      if (i < N - M) { // roll the hash to the next window
         t = (d * (t - txt[i] * h) + txt[i + M]) % q;
         if (t < 0)
            t = (t + q);
      }
   }
   return 0;
}
Output
Enter the container string
tutorialspointisthebestprogrammingwebsite
Enter the pattern to be searched
p
Enter a prime number
3
Pattern found at index 9
Pattern found at index 23

In this problem, we are given two strings, a text and a pattern. Our task is to create a program for the
KMP algorithm for pattern search; it will find all the occurrences of the pattern in the text string.
Here, we have to find all the occurrences of the pattern in the text.
Let's take an example to understand the problem,
Input
text = “xyztrwqxyzfg” pattern = “xyz”
Output
Found at index 0
Found at index 7
Here, we will discuss the solution to the problem using the KMP (Knuth Morris Pratt) pattern searching
algorithm. It uses a preprocessing array built from the pattern, which is used while matching in the text
and which helps in continuing the search when matching characters are followed by a character of the
string that does not match the pattern.
We will preprocess the pattern and create an array that records, for each position, the length of the
proper prefix of the pattern that is also a suffix; this array helps in handling mismatches.
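As an illustration (using a hypothetical pattern, different from the one in the program below), the prefix-suffix array for the pattern "abab" is [0, 0, 1, 2]: the proper prefix "a" of length 1 is also a suffix of "aba", and the proper prefix "ab" of length 2 is also a suffix of "abab". On a mismatch after matching j characters of the pattern, the algorithm resumes the comparison from index pps[j - 1] of the pattern instead of restarting from 0, so no text character is re-examined more than a constant number of times.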
Program for KMP Algorithm for Pattern Searching
// C Program for KMP Algorithm for Pattern Searching
Example
#include <stdio.h>
#include <string.h>

/* builds the prefix-suffix (failure) array pps for the pattern */
void prefixSuffixArray(char* pat, int M, int* pps) {
   int length = 0;
   pps[0] = 0;
   int i = 1;
   while (i < M) {
      if (pat[i] == pat[length]) {
         length++;
         pps[i] = length;
         i++;
      } else {
         if (length != 0)
            length = pps[length - 1];
         else {
            pps[i] = 0;
            i++;
         }
      }
   }
}

void KMPAlgorithm(char* text, char* pattern) {
   int M = strlen(pattern);
   int N = strlen(text);
   int pps[M];
   prefixSuffixArray(pattern, M, pps);
   int i = 0;
   int j = 0;
   while (i < N) {
      if (pattern[j] == text[i]) {
         j++;
         i++;
      }
      if (j == M) {
         printf("Found pattern at index %d\n", i - j);
         j = pps[j - 1];
      }
      else if (i < N && pattern[j] != text[i]) {
         if (j != 0)
            j = pps[j - 1];
         else
            i = i + 1;
      }
   }
}

int main() {
   char text[] = "xyztrwqxyzfg";
   char pattern[] = "xyz";

   printf("The pattern is found in the text at the following index -\n");
   KMPAlgorithm(text, pattern);
   return 0;
}
Output
The pattern is found in the text at the following index −
Found pattern at index 0
Found pattern at index 7

Sorting : Insertion sort

Insertion sort works similar to the sorting of playing cards in hands. It is assumed that the first card is
already sorted in the card game, and then we select an unsorted card. If the selected unsorted card
is greater than the first card, it will be placed at the right side; otherwise, it will be placed at the left
side. Similarly, all unsorted cards are taken and put in their exact place.

The same approach is applied in insertion sort. The idea behind insertion sort is to take one element
at a time and insert it into its correct place within the already-sorted part of the array. Although it is
simple to use, it is not appropriate for large data sets, as the time complexity of insertion sort in the
average case and worst case is O(n²), where n is the number of items. Insertion sort is less efficient
than other sorting algorithms like heap sort, quick sort, merge sort, etc.

Algorithm
The simple steps of achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted. Return 1.
Step 2 - Pick the next element, and store it separately in a key.
Step 3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to the next
element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the value.
Step 6 - Repeat until the array is sorted.
Working of Insertion sort Algorithm
Now, let's see the working of the insertion sort Algorithm.
To understand the working of the insertion sort algorithm, let's take an unsorted array. It will be
easier to understand the insertion sort via an example.
Let the elements of the array be 12, 31, 25, 8, 32, 17.

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So, for now,
12 is stored in a sorted sub-array.


Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along with
swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the sorted
array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements that are
31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are 31 and
32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.


17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.

Insertion sort complexity


1. Time Complexity
Case Time Complexity
Best Case O(n)
Average Case O(n²)
Worst Case O(n²)
o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
insertion sort is O(n²).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of insertion sort
is O(n²).
2. Space Complexity
Space Complexity O(1)
Stable YES
o The space complexity of insertion sort is O(1). It is because, in insertion sort, an extra
variable is required for swapping.
Implementation of insertion sort
Program: Write a program to implement insertion sort in C language.
#include <stdio.h>

void insert(int a[], int n) /* function to sort an array with insertion sort */
{
    int i, j, temp;
    for (i = 1; i < n; i++) {
        temp = a[i];
        j = i - 1;

        while (j >= 0 && temp <= a[j]) /* Move the elements greater than temp one position ahead of their current position */
        {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);

    return 0;
}
Output:

Before sorting array elements are -
12 31 25 8 32 17
After sorting array elements are -
8 12 17 25 31 32

Heap Sort

Heap Sort Algorithm

Heap sort processes the elements by creating the min-heap or max-heap using the elements of the
given array. Min-heap or max-heap represents the ordering of array in which the root element
represents the minimum or maximum element of the array.


Heap sort repeatedly performs two main operations -

o Build a heap H, using the elements of the array.

o Repeatedly delete the root element of the heap formed in the 1st phase.

A heap is a complete binary tree, and a binary tree is a tree in which a node can have at most
two children. A complete binary tree is a binary tree in which all the levels except the last level, i.e., the
leaf level, are completely filled, and all the nodes are left-justified.
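For instance (a small illustrative array, not taken from the example that follows), the array {50, 30, 40, 10, 20} represents a max-heap stored level by level: the root 50 is at index 0, its children 30 and 40 are at indices 1 and 2 (i.e. 2*0+1 and 2*0+2), and the children of 30, namely 10 and 20, are at indices 3 and 4; every parent is greater than or equal to its children.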

Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate the
elements one by one from the heap part of the list, and then insert them into the sorted part of the
list.

Algorithm

HeapSort(arr)
   BuildMaxHeap(arr)
   for i = length(arr) to 2
      swap arr[1] with arr[i]
      heap_size[arr] = heap_size[arr] - 1
      MaxHeapify(arr, 1)
End

BuildMaxHeap(arr)

BuildMaxHeap(arr)
   heap_size(arr) = length(arr)
   for i = length(arr)/2 to 1
      MaxHeapify(arr, i)
End

MaxHeapify(arr, i)

MaxHeapify(arr, i)
   L = left(i)
   R = right(i)
   if L ≤ heap_size[arr] and arr[L] > arr[i]
      largest = L
   else
      largest = i
   if R ≤ heap_size[arr] and arr[R] > arr[largest]
      largest = R
   if largest != i
      swap arr[i] with arr[largest]
      MaxHeapify(arr, largest)
End

Working of Heap sort Algorithm

In heap sort, there are basically two phases involved in sorting the elements using the heap
sort algorithm; they are as follows -

o The first step includes the creation of a heap by adjusting the elements of the array.

o After the creation of the heap, remove the root element of the heap repeatedly by shifting
it to the end of the array, and then restore the heap structure with the remaining elements.

First, we have to construct a heap from the given array and convert it into max heap.

After converting the given heap into max heap, the array elements are -

Next, we have to delete the root element (89) from the max heap. To delete this node, we have to
swap it with the last node, i.e. (11). After deleting the root element, we again have to heapify it to
convert it into max heap.

After swapping the array element 89 with 11, and converting the heap into max-heap, the elements
of array are -


In the next step, again, we have to delete the root element (81) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (54). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 81 with 54 and converting the heap into max-heap, the elements
of array are -

In the next step, we have to delete the root element (76) from the max heap again. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into max-heap, the elements of
array are -

In the next step, again we have to delete the root element (54) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (14). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 54 with 14 and converting the heap into max-heap, the elements
of array are -

In the next step, again we have to delete the root element (22) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (11). After deleting the root element, we again have
to heapify it to convert it into max heap.


After swapping the array element 22 with 11 and converting the heap into max-heap, the elements
of array are -

In the next step, again we have to delete the root element (14) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 14 with 9 and converting the heap into max-heap, the elements of
array are -

In the next step, again we have to delete the root element (11) from the max heap. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 11 with 9, the elements of array are -

Now, heap has only one element left. After deleting it, heap will be empty.

After completion of sorting, the array elements are -

Time complexity of Heap sort in the best case, average case, and worst case

1. Time Complexity

Case Time Complexity


Best Case O(n log n)

Average Case O(n log n)

Worst Case O(n log n)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of heap sort is O(n log n).

o Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
heap sort is O(n log n).

o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of heap sort is O(n
log n).

The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and worst
case). The height of a complete binary tree having n elements is log n.

2. Space Complexity

Space Complexity O(1)

Stable NO

o The space complexity of Heap sort is O(1).

Implementation of Heapsort

Program: Write a program to implement heap sort in C language.

#include <stdio.h>

/* function to heapify a subtree. Here 'i' is the
   index of the root node in array a[], and 'n' is the size of the heap. */
void heapify(int a[], int n, int i)
{
    int largest = i;       // Initialize largest as root
    int left = 2 * i + 1;  // left child
    int right = 2 * i + 2; // right child
    // If left child is larger than root
    if (left < n && a[left] > a[largest])
        largest = left;
    // If right child is larger than the largest so far
    if (right < n && a[right] > a[largest])
        largest = right;
    // If root is not largest
    if (largest != i) {
        // swap a[i] with a[largest]
        int temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;
        heapify(a, n, largest);
    }
}

/* Function to implement the heap sort */
void heapSort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);
    // One by one extract an element from the heap
    for (int i = n - 1; i >= 0; i--) {
        /* Move current root element to end */
        // swap a[0] with a[i]
        int temp = a[0];
        a[0] = a[i];
        a[i] = temp;

        heapify(a, i, 0);
    }
}

/* function to print the array elements */
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
    {
        printf("%d", arr[i]);
        printf(" ");
    }
}

int main()
{
    int a[] = {48, 10, 23, 43, 28, 26, 1};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    heapSort(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}

Output

Before sorting array elements are -
48 10 23 43 28 26 1
After sorting array elements are -
1 10 23 26 28 43 48
