Data Structure

Data structures are essential for organizing, processing, and storing data efficiently in computer systems, with various types including linear (arrays, linked lists) and non-linear structures (trees, graphs). They differ from data types as they can hold multiple data types and have concrete implementations with time complexity considerations. Understanding data structures is crucial for effective algorithm design and data management in software development.

Data Structure & Algorithm

What is Data Structure:


A data structure is a way of storing and organizing data on a computer so that it can be
accessed and updated efficiently. Beyond organizing data, a data structure also supports
processing, retrieving, and storing it. Basic and advanced data structures are used in almost
every program or software system ever developed, so good knowledge of them is essential.
Data structures are an integral part of how computers arrange data in memory, and they are
responsible for organizing, processing, accessing, and storing data efficiently.

How Data Structure varies from Data Type:

We have already learned about data structures. People often confuse data types with data
structures, so let's look at a few differences between the two to make it clear.

Data Type vs. Data Structure:

• Definition: A data type is the form of a variable to which a value can be assigned;
it restricts the variable to values of that type only. A data structure is a collection
of different kinds of data; that entire data can be represented as an object and used
throughout the program.
• Contents: A data type can hold a value but not data, so it is dataless. A data
structure can hold multiple types of data within a single object.
• Implementation: The implementation of a data type is known as abstract
implementation, whereas a data structure implementation is known as concrete
implementation.
• Time complexity: There is no notion of time complexity for data types. In data
structure objects, time complexity plays an important role.
• Storage: A data type does not store the value of data; it only represents the type of
data that can be stored. In a data structure, the data and its values occupy space in
the computer's main memory, and a data structure can hold different kinds of data
within one single object.
• Examples: Data type examples are int, float, double, etc. Data structure examples
are stack, queue, tree, etc.

Classification of Data Structure:


Data structure has many different uses in our daily life. There are many different data
structures that are used to solve different mathematical and logical problems. By using data
structure, one can organize and process a very large amount of data in a relatively short period.
Let’s look at different data structures that are used in different situations.

Classification of Data Structure

• Linear data structure: Data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and next
adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
• Static data structure: Static data structure has a fixed memory size. It is
easier to access the elements in a static data structure.
An example of this data structure is an array.
• Dynamic data structure: In the dynamic data structure, the size is not
fixed. It can be randomly updated during the runtime which may be
considered efficient concerning the memory (space) complexity of the
code.
Examples of this data structure are queue, stack, etc.
• Non-linear data structure: Data structures where data elements are not placed
sequentially or linearly are called non-linear data structures. In a non-linear data
structure, we cannot traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.

Need Of Data Structure:


The structure of the data and the design of the algorithm are closely related. Data
presentation must be easy to understand so that the developer, as well as the user, can
implement the operations efficiently.
Data structures provide an easy way of organizing, retrieving, managing, and storing data.
Here is a list of needs that data structures address:
1. Data structure modification is easy.
2. It requires less time.
3. It saves storage/memory space.
4. Data representation is easy.
5. It provides easy access to large databases.

Arrays
An array is a linear data structure and it is a collection of items stored at contiguous memory
locations. The idea is to store multiple items of the same type together in one place. It allows
the processing of a large amount of data in a relatively short period. The first element of the
array is indexed by a subscript of 0. There are different operations possible in an array, like
Searching, Sorting, Inserting, Traversing, Reversing, and Deleting.

Array
Characteristics of an Array:
An array has various characteristics which are as follows:
• Arrays use an index-based data structure which helps to identify each of the
elements in an array easily using the index.
• If a user wants to store multiple values of the same data type, then the array can be
utilized efficiently.
• An array can also handle complex data structures by storing data in a two-
dimensional array.
• An array is also used to implement other data structures like Stacks, Queues, Heaps,
Hash tables, etc.
• The search process in an array can be done very easily.

Operations performed on array:

• Initialization: An array can be initialized with values at the time of declaration or


later using an assignment statement.
• Accessing elements: Elements in an array can be accessed by their index, which
starts from 0 and goes up to the size of the array minus one.
• Searching for elements: Arrays can be searched for a specific element using linear
search or binary search algorithms.
• Sorting elements: Elements in an array can be sorted in ascending or descending
order using algorithms like bubble sort, insertion sort, or quick sort.
• Inserting elements: Elements can be inserted into an array at a specific location,
but this operation can be time-consuming because it requires shifting existing
elements in the array.
• Deleting elements: Elements can be deleted from an array by shifting the elements
that come after it to fill the gap.
• Updating elements: Elements in an array can be updated or modified by assigning
a new value to a specific index.
• Traversing elements: The elements in an array can be traversed in order, visiting
each element once.
These are some of the most common operations performed on arrays. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used.
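As a rough sketch of these operations in Java (the class and variable names below are our own, purely for illustration, not from any particular library):

```java
import java.util.Arrays;

public class ArrayOps {
    public static void main(String[] args) {
        int[] a = {5, 2, 8, 1};            // initialization at declaration

        System.out.println(a[2]);          // accessing by index: prints 8

        // linear search for a value; yields its index, or -1 if absent
        int target = 8, found = -1;
        for (int i = 0; i < a.length; i++)
            if (a[i] == target) { found = i; break; }
        System.out.println(found);         // prints 2

        // insertion at index 1 requires shifting later elements right,
        // which is why insertion into an array costs O(n) time
        int[] b = new int[a.length + 1];
        int pos = 1, value = 9;
        for (int i = 0; i < pos; i++) b[i] = a[i];
        b[pos] = value;
        for (int i = pos; i < a.length; i++) b[i + 1] = a[i];

        // traversal visits each element exactly once
        System.out.println(Arrays.toString(b)); // [5, 9, 2, 8, 1]
    }
}
```

Note how insertion allocates a larger array and shifts elements, while access by index is a single O(1) step.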
Applications of Array:
Different applications of an array are as follows:
• An array is used in solving matrix problems.
• Database records are also implemented by an array.
• It helps in implementing a sorting algorithm.
• It is also used to implement other data structures like Stacks, Queues, Heaps, Hash
tables, etc.
• An array can be used for CPU scheduling.
• Can be applied as a lookup table in computers.
• Arrays can be used in speech processing where every speech signal is an array.
• The computer screen itself is displayed using an array; here a
multidimensional array is used.
• The array is used in many management systems like a library, students, parliament,
etc.
• Arrays are used in online ticket booking systems. Contacts on a cell phone are
also stored in an array.
• In games like online chess, an array can store the player's past moves as well as
the current move, giving a record of the position.
• Images of a specific dimension, such as 360x1200, are saved as arrays in Android.
Real-Life Applications of Array:
• An array is frequently used to store data for mathematical computations.
• It is used in image processing.
• It is also used in record management.
• Book pages are also real-life examples of an array.
• It is used in ordering boxes as well.

Linked list
A linked list is a linear data structure in which elements are not stored at contiguous memory
locations. The elements in a linked list are linked using pointers as shown in the below image:
Types of linked lists:
• Singly-linked list
• Doubly linked list
• Circular linked list
• Doubly circular linked list

Linked List

Characteristics of a Linked list:


A linked list has various characteristics which are as follows:
• A linked list uses extra memory to store links.
• During the initialization of the linked list, there is no need to know the size of the
elements.
• Linked lists are used to implement stacks, queues, graphs, etc.
• The first node of the linked list is called the Head.
• The next pointer of the last node always points to NULL.
• In a linked list, insertion and deletion are possible easily.
• Each node of the linked list consists of a pointer/link which is the address of the
next node.
• Linked lists can shrink or grow at any point in time easily.
Operations performed on Linked list:

A linked list is a linear data structure where each node contains a value and a reference to the
next node. Here are some common operations performed on linked lists:

• Initialization: A linked list can be initialized by creating a head node with a


reference to the first node. Each subsequent node contains a value and a reference to
the next node.
• Inserting elements: Elements can be inserted at the head, tail, or at a specific
position in the linked list.
• Deleting elements: Elements can be deleted from the linked list by updating the
reference of the previous node to point to the next node, effectively removing the
current node from the list.
• Searching for elements: Linked lists can be searched for a specific element by
starting from the head node and following the references to the next nodes until the
desired element is found.
• Updating elements: Elements in a linked list can be updated by modifying the
value of a specific node.
• Traversing elements: The elements in a linked list can be traversed by starting
from the head node and following the references to the next nodes until the end of
the list is reached.
• Reversing a linked list: The linked list can be reversed by updating the references
of each node so that they point to the previous node instead of the next node.
These are some of the most common operations performed on linked lists. The specific
operations and algorithms used may vary based on the requirements of the problem and the
programming language used.
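The node structure, head insertion, and reversal described above can be sketched in Java as follows (a minimal illustration; the class and method names are ours):

```java
public class LinkedListDemo {
    // each node holds a value and a reference to the next node
    static class Node {
        int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    // insert a new node at the head: O(1), no shifting of elements needed
    static Node insertAtHead(Node head, int value) {
        Node n = new Node(value);
        n.next = head;
        return n;
    }

    // reverse the list by re-pointing each node at its predecessor
    static Node reverse(Node head) {
        Node prev = null;
        while (head != null) {
            Node next = head.next;  // remember the rest of the list
            head.next = prev;       // point current node backwards
            prev = head;
            head = next;
        }
        return prev;                // prev is the new head
    }

    public static void main(String[] args) {
        Node head = null;
        for (int v : new int[]{3, 2, 1}) head = insertAtHead(head, v);
        // list is now 1 -> 2 -> 3; traversal follows the next references
        for (Node p = head; p != null; p = p.next) System.out.print(p.value + " ");
        System.out.println();
        head = reverse(head);       // list is now 3 -> 2 -> 1
        for (Node p = head; p != null; p = p.next) System.out.print(p.value + " ");
    }
}
```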
Applications of the Linked list:
Different applications of linked lists are as follows:
• Linked lists are used to implement stacks, queues, graphs, etc.
• Linked lists are used to perform arithmetic operations on long integers.
• It is used for the representation of sparse matrices.
• It is used in the linked allocation of files.
• It helps in memory management.
• It is used in the representation of Polynomial Manipulation where each polynomial
term represents a node in the linked list.
• Linked lists are used to display image containers. Users can visit past, current, and
next images.
• They are used to store the history of the visited page.
• They are used to perform undo operations.
• Linked lists are used in software development, where they can indicate the correct
syntax of a tag.
• Linked lists are used to display social media feeds.
Real-Life Applications of a Linked list:
• A linked list is used in Round-Robin scheduling to keep track of the turn in
multiplayer games.
• It is used in image viewers. The previous and next images are linked, and hence can
be accessed by the previous and next buttons.
• In a music playlist, songs are linked to the previous and next songs.

Stack
Stack is a linear data structure that follows a particular order in which the operations are
performed. The order is LIFO (Last In First Out). Entering and retrieving data is possible from
only one end. The entering and retrieving of data is also called push and pop operation in a
stack. There are different operations possible in a stack like reversing a stack using recursion,
Sorting, Deleting the middle element of a stack, etc.

Characteristics of a Stack:
Stack has various different characteristics which are as follows:
• Stack is used in many different algorithms like Tower of Hanoi, tree traversal,
recursion, etc.
• Stack is implemented through an array or linked list.
• It follows the Last In First Out principle, i.e., the element inserted last is
popped first, and the element inserted first is popped last.
• The insertion and deletion are performed at one end i.e. from the top of the stack.
• If the allocated space for the stack is full and one still attempts to add more
elements, a stack overflow occurs.
Applications of Stack:
Different applications of Stack are as follows:
• The stack data structure is used in the evaluation and conversion of arithmetic
expressions.
• Stack is used in Recursion.
• It is used for parenthesis checking.
• While reversing a string, the stack is used as well.
• Stack is used in memory management.
• It is also used for processing function calls.
• The stack is used to convert expressions from infix to postfix.
• The stack is used to perform undo as well as redo operations in word processors.
• The stack is used in virtual machines like JVM.
• The stack is used in media players; it is useful for playing the next and previous songs.
• The stack is used in recursion operations.

Operation performed on stack:

A stack is a linear data structure that implements the Last-In-First-Out (LIFO) principle. Here
are some common operations performed on stacks:
• Push: Elements can be pushed onto the top of the stack, adding a new element to
the top of the stack.
• Pop: The top element can be removed from the stack by performing a pop
operation, effectively removing the last element that was pushed onto the stack.
• Peek: The top element can be inspected without removing it from the stack using a
peek operation.
• IsEmpty: A check can be made to determine if the stack is empty.
• Size: The number of elements in the stack can be determined using a size operation.
These are some of the most common operations performed on stacks. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Stacks are commonly used in applications such as evaluating expressions,
implementing function call stacks in computer programs, and many others.
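In Java these operations are available out of the box through `java.util.Deque`; the following minimal sketch exercises each one:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);                       // push: add to the top
        stack.push(2);
        stack.push(3);
        System.out.println(stack.peek());    // peek: inspect the top -> 3
        System.out.println(stack.pop());     // pop: last in, first out -> 3
        System.out.println(stack.size());    // size -> 2
        System.out.println(stack.isEmpty()); // isEmpty -> false
    }
}
```

`ArrayDeque` is generally preferred over the legacy `java.util.Stack` class because it is unsynchronized and faster.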
Real-Life Applications of Stack:
• A real-life example of a stack is a pile of eating plates arranged one above the
other. When you remove a plate from the pile, you take the plate at the top, which
is exactly the plate that was added most recently. If you want the plate at the
bottom of the pile, you must first remove all the plates above it.
• Browsers use stack data structures to keep track of previously visited sites.
• Call log in mobile also uses stack data structure.

Queue
Queue is a linear data structure that follows a particular order in which the operations are
performed. The order is First In First Out (FIFO) i.e. the data item stored first will be accessed
first. In this, entering and retrieving data is not done from only one end. An example of a queue
is any queue of consumers for a resource where the consumer that came first is served first.
Different operations are performed on a Queue like Reversing a Queue (with or without using
recursion), Reversing the first K elements of a Queue, etc. A few basic operations performed
in a Queue are enqueue, dequeue, front, rear, etc.
Characteristics of a Queue:
The queue has various different characteristics which are as follows:
• The queue is a FIFO (First In First Out) structure.
• To remove the last element of the queue, all the elements inserted before it
must be removed first.
• A queue is an ordered list of elements of similar data types.

Applications of Queue:
Different applications of Queue are as follows:
• Queue is used for handling website traffic.
• It helps to maintain the playlist in media players.
• Queue is used in operating systems for handling interrupts.
• It helps in serving requests on a single shared resource, like a printer, CPU task
scheduling, etc.
• It is used in the asynchronous transfer of data e.g. pipes, file IO, and sockets.
• Queues are used for job scheduling in the operating system.
• On social media, queues are used to upload multiple photos or videos.
• A queue data structure is used to send e-mails.
• In the Windows operating system, queues are used to switch between multiple
applications.

Operation performed on queue:

A queue is a linear data structure that implements the First-In-First-Out (FIFO) principle. Here
are some common operations performed on queues:
• Enqueue: Elements can be added to the back of the queue, adding a new element to
the end of the queue.
• Dequeue: The front element can be removed from the queue by performing a
dequeue operation, effectively removing the first element that was added to the
queue.
• Peek: The front element can be inspected without removing it from the queue using
a peek operation.
• IsEmpty: A check can be made to determine if the queue is empty.
• Size: The number of elements in the queue can be determined using a size
operation.
These are some of the most common operations performed on queues. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Queues are commonly used in applications such as scheduling tasks, managing
communication between processes, and many others.
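Java's `java.util.Queue` interface provides these operations directly; a minimal FIFO sketch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.add("job1");                  // enqueue at the back
        queue.add("job2");
        queue.add("job3");
        System.out.println(queue.peek());   // inspect the front -> job1
        System.out.println(queue.poll());   // dequeue: first in, first out -> job1
        System.out.println(queue.size());   // -> 2
        System.out.println(queue.isEmpty()); // -> false
    }
}
```

Note the contrast with the stack example: the element removed first is the one added first, not last.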

Real-Life Applications of Queue:


• A real-world example of a queue is a single-lane one-way road, where the vehicle
that enters first will exit first.
• A more real-world example can be seen in the queue at the ticket windows.
• A cashier line in a store is also an example of a queue.
• People on an escalator also form a queue.

Tree
A tree is a non-linear and hierarchical data structure where the elements are arranged in a tree-
like structure. In a tree, the topmost node is called the root node. Each node contains some
data, and data can be of any type. It consists of a central node, structural nodes, and sub-nodes
which are connected via edges. Different tree data structures allow quicker and easier access to
the data as it is a non-linear data structure. A tree has various terminologies like Node, Root,
Edge, Height of a tree, Degree of a tree, etc.
There are different types of Tree-like
• Binary Tree,
• Binary Search Tree,
• AVL Tree,
• B-Tree, etc.

Tree
Characteristics of a Tree:

The tree has various different characteristics which are as follows:


• A tree is also known as a Recursive data structure.
• In a tree, the Height of the root can be defined as the longest path from the root
node to the leaf node.
• In a tree, one can also calculate the depth from the top to any node. The root node
has a depth of 0.

Applications of Tree:

Different applications of Tree are as follows:


• Heap is a tree data structure that is implemented using arrays and used to implement
priority queues.
• B-Tree and B+ Tree are used to implement indexing in databases.
• Syntax Tree helps in scanning, parsing, generation of code, and evaluation of
arithmetic expressions in Compiler design.
• K-D Tree is a space partitioning tree used to organize points in K-dimensional
space.
• Spanning trees are used in routers in computer networks.

Operation performed on tree:

A tree is a non-linear data structure that consists of nodes connected by edges. Here are some
common operations performed on trees:
• Insertion: New nodes can be added to the tree to create a new branch or to increase
the height of the tree.
• Deletion: Nodes can be removed from the tree by updating the references of the
parent node to remove the reference to the current node.
• Search: Elements can be searched for in a tree by starting from the root node and
traversing the tree based on the value of the current node until the desired node is
found.
• Traversal: The elements in a tree can be traversed in several different ways,
including in-order, pre-order, and post-order traversal.
• Height: The height of the tree can be determined by counting the number of edges
from the root node to the furthest leaf node.
• Depth: The depth of a node can be determined by counting the number of edges
from the root node to the current node.
• Balancing: The tree can be balanced to ensure that the height of the tree is
minimized and the distribution of nodes is as even as possible.
These are some of the most common operations performed on trees. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Trees are commonly used in applications such as searching, sorting, and storing
hierarchical data.
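Insertion, search, and traversal can be illustrated with a binary search tree, one of the tree types listed above (a minimal sketch; the class and method names are ours):

```java
public class BstDemo {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // insertion: smaller keys go into the left subtree, larger into the right
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else if (key > root.key) root.right = insert(root.right, key);
        return root;
    }

    // search descends one branch per comparison, so cost is
    // proportional to the height of the tree
    static boolean search(Node root, int key) {
        if (root == null) return false;
        if (key == root.key) return true;
        return key < root.key ? search(root.left, key) : search(root.right, key);
    }

    // in-order traversal of a BST visits the keys in sorted order
    static void inorder(Node root) {
        if (root == null) return;
        inorder(root.left);
        System.out.print(root.key + " ");
        inorder(root.right);
    }

    public static void main(String[] args) {
        Node root = null;
        for (int k : new int[]{8, 3, 10, 1, 6}) root = insert(root, k);
        inorder(root);                        // prints 1 3 6 8 10
        System.out.println();
        System.out.println(search(root, 6));  // true
        System.out.println(search(root, 7));  // false
    }
}
```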
Real-Life Applications of Tree:

• In real life, tree data structure helps in Game Development.


• It also helps in indexing in databases.
• A Decision Tree is an efficient machine-learning tool, commonly used in decision
analysis. It has a flowchart-like structure that helps to understand data.
• Domain Name Server also uses a tree data structure.
• The most common use case of a tree is any social networking site.

Graph

A graph is a non-linear data structure that consists of vertices (or nodes) and edges. It
consists of a finite set of vertices and a set of edges, each of which connects a pair of
vertices. Graphs are used to solve some of the most challenging and complex programming
problems. Graph terminology includes Path, Degree, Adjacent vertices, Connected
components, etc.

Graph

Characteristics of Graph:

The graph has various different characteristics which are as follows:


• The maximum distance from a vertex to all the other vertices is considered the
Eccentricity of that vertex.
• The vertex having minimum Eccentricity is considered the central point of the
graph.
• The minimum value of Eccentricity from all vertices is considered the radius of a
connected graph.
Applications of Graph:

Different applications of Graphs are as follows:


• Graphs are used to represent the flow of computation.
• They are used in modeling networks.
• The operating system uses a Resource Allocation Graph.
• Graphs also model the World Wide Web, where the web pages represent the nodes.

Operation performed on Graph:

A graph is a non-linear data structure consisting of nodes and edges. Here are some common
operations performed on graphs:
• Add Vertex: New vertices can be added to the graph to represent a new node.
• Add Edge: Edges can be added between vertices to represent a relationship
between nodes.
• Remove Vertex: Vertices can be removed from the graph by updating the references
of adjacent vertices to remove the reference to the current vertex.
• Remove Edge: Edges can be removed by updating the references of the adjacent
vertices to remove the reference to the current edge.
• Depth-First Search (DFS): A graph can be traversed using a depth-first search by
visiting the vertices in a depth-first manner.
• Breadth-First Search (BFS): A graph can be traversed using a breadth-first search
by visiting the vertices in a breadth-first manner.
• Shortest Path: The shortest path between two vertices can be determined using
algorithms such as Dijkstra’s algorithm or A* algorithm.
• Connected Components: The connected components of a graph can be determined
by finding sets of vertices that are connected to each other but not to any other
vertices in the graph.
• Cycle Detection: Cycles in a graph can be detected by checking for back edges
during a depth-first search.
These are some of the most common operations performed on graphs. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Graphs are commonly used in applications such as computer networks, social
networks, and routing problems.
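The Add Edge and BFS operations described above can be sketched with an adjacency-list representation (a minimal illustration using a small hypothetical graph of four vertices):

```java
import java.util.*;

public class GraphDemo {
    public static void main(String[] args) {
        // adjacency list: each vertex maps to the vertices it connects to
        Map<Integer, List<Integer>> adj = new HashMap<>();
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {2, 3}};
        for (int[] e : edges) {
            // add edge in both directions (undirected graph)
            adj.computeIfAbsent(e[0], k -> new ArrayList<>()).add(e[1]);
            adj.computeIfAbsent(e[1], k -> new ArrayList<>()).add(e[0]);
        }

        // breadth-first search from vertex 0: visit vertices level by level,
        // using a queue for the frontier and a set to avoid revisiting
        Set<Integer> visited = new HashSet<>();
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(0);
        visited.add(0);
        List<Integer> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            int v = queue.poll();
            order.add(v);
            for (int w : adj.getOrDefault(v, List.of()))
                if (visited.add(w)) queue.add(w);
        }
        System.out.println(order); // prints [0, 1, 2, 3]
    }
}
```

A depth-first search differs only in replacing the queue with a stack (or recursion).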
Real-Life Applications of Graph:
• One of the most common real-world examples of a graph is Google Maps where
cities are located as vertices and paths connecting those vertices are located as
edges of the graph.
• A social network is also one real-world example of a graph where every person on
the network is a node, and all of their friendships on the network are the edges of
the graph.
• A graph is also used to study molecules in physics and chemistry.
Advantages of data structure

1. Improved data organization and storage efficiency.


2. Faster data retrieval and manipulation.
3. Facilitates the design of algorithms for solving complex problems.
4. Eases the task of updating and maintaining the data.
5. Provides a better understanding of the relationships between data elements.

Disadvantage of Data Structure

1. Increased computational and memory overhead.


2. Difficulty in designing and implementing complex data structures.
3. Limited scalability and flexibility.
4. Complexity in debugging and testing.
5. Difficulty in modifying existing data structures.

Time Complexity and Space Complexity

Generally, there is always more than one way to solve a problem in computer science with
different algorithms. Therefore, it is highly required to use a method to compare the solutions
in order to judge which one is more optimal. The method must be:
• Independent of the machine and its configuration on which the algorithm is
running.
• Shows a direct correlation with the number of inputs.
• Can distinguish two algorithms clearly without ambiguity.

Time Complexity

The time complexity of an algorithm quantifies the amount of time taken by an algorithm to
run as a function of the length of the input. Note that the time to run is a function of the
length of the input and not the actual execution time of the machine on which the algorithm
is running.

Definition: A valid algorithm takes a finite amount of time to execute. The time required
by an algorithm to solve a given problem is called the time complexity of the algorithm.
Time complexity is a very useful measure in algorithm analysis.
It is the time needed for the completion of an algorithm. To estimate the time complexity, we
need to consider the cost of each fundamental instruction and the number of times the
instruction is executed.
Example 1: Addition of two scalar variables.
Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C
The addition of two scalar numbers requires one addition operation, so the time complexity
of this algorithm is constant: T(n) = O(1).
In order to calculate the time complexity of an algorithm, it is assumed that a constant
time c is taken to execute one operation, and then the total number of operations for an
input of length N is calculated. Consider an example to understand the process of
calculation: suppose the problem is to find whether a pair (X, Y) exists in an array A of N
elements whose sum is Z. The simplest idea is to consider every pair and check whether it
satisfies the given condition.
The pseudo-code is as follows:
int a[n];
for (int i = 0; i < n; i++)
    cin >> a[i];

for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (i != j && a[i] + a[j] == z)
            return true;

return false;

C++:
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find a pair in the given
// array whose sum is equal to z
bool findPair(int a[], int n, int z)
{
    // Iterate through all the pairs
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            // Check if the sum of the pair
            // (a[i], a[j]) is equal to z
            if (i != j && a[i] + a[j] == z)
                return true;

    return false;
}

// Driver Code
int main()
{
    // Given Input
    int a[] = { 1, -2, 1, 0, 5 };
    int z = 0;
    int n = sizeof(a) / sizeof(a[0]);

    // Function Call
    if (findPair(a, n, z))
        cout << "True";
    else
        cout << "False";
    return 0;
}

Java:
// Java program for the above approach
import java.util.*;

class GFG {

    // Function to find a pair in the given
    // array whose sum is equal to z
    static boolean findPair(int a[], int n, int z)
    {
        // Iterate through all the pairs
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                // Check if the sum of the pair
                // (a[i], a[j]) is equal to z
                if (i != j && a[i] + a[j] == z)
                    return true;

        return false;
    }

    // Driver code
    public static void main(String[] args)
    {
        // Given Input
        int a[] = { 1, -2, 1, 0, 5 };
        int z = 0;
        int n = a.length;

        // Function Call
        if (findPair(a, n, z))
            System.out.println("True");
        else
            System.out.println("False");
    }
}

Output: False
Assuming that each of the operations in the computer takes approximately constant time, let it
be c. The number of lines of code executed actually depends on the value of Z. During
analyses of the algorithm, mostly the worst-case scenario is considered, i.e., when there is no
pair of elements with sum equals Z. In the worst case,
• N*c operations are required for input.
• The outer loop i loop runs N times.
• For each i, the inner loop j loop runs N times.
So total execution time is N*c + N*N*c + c. Now ignore the lower order terms since the lower
order terms are relatively insignificant for large input, therefore only the highest order term is
taken (without constant) which is N*N in this case. Different notations are used to describe the
limiting behavior of a function, but since the worst case is taken so big-O notation will be used
to represent the time complexity.
Hence, the time complexity is O(N^2) for the above algorithm. Note that the time complexity
depends solely on the number of elements in array A, i.e., the input length; if the length
of the array increases, the time of execution will also increase.
Order of growth is how the time of execution depends on the length of the input. In the above
example, it is clearly evident that the time of execution quadratically depends on the length of
the array. Order of growth will help to compute the running time with ease.

Another Example: Let’s calculate the time complexity of the below algorithm:
C++:

int count = 0;
for (int i = N; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count++;

Java:

int count = 0;
for (int i = N; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count++;

This is a tricky case. At first glance, it seems like the complexity is O(N * log N): N for
the j loop and log(N) for the i loop. But that is wrong. Let's see why.
Think about how many times count++ will run.
• When i = N, it will run N times.
• When i = N / 2, it will run N / 2 times.
• When i = N / 4, it will run N / 4 times.
• And so on.
The total number of times count++ will run is N + N/2 + N/4 + ... + 1, a geometric series
that sums to roughly 2 * N. So the time complexity will be O(N).
Some general time complexities are listed below with the input range for which they are
accepted in competitive programming:

Input Length   Worst Accepted Time Complexity   Usual Type of Solutions

10-12          O(N!)                   Recursion and backtracking
15-18          O(2^N * N^2)            Recursion, backtracking, and bit manipulation
18-22          O(2^N * N)              Recursion, backtracking, and bit manipulation
30-40          O(2^(N/2) * N)          Meet in the middle, Divide and Conquer
100            O(N^4)                  Dynamic programming, Constructive
400            O(N^3)                  Dynamic programming, Constructive
2K             O(N^2 * log N)          Dynamic programming, Binary Search, Sorting, Divide and Conquer
10K            O(N^2)                  Dynamic programming, Graph, Trees, Constructive
1M             O(N * log N)            Sorting, Binary Search, Divide and Conquer
100M           O(N), O(log N), O(1)    Constructive, Mathematical, Greedy Algorithms

Understanding Time Complexity with Simple Examples

A lot of students get confused while understanding the concept of time complexity, but in this
article, we will explain it with a very simple example.
Q. Imagine a classroom of 100 students in which you gave your pen to one person. You
have to find that pen without knowing to whom you gave it.
Here are some ways to find the pen and what the O order is.
• O(n^2): You go and ask the first person in the class if he has the pen. Then you ask
this person about each of the other 99 people in the classroom, and so on.
This is what we call O(n^2).
• O(n): Going and asking each student individually is O(N).
• O(log n): Now I divide the class into two groups, then ask: “Is it on the left side, or
the right side of the classroom?” Then I take that group, divide it into two, and ask
again, and so on. Repeat the process till you are left with one student who has your
pen. This is what we mean by O(log n).
I might need to do:
• The O(n^2) search if only one student knows on which student the pen is hidden.
• The O(n) search if one student had the pen and only they knew it.
• The O(log n) search if all the students knew, but would only tell me if I guessed the
right side.
The O above is called Big-O, which is an asymptotic notation. There are other asymptotic
notations, such as Theta and Omega.

NOTE: We are interested in the rate of growth over time with respect to the inputs taken during
the program execution.

Is the Time Complexity of an Algorithm/Code the same as the Running/Execution Time of


Code?
The Time Complexity of an algorithm/code is not equal to the actual time required to execute a
particular code, but the number of times a statement executes. We can prove this by using
the time command.
For example: Write code in C/C++ or any other language to find the maximum between N
numbers, where N varies from 10, 100, 1000, and 10000. For Linux based operating system
(Fedora or Ubuntu), use the below commands:
To compile the program: gcc program.c -o program
To execute the program: time ./program
You will get surprising results i.e.:
• For N = 10: you may get 0.5 ms time,
• For N = 10,000: you may get 0.2 ms time.
• Also, you will get different timings on different machines. You may not even get the
same timings on the same machine for the same code; the reason behind that is the
current load on the machine.
So, we can say that the actual time required to execute code is machine-dependent (whether
you are using Pentium 1 or Pentium 5) and also it considers network load if your machine is in
LAN/WAN.

What is meant by the Time Complexity of an Algorithm?


Now, the question arises if time complexity is not the actual time required to execute the code,
then what is it?
The answer is:
Instead of measuring actual time required in executing each statement in the code, Time
Complexity considers how many times each statement executes.

Example 1: Consider the below simple code to print Hello World

C++:
#include <iostream>
using namespace std;

int main(){
cout << "Hello World";
return 0;
}
Output
Hello World
Time Complexity: In the above code “Hello World” is printed only once on the screen.
So, the time complexity is constant: O(1), i.e. a constant amount of time is required to
execute the code, no matter which operating system or machine configuration you are using.
Auxiliary Space: O(1)

Example 2:

C++:
#include <iostream>
using namespace std;

int main(){

int i, n = 8;
for (i = 1; i <= n; i++) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Time Complexity: In the above code, “Hello World !!!” is printed n times on the screen (here
n = 8, but the value of n can change).
So, the time complexity is linear: O(n), i.e. a linear amount of time is required to
execute the code.
Auxiliary Space: O(1)

Example 3:

C++:
#include <iostream>
using namespace std;

int main(){

int i, n = 8;
for (i = 1; i <= n; i=i*2) {
cout << "Hello World !!!\n";
}
return 0;
}

Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Time Complexity: O(log2(n))
Auxiliary Space: O(1)

Example 4:

C++:

#include <iostream>
#include <cmath>
using namespace std;

int main()
{

int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
cout << "Hello World !!!\n";
}
return 0;
}

Output
Hello World !!!
Hello World !!!
Time Complexity: O(log(log n))
Auxiliary Space: O(1)

How to Find the Time Complexity of an Algorithm?


Now let us see some other examples and the process to find the time complexity of an algorithm:
Example: Let us consider a model machine that has the following specifications:
• Single processor
• 32 bit
• Sequential execution
• 1 unit time for arithmetic and logical operations
• 1 unit time for assignment and return statements

Q1. Find the Sum of 2 numbers on the above machine:
For any machine, the pseudocode to add two numbers will be something like this:

C++:
// Pseudocode : Sum(a, b) { return a + b }
#include <iostream>
using namespace std;

int sum(int a,int b){


return a+b;
}

int main() {
int a = 5, b = 6;
cout<<sum(a,b)<<endl;
return 0;
}

Output
11
Time Complexity:
• The above code will take 2 units of time (constant):
• one for the arithmetic operation and
• one for the return statement (as per the above conventions).
• Therefore total cost to perform sum operation (Tsum) = 1 + 1 = 2
• Time Complexity = O(2) = O(1), since 2 is constant
• Auxiliary Space: O(1)

Q2. Find the sum of all elements of a list/array


The pseudocode to do so can be given as:
C++
#include <iostream>
using namespace std;
// A -> array and
// n -> number of elements in array
int list_Sum(int A[], int n){
int sum = 0;
for (int i = 0; i <= n - 1; i++) {
sum = sum + A[i];
}
return sum;
}

int main(){
int A[] = { 5, 6, 1, 2 };
int n = sizeof(A) / sizeof(A[0]);
cout << list_Sum(A, n);
return 0;
}

Java:
// Java code for the above approach

import java.io.*;

class GFG {

static int list_Sum(int[] A, int n)

// A->array and
// n->number of elements in array
{
int sum = 0;
for (int i = 0; i <= n - 1; i++) {
sum = sum + A[i];
}
return sum;
}

public static void main(String[] args){


int[] A = { 5, 6, 1, 2 };
int n = A.length;
System.out.println(list_Sum(A, n));
}
}
To understand the time complexity of the above code, let’s see how much time each statement
will take:
C++:
int list_Sum(int A[], int n){
int sum = 0; // cost=1 no of times=1
for(int i=0; i<n; i++) // cost=2, no of times=n+1 (+1 for the final false condition)
sum = sum + A[i] ; // cost=2 no of times=n
return sum ; // cost=1 no of times=1
}

Therefore the total cost to perform sum operation


Tsum = 1 + 2 * (n+1) + 2 * n + 1 = 4n + 4 = C1 * n + C2 = O(n)
Therefore, the time complexity of the above code is O(n)

Q3. Find the sum of all elements of a matrix


For this one, the complexity is a polynomial equation (quadratic equation for a square matrix)
• Matrix of size n*n => Tsum = a*n^2 + b*n + c
• Since Tsum is of the order of n^2, Time Complexity = O(n^2)

C++:
#include <iostream>
using namespace std;

int main(){
int n = 3;
int m = 3;
int arr[][3]
= { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
int sum = 0;

// Iterating over all 1-D arrays in 2-D array


for (int i = 0; i < n; i++) {

// Summing all elements in ith 1-D array


for (int j = 0; j < m; j++) {

// Adding jth element of ith row


sum += arr[i][j];
}
}
cout << sum << endl;
return 0;
}

Java:
/*package whatever //do not write package name here */

import java.io.*;

class GFG {
public static void main(String[] args)
{
int n = 3;
int m = 3;
int arr[][]
= { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
int sum = 0;

// Iterating over all 1-D arrays in 2-D array


for (int i = 0; i < n; i++) {

// Summing all elements in ith 1-D array


for (int j = 0; j < m; j++) {

// Adding jth element of ith row


sum += arr[i][j];
}
}
System.out.println(sum);
}
}

Output
43
Time Complexity: O(n*m)
The program iterates through all the elements in the 2D array using two nested loops. The outer
loop iterates n times and the inner loop iterates m times for each iteration of the outer loop.
Therefore, the time complexity of the program is O(n*m).
Auxiliary Space: O(n*m)
The program stores the 2D array and a few integer variables. The space required for the 2D
array is n*m integers, and a single integer variable holds the sum of the elements. Therefore,
the auxiliary space complexity of the program is O(n*m + 1), which simplifies to O(n*m).

In conclusion, the time complexity of the program is O(n*m), and the auxiliary space complexity
is also O(n*m).
So from the above examples, we can conclude that the time of execution increases with the type
of operations we make using the inputs.

Space Complexity

Definition:
Problem-solving using a computer requires memory to hold temporary data or final results
while the program is in execution. The amount of memory required by an algorithm to solve a
given problem is called the space complexity of the algorithm.
The space complexity of an algorithm quantifies the amount of space taken by the algorithm to
run as a function of the length of the input. Consider an example: suppose the problem is to
find the frequency of array elements.
To estimate the memory requirement we need to focus on two parts:
(1) A fixed part: It is independent of the input size. It includes memory for instructions (code),
constants, variables, etc.
(2) A variable part: It is dependent on the input size. It includes memory for recursion stack,
referenced variables, etc.

Example: Addition of two scalar variables

Algorithm ADD SCALAR(A, B)

//Description: Perform arithmetic addition of two numbers

//Input: Two scalar variables A and B

//Output: variable C, which holds the addition of A and B

C ← A + B

return C
The addition of two scalar numbers requires one extra memory location to hold the result. Thus
the space complexity of this algorithm is constant, hence S(n) = O(1).
The pseudo-code for the frequency problem mentioned above is as follows:
int freq[n];
int a[n];
for(int i = 0; i<n; i++){
cin>>a[i];
freq[a[i]]++;
}
Below is the implementation of the above approach:
C++:

// C++ program for the above approach


#include <bits/stdc++.h>
using namespace std;

// Function to count frequencies of array items


void countFreq(int arr[], int n){
unordered_map<int, int> freq;

// Traverse through array elements and


// count frequencies
for (int i = 0; i < n; i++)
freq[arr[i]]++;

// Traverse through map and print frequencies


for (auto x : freq)
cout << x.first << " " << x.second << endl;
}

// Driver Code
int main(){
// Given array
int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };
int n = sizeof(arr) / sizeof(arr[0]);

// Function Call
countFreq(arr, n);
return 0;
}
Java:
// Java program for the above approach
import java.util.*;
class GFG{

// Function to count frequencies of array items


static void countFreq(int arr[], int n)
{
HashMap<Integer,Integer> freq = new HashMap<>();

// Traverse through array elements and


// count frequencies
for (int i = 0; i < n; i++) {
if(freq.containsKey(arr[i])){
freq.put(arr[i], freq.get(arr[i])+1);
}
else{
freq.put(arr[i], 1);
}
}

// Traverse through map and print frequencies


for (Map.Entry<Integer,Integer> x : freq.entrySet())
System.out.print(x.getKey()+ " " + x.getValue() +"\n");
}

// Driver Code
public static void main(String[] args)
{
// Given array
int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };
int n = arr.length;

// Function Call
countFreq(arr, n);
}
}

Output
5 1
10 3
20 4
Here two arrays of length N, and a variable i are used in the algorithm, so the total space used
is N * c + N * c + 1 * c = 2N * c + c, where c is one unit of space. For large inputs, the
constant c is insignificant, and it can be said that the space complexity is O(N).
There is also auxiliary space, which is different from space complexity. The main difference
is where space complexity quantifies the total space used by the algorithm, auxiliary space
quantifies the extra space that is used in the algorithm apart from the given input. In the above
example, the auxiliary space is the space used by the freq[] array because that is not part of the
given input. So total auxiliary space is N * c + c which is O(N) only.

What does ‘Space Complexity’ mean: The term Space Complexity is misused for Auxiliary
Space at many places. Following are the correct definitions of Auxiliary Space and Space
Complexity.
Auxiliary Space is the extra space or temporary space used by an algorithm.
The Space Complexity of an algorithm is the total space taken by the algorithm with respect to
the input size. Space complexity includes both auxiliary space and the space used by the input.
For example, if we want to compare standard sorting algorithms on the basis of space, then
Auxiliary Space would be a better criterion than Space Complexity. Merge Sort uses O(n)
auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space. The space
complexity of all these sorting algorithms is O(n), though.
Space complexity is a parallel concept to time complexity. If we need to create an array of size
n, this will require O(n) space. If we create a two-dimensional array of size n*n, this will
require O(n^2) space.
In recursive calls stack space also counts.
Example:

int add (int n){


if (n <= 0){
return 0;
}
return n + add (n-1);
}
Here each call adds a level to the stack:
1. add(4)
2. -> add(3)
3. -> add(2)
4. -> add(1)
5. -> add(0)
Each of these calls is added to the call stack and takes up actual memory,
so it takes O(n) space.
However, just because you have n calls total doesn’t mean it takes O(n) space.
Look at the below function :

int addSequence (int n){


int sum = 0;
for (int i = 0; i < n; i++){
sum += pairSum(i, i+1);
}
return sum;
}

int pairSum(int x, int y){


return x + y;
}

There will be roughly O(n) calls to pairSum. However, those calls do not exist
simultaneously on the call stack, so you only need O(1) space.
Note: It’s necessary to mention that space complexity depends on a variety of things such as
the programming language, the compiler, or even the machine running the algorithm.

Sorting Algorithms

What is Sorting?

A Sorting Algorithm is used to rearrange a given array or list of elements according to a comparison
operator on the elements. The comparison operator is used to decide the new order of elements in
the respective data structure.
For Example: The below list of characters is sorted in increasing order of their ASCII values.
That is, a character with a lower ASCII value is placed before a character with a higher
ASCII value.

Example of Sorting

1. Selection sort is a simple and efficient sorting algorithm that works by repeatedly
selecting the smallest (or largest) element from the unsorted portion of the list and
moving it to the sorted portion of the list.
The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion of
the list and swaps it with the first element of the unsorted part. This process is repeated for the
remaining unsorted portion until the entire list is sorted.
How does Selection Sort Algorithm work?
Lets consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass:
• For the first position in the sorted array, the whole array is traversed from index 0
to 4 sequentially. The first position currently holds 64; after traversing the
whole array, it is clear that 11 is the lowest value.
• Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value
in the array, appears in the first position of the sorted list.

Selection Sort Algorithm | Swapping 1st element with the minimum in array
Second pass:
• For the second position, where 25 is present, again traverse the rest of the array in
a sequential manner.
• After traversing, we found that 12 is the second lowest value in the array and it
should appear at the second place in the array, thus swap these values.

Selection Sort Algorithm | swapping i=1 with the next minimum element

Third Pass:
• Now, for the third place, where 25 is present, again traverse the rest of the array and
find the third least value present in the array.
• While traversing, 22 came out to be the third least value and it should appear at the
third place in the array, thus swap 22 with the element present at the third position.

Selection Sort Algorithm | swapping i=2 with the next minimum element

Fourth pass:
• Similarly, for the fourth position, traverse the rest of the array and find the fourth least
element in the array.
• As 25 is the 4th lowest value, it will be placed at the fourth position.
Selection Sort Algorithm | swapping i=3 with the next minimum element

Fifth Pass:
• At last, the largest value present in the array automatically gets placed at the last
position in the array.
• The resulting array is the sorted array.

Selection Sort Algorithm | Required sorted array

Below is the implementation of the above approach:


C:
// C program for implementation of selection sort
#include <stdio.h>

void swap(int *xp, int *yp)


{
int temp = *xp;
*xp = *yp;
*yp = temp;
}
void selectionSort(int arr[], int n)
{
int i, j, min_idx;

// One by one move boundary of unsorted subarray


for (i = 0; i < n-1; i++)
{
// Find the minimum element in unsorted array
min_idx = i;
for (j = i+1; j < n; j++)
if (arr[j] < arr[min_idx])
min_idx = j;

// Swap the found minimum element with the first element


if(min_idx != i)
swap(&arr[min_idx], &arr[i]);
}
}

/* Function to print an array */


void printArray(int arr[], int size)
{
int i;
for (i=0; i < size; i++)
printf("%d ", arr[i]);
printf("\n");
}

// Driver program to test above functions


int main()
{
int arr[] = {64, 25, 12, 22, 11};
int n = sizeof(arr)/sizeof(arr[0]);
selectionSort(arr, n);
printf("Sorted array: \n");
printArray(arr, n);
return 0;
}
C++:
// C++ program for implementation of
// selection sort
#include <bits/stdc++.h>
using namespace std;

// Function for Selection sort


void selectionSort(int arr[], int n)
{
int i, j, min_idx;

// One by one move boundary of


// unsorted subarray
for (i = 0; i < n - 1; i++) {

// Find the minimum element in


// unsorted array
min_idx = i;
for (j = i + 1; j < n; j++) {
if (arr[j] < arr[min_idx])
min_idx = j;
}

// Swap the found minimum element


// with the first element
if (min_idx != i)
swap(arr[min_idx], arr[i]);
}
}

// Function to print an array


void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        cout << arr[i] << " ";
    cout << endl;
}

// Driver program
int main()
{
int arr[] = { 64, 25, 12, 22, 11 };
int n = sizeof(arr) / sizeof(arr[0]);

// Function Call
selectionSort(arr, n);
cout << "Sorted array: \n";
printArray(arr, n);
return 0;
}

Java:
// Java program for implementation of Selection Sort
import java.io.*;
public class SelectionSort
{
void sort(int arr[])
{
int n = arr.length;

// One by one move boundary of unsorted subarray


for (int i = 0; i < n-1; i++)
{
// Find the minimum element in unsorted array
int min_idx = i;
for (int j = i+1; j < n; j++)
if (arr[j] < arr[min_idx])
min_idx = j;

// Swap the found minimum element with the first


// element
int temp = arr[min_idx];
arr[min_idx] = arr[i];
arr[i] = temp;
}
}

// Prints the array


void printArray(int arr[])
{
int n = arr.length;
for (int i=0; i<n; ++i)
System.out.print(arr[i]+" ");
System.out.println();
}
// Driver code to test above
public static void main(String args[])
{
SelectionSort ob = new SelectionSort();
int arr[] = {64,25,12,22,11};
ob.sort(arr);
System.out.println("Sorted array");
ob.printArray(arr);
}
}

Output
Sorted array:
11 12 22 25 64
Complexity Analysis of Selection Sort
Time Complexity: The time complexity of Selection Sort is O(N^2) as there are two nested
loops:
• One loop to select an element of Array one by one = O(N)
• Another loop to compare that element with every other Array element = O(N)
• Therefore overall complexity = O(N) * O(N) = O(N*N) = O(N^2)
Auxiliary Space: O(1) as the only extra memory used is for temporary variables while
swapping two values in Array. The selection sort never makes more than O(N) swaps and can
be useful when memory writing is costly.

Advantages of Selection Sort Algorithm


• Simple and easy to understand.
• Works well with small datasets.

Disadvantages of the Selection Sort Algorithm
• Selection sort has a time complexity of O(n^2) in the worst and average case.
• Does not work well on large datasets.
• Does not preserve the relative order of items with equal keys which means it is not
stable.

2. Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the
adjacent elements if they are in the wrong order. This algorithm is not suitable for large
data sets as its average and worst-case time complexity is quite high.
Bubble Sort Algorithm
In this algorithm,
• Traverse from the left and compare adjacent elements; the higher one is placed on
the right side.
• In this way, the largest element is moved to the rightmost end at first.
• This process is then continued to find the second largest and place it and so on until
the data is sorted.
How does Bubble Sort Work?
Let us understand the working of bubble sort with the help of the following illustration:
Input: arr[] = {6, 3, 0, 5}

First Pass:
The largest element is placed in its correct position, i.e., the end of the array

Bubble Sort Algorithm: Placing the largest element at correct position

Second Pass:
Place the second largest element at correct position

Bubble Sort Algorithm: Placing the second largest element at correct position
Third Pass:
Place the remaining two elements at their correct positions.

Bubble Sort Algorithm: Placing the remaining elements at their correct positions
Implementation of Bubble Sort
Below is the implementation of the bubble sort. It can be optimized by stopping the algorithm if
the inner loop didn’t cause any swap.
C:

// Optimized implementation of Bubble sort


#include <stdbool.h>
#include <stdio.h>

void swap(int* xp, int* yp){


int temp = *xp;
*xp = *yp;
*yp = temp;
}

// An optimized version of Bubble Sort


void bubbleSort(int arr[], int n){
int i, j;
bool swapped;
for (i = 0; i < n - 1; i++) {
swapped = false;
for (j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
swap(&arr[j], &arr[j + 1]);
swapped = true;
}
}

// If no two elements were swapped by inner loop,


// then break
if (swapped == false)
break;
}
}

// Function to print an array


void printArray(int arr[], int size){
int i;
for (i = 0; i < size; i++)
printf("%d ", arr[i]);
}

// Driver program to test above functions


int main(){
int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
int n = sizeof(arr) / sizeof(arr[0]);
bubbleSort(arr, n);
printf("Sorted array: \n");
printArray(arr, n);
return 0;
}

C++:

// Optimized implementation of Bubble sort


#include <bits/stdc++.h>
using namespace std;

// An optimized version of Bubble Sort


void bubbleSort(int arr[], int n){
int i, j;
bool swapped;
for (i = 0; i < n - 1; i++) {
swapped = false;
for (j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
swap(arr[j], arr[j + 1]);
swapped = true;
}
}

// If no two elements were swapped


// by inner loop, then break
if (swapped == false)
break;
}
}

// Function to print an array


void printArray(int arr[], int size){
int i;
for (i = 0; i < size; i++)
cout << " " << arr[i];
}

// Driver program to test above functions


int main(){
int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
int N = sizeof(arr) / sizeof(arr[0]);
bubbleSort(arr, N);
cout << "Sorted array: \n";
printArray(arr, N);
return 0;
}

Java:

// Optimized java implementation of Bubble sort

import java.io.*;

class GFG {

// An optimized version of Bubble Sort


static void bubbleSort(int arr[], int n){
int i, j, temp;
boolean swapped;
for (i = 0; i < n - 1; i++) {
swapped = false;
for (j = 0; j < n - i - 1; j++) {
if (arr[j] > arr[j + 1]) {
// Swap arr[j] and arr[j+1]
temp = arr[j];
arr[j] = arr[j + 1];
arr[j + 1] = temp;
swapped = true;
}
}

// If no two elements were


// swapped by inner loop, then break
if (swapped == false)
break;
}
}

// Function to print an array


static void printArray(int arr[], int size){
int i;
for (i = 0; i < size; i++)
System.out.print(arr[i] + " ");
System.out.println();
}

// Driver program
public static void main(String args[])
{
int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
int n = arr.length;
bubbleSort(arr, n);
System.out.println("Sorted array: ");
printArray(arr, n);
}
}

Output
Sorted array:
11 12 22 25 34 64 90
Complexity Analysis of Bubble Sort:
Time Complexity: O(N^2)
Auxiliary Space: O(1)
Advantages of Bubble Sort:
• Bubble sort is easy to understand and implement.
• It does not require any additional memory space.
• It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.

Disadvantages of Bubble Sort:


• Bubble sort has a time complexity of O(N^2) which makes it very slow for large data
sets.
• Bubble sort is a comparison-based sorting algorithm, which means that it requires a
comparison operator to determine the relative order of elements in the input data
set. It can limit the efficiency of the algorithm in certain cases.

3. Insertion sort is a simple sorting algorithm that works similar to the way you sort
playing cards in your hands. The array is virtually split into a sorted and an unsorted
part. Values from the unsorted part are picked and placed at the correct position in the
sorted part.

Insertion Sort Algorithm


To sort an array of size N in ascending order, iterate over the array and compare the current
element (key) to its predecessor. If the key element is smaller than its predecessor, compare it
to the elements before it. Move the greater elements one position up to make space for the
swapped element.
Working of Insertion Sort algorithm
Consider an example: arr[]: {12, 11, 13, 5, 6}

12 11 13 5 6

First Pass:
• Initially, the first two elements of the array are compared in insertion sort.

12 11 13 5 6

• Here, 12 is greater than 11, hence they are not in ascending order and 12 is not
at its correct position. Thus, swap 11 and 12.
• So, for now, 11 is stored in a sorted sub-array.

11 12 13 5 6
Second Pass:
• Now, move to the next two elements and compare them

11 12 13 5 6

• Here, 13 is greater than 12, thus both elements are in ascending order and
no swapping will occur. 12 is also stored in a sorted sub-array along with 11.

Third Pass:
• Now, two elements are present in the sorted sub-array which are 11 and 12
• Moving forward to the next two elements which are 13 and 5

11 12 13 5 6

• Both 5 and 13 are not present at their correct place so swap them

11 12 5 13 6

• After swapping, elements 12 and 5 are not sorted, thus swap again

11 5 12 13 6

• Here, again 11 and 5 are not sorted, hence swap again

5 11 12 13 6

• Here, 5 is at its correct position

Fourth Pass:
• Now, the elements which are present in the sorted sub-array are 5, 11 and 12
• Moving to the next two elements 13 and 6
5 11 12 13 6

• Clearly, they are not sorted, thus perform swap between both

5 11 12 6 13

• Now, 6 is smaller than 12, hence, swap again

5 11 6 12 13

• Here also, swapping makes 11 and 6 unsorted, hence swap again

5 6 11 12 13


Finally, the array is completely sorted.


Implementation of Insertion Sort Algorithm


Below is the implementation of the iterative approach:
C:
// C program for insertion sort
#include <math.h>
#include <stdio.h>

/* Function to sort an array using insertion sort*/


void insertionSort(int arr[], int n)
{
int i, key, j;
for (i = 1; i < n; i++) {
key = arr[i];
j = i - 1;

/* Move elements of arr[0..i-1], that are


greater than key, to one position ahead
of their current position */
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}

// A utility function to print an array of size n


void printArray(int arr[], int n)
{
int i;
for (i = 0; i < n; i++)
printf("%d ", arr[i]);
printf("\n");
}

/* Driver program to test insertion sort */


int main()
{
int arr[] = { 12, 11, 13, 5, 6 };
int n = sizeof(arr) / sizeof(arr[0]);

insertionSort(arr, n);
printArray(arr, n);

return 0;
}
C++:

// C++ program for insertion sort

#include <bits/stdc++.h>
using namespace std;

// Function to sort an array using


// insertion sort
void insertionSort(int arr[], int n){
int i, key, j;
for (i = 1; i < n; i++) {
key = arr[i];
j = i - 1;

// Move elements of arr[0..i-1],


// that are greater than key,
// to one position ahead of their
// current position
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}
// A utility function to print an array
// of size n
void printArray(int arr[], int n){
int i;
for (i = 0; i < n; i++)
cout << arr[i] << " ";
cout << endl;
}
// Driver code
int main(){
int arr[] = { 12, 11, 13, 5, 6 };
int N = sizeof(arr) / sizeof(arr[0]);

insertionSort(arr, N);
printArray(arr, N);

return 0;
}
Java:

// Java program for implementation of Insertion Sort


public class InsertionSort {
/*Function to sort array using insertion sort*/
void sort(int arr[]){
int n = arr.length;
for (int i = 1; i < n; ++i) {
int key = arr[i];
int j = i - 1;

/* Move elements of arr[0..i-1], that are


greater than key, to one position ahead
of their current position */
while (j >= 0 && arr[j] > key) {
arr[j + 1] = arr[j];
j = j - 1;
}
arr[j + 1] = key;
}
}

/* A utility function to print array of size n*/


static void printArray(int arr[]){
int n = arr.length;
for (int i = 0; i < n; ++i)
System.out.print(arr[i] + " ");

System.out.println();
}

// Driver method
public static void main(String args[]){
int arr[] = { 12, 11, 13, 5, 6 };

InsertionSort ob = new InsertionSort();


ob.sort(arr);

printArray(arr);
}
};
Output
5 6 11 12 13
Time Complexity: O(N^2)
Auxiliary Space: O(1)

Complexity Analysis of Insertion Sort:

Time Complexity of Insertion Sort


• The worst-case time complexity of the Insertion sort is O(N^2)
• The average case time complexity of the Insertion sort is O(N^2)
• The time complexity of the best case is O(N).
Space Complexity of Insertion Sort
The auxiliary space complexity of Insertion Sort is O(1)

Characteristics of Insertion Sort:

• This algorithm is one of the simplest algorithms with a simple implementation


• Basically, Insertion sort is efficient for small data values
• Insertion sort is adaptive in nature, i.e. it is appropriate for data sets that are already
partially sorted.

4. Merge sort is defined as a sorting algorithm that works by dividing an array into
smaller subarrays, sorting each subarray, and then merging the sorted subarrays back
together to form the final sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two
halves, sort each half, and then merge the sorted halves back together. This process is repeated
until the entire array is sorted.

Merge Sort Algorithm


How does Merge Sort work?
Merge sort is a recursive algorithm that continuously splits the array in half until it cannot be
further divided i.e., the array has only one element left (an array with one element is always
sorted). Then the sorted subarrays are merged into one sorted array.
See the below illustration to understand the working of merge sort.
Illustration:
Let's consider an array arr[] = {38, 27, 43, 10}
• Initially divide the array into two equal halves:

Merge Sort: Divide the array into two halves

• These subarrays are further divided into two halves. They now become arrays of unit
length that can no longer be divided, and an array of unit length is always sorted.

Merge Sort: Divide the subarrays into two halves (unit length subarrays here)
• These sorted subarrays are merged together, and we get bigger sorted subarrays.

Merge Sort: Merge the unit length subarrays into sorted subarrays

This merging process is continued until the sorted array is built from the smaller subarrays.

Merge Sort: Merge the sorted subarrays to get the sorted array

The following diagram shows the complete merge sort process for an example array {38, 27,
43, 3, 9, 82, 10}.
Below is the Code implementation of Merge Sort.
C
// C program for Merge Sort
#include <stdio.h>
#include <stdlib.h>

// Merges two subarrays of arr[].


// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
int i, j, k;
int n1 = m - l + 1;
int n2 = r - m;

// Create temp arrays


int L[n1], R[n2];

// Copy data to temp arrays L[] and R[]


for (i = 0; i < n1; i++)
L[i] = arr[l + i];
for (j = 0; j < n2; j++)
R[j] = arr[m + 1 + j];

// Merge the temp arrays back into arr[l..r]


i = 0;
j = 0;
k = l;
while (i < n1 && j < n2) {
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;
}
else {
arr[k] = R[j];
j++;
}
k++;
}

// Copy the remaining elements of L[],


// if there are any
while (i < n1) {
arr[k] = L[i];
i++;
k++;
}

// Copy the remaining elements of R[],


// if there are any
while (j < n2) {
arr[k] = R[j];
j++;
k++;
}
}

// l is for left index and r is right index of the


// sub-array of arr to be sorted
void mergeSort(int arr[], int l, int r)
{
if (l < r) {
int m = l + (r - l) / 2;

// Sort first and second halves


mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);

merge(arr, l, m, r);
}
}

// Function to print an array


void printArray(int A[], int size)
{
int i;
for (i = 0; i < size; i++)
printf("%d ", A[i]);
printf("\n");
}

// Driver code
int main()
{
int arr[] = { 12, 11, 13, 5, 6, 7 };
int arr_size = sizeof(arr) / sizeof(arr[0]);

printf("Given array is \n");


printArray(arr, arr_size);
mergeSort(arr, 0, arr_size - 1);

printf("\nSorted array is \n");


printArray(arr, arr_size);
return 0;
}

C++:
// C++ program for Merge Sort
#include <bits/stdc++.h>
using namespace std;

// Merges two subarrays of array[].


// First subarray is arr[begin..mid]
// Second subarray is arr[mid+1..end]
void merge(int array[], int const left, int const mid,
int const right)
{
int const subArrayOne = mid - left + 1;
int const subArrayTwo = right - mid;

// Create temp arrays


auto *leftArray = new int[subArrayOne],
*rightArray = new int[subArrayTwo];

// Copy data to temp arrays leftArray[] and rightArray[]


for (auto i = 0; i < subArrayOne; i++)
leftArray[i] = array[left + i];
for (auto j = 0; j < subArrayTwo; j++)
rightArray[j] = array[mid + 1 + j];

auto indexOfSubArrayOne = 0, indexOfSubArrayTwo = 0;


int indexOfMergedArray = left;

// Merge the temp arrays back into array[left..right]


while (indexOfSubArrayOne < subArrayOne
&& indexOfSubArrayTwo < subArrayTwo) {
if (leftArray[indexOfSubArrayOne]
<= rightArray[indexOfSubArrayTwo]) {
array[indexOfMergedArray]
= leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
}
else {
array[indexOfMergedArray]
= rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
}
indexOfMergedArray++;
}

// Copy the remaining elements of


// left[], if there are any
while (indexOfSubArrayOne < subArrayOne) {
array[indexOfMergedArray]
= leftArray[indexOfSubArrayOne];
indexOfSubArrayOne++;
indexOfMergedArray++;
}

// Copy the remaining elements of


// right[], if there are any
while (indexOfSubArrayTwo < subArrayTwo) {
array[indexOfMergedArray]
= rightArray[indexOfSubArrayTwo];
indexOfSubArrayTwo++;
indexOfMergedArray++;
}
delete[] leftArray;
delete[] rightArray;
}

// begin is for left index and end is right index


// of the sub-array of arr to be sorted
void mergeSort(int array[], int const begin, int const end)
{
if (begin >= end)
return;

int mid = begin + (end - begin) / 2;


mergeSort(array, begin, mid);
mergeSort(array, mid + 1, end);
merge(array, begin, mid, end);
}

// UTILITY FUNCTIONS
// Function to print an array
void printArray(int A[], int size)
{
for (int i = 0; i < size; i++)
cout << A[i] << " ";
cout << endl;
}

// Driver code
int main()
{
int arr[] = { 12, 11, 13, 5, 6, 7 };
int arr_size = sizeof(arr) / sizeof(arr[0]);

cout << "Given array is \n";


printArray(arr, arr_size);

mergeSort(arr, 0, arr_size - 1);

cout << "\nSorted array is \n";


printArray(arr, arr_size);
return 0;
}

Java:
// Java program for Merge Sort
import java.io.*;

class MergeSort {

// Merges two subarrays of arr[].


// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
// Find sizes of two subarrays to be merged
int n1 = m - l + 1;
int n2 = r - m;

// Create temp arrays


int L[] = new int[n1];
int R[] = new int[n2];

// Copy data to temp arrays


for (int i = 0; i < n1; ++i)
L[i] = arr[l + i];
for (int j = 0; j < n2; ++j)
R[j] = arr[m + 1 + j];

// Merge the temp arrays

// Initial indices of first and second subarrays


int i = 0, j = 0;

// Initial index of merged subarray array


int k = l;
while (i < n1 && j < n2) {
if (L[i] <= R[j]) {
arr[k] = L[i];
i++;
}
else {
arr[k] = R[j];
j++;
}
k++;
}

// Copy remaining elements of L[] if any


while (i < n1) {
arr[k] = L[i];
i++;
k++;
}

// Copy remaining elements of R[] if any


while (j < n2) {
arr[k] = R[j];
j++;
k++;
}
}

// Main function that sorts arr[l..r] using


// merge()
void sort(int arr[], int l, int r)
{
if (l < r) {

// Find the middle point


int m = l + (r - l) / 2;

// Sort first and second halves


sort(arr, l, m);
sort(arr, m + 1, r);

// Merge the sorted halves


merge(arr, l, m, r);
}
}

// A utility function to print array of size n


static void printArray(int arr[])
{
int n = arr.length;
for (int i = 0; i < n; ++i)
System.out.print(arr[i] + " ");
System.out.println();
}

// Driver code
public static void main(String args[])
{
int arr[] = { 12, 11, 13, 5, 6, 7 };

System.out.println("Given array is");


printArray(arr);

MergeSort ob = new MergeSort();


ob.sort(arr, 0, arr.length - 1);

System.out.println("\nSorted array is");


printArray(arr);
}
}

Output
Given array is
12 11 13 5 6 7
Sorted array is
5 6 7 11 12 13
Complexity Analysis of Merge Sort:

Time Complexity: O(N log(N)), Merge Sort is a recursive algorithm and time complexity can
be expressed as following recurrence relation.
T(n) = 2T(n/2) + θ(n)
The above recurrence can be solved either using the Recurrence Tree method or the Master
method. It falls in case II of the Master Method, and the solution of the recurrence is
θ(N log N). The time complexity of Merge Sort is θ(N log N) in all 3 cases (worst, average,
and best), as merge sort always divides the array into two halves and takes linear time to
merge the two halves.
Auxiliary Space: O(N), In merge sort all elements are copied into an auxiliary array. So N
auxiliary space is required for merge sort.

Applications of Merge Sort:


• Sorting large datasets: Merge sort is particularly well-suited for sorting large
datasets due to its guaranteed worst-case time complexity of O(n log n).
• External sorting: Merge sort is commonly used in external sorting, where the data
to be sorted is too large to fit into memory.
• Custom sorting: Merge sort can be adapted to handle different input distributions,
such as partially sorted, nearly sorted, or completely unsorted data.
• Inversion Count Problem
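The Inversion Count Problem listed above is a classic application: while merging, whenever an element is taken from the right half before the remaining elements of the left half, each of those remaining left elements forms one inversion. A sketch of the idea (in Python for brevity; the function name is illustrative):

```python
def count_inversions(arr):
    """Count pairs (i, j) with i < j and arr[i] > arr[j] via merge sort."""
    def sort_count(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, inv_left = sort_count(a[:mid])
        right, inv_right = sort_count(a[mid:])
        merged, inv = [], inv_left + inv_right
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
                # Every element still remaining in left forms an inversion
                inv += len(left) - i
        merged += left[i:] + right[j:]
        return merged, inv

    return sort_count(list(arr))[1]

print(count_inversions([2, 4, 1, 3, 5]))  # 3  -> pairs (2,1), (4,1), (4,3)
```

Because the counting piggybacks on the merge step, the whole count still runs in O(N log N), versus O(N^2) for checking every pair directly.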

Advantages of Merge Sort:


• Stability: Merge sort is a stable sorting algorithm, which means it maintains the
relative order of equal elements in the input array.
• Guaranteed worst-case performance: Merge sort has a worst-case time
complexity of O(N logN), which means it performs well even on large datasets.
• Parallelizable: Merge sort is a naturally parallelizable algorithm, which means it
can be easily parallelized to take advantage of multiple processors or threads.

Drawbacks of Merge Sort:


• Space complexity: Merge sort requires additional memory to store the merged sub-
arrays during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which means it
requires additional memory to store the sorted data. This can be a disadvantage in
applications where memory usage is a concern.
• Not always optimal for small datasets: For small datasets, Merge sort has a higher
time complexity than some other sorting algorithms, such as insertion sort. This can
result in slower performance for very small datasets.

5. Quick Sort is a sorting algorithm based on the Divide and Conquer algorithm that
picks an element as a pivot and partitions the given array around the picked pivot by
placing the pivot in its correct position in the sorted array.
How does Quick Sort work?
The key process in Quick Sort is partition(). The target of partition is to place the pivot (any
element can be chosen to be the pivot) at its correct position in the sorted array, put all
smaller elements to the left of the pivot, and all greater elements to the right of the pivot.
Partitioning is done recursively on each side of the pivot after the pivot is placed in its correct
position, and this finally sorts the array.

How Quicksort works

Choice of Pivot:
There are many different choices for picking pivots.
• Always pick the first element as a pivot.
• Always pick the last element as a pivot (implemented below)
• Pick a random element as a pivot.
• Pick the middle as the pivot.
Partition Algorithm:
The logic is simple: we start from the leftmost element and keep track of the index of smaller
(or equal) elements as i. While traversing, if we find a smaller element, we swap the current
element with arr[i]. Otherwise, we ignore the current element.
Let us understand the working of partition and the Quick Sort algorithm with the help of the
following example:
Consider: arr[] = {10, 80, 30, 90, 40}.
• Compare 10 with the pivot. As it is less than the pivot, arrange it accordingly.

Partition in Quick Sort: Compare pivot with 10
• Compare 80 with the pivot. It is greater than pivot.

Partition in QuickSort: Compare pivot with 80


• Compare 30 with pivot. It is less than pivot so arrange it accordingly.

Partition in QuickSort: Compare pivot with 30

• Compare 90 with the pivot. It is greater than the pivot.


Partition in QuickSort: Compare pivot with 90
• Arrange the pivot in its correct position.

Partition in QuickSort: Place pivot in its correct position


Illustration of Quicksort:
As the partition process is done recursively, it keeps on putting the pivot in its actual position
in the sorted array. Repeatedly putting pivots in their actual position makes the array sorted.
Follow the below images to understand how the recursive implementation of the partition
algorithm helps to sort the array.
• Initial partition on the main array:

Quicksort: Performing the partition


• Partitioning of the subarrays:

Quicksort: Performing the partition

Code implementation of the Quick Sort:


Below is the implementation of the Quicksort:

C:

// C code to implement quicksort

#include <stdio.h>

// Function to swap two elements


void swap(int* a, int* b){
int t = *a;
*a = *b;
*b = t;
}

// Partition the array using the last element as the pivot


int partition(int arr[], int low, int high)
{
// Choosing the pivot
int pivot = arr[high];

// Index of smaller element and indicates


// the right position of pivot found so far
int i = (low - 1);

for (int j = low; j <= high - 1; j++) {


// If current element is smaller than the pivot
if (arr[j] < pivot) {

// Increment index of smaller element


i++;
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return (i + 1);
}

// The main function that implements QuickSort


// arr[] --> Array to be sorted,
// low --> Starting index,
// high --> Ending index
void quickSort(int arr[], int low, int high){
if (low < high) {

// pi is partitioning index, arr[pi]
// is now at right place
int pi = partition(arr, low, high);

// Separately sort elements before


// partition and after partition
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

// Driver code
int main(){
int arr[] = { 10, 7, 8, 9, 1, 5 };
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
quickSort(arr, 0, N - 1);
printf("Sorted array: \n");
for (int i = 0; i < N; i++)
printf("%d ", arr[i]);
return 0;
}
C++:

// C++ code to implement quicksort

#include <bits/stdc++.h>
using namespace std;

// This function takes last element as pivot,


// places the pivot element at its correct position
// in sorted array, and places all smaller to left
// of pivot and all greater elements to right of pivot
int partition(int arr[], int low, int high)
{
// Choosing the pivot
int pivot = arr[high];

// Index of smaller element and indicates


// the right position of pivot found so far
int i = (low - 1);

for (int j = low; j <= high - 1; j++) {

// If current element is smaller than the pivot


if (arr[j] < pivot) {

// Increment index of smaller element


i++;
swap(arr[i], arr[j]);
}
}
swap(arr[i + 1], arr[high]);
return (i + 1);
}

// The main function that implements QuickSort


// arr[] --> Array to be sorted,
// low --> Starting index,
// high --> Ending index
void quickSort(int arr[], int low, int high)
{
if (low < high) {

// pi is partitioning index, arr[pi]
// is now at right place
int pi = partition(arr, low, high);

// Separately sort elements before


// partition and after partition
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

// Driver Code
int main()
{
int arr[] = { 10, 7, 8, 9, 1, 5 };
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
quickSort(arr, 0, N - 1);
cout << "Sorted array: " << endl;
for (int i = 0; i < N; i++)
cout << arr[i] << " ";
return 0;
}

Java:

// Java implementation of QuickSort


import java.io.*;

class GFG {

// A utility function to swap two elements


static void swap(int[] arr, int i, int j)
{
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}

// This function takes last element as pivot,


// places the pivot element at its correct position
// in sorted array, and places all smaller to left
// of pivot and all greater elements to right of pivot
static int partition(int[] arr, int low, int high)
{
// Choosing the pivot
int pivot = arr[high];

// Index of smaller element and indicates


// the right position of pivot found so far
int i = (low - 1);

for (int j = low; j <= high - 1; j++) {

// If current element is smaller than the pivot


if (arr[j] < pivot) {

// Increment index of smaller element


i++;
swap(arr, i, j);
}
}
swap(arr, i + 1, high);
return (i + 1);
}

// The main function that implements QuickSort


// arr[] --> Array to be sorted,
// low --> Starting index,
// high --> Ending index
static void quickSort(int[] arr, int low, int high)
{
if (low < high) {

// pi is partitioning index, arr[pi]
// is now at right place
int pi = partition(arr, low, high);

// Separately sort elements before


// partition and after partition
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}
// To print sorted array
public static void printArr(int[] arr)
{
for (int i = 0; i < arr.length; i++) {
System.out.print(arr[i] + " ");
}
}

// Driver Code
public static void main(String[] args)
{
int[] arr = { 10, 7, 8, 9, 1, 5 };
int N = arr.length;

// Function call
quickSort(arr, 0, N - 1);
System.out.println("Sorted array:");
printArr(arr);
}
}

Output
Sorted array:
1 5 7 8 9 10
Complexity Analysis of Quick Sort:

Time Complexity:
• Best Case: O(N log N)
• Average Case: O(N log N)
• Worst Case: O(N^2)

Auxiliary Space: O(1) if the recursive call stack is not considered; counting the stack, it is
O(log N) on average and O(N) in the worst case.

Advantages of Quick Sort:


• It is a divide-and-conquer algorithm that makes it easier to solve problems.
• It is efficient on large data sets.
• It has a low overhead, as it only requires a small amount of memory to function.

Disadvantages of Quick Sort:


• It has a worst-case time complexity of O(N^2), which occurs when the pivot is
chosen poorly.
• It is not a good choice for small data sets.
• It is not a stable sort, meaning that if two elements have the same key, their relative
order will not be preserved in the sorted output in case of quick sort, because here
we are swapping elements according to the pivot’s position (without considering
their original positions).
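A common mitigation for the poor-pivot worst case mentioned above is to pick the pivot at random and swap it to the end, after which the same last-element partition scheme shown in the implementations applies unchanged. A sketch (in Python for brevity; randomization makes O(N^2) behaviour on any fixed input very unlikely, though still possible):

```python
import random

def quicksort_random(arr, low=0, high=None):
    """Quick sort with a random pivot swapped to the end, then Lomuto partition."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Pick a random index in [low, high] and move that element to the end
        p = random.randint(low, high)
        arr[p], arr[high] = arr[high], arr[p]

        # Standard last-element partition, as in the code above
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        pi = i + 1

        quicksort_random(arr, low, pi - 1)
        quicksort_random(arr, pi + 1, high)
    return arr

print(quicksort_random([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```

With a random pivot, no particular input (e.g. an already-sorted array) can reliably trigger the quadratic worst case.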
Searching Algorithms

What is Searching Algorithm?

Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.
Based on the type of search operation, these algorithms are generally classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every
element is checked. For example: Linear Search.

Linear Search to find the element “20” in a given list of numbers

Linear-Search

2. Interval Search: These algorithms are specifically designed for searching in sorted
data structures. These types of searching algorithms are much more efficient than
Linear Search, as they repeatedly target the center of the search structure and divide
the search space in half. For Example: Binary Search.
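To see how much the halving helps, compare worst-case comparison counts: linear search may have to check all N elements, while binary search needs at most about log2(N) checks. A quick back-of-the-envelope sketch in Python:

```python
import math

# Worst-case element comparisons: linear search may check all N elements;
# binary search halves the remaining range each step, so it needs
# ceil(log2(N + 1)) checks at most.
for n in (10, 1_000, 1_000_000):
    binary_steps = math.ceil(math.log2(n + 1))
    print(f"N={n}: linear ~{n} checks, binary ~{binary_steps} checks")
```

For a million sorted elements, binary search needs at most 20 comparisons where linear search may need a million.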
Binary Search to find the element “23” in a given list of numbers

Binary Search
Linear Search

Linear Search is defined as a sequential search algorithm that starts at one end and goes
through each element of a list until the desired element is found; otherwise, the search
continues till the end of the data set.

Linear Search Algorithm

How Does Linear Search Algorithm Work?


In Linear Search Algorithm,
• Every element is considered as a potential match for the key and checked for the
same.
• If any element is found equal to the key, the search is successful and the index of that
element is returned.
• If no element is found equal to the key, the search yields “No match found”.

For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30

Step 1: Start from the first element (index 0) and compare key with each element (arr[i]).
• Comparing key with first element arr[0]. Since not equal, the iterator moves to the
next element as a potential match.

Compare key with arr[0]


• Comparing key with next element arr[1]. Since not equal, the iterator moves to the
next element as a potential match.

Compare key with arr[1]

Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is found
(here 2).

Compare key with arr[2]

Implementation of Linear Search Algorithm:


Below is the implementation of the linear search algorithm:
C:
// C code to linearly search x in arr[].

#include <stdio.h>

int search(int arr[], int N, int x){


for (int i = 0; i < N; i++)
if (arr[i] == x)
return i;
return -1;
}
// Driver code
int main(void){
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
int result = search(arr, N, x);
(result == -1)
? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}

C++:
// C++ code to linearly search x in arr[].

#include <bits/stdc++.h>
using namespace std;

int search(int arr[], int N, int x){


for (int i = 0; i < N; i++)
if (arr[i] == x)
return i;
return -1;
}

// Driver code
int main(void){
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int N = sizeof(arr) / sizeof(arr[0]);

// Function call
int result = search(arr, N, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}
Java:

// Java code for linearly searching x in arr[].

import java.io.*;

class GFG {
public static int search(int arr[], int N, int x)
{
for (int i = 0; i < N; i++) {
if (arr[i] == x)
return i;
}
return -1;
}

// Driver code
public static void main(String args[])
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;

// Function call
int result = search(arr, arr.length, x);
if (result == -1)
System.out.print(
"Element is not present in array");
else
System.out.print("Element is present at index "
+ result);
}
}

Output
Element is present at index 3
Complexity Analysis of Linear Search:

Time Complexity:
• Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)
• Worst Case: In the worst case, the key might be present at the last index, i.e., at the
end opposite to where the search started. So the worst-case complexity
is O(N), where N is the size of the list.
• Average Case: O(N)
Auxiliary Space: O(1) as except for the variable to iterate through the list, no other variable is
used.

Advantages of Linear Search:


• Linear search can be used irrespective of whether the array is sorted or not. It can be
used on arrays of any data type.
• Does not require any additional memory.
• It is a well-suited algorithm for small datasets.

Drawbacks of Linear Search:


• Linear search has a time complexity of O(N), which in turn makes it slow for large
datasets.
• Not suitable for large arrays.

When to use Linear Search?


• When we are dealing with a small dataset.
• When searching a dataset stored in contiguous memory.

Binary Search

Binary Search is defined as a searching algorithm used in a sorted array by repeatedly dividing
the search interval in half. The idea of binary search is to use the information that the array is
sorted and reduce the time complexity to O(log N).

Example of Binary Search Algorithm


Conditions for when to apply Binary Search in a Data Structure:

To apply Binary Search algorithm:


• The data structure must be sorted.
• Access to any element of the data structure takes constant time.

Binary Search Algorithm:


In this algorithm,
• Divide the search space into two halves by finding the middle index “mid”

Finding the middle index “mid” in Binary Search Algorithm

• Compare the middle element of the search space with the key.
• If the key is found at middle element, the process is terminated.
• If the key is not found at middle element, choose which half will be used as the next
search space.
• If the key is smaller than the middle element, then the left side is used for
next search.
• If the key is larger than the middle element, then the right side is used for
next search.
• This process is continued until the key is found or the total search space is exhausted.

How does Binary Search work?


To understand the working of binary search, consider the following illustration:
Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.

First Step: Calculate the mid and compare the mid element with the key. If the key is less than
the mid element, move the search space to the left; if it is greater, move it to the right.
• Key (i.e., 23) is greater than current mid element (i.e., 16). The search space moves to
the right.
Binary Search Algorithm: Compare key with 16

• Key is less than the current mid 56. The search space moves to the left.

Binary Search Algorithm: Compare key with 56


Second Step: If the key matches the value of the mid element, the element is found and stop
search.

Binary Search Algorithm: Key matches with mid

How to Implement Binary Search?


The Binary Search Algorithm can be implemented in the following two ways:
• Iterative Binary Search Algorithm
• Recursive Binary Search Algorithm

Given below are the implementations of both approaches.
1. Iterative Binary Search Algorithm:

Here we use a while loop to continue the process of comparing the key and splitting the search
space in two halves.
Implementation of Iterative Binary Search Algorithm:

C:
// C program to implement iterative Binary Search
#include <stdio.h>

// An iterative binary search function.


int binarySearch(int arr[], int l, int r, int x)
{
while (l <= r) {
int m = l + (r - l) / 2;

// Check if x is present at mid


if (arr[m] == x)
return m;

// If x greater, ignore left half


if (arr[m] < x)
l = m + 1;

// If x is smaller, ignore right half


else
r = m - 1;
}

// If we reach here, then element was not present


return -1;
}

// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int n = sizeof(arr) / sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n - 1, x);
(result == -1) ? printf("Element is not present"
" in array")
: printf("Element is present at "
"index %d",
result);
return 0;
}

C++:
// C++ program to implement iterative Binary Search
#include <bits/stdc++.h>
using namespace std;

// An iterative binary search function.


int binarySearch(int arr[], int l, int r, int x)
{
while (l <= r) {
int m = l + (r - l) / 2;

// Check if x is present at mid


if (arr[m] == x)
return m;

// If x greater, ignore left half


if (arr[m] < x)
l = m + 1;

// If x is smaller, ignore right half


else
r = m - 1;
}

// If we reach here, then element was not present


return -1;
}

// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Java:
// Java implementation of iterative Binary Search

import java.io.*;

class BinarySearch {

// Returns index of x if it is present in arr[].


int binarySearch(int arr[], int x)
{
int l = 0, r = arr.length - 1;
while (l <= r) {
int m = l + (r - l) / 2;

// Check if x is present at mid


if (arr[m] == x)
return m;

// If x greater, ignore left half


if (arr[m] < x)
l = m + 1;

// If x is smaller, ignore right half


else
r = m - 1;
}

// If we reach here, then element was


// not present
return -1;
}

// Driver code
public static void main(String args[])
{
BinarySearch ob = new BinarySearch();
int arr[] = { 2, 3, 4, 10, 40 };
int n = arr.length;
int x = 10;
int result = ob.binarySearch(arr, x);
if (result == -1)
System.out.println(
"Element is not present in array");
else
System.out.println("Element is present at "
+ "index " + result);
}
}

Output
Element is present at index 3
Time Complexity: O(log N)
Auxiliary Space: O(1)

2. Recursive Binary Search Algorithm:

Create a recursive function and compare the mid of the search space with the key. Based on
the result, either return the index where the key is found or call the recursive function for the
next search space.
Implementation of Recursive Binary Search Algorithm:
C:
// C program to implement recursive Binary Search
#include <stdio.h>

// A recursive binary search function. It returns


// the location of x if it is present in the given
// array arr[l..r], otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l) {
int mid = l + (r - l) / 2;

// If the element is present at the middle


// itself
if (arr[mid] == x)
return mid;

// If element is smaller than mid, then


// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid - 1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid + 1, r, x);
}

// We reach here when element is not


// present in array
return -1;
}

// Driver code
int main()
{
int arr[] = { 2, 3, 4, 10, 40 };
int n = sizeof(arr) / sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}

C++:
// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;

// A recursive binary search function. It returns


// the location of x if it is present in the given
// array arr[l..r], otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l) {
int mid = l + (r - l) / 2;

// If the element is present at the middle


// itself
if (arr[mid] == x)
return mid;
// If element is smaller than mid, then
// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid - 1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid + 1, r, x);
}

// We reach here when element is not


// present in array
return -1;
}

// Driver code
int main()
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}

Java:
// Java implementation of recursive Binary Search
class BinarySearch {

// Returns index of x if it is present in
// arr[l..r], else returns -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l) {
int mid = l + (r - l) / 2;

// If the element is present at the


// middle itself
if (arr[mid] == x)
return mid;
// If element is smaller than mid, then
// it can only be present in left subarray
if (arr[mid] > x)
return binarySearch(arr, l, mid - 1, x);

// Else the element can only be present


// in right subarray
return binarySearch(arr, mid + 1, r, x);
}

// We reach here when element is not present


// in array
return -1;
}

// Driver code
public static void main(String args[])
{
BinarySearch ob = new BinarySearch();
int arr[] = { 2, 3, 4, 10, 40 };
int n = arr.length;
int x = 10;
int result = ob.binarySearch(arr, 0, n - 1, x);
if (result == -1)
System.out.println(
"Element is not present in array");
else
System.out.println(
"Element is present at index " + result);
}
}

Output
Element is present at index 3
Complexity Analysis of Binary Search:
• Time Complexity:
• Best Case: O(1)
• Average Case: O(log N)
• Worst Case: O(log N)
• Auxiliary Space: O(1). If the recursive call stack is considered, then the auxiliary
space will be O(log N).
Advantages of Binary Search:
• Binary search is faster than linear search, especially for large arrays.
• More efficient than other searching algorithms with a similar time complexity, such as
interpolation search or exponential search.
• Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.

Drawbacks of Binary Search:


• The array should be sorted.
• Binary search requires that the data structure being searched be stored in contiguous
memory locations.
• Binary search requires that the elements of the array be comparable, meaning that
they must be able to be ordered.

Applications of Binary Search:


• Binary search can be used as a building block for more complex algorithms used in
machine learning, such as algorithms for training neural networks or finding the
optimal hyperparameters for a model.
• It can be used for searching in computer graphics such as algorithms for ray tracing or
texture mapping.
• It can be used for searching a database.
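In practice, applications like the ones above usually call a library binary search rather than a hand-written one. For example, Python's bisect module returns the leftmost insertion point in a sorted sequence, which doubles as a membership test on a sorted key list (the keys below are just the sample array used earlier in this section):

```python
import bisect

# Sorted keys, e.g. the index column of a database table
keys = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]

def contains(key):
    """Membership test on a sorted list via binary search (bisect_left)."""
    i = bisect.bisect_left(keys, key)
    return i < len(keys) and keys[i] == key

print(contains(23), contains(24))  # True False
```

bisect_left also gives the first occurrence when keys repeat, which a plain equality-returning binary search does not guarantee.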
