Data Structure
We have already learned about data structures. People often confuse data types with data
structures, so let's look at a few differences between the two to make the distinction clear.
Data Type:
• The data type is the form of a variable to which a value can be assigned. It defines that the particular variable will be assigned values of the given data type only.
• It can hold a value but not data; therefore, it is dataless.
• There is no notion of time complexity in the case of data types.
• The value of data is not stored, because a data type only represents the type of data that can be stored.
• Examples of data types are int, float, double, etc.

Data Structure:
• A data structure is a collection of different kinds of data. That entire data can be represented using an object and used throughout the program.
• It can hold multiple types of data within a single object.
• In data structure objects, time complexity plays an important role.
• The data and its values occupy space in the computer's main memory.
• Examples of data structures are stack, queue, tree, etc.
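To make the contrast concrete, here is a minimal C++ sketch (the variable names are illustrative, chosen only for this example): an int fixes only the form of one value, while a vector organizes many values as a single object whose operations have time complexity.
C++:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    // Data type: 'int' only fixes the form of the value stored in 'count'
    int count = 42;

    // Data structure: a vector represents an entire collection as one
    // object; operations on it have a measurable time complexity
    vector<int> values = { 10, 20, 30 };
    values.push_back(40); // insertion at the end: O(1) amortized
    for (int v : values)  // traversal: O(n)
        cout << v << " ";
    cout << endl;
    return 0;
}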
• Linear data structure: Data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and next
adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
• Static data structure: Static data structure has a fixed memory size. It is
easier to access the elements in a static data structure.
An example of this data structure is an array.
• Dynamic data structure: In the dynamic data structure, the size is not
fixed. It can be randomly updated during the runtime which may be
considered efficient concerning the memory (space) complexity of the
code.
Examples of this data structure are queue, stack, etc.
• Non-linear data structure: Data structures where data elements are not placed
sequentially or linearly are called non-linear data structures. In a non-linear data
structure, we can't traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.
Arrays
An array is a linear data structure and it is a collection of items stored at contiguous memory
locations. The idea is to store multiple items of the same type together in one place. It allows
the processing of a large amount of data in a relatively short period. The first element of the
array is indexed by a subscript of 0. There are different operations possible in an array, like
Searching, Sorting, Inserting, Traversing, Reversing, and Deleting.
Characteristics of an Array:
An array has various characteristics which are as follows:
• Arrays use an index-based data structure which helps to identify each of the
elements in an array easily using the index.
• If a user wants to store multiple values of the same data type, then the array can be
utilized efficiently.
• An array can also handle complex data structures by storing data in a two-
dimensional array.
• An array is also used to implement other data structures like Stacks, Queues, Heaps,
Hash tables, etc.
• Elements in an array can be accessed directly by index, which also makes searching
straightforward (see the sketch below).
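As a small illustrative sketch of these points (the array contents below are arbitrary), the following C++ program creates an array, traverses it by index, and performs a simple linear search:
C++:
#include <iostream>
using namespace std;

int main() {
    // Elements are stored at contiguous locations; the first index is 0
    int arr[] = { 5, 8, 1, 9, 3 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Traversing: visit every element through its index
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    cout << endl;

    // Searching: scan for a key and report its index
    int key = 9, pos = -1;
    for (int i = 0; i < n; i++)
        if (arr[i] == key) { pos = i; break; }
    cout << "Found " << key << " at index " << pos << endl;
    return 0;
}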
Linked list
A linked list is a linear data structure in which elements are not stored at contiguous memory
locations. Instead, the elements are linked together using pointers.
Types of linked lists:
• Singly-linked list
• Doubly linked list
• Circular linked list
• Doubly circular linked list
A linked list is a linear data structure where each node contains a value and a reference to the
next node. Common operations performed on linked lists include insertion, deletion, traversal,
and search; a minimal sketch follows.
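As a hedged illustration (the Node layout below is the conventional textbook one, written for this example rather than taken from a specific library), here is a singly linked list with insertion at the front and traversal:
C++:
#include <iostream>
using namespace std;

// Each node holds a value and a pointer to the next node
struct Node {
    int data;
    Node* next;
};

// Insert a new node at the front of the list
void push(Node*& head, int value) {
    Node* node = new Node{ value, head };
    head = node;
}

// Traverse the list, printing each value
void printList(Node* head) {
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        cout << cur->data << " -> ";
    cout << "null" << endl;
}

int main() {
    Node* head = nullptr;
    push(head, 3);
    push(head, 2);
    push(head, 1); // list is now 1 -> 2 -> 3
    printList(head);
    return 0;
}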
Stack
A stack is a linear data structure that follows a particular order in which operations are
performed. The order is LIFO (Last In, First Out): data is inserted and retrieved from one end
only, and these operations are called push and pop. Several operations are possible on a stack,
such as reversing a stack using recursion, sorting, deleting the middle element of a stack, etc.
Characteristics of a Stack:
Stack has various different characteristics which are as follows:
• Stack is used in many different algorithms like Tower of Hanoi, tree traversal,
recursion, etc.
• Stack is implemented through an array or linked list.
• It follows the Last In First Out principle, i.e., an element that is inserted first is
popped last, and vice versa.
• The insertion and deletion are performed at one end i.e. from the top of the stack.
• In stack, if the allocated space for the stack is full, and still anyone attempts to add
more elements, it will lead to stack overflow.
Applications of Stack:
Different applications of Stack are as follows:
• The stack data structure is used in the evaluation and conversion of arithmetic
expressions.
• Stack is used in Recursion.
• It is used for parenthesis checking.
• While reversing a string, the stack is used as well.
• Stack is used in memory management.
• It is also used for processing function calls.
• The stack is used to convert expressions from infix to postfix.
• The stack is used to perform undo as well as redo operations in word processors.
• The stack is used in virtual machines like JVM.
• The stack is used in media players to play the next and previous song.
A stack is a linear data structure that implements the Last-In-First-Out (LIFO) principle. Here
are some common operations performed on stacks:
• Push: Elements can be pushed onto the top of the stack, adding a new element to
the top of the stack.
• Pop: The top element can be removed from the stack by performing a pop
operation, effectively removing the last element that was pushed onto the stack.
• Peek: The top element can be inspected without removing it from the stack using a
peek operation.
• IsEmpty: A check can be made to determine if the stack is empty.
• Size: The number of elements in the stack can be determined using a size operation.
These are some of the most common operations performed on stacks. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Stacks are commonly used in applications such as evaluating expressions,
implementing function call stacks in computer programs, and many others.
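These operations map directly onto C++'s standard std::stack container; a small usage sketch (the values pushed are arbitrary):
C++:
#include <iostream>
#include <stack>
using namespace std;

int main() {
    stack<int> st;
    st.push(10);                             // Push: add to the top
    st.push(20);
    st.push(30);
    cout << st.top() << endl;                // Peek: inspect the top (30)
    st.pop();                                // Pop: remove the last pushed element
    cout << st.size() << endl;               // Size: 2 elements remain
    cout << boolalpha << st.empty() << endl; // IsEmpty: false
    return 0;
}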
Real-Life Applications of Stack:
• A real-life example of a stack is a pile of dinner plates arranged one above the other.
When you remove a plate from the pile, you take the plate at the top, which is exactly
the plate that was added most recently. If you want the plate at the bottom of the pile,
you must first remove all the plates above it.
• Browsers use stack data structures to keep track of previously visited sites.
• The call log in a mobile phone also uses a stack data structure.
Queue
Queue is a linear data structure that follows a particular order in which the operations are
performed. The order is First In First Out (FIFO) i.e. the data item stored first will be accessed
first. Unlike a stack, data enters at one end (the rear) and is retrieved from the other (the
front). An example of a queue is any queue of consumers for a resource where the consumer that
came first is served first. Different operations are performed on a queue, like reversing a
queue (with or without using recursion), reversing the first K elements of a queue, etc. A few
basic operations performed on a queue are enqueue, dequeue, front, rear, etc.
Characteristics of a Queue:
The queue has various different characteristics which are as follows:
• The queue is a FIFO (First In First Out) structure.
• To remove the most recently added element of the queue, all the elements inserted
before it must be removed first.
• A queue is an ordered list of elements of similar data types.
Applications of Queue:
Different applications of Queue are as follows:
• Queue is used for handling website traffic.
• It helps to maintain the playlist in media players.
• Queue is used in operating systems for handling interrupts.
• It helps in serving requests on a single shared resource, like a printer, CPU task
scheduling, etc.
• It is used in the asynchronous transfer of data e.g. pipes, file IO, and sockets.
• Queues are used for job scheduling in the operating system.
• In social media, queues are used to upload multiple photos or videos.
• The queue data structure is used to send e-mails.
• In the Windows operating system, queues are used to switch between multiple applications.
A queue is a linear data structure that implements the First-In-First-Out (FIFO) principle. Here
are some common operations performed on queues:
• Enqueue: Elements can be added to the back of the queue, adding a new element to
the end of the queue.
• Dequeue: The front element can be removed from the queue by performing a
dequeue operation, effectively removing the first element that was added to the
queue.
• Peek: The front element can be inspected without removing it from the queue using
a peek operation.
• IsEmpty: A check can be made to determine if the queue is empty.
• Size: The number of elements in the queue can be determined using a size
operation.
These are some of the most common operations performed on queues. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Queues are commonly used in applications such as scheduling tasks, managing
communication between processes, and many others.
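These operations map directly onto C++'s standard std::queue container; a small usage sketch (the values enqueued are arbitrary):
C++:
#include <iostream>
#include <queue>
using namespace std;

int main() {
    queue<int> q;
    q.push(10);                             // Enqueue: add to the back
    q.push(20);
    q.push(30);
    cout << q.front() << endl;              // Peek: inspect the front (10)
    q.pop();                                // Dequeue: remove the first added element
    cout << q.size() << endl;               // Size: 2 elements remain
    cout << boolalpha << q.empty() << endl; // IsEmpty: false
    return 0;
}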
Tree
A tree is a non-linear and hierarchical data structure where the elements are arranged in a tree-
like structure. In a tree, the topmost node is called the root node. Each node contains some
data, and data can be of any type. It consists of a central node, structural nodes, and sub-nodes
which are connected via edges. Different tree data structures allow quicker and easier access to
the data as it is a non-linear data structure. A tree has various terminologies like Node, Root,
Edge, Height of a tree, Degree of a tree, etc.
There are different types of trees, like:
• Binary Tree,
• Binary Search Tree,
• AVL Tree,
• B-Tree, etc.
A tree is a non-linear data structure that consists of nodes connected by edges. Here are some
common operations performed on trees:
• Insertion: New nodes can be added to the tree to create a new branch or to increase
the height of the tree.
• Deletion: Nodes can be removed from the tree by updating the references of the
parent node to remove the reference to the current node.
• Search: Elements can be searched for in a tree by starting from the root node and
traversing the tree based on the value of the current node until the desired node is
found.
• Traversal: The elements in a tree can be traversed in several different ways,
including in-order, pre-order, and post-order traversal.
• Height: The height of the tree can be determined by counting the number of edges
from the root node to the furthest leaf node.
• Depth: The depth of a node can be determined by counting the number of edges
from the root node to the current node.
• Balancing: The tree can be balanced to ensure that the height of the tree is
minimized and the distribution of nodes is as even as possible.
These are some of the most common operations performed on trees. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Trees are commonly used in applications such as searching, sorting, and storing
hierarchical data.
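As a compact sketch of the insertion and traversal operations above, applied to a binary search tree (the node layout and keys below are illustrative, not from a specific library):
C++:
#include <iostream>
using namespace std;

struct Node {
    int key;
    Node *left = nullptr, *right = nullptr;
    Node(int k) : key(k) {}
};

// Insertion: walk down from the root and attach the new key as a leaf
Node* insert(Node* root, int key) {
    if (root == nullptr) return new Node(key);
    if (key < root->key) root->left = insert(root->left, key);
    else                 root->right = insert(root->right, key);
    return root;
}

// In-order traversal of a BST visits the keys in sorted order
void inorder(Node* root) {
    if (root == nullptr) return;
    inorder(root->left);
    cout << root->key << " ";
    inorder(root->right);
}

int main() {
    Node* root = nullptr;
    int keys[] = { 50, 30, 70, 20, 40 };
    for (int k : keys)
        root = insert(root, k);
    inorder(root); // prints 20 30 40 50 70
    cout << endl;
    return 0;
}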
Graph
A graph is a non-linear data structure that consists of vertices (or nodes) and edges. It consists
of a finite set of vertices and a set of edges, each connecting a pair of vertices. Graphs are used to
solve the most challenging and complex programming problems. It has different terminologies
which are Path, Degree, Adjacent vertices, Connected components, etc.
A graph is a non-linear data structure consisting of nodes and edges. Here are some common
operations performed on graphs:
• Add Vertex: New vertices can be added to the graph to represent a new node.
• Add Edge: Edges can be added between vertices to represent a relationship
between nodes.
• Remove Vertex: Vertices can be removed from the graph by updating the references
of adjacent vertices to remove the reference to the current vertex.
• Remove Edge: Edges can be removed by updating the references of the adjacent
vertices to remove the reference to the current edge.
• Depth-First Search (DFS): A graph can be traversed using a depth-first search by
visiting the vertices in a depth-first manner.
• Breadth-First Search (BFS): A graph can be traversed using a breadth-first search
by visiting the vertices in a breadth-first manner.
• Shortest Path: The shortest path between two vertices can be determined using
algorithms such as Dijkstra’s algorithm or A* algorithm.
• Connected Components: The connected components of a graph can be determined
by finding sets of vertices that are connected to each other but not to any other
vertices in the graph.
• Cycle Detection: Cycles in a graph can be detected by checking for back edges
during a depth-first search.
These are some of the most common operations performed on graphs. The specific operations
and algorithms used may vary based on the requirements of the problem and the programming
language used. Graphs are commonly used in applications such as computer networks, social
networks, and routing problems.
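A minimal sketch of an adjacency-list graph with Add Edge and a breadth-first search (the vertex count and edges below are arbitrary examples):
C++:
#include <iostream>
#include <vector>
#include <queue>
using namespace std;

int main() {
    int V = 5;                  // number of vertices
    vector<vector<int>> adj(V); // adjacency list

    // Add Edge: record the relationship in both directions (undirected)
    auto addEdge = [&](int u, int v) {
        adj[u].push_back(v);
        adj[v].push_back(u);
    };
    addEdge(0, 1); addEdge(0, 2); addEdge(1, 3); addEdge(2, 4);

    // Breadth-First Search from vertex 0: visit neighbors level by level
    vector<bool> visited(V, false);
    queue<int> q;
    q.push(0); visited[0] = true;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        cout << u << " ";
        for (int v : adj[u])
            if (!visited[v]) { visited[v] = true; q.push(v); }
    }
    cout << endl; // prints 0 1 2 3 4
    return 0;
}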
Real-Life Applications of Graph:
• One of the most common real-world examples of a graph is Google Maps, where cities are
represented as vertices and the roads connecting them as edges of the graph.
• A social network is also one real-world example of a graph where every person on
the network is a node, and all of their friendships on the network are the edges of
the graph.
• A graph is also used to study molecules in physics and chemistry.
Analysis of Algorithms
Generally, there is more than one way to solve a problem in computer science, with different
algorithms. Therefore, we need a method to compare the solutions in order to judge which one is
more optimal. The method must:
• Be independent of the machine, and its configuration, on which the algorithm is running.
• Show a direct correlation with the number of inputs.
• Distinguish two algorithms clearly and without ambiguity.
Time Complexity
The time complexity of an algorithm quantifies the amount of time taken by an algorithm to
run as a function of the length of the input. Note that the time to run is a function of the length
of the input and not the actual execution time of the machine on which the algorithm runs.
Definition: A valid algorithm takes a finite amount of time to execute. The time required
by an algorithm to solve a given problem is called the time complexity of the algorithm. Time
complexity is a very useful measure in algorithm analysis.
It is the time needed for the completion of an algorithm. To estimate the time complexity, we
need to consider the cost of each fundamental instruction and the number of times the
instruction is executed.
Example 1: Addition of two scalar variables.
Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C
The addition of two scalar numbers requires one addition operation. The time complexity of this
algorithm is constant, so T(n) = O(1).
In order to calculate time complexity on an algorithm, it is assumed that a constant time c is
taken to execute one operation, and then the total operations for an input length of N are
calculated. Consider an example to understand the process of calculation: Suppose a problem is
to find whether a pair (X, Y) exists in an array, A of N elements whose sum is Z. The simplest
idea is to consider every pair and check if it satisfies the given condition or not.
The pseudo-code is as follows:
int a[n];
for(int i = 0; i < n; i++)
    cin >> a[i];
for(int i = 0; i < n; i++)
    for(int j = i + 1; j < n; j++)
        if(a[i] + a[j] == z)
            return true;
return false;
C++:
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Returns true if some pair in a[] sums to z
bool findPair(int a[], int n, int z)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] + a[j] == z)
                return true;
    return false;
}
// Driver Code
int main(){
// Given Input
int a[] = { 1, -2, 1, 0, 5 };
int z = 0;
int n = sizeof(a) / sizeof(a[0]);
// Function Call
if (findPair(a, n, z))
cout << "True";
else
cout << "False";
return 0;
}
Java:
// Java program for the above approach
import java.lang.*;
import java.util.*;

class GFG{
    // Returns true if some pair in a[] sums to z
    static boolean findPair(int a[], int n, int z)
    {
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (a[i] + a[j] == z)
                    return true;
        return false;
    }
// Driver code
public static void main(String[] args)
{
// Given Input
int a[] = { 1, -2, 1, 0, 5 };
int z = 0;
int n = a.length;
// Function Call
if (findPair(a, n, z))
System.out.println("True");
else
System.out.println("False");
}
}
Output: False
Assuming that each of the operations in the computer takes approximately constant time, let it
be c. The number of lines of code executed actually depends on the value of Z. During
analysis of the algorithm, mostly the worst-case scenario is considered, i.e., when there is no
pair of elements whose sum equals Z. In the worst case,
• N*c operations are required for input.
• The outer loop (the i loop) runs N times.
• For each i, the inner loop (the j loop) runs N times.
So the total execution time is N*c + N*N*c + c. Now ignore the lower-order terms, since they
are relatively insignificant for large input; only the highest-order term, N*N, is taken
(without the constant). Different notations are used to describe the
limiting behavior of a function, but since the worst case is taken so big-O notation will be used
to represent the time complexity.
Hence, the time complexity is O(N²) for the above algorithm. Note that the time complexity is
solely based on the number of elements in array A, i.e., the input length, so if the length of
the array increases, the time of execution will also increase.
Order of growth is how the time of execution depends on the length of the input. In the above
example, it is clearly evident that the time of execution quadratically depends on the length of
the array. Order of growth will help to compute the running time with ease.
Another Example: Let’s calculate the time complexity of the below algorithm:
C++:
int count = 0;
for (int i = N; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count++;
Java:
int count = 0 ;
for (int i = N; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count++;
This is a tricky case. At first glance, it seems like the complexity is O(N * log N): N for
the j's loop and log(N) for the i's loop. But this is wrong. Let's see why.
Think about how many times count++ will run.
• When i = N, it will run N times.
• When i = N / 2, it will run N / 2 times.
• When i = N / 4, it will run N / 4 times.
• And so on.
The total number of times count++ runs is N + N/2 + N/4 + … + 1. This is a geometric series
that sums to less than 2N, so the time complexity is O(N).
A lot of students get confused while understanding the concept of time complexity, but in this
article, we will explain it with a very simple example.
Q. Imagine a classroom of 100 students in which you gave your pen to one person. You
have to find that pen without knowing to whom you gave it.
Here are some ways to find the pen and what the O order is.
• O(n²): You go and ask the first person in the class if he has the pen. Then you also ask
this person about each of the other 99 people in the classroom, and repeat this for every
person. This is what we call O(n²).
• O(n): Going and asking each student individually is O(n).
• O(log n): Now I divide the class into two groups, then ask: “Is it on the left side, or
the right side of the classroom?” Then I take that group and divide it into two and ask
again, and so on. Repeat the process till you are left with one student who has your
pen. This is what you mean by O(log n).
I might need to do:
• The O(n²) search if only one student knows on which student the pen is hidden.
• The O(n) search if one student had the pen and only they knew it.
• The O(log n) search if all the students knew, but would only tell me if I guessed the
right side.
The O above is called Big-O, which is an asymptotic notation. There are other asymptotic
notations, like Theta and Omega.
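For reference, the standard formal definition (stated here for completeness): f(N) = O(g(N)) means there exist constants c > 0 and N₀ such that f(N) ≤ c · g(N) for all N ≥ N₀. Omega gives an analogous lower bound, and Theta bounds a function from both sides.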
NOTE: We are interested in the rate of growth over time with respect to the inputs taken during
the program execution.
C++:
#include <iostream>
using namespace std;
int main(){
cout << "Hello World";
return 0;
}
Output
Hello World
Time Complexity: In the above code “Hello World” is printed only once on the screen.
So, the time complexity is constant: O(1) i.e. every time a constant amount of time is required to
execute code, no matter which operating system or which machine configurations you are using.
Auxiliary Space: O(1)
Example 2:
C++:
#include <iostream>
using namespace std;
int main(){
int i, n = 8;
for (i = 1; i <= n; i++) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Time Complexity: In the above code, "Hello World !!!" is printed n times on the screen (here 8),
and the value of n can change.
So, the time complexity is linear: O(n) i.e. every time, a linear amount of time is required to
execute code.
Auxiliary Space: O(1)
Example 3:
C++:
#include <iostream>
using namespace std;
int main(){
int i, n = 8;
for (i = 1; i <= n; i=i*2) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Hello World !!!
Hello World !!!
Hello World !!!
Time Complexity: O(log₂(n))
Auxiliary Space: O(1)
Example 4:
C++:
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
int i, n = 8;
for (i = 2; i <= n; i=pow(i,2)) {
cout << "Hello World !!!\n";
}
return 0;
}
Output
Hello World !!!
Hello World !!!
Time Complexity: O(log(log n))
Auxiliary Space: O(1)
C++:
// Pseudocode : Sum(a, b) { return a + b }
#include <iostream>
using namespace std;

int sum(int a, int b) {
    return a + b; // one arithmetic operation and one return
}

int main() {
    int a = 5, b = 6;
    cout << sum(a, b) << endl;
    return 0;
}
Output
11
Time Complexity:
• The above code takes 2 units of time (constant):
• one for the arithmetic operation, and
• one for the return (as per the above conventions).
• Therefore, the total cost to perform the sum operation is Tsum = 1 + 1 = 2.
• Time Complexity = O(2) = O(1), since 2 is a constant.
• Auxiliary Space: O(1)
C++:
// C++ program to find the sum of all array elements
#include <iostream>
using namespace std;

// A -> array and
// n -> number of elements in array
int list_Sum(int A[], int n)
{
    int sum = 0;
    for (int i = 0; i <= n - 1; i++) {
        sum = sum + A[i];
    }
    return sum;
}

int main(){
    int A[] = { 5, 6, 1, 2 };
    int n = sizeof(A) / sizeof(A[0]);
    cout << list_Sum(A, n);
    return 0;
}
Java:
// Java code for the above approach
import java.io.*;

class GFG {
    // A -> array and
    // n -> number of elements in array
    static int listSum(int A[], int n)
    {
        int sum = 0;
        for (int i = 0; i <= n - 1; i++) {
            sum = sum + A[i];
        }
        return sum;
    }

    public static void main(String[] args)
    {
        int A[] = { 5, 6, 1, 2 };
        System.out.println(listSum(A, A.length));
    }
}
C++:
// C++ program to find the sum of all elements of a 2D array
#include <iostream>
using namespace std;

int main(){
    int n = 3;
    int m = 3;
    int arr[][3]
        = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
    int sum = 0;

    // Visit every element once using two nested loops
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            sum += arr[i][j];

    cout << sum << endl;
    return 0;
}
Java:
/*package whatever //do not write package name here */
import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        int n = 3;
        int m = 3;
        int arr[][]
            = { { 3, 2, 7 }, { 2, 6, 8 }, { 5, 1, 9 } };
        int sum = 0;

        // Visit every element once using two nested loops
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                sum += arr[i][j];

        System.out.println(sum);
    }
}
Output
43
Time Complexity: O(n*m)
The program iterates through all the elements in the 2D array using two nested loops. The outer
loop iterates n times and the inner loop iterates m times for each iteration of the outer loop.
Therefore, the time complexity of the program is O(n*m).
Auxiliary Space: O(n*m)
The program stores the 2D array, which requires space for n*m integers, along with a few integer
variables, including a single variable to hold the sum of the elements. Therefore, the auxiliary
space complexity of the program is O(n*m + 1), which simplifies to O(n*m).
In conclusion, the time complexity of the program is O(n*m), and the auxiliary space complexity
is also O(n*m).
So from the above examples, we can conclude that the time of execution increases with the type
of operations we make using the inputs.
Space Complexity
Definition:
Problem-solving using a computer requires memory to hold temporary data or the final result
while the program is in execution. The amount of memory required by the algorithm to solve a
given problem is called the space complexity of the algorithm.
The space complexity of an algorithm quantifies the amount of space taken by an algorithm to
run as a function of the length of the input. Consider an example: Suppose a problem to find
the frequency of array elements.
It is the amount of memory needed for the completion of an algorithm.
To estimate the memory requirement we need to focus on two parts:
(1) A fixed part: It is independent of the input size. It includes memory for instructions (code),
constants, variables, etc.
(2) A variable part: It is dependent on the input size. It includes memory for recursion stack,
referenced variables, etc.
Example: Addition of two scalar variables (the ADD SCALAR algorithm from earlier).
C <- A + B
return C
The addition of two scalar numbers requires one extra memory location to hold the result. Thus
the space complexity of this algorithm is constant, hence S(n) = O(1).
The pseudo-code is as follows:
int freq[n];
int a[n];
for(int i = 0; i<n; i++){
cin>>a[i];
freq[a[i]]++;
}
Below is the implementation of the above approach:
C++:
// C++ program to count the frequencies of array elements
#include <bits/stdc++.h>
using namespace std;

// Count and print the frequency of each distinct element
void countFreq(int arr[], int n)
{
    map<int, int> freq; // plays the role of freq[] in the pseudo-code
    for (int i = 0; i < n; i++)
        freq[arr[i]]++;
    for (auto x : freq)
        cout << x.first << " " << x.second << endl;
}

// Driver Code
int main(){
    // Given array
    int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };
    int n = sizeof(arr) / sizeof(arr[0]);
    // Function Call
    countFreq(arr, n);
    return 0;
}
Java:
// Java program for the above approach
import java.util.*;

class GFG{
    // Count and print the frequency of each distinct element
    static void countFreq(int arr[], int n)
    {
        Map<Integer, Integer> freq = new TreeMap<>();
        for (int i = 0; i < n; i++)
            freq.put(arr[i], freq.getOrDefault(arr[i], 0) + 1);
        for (Map.Entry<Integer, Integer> e : freq.entrySet())
            System.out.println(e.getKey() + " " + e.getValue());
    }

    // Driver Code
    public static void main(String[] args)
    {
        // Given array
        int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };
        int n = arr.length;
        // Function Call
        countFreq(arr, n);
    }
}
Output
5 1
10 3
20 4
Here two arrays of length N and a variable i are used in the algorithm, so the total space used
is N * c + N * c + 1 * c = 2N * c + c, where c is the unit space taken. For large inputs, the
constant c is insignificant, and it can be said that the space complexity is O(N).
There is also auxiliary space, which is different from space complexity. The main difference
is that space complexity quantifies the total space used by the algorithm, whereas auxiliary
space quantifies only the extra space used apart from the given input. In the above
example, the auxiliary space is the space used by the freq[] array because that is not part of the
given input. So total auxiliary space is N * c + c which is O(N) only.
What does ‘Space Complexity’ mean: The term Space Complexity is misused for Auxiliary
Space at many places. Following are the correct definitions of Auxiliary Space and Space
Complexity.
Auxiliary Space is the extra space or temporary space used by an algorithm.
The space Complexity of an algorithm is the total space taken by the algorithm with respect to
the input size. Space complexity includes both Auxiliary space and space used by input.
For example, if we want to compare standard sorting algorithms on the basis of space, then
Auxiliary Space would be a better criterion than Space Complexity. Merge Sort uses O(n)
auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space. The space
complexity of all these sorting algorithms is O(n), though.
Space complexity is a parallel concept to time complexity. If we need to create an array of size
n, this will require O(n) space. If we create a two-dimensional array of size n*n, this will
require O(n²) space.
In recursive calls, stack space also counts, since each pending recursive call occupies a stack
frame.
Example:
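A minimal sketch of this effect (the function below is hypothetical, written only to illustrate recursion depth): summing 1..n recursively keeps n frames on the call stack, so the auxiliary space is O(n).
C++:
#include <iostream>
using namespace std;

// Each pending call occupies one stack frame, so computing sumTo(n)
// uses O(n) stack space even though no array is allocated
int sumTo(int n) {
    if (n == 0)
        return 0; // base case reached after n nested calls
    return n + sumTo(n - 1);
}

int main() {
    cout << sumTo(5) << endl; // prints 15
    return 0;
}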
Sorting Algorithms
What is Sorting?
A Sorting Algorithm is used to rearrange a given array or list of elements according to a comparison
operator on the elements. The comparison operator is used to decide the new order of elements in
the respective data structure.
For Example: The below list of characters is sorted in increasing order of their ASCII values. That
is, the character with a lesser ASCII value will be placed first than the character with a higher
ASCII value.
Example of Sorting
1. Selection sort is a simple and efficient sorting algorithm that works by repeatedly
selecting the smallest (or largest) element from the unsorted portion of the list and
moving it to the sorted portion of the list.
The algorithm repeatedly selects the smallest (or largest) element from the unsorted portion of
the list and swaps it with the first element of the unsorted part. This process is repeated for the
remaining unsorted portion until the entire list is sorted.
How does Selection Sort Algorithm work?
Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
First pass:
• For the first position in the sorted array, the whole array is traversed from index 0
to 4 sequentially. The first position is where 64 is stored presently; after traversing
the whole array, it is clear that 11 is the lowest value.
• Thus, swap 64 with 11. After the first iteration, 11, the least value in the array,
appears in the first position of the sorted list.
Selection Sort Algorithm | Swapping 1st element with the minimum in array
Second pass:
• For the second position, where 25 is present, again traverse the rest of the array in
a sequential manner.
• After traversing, we find that 12 is the second lowest value in the array and that it
should appear in the second place, so swap these values.
Selection Sort Algorithm | swapping i=1 with the next minimum element
Third Pass:
• Now, for the third place, where 25 is present, again traverse the rest of the array and
find the third least value present in it.
• While traversing, 22 turned out to be the third least value and it should appear in the
third place in the array; thus, swap 22 with the element present at the third position.
Selection Sort Algorithm | swapping i=2 with the next minimum element
Fourth pass:
• Similarly, for the fourth position, traverse the rest of the array and find the fourth
least element in it.
• As 25 is the fourth lowest value, it will be placed at the fourth position.
Selection Sort Algorithm | swapping i=3 with the next minimum element
Fifth Pass:
• At last, the largest value present in the array automatically gets placed at the last
position in the array.
• The resulting array is the sorted array.
C++:
// C++ program for implementation of Selection Sort
#include <bits/stdc++.h>
using namespace std;

// Repeatedly select the minimum of the unsorted part and
// swap it with the first unsorted element
void selectionSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        swap(arr[min_idx], arr[i]);
    }
}

// Function to print an array
void printArray(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
    cout << endl;
}

// Driver program
int main()
{
    int arr[] = { 64, 25, 12, 22, 11 };
    int n = sizeof(arr) / sizeof(arr[0]);
    // Function Call
    selectionSort(arr, n);
    cout << "Sorted array: \n";
    printArray(arr, n);
    return 0;
}
Java:
// Java program for implementation of Selection Sort
import java.io.*;
public class SelectionSort
{
    // Repeatedly select the minimum of the unsorted part and
    // swap it with the first unsorted element
    void sort(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min_idx = i;
            for (int j = i + 1; j < n; j++)
                if (arr[j] < arr[min_idx])
                    min_idx = j;
            int temp = arr[min_idx];
            arr[min_idx] = arr[i];
            arr[i] = temp;
        }
    }

    // Driver code
    public static void main(String args[])
    {
        int arr[] = { 64, 25, 12, 22, 11 };
        new SelectionSort().sort(arr);
        System.out.println("Sorted array: ");
        for (int x : arr)
            System.out.print(x + " ");
    }
}
Output
Sorted array:
11 12 22 25 64
Complexity Analysis of Selection Sort
Time Complexity: The time complexity of Selection Sort is O(N²) as there are two nested
loops:
• One loop to select an element of Array one by one = O(N)
• Another loop to compare that element with every other Array element = O(N)
• Therefore, the overall complexity = O(N) * O(N) = O(N*N) = O(N²)
Auxiliary Space: O(1) as the only extra memory used is for temporary variables while
swapping two values in Array. The selection sort never makes more than O(N) swaps and can
be useful when memory writing is costly.
2. Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the
adjacent elements if they are in the wrong order. This algorithm is not suitable for large
data sets as its average and worst-case time complexity is quite high.
Bubble Sort Algorithm
In this algorithm,
• traverse from the left and compare adjacent elements; the higher one is placed on the
right side.
• In this way, the largest element is moved to the rightmost end at first.
• This process is then continued to find the second largest and place it and so on until
the data is sorted.
How does Bubble Sort Work?
Let us understand the working of bubble sort with the help of the following illustration:
Input: arr[] = {6, 3, 0, 5}
First Pass:
The largest element is placed in its correct position, i.e., the end of the array
Second Pass:
Place the second largest element at correct position
Bubble Sort Algorithm: Placing the second largest element at correct position
Third Pass:
Place the remaining two elements at their correct positions.
Bubble Sort Algorithm: Placing the remaining elements at their correct positions
Implementation of Bubble Sort
Below is the implementation of the bubble sort. It can be optimized by stopping the algorithm if
the inner loop didn’t cause any swap.
Java:
// Java program for implementation of Bubble Sort
import java.io.*;
class GFG {
    // Repeatedly swap adjacent elements that are in the wrong order
    static void bubbleSort(int arr[], int n)
    {
        for (int i = 0; i < n - 1; i++)
            for (int j = 0; j < n - i - 1; j++)
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
    }

    // Function to print an array
    static void printArray(int arr[], int n)
    {
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }

    // Driver program
    public static void main(String args[])
    {
        int arr[] = { 64, 34, 25, 12, 22, 11, 90 };
        int n = arr.length;
        bubbleSort(arr, n);
        System.out.println("Sorted array: ");
        printArray(arr, n);
    }
}
Output
Sorted array:
11 12 22 25 34 64 90
Complexity Analysis of Bubble Sort:
Time Complexity: O(N²)
Auxiliary Space: O(1)
Advantages of Bubble Sort:
• Bubble sort is easy to understand and implement.
• It does not require any additional memory space.
• It is a stable sorting algorithm, meaning that elements with the same key value
maintain their relative order in the sorted output.
3. Insertion sort is a simple sorting algorithm that works similar to the way you sort
playing cards in your hands. The array is virtually split into a sorted and an unsorted
part. Values from the unsorted part are picked and placed at the correct position in the
sorted part.
12 11 13 5 6
First Pass:
• Initially, the first two elements of the array are compared in insertion sort.
12 11 13 5 6
• Here, 12 is greater than 11; hence they are not in ascending order and 12 is not at
its correct position. Thus, swap 11 and 12.
• So, for now, 11 is stored in the sorted sub-array.
11 12 13 5 6
Second Pass:
• Now, move to the next two elements and compare them
11 12 13 5 6
• Here, 13 is greater than 12, so both elements are already in ascending order; hence,
no swapping occurs. 12 is also stored in the sorted sub-array, along with 11.
Third Pass:
• Now, two elements are present in the sorted sub-array which are 11 and 12
• Moving forward to the next two elements which are 13 and 5
11 12 13 5 6
• Both 5 and 13 are not at their correct places, so swap them
11 12 5 13 6
• After swapping, elements 12 and 5 are still not sorted; thus, swap again
11 5 12 13 6
5 11 12 13 6
Fourth Pass:
• Now, the elements which are present in the sorted sub-array are 5, 11 and 12
• Moving to the next two elements 13 and 6
5 11 12 13 6
• Clearly, they are not sorted; thus, swap them
5 11 12 6 13
5 11 6 12 13
5 6 11 12 13
• Finally, the array is completely sorted.
Implementation of Insertion Sort
C++:
// C++ program for implementation of Insertion Sort
#include <bits/stdc++.h>
using namespace std;

// Move each element into its correct position within the sorted part
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        // Shift elements of the sorted part that are greater than key
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

// Function to print an array
void printArray(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6 };
    int N = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, N);
    printArray(arr, N);
    return 0;
}
Java:
// Java program for implementation of Insertion Sort
class InsertionSort {
    // Move each element into its correct position within the sorted part
    static void insertionSort(int arr[])
    {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;
        }
    }

    // Function to print an array
    static void printArray(int arr[])
    {
        for (int x : arr)
            System.out.print(x + " ");
        System.out.println();
    }

    // Driver method
    public static void main(String args[]){
        int arr[] = { 12, 11, 13, 5, 6 };
        insertionSort(arr);
        printArray(arr);
    }
};
Output
5 6 11 12 13
Time Complexity: O(N²)
Auxiliary Space: O(1)
4. Merge sort is defined as a sorting algorithm that works by dividing an array into
smaller subarrays, sorting each subarray, and then merging the sorted subarrays back
together to form the final sorted array.
In simple terms, we can say that the process of merge sort is to divide the array into two
halves, sort each half, and then merge the sorted halves back together. This process is repeated
until the entire array is sorted.
• These subarrays are further divided into two halves, until they become arrays of unit
length that can no longer be divided; an array of unit length is always sorted.
Merge Sort: Divide the subarrays into two halves (unit length subarrays here)
• These sorted subarrays are merged together, and we get bigger sorted subarrays.
Merge Sort: Merge the unit length subarrays into sorted subarrays
This merging process is continued until the sorted array is built from the smaller subarrays.
Merge Sort: Merge the sorted subarrays to get the sorted array
The following diagram shows the complete merge sort process for an example array {38, 27,
43, 3, 9, 82, 10}.
Below is the Code implementation of Merge Sort.
C++:
// C++ program for Merge Sort
#include <bits/stdc++.h>
using namespace std;

// Merges the two sorted subarrays arr[l..m] and arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    vector<int> L(arr + l, arr + m + 1);
    vector<int> R(arr + m + 1, arr + r + 1);
    int i = 0, j = 0, k = l;
    while (i < (int)L.size() && j < (int)R.size())
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < (int)L.size())
        arr[k++] = L[i++];
    while (j < (int)R.size())
        arr[k++] = R[j++];
}

// Sorts arr[l..r] by sorting both halves and merging them
void mergeSort(int arr[], int l, int r)
{
    if (l >= r)
        return;
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    merge(arr, l, m, r);
}

// UTILITY FUNCTIONS
// Function to print an array
void printArray(int A[], int size)
{
    for (int i = 0; i < size; i++)
        cout << A[i] << " ";
    cout << endl;
}

// Driver code
int main()
{
    int arr[] = { 12, 11, 13, 5, 6, 7 };
    int arr_size = sizeof(arr) / sizeof(arr[0]);
    cout << "Given array is \n";
    printArray(arr, arr_size);
    mergeSort(arr, 0, arr_size - 1);
    cout << "Sorted array is \n";
    printArray(arr, arr_size);
    return 0;
}
Java:
// Java program for Merge Sort
import java.io.*;
class MergeSort {
    // Sorts arr[l..r] by sorting both halves and merging them
    static void mergeSort(int arr[], int l, int r)
    {
        if (l >= r)
            return;
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        // Merge the two sorted halves into a temporary array
        int[] tmp = new int[r - l + 1];
        int i = l, j = m + 1, k = 0;
        while (i <= m && j <= r)
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i <= m)
            tmp[k++] = arr[i++];
        while (j <= r)
            tmp[k++] = arr[j++];
        System.arraycopy(tmp, 0, arr, l, tmp.length);
    }

    // Function to print an array
    static void printArray(int arr[])
    {
        for (int x : arr)
            System.out.print(x + " ");
        System.out.println();
    }

    // Driver code
    public static void main(String args[])
    {
        int arr[] = { 12, 11, 13, 5, 6, 7 };
        System.out.println("Given array is");
        printArray(arr);
        mergeSort(arr, 0, arr.length - 1);
        System.out.println("Sorted array is");
        printArray(arr);
    }
}
Output
Given array is
12 11 13 5 6 7
Sorted array is
5 6 7 11 12 13
Complexity Analysis of Merge Sort:
Time Complexity: O(N log(N)). Merge Sort is a recursive algorithm, and its time complexity can
be expressed as the following recurrence relation:
T(n) = 2T(n/2) + Θ(n)
The above recurrence can be solved either using the Recurrence Tree method or the Master
method. It falls in case II of the Master Method, and the solution of the recurrence is
Θ(N log N). The time complexity of Merge Sort is Θ(N log N) in all 3 cases (worst, average,
and best), as merge sort always divides the array into two halves and takes linear time to merge
two halves.
Auxiliary Space: O(N), In merge sort all elements are copied into an auxiliary array. So N
auxiliary space is required for merge sort.
5. Quick Sort is a sorting algorithm based on the Divide and Conquer algorithm that
picks an element as a pivot and partitions the given array around the picked pivot by
placing the pivot in its correct position in the sorted array.
How does Quick Sort work?
The key process in Quick Sort is partition(). The target of partition is to place the pivot (any
element can be chosen to be a pivot) at its correct position in the sorted array and put all
smaller elements to the left of the pivot, and all greater elements to the right of the pivot.
Partition is done recursively on each side of the pivot after the pivot is placed in its correct
position and this finally sorts the array.
Choice of Pivot:
There are many different choices for picking pivots.
• Always pick the first element as a pivot.
• Always pick the last element as a pivot (implemented below)
• Pick a random element as a pivot.
• Pick the middle as the pivot.
Partition Algorithm:
The logic is simple, we start from the leftmost element and keep track of the index of smaller
(or equal) elements as i. While traversing, if we find a smaller element, we swap the current
element with arr[i]. Otherwise, we ignore the current element.
Let us understand the working of partition and the Quick Sort algorithm with the help of the
following example:
Consider: arr[] = {10, 80, 30, 90, 40}.
• Compare 10 with the pivot; as it is less than the pivot, arrange it accordingly.
Partition in Quick Sort: Compare pivot with 10
• Compare 80 with the pivot. It is greater than the pivot.
C++:
// C++ program for Quick Sort
#include <bits/stdc++.h>
using namespace std;

// Place the last element (the pivot) at its correct position, with
// smaller elements on its left and greater elements on its right
int partition(int arr[], int low, int high)
{
    int pivot = arr[high];
    int i = low - 1; // index of the last element smaller than the pivot
    for (int j = low; j < high; j++)
        if (arr[j] < pivot)
            swap(arr[++i], arr[j]);
    swap(arr[i + 1], arr[high]);
    return i + 1;
}

// Recursively sort the parts on each side of the pivot
void quickSort(int arr[], int low, int high)
{
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Driver Code
int main()
{
    int arr[] = { 10, 7, 8, 9, 1, 5 };
    int N = sizeof(arr) / sizeof(arr[0]);
    // Function call
    quickSort(arr, 0, N - 1);
    cout << "Sorted array: " << endl;
    for (int i = 0; i < N; i++)
        cout << arr[i] << " ";
    return 0;
}
Java:
// Java program for Quick Sort
class GFG {
    // Place the last element (the pivot) at its correct position
    static int partition(int[] arr, int low, int high)
    {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++)
            if (arr[j] < pivot) {
                i++;
                int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
            }
        int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
        return i + 1;
    }

    // Recursively sort the parts on each side of the pivot
    static void quickSort(int[] arr, int low, int high)
    {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }

    static void printArr(int[] arr)
    {
        for (int x : arr)
            System.out.print(x + " ");
    }

    // Driver Code
    public static void main(String[] args)
    {
        int[] arr = { 10, 7, 8, 9, 1, 5 };
        int N = arr.length;
        // Function call
        quickSort(arr, 0, N - 1);
        System.out.println("Sorted array:");
        printArr(arr);
    }
}
Output
Sorted array:
1 5 7 8 9 10
Complexity Analysis of Quick Sort:
Time Complexity:
• Best Case: O(N log N)
• Average Case: O(N log N)
• Worst Case: O(N²)
Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored.
Based on the type of search operation, these algorithms are generally classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every
element is checked. For example: Linear Search.
2. Interval Search: These algorithms are specifically designed for searching in sorted
data structures. These types of searching algorithms are much more efficient than
Linear Search as they repeatedly target the center of the search structure and divide
the search space in half. For Example: Binary Search.
Binary Search to find the element “23” in a given list of numbers
Linear Search
Linear Search is defined as a sequential search algorithm that starts at one end and goes
through each element of a list until the desired element is found, otherwise the search continues
till the end of the data set.
For example: Consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and key = 30
Step 1: Start from the first element (index 0) and compare key with each element (arr[i]).
• Comparing key with the first element arr[0]. Since they are not equal, the iterator
moves to the next element as a potential match.
Step 2: Now when comparing arr[2] with key, the value matches. So the Linear Search
Algorithm will yield a successful message and return the index of the element when key is found
(here 2).
C:
// C code to linearly search x in arr[]
#include <stdio.h>

// Return the index of x in arr[], or -1 if it is absent
int search(int arr[], int N, int x)
{
    for (int i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}

// Driver code
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);
    // Function call
    int result = search(arr, N, x);
    (result == -1)
        ? printf("Element is not present in array")
        : printf("Element is present at index %d", result);
    return 0;
}
C++:
// C++ code to linearly search x in arr[].
#include <bits/stdc++.h>
using namespace std;

// Return the index of x in arr[], or -1 if it is absent
int search(int arr[], int N, int x)
{
    for (int i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}
// Driver code
int main(void){
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int N = sizeof(arr) / sizeof(arr[0]);
// Function call
int result = search(arr, N, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}
Java:
import java.io.*;
class GFG {
public static int search(int arr[], int N, int x)
{
for (int i = 0; i < N; i++) {
if (arr[i] == x)
return i;
}
return -1;
}
// Driver code
public static void main(String args[])
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
// Function call
int result = search(arr, arr.length, x);
if (result == -1)
System.out.print(
"Element is not present in array");
else
System.out.print("Element is present at index "
+ result);
}
}
Output
Element is present at index 3
Complexity Analysis of Linear Search:
Time Complexity:
• Best Case: In the best case, the key might be present at the first index. So the best
case complexity is O(1)
• Worst Case: In the worst case, the key might be present at the last index i.e., opposite
to the end from which the search has started in the list. So the worst-case complexity
is O(N) where N is the size of the list.
• Average Case: O(N)
Auxiliary Space: O(1) as except for the variable to iterate through the list, no other variable is
used.
Binary Search
Binary Search is defined as a searching algorithm used in a sorted array by repeatedly dividing
the search interval in half. The idea of binary search is to use the information that the array is
sorted and reduce the time complexity to O(log N).
• Compare the middle element of the search space with the key.
• If the key is found at middle element, the process is terminated.
• If the key is not found at middle element, choose which half will be used as the next
search space.
• If the key is smaller than the middle element, then the left side is used for
next search.
• If the key is larger than the middle element, then the right side is used for
next search.
• This process is continued until the key is found or the total search space is exhausted.
First Step: Calculate mid and compare the mid element with the key. If the key is less than the
mid element, move the search space to the left, and if it is greater than mid, move the search
space to the right.
• Key (i.e., 23) is greater than current mid element (i.e., 16). The search space moves to
the right.
Binary Search Algorithm: Compare key with 16
• Key is less than the current mid 56. The search space moves to the left.
Here we use a while loop to continue the process of comparing the key and splitting the search
space in two halves.
Implementation of Iterative Binary Search Algorithm:
C:
// C program to implement iterative Binary Search
#include <stdio.h>

// Repeatedly halve the search space [low..high] until x is found
int binarySearch(int arr[], int low, int high, int x)
{
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] < x)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}
// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int n = sizeof(arr) / sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n - 1, x);
(result == -1) ? printf("Element is not present"
" in array")
: printf("Element is present at "
"index %d",
result);
return 0;
}
C++:
// C++ program to implement iterative Binary Search
#include <bits/stdc++.h>
using namespace std;

// Repeatedly halve the search space [low..high] until x is found
int binarySearch(int arr[], int low, int high, int x)
{
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] < x)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}
// Driver code
int main(void)
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}
Java:
// Java implementation of iterative Binary Search
import java.io.*;
class BinarySearch {
    // Repeatedly halve the search space until x is found
    int binarySearch(int arr[], int x)
    {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] == x)
                return mid;
            if (arr[mid] < x)
                low = mid + 1;
            else
                high = mid - 1;
        }
        return -1;
    }
// Driver code
public static void main(String args[])
{
BinarySearch ob = new BinarySearch();
int arr[] = { 2, 3, 4, 10, 40 };
int n = arr.length;
int x = 10;
int result = ob.binarySearch(arr, x);
if (result == -1)
System.out.println(
"Element is not present in array");
else
System.out.println("Element is present at "
+ "index " + result);
}
}
Output
Element is present at index 3
Time Complexity: O(log N)
Auxiliary Space: O(1)
Create a recursive function and compare the mid of the search space with the key. Based on the
result, either return the index where the key is found or call the recursive function for the
next search space.
Implementation of Recursive Binary Search Algorithm:
C:
// C program to implement recursive Binary Search
#include <stdio.h>

// Search x in arr[low..high]; return its index or -1
int binarySearch(int arr[], int low, int high, int x)
{
    if (low > high)
        return -1;
    int mid = low + (high - low) / 2;
    if (arr[mid] == x)
        return mid;
    if (arr[mid] > x)
        return binarySearch(arr, low, mid - 1, x);
    return binarySearch(arr, mid + 1, high, x);
}
// Driver code
int main()
{
int arr[] = { 2, 3, 4, 10, 40 };
int n = sizeof(arr) / sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}
C++:
// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;

// Search x in arr[low..high]; return its index or -1
int binarySearch(int arr[], int low, int high, int x)
{
    if (low > high)
        return -1;
    int mid = low + (high - low) / 2;
    if (arr[mid] == x)
        return mid;
    if (arr[mid] > x)
        return binarySearch(arr, low, mid - 1, x);
    return binarySearch(arr, mid + 1, high, x);
}
// Driver code
int main()
{
int arr[] = { 2, 3, 4, 10, 40 };
int x = 10;
int n = sizeof(arr) / sizeof(arr[0]);
int result = binarySearch(arr, 0, n - 1, x);
(result == -1)
? cout << "Element is not present in array"
: cout << "Element is present at index " << result;
return 0;
}
Java:
// Java implementation of recursive Binary Search
class BinarySearch {
    // Search x in arr[low..high]; return its index or -1
    int binarySearch(int arr[], int low, int high, int x)
    {
        if (low > high)
            return -1;
        int mid = low + (high - low) / 2;
        if (arr[mid] == x)
            return mid;
        if (arr[mid] > x)
            return binarySearch(arr, low, mid - 1, x);
        return binarySearch(arr, mid + 1, high, x);
    }
// Driver code
public static void main(String args[])
{
BinarySearch ob = new BinarySearch();
int arr[] = { 2, 3, 4, 10, 40 };
int n = arr.length;
int x = 10;
int result = ob.binarySearch(arr, 0, n - 1, x);
if (result == -1)
System.out.println(
"Element is not present in array");
else
System.out.println(
"Element is present at index " + result);
}
}
Output
Element is present at index 3
Complexity Analysis of Binary Search:
• Time Complexity:
• Best Case: O(1)
• Average Case: O(log N)
• Worst Case: O(log N)
• Auxiliary Space: O(1) for the iterative version; if the recursive call stack is
considered, the auxiliary space will be O(log N).
Advantages of Binary Search:
• Binary search is faster than linear search, especially for large arrays.
• More efficient than other searching algorithms with a similar time complexity, such as
interpolation search or exponential search.
• Binary search is well-suited for searching large datasets that are stored in external
memory, such as on a hard drive or in the cloud.