Data Structures and Algorithms
ALGORITHMS
Centre for Distance and Online Education
Online MCA Program
Data Structures and Algorithms
Semester: 1
Author
Credits
Centre for Distance and Online Education,
Parul University,
391760.
Website: https://paruluniversity.ac.in/
Disclaimer
This content is protected by CDOE, Parul University. It is sold under the stipulation that it cannot be
lent, resold, hired out, or otherwise circulated without obtaining prior written consent from the
publisher. The content should remain in the same binding or cover as it was initially published, and this
requirement should also extend to any subsequent purchaser. Furthermore, it is important to note that,
in compliance with the copyright protections outlined above, no part of this publication may be
reproduced, stored in a retrieval system, or transmitted through any means (including electronic,
mechanical, photocopying, recording, or otherwise) without obtaining the prior
written permission from both the copyright owner and the publisher of this
content.
Note to Students
These course notes are intended for the exclusive use of students enrolled in
Online MCA. They are not to be shared or distributed without explicit permission
from the University. Any unauthorized sharing or distribution of these materials
may result in academic and legal consequences.
Computer programs consist of sets of instructions designed to carry out specific tasks. In order
to accomplish these tasks, computers require the ability to store and retrieve data, as well as
perform calculations on that data. To facilitate efficient data management, programmers utilize
data structures, which are named entities employed for storing and organizing data.
Data structures can be described as a collection of data elements that offer an effective means
of storing and organizing data in a computer system, enabling efficient access and utilization.
Diverse types of data structures are utilized in various ways by nearly every enterprise
application.
WHAT IS AN ALGORITHM?
It is a systematic approach that involves a series of sequential steps to solve a given problem.
From a data structure perspective, there are several important categories of algorithms,
including:
● Search: Algorithms designed to locate an item within a collection of records.
● Sort: Algorithms used to arrange objects in a specific order.
● Insert: Algorithms for adding new objects into a record or data structure.
● Update: Algorithms that modify existing objects within a record or data structure.
● Delete: Algorithms that remove a specific object from a structure or collection.
Characteristics of an Algorithm
Not all procedures can be classified as algorithms; they must adhere to the following principles:
● Unambiguous: An algorithm must be clear and unambiguous, with each step and its
inputs/outputs precisely defined to achieve the desired outcome.
● Input: An algorithm should have zero or more well-defined inputs.
● Output: An algorithm must produce one or more clearly defined outputs that align with
the desired result.
● Finiteness: Algorithms must conclude after a finite number of steps.
● Feasibility: They should be feasible, considering the available resources.
● Independence: An algorithm should consist of step-by-step instructions that are
independent of any specific programming code.
Algorithms are not designed to be specific to any programming code. They are developed in a
step-by-step manner, independent of any specific programming language.
Example
Problem 1 − Design an algorithm for addition of two numbers and display the result.
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum.
sum←num1+num2
Step 5: Display sum
Step 6: Stop
Problem 2 − Design an algorithm to find the largest among three numbers.
Step 1: Start
Step 2: Declare variables a,b and c.
Step 3: Read variables a,b and c.
Step 4: If a > b
If a > c
Display a is the largest number.
Else
Display c is the largest number.
Else
If b > c
Display b is the largest number.
Else
Display c is the largest number.
Step 5: Stop
KEY TAKEAWAYS
● Data structures can be described as a collection of data elements that offer an effective
means of storing and organizing data in a computer system, enabling efficient access
and utilization.
● An algorithm is a systematic procedure that outlines a sequence of instructions to be
executed in a specific order to solve a given problem and produce the desired output.
BASICS OF DATA STRUCTURES
SUB LESSON 1.2
BASIC TERMINOLOGIES
Time complexity quantifies the amount of computer time needed for the algorithm or
program to reach its end.
Typically, the time required by an algorithm can be categorized into three types:
● Worst case: It represents the input that causes the algorithm to take the maximum
amount of time to execute.
● Average case: It refers to the typical or expected time taken by the algorithm for a
random or average input.
● Best case: It represents the input for which the algorithm takes the minimum amount of
time to execute.
An algorithm's space requirement is typically composed of two parts:
1. Fixed part: This part is independent of the input size and includes memory allocation for
instructions (code), constants, variables, and other static components.
2. Variable part: This part is dependent on the input size and includes memory allocation
for dynamic components such as recursion stack, referenced variables, and other data
structures that vary based on the input.
KEY TAKEAWAYS
● Time complexity quantifies the computer time an algorithm needs; it is analyzed for the
worst, average, and best cases.
● The space an algorithm requires consists of a fixed part (independent of input size) and
a variable part (dependent on input size).
DATA STRUCTURE
A data structure refers to a memory component utilized for storing and arranging data,
providing a means to efficiently access and modify information on a computer. When selecting
a suitable file model for your project, it is crucial to consider your specific requirements. For
instance, an array data structure may be preferred when there is a need to allocate memory for
storing data in a particular sequence.
Data structures serve as a fundamental aspect of any programmable system that handles
storage-related challenges. Storage issues are inherent in most programs, particularly when
working with data.
Data structures were initially developed to organize, manage, and manipulate records within
programming languages, simplifying and streamlining the process of accessing and processing
information. The concept of a data structure itself is independent of any specific programming
language.
Data structures offer an effective approach to store and access large volumes of records.
Various fields of programming, such as AI, databases, and others, investigate the challenge of
efficient data storage.
CLASSIFICATION OF DATA STRUCTURES
Depending on how they are represented in memory, data structures divide into two categories:
linear data structures, whose elements are arranged in contiguous memory locations, and
non-linear data structures, which store records in a hierarchical form.
Each type of data structure offers distinct capabilities. Understanding the differences between
primary data structure types allows for the selection of the most appropriate solution for a
given problem.
Array - An array is a linear data structure that stores a collection of items in contiguous memory
locations. It enables the storage of multiple items of the same type in a single place. Arrays
facilitate efficient processing of large amounts of data within a relatively short time. The
indexing of elements in an array starts from 0. Various operations can be performed on an
array, including searching, sorting, inserting, traversing, reversing, and deleting.
Stack - A stack is a linear data structure that follows a specific order known as LIFO (Last In First
Out). In a stack, data can only be inserted and removed from one end. The process of inserting
data is referred to as the push operation, while removing data is known as the pop operation.
Queue - A queue is a linear data structure that operates based on a specific order called First In
First Out (FIFO), meaning the item that is stored first will be accessed first. Unlike a stack, in a
queue, data items are entered and retrieved from different ends. A common example of a
queue is a line of consumers waiting for a resource, where the consumer who arrived first is
served first.
Linked List - A linked list is a linear data structure where elements are not stored at contiguous
memory locations. Instead, the elements in a linked list are connected using pointers: each
node holds its data together with a pointer to the next node.
KEY TAKEAWAYS
● A data structure refers to a memory component utilized for storing and arranging data,
providing a means to efficiently access and modify information on a computer.
● Linear data structures are constructed by arranging data elements in continuous
memory locations.
● Non-Linear data structures store records in a hierarchical form, unlike linear data
structures.
● An array is a linear data structure that stores a collection of items in contiguous memory
locations.
● A stack is a linear data structure that follows a specific order known as LIFO (Last In First
Out).
● A queue is a linear data structure that operates based on a specific order called First In
First Out (FIFO).
● A linked list is a linear data structure where elements are not stored at contiguous
memory locations.
BASICS OF DATA STRUCTURES
SUB LESSON 1.4
INTRODUCTION
The Tower of Hanoi is a popular mathematical puzzle that involves three rods, denoted as A, B,
and C, and a set of N disks. At the beginning of the game, the disks are arranged on rod A in
decreasing order of diameter, with the smallest disk placed on top. The main goal of the puzzle
is to move the entire stack of disks from rod A to another rod (typically rod C), while following a
set of simple rules:
In the Tower of Hanoi puzzle, there are specific rules that must be followed during the
movement of the disks:
● Only one disk can be moved at a time.
● A move involves taking the uppermost disk from one of the stacks and placing it on top
of another stack.
● It is only allowed to move a disk if it is the topmost disk on its respective stack.
● No disk can be placed on top of a smaller disk. In other words, a larger disk cannot be
placed on top of a smaller disk.
The Tower of Hanoi is a mathematical puzzle that involves a set of n disks and three towers. It
can be solved in a minimum of 2^n−1 steps. To illustrate, let's consider an example. If we have a
puzzle with 3 disks, it would take 2^3 - 1 = 7 steps to solve it.
ALGORITHM
In order to develop an algorithm for the Tower of Hanoi problem, it is essential to understand
how to solve the problem for smaller numbers of disks, specifically for 1 or 2 disks. The three
towers involved in the problem are labeled as the source, destination, and auxiliary towers(only
to help move the disks). When there is only one disk present, it can be directly transferred from
the source tower to the destination tower without any complications.
When dealing with 2 disks in the Tower of Hanoi problem, we follow the following steps:
1. Move the smaller (top) disk to the auxiliary (aux) peg.
2. Move the larger (bottom) disk to the destination peg.
3. Finally, move the smaller disk from the auxiliary (aux) peg to the destination peg.
By following these steps, we successfully transfer both disks from the source peg to the
destination peg while utilizing the auxiliary peg.
Let's consider a scenario where we have a stack of three disks. Our objective is to move this
stack from the source tower, let's say tower A, to the destination tower, which we'll label as
tower C.
Before reaching the destination tower C, let's introduce an intermediate tower, which we'll
refer to as tower B. This intermediate tower will play a role in the process of moving the stack
of three disks from the source tower A to the destination tower C.
To complete the task, we can utilize tower B as a helper. Now, let's go through each step of the
process:
1. Move the top disk from tower A to tower C.
2. Move the top disk from tower A to tower B.
3. Move the top disk from tower C to tower B.
4. Move the top disk from tower A to tower C.
5. Move the top disk from tower B to tower A.
6. Move the top disk from tower B to tower C.
7. Move the top disk from tower A to tower C.
By following these steps, we successfully transfer the stack of three disks from tower A to tower
C, utilizing tower B as an intermediate helper.
The steps to follow in solving the Tower of Hanoi problem are as follows:
Step 1: Move n-1 disks from the source tower to the auxiliary tower.
Step 2: Move the nth disk from the source tower to the destination tower.
Step 3: Move the n-1 disks from the auxiliary tower to the destination tower.
By following these steps recursively, you can successfully solve the Tower of Hanoi problem for
any given number of disks.
KEY TAKEAWAYS
● The Tower of Hanoi is a mathematical puzzle that involves a set of n disks and three
towers.
● In the Tower of Hanoi puzzle, specific rules must be followed: only one disk can be
moved at a time.
● No disk can be placed on top of a smaller disk. In other words, a larger disk cannot be
placed on top of a smaller disk.
● It can be solved in a minimum of 2^n−1 steps.
ARRAY
SUB LESSON 2.1
INTRODUCTION TO ARRAY
INTRODUCTION
An array is a collection of elements or data items of the same type, stored in contiguous
memory locations. In simpler terms, arrays are commonly used in computer programming to
organize and manage data of the same type efficiently. Arrays can be defined in single or
multiple dimensions. They are commonly used when there is a need to store multiple elements
of similar characteristics together in one location.
Arrays play a crucial role in data structures as they assist in resolving various high-level
problems, such as the implementation of the 'longest consecutive subsequence' program, or
performing simple tasks like organizing similar elements in ascending order. The fundamental
idea behind arrays is to gather multiple objects of identical nature.
An array is a linear data structure designed to gather elements of the same data type and store
them in adjacent and contiguous memory locations. The indexing system of arrays begins at 0
and goes up to (n-1), with 'n' representing the size of the array.
PROPERTIES OF ARRAY
● Every element within an array possesses the same data type and occupies the same
fixed amount of memory (for example, 4 bytes for a typical int).
● The array elements are stored in contiguous memory locations, with the initial element
residing at the lowest memory address.
● The array facilitates random access to its elements as we can determine the address of
each element by utilizing the base address and the size of the data element.
NEED OF ARRAY
Let's suppose a class consists of ten students, and the class has to publish their results. If you
had declared all ten variables individually, it would be challenging to manipulate and maintain
the data.
If more students were to join, it would become more difficult to declare all the variables and
keep track of it. To overcome this problem, arrays came into the picture.
For regular variables, we have the option to declare them on one line and initialize them on the
next line, or to do both in a single statement. For example:
int x;
x = 0;
or, equivalently:
int x = 0;
By using arrays, we can handle many such values through a single declaration and refer to each
one by its index.
As previously mentioned, the data elements of an array are stored in contiguous locations
within the main memory. The name of the array serves as the base address, representing the
memory address of the first element. Each element of the array is accessed using appropriate
indexing.
KEY TAKEAWAYS
● An array is a collection of elements or data items of the same type, stored in contiguous
memory locations.
● Arrays can be defined in single or multiple dimensions.
● An array is a linear data structure designed to gather elements of the same data type
and store them in adjacent and contiguous memory locations.
● The indexing system of arrays begins at 0 and goes up to (n-1), with 'n' representing the
size of the array.
ARRAY
SUB LESSON 2.2
DECLARATION OF ARRAY
INTRODUCTION
In order to utilize an array, we need to declare a variable that acts as a reference to the array.
In C, it is necessary to declare an array before using it, similar to any other variable. To declare
an array, you need to specify its name, the type of its elements, and the size of its dimensions.
When an array is declared in C, the compiler allocates a memory block of the specified size to
accommodate the array's elements
To create an array, you need to specify the data type (such as int), provide a name for the
array, and give the number of elements in square brackets [].
For instance, if you want to create an array of ten integers, you would use the following syntax: int
arrayName[10];
Syntax:
data_type array_name[array_size];
Data types are used for declaring variables or arrays, which specify the kind of data and the size
of data that can be stored in those variables.
An array is a type of variable that allows you to store multiple values of the same data type. For
instance, if you need to store 100 integers, you can utilize an array specifically designed for that
purpose.
int arr[100];
Here, int is the data type, arr is the name of the array and 100 is the size of an array.
It should be emphasized that once an array is declared, its size and type remain fixed and
cannot be modified.
To insert values into an array, you can use a comma-separated list enclosed within curly braces
{}.
For example, if you have an array named "arrayName" and you want to insert values into it, you
can do so as follows:
Each value in the comma-separated list corresponds to an element in the array, allowing you to
initialize the array with specific values.
For Example:
float mark[5];
In this case, we have declared an array called "mark" of floating-point type. The size of the
array is specified as 5, indicating that it can store 5 floating-point values.
1. Arrays in C start with a 0 index, not 1. In the given example, mark[0] represents the first
element of the array.
2. If an array has a size of n, the last element is accessed using the n-1 index. In the given
example, mark[4] refers to the last element of the array.
3. The memory addresses of array elements follow a pattern. If the starting address of
mark[0] is 2120d, mark[1] will have an address of 2124d, mark[2] will have an address of
2128d, and so on. This is because the size of a float data type is typically 4 bytes.
These keynotes highlight important aspects of arrays, including indexing and memory
allocation, as applied to the given example.
EXAMPLE
#include <stdio.h>

int main()
{
    int arr_int[5];    /* an array of 5 integers */
    char arr_char[5];  /* an array of 5 characters */
    return 0;
}
KEY TAKEAWAYS
● To declare an array, specify its name, the type of its elements, and its size:
data_type array_name[array_size];
● Once an array is declared, its size and type remain fixed and cannot be modified.
● Arrays in C start with a 0 index; an array of size n is indexed from 0 to n-1.
REPRESENTATION OF ARRAY
INTRODUCTION
An array is a type of data structure used to store elements of the same or different data types.
It can be defined as a collection of items arranged in a linear format. Arrays can be either single-
dimensional or multi-dimensional, providing a way to organize and access multiple elements
efficiently.
The distinction between an array index and a memory address lies in their respective functions.
An array index serves as a key value that labels the elements within the array, allowing for their
identification and retrieval. A memory address, on the other hand, is the actual location in
memory where an element is stored; the address of the first element serves as the array's base
address.
To better understand the concept of arrays, it is essential to be familiar with the following
terms:
Index: Every element in an array is associated with a numerical index, which serves as its unique
identifier within the array.
Arrays are represented as a collection of buckets or slots, with each slot storing one element.
The indexing of these buckets starts from '0' and goes up to 'n-1', where 'n' represents the size
or length of the array. For example, an array with a size of 10 will have buckets indexed from 0
to 9.
The representation of an array is defined by its declaration, which involves allocating memory
for the array based on a specified size.
When an array is declared, the compiler sets aside a contiguous block of memory to store the
elements of the array. The size of the memory block is determined by the number of elements
in the array and the size of each element.
For example, consider the declaration of an integer array named "myArray" with a size of 5:
int myArray[5];
In this case, the declaration allocates memory to store 5 integer elements, based on the size of
the "int" data type.
The declaration of an array is crucial as it determines the memory allocation, allowing the array
to store and access its elements effectively.
Example:
REPRESENTATION OF ARRAY
A one-dimensional array, often referred to as a 1-D array, can be visualized as a row where
elements are stored sequentially, one after another. The elements in a 1-D array are accessed
using a single index.
In a 2-D array, elements are accessed using two indices: one for the row and another for the
column. The row index represents the position of the desired row, and the column index
represents the position of the desired column.
2-D arrays are useful when dealing with data that naturally fits into a two-dimensional
structure, such as grids, matrices, and tables.
KEY TAKEAWAYS
● An array can be single-dimensional or multi-dimensional.
● Elements of a 1-D array are accessed with a single index; elements of a 2-D array are
accessed with a row index and a column index.
To access elements in an array, you use indices to refer to specific positions within the array.
For instance, let's consider the previously declared array "mark". In this case, the first element
is accessed using the index mark[0], the second element is accessed using mark[1], and so on.
The index value indicates the position of the element within the array.
It's important to note that array indices in many programming languages start from 0.
Therefore, the first element is always at index 0, the second element at index 1, and so on. This
indexing scheme allows you to access and manipulate individual elements of the array based on
their positions.
Syntax:
arrayName[indexNum]
In the given example, the second value of the array is accessed using its index, which is 1. The
output of this operation will be the value at index 1, which is 200. This value represents the
second element of the array, assuming that the array is zero-indexed.
By specifying the index within square brackets after the array name (e.g., arrayName[index]),
you can retrieve the value stored at that particular index within the array. In this case, accessing
the value at index 1 returns the second value in the array.
Consider an integer array named "mark" declared with a size of 5 and initialized with some
values:
int mark[5] = {100, 200, 300, 400, 500};
int secondValue = mark[1];
Here the second element of the array is accessed using the index 1, and its value is
assigned to the variable secondValue.
By accessing mark[1], we retrieve the element at index 1, which is 200 in this case. This value is
then stored in the secondValue variable.
This code demonstrates how to access a specific element from an array using its index, allowing
you to perform operations on individual array elements.
Example:
#include <stdio.h>

int main()
{
    /* values chosen so that index 1 holds 200, as described above */
    int a[5] = {100, 200, 300, 400, 500};
    printf("%d\n", a[1]);
    printf("%d\n", a[2]);
    printf("%d\n", a[3]);
    printf("%d", a[4]);
    return 0;
}
Output:
200
300
400
500
KEY TAKEAWAYS
• To access elements in an array, you use indices to refer to specific positions within the
array.
• By specifying the index within square brackets after the array name (e.g.,
arrayName[index]), you can retrieve the value stored at that particular index within the
array.
• It's important to note that array indices in many programming languages start from 0.
Therefore, the first element is always at index 0, the second element at index 1, and so
on.
ARRAY
SUB LESSON 2.5
OPERATIONS ON ARRAY
Arrays support several basic operations that can be performed on the elements they contain.
Some common operations include:
1. Traversal
2. Insertion
3. Deletion
4. Search
5. Update
These operations allow you to manipulate the data stored in an array according to the
requirements of your program. Whether you need to add, remove, search, display, iterate, or
update array elements, these basic operations provide the necessary functionality to work with
array data effectively.
1. TRAVERSAL: Traversing an array refers to the process of accessing and examining each element
of an array in a systematic manner.
CODE :
#include <stdio.h>

int main() {
    int arr[5] = {18, 30, 15, 70, 12};
    int i;
    for (i = 0; i < 5; i++)
        printf("Arr[%d] = %d, ", i, arr[i]);
    return 0;
}
OUTPUT :
Arr[0] = 18, Arr[1] = 30, Arr[2] = 15, Arr[3] = 70, Arr[4] = 12,
2. INSERTION: Insertion in the context of arrays refers to the process of adding an element at a
specific position within an existing array.
CODE :
#include <stdio.h>

int main()
{
    int arr[10] = {18, 30, 15, 70, 12};
    int i, x, pos, n = 5;

    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");

    x = 50;    /* element to be inserted */
    pos = 4;   /* position (1-based) at which to insert */
    n++;

    /* shift elements right to make room at position pos */
    for (i = n - 1; i >= pos; i--)
        arr[i] = arr[i - 1];
    arr[pos - 1] = x;

    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}
OUTPUT :
18 30 15 70 12
18 30 15 50 70 12
3. DELETION: Deletion in the context of arrays refers to the process of removing an element from
a specific position within an array.
CODE :
#include <stdio.h>

int main() {
    int arr[] = {18, 30, 15, 70, 12};
    int k = 3, n = 5;   /* delete the element at position 3 (1-based) */
    int i, j;

    for (i = 0; i < n; i++)
        printf("arr[%d] = %d, ", i, arr[i]);
    printf("\n");

    /* shift the elements after position k one place to the left */
    j = k;
    while (j < n) {
        arr[j - 1] = arr[j];
        j = j + 1;
    }
    n = n - 1;

    for (i = 0; i < n; i++)
        printf("arr[%d] = %d, ", i, arr[i]);
    printf("\n");
    return 0;
}
OUTPUT :
arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,
arr[0] = 18, arr[1] = 30, arr[2] = 70, arr[3] = 12,
4. SEARCH: Search in the context of arrays refers to the process of finding the position or
existence of a specific element within an array.
CODE :
#include <stdio.h>

int main() {
    int arr[] = {18, 30, 15, 70, 12};
    int n = 5, item = 70;   /* element to be searched */
    int i, j = 0;

    for (i = 0; i < n; i++)
        printf("arr[%d] = %d, ", i, arr[i]);
    printf("\nElement to be searched = %d\n", item);

    while (j < n) {
        if (arr[j] == item)
            break;          /* found: stop scanning */
        j = j + 1;
    }
    printf("Element %d found at position %d\n", item, j + 1);
    return 0;
}
OUTPUT :
arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,
Element to be searched = 70
Element 70 found at position 4
5. UPDATE: Updating an array refers to the process of modifying the value of an existing element
at a specific position within the array.
CODE :
#include <stdio.h>

int main() {
    int arr[] = {18, 30, 15, 70, 12};
    int n = 5, pos = 3, item = 50;   /* replace the element at position 3 with 50 */
    int i;

    for (i = 0; i < n; i++)
        printf("arr[%d] = %d, ", i, arr[i]);
    printf("\n");

    arr[pos - 1] = item;   /* update */

    for (i = 0; i < n; i++)
        printf("arr[%d] = %d, ", i, arr[i]);
    printf("\n");
    return 0;
}
OUTPUT :
arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,
arr[0] = 18, arr[1] = 30, arr[2] = 50, arr[3] = 70, arr[4] = 12,
KEY TAKEAWAYS
● Declaration: Arrays are declared by specifying the data type and size. For example, int
numbers[5]; declares an integer array with five elements.
● Accessing Elements: Array elements can be accessed using their index. The index starts
from 0, so the first element is at index 0. For example, int element = numbers[2];
retrieves the value at index 2.
● Updating Elements: Array elements can be updated by assigning a new value to a
specific index. For example, numbers[3] = 10; assigns the value 10 to the element at
index 3.
● Array Length: In C, the number of elements in an array can be computed as
sizeof(numbers) / sizeof(numbers[0]).
STACK
SUB LESSON 3.1
INTRODUCTION TO STACK
STACK
A stack is a linear data structure that follows the Last In First Out (LIFO) principle. In other
words, the last element that is inserted into the stack is the first one to be removed. This
behavior is similar to a stack of objects, where the most recently placed item is the first one to
be taken off.
You can envision the stack data structure as a stack of plates, where each plate is placed on top
of another.
In this analogy, you have the ability to perform three operations on the stack of plates:
1. Put a new plate on top: You can add a new plate to the stack, placing it on the top.
2. Remove the top plate: You can remove the plate that is currently on the top of the
stack.
3. Accessing the plate at the bottom: If you want to retrieve the plate that is at the bottom
of the stack, you must first remove all the plates on top, following the Last In First Out
(LIFO) principle of the stack data structure.
In programming, the act of adding an item to the top of the stack is commonly referred to as
"push." It corresponds to placing an element onto the stack. On the other hand, removing an
item from the top of the stack is known as "pop", which signifies taking out the topmost
element from the stack. These terms, "push" and "pop," are frequently used to describe the
fundamental operations performed on a stack data structure in programming.
For example, suppose items 1, 2, and 3 are pushed onto a stack in that order. Even though item
3 was the most recent addition to the stack, it is the first one to be removed. This exemplifies
the essence of the Last In First Out (LIFO) principle, which governs the behavior of a stack data
structure: the most recently added item is the first one to be removed, while the items added
earlier remain in the stack until the topmost element is taken out.
Here are some key points related to the stack data structure:
1. Stack behavior: The stack data structure is named so because it mimics the behavior of a
real-world stack, such as a pile of books or plates. Elements are added and removed
from the top of the stack.
2. Abstract data type: A stack is an abstract data type (ADT) that comes with a predefined
capacity, meaning it can only hold a limited number of elements based on its size.
3. Insertion and deletion order: The stack follows a specific order for inserting and deleting
elements: Last In First Out (LIFO), equivalently described as First In Last Out (FILO). The
most recently added element is the first one to be removed, which means the first
element inserted is the last one to be removed.
These points highlight the characteristics and behavior of a stack data structure.
For the array-based implementation of a stack, the push and pop operations take constant
time, i.e. O(1).
Reversing a word: By pushing all the letters of a word onto a stack and then popping them out,
the LIFO order of the stack ensures that the letters are retrieved in reverse order, effectively
reversing the word.
Compilers: Stacks are used by compilers to evaluate expressions, such as converting them to
prefix or postfix form. The stack helps in organizing and calculating the values of complex
expressions by following the appropriate order of operations.
Browsers: In web browsers, the back button functionality utilizes a stack. Whenever a user visits
a new page, its URL is added to the top of the stack. Pressing the back button removes the
current URL from the stack, allowing access to the previous URL, effectively navigating back
through the browsing history.
KEY TAKEAWAYS
● A stack is a linear data structure that follows the Last In First Out (LIFO) principle.
● For the array-based implementation of a stack, the push and pop operations take
constant time, i.e. O(1).
● In programming, the act of adding an item to the top of the stack is commonly referred
to as "push."
● It corresponds to placing an element onto the stack. On the other hand, removing an
item from the top of the stack is known as "pop"
STACK
SUB LESSON 3.2
OPERATIONS ON STACK
A stack is a data structure that follows the Last-In-First-Out (LIFO) principle. It allows operations
to be performed at one end only, known as the top of the stack. Here are the key operations
performed on a stack:
1. Push: This operation adds an element to the top of the stack. The new element becomes
the top, and the size of the stack increases. In other words, it pushes an element onto
the stack.
2. Pop: This operation removes the top element from the stack. The element is removed
from the stack, and the size of the stack decreases. In other words, it pops the top
element from the stack.
3. Peek/Top: This operation retrieves the top element from the stack without removing it.
It allows you to access the value of the top element without modifying the stack.
4. isEmpty: This operation checks if the stack is empty. It returns a Boolean value indicating
whether the stack is empty or not.
PUSH OPERATION
The process of pushing an element onto a stack involves the following steps:
1. Before inserting an element into the stack, we check whether the stack is already full,
i.e., if it has reached its maximum capacity.
2. If the stack is full and we try to insert an element, it results in an overflow condition,
indicating that the stack cannot accommodate any more elements.
3. When initializing a stack, we typically set the initial value of the top pointer to -1. This
value is used to check whether the stack is empty.
4. When a new element is pushed onto the stack, the value of the top pointer is
incremented, usually by adding 1 (top = top + 1). This increments the top pointer to the
new position.
5. The new element is then placed at the position indicated by the updated top pointer.
6. The process of pushing elements continues until the stack reaches its maximum size.
These steps outline the process of pushing an element onto a stack, considering the overflow
condition and the management of the top pointer.
POP OPERATION
The process of popping an element from a stack involves the following steps:
1. Before deleting an element from the stack, we check whether the stack is empty by
verifying the value of the top pointer.
2. If the stack is empty and we try to delete an element, it results in an underflow
condition. This indicates that there are no elements in the stack to be removed.
3. If the stack is not empty, we can access the element that is pointed to by the top
pointer. This element represents the topmost element in the stack.
4. After performing the pop operation and removing the element, the top pointer is
decremented by 1, typically by subtracting 1 (top = top - 1). This adjusts the top pointer
to point to the new topmost element in the stack.
5. The element that was popped can be used or discarded as needed.
6. The process of popping elements can continue as long as there are elements in the
stack.
These steps outline the process of popping an element from a stack, considering the underflow
condition and the adjustment of the top pointer after the removal of an element.
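The push and pop procedures described above can be sketched in C as follows. This is a minimal illustration, assuming an array named `stack` with capacity `MAX` and a `top` index initialized to -1 (these names are chosen here for illustration):

```c
#include <stdio.h>

#define MAX 100          /* assumed capacity for this sketch */

static int stack[MAX];
static int top = -1;     /* -1 marks an empty stack (step 3 above) */

/* Push: returns 0 on success, -1 on overflow (steps 1-5). */
int push(int data) {
    if (top == MAX - 1)
        return -1;               /* overflow: stack is full */
    stack[++top] = data;         /* increment top, then store */
    return 0;
}

/* Pop: returns 0 on success, -1 on underflow; popped value via *out. */
int pop(int *out) {
    if (top == -1)
        return -1;               /* underflow: stack is empty */
    *out = stack[top--];         /* read the top, then decrement */
    return 0;
}
```

Returning a status code rather than printing keeps overflow and underflow handling in the caller's hands.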
KEY TAKEAWAYS
● Push Operation: The process of adding an element to the stack is called the push
operation. It involves placing the new element on top of the existing elements.
● Pop Operation: The process of removing an element from the stack is called the pop
operation. It involves removing the topmost element from the stack.
● Top Pointer: Stacks typically have a top pointer that keeps track of the topmost
element. The top pointer is updated with each push and pop operation.
● Overflow and Underflow: Stack operations should be performed with caution to avoid
overflow and underflow conditions. Overflow occurs when trying to push an element
into a full stack, and underflow occurs when trying to pop an element from an empty
stack.
STACK
SUB LESSON 3.3
STACK IMPLEMENTATION
You can implement stacks in data structures using two main approaches: array implementation
and linked list implementation.
Array: In the array implementation, a stack is constructed using an array data structure. All the
stack operations are performed using arrays. We will explore how various operations can be
implemented on the stack in data structures using the array data structure.
Linked List: In the linked list implementation of stacks in data structures, each new element is
inserted as the top element of the linked list. This means that every newly inserted element
becomes the new top. When removing an element from the stack, the node pointed to by the
top is removed by updating the top to point to its previous node in the list.
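The linked-list approach described above can be sketched as follows; the node layout and function names are illustrative, not taken from the text:

```c
#include <stdlib.h>

/* One node per stack element; `next` points toward the bottom. */
struct Node {
    int data;
    struct Node *next;
};

/* Push: the newly inserted node becomes the new top.
   Returns the new top of the stack. */
struct Node *ll_push(struct Node *top, int data) {
    struct Node *n = malloc(sizeof *n);
    n->data = data;
    n->next = top;       /* the old top sits underneath the new node */
    return n;
}

/* Pop: removes the node pointed to by top; *out receives its value.
   Returns the new top (popping an empty stack is a no-op). */
struct Node *ll_pop(struct Node *top, int *out) {
    if (top == NULL)
        return NULL;     /* underflow: nothing to remove */
    struct Node *old = top;
    *out = old->data;
    top = old->next;     /* top now points to the previous node */
    free(old);
    return top;
}
```

Unlike the array version, this stack never overflows (until memory is exhausted), since each push allocates a fresh node.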
STACK IMPLEMENTATION USING ARRAY WITH EXAMPLE
Push Operation:
The push operation involves adding an element on the top of the stack. It consists of the
following two steps:
1. Increment the top variable of the stack to refer to the next memory location.
2. Add the data element at the incremented top position.
When performing a push operation, if the stack is already full, it results in an overflow
condition, indicating that no more elements can be inserted into the stack.
begin
    if top = n then
        print "stack overflow"
    else
        top = top + 1
        stack[top] = data
    end if
end
Pop Operation:
The pop operation in a stack is used to remove the topmost element from the stack. It follows
the LIFO (Last-In-First-Out) principle, where the element that was most recently pushed onto
the stack will be the first one to be popped.
1. Check if the stack is empty. If the stack is empty, it indicates an underflow condition,
meaning there are no elements in the stack to be popped.
2. If the stack is not empty, access the element at the top of the stack.
3. Decrement the value of the top pointer to move it to the next element in the stack.
4. Return or use the value of the popped element as needed.
The pop operation modifies the stack by removing the topmost element and updating the top
pointer accordingly. It is important to handle the underflow condition and ensure that the stack
is not empty before performing the pop operation to avoid any errors.
begin
    if top = -1 then
        print "stack underflow"
    else
        value = stack[top]
        top = top - 1
    end if
end
Peek Operation:
The peek operation in a stack is used to retrieve the topmost element from the stack without
removing it. It allows you to examine the value of the element at the top of the stack without
modifying the stack itself.
1. Check if the stack is empty. If the stack is empty, it indicates an underflow condition,
meaning there are no elements in the stack to retrieve.
2. If the stack is not empty, access the element at the top of the stack.
begin
    if top = -1 then
        print "stack is empty"
    else
        data = stack[top]
        return data
    end if
end
KEY TAKEAWAYS
● The push operation adds an element on the top of the stack by incrementing the top
variable and adding the element at the new top position.
● The pop operation removes the topmost element from the stack by decrementing the
top variable and returning the deleted element.
● The peek operation retrieves the topmost element from the stack without removing it.
● Stack can encounter overflow condition when trying to insert an element into a full
stack, and underflow condition when trying to remove an element from an empty stack.
STACK
SUB LESSON 3.4
A stack is a type of linear data structure that consists of a collection of elements. It follows the
principle of Last In, First Out (LIFO), which means that the last element inserted into the stack
will be the first one to be removed.
In a stack, elements can be inserted and deleted only from one end, often referred to as the
"top" of the stack.
OPERATIONS ON STACK
Push: This operation adds an element to the top of the stack.
Pop: This operation removes the topmost element from the stack.
isEmpty: This operation checks whether the stack is empty. It returns true if the stack has no
elements and false otherwise.
isFull: This operation checks whether the stack is full, especially in cases where the stack has a
maximum capacity. It returns true if the stack is full and false otherwise.
Top: This operation allows us to access the topmost element of the stack without removing it. It
returns the value of the element at the top of the stack.
Example:
#include <stdio.h>
#include <stdlib.h>
#define SIZE 4

int inp_array[SIZE];
int top = -1;

void push();
void pop();
void show();

int main()
{
    int choice;
    while (1) {
        printf("\nOperations on stack");
        printf("\n1.Push\n2.Pop\n3.Show\n4.End");
        printf("\nEnter the choice: ");
        scanf("%d", &choice);
        switch (choice) {
        case 1:
            push();
            break;
        case 2:
            pop();
            break;
        case 3:
            show();
            break;
        case 4:
            exit(0);
        default:
            printf("\nInvalid choice!!");
        }
    }
}

void push()
{
    int x;
    if (top == SIZE - 1) {
        printf("\nOverflow!!");
    } else {
        printf("\nEnter the element to be added onto the stack: ");
        scanf("%d", &x);
        top = top + 1;
        inp_array[top] = x;
    }
}

void pop()
{
    if (top == -1) {
        printf("\nUnderflow!!");
    } else {
        printf("\nPopped element: %d", inp_array[top]);
        top = top - 1;
    }
}

void show()
{
    if (top == -1) {
        printf("\nUnderflow!!");
    } else {
        printf("\nElements present in the stack: ");
        for (int i = top; i >= 0; --i)
            printf("%d ", inp_array[i]);
    }
}
Output (abridged): after each operation the menu (1.Push, 2.Pop, 3.Show, 4.End) is printed
again. In a sample run that pushes 10 and then pops twice, the first pop prints
"Popped element: 10" and the second, on the now-empty stack, prints "Underflow!!".
RECURSION
Recursion is a concept in programming where a function calls itself, either directly or indirectly.
When a function calls itself, it is known as a recursive function. It is a powerful technique that
allows problems to be solved by breaking them down into smaller, simpler versions of the same
problem. The recursive function continues to call itself until it reaches a base case, which is a
condition that stops the recursion and returns a result.
Properties of Recursion:
● A recursive function must have a base case, a condition under which it returns without calling itself.
● Each recursive call must move the computation closer to that base case, otherwise the recursion never terminates.
A classic example is the factorial function. The factorial of a negative number is not defined, as it has no meaningful mathematical interpretation, while the factorial of 0 is defined to be 1. These established conventions supply the base case for the recursive factorial calculation below.
#include <stdio.h>

long factorial(int n)
{
    if (n == 0)
        return 1;          /* base case: 0! = 1 */
    else
        return n * factorial(n - 1);
}

int main()
{
    int number;
    long fact;
    printf("Enter a number: ");
    scanf("%d", &number);
    fact = factorial(number);
    printf("Factorial of %d is %ld\n", number, fact);
    return 0;
}
Output:
Enter a number: 5
Factorial of 5 is 120
KEY TAKEAWAYS
● Stack follows the principle of Last In, First Out (LIFO), which means that the last element
inserted into the stack will be the first one to be removed.
● In a stack, elements can be inserted and deleted only from one end, often referred to as
the "top" of the stack.
QUEUE
SUB LESSON 4.1
BASICS OF QUEUE
A queue is a linear data structure in computer science that stores a collection of elements
following the First-In-First-Out (FIFO) principle. It is an ordered list where elements are added
to the end (rear) and removed from the front (head).
Think of a real-life queue or line of people waiting for service. The first person to arrive is the
first to be served, and as new people join the queue, they line up at the back and wait for their
turn. Similarly, in a queue data structure, the element that has been in the queue the longest is
the first one to be removed, while new elements are added to the end.
A queue is an abstract data structure that is different from a stack in that it is open at both
ends. This means that a queue follows the FIFO (First-In-First-Out) structure, where the data
item that is inserted first will also be accessed or removed first. In a queue, data is inserted at
one end and deleted from the other end, maintaining the order of insertion. The end where
data is inserted is typically called the "rear" or "tail," and the end from which data is removed is
called the "front" or "head" of the queue.
A real-world example that illustrates the concept of a queue is a single-lane one-way road,
where vehicles enter the road in a specific order and exit in the same order. This aligns with the
FIFO (First-In-First-Out) nature of a queue. Another example can be observed at ticket windows
or bus stops, where people join a queue and are served or board the bus in the order they
arrived, ensuring fairness and maintaining the sequence of arrival.
REPRESENTATION OF QUEUES
Similar to the stack abstract data type (ADT), the queue ADT can also be implemented using
various data structures such as arrays, linked lists, or pointers. In this tutorial, we will
demonstrate the implementation of queues using a one-dimensional array as a simple example.
LIMITATIONS OF QUEUE
In a simple array-based (linear) queue, the usable capacity shrinks as operations proceed:
after a series of enqueue and dequeue operations, the slots vacated at the front (for
example, indexes 0 and 1) cannot receive new elements. They become usable again only once
the queue has been reset, that is, when all elements have been dequeued.
APPLICATIONS OF QUEUE
1. Task Scheduling: Queues are used in task scheduling algorithms to manage the
execution order of tasks or processes based on their priority or arrival time.
2. Printer Spooling: When multiple users send print requests to a shared printer, a queue is
used to manage the order in which the print jobs are processed.
3. Message Queuing: In messaging systems, queues are employed to ensure reliable and
ordered delivery of messages between different components or systems.
4. Event-driven Programming: Queues are used to handle events and event-driven
programming models, where events are queued and processed in the order of their
occurrence.
5. Simulations: Queues are essential in simulating real-world systems, such as traffic flow,
customer queues, or manufacturing processes, to analyze and optimize their
performance.
6. Call Center Systems: Queues are utilized in call center systems to manage incoming calls,
ensuring fair distribution to available agents based on their availability.
7. Network Packet Routing: Queues are used in network routers to manage the incoming
and outgoing network packets, facilitating proper routing and preventing congestion.
8. CPU Scheduling: Queues play a vital role in CPU scheduling algorithms, where processes
are placed in different queues based on their priority or scheduling criteria.
9. Web Server Request Handling: Queues are used in web servers to manage incoming
requests from clients, ensuring that requests are processed in the order they are
received.
10. Breadth-First Search: Queues are extensively used in graph algorithms, particularly in
breadth-first search (BFS), to explore nodes or vertices level by level.
KEY TAKEAWAYS
● The FIFO principle implies that the element that enters the queue first will be the first
one to be removed from it. This characteristic makes queues suitable for scenarios where
order preservation is essential.
QUEUE
SUB LESSON 4.2
WORKING OF QUEUE
Consider a queue into which the number 1 is added before the number 2. According to the
FIFO (First-In-First-Out) rule, the number 1 will be the first one to be removed from the
queue.
Queue operations typically involve the use of two pointers: FRONT and REAR. Here's an
explanation of how these pointers work:
1. FRONT: This pointer keeps track of the first element in the queue. When the queue is
empty, the FRONT pointer is typically set to -1.
2. REAR: This pointer keeps track of the last element in the queue. As elements are
enqueued (added) to the queue, the REAR pointer is updated accordingly. When the
queue is empty, the REAR pointer is also set to -1.
By using these pointers, we can determine the position of the first and last elements in the
queue and perform enqueue and dequeue operations effectively.
To enqueue an element, we increment the REAR pointer and add the element to the position
indicated by the REAR pointer. If the queue is empty initially, we set both the FRONT and REAR
pointers to 0.
To dequeue an element, we increment the FRONT pointer to point to the next element in the
queue and retrieve the element from the position indicated by the previous FRONT pointer
value. If the dequeue operation results in an empty queue (i.e., there are no more elements),
we can reset both the FRONT and REAR pointers to -1.
It's important to note that different implementations may have variations in how these pointers
are initialized and updated, but the general concept remains the same.
Enqueue Operation:
1. If the queue is empty (i.e., it has no elements), set the value of FRONT to 0.
2. Increase the REAR index by 1 to indicate the next available position in the queue.
3. Add the new element at the position pointed to by the REAR index.
Dequeue Operation:
1. If the queue is not empty, return the value pointed to by the FRONT index, which
represents the element to be dequeued.
2. Increase the FRONT index by 1 to move it to the next element in the queue.
3. If the dequeue operation leaves the queue empty (i.e., no elements remain), reset both
FRONT and REAR to -1 to indicate the empty queue state.
It's worth noting that these operations assume the underlying implementation maintains the
queue size and checks for full or empty conditions appropriately. Additionally, variations in
implementation may have different strategies for handling full or empty queue situations.
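The FRONT/REAR pointer logic above can be sketched in C as follows; the array name `queue` and capacity `N` are assumptions made for this illustration:

```c
#include <stdio.h>

#define N 100                       /* assumed capacity for this sketch */

static int queue[N];
static int front = -1, rear = -1;   /* -1, -1 marks an empty queue */

/* Enqueue: returns 0 on success, -1 when the queue is full. */
int enqueue(int value) {
    if (rear == N - 1)
        return -1;               /* no room left at the rear */
    if (front == -1)
        front = 0;               /* first element: FRONT moves to 0 */
    queue[++rear] = value;       /* advance REAR, store at the new slot */
    return 0;
}

/* Dequeue: returns 0 on success, -1 when the queue is empty. */
int dequeue(int *out) {
    if (front == -1)
        return -1;               /* nothing to remove */
    *out = queue[front++];       /* take the FRONT element, advance FRONT */
    if (front > rear)            /* queue drained: reset both pointers */
        front = rear = -1;
    return 0;
}
```

The reset of both pointers to -1 when the last element leaves matches the dequeue steps above.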
KEY TAKEAWAYS
• Enqueue: Adding elements to the rear of the queue is known as the enqueue operation.
This operation increases the size of the queue and updates the rear pointer accordingly.
• Dequeue: Removing elements from the front of the queue is known as the dequeue
operation. This operation reduces the size of the queue and updates the front pointer
accordingly.
QUEUE
SUB LESSON 4.3
OPERATIONS OF QUEUE
1. Enqueue: This operation adds an element to the end of the queue. It expands the size of
the queue and places the new element at the rear.
2. Dequeue: This operation removes an element from the front of the queue. It shrinks the
size of the queue and retrieves the element that was first in line.
3. IsEmpty: This operation checks if the queue is empty, indicating whether there are no
elements present in the queue.
4. IsFull: This operation checks if the queue is full, indicating whether it has reached its
maximum capacity or the specified size limit.
5. Peek: This operation allows you to access the value of the element at the front of the
queue without removing it. It provides a way to examine the next element that will be
dequeued.
These operations are fundamental in working with queues and provide the necessary
functionality to manage and manipulate the elements within the queue data structure.
Example:
// Queue implementation in C
#include <stdio.h>
#define SIZE 5

int items[SIZE];
int front = -1, rear = -1;

void enQueue(int);
void deQueue();
void display();

int main() {
  // Try removing from an empty queue
  deQueue();

  // enQueue 5 elements (the 6th insertion fails: queue is full)
  enQueue(1);
  enQueue(2);
  enQueue(3);
  enQueue(4);
  enQueue(5);
  enQueue(6);

  display();
  deQueue();
  display();
  return 0;
}

void enQueue(int value) {
  if (rear == SIZE - 1)
    printf("\nQueue is Full!!");
  else {
    if (front == -1)
      front = 0;
    rear++;
    items[rear] = value;
    printf("\nInserted -> %d", value);
  }
}

void deQueue() {
  if (front == -1)
    printf("\nQueue is Empty!!");
  else {
    printf("\nDeleted : %d", items[front]);
    front++;
    if (front > rear)
      front = rear = -1;  // queue drained: reset the pointers
  }
}

// Function to print the queue
void display() {
  if (rear == -1)
    printf("\nQueue is Empty!!!");
  else {
    printf("\nQueue elements are:\n");
    for (int i = front; i <= rear; i++)
      printf("%d  ", items[i]);
  }
  printf("\n");
}
Output:
Queue is Empty!!
Inserted -> 1
Inserted -> 2
Inserted -> 3
Inserted -> 4
Inserted -> 5
Queue is Full!!
1 2 3 4 5
Deleted : 1
Queue elements are:
2 3 4 5
KEY TAKEAWAYS
• IsEmpty: This operation checks if the queue is empty, indicating whether there are no
elements present in the queue.
• IsFull: This operation checks if the queue is full, indicating whether it has reached its
maximum capacity or the specified size limit.
TYPES OF QUEUE
SUB LESSON 5.1
TYPES OF QUEUE
Simple Queue
In a Linear Queue, an element is inserted at one end while deletion occurs at the other end. The
end where insertion takes place is called the rear end, and the end where deletion occurs is
called the front end. This type of queue strictly adheres to the First-In-First-Out (FIFO) rule.
A significant limitation of the linear queue is that insertions can only be performed at
the rear end. Suppose the queue is filled to capacity and the first three elements are then
deleted: no further elements can be inserted, even though space is now available at the
front, because the rear pointer still points to the last position of the queue. The linear
queue therefore reports an overflow condition despite having free slots.
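The limitation can be demonstrated with a short sketch (the names and the capacity `SIZE` are illustrative):

```c
#define SIZE 5

static int q[SIZE];
static int front = -1, rear = -1;

/* Linear enqueue: returns 0 on success, -1 on "overflow". */
int linear_enqueue(int v) {
    if (rear == SIZE - 1)
        return -1;       /* "full", even if front slots were freed */
    if (front == -1)
        front = 0;
    q[++rear] = v;
    return 0;
}

/* Linear dequeue: the freed front slot is never reused. */
void linear_dequeue(void) {
    if (front != -1 && front <= rear)
        front++;
}
```

After filling the queue and dequeuing three elements, three array slots are free at the front, yet a further insertion still fails because the rear pointer has already reached the end.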
Circular Queue
The Circular Queue represents all the nodes in a circular manner. It shares similarities with the
linear queue, but with the distinction that the last element of the queue is connected to the
first element, forming a circular structure. It is also referred to as a Ring Buffer due to the
interconnected nature of all the ends. The image below illustrates the representation of a
circular queue:
The circular queue addresses the drawback encountered in the linear queue. It overcomes the
limitation of the linear queue by allowing the addition of new elements in empty spaces. This is
achieved by incrementing the value of the rear pointer. One of the primary advantages of using
a circular queue is its ability to optimize memory utilization, resulting in improved efficiency.
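The reuse of empty spaces is achieved by advancing indexes with modular arithmetic, so that the position after the last array slot is slot 0 again. A minimal sketch (`SIZE` is an assumed capacity):

```c
#define SIZE 5

/* Advance an index by one slot, wrapping from the last slot back to 0. */
int next_slot(int index) {
    return (index + 1) % SIZE;
}
```

Both the front and rear pointers of a circular queue are advanced with this rule, which is what lets freed slots at the start of the array be reused.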
Priority Queue
A priority queue is a unique type of queue where elements are organized based on their
priority. It is a data structure where each element is assigned a priority value. In cases where
multiple elements have the same priority, they are arranged according to the First-In-First-Out
(FIFO) principle. The image below illustrates the representation of a priority queue:
In a priority queue, the insertion of elements takes place based on their arrival, meaning that
newly arriving elements are inserted into the queue. On the other hand, deletion in a priority
queue is performed based on the priority associated with each element. Elements with higher
priority are given precedence for deletion over elements with lower priority.
Double Ended Queue (or Deque)
A Deque, or Double Ended Queue, allows for the insertion and deletion of elements from both
ends of the queue, which includes both the front and rear ends. This means that elements can
be inserted and removed from either end of the queue. A notable application of a deque is in
checking for palindromes. By reading a string from both ends, if the string remains the same, it
indicates that it is a palindrome.
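The palindrome check can be sketched by treating the string itself as a deque and removing one character from each end per step (a simplified illustration; a full deque structure is not required):

```c
#include <string.h>

/* Compares characters taken from the "front" and the "rear" of the
   string, moving both ends inward until they meet. */
int is_palindrome(const char *s) {
    int front = 0;
    int rear = (int)strlen(s) - 1;
    while (front < rear) {
        if (s[front] != s[rear])
            return 0;    /* mismatch: not a palindrome */
        front++;         /* "delete" from the front */
        rear--;          /* "delete" from the rear */
    }
    return 1;
}
```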
KEY TAKEAWAYS
● Linear Queue: In a linear queue, insertion takes place at one end (rear) and deletion
occurs at the other end (front). It follows the First-In-First-Out (FIFO) rule.
● Circular Queue: A circular queue is similar to a linear queue but with the last element
connected to the first element, forming a circular structure. It overcomes the limitation
of a linear queue by allowing better utilization of available space.
● Priority Queue: In a priority queue, elements are arranged based on their priority. Each
element has a priority associated with it, and higher priority elements are given
precedence during deletion.
● Deque (Double Ended Queue): A deque allows insertion and deletion from both ends,
i.e., the front and rear. It provides more flexibility compared to other queue types.
TYPES OF QUEUE
SUB LESSON 5.2
SIMPLE QUEUE
A queue is a data structure that follows a First-In-First-Out (FIFO) principle. It is similar to a list
where elements are added at one end and removed from the other end. The element that is
added first will be the first one to be removed, maintaining the order of insertion.
A queue can be compared to or visualized as a line of people waiting to purchase tickets, where
the person who arrives first is the first to be served (following the "First come, first served"
principle).
The position of the entry in the queue that is ready to be served, which is the first entry that
will be removed from the queue, is commonly referred to as the "front" of the queue (or
sometimes called the "head" of the queue). Similarly, the position of the last entry in the
queue, which is the most recently added one, is known as the "rear" (or the "tail") of the
queue. Refer to the illustration below:
In the context of a queue, the term "Queue" refers to the name of the array used to store the
elements of the queue.
The "Front" denotes the index in the array where the first element of the queue is stored.
On the other hand, the "Rear" represents the index in the array where the last element of the
queue is stored.
IMPLEMENTATION OF SIMPLE QUEUE
#include <stdio.h>
#define MAX_SIZE 5

struct Queue {
    int items[MAX_SIZE];
    int front;
    int rear;
    int size;
};

void initializeQueue(struct Queue* queue) {
    queue->front = -1;
    queue->rear = -1;
    queue->size = 0;
}

int isEmpty(struct Queue* queue) {
    return queue->size == 0;
}

int isFull(struct Queue* queue) {
    return queue->size == MAX_SIZE;
}

void enqueue(struct Queue* queue, int value) {
    if (isFull(queue)) {
        printf("Queue is full.\n");
        return;
    }
    if (isEmpty(queue)) {
        queue->front = 0;
    }
    queue->rear++;
    queue->items[queue->rear] = value;
    queue->size++;
}

int dequeue(struct Queue* queue) {
    if (isEmpty(queue)) {
        printf("Queue is empty.\n");
        return -1;
    }
    int removedItem = queue->items[queue->front];
    queue->front++;
    queue->size--;
    return removedItem;
}

void printQueue(struct Queue* queue) {
    if (isEmpty(queue)) {
        printf("Queue is empty.\n");
        return;
    }
    printf("Queue elements: ");
    for (int i = queue->front; i <= queue->rear; i++)
        printf("%d ", queue->items[i]);
    printf("\n");
}

int main() {
    struct Queue queue;
    initializeQueue(&queue);
    enqueue(&queue, 1);
    enqueue(&queue, 2);
    enqueue(&queue, 3);
    enqueue(&queue, 4);
    printQueue(&queue); // Output: Queue elements: 1 2 3 4
    enqueue(&queue, 5);
    printQueue(&queue); // Output: Queue elements: 1 2 3 4 5
    return 0;
}
OUTPUT
Queue elements: 1 2 3 4
Queue elements: 1 2 3 4 5
KEY TAKEAWAYS
● A simple queue stores its elements in an array, with FRONT indexing the first element
and REAR indexing the last.
● Enqueue adds an element at the REAR; dequeue removes the element at the FRONT,
preserving FIFO order.
TYPES OF QUEUE
SUB LESSON 5.3
CIRCULAR QUEUE
The array implementation of a queue had a specific limitation. When the rear of the queue
reached the end position, there were potential vacant spaces in the beginning that couldn't be
utilized. To overcome this limitation, the concept of a circular queue was introduced.
A circular queue shares similarities with a linear queue as both operate based on the First-In-
First-Out (FIFO) principle. However, in a circular queue, the last position is connected to the first
position, forming a circular structure or circle. This distinctive characteristic gives rise to its
alternative name, the Ring Buffer.
Circular queues support the same operations as a linear queue — enQueue (insert), deQueue
(remove), and checks for the full and empty states — with all index arithmetic performed
modulo the queue size. The following program illustrates them:
#include <stdio.h>
#define SIZE 5

int items[SIZE];
int front = -1, rear = -1;

// Full when the rear is one slot behind the front (circularly)
int isFull() {
  if ((front == rear + 1) || (front == 0 && rear == SIZE - 1))
    return 1;
  return 0;
}

// Empty when the front pointer is unset
int isEmpty() {
  if (front == -1)
    return 1;
  return 0;
}

// Adding an element
void enQueue(int element) {
  if (isFull())
    printf("\nQueue is full!!");
  else {
    if (front == -1)
      front = 0;
    rear = (rear + 1) % SIZE;  // wrap around to reuse freed slots
    items[rear] = element;
    printf("\nInserted -> %d", element);
  }
}

// Removing an element
int deQueue() {
  int element;
  if (isEmpty()) {
    printf("\nQueue is empty !!");
    return (-1);
  } else {
    element = items[front];
    if (front == rear) {
      // The queue held a single element: reset it to empty
      front = -1;
      rear = -1;
    } else {
      front = (front + 1) % SIZE;
    }
    return (element);
  }
}

void display() {
  int i;
  if (isEmpty())
    printf("\nEmpty Queue\n");
  else {
    printf("\nFront -> %d", front);
    printf("\nItems -> ");
    for (i = front; i != rear; i = (i + 1) % SIZE)
      printf("%d ", items[i]);
    printf("%d ", items[i]);
    printf("\nRear -> %d", rear);
  }
}

int main() {
  // Attempt to remove from an empty queue
  deQueue();

  enQueue(1);
  enQueue(2);
  enQueue(3);
  enQueue(4);
  enQueue(5);
  enQueue(6);  // fails: queue is full

  display();
  deQueue();
  display();

  enQueue(7);  // reuses the slot freed by the deQueue above
  display();
  enQueue(8);  // fails again: queue is full
  return 0;
}
Output:
Queue is empty !!
Inserted -> 1
Inserted -> 2
Inserted -> 3
Inserted -> 4
Inserted -> 5
Queue is full!!
Front -> 0
Items -> 1 2 3 4 5
Rear -> 4
Front -> 1
Items -> 2 3 4 5
Rear -> 4
Inserted -> 7
Front -> 1
Items -> 2 3 4 5 7
Rear -> 0
Queue is full!!
KEY TAKEAWAYS
● Circular nature: A circular queue differs from a regular queue by allowing the front and
rear pointers to wrap around to the beginning of the queue, enabling efficient space
utilization.
● Enqueue and dequeue operations: Enqueueing (adding) an element and dequeuing
(removing) an element from a circular queue are both performed in constant time, O(1),
regardless of the size of the queue.
● Full and empty conditions: A circular queue is considered full when the rear pointer is
one position behind the front pointer. Conversely, the queue is empty when the front
and rear pointers are equal.
TYPES OF QUEUE
SUB LESSON 5.4
PRIORITY QUEUE
A priority queue is a unique form of queue that assigns a priority value to each element.
Elements are then retrieved from the queue based on their priority, with higher priority items
being served first. In the event that elements share the same priority, they are served in the
order they were added to the queue.
Assigning priority values in a priority queue is typically done by considering the value of the
element itself. In a priority queue, the element with the highest priority is dequeued first. The
priority of elements determines the order in which they are removed from the priority queue,
with higher-priority elements being dequeued before lower-priority elements.
As an example, if we insert the values 1, 3, 4, 8, 14, and 22 into a priority queue with an
ordering imposed from least to greatest, element 1 would have the highest priority, while 22
would have the lowest priority. This means that when elements are dequeued from the priority
queue, the element with the value 1 would be dequeued first, followed by 3, 4, 8, 14, and
finally 22.
A priority queue can impose its ordering in one of two ways:
Ascending order priority queue:
In an ascending order priority queue, a lower priority number is considered to have a higher
priority. For instance, if we have the numbers 1 to 5 arranged in ascending order like 1, 2, 3, 4,
and 5, the smallest number, 1, is given the highest priority in the priority queue. Consequently,
in this priority queue, the element with the value 1 would be served first, followed by 2, 3, 4,
and 5, in that order.
Descending order priority queue:
In a descending order priority queue, a higher priority number is considered to have a higher
priority. For instance, if we have the numbers 1 to 5 arranged in descending order like 5, 4, 3, 2,
and 1, the largest number, 5, is given the highest priority in the priority queue. Accordingly, in
this priority queue, the element with the value 5 would be served first, followed by 4, 3, 2, and
1, in that order.
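As an illustrative (not authoritative) sketch, a descending-order priority queue can be built naively over an unordered array, scanning for the maximum on each dequeue; the names and capacity here are assumptions. The binary heap discussed next replaces this O(n) scan with an O(log n) operation.

```c
#define CAP 16                     /* assumed capacity for this sketch */

static int pq[CAP];
static int pq_size = 0;

/* Enqueue: insertion simply appends, in arrival order. */
int pq_enqueue(int value) {
    if (pq_size == CAP)
        return -1;
    pq[pq_size++] = value;
    return 0;
}

/* Dequeue: removes and returns the largest element (highest priority
   first). Returns -1 when empty; assumes non-negative values. */
int pq_dequeue_max(void) {
    if (pq_size == 0)
        return -1;
    int best = 0;
    for (int i = 1; i < pq_size; i++)
        if (pq[i] > pq[best])
            best = i;              /* index of the highest priority */
    int value = pq[best];
    pq[best] = pq[--pq_size];      /* fill the hole with the last item */
    return value;
}
```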
Before studying the priority queue, we need to learn about the heap data structure for a better
understanding of the binary heap, as it is used to implement the priority queue.
A heap is a complete binary tree: a binary tree in which each node has at most two children
and every level is filled from left to right. Heaps come in two forms:
1. Min Heap
2. Max Heap
In a Min Heap, the value of a parent node is always less than or equal to the values of its
children.
In a Max Heap, the value of a parent node is always greater than or equal to the values of its
children.
To create a Max Heap tree, the following two cases need to be considered:
1. Insertion of Elements: When inserting elements into the Max Heap tree, it is crucial to
maintain the property of a complete binary tree. This means that elements should be
inserted from left to right, level by level, filling the available positions in the tree.
2. Parent-Child Relationship: Additionally, the value of a parent node in the Max Heap tree
must be greater than the values of both its children. This ensures that the maximum
element is always at the root of the tree, with progressively smaller elements branching
out.
Step 1: Initially, we add the element 44 to the tree. The resulting tree would be as follows:
Step 2: The next element to be added is 33. Since the insertion in a binary tree starts from the
left side, we add the element 33 to the left of 44. The updated tree would look like this:
Step 3: The next element to be added is 77. We add the element 77 to the right of 44 because
the insertion in a binary tree starts from the left side and moves to the right. The updated tree
would appear as follows:
As we can observe in the above tree, it does not satisfy the Max Heap property, which requires
the parent node to have a value greater than its child nodes. In this case, the parent node 44 is
less than the child node 77. To rectify this, we will swap the values of the parent and child
nodes, resulting in the updated tree as shown below:
After swapping the values, the Max Heap property is now satisfied, with the parent node 77
being greater than both of its child nodes, 33 and 44.
Step 4: The next element to be added is 11. Since the insertion in a binary tree starts from the
left side, we add the element 11 to the left of 33. The updated tree would look like this:
Step 5: The next element to be added is 55. To maintain the property of a complete binary tree,
we add the node 55 to the right of 33. The updated tree would appear as follows:
As we can observe in the above figure, the property of a Max Heap is not satisfied because the
parent node 33 is less than the child node 55. To rectify this, we will swap the values of the
parent and child nodes, resulting in the updated tree as shown below:
Step 6: The next element to be added is 88. Since the left subtree is already complete, we add
the element 88 to the left of 44 to maintain the property of a complete binary tree. The
updated tree would look like this:
Insertion and Deletion in a Heap Tree
#include <stdio.h>

int size = 0;

void swap(int *a, int *b) {
  int temp = *b;
  *b = *a;
  *a = temp;
}

// Function to heapify the tree
void heapify(int array[], int size, int i) {
  if (size == 1) {
    printf("Single element in the heap");
  } else {
    // Find the largest among root, left child and right child
    int largest = i;
    int l = 2 * i + 1;
    int r = 2 * i + 2;
    if (l < size && array[l] > array[largest])
      largest = l;
    if (r < size && array[r] > array[largest])
      largest = r;
    // Swap and continue heapifying if the root is not largest
    if (largest != i) {
      swap(&array[i], &array[largest]);
      heapify(array, size, largest);
    }
  }
}

// Function to insert an element into the tree
void insert(int array[], int newNum) {
  if (size == 0) {
    array[0] = newNum;
    size += 1;
  } else {
    array[size] = newNum;
    size += 1;
    for (int i = size / 2 - 1; i >= 0; i--)
      heapify(array, size, i);
  }
}

// Function to delete an element from the tree
void deleteRoot(int array[], int num) {
  int i;
  for (i = 0; i < size; i++) {
    if (num == array[i])
      break;
  }
  swap(&array[i], &array[size - 1]);
  size -= 1;
  for (i = size / 2 - 1; i >= 0; i--)
    heapify(array, size, i);
}

// Print the array
void printArray(int array[], int size) {
  for (int i = 0; i < size; ++i)
    printf("%d ", array[i]);
  printf("\n");
}

// Driver code
int main() {
  int array[10];
  insert(array, 3);
  insert(array, 4);
  insert(array, 9);
  insert(array, 5);
  insert(array, 2);
  printf("Max-Heap array: ");
  printArray(array, size);
  deleteRoot(array, 4);
  printf("After deleting an element: ");
  printArray(array, size);
  return 0;
}
Output:
Max-Heap array: 9 5 4 3 2
After deleting an element: 9 5 2 3
KEY TAKEAWAYS
● A priority queue is a unique form of queue that assigns a priority value to each element.
● Elements are then retrieved from the queue based on their priority, with higher-priority
items being served first.
● In the event that elements share the same priority, they are served in the order they
were added to the queue.
TYPES OF QUEUE
SUB LESSON 5.5
A deque, also known as a double-ended queue, is a data structure that allows the insertion
and removal of elements at both the front and the rear. Unlike a traditional queue, a deque
does not strictly follow the FIFO (First-In-First-Out) rule.
Operations on a Deque
In a circular array implementation, when the array becomes full, the insertion of new elements
starts from the beginning of the array, creating a circular behavior.
However, in a linear array implementation, if the array becomes full, further insertion of
elements is not possible. In such cases, an "overflow message" is typically thrown to indicate
that the array is full and no more elements can be inserted.
To perform the following operations, the following steps are typically followed:
These initial steps set up the structure for subsequent operations on the deque data structure.
Fig: Initialize an array and pointers for deque
3. Delete from the Front
If the deque is empty (i.e. front = -1), deletion cannot be performed (underflow condition).
If the deque has only one element (i.e. front = rear), set front = -1 and rear = -1.
Else if front is at the end of the array (i.e. front = n - 1), wrap around by setting front = 0.
Otherwise, increment front by 1 (front = front + 1).
4. Delete from the Rear
If the deque is empty (i.e. front = -1), deletion cannot be performed (underflow condition).
If the deque has only one element (i.e. front = rear), set front = -1 and rear = -1; else follow
the steps below.
If rear is at the front of the array (i.e. rear = 0), wrap around by setting rear = n - 1.
Otherwise, decrement rear by 1 (rear = rear - 1).
5. Check Empty
This operation checks if the deque is empty. If front = -1, the deque is empty.
6. Check Full
This operation checks if the deque is full. If front = 0 and rear = n - 1 OR front = rear + 1, the
deque is full.
Example :
#include <stdio.h>
#include <stdlib.h>

#define MAX_SIZE 10

// Global variables
int deque[MAX_SIZE];
int front = -1;
int rear = -1;

// Check whether the deque is empty
int isEmpty() {
    return front == -1;
}

// Check whether the deque is full
int isFull() {
    return (front == 0 && rear == MAX_SIZE - 1) || (front == rear + 1);
}

// Insert an element at the front of the deque
void insertFront(int data) {
    if (isFull()) {
        printf("Deque is full.\n");
        return;
    }
    if (isEmpty()) {
        front = rear = 0;
    } else if (front == 0) {
        front = MAX_SIZE - 1;    // wrap around
    } else {
        front = front - 1;
    }
    deque[front] = data;
    printf("Element %d inserted at the front.\n", data);
}

// Insert an element at the rear of the deque
void insertRear(int data) {
    if (isFull()) {
        printf("Deque is full.\n");
        return;
    }
    if (isEmpty()) {
        front = rear = 0;
    } else if (rear == MAX_SIZE - 1) {
        rear = 0;                // wrap around
    } else {
        rear = rear + 1;
    }
    deque[rear] = data;
    printf("Element %d inserted at the rear.\n", data);
}

// Delete an element from the front of the deque
void deleteFront() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return;
    }
    if (front == rear) {         // only one element left
        front = rear = -1;
    } else {
        front = (front + 1) % MAX_SIZE;
    }
}

// Delete an element from the rear of the deque
void deleteRear() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return;
    }
    if (front == rear) {         // only one element left
        front = rear = -1;
    } else {
        rear = (rear - 1 + MAX_SIZE) % MAX_SIZE;
    }
}

// Function to get the front element of the deque
int getFront() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return -1;
    }
    return deque[front];
}

// Function to get the rear element of the deque
int getRear() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return -1;
    }
    return deque[rear];
}

// Display the elements of the deque from front to rear
void display() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return;
    }
    int i = front;
    while (i != rear) {
        printf("%d\n", deque[i]);
        i = (i + 1) % MAX_SIZE;
    }
    printf("%d\n", deque[rear]);
}

// Main function
int main() {
    insertFront(1);
    insertRear(2);
    insertFront(3);
    insertRear(4);
    display();
    deleteFront();
    deleteRear();
    display();
    printf("Front element: %d\n", getFront());
    printf("Rear element: %d\n", getRear());
    return 0;
}
Output:
Front element: 1
Rear element: 2
KEY TAKEAWAYS
● A deque, also known as a double-ended queue, is a data structure that allows the
insertion and removal of elements at both the front and the rear.
● Deque does not strictly follow the FIFO (First-In-First-Out) rule.
LINKED LIST
SUB LESSON 6.1
A linked list is a fundamental data structure used to store a collection of elements. It consists of
a sequence of nodes, where each node contains a data element and a reference (or pointer) to
the next node in the sequence. The last node in the list typically has a null reference, indicating
the end of the list.
DATA ELEMENT: This part stores the required information, which can be of any data type.
Example
int age;
char name[20];
REFERENCE TO THE NEXT NODE: It will store the address of the next node.
Node: A node is a basic unit of a linked list. It contains two fields: the data field to store the
element and the next field to hold the reference to the next node.
Head: The head of a linked list refers to the first node in the list. It serves as the starting point to
access the elements in the list.
Singly Linked List: In a singly linked list, each node only has a reference to the next node.
Traversing the list is only possible in one direction, from the head to the tail.
Doubly Linked List: In a doubly linked list, each node has a reference to both the next node and
the previous node. This allows traversal in both directions.
Circular Linked List : A circular linked list is a data structure where each node contains a
reference to the next node, and the last node points back to the first node, creating a circular
structure.
Dynamic size: Linked lists can grow or shrink in size as elements are added or removed, unlike
arrays which have a fixed size.
Insertion and deletion: Insertion and deletion operations can be more efficient in linked lists
compared to arrays because they don't require shifting elements.
Flexibility: Linked lists allow efficient manipulation of elements, such as inserting or deleting
nodes, at any position in the list.
Random access: Linked lists do not provide direct access to arbitrary elements like arrays do.
Accessing an element at a specific index requires traversing the list from the head.
Extra memory: Linked lists require additional memory to store the references/pointers to the
next nodes.
Linked lists are commonly used in various applications and are the basis for more complex data
structures like stacks, queues, and hash tables. Understanding the basics of linked lists is crucial
for mastering data structures and algorithms.
CHOOSING AN APPROPRIATE DATA TYPE FOR THE LINKED LIST:
DATA ELEMENT : It can accommodate any data type, such as integers, characters, floats,
doubles, and so on.
REFERENCE(POINTER): The next part of a node is a pointer that stores the address of the
following node, making it a pointer type.
In this scenario, there is a requirement to organize and combine two distinct data types,
resulting in a heterogeneous structure.
To group different data types, a common approach is to use a structure (struct), a user-defined
data type that contains members of different data types.
Therefore, each node in a linked list is of the structure data type, as it encapsulates multiple
data fields representing the elements within a node.
Difference Between Arrays & Linked List
Array: A collection of similar types of data elements stored in contiguous memory locations.
Linked list: A collection of data values stored at random (non-contiguous) memory locations.
Array: Has a static size; the memory size is fixed and cannot be changed at run time.
Linked list: Has a dynamic size; the memory size is not fixed and can be changed during run time.
Array: Memory is allocated in the stack section.
Linked list: Memory is allocated in the heap section.
KEY TAKEAWAYS
A linked list can be visualized as a sequential chain of nodes, with each node pointing to the
next node in the sequence.
struct node
{
    int data;
    struct node *next;
};
where,
struct node *next - The next part of a node is utilized to reference the subsequent node, storing
the address of the next node in the linked list.
struct node *head = malloc(sizeof(struct node));
struct node *middle = malloc(sizeof(struct node));
struct node *last = malloc(sizeof(struct node));
head->data = 10;
middle->data = 20;
last->data = 30;
head->next = middle;
middle->next = last;
last->next = NULL;
Insertion: Nodes can be inserted at the beginning, end, or at any position in the linked list.
Deletion: Nodes can be removed from the list by updating the references of neighboring nodes.
Search: The list can be searched for a specific element by traversing through the nodes until the
element is found or the end of the list is reached.
Traversal: The list can be traversed from the head to the tail, accessing each node's data.
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

// Insert a new node at the beginning of the list
void insertStart(struct Node **head, int data) {
    struct Node *newNode = malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->next = *head;
    *head = newNode;
    printf("%d Inserted\n", data);
}

// Delete the first node of the list
void deleteStart(struct Node **head) {
    struct Node *temp = *head;
    if (*head == NULL) {
        printf("List is empty\n");
        return;
    }
    printf("%d deleted\n", temp->data);
    *head = (*head)->next;
    free(temp);
}

// Traverse the list and print each element
void display(struct Node *node) {
    while (node != NULL) {
        printf("%d ", node->data);
        node = node->next;
    }
    printf("\n");
}

int main()
{
    struct Node *head = NULL;
    insertStart(&head, 100);
    insertStart(&head, 80);
    insertStart(&head, 60);
    insertStart(&head, 40);
    insertStart(&head, 20);
    display(head);
    deleteStart(&head);
    deleteStart(&head);
    display(head);
    return 0;
}
OUTPUT :
100 Inserted
80 Inserted
60 Inserted
40 Inserted
20 Inserted
20 40 60 80 100
20 deleted
40 deleted
60 80 100
● A linked list is a linear data structure consisting of nodes where each node contains a
data element and a reference (link) to the next node in the sequence.
● A node in a linked list typically consists of two parts: the data part, which stores the
actual data, and the next part, which is a reference to the next node in the list.
TYPES OF LINKED LIST
SUB LESSON 7.1
The singly linked list is a linear data structure and the most common type, where each node
contains data and a pointer(address) to the next node.
A singly linked list is a type of linked list that allows traversal in only one direction: you can only
traverse the list in a forward direction, starting from the head (first node) and progressing
through the list until the last node, which points to NULL.
Let's consider a scenario where we have three nodes with addresses 100, 200, and 300. The
representation of these three nodes as a linked list can be visualized as follows:
In this particular example, the first node holds the address of the next node, which is 200. The
second node, in turn, contains the address of the last node, which is 300. Lastly, the third node
has a NULL value in its address field, indicating that it does not point to any other node. It is
worth noting that the pointer that stores the address of the initial node is commonly referred
to as the head pointer.
struct node {
    int data;
    struct node *next;
};
A doubly linked list is a linear data structure where each node contains three components: a
data element, a pointer to the previous node, and a pointer to the next node.
This structure allows for traversal in both directions, forward and backward; that is, the list is
bi-directional.
The data part of the node holds the actual data value, while the previous pointer points to the
preceding node in the list, and the next pointer points to the subsequent node.
A doubly linked list can be viewed as two singly linked lists running through the same nodes in
opposite directions. This structure is employed to store data in a manner that facilitates rapid
insertion and deletion of elements.
Let's consider a scenario where we have three nodes with addresses 100, 200, and 300,
respectively. The representation of these nodes in a doubly linked list can be visualized as
follows:
In the above representation, we can observe that each node in a doubly-linked list contains two
address components. One component stores the address of the next node, while the other
component stores the address of the previous node. The initial node in the doubly linked list
has a NULL value in the address part that corresponds to the previous node, indicating that it is
the starting point of the list and has no previous node.
struct node {
    int data;
    struct node *prev;
    struct node *next;
};
In a circular linked list, the last node is connected to the first node, creating a circular structure.
As a result, the link part of the last node contains the address of the first node in the list.
A circular linked list does not have a distinct beginning or end. It can be visualized as a ring of
nodes.
In a singly linked circular list, the next pointer of the last item (node) points back to the first
item in the list
In a doubly linked circular list, the prev pointer of the first item (node) points to the last item in
the list.
The representation of a circular linked list is similar to that of a singly linked list. It forms a
circular structure where the last node points back to the first node. This is depicted in the figure
below:
struct node
{
    int data;
    struct node *next;
};
KEY TAKEAWAYS
● The singly linked list is a linear data structure and the most common type, where each
node contains data and a pointer(address) to the next node.
● A doubly linked list is a linear data structure where each node contains three
components: a data element, a pointer to the previous node, and a pointer to the next
node.
● In a circular linked list, the last node is connected to the first node, creating a circular
structure. As a result, the link part of the last node contains the address of the first node
in the list.
TYPES OF LINKED LIST
SUB LESSON 7.2
A singly linked list is a linear data structure consisting of a sequence of nodes, where each node
contains a value and a reference to the next node in the list. It forms a chain-like structure
where data elements are connected in a forward direction and can be traversed in the same
forward direction.
The nodes are not stored in a contiguous block of memory, but instead, each node holds the
address of the next node in the list.
Singly-linked lists can dynamically grow or shrink in size as elements are added or removed. This
flexibility makes them suitable for scenarios where the number of elements may change over
time.
To access an element in a singly linked list, you need to traverse the list from the head node to
the desired position. This process has a time complexity of O(n), where n is the number of
elements in the list. Random access to elements by an index is not efficient in singly linked lists.
Singly-linked lists are used in various applications and algorithms. They are commonly
employed for implementing stacks, queues, hash tables, and graph algorithms.
SINGLY LINKED LIST COMPLEXITY
The time complexity of a singly linked list depends on the specific operation being performed.
The space complexity of a singly linked list is O(n) as it requires memory allocation for each
individual node. The space complexity is proportional to the number of nodes in the list.
In this program, we have four nodes to insert into the list. Each node consists of two parts: the
data part, which stores an integer value, and the address part, represented by the next pointer,
which holds the address of the next node.
The singly linked list starts with a special node called the head node, which holds the address of
the first node in the list. The last node in the list points to NULL to indicate the end of the list.
In a singly linked list, each node connects with the next node through a pointer that points to
the address of the next node, and arrows in the above-given diagram represent that.
CODE TO IMPLEMENT A SINGLY LINKED LIST
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

void display(struct Node *ptr);

int main()
{
    struct Node *first = malloc(sizeof(struct Node));
    struct Node *second = malloc(sizeof(struct Node));
    struct Node *third = malloc(sizeof(struct Node));
    struct Node *fourth = malloc(sizeof(struct Node));
    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;
    first->next = second;
    second->next = third;
    third->next = fourth;
    fourth->next = NULL;
    display(first);
    return 0;
}

// Traverse the list from the given node and print each element
void display(struct Node *ptr)
{
    while (ptr != NULL) {
        printf("%d ", ptr->data);
        ptr = ptr->next;
    }
    printf("\n");
}
OUTPUT :
10 20 30 40
KEY TAKEAWAYS
• A singly linked list is a linear data structure consisting of a sequence of nodes, where
each node contains a value and a reference to the next node in the list.
• It forms a chain-like structure where data elements are connected in a forward direction
and can be traversed in the same forward direction.
TYPES OF LINKED LIST
SUB LESSON 7.3
A doubly linked list is a data structure that consists of a sequence of nodes, where each node
contains data and two pointers: one pointing to the previous node and one pointing to the next
node. This allows for bidirectional traversal, meaning we can navigate both forward and
backward in the list.
Doubly linked lists require additional memory compared to singly linked lists because each node
has to store references to both the previous and next nodes.
Doubly linked lists provide flexibility in accessing and manipulating the list in both forward and
backward directions, making them useful in scenarios where bidirectional traversal is required
or when efficient insertion and deletion operations are necessary.
In a doubly linked list, the presence of two pointers, prev and next, requires additional steps to
be taken in certain operations.
Doubly linked lists are used as building blocks for other complex data structures like stacks,
queues, and associative arrays.
The time complexity of basic operations in a doubly linked list can be summarized as follows:
Insertion: O(1)
Deletion: O(1)
Searching: O(n)
These time complexities represent the worst-case scenario in terms of the number of elements (n) in the
doubly linked list.
The space complexity of a doubly linked list is O(n), where n is the number of nodes in the list. Each
node requires memory to store its data and pointers to the previous and next nodes.
In this program, we consider three elements to insert into the list. Each node in the linked list consists of
two parts: the data part, which stores an integer value, and the address parts of the previous and next
nodes, represented by the prev and next pointers, respectively, which allows bidirectional traversal.
The nodes may be stored at random addresses in memory, but their logical connection is maintained
through the prev and next pointers.
The address of the first node in the linked list is stored in a special node called the head node.
In the doubly linked list, the first node's prev pointer is set to NULL, indicating that there are no nodes
before it. Similarly, the last node's next pointer is set to NULL, indicating that there are no nodes after it.
These connections allow for efficient traversal in both directions, forward and backward, through the
linked list. The arrows in the diagram represent these connections between nodes.
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *prev;
    struct Node *next;
};

struct Node *head;

void display(struct Node *ptr);

int main()
{
    struct Node *first = malloc(sizeof(struct Node));
    struct Node *second = malloc(sizeof(struct Node));
    struct Node *third = malloc(sizeof(struct Node));
    struct Node *fourth = malloc(sizeof(struct Node));
    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;
    first->next = second;
    first->prev = NULL;
    second->next = third;
    second->prev = first;
    third->next = fourth;
    third->prev = second;
    fourth->next = NULL;
    fourth->prev = third;
    head = first;
    display(first);
    return 0;
}

// Traverse forward from the given node, printing each element
void display(struct Node *ptr)
{
    struct Node *last = NULL;
    while (ptr != NULL) {
        printf("%d ", ptr->data);
        last = ptr;          // remember the last node (useful for backward traversal)
        ptr = ptr->next;
    }
    printf("\n");
}
OUTPUT :
10 20 30 40
KEY TAKEAWAYS
• A doubly linked list is a data structure that consists of a sequence of nodes, where each
node contains data and two pointers: one pointing to the previous node and one
pointing to the next node.
• Doubly linked lists require additional memory compared to singly linked lists because
each node has to store references to both the previous and next nodes.
TYPES OF LINKED LIST
SUB LESSON 7.4
A circular linked list is characterized by a connection between the first and last nodes, forming a
circular structure. There is no concept of a NULL pointer indicating the end of the list.
It allows flexibility in setting the starting point, which can be any node within the list.
Traversal from the first node to the last node in a circular linked list is efficient.
In a circular linked list, determining the end of the list and controlling the looping can be more
challenging compared to a linear linked list.
Direct access to an individual node in a circular linked list is not possible; a node can only be
reached by traversing the list from a known node.
A circular linked list can be used to manage multiple running applications, where each
application is represented by a node in the circular linked list.
A circular linked list is particularly useful for implementing queues, trees or graphs. Unlike
other implementations, a circular linked list eliminates the need for maintaining separate
pointers for the front and rear. By keeping a pointer to the last inserted node, we can easily
determine the front by accessing the next node of the last inserted one.
1. Circular Singly Linked List
In this type of circular linked list, the address of the last node points to the address of the first
node.
2. Circular Doubly Linked List
In this particular type of circular linked list, both the last node and the first node contain
pointers that reference each other.
The time complexity of a circular linked list is typically determined by the number of nodes and
the specific operation being performed.
The space complexity of a circular linked list is the same as that of a regular singly linked list,
which is O(n). It requires space to store the data and pointers for each node in the list.
MEMORY REPRESENTATION OF CIRCULAR LINKED LIST
Let's start by discussing the addition of four elements to a linked list. To accomplish this, we
create four nodes, each containing both data and address information, which are stored at
random addresses. In a singly linked list, the last node's next pointer typically points to Null,
indicating the end of the list. However, in the case of a circular singly linked list, there is no Null
pointer since the last node's next pointer loops back to the first node, creating a circular
structure.
In a circular singly linked list, the last node's next pointer stores the address of the first node.
This means that the tail node's address points to the head node of the linked list, creating a
circular connection, and the arrows in this diagram represent that.
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

void display(struct Node *last_node);

int main()
{
    struct Node *first = malloc(sizeof(struct Node));
    struct Node *second = malloc(sizeof(struct Node));
    struct Node *third = malloc(sizeof(struct Node));
    struct Node *fourth = malloc(sizeof(struct Node));
    struct Node *last;
    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;
    first->next = second;
    second->next = third;
    third->next = fourth;
    fourth->next = first;   // the last node points back to the first
    last = fourth;
    display(last);
    return 0;
}

// Traverse the circular list once, starting from the node after `last_node`
void display(struct Node *last_node)
{
    struct Node *ptr;
    if (last_node == NULL)
        return;
    ptr = last_node->next;  // start at the first node
    do {
        printf("%d ", ptr->data);
        ptr = ptr->next;
    } while (ptr != last_node->next);
    printf("\n");
}
OUTPUT :
10 20 30 40
KEY TAKEAWAYS
● A circular linked list is characterized by a connection between the first and last nodes,
forming a circular structure. There is no concept of a NULL pointer indicating the end of
the list.
● A circular linked list can be used to manage multiple running applications, where each
application is represented by a node in the circular linked list.
TREE
SUB LESSON 8.1
INTRODUCTION TO TREE
The tree data structure is a specialized approach to efficiently organize and store data in a
computer system. It comprises a central node, structural nodes, and sub-nodes that are
interconnected through edges. It can be associated with a tree with roots, branches, and
leaves, where all the components are interconnected. By utilizing this structure, data can be
managed and accessed effectively.
In a tree structure, the root serves as the central node, and it can be connected to zero or
multiple subtrees, denoted as T1, T2, ..., Tk. Each subtree is associated with an edge that
connects the root of the tree to the root of the corresponding subtree.
The tree is considered a non-linear data structure because the data elements it contains are not
stored sequentially or linearly. Unlike linear data structures such as arrays or linked lists, where
elements are stored one after another, a tree organizes its data in a hierarchical manner with
multiple levels.
In a tree, each element (node) can have zero or more child nodes, forming a branching
structure. This hierarchical arrangement allows for efficient organization and representation of
data, as well as the establishment of relationships between different elements.
The non-linear nature of a tree arises from the fact that elements are not constrained to a
linear sequence, but rather can have multiple connections and form a branching structure. This
hierarchy enables efficient searching, insertion, and retrieval operations, making trees suitable
for various applications in computer science and data processing.
Example
TREE TRAVERSAL
Tree traversal is a fundamental operation that involves visiting every node in a tree data
structure exactly once. It plays a crucial role in computer science and various algorithms,
enabling operations and retrieval of information stored within the tree. Traversing a tree can be
accomplished using different techniques, with three commonly used methods being in-order
traversal, pre-order traversal, and post-order traversal.
For the given Tree we are performing the in-order traversal, pre-order traversal, and post-order
traversal.
Inorder traversal - The described technique follows the "left-root-right" policy, which
corresponds to the in-order traversal method. In in-order traversal, the left subtree is visited
first (traversed recursively), followed by the root node, and finally, the right subtree (also
traversed recursively). The name "in-order" indicates that the root node is traversed between
the left and right subtrees.
To perform an in-order traversal of a tree, we start from the root node (A) and visit its left
subtree (B) in an in-order manner. The process continues recursively until all the nodes are
visited. The resulting output of the in-order traversal will be the values of the nodes in
ascending order.
Final Output - D → B → E → A → F → C → G
Preorder traversal - The described technique follows the "root-left-right" policy, which
corresponds to the pre-order traversal method. In pre-order traversal, the root node is visited
first, followed by the left subtree (visited recursively), and finally, the right subtree (visited
recursively). The name "pre-order" indicates that the root node is traversed before the left and
right subtrees.
To perform a pre-order traversal of a tree, we start from the root node (A) and visit it first.
Then, we move to its left subtree (B) and traverse it in a pre-order manner. The process
continues recursively until all the nodes are visited. The resulting output of the pre-order
traversal will be the values of the nodes in the order they are visited.
Final Output - A → B → D → E → C → F → G
Postorder traversal - The described technique follows the "left-right-root" policy, which
corresponds to the post-order traversal method. In post-order traversal, the left subtree is
traversed first (recursively), followed by the right subtree (also recursively), and finally, the root
node is traversed. The name "post-order" indicates that the root node is traversed after the left
and right subtrees.
To perform a post-order traversal of a tree, we start from the root node (A) and visit its left
subtree (B) in a post-order manner. Then, we move to the right subtree and traverse it in a
post-order manner as well. Finally, we visit the root node itself. The process continues
recursively until all the nodes are visited. The resulting output of the post-order traversal will be
the values of the nodes in the order they are visited.
Final Output - D → E → B → F → G → C → A
Example:
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node *left;
    struct Node *right;
};

// Create a new tree node with the given data
struct Node *createNode(int data) {
    struct Node *newNode = malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

// Root -> left subtree -> right subtree
void preOrderTraversal(struct Node *node) {
    if (node != NULL) {
        printf("%d ", node->data);
        preOrderTraversal(node->left);
        preOrderTraversal(node->right);
    }
}

// Left subtree -> right subtree -> root
void postOrderTraversal(struct Node *node) {
    if (node != NULL) {
        postOrderTraversal(node->left);
        postOrderTraversal(node->right);
        printf("%d ", node->data);
    }
}

// Left subtree -> root -> right subtree
void inOrderTraversal(struct Node *node) {
    if (node != NULL) {
        inOrderTraversal(node->left);
        printf("%d ", node->data);
        inOrderTraversal(node->right);
    }
}

// Main function
int main() {
    struct Node *root = createNode(1);
    root->left = createNode(2);
    root->right = createNode(3);
    root->left->left = createNode(4);
    root->left->right = createNode(5);
    printf("Pre-order traversal of the binary tree: ");
    preOrderTraversal(root);
    printf("\n");
    printf("Post-order traversal of the binary tree: ");
    postOrderTraversal(root);
    printf("\n");
    printf("In-order traversal of the binary tree: ");
    inOrderTraversal(root);
    printf("\n");
    return 0;
}
Output:
Pre-order traversal of the binary tree: 1 2 4 5 3
Post-order traversal of the binary tree: 4 5 2 3 1
In-order traversal of the binary tree: 4 2 5 1 3
KEY TAKEAWAYS
● Tree traversal is the process of visiting every node in a tree exactly once.
● In-order traversal follows the left-root-right order, pre-order follows root-left-right, and
post-order follows left-right-root.
Node
A node is the basic unit of a tree; it holds a data value and links to its child nodes.
The leaf nodes, also known as external nodes, are located at the ends of each path and do not
possess any links or pointers to child nodes.
On the other hand, an internal node is a node that has at least one child node connected to it.
Edge
It serves as the connection or link between any two nodes within the tree structure.
Root
The root is the topmost node of a tree; it is the only node that has no parent.
Height of a Node
The height of a node is defined as the number of edges on the longest path from that node to a
leaf node. It represents the depth or level of a node within the tree structure.
Depth of a Node
The depth of a node refers to the number of edges in the path from the root node to that
particular node. It represents the level or position of the node within the tree hierarchy.
In the image below, you can observe the height and depth of each node in the tree structure
Degree of a Node
The degree of a node in a tree refers to the total number of branches or child nodes connected
to that particular node.
Forest
A forest is a collection of disjoint or separate trees. In other words, it refers to a set of trees
where there are no connections or edges between the trees within the collection. Each
individual tree in the forest retains its own hierarchical structure, with its own root and set of
nodes.
Creating a forest involves disconnecting the root node of a tree or removing the root node
entirely. When the root node is cut or removed from a tree, the resulting disconnected parts
are considered individual trees, and together they form a forest. Each disconnected part retains
its own tree structure and can be considered a separate tree within the forest.
KEY TAKEAWAYS
● A leaf (external) node has no children, while an internal node has at least one child.
● The height of a node is the number of edges on the longest path from that node to a
leaf; the depth of a node is the number of edges from the root to that node.
TYPES OF TREE
There are various types of trees in data structures, each with its own characteristics and
applications. Some commonly encountered types of trees include:
1. Binary Tree: A binary tree is a tree in which each node can have at most two child nodes,
typically referred to as the left child and the right child.
2. Binary Search Tree (BST): A binary search tree is a type of binary tree where the nodes
are arranged in a specific order. The left child of a node contains a value smaller than
the node's value, and the right child contains a value greater than the node's value. This
arrangement allows for efficient searching, insertion, and deletion operations.
3. AVL Tree: An AVL tree is a self-balancing binary search tree. It maintains a balance factor
for each node to ensure that the height difference between its left and right subtrees is
at most 1. This balancing mechanism helps maintain efficient search and insertion
operations.
BINARY TREE
A binary tree is a type of tree data structure where each parent node can have a maximum of
two children. In a binary tree, each node is composed of three components:
● A data item
● The address of the left child
● The address of the right child
1. Full Binary Tree
A full binary tree is a specific type of binary tree where every internal (non-leaf) node
has either two children or no children at all. In other words, each internal node in a full
binary tree is either a leaf node (with no children) or has exactly two child nodes.
2. Perfect Binary Tree
A perfect binary tree is a specific type of binary tree where every internal (non-leaf)
node has exactly two child nodes, and all the leaf nodes are at the same level or depth.
3. Complete Binary Tree
A complete binary tree is a type of binary tree that shares similarities with a full binary
tree, but with two distinct differences:
Every level must be completely filled: In a complete binary tree, all levels, except
possibly the last one, must be completely filled with nodes. This means that every node
at each level, except the last, must have two children. The last level may not be
completely filled, but all the nodes in that level should be as far left as possible.
Leaf elements lean towards the left: In a complete binary tree, all the leaf nodes
(bottom-most nodes) are positioned towards the left side of the tree. This means that
there should be no gap between the leaf nodes in the last level on the left side.
BINARY SEARCH TREE
A binary search tree (BST) is a data structure used for efficiently maintaining a sorted list of
numbers. The term "binary" in binary search tree refers to the fact that each node in the tree
can have a maximum of two children.
A binary search tree (BST) is referred to as a search tree because it provides an efficient way to
search for the presence of a number within the tree. The search operation can be performed in
O(log(n)) time complexity.
A binary search tree (BST) possesses specific properties that distinguish it from a regular binary
tree:
1. All nodes in the left subtree of a node have values that are less than the value of the
root node.
2. All nodes in the right subtree of a node have values that are greater than the value of
the root node.
3. Both the left and right subtrees of each node are themselves binary search trees,
meaning they also adhere to the above two properties.
Example:
1. Search Operation
Algorithm
If root == NULL
    return NULL;
If number == root->data
    return root->data;
If number < root->data
    return search(root->left)
Else
    return search(root->right)
2. Insert Operation
Inserting a value into its correct position within a binary search tree (BST) follows the
same path as the search operation. This is because, during the insertion process, we
maintain the rule that the left subtree contains values lesser than the root, while the
right subtree contains values greater than the root.
Algorithm:
If node == NULL
    return createNode(data)
If data < node->data
    node->left = insert(node->left, data)
Else
    node->right = insert(node->right, data)
return node;
To gain a visual understanding of how to add a number to an existing binary search tree
(BST), let's explore the process step by step.
Since the number 4 is smaller than 8, we will traverse through the left child of the node
8.
Since the number 4 is greater than 3, we will traverse through the right child of the node
3.
Since the number 4 is smaller than 6, we will traverse through the left child of the node
6.
We will insert the number 4 as the left child of the node 6.
3. Deletion Operation
Deleting a node from a binary search tree (BST) involves considering three main cases.
Case 1 : The first case for deleting a node from a binary search tree (BST) is when the
node to be deleted is a leaf node. In this scenario, we can simply remove the node from
the tree.
4 is to be deleted
Case 2 : The second case for deleting a node from a binary search tree (BST) occurs
when the node to be deleted has a single child node. In this case, we can follow the
steps below:
copy the value of its child to the node and delete the child
Final Tree
Case 3 : The third case for deleting a node from a binary search tree (BST) arises
when the node to be deleted has two children. In this scenario, we can follow the
steps below:
3 is to be deleted
Copy the value of the inorder successor (4) to the node
Final Tree
AVL TREE
An AVL tree is a type of self-balancing binary search tree. It incorporates additional information,
known as a balance factor, for each node. The balance factor can have one of three values: -1,
0, or +1.
The balance factor of a node in an AVL tree is determined by calculating the difference between
the height of its left subtree and the height of its right subtree. Mathematically, the balance
factor can be expressed as:
Balance Factor = height(left subtree) - height(right subtree)
The self-balancing property of an AVL tree is maintained by the balance factor. It is essential
that the balance factor of each node is always -1, 0, or +1.
The balancing algorithm of AVL trees typically involves four rotation cases:
1. Left Rotation
2. Right Rotation
3. Left-Right Rotation
4. Right-Left Rotation
1. Left Rotation
If a node is inserted into the right subtree of the right subtree, causing an imbalance in
the tree, a single left rotation is performed
2. Right Rotation
When a node is inserted into the left subtree of the left subtree, it may cause an
imbalance in the AVL tree. In such cases, a single right rotation is performed.
3. Left-Right Rotation
If a node is inserted into the right subtree of the left subtree, a left rotation is first
performed on the left child, followed by a right rotation on the unbalanced node.
4. Right-Left Rotation
If a node is inserted into the left subtree of the right subtree, a right rotation is first
performed on the right child, followed by a left rotation on the unbalanced node.
Let's illustrate the process of inserting elements into an AVL tree by constructing an example
AVL tree with integers from 1 to 7.
We begin by adding the first element, 1, as a node and then evaluate the balance factor, which
in this case is 0.
Since the binary search property and the balance factor are both met, we proceed to insert
another element into the AVL tree.
The balance factors for the two nodes are calculated and found to be -1 (the height of the left
subtree is 0, and the height of the right subtree is 1). As the balance factor does not exceed 1,
we proceed to add another element to the AVL tree.
Now, upon adding the third element, the balance factor exceeds 1 and becomes 2. As a result,
rotations need to be performed.
Likewise, the subsequent elements are inserted and reorganized using these rotations. After
the rearrangement, the resulting tree appears as
KEY TAKEAWAYS
● A binary tree is a tree in which each node can have at most two child nodes, typically
referred to as the left child and the right child.
● A binary search tree is a type of binary tree where the nodes are arranged in a specific
order.
● An AVL tree is a self-balancing binary search tree.
TREE
SUB LESSON 8.4
A Red-Black tree is a type of self-balancing binary search tree. The term "self-balancing"
indicates that the tree automatically maintains its balance by performing rotations or recoloring
nodes as necessary. This guarantees that the height of the tree remains logarithmic, ensuring
efficient operations.
The name "Red-Black" refers to the color assigned to each node in the tree. Each node stores
an additional bit representing its color. In this representation, a black node is denoted by the bit
value 0, while a red node is denoted by the bit value 1. The nodes in a Red-Black tree also store
other information like data values, left and right pointers, similar to a standard binary tree.
In a Red-Black tree, the root node is always black in color, adhering to the property that ensures
the tree remains balanced.
While in a regular binary tree, leaf nodes have no children, in a Red-Black tree, the nodes
without children are considered internal nodes. These internal nodes are connected to special
NIL nodes, which are always black in color and serve as the leaf nodes in the Red-Black tree.
One of the key properties of a Red-Black tree is that if a node is red, its children must be black.
This property ensures that there are no consecutive red nodes along any path in the tree.
Additionally, the Red-Black tree maintains another property where every path from a node to
any of its descendant NIL nodes contains the same number of black nodes. This property
guarantees that the tree remains balanced.
By following these properties, a Red-Black tree provides efficient insertion, deletion, and search
operations with a guaranteed logarithmic time complexity.
During the insertion process in a Red-Black tree, the following rules are followed to maintain
the properties of the tree:
1. If the tree is empty, create a new node as the root node and color it black.
2. If the tree is not empty, create a new node as a leaf node and color it red.
3. If the parent of the new node is black, no further action is needed, and the tree remains
balanced.
4. If the parent of the new node is red, we check the color of the parent's sibling (the new
node's uncle) to maintain the properties of the Red-Black tree:
a) If the uncle's color is Black (or the uncle is a NIL node), then we perform rotations
and recoloring.
b) If the uncle's color is Red, then we recolor the parent, the uncle, and the
grandparent. We also check whether the grandparent of the new node is the root node
or not; if it is not the root node, we recolor it red and recheck from there.
These rules ensure that the Red-Black tree remains balanced after the insertion operation,
preserving the Red-Black tree properties, including maintaining the correct coloring and black-
depth along all paths from the root to the leaves.
1. Insert 10: The tree is initially empty, so we create a new node with a value of 10 and
color it black, making it the root of the tree.
2. Insert 18: We insert 18 as a new red node. Since 18 is greater than 10, it becomes the
right child of 10.
3. Insert 7: We insert 7 as a new red node. Since 7 is less than 10, it becomes the left child
of 10.
4. Insert 15: Since 15 is greater than 10 but less than 18, the new node (15) will be inserted
to the left of node 18. As per the Red-Black tree properties, the new node (15) will be
colored red since the tree is not empty.
The current tree violates the Red-Black tree property that states there should be no red-
red parent-child relationship. To rectify this violation, we need to apply the rules of a
Red-Black tree.
Rule 4 states that if the parent of a new node is red, we need to check the color of the
parent's sibling. In this case, the new node (15) has a parent of node 18, and the sibling
of the parent node (18) is node 7.
Since the color of the parent's sibling (node 7) is red, we need to apply Rule 4b.
According to Rule 4b, we recolor the parent (18) and the uncle (7) black; the grandparent
(10) is the root node, so it remains black.
After applying Rule 4b, the recolored tree would look like this:
5. Insert 16: Now, we need to insert 16 into the tree. Since 16 is greater than 10 but less
than 18 and greater than 15, it will be placed to the right of node 15. As the tree is not
empty, the new node (16) will be colored red according to Red-Black tree properties.
The current tree again violates the Red-Black tree property that states there should be no
red-red parent-child relationship (nodes 15 and 16). Here, the parent's sibling is a NIL
(black) node, so Rule 4a applies and we rebalance with rotations.
We have an LR (left-right) relationship, so we need to perform two rotations: first a left
rotation, and then a right rotation.
When we perform the right rotation, the median element would be the root node.
After performing the rotation and resolving the LR relationship, let's proceed with the
recoloring of the nodes:
The recoloring step ensures that the Red-Black tree properties are maintained. In this
case, node 16 and node 18 will undergo recoloring:
● Since the color of node 16 is red, it needs to be changed to black.
● Since the color of node 18 is black, it needs to be changed to red.
KEY TAKEAWAYS
● A Red-Black tree is a self-balancing binary search tree in which every node is colored
red or black.
● The root node is always black, and a red node can never have a red child.
● Every path from a node to its descendant NIL nodes contains the same number of black
nodes, which keeps the height of the tree logarithmic.
A graph is an example of a non-linear data structure that is composed of vertices, also known as
nodes and edges. The edges, which can be represented as lines or arcs, connect pairs of nodes
within the graph.
A graph data structure comprises a set of nodes, each containing data, and these nodes are
interconnected.
Vertices: Vertices serve as the basic units of a graph and are sometimes referred to as nodes.
Each node/vertex can have a label or remain unlabeled.
Edges: Edges are used to establish connections between two nodes in a graph. In a directed
graph, an edge can be represented as an ordered pair of nodes. There are no restrictions on
how edges can link any two nodes, allowing for diverse connections. Sometimes, edges are also
called arcs. Each edge can be assigned a label or be left unlabeled.
A graph can be represented as an ordered pair G = (V, E), where V is a set of vertices or nodes,
and E is a collection of vertex pairs from V, representing the edges of the graph. For instance,
consider the following graph:
V = { 1, 2, 3, 4, 5, 6 }
GRAPH TERMINOLOGY
In the graph,
V = {0, 1, 2, 3}
G = {V, E}
Adjacency: In a graph, two vertices are considered adjacent if there exists an edge connecting
them. For example, vertices 2 and 3 are not adjacent since there is no edge connecting them.
Path: A path is a sequence of edges that enables traversal from one vertex, A, to another
vertex, B, within a graph. For instance, 0-1-2 (following the edges 0-1 and 1-2) and 0-2 are
two routes from vertex 0 to vertex 2.
TYPES OF GRAPH
1. Null Graph
A graph is referred to as a null graph when it contains no edges, indicating the absence
of connections between vertices.
2. Trivial Graph
A trivial graph is the smallest possible graph consisting of a single vertex without any
edges.
3. Undirected Graph
A graph in which edges are undirected, meaning there is no specific direction associated
with them. In this type of graph, the nodes are considered unordered pairs in the
definition of each edge.
4. Directed Graph
A directed graph is a type of graph where edges have a specific direction. In this graph,
the nodes are represented as ordered pairs in the definition of each edge.
5. Connected Graph
A connected graph refers to a graph in which it is possible to reach any node from any
other node within the graph through a series of edges.
6. Disconnected Graph
A disconnected graph is a type of graph where there exists at least one node that cannot
be reached from another node within the graph.
7. Regular Graph
A regular graph is a type of graph where each vertex has the same number of adjacent
vertices. In other words, all vertices in a regular graph have the same degree. The
degree of a vertex refers to the number of edges connected to it.
For example, in a regular graph of degree 3, every vertex will be connected to exactly
three other vertices. Regular graphs are often denoted as "k-regular," where "k"
represents the degree of each vertex.
8. Complete Graph
A complete graph is a type of graph where each node is directly connected to every
other node by an edge.
REPRESENTATION OF GRAPH
1. Vertices (also known as nodes): A finite set of vertices that represent distinct elements
or entities.
2. Edges: A finite set of ordered pairs (u, v) that define connections between vertices. In
the case of a directed graph (di-graph), the order of the pair matters, as (u, v) is not the
same as (v, u). The pair (u, v) indicates that there is an edge originating from vertex u
and pointing to vertex v. The edges may also include weight, value, or cost associated
with them.
Two common ways of representing a graph in memory are:
1. Adjacency Matrix
2. Adjacency List
Adjacency Matrix:
One commonly used method for representing the relationships between vertices and edges in a
graph is through an adjacency matrix. An adjacency matrix can effectively capture the structure
of different types of graphs, including undirected graphs, directed graphs, and weighted graphs.
If the value adj[i][j] is equal to w, it signifies the presence of an edge from vertex i to vertex j,
and the weight of this edge is w.
When considering the adjacency matrix representation of a graph, an entry Aij refers to the
specific element at the intersection of the ith row and the jth column. The value Aij is set to 1
if there exists an edge from vertex Vi to vertex Vj in the graph. Conversely, if there is no such
edge, the value of Aij is set to 0.
In the diagram above, the connections between the vertices (A, B, C, D, E) are represented
using an adjacency matrix.
It's important to note that different adjacency matrices exist for directed and undirected
graphs. In a directed graph, an entry Aij will have a value of 1 only if there is a directed edge
from vertex Vi to vertex Vj.
In a directed graph, edges denote specific paths from one vertex to another. For instance, if
there is a path from vertex A to vertex B, it indicates that vertex A serves as the starting node,
while vertex B serves as the destination node or terminal node.
In the graph illustrated above, it is evident that there are no self-loops, resulting in diagonal
entries of the adjacency matrix being 0.
The adjacency matrix representation of a weighted directed graph differs from other
representations, as it replaces the non-zero values with the actual weights assigned to the
edges.
Adjacency List
An adjacency list is utilized to store the graph in the computer's memory. This approach offers
efficiency in terms of storage, as we only need to store the values corresponding to the edges.
In the above figure, it is evident that each node of the graph has a linked list or adjacency list
associated with it. From vertex A, there are paths leading to vertex B and vertex D. These nodes
are connected to node A in the provided adjacency list.
In the case of a directed graph, the sum of the lengths of the adjacency lists is equal to the total
number of edges present in the graph.
Adjacency list representation of the weighted directed graph.
In the context of a weighted directed graph, each node includes an additional field known as
the node weight.
The adjacency list representation offers convenience when adding a new vertex, as the use of
linked lists allows for efficient insertion. Additionally, this representation saves space due to its
linked structure.
KEY TAKEAWAYS
● A graph can be represented in memory using an adjacency matrix or an adjacency list.
● An adjacency matrix stores a 1 (or the edge weight) at entry [i][j] when there is an edge
from vertex i to vertex j.
● An adjacency list stores, for each vertex, a linked list of its adjacent vertices, which
saves space for sparse graphs.
OPERATIONS ON GRAPH
Operations on graphs refer to various actions and manipulations performed on graph data
structures. Graphs consist of nodes (vertices) connected by edges, and these operations enable
the analysis, traversal, modification, and other transformations of graphs. Here are some
common operations on graphs:
1. Graph Creation: Creating a graph involves defining the nodes and edges that connect
them. Graphs can be either directed (edges have a specific direction) or undirected
(edges are bidirectional).
2. Adding and Removing Nodes: Nodes can be added or removed from a graph, which may
affect the connectivity and structure of the graph.
3. Adding and Removing Edges: Edges can be added or removed between nodes in a
graph, altering the relationships and connectivity between the nodes.
IMPLEMENTATION
#include <stdio.h>

#define MAX_NODES 5

// Add an undirected edge between source and destination
void addEdge(int adjacencyMatrix[MAX_NODES][MAX_NODES], int source, int destination) {
    adjacencyMatrix[source][destination] = 1;
    adjacencyMatrix[destination][source] = 1;
}

// Remove the undirected edge between source and destination
void removeEdge(int adjacencyMatrix[MAX_NODES][MAX_NODES], int source, int destination) {
    adjacencyMatrix[source][destination] = 0;
    adjacencyMatrix[destination][source] = 0;
}

// Print the adjacency matrix row by row
void printGraph(int adjacencyMatrix[MAX_NODES][MAX_NODES], int numNodes) {
    int i, j;
    for (i = 0; i < numNodes; i++) {
        for (j = 0; j < numNodes; j++)
            printf("%d ", adjacencyMatrix[i][j]);
        printf("\n");
    }
}

int main() {
    int numNodes = 5;
    int adjacencyMatrix[MAX_NODES][MAX_NODES] = { 0 };
    addEdge(adjacencyMatrix, 0, 1);
    addEdge(adjacencyMatrix, 0, 4);
    addEdge(adjacencyMatrix, 1, 3);
    addEdge(adjacencyMatrix, 1, 4);
    addEdge(adjacencyMatrix, 2, 3);
    addEdge(adjacencyMatrix, 3, 4);
    printf("Graph:\n");
    printGraph(adjacencyMatrix, numNodes);
    // Remove an edge
    removeEdge(adjacencyMatrix, 1, 4);
    printf("\nUpdated Graph:\n");
    printGraph(adjacencyMatrix, numNodes);
    return 0;
}
OUTPUT:
KEY TAKEAWAYS
● Common operations on a graph include creating the graph, adding or removing nodes,
and adding or removing edges.
● In an adjacency-matrix representation, adding or removing an edge is a constant-time
update of two matrix entries.
Depth-First Search (DFS) is an algorithm used for traversing or searching through a graph or
tree data structure. It explores as far as possible along each branch before backtracking. The
algorithm starts at a selected vertex and explores the deepest unvisited node in the graph until
all nodes have been visited or a specific condition is met.
Depth-First Search categorizes each vertex of the graph into two groups:
1. Visited
2. Not visited
The main objective of DFS is to mark each vertex as visited while avoiding cycles.
1. Start by selecting any vertex from the graph and place it on top of a stack.
2. Pop the top item from the stack and mark it as visited.
3. Create a list of adjacent nodes for the current vertex. Add only those nodes that have
not been visited to the top of the stack.
4. Repeat steps 2 and 3 until the stack becomes empty.
By following these steps, the DFS algorithm explores the graph in a depth-first manner.
To understand how the Depth First Search (DFS) algorithm works, let's consider an example
with an undirected graph containing 5 vertices.
To initiate the DFS algorithm, we begin from vertex 0. We mark vertex 0 as visited and proceed
by adding all its neighboring vertices to a stack for further exploration.
Next, we move on to the element at the top of the stack, which is vertex 1. We visit vertex 1
and explore its adjacent nodes. Since vertex 0 has already been visited, we proceed to visit
vertex 2 instead.
Vertex 2 has an adjacent vertex, which is vertex 4, that hasn't been visited yet. Thus, we add
vertex 4 to the top of the stack and proceed to visit it.
Once we visit the last vertex, which is vertex 3, we observe that it does not have any unvisited
adjacent nodes. This indicates that we have successfully completed the Depth First Traversal of
the graph.
The time complexity of the DFS algorithm can be expressed as O(V + E), where V represents the
number of nodes in the graph, and E represents the number of edges.
As for the space complexity, it is O(V), indicating that the amount of memory required by the
algorithm grows linearly with the number of nodes in the graph.
#include <stdio.h>
#include <stdbool.h>

#define MAX_VERTICES 5

bool visited[MAX_VERTICES];

// Visit a vertex, then recursively visit its unvisited neighbors
void dfs(int graph[MAX_VERTICES][MAX_VERTICES], int vertex, int numVertices) {
    visited[vertex] = true;
    printf("Visited %d\n", vertex);
    for (int i = 0; i < numVertices; i++)
        if (graph[vertex][i] == 1 && !visited[i])
            dfs(graph, i, numVertices);
}

// Example usage
int main() {
    int numVertices = 5;
    int graph[MAX_VERTICES][MAX_VERTICES] = {
        {0, 1, 1, 0, 0},
        {1, 0, 1, 1, 0},
        {1, 1, 0, 0, 1},
        {0, 1, 0, 0, 1},
        {0, 0, 1, 1, 0}
    };
    int startVertex = 0;
    // Start DFS from the specified vertex
    dfs(graph, startVertex, numVertices);
    return 0;
}
OUTPUT
KEY TAKEAWAYS
● The main objective of DFS is to mark each vertex as visited while avoiding cycles.
● DFS algorithm explores the graph in a depth-first manner.
● It explores as far as possible along each branch before backtracking: the algorithm
starts at a selected vertex and explores the deepest unvisited node in the graph until all
nodes have been visited or a specific condition is met.
● It is used for traversing or searching through a graph or tree data structure.
GRAPH
SUB LESSON 9.5
Breadth-First Search is a fundamental graph traversal algorithm used in data structures and
algorithms. It explores all the vertices of a graph in breadth-first order, meaning it visits all the
vertices at the same level before moving to the next level.
Breadth-First Search is a common graph traversal algorithm that categorizes each vertex into
two groups:
1. Visited: Represents the vertices that have been explored and processed.
2. Not Visited: Represents the vertices that have not yet been explored.
The primary goal of BFS is to traverse the graph while marking each vertex as visited and
avoiding cycles.
1. Choose any vertex from the graph and enqueue it at the back of a queue.
2. Dequeue the front item from the queue and mark it as visited.
3. Create a list of adjacent nodes for the dequeued vertex. Add only those nodes that have
not been visited to the back of the queue.
4. Repeat steps 2 and 3 until the queue becomes empty.
5. In case the graph consists of disconnected components, to ensure that every vertex is
covered, you can run the BFS algorithm on each unvisited node.
By using a queue data structure, BFS explores the graph layer by layer, visiting all the vertices at
the same level before moving to the next level.
Let's look at an example of how the Breadth-First Search (BFS) algorithm works with a simple
undirected graph consisting of 5 vertices:
We start from vertex 0. In the BFS algorithm, we put vertex 0 in the visited list and enqueue all
its adjacent vertices into the queue.
Next, we visit the element at the front of the queue, i.e., vertex 1, and explore its adjacent
nodes. Since vertex 0 has already been visited, we move on to visit vertex 2 instead.
Vertex 2 has an unvisited adjacent vertex, which is vertex 4. We enqueue vertex 4 at the back of
the queue and then visit vertex 3, which is at the front of the queue.
Only vertex 4 remains in the queue since the only adjacent node of vertex 3, which is vertex 0,
has already been visited. We dequeue vertex 4 from the queue and visit it.
When the queue becomes empty, it signifies that the Breadth-First Traversal of the graph has
concluded.
The time complexity of the Breadth-First Search (BFS) algorithm can be expressed as O(V + E),
where V represents the number of nodes in the graph and E represents the number of edges.
Regarding the space complexity, it is O(V), indicating that the amount of memory required by
the algorithm grows linearly with the number of nodes in the graph.
#include <stdio.h>
#include <stdbool.h>

#define MAX_VERTICES 5

// BFS using a queue: visit vertices level by level
void bfs(int graph[MAX_VERTICES][MAX_VERTICES], int startVertex, int numVertices) {
    int queue[MAX_VERTICES];
    int front = 0, rear = 0;
    bool visited[MAX_VERTICES] = { false };
    queue[rear++] = startVertex;
    visited[startVertex] = true;
    while (front < rear) {
        int current = queue[front++];   // dequeue the front item
        printf("Visited %d\n", current);
        // Enqueue all unvisited adjacent vertices
        for (int i = 0; i < numVertices; i++)
            if (graph[current][i] == 1 && !visited[i]) {
                visited[i] = true;
                queue[rear++] = i;
            }
    }
}

int main() {
    int numVertices = 5;
    int graph[MAX_VERTICES][MAX_VERTICES] = {
        {0, 1, 1, 0, 0},
        {1, 0, 1, 1, 0},
        {1, 1, 0, 0, 1},
        {0, 1, 0, 0, 1},
        {0, 0, 1, 1, 0}
    };
    int startVertex = 0;
    bfs(graph, startVertex, numVertices);
    return 0;
}
OUTPUT
KEY TAKEAWAYS
● BFS explores the graph layer by layer, visiting all vertices at one level before moving
to the next.
● BFS uses a queue, whereas DFS uses a stack.
● The time complexity of BFS is O(V + E), and its space complexity is O(V).
GRAPH
SUB LESSON 9.6
PRIM’S ALGORITHM
Prim's algorithm is a greedy approach for generating a minimum spanning tree from a given
graph. It operates by selecting a starting vertex and iteratively adding the minimum weight
edge that connects the current tree to a new vertex. This process continues until all vertices are
included in the tree, resulting in a minimum-spanning tree that has the lowest total weight
among all possible trees that can be derived from the original graph.
A minimum spanning tree (MST) is a subgraph of a connected, weighted graph that includes all
the vertices of the graph while minimizing the total weight or cost of the edges.
To generate a minimum spanning tree using Prim's algorithm, follow the steps below:
1. Begin by selecting a random vertex as the starting point for the minimum spanning tree.
2. Find all the edges that connect the current minimum spanning tree to new vertices.
3. Select the edge with the lowest weight from the previous step and add it to the
minimum spanning tree.
4. Repeat steps 2 and 3 until all vertices are included in the minimum spanning tree.
These steps ensure that the minimum spanning tree gradually grows by adding edges with
the lowest weights, connecting all vertices in the most efficient manner.
EXAMPLE
Select the edge with the minimum weight from the edges connected to the chosen vertex, and
include it in the growing minimum spanning tree.
Select the vertex that is closest in distance to the current minimum spanning tree but has not
yet been included in the solution.
Select the edge that is closest in distance among the edges that have not yet been included in
the solution. If there are multiple edges with the same minimum distance, choose one of them
randomly.
Continue the process of selecting vertices and edges as described above until you have formed
a spanning tree that includes all the vertices of the graph.
// Prim's Algorithm in C
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 5

// Find the vertex with the minimum key value not yet included in the MST
int minKey(int key[], bool mstSet[]) {
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;
    return min_index;
}

// Print the constructed MST stored in parent[]
void printMST(int parent[], int graph[V][V]) {
    printf("Edge \tWeight\n");
    for (int i = 1; i < V; i++)
        printf("%d - %d \t%d \n", parent[i], i, graph[i][parent[i]]);
}

// Construct the MST of a graph given in adjacency matrix representation
void primMST(int graph[V][V]) {
    int parent[V];  // stores the constructed MST
    int key[V];     // minimum weight edge reaching each vertex
    bool mstSet[V]; // true once a vertex is included in the MST
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;
    // Always start from the first vertex
    key[0] = 0;
    parent[0] = -1;
    for (int count = 0; count < V - 1; count++) {
        int u = minKey(key, mstSet);
        mstSet[u] = true;
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !mstSet[v] && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }
    printMST(parent, graph);
}

// Driver's code
int main()
{
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };
    primMST(graph);
    return 0;
}
OUTPUT
KEY TAKEAWAYS
• Prim's algorithm is a greedy approach for generating a minimum spanning tree from a
given graph.
• It operates by selecting a starting vertex and iteratively adding the minimum weight
edge that connects the current tree to a new vertex.
GRAPH
SUB LESSON 9.7
KRUSKAL’S ALGORITHM
Kruskal's algorithm is a popular algorithm used to find the minimum spanning tree (MST) of a
connected, weighted graph. Kruskal's algorithm begins by sorting the edges of the graph in
ascending order based on their weights. It then progressively adds edges with the lowest
weights to the minimum spanning tree until all vertices are connected.
1. Sort all the edges of the graph in ascending order based on their weights.
2. Select the edge with the lowest weight and add it to the spanning tree. If adding this
edge creates a cycle, reject it.
3. Continue selecting edges with increasing weights and add them to the spanning tree, as
long as they don't create cycles.
4. Repeat step 3 until all vertices are included in the spanning tree.
By following these steps, Kruskal's algorithm constructs the minimum spanning tree by
iteratively adding the lowest-weight edges that do not form cycles. Eventually, the
selected edges form the minimum spanning tree of the given graph.
Since the graph consists of 9 vertices and 14 edges, the resulting minimum spanning tree will
have (9 - 1) = 8 edges, as it follows the property that a minimum spanning tree in a connected
graph with V vertices has V - 1 edges.
EXAMPLE
After sorting:
Weight  Source  Destination
1       7       6
2       8       2
2       6       5
4       0       1
4       2       5
6       8       6
7       2       3
7       7       8
8       0       7
8       1       2
9       3       4
10      5       4
11      1       7
14      3       5
Now pick all edges one by one from the sorted list of edges:
In the first step, select the edge connecting vertices 7 and 6. If adding this edge does not create
a cycle in the current spanning tree, include it in the minimum spanning tree.
Moving to the next step, choose the edge connecting vertices 8 and 2. If adding this edge to the
current spanning tree does not result in a cycle, include it in the minimum spanning tree.
Continuing with the algorithm, select the edge connecting vertices 6 and 5. If adding this edge
to the existing minimum spanning tree does not create a cycle, include it in the spanning tree.
Moving forward, choose the edge connecting vertices 0 and 1. If adding this edge to the current
minimum spanning tree does not result in a cycle, include it in the spanning tree.
Continuing the process, select the edge connecting vertices 2 and 5. If including this edge in the
current minimum spanning tree does not create a cycle, add it to the spanning tree.
Proceeding to the next step, consider the edge connecting vertices 8 and 6. However, including
this edge would result in a cycle, so it is discarded. Instead, select the edge connecting vertices
2 and 3. As adding this edge does not create a cycle in the current minimum spanning tree,
include it in the spanning tree.
Moving on to the next step, examine the edge connecting vertices 7 and 8. However, including
this edge would introduce a cycle, so it is discarded. Instead, select the edge connecting
vertices 0 and 7. As adding this edge does not create a cycle in the current minimum spanning
tree, include it in the spanning tree.
Continuing to the next step, consider the edge connecting vertices 1 and 2. However, including
this edge would result in a cycle, so it is discarded. Instead, select the edge connecting vertices
3 and 4. As adding this edge does not create a cycle in the current minimum spanning tree,
include it in the spanning tree.
Since the number of edges included in the minimum spanning tree (MST) is equal to (V - 1),
where V represents the number of vertices, the algorithm terminates at this point.
// Kruskal's algorithm in C
#include <stdio.h>

#define MAX 30

typedef struct edge {
  int u, v, w;
} edge;

typedef struct edge_list {
  edge data[MAX];
  int n;
} edge_list;

edge_list elist;
int Graph[MAX][MAX], n;
edge_list spanlist;

void kruskalAlgo();
int find(int belongs[], int vertexno);
void applyUnion(int belongs[], int c1, int c2);
void sort();
void print();

void kruskalAlgo() {
  int belongs[MAX], i, j, cno1, cno2;
  elist.n = 0;
  // Collect every edge once (the matrix is scanned below the diagonal)
  for (i = 1; i < n; i++)
    for (j = 0; j < i; j++)
      if (Graph[i][j] != 0) {
        elist.data[elist.n].u = i;
        elist.data[elist.n].v = j;
        elist.data[elist.n].w = Graph[i][j];
        elist.n++;
      }
  sort(); // sort the edges by ascending weight
  for (i = 0; i < n; i++)
    belongs[i] = i; // each vertex starts in its own component
  spanlist.n = 0;
  for (i = 0; i < elist.n; i++) {
    cno1 = find(belongs, elist.data[i].u);
    cno2 = find(belongs, elist.data[i].v);
    // Accept the edge only if it joins two different components (no cycle)
    if (cno1 != cno2) {
      spanlist.data[spanlist.n] = elist.data[i];
      spanlist.n = spanlist.n + 1;
      applyUnion(belongs, cno1, cno2);
    }
  }
}

int find(int belongs[], int vertexno) {
  return (belongs[vertexno]);
}

void applyUnion(int belongs[], int c1, int c2) {
  int i;
  for (i = 0; i < n; i++)
    if (belongs[i] == c2)
      belongs[i] = c1;
}

// Sorting algo: simple bubble sort of the edge list by weight
void sort() {
  int i, j;
  edge temp;
  for (i = 1; i < elist.n; i++)
    for (j = 0; j < elist.n - 1; j++)
      if (elist.data[j].w > elist.data[j + 1].w) {
        temp = elist.data[j];
        elist.data[j] = elist.data[j + 1];
        elist.data[j + 1] = temp;
      }
}

void print() {
  int i, cost = 0;
  for (i = 0; i < spanlist.n; i++) {
    printf("\n%d - %d : %d", spanlist.data[i].u, spanlist.data[i].v, spanlist.data[i].w);
    cost = cost + spanlist.data[i].w;
  }
  printf("\nSpanning tree cost: %d\n", cost);
}

int main() {
  int i, j;
  n = 6;
  // Adjacency matrix of the 6-vertex example graph (0 = no edge)
  int w[6][6] = {
    {0, 4, 4, 0, 0, 0},
    {4, 0, 2, 0, 0, 0},
    {4, 2, 0, 3, 4, 0},
    {0, 0, 3, 0, 3, 0},
    {0, 0, 4, 3, 0, 0},
    {0, 0, 2, 0, 3, 0}
  };
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
      Graph[i][j] = w[i][j];
  kruskalAlgo();
  print();
  return 0;
}
OUTPUT
KEY TAKEAWAYS
• Kruskal's algorithm is a popular algorithm used to find the minimum spanning tree
(MST) of a connected, weighted graph.
• Kruskal's algorithm begins by sorting the edges of the graph in ascending order based on
their weights. It then progressively adds edges with the lowest weights to the minimum
spanning tree until all vertices are connected.
GRAPH
SUB LESSON 9.8
DIJKSTRA’S ALGORITHM
Dijkstra's algorithm finds the shortest path from a given source vertex to every other vertex
in a weighted graph with non-negative edge weights. The key distinction between Dijkstra's
algorithm and a minimum spanning tree is that the shortest path found by Dijkstra's algorithm
between two vertices may not encompass all the vertices in the graph.
EXAMPLE
The algorithm will calculate the shortest path from a given starting node (such as node 0) to all
the other nodes in the graph. In this context, the edge weights in the graph are considered to
represent the distances between the nodes.
In Dijkstra's algorithm, the distance from the source node to itself is considered as 0. In the
given example, if the source node is labeled as 0, its distance from itself will be set to 0.
For all the other nodes in the graph, initially, their distances from the source node are
unknown. To handle this, we typically mark their distances as infinity to indicate that they have
not been visited yet and their distances are not yet determined.
In addition to keeping track of the distances from the source node, Dijkstra's algorithm also
utilizes an array or data structure to store unvisited or unmarked nodes. The algorithm is
considered complete when all the nodes in the graph have been marked as visited.
Unvisited Nodes: 0, 1, 2, 3, 4, 5, 6.
We typically start from a specific node, such as Node 0, and mark it as visited. In visual
representations, this is often depicted by marking the visited node in red.
After visiting a node, the next step in Dijkstra's algorithm is to consider its adjacent nodes and
calculate their tentative distances. In this step, we examine the neighboring nodes and choose
the node with the minimum distance as the next node to visit.
For example, let's say we have two adjacent nodes, Node 1 and Node 2, with tentative
distances of 2 and 6, respectively. In this case, Node 1 has the minimum distance. Thus, we
would mark Node 1 as visited and update its distance.
Upon reaching Node 3, the algorithm marks it as visited and updates the cumulative distance.
Next, we have two adjacent nodes, Node 4 and Node 5, with distances of 10 and 15, respectively. To
determine the next node to visit, we select the node with the minimum distance. Node 4 has
the minimum distance, so we mark it as visited and update its distance.
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2 + 5 + 10 = 17
Next, we examine the adjacent nodes of the current node. The next adjacent node is Node 6,
so we mark it as visited and update the distance.
Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10 + 2 = 19
So, the shortest distance from the source vertex (Node 0) to Node 6 is 19.
C PROGRAM FOR DIJKSTRA’S ALGORITHM
// Dijkstra's Algorithm in C
#include <stdio.h>

#define INFINITY 9999
#define MAX 10

void Dijkstra(int Graph[MAX][MAX], int n, int start) {
  int cost[MAX][MAX], distance[MAX], pred[MAX];
  int visited[MAX], count, mindistance, nextnode, i, j;

  // Build the cost matrix: 0 means "no edge", so replace it with INFINITY
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
      if (Graph[i][j] == 0)
        cost[i][j] = INFINITY;
      else
        cost[i][j] = Graph[i][j];

  // Initialize tentative distances from the start vertex
  for (i = 0; i < n; i++) {
    distance[i] = cost[start][i];
    pred[i] = start;
    visited[i] = 0;
  }

  distance[start] = 0;
  visited[start] = 1;
  count = 1;

  while (count < n - 1) {
    mindistance = INFINITY;
    // Pick the unvisited vertex with the smallest tentative distance
    for (i = 0; i < n; i++)
      if (distance[i] < mindistance && !visited[i]) {
        mindistance = distance[i];
        nextnode = i;
      }
    visited[nextnode] = 1;
    // Relax the edges leaving nextnode
    for (i = 0; i < n; i++)
      if (!visited[i])
        if (mindistance + cost[nextnode][i] < distance[i]) {
          distance[i] = mindistance + cost[nextnode][i];
          pred[i] = nextnode;
        }
    count++;
  }

  for (i = 0; i < n; i++)
    if (i != start)
      printf("Distance from source to %d: %d\n", i, distance[i]);
}

int main() {
  int Graph[MAX][MAX], i, j, n, u;
  n = 7;
  // Adjacency matrix of the 7-vertex example graph (0 = no edge)
  int w[7][7] = {
    {0, 0, 1, 2, 0, 0, 0},
    {0, 0, 2, 0, 0, 3, 0},
    {1, 2, 0, 1, 3, 0, 0},
    {2, 0, 1, 0, 0, 0, 1},
    {0, 0, 3, 0, 0, 2, 0},
    {0, 3, 0, 0, 2, 0, 1},
    {0, 0, 0, 1, 0, 1, 0}
  };
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
      Graph[i][j] = w[i][j];
  u = 0;
  Dijkstra(Graph, n, u);
  return 0;
}
OUTPUT
KEY TAKEAWAYS
● Dijkstra's algorithm finds the shortest path from a source node to the other nodes of a
weighted graph with non-negative edge weights.
● Unvisited nodes start with a tentative distance of infinity; at each step, the unvisited node
with the minimum tentative distance is visited and the distances of its neighbours are updated.
● The algorithm is complete when all the nodes in the graph have been marked as visited.
LINEAR SEARCH
Linear search, also known as sequential search, is an algorithm used to find a specific element
within a list. It involves starting at one end of the list and sequentially examining each element
until the desired element is found. If the element is not found, the search continues until the
end of the list is reached.
For example, let's consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and the key = 30.
1. Begin with the first element, arr[0] = 10. It does not match the key, so move to the next
element.
2. Move to the second element, arr[1] = 50. Again, it doesn't match the key, so continue to
the next element.
3. Move to the third element, arr[2] = 30. It matches the key, so the search is successful.
Return the index 2.
In this example, the Linear Search Algorithm successfully finds the key 30 at index 2 of the
array.
Step 1: Start.
Step 2: Declare the array, its size, and the search data (key).
Step 3: Traverse the entire array until the search data is found.
Step 4: If the search data is found, return its index.
- If the end of the array is reached without finding the search data, return -1.
Step 5: Stop.
The algorithm starts by declaring the array and the value to be searched for, represented by the
variable 'x'. It then iterates through each element of the array, comparing it with 'x'. If a match
is found, the algorithm returns the location (index) of the element. If no match is found after
traversing the entire array, it returns -1 to indicate that the search data is not present in the
array. Finally, the result is printed, and the algorithm terminates.
The time complexity of the linear search algorithm can be analyzed as follows:
Best Case: The best-case scenario occurs when the element being searched is present at the
first index of the list. In this case, the search operation can be completed in constant time,
denoted as O(1). This is because only one comparison is needed to find the element.
Worst Case: The worst-case scenario happens when the element being searched is present at
the last index of the list, or it is not present in the list at all. In this case, the algorithm needs to
compare the search element with each element in the list until the end is reached or a match is
found. As a result, the time complexity in the worst case is O(N), where N is the size of the list.
This means that the time required to perform the search increases linearly with the size of the
list.
Average Case: On average, when considering all possible cases, the linear search algorithm will
need to examine half of the list elements before finding the desired element or concluding that
it is not present. Therefore, the average case time complexity is O(N), where N is the size of the
list.
EXAMPLE
#include<stdio.h>
int main()
{
    int a[20], i, x, n;
    printf("How many elements: ");
    scanf("%d", &n);
    printf("Enter array elements: ");
    for (i = 0; i < n; ++i)
        scanf("%d", &a[i]);
    printf("Enter element to search: ");
    scanf("%d", &x);
    /* Compare each element with x until a match is found */
    for (i = 0; i < n; ++i)
        if (a[i] == x)
            break;
    if (i < n)
        printf("Element found at index %d\n", i);
    else
        printf("Element not found\n");
    return 0;
}
Output :
How many elements: 5
KEY TAKEAWAYS
● Linear search is a simple searching algorithm that sequentially checks each element in a
list until the target element is found or the end of the list is reached.
● It is applicable to both sorted and unsorted lists, but it is more commonly used for
unsorted lists.
● Linear search starts from the first element of the list and compares it with the target
element. If a match is found, the search is successful.
● If the target element is not found, linear search continues checking each subsequent
element in the list until the end is reached or the target element is found.
● Linear search has a time complexity of O(n), where n is the number of elements in the
list. In the worst-case scenario, where the target element is at the end of the list or not
present, linear search needs to traverse the entire list.
SEARCHING
SUB LESSON 10.2
BINARY SEARCH
Binary Search is an efficient searching algorithm used for finding a target element in a sorted
array. The algorithm works by repeatedly dividing the search interval in half, eliminating half of
the remaining elements each time, until the target element is found or it is determined that the
element does not exist in the array.
1. Binary search involves dividing the search space into two halves by finding the middle
index, known as "mid."
2. The middle element of the search space is compared to the target key.
3. If the key is found at the middle element, the search process is terminated successfully.
4. If the key is not found at the middle element, the next search space is determined by
choosing the appropriate half.
5. If the key is smaller than the middle element, the left side of the search space is selected
for the next iteration.
6. If the key is larger than the middle element, the right side of the search space is selected
for the next iteration.
7. This process of dividing the search space and selecting the appropriate half is repeated
until the key is found or the total search space is exhausted.
8. Binary search has a time complexity of O(log n), making it an efficient algorithm for
searching in large datasets.
EXAMPLE:
Let's consider the given array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target key = 23.
First Step: Calculate the mid index by dividing the search space in half:
● Start index: 0
● End index: 9
● Mid index: (0 + 9) / 2 = 4
Since the key is greater than the current mid-element, the search space moves to the right side
of the array.
Second Step: The new search space runs from index 5 to index 9. The new mid index is
(5 + 9) / 2 = 7, and arr[7] = 56. Since the key 23 is smaller than 56, the search space moves to
the left half.
Third Step: The search space now runs from index 5 to index 6. The mid index is (5 + 6) / 2 = 5,
and arr[5] = 23, which matches the key, so the search terminates successfully at index 5.
The binary search algorithm can be implemented in two ways:
1. Iterative
2. Recursive
1. Iterative Method: In the iterative approach, the binary search algorithm is implemented
using a loop to repeatedly divide the search space in half. Here are the steps involved:
● Initialize the low and high pointers to the start and end of the array respectively.
● While the low pointer is less than or equal to the high pointer:
● Calculate the mid index as (low + high) / 2.
● Compare the element at the mid index with the target value:
● If they are equal, return the mid index as the position of the target
element.
● If the target value is less than the mid element, update the high pointer
to mid - 1.
● If the target value is greater than the mid element, update the low
pointer to mid + 1.
● If the loop terminates without finding the target element, return a value indicating that
the element was not found.
2. Recursive Method: In the recursive approach, the binary search algorithm is
implemented using a recursive function that divides the search space in half. Here are
the steps involved:
● Define a recursive function that takes the array, target value, low index, and high index
as parameters.
● If the low index is greater than the high index, return a value indicating that the element
was not found.
● Calculate the mid index as (low + high) / 2.
● Compare the element at the mid index with the target value:
● If they are equal, return the mid index as the position of the target element.
● If the target value is less than the mid element, make a recursive call to search in
the left half of the array.
● If the target value is greater than the mid element, make a recursive call to
search in the right half of the array.
● The recursive calls continue until the target element is found or the search space is
exhausted.
Both the iterative and recursive methods provide the same result, but they differ in their
implementation approach. The choice between them depends on factors such as programming
language preferences, code readability, and the specific requirements of the problem at hand.
#include <stdio.h>
int main()
{
    int c, first, last, middle, n, search, array[100];
    printf("Enter number of elements: ");
    scanf("%d", &n);
    printf("Enter %d integers:\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    printf("Enter the value to find:\n");
    scanf("%d", &search);
    first = 0;
    last = n - 1;
    while (first <= last) {
        middle = (first + last) / 2;
        if (array[middle] < search)
            first = middle + 1;
        else if (array[middle] == search) {
            printf("%d is present at index %d.\n", search, middle);
            break;
        }
        else
            last = middle - 1;
    }
    if (first > last)
        printf("%d is not present in the array.\n", search);
    return 0;
}
Output:
Enter number of elements: 5
Enter 5 integers:
12
14
18
25
50
Enter the value to find:
14
14 is present at index 1.
KEY TAKEAWAYS
● Binary Search is an efficient searching algorithm used for finding a target element in a
sorted array.
● Binary search involves dividing the search space into two halves by finding the middle
index, known as "mid."
● The middle element of the search space is compared to the target key.
● If the key is found at the middle element, the search process is terminated successfully.
● If the key is not found at the middle element, the next search space is determined by
choosing the appropriate half.
● If the key is smaller than the middle element, the left side of the search space is selected
for the next iteration.
● If the key is larger than the middle element, the right side of the search space is selected
for the next iteration.
SORTING
SUB LESSON 11.1
BUBBLE SORT
The term "bubble sort" is used to describe this sorting algorithm because the way array
elements move resembles the movement of air bubbles in the water. In bubble sort, the array
elements move towards the end in each iteration, comparable to how bubbles rise to the
surface in water.
Its best-case time complexity is an efficient O(N), while its average and worst-case time
complexity is quite high at O(N^2), where N is the number of items. Because of this quadratic
complexity, the technique is not suitable for large datasets.
Bubble sort is commonly used in situations where complexity is not a major concern, and
simplicity and a shorter code implementation are preferred.
Bubble sort is an in-place algorithm since it performs the swapping of adjacent pairs without
requiring the use of any significant additional data structure.
To understand the operation of the bubble sort algorithm, let's consider an unsorted array.
For the purpose of illustration, we will use a small array, since we know that the time
complexity of bubble sort is O(N^2).
1. First Iteration
1. The first step is to compare the element at the first index with the element at the second
index of the array.
2. If the first element is greater than the second element, they are swapped.
3. Compare each subsequent pair of elements and swap them if they are not in order.
4. This process continues iteratively until the algorithm reaches the last element of the array.
2. Remaining Iteration
The same process continues for the remaining iterations in the bubble sort algorithm.
After each iteration in the algorithm, the largest element among the unsorted elements is
positioned at the end of the array.
During each iteration of the bubble sort algorithm, the comparison process occurs up to the last
unsorted element in the array.
The array is considered sorted when all the unsorted elements have been correctly placed in
their respective positions.
EXAMPLE:
#include <stdio.h>
int main()
{
    int array[100], n, c, d, swap;
    printf("Enter number of elements\n");
    scanf("%d", &n);
    printf("Enter %d integers\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);
    for (c = 0; c < n - 1; c++)
        for (d = 0; d < n - c - 1; d++)
            if (array[d] > array[d+1]) { /* For decreasing order use '<' instead of '>' */
                swap = array[d];
                array[d] = array[d+1];
                array[d+1] = swap;
            }
    printf("Sorted list in ascending order:\n");
    for (c = 0; c < n; c++)
        printf("%d\n", array[c]);
    return 0;
}
Output:
Enter number of elements
3
Enter 3 integers
15
32
20
Sorted list in ascending order:
15
20
32
Best Case: If the array is already sorted, no swaps are needed, since the elements are already in
the desired order. In this case bubble sort runs in O(N) time.
Average Case: This situation arises when the elements of the array are in a disordered or
jumbled state, arranged in neither ascending nor descending order.
Worst Case: The worst-case scenario for sorting in ascending order using bubble sort arises
when the array is initially arranged in descending order.
To prevent the O(N^2) time complexity of bubble sort, it is advisable to check if the array is
already sorted before executing the algorithm. By verifying the sorted status beforehand,
unnecessary iterations and comparisons can be avoided, resulting in improved efficiency.
The space complexity of bubble sort is O(1) because it only requires a constant amount of
additional space for swapping elements using a temporary variable.
ADVANTAGES
Bubble sort is straightforward to understand and execute. Bubble sort does not need any extra
memory space.
Bubble sort is a stable sorting algorithm, which implies that elements with the same key value
maintain their relative order in the sorted output.
DISADVANTAGES
Bubble sort exhibits a time complexity of O(N^2), rendering it inefficient for handling large data
sets due to its relatively slow performance.
KEY TAKEAWAYS
● Bubble sort repeatedly compares adjacent elements and swaps them if they are out of order;
after each pass, the largest unsorted element settles at the end of the array.
● Its best-case time complexity is O(N), its average and worst-case time complexity is O(N^2),
and its space complexity is O(1).
● Bubble sort is a stable, in-place algorithm that is simple to implement but too slow for large
datasets.
SORTING
SUB LESSON 11.2
SELECTION SORT
Selection sort is a sorting algorithm that iteratively selects the smallest element from an
unsorted list in each iteration and places it at the beginning of the unsorted portion of the list.
Selection sort is a simple and easy-to-implement sorting algorithm.
The standard(default) implementation of the Selection Sort Algorithm is not inherently stable.
The Selection Sort Algorithm is an in-place sorting algorithm, meaning it does not require
additional space.
This algorithm is not well-suited for large data sets due to its average and worst-case
complexities, which are both O(n^2), where n represents the number of items.
The Selection sort algorithm can be implemented in C using for and while loops, as well as by
utilizing functions.
1. Set the first element of the unsorted portion of the list as the minimum.
2. Compare the minimum with each remaining unsorted element; whenever a smaller element is
found, it becomes the new minimum. When the end of the list is reached, swap the minimum with
the first unsorted element.
3. After each iteration, the minimum element is positioned at the beginning of the unsorted
portion of the list.
4. During each iteration, the indexing starts from the first unsorted element. Steps 1 to 3 are
repeated until all the elements are correctly positioned.
SELECTION SORT CODE
// Selection sort in C
#include <stdio.h>

// Find the minimum of the unsorted part and swap it to the front
void selectionSort(int array[], int size) {
  int step, i, min_idx, temp;
  for (step = 0; step < size - 1; step++) {
    min_idx = step;
    for (i = step + 1; i < size; i++)
      if (array[i] < array[min_idx])
        min_idx = i;
    temp = array[min_idx];
    array[min_idx] = array[step];
    array[step] = temp;
  }
}

// Function to print an array
void printArray(int array[], int size) {
  int i;
  for (i = 0; i < size; i++)
    printf("%d ", array[i]);
  printf("\n");
}

// driver code
int main() {
  int data[] = {20, 12, 10, 15, 2};
  int size = sizeof(data) / sizeof(data[0]);
  selectionSort(data, size);
  printf("Sorted array in Ascending Order:\n");
  printArray(data, size);
  return 0;
}
OUTPUT :
Sorted array in Ascending Order:
2 10 12 15 20
The space complexity is O(1) since only an additional variable, 'temp', is utilized.
KEY TAKEAWAYS
● Selection sort is a sorting algorithm that iteratively selects the smallest element from an
unsorted list in each iteration and places it at the beginning of the unsorted portion of
the list.
● This algorithm is not well-suited for large data sets due to its average and worst-case
complexities, which are both O(n^2), where n represents the number of items.
SORTING
SUB LESSON 11.3
INSERTION SORT
The insertion sort algorithm iterates through the unsorted elements and places each element at
its appropriate position in the sorted manner. The insertion sort algorithm operates in a
manner similar to sorting cards in a hand during a card game.
It begins by assuming that the first card is already sorted. Then, we pick an unsorted card and
compare it with the first card. If the unsorted card is greater, it is positioned on the right side;
otherwise, it is placed on the left side. This process is repeated for each unsorted card, ensuring
they are correctly placed in their respective positions.
Insertion sort utilizes a similar approach. The concept behind the insertion sort algorithm is to
pick each element and insert it at its correct position within the already-sorted portion of the array.
This algorithm is considered one of the simplest sorting algorithms due to its straightforward
implementation.
Generally, insertion sort is considered efficient for sorting small amounts of data.
Insertion sort exhibits adaptability, making it suitable for data sets that are partially sorted.
The Insertion Sort algorithm adopts an incremental approach.
Insertion sort is an in-place algorithm because it does not require extra space to manipulate the
input.
Applications of Insertion sort are :
A. It is commonly used when dealing with a small number of elements.
B. It can also be advantageous when the input array is nearly sorted, with only a
few elements out of place within a larger array.
1. The first element of the array is assumed to be already sorted. Take the second element and
compare it with the first element; if the second element is smaller, place it before the first
element.
2. At this point, the first two elements are now in sorted order.
Next, consider the third element and compare it with the elements to its left. Place the third
element just behind the element that is smaller than it. If there are no elements smaller than it,
then place it at the beginning of the array.
3. Likewise, continue placing each unsorted element in its correct position.
INSERTION SORT CODE
// Insertion sort in C
#include <stdio.h>

// Function to print an array
void printArray(int array[], int size) {
  int i;
  for (i = 0; i < size; i++)
    printf("%d ", array[i]);
  printf("\n");
}

void insertionSort(int array[], int size) {
  int step, key, j;
  for (step = 1; step < size; step++) {
    key = array[step];
    j = step - 1;
    // Compare key with each element on the left of it until an element smaller than
    // it is found.
    // For descending order, change key<array[j] to key>array[j].
    while (j >= 0 && key < array[j]) {
      array[j + 1] = array[j];
      --j;
    }
    array[j + 1] = key;
  }
}

// Driver code
int main() {
  int data[] = {9, 5, 1, 4, 3};
  int size = sizeof(data) / sizeof(data[0]);
  insertionSort(data, size);
  printf("Sorted array in ascending order:\n");
  printArray(data, size);
  return 0;
}
OUTPUT :
Sorted array in ascending order:
1 3 4 5 9
INSERTION SORT TIME COMPLEXITY
The best-case time complexity of insertion sort is O(n), which occurs when the array is already
sorted; the average and worst-case time complexity is O(n^2). The space complexity is O(1),
since only a single extra variable, key, is used, and insertion sort is a stable algorithm.
1. Insertion sort is not as efficient when dealing with larger data sets.
2. The worst-case time complexity of the insertion sort algorithm is O(n^2)
3. Insertion sort is less efficient than heap sort, quick sort, merge sort, etc.
KEY TAKEAWAYS
● The insertion sort algorithm iterates through the unsorted elements and places each
element at its appropriate position in the sorted manner.
● The insertion sort algorithm operates in a manner similar to sorting cards in a hand
during a card game.
SORTING
SUB LESSON 11.4
MERGE SORT
Merge Sort is a widely used sorting algorithm that follows the Divide and Conquer principle.
In this approach, a problem is divided into several smaller sub-problems, which are solved
independently. Eventually, the solutions to these sub-problems are combined to obtain the
final solution. To carry out the merging process, we need to define the merge() function.
It is widely recognized as a highly efficient algorithm because of its O(n log n) time
complexity.
It serves as an effective algorithm for gaining proficiency in recursion and problem-solving
techniques by employing the divide-and-conquer approach.
Merge sort does not sort the array in place, meaning it requires additional memory space
proportional to the size of the input array.
Merge sort exhibits consistent performance regardless of the initial order of the elements, as it
always performs the same number of comparisons and moves for a given input size.
Merge sort performs well on linked lists due to its ability to easily split and merge linked list
nodes without excessive memory operations.
In the merge sort algorithm, the initial step is to divide the given array into two equal halves.
This process of dividing the list into equal parts continues until further division is not possible.
Consider the array {12, 31, 25, 8, 32, 17, 40, 42}. Since it consists of eight elements, it is
divided into two arrays of equal size, each containing four elements.
Next, further divide these two arrays into halves. Since they each have a size of 4, divide them
into new arrays of size 2.
Now, further divide these arrays until reaching the smallest indivisible elements.
Now, reassemble them in the same manner as they were originally divided.
When combining, begin by comparing the elements of each array, and then merge them into a
new array in a sorted order.
Next, compare the values 12 and 31; since they are already in sorted order, leave them as they
are. Then, compare 25 and 8; in the list of two values, place 8 first, followed by 25. Proceed to
compare 32 and 17, sort them, and place 17 first, followed by 32. Lastly, compare 40 and 42,
and arrange them in sequence.
In the subsequent iteration of combining, compare the arrays containing two data values and
merge them into a new array with the sorted order of the elements.
Now, perform the final merge of the arrays. After the completion of this merging process, the
resulting array will appear as follows -
#include <stdio.h>

/* Merge the sorted subarrays a[beg..mid] and a[mid+1..end] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;
    int LeftArray[n1], RightArray[n2];  /* temporary arrays (C99 VLAs) */

    for (i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0;
    j = 0;
    k = beg;

    /* Copy back the smaller of the two front elements at each step */
    while (i < n1 && j < n2)
    {
        if (LeftArray[i] <= RightArray[j])
            a[k++] = LeftArray[i++];
        else
            a[k++] = RightArray[j++];
    }
    /* Copy any remaining elements */
    while (i < n1)
        a[k++] = LeftArray[i++];
    while (j < n2)
        a[k++] = RightArray[j++];
}

/* Recursively split the array, then merge the sorted halves */
void mergeSort(int a[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

void printArray(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArray(a, n);
    mergeSort(a, 0, n - 1);
    printf("After sorting array elements are - \n");
    printArray(a, n);
    return 0;
}
OUTPUT :
Before sorting array elements are -
12 31 25 8 32 17 40 42
After sorting array elements are -
8 12 17 25 31 32 40 42
The space complexity of merge sort is O(n) because it requires auxiliary arrays to hold the
merged sub-arrays during the sorting process.
Merge sort is classified as a stable sorting algorithm since it preserves the original order of
equal elements in the input array.
ADVANTAGES
1. The worst-case time complexity of merge sort is O(N logN), which indicates its efficient
performance even when dealing with large datasets.
2. Merge sort possesses inherent parallelizability, making it well-suited for leveraging
multiple processors or threads to improve efficiency.
DISADVANTAGES
1. During the sorting process, merge sort necessitates extra memory to hold the merged
sub-arrays.
2. Compared to certain sorting algorithms like insertion sort, Merge sort exhibits a higher
time complexity for small datasets. Consequently, its performance may be slower when
dealing with very small datasets.
KEY TAKEAWAYS
● Merge Sort is a widely used sorting algorithm that follows the Divide and Conquer
principle.
● In this approach, a problem is divided into several smaller sub-problems, which are
solved independently.
● Merge sort does not sort the array in place, meaning it requires additional memory
space proportional to the size of the input array.
…
SORTING
SUB LESSON 11.5
QUICK SORT
Quicksort is a sorting algorithm that uses the divide and conquer approach, where
1. The Quicksort algorithm divides an array into subarrays by selecting a pivot element,
which is chosen from the array itself.
2. When partitioning the array, the pivot element is positioned in a manner such that
elements smaller than the pivot are placed on the left side, while elements greater than
the pivot are placed on the right side of the pivot.
3. The same approach is applied to divide the left and right subarrays. This process
continues recursively until each subarray contains only one element.
4. At this stage, the individual elements within each subarray are already sorted. Finally,
the sorted subarrays are combined to form a fully sorted array.
It is recognized as a fast and highly efficient sorting algorithm.
The Quicksort algorithm is commonly utilized when:
● the programming language supports recursion,
● time complexity is a critical factor, and
● space complexity is an important consideration.
1. Select the Pivot Element
There are different ways of choosing the pivot element; a common choice is to pick the
rightmost element of the array as the pivot.
2. Rearrange the Array
The array is rearranged so that elements smaller than the pivot end up on its left and greater
elements on its right. Starting from the first index, each element is compared with the pivot,
and a pointer is set at the first element found to be greater than the pivot.
Now, the pivot is compared with the remaining elements. If a smaller element than the
pivot is encountered, it is swapped with the previously identified greater element.
The process is repeated again to identify the next greater element as the second
pointer. If another smaller element is found, it is swapped with the current smaller
element.
3. Divide Subarrays
The process of selecting pivot elements is repeated separately for the left and right
subarrays, and Step 2 is repeated.
The subarrays are recursively divided until each subarray consists of a single element. At this
point, the array is already sorted.
QUICK SORT CODE
// Quick sort in C
#include <stdio.h>

// Swap two elements through pointers
void swap(int *a, int *b) {
  int t = *a;
  *a = *b;
  *b = t;
}

// Partition around the last element as the pivot; returns the pivot's final index
int partition(int array[], int low, int high) {
  int pivot = array[high];
  int i = low - 1;
  for (int j = low; j < high; j++) {
    if (array[j] <= pivot) {            // move elements smaller than the pivot left
      i++;
      swap(&array[i], &array[j]);
    }
  }
  swap(&array[i + 1], &array[high]);    // place the pivot in its sorted position
  return i + 1;
}

void quickSort(int array[], int low, int high) {
  if (low < high) {
    int pi = partition(array, low, high);
    quickSort(array, low, pi - 1);      // sort the left subarray
    quickSort(array, pi + 1, high);     // sort the right subarray
  }
}

// Function to print an array
void printArray(int array[], int size) {
  for (int i = 0; i < size; i++)
    printf("%d ", array[i]);
  printf("\n");
}

// main function
int main() {
  int data[] = {8, 7, 2, 1, 0, 9, 6};
  int n = sizeof(data) / sizeof(data[0]);
  printf("Unsorted Array\n");
  printArray(data, n);
  quickSort(data, 0, n - 1);
  printf("Sorted array in ascending order:\n");
  printArray(data, n);
  return 0;
}
OUTPUT :
Unsorted Array
8 7 2 1 0 9 6
Sorted array in ascending order:
0 1 2 6 7 8 9
KEY TAKEAWAYS
● Quicksort is a divide-and-conquer sorting algorithm that partitions the array around a pivot
element, placing smaller elements to its left and greater elements to its right.
● The left and right subarrays are partitioned recursively until each subarray contains a single
element, at which point the combined array is fully sorted.