
DATA STRUCTURE AND ALGORITHMS
Centre for Distance and Online Education
Online MCA Program
Data Structures and Algorithms

Semester: 1

Author

Ms. Ridhi Mehta, Assistant Professor, Online Degree-CDOE,


Parul University

Credits
Centre for Distance and Online Education,
Parul University,

Post Limda, Waghodia,

Vadodara, Gujarat, India

391760.

Website: https://paruluniversity.ac.in/

Disclaimer

This content is protected by CDOE, Parul University. It is sold under the stipulation that it cannot be
lent, resold, hired out, or otherwise circulated without obtaining prior written consent from the
publisher. The content should remain in the same binding or cover as it was initially published, and this
requirement should also extend to any subsequent purchaser. Furthermore, it is important to note that,
in compliance with the copyright protections outlined above, no part of this publication may be
reproduced, stored in a retrieval system, or transmitted through any means (including electronic,
mechanical, photocopying, recording, or otherwise) without obtaining the prior
written permission from both the copyright owner and the publisher of this
content.

Note to Students
These course notes are intended for the exclusive use of students enrolled in
Online MCA. They are not to be shared or distributed without explicit permission
from the University. Any unauthorized sharing or distribution of these materials
may result in academic and legal consequences.
Table of Contents

SUB LESSON 1.1
INTRODUCTION TO DATA STRUCTURE, WHAT IS ALGORITHM?
SUB LESSON 1.2
BASIC TERMINOLOGIES
SUB LESSON 1.3
CLASSIFICATION OF DATA STRUCTURE
SUB LESSON 1.4
TOWER OF HANOI PROBLEM
SUB LESSON 2.1
INTRODUCTION TO ARRAY
SUB LESSON 2.2
DECLARATION OF ARRAY
SUB LESSON 2.3
REPRESENTATION OF ARRAY
SUB LESSON 2.4
HOW TO ACCESS ELEMENTS FROM ARRAY
SUB LESSON 2.5
OPERATIONS ON ARRAY
SUB LESSON 3.1
INTRODUCTION TO STACK
SUB LESSON 3.2
WORKING WITH STACK
SUB LESSON 3.3
STACK IMPLEMENTATION
SUB LESSON 3.4
OPERATIONS ON STACK & RECURSION
SUB LESSON 4.1
BASICS OF QUEUE
SUB LESSON 4.2
WORKING WITH QUEUE
SUB LESSON 4.3
OPERATIONS OF QUEUE
SUB LESSON 5.1
TYPES OF QUEUE
SUB LESSON 5.2
SIMPLE QUEUE
SUB LESSON 5.3
CIRCULAR QUEUE
SUB LESSON 5.4
PRIORITY QUEUE
SUB LESSON 5.5
DOUBLE ENDED QUEUE
SUB LESSON 6.1
BASICS OF LINKED LIST
SUB LESSON 6.2
REPRESENTATION OF LINKED LIST
SUB LESSON 7.1
TYPES OF LINKED LIST
SUB LESSON 7.2
SINGLY LINKED LIST
SUB LESSON 7.3
DOUBLY LINKED LIST
SUB LESSON 7.4
CIRCULAR LINKED LIST
SUB LESSON 8.1
INTRODUCTION TO TREE
SUB LESSON 8.2
BASIC TERMINOLOGIES OF TREE
SUB LESSON 8.3
TYPES OF TREE
SUB LESSON 8.4
RED BLACK TREE
SUB LESSON 9.1
INTRODUCTION TO GRAPH DATA STRUCTURE, GRAPH TERMINOLOGY
SUB LESSON 9.2
REPRESENTATION OF GRAPH
SUB LESSON 9.3
OPERATIONS ON GRAPH
SUB LESSON 9.4
DEPTH FIRST SEARCH
SUB LESSON 9.5
BREADTH FIRST SEARCH
SUB LESSON 9.6
PRIM’S ALGORITHM
SUB LESSON 9.7
KRUSKAL’S ALGORITHM
SUB LESSON 9.8
DIJKSTRA’S ALGORITHM
SUB LESSON 10.1
LINEAR SEARCH
SUB LESSON 10.2
BINARY SEARCH
SUB LESSON 11.1
BUBBLE SORT
SUB LESSON 11.2
SELECTION SORT
SUB LESSON 11.3
INSERTION SORT
SUB LESSON 11.4
MERGE SORT
SUB LESSON 11.5
QUICK SORT
SUB LESSON 1.1

INTRODUCTION TO DATA STRUCTURE, WHAT IS ALGORITHM?

INTRODUCTION TO DATA STRUCTURE

Computer programs consist of sets of instructions designed to carry out specific tasks. In order
to accomplish these tasks, computers require the ability to store and retrieve data, as well as
perform calculations on that data. To facilitate efficient data management, programmers utilize
data structures, which are named entities employed for storing and organizing data.

Data structures can be described as a collection of data elements that offer an effective means
of storing and organizing data in a computer system, enabling efficient access and utilization.

Organizing information on a computer in a manner that facilitates access and modification is a
key aspect of data structure design. Choosing an appropriate data structure for a project
depends on its specific requirements and needs.

Diverse types of data structures are utilized in various ways by nearly every enterprise
application.

WHAT IS AN ALGORITHM?

In the context of computer programming, an algorithm can be defined as a collection of clearly
defined instructions designed to solve a specific problem. It operates by taking a predetermined
set of input(s) and generating the desired output.

An algorithm is a systematic procedure that outlines a sequence of instructions to be executed
in a specific order, leading to the desired output.

It is a systematic approach that involves a series of sequential steps to solve a given problem.
From a data structure perspective, there are several important categories of algorithms,
including:
● Search: Algorithms designed to locate an item within a collection of records.
● Sort: Algorithms used to arrange objects in a specific order.
● Insert: Algorithms for adding new objects into a record or data structure.
● Update: Algorithms that modify existing objects within a record or data structure.
● Delete: Algorithms that remove a specific object from a structure or collection.

Characteristics of an Algorithm
Not all procedures can be classified as algorithms; they must adhere to the following principles:

● Unambiguous: An algorithm must be clear and unambiguous, with each step and its
inputs/outputs precisely defined to achieve the desired outcome.
● Input: An algorithm should have zero or more well-defined inputs.
● Output: An algorithm must produce one or more clearly defined outputs that align with
the desired result.
● Finiteness: Algorithms must conclude after a finite number of steps.
● Feasibility: They should be feasible, considering the available resources.
● Independence: An algorithm should consist of step-by-step instructions that are
independent of any specific programming code.

Qualities of a Good Algorithm

● Input and output should be precisely defined in an algorithm.
● Each step within the algorithm must be clear and unambiguous.
● Algorithms should strive to be optimal among various approaches for problem-solving.
● An algorithm should not be specific to a particular programming code; instead, it should
be written in a manner that allows it to be implemented in different programming
languages.

Why Learn Data Structure and Algorithms?


As applications become more complex and data-intensive, modern-day programs often
encounter three common challenges:
● Data Search: Efficiently searching and retrieving specific data from large datasets.
● Processor Speed: Ensuring that the program's processing speed is capable of handling
the workload efficiently.
● Handling Multiple Requests: Managing multiple simultaneous requests and ensuring
timely responses.

HOW TO WRITE AN ALGORITHM?

Algorithms are not designed to be specific to any programming code. They are developed in a
step-by-step manner, independent of any specific programming language.

Example

Problem 1 − Design an algorithm for addition of two numbers and display the result.

Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values num1 and num2.
Step 4: Add num1 and num2 and assign the result to sum.
sum←num1+num2
Step 5: Display sum
Step 6: Stop

Problem 2 − Find the largest number among three numbers

Step 1: Start
Step 2: Declare variables a,b and c.
Step 3: Read variables a,b and c.
Step 4: If a > b
If a > c
Display a is the largest number.
Else
Display c is the largest number.
Else
If b > c
Display b is the largest number.
Else
Display c is the largest number.
Step 5: Stop

KEY TAKEAWAYS

● Data structures can be described as a collection of data elements that offer an effective
means of storing and organizing data in a computer system, enabling efficient access
and utilization.
● An algorithm is a systematic procedure that outlines a sequence of instructions to be
executed in a specific order, leading to the desired output.
● An algorithm is a systematic approach that involves a series of sequential steps to solve
a given problem.
BASICS OF DATA STRUCTURE
SUB LESSON 1.2

BASIC TERMINOLOGIES

BASIC TERMINOLOGIES

Data: Data refers to individual values or collections of values.
Data items: A data item represents a single unit of value within a dataset.
Algorithm: An algorithm is a well-defined, finite list of step-by-step instructions designed to
solve a specific problem.
Data Structure: A data structure represents the logical relationships that exist among individual
data elements. It provides a means of organizing data items in a manner that considers both
the stored elements and their interrelationships. The term 'data structure' pertains to the
organization and storage method employed for data.
Data Types: Data types are designated as the formats in which variables can store data to
perform specific operations. They are utilized to define variables before using them in a
program. Data types determine the size of variables, constants, and arrays.

TIME & SPACE COMPLEXITY

When analyzing an algorithm, it is important to determine its complexity in terms of resources
such as time and space. However, the calculated complexity does not provide the exact amount
of resources required. Instead, the complexity of an algorithm is expressed in a general
mathematical form that captures its fundamental nature, providing insights into its
performance characteristics.

Time Complexity - Time complexity is a form of computational complexity that characterizes
the time needed to execute an algorithm. It measures the amount of time required for each
statement in the algorithm to complete. Time complexity heavily relies on the size of the data
being processed and plays a crucial role in assessing the efficiency and performance of an
algorithm.

In simpler terms, it quantifies the amount of computer time needed for the algorithm or
program to reach its end.
Typically, the time required by an algorithm can be categorized into three types:

● Worst case: It represents the input that causes the algorithm to take the maximum
amount of time to execute.
● Average case: It refers to the typical or expected time taken by the algorithm for a
random or average input.
● Best case: It represents the input for which the algorithm takes the minimum amount of
time to execute.

Space Complexity - When an algorithm is executed on a computer, it requires a certain amount
of memory space. The space complexity of a program is a measure of the amount of memory it
consumes during execution. This includes the memory needed to store input data and
temporary values while the program is running. Space complexity can be categorized into
auxiliary space, which represents additional space required by the algorithm, and input space,
which is the space required to store the input data.

In estimating the memory requirement, we need to consider two components:

1. Fixed part: This part is independent of the input size and includes memory allocation for
instructions (code), constants, variables, and other static components.
2. Variable part: This part is dependent on the input size and includes memory allocation
for dynamic components such as recursion stack, referenced variables, and other data
structures that vary based on the input.

KEY TAKEAWAYS

● Data refers to individual values or collections of values.
● Time complexity is a form of computational complexity that characterizes the time
needed to execute an algorithm.
● When an algorithm is executed on a computer, it requires a certain amount of memory
space. The space complexity of a program is a measure of the amount of memory it
consumes during execution.
BASICS OF DATA STRUCTURE
SUB LESSON 1.3

CLASSIFICATION OF DATA STRUCTURE

DATA STRUCTURE

A data structure refers to a memory component utilized for storing and arranging data,
providing a means to efficiently access and modify information on a computer. When selecting
a suitable data structure for your project, it is crucial to consider your specific requirements. For
instance, an array may be preferred when there is a need to allocate memory for storing data in
a particular sequence.

Data structures serve as a fundamental aspect of any programmable system that handles
storage-related challenges. Storage issues are inherent in most programs, particularly when
working with data, so such systems require a solid understanding of data structures, which
provide a foundational framework for organizing and storing information.

Every data structure defines:

● The method of connecting a group of elements and their in-memory representation.
● The allowed set of operations and algorithms over the grouped elements.

Data structures were initially developed to organize, manage, and manipulate records within
programming languages, simplifying and streamlining the process of accessing and processing
information. The concept of a data structure itself is independent of any specific programming
language.

Data structures offer an effective approach to store and access large volumes of records.
Various fields of programming, such as AI, databases, and others, investigate the challenge of
efficient data storage.
CLASSIFICATION OF DATA STRUCTURES

Based on how they arrange data in memory, data structures are divided into two categories:

1. Linear Data Structure
2. Non-Linear Data Structure

(Figure 1: Classification of data structures)

Each type of data structure offers distinct capabilities. Understanding the differences between
primary data structure types allows for the selection of the most appropriate solution for a
given problem.

1. Linear Data Structure


Linear data structures are constructed by arranging data elements in continuous
memory locations. In linear data structures, the data is stored sequentially, without
involving mathematical operations. These structures are fundamental as they store
elements one after another in a sequential manner.
Based on memory allocation, there are two sub-categories within data structures:
i. Static Data Structure: This type has a fixed size; although the elements
stored within it may change, the memory allocation remains constant.
Example - Array
ii. Dynamic Data Structure: A dynamic data structure is capable of adjusting its size
during runtime. It is designed to facilitate both easy modification of the stored
values and resizing of the structure itself while the program is running.
Example - Stack, Queue, Linked List
2. Non-Linear Data Structure
Non-Linear data structures store records in a hierarchical form, unlike linear data
structures. As a result, the data can be organized into multiple levels, making it
challenging to traverse through in comparison to linear data structures.
Example - Tree, Graph

Below is a brief overview of the basic types of data structures.

Array - An array is a linear data structure that stores a collection of items in contiguous memory
locations. It enables the storage of multiple items of the same type in a single place. Arrays
facilitate efficient processing of large amounts of data within a relatively short time. The
indexing of elements in an array starts from 0. Various operations can be performed on an
array, including searching, sorting, inserting, traversing, reversing, and deleting.

Stack - A stack is a linear data structure that follows a specific order known as LIFO (Last In First
Out). In a stack, data can only be inserted and removed from one end. The process of inserting
data is referred to as the push operation, while removing data is known as the pop operation.

Queue - A queue is a linear data structure that operates based on a specific order called First In
First Out (FIFO), meaning the item that is stored first will be accessed first. Unlike a stack, in a
queue, data items are entered and retrieved from different ends. A common example of a
queue is a line of consumers waiting for a resource, where the consumer who arrived first is
served first.

Linked List - A linked list is a linear data structure where elements are not stored at contiguous
memory locations. Instead, the elements in a linked list are connected using pointers.

KEY TAKEAWAYS

● A data structure refers to a memory component utilized for storing and arranging data,
providing a means to efficiently access and modify information on a computer.
● Linear data structures are constructed by arranging data elements in continuous
memory locations.
● Non-Linear data structures store records in a hierarchical form, unlike linear data
structures.
● An array is a linear data structure that stores a collection of items in contiguous memory
locations.
● A stack is a linear data structure that follows a specific order known as LIFO (Last In First
Out).
● A queue is a linear data structure that operates based on a specific order called First In
First Out (FIFO).
● A linked list is a linear data structure where elements are not stored at contiguous
memory locations.
BASICS OF DATA STRUCTURE
SUB LESSON 1.4

TOWER OF HANOI PROBLEM

INTRODUCTION

The Tower of Hanoi is a popular mathematical puzzle that involves three rods, denoted as A, B,
and C, and a set of N disks. At the beginning of the game, the disks are arranged on rod A in
decreasing order of diameter, with the smallest disk placed on top. The main goal of the puzzle
is to move the entire stack of disks from rod A to another rod (typically rod C), while following a
set of simple rules:

In the Tower of Hanoi puzzle, there are specific rules that must be followed during the
movement of the disks:
● Only one disk can be moved at a time.
● A move involves taking the uppermost disk from one of the stacks and placing it on top
of another stack.
● It is only allowed to move a disk if it is the topmost disk on its respective stack.
● No disk can be placed on top of a smaller disk. In other words, a larger disk cannot be
placed on top of a smaller disk.

The Tower of Hanoi is a mathematical puzzle that involves a set of n disks and three towers. It
can be solved in a minimum of 2^n−1 steps. To illustrate, let's consider an example. If we have a
puzzle with 3 disks, it would take 2^3 - 1 = 7 steps to solve it.

ALGORITHM

In order to develop an algorithm for the Tower of Hanoi problem, it is essential to understand
how to solve the problem for smaller numbers of disks, specifically for 1 or 2 disks. The three
towers involved in the problem are labeled as the source, destination, and auxiliary towers(only
to help move the disks). When there is only one disk present, it can be directly transferred from
the source tower to the destination tower without any complications.

When dealing with 2 disks in the Tower of Hanoi problem, we follow the following steps:
1. Move the smaller (top) disk to the auxiliary (aux) peg.
2. Move the larger (bottom) disk to the destination peg.
3. Finally, move the smaller disk from the auxiliary (aux) peg to the destination peg.

By following these steps, we successfully transfer both disks from the source peg to the
destination peg while utilizing the auxiliary peg.

Let's consider a scenario where we have a stack of three disks. Our objective is to move this
stack from the source tower, let's say tower A, to the destination tower, which we'll label as
tower C.

Before reaching the destination tower C, let's introduce an intermediate tower, which we'll
refer to as tower B. This intermediate tower will play a role in the process of moving the stack
of three disks from the source tower A to the destination tower C.

To complete the task, we can utilize tower B as a helper. Now, let's go through each step of the
process:
1. Move the top disk from tower A to tower C.
2. Move the top disk from tower A to tower B.
3. Move the top disk from tower C to tower B.
4. Move the top disk from tower A to tower C.
5. Move the top disk from tower B to tower A.
6. Move the top disk from tower B to tower C.
7. Move the top disk from tower A to tower C.

By following these steps, we successfully transfer the stack of three disks from tower A to tower
C, utilizing tower B as an intermediate helper.

(Animation: the seven-move solution for three disks on towers A, B, and C)

Tracing these moves on the three towers can help illustrate the process and steps involved in
solving the Tower of Hanoi problem.

The steps to follow in solving the Tower of Hanoi problem are as follows:
Step 1: Move n-1 disks from the source tower to the auxiliary tower.
Step 2: Move the nth disk from the source tower to the destination tower.
Step 3: Move the n-1 disks from the auxiliary tower to the destination tower.

By following these steps recursively, you can successfully solve the Tower of Hanoi problem for
any given number of disks.
KEY TAKEAWAYS

● The Tower of Hanoi is a mathematical puzzle that involves a set of n disks and three
towers.
● In the Tower of Hanoi puzzle, specific rules must be followed: only one disk can be
moved at a time.
● No disk can be placed on top of a smaller disk. In other words, a larger disk cannot be
placed on top of a smaller disk.
● It can be solved in a minimum of 2^n−1 steps.
ARRAY
SUB LESSON 2.1

INTRODUCTION TO ARRAY

INTRODUCTION

An array is a collection of elements or data items of the same type, stored in contiguous
memory locations. In simpler terms, arrays are commonly used in computer programming to
organize and manage data of the same type efficiently. Arrays can be defined in single or
multiple dimensions. They are commonly used when there is a need to store multiple elements
of similar characteristics together in one location.

Arrays play a crucial role in data structures as they assist in resolving various high-level
problems, such as the implementation of the 'longest consecutive subsequence' program, or
performing simple tasks like organizing similar elements in ascending order. The fundamental
idea behind arrays is to gather multiple objects of identical nature.

An array is a linear data structure designed to gather elements of the same data type and store
them in adjacent and contiguous memory locations. The indexing system of arrays begins at 0
and goes up to (n-1), with 'n' representing the size of the array.

PROPERTIES OF ARRAY

● Every element within an array possesses the same data type and occupies the same
fixed amount of memory (for example, 4 bytes each in an int array on many platforms).
● The array elements are stored in contiguous memory locations, with the initial element
residing at the lowest memory address.
● The array facilitates random access to its elements as we can determine the address of
each element by utilizing the base address and the size of the data element.
NEED OF ARRAY

Let's suppose a class consists of ten students, and the class has to publish their results. If you
had declared all ten variables individually, it would be challenging to manipulate and maintain
the data.

If more students were to join, it would become more difficult to declare all the variables and
keep track of them. To overcome this problem, arrays came into the picture.

For regular variables, we have the option to declare them on one line and initialize them on the
next line. For example:
int x;
x = 0;

Alternatively, we can combine the declaration and initialization in a single statement:

int x = 0;

By using arrays, declaration and initialization can be combined in the same way:

int list[4] = {2, 4, 6, 8};
char letters[5] = {'a', 'e', 'i', 'o', 'u'};
double numbers[3] = {3.45, 2.39, 9.1};
int table[3][2] = {{2, 5}, {3, 1}, {4, 9}};

MEMORY ALLOCATION OF AN ARRAY

As previously mentioned, the data elements of an array are stored in contiguous locations
within the main memory. The name of the array serves as the base address, representing the
memory address of the first element. Each element of the array is accessed using appropriate
indexing.

There are different ways to define the indexing of an array:


1. 0 (zero-based indexing): In this approach, the first element of the array is represented
by arr[0].
2. 1 (one-based indexing): In this approach, the first element of the array is denoted by
arr[1].
3. n (n-based indexing): In this approach, the first element of the array can reside at any
arbitrary index number, determined by the specific context or programming language
used.

KEY TAKEAWAYS

● An array is a collection of elements or data items of the same type, stored in contiguous
memory locations.
● Arrays can be defined in single or multiple dimensions.
● An array is a linear data structure designed to gather elements of the same data type
and store them in adjacent and contiguous memory locations.
● The indexing system of arrays begins at 0 and goes up to (n-1), with 'n' representing the
size of the array.
ARRAY
SUB LESSON 2.2

DECLARATION OF ARRAY

INTRODUCTION

In order to utilize an array, we need to declare a variable that acts as a reference to the array.

In C, it is necessary to declare an array before using it, similar to any other variable. To declare
an array, you need to specify its name, the type of its elements, and the size of its dimensions.
When an array is declared in C, the compiler allocates a memory block of the specified size to
accommodate the array's elements.

To create an array, you need to specify the data type (such as int) and provide a name for the
array, followed by square brackets [].

For instance, if you want to create an array of five integers, you would use the following syntax:
int arrayName[5]; (in C, the declaration must include a size or an initializer).

Syntax:

data_type array_name[array_size];

Data types are used for declaring variables or arrays, which specify the kind of data and the size
of data that can be stored in those variables.

An array is a type of variable that allows you to store multiple values of the same data type. For
instance, if you need to store 100 integers, you can utilize an array specifically designed for that
purpose.

int arr[100];

Here, int is the data type, arr is the name of the array and 100 is the size of an array.

It should be emphasized that once an array is declared, its size and type remain fixed and
cannot be modified.

To give an array its values, you can supply a comma-separated list enclosed within curly braces
{} as part of the declaration.

For example, to declare an array named "arrayName" and initialize it, you can do so as follows:

int arrayName[] = {value1, value2, value3, ...};

Each value in the comma-separated list corresponds to an element in the array, allowing you to
initialize the array with specific values. Note that in C this brace syntax is valid only in the
declaration; a brace list cannot be assigned to an array later in the program.

For Example:

float mark[5];

In this case, we have declared an array called "mark" of floating-point type. The size of the
array is specified as 5, indicating that it can store 5 floating-point values.

Here are a few keynotes regarding arrays:

1. Arrays in C start with a 0 index, not 1. In the given example, mark[0] represents the first
element of the array.
2. If an array has a size of n, the last element is accessed using the n-1 index. In the given
example, mark[4] refers to the last element of the array.
3. The memory addresses of array elements follow a pattern. If the starting address of
mark[0] is 2120d, mark[1] will have an address of 2124d, mark[2] will have an address of
2128d, and so on. This is because the size of a float data type is typically 4 bytes.

These keynotes highlight important aspects of arrays, including indexing and memory
allocation, as applied to the given example.

EXAMPLE

Aim: Write a program for declaration of an array.

// C Program to illustrate array declaration
#include <stdio.h>

int main()
{
    // declaring an array of integers
    int arr_int[5];

    // declaring an array of characters
    char arr_char[5];

    return 0;
}

KEY TAKEAWAYS

● It is necessary to declare an array before using it.
● To create an array, you need to specify the data type (such as int) and provide a name
for the array, followed by square brackets [].
● An array is a type of variable that allows you to store multiple values of the same data
type.
ARRAY
SUB LESSON 2.3

REPRESENTATION OF ARRAY

INTRODUCTION

An array is a type of data structure used to store elements of the same data type. It can be
defined as a collection of items arranged in a linear format. Arrays can be either single-
dimensional or multi-dimensional, providing a way to organize and access multiple elements
efficiently.

The distinction between an array index and a memory address lies in their respective functions.
An array index serves as a key value that labels the elements within the array, allowing for their
identification and retrieval. A memory address, on the other hand, refers to the actual location
in memory where an element is stored.

To better understand the concept of arrays, it is essential to be familiar with the following
terms:

Element: Each individual item stored within an array is referred to as an element.

Index: Every element in an array is associated with a numerical index, which serves as its unique
identifier within the array.

Arrays are represented as a collection of buckets or slots, with each slot storing one element.
The indexing of these buckets starts from '0' and goes up to 'n-1', where 'n' represents the size
or length of the array. For example, an array with a size of 10 will have buckets indexed from 0
to 9.

The representation of an array is defined by its declaration, which involves allocating memory
for the array based on a specified size.

When an array is declared, the compiler sets aside a contiguous block of memory to store the
elements of the array. The size of the memory block is determined by the number of elements
in the array and the size of each element.

For example, consider the declaration of an integer array named "myArray" with a size of 5:

int myArray[5];

In this case, the declaration allocates memory to store 5 integer elements, based on the size of
the "int" data type.

The declaration of an array is crucial as it determines the memory allocation, allowing the array
to store and access its elements effectively.

(Figure 2: Memory representation of an array)

REPRESENTATION OF ARRAY

A one-dimensional array, often referred to as a 1-D array, can be visualized as a row where
elements are stored sequentially, one after another. The elements in a 1-D array are accessed
using a single index.

A two-dimensional array, also known as a 2-D array, can be conceptualized as an array of
arrays or as a matrix consisting of rows and columns. It is a data structure that allows elements
to be organized in a grid-like format.

In a 2-D array, elements are accessed using two indices: one for the row and another for the
column. The row index represents the position of the desired row, and the column index
represents the position of the desired column.

2-D arrays are useful when dealing with data that naturally fits into a two-dimensional
structure, such as grids, matrices, and tables.
KEY TAKEAWAYS

● An array can be defined as a collection of items arranged in a linear format.
● The indexing of the Array buckets starts from '0' and goes up to 'n-1', where 'n'
represents the size or length of the array.
● When an array is declared, the compiler sets aside a contiguous block of memory to
store the elements of the array.
● A 1-D array can be visualized as a row where elements are stored sequentially.
● In a 2-D array, elements are accessed using two indices: one for the row and another for
the column.
ARRAY
SUB LESSON 2.4

HOW TO ACCESS ELEMENTS FROM ARRAY

ACCESS ARRAY ELEMENTS

To access elements in an array, you use indices to refer to specific positions within the array.

For instance, let's consider the previously declared array "mark". In this case, the first element
is accessed using the index mark[0], the second element is accessed using mark[1], and so on.
The index value indicates the position of the element within the array.

It's important to note that array indices in many programming languages start from 0.
Therefore, the first element is always at index 0, the second element at index 1, and so on. This
indexing scheme allows you to access and manipulate individual elements of the array based on
their positions.

Syntax:

arrayName[indexNum]
In the given example, the second value of the array is accessed using its index, which is 1. The
output of this operation will be the value at index 1, which is 200. This value represents the
second element of the array, assuming that the array is zero-indexed.

By specifying the index within square brackets after the array name (e.g., arrayName[index]),
you can retrieve the value stored at that particular index within the array. In this case, accessing
the value at index 1 returns the second value in the array.

Let's discuss the code snippet related to accessing array elements:

int mark[5] = {100, 200, 300, 400, 500};

int secondValue = mark[1];

In this code, an integer array named "mark" is declared with a size of 5 and initialized with
some values. The second element of the array is accessed using the index 1, and its value is
assigned to the variable secondValue.

By accessing mark[1], we retrieve the element at index 1, which is 200 in this case. This value is
then stored in the secondValue variable.

This code demonstrates how to access a specific element from an array using its index, allowing
you to perform operations on individual array elements.

Example:

#include <stdio.h>

int main()
{
    int a[5] = {2, 3, 5, 7, 11};

    printf("%d\n", a[0]); // accessing the first element (index 0)
    printf("%d\n", a[1]);
    printf("%d\n", a[2]);
    printf("%d\n", a[3]);
    printf("%d", a[4]);

    return 0;
}

Output:

2
3
5
7
11

KEY TAKEAWAYS
• To access elements in an array, you use indices to refer to specific positions within the
array.
• By specifying the index within square brackets after the array name (e.g.,
arrayName[index]), you can retrieve the value stored at that particular index within the
array.
• It's important to note that array indices in many programming languages start from 0.
Therefore, the first element is always at index 0, the second element at index 1, and so
on.
ARRAY
SUB LESSON 2.5

OPERATIONS ON ARRAY

BASIC OPERATIONS IN THE ARRAYS

Arrays support several basic operations that can be performed on the elements they contain.
Some common operations include:

1. Traversal
2. Insertion
3. Deletion
4. Search
5. Update

These operations allow you to manipulate the data stored in an array according to the
requirements of your program. Whether you need to add, remove, search, display, iterate, or
update array elements, these basic operations provide the necessary functionality to work with
array data effectively.

1. TRAVERSAL: Traversing an array refers to the process of accessing and examining each element
of an array in a systematic manner.
CODE :

#include <stdio.h>

int main() {

    int Arr[5] = {18, 30, 15, 70, 12};
    int i;

    printf("Elements of the array are:\n");
    for(i = 0; i < 5; i++) {
        printf("Arr[%d] = %d, ", i, Arr[i]);
    }
    return 0;
}

OUTPUT :

Elements of the array are:

Arr[0] = 18, Arr[1] = 30, Arr[2] = 15, Arr[3] = 70, Arr[4] = 12,

2. INSERTION: Insertion in the context of arrays refers to the process of adding an element at a
specific position within an existing array.

CODE :

#include <stdio.h>

int main() {

    int arr[20] = { 18, 30, 15, 70, 12 };
    int i, x, pos, n = 5;

    printf("Array elements before insertion\n");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");

    x = 50;   // element to be inserted
    pos = 4;  // insert at the 4th position (1-based)
    n++;

    // shift elements to the right to make room at the insertion position
    for (i = n - 1; i >= pos; i--)
        arr[i] = arr[i - 1];
    arr[pos - 1] = x;

    printf("Array elements after insertion\n");
    for (i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");

    return 0;
}

OUTPUT :

Array elements before insertion

18 30 15 70 12

Array elements after insertion

18 30 15 50 70 12
3. DELETION: Deletion in the context of arrays refers to the process of removing an element from
a specific position within an array.

CODE :

#include <stdio.h>

int main() {

    int arr[] = {18, 30, 15, 70, 12};
    int k = 5, n = 5;   // k is the 1-based position of the element to delete
    int i, j;

    printf("Given array elements are :\n");
    for(i = 0; i < n; i++) {
        printf("arr[%d] = %d, ", i, arr[i]);
    }

    // shift every element after position k one place to the left
    j = k;
    while(j < n) {
        arr[j - 1] = arr[j];
        j = j + 1;
    }
    n = n - 1;

    printf("\nElements of array after deletion:\n");
    for(i = 0; i < n; i++) {
        printf("arr[%d] = %d, ", i, arr[i]);
    }
    return 0;
}
OUTPUT :

Given array elements are :

arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,

Elements of array after deletion:

arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70,

4. SEARCH: Search in the context of arrays refers to the process of finding the position or
existence of a specific element within an array.

CODE :

#include <stdio.h>

int main() {

    int arr[5] = {18, 30, 15, 70, 12};
    int item = 70, i, j = 0;

    printf("Given array elements are :\n");
    for(i = 0; i < 5; i++) {
        printf("arr[%d] = %d, ", i, arr[i]);
    }
    printf("\nElement to be searched = %d", item);

    // linear search: stop at the first matching element
    while(j < 5) {
        if(arr[j] == item) {
            break;
        }
        j = j + 1;
    }
    printf("\nElement %d is found at %d position", item, j + 1);
    return 0;
}

OUTPUT :

Given array elements are :

arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,

Element to be searched = 70

Element 70 is found at 4 position

5. UPDATE: Updating an array refers to the process of modifying the value of an existing element
at a specific position within the array.

CODE :

#include <stdio.h>

int main() {

    int arr[5] = {18, 30, 15, 70, 12};
    int item = 50, i, pos = 3;   // replace the element at the 3rd position

    printf("Given array elements are :\n");
    for(i = 0; i < 5; i++) {
        printf("arr[%d] = %d, ", i, arr[i]);
    }

    arr[pos - 1] = item;

    printf("\nArray elements after updation :\n");
    for(i = 0; i < 5; i++) {
        printf("arr[%d] = %d, ", i, arr[i]);
    }
    return 0;
}

OUTPUT :

Given array elements are :

arr[0] = 18, arr[1] = 30, arr[2] = 15, arr[3] = 70, arr[4] = 12,

Array elements after updation :

arr[0] = 18, arr[1] = 30, arr[2] = 50, arr[3] = 70, arr[4] = 12,

KEY TAKEAWAYS

● Declaration: Arrays are declared by specifying the data type and size. For example, int
numbers[5]; declares an integer array with five elements.
● Accessing Elements: Array elements can be accessed using their index. The index starts
from 0, so the first element is at index 0. For example, int element = numbers[2];
retrieves the value at index 2.
● Updating Elements: Array elements can be updated by assigning a new value to a
specific index. For example, numbers[3] = 10; assigns the value 10 to the element at
index 3.
● Array Length: In C, the number of elements in an array can be computed as
sizeof(numbers) / sizeof(numbers[0]).
STACK
SUB LESSON 3.1

INTRODUCTION TO STACK

STACK

A stack is a linear data structure that follows the Last In First Out (LIFO) principle. In other
words, the last element that is inserted into the stack is the first one to be removed. This
behavior is similar to a stack of objects, where the most recently placed item is the first one to
be taken off.

You can envision the stack data structure as a stack of plates, where each plate is placed on top
of another.

In this analogy, you have the ability to perform three operations on the stack of plates:

1. Put a new plate on top: You can add a new plate to the stack, placing it on the top.
2. Remove the top plate: You can remove the plate that is currently on the top of the
stack.
3. Accessing the plate at the bottom: If you want to retrieve the plate that is at the bottom
of the stack, you must first remove all the plates on top, following the Last In First Out
(LIFO) principle of the stack data structure.

LIFO PRINCIPLE OF STACK

In programming, the act of adding an item to the top of the stack is commonly referred to as
"push." It corresponds to placing an element onto the stack. On the other hand, removing an
item from the top of the stack is known as "pop", which signifies taking out the topmost
element from the stack. These terms, "push" and "pop," are frequently used to describe the
fundamental operations performed on a stack data structure in programming.

In the provided image, you can observe that even though item 3 was the most recent addition
to the stack, it was the first one to be removed. This exemplifies the essence of the Last In First
Out (LIFO) principle, which governs the behavior of a stack data structure. According to the LIFO
principle, the most recently added item is the first one to be removed, while the items added
earlier remain in the stack until the topmost element is taken out.

Here are some key points related to the stack data structure:

1. Stack behavior: The stack data structure is named so because it mimics the behavior of a
real-world stack, such as a pile of books or plates. Elements are added and removed
from the top of the stack.
2. Abstract data type: A stack is an abstract data type (ADT) that comes with a predefined
capacity, meaning it can only hold a limited number of elements based on its size.
3. Insertion and deletion order: The stack follows a specific order for inserting and deleting
elements: Last In First Out (LIFO), equivalently described as First In Last Out (FILO). The
most recently added element is the first one to be removed, which means the first
element inserted is the last one to be removed.

These points highlight the characteristics and behavior of a stack data structure.

Stack Time Complexity

For the array-based implementation of a stack, the push and pop operations take constant
time, i.e. O(1).

APPLICATIONS OF STACK DATA STRUCTURE

Some of the most common uses of a stack include:

Reversing a word: By pushing all the letters of a word onto a stack and then popping them out,
the LIFO order of the stack ensures that the letters are retrieved in reverse order, effectively
reversing the word.

Compilers: Stacks are used by compilers to evaluate expressions, such as converting them to
prefix or postfix form. The stack helps in organizing and calculating the values of complex
expressions by following the appropriate order of operations.

Browsers: In web browsers, the back button functionality utilizes a stack. Whenever a user visits
a new page, its URL is added to the top of the stack. Pressing the back button removes the
current URL from the stack, allowing access to the previous URL, effectively navigating back
through the browsing history.

KEY TAKEAWAYS

● A stack is a linear data structure that follows the Last In First Out (LIFO) principle.
● For the array-based implementation of a stack, the push and pop operations take
constant time, i.e. O(1).
● In programming, the act of adding an item to the top of the stack is commonly referred
to as "push."
● It corresponds to placing an element onto the stack. On the other hand, removing an
item from the top of the stack is known as "pop"
STACK
SUB LESSON 3.2

WORKING WITH STACK

WORKING WITH STACK

A stack is a data structure that follows the Last-In-First-Out (LIFO) principle. It allows operations
to be performed at one end only, known as the top of the stack. Here are the key operations
performed on a stack:

1. Push: This operation adds an element to the top of the stack. The new element becomes
the top, and the size of the stack increases. In other words, it pushes an element onto
the stack.
2. Pop: This operation removes the top element from the stack. The element is removed
from the stack, and the size of the stack decreases. In other words, it pops the top
element from the stack.
3. Peek/Top: This operation retrieves the top element from the stack without removing it.
It allows you to access the value of the top element without modifying the stack.
4. isEmpty: This operation checks if the stack is empty. It returns a Boolean value indicating
whether the stack is empty or not.
PUSH OPERATION

The process of pushing an element onto a stack involves the following steps:

1. Before inserting an element into the stack, we check whether the stack is already full,
i.e., if it has reached its maximum capacity.
2. If the stack is full and we try to insert an element, it results in an overflow condition,
indicating that the stack cannot accommodate any more elements.
3. When initializing a stack, we typically set the initial value of the top pointer to -1. This
value is used to check whether the stack is empty.
4. When a new element is pushed onto the stack, the value of the top pointer is
incremented, usually by adding 1 (top = top + 1). This increments the top pointer to the
new position.
5. The new element is then placed at the position indicated by the updated top pointer.
6. The process of pushing elements continues until the stack reaches its maximum size.

These steps outline the process of pushing an element onto a stack, considering the overflow
condition and the management of the top pointer.

POP OPERATION

The process of popping an element from a stack involves the following steps:
1. Before deleting an element from the stack, we check whether the stack is empty by
verifying the value of the top pointer.
2. If the stack is empty and we try to delete an element, it results in an underflow
condition. This indicates that there are no elements in the stack to be removed.
3. If the stack is not empty, we can access the element that is pointed to by the top
pointer. This element represents the topmost element in the stack.
4. After performing the pop operation and removing the element, the top pointer is
decremented by 1, typically by subtracting 1 (top = top - 1). This adjusts the top pointer
to point to the new topmost element in the stack.
5. The element that was popped can be used or discarded as needed.
6. The process of popping elements can continue as long as there are elements in the
stack.

These steps outline the process of popping an element from a stack, considering the underflow
condition and the adjustment of the top pointer after the removal of an element.
KEY TAKEAWAYS

● Push Operation: The process of adding an element to the stack is called the push
operation. It involves placing the new element on top of the existing elements.
● Pop Operation: The process of removing an element from the stack is called the pop
operation. It involves removing the topmost element from the stack.
● Top Pointer: Stacks typically have a top pointer that keeps track of the topmost
element. The top pointer is updated with each push and pop operation.
● Overflow and Underflow: Stack operations should be performed with caution to avoid
overflow and underflow conditions. Overflow occurs when trying to push an element
into a full stack, and underflow occurs when trying to pop an element from an empty
stack.
STACK
SUB LESSON 3.3

STACK IMPLEMENTATION

You can implement stacks in data structures using two main approaches: array implementation
and linked list implementation.

Array: In the array implementation, a stack is constructed using an array data structure. All the
stack operations are performed using arrays. We will explore how various operations can be
implemented on the stack in data structures using the array data structure.

Linked List: In the linked list implementation of stacks in data structures, each new element is
inserted as the top element of the linked list. This means that every newly inserted element
becomes the new top. When removing an element from the stack, the node pointed to by the
top is removed by updating the top to point to its previous node in the list.
STACK IMPLEMENTATION USING ARRAY WITH EXAMPLE

Push Operation:

The push operation involves adding an element on the top of the stack. It consists of the
following two steps:
1. Increment the top variable of the stack to refer to the next memory location.
2. Add the data element at the incremented top position.

When performing a push operation, if the stack is already full, it results in an overflow
condition, indicating that no more elements can be inserted into the stack.

Algorithm of push operation:

begin
    if top = n - 1 then
        stack is full (overflow)
    else
        top = top + 1
        stack[top] = data
end
Pop Operation:

The pop operation in a stack is used to remove the topmost element from the stack. It follows
the LIFO (Last-In-First-Out) principle, where the element that was most recently pushed onto
the stack will be the first one to be popped.

The steps involved in the pop operation are as follows:

1. Check if the stack is empty. If the stack is empty, it indicates an underflow condition,
meaning there are no elements in the stack to be popped.
2. If the stack is not empty, access the element at the top of the stack.
3. Decrement the value of the top pointer to move it to the next element in the stack.
4. Return or use the value of the popped element as needed.

The pop operation modifies the stack by removing the topmost element and updating the top
pointer accordingly. It is important to handle the underflow condition and ensure that the stack
is not empty before performing the pop operation to avoid any errors.

Algorithm of pop operation:

begin
    if top = -1 then
        stack is empty (underflow)
    else
        value = stack[top]
        top = top - 1
end

Peek Operation:

The peek operation in a stack is used to retrieve the topmost element from the stack without
removing it. It allows you to examine the value of the element at the top of the stack without
modifying the stack itself.

The steps involved in the peek operation are as follows:

1. Check if the stack is empty. If the stack is empty, it indicates an underflow condition,
meaning there are no elements in the stack to retrieve.
2. If the stack is not empty, access the element at the top of the stack.

Algorithm of peek operation:

begin
    if top = -1 then
        stack is empty (underflow)
    else
        data = stack[top]
        return data
end

KEY TAKEAWAYS

● The push operation adds an element on the top of the stack by incrementing the top
variable and adding the element at the new top position.
● The pop operation removes the topmost element from the stack by decrementing the
top variable and returning the deleted element.
● The peek operation retrieves the topmost element from the stack without removing it.
● Stack can encounter overflow condition when trying to insert an element into a full
stack, and underflow condition when trying to remove an element from an empty stack.
STACK
SUB LESSON 3.4

OPERATIONS ON STACK & RECURSION

A stack is a type of linear data structure that consists of a collection of elements. It follows the
principle of Last In, First Out (LIFO), which means that the last element inserted into the stack
will be the first one to be removed.

In a stack, elements can be inserted and deleted only from one end, often referred to as the
"top" of the stack.

OPERATIONS ON STACK

The stack data structure supports several basic operations:

Push: This operation adds an element to the top of the stack.

Pop: This operation removes the topmost element from the stack.

isEmpty: This operation checks whether the stack is empty. It returns true if the stack has no
elements and false otherwise.

isFull: This operation checks whether the stack is full, especially in cases where the stack has a
maximum capacity. It returns true if the stack is full and false otherwise.

Top: This operation allows us to access the topmost element of the stack without removing it. It
returns the value of the element at the top of the stack.

Example:

#include <stdio.h>
#include <stdlib.h>

#define SIZE 4

int top = -1, inp_array[SIZE];

void push();
void pop();
void show();

int main()
{
    int choice;

    while (1)
    {
        printf("\nPerform operations on the stack:");
        printf("\n1.Push the element\n2.Pop the element\n3.Show\n4.End");
        printf("\n\nEnter the choice: ");
        scanf("%d", &choice);

        switch (choice)
        {
        case 1:
            push();
            break;
        case 2:
            pop();
            break;
        case 3:
            show();
            break;
        case 4:
            exit(0);
        default:
            printf("\nInvalid choice!!");
        }
    }
}

void push()
{
    int x;

    if (top == SIZE - 1)
    {
        printf("\nOverflow!!");
    }
    else
    {
        printf("\nEnter the element to be added onto the stack: ");
        scanf("%d", &x);
        top = top + 1;
        inp_array[top] = x;
    }
}

void pop()
{
    if (top == -1)
    {
        printf("\nUnderflow!!");
    }
    else
    {
        printf("\nPopped element: %d", inp_array[top]);
        top = top - 1;
    }
}

void show()
{
    if (top == -1)
    {
        printf("\nUnderflow!!");
    }
    else
    {
        printf("\nElements present in the stack: \n");
        for (int i = top; i >= 0; --i)
            printf("%d\n", inp_array[i]);
    }
}

Output:

Execute this code to push() the number "10" onto the stack:


Output

Perform operations on the stack:

1.Push the element

2.Pop the element

3.Show

4.End

Enter the choice: 1

Enter the element to be added onto the stack: 10

Then show() the elements on the stack:


Output

Perform operations on the stack:

1.Push the element

2.Pop the element

3.Show

4.End

Enter the choice: 3

Elements present in the stack:


10

Then pop():
Output

Perform operations on the stack:

1.Push the element

2.Pop the element

3.Show

4.End

Enter the choice: 2

Popped element: 10

Now, the stack is empty. Attempt to pop() again:


Output

Perform operations on the stack:

1.Push the element

2.Pop the element

3.Show

4.End

Enter the choice: 2

Underflow!!

RECURSION

Recursion is a concept in programming where a function calls itself, either directly or indirectly.
When a function calls itself, it is known as a recursive function. It is a powerful technique that
allows problems to be solved by breaking them down into smaller, simpler versions of the same
problem. The recursive function continues to call itself until it reaches a base case, which is a
condition that stops the recursion and returns a result.
Properties of Recursion:

● Performing the same operations multiple times with different inputs.
● In every step, we try smaller inputs to make the problem smaller.
● A base condition is needed to stop the recursion; otherwise, an infinite loop will occur.

Example: Write a Program to Find the Factorial of a Number Using Recursion

The factorial of a positive number n is given by:

factorial of n (n!) = 1 * 2 * 3 * 4 *... * n

The factorial of a negative number is not defined, as it does not have a meaningful
interpretation in mathematics. By convention, the factorial of 0 is defined to be 1. These are
established conventions in mathematics and are important to consider when working with
factorial calculations.

#include <stdio.h>

long factorial(int n)
{
    if (n == 0)
        return 1;
    else
        return (n * factorial(n - 1));
}

int main()
{
    int number;
    long fact;

    printf("Enter a number: ");
    scanf("%d", &number);

    fact = factorial(number);
    printf("Factorial of %d is %ld\n", number, fact);

    return 0;
}

Output:

Enter a number: 5

Factorial of 5 is 120

KEY TAKEAWAYS

● Stack follows the principle of Last In, First Out (LIFO), which means that the last element
inserted into the stack will be the first one to be removed.
● In a stack, elements can be inserted and deleted only from one end, often referred to as
the "top" of the stack.
QUEUE
SUB LESSON 4.1

BASICS OF QUEUE

A queue is a linear data structure in computer science that stores a collection of elements
following the First-In-First-Out (FIFO) principle. It is an ordered list where elements are added
to the end (rear) and removed from the front (head).

Think of a real-life queue or line of people waiting for service. The first person to arrive is the
first to be served, and as new people join the queue, they line up at the back and wait for their
turn. Similarly, in a queue data structure, the element that has been in the queue the longest is
the first one to be removed, while new elements are added to the end.

A queue is an abstract data structure that is different from a stack in that it is open at both
ends. This means that a queue follows the FIFO (First-In-First-Out) structure, where the data
item that is inserted first will also be accessed or removed first. In a queue, data is inserted at
one end and deleted from the other end, maintaining the order of insertion. The end where
data is inserted is typically called the "rear" or "tail," and the end from which data is removed is
called the "front" or "head" of the queue.

A real-world example that illustrates the concept of a queue is a single-lane one-way road,
where vehicles enter the road in a specific order and exit in the same order. This aligns with the
FIFO (First-In-First-Out) nature of a queue. Another example can be observed at ticket windows
or bus stops, where people join a queue and are served or board the bus in the order they
arrived, ensuring fairness and maintaining the sequence of arrival.

REPRESENTATION OF QUEUES

Similar to the stack abstract data type (ADT), the queue ADT can also be implemented using
various data structures such as arrays, linked lists, or pointers. In this tutorial, we will
demonstrate the implementation of queues using a one-dimensional array as a simple example.
LIMITATIONS OF QUEUE

As you can see in the image below, after performing enqueue and dequeue operations the
usable size of the queue decreases: positions at the front that have already been dequeued
cannot be refilled.

Indexes 0 and 1 can only be used for adding elements again after the queue has been reset,
that is, once all elements have been dequeued and the FRONT and REAR pointers return to
their initial values.

APPLICATIONS OF QUEUE

Here are some common examples and applications of queues:

1. Task Scheduling: Queues are used in task scheduling algorithms to manage the
execution order of tasks or processes based on their priority or arrival time.
2. Printer Spooling: When multiple users send print requests to a shared printer, a queue is
used to manage the order in which the print jobs are processed.
3. Message Queuing: In messaging systems, queues are employed to ensure reliable and
ordered delivery of messages between different components or systems.
4. Event-driven Programming: Queues are used to handle events and event-driven
programming models, where events are queued and processed in the order of their
occurrence.
5. Simulations: Queues are essential in simulating real-world systems, such as traffic flow,
customer queues, or manufacturing processes, to analyze and optimize their
performance.
6. Call Center Systems: Queues are utilized in call center systems to manage incoming calls,
ensuring fair distribution to available agents based on their availability.
7. Network Packet Routing: Queues are used in network routers to manage the incoming
and outgoing network packets, facilitating proper routing and preventing congestion.
8. CPU Scheduling: Queues play a vital role in CPU scheduling algorithms, where processes
are placed in different queues based on their priority or scheduling criteria.
9. Web Server Request Handling: Queues are used in web servers to manage incoming
requests from clients, ensuring that requests are processed in the order they are
received.
10. Breadth-First Search: Queues are extensively used in graph algorithms, particularly in
breadth-first search (BFS), to explore nodes or vertices level by level.

KEY TAKEAWAYS

● A queue is a linear data structure in computer science that stores a collection of
elements following the First-In-First-Out (FIFO) principle.
● A queue is an abstract data structure that is different from a stack in that it is
open at both ends.
● It is an ordered list where elements are added to the end (rear) and removed
from the front (head).
QUEUE
SUB LESSON 4.2

WORKING WITH QUEUE

A queue is a fundamental data structure in programming that follows the First-In-First-Out
(FIFO) rule. It can be likened to a ticket queue outside a cinema hall, where the person who
joins the queue first is the first one to obtain a ticket.

The FIFO principle implies that the element that enters the queue first will be the first one to be
removed from it. This characteristic makes queues suitable for scenarios where order
preservation is essential.

In the example above, the image depicts a queue where the number 1 was added to the queue
before the number 2. As a result, according to the FIFO (First-In-First-Out) rule, the number 1
will be the first one to be removed from the queue.

In programming, the process of adding items to a queue is commonly referred to as "enqueue,"
while the process of removing items from a queue is known as "dequeue." These terms are
used to describe the operations performed on a queue data structure, where elements are
inserted at one end and removed from the other end, following the FIFO principle.

Queue operations typically involve the use of two pointers: FRONT and REAR. Here's an
explanation of how these pointers work:

1. FRONT: This pointer keeps track of the first element in the queue. When the queue is
empty, the FRONT pointer is typically set to -1.
2. REAR: This pointer keeps track of the last element in the queue. As elements are
enqueued (added) to the queue, the REAR pointer is updated accordingly. When the
queue is empty, the REAR pointer is also set to -1.

By using these pointers, we can determine the position of the first and last elements in the
queue and perform enqueue and dequeue operations effectively.
To enqueue an element, we increment the REAR pointer and add the element to the position
indicated by the REAR pointer. If the queue is empty initially, we set both the FRONT and REAR
pointers to 0.

To dequeue an element, we increment the FRONT pointer to point to the next element in the
queue and retrieve the element from the position indicated by the previous FRONT pointer
value. If the dequeue operation results in an empty queue (i.e., there are no more elements),
we can reset both the FRONT and REAR pointers to -1.

It's important to note that different implementations may have variations in how these pointers
are initialized and updated, but the general concept remains the same.

The enqueue and dequeue operations in a queue can be described as follows:

Enqueue Operation:

Check if the queue is full (based on the implementation's capacity).

If the queue is empty (i.e., it has no elements), set the value of FRONT to 0.

Increase the REAR index by 1 to indicate the next available position in the queue.

Add the new element to the position pointed to by the REAR index.

Dequeue Operation:

Check if the queue is empty (i.e., there are no elements present).

If the queue is not empty, return the value pointed to by the FRONT index, which represents
the element to be dequeued.

Increase the FRONT index by 1 to move it to the next element in the queue.

If the dequeue operation results in an empty queue (i.e., there are no more elements
remaining), reset the values of both FRONT and REAR to -1 to indicate an empty queue state.

It's worth noting that these operations assume the underlying implementation maintains the
queue size and checks for full or empty conditions appropriately. Additionally, variations in
implementation may have different strategies for handling full or empty queue situations.
KEY TAKEAWAYS
• Enqueue: Adding elements to the rear of the queue is known as the enqueue operation.
This operation increases the size of the queue and updates the rear pointer accordingly.
• Dequeue: Removing elements from the front of the queue is known as the dequeue
operation. This operation reduces the size of the queue and updates the front pointer
accordingly.
QUEUE
SUB LESSON 4.3

OPERATIONS OF QUEUE

The basic operations of a queue include:

1. Enqueue: This operation adds an element to the end of the queue. It expands the size of
the queue and places the new element at the rear.
2. Dequeue: This operation removes an element from the front of the queue. It shrinks the
size of the queue and retrieves the element that was first in line.
3. IsEmpty: This operation checks if the queue is empty, indicating whether there are no
elements present in the queue.
4. IsFull: This operation checks if the queue is full, indicating whether it has reached its
maximum capacity or the specified size limit.
5. Peek: This operation allows you to access the value of the element at the front of the
queue without removing it. It provides a way to examine the next element that will be
dequeued.
These operations are fundamental in working with queues and provide the necessary
functionality to manage and manipulate the elements within the queue data structure.

Example:

// Queue implementation in C

#include <stdio.h>

#define SIZE 5

void enQueue(int);
void deQueue();
void display();

int items[SIZE], front = -1, rear = -1;

int main() {
  // deQueue is not possible on an empty queue
  deQueue();

  // enQueue 5 elements
  enQueue(1);
  enQueue(2);
  enQueue(3);
  enQueue(4);
  enQueue(5);

  // 6th element can't be added because the queue is full
  enQueue(6);

  display();

  // deQueue removes the element entered first, i.e. 1
  deQueue();

  // Now we have just 4 elements
  display();

  return 0;
}

void enQueue(int value) {
  if (rear == SIZE - 1)
    printf("\nQueue is Full!!");
  else {
    if (front == -1)
      front = 0;
    rear++;
    items[rear] = value;
    printf("\nInserted -> %d", value);
  }
}

void deQueue() {
  if (front == -1)
    printf("\nQueue is Empty!!");
  else {
    printf("\nDeleted : %d", items[front]);
    front++;
    if (front > rear)
      front = rear = -1;
  }
}

// Function to print the queue
void display() {
  if (rear == -1)
    printf("\nQueue is Empty!!!");
  else {
    int i;
    printf("\nQueue elements are:\n");
    for (i = front; i <= rear; i++)
      printf("%d ", items[i]);
    printf("\n");
  }
}

Output:

Queue is Empty!!

Inserted -> 1

Inserted -> 2

Inserted -> 3

Inserted -> 4

Inserted -> 5

Queue is Full!!

Queue elements are:

1 2 3 4 5

Deleted : 1
Queue elements are:

2 3 4 5

KEY TAKEAWAYS
• IsEmpty: This operation checks if the queue is empty, indicating whether there are no
elements present in the queue.
• IsFull: This operation checks if the queue is full, indicating whether it has reached its
maximum capacity or the specified size limit.
TYPES OF QUEUE
SUB LESSON 5.1

TYPES OF QUEUE

The following list presents four distinct types of queue:

● Simple Queue or Linear Queue
● Circular Queue
● Priority Queue
● Double Ended Queue (or Deque)

Simple Queue

In a Linear Queue, an element is inserted at one end while deletion occurs at the other end. The
end where insertion takes place is called the rear end, and the end where deletion occurs is
called the front end. This type of queue strictly adheres to the First-In-First-Out (FIFO) rule.

A significant limitation of the linear queue is that insertions can only be performed at the rear
end. For example, once the first few elements have been deleted from the queue, no further
elements can be inserted even though space has been freed at the front: the rear end still
points to the last position of the array, so the queue reports an overflow condition.

Circular Queue

The Circular Queue represents all the nodes in a circular manner. It shares similarities with the
linear queue, but with the distinction that the last element of the queue is connected to the
first element, forming a circular structure. It is also referred to as a Ring Buffer due to the
interconnected nature of all the ends. The image below illustrates the representation of a
circular queue:
The circular queue addresses the drawback encountered in the linear queue. It overcomes the
limitation of the linear queue by allowing the addition of new elements in empty spaces. This is
achieved by incrementing the value of the rear pointer. One of the primary advantages of using
a circular queue is its ability to optimize memory utilization, resulting in improved efficiency.

Priority Queue

A priority queue is a unique type of queue where elements are organized based on their
priority. It is a data structure where each element is assigned a priority value. In cases where
multiple elements have the same priority, they are arranged according to the First-In-First-Out
(FIFO) principle. The image below illustrates the representation of a priority queue:

In a priority queue, the insertion of elements takes place based on their arrival, meaning that
newly arriving elements are inserted into the queue. On the other hand, deletion in a priority
queue is performed based on the priority associated with each element. Elements with higher
priority are given precedence for deletion over elements with lower priority.
Double Ended Queue (or Deque)

A Deque, or Double Ended Queue, allows for the insertion and deletion of elements from both
ends of the queue, which includes both the front and rear ends. This means that elements can
be inserted and removed from either end of the queue. A notable application of a deque is in
checking for palindromes. By reading a string from both ends, if the string remains the same, it
indicates that it is a palindrome.

KEY TAKEAWAYS

● Linear Queue: In a linear queue, insertion takes place at one end (rear) and deletion
occurs at the other end (front). It follows the First-In-First-Out (FIFO) rule.
● Circular Queue: A circular queue is similar to a linear queue but with the last element
connected to the first element, forming a circular structure. It overcomes the limitation
of a linear queue by allowing better utilization of available space.
● Priority Queue: In a priority queue, elements are arranged based on their priority. Each
element has a priority associated with it, and higher priority elements are given
precedence during deletion.
● Deque (Double Ended Queue): A deque allows insertion and deletion from both ends,
i.e., the front and rear. It provides more flexibility compared to other queue types.
TYPES OF QUEUE
SUB LESSON 5.2

SIMPLE QUEUE

A queue is a data structure that follows a First-In-First-Out (FIFO) principle. It is similar to a list
where elements are added at one end and removed from the other end. The element that is
added first will be the first one to be removed, maintaining the order of insertion.

A queue can be compared to or visualized as a line of people waiting to purchase tickets, where
the person who arrives first is the first to be served (following the "First come, first served"
principle).

The position of the entry in the queue that is ready to be served, which is the first entry that
will be removed from the queue, is commonly referred to as the "front" of the queue (or
sometimes called the "head" of the queue). Similarly, the position of the last entry in the
queue, which is the most recently added one, is known as the "rear" (or the "tail") of the
queue. Refer to the illustration below:

In the context of a queue, the term "Queue" refers to the name of the array used to store the
elements of the queue.

The "Front" denotes the index in the array where the first element of the queue is stored.

On the other hand, the "Rear" represents the index in the array where the last element of the
queue is stored.
IMPLEMENTATION OF SIMPLE QUEUE

#include <stdio.h>

#define MAX_SIZE 5 // Adjust the size of the queue as needed

struct Queue {
    int items[MAX_SIZE];
    int front;
    int rear;
    int size;
};

void initializeQueue(struct Queue* queue) {
    queue->front = -1;
    queue->rear = -1;
    queue->size = 0;
}

int isEmpty(struct Queue* queue) {
    return (queue->size == 0);
}

int isFull(struct Queue* queue) {
    return (queue->size == MAX_SIZE);
}

void enqueue(struct Queue* queue, int value) {
    if (isFull(queue)) {
        printf("Queue is full. Cannot enqueue %d.\n", value);
        return;
    }
    if (isEmpty(queue)) {
        queue->front = 0;
    }
    queue->rear = (queue->rear + 1) % MAX_SIZE;
    queue->items[queue->rear] = value;
    queue->size++;
    printf("Enqueued %d successfully.\n", value);
}

int dequeue(struct Queue* queue) {
    if (isEmpty(queue)) {
        printf("Queue is empty. Cannot dequeue.\n");
        return -1;
    }
    int removedItem = queue->items[queue->front];
    queue->front = (queue->front + 1) % MAX_SIZE;
    queue->size--;
    printf("Dequeued item: %d\n", removedItem);
    return removedItem;
}

void printQueue(struct Queue* queue) {
    if (isEmpty(queue)) {
        printf("Queue is empty.\n");
        return;
    }
    printf("Queue elements: ");
    int current = queue->front;
    for (int i = 0; i < queue->size; i++) {
        printf("%d ", queue->items[current]);
        current = (current + 1) % MAX_SIZE;
    }
    printf("\n");
}

int main() {
    struct Queue queue;
    initializeQueue(&queue);

    enqueue(&queue, 1);
    enqueue(&queue, 2);
    enqueue(&queue, 3);
    enqueue(&queue, 4);

    printQueue(&queue); // Output: Queue elements: 1 2 3 4

    dequeue(&queue); // Output: Dequeued item: 1
    dequeue(&queue); // Output: Dequeued item: 2

    printQueue(&queue); // Output: Queue elements: 3 4

    enqueue(&queue, 5);

    printQueue(&queue); // Output: Queue elements: 3 4 5

    dequeue(&queue); // Output: Dequeued item: 3
    dequeue(&queue); // Output: Dequeued item: 4
    dequeue(&queue); // Output: Dequeued item: 5

    dequeue(&queue); // Output: Queue is empty. Cannot dequeue.

    return 0;
}

OUTPUT

Enqueued 1 successfully.
Enqueued 2 successfully.
Enqueued 3 successfully.
Enqueued 4 successfully.
Queue elements: 1 2 3 4
Dequeued item: 1
Dequeued item: 2
Queue elements: 3 4
Enqueued 5 successfully.
Queue elements: 3 4 5
Dequeued item: 3
Dequeued item: 4
Dequeued item: 5
Queue is empty. Cannot dequeue.
KEY TAKEAWAYS

● A queue is a data structure that follows a First-In-First-Out (FIFO) principle. It is similar
to a list where elements are added at one end and removed from the other end.
● The element that is added first will be the first one to be removed, maintaining the
order of insertion.
TYPES OF QUEUE
SUB LESSON 5.3

CIRCULAR QUEUE

The array implementation of a queue had a specific limitation. When the rear of the queue
reached the end position, there were potential vacant spaces in the beginning that couldn't be
utilized. To overcome this limitation, the concept of a circular queue was introduced.

A circular queue shares similarities with a linear queue as both operate based on the First-In-
First-Out (FIFO) principle. However, in a circular queue, the last position is connected to the first
position, forming a circular structure or circle. This distinctive characteristic gives rise to its
alternative name, the Ring Buffer.
Circular queues support the following operations:

1. Front: Retrieves the front element from the queue.
2. Rear: Retrieves the rear element from the queue.
3. Enqueue(value): Inserts a new value into the queue. The new element is always inserted
from the rear end.
4. Dequeue(): Removes an element from the queue. Deletion in a queue always occurs
from the front end.

CIRCULAR QUEUE REPRESENTATION


Example:

// Circular Queue implementation in C

#include <stdio.h>

#define SIZE 5

int items[SIZE];
int front = -1, rear = -1;

// Check if the queue is full
int isFull() {
  if ((front == rear + 1) || (front == 0 && rear == SIZE - 1)) return 1;
  return 0;
}

// Check if the queue is empty
int isEmpty() {
  if (front == -1) return 1;
  return 0;
}

// Adding an element
void enQueue(int element) {
  if (isFull())
    printf("\n Queue is full!! \n");
  else {
    if (front == -1) front = 0;
    rear = (rear + 1) % SIZE;
    items[rear] = element;
    printf("\n Inserted -> %d", element);
  }
}

// Removing an element
int deQueue() {
  int element;
  if (isEmpty()) {
    printf("\n Queue is empty !! \n");
    return (-1);
  } else {
    element = items[front];
    if (front == rear) {
      // Q has only one element, so we reset the
      // queue after dequeuing it
      front = -1;
      rear = -1;
    } else {
      front = (front + 1) % SIZE;
    }
    printf("\n Deleted element -> %d \n", element);
    return (element);
  }
}

// Display the queue
void display() {
  int i;
  if (isEmpty())
    printf(" \n Empty Queue\n");
  else {
    printf("\n Front -> %d ", front);
    printf("\n Items -> ");
    for (i = front; i != rear; i = (i + 1) % SIZE) {
      printf("%d ", items[i]);
    }
    printf("%d ", items[i]); // print the element at the rear
    printf("\n Rear -> %d \n", rear);
  }
}

int main() {
  // Fails because front = -1
  deQueue();

  enQueue(1);
  enQueue(2);
  enQueue(3);
  enQueue(4);
  enQueue(5);

  // Fails to enqueue because front == 0 && rear == SIZE - 1
  enQueue(6);

  display();

  deQueue();
  display();

  enQueue(7);
  display();

  // Fails to enqueue because front == rear + 1
  enQueue(8);

  return 0;
}

Output:

Queue is empty !!

Inserted -> 1

Inserted -> 2

Inserted -> 3

Inserted -> 4

Inserted -> 5

Queue is full!!

Front -> 0

Items -> 1 2 3 4 5

Rear -> 4

Deleted element -> 1

Front -> 1
Items -> 2 3 4 5

Rear -> 4

Inserted -> 7

Front -> 1

Items -> 2 3 4 5 7

Rear -> 0

Queue is full!!

KEY TAKEAWAYS

● Circular nature: A circular queue differs from a regular queue by allowing the front and
rear pointers to wrap around to the beginning of the queue, enabling efficient space
utilization.
● Enqueue and dequeue operations: Enqueueing (adding) an element and dequeuing
(removing) an element from a circular queue are both performed in constant time, O(1),
regardless of the size of the queue.
● Full and empty conditions: A circular queue is considered full when the rear pointer is
one position behind the front pointer. Conversely, the queue is empty when the front
and rear pointers are equal.
TYPES OF QUEUE
SUB LESSON 5.4

PRIORITY QUEUE

A priority queue is a unique form of queue that assigns a priority value to each element.
Elements are then retrieved from the queue based on their priority, with higher priority items
being served first. In the event that elements share the same priority, they are served in the
order they were added to the queue.

Assigning Priority Value

Assigning priority values in a priority queue is typically done by considering the value of the
element itself. In a priority queue, the element with the highest priority is dequeued first. The
priority of elements determines the order in which they are removed from the priority queue,
with higher-priority elements being dequeued before lower-priority elements.

In the given example, if we insert the values 1, 3, 4, 8, 14, and 22 into a priority queue with an
ordering imposed from least to greatest, element 1 would have the highest priority, while 22
would have the lowest priority. This means that when elements are dequeued from the priority
queue, the element with the value 1 would be dequeued first, followed by 3, 4, 8, 14, and
finally 22.
Characteristics of a priority queue include:

1. Every element in a priority queue is associated with a priority value.
2. Elements with higher priority are dequeued (deleted) before elements with lower
priority.
3. If two elements in a priority queue have the same priority, they are arranged and
dequeued based on the "first-in, first-out" (FIFO) principle.

Ascending order priority queue:

In an ascending order priority queue, a lower priority number is considered to have a higher
priority. For instance, if we have the numbers 1 to 5 arranged in ascending order like 1, 2, 3, 4,
and 5, the smallest number, 1, is given the highest priority in the priority queue. Consequently,
in this priority queue, the element with the value 1 would be served first, followed by 2, 3, 4,
and 5, in that order.
Descending order priority queue:

In a descending order priority queue, a higher priority number is considered to have a higher
priority. For instance, if we have the numbers 1 to 5 arranged in descending order like 5, 4, 3, 2,
and 1, the largest number, 5, is given the highest priority in the priority queue. Accordingly, in
this priority queue, the element with the value 5 would be served first, followed by 4, 3, 2, and
1, in that order.
Before studying the priority queue, we need to learn about the heap data structure for a better
understanding of the binary heap, as it is used to implement the priority queue.

A heap is a type of binary tree known as a complete binary tree, where each node can have at
most two children.

There are two types of heaps

1. Min Heap
2. Max Heap

In a Min Heap, the value of a parent node is always less than or equal to the values of its
children.

In a Max Heap, the value of a parent node is always greater than or equal to the values of its
children.

Let the input array be

44, 33, 77, 11, 55, 88, 66

To create a Max Heap tree, the following two cases need to be considered:

1. Insertion of Elements: When inserting elements into the Max Heap tree, it is crucial to
maintain the property of a complete binary tree. This means that elements should be
inserted from left to right, level by level, filling the available positions in the tree.
2. Parent-Child Relationship: Additionally, the value of a parent node in the Max Heap tree
must be greater than the values of both its children. This ensures that the maximum
element is always at the root of the tree, with progressively smaller elements branching
out.

Step 1: Initially, we add the element 44 to the tree. The resulting tree would be as follows:

Step 2: The next element to be added is 33. Since the insertion in a binary tree starts from the
left side, we add the element 33 to the left of 44. The updated tree would look like this:
Step 3: The next element to be added is 77. We add the element 77 to the right of 44 because
the insertion in a binary tree starts from the left side and moves to the right. The updated tree
would appear as follows:

As we can observe in the above tree, it does not satisfy the Max Heap property, which requires
the parent node to have a value greater than its child nodes. In this case, the parent node 44 is
less than the child node 77. To rectify this, we will swap the values of the parent and child
nodes, resulting in the updated tree as shown below:
After swapping the values, the Max Heap property is now satisfied, with the parent node 77
being greater than both of its child nodes, 33 and 44.

Step 4: The next element to be added is 11. Since the insertion in a binary tree starts from the
left side, we add the element 11 to the left of 33. The updated tree would look like this:

Step 5: The next element to be added is 55. To maintain the property of a complete binary tree,
we add the node 55 to the right of 33. The updated tree would appear as follows:
As we can observe in the above figure, the property of a Max Heap is not satisfied because the
parent node 33 is less than the child node 55. To rectify this, we will swap the values of the
parent and child nodes, resulting in the updated tree as shown below:
Step 6: The next element to be added is 88. Since the left subtree is already complete, we add
the element 88 as the left child of 44 to maintain the property of a complete binary tree.
Because 88 is greater than its parent 44, and after that swap also greater than 77, it is swapped
upward until it becomes the root.

Step 7: Finally, the element 66 is added in the last free position of the bottom level. Since 66 is
less than its parent 77, no swap is needed, and the Max Heap is complete.
Deletion in Heap Tree

1. Select the element to be deleted.
2. Swap it with the last element.
3. Remove the last element.
4. Heapify the tree to restore the heap property.

Final Tree
Example:

// Priority Queue implementation in C

#include <stdio.h>

int size = 0;

void swap(int *a, int *b) {
  int temp = *b;
  *b = *a;
  *a = temp;
}

// Function to heapify the tree
void heapify(int array[], int size, int i) {
  if (size == 1) {
    printf("Single element in the heap");
  } else {
    // Find the largest among root, left child and right child
    int largest = i;
    int l = 2 * i + 1;
    int r = 2 * i + 2;

    if (l < size && array[l] > array[largest])
      largest = l;
    if (r < size && array[r] > array[largest])
      largest = r;

    // Swap and continue heapifying if root is not largest
    if (largest != i) {
      swap(&array[i], &array[largest]);
      heapify(array, size, largest);
    }
  }
}

// Function to insert an element into the tree
void insert(int array[], int newNum) {
  if (size == 0) {
    array[0] = newNum;
    size += 1;
  } else {
    array[size] = newNum;
    size += 1;
    for (int i = size / 2 - 1; i >= 0; i--) {
      heapify(array, size, i);
    }
  }
}

// Function to delete an element from the tree
void deleteRoot(int array[], int num) {
  int i;
  for (i = 0; i < size; i++) {
    if (num == array[i])
      break;
  }

  swap(&array[i], &array[size - 1]);
  size -= 1;
  for (int i = size / 2 - 1; i >= 0; i--) {
    heapify(array, size, i);
  }
}

// Print the array
void printArray(int array[], int size) {
  for (int i = 0; i < size; ++i)
    printf("%d ", array[i]);
  printf("\n");
}

// Driver code
int main() {
  int array[10];

  insert(array, 3);
  insert(array, 4);
  insert(array, 9);
  insert(array, 5);
  insert(array, 2);

  printf("Max-Heap array: ");
  printArray(array, size);

  deleteRoot(array, 4);

  printf("After deleting an element: ");
  printArray(array, size);

  return 0;
}

Output:

Max-Heap array: 9 5 4 3 2

After deleting an element: 9 5 2 3

KEY TAKEAWAYS

● A priority queue is a unique form of queue that assigns a priority value to each element.
● Elements are then retrieved from the queue based on their priority, with higher-priority
items being served first.
● In the event that elements share the same priority, they are served in the order they
were added to the queue.
TYPES OF QUEUE
SUB LESSON 5.5

DOUBLE ENDED QUEUE

A deque, also known as a double-ended queue, is a data structure that allows the insertion and
removal of elements at both the front and the rear. Unlike a traditional queue, a dequeue does
not strictly follow the FIFO (First-In-First-Out) rule.

Operations on a Deque

Here's an example of a circular array implementation of the deque

In a circular array implementation, when the array becomes full, the insertion of new elements
starts from the beginning of the array, creating a circular behavior.

However, in a linear array implementation, if the array becomes full, further insertion of
elements is not possible. In such cases, an "overflow message" is typically thrown to indicate
that the array is full and no more elements can be inserted.

To perform the following operations, the following steps are typically followed:

1. Initialize an array (deque) of size n.
2. Set two pointers, front and rear, to indicate the positions in the deque.
   ● Initially, set front to -1 and rear to 0.

These initial steps set up the structure for subsequent operations on the deque data structure.
Fig: Initialize an array and pointers for deque

1. Insert at the Front

This operation adds an element at the front.

Check the position of front.

If front < 1, reinitialize front = n-1 (last index).


Else, decrease front by 1.

Add the new key 5 into array[front].

2. Insert at the Rear

This operation adds an element to the rear.

Check if the array is full.


If the deque is full, reinitialize rear = 0.

Else, increase rear by 1.

Add the new key 5 into array[rear].


3. Delete from the Front

The operation deletes an element from the front.

Check if the deque is empty.

If the deque is empty (i.e. front = -1), deletion cannot be performed (underflow condition).

If the deque has only one element (i.e. front = rear), set front = -1 and rear = -1.

Else if front is at the end (i.e. front = n - 1), wrap around to the beginning: set front = 0.

Else, front = front + 1.


4. Delete from the Rear

This operation deletes an element from the rear.

Check if the deque is empty.

If the deque is empty (i.e. front = -1), deletion cannot be performed (underflow condition).

If the deque has only one element (i.e. front = rear), set front = -1 and rear = -1, else follow
the steps below.

If rear is at the front (i.e. rear = 0), wrap around to the end: set rear = n - 1.

Else, rear = rear - 1.


5. Check Empty

This operation checks if the deque is empty. If front = -1, the deque is empty.

6. Check Full

This operation checks if the deque is full. If front = 0 and rear = n - 1 OR front = rear + 1, the
deque is full.

Example :

#include <stdio.h>
#include <stdlib.h>

// Maximum size of the deque
#define MAX_SIZE 100

// Global variables
int deque[MAX_SIZE];
int front = -1;
int rear = -1;

// Function to check if the deque is empty
int isEmpty() {
    return (front == -1 && rear == -1);
}

// Function to check if the deque is full
int isFull() {
    return ((rear + 1) % MAX_SIZE == front);
}

// Function to insert an element at the front of the deque
void insertFront(int data) {
    if (isFull()) {
        printf("Deque is full. Cannot insert element.\n");
        return;
    }
    if (isEmpty()) {
        front = rear = 0;
    } else {
        front = (front - 1 + MAX_SIZE) % MAX_SIZE;
    }
    deque[front] = data;
    printf("Element %d inserted at the front.\n", data);
}

// Function to insert an element at the rear of the deque
void insertRear(int data) {
    if (isFull()) {
        printf("Deque is full. Cannot insert element.\n");
        return;
    }
    if (isEmpty()) {
        front = rear = 0;
    } else {
        rear = (rear + 1) % MAX_SIZE;
    }
    deque[rear] = data;
    printf("Element %d inserted at the rear.\n", data);
}

// Function to delete an element from the front of the deque
void deleteFront() {
    if (isEmpty()) {
        printf("Deque is empty. Cannot delete element.\n");
        return;
    }
    if (front == rear) {
        printf("Element %d deleted from the front.\n", deque[front]);
        front = rear = -1;
    } else {
        printf("Element %d deleted from the front.\n", deque[front]);
        front = (front + 1) % MAX_SIZE;
    }
}

// Function to delete an element from the rear of the deque
void deleteRear() {
    if (isEmpty()) {
        printf("Deque is empty. Cannot delete element.\n");
        return;
    }
    if (front == rear) {
        printf("Element %d deleted from the rear.\n", deque[rear]);
        front = rear = -1;
    } else {
        printf("Element %d deleted from the rear.\n", deque[rear]);
        rear = (rear - 1 + MAX_SIZE) % MAX_SIZE;
    }
}

// Function to get the front element of the deque
int getFront() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return -1;
    }
    return deque[front];
}

// Function to get the rear element of the deque
int getRear() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return -1;
    }
    return deque[rear];
}

// Function to display the elements of the deque
void display() {
    if (isEmpty()) {
        printf("Deque is empty.\n");
        return;
    }
    int i = front;
    printf("Elements in the deque: ");
    while (i != rear) {
        printf("%d ", deque[i]);
        i = (i + 1) % MAX_SIZE;
    }
    printf("%d\n", deque[rear]);
}

// Main function
int main() {
    insertFront(1);
    insertRear(2);
    insertFront(3);
    insertRear(4);
    display();

    deleteFront();
    deleteRear();
    display();

    int frontElement = getFront();
    int rearElement = getRear();
    printf("Front element: %d\n", frontElement);
    printf("Rear element: %d\n", rearElement);

    return 0;
}

Output:

Element 1 inserted at the front.

Element 2 inserted at the rear.

Element 3 inserted at the front.

Element 4 inserted at the rear.

Elements in the deque: 3 1 2 4

Element 3 deleted from the front.

Element 4 deleted from the rear.

Elements in the deque: 1 2

Front element: 1

Rear element: 2

KEY TAKEAWAYS

● A deque, also known as a double-ended queue, is a data structure that allows the
insertion and removal of elements at both the front and the rear.
● Deque does not strictly follow the FIFO (First-In-First-Out) rule.
LINKED LIST
SUB LESSON 6.1

BASICS OF LINKED LIST

A linked list is a fundamental data structure used to store a collection of elements. It consists of
a sequence of nodes, where each node contains a data element and a reference (or pointer) to
the next node in the sequence. The last node in the list typically has a null reference, indicating
the end of the list.

DATA ELEMENT: This part stores the actual information of the node, which can be of any data
type.

Example

int age;
char name[20];

REFERENCE TO THE NEXT NODE: It will store the address of the next node.

Head Node - Starting node of a linked list.

Last Node - Node with reference pointer as NULL.


HERE ARE SOME BASIC CONCEPTS AND OPERATIONS ASSOCIATED WITH LINKED LISTS:

Node: A node is a basic unit of a linked list. It contains two fields: the data field to store the
element and the next field to hold the reference to the next node.

Head: The head of a linked list refers to the first node in the list. It serves as the starting point to
access the elements in the list.

Singly Linked List: In a singly linked list, each node only has a reference to the next node.
Traversing the list is only possible in one direction, from the head to the tail.

Doubly Linked List: In a doubly linked list, each node has a reference to both the next node and
the previous node. This allows traversal in both directions.

Circular Linked List : A circular linked list is a data structure where each node contains a
reference to the next node, and the last node points back to the first node, creating a circular
structure.

Advantages of Linked Lists:

Dynamic size: Linked lists can grow or shrink in size as elements are added or removed, unlike
arrays which have a fixed size.

Insertion and deletion: Insertion and deletion operations can be more efficient in linked lists
compared to arrays because they don't require shifting elements.

Flexibility: Linked lists allow efficient manipulation of elements, such as inserting or deleting
nodes, at any position in the list.

Disadvantages of Linked Lists:

Random access: Linked lists do not provide direct access to arbitrary elements like arrays do.
Accessing an element at a specific index requires traversing the list from the head.

Extra memory: Linked lists require additional memory to store the references/pointers to the
next nodes.

Linked lists are commonly used in various applications and are the basis for more complex data
structures like stacks, queues, and hash tables. Understanding the basics of linked lists is crucial
for mastering data structures and algorithms.
CHOOSING AN APPROPRIATE DATA TYPE FOR THE LINKED LIST:

As mentioned previously, every node in a linked list consists of two components.

DATA ELEMENT : It can accommodate any data type, such as integers, characters, floats,
doubles, and so on.

REFERENCE(POINTER): The next part of a node is a pointer that stores the address of the
following node, making it a pointer type.

In this scenario, there is a requirement to organize and combine two distinct data types,
resulting in a heterogeneous structure.

To group different data types, a common approach is to use a structure (struct) that contains
members of different data types.

Therefore, each node in a linked list is of the structure data type, as it encapsulates multiple
data fields representing the elements within a node.
Difference Between Arrays & Linked List

1. Storage: Arrays are a collection of similar types of data elements stored in contiguous
memory locations. Linked lists are a collection of data values stored in random
(non-contiguous) order.

2. Dependence: Elements of an array are independent of each other. Elements of a linked list
are dependent on each other, since each node holds the reference to the next.

3. Size: Arrays have a static size; the memory size is fixed and cannot be changed at run time.
Linked lists have a dynamic size; the memory size is not fixed and can be changed during run
time.

4. Memory section: For arrays, memory is allocated in the stack section. For linked lists,
memory is allocated in the heap section.

5. Allocation time: Array memory is allocated at compile time. Linked list memory is allocated
at run time.

6. Access speed: Elements in an array can be accessed faster by their index. Accessing elements
in a linked list is comparatively slower, as the search has to traverse the list from the 1st
element to find an element.

7. Implementation: Arrays are comparatively easier to implement. Linked lists are harder to
implement, as they are prone to memory leaks, segmentation faults, etc.

8. Memory utilization: Array memory utilization is inefficient, as memory declared at compile
time can be left unused. Linked list memory utilization is optimized, as memory is
allocated/deallocated based on the requirements during run time.

9. Complexity: Arrays — access an element O(1), insert an element at the beginning O(n),
insert an element at the end O(n). Linked lists — access an element O(n), insert an element at
the beginning O(1), insert an element at the end O(n).

10. Variants: Arrays can be single/two/multi-dimensional. Linked lists can be
singly/doubly/circular lists.

KEY TAKEAWAYS

• Elements present in Linked Lists are dependent on each other.


• Memory is allocated at the run time.
• Linked Lists can be Singly/Doubly/Circular lists.
• Accessing elements is comparatively slower than arrays, as the search function has to traverse
the list from the 1st element to find an element.
LINKED LIST
SUB LESSON 6.2

REPRESENTATION OF LINKED LIST

A linked list can be visualized as a sequential chain of nodes, with each node pointing to the
next node in the sequence.

SAMPLE LINKED LIST NODE :

struct node
{
    int data;
    struct node *next;
};

where,

data - used to store the integer information.

struct node *next - The next part of a node is utilized to reference the subsequent node, storing
the address of the next node in the linked list.

LINKING EACH NODES :

headnode -> middlenode-> lastnode-> NULL


head->next = middle;

middle->next = last;

last->next = NULL;

LET'S CREATE AND ALLOCATE MEMORY FOR 3 NODES :

struct node *head,*middle,*last;

head = malloc(sizeof(struct node));

middle = malloc(sizeof(struct node));

last = malloc(sizeof(struct node));

ASSIGN VALUES TO EACH NODE :

head->data = 10;

middle->data = 20;

last->data = 30;

OPERATIONS OF LINKED LIST

These are the basic operations we can perform on a linked list:

Insertion: Nodes can be inserted at the beginning, end, or at any position in the linked list.

Deletion: Nodes can be removed from the list by updating the references of neighboring nodes.

Search: The list can be searched for a specific element by traversing through the nodes until the
element is found or the end of the list is reached.

Traversal: The list can be traversed from the head to the tail, accessing each node's data.

CODE FOR UNDERSTANDING OPERATIONS OF LINKED LIST :


#include <stdio.h>
#include <stdlib.h>

struct Node
{
    int data;
    struct Node *next;
};

void deleteStart(struct Node **head)
{
    struct Node *temp = *head;

    // if there are no nodes in the Linked List, we can't delete
    if (*head == NULL)
    {
        printf("Linked List Empty, nothing to delete");
        return;
    }

    // move head to next node
    *head = (*head)->next;

    printf("\n%d deleted\n", temp->data);
    free(temp);
}

void insertStart(struct Node **head, int data)
{
    // dynamically create memory for this newNode
    struct Node *newNode = (struct Node *) malloc(sizeof(struct Node));

    // assign data value
    newNode->data = data;

    // change the next node of this newNode
    // to current head of Linked List
    newNode->next = *head;

    // re-assign head to this newNode
    *head = newNode;

    printf("\n%d Inserted\n", newNode->data);
}

void display(struct Node *node)
{
    printf("\nLinked List: ");

    // the linked list ends when the node is NULL
    while (node != NULL)
    {
        printf("%d ", node->data);
        node = node->next;
    }
    printf("\n");
}

int main()
{
    struct Node *head = NULL;

    // Need '&' i.e. address as we need to change head
    insertStart(&head, 100);
    insertStart(&head, 80);
    insertStart(&head, 60);
    insertStart(&head, 40);
    insertStart(&head, 20);

    // No need for '&' as display does not change head
    display(head);

    deleteStart(&head);
    deleteStart(&head);

    display(head);

    return 0;
}

OUTPUT :

100 Inserted

80 Inserted

60 Inserted

40 Inserted

20 Inserted

Linked List: 20 40 60 80 100

20 deleted

40 deleted

Linked List: 60 80 100


KEY TAKEAWAYS

● A linked list is a linear data structure consisting of nodes where each node contains a
data element and a reference (link) to the next node in the sequence.
● A node in a linked list typically consists of two parts: the data part, which stores the
actual data, and the next part, which is a reference to the next node in the list.
TYPES OF LINKED LIST
SUB LESSON 7.1

TYPES OF LINKED LIST

There are three basic types of linked list:

1. Singly Linked List


2. Doubly Linked List
3. Circular Linked List

Singly Linked List

The singly linked list is a linear data structure and the most common type, where each node
contains data and a pointer(address) to the next node.

A singly linked list is a type of linked list that allows traversal in only one direction: you can
only traverse the list in a forward direction, starting from the head (first node) and progressing
through the list until the last node, which points to NULL.

Let's consider a scenario where we have three nodes with addresses 100, 200, and 300. The
representation of these three nodes as a linked list can be visualized as follows:

In this particular example, the first node holds the address of the next node, which is 200. The
second node, in turn, contains the address of the last node, which is 300. Lastly, the third node
has a NULL value in its address field, indicating that it does not point to any other node. It is
worth noting that the pointer that stores the address of the initial node is commonly referred
to as the head pointer.

A node is typically represented as follows:

struct node {
    int data;
    struct node *next;
};

DOUBLY LINKED LIST

A doubly linked list is a linear data structure where each node contains three components: a
data element, a pointer to the previous node, and a pointer to the next node.

This structure allows for traversal in both directions, forward and backward; that is, the list is
bidirectional.

The data part of the node holds the actual data value, while the previous pointer points to the
preceding node in the list, and the next pointer points to the subsequent node.

A doubly linked list can be viewed as two singly linked lists over the same nodes, one chaining
forward through the next pointers and one chaining backward through the prev pointers. This
structure is employed to store data in a manner that facilitates rapid insertion and deletion of
elements.

Let's consider a scenario where we have three nodes with addresses 100, 200, and 300,
respectively. The representation of these nodes in a doubly linked list can be visualized as
follows:
In the above representation, we can observe that each node in a doubly-linked list contains two
address components. One component stores the address of the next node, while the other
component stores the address of the previous node. The initial node in the doubly linked list
has a NULL value in the address part that corresponds to the previous node, indicating that it is
the starting point of the list and has no previous node.

A node is typically represented as follows:

struct node {
    int data;
    struct node *next;
    struct node *prev;
};

CIRCULAR LINKED LIST

In a circular linked list, the last node is connected to the first node, creating a circular structure.
As a result, the link part of the last node contains the address of the first node in the list.

A circular linked list does not have a distinct beginning or end. It can be visualized as a ring of
nodes.

In a circular doubly linked list, it is possible to traverse in both directions, forward and
backward; a circular singly linked list is traversed forward only, although every node can still be
reached by continuing around the ring. Data can be added to and removed from a circular linked
list at any point in time.
A circular linked list can be implemented either as a singly linked list or as a doubly linked list.

In a singly linked circular list, the next pointer of the last item (node) points back to the first
item in the list

In a doubly linked circular list, the prev pointer of the first item (node) points to the last item in
the list.

The representation of a circular linked list is similar to that of a singly linked list. It forms a
circular structure where the last node points back to the first node. This is depicted in the figure
below:

A node is typically represented as follows:

struct node
{
    int data;
    struct node *next;
};
KEY TAKEAWAYS

● The singly linked list is a linear data structure and the most common type, where each
node contains data and a pointer(address) to the next node.
● A doubly linked list is a linear data structure where each node contains three
components: a data element, a pointer to the previous node, and a pointer to the next
node.
● In a circular linked list, the last node is connected to the first node, creating a circular
structure. As a result, the link part of the last node contains the address of the first node
in the list.
TYPES OF LINKED LIST
SUB LESSON 7.2

SINGLY LINKED LIST

A singly linked list is a linear data structure consisting of a sequence of nodes, where each node
contains a value and a reference to the next node in the list. It forms a chain-like structure
where data elements are connected in a forward direction and can be traversed in the same
forward direction.

The nodes are not stored in a contiguous block of memory, but instead, each node holds the
address of the next node in the list.

Singly-linked lists can dynamically grow or shrink in size as elements are added or removed. This
flexibility makes them suitable for scenarios where the number of elements may change over
time.

To access an element in a singly linked list, you need to traverse the list from the head node to
the desired position. This process has a time complexity of O(n), where n is the number of
elements in the list. Random access to elements by an index is not efficient in singly linked lists.

Singly-linked lists are used in various applications and algorithms. They are commonly
employed for implementing stacks, queues, hash tables, and graph algorithms.
SINGLY LINKED LIST COMPLEXITY

The time complexity of a singly linked list depends on the operation: access and search are O(n),
insertion or deletion at the beginning is O(1), and insertion or deletion at the end is O(n)
unless a separate tail pointer is maintained.

The space complexity of a singly linked list is O(n) as it requires memory allocation for each
individual node. The space complexity is proportional to the number of nodes in the list.

MEMORY REPRESENTATION OF SINGLY LINKED LIST

Let's consider four elements to insert into the list.

In this program, we have four nodes to insert into the list. Each node consists of two parts: the
data part, which stores an integer value, and the address part, represented by the next pointer,
which holds the address of the next node.

The singly linked list starts with a special node called the head node, which holds the address of
the first node in the list. The last node in the list points to NULL to indicate the end of the list.

In a singly linked list, each node connects with the next node through a pointer that points to
the address of the next node, and arrows in the above-given diagram represent that.
CODE TO IMPLEMENT A SINGLY LINKED LIST

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

void display(struct Node* ptr);

int main()
{
    struct Node* first;
    struct Node* second;
    struct Node* third;
    struct Node* fourth;

    first = (struct Node*)malloc(sizeof(struct Node));
    second = (struct Node*)malloc(sizeof(struct Node));
    third = (struct Node*)malloc(sizeof(struct Node));
    fourth = (struct Node*)malloc(sizeof(struct Node));

    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;

    first->next = second;
    second->next = third;
    third->next = fourth;
    fourth->next = NULL;

    display(first);

    return 0;
}

void display(struct Node* ptr)
{
    while (ptr != NULL) {
        printf(" %d ", ptr->data);
        ptr = ptr->next;
    }
}

OUTPUT :

10 20 30 40

KEY TAKEAWAYS

• A singly linked list is a linear data structure consisting of a sequence of nodes, where
each node contains a value and a reference to the next node in the list.
• It forms a chain-like structure where data elements are connected in a forward direction
and can be traversed in the same forward direction.
TYPES OF LINKED LIST
SUB LESSON 7.3

DOUBLY LINKED LIST

A doubly linked list is a data structure that consists of a sequence of nodes, where each node
contains data and two pointers: one pointing to the previous node and one pointing to the next
node. This allows for bidirectional traversal, meaning we can navigate both forward and
backward in the list.

Doubly linked lists require additional memory compared to singly linked lists because each node
has to store references to both the previous and next nodes.

Doubly linked lists provide flexibility in accessing and manipulating the list in both forward and
backward directions, making them useful in scenarios where bidirectional traversal is required
or when efficient insertion and deletion operations are necessary.

In a doubly linked list, the presence of two pointers, prev and next, requires additional steps to
be taken in certain operations.

Doubly linked lists are used as building blocks for other complex data structures like stacks,
queues, and associative arrays.

DOUBLY LINKED LIST COMPLEXITY

The time complexity of basic operations in a doubly linked list can be summarized as follows:

Searching: O(n)

Insertion/Deletion at the beginning: O(1)

Insertion/Deletion at the end: O(1)

Insertion/Deletion at a specific position: O(n)


Traversal: O(n)

These time complexities represent the worst-case scenario in terms of the number of elements (n) in the
doubly linked list.

The space complexity of a doubly linked list is O(n), where n is the number of nodes in the list. Each
node requires memory to store its data and pointers to the previous and next nodes.

MEMORY REPRESENTATION OF DOUBLY LINKED LIST

In this program, we consider three elements to insert into the list. Each node in the linked list consists of
two parts: the data part, which stores an integer value, and the address parts of the previous and next
nodes, represented by the prev and next pointers, respectively, which allows bidirectional traversal.

The nodes may be stored at random addresses in memory, but their logical connection is maintained
through the prev and next pointers.

The address of the first node in the linked list is stored in a special node called the head node.

In the doubly linked list, the first node's prev pointer is set to NULL, indicating that there are no nodes
before it. Similarly, the last node's next pointer is set to NULL, indicating that there are no nodes after it.

These connections allow for efficient traversal in both directions, forward and backward, through the
linked list. The arrows in the diagram represent these connections between nodes.

CODE TO IMPLEMENT A DOUBLY LINKED LIST

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* prev;
    struct Node* next;
};

void display(struct Node* ptr);

int main()
{
    struct Node* head;
    struct Node* first = NULL;
    struct Node* second = NULL;
    struct Node* third = NULL;
    struct Node* fourth = NULL;

    first = (struct Node*)malloc(sizeof(struct Node));
    second = (struct Node*)malloc(sizeof(struct Node));
    third = (struct Node*)malloc(sizeof(struct Node));
    fourth = (struct Node*)malloc(sizeof(struct Node));

    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;

    first->next = second;
    first->prev = NULL;
    second->next = third;
    second->prev = first;
    third->next = fourth;
    third->prev = second;
    fourth->next = NULL;
    fourth->prev = third;

    head = first;

    display(head);

    return 0;
}

void display(struct Node* ptr)
{
    printf("The doubly linked list elements are:\n");
    while (ptr != NULL) {
        printf("%4d ", ptr->data);
        ptr = ptr->next;
    }
}

OUTPUT :

The doubly linked list elements are:

10 20 30 40

KEY TAKEAWAYS

• A doubly linked list is a data structure that consists of a sequence of nodes, where each
node contains data and two pointers: one pointing to the previous node and one
pointing to the next node.
• Doubly linked lists require additional memory compared to singly linked lists because
each node has to store references to both the previous and next nodes.
TYPES OF LINKED LIST
SUB LESSON 7.4

CIRCULAR LINKED LIST

A circular linked list is characterized by a connection between the first and last nodes, forming a
circular structure. There is no concept of a NULL pointer indicating the end of the list.

It allows flexibility in setting the starting point, which can be any node within the list.

Traversal from the first node to the last node in a circular linked list is efficient.

In a circular linked list, determining the end of the list and controlling the looping can be more
challenging compared to a linear linked list.

Directly accessing individual nodes in a circular linked list is not readily available.

A circular linked list can be used to manage multiple running applications, where each
application is represented by a node in the circular linked list.

A circular linked list is particularly useful for implementing queues, trees or graphs. Unlike
other implementations, a circular linked list eliminates the need for maintaining separate
pointers for the front and rear. By keeping a pointer to the last inserted node, we can easily
determine the front by accessing the next node of the last inserted one.

Circular linked lists can be classified into two main types:

1. Circular Singly Linked List

In this type of circular linked list, the address of the last node points to the address of the first
node.
2. Circular Doubly Linked List

In this particular type of circular linked list, both the last node and the first node contain
pointers that reference each other.

CIRCULAR LINKED LIST COMPLEXITY

The time complexity of a circular linked list is typically determined by the number of nodes and
the specific operation being performed.

The space complexity of a circular linked list is the same as that of a regular singly linked list,
which is O(n). It requires space to store the data and pointers for each node in the list.
MEMORY REPRESENTATION OF CIRCULAR LINKED LIST

Let's start by discussing the addition of four elements to a linked list. To accomplish this, we
create four nodes, each containing both data and address information, which are stored at
random addresses. In a singly linked list, the last node's next pointer typically points to NULL,
indicating the end of the list. However, in the case of a circular singly linked list, there is no NULL
pointer, since the last node's next pointer loops back to the first node, creating a circular
structure.

In a circular singly linked list, the last node's next pointer stores the address of the first node.
This means that the tail node's address points to the head node of the linked list, creating a
circular connection, and the arrows in this diagram represent that.

CODE TO IMPLEMENT A CIRCULAR LINKED LIST

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

void display(struct Node* last_node);

int main()
{
    struct Node* last;
    struct Node* first;
    struct Node* second;
    struct Node* third;
    struct Node* fourth;

    first = (struct Node*)malloc(sizeof(struct Node));
    second = (struct Node*)malloc(sizeof(struct Node));
    third = (struct Node*)malloc(sizeof(struct Node));
    fourth = (struct Node*)malloc(sizeof(struct Node));

    first->data = 10;
    second->data = 20;
    third->data = 30;
    fourth->data = 40;

    first->next = second;
    second->next = third;
    third->next = fourth;
    fourth->next = first;

    last = fourth;

    display(last);

    return 0;
}

void display(struct Node* last_node)
{
    struct Node* ptr;

    if (last_node == NULL) {
        printf("The list is empty");
        return;
    }

    ptr = last_node->next;
    do {
        printf("%d ", ptr->data);
        ptr = ptr->next;
    } while (ptr != last_node->next);
}

OUTPUT :

10 20 30 40

KEY TAKEAWAYS

● A circular linked list is characterized by a connection between the first and last nodes,
forming a circular structure. There is no concept of a NULL pointer indicating the end of
the list.
● A circular linked list can be used to manage multiple running applications, where each
application is represented by a node in the circular linked list.
TREE
SUB LESSON 8.1

INTRODUCTION TO TREE

A tree is a hierarchical data structure that consists of nodes connected by edges. It is a


nonlinear structure that exhibits hierarchical relationships, similar to the parts of a tree in the
real world. In data structures, trees are commonly used to represent relationships and organize
data efficiently.

Tree Node

A node is a fundamental component of a tree data structure. It represents an entity or a data


item within the tree. Each node may have zero or more child nodes, except for the topmost
node called the root. Nodes are connected to each other through edges.

The tree data structure is a specialized approach to efficiently organize and store data in a
computer system. It comprises a central node, structural nodes, and sub-nodes that are
interconnected through edges. It can be associated with a tree with roots, branches, and
leaves, where all the components are interconnected. By utilizing this structure, data can be
managed and accessed effectively.
In a tree structure, the root serves as the central node, and it can be connected to zero or
multiple subtrees, denoted as T1, T2, ..., Tk. Each subtree is associated with an edge that
connects the root of the tree to the root of the corresponding subtree.

WHY TREE IS CONSIDERED A NON-LINEAR DATA STRUCTURE

The tree is considered a non-linear data structure because the data elements it contains are not
stored sequentially or linearly. Unlike linear data structures such as arrays or linked lists, where
elements are stored one after another, a tree organizes its data in a hierarchical manner with
multiple levels.

In a tree, each element (node) can have zero or more child nodes, forming a branching
structure. This hierarchical arrangement allows for efficient organization and representation of
data, as well as the establishment of relationships between different elements.

The non-linear nature of a tree arises from the fact that elements are not constrained to a
linear sequence, but rather can have multiple connections and form a branching structure. This
hierarchy enables efficient searching, insertion, and retrieval operations, making trees suitable
for various applications in computer science and data processing.

TREE TRAVERSAL

Tree traversal is a fundamental operation that involves visiting every node in a tree data
structure exactly once. It plays a crucial role in computer science and various algorithms,
enabling operations and retrieval of information stored within the tree. Traversing a tree can be
accomplished using different techniques, with three commonly used methods being in-order
traversal, pre-order traversal, and post-order traversal.

For the given Tree we are performing the in-order traversal, pre-order traversal, and post-order
traversal.
Inorder traversal - The described technique follows the "left-root-right" policy, which
corresponds to the in-order traversal method. In in-order traversal, the left subtree is visited
first (traversed recursively), followed by the root node, and finally, the right subtree (also
traversed recursively). The name "in-order" indicates that the root node is traversed between
the left and right subtrees.

To perform an in-order traversal of a tree, we start from the root node (A) and visit its left
subtree (B) in an in-order manner. The process continues recursively until all the nodes are
visited. The resulting output lists the nodes in left-root-right order (for a binary search tree,
this order is ascending).

Final Output - D → B → E → A → F → C → G

Preorder traversal - The described technique follows the "root-left-right" policy, which
corresponds to the pre-order traversal method. In pre-order traversal, the root node is visited
first, followed by the left subtree (visited recursively), and finally, the right subtree (visited
recursively). The name "pre-order" indicates that the root node is traversed before the left and
right subtrees.

To perform a pre-order traversal of a tree, we start from the root node (A) and visit it first.
Then, we move to its left subtree (B) and traverse it in a pre-order manner. The process
continues recursively until all the nodes are visited. The resulting output of the pre-order
traversal will be the values of the nodes in the order they are visited.

Final Output - A → B → D → E → C → F → G
Postorder traversal - The described technique follows the "left-right-root" policy, which
corresponds to the post-order traversal method. In post-order traversal, the left subtree is
traversed first (recursively), followed by the right subtree (also recursively), and finally, the root
node is traversed. The name "post-order" indicates that the root node is traversed after the left
and right subtrees.

To perform a post-order traversal of a tree, we start from the root node (A) and visit its left
subtree (B) in a post-order manner. Then, we move to the right subtree and traverse it in a
post-order manner as well. Finally, we visit the root node itself. The process continues
recursively until all the nodes are visited. The resulting output of the post-order traversal will be
the values of the nodes in the order they are visited.

Final Output - D → E → B → F → G → C → A

Example:

#include <stdio.h>
#include <stdlib.h>

// Structure for a binary tree node
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

// Function to create a new node
struct Node* createNode(int data) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

// Pre-order traversal function
void preOrderTraversal(struct Node* node) {
    if (node != NULL) {
        printf("%d ", node->data);        // Visit the node
        preOrderTraversal(node->left);    // Traverse the left subtree
        preOrderTraversal(node->right);   // Traverse the right subtree
    }
}

// Post-order traversal function
void postOrderTraversal(struct Node* node) {
    if (node != NULL) {
        postOrderTraversal(node->left);   // Traverse the left subtree
        postOrderTraversal(node->right);  // Traverse the right subtree
        printf("%d ", node->data);        // Visit the node
    }
}

// In-order traversal function
void inOrderTraversal(struct Node* node) {
    if (node != NULL) {
        inOrderTraversal(node->left);     // Traverse the left subtree
        printf("%d ", node->data);        // Visit the node
        inOrderTraversal(node->right);    // Traverse the right subtree
    }
}

// Main function
int main() {
    // Creating a binary tree
    struct Node* root = createNode(1);
    root->left = createNode(2);
    root->right = createNode(3);
    root->left->left = createNode(4);
    root->left->right = createNode(5);

    printf("Pre-order traversal of the binary tree: ");
    preOrderTraversal(root);
    printf("\n");

    printf("Post-order traversal of the binary tree: ");
    postOrderTraversal(root);
    printf("\n");

    printf("In-order traversal of the binary tree: ");
    inOrderTraversal(root);
    printf("\n");

    return 0;
}

Output:

Pre-order traversal of the binary tree: 1 2 4 5 3

Post-order traversal of the binary tree: 4 5 2 3 1

In-order traversal of the binary tree: 4 2 5 1 3

KEY TAKEAWAYS

• A tree is a hierarchical data structure that consists of nodes connected by edges.


• The tree is considered a non-linear data structure because the data elements it contains
are not stored sequentially or linearly.
• It comprises a central node, structural nodes, and sub-nodes that are interconnected
through edges.
• It can be associated with a tree with roots, branches, and leaves, where all the
components are interconnected.
• Tree traversal is a fundamental operation that involves visiting every node in a tree data
structure exactly once.
• Traversing a tree can be accomplished using different techniques, with three commonly
used methods being in-order traversal, pre-order traversal, and post-order traversal.
TREE
SUB LESSON 8.2

BASIC TERMINOLOGIES OF TREE

Some basic terms used in the tree data structure are given below.

Node

A node is a fundamental component of a tree data structure. It represents an entity that


contains a key or value and holds pointers to its child nodes.

The leaf nodes, also known as external nodes, are located at the ends of each path and do not
possess any links or pointers to child nodes.

On the other hand, an internal node is a node that has at least one child node connected to it.

Edge

It serves as the connection or link between any two nodes within the tree structure.
Root

The root is the highest or topmost node in a tree.

Height of a Node

The height of a node is defined as the number of edges on the longest path from that node down
to a leaf node. A leaf node therefore has a height of 0, and the height of the tree is the height
of its root.

Depth of a Node

The depth of a node refers to the number of edges in the path from the root node to that
particular node. It represents the level or position of the node within the tree hierarchy.

In the image below, you can observe the height and depth of each node in the tree structure

Degree of a Node

The degree of a node in a tree refers to the total number of branches or child nodes connected
to that particular node.
Forest

A forest is a collection of disjoint or separate trees. In other words, it refers to a set of trees
where there are no connections or edges between the trees within the collection. Each
individual tree in the forest retains its own hierarchical structure, with its own root and set of
nodes.

Creating a forest involves disconnecting the root node of a tree or removing the root node
entirely. When the root node is cut or removed from a tree, the resulting disconnected parts
are considered individual trees, and together they form a forest. Each disconnected part retains
its own tree structure and can be considered a separate tree within the forest.

KEY TAKEAWAYS

• The root is the highest or topmost node in a tree.


• Edge is the connection or link between any two nodes
• A forest is a collection of disjoint or separate trees.
• You can create a forest by cutting the root of a tree.
TREE
SUB LESSON 8.3

TYPES OF TREE

There are various types of trees in data structures, each with its own characteristics and
applications. Some commonly encountered types of trees include:

1. Binary Tree: A binary tree is a tree in which each node can have at most two child nodes,
typically referred to as the left child and the right child.
2. Binary Search Tree (BST): A binary search tree is a type of binary tree where the nodes
are arranged in a specific order. The left child of a node contains a value smaller than
the node's value, and the right child contains a value greater than the node's value. This
arrangement allows for efficient searching, insertion, and deletion operations.
3. AVL Tree: An AVL tree is a self-balancing binary search tree. It maintains a balance factor
for each node to ensure that the height difference between its left and right subtrees is
at most 1. This balancing mechanism helps maintain efficient search and insertion
operations.

BINARY TREE

A binary tree is a type of tree data structure where each parent node can have a maximum of
two children. In a binary tree, each node is composed of three components:

1. Data item: It represents the value or information stored in the node.


2. Address of the left child: It points to the memory location or address of the left child
node, which is one of the two children of the parent node.
3. Address of the right child: It indicates the memory location or address of the right child
node, which is the other child of the parent node.
TYPES OF BINARY TREE

1. Full Binary Tree

A full binary tree is a specific type of binary tree where every internal (non-leaf) node
has either two children or no children at all. In other words, each internal node in a full
binary tree is either a leaf node (with no children) or has exactly two child nodes.
2. Perfect Binary Tree

A perfect binary tree is a specific type of binary tree where every internal (non-leaf)
node has exactly two child nodes, and all the leaf nodes are at the same level or depth.
3. Complete Binary Tree

A complete binary tree is a type of binary tree that shares similarities with a full binary
tree, but with two distinct differences:

Every level must be completely filled: In a complete binary tree, all levels, except
possibly the last one, must be completely filled with nodes. This means that every node
at each level, except the last, must have two children. The last level may not be
completely filled, but all the nodes in that level must be positioned as far left as possible.

Leaf elements lean towards the left: In a complete binary tree, all the leaf nodes
(bottom-most nodes) are positioned towards the left side of the tree. This means that
there should be no gap between the leaf nodes in the last level on the left side.

BINARY SEARCH TREE

A binary search tree (BST) is a data structure used for efficiently maintaining a sorted list of
numbers. The term "binary" in binary search tree refers to the fact that each node in the tree
can have a maximum of two children.
A binary search tree (BST) is referred to as a search tree because it provides an efficient way to
check whether a number is present in the tree. In a balanced BST, the search operation runs in
O(log n) time.

A binary search tree (BST) possesses specific properties that distinguish it from a regular binary
tree:

1. All nodes in the left subtree of a node have values that are less than the value of the
root node.
2. All nodes in the right subtree of a node have values that are greater than the value of
the root node.
3. Both the left and right subtrees of each node are themselves binary search trees,
meaning they also adhere to the above two properties.

Example:

OPERATIONS ON BINARY SEARCH TREE:

1. Search Operation

● Binary search tree (BST) allows for efficient search operations.


● Start at the root node and compare the target value with the current node's
value.
● If the target value is equal, the search is successful.
● If the target value is less, move to the left child node.
● If the target value is greater, move to the right child node.
● Repeat the comparison until the target value is found or a null child node is
encountered.
● Time complexity of the search operation is O(log(n)) in a balanced BST.
● In the worst case, when the BST is highly imbalanced, the search can take O(n)
time.
● BST relies on the ordering of nodes to efficiently narrow down the search space.
● If the target value is not found after traversing the entire tree, it is not present in
the BST.

Algorithm

If root == NULL

return NULL;

If number == root->data

return root->data;

If number < root->data

return search(root->left, number)

If number > root->data

return search(root->right, number)

2. Insert Operation

Inserting a value in its correct position within a binary search tree (BST) follows the
same traversal as the search operation. This is because, during insertion, we must
maintain the rule that the left subtree contains values less than the root, while the
right subtree contains values greater than the root.

Algorithm:

If node == NULL
return createNode(data)

if (data < node->data)

node->left = insert(node->left, data);

else if (data > node->data)

node->right = insert(node->right, data);

return node;

To gain a visual understanding of how to add a number to an existing binary search tree
(BST), let's explore the process step by step.

Insert Value 4 in the existing tree

Since the number 4 is smaller than 8, we will traverse through the left child of the node
8.
Since the number 4 is greater than 3, we will traverse through the right child of the node
3.

Since the number 4 is smaller than 6, we will traverse through the left child of the node
6.
We will insert the number 4 as the left child of the node 6.

3. Deletion Operation

Deleting a node from a binary search tree (BST) involves considering three main cases.

Case 1 : The first case for deleting a node from a binary search tree (BST) is when the
node to be deleted is a leaf node. In this scenario, we can simply remove the node from
the tree.
4 is to be deleted

Case 2 : The second case for deleting a node from a binary search tree (BST) occurs
when the node to be deleted has a single child node. In this case, we can follow the
steps below:

1. Replace the node to be deleted with its child node.


2. Remove the child node from its original position in the tree.
6 is to be deleted

Copy the value of its child to the node and delete the child

Final Tree
Case 3 : The third case for deleting a node from a binary search tree (BST) arises
when the node to be deleted has two children. In this scenario, we can follow the
steps below:

1. Find the inorder successor of the node to be deleted.


2. Replace the node with its inorder successor.
3. Remove the inorder successor from its original position in the tree.

3 is to be deleted
Copy the value of the inorder successor (4) to the node

Final Tree

AVL TREE

An AVL tree is a type of self-balancing binary search tree. It incorporates additional information,
known as a balance factor, for each node. The balance factor can have one of three values: -1,
0, or +1.
The balance factor of a node in an AVL tree is determined by calculating the difference between
the height of its left subtree and the height of its right subtree. Mathematically, the balance
factor can be expressed as:

BALANCE FACTOR = HEIGHT(LEFT SUBTREE) – HEIGHT(RIGHT SUBTREE)

The self-balancing property of an AVL tree is maintained by the balance factor. It is essential
that the balance factor of each node is always -1, 0, or +1.

The balancing algorithm of AVL trees typically involves four rotation cases:

1. Left Rotation
2. Right Rotation
3. Left-Right Rotation
4. Right-Left Rotation

1. Left Rotation

If a node is inserted into the right subtree of the right subtree, causing an imbalance in
the tree, a single left rotation is performed

2. Right Rotation

When a node is inserted into the left subtree of the left subtree, it may cause an
imbalance in the AVL tree. In such cases, a single right rotation is performed.
3. Left-Right Rotation

A left-right rotation is a combined operation in which a left rotation is performed first,
followed by a right rotation.

4. Right-Left Rotation

A right-left rotation is a combined operation in which a right rotation is performed first,
followed by a left rotation.
Example:

Let's illustrate the process of inserting elements into an AVL tree by constructing an example
AVL tree with integers from 1 to 7.

We begin by adding the first element, 1, as a node and then evaluate the balance factor, which
in this case is 0.

Since the binary search property and the balance factor are both met, we proceed to insert
another element into the AVL tree.
The balance factors are then calculated: the root's balance factor is -1 (the height of its left
subtree is 0, and the height of its right subtree is 1), and the new node's is 0. As no balance
factor's magnitude exceeds 1, we proceed to add another element to the AVL tree.

Now, upon adding the third element, the magnitude of the root's balance factor exceeds 1 (it
becomes -2, since the right subtree is now taller). As a result, rotations need to be performed.
Likewise, the subsequent elements are inserted and reorganized using these rotations. After
the rearrangement, the resulting tree appears as
KEY TAKEAWAYS

● A binary tree is a tree in which each node can have at most two child nodes, typically
referred to as the left child and the right child.
● A binary search tree is a type of binary tree where the nodes are arranged in a specific
order.
● An AVL tree is a self-balancing binary search tree.
TREE
SUB LESSON 8.4

RED BLACK TREE

A red-black tree is a type of self-balancing binary search tree. It is named after the properties it
maintains, which are represented by colors assigned to each node in the tree: red or black. The
red-black tree guarantees that the height of the tree remains logarithmic, ensuring efficient
operations.

Here are the key properties and rules of a red-black tree:

1. Every node is colored either red or black.


2. The root node is always black.
3. All leaves (NIL or null nodes) are black.
4. If a node is red, both its children are black.
5. Every path from a given node to its descendant leaves contains the same number of
black nodes. This property is known as black height.

PROPERTIES OF RED-BLACK TREE

A Red-Black tree is a self-balancing binary search tree data structure. The term "self-balancing"
indicates that the tree automatically maintains its balance by performing rotations or recoloring
nodes as necessary.

The name "Red-Black" refers to the color assigned to each node in the tree. Each node stores
an additional bit representing its color. In this representation, a black node is denoted by the bit
value 0, while a red node is denoted by the bit value 1. The nodes in a Red-Black tree also store
other information like data values, left and right pointers, similar to a standard binary tree.

In a Red-Black tree, the root node is always black in color, adhering to the property that ensures
the tree remains balanced.

While in a regular binary tree, leaf nodes have no children, in a Red-Black tree, the nodes
without children are considered internal nodes. These internal nodes are connected to special
NIL nodes, which are always black in color and serve as the leaf nodes in the Red-Black tree.

One of the key properties of a Red-Black tree is that if a node is red, its children must be black.
This property ensures that there are no consecutive red nodes along any path in the tree.
Additionally, the Red-Black tree maintains another property where every path from a node to
any of its descendant NIL nodes contains the same number of black nodes. This property
guarantees that the tree remains balanced.

By following these properties, a Red-Black tree provides efficient insertion, deletion, and search
operations with a guaranteed logarithmic time complexity.

INSERTION IN RED BLACK TREE

During the insertion process in a Red-Black tree, the following rules are followed to maintain
the properties of the tree:

1. If the tree is empty, create a new node as the root node and color it black.
2. If the tree is not empty, create a new node as a leaf node and color it red.
3. If the parent of the new node is black, no further action is needed, and the tree remains
balanced.
4. If the parent of the new node is red, we check the color of the parent's sibling (the
new node's uncle) to maintain the properties of the Red-Black tree:
a) If the parent's sibling is black, we perform rotations and recoloring.
b) If the parent's sibling is red, we recolor the nodes. We also check whether the new
node's grandparent is the root node; if it is not the root node, we recolor it and
recheck the tree from that node.

These rules ensure that the Red-Black tree remains balanced after the insertion operation,
preserving the Red-Black tree properties, including the correct coloring and the same black
height along every path from the root to the leaves.

Let's understand the insertion in the Red-Black tree.

10, 18, 7, 15, 16

1. Insert 10: The tree is initially empty, so we create a new node with a value of 10 and
color it black, making it the root of the tree.

2. Insert 18: We insert 18 as a new red node. Since 18 is greater than 10, it becomes the
right child of 10.
3. Insert 7: We insert 7 as a new red node. Since 7 is less than 10, it becomes the left child
of 10.

4. Insert 15: Since 15 is greater than 10 but less than 18, the new node (15) will be inserted
to the left of node 18. As per the Red-Black tree properties, the new node (15) will be
colored red since the tree is not empty.
The current tree violates the Red-Black tree property that states there should be no red-
red parent-child relationship. To rectify this violation, we need to apply the rules of a
Red-Black tree.

Rule 4 states that if the parent of a new node is red, we need to check the color of the
parent's sibling. In this case, the new node (15) has node 18 as its parent, and the sibling
of the parent node (18) is node 7.

Since the color of the parent's sibling (node 7) is red, we need to apply Rule 4b, which
calls for recoloring: nodes 18 and 7 are recolored.

After applying Rule 4b, the recolored tree would look like this:
5. Insert 16: Now, we need to insert 16 into the tree. Since 16 is greater than 10 but less
than 18 and greater than 15, it will be placed to the right of node 15. As the tree is not
empty, the new node (16) will be colored red according to Red-Black tree properties.
The current tree violates the Red-Black tree property that states there should be no red-red
parent-child relationship. To rectify this violation, we apply Rule 4a, since the parent's
sibling here is a NIL (black) node.
Here we have a left-right (LR) relationship, so we need to perform two rotations: first a
left rotation, and then a right rotation.

When we perform the right rotation, the median element becomes the root node.
After performing the rotation and resolving the LR relationship, let's proceed with the
recoloring of the nodes:
The recoloring step ensures that the Red-Black tree properties are maintained. In this
case, node 16 and node 18 will undergo recoloring:
● Since the color of node 16 is red, it needs to be changed to black.
● Since the color of node 18 is black, it needs to be changed to red.
KEY TAKEAWAYS

● A red-black tree is a type of self-balancing binary search tree.
● Every node is colored either red or black.
● The root node is always black.
● All leaves (NIL or null nodes) are black.
● If a node is red, both its children are black.
● Every path from a given node to its descendant leaves contains the same number of
black nodes. This property is known as black height.
GRAPH
SUB LESSON 9.1

INTRODUCTION TO GRAPH DATA STRUCTURE, GRAPH TERMINOLOGY

A graph is an example of a non-linear data structure that is composed of vertices, also known as
nodes and edges. The edges, which can be represented as lines or arcs, connect pairs of nodes
within the graph.

A graph data structure comprises a set of nodes, each containing data, and these nodes are
interconnected.

THE COMPONENTS OF A GRAPH:

Vertices: Vertices serve as the basic units of a graph and are sometimes referred to as nodes.
Each node/vertex can have a label or remain unlabeled.

Edges: Edges are used to establish connections between two nodes in a graph. In a directed
graph, an edge can be represented as an ordered pair of nodes. There are no restrictions on
how edges can link any two nodes, allowing for diverse connections. Sometimes, edges are also
called arcs. Each edge can be assigned a label or be left unlabeled.
A graph can be represented as an ordered pair G = (V, E), where V is a set of vertices or nodes,
and E is a collection of vertex pairs from V, representing the edges of the graph. For instance,
consider the following graph:

V = { 1, 2, 3, 4, 5, 6 }

E = { (1, 4), (1, 6), (2, 6), (4, 5), (5, 6) }

GRAPH TERMINOLOGY

In the graph,

V = {0, 1, 2, 3}

E = {(0,1), (0,2), (0,3), (1,2)}

G = {V, E}

Adjacency: In a graph, two vertices are considered adjacent if there exists an edge connecting
them. For example, vertices 2 and 3 are not adjacent since there is no edge connecting them.
Path: A path is a sequence of edges that enables traversal from one vertex, A, to another
vertex, B, within a graph. For instance, 0-1-2 (via the edges 0-1 and 1-2) and the direct edge
0-2 are both routes from vertex 0 to vertex 2.

TYPES OF GRAPH

1. Null Graph
A graph is referred to as a null graph when it contains no edges, indicating the absence
of connections between vertices.
2. Trivial Graph
A trivial graph is the smallest possible graph consisting of a single vertex without any
edges.

3. Undirected Graph
A graph in which edges are undirected, meaning there is no specific direction associated
with them. In this type of graph, the nodes are considered unordered pairs in the
definition of each edge.
4. Directed Graph
A directed graph is a type of graph where edges have a specific direction. In this graph,
the nodes are represented as ordered pairs in the definition of each edge.
5. Connected Graph
A connected graph refers to a graph in which it is possible to reach any node from any
other node within the graph through a series of edges.
6. Disconnected Graph
A disconnected graph is a type of graph where there exists at least one node that cannot
be reached from another node within the graph.

7. Regular Graph
A regular graph is a type of graph where each vertex has the same number of adjacent
vertices. In other words, all vertices in a regular graph have the same degree. The
degree of a vertex refers to the number of edges connected to it.
For example, in a regular graph of degree 3, every vertex will be connected to exactly
three other vertices. Regular graphs are often denoted as "k-regular," where "k"
represents the degree of each vertex.
8. Complete Graph
A complete graph is a type of graph where each node is directly connected to every
other node by an edge.

TREE V/S GRAPH

A tree is a special kind of graph: it is always connected, contains no cycles, and has exactly
one path between any two nodes, with a designated root and parent-child relationships. A
general graph may contain cycles, may be disconnected, has no root, and allows any number of
connections between its nodes.

KEY TAKEAWAYS

● Nodes/Vertices: Graphs consist of nodes or vertices that represent entities or elements
of interest.
● Edges: Edges connect pairs of nodes and represent relationships or connections
between them. They can be directed or undirected, depending on whether they have a
specific direction or not.
● Adjacency: Nodes are adjacent if there is an edge connecting them. The adjacency of
nodes determines their connectivity within the graph.
● Weighted Graphs: In some graphs, edges can be assigned weights or values to represent
the strength, distance, or cost associated with the relationship between nodes.
GRAPH
SUB LESSON 9.2

REPRESENTATION OF GRAPH

A GRAPH IS A DATA STRUCTURE COMPOSED OF TWO MAIN COMPONENTS:

1. Vertices (also known as nodes): A finite set of vertices that represent distinct elements
or entities.
2. Edges: A finite set of ordered pairs (u, v) that define connections between vertices. In
the case of a directed graph (di-graph), the order of the pair matters, as (u, v) is not the
same as (v, u). The pair (u, v) indicates that there is an edge originating from vertex u
and pointing to vertex v. The edges may also include weight, value, or cost associated
with them.

A graph can be conceptualized as a set of points, referred to as vertices, which are
interconnected by lines known as edges. Each vertex represents a distinct point of interest,
while each edge represents a connection between two points. The edges can be directed or
undirected, meaning they can either have a specific direction or be bidirectional.

The two most commonly used representations of a graph are:

1. Adjacency Matrix
2. Adjacency List

Adjacency Matrix:

One commonly used method for representing the relationships between vertices and edges in a
graph is through an adjacency matrix. An adjacency matrix can effectively capture the structure
of different types of graphs, including undirected graphs, directed graphs, and weighted graphs.

If the value adj[i][j] is equal to w, it signifies the presence of an edge from vertex i to vertex j,
and the weight of this edge is w.
When considering the adjacency matrix representation of a graph, an entry Aij refers to the
specific element at the intersection of the ith row and the jth column. In the context of the
adjacency matrix representation, the value aij is set to 1 if there exists a path from vertex Vi to
vertex Vj in the graph. Conversely, if there is no such path, the value of aij is set to 0.

Adjacency matrix for an undirected graph

In the diagram above, an image displays the correspondence between the vertices (A, B, C, D,
E), which is represented using an adjacency matrix.

It's important to note that different adjacency matrices exist for directed and undirected
graphs. In a directed graph, an entry Aij will have a value of 1 only if there is a directed edge
from vertex Vi to vertex Vj.

Adjacency matrix for a directed graph

In a directed graph, edges denote specific paths from one vertex to another. For instance, if
there is a path from vertex A to vertex B, it indicates that vertex A serves as the starting node,
while vertex B serves as the destination node or terminal node.
In the graph illustrated above, it is evident that there are no self-loops, resulting in diagonal
entries of the adjacency matrix being 0.

Adjacency matrix for a weighted directed graph

The representation of a weighted graph using an adjacency matrix is similar to that of a
directed graph. However, instead of using '1' to indicate the existence of a path, the weight
associated with each edge is utilized. In this case, the weights assigned to the graph edges are
represented as the entries within the adjacency matrix.

The adjacency matrix representation of a weighted directed graph differs from other
representations, as it replaces the non-zero values with the actual weights assigned to the
edges.
Adjacency List

An adjacency list is utilized to store the graph in the computer's memory. This approach offers
efficiency in terms of storage, as we only need to store the values corresponding to the edges.

Adjacency list representation of an undirected graph.

In the above figure, it is evident that each node of the graph has a linked list or adjacency list
associated with it. From vertex A, there are paths leading to vertex B and vertex D. These nodes
are connected to node A in the provided adjacency list.

Adjacency list representation of a directed graph.

In the case of a directed graph, the sum of the lengths of the adjacency lists is equal to the total
number of edges present in the graph.
Adjacency list representation of the weighted directed graph.

In the context of a weighted directed graph, each node includes an additional field known as
the node weight.

The adjacency list representation offers convenience when adding a new vertex, as the use of
linked lists allows for efficient insertion. Additionally, this representation saves space due to its
linked structure.

KEY TAKEAWAYS

● A graph can be conceptualized as a set of points, referred to as vertices, which are
interconnected by lines known as edges.
● Each vertex represents a distinct point of interest, while each edge represents a
connection between two points.
● The edges can be directed or undirected, meaning they can either have a specific
direction or be bidirectional.
● By using Adjacency Matrix & Adjacency List we can represent the Graph.
GRAPH
SUB LESSON 9.3

OPERATIONS ON GRAPH

Operations on graphs refer to various actions and manipulations performed on graph data
structures. Graphs consist of nodes (vertices) connected by edges, and these operations enable
the analysis, traversal, modification, and other transformations of graphs. Here are some
common operations on graphs:

1. Graph Creation: Creating a graph involves defining the nodes and edges that connect
them. Graphs can be either directed (edges have a specific direction) or undirected
(edges are bidirectional).
2. Adding and Removing Nodes: Nodes can be added or removed from a graph, which may
affect the connectivity and structure of the graph.
3. Adding and Removing Edges: Edges can be added or removed between nodes in a
graph, altering the relationships and connectivity between the nodes.

IMPLEMENTATION

#include <stdio.h>

#define MAX_NODES 100

// Function to add an edge between two nodes
void addEdge(int adjacencyMatrix[][MAX_NODES], int source, int destination) {
    adjacencyMatrix[source][destination] = 1;
    adjacencyMatrix[destination][source] = 1;
}

// Function to remove an edge between two nodes
void removeEdge(int adjacencyMatrix[][MAX_NODES], int source, int destination) {
    adjacencyMatrix[source][destination] = 0;
    adjacencyMatrix[destination][source] = 0;
}

// Function to print the adjacency matrix representing the graph
void printGraph(int adjacencyMatrix[][MAX_NODES], int numNodes) {
    int i, j;
    for (i = 0; i < numNodes; i++) {
        for (j = 0; j < numNodes; j++) {
            printf("%d ", adjacencyMatrix[i][j]);
        }
        printf("\n");
    }
}

int main() {
    int numNodes = 5; // Number of nodes in the graph

    // Initialize the adjacency matrix with all zeros
    int adjacencyMatrix[MAX_NODES][MAX_NODES] = {0};

    // Add edges to the graph
    addEdge(adjacencyMatrix, 0, 1);
    addEdge(adjacencyMatrix, 0, 4);
    addEdge(adjacencyMatrix, 1, 3);
    addEdge(adjacencyMatrix, 1, 4);
    addEdge(adjacencyMatrix, 2, 3);
    addEdge(adjacencyMatrix, 3, 4);

    // Print the graph
    printf("Graph:\n");
    printGraph(adjacencyMatrix, numNodes);

    // Remove an edge
    removeEdge(adjacencyMatrix, 1, 4);

    // Print the updated graph
    printf("\nUpdated Graph:\n");
    printGraph(adjacencyMatrix, numNodes);

    return 0;
}

OUTPUT:

Graph:
0 1 0 0 1
1 0 0 1 1
0 0 0 1 0
0 1 1 0 1
1 1 0 1 0

Updated Graph:
0 1 0 0 1
1 0 0 1 0
0 0 0 1 0
0 1 1 0 1
1 0 0 1 0
KEY TAKEAWAYS

• Operations on graphs refer to various actions and manipulations performed on graph
data structures.
• Graphs consist of nodes (vertices) connected by edges, and these operations enable the
analysis, traversal, modification, and other transformations of graphs.
GRAPH
SUB LESSON 9.4

DEPTH FIRST SEARCH

Depth-First Search (DFS) is an algorithm used for traversing or searching through a graph or
tree data structure. It explores as far as possible along each branch before backtracking. The
algorithm starts at a selected vertex and explores the deepest unvisited node in the graph until
all nodes have been visited or a specific condition is met.

It categorizes each vertex into two groups:

1. Visited
2. Not visited

The main objective of DFS is to mark each vertex as visited while avoiding cycles.

The steps involved in the DFS algorithm are as follows:

1. Start by selecting any vertex from the graph and place it on top of a stack.
2. Pop the top item from the stack and mark it as visited.
3. Create a list of adjacent nodes for the current vertex. Add only those nodes that have
not been visited to the top of the stack.
4. Repeat steps 2 and 3 until the stack becomes empty.

By following these steps, the DFS algorithm explores the graph in a depth-first manner.

To understand how the Depth First Search (DFS) algorithm works, let's consider an example
with an undirected graph containing 5 vertices.

To initiate the DFS algorithm, we begin from vertex 0. We mark vertex 0 as visited and proceed
by adding all its neighboring vertices to a stack for further exploration.
Next, we move on to the element at the top of the stack, which is vertex 1. We visit vertex 1
and explore its adjacent nodes. Since vertex 0 has already been visited, we proceed to visit
vertex 2 instead.

Vertex 2 has an adjacent vertex, which is vertex 4, that hasn't been visited yet. Thus, we add
vertex 4 to the top of the stack and proceed to visit it.
Once we visit the last vertex, which is vertex 3, we observe that it does not have any unvisited
adjacent nodes. This indicates that we have successfully completed the Depth First Traversal of
the graph.

COMPLEXITY OF DEPTH FIRST SEARCH

The time complexity of the DFS algorithm can be expressed as O(V + E), where V represents the
number of nodes in the graph, and E represents the number of edges.

As for the space complexity, it is O(V), indicating that the amount of memory required by the
algorithm grows linearly with the number of nodes in the graph.

DFS ALGORITHM IMPLEMENTED IN C

#include <stdio.h>
#include <stdbool.h>

#define MAX_VERTICES 100

// Function to perform DFS traversal
void dfs(int graph[][MAX_VERTICES], int vertex, bool visited[], int numVertices) {
    visited[vertex] = true; // Mark the current vertex as visited
    printf("%d ", vertex);  // Print the visited vertex

    // Traverse all adjacent vertices
    for (int i = 0; i < numVertices; i++) {
        if (graph[vertex][i] && !visited[i]) {
            dfs(graph, i, visited, numVertices); // Recursive call for unvisited neighbors
        }
    }
}

// Example usage
int main() {
    int numVertices = 5;
    int graph[MAX_VERTICES][MAX_VERTICES] = {
        {0, 1, 1, 0, 0},
        {1, 0, 1, 1, 0},
        {1, 1, 0, 0, 1},
        {0, 1, 0, 0, 1},
        {0, 0, 1, 1, 0}
    };
    bool visited[MAX_VERTICES] = { false };
    int startVertex = 0;

    // Start DFS from the specified vertex
    dfs(graph, startVertex, visited, numVertices);

    return 0;
}

OUTPUT

0 1 2 4 3
KEY TAKEAWAYS

● The main objective of DFS is to mark each vertex as visited while avoiding cycles.
● DFS algorithm explores the graph in a depth-first manner.
● It explores as far as possible along each branch before backtracking. The algorithm
starts at a selected vertex and explores the deepest unvisited node in the graph until all
nodes have been visited or a specific condition is met.
● It is used for traversing or searching through a graph or tree data structure.
GRAPH
SUB LESSON 9.5

BREADTH FIRST SEARCH

Breadth-First Search is a fundamental graph traversal algorithm used in data structures and
algorithms. It explores all the vertices of a graph in breadth-first order, meaning it visits all the
vertices at the same level before moving to the next level.

Breadth-First Search is a common graph traversal algorithm that categorizes each vertex into
two groups:

1. Visited: Represents the vertices that have been explored and processed.
2. Not Visited: Represents the vertices that have not yet been explored.
The primary goal of BFS is to traverse the graph while marking each vertex as visited and
avoiding cycles.

The algorithm follows these steps:

1. Choose any vertex from the graph and enqueue it at the back of a queue.
2. Dequeue the front item from the queue and mark it as visited.
3. Create a list of adjacent nodes for the dequeued vertex. Add only those nodes that have
not been visited to the back of the queue.
4. Repeat steps 2 and 3 until the queue becomes empty.
5. In case the graph consists of disconnected components, to ensure that every vertex is
covered, you can run the BFS algorithm on each unvisited node.
By using a queue data structure, BFS explores the graph layer by layer, visiting all the vertices at
the same level before moving to the next level.

Here is an example of how the Breadth-First Search (BFS) algorithm works with a simple
undirected graph consisting of 5 vertices.
We start from vertex 0. In the BFS algorithm, we put vertex 0 in the visited list and enqueue all
its adjacent vertices into the queue.

Next, we visit the element at the front of the queue, i.e., vertex 1, and explore its adjacent
nodes. Since vertex 0 has already been visited, we move on to visit vertex 2 instead.
Vertex 2 has an unvisited adjacent vertex, which is vertex 4. We enqueue vertex 4 at the back of
the queue and then visit vertex 3, which is at the front of the queue.
Only vertex 4 remains in the queue since the only adjacent node of vertex 3, which is vertex 0,
has already been visited. We dequeue vertex 4 from the queue and visit it.

When the queue becomes empty, it signifies that the Breadth-First Traversal of the graph has
concluded.

COMPLEXITY OF BREADTH-FIRST SEARCH

The time complexity of the Breadth-First Search (BFS) algorithm can be expressed as O(V + E),
where V represents the number of nodes in the graph and E represents the number of edges.

Regarding the space complexity, it is O(V), indicating that the amount of memory required by
the algorithm grows linearly with the number of nodes in the graph.

BFS ALGORITHM IMPLEMENTED IN C

#include <stdio.h>
#include <stdbool.h>

#define MAX_VERTICES 100

// Function to perform BFS traversal
void bfs(int graph[][MAX_VERTICES], int startVertex, int numVertices) {
    bool visited[MAX_VERTICES] = { false };
    int queue[MAX_VERTICES];
    int front = 0, rear = 0;

    // Enqueue the starting vertex
    queue[rear++] = startVertex;
    visited[startVertex] = true;

    while (front != rear) {
        int vertex = queue[front++];
        printf("%d ", vertex); // Print the visited vertex

        // Traverse all adjacent vertices
        for (int i = 0; i < numVertices; i++) {
            if (graph[vertex][i] && !visited[i]) {
                queue[rear++] = i; // Enqueue unvisited neighbors
                visited[i] = true;
            }
        }
    }
}

// Example usage
int main() {
    int numVertices = 5;
    int graph[MAX_VERTICES][MAX_VERTICES] = {
        { 1, 1, 1, 0, 1 },
        { 1, 0, 1, 1, 0 },
        { 1, 1, 0, 0, 1 },
        { 0, 1, 0, 0, 1 },
        { 0, 0, 1, 1, 0 }
    };
    int startVertex = 0;

    // Start BFS from the specified vertex
    bfs(graph, startVertex, numVertices);

    return 0;
}
OUTPUT

0 1 2 4 3
KEY TAKEAWAYS

• Breadth-First Search is a fundamental graph traversal algorithm used in data structures and
algorithms.
• It explores all the vertices of a graph in breadth-first order, meaning it visits all the vertices
at the same level before moving to the next level.
GRAPH
SUB LESSON 9.6

PRIM’S ALGORITHM

Prim's algorithm is a greedy approach for generating a minimum spanning tree from a given
graph. It operates by selecting a starting vertex and iteratively adding the minimum-weight
edge that connects the current tree to a new vertex. This process continues until all vertices are
included in the tree, resulting in a minimum spanning tree that has the lowest total weight
among all spanning trees that can be derived from the original graph.

A minimum spanning tree (MST) is a subgraph of a connected, weighted graph that includes all
the vertices of the graph while minimizing the total weight or cost of the edges.

To generate a minimum spanning tree using Prim's algorithm, follow the steps below:

1. Begin by selecting a random vertex as the starting point for the minimum spanning tree.
2. Find all the edges that connect the current minimum spanning tree to new vertices.
3. Select the edge with the lowest weight from the previous step and add it to the
minimum spanning tree.
4. Repeat steps 2 and 3 until all vertices are included in the minimum spanning tree.

These steps ensure that the minimum spanning tree grows gradually by adding the edges
with the lowest weights, so that all vertices are connected in the most efficient manner.
EXAMPLE

Begin with a graph that contains weights assigned to its edges.


Select a random vertex from the given graph.

Select the edge with the minimum weight from the edges connected to the chosen vertex, and
include it in the growing minimum spanning tree.

Select the vertex that is closest in distance to the current minimum spanning tree but has not
yet been included in the solution.
Select the edge that is closest in distance among the edges that have not yet been included in
the solution. If there are multiple edges with the same minimum distance, choose one of them
randomly.

Continue the process of selecting vertices and edges as described above until you have formed
a spanning tree that includes all the vertices of the graph.

C PROGRAM FOR PRIM'S ALGORITHM

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 5

// A utility function to find the vertex with
// minimum key value, from the set of vertices
// not yet included in MST
int minKey(int key[], bool mstSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;

    for (int v = 0; v < V; v++)
        if (mstSet[v] == false && key[v] < min)
            min = key[v], min_index = v;

    return min_index;
}

// A utility function to print the
// constructed MST stored in parent[]
void printMST(int parent[], int graph[V][V])
{
    printf("Edge \tWeight\n");
    for (int i = 1; i < V; i++)
        printf("%d - %d \t%d \n", parent[i], i,
               graph[i][parent[i]]);
}

// Function to construct and print MST for
// a graph represented using adjacency
// matrix representation
void primMST(int graph[V][V])
{
    // Array to store constructed MST
    int parent[V];

    // Key values used to pick minimum weight edge in cut
    int key[V];

    // To represent set of vertices included in MST
    bool mstSet[V];

    // Initialize all keys as INFINITE
    for (int i = 0; i < V; i++)
        key[i] = INT_MAX, mstSet[i] = false;

    // Always include the first vertex in MST.
    // Make its key 0 so that this vertex is picked first.
    key[0] = 0;

    // First node is always root of MST
    parent[0] = -1;

    // The MST will have V vertices
    for (int count = 0; count < V - 1; count++) {

        // Pick the minimum key vertex from the
        // set of vertices not yet included in MST
        int u = minKey(key, mstSet);

        // Add the picked vertex to the MST Set
        mstSet[u] = true;

        // Update key value and parent index of
        // the adjacent vertices of the picked vertex.
        // Consider only those vertices which are not
        // yet included in MST
        for (int v = 0; v < V; v++)

            // graph[u][v] is non-zero only for adjacent
            // vertices of u; mstSet[v] is false for vertices
            // not yet included in MST. Update the key only
            // if graph[u][v] is smaller than key[v]
            if (graph[u][v] && mstSet[v] == false
                && graph[u][v] < key[v])
                parent[v] = u, key[v] = graph[u][v];
    }

    // Print the constructed MST
    printMST(parent, graph);
}

// Driver's code
int main()
{
    int graph[V][V] = { { 0, 2, 0, 6, 0 },
                        { 2, 0, 3, 8, 5 },
                        { 0, 3, 0, 0, 7 },
                        { 6, 8, 0, 0, 9 },
                        { 0, 5, 7, 9, 0 } };

    // Print the solution
    primMST(graph);

    return 0;
}
OUTPUT

Edge    Weight
0 - 1   2
1 - 2   3
0 - 3   6
1 - 4   5
KEY TAKEAWAYS

• Prim's algorithm is a greedy approach for generating a minimum spanning tree from a
given graph.
• It operates by selecting a starting vertex and iteratively adding the minimum weight
edge that connects the current tree to a new vertex.
GRAPH
SUB LESSON 9.7

KRUSKAL’S ALGORITHM

Kruskal's algorithm is a popular algorithm used to find the minimum spanning tree (MST) of a
connected, weighted graph. Kruskal's algorithm begins by sorting the edges of the graph in
ascending order based on their weights. It then progressively adds edges with the lowest
weights to the minimum spanning tree until all vertices are connected.

The steps for implementing Kruskal's algorithm are as follows:

1. Sort all the edges of the graph in ascending order based on their weights.
2. Select the edge with the lowest weight and add it to the spanning tree. If adding this
edge creates a cycle, reject it.
3. Continue selecting edges with increasing weights and add them to the spanning tree, as
long as they don't create cycles.
4. Repeat step 3 until all vertices are included in the spanning tree.

By following these steps, Kruskal's algorithm constructs the minimum spanning tree by
iteratively adding edges with the lowest weights that do not form cycles. Eventually, the
selected edges form the minimum spanning tree of the given graph.
Since the graph consists of 9 vertices and 14 edges, the resulting minimum spanning tree will
have (9 - 1) = 8 edges, as it follows the property that a minimum spanning tree in a connected
graph with V vertices has V - 1 edges.

EXAMPLE
After sorting:

Weight Source Destination

1 7 6

2 8 2

2 6 5

4 0 1

4 2 5

6 8 6

7 2 3

7 7 8

8 0 7

8 1 2

9 3 4

10 5 4

11 1 7

14 3 5

Now pick all edges one by one from the sorted list of edges.

In the first step, select the edge connecting vertices 7 and 6. If adding this edge does not create
a cycle in the current spanning tree, include it in the minimum spanning tree.
Moving to the next step, choose the edge connecting vertices 8 and 2. If adding this edge to the
current spanning tree does not result in a cycle, include it in the minimum spanning tree.
Continuing with the algorithm, select the edge connecting vertices 6 and 5. If adding this edge
to the existing minimum spanning tree does not create a cycle, include it in the spanning tree.
Moving forward, choose the edge connecting vertices 0 and 1. If adding this edge to the current
minimum spanning tree does not result in a cycle, include it in the spanning tree.
Continuing the process, select the edge connecting vertices 2 and 5. If including this edge in the
current minimum spanning tree does not create a cycle, add it to the spanning tree.
Proceeding to the next step, consider the edge connecting vertices 8 and 6. However, including
this edge would result in a cycle, so it is discarded. Instead, select the edge connecting vertices
2 and 3. As adding this edge does not create a cycle in the current minimum spanning tree,
include it in the spanning tree.
Moving on to the next step, examine the edge connecting vertices 7 and 8. However, including
this edge would introduce a cycle, so it is discarded. Instead, select the edge connecting
vertices 0 and 7. As adding this edge does not create a cycle in the current minimum spanning
tree, include it in the spanning tree.
Continuing to the next step, consider the edge connecting vertices 1 and 2. However, including
this edge would result in a cycle, so it is discarded. Instead, select the edge connecting vertices
3 and 4. As adding this edge does not create a cycle in the current minimum spanning tree,
include it in the spanning tree.
Since the number of edges included in the minimum spanning tree (MST) is equal to (V - 1),
where V represents the number of vertices, the algorithm terminates at this point.

C PROGRAM FOR KRUSKAL’S ALGORITHM

// Kruskal's algorithm in C

#include <stdio.h>

#define MAX 30

typedef struct edge {
  int u, v, w;
} edge;

typedef struct edge_list {
  edge data[MAX];
  int n;
} edge_list;

edge_list elist;

int Graph[MAX][MAX], n;

edge_list spanlist;

void kruskalAlgo();
int find(int belongs[], int vertexno);
void applyUnion(int belongs[], int c1, int c2);
void sort();
void print();

// Applying Kruskal's algorithm
void kruskalAlgo() {
  int belongs[MAX], i, j, cno1, cno2;

  elist.n = 0;

  // Collect every edge (lower triangle of the adjacency matrix)
  for (i = 1; i < n; i++)
    for (j = 0; j < i; j++) {
      if (Graph[i][j] != 0) {
        elist.data[elist.n].u = i;
        elist.data[elist.n].v = j;
        elist.data[elist.n].w = Graph[i][j];
        elist.n++;
      }
    }

  sort();

  for (i = 0; i < n; i++)
    belongs[i] = i;

  spanlist.n = 0;

  for (i = 0; i < elist.n; i++) {
    cno1 = find(belongs, elist.data[i].u);
    cno2 = find(belongs, elist.data[i].v);

    // Add the edge only if it joins two different components (no cycle)
    if (cno1 != cno2) {
      spanlist.data[spanlist.n] = elist.data[i];
      spanlist.n = spanlist.n + 1;
      applyUnion(belongs, cno1, cno2);
    }
  }
}

int find(int belongs[], int vertexno) {
  return (belongs[vertexno]);
}

void applyUnion(int belongs[], int c1, int c2) {
  int i;

  for (i = 0; i < n; i++)
    if (belongs[i] == c2)
      belongs[i] = c1;
}

// Sorting algo (bubble sort on edge weights)
void sort() {
  int i, j;
  edge temp;

  for (i = 1; i < elist.n; i++)
    for (j = 0; j < elist.n - 1; j++)
      if (elist.data[j].w > elist.data[j + 1].w) {
        temp = elist.data[j];
        elist.data[j] = elist.data[j + 1];
        elist.data[j + 1] = temp;
      }
}

// Printing the result
void print() {
  int i, cost = 0;

  for (i = 0; i < spanlist.n; i++) {
    printf("\n%d - %d : %d", spanlist.data[i].u, spanlist.data[i].v, spanlist.data[i].w);
    cost = cost + spanlist.data[i].w;
  }

  printf("\nSpanning tree cost: %d", cost);
}

int main() {
  n = 6;

  Graph[0][0] = 0;
  Graph[0][1] = 4;
  Graph[0][2] = 4;
  Graph[0][3] = 0;
  Graph[0][4] = 0;
  Graph[0][5] = 0;

  Graph[1][0] = 4;
  Graph[1][1] = 0;
  Graph[1][2] = 2;
  Graph[1][3] = 0;
  Graph[1][4] = 0;
  Graph[1][5] = 0;

  Graph[2][0] = 4;
  Graph[2][1] = 2;
  Graph[2][2] = 0;
  Graph[2][3] = 3;
  Graph[2][4] = 4;
  Graph[2][5] = 0;

  Graph[3][0] = 0;
  Graph[3][1] = 0;
  Graph[3][2] = 3;
  Graph[3][3] = 0;
  Graph[3][4] = 3;
  Graph[3][5] = 0;

  Graph[4][0] = 0;
  Graph[4][1] = 0;
  Graph[4][2] = 4;
  Graph[4][3] = 3;
  Graph[4][4] = 0;
  Graph[4][5] = 0;

  Graph[5][0] = 0;
  Graph[5][1] = 0;
  Graph[5][2] = 2;
  Graph[5][3] = 0;
  Graph[5][4] = 3;
  Graph[5][5] = 0;

  kruskalAlgo();
  print();

  return 0;
}

OUTPUT

2 - 1 : 2
5 - 2 : 2
3 - 2 : 3
4 - 3 : 3
1 - 0 : 4
Spanning tree cost: 14
KEY TAKEAWAYS

• Kruskal's algorithm is a popular algorithm used to find the minimum spanning tree
(MST) of a connected, weighted graph.
• Kruskal's algorithm begins by sorting the edges of the graph in ascending order based on
their weights. It then progressively adds edges with the lowest weights to the minimum
spanning tree until all vertices are connected.
GRAPH
SUB LESSON 9.8

DIJKSTRA’S ALGORITHM

Dijkstra's algorithm is a widely used algorithm in computer science, primarily designed to find
the shortest path between nodes in a weighted graph. The algorithm operates by iteratively
exploring nodes, starting from a given source node and progressively calculating the shortest
distance to each node.

The key distinction between Dijkstra's algorithm and the minimum spanning tree is that the
shortest path found by Dijkstra's algorithm between two vertices may not encompass all the
vertices in the graph.

EXAMPLE

The algorithm will calculate the shortest path from a given starting node (such as node 0) to all
the other nodes in the graph. In this context, the edge weights in the graph are considered to
represent the distances between the nodes.

In Dijkstra's algorithm, the distance from the source node to itself is considered as 0. In the
given example, if the source node is labeled as 0, its distance from itself will be set to 0.
For all the other nodes in the graph, initially, their distances from the source node are
unknown. To handle this, we typically mark their distances as infinity to indicate that they have
not been visited yet and their distances are not yet determined.

In addition to keeping track of the distances from the source node, Dijkstra's algorithm also
utilizes an array or data structure to store unvisited or unmarked nodes. The algorithm is
considered complete when all the nodes in the graph have been marked as visited.

Unvisited Nodes:- 0 1 2 3 4 5 6.

We typically start from a specific node, such as Node 0, and mark it as visited. In visual
representations, this is often depicted by marking the visited node in red.

After visiting a node, the next step in Dijkstra's algorithm is to consider its adjacent nodes and
calculate their tentative distances. In this step, we examine the neighboring nodes and choose
the node with the minimum distance as the next node to visit.

For example, let's say we have two adjacent nodes, Node 1 and Node 2, with tentative
distances of 2 and 6, respectively. In this case, Node 1 has the minimum distance. Thus, we
would mark Node 1 as visited and update its distance.

Distance: Node 0 -> Node 1 = 2


After visiting the previous nodes and updating their distances, the algorithm moves forward to
consider the adjacent nodes. In this case, let's assume that the next adjacent node is Node 3.

Upon reaching Node 3, the algorithm marks it as visited and adds up the distance.

Distance: Node 0 -> Node 1 -> Node 3 = 2 + 5 = 7

We have two adjacent nodes, Node 4 and Node 5, with distances of 10 and 15, respectively. To
determine the next node to visit, we select the node with the minimum distance. Node 4 has
the minimum distance, so we mark it as visited and update its distance.

Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 = 2 + 5 + 10 = 17

Next, we examine the adjacent nodes of the current node. If the next adjacent node is Node 6,
we mark it as visited and update the distance.

Distance: Node 0 -> Node 1 -> Node 3 -> Node 4 -> Node 6 = 2 + 5 + 10 + 2 = 19

So, the shortest distance from the source vertex to Node 6 is 19.
C PROGRAM FOR DIJKSTRA’S ALGORITHM

// Dijkstra's Algorithm in C

#include <stdio.h>

#define INFINITY 9999
#define MAX 10

void Dijkstra(int Graph[MAX][MAX], int n, int start);

void Dijkstra(int Graph[MAX][MAX], int n, int start) {
  int cost[MAX][MAX], distance[MAX], pred[MAX];
  int visited[MAX], count, mindistance, nextnode, i, j;

  // Creating cost matrix
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
      if (Graph[i][j] == 0)
        cost[i][j] = INFINITY;
      else
        cost[i][j] = Graph[i][j];

  for (i = 0; i < n; i++) {
    distance[i] = cost[start][i];
    pred[i] = start;
    visited[i] = 0;
  }

  distance[start] = 0;
  visited[start] = 1;
  count = 1;

  while (count < n - 1) {
    mindistance = INFINITY;

    // Pick the nearest unvisited node
    for (i = 0; i < n; i++)
      if (distance[i] < mindistance && !visited[i]) {
        mindistance = distance[i];
        nextnode = i;
      }

    visited[nextnode] = 1;

    // Relax the edges leaving nextnode
    for (i = 0; i < n; i++)
      if (!visited[i])
        if (mindistance + cost[nextnode][i] < distance[i]) {
          distance[i] = mindistance + cost[nextnode][i];
          pred[i] = nextnode;
        }

    count++;
  }

  // Printing the distance
  for (i = 0; i < n; i++)
    if (i != start) {
      printf("\nDistance from source to %d: %d", i, distance[i]);
    }
}

int main() {
  int Graph[MAX][MAX], n, u;

  n = 7;

  Graph[0][0] = 0;
  Graph[0][1] = 0;
  Graph[0][2] = 1;
  Graph[0][3] = 2;
  Graph[0][4] = 0;
  Graph[0][5] = 0;
  Graph[0][6] = 0;

  Graph[1][0] = 0;
  Graph[1][1] = 0;
  Graph[1][2] = 2;
  Graph[1][3] = 0;
  Graph[1][4] = 0;
  Graph[1][5] = 3;
  Graph[1][6] = 0;

  Graph[2][0] = 1;
  Graph[2][1] = 2;
  Graph[2][2] = 0;
  Graph[2][3] = 1;
  Graph[2][4] = 3;
  Graph[2][5] = 0;
  Graph[2][6] = 0;

  Graph[3][0] = 2;
  Graph[3][1] = 0;
  Graph[3][2] = 1;
  Graph[3][3] = 0;
  Graph[3][4] = 0;
  Graph[3][5] = 0;
  Graph[3][6] = 1;

  Graph[4][0] = 0;
  Graph[4][1] = 0;
  Graph[4][2] = 3;
  Graph[4][3] = 0;
  Graph[4][4] = 0;
  Graph[4][5] = 2;
  Graph[4][6] = 0;

  Graph[5][0] = 0;
  Graph[5][1] = 3;
  Graph[5][2] = 0;
  Graph[5][3] = 0;
  Graph[5][4] = 2;
  Graph[5][5] = 0;
  Graph[5][6] = 1;

  Graph[6][0] = 0;
  Graph[6][1] = 0;
  Graph[6][2] = 0;
  Graph[6][3] = 1;
  Graph[6][4] = 0;
  Graph[6][5] = 1;
  Graph[6][6] = 0;

  u = 0;
  Dijkstra(Graph, n, u);

  return 0;
}

OUTPUT

Distance from source to 1: 3
Distance from source to 2: 1
Distance from source to 3: 2
Distance from source to 4: 4
Distance from source to 5: 4
Distance from source to 6: 3
KEY TAKEAWAYS

• Dijkstra's algorithm is a widely used algorithm in computer science, primarily designed to
find the shortest path between nodes in a weighted graph.
• The algorithm operates by iteratively exploring nodes, starting from a given source node
and progressively calculating the shortest distance to each node.
SEARCHING
SUB LESSON 10.1

LINEAR SEARCH

Linear search, also known as sequential search, is an algorithm used to find a specific element
within a list. It involves starting at one end of the list and sequentially examining each element
until the desired element is found. If the element is not found, the search continues until the
end of the list is reached.

The Linear Search Algorithm works as follows:

1. Start at the beginning of the array.
2. Compare each element of the array with the key value.
3. If a match is found (element equals the key), return the index of that element.
4. If the end of the array is reached without finding a match, return "No match found".

For example, let's consider the array arr[] = {10, 50, 30, 70, 80, 20, 90, 40} and the key = 30.

The algorithm would proceed as follows:


1. Start at the first element, arr[0] = 10. Since it doesn't match the key, move to the next
element.

2. Move to the second element, arr[1] = 50. Again, it doesn't match the key, so continue to
the next element.

3. Move to the third element, arr[2] = 30. It matches the key, so the search is successful.
Return the index 2.

4. The algorithm stops here, as a match has been found.

In this example, the Linear Search Algorithm successfully finds the key 30 at index 2 of the
array.

ALGORITHM FOR LINEAR SEARCH


Step 1: Start

Step 2: Declare an array and a search data variable, 'x'.

Step 3: Traverse the entire array until the search data is found.

- If the search data is found, return its location (index).

- If the end of the array is reached without finding the search data, return -1.

Step 4: Print the result (location/index or -1).

Step 5: Stop.

The algorithm starts by declaring the array and the value to be searched for, represented by the
variable 'x'. It then iterates through each element of the array, comparing it with 'x'. If a match
is found, the algorithm returns the location (index) of the element. If no match is found after
traversing the entire array, it returns -1 to indicate that the search data is not present in the
array. Finally, the result is printed, and the algorithm terminates.

TIME COMPLEXITY OF LINEAR SEARCH

The time complexity of the linear search algorithm can be analyzed as follows:

Best Case: The best-case scenario occurs when the element being searched is present at the
first index of the list. In this case, the search operation can be completed in constant time,
denoted as O(1). This is because only one comparison is needed to find the element.

Worst Case: The worst-case scenario happens when the element being searched is present at
the last index of the list, or it is not present in the list at all. In this case, the algorithm needs to
compare the search element with each element in the list until the end is reached or a match is
found. As a result, the time complexity in the worst case is O(N), where N is the size of the list.
This means that the time required to perform the search increases linearly with the size of the
list.

Average Case: On average, when considering all possible cases, the linear search algorithm will
need to examine half of the list elements before finding the desired element or concluding that
it is not present. Therefore, the average case time complexity is O(N), where N is the size of the
list.

EXAMPLE
#include <stdio.h>

int main()
{
    int a[20], i, x, n;

    printf("How many elements:");
    scanf("%d", &n);

    printf("Enter array elements:");
    for (i = 0; i < n; ++i)
        scanf("%d", &a[i]);

    printf("Enter element to search:");
    scanf("%d", &x);

    for (i = 0; i < n; ++i)
        if (a[i] == x)
            break;

    if (i < n)
        printf("Element found at index %d", i);
    else
        printf("Element not found");

    return 0;
}
Output :
How many elements: 5

Enter array elements: 12 11 10 22 34

Enter element to search: 22

Element found at index 3

KEY TAKEAWAYS

● Linear search is a simple searching algorithm that sequentially checks each element in a
list until the target element is found or the end of the list is reached.
● It is applicable to both sorted and unsorted lists, but it is more commonly used for
unsorted lists.
● Linear search starts from the first element of the list and compares it with the target
element. If a match is found, the search is successful.
● If the target element is not found, linear search continues checking each subsequent
element in the list until the end is reached or the target element is found.
● Linear search has a time complexity of O(n), where n is the number of elements in the
list. In the worst-case scenario, where the target element is at the end of the list or not
present, linear search needs to traverse the entire list.
SEARCHING
SUB LESSON 10.2

BINARY SEARCH

Binary Search is an efficient searching algorithm used for finding a target element in a sorted
array. The algorithm works by repeatedly dividing the search interval in half, eliminating half of
the remaining elements each time, until the target element is found or it is determined that the
element does not exist in the array.

HERE ARE THE KEY POINTS FOR BINARY SEARCH,

1. Binary search involves dividing the search space into two halves by finding the middle
index, known as "mid."

2. The middle element of the search space is compared to the target key.
3. If the key is found at the middle element, the search process is terminated successfully.
4. If the key is not found at the middle element, the next search space is determined by
choosing the appropriate half.
5. If the key is smaller than the middle element, the left side of the search space is selected
for the next iteration.
6. If the key is larger than the middle element, the right side of the search space is selected
for the next iteration.
7. This process of dividing the search space and selecting the appropriate half is repeated
until the key is found or the total search space is exhausted.
8. Binary search has a time complexity of O(log n), making it an efficient algorithm for
searching in large datasets.

EXAMPLE:
Let's consider the given array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target key = 23.
First Step: Calculate the mid index by dividing the search space in half:

● Start index: 0
● End index: 9
● Mid index: (0 + 9) / 2 = 4

Compare the mid element with the key:

● arr[4] = 16, which is less than the key 23.

Since the key is greater than the current mid-element, the search space moves to the right side
of the array.

New search space:

● Start index: 5 (mid + 1)
● End index: 9
The next step of the binary search algorithm would involve repeating the process with the
updated search space until the key is found or the search space is exhausted.

TIME COMPLEXITY OF BINARY SEARCH:

● Best Case: O(1)
● Average Case: O(log N)
● Worst Case: O(log N)
Binary Search Algorithm can be implemented using two methods

1. Iterative
2. Recursive

1. Iterative Method: In the iterative approach, the binary search algorithm is implemented
using a loop to repeatedly divide the search space in half. Here are the steps involved:
● Initialize the low and high pointers to the start and end of the array respectively.
● While the low pointer is less than or equal to the high pointer:
● Calculate the mid index as (low + high) / 2.
● Compare the element at the mid index with the target value:
● If they are equal, return the mid index as the position of the target
element.
● If the target value is less than the mid element, update the high pointer
to mid - 1.
● If the target value is greater than the mid element, update the low
pointer to mid + 1.
● If the loop terminates without finding the target element, return a value indicating that
the element was not found.
2. Recursive Method: In the recursive approach, the binary search algorithm is
implemented using a recursive function that divides the search space in half. Here are
the steps involved:
● Define a recursive function that takes the array, target value, low index, and high index
as parameters.
● If the low index is greater than the high index, return a value indicating that the element
was not found.
● Calculate the mid index as (low + high) / 2.
● Compare the element at the mid index with the target value:
● If they are equal, return the mid index as the position of the target element.
● If the target value is less than the mid element, make a recursive call to search in
the left half of the array.
● If the target value is greater than the mid element, make a recursive call to
search in the right half of the array.
● The recursive calls continue until the target element is found or the search space is
exhausted.

Both the iterative and recursive methods provide the same result, but they differ in their
implementation approach. The choice between them depends on factors such as programming
language preferences, code readability, and the specific requirements of the problem at hand.

EXAMPLE (BY USING ITERATIVE METHOD)

#include <stdio.h>

int main()
{
    int c, first, last, middle, n, search, array[100];

    printf("Enter number of elements:\n");
    scanf("%d", &n);

    printf("Enter %d integers:\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    printf("Enter the value to find:\n");
    scanf("%d", &search);

    first = 0;
    last = n - 1;
    middle = (first + last) / 2;

    while (first <= last) {
        if (array[middle] < search)
            first = middle + 1;
        else if (array[middle] == search) {
            printf("%d is present at index %d.\n", search, middle + 1);
            break;
        }
        else
            last = middle - 1;

        middle = (first + last) / 2;
    }

    if (first > last)
        printf("Not found! %d is not present in the list.\n", search);

    return 0;
}

Output:

Enter number of elements:

5

Enter 5 integers:

12

14

18

25

50
Enter the value to find:

14

14 is present at index 2.

KEY TAKEAWAYS

● Binary Search is an efficient searching algorithm used for finding a target element in a
sorted array.
● Binary search involves dividing the search space into two halves by finding the middle
index, known as "mid."
● The middle element of the search space is compared to the target key.
● If the key is found at the middle element, the search process is terminated successfully.
● If the key is not found at the middle element, the next search space is determined by
choosing the appropriate half.
● If the key is smaller than the middle element, the left side of the search space is selected
for the next iteration.
● If the key is larger than the middle element, the right side of the search space is selected
for the next iteration.
SORTING
SUB LESSON 11.1

BUBBLE SORT

Bubble Sort is a straightforward sorting algorithm that operates by repeatedly exchanging
adjacent elements when they are in the incorrect order.

The term "bubble sort" is used because the way array elements move resembles the movement
of air bubbles in water: in each iteration, the largest remaining element moves towards the end
of the array, comparable to how bubbles rise to the surface.

Its best-case time complexity is an efficient O(n), but its average- and worst-case time
complexity is O(n²), where n is the number of items. Because of this, the technique is not well
suited to large datasets.

Bubble sort is commonly used in situations where complexity is not a major concern, and
simplicity and a shorter code implementation are preferred.

Bubble sort is an in-place algorithm, since it performs the swapping of adjacent pairs without
requiring the use of any significant additional data structure.

WORKING OF BUBBLE SORT

To understand the operation of the bubble sort algorithm, let's consider an unsorted array.
For the purpose of illustration, we will use a short array, since bubble sort's O(n²) time
complexity makes long arrays tedious to trace.

Let the elements of the array be as follows.

1. First Iteration (Compare and Swap)

1. The first step is to compare the element at the first index with the element at the second
index of the array.

2. If the first element is greater than the second element, they are swapped.

3. Compare further pair of elements and swap them if they are not in the order.

4. This process continues iteratively until the algorithm reaches the last element of the array.

2. Remaining Iteration
The same process continues for the remaining iterations in the bubble sort algorithm.

After each iteration in the algorithm, the largest element among the unsorted elements is
positioned at the end of the array.

During each iteration of the bubble sort algorithm, the comparison process occurs up to the last
unsorted element in the array.
The array is considered sorted when all the unsorted elements have been correctly placed in
their respective positions.

EXAMPLE:

/* Bubble sort code */

#include <stdio.h>

int main()
{
    int array[100], n, c, d, swap;

    printf("Enter number of elements\n");
    scanf("%d", &n);

    printf("Enter %d integers\n", n);
    for (c = 0; c < n; c++)
        scanf("%d", &array[c]);

    for (c = 0; c < n - 1; c++)
        for (d = 0; d < n - c - 1; d++)
            if (array[d] > array[d+1]) /* For decreasing order use '<' instead of '>' */
            {
                swap = array[d];
                array[d] = array[d+1];
                array[d+1] = swap;
            }

    printf("Sorted list in ascending order:\n");
    for (c = 0; c < n; c++)
        printf("%d\n", array[c]);

    return 0;
}
Output:

Enter number of elements

3

Enter 3 integers

15

32

20

Sorted list in ascending order:

15

20

32

BUBBLE SORT TIME COMPLEXITY

Best Case Complexity: O(n)

If the array is already sorted, there is no need to perform the sorting algorithm since the
elements are already in the desired order.

Average Case Complexity: O(n²)

This situation arises when the elements of the array are in a disordered or jumbled state,
meaning they are neither arranged in ascending nor descending order.

Worst Case Complexity: O(n²)

The worst-case scenario for sorting in ascending order using bubble sort arises when the array
is initially arranged in descending order.
To prevent the O(N^2) time complexity of bubble sort, it is advisable to check if the array is
already sorted before executing the algorithm. By verifying the sorted status beforehand,
unnecessary iterations and comparisons can be avoided, resulting in improved efficiency.

BUBBLE SORT SPACE COMPLEXITY

The space complexity of bubble sort is O(1) because it only requires a constant amount of
additional space for swapping elements using a temporary variable.

ADVANTAGES

Bubble sort is straightforward to understand and execute. Bubble sort does not need any extra
memory space.

Bubble sort is a stable sorting algorithm, which implies that elements with the same key value
maintain their relative order in the sorted output.

DISADVANTAGES

Bubble sort exhibits a time complexity of O(N^2), rendering it inefficient for handling large data
sets due to its relatively slow performance.

Bubble sort is a comparison-based sorting algorithm, implying that it relies on a comparison


operator to establish the relative order of elements within the input data set. This characteristic
can potentially affect the algorithm's efficiency in certain scenarios.

KEY TAKEAWAYS

● Bubble Sort is a straightforward sorting algorithm that operates by iteratively


exchanging adjacent elements when they are in the incorrect order.
● Bubble sort is an in-place algorithm since it performs the swapping of adjacent pairs
without requiring the use of any significant additional data structure.
● The average-case and worst-case time complexity of bubble sort is O(n^2).
SORTING
SUB LESSON 11.2

SELECTION SORT

Selection sort is a sorting algorithm that iteratively selects the smallest element from an
unsorted list in each iteration and places it at the beginning of the unsorted portion of the list.
Selection sort is a simple and efficient sorting algorithm.

The standard(default) implementation of the Selection Sort Algorithm is not inherently stable.
The Selection Sort Algorithm is an in-place sorting algorithm, meaning it does not require
additional space.

This algorithm is not well-suited for large data sets due to its average and worst-case
complexities, which are both O(n^2), where n represents the number of items.
The Selection sort algorithm can be implemented in C using for and while loops, as well as by
utilizing functions.

Selection sort is commonly used in situations where -


A. A small array has to be sorted.
B. The cost of swapping elements is not an issue.
C. It is necessary to examine all elements in the process.
D. The cost of writing to memory becomes significant, particularly in flash memory
systems where the number of writes or swaps is O(n) in comparison to the O(n^2)
complexity of bubble sort.

WORKING OF SELECTION SORT

Now, let's see how the selection sort algorithm operates.


1. Assign the first element as the minimum.
2. Compare the minimum value with the second element. If the second element is smaller than
the current minimum, update the minimum value to be the second element.
Continue comparing the minimum value with the third element. If the third element is smaller,
update the minimum value to be the third element; otherwise, no action is taken. This process
continues until the last element is reached.

3. After each iteration, the minimum element is positioned at the beginning of the unsorted
portion of the list.

4. During each iteration, the indexing starts from the first unsorted element. Steps 1 to 3 are
repeated until all the elements are correctly positioned.
SELECTION SORT CODE

#include <stdio.h>

// function to swap the positions of two elements


void swap(int *a, int *b) {
int temp = *a;
*a = *b;
*b = temp;
}

void selectionSort(int array[], int size) {


for (int step = 0; step < size - 1; step++) {
int min_idx = step;
for (int i = step + 1; i < size; i++) {

// To sort in descending order, change < to > in this line.


// Select the minimum element in each loop.
if (array[i] < array[min_idx])
min_idx = i;
}

// put min at the correct position


swap(&array[min_idx], &array[step]);
}
}

// function to print an array


void printArray(int array[], int size) {
for (int i = 0; i < size; ++i) {
printf("%d ", array[i]);
}
printf("\n");
}

// driver code
int main() {
int data[] = {20, 12, 10, 15, 2};
int size = sizeof(data) / sizeof(data[0]);
selectionSort(data, size);
printf("Sorted array in Ascending Order:\n");
printArray(data, size);
}
OUTPUT :

Sorted array in Ascending Order:
2 10 12 15 20
SELECTION SORT TIME COMPLEXITY

Best Case Complexity: O(n^2)

This bound holds even when the array is already sorted, because every pass still scans the remaining elements to find the minimum.
Average Case Complexity: O(n^2)
This situation arises when the elements of the array are in a disordered state, without a specific ascending or descending order.
Worst Case Complexity: O(n^2)
The worst-case scenario arises when the array is in descending order and we intend to sort it in ascending order.

SELECTION SORT SPACE COMPLEXITY

The space complexity is O(1) since only an additional variable, 'temp', is utilized.

ADVANTAGES OF SELECTION SORT


1. It is a straightforward and easily understood algorithm.
2. It performs effectively with small datasets.
DISADVANTAGES OF SELECTION SORT

1. The worst-case and average-case time complexity of Selection sort is O(n^2).


2. Selection sort is not well-suited for large datasets.

KEY TAKEAWAYS

● Selection sort is a sorting algorithm that iteratively selects the smallest element from an
unsorted list in each iteration and places it at the beginning of the unsorted portion of
the list.
● This algorithm is not well-suited for large data sets due to its average and worst-case
complexities, which are both O(n^2), where n represents the number of items.
SORTING
SUB LESSON 11.3

INSERTION SORT

The insertion sort algorithm iterates through the unsorted elements and places each one at its appropriate position within the sorted portion of the array. It operates in a
manner similar to sorting cards in a hand during a card game.
It begins by assuming that the first card is already sorted. Then, we pick an unsorted card and
compare it with the first card. If the unsorted card is greater, it is positioned on the right side;
otherwise, it is placed on the left side. This process is repeated for each unsorted card, ensuring
they are correctly placed in their respective positions.

Insertion sort utilizes a similar approach. The concept behind the insertion sort algorithm is to
select an element and iterate it through the sorted array.

This algorithm is considered one of the simplest sorting algorithms due to its straightforward
implementation.
Generally, insertion sort is considered efficient for sorting small amounts of data.
Insertion sort exhibits adaptability, making it suitable for data sets that are partially sorted.
The Insertion Sort algorithm adopts an incremental approach.
Insertion sort is in-place algorithm because extra space required is not used to manipulate
input.
Applications of Insertion sort are :
A. It is commonly used when dealing with a small number of elements.
B. It can also be advantageous when the input array is nearly sorted, with only a
few elements out of place within a larger array.

WORKING OF INSERTION SORT


Now, let's see how the insertion sort algorithm operates.
Consider we have to sort the following array :
1. In the sorting process, we assume the first element in the array is already in its sorted
position. We then select the second element and temporarily store it in a variable called 'key'.
Compare the value of the 'key' variable with the value of the first element. If the first element is
greater than the 'key', then the 'key' is inserted before the first element.

2. At this point, the first two elements are now in sorted order.
Next, consider the third element and compare it with the elements to its left. Place it just after the nearest element that is smaller than it. If no element is smaller than it, place it at the beginning of the array.
3. Likewise, continue placing each unsorted element in its correct position.
INSERTION SORT CODE

// Insertion sort in C

#include <stdio.h>

// Function to print an array


void printArray(int array[], int size) {
for (int i = 0; i < size; i++) {
printf("%d ", array[i]);
}
printf("\n");
}
void insertionSort(int array[], int size) {
for (int step = 1; step < size; step++) {
int key = array[step];
int j = step - 1;

// Compare key with each element on the left of it until an element smaller than
// it is found.
// For descending order, change key<array[j] to key>array[j].
while (j >= 0 && key < array[j]) {   /* test j >= 0 first to avoid reading array[-1] */
array[j + 1] = array[j];
--j;
}
array[j + 1] = key;
}
}

// Driver code
int main() {
int data[] = {9, 5, 1, 4, 3};
int size = sizeof(data) / sizeof(data[0]);
insertionSort(data, size);
printf("Sorted array in ascending order:\n");
printArray(data, size);
}

OUTPUT :
Sorted array in ascending order:
1 3 4 5 9
INSERTION SORT TIME COMPLEXITY

Best Case Complexity: O(n)


When the array is already sorted, the outer loop runs for 'n' iterations, while the inner loop
does not run at all. As a result, there are only 'n' comparisons. Therefore, the complexity is
linear.
Average Case Complexity: O(n^2)
It refers to a scenario where the elements of an array are in an unordered arrangement, lacking any specific ascending or descending order.
Worst Case Complexity: O(n^2)
The worst case arises when the input is in the reverse of the desired order, for example an array in descending order that is to be sorted in ascending order.
For each element, it needs to be compared with every other element. As a result, for every nth
element, (n-1) comparisons are performed.
Hence, the total number of comparisons is approximately equal to n*(n-1), which is equivalent
to n^2.
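The comparison count just described can be written exactly as a sum over the passes, each pass i performing up to i comparisons:

```latex
\sum_{i=1}^{n-1} i \;=\; \frac{n(n-1)}{2} \;=\; O(n^2)
```

So "approximately n*(n-1)" understates by a factor of two, but the asymptotic class O(n^2) is the same.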

INSERTION SORT SPACE COMPLEXITY

The space complexity is O(1) because only one extra variable, 'key', is used. Insertion sort is also a stable sorting algorithm.

ADVANTAGES OF INSERTION SORT

1. It provides good efficiency when applied to small data sets.


2. It only requires a constant amount of additional memory space, denoted as O(1).
3. It performs effectively when applied to data sets that are already substantially sorted.

DISADVANTAGES OF INSERTION SORT

1. Insertion sort is not as efficient when dealing with larger data sets.
2. The worst-case time complexity of the insertion sort algorithm is O(n^2).
3. Insertion sort is less efficient than heap sort, quick sort, merge sort, etc.

KEY TAKEAWAYS

● The insertion sort algorithm iterates through the unsorted elements and places each
element at its appropriate position within the sorted portion of the array.
● The insertion sort algorithm operates in a manner similar to sorting cards in a hand
during a card game.
SORTING
SUB LESSON 11.4

MERGE SORT

Merge Sort is a widely used sorting algorithm that follows the Divide and Conquer principle.
In this approach, a problem is divided into several smaller sub-problems, which are solved
independently. Eventually, the solutions to these sub-problems are combined to obtain the
final solution. To carry out the merging process, we need to define the merge() function.
It is widely recognized as a highly respected and efficient algorithm because of its Time
Complexity.
It serves as an effective algorithm for gaining proficiency in recursion and problem-solving
techniques by employing the divide-and-conquer approach.
Merge sort does not sort the array in place, meaning it requires additional memory space
proportional to the size of the input array.
Merge sort exhibits consistent performance regardless of the initial order of the elements, as it
always performs the same number of comparisons and moves for a given input size.
Merge sort performs well on linked lists due to its ability to easily split and merge linked list
nodes without excessive memory operations.

WORKING OF MERGE SORT


Now, let's examine the working of the merge sort algorithm.
To understand the functioning of the merge sort algorithm, let's consider an array that is not sorted. Consider the array elements as follows:
{12, 31, 25, 8, 32, 17, 40, 42}
In the merge sort algorithm, the initial step is to divide the given array into two equal halves.
This process of dividing the list into equal parts continues until further division is not possible.
Since the given array consists of eight elements, it is divided into two arrays of equal size, each
containing four elements.
Next, further divide these two arrays into halves. Since they each have a size of 4, divide them
into new arrays of size 2.

Now, further divide these arrays until reaching the smallest indivisible elements.

Now, reassemble them in the same manner as they were originally divided.
When combining, begin by comparing the elements of each array, and then merge them into a
new array in a sorted order.
Next, compare the values 12 and 31; since they are already in sorted order, leave them as they
are. Then, compare 25 and 8; in the list of two values, place 8 first, followed by 25. Proceed to
compare 32 and 17, sort them, and place 17 first, followed by 32. Lastly, compare 40 and 42,
and arrange them in sequence.

In the subsequent iteration of combining, compare the arrays containing two data values and
merge them into a new array with the sorted order of the elements.

Now, perform the final merge of the arrays. After the completion of this merging process, the
resulting array will appear as follows -

Now, the array has been successfully sorted.


MERGE SORT CODE
#include <stdio.h>

/* Function to merge the subarrays of a[] */


void merge(int a[], int beg, int mid, int end)
{
int i, j, k;
int n1 = mid - beg + 1;
int n2 = end - mid;

int LeftArray[n1], RightArray[n2]; //temporary arrays

/* copy data to temp arrays */


for (int i = 0; i < n1; i++)
LeftArray[i] = a[beg + i];
for (int j = 0; j < n2; j++)
RightArray[j] = a[mid + 1 + j];

i = 0; /* initial index of first sub-array */


j = 0; /* initial index of second sub-array */
k = beg; /* initial index of merged sub-array */

while (i < n1 && j < n2)


{
if(LeftArray[i] <= RightArray[j])
{
a[k] = LeftArray[i];
i++;
}
else
{
a[k] = RightArray[j];
j++;
}
k++;
}
while (i<n1)
{
a[k] = LeftArray[i];
i++;
k++;
}

while (j<n2)
{
a[k] = RightArray[j];
j++;
k++;
}
}

void mergeSort(int a[], int beg, int end)


{
if (beg < end)
{
int mid = beg + (end - beg) / 2;  /* avoids overflow of beg + end */
mergeSort(a, beg, mid);
mergeSort(a, mid + 1, end);
merge(a, beg, mid, end);
}
}

/* Function to print the array */


void printArray(int a[], int n)
{
int i;
for (i = 0; i < n; i++)
printf("%d ", a[i]);
printf("\n");
}

int main()
{
int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
int n = sizeof(a) / sizeof(a[0]);
printf("Before sorting array elements are - \n");
printArray(a, n);
mergeSort(a, 0, n - 1);
printf("After sorting array elements are - \n");
printArray(a, n);
return 0;
}

OUTPUT :
Before sorting array elements are -
12 31 25 8 32 17 40 42
After sorting array elements are -
8 12 17 25 31 32 40 42

MERGE SORT TIME COMPLEXITY

Best Case Complexity: O(n*log n)


This situation arises when the array is already sorted.
Average Case Complexity: O(n*log n)
It happens when the elements of an array are in a disorganized order, neither properly
ascending nor properly descending.
Worst Case Complexity: O(n*log n)
It happens when the elements of an array need to be arranged in the opposite order, typically
in descending or reverse order.
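The identical O(n*log n) bound in all three cases follows from the divide-and-conquer recurrence: every call splits the array in half and then does linear-time merging. A standard sketch, with c an implementation-dependent constant:

```latex
T(n) = 2\,T\!\left(\frac{n}{2}\right) + cn, \qquad T(1) = c
```

Unrolling the recurrence gives log2(n) levels, each costing cn in total, so T(n) = cn*log2(n) + cn = O(n log n) regardless of the input order.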
MERGE SORT SPACE COMPLEXITY

The space complexity of merge sort is O(n) because the merge step copies the elements being merged into temporary left and right subarrays whose combined size is proportional to the input.
Merge sort is classified as a stable sorting algorithm since it preserves the original order of
equal elements in the input array.

ADVANTAGES OF MERGE SORT

1. The worst-case time complexity of merge sort is O(N log N), which indicates its efficient
performance even when dealing with large datasets.
2. Merge sort possesses inherent parallelizability, making it well-suited for leveraging
multiple processors or threads to improve efficiency.

DISADVANTAGES OF MERGE SORT

1. During the sorting process, merge sort necessitates extra memory to hold the merged
sub-arrays.
2. Compared to certain sorting algorithms like insertion sort, Merge sort exhibits a higher
time complexity for small datasets. Consequently, its performance may be slower when
dealing with very small datasets.

KEY TAKEAWAYS
● Merge Sort is a widely used sorting algorithm that follows the Divide and Conquer
principle.

● In this approach, a problem is divided into several smaller sub-problems, which are
solved independently.

● Merge sort does not sort the array in place, meaning it requires additional memory
space proportional to the size of the input array.

SORTING
SUB LESSON 11.5

QUICK SORT

Quicksort is a sorting algorithm that uses the divide and conquer approach, where
1. The Quicksort algorithm divides an array into subarrays by selecting a pivot element,
which is chosen from the array itself.
2. When partitioning the array, the pivot element is positioned in a manner such that
elements smaller than the pivot are placed on the left side, while elements greater than
the pivot are placed on the right side of the pivot.
3. The same approach is applied to divide the left and right subarrays. This process
continues recursively until each subarray contains only one element.
4. At this stage, the individual elements within each subarray are already sorted. Finally,
the sorted subarrays are combined to form a fully sorted array.
It is recognized as a fast and highly efficient sorting algorithm.
The Quicksort algorithm is commonly utilized when
● The Quicksort algorithm is suitable for programming languages that support recursion.
● The Quicksort algorithm is suitable when time complexity is a critical factor.
● The Quicksort algorithm is suitable when space complexity is an important
consideration.

WORKING OF QUICK SORT

1. Choose the pivot element.


Quicksort has various variations regarding the selection of the pivot element. In this
case, we will choose the rightmost element of the array as the pivot.

2. Reorganize the array


Now, the array elements are rearranged in a way that places the elements smaller than
the pivot on the left side and the elements greater than the pivot on the right side.
Here's how we reorganize the array:
A pointer is set at the pivot element, and it is compared with the elements starting from
the first index.

If an element is found to be greater than the pivot element, a second pointer is


positioned for that element.

Now, the pivot is compared with the remaining elements. If a smaller element than the
pivot is encountered, it is swapped with the previously identified greater element.
The process is repeated again to identify the next greater element as the second
pointer. If another smaller element is found, it is swapped with the current smaller
element.

The process continues until the second-to-last element is reached.


Lastly, the pivot element is swapped with the second pointer.

3. Divide Subarrays
The process of selecting pivot elements is repeated separately for the left and right
subarrays, and Step 2 is repeated.

The subarrays are recursively divided until each subarray consists of a single element. At this
point, the array is already sorted.
QUICK SORT CODE

#include <stdio.h>

// function to swap elements


void swap(int *a, int *b) {
int t = *a;
*a = *b;
*b = t;
}

// function to find the partition position


int partition(int array[], int low, int high) {

// select the rightmost element as pivot


int pivot = array[high];

// pointer for greater element


int i = (low - 1);

// traverse each element of the array


// compare them with the pivot
for (int j = low; j < high; j++) {
if (array[j] <= pivot) {

// if element smaller than pivot is found


// swap it with the greater element pointed by i
i++;

// swap element at i with element at j


swap(&array[i], &array[j]);
}
}

// swap the pivot element with the greater element at i


swap(&array[i + 1], &array[high]);

// return the partition point


return (i + 1);
}

void quickSort(int array[], int low, int high) {


if (low < high) {

// find the pivot element such that


// elements smaller than pivot are on left of pivot
// elements greater than pivot are on right of pivot
int pi = partition(array, low, high);

// recursive call on the left of pivot


quickSort(array, low, pi - 1);

// recursive call on the right of pivot


quickSort(array, pi + 1, high);
}
}

// function to print array elements


void printArray(int array[], int size) {
for (int i = 0; i < size; ++i) {
printf("%d ", array[i]);
}
printf("\n");
}

// main function
int main() {
int data[] = {8, 7, 2, 1, 0, 9, 6};

int n = sizeof(data) / sizeof(data[0]);

printf("Unsorted Array\n");
printArray(data, n);

// perform quicksort on data


quickSort(data, 0, n - 1);
printf("Sorted array in ascending order: \n");
printArray(data, n);
}

OUTPUT :
Unsorted Array
8 7 2 1 0 9 6
Sorted array in ascending order:
0 1 2 6 7 8 9

QUICK SORT TIME COMPLEXITY

Best Case: O(n*log n)


This occurs when the pivot element is consistently chosen as the middle element or in close
proximity to the middle element.
Average Case: O(n*log n)
This occurs when the above conditions are not met.
Worst Case: O(n^2)
This occurs when the pivot element chosen is either the largest or smallest element in the
array. To avoid the worst-case scenario in quicksort, it is important to choose the pivot element
carefully.

QUICK SORT SPACE COMPLEXITY

The space complexity of quicksort is O(log n) and it is not stable.


ADVANTAGES OF QUICK SORT
1. Quicksort is a divide-and-conquer algorithm known for its ability to efficiently solve
problems.
2. Quicksort is known for its efficiency when sorting large data sets.
3. Quicksort has a low memory overhead, as it operates efficiently with minimum memory
requirements.

DISADVANTAGES OF QUICK SORT


1. The worst-case time complexity of Quicksort is O(N^2), which can occur when the pivot is chosen badly (for example, when the pivot is always the smallest or largest element).
2. Quicksort may not be the optimal choice for small data sets.

KEY TAKEAWAYS

● Quicksort is a divide-and-conquer algorithm: it selects a pivot, partitions the array so
that smaller elements lie to the pivot's left and greater elements to its right, and
recursively sorts the two subarrays.
● Its best-case and average-case time complexity is O(n*log n), while a badly chosen
pivot can degrade it to O(n^2).
