EXPERIMENT 1
AIM: To implement string functions such as strcpy(), strlen() and strcat() without using
<string.h>.
THEORY:
strcpy() - String Copy Function:
strcpy() copies the contents of one string to another.
We'll iterate through each character of the source string and copy it to the destination string
until we reach the null character ('\0').
strlen() - String Length Function:
strlen() calculates the length of a string.
We'll iterate through the characters of the string until we find the null character ('\0'),
counting the number of characters encountered.
strcat() - String Concatenation Function:
strcat() appends one string to the end of another.
We'll find the end of the destination string and then copy each character from the source
string to the end of the destination string until we reach the null character ('\0').
SOURCE CODE:
#include<stdio.h>
int main(){
char alpha[50]="CopyIt";
char beta[50];
char gamma[50]="Here";
stringcopy(beta, alpha);
printf("The original string is %s.\n",alpha);
printf("The copied string is %s.\n",beta);
printf("The length of this string is %d.\n",
stringlength(alpha));
stringconcat(alpha,gamma);
printf("The concatenated string is %s.",alpha);}
OUTPUT:
LEARNING OUTCOME:
Understanding of string manipulation at a fundamental level.
Practice in implementing algorithms for common string operations.
Improved problem-solving skills by implementing standard library functions from scratch.
EXPERIMENT 2
AIM: To implement one-dimensional, two-dimensional and multi-dimensional arrays.
THEORY:
Arrays are data structures used to store elements of the same type sequentially in memory.
One-dimensional arrays are linear collections accessed using a single index, declared with a
single pair of square brackets. Two-dimensional arrays, arrays of arrays, organize elements
into rows and columns, declared with two pairs of square brackets. Multi-dimensional arrays
extend this concept to more dimensions. They are declared with multiple pairs of square
brackets, and each dimension is accessed using an index. Initialization and access follow
similar patterns across all array types, offering efficient storage and retrieval of data in
various dimensions.
SOURCE CODE:
#include<stdio.h>
//Function to display One Dimensional Array
void displayONE(int arr[], int len){
for (int i = 0; i < len; i++)
{
printf("%d ",arr[i]);
}
}
//Function to display Two Dimensional Array
void displayTWO(int arr[][3], int rows,int cols){
for (int i = 0; i < rows; i++)
{
for (int j = 0; j < cols; j++)
{
printf("%d ",arr[i][j]);
}
printf("\n");
}
}
// Function to display Three Dimensional Array
void displayTHREE(int arr[][2][2], int dim1, int dim2, int dim3){
for (int i = 0; i < dim1; i++){
for (int j = 0; j < dim2; j++){
for (int k = 0; k < dim3; k++){
printf("%d ", arr[i][j][k]);
}
printf("\n");
}
printf("\n");
}
}
int main(){
//One-Dimensional Array
int one[10]={1,2,3,4,5,6,7,8,9,10};
printf("The 1-D array is: \n");
displayONE(one,10);
printf("\n\n");
//Two Dimensional Array
int two[3][3]={{1,2,3},{4,5,6},{7,8,9}};
printf("The 2-D array is: \n");
displayTWO(two,3,3);
printf("\n");
//Multi Dimensional Array
printf("The Multi-D array is: \n");
int multi[2][2][2]={{{1,2},{3,4}},{{5,6},{7,8}}};
displayTHREE(multi, 2,2,2);
}
OUTPUT:
LEARNING OUTCOME:
Familiarity with accessing elements of arrays using indices.
Knowledge of how memory is organized for multi-dimensional arrays.
Experience in handling multi-dimensional data structures for various applications.
EXPERIMENT 3
AIM: To implement linear search and binary search on arrays.
THEORY:
Linear search is a simple searching algorithm that sequentially checks each element of the
array until the desired element is found or the end of the array is reached. It works well for
small arrays or unordered arrays.
Binary search is an efficient searching algorithm for sorted arrays. It compares the target
value with the middle element of the array. If they match, the search is successful. If not, the
half in which the target cannot lie is eliminated, and the search continues on the remaining
half.
SOURCE CODE:
#include<stdio.h>
//Function to implement Linear Search
int linearsearch(int arr[],int size, int key){
for (int i = 0; i < size; i++)
{
if(arr[i]==key){
return i;
}
}
return -1;
}
//Function to implement Binary Search
int binarysearch(int arr[], int low, int high, int key){
if (low<=high){
int mid=(low+high)/2;
if (arr[mid]==key)
{
return mid;
}
else if (arr[mid]<key)
{
return binarysearch(arr,mid+1, high,key);
}
else
return binarysearch(arr, low,mid-1,key);
}
return -1;
}
int main(){
int arr[10]={1,2,3,4,5,6,7,8,9,10};
int key1=5;
int key2=8;
int linear=linearsearch(arr,10,key1);
int binary=binarysearch(arr,0,9,key2);
if (linear!=-1)
{
printf("Value %d found by Linear Search at index
%d.\n",key1,linear);
}
else{
printf("Value %d not found.",key1);
}
if (binary!=-1)
{
printf("Value %d found by Binary Search at index
%d.\n",key2,binary);
}
else{
printf("Value %d not found.",key2);
}
return 0;
}
OUTPUT:
LEARNING OUTCOME:
Familiarity with sequential search (linear search) and its limitations.
Understanding of efficient searching (binary search) on sorted arrays.
Insights into the importance of data organization for efficient search operations.
EXPERIMENT 4
AIM: To implement a stack using an array and show the stack operations.
THEORY: A stack is a linear data structure that follows the Last In, First Out (LIFO)
principle. It means that the last element added to the stack is the first one to be removed.
Stacks are widely used in programming for various applications such as expression
evaluation, function call management (call stack), and backtracking algorithms.
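No listing is reproduced for this experiment. A minimal array-based sketch of the push, pop and peek operations (all identifiers and the capacity here are assumed, not taken from the original) might look like the following:
#include <stdio.h>
#define MAX 100
int stack[MAX];
int top = -1;                        // index of the current top element; -1 means empty
void push(int value) {
    if (top == MAX - 1) {            // overflow check
        printf("Stack Overflow\n");
        return;
    }
    stack[++top] = value;
}
int pop(void) {
    if (top == -1) {                 // underflow check
        printf("Stack Underflow\n");
        return -1;
    }
    return stack[top--];
}
int peek(void) {
    if (top == -1) {
        printf("Stack is empty\n");
        return -1;
    }
    return stack[top];
}
int main(void) {
    push(10);
    push(20);
    push(30);
    printf("Top element: %d\n", peek());   // 30
    printf("Popped: %d\n", pop());         // 30
    printf("Top element: %d\n", peek());   // 20
    return 0;
}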
OUTPUT:
LEARNING OUTCOME:
Understanding of stack data structure and its operations.
Familiarity with array-based implementation of data structures.
Ability to implement common stack operations like push, pop, and peek in C.
Handling of edge cases such as stack overflow and underflow.
EXPERIMENT 5
AIM: To implement a stack using a linked list and show the stack operations.
THEORY: A stack is a linear data structure that follows the Last In, First Out (LIFO)
principle. It means that the last element added to the stack is the first one to be removed.
Stacks are widely used in programming for various applications such as expression
evaluation, function call management (call stack), and backtracking algorithms.
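No listing is reproduced for this experiment either. A minimal linked-list-based sketch of push, pop and display (identifiers assumed) might be:
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int data;
    struct Node* next;
};
struct Node* top = NULL;             // top of the stack
void push(int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->next = top;                // new node points at the old top
    top = node;
}
int pop(void) {
    if (top == NULL) {               // underflow check
        printf("Stack Underflow\n");
        return -1;
    }
    struct Node* temp = top;
    int value = temp->data;
    top = top->next;
    free(temp);
    return value;
}
void display(void) {
    for (struct Node* p = top; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
}
int main(void) {
    push(1);
    push(2);
    push(3);
    display();                       // 3 2 1
    printf("Popped: %d\n", pop());   // 3
    display();                       // 2 1
    return 0;
}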
OUTPUT:
LEARNING OUTCOME:
Develop practical skills in constructing a stack using linked lists, delving into concepts of
memory management and node manipulation crucial for dynamic data structures.
Showcase adeptness in executing essential stack operations, including push, pop, peek, and
display, illuminating the core functionalities of a stack-based data structure.
Grasp the advantages inherent in utilizing linked lists over arrays for dynamic data structures
like stacks, fostering efficient memory usage and adaptability to varying data sizes.
EXPERIMENT 6
AIM: To implement a queue using an array and demonstrate the queue operations.
THEORY: A queue is a linear data structure that follows the First In, First Out (FIFO)
principle. It means that the element that is added first to the queue will be the first one to be
removed. Queues are widely used in programming for various applications such as job
scheduling, breadth-first search algorithms, and implementing caches.
SOURCE CODE:
int main() {
Queue queue;
initQueue(&queue);
enqueue(&queue, 1);
enqueue(&queue, 2);
enqueue(&queue, 3);
return 0;
}
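The Queue type and the initQueue() and enqueue() functions used by main() above are not shown in this listing. A minimal set of definitions consistent with those calls (the capacity and internal layout are assumed), to be placed above main(), could be:
#define MAX 100
typedef struct {
    int items[MAX];
    int front;
    int rear;
} Queue;
void initQueue(Queue* q) {
    q->front = 0;
    q->rear = -1;                    // empty queue
}
int isEmpty(Queue* q) {
    return q->rear < q->front;
}
void enqueue(Queue* q, int value) {
    if (q->rear == MAX - 1) {        // overflow check
        printf("Queue Overflow\n");
        return;
    }
    q->items[++q->rear] = value;
}
int dequeue(Queue* q) {
    if (isEmpty(q)) {                // underflow check
        printf("Queue Underflow\n");
        return -1;
    }
    return q->items[q->front++];
}
int front(Queue* q) {
    return isEmpty(q) ? -1 : q->items[q->front];
}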
OUTPUT:
LEARNING OUTCOME:
Acquire practical experience in implementing a queue data structure using an array, fostering
comprehension of dynamic memory allocation and management, essential for building
efficient data structures.
Demonstrate proficiency in executing fundamental queue operations, including enqueue,
dequeue, and front, showcasing mastery over core functionalities crucial for data
manipulation and algorithm design.
Understand the importance of error handling and boundary checks in queue operations,
ensuring robustness and reliability in real-world applications, thereby promoting software
resilience and stability.
EXPERIMENT 7
AIM: Implement a queue using a linked list and show the queue operations.
THEORY: A queue implemented with a linked list follows FIFO (First In, First Out) order.
Each element is a node containing data and a pointer to the next node. Operations include
enqueue (adding an element to the end), dequeue (removing the first element), peek (viewing
the front element without removing it), and isEmpty (checking if the queue is empty). Linked
list implementation offers dynamic memory allocation, efficient insertion and deletion, but
may have higher overhead due to maintaining pointers. This design ensures efficient handling
of data in a sequential manner, crucial in scenarios like task scheduling and breadth-first
search algorithms.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
enqueue(&queue, 47);
enqueue(&queue, 45);
enqueue(&queue, 36);
display(&queue);
printf("Dequeued element: %d\n", dequeue(&queue));
display(&queue);
return 0;
}
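Only the tail of main() survives in the listing above; a declaration such as Queue queue; followed by initQueue(&queue); is implied before the enqueue() calls. A minimal linked-list queue matching the enqueue(), dequeue() and display() calls (structure layout assumed), to be placed above main(), might look like:
typedef struct Node {
    int data;
    struct Node* next;
} Node;
typedef struct {
    Node* front;                     // dequeue from here
    Node* rear;                      // enqueue here
} Queue;
void initQueue(Queue* q) {
    q->front = q->rear = NULL;
}
void enqueue(Queue* q, int value) {
    Node* node = (Node*)malloc(sizeof(Node));
    node->data = value;
    node->next = NULL;
    if (q->rear == NULL) {           // first element
        q->front = q->rear = node;
    } else {
        q->rear->next = node;
        q->rear = node;
    }
}
int dequeue(Queue* q) {
    if (q->front == NULL) {
        printf("Queue Underflow\n");
        return -1;
    }
    Node* temp = q->front;
    int value = temp->data;
    q->front = temp->next;
    if (q->front == NULL)            // queue became empty
        q->rear = NULL;
    free(temp);
    return value;
}
void display(Queue* q) {
    for (Node* p = q->front; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
}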
OUTPUT:
LEARNING OUTCOME:
1. Define the characteristics of a queue and its FIFO (First In First Out) behavior.
2. Explain the structure of a linked list and its relevance to queue implementation.
3. Demonstrate proficiency in implementing queue operations (enqueue, dequeue, peek,
isEmpty, size) using a linked list.
4. Analyze the time complexity of queue operations in the linked list implementation.
5. Apply queue-based problem-solving strategies to real-world scenarios, emphasizing the
importance of efficient data structures.
EXPERIMENT 8
AIM: Implement singly linked list with all operations.
THEORY: A singly linked list is a fundamental data structure composed of nodes where
each node points to the next node in the sequence. Operations include insertion, deletion, and
traversal. Insertion involves adding a new node at the beginning, end, or middle of the list,
updating pointers accordingly. Deletion removes a node by updating pointers to bypass it.
Traversal involves visiting each node sequentially. Implementing these operations requires
managing pointers effectively to maintain the integrity of the list. Understanding singly
linked lists is crucial as they serve as the basis for more complex data structures and
algorithms, contributing to efficient memory management and problem-solving.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
Node* head = NULL;
return 0;
}
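main() above is only a stub. A minimal sketch of the node type and a few representative operations (insertion at the beginning and end, deletion by value, and traversal; identifiers assumed) could be:
#include <stdio.h>
#include <stdlib.h>
typedef struct Node {
    int data;
    struct Node* next;
} Node;
void insertAtBeginning(Node** head, int value) {
    Node* node = (Node*)malloc(sizeof(Node));
    node->data = value;
    node->next = *head;
    *head = node;
}
void insertAtEnd(Node** head, int value) {
    Node* node = (Node*)malloc(sizeof(Node));
    node->data = value;
    node->next = NULL;
    if (*head == NULL) { *head = node; return; }
    Node* p = *head;
    while (p->next != NULL) p = p->next;
    p->next = node;
}
void deleteNode(Node** head, int value) {
    Node* p = *head;
    Node* prev = NULL;
    while (p != NULL && p->data != value) {   // find the node
        prev = p;
        p = p->next;
    }
    if (p == NULL) return;                    // not found
    if (prev == NULL) *head = p->next;        // deleting the head
    else prev->next = p->next;                // bypass the node
    free(p);
}
void display(Node* head) {
    for (Node* p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");
}
int main(void) {
    Node* head = NULL;
    insertAtEnd(&head, 1);
    insertAtEnd(&head, 2);
    insertAtBeginning(&head, 0);
    display(head);          // 0 -> 1 -> 2 -> NULL
    deleteNode(&head, 1);
    display(head);          // 0 -> 2 -> NULL
    return 0;
}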
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of singly linked lists, including their structure and
node traversal.
2. Master the implementation of insertion operations (at the beginning, end, and middle) with
correct pointer manipulation.
3. Demonstrate proficiency in deletion operations, ensuring proper reorganization of pointers
to maintain list integrity.
4. Apply traversal techniques to efficiently access and process data within a singly linked list.
5. Analyze the time and space complexity of each operation, fostering a deeper understanding
of algorithmic efficiency in linked list manipulation.
EXPERIMENT 9
AIM: Implement doubly linked list with all operations.
THEORY: A doubly linked list is a linear data structure comprising nodes with two pointers:
one pointing to the next node and another to the previous node. Operations include insertion,
deletion, and traversal. Insertion can occur at the beginning, end, or middle of the list,
requiring updates to adjacent pointers. Deletion removes a node, necessitating adjustment of
adjacent pointers to maintain connectivity. Traversal involves moving forward or backward
through the list by following pointers. Implementing doubly linked lists demands careful
management of both forward and backward pointers to ensure efficient data manipulation.
Mastery of these operations facilitates efficient data storage and retrieval in various
applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
Node* head = NULL;
insertAtBeginning(&head, 5);
insertAtBeginning(&head, 10);
insertAtEnd(&head, 15);
insertAtEnd(&head, 20);
insertAtPosition(&head, 2, 12);
deleteNode(&head, 12);
display(head); // Output: 10 <-> 5 <-> 15 <-> 20 <-> NULL
return 0;
}
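The insertion, deletion and display functions called by main() above are not reproduced. A minimal sketch consistent with those calls (a 1-based position is assumed for insertAtPosition()), to be placed above main(), might be:
typedef struct Node {
    int data;
    struct Node* prev;
    struct Node* next;
} Node;
Node* newNode(int value) {
    Node* n = (Node*)malloc(sizeof(Node));
    n->data = value;
    n->prev = n->next = NULL;
    return n;
}
void insertAtBeginning(Node** head, int value) {
    Node* n = newNode(value);
    n->next = *head;
    if (*head != NULL) (*head)->prev = n;
    *head = n;
}
void insertAtEnd(Node** head, int value) {
    Node* n = newNode(value);
    if (*head == NULL) { *head = n; return; }
    Node* p = *head;
    while (p->next != NULL) p = p->next;
    p->next = n;
    n->prev = p;
}
// Insert value so that it becomes the node at the given 1-based position (assumed convention)
void insertAtPosition(Node** head, int pos, int value) {
    if (pos <= 1 || *head == NULL) { insertAtBeginning(head, value); return; }
    Node* p = *head;
    for (int i = 1; i < pos - 1 && p->next != NULL; i++) p = p->next;
    Node* n = newNode(value);
    n->next = p->next;
    n->prev = p;
    if (p->next != NULL) p->next->prev = n;
    p->next = n;
}
void deleteNode(Node** head, int value) {
    Node* p = *head;
    while (p != NULL && p->data != value) p = p->next;
    if (p == NULL) return;                        // not found
    if (p->prev != NULL) p->prev->next = p->next;
    else *head = p->next;                         // deleting the head
    if (p->next != NULL) p->next->prev = p->prev;
    free(p);
}
void display(Node* head) {
    for (Node* p = head; p != NULL; p = p->next)
        printf("%d <-> ", p->data);
    printf("NULL\n");
}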
OUTPUT:
LEARNING OUTCOME:
1. Gain a deep understanding of doubly linked lists, including their node structure and
bidirectional traversal.
2. Develop proficiency in implementing insertion operations (at the beginning, end, and
middle) with accurate pointer manipulation for both forward and backward links.
3. Master deletion operations, ensuring proper adjustment of adjacent pointers to maintain list
integrity in both directions.
4. Apply traversal techniques to efficiently navigate through the list in both forward and
backward directions.
5. Analyze the time and space complexity of each operation, fostering a comprehensive
understanding of algorithmic efficiency in doubly linked list manipulation.
EXPERIMENT 10
AIM: Implement circular linked list with all operations.
THEORY: A circular linked list is a data structure where the last node points back to the
first, forming a circular arrangement. Operations include insertion, deletion, and traversal.
Insertion involves adding nodes at the beginning, end, or middle, with proper adjustment of
pointers to maintain circular connectivity. Deletion removes nodes, necessitating updates to
adjacent pointers to preserve circularity. Traversal requires careful handling to avoid infinite
loops while visiting each node exactly once. Implementing circular linked lists demands
meticulous pointer management to ensure seamless circular navigation. Mastery of these
operations enables efficient storage and manipulation of data in applications requiring
cyclical data structures.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
Node* head = NULL;
insertAtBeginning(&head, 5);
insertAtBeginning(&head, 10);
insertAtEnd(&head, 15);
insertAtEnd(&head, 20);
insertAtPosition(&head, 2, 12);
deleteNode(&head, 12);
display(head); // Output: 10 -> 5 -> 15 -> 20 -> (head)
return 0;
}
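The functions called by main() above are not reproduced. A minimal circular singly linked list sketch of insertAtBeginning(), insertAtEnd(), deleteNode() and display() (identifiers matching the calls above; insertAtPosition() would follow the same pattern and is omitted for brevity), to be placed above main(), could be:
typedef struct Node {
    int data;
    struct Node* next;
} Node;
// Find the last node (the one whose next pointer points back to the head)
static Node* lastNode(Node* head) {
    Node* p = head;
    while (p->next != head) p = p->next;
    return p;
}
void insertAtBeginning(Node** head, int value) {
    Node* n = (Node*)malloc(sizeof(Node));
    n->data = value;
    if (*head == NULL) {             // first node points to itself
        n->next = n;
        *head = n;
        return;
    }
    Node* last = lastNode(*head);
    n->next = *head;
    last->next = n;                  // keep the list circular
    *head = n;
}
void insertAtEnd(Node** head, int value) {
    insertAtBeginning(head, value);  // new node temporarily becomes the head...
    *head = (*head)->next;           // ...so advance head; the new node is now the last one
}
void deleteNode(Node** head, int value) {
    if (*head == NULL) return;
    Node* p = *head;
    Node* prev = lastNode(*head);
    do {
        if (p->data == value) {
            if (p->next == p) {               // only node in the list
                *head = NULL;
            } else {
                prev->next = p->next;
                if (p == *head) *head = p->next;   // deleting the head
            }
            free(p);
            return;
        }
        prev = p;
        p = p->next;
    } while (p != *head);
}
void display(Node* head) {
    if (head == NULL) { printf("(empty)\n"); return; }
    Node* p = head;
    do {
        printf("%d -> ", p->data);
        p = p->next;
    } while (p != head);
    printf("(head)\n");
}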
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of circular linked lists, including their unique
structure and traversal.
2. Master the implementation of insertion operations (at the beginning, end, and middle),
ensuring proper adjustment of pointers to maintain circular connectivity.
3. Demonstrate proficiency in deletion operations, accurately updating pointers to remove
nodes while preserving circularity.
4. Apply traversal techniques to navigate the circular list efficiently, ensuring each node is
visited exactly once.
5. Analyze the time and space complexity of each operation, fostering a deeper understanding
of algorithmic efficiency in circular linked list manipulation.
EXPERIMENT 11
AIM: Implement reversing of a Singly linked list.
THEORY: Reversing a singly linked list involves changing the direction of pointers to flip
the list end-to-end. The process typically includes iterating through the list while reassigning
each node's pointer to its preceding node, effectively reversing the order. Implementation can
be iterative or recursive, with careful consideration of pointer manipulation to avoid losing
connectivity. Reversing a singly linked list facilitates efficient traversal from the previous tail
to the new head, enabling operations like searching or sorting in reverse order. Mastery of
this operation enhances understanding of pointer manipulation and algorithmic problem-
solving in linked list manipulation tasks.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
Node* head = NULL;
return 0;
}
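main() above is only a stub. A minimal iterative reversal sketch (identifiers assumed) might look like:
#include <stdio.h>
#include <stdlib.h>
typedef struct Node {
    int data;
    struct Node* next;
} Node;
void push(Node** head, int value) {   // insert at the front (helper)
    Node* n = (Node*)malloc(sizeof(Node));
    n->data = value;
    n->next = *head;
    *head = n;
}
// Iterative reversal: walk the list once, pointing each node at its predecessor
void reverse(Node** head) {
    Node* prev = NULL;
    Node* curr = *head;
    while (curr != NULL) {
        Node* next = curr->next;      // remember the rest of the list
        curr->next = prev;            // flip the pointer
        prev = curr;
        curr = next;
    }
    *head = prev;                     // prev is the new head
}
void display(Node* head) {
    for (Node* p = head; p != NULL; p = p->next)
        printf("%d -> ", p->data);
    printf("NULL\n");
}
int main(void) {
    Node* head = NULL;
    for (int i = 5; i >= 1; i--) push(&head, i);   // builds 1 2 3 4 5
    display(head);     // 1 -> 2 -> 3 -> 4 -> 5 -> NULL
    reverse(&head);
    display(head);     // 5 -> 4 -> 3 -> 2 -> 1 -> NULL
    return 0;
}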
OUTPUT:
LEARNING OUTCOME:
1. Understand the concept of reversing a singly linked list, comprehending its significance in
data manipulation.
2. Implement iterative and recursive algorithms for reversing a singly linked list,
demonstrating proficiency in pointer manipulation.
3. Analyze the time and space complexity of both iterative and recursive approaches to
reversing a singly linked list.
4. Apply the reversed singly linked list in various problem-solving scenarios, such as
palindrome detection or efficiently accessing elements in reverse order.
5. Evaluate the effectiveness of different reversal techniques, fostering a deeper
understanding of algorithmic optimization and data structure manipulation.
EXPERIMENT 12
AIM: Implement reversing of a doubly linked list.
THEORY: Reversing a doubly linked list involves altering the direction of both forward and
backward pointers to invert the list. The process requires traversing the list while swapping
the pointers of each node with its previous and next nodes, effectively reversing the order.
Implementation can be iterative or recursive, ensuring proper adjustment of pointers to
maintain bidirectional connectivity. Reversing a doubly linked list enables efficient traversal
from the previous tail to the new head and vice versa, enhancing operations like searching or
sorting in reverse order. Mastery of this operation deepens understanding of bidirectional
pointer manipulation and algorithmic strategies in linked list manipulation tasks.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
Node* head = NULL;
return 0;
}
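main() above is again only a stub. A minimal sketch that reverses a doubly linked list by swapping each node's prev and next pointers (identifiers assumed) could be:
#include <stdio.h>
#include <stdlib.h>
typedef struct Node {
    int data;
    struct Node* prev;
    struct Node* next;
} Node;
void push(Node** head, int value) {   // insert at the front (helper)
    Node* n = (Node*)malloc(sizeof(Node));
    n->data = value;
    n->prev = NULL;
    n->next = *head;
    if (*head != NULL) (*head)->prev = n;
    *head = n;
}
// Reverse by swapping the prev and next pointers of every node
void reverse(Node** head) {
    Node* curr = *head;
    Node* last = NULL;
    while (curr != NULL) {
        Node* tmp = curr->prev;       // swap the two links
        curr->prev = curr->next;
        curr->next = tmp;
        last = curr;
        curr = curr->prev;            // prev now holds the old next pointer
    }
    *head = last;                     // last visited node is the new head
}
void display(Node* head) {
    for (Node* p = head; p != NULL; p = p->next)
        printf("%d <-> ", p->data);
    printf("NULL\n");
}
int main(void) {
    Node* head = NULL;
    for (int i = 4; i >= 1; i--) push(&head, i);   // builds 1 2 3 4
    display(head);    // 1 <-> 2 <-> 3 <-> 4 <-> NULL
    reverse(&head);
    display(head);    // 4 <-> 3 <-> 2 <-> 1 <-> NULL
    return 0;
}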
OUTPUT:
LEARNING OUTCOME:
1. Grasp the concept of reversing a doubly linked list, understanding its importance in
bidirectional data manipulation.
2. Implement iterative and recursive algorithms for reversing a doubly linked list,
demonstrating proficiency in bidirectional pointer manipulation.
3. Analyze the time and space complexity of both iterative and recursive approaches to
reversing a doubly linked list.
4. Apply the reversed doubly linked list in various problem-solving scenarios, such as
efficient traversal in both forward and reverse directions.
5. Evaluate the effectiveness of different reversal techniques, fostering a deeper
understanding of algorithmic optimization and bidirectional data structure manipulation.
EXPERIMENT 13
AIM: Implement Binary Trees using Arrays.
THEORY: Implementing binary trees using arrays involves mapping tree nodes to array
indices, optimizing memory usage while enabling efficient operations. In this approach, the
root node is stored at index 0, with each node's left child at index 2*i + 1 and right child at
index 2*i + 2. Operations include insertion, deletion, and traversal. Insertion involves placing
elements at the next available index, while deletion requires maintaining tree balance.
Traversal methods like inorder, preorder, and postorder are adapted to navigate the array
representation efficiently. Implementing binary trees with arrays offers a balance between
memory efficiency and ease of traversal, facilitating various tree-based algorithms and
applications.
SOURCE CODE:
// Main function
int main() {
TreeNode* tree[MAX_SIZE] = {NULL};
int size = 0; // Number of nodes in the tree
return 0;
}
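main() above is only a stub, and TreeNode and MAX_SIZE are never defined. A minimal self-contained sketch that stores the tree in a plain int array and uses the 2*i + 1 / 2*i + 2 indexing described in the theory (details assumed) might be:
#include <stdio.h>
#define MAX_SIZE 15
#define EMPTY -1                     // marker for an unused slot
int tree[MAX_SIZE];
int size = 0;                        // number of nodes stored so far
// Level-order insertion: place the value at the next free index
void insert(int value) {
    if (size == MAX_SIZE) {
        printf("Tree is full\n");
        return;
    }
    tree[size++] = value;
}
// Inorder traversal adapted to array indices: left child at 2*i + 1, right child at 2*i + 2
void inorder(int i) {
    if (i >= size || tree[i] == EMPTY) return;
    inorder(2 * i + 1);
    printf("%d ", tree[i]);
    inorder(2 * i + 2);
}
int main(void) {
    for (int i = 0; i < MAX_SIZE; i++) tree[i] = EMPTY;
    insert(1);
    insert(2);
    insert(3);
    insert(4);
    insert(5);
    inorder(0);                      // prints: 4 2 5 1 3
    printf("\n");
    return 0;
}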
OUTPUT:
LEARNING OUTCOME:
1. Understand the concept of representing binary trees using arrays, grasping the mapping
between tree nodes and array indices.
2. Implement efficient algorithms for insertion and deletion in a binary tree using array-based
representation, ensuring proper adjustment of array indices.
3. Master traversal techniques adapted to array-based binary trees, including inorder,
preorder, and postorder traversal, for efficient navigation.
4. Analyze the time and space complexity of operations in array-based binary trees, fostering
a deeper understanding of algorithmic efficiency.
5. Apply array-based binary trees in various problem-solving scenarios, such as binary search
tree operations or heap-based data structures, demonstrating proficiency in data structure
manipulation and algorithmic optimization.
EXPERIMENT 14
AIM: Implement binary trees using a linked list.
THEORY: Implementing binary trees using linked lists involves each node containing
pointers to its left and right children. In this approach, nodes are dynamically allocated,
allowing for flexible memory usage. Operations include insertion, deletion, and traversal.
Insertion involves recursively traversing the tree to find the appropriate position for the new
node. Deletion requires handling cases such as nodes with one or two children. Traversal
methods like inorder, preorder, and postorder recursively visit nodes to process data
efficiently. Implementing binary trees with linked lists offers ease of insertion and deletion,
making it suitable for dynamic data structures and algorithms requiring frequent
modifications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
TreeNode* root = NULL;
return 0;
}
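main() above is only a stub. A minimal sketch of the node type, a constructor and an inorder traversal (identifiers assumed) could be:
#include <stdio.h>
#include <stdlib.h>
typedef struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
} TreeNode;
TreeNode* createNode(int value) {
    TreeNode* n = (TreeNode*)malloc(sizeof(TreeNode));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}
// Inorder traversal: left subtree, root, right subtree
void inorder(TreeNode* root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}
int main(void) {
    // Build a small tree by linking nodes explicitly
    TreeNode* root = createNode(1);
    root->left = createNode(2);
    root->right = createNode(3);
    root->left->left = createNode(4);
    root->left->right = createNode(5);
    inorder(root);                   // prints: 4 2 5 1 3
    printf("\n");
    return 0;
}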
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of binary trees implemented using linked lists,
including the structure of tree nodes and pointer management.
2. Implement insertion and deletion operations in a binary tree using linked list
representation, ensuring proper handling of pointers for tree modification.
3. Master traversal techniques adapted to linked list-based binary trees, including inorder,
preorder, and postorder traversal, for efficient data processing.
4. Analyze the time and space complexity of operations in linked list-based binary trees,
fostering a deeper understanding of algorithmic efficiency.
5. Apply linked list-based binary trees in various problem-solving scenarios, such as binary
search tree operations or hierarchical data representation, demonstrating proficiency in
dynamic data structure manipulation and algorithmic optimization.
EXPERIMENT 15
AIM: Implement insertion and deletion on binary search tree.
THEORY: Insertion in a binary search tree (BST) involves recursively comparing the value
of the new node with nodes in the tree. Starting from the root, the algorithm traverses left or
right based on comparison results until a suitable position is found. Once located, the new
node is inserted as a leaf node, maintaining the BST property. Deletion in a BST requires
careful handling to preserve the tree's structure and ordering. Cases include removing a leaf
node, a node with one child, or a node with two children. Adjustments are made to reorganize
the tree while preserving the BST property.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
TreeNode* root = NULL;
return 0;
}
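main() above is only a stub. A minimal BST sketch covering insertion and the three deletion cases (leaf, one child, two children; identifiers assumed) might look like:
#include <stdio.h>
#include <stdlib.h>
typedef struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
} TreeNode;
TreeNode* createNode(int value) {
    TreeNode* n = (TreeNode*)malloc(sizeof(TreeNode));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}
// Recursive BST insertion: smaller keys go left, larger keys go right
TreeNode* insert(TreeNode* root, int value) {
    if (root == NULL) return createNode(value);
    if (value < root->data) root->left = insert(root->left, value);
    else if (value > root->data) root->right = insert(root->right, value);
    return root;
}
// Smallest key in a subtree: keep going left
TreeNode* findMin(TreeNode* root) {
    while (root->left != NULL) root = root->left;
    return root;
}
// BST deletion handling the three cases: leaf, one child, two children
TreeNode* deleteNode(TreeNode* root, int value) {
    if (root == NULL) return NULL;
    if (value < root->data) root->left = deleteNode(root->left, value);
    else if (value > root->data) root->right = deleteNode(root->right, value);
    else {
        if (root->left == NULL) {                // zero or one child
            TreeNode* r = root->right;
            free(root);
            return r;
        } else if (root->right == NULL) {
            TreeNode* l = root->left;
            free(root);
            return l;
        }
        TreeNode* succ = findMin(root->right);   // two children: use the inorder successor
        root->data = succ->data;
        root->right = deleteNode(root->right, succ->data);
    }
    return root;
}
void inorder(TreeNode* root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}
int main(void) {
    TreeNode* root = NULL;
    int keys[] = {50, 30, 70, 20, 40, 60, 80};
    for (int i = 0; i < 7; i++) root = insert(root, keys[i]);
    inorder(root); printf("\n");     // 20 30 40 50 60 70 80
    root = deleteNode(root, 30);
    inorder(root); printf("\n");     // 20 40 50 60 70 80
    return 0;
}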
OUTPUT:
LEARNING OUTCOME:
1. Gain a thorough understanding of insertion algorithms in binary search trees (BSTs),
comprehending the importance of maintaining the BST property.
2. Implement insertion operations in a BST, demonstrating proficiency in recursively
comparing and traversing nodes to maintain ordering.
3. Master deletion algorithms in BSTs, including handling cases of removing leaf nodes,
nodes with one child, and nodes with two children.
4. Apply insertion and deletion techniques in various problem-solving scenarios, such as
maintaining sorted data or efficiently organizing hierarchical structures.
5. Analyze the time and space complexity of insertion and deletion operations in BSTs,
fostering a deeper understanding of algorithmic efficiency in dynamic data structures.
EXPERIMENT 16
AIM: Implement DFS on binary trees.
THEORY: Depth-First Search (DFS) is a traversal algorithm used to explore binary trees
systematically. It starts at the root node and explores as far as possible along each branch
before backtracking. In binary trees, DFS typically involves three traversal methods: inorder,
preorder, and postorder. In inorder traversal, the left subtree is visited first, then the root, then the right subtree (which yields ascending order for a binary search tree). Preorder
traversal visits the root node first, followed by the left and right subtrees. Postorder traversal
visits the left and right subtrees before the root node. DFS on binary trees facilitates various
operations like searching, sorting, and expression evaluation, providing efficient data
exploration in tree structures.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
// Create a binary tree
TreeNode* root = createNode(1);
root->left = createNode(2);
root->right = createNode(3);
root->left->left = createNode(4);
root->left->right = createNode(5);
return 0;
}
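The listing builds a small tree but omits the node type, createNode() and the depth-first traversal routines the experiment is about. A minimal sketch of the missing pieces, to be placed above main() in the same file, could be:
typedef struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
} TreeNode;
TreeNode* createNode(int value) {
    TreeNode* n = (TreeNode*)malloc(sizeof(TreeNode));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}
// The three depth-first orders, each expressed recursively
void inorder(TreeNode* root) {        // left, root, right
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}
void preorder(TreeNode* root) {       // root, left, right
    if (root == NULL) return;
    printf("%d ", root->data);
    preorder(root->left);
    preorder(root->right);
}
void postorder(TreeNode* root) {      // left, right, root
    if (root == NULL) return;
    postorder(root->left);
    postorder(root->right);
    printf("%d ", root->data);
}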
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of Depth-First Search (DFS) traversal algorithms
applied to binary trees, including inorder, preorder, and postorder methods.
2. Implement DFS algorithms effectively to systematically explore binary tree structures,
grasping the concept of recursive traversal.
3. Master the application of DFS for tasks such as searching, sorting, and expression
evaluation in binary trees, demonstrating proficiency in data exploration.
4. Analyze the differences between inorder, preorder, and postorder DFS traversal methods,
fostering a deeper understanding of their respective applications.
5. Apply DFS traversal techniques to solve diverse problem-solving scenarios, enhancing
algorithmic proficiency in tree-based data structures.
EXPERIMENT 17
AIM: Implement BFS on binary trees.
THEORY: Breadth-First Search (BFS) is a traversal algorithm used to explore binary trees
level by level. It starts at the root node and systematically visits each level, exploring all
nodes at the current level before moving to the next. In binary trees, BFS traverses nodes in a
left-to-right fashion, facilitating operations such as level-order traversal. BFS on binary trees
is efficient for tasks like finding the shortest path, level-order printing, or hierarchical data
exploration. By systematically exploring all nodes at each level, BFS ensures optimal
exploration of binary tree structures, enabling a wide range of applications in tree-based
algorithms and data structures.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Main function
int main() {
// Create a binary tree
TreeNode* root = createNode(1);
root->left = createNode(2);
root->right = createNode(3);
root->left->left = createNode(4);
root->left->right = createNode(5);
return 0;
}
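As above, the listing builds a tree but omits the node type, createNode() and the BFS routine itself. A minimal sketch (using a fixed-size array as the queue; identifiers and capacity assumed), to be placed above main(), might be:
typedef struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
} TreeNode;
TreeNode* createNode(int value) {
    TreeNode* n = (TreeNode*)malloc(sizeof(TreeNode));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}
// Level-order (BFS) traversal using a simple fixed-size array as the queue
void bfs(TreeNode* root) {
    if (root == NULL) return;
    TreeNode* queue[100];
    int front = 0, rear = 0;
    queue[rear++] = root;                    // enqueue the root
    while (front < rear) {
        TreeNode* node = queue[front++];     // dequeue the next node
        printf("%d ", node->data);
        if (node->left != NULL) queue[rear++] = node->left;
        if (node->right != NULL) queue[rear++] = node->right;
    }
    printf("\n");
}
Calling bfs(root) at the end of main() would then print the levels in order: 1 2 3 4 5.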
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of Breadth-First Search (BFS) traversal applied to
binary trees, focusing on exploring levels systematically.
2. Implement BFS algorithms effectively to traverse binary tree structures in a level-by-level
manner, grasping the concept of queue-based traversal.
3. Master the application of BFS for tasks such as finding the shortest path, level-order
printing, or hierarchical data exploration in binary trees.
4. Analyze the efficiency of BFS compared to other traversal methods, fostering a deeper
understanding of its advantages in certain scenarios.
5. Apply BFS traversal techniques to solve diverse problem-solving scenarios, enhancing
algorithmic proficiency in tree-based data structures.
EXPERIMENT 18
AIM: Implement inorder, preorder and postorder traversal on trees.
THEORY: Inorder, preorder, and postorder are three traversal methods used to explore tree
structures systematically. In inorder traversal, the left subtree is visited first, then the root, and finally the right subtree, which yields ascending order for a binary search tree. Preorder traversal visits
the root node first, followed by the left and right subtrees recursively. Postorder traversal
visits the left and right subtrees before the root node. These traversal methods facilitate
various operations like searching, sorting, and expression evaluation in trees, offering
different perspectives on tree exploration and manipulation, each with its unique applications
and advantages in algorithmic problem-solving.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
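/* The node type and constructor are not reproduced in this listing; a minimal
   assumed definition, consistent with how the traversals and main() use them: */
struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
};
struct TreeNode* createNode(int value) {
    struct TreeNode* n = (struct TreeNode*)malloc(sizeof(struct TreeNode));
    n->data = value;
    n->left = n->right = NULL;
    return n;
}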
// Inorder traversal
void inorderTraversal(struct TreeNode* root) {
if (root != NULL) {
inorderTraversal(root->left);
printf("%d ", root->data);
inorderTraversal(root->right);
}
}
// Preorder traversal
void preorderTraversal(struct TreeNode* root) {
if (root != NULL) {
printf("%d ", root->data);
preorderTraversal(root->left);
preorderTraversal(root->right);
}
}
// Postorder traversal
void postorderTraversal(struct TreeNode* root) {
if (root != NULL) {
postorderTraversal(root->left);
postorderTraversal(root->right);
printf("%d ", root->data);
}
}
int main() {
// Create a sample binary tree
struct TreeNode* root = createNode(1);
root->left = createNode(2);
root->right = createNode(3);
root->left->left = createNode(4);
root->left->right = createNode(5);
return 0;}
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of the three fundamental tree traversal methods:
inorder, preorder, and postorder.
2. Implement each traversal algorithm effectively, demonstrating proficiency in navigating
tree structures recursively.
3. Master the application of inorder traversal for sorting elements in ascending order within a
tree.
4. Apply preorder traversal to efficiently construct expression trees or generate prefix
expressions.
5. Utilize postorder traversal for tasks such as evaluating mathematical expressions or
deleting nodes in a tree.
EXPERIMENT 19
AIM: Implement the concept of AVL trees.
THEORY: AVL trees are self-balancing binary search trees designed to maintain their
balance through rotations after insertion or deletion operations. Each node in an AVL tree is
associated with a balance factor, ensuring the tree remains balanced by enforcing a maximum
height difference of 1 between the left and right subtrees. Insertion and deletion operations in
AVL trees trigger rotations to restore balance, ensuring efficient search, insertion, and
deletion operations with a worst-case time complexity of O(log n). AVL trees are crucial in
scenarios requiring fast search operations while maintaining a balanced tree structure,
offering optimal performance in dynamic data storage and retrieval applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
int main() {
    // AVL routines are not reproduced in this listing; see the sketch below
    return 0;
}
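The AVL routines themselves are missing from the listing. A minimal sketch of insertion with the four rebalancing cases (identifiers assumed; to be placed above main(), which already has the required headers) could be:
typedef struct AVLNode {
    int key;
    int height;
    struct AVLNode* left;
    struct AVLNode* right;
} AVLNode;
int height(AVLNode* n) { return n ? n->height : 0; }
int max(int a, int b) { return a > b ? a : b; }
AVLNode* newNode(int key) {
    AVLNode* n = (AVLNode*)malloc(sizeof(AVLNode));
    n->key = key;
    n->height = 1;                   // a new node is a leaf
    n->left = n->right = NULL;
    return n;
}
// Right rotation around y (used when the left subtree is too tall)
AVLNode* rotateRight(AVLNode* y) {
    AVLNode* x = y->left;
    AVLNode* t2 = x->right;
    x->right = y;
    y->left = t2;
    y->height = max(height(y->left), height(y->right)) + 1;
    x->height = max(height(x->left), height(x->right)) + 1;
    return x;
}
// Left rotation around x (used when the right subtree is too tall)
AVLNode* rotateLeft(AVLNode* x) {
    AVLNode* y = x->right;
    AVLNode* t2 = y->left;
    y->left = x;
    x->right = t2;
    x->height = max(height(x->left), height(x->right)) + 1;
    y->height = max(height(y->left), height(y->right)) + 1;
    return y;
}
int balanceFactor(AVLNode* n) { return n ? height(n->left) - height(n->right) : 0; }
// Standard BST insertion followed by rebalancing on the way back up
AVLNode* insert(AVLNode* node, int key) {
    if (node == NULL) return newNode(key);
    if (key < node->key) node->left = insert(node->left, key);
    else if (key > node->key) node->right = insert(node->right, key);
    else return node;                                 // no duplicate keys
    node->height = max(height(node->left), height(node->right)) + 1;
    int balance = balanceFactor(node);
    if (balance > 1 && key < node->left->key)         // Left-Left case
        return rotateRight(node);
    if (balance < -1 && key > node->right->key)       // Right-Right case
        return rotateLeft(node);
    if (balance > 1 && key > node->left->key) {       // Left-Right case
        node->left = rotateLeft(node->left);
        return rotateRight(node);
    }
    if (balance < -1 && key < node->right->key) {     // Right-Left case
        node->right = rotateRight(node->right);
        return rotateLeft(node);
    }
    return node;
}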
OUTPUT:
LEARNING OUTCOME:
1. Develop a deep understanding of AVL trees, comprehending their self-balancing nature
and the importance of maintaining balance factors for efficient operations.
2. Implement AVL tree algorithms effectively, including insertion, deletion, and rotation
operations, ensuring balanced tree structure at all times.
3. Master the concept of balance factors and their significance in AVL trees, demonstrating
proficiency in detecting and correcting tree imbalances.
4. Apply AVL trees to various problem-solving scenarios, such as dictionary
implementations, where fast search, insertion, and deletion operations are crucial.
5. Analyze the time and space complexity of AVL tree operations, fostering a deeper
understanding of algorithmic efficiency in dynamic data structures.
EXPERIMENT 20
AIM: Implement interpolation search.
THEORY: Interpolation search is a search algorithm used to find a target value within a
sorted array by estimating its position based on the distribution of values. Unlike binary
search, which always selects the middle element, interpolation search calculates the probable
position of the target by linearly interpolating between array elements. This approach is
particularly effective for uniformly distributed data, offering improved performance by
reducing the number of comparisons required. However, in non-uniformly distributed data or
arrays with repeated elements, interpolation search may exhibit suboptimal performance.
Mastery of interpolation search enhances understanding of search algorithms and their
application in various data retrieval tasks.
SOURCE CODE:
#include <stdio.h>
int interpolationSearch(int arr[], int n, int x) {
    int low = 0;
    int high = n - 1;
    while (low <= high && x >= arr[low] && x <= arr[high]) {
        if (low == high) {
            if (arr[low] == x) return low;
            return -1;
        }
        // Estimate the probable position of x from the value distribution
        int pos = low + ((x - arr[low]) * (high - low)) / (arr[high] - arr[low]);
        if (arr[pos] == x) return pos;
        if (arr[pos] < x)
            low = pos + 1;
        else
            high = pos - 1;
    }
    return -1;
}
int main() {
    int arr[] = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}; // sample sorted data (illustrative)
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 10;
    int index = interpolationSearch(arr, n, x);
    if (index != -1) {
        printf("Element %d found at index %d.\n", x, index);
    } else {
        printf("Element %d not found.\n", x);
    }
    return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Gain a thorough understanding of the interpolation search algorithm, comprehending its
principles of estimating the target position based on data distribution.
2. Implement interpolation search effectively, demonstrating proficiency in calculating
interpolation values and refining search boundaries.
3. Analyze the efficiency of interpolation search compared to other search algorithms,
particularly in scenarios with uniformly distributed data.
4. Apply interpolation search to real-world problem-solving scenarios, such as database
searches or data retrieval tasks, showcasing its effectiveness in certain contexts.
5. Evaluate the limitations and considerations of interpolation search, fostering a deeper
understanding of its applicability and performance characteristics.
EXPERIMENT 21
AIM: Implement ternary search.
THEORY: Ternary search is a divide-and-conquer search algorithm used to find the position
of a target value within a sorted array. It operates by recursively dividing the search interval
into three parts and determining which part contains the target. By repeatedly narrowing
down the search space, ternary search locates the target in O(log n) time. However, it performs more comparisons per step than binary search, so on a sorted array it is generally no faster in practice; the divide-into-three idea is more commonly applied to finding the extremum of a unimodal function. Mastery of ternary search still enriches understanding of divide-and-conquer search algorithms and their role in data retrieval tasks.
SOURCE CODE:
#include <stdio.h>
int ternarySearch(int arr[], int l, int r, int x) {
    if (l > r) return -1;
    int mid1 = l + (r - l) / 3;      // split [l, r] into three parts
    int mid2 = r - (r - l) / 3;
    if (arr[mid1] == x) return mid1;
    if (arr[mid2] == x) return mid2;
    if (x < arr[mid1])
        return ternarySearch(arr, l, mid1 - 1, x);
    else if (x > arr[mid2])
        return ternarySearch(arr, mid2 + 1, r, x);
    else
        return ternarySearch(arr, mid1 + 1, mid2 - 1, x);
}
int main() {
    int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; // sample sorted data (illustrative)
    int n = sizeof(arr) / sizeof(arr[0]);
    int x = 5;
    int index = ternarySearch(arr, 0, n - 1, x);
    if (index != -1) {
        printf("Element %d found at index %d.\n", x, index);
    } else {
        printf("Element %d not found.\n", x);
    }
    return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of the ternary search algorithm, grasping its
divide-and-conquer approach to efficiently locate a target value.
2. Implement ternary search effectively, demonstrating proficiency in recursively dividing the
search interval into three parts and determining the target's location.
3. Analyze the performance of ternary search compared to other search algorithms,
particularly in scenarios with large arrays and uniformly distributed data.
4. Apply ternary search to solve real-world problems requiring fast and efficient search
operations, showcasing its effectiveness in certain contexts.
5. Evaluate the limitations and considerations of ternary search, fostering a deeper
understanding of its applicability and performance characteristics in different scenarios.
EXPERIMENT 22
AIM: Implement selection and insertion sort.
THEORY: Selection sort is a comparison-based sorting algorithm that iteratively selects the
smallest element from the unsorted portion of the array and swaps it with the element at the
current position. This process continues until the entire array is sorted. In contrast, insertion
sort builds the sorted portion of the array one element at a time by iteratively placing each
unsorted element into its correct position among the sorted elements. Both algorithms have a
time complexity of O(n^2), making them suitable for small datasets or as a base case for
more efficient sorting algorithms. Mastery of selection and insertion sort enhances
understanding of sorting techniques and algorithmic efficiency.
SOURCE CODE:
#include <stdio.h>
minIndex = i;
minIndex = j;
temp = arr[i];
arr[i] = arr[minIndex];
arr[minIndex] = temp;
int i, key, j;
key = arr[i];
j = i - 1;
arr[j + 1] = arr[j];
j = j - 1;
arr[j + 1] = key;
}
void printArray(int arr[], int n) {
printf("\n");
int main() {
printArray(arr, n);
selectionSort(arr, n);
printArray(arr, n);
insertionSort(arr, n);
printArray(arr, n);
return 0;
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of selection and insertion sort algorithms,
including their comparison-based sorting techniques.
2. Implement selection sort effectively by iteratively selecting the smallest element and
swapping it into the correct position.
3. Master insertion sort algorithms, demonstrating proficiency in iteratively building the
sorted portion of the array by inserting elements into their correct positions.
4. Analyze the time complexity of selection and insertion sort, fostering a deeper
understanding of their performance characteristics for small to moderate-sized datasets.
5. Apply selection and insertion sort techniques to sort arrays efficiently, demonstrating
algorithmic proficiency in sorting algorithms and their practical applications.
EXPERIMENT 23
AIM: Implement quick sort
THEORY: Quick sort is a widely used comparison-based sorting algorithm renowned for its
efficiency and simplicity. It employs a divide-and-conquer strategy, selecting a pivot element
to partition the array into two subarrays: elements less than the pivot and elements greater
than the pivot. These subarrays are recursively sorted, with the base case being an array of
size one or zero. Quick sort is highly efficient, with an average time complexity of O(n log n)
and a worst-case time complexity of O(n^2) when an inappropriate pivot is chosen. Mastery
of quick sort enriches understanding of efficient sorting techniques and algorithmic design
principles.
SOURCE CODE:
#include <stdio.h>
*a = *b;
*b = temp;
i++;
swap(&arr[i], &arr[j]);
return (i + 1);
quickSort(arr, pi + 1, high);
}
void printArray(int arr[], int size) {
printf("\n");
int main() {
printArray(arr, n);
quickSort(arr, 0, n - 1);
printArray(arr, n);
return 0;
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of the quick sort algorithm, including its divide-
and-conquer strategy and pivot selection techniques.
2. Implement quick sort effectively, demonstrating proficiency in partitioning the array and
recursively sorting its subarrays.
3. Master the selection of pivot elements and understand their impact on quick sort's
performance, fostering algorithmic optimization skills.
4. Analyze the time complexity of quick sort, recognizing its efficiency and adaptability to
various dataset sizes and distributions.
5. Apply quick sort to sort arrays efficiently in practical scenarios, showcasing algorithmic
proficiency in sorting algorithms and their applications.
EXPERIMENT 24
AIM: Implement merge sort.
THEORY: Merge sort is a comparison-based sorting algorithm known for its stability and
efficiency. It employs a divide-and-conquer strategy, recursively splitting the array into
smaller subarrays until each subarray contains only one element. These subarrays are then
merged in a sorted manner, combining two sorted arrays into a single sorted array. Merge sort
guarantees a time complexity of O(n log n) in all cases, making it suitable for sorting large
datasets efficiently. Its stability and predictable performance contribute to its widespread use
in various applications, from sorting arrays to sorting linked lists and external sorting.
Mastery of merge sort enhances algorithmic understanding and problem-solving skills.
SOURCE CODE:
#include <stdio.h>
int n1 = m - l + 1;
int n2 = r - m;
i = 0;
j = 0;
k = l;
arr[k] = L[i];
i++;
} else {
arr[k] = R[j];
j++;
k++;
arr[k] = L[i];
i++;
k++;
}
arr[k] = R[j];
j++;
k++;
if (l < r) {
int m = l + (r - l) / 2;
mergeSort(arr, l, m);
mergeSort(arr, m + 1, r);
merge(arr, l, m, r);
printf("\n");
int main() {
printArray(arr, arr_size);
return 0;
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of the merge sort algorithm, grasping its divide-
and-conquer approach and merging technique.
2. Implement merge sort effectively, demonstrating proficiency in recursively splitting and
merging arrays to achieve sorting.
3. Master the concept of stability in merge sort and its significance in maintaining the relative
order of equal elements.
4. Analyze the time complexity of merge sort, recognizing its efficiency and suitability for
large datasets.
5. Apply merge sort to efficiently sort arrays in practical scenarios, showcasing algorithmic
proficiency and problem-solving skills in sorting algorithms.
EXPERIMENT 25
AIM: Implement heap sort.
THEORY: Heap sort is a comparison-based sorting algorithm known for its efficiency and
versatility. It leverages the binary heap data structure to achieve sorting in two phases: heap
construction and sorting. During heap construction, the input array is transformed into a max-
heap or min-heap, ensuring that the root node holds the maximum or minimum value,
respectively. The heap is then repeatedly modified to extract the root element, which is then
placed at the end of the sorted array. This process continues until the heap is empty, resulting
in a sorted array. Heap sort has a time complexity of O(n log n) in all cases, making it
suitable for large datasets and embedded systems with limited memory. Mastery of heap sort
enriches understanding of data structures and sorting algorithms, offering efficient solutions
to various sorting problems.
SOURCE CODE:
#include <stdio.h>
void heapify(int arr[], int n, int i) {
int largest = i;
int left = 2 * i + 1;
int right = 2 * i + 2;
largest = left;
largest = right;
if (largest != i) {
arr[i] = arr[largest];
arr[largest] = temp;
heapify(arr, n, largest);
heapify(arr, n, i);
arr[0] = arr[i];
arr[i] = temp;
heapify(arr, i, 0);
}
printf("\n");
}int main() {
printArray(arr, n);
heapSort(arr, n);
printArray(arr, n);
return 0;
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of the heap sort algorithm, including its two-
phase approach involving heap construction and sorting.
2. Implement heap sort effectively, demonstrating proficiency in transforming an input array
into a heap and performing heap-based sorting.
3. Master the concepts of max-heap and min-heap and understand their role in heap sort's
sorting process.
4. Analyze the time complexity of heap sort, recognizing its efficiency and suitability for
sorting large datasets.
5. Apply heap sort to efficiently sort arrays in practical scenarios, showcasing algorithmic
proficiency and problem-solving skills in sorting algorithms.
EXPERIMENT 26
AIM: Implement sorting on different keys.
THEORY: Implementing sorting on different keys involves organizing data based on
multiple criteria or attributes. This approach is crucial in scenarios where sorting based on a
single key is insufficient to meet requirements. Sorting on different keys requires defining a
comparison function that considers multiple key attributes, ensuring proper ordering of
elements. Techniques such as lexicographical ordering or custom comparison functions can
be employed to achieve sorting on different keys. Mastery of sorting on different keys
enhances the ability to organize and analyze data effectively, facilitating various applications
such as database management, information retrieval, and data analysis in diverse fields.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct Record {
int id;
char name[50];
int age;
};
int main() {
};
printf("Original Records:\n");
printRecords(records, n);
printf("\nSorted by ID:\n");
printRecords(records, n);
printf("\nSorted by Name:\n");
printRecords(records, n);
printf("\nSorted by Age:\n");
printRecords(records, n);
return 0;
}
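The record initialisation, the comparison functions and the calls that actually sort the array are not reproduced above. A minimal sketch using the standard qsort() with one comparator per key (to be placed above main(); identifiers assumed) might be:
// One comparator per sort key, each usable with the standard qsort()
int compareById(const void* a, const void* b) {
    const struct Record* ra = (const struct Record*)a;
    const struct Record* rb = (const struct Record*)b;
    return ra->id - rb->id;
}
int compareByName(const void* a, const void* b) {
    return strcmp(((const struct Record*)a)->name,
                  ((const struct Record*)b)->name);
}
int compareByAge(const void* a, const void* b) {
    return ((const struct Record*)a)->age - ((const struct Record*)b)->age;
}
void printRecords(struct Record records[], int n) {
    for (int i = 0; i < n; i++)
        printf("%d  %-10s %d\n", records[i].id, records[i].name, records[i].age);
}
Inside main(), each sort then reduces to a single call such as qsort(records, n, sizeof(struct Record), compareById); before the corresponding printRecords() call.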
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of sorting algorithms adapted for organizing data
based on multiple criteria or attributes.
2. Implement sorting on different keys effectively, demonstrating proficiency in defining
comparison functions that consider multiple key attributes.
3. Master techniques such as lexicographical ordering and custom comparison functions to
achieve sorting on different keys.
4. Analyze the efficiency and effectiveness of sorting on different keys compared to sorting
on a single key, recognizing its importance in various applications.
5. Apply sorting on different keys to organize and analyze data in practical scenarios,
showcasing algorithmic proficiency and problem-solving skills in diverse data management
tasks.
EXPERIMENT 27
AIM: Implement external sorting.
THEORY: External sorting is a technique used to sort data sets that are too large to fit
entirely into memory. It involves splitting the data into smaller, manageable chunks, sorting
each chunk in memory, and then merging the sorted chunks to produce the final sorted
output. External sorting algorithms, such as merge sort and polyphase merge sort, efficiently
handle large data sets by minimizing the number of disk accesses and maximizing the use of
available memory. This technique is commonly employed in database systems, data
warehouses, and other applications dealing with massive datasets, ensuring efficient sorting
operations while mitigating memory constraints.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
} else {
while (!feof(f1)) {
}
while (!feof(f2)) {
int numFiles = 0;
while (!feof(input)) {
char filename[20];
int count = 0;
count++;
fclose(chunk);
free(arr);
numFiles++;
fclose(chunk);
remove(filename);
int main() {
printf("Sorting Successfull");
fclose(input);
fclose(output);
return 0;
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of external sorting techniques, including
strategies for handling large datasets that exceed available memory.
2. Implement external sorting algorithms effectively, demonstrating proficiency in splitting
data into manageable chunks, sorting them in memory, and merging the sorted results.
3. Master techniques to optimize external sorting performance, such as minimizing disk
accesses and maximizing memory usage.
4. Analyze the efficiency and scalability of external sorting algorithms, recognizing their
importance in handling massive datasets in various applications.
5. Apply external sorting to efficiently sort large datasets in practical scenarios, showcasing
algorithmic proficiency and problem-solving skills in data management tasks.
EXPERIMENT 28
AIM: Implement graphs using adjacency list
THEORY: Implementing graphs using adjacency lists involves representing each vertex as a
node and storing its adjacent vertices in a linked list or array. This approach optimizes
memory usage by only storing connections that exist, making it efficient for sparse graphs.
Operations such as adding vertices and edges, as well as traversing the graph, are facilitated
by adjacency lists. This representation allows for efficient implementation of algorithms like
breadth-first search and depth-first search. Mastery of adjacency lists enhances understanding
of graph theory and graph algorithms, enabling efficient manipulation and traversal of graph
data structures in various applications, from social networks to routing algorithms.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
return graph;
}
// Since the graph is undirected, add an edge from dest to src also
newNode = newAdjListNode(src);
newNode->next = graph->array[dest].head;
graph->array[dest].head = newNode;
}
int main() {
int V = 5; // Number of vertices
return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of graph representation using adjacency lists,
comprehending the structure of nodes and their connections.
2. Implement adjacency lists effectively, demonstrating proficiency in storing vertices and
their adjacent vertices in linked lists or arrays.
3. Master operations such as adding vertices and edges to the graph, as well as traversing the
graph efficiently using adjacency lists.
4. Analyze the advantages of adjacency lists in terms of memory efficiency and ease of
implementation compared to other graph representations.
5. Apply adjacency lists to solve various graph-related problems, showcasing algorithmic
proficiency in graph theory and graph algorithms.
EXPERIMENT 29
AIM: Implement graphs using adjacency matrix
THEORY: Implementing graphs using adjacency matrices involves representing vertices as
rows and columns in a matrix, with cell values indicating edge connections. This method
offers efficient lookup for edge existence and supports weighted edges. However, it requires
more memory for sparse graphs and may be less efficient for large graphs with many edges.
Operations such as adding vertices, edges, and checking edge connectivity are
straightforward with adjacency matrices. Mastery of adjacency matrices enriches
understanding of graph theory and facilitates efficient implementation of graph algorithms
like Dijkstra's shortest path algorithm and Floyd-Warshall algorithm. It's suitable for
applications where edge connectivity queries are frequent, such as network analysis.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Add edges
addEdge(graph, 0, 1);
addEdge(graph, 0, 4);
addEdge(graph, 1, 2);
addEdge(graph, 1, 3);
addEdge(graph, 1, 4);
addEdge(graph, 2, 3);
addEdge(graph, 3, 4);
return 0;
}
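createGraph() and addEdge() used by main() above are not reproduced; a call such as struct Graph* graph = createGraph(5); is implied before the addEdge() calls. A minimal adjacency-matrix sketch consistent with those calls (capacity and identifiers assumed), to be placed above main(), could be:
#define MAX_VERTICES 10
struct Graph {
    int V;                                       // number of vertices
    int adj[MAX_VERTICES][MAX_VERTICES];         // adj[i][j] = 1 if edge i-j exists
};
struct Graph* createGraph(int V) {
    struct Graph* graph = (struct Graph*)malloc(sizeof(struct Graph));
    graph->V = V;
    for (int i = 0; i < V; i++)
        for (int j = 0; j < V; j++)
            graph->adj[i][j] = 0;                // start with no edges
    return graph;
}
// Undirected edge: mark both directions in the matrix
void addEdge(struct Graph* graph, int src, int dest) {
    graph->adj[src][dest] = 1;
    graph->adj[dest][src] = 1;
}
void printGraph(struct Graph* graph) {
    for (int i = 0; i < graph->V; i++) {
        for (int j = 0; j < graph->V; j++)
            printf("%d ", graph->adj[i][j]);
        printf("\n");
    }
}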
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of graph representation using adjacency matrices,
recognizing the structure of vertices and their connections within a matrix.
2. Implement adjacency matrices effectively, demonstrating proficiency in representing edge
connections and supporting weighted edges.
3. Master operations such as adding vertices, edges, and checking edge connectivity
efficiently using adjacency matrices.
4. Analyze the advantages and limitations of adjacency matrices in terms of memory usage
and query efficiency, compared to other graph representations.
5. Apply adjacency matrices to solve various graph-related problems, showcasing algorithmic
proficiency in graph theory and graph algorithms.
EXPERIMENT 30
AIM: Implement DFS on a graph
THEORY: Depth-First Search (DFS) is a graph traversal algorithm that systematically
explores vertices and edges by visiting as far as possible along each branch before
backtracking. It can be implemented using recursion or a stack data structure. DFS starts from
a designated starting vertex and explores adjacent vertices recursively, marking visited
vertices to prevent revisits. This algorithm efficiently explores connected components and
can be adapted for various applications, including cycle detection, topological sorting, and
finding paths or connected components. Mastery of DFS enriches understanding of graph
theory and traversal algorithms, facilitating efficient exploration and analysis of graph
structures in diverse applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
return graph;
}
return 0;
}
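Almost none of this listing survives. A minimal self-contained DFS sketch is given below; for brevity it uses an adjacency matrix rather than the adjacency-list structure the fragment above suggests, and the sample graph is illustrative:
#include <stdio.h>
#define V 5
// Recursive DFS: visit a vertex, then recurse into each unvisited neighbour
void dfs(int adj[V][V], int visited[V], int vertex) {
    visited[vertex] = 1;
    printf("%d ", vertex);
    for (int next = 0; next < V; next++) {
        if (adj[vertex][next] && !visited[next])
            dfs(adj, visited, next);
    }
}
int main(void) {
    // Sample undirected graph (illustrative), given as an adjacency matrix
    int adj[V][V] = {
        {0, 1, 0, 0, 1},
        {1, 0, 1, 1, 1},
        {0, 1, 0, 1, 0},
        {0, 1, 1, 0, 1},
        {1, 1, 0, 1, 0}
    };
    int visited[V] = {0};
    printf("DFS starting from vertex 0: ");
    dfs(adj, visited, 0);
    printf("\n");
    return 0;
}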
OUTPUT:
LEARNING OUTCOME:
1. Gain a deep understanding of Depth-First Search (DFS) algorithm, comprehending its
systematic exploration of vertices and edges in a graph.
2. Implement DFS effectively, demonstrating proficiency in recursive traversal or using a
stack data structure to explore vertices and mark visited nodes.
3. Master DFS applications, including cycle detection, topological sorting, and finding paths
or connected components in a graph.
4. Analyze the time and space complexity of DFS algorithm, recognizing its efficiency in
exploring connected components and solving graph-related problems.
5. Apply DFS to solve real-world problems in diverse domains, showcasing algorithmic
proficiency and problem-solving skills in graph theory.
EXPERIMENT 31
AIM: Implement BFS on a graph.
THEORY: Breadth-First Search (BFS) is a graph traversal algorithm that explores vertices
and edges by systematically visiting all neighbors of a vertex before moving to the next level.
It employs a queue data structure to maintain the order of vertices to be visited. BFS starts
from a designated starting vertex and explores adjacent vertices layer by layer, marking
visited vertices to prevent revisits. This algorithm efficiently finds shortest paths and can be
adapted for various applications, including shortest path finding, connected component
identification, and network analysis. Mastery of BFS enriches understanding of graph theory
and traversal algorithms, facilitating efficient exploration and analysis of graph structures.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
return graph;
}
// Get all adjacent vertices of the dequeued vertex. If an adjacent vertex has not been
// visited, mark it visited and enqueue it.
struct AdjListNode* pCrawl = graph->array[current].head;
while (pCrawl != NULL) {
int adj = pCrawl->dest;
if (!visited[adj]) {
visited[adj] = 1;
queue[rear++] = adj;
}
pCrawl = pCrawl->next;
}
}
}
int main() {
int V = 5; // Number of vertices
// Add edges
addEdge(graph, 0, 1);
addEdge(graph, 0, 4);
addEdge(graph, 1, 2);
addEdge(graph, 1, 3);
addEdge(graph, 1, 4);
addEdge(graph, 2, 3);
addEdge(graph, 3, 4);
free(graph);
return 0;
}
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of Breadth-First Search (BFS) algorithm,
comprehending its systematic exploration of vertices and edges in a graph.
2. Implement BFS effectively, demonstrating proficiency in traversing adjacent vertices layer
by layer using a queue data structure.
3. Master BFS applications, including shortest path finding, connected component
identification, and network analysis in a graph.
4. Analyze the time and space complexity of BFS algorithm, recognizing its efficiency in
exploring shortest paths and solving graph-related problems.
5. Apply BFS to solve real-world problems in diverse domains, showcasing algorithmic
proficiency and problem-solving skills in graph theory.
EXPERIMENT 32
AIM: Implement spanning trees using Prim's algorithm.
THEORY: Prim's algorithm is a greedy algorithm used to find the minimum spanning tree
(MST) of a connected, undirected graph. It starts by selecting an arbitrary vertex as the initial
tree and iteratively grows the tree by adding the shortest edge that connects a vertex in the
tree to a vertex outside the tree. This process continues until all vertices are included in the
MST. Prim's algorithm guarantees the construction of the minimum-weight spanning tree and
has a time complexity of O(V^2) or O(E log V) depending on the implementation. Mastery of
Prim's algorithm enriches understanding of graph theory and facilitates efficient network
design in various applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
return graph;
}
// Since the graph is undirected, add an edge from dest to src also
newNode = newAdjListNode(src, weight);
newNode->next = graph->array[dest].head;
graph->array[dest].head = newNode;
}
// Function to find the vertex with the minimum key value, from the set of vertices
// not yet included in the MST
int minKey(int key[], int mstSet[], int V) {
int min = INT_MAX, min_index;
int v;
for (v = 0; v < V; v++) {
if (mstSet[v] == 0 && key[v] < min) {
min = key[v];
min_index = v;
}
}
return min_index;
}
// Function to construct and print the MST for a graph represented using an
// adjacency list
void primMST(struct Graph* graph) {
int parent[MAX_VERTICES]; // Array to store constructed MST
int key[MAX_VERTICES]; // Key values used to pick minimum weight edge in cut
int mstSet[MAX_VERTICES]; // To represent set of vertices not yet included in MST
// Initialize all keys as INFINITE
// Initialize all vertices as not yet included in MST
int i;
for (i = 0; i < graph->V; i++) {
key[i] = INT_MAX;
mstSet[i] = 0;
}
// Update key value and parent index of the adjacent vertices of the picked vertex
struct AdjListNode* pCrawl = graph->array[u].head;
while (pCrawl != NULL) {
int v = pCrawl->dest;
int weight = pCrawl->weight;
if (mstSet[v] == 0 && weight < key[v]) {
parent[v] = u;
key[v] = weight;
}
pCrawl = pCrawl->next;
}
}
OUTPUT:
LEARNING OUTCOME:
1. Develop a thorough understanding of Prim's algorithm for constructing minimum spanning
trees (MSTs), grasping its greedy approach to vertex selection and edge addition.
2. Implement Prim's algorithm effectively, demonstrating proficiency in selecting vertices
and edges to grow the spanning tree incrementally.
3. Master the concept of minimum-weight spanning trees and understand how Prim's
algorithm guarantees the construction of the optimal tree.
4. Analyze the time complexity of Prim's algorithm, recognizing its efficiency in finding
MSTs in connected, undirected graphs.
5. Apply Prim's algorithm to solve real-world network design problems, showcasing
algorithmic proficiency and problem-solving skills in graph theory and network optimization.
EXPERIMENT 33
AIM: Implement spanning trees using Kruskal's algorithm.
THEORY: Kruskal's algorithm is a greedy algorithm used to construct a minimum spanning
tree (MST) in a connected, weighted graph. It begins by sorting the edges by their weights
and then iteratively selects the shortest edge that does not form a cycle when added to the
growing forest of trees. This process continues until all vertices are included in a single tree.
Kruskal's algorithm guarantees the construction of the minimum-weight spanning tree and
has a time complexity of O(E log E) or O(E log V), depending on the implementation.
Mastery of Kruskal's algorithm enhances understanding of graph theory and facilitates
efficient network design in various applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>
// Attach smaller rank tree under root of high rank tree (Union by Rank)
if (subsets[xroot].rank < subsets[yroot].rank)
subsets[xroot].parent = yroot;
else if (subsets[xroot].rank > subsets[yroot].rank)
subsets[yroot].parent = xroot;
else {
subsets[yroot].parent = xroot;
subsets[xroot].rank++;
}
}
free(subsets);
}
int main() {
int V = 5; // Number of vertices
int E = 7; // Number of edges
graph->edge[1].src = 0;
graph->edge[1].dest = 3;
graph->edge[1].weight = 6;
graph->edge[2].src = 1;
graph->edge[2].dest = 2;
graph->edge[2].weight = 3;
graph->edge[3].src = 1;
graph->edge[3].dest = 3;
graph->edge[3].weight = 8;
graph->edge[4].src = 1;
graph->edge[4].dest = 4;
graph->edge[4].weight = 5;
graph->edge[5].src = 2;
graph->edge[5].dest = 4;
graph->edge[5].weight = 7;
graph->edge[6].src = 3;
graph->edge[6].dest = 4;
graph->edge[6].weight = 9;
return 0;
}
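Note: the excerpt above shows only the edge list in main and part of the Union-by-Rank logic; the Edge/Graph definitions, createGraph, find, and kruskalMST are not shown. A minimal self-contained sketch of Kruskal's algorithm over the same seven edges, using a simple union-find, is given below; names such as findRoot and cmpEdge are illustrative assumptions.
#include <stdio.h>
#include <stdlib.h>
struct Edge { int src, dest, weight; };
// Find with path compression
static int findRoot(int parent[], int i) {
    if (parent[i] != i) parent[i] = findRoot(parent, parent[i]);
    return parent[i];
}
// Comparator for qsort: ascending by edge weight
static int cmpEdge(const void *a, const void *b) {
    const struct Edge *ea = a, *eb = b;
    return ea->weight - eb->weight;
}
int main() {
    int V = 5, E = 7;
    struct Edge edges[] = {
        {0,1,2},{0,3,6},{1,2,3},{1,3,8},{1,4,5},{2,4,7},{3,4,9}
    };
    int parent[5];
    for (int i = 0; i < V; i++) parent[i] = i;
    qsort(edges, E, sizeof(struct Edge), cmpEdge);   // sort edges by weight
    printf("Edges in the MST:\n");
    int taken = 0;
    for (int i = 0; i < E && taken < V - 1; i++) {
        int x = findRoot(parent, edges[i].src);
        int y = findRoot(parent, edges[i].dest);
        if (x != y) {                                // no cycle: accept the edge
            printf("%d - %d (weight %d)\n", edges[i].src, edges[i].dest, edges[i].weight);
            parent[x] = y;                           // simple union (no rank)
            taken++;
        }
    }
    return 0;
}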
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of Kruskal's algorithm for constructing minimum
spanning trees (MSTs), comprehending its greedy approach to edge selection.
2. Implement Kruskal's algorithm effectively, demonstrating proficiency in sorting edges by
weight and iteratively selecting edges to grow the spanning tree.
3. Master the concept of minimum-weight spanning trees and understand how Kruskal's
algorithm ensures the construction of the optimal tree.
4. Analyze the time complexity of Kruskal's algorithm, recognizing its efficiency in finding
MSTs in connected, weighted graphs.
5. Apply Kruskal's algorithm to solve real-world network design problems, showcasing
algorithmic proficiency and problem-solving skills in graph theory and network optimization.
EXPERIMENT 34
AIM: Implement Dijkstra's algorithm.
THEORY: Dijkstra's algorithm is a graph traversal algorithm used to find the shortest paths
from a designated source vertex to all other vertices in a weighted graph with non-negative
edge weights. It operates by iteratively selecting the vertex with the shortest known distance
from the source and updating the distances to its neighboring vertices. Dijkstra's algorithm
employs a priority queue to efficiently select vertices with the smallest distances. This
algorithm guarantees the discovery of the shortest paths and has a time complexity of O((V +
E) log V). Mastery of Dijkstra's algorithm enriches understanding of graph theory and
facilitates efficient pathfinding in various applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
return graph;
}
// Since the graph is directed, do not add an edge from dest to src
}
// Function to find the vertex with the minimum distance value, from the set of vertices not yet included in the shortest path tree
int minDistance(int dist[], int sptSet[], int V) {
int min = INT_MAX, min_index;
int v;
for (v = 0; v < V; v++) {
if (sptSet[v] == 0 && dist[v] <= min) {
min = dist[v];
min_index = v;
}
}
return min_index;
}
// sptSet[i] will be true if vertex i is included in the shortest path tree or the shortest distance from src to i is finalized
int sptSet[V];
// Print the constructed distance array, the shortest path tree, and the shortest paths from the source to all other vertices
printSolution(dist, parent, src, V);
}
int main() {
int V = 9; // Number of vertices in the graph
struct Graph* graph = createGraph(V);
return 0;
}
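Note: the listing above omits the graph structures, the relaxation loop of dijkstra, and printSolution. A minimal self-contained sketch of Dijkstra's algorithm on a hypothetical 6-vertex adjacency-matrix graph is given below; the matrix and variable names are illustrative assumptions, not the original program's definitions.
#include <stdio.h>
#include <limits.h>
#define V 6
int main() {
    // Hypothetical weight matrix; 0 means "no edge"
    int g[V][V] = {
        {0,4,2,0,0,0},{4,0,1,5,0,0},{2,1,0,8,10,0},
        {0,5,8,0,2,6},{0,0,10,2,0,3},{0,0,0,6,3,0}
    };
    int dist[V], done[V], src = 0;
    for (int i = 0; i < V; i++) { dist[i] = INT_MAX; done[i] = 0; }
    dist[src] = 0;
    for (int count = 0; count < V; count++) {
        int u = -1, best = INT_MAX;
        for (int v = 0; v < V; v++)          // pick the closest unfinished vertex
            if (!done[v] && dist[v] < best) { best = dist[v]; u = v; }
        if (u == -1) break;                  // remaining vertices are unreachable
        done[u] = 1;
        for (int v = 0; v < V; v++)          // relax edges leaving u
            if (g[u][v] && !done[v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
    printf("Vertex\tDistance from %d\n", src);
    for (int i = 0; i < V; i++) printf("%d\t%d\n", i, dist[i]);
    return 0;
}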
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of Dijkstra's algorithm for finding shortest paths
in weighted graphs, including its iterative vertex selection process.
2. Implement Dijkstra's algorithm effectively, demonstrating proficiency in updating
distances to neighboring vertices and selecting vertices with the shortest known distance.
3. Master the concept of shortest paths and understand how Dijkstra's algorithm guarantees
the discovery of optimal paths in graphs with non-negative edge weights.
4. Analyze the time complexity of Dijkstra's algorithm, recognizing its efficiency in finding
shortest paths from a source vertex to all other vertices.
5. Apply Dijkstra's algorithm to solve real-world routing and pathfinding problems,
showcasing algorithmic proficiency and problem-solving skills in graph theory and network
optimization.
EXPERIMENT 35
AIM: Implement sequential file organization. Include functions for creating, opening,
reading from, and writing to a sequential file.
THEORY: Sequential file organization involves storing records in a sequential manner,
facilitating sequential access through linear traversal. To create a sequential file, one
initializes the file and defines its structure. Opening a sequential file involves establishing a
connection for reading or writing operations. Reading from the file involves sequentially
accessing records from start to end, while writing appends new records to the end. These
operations offer simplicity and efficiency for applications requiring linear data access
patterns. Mastery of sequential file operations enhances understanding of file organization
and facilitates efficient data management in various applications, such as database systems
and file processing tasks.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
return 0;
}
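Note: the listing above omits the file-handling functions themselves. A minimal sketch of sequential writing (appending records to the end of the file) and sequential reading (scanning from start to end) is given below; the file name students.txt and the record format are illustrative assumptions.
#include <stdio.h>
#include <stdlib.h>
// Append one text record to the end of the sequential file
void writeRecord(const char *filename, const char *record) {
    FILE *fp = fopen(filename, "a");           // "a" creates the file if it does not exist
    if (!fp) { perror("open for append"); exit(1); }
    fprintf(fp, "%s\n", record);
    fclose(fp);
}
// Read the file from start to end, printing every record in order
void readAll(const char *filename) {
    FILE *fp = fopen(filename, "r");
    if (!fp) { perror("open for read"); exit(1); }
    char line[128];
    while (fgets(line, sizeof line, fp) != NULL)
        printf("%s", line);
    fclose(fp);
}
int main() {
    const char *filename = "students.txt";     // hypothetical file name
    writeRecord(filename, "1 Alice 85");
    writeRecord(filename, "2 Bob 78");
    printf("Records in %s:\n", filename);
    readAll(filename);
    return 0;
}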
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of sequential file organization, including its linear
storage and access patterns.
2. Implement functions for creating and opening sequential files, demonstrating proficiency
in file initialization and establishment of access connections.
3. Master reading operations from sequential files, efficiently accessing records sequentially
from start to end.
4. Demonstrate proficiency in writing operations to sequential files, appending new records to
the end of the file.
5. Apply sequential file organization to various file processing tasks, showcasing algorithmic
proficiency and problem-solving skills in data management.
EXPERIMENT 36
AIM: Implement direct file organization using fixed-length records. Include functions to add
records, delete records, and search for records by their key.
THEORY: Direct file organization with fixed-length records involves storing data in a file
where each record occupies a predetermined fixed size. Adding records to such files requires
allocating space for new records and updating file metadata accordingly. Deleting records
involves marking them as inactive or shifting subsequent records to fill the gap. Searching for
records by key involves direct access based on the record's key value, leveraging indexing or
hashing for efficient retrieval. This organization offers fast record access and is suitable for
applications requiring frequent record retrievals. Mastery of direct file organization enhances
understanding of data storage and retrieval techniques, facilitating efficient data management
in various applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
fclose(fp);
fclose(temp);
remove(filename);
rename("temp.dat", filename);
if (!found) {
printf("Record with key %d not found!\n", key);
} else {
printf("Record with key %d deleted successfully!\n", key);
}
}
Record record;
int found = 0;
while (fread(&record, RECORD_SIZE, 1, fp) == 1) {
if (record.key == key) {
printf("Record found:\n");
printf("Key: %d\n", record.key);
printf("Data: %s\n", record.data);
found = 1;
break;
}
}
if (!found) {
printf("Record with key %d not found!\n", key);
}
fclose(fp);
}
int main() {
const char *filename = "direct_file.dat";
// Example records
Record record1 = {1, "Data for record 1"};
Record record2 = {2, "Data for record 2"};
Record record3 = {3, "Data for record 3"};
// Delete a record
deleteRecord(filename, 2);
return 0;
}
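Note: the excerpt above omits the Record definition and the addRecord function. A minimal sketch of fixed-length record storage using fwrite/fread in binary mode is given below; the 60-byte data field and helper names are illustrative assumptions, not the original program's definitions.
#include <stdio.h>
#include <string.h>
typedef struct { int key; char data[60]; } Record;   // fixed-length record
#define RECORD_SIZE sizeof(Record)
// Append one fixed-length record in binary mode
void addRecord(const char *filename, Record rec) {
    FILE *fp = fopen(filename, "ab");
    if (!fp) { perror("open"); return; }
    fwrite(&rec, RECORD_SIZE, 1, fp);
    fclose(fp);
}
// Scan the file record by record for a matching key
void searchRecord(const char *filename, int key) {
    FILE *fp = fopen(filename, "rb");
    if (!fp) { perror("open"); return; }
    Record rec;
    int found = 0;
    while (fread(&rec, RECORD_SIZE, 1, fp) == 1)
        if (rec.key == key) { printf("Key %d: %s\n", rec.key, rec.data); found = 1; break; }
    if (!found) printf("Record with key %d not found!\n", key);
    fclose(fp);
}
int main() {
    Record r1 = {1, "Data for record 1"}, r2 = {2, "Data for record 2"};
    addRecord("direct_file.dat", r1);
    addRecord("direct_file.dat", r2);
    searchRecord("direct_file.dat", 2);
    return 0;
}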
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of direct file organization with fixed-length
records, grasping the concept of allocating predetermined space for each record.
2. Implement functions to add records to the file, demonstrating proficiency in allocating
space and updating file metadata.
3. Master record deletion operations, including marking records as inactive and managing file
space to maintain data integrity.
4. Demonstrate proficiency in searching for records by their key, employing direct access
based on indexing or hashing for efficient retrieval.
5. Apply direct file organization techniques to various data management tasks, showcasing
algorithmic proficiency and problem-solving skills in file organization and record
management.
EXPERIMENT 37
AIM: Implement indexing for a file structure. Use a B-tree data structure for indexing and
provide functions for insertion, deletion, and searching based on index keys.
THEORY: Indexing for a file structure involves creating a separate data structure, such as a
B-tree, to efficiently access records in a file. B-trees offer balanced search trees with a
variable number of children per node, providing efficient search, insertion, and deletion
operations. Insertion functions add new index entries, while deletion functions remove
existing entries while maintaining tree balance. Searching functions utilize the B-tree
structure to quickly locate records based on index keys, enhancing search efficiency.
Implementing indexing with B-trees optimizes data access and manipulation, facilitating
rapid record retrieval and management in various database and file system applications.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
// Function prototypes
BTreeNode* createNode(int isLeaf);
BTreeNode* search(BTreeNode *root, int key);
void insert(BTree *tree, int key);
void insertNonFull(BTreeNode *node, int key);
void delete(BTree *tree, int key);
void deleteKey(BTreeNode *node, int key);
int findKeyIndex(BTreeNode *node, int key);
void removeFromLeaf(BTreeNode *node, int index);
void removeFromNonLeaf(BTreeNode *node, int index);
void fill(BTreeNode *node, int index);
void borrowFromPrev(BTreeNode *node, int index);
void borrowFromNext(BTreeNode *node, int index);
void merge(BTreeNode *node, int index);
int getPredecessor(BTreeNode *node, int index);
int getSuccessor(BTreeNode *node, int index);
void splitChild(BTreeNode *parent, int index);
if (!child->isLeaf) {
for (int i = 0; i < MAX_KEYS / 2; ++i) {
newNode->children[i] = child->children[i + MAX_KEYS / 2];
}
}
child->numKeys = MAX_KEYS / 2 - 1;
parent->numKeys++;
}
if (!child->isLeaf) {
for (int i = child->numKeys; i >= 0; --i) {
child->children[i + 1] = child->children[i];
}
}
if (!child->isLeaf) {
child->children[0] = sibling->children[sibling->numKeys];
}
node->keys[index - 1] = sibling->keys[sibling->numKeys - 1];
child->numKeys++;
sibling->numKeys--;
}
child->keys[child->numKeys] = node->keys[index];
if (!child->isLeaf) {
child->children[child->numKeys + 1] = sibling->children[0];
}
node->keys[index] = sibling->keys[0];
if (!sibling->isLeaf) {
for (int i = 1; i <= sibling->numKeys; ++i) {
sibling->children[i - 1] = sibling->children[i];
}
}
child->numKeys++;
sibling->numKeys--;
}
child->keys[MAX_KEYS / 2] = node->keys[index];
if (!child->isLeaf) {
for (int i = 0; i <= sibling->numKeys; ++i) {
child->children[i + MAX_KEYS / 2 + 1] = sibling->children[i];
}
}
child->numKeys += sibling->numKeys + 1;
node->numKeys--;
free(sibling);
}
int main() {
BTree* tree = createBTree();
free(tree);
return 0;
}
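Note: the excerpt above shows fragments of splitChild, borrowFromPrev/borrowFromNext, and merge, but omits the node definition and the search routine. A minimal sketch of a B-tree node and the standard search procedure on a small, hand-built two-level tree is given below; the order MAX_KEYS = 3 and the demo tree are illustrative assumptions.
#include <stdio.h>
#include <stdlib.h>
#define MAX_KEYS 3   // small order for illustration
typedef struct BTreeNode {
    int keys[MAX_KEYS];
    struct BTreeNode *children[MAX_KEYS + 1];
    int numKeys;
    int isLeaf;
} BTreeNode;
BTreeNode *createNode(int isLeaf) {
    BTreeNode *n = calloc(1, sizeof(BTreeNode));
    n->isLeaf = isLeaf;
    return n;
}
// Standard B-tree search: descend into the child whose key range covers 'key'
BTreeNode *search(BTreeNode *root, int key) {
    if (root == NULL) return NULL;
    int i = 0;
    while (i < root->numKeys && key > root->keys[i]) i++;
    if (i < root->numKeys && root->keys[i] == key) return root;
    if (root->isLeaf) return NULL;
    return search(root->children[i], key);
}
int main() {
    // Hand-built two-level tree: root [10, 20] with three leaf children
    BTreeNode *root = createNode(0);
    root->numKeys = 2; root->keys[0] = 10; root->keys[1] = 20;
    BTreeNode *a = createNode(1), *b = createNode(1), *c = createNode(1);
    a->numKeys = 2; a->keys[0] = 3;  a->keys[1] = 7;
    b->numKeys = 2; b->keys[0] = 12; b->keys[1] = 17;
    c->numKeys = 2; c->keys[0] = 25; c->keys[1] = 30;
    root->children[0] = a; root->children[1] = b; root->children[2] = c;
    printf("Search 17: %s\n", search(root, 17) ? "found" : "not found");
    printf("Search 21: %s\n", search(root, 21) ? "found" : "not found");
    free(a); free(b); free(c); free(root);
    return 0;
}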
OUTPUT:
LEARNING OUTCOME:
1. Develop a thorough understanding of indexing techniques using B-trees, comprehending
their role in enhancing data access efficiency in file structures.
2. Implement B-tree indexing effectively, demonstrating proficiency in constructing and
maintaining balanced search trees for efficient record retrieval.
3. Master insertion functions, efficiently adding index entries to the B-tree while ensuring
structural integrity and balance.
4. Demonstrate proficiency in deletion functions, effectively removing index entries while
preserving the B-tree's balance and search properties.
5. Apply B-tree indexing to enable rapid searching based on index keys, showcasing
algorithmic proficiency and problem-solving skills in data management and retrieval tasks.
EXPERIMENT 38
AIM: Implement hashing for file organization. Include functions for hash table creation,
insertion, deletion, and searching using collision resolution techniques such as chaining.
THEORY: Hashing for file organization involves mapping data keys to unique addresses in
a hash table, facilitating efficient data retrieval. Functions for hash table creation initialize an
array to store key-value pairs. Insertion functions hash keys to determine storage locations
and handle collisions using chaining, appending entries to linked lists at collision points.
Deletion functions locate and remove entries from the hash table, maintaining consistency.
Searching functions hash keys to retrieve stored values, navigating linked lists for collision
resolution. Hashing with chaining optimizes data access, offering constant-time average
retrieval, ideal for applications requiring rapid search operations on large datasets.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TABLE_SIZE 10
free(current->key);
free(current);
}
// Main function
int main() {
struct HashTable* hashTable = createHashTable();
printf("\nDeleting 'John':\n");
deleteKey(hashTable, "John");
display(hashTable);
// Free memory
for (int i = 0; i < TABLE_SIZE; ++i) {
struct Node* current = hashTable->table[i];
while (current != NULL) {
struct Node* temp = current;
current = current->next;
free(temp->key);
free(temp);
}
}
free(hashTable);
return 0;
}
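Note: the excerpt above omits the Node/HashTable definitions, the hash function, and the insert/search routines. A minimal sketch of chained hashing with string keys is given below; the additive hash function and the key/value layout are illustrative assumptions, not the original program's definitions.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TABLE_SIZE 10
typedef struct Node { char *key; int value; struct Node *next; } Node;
typedef struct { Node *table[TABLE_SIZE]; } HashTable;
// Simple additive string hash (illustrative only)
unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key) h += (unsigned char)*key++;
    return h % TABLE_SIZE;
}
// Chaining: prepend a new node to the bucket's linked list
void insert(HashTable *ht, const char *key, int value) {
    unsigned i = hash(key);
    Node *n = malloc(sizeof(Node));
    n->key = malloc(strlen(key) + 1);
    strcpy(n->key, key);
    n->value = value;
    n->next = ht->table[i];
    ht->table[i] = n;
}
// Walk the chain in the key's bucket until a match is found
int *searchKey(HashTable *ht, const char *key) {
    for (Node *cur = ht->table[hash(key)]; cur; cur = cur->next)
        if (strcmp(cur->key, key) == 0) return &cur->value;
    return NULL;
}
int main() {
    HashTable ht = {{0}};
    insert(&ht, "John", 25);
    insert(&ht, "Jane", 30);
    int *v = searchKey(&ht, "John");
    printf("John -> %d\n", v ? *v : -1);
    return 0;
}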
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of hashing techniques for file organization, including
the creation of hash tables to efficiently store key-value pairs.
2. Implement hash table creation functions effectively, demonstrating proficiency in
initializing arrays and setting up storage structures.
3. Master insertion functions, efficiently hashing keys and resolving collisions using chaining
to append entries to linked lists.
4. Demonstrate proficiency in deletion functions, effectively locating and removing entries
from the hash table while maintaining data integrity.
5. Apply searching functions to hash keys and retrieve stored values, showcasing algorithmic
proficiency and problem-solving skills in data management and retrieval tasks.
EXPERIMENT 39
AIM: Implement linear probing collision resolution technique in C for a hash table. Write a
program that demonstrates insertion, deletion, and searching operations using linear probing.
THEORY: Linear probing is a collision resolution technique used in hash tables to handle
collisions by probing sequential locations in the table until an empty slot is found. In C, a
hash table with linear probing can be implemented using an array and appropriate hash
functions. Insertion involves hashing the key and probing linearly until an empty slot is
found. Deletion locates the key's slot and marks it as deleted. Searching follows a similar
process, probing linearly until the key is found or an empty slot is encountered. This program
demonstrates efficient insertion, deletion, and searching operations using linear probing for
collision resolution.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define TABLE_SIZE 10
// Main function
int main() {
struct HashTable* hashTable = createHashTable();
printf("\nDeleting 'John':\n");
deleteKey(hashTable, "John");
display(hashTable);
// Free memory
for (int i = 0; i < TABLE_SIZE; ++i) {
if (hashTable->table[i] != NULL) {
free(hashTable->table[i]->key);
free(hashTable->table[i]);
}
}
free(hashTable);
return 0;
}
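Note: the excerpt above omits the probing logic itself. A minimal sketch of linear probing with an empty/occupied/deleted state flag per slot is given below; the slot layout, the additive hash, and the helper names are illustrative assumptions.
#include <stdio.h>
#include <string.h>
#define TABLE_SIZE 10
// Each slot holds a key and a state flag: 0 = empty, 1 = occupied, 2 = deleted
struct Slot { char key[32]; int state; };
struct Slot table[TABLE_SIZE];
unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key) h += (unsigned char)*key++;
    return h % TABLE_SIZE;
}
void insert(const char *key) {
    unsigned i = hash(key);
    for (int probe = 0; probe < TABLE_SIZE; probe++) {
        unsigned j = (i + probe) % TABLE_SIZE;       // step to the next slot on collision
        if (table[j].state != 1) {                   // empty or deleted slot can be reused
            strcpy(table[j].key, key);
            table[j].state = 1;
            return;
        }
    }
    printf("Table full, cannot insert %s\n", key);
}
int search(const char *key) {
    unsigned i = hash(key);
    for (int probe = 0; probe < TABLE_SIZE; probe++) {
        unsigned j = (i + probe) % TABLE_SIZE;
        if (table[j].state == 0) return 0;           // truly empty slot: key not present
        if (table[j].state == 1 && strcmp(table[j].key, key) == 0) return 1;
    }
    return 0;
}
void deleteKey(const char *key) {
    unsigned i = hash(key);
    for (int probe = 0; probe < TABLE_SIZE; probe++) {
        unsigned j = (i + probe) % TABLE_SIZE;
        if (table[j].state == 0) return;
        if (table[j].state == 1 && strcmp(table[j].key, key) == 0) {
            table[j].state = 2;                      // mark deleted, keep the probe chain intact
            return;
        }
    }
}
int main() {
    insert("John"); insert("Jane");
    printf("John present? %d\n", search("John"));
    deleteKey("John");
    printf("John present after delete? %d\n", search("John"));
    return 0;
}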
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of linear probing as a collision resolution
technique in hash tables, grasping its sequential probing strategy to handle collisions
efficiently.
2. Implement linear probing collision resolution technique in C, demonstrating proficiency in
inserting, deleting, and searching for elements in a hash table.
3. Master the process of hashing keys and probing linearly until an empty slot is found for
insertion, ensuring data integrity and efficient storage.
4. Demonstrate proficiency in locating and marking deleted slots during deletion operations,
maintaining the structure of the hash table.
5. Apply linear probing to efficiently resolve collisions and demonstrate insertion, deletion,
and searching operations in a hash table, showcasing algorithmic proficiency in data
management and retrieval tasks.
EXPERIMENT 40
AIM: Implement separate chaining collision resolution technique for hashing. Include
functions for hash table creation, insertion, deletion, and searching using separate chaining.
THEORY: Separate chaining is a collision resolution technique in hashing that involves
creating a linked list for each bucket in the hash table to handle collisions. Functions for hash
table creation initialize an array of linked lists. Insertion functions hash keys to determine
bucket locations and append elements to the corresponding linked list. Deletion functions
locate and remove elements from the linked lists while maintaining data integrity. Searching
functions hash keys to retrieve elements and traverse the appropriate linked list to find the
desired element. Separate chaining offers flexibility and efficiency in handling collisions,
ensuring optimal performance in hash table operations.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#define TABLE_SIZE 10
int main() {
HashTable *hashTable = createHashTable();
// Delete a key
int deleteKey = 20;
delete(hashTable, deleteKey);
printf("Hash table after deletion of key %d:\n", deleteKey);
displayHashTable(hashTable);
// Free memory
for (int i = 0; i < TABLE_SIZE; ++i) {
Node *current = hashTable->table[i];
while (current != NULL) {
Node *temp = current;
current = current->next;
free(temp);
}
}
free(hashTable);
return 0;
}
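Note: the excerpt above omits the Node/HashTable definitions and the insert, delete, and displayHashTable routines. A minimal sketch of separate chaining with integer keys (matching the deleteKey = 20 call above) is given below; the modulo hash and helper names are illustrative assumptions.
#include <stdio.h>
#include <stdlib.h>
#define TABLE_SIZE 10
typedef struct Node { int key; struct Node *next; } Node;
typedef struct { Node *table[TABLE_SIZE]; } HashTable;
// Insert at the head of the key's bucket chain
void insert(HashTable *ht, int key) {
    int i = key % TABLE_SIZE;
    Node *n = malloc(sizeof(Node));
    n->key = key;
    n->next = ht->table[i];
    ht->table[i] = n;
}
// Unlink the first node in the chain that matches the key
void delete(HashTable *ht, int key) {
    int i = key % TABLE_SIZE;
    Node **cur = &ht->table[i];
    while (*cur) {
        if ((*cur)->key == key) {
            Node *dead = *cur;
            *cur = dead->next;
            free(dead);
            return;
        }
        cur = &(*cur)->next;
    }
}
void displayHashTable(HashTable *ht) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        printf("[%d]:", i);
        for (Node *n = ht->table[i]; n; n = n->next) printf(" %d", n->key);
        printf("\n");
    }
}
int main() {
    HashTable ht = {{0}};
    insert(&ht, 10); insert(&ht, 20); insert(&ht, 30);  // 10, 20, 30 all collide in bucket 0
    delete(&ht, 20);
    displayHashTable(&ht);
    return 0;
}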
OUTPUT:
LEARNING OUTCOME:
1. Gain a comprehensive understanding of separate chaining as a collision resolution
technique in hashing, comprehending its use of linked lists to manage collisions efficiently.
2. Implement separate chaining collision resolution technique in a hash table, demonstrating
proficiency in creating a hash table with an array of linked lists.
3. Master insertion functions, efficiently hashing keys and appending elements to the
corresponding linked list to handle collisions.
4. Demonstrate proficiency in deletion functions, effectively locating and removing elements
from linked lists while preserving data integrity.
5. Apply searching functions to hash keys and retrieve elements from the appropriate linked
list, showcasing algorithmic proficiency in data management and retrieval tasks using
separate chaining.
EXPERIMENT 41
AIM: Implement rehashing technique for handling collisions in hashing. Include functions
for hash table creation, insertion, deletion, and searching with rehashing when the load factor
exceeds a specified threshold.
THEORY: Rehashing is a collision resolution technique in hashing that involves
dynamically resizing the hash table and redistributing elements when the load factor exceeds
a specified threshold. Functions for hash table creation initialize an array with an initial
capacity. Insertion functions calculate the load factor and trigger rehashing when it exceeds
the threshold, resizing the table and redistributing elements. Deletion functions update the
load factor and may trigger rehashing if it falls below a specified threshold. Searching
functions hash keys and locate elements in the resized hash table. Rehashing ensures optimal
performance by maintaining an appropriate load factor and minimizing collisions in the hash
table.
SOURCE CODE:
#include <stdio.h>
#include <stdlib.h>
#define INITIAL_CAPACITY 10
#define LOAD_FACTOR_THRESHOLD 0.7
int main() {
HashTable *hashTable = createHashTable();
// Delete a key
int deleteKey = 20;
delete(hashTable, deleteKey);
printf("Hash table after deletion of key %d:\n", deleteKey);
displayHashTable(hashTable);
// Free memory
for (int i = 0; i < hashTable->capacity; ++i) {
Node *current = hashTable->table[i];
while (current != NULL) {
Node *temp = current;
current = current->next;
free(temp);
}
}
free(hashTable->table);
free(hashTable);
return 0;
}
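Note: the excerpt above omits createHashTable, insert, and the rehash step itself. A minimal sketch showing how insertion monitors the load factor and doubles the table when the threshold is exceeded is given below; the initial capacity of 4 and the helper names are illustrative assumptions.
#include <stdio.h>
#include <stdlib.h>
#define INITIAL_CAPACITY 4
#define LOAD_FACTOR_THRESHOLD 0.7
typedef struct Node { int key; struct Node *next; } Node;
typedef struct { Node **table; int capacity; int size; } HashTable;
HashTable *createHashTable(void) {
    HashTable *ht = malloc(sizeof(HashTable));
    ht->capacity = INITIAL_CAPACITY;
    ht->size = 0;
    ht->table = calloc(ht->capacity, sizeof(Node*));
    return ht;
}
// Double the table and reinsert every node under the new capacity
void rehash(HashTable *ht) {
    int oldCap = ht->capacity;
    Node **oldTable = ht->table;
    ht->capacity = oldCap * 2;
    ht->table = calloc(ht->capacity, sizeof(Node*));
    for (int i = 0; i < oldCap; i++) {
        Node *cur = oldTable[i];
        while (cur) {
            Node *next = cur->next;
            int j = cur->key % ht->capacity;   // new bucket index
            cur->next = ht->table[j];
            ht->table[j] = cur;
            cur = next;
        }
    }
    free(oldTable);
}
void insert(HashTable *ht, int key) {
    if ((double)(ht->size + 1) / ht->capacity > LOAD_FACTOR_THRESHOLD)
        rehash(ht);                            // grow before the table gets too full
    int i = key % ht->capacity;
    Node *n = malloc(sizeof(Node));
    n->key = key;
    n->next = ht->table[i];
    ht->table[i] = n;
    ht->size++;
}
int main() {
    HashTable *ht = createHashTable();
    for (int k = 1; k <= 10; k++) insert(ht, k * 5);
    printf("capacity after 10 inserts: %d, size: %d\n", ht->capacity, ht->size);
    for (int i = 0; i < ht->capacity; i++) {   // free all chains
        Node *cur = ht->table[i];
        while (cur) { Node *t = cur; cur = cur->next; free(t); }
    }
    free(ht->table);
    free(ht);
    return 0;
}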
OUTPUT:
LEARNING OUTCOME:
1. Develop a comprehensive understanding of rehashing as a technique for handling
collisions in hashing, recognizing its role in dynamically resizing the hash table.
2. Implement rehashing effectively, demonstrating proficiency in creating hash tables with
functions for insertion, deletion, and searching.
3. Master insertion functions, efficiently monitoring the load factor and triggering rehashing
when it exceeds a specified threshold to maintain optimal performance.
4. Demonstrate proficiency in deletion functions, updating the load factor and potentially
triggering rehashing to ensure balanced data distribution.
5. Apply searching functions to locate elements in the hash table, showcasing algorithmic
proficiency and problem-solving skills in data management and retrieval tasks with
rehashing.