
ASSIGNMENT – 3

DATA STRUCTURES – 22CA102004

MODULE – 5

STUDENT NAME: B. Jyothi Swaroop
ROLL NUMBER: 23102C010064
COURSE & SEMESTER: BCA, SEM-2
SECTION: 1
DATE OF SUBMISSION: 22 March 2024

Signature of The Student Signature of The Faculty


2 MARKS
1. Define Graph and Draw its Notation?
In data structures, a graph is a collection of nodes (vertices) and edges that connect pairs of
nodes. Graphs are used to represent relationships between entities. The nodes represent
the entities, and the edges represent the connections or relationships between them.
Now, let's draw a simple undirected graph using graphical notation:
    A
   / \
  /   \
 B --- C

2. Define Hashing and Draw its Notation?


Hashing is a technique used in computer science to efficiently store and retrieve data in a
data structure called a hash table. It involves mapping keys to positions in the hash table
using a hash function. This allows for constant-time average-case operations such as
insertion, deletion, and retrieval of data. Now, let's draw a simple notation representing
hashing using a hash table:
+---+      +-------+      +-------+
| 0 | ---> |       | ---> |       |
+---+      +-------+      +-------+
| 1 | ---> |       | ---> |       |
+---+      +-------+      +-------+
| 2 | ---> |       | ---> |       |
+---+      +-------+      +-------+
| 3 | ---> |       | ---> |       |
+---+      +-------+      +-------+
| . |         ...            ...
+---+      +-------+      +-------+
| n | ---> |       | ---> |       |
+---+      +-------+      +-------+
3. What is Open Addressing?


Open addressing is a collision resolution technique used in hash tables to handle situations
where multiple keys hash to the same index. In open addressing, when a collision occurs
(i.e., when the hash function maps a new key to a slot that is already occupied), the
algorithm searches for an empty slot in the hash table to place the new key-value pair.
4. What is the Separate Chaining Method?
Separate chaining is a collision resolution technique used in hash tables to handle situations
where multiple keys hash to the same index. In separate chaining, each slot in the hash
table maintains a linked list (or another data structure like an array) of key-value pairs that
hash to the same index.

5. What is Breadth-First Search?


Breadth-First Search (BFS) is a graph traversal algorithm used to explore nodes in a graph
systematically. It starts at a selected node (often referred to as the "source" node) and
explores all of its neighbors at the current depth level before moving on to the nodes at the
next depth level.

6. What is Depth-First Search?


Depth-First Search (DFS) is another fundamental graph traversal algorithm used to
systematically explore nodes in a graph. Unlike Breadth-First Search (BFS), which explores
nodes level by level, DFS explores as far as possible along each branch before backtracking.
It goes as deep as possible along each branch before exploring other branches.

7. What are Minimum Spanning Trees?


A Minimum Spanning Tree (MST) is a subgraph of an undirected, connected graph that is a
tree (i.e., it has no cycles) and connects all the vertices together with the minimum possible
total edge weight. In simpler terms, an MST is the smallest possible tree that connects all
the vertices in a graph while minimizing the total weight of the edges.

8. What is Hashing Efficiency?


Hashing efficiency refers to the effectiveness and performance of a hashing technique in
terms of its ability to distribute keys evenly across the hash table and minimize collisions.
Several factors contribute to hashing efficiency:
1. Uniform hash function
2. Minimal collisions
3. Load factor
4. Collision resolution
5. Memory usage

9. Define Quick Computation?


In the context of data structures, "quick computation" generally refers to the efficiency of
operations performed on the data structure. Specifically, it pertains to the speed at which
various operations such as insertion, deletion, search, and traversal can be executed.

10. What are Random and Non-Random Keys?


Random Keys: Random keys are keys used within data structures that are generated or
chosen without any discernible pattern or order, so successive values have no inherent
relationship. They are typically used in hash-based data structures like hash tables, where
their unpredictability helps distribute data uniformly across the structure, minimizing
collisions and optimizing performance.
Non-Random Keys: Non-random keys follow some pattern or order (for example, sequential
IDs or values derived from other data), so the hash function must work harder to spread
them evenly across the table.

8 MARKS
1. Define Graph and Explain the Concept of Graph with an Example?
In data structures, a graph is a non-linear data structure composed of a set of vertices (also
known as nodes) and a set of edges that connect pairs of vertices. Graphs are widely used
to model relationships between entities, making them a fundamental data structure in
computer science.
Here's a formal definition of a graph:
"A graph G consists of a non-empty set V of vertices (nodes) and a set E of edges. Each edge
in E is a pair (v, w) where v, w ∈ V. For directed graphs, the pair (v, w) indicates an edge from
vertex v to vertex w. For undirected graphs, the pair (v, w) indicates a bidirectional edge
between vertices v and w."
Now, let's explain the concept of a graph with an example:
Consider a social network where individuals (vertices) are connected by friendships (edges).
We can represent this social network as a graph:
Vertices: {Alice, Bob, Charlie, Dave, Eve} Edges: {(Alice, Bob), (Bob, Charlie), (Charlie, Dave),
(Charlie, Eve), (Dave, Eve)}
In this example:
- Vertices represent individuals in the social network, such as Alice, Bob, Charlie, Dave, and Eve.
- Edges represent friendships between individuals. For instance, there is an edge between Alice and Bob, indicating that they are friends.
- This graph is undirected, meaning that friendships are mutual. If Alice is friends with Bob, then Bob is also friends with Alice.
This example demonstrates how a graph can be used to model relationships between
entities in a social network. In this context, graphs are valuable for analyzing connectivity,
identifying communities, and studying the structure of the network.
In data structures, graphs can be implemented using various representations, such as
adjacency matrices, adjacency lists, or edge lists. These representations provide different
trade-offs in terms of memory usage and efficiency for different types of graph operations.
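For illustration, the friendship graph above could be written down as a plain edge list in C. This is only a sketch; the struct and variable names are chosen for this example and are not part of the original answer.

```c
#include <stdio.h>

// Each friendship is an undirected edge between two named vertices.
struct Edge {
    const char *from;
    const char *to;
};

int main(void) {
    struct Edge friendships[] = {
        {"Alice", "Bob"}, {"Bob", "Charlie"}, {"Charlie", "Dave"},
        {"Charlie", "Eve"}, {"Dave", "Eve"}
    };
    int num_edges = sizeof(friendships) / sizeof(friendships[0]);

    // Print every friendship once (undirected, so order does not matter)
    for (int i = 0; i < num_edges; i++) {
        printf("%s -- %s\n", friendships[i].from, friendships[i].to);
    }
    return 0;
}
```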

2. Define Hashing and Explain the Concept of Hashing with an Example?
Hashing is a technique used in computer science to efficiently store, retrieve, and manage
data in data structures known as hash tables. It involves mapping data (keys) to indexes in a
data structure using a hash function. This process allows for constant-time average-case
operations such as insertion, deletion, and retrieval of data.
Here's how hashing works conceptually:
1. Hash Function: A hash function is a mathematical function that takes an input (often
referred to as a key) and produces a fixed-size output, known as a hash value or hash
code. The hash function computes a hash value for each input, ideally distributing the
keys evenly across the range of possible hash values.
2. Hash Table: A hash table is a data structure that stores key-value pairs. It typically
consists of an array (or an array-like structure) where each element corresponds to a
slot or bucket in the table. The hash function is used to compute the index or position
in the hash table where the key-value pair will be stored or retrieved.
3. Hash Collision: Since the number of possible hash values is typically smaller than the
number of possible keys, collisions may occur when two or more keys produce the
same hash value. Hash collision resolution techniques are used to handle such
situations and ensure that each key-value pair is stored and retrievable accurately.
4. Collision Resolution: Common collision resolution techniques include:
   - Chaining: Each slot in the hash table maintains a linked list (or other data structure) of key-value pairs that hash to the same index.
   - Open Addressing: If a collision occurs, the algorithm probes for an alternative (usually nearby) slot until an empty slot is found.
Here's an example to illustrate hashing:
Suppose we have a hash table with 10 slots, and we want to store the following key-value
pairs:
 ("John", 25)
 ("Alice", 30)
 ("Bob", 35)
 ("Charlie", 40)
1. Hash Function: We use a simple hash function that computes the hash value by
summing the ASCII values of the characters in the key and taking the modulo
operation with the size of the hash table (10).
2. Storing Key-Value Pairs:
   - ("John", 25): Hash("John") = (ASCII("J") + ASCII("o") + ASCII("h") + ASCII("n")) % 10 = (74 + 111 + 104 + 110) % 10 = 399 % 10 = 9. Store (key="John", value=25) in slot 9.
   - ("Alice", 30): Hash("Alice") = (ASCII("A") + ASCII("l") + ASCII("i") + ASCII("c") + ASCII("e")) % 10 = (65 + 108 + 105 + 99 + 101) % 10 = 478 % 10 = 8. Store (key="Alice", value=30) in slot 8.
   - Similarly, ("Bob", 35) hashes to (66 + 111 + 98) % 10 = 275 % 10 = 5 and is stored in slot 5, and ("Charlie", 40) hashes to 696 % 10 = 6 and is stored in slot 6.
3. Retrieving Values:
   - To retrieve the value associated with the key "Alice", we compute Hash("Alice") to find the index in the hash table (slot 8) and then retrieve the value stored in that slot (30).
Hashing provides an efficient way to store and retrieve data, especially when the number of
keys is large and the range of possible keys is known. It allows for constant-time average-
case performance for basic operations, making it a fundamental technique in computer
science and programming.
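As a minimal sketch of the ASCII-sum hash function used in this example (the function name and table size are illustrative, not part of the original answer):

```c
#include <stdio.h>

// Sum the ASCII codes of the characters in the key and take the
// remainder modulo the table size (10 slots in the example above).
int hash(const char *key, int table_size) {
    int sum = 0;
    for (int i = 0; key[i] != '\0'; i++) {
        sum += (unsigned char)key[i];
    }
    return sum % table_size;
}

int main(void) {
    // Prints the slots computed in the example: 9 8 5 6
    printf("%d %d %d %d\n",
           hash("John", 10), hash("Alice", 10),
           hash("Bob", 10), hash("Charlie", 10));
    return 0;
}
```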

3. Explain The Logical Structural Representation of a Graph with an Example?
In data structures, the logical structural representation of a graph refers to how the graph is
internally stored and organized in computer memory, emphasizing its logical relationships
rather than its physical representation. There are several common ways to represent a
graph in data structures, each with its own advantages and trade-offs. Let's explore three
common representations with an example:
Consider the following undirected graph:
  A
  |
  B------C
  |     /|
  |    / |
  |   /  |
  D------E
1. Adjacency Matrix: In an adjacency matrix representation, the graph is stored as a 2D array
(matrix) where the rows and columns represent vertices, and the presence or absence of an
edge between vertices is indicated by the value in the corresponding cell. For an undirected
graph, the matrix is symmetric.
Example:
      A  B  C  D  E
    +----------------
  A | 0  1  0  0  0
  B | 1  0  1  1  0
  C | 0  1  0  1  1
  D | 0  1  1  0  1
  E | 0  0  1  1  0
2. Adjacency List: In an adjacency list representation, the graph is stored as a collection of
lists (or arrays), where each vertex has a list of its adjacent vertices. This representation is
memory-efficient, especially for sparse graphs.
Example:
A: [B]
B: [A, C, D]
C: [B, D, E]
D: [B, C, E]
E: [C, D]
3. Edge List: In an edge list representation, the graph is stored as a list of all the edges, where
each edge is represented as a pair of vertices (for an undirected graph).
Example:
(A, B), (B, C), (B, D), (C, D), (C, E), (D, E)
These representations capture the logical structure of the graph and are used to efficiently
perform operations such as traversal, insertion, and deletion of vertices and edges. The
choice of representation depends on factors such as the size of the graph, the density of
edges, and the specific operations to be performed on it. Each representation has its own
advantages and is suitable for different scenarios.
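As a minimal sketch (not part of the original answer), the adjacency matrix above maps directly to a 2D array in C, with vertices A..E stored as indices 0..4:

```c
#include <stdio.h>

#define N 5  // Vertices A, B, C, D, E mapped to indices 0..4

int main(void) {
    // Symmetric matrix for the undirected example graph
    int adj[N][N] = {
        {0, 1, 0, 0, 0},   // A
        {1, 0, 1, 1, 0},   // B
        {0, 1, 0, 1, 1},   // C
        {0, 1, 1, 0, 1},   // D
        {0, 0, 1, 1, 0}    // E
    };
    const char labels[N] = {'A', 'B', 'C', 'D', 'E'};

    // Print each vertex followed by its neighbours
    for (int i = 0; i < N; i++) {
        printf("%c:", labels[i]);
        for (int j = 0; j < N; j++) {
            if (adj[i][j]) printf(" %c", labels[j]);
        }
        printf("\n");
    }
    return 0;
}
```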

4. Explain The Logical Structural Representation of a Graph with an Example?
In data structures, the logical structural representation of a graph refers to how the graph is
internally organized and represented in computer memory, focusing on the logical
relationships between vertices and edges rather than its physical representation. There are
several common ways to represent a graph in data structures, each with its own strengths
and weaknesses. Let's explore some of these representations with an example:
Consider the following undirected graph:
  A
  |
  B------C
  |     /|
  |    / |
  |   /  |
  D------E
Adjacency Matrix: An adjacency matrix representation stores the graph as a 2D array
(matrix), where the rows and columns represent vertices, and the entries indicate whether
there is an edge between pairs of vertices. For an undirected graph, the matrix is symmetric
along the diagonal.
Example:
    | A | B | C | D | E |
----+---+---+---+---+---+
  A | 0 | 1 | 0 | 0 | 0 |
  B | 1 | 0 | 1 | 1 | 0 |
  C | 0 | 1 | 0 | 1 | 1 |
  D | 0 | 1 | 1 | 0 | 1 |
  E | 0 | 0 | 1 | 1 | 0 |
Adjacency List: An adjacency list representation maintains a list of adjacent vertices for each
vertex. It is typically implemented using an array of lists or a dictionary (or hash map) where
each vertex is mapped to its list of neighbors.
Example:
A: [B]
B: [A, C, D]
C: [B, D, E]
D: [B, C, E]
E: [C, D]
Edge List: An edge list representation stores a list of all edges in the graph. Each edge is
represented as a tuple (or pair) of vertices.
Example:
(A, B), (B, C), (B, D), (C, D), (C, E), (D, E)
These representations capture the structure of the graph and allow for efficient traversal,
manipulation, and analysis of the graph data. The choice of representation depends on
factors such as the size of the graph, the density of edges, and the specific operations to be
performed on it. Each representation has its own advantages and is suitable for different
scenarios.

5. What are the Operations of Graphs and Explain each with an Example?
In graph theory, various operations are commonly performed on graphs to manipulate their
structure or extract useful information. Some of the fundamental graph operations include:

1. **Create Graph**: This operation involves initializing a graph data structure. The graph
can be represented using various data structures such as an adjacency matrix, adjacency
list, or an edge list.
```c
// Example of creating a graph using an adjacency list representation

#include <stdio.h>
#include <stdlib.h>

// Define the maximum number of vertices


#define MAX_VERTICES 10

// Node structure for adjacency list


struct Node {
int vertex;
struct Node* next;
};

// Graph structure
struct Graph {
int numVertices;
struct Node* adjLists[MAX_VERTICES];
};

// Function to create a new graph with given number of vertices


struct Graph* createGraph(int vertices) {
struct Graph* graph = (struct Graph*)malloc(sizeof(struct Graph));
graph->numVertices = vertices;

// Initialize adjacency lists as empty


for (int i = 0; i < vertices; i++) {
graph->adjLists[i] = NULL;
}

return graph;
}
```

2. **Add Edge**: This operation involves adding an edge between two vertices of the
graph.

```c
// Example of adding an edge in an adjacency list representation

void addEdge(struct Graph* graph, int src, int dest) {


// Create a new node for the destination vertex
struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
newNode->vertex = dest;
newNode->next = NULL;
// Add the new node to the adjacency list of source vertex
newNode->next = graph->adjLists[src];
graph->adjLists[src] = newNode;

// For undirected graphs, add an edge from dest to src as well


newNode = (struct Node*)malloc(sizeof(struct Node));
newNode->vertex = src;
newNode->next = NULL;

// Add the new node to the adjacency list of destination vertex


newNode->next = graph->adjLists[dest];
graph->adjLists[dest] = newNode;
}
```

3. **Remove Edge**: This operation involves removing an edge between two vertices of
the graph.

```c
// Example of removing an edge in an adjacency list representation

void removeEdge(struct Graph* graph, int src, int dest) {


// Remove edge from src to dest
struct Node* current = graph->adjLists[src];
struct Node* prev = NULL;

while (current != NULL && current->vertex != dest) {


prev = current;
current = current->next;
}
// If the edge exists, remove it
if (current != NULL) {
if (prev != NULL) {
prev->next = current->next;
} else {
graph->adjLists[src] = current->next;
}
free(current);
}

// Remove edge from dest to src (for undirected graphs)


current = graph->adjLists[dest];
prev = NULL;

while (current != NULL && current->vertex != src) {


prev = current;
current = current->next;
}

if (current != NULL) {
if (prev != NULL) {
prev->next = current->next;
} else {
graph->adjLists[dest] = current->next;
}
free(current);
}
}
```

These are some of the basic graph operations that can be performed in C. Depending on the
requirements and the specific application, additional operations such as graph traversal
(DFS, BFS), finding shortest paths, and determining connected components can also be
implemented.

6. What is Open Addressing Mode of Hashing and Explain it with an Example?
Open addressing is a collision resolution technique used in hash tables to deal with
collisions. In open addressing, when a collision occurs (i.e., two elements hash to the same
location), the algorithm finds an alternative location within the table to place the collided
element.

Here's how open addressing works:

1. **Hashing Function**: Initially, each element is hashed to find its initial position in the
hash table.

2. **Collision Handling**: If the calculated position is already occupied by another element,
then instead of chaining (as in separate chaining), the open addressing algorithm searches
for an alternative position within the hash table.

3. **Probing**: Probing involves searching for an empty slot in the hash table to place the
collided element. There are different methods of probing, such as linear probing, quadratic
probing, and double hashing.

4. **Insertion**: Once an empty slot is found, the collided element is inserted into that
position.

5. **Search and Deletion**: During search and deletion operations, the same probing
technique is used to locate the element. If the element is found, it is either returned or
deleted, respectively.

Here's an example of open addressing using linear probing:


Suppose we have a hash table with 10 slots and the following hash function:
```c
int hash(int key) {
    return key % 10; // Simple modulo hashing function
}
```

And we want to insert the following elements into the hash table:

- 25
- 35
- 15
- 45

Initially, the hash table is empty. After inserting 25, it hashes to position 5.

```
Index: 0 1 2 3 4 5 6 7 8 9
Element: [ ] [ ] [ ] [ ] [ ] [25] [ ] [ ] [ ] [ ]
```

Then, when inserting 35, it also hashes to position 5, but it's already occupied. So, linear
probing would search for the next available slot, which is position 6.

```
Index: 0 1 2 3 4 5 6 7 8 9
Element: [ ] [ ] [ ] [ ] [ ] [25] [35] [ ] [ ] [ ]
```
Similarly, for 15, it hashes to position 5 (collision again), then probes linearly and finds an
empty slot at position 7.

```
Index: 0 1 2 3 4 5 6 7 8 9
Element: [ ] [ ] [ ] [ ] [ ] [25] [35] [15] [ ] [ ]
```

Finally, when inserting 45, it hashes to position 5 (collision again), then probes linearly and
finds an empty slot at position 8.

```
Index: 0 1 2 3 4 5 6 7 8 9
Element: [ ] [ ] [ ] [ ] [ ] [25] [35] [15] [45] [ ]
```

All four elements have now been placed without losing any of them. During searches or
deletions, the same linear probing sequence is followed to locate elements.
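A minimal C sketch of linear-probing insertion, assuming a fixed table of 10 integer slots where -1 marks an empty slot (the table size, sentinel, and function names are illustrative):

```c
#include <stdio.h>

#define TABLE_SIZE 10
#define EMPTY -1

int table[TABLE_SIZE];

int hash(int key) {
    return key % TABLE_SIZE;
}

// Insert with linear probing: step forward one slot at a time
// until an empty slot is found. Returns the index used, or -1 if full.
int insert(int key) {
    int index = hash(key);
    for (int i = 0; i < TABLE_SIZE; i++) {
        int probe = (index + i) % TABLE_SIZE;
        if (table[probe] == EMPTY) {
            table[probe] = key;
            return probe;
        }
    }
    return -1;  // Table is full
}

int main(void) {
    for (int i = 0; i < TABLE_SIZE; i++) table[i] = EMPTY;

    int keys[] = {25, 35, 15, 45};
    for (int i = 0; i < 4; i++) {
        printf("%d stored at index %d\n", keys[i], insert(keys[i]));
    }
    // Matches the example above: 25 -> 5, 35 -> 6, 15 -> 7, 45 -> 8
    return 0;
}
```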

7. Write an Algorithm to perform Depth-First Search?


Here is a Depth-First Search (DFS) algorithm implemented in C for a graph represented
using an adjacency list:

```c
#include <stdio.h>
#include <stdlib.h>

// Structure for a node in the adjacency list


struct Node {
int vertex;
struct Node* next;
};
// Structure for the adjacency list
struct Graph {
int numVertices;
struct Node** adjLists;
int* visited;
};

// Function to create a new node


struct Node* createNode(int v) {
struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
newNode->vertex = v;
newNode->next = NULL;
return newNode;
}

// Function to create a graph with a given number of vertices


struct Graph* createGraph(int vertices) {
struct Graph* graph = (struct Graph*)malloc(sizeof(struct Graph));
graph->numVertices = vertices;

graph->adjLists = (struct Node**)malloc(vertices * sizeof(struct Node*));


graph->visited = (int*)malloc(vertices * sizeof(int));

for (int i = 0; i < vertices; i++) {


graph->adjLists[i] = NULL;
graph->visited[i] = 0;
}

return graph;
}

// Function to add an edge to an undirected graph


void addEdge(struct Graph* graph, int src, int dest) {
// Add an edge from src to dest
struct Node* newNode = createNode(dest);
newNode->next = graph->adjLists[src];
graph->adjLists[src] = newNode;

// Add an edge from dest to src


newNode = createNode(src);
newNode->next = graph->adjLists[dest];
graph->adjLists[dest] = newNode;
}

// Depth-First Search function


void DFS(struct Graph* graph, int vertex) {
// Mark the current vertex as visited
graph->visited[vertex] = 1;
printf("%d ", vertex);

// Traverse all adjacent vertices of the current vertex


struct Node* adjList = graph->adjLists[vertex];
while (adjList != NULL) {
int adjVertex = adjList->vertex;
if (graph->visited[adjVertex] == 0) {
DFS(graph, adjVertex);
}
adjList = adjList->next;
}
}

int main() {
struct Graph* graph = createGraph(5); // Create a graph with 5 vertices

// Add edges to the graph


addEdge(graph, 0, 1);
addEdge(graph, 0, 2);
addEdge(graph, 1, 3);
addEdge(graph, 2, 4);
addEdge(graph, 3, 4);

printf("Depth-First Traversal starting from vertex 0:\n");


DFS(graph, 0); // Perform DFS starting from vertex 0

return 0;
}
```

This C program defines a graph data structure using an adjacency list representation. It
includes functions to create a graph, add edges, and perform Depth-First Search (DFS)
traversal starting from a specified vertex. Finally, it demonstrates the usage of these
functions in the main function.

8. Write an Algorithm to perform Breadth-First Search?


Below is a pseudocode algorithm for performing Breadth-First Search (BFS) on a graph
represented using an adjacency list:

```
BFS(graph, start):
// Initialize a queue to keep track of vertices to visit
queue = new Queue()

// Initialize a set to keep track of visited vertices


visited = new Set()

// Enqueue the starting vertex onto the queue


queue.enqueue(start)

// Mark the starting vertex as visited


visited.add(start)

// While the queue is not empty


while queue is not empty:
// Dequeue a vertex from the queue
current = queue.dequeue()

// Process or print the current vertex


print current

// For each neighbor of the current vertex


for each neighbor of current:
// If the neighbor has not been visited
if neighbor is not in visited:
// Enqueue the neighbor onto the queue
queue.enqueue(neighbor)

// Mark the neighbor as visited


visited.add(neighbor)
```

Explanation:

- The BFS algorithm maintains a queue to keep track of vertices to visit. It starts by
enqueuing the starting vertex onto the queue and marking it as visited.
- In each iteration, it dequeues a vertex from the queue, processes it (e.g., prints it), and
then explores its neighbors.
- If a neighbor has not been visited yet, it is enqueued onto the queue and marked as
visited. This ensures that vertices are explored in a breadth-first manner.

This algorithm explores the entire graph reachable from the starting vertex in a breadth-first
manner.

Note: This algorithm assumes that the graph is represented using an adjacency list.
Additionally, it does not handle disconnected graphs. If the graph is disconnected, you may
need to modify the algorithm to handle multiple connected components.

9. Define Minimum Spanning Tree and Explain it with Sample Code?


A Minimum Spanning Tree (MST) is a subset of the edges of a connected, undirected graph
that connects all the vertices together without any cycles and with the minimum possible
total edge weight. In other words, an MST is a tree that spans all the vertices of the graph
with the least possible total edge weight.

Here's an example. Consider the following weighted, undirected graph:

```
        4        1
   (1)-----(2)-----(3)
    |      /|      /
  11|    5/ |2   3/
    |    /  |    /
   (4)-----(5)
        6
```

In the above graph, a minimum spanning tree is:

```
        4        1
   (1)-----(2)-----(3)
           /|
         5/ |2
         /  |
       (4) (5)
```

It consists of the edges (2, 3), (2, 5), (1, 2), and (2, 4), with total weight 1 + 2 + 4 + 5 = 12.

Now, let's write sample code in C to find the Minimum Spanning Tree using Prim's
algorithm:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <limits.h>

#define V 5 // Number of vertices

// Function to find the vertex with the minimum key value


int minKey(int key[], bool mstSet[]) {
int min = INT_MAX, min_index;

for (int v = 0; v < V; v++) {


if (mstSet[v] == false && key[v] < min) {
min = key[v];
min_index = v;
}
}

return min_index;
}

// Function to print the constructed MST stored in parent[]


void printMST(int parent[], int graph[V][V]) {
printf("Edge Weight\n");
for (int i = 1; i < V; i++) {
printf("%d - %d %d \n", parent[i], i, graph[i][parent[i]]);
}
}

// Function to construct and print MST for a graph represented using an adjacency matrix
void primMST(int graph[V][V]) {
int parent[V]; // Array to store constructed MST
int key[V]; // Key values used to pick minimum weight edge in cut
bool mstSet[V]; // To represent set of vertices included in MST

// Initialize all keys as INFINITE


for (int i = 0; i < V; i++) {
key[i] = INT_MAX;
mstSet[i] = false;
}

// Always include the first vertex in MST. Make its key 0 so that it is picked first.
key[0] = 0;
parent[0] = -1; // First node is always root of MST

// The MST will have V vertices


for (int count = 0; count < V - 1; count++) {
// Pick the minimum key vertex from the set of vertices not yet included in MST
int u = minKey(key, mstSet);

// Add the picked vertex to the MST set


mstSet[u] = true;

// Update key value and parent index of the adjacent vertices of the picked vertex.
// Consider only those vertices which are not yet included in MST
for (int v = 0; v < V; v++) {
// graph[u][v] is non-zero only for adjacent vertices of u
// mstSet[v] is false for vertices not yet included in MST
// Update the key only if graph[u][v] is smaller than key[v]
if (graph[u][v] && mstSet[v] == false && graph[u][v] < key[v]) {
parent[v] = u;
key[v] = graph[u][v];
}
}
}

// Print the constructed MST


printMST(parent, graph);
}

// Driver code
int main() {
/* Let us create the following graph
         2     3
     (0)---(1)---(2)
      |   / \    |
     6|  /8  \5  |7
      | /     \  |
     (3)-------(4)
           9            */
int graph[V][V] = {{0, 2, 0, 6, 0},
{2, 0, 3, 8, 5},
{0, 3, 0, 0, 7},
{6, 8, 0, 0, 9},
{0, 5, 7, 9, 0}};

// Print the solution


primMST(graph);

return 0;
}
```

This code demonstrates the implementation of Prim's algorithm to find the Minimum
Spanning Tree (MST) of a graph represented using an adjacency matrix. It prints the edges
of the MST along with their weights.

10. What are the Logical Differences between BFS and DFS?

| Feature | BFS (Breadth-First Search) | DFS (Depth-First Search) |
|---|---|---|
| Order of exploration | Explores vertices level by level, starting from the source vertex | Explores vertices depth by depth, exhaustively exploring one branch before backtracking |
| Data structure used | Uses a queue to maintain the order of vertices to be explored | Uses a stack (or recursion) to maintain the order of vertices to be explored |
| Implementation | Iterative implementation is straightforward using a queue | Recursive implementation is straightforward using a stack (or the implicit function call stack) |
| Memory requirement | May require more memory due to maintaining the queue | Generally requires less memory as it does not need to store all child nodes simultaneously |
| Completeness | Guarantees finding the shortest path (fewest edges) between two vertices in an unweighted graph | May not necessarily find the shortest path between two vertices in an unweighted graph |
| Applications | Shortest path algorithms, finding connected components, network analysis | Topological sorting, cycle detection, maze solving, game tree traversal |
These are some of the key logical differences between BFS and DFS. They have different characteristics and
are suitable for different types of problems and applications.
11. Explain the Process of Separate Chaining Method with an Example?
Separate chaining is a collision resolution technique used in hash tables to handle collisions.
In separate chaining, each bucket in the hash table contains a linked list of elements. When a
collision occurs (i.e., two keys hash to the same index), the collided elements are stored in
the same bucket in the form of a linked list.

Here's the process of separate chaining method with an example:

1. **Hash Table Initialization**: Initialize a hash table with a certain number of buckets.
Each bucket can be an array or a linked list.

2. **Hash Function**: Implement a hash function that maps keys to bucket indices. This
function should distribute keys evenly across the buckets to minimize collisions.

3. **Insertion**: When inserting a key-value pair into the hash table:

- Apply the hash function to the key to determine the bucket index.

- If the bucket at the determined index is empty, create a new linked list node with the key-
value pair and insert it into the bucket.

- If the bucket is not empty, traverse the linked list in the bucket to check if the key already
exists:

- If the key exists, update its value.

- If the key doesn't exist, append a new node with the key-value pair to the end of the
linked list.

4. **Retrieval**: When retrieving a value associated with a key:

- Apply the hash function to the key to determine the bucket index.

- Traverse the linked list in the bucket to find the node with the matching key:

- If found, return the corresponding value.


- If not found, return null or indicate that the key is not present in the hash table.

5. **Deletion**: When deleting a key-value pair:

- Apply the hash function to the key to determine the bucket index.

- Traverse the linked list in the bucket to find the node with the matching key:

- If found, remove the node from the linked list.

- If not found, do nothing.

Example:

Suppose we have a hash table with 5 buckets and the following hash function:

```

hash(key) = key % 5

```

Let's say we want to insert the following key-value pairs into the hash table:

- (2, "apple")

- (7, "banana")

- (12, "orange")

- (17, "grape")

Applying the hash function:

- hash(2) = 2 % 5 = 2

- hash(7) = 7 % 5 = 2 (collision with (2, "apple"))

- hash(12) = 12 % 5 = 2 (collision with (2, "apple") and (7, "banana"))

- hash(17) = 17 % 5 = 2 (collision with (2, "apple"), (7, "banana"), and (12, "orange"))
After insertion, the hash table would look like this (using linked lists for separate chaining):

```

Bucket 0:

Bucket 1:

Bucket 2: (2, "apple") -> (7, "banana") -> (12, "orange") -> (17, "grape")

Bucket 3:

Bucket 4:

```

In this example, separate chaining resolved collisions by storing collided elements in linked
lists within the same buckets.
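As a small sketch of the chaining idea above (the struct and function names are illustrative, and update/lookup are omitted for brevity), each bucket holds the head of a linked list; note that inserting at the head stores the newest entry first:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUCKETS 5

// One node in a bucket's linked list
struct Entry {
    int key;
    char value[16];
    struct Entry *next;
};

struct Entry *table[BUCKETS];  // Each bucket starts as an empty (NULL) list

int hash(int key) {
    return key % BUCKETS;
}

// Insert a key-value pair at the head of its bucket's list
void insert(int key, const char *value) {
    struct Entry *e = malloc(sizeof(struct Entry));
    e->key = key;
    strncpy(e->value, value, sizeof(e->value) - 1);
    e->value[sizeof(e->value) - 1] = '\0';
    int b = hash(key);
    e->next = table[b];
    table[b] = e;
}

int main(void) {
    insert(2, "apple");
    insert(7, "banana");
    insert(12, "orange");
    insert(17, "grape");

    // All four keys collide into bucket 2 and form a chain (newest first)
    for (struct Entry *e = table[2]; e != NULL; e = e->next) {
        printf("(%d, %s) -> ", e->key, e->value);
    }
    printf("NULL\n");
    return 0;
}
```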

12. What are the Characteristics of Good Hash Functions?


A good hash function is essential for efficient hash table operations. It should have several
characteristics to ensure proper distribution of keys and minimize collisions. Here are the
characteristics of a good hash function:

1. **Uniform Distribution**: A good hash function should distribute keys uniformly across
the hash table buckets. This ensures that each bucket has roughly the same number of keys,
reducing the likelihood of collisions.

2. **Deterministic**: The hash function should always produce the same hash value for the
same input key. This ensures consistency in hashing operations.

3. **Efficiency**: The hash function should be computationally efficient, meaning it should


generate hash values quickly. This is important for the overall performance of hash table
operations.
4. **Minimal Collisions**: A good hash function should minimize the number of collisions,
where multiple keys hash to the same bucket. While collisions are inevitable, a good hash
function should distribute keys in a way that minimizes the likelihood of collisions.

5. **Avalanche Effect**: A small change in the input key should produce a significant
change in the resulting hash value. This property ensures that similar keys are distributed
across different buckets, enhancing the uniformity of distribution.

6. **Prevents Clustering**: Clustering occurs when consecutive keys hash to consecutive


bucket indices, leading to poor performance. A good hash function should prevent clustering
by distributing keys evenly across the hash table.

7. **Deterministic Time Complexity**: The hash function should have a deterministic time
complexity, meaning its performance should be predictable and not dependent on the input
key size.

8. **Resilience to Attacks**: A good hash function should be resistant to attacks such as


collision attacks and pre-image attacks, ensuring the security of hash-based data structures.

9. **Ease of Implementation**: The hash function should be easy to implement and


understand, making it suitable for practical use in various applications.

By possessing these characteristics, a hash function ensures efficient and reliable


performance of hash table operations, facilitating fast key retrieval and manipulation.
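For illustration only (not part of the original answer), a widely used string hash such as djb2 mixes every character into the running value, which helps produce the uniform-distribution and avalanche properties described above:

```c
#include <stdio.h>

// djb2-style string hash: hash = hash * 33 + c for each character.
// The repeated multiply-and-add mixing spreads similar keys apart.
unsigned long djb2(const char *str) {
    unsigned long hash = 5381;
    int c;
    while ((c = (unsigned char)*str++) != 0) {
        hash = ((hash << 5) + hash) + c;  // hash * 33 + c
    }
    return hash;
}

int main(void) {
    // Similar keys land in quite different buckets of a 16-slot table
    printf("%lu %lu %lu\n",
           djb2("key1") % 16, djb2("key2") % 16, djb2("key3") % 16);
    return 0;
}
```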

13. Define Hash Function explain the Objective of a Hash Function?


A hash function is a mathematical function that takes an input (or 'key') and returns a fixed-
size string of bytes, typically of a shorter length than the input. The output of a hash function
is called a hash value, hash code, or simply hash.
The objective of a hash function is to efficiently map data of arbitrary size (such as keys in a
hash table) to a fixed-size hash value. This hash value is typically used to index a data
structure, such as an array or a hash table, where it can be used to quickly retrieve or store
associated data.

Here are the primary objectives of a hash function:

1. **Uniform Distribution**: A good hash function aims to distribute the hash values
uniformly across the output space. This helps in minimizing collisions, where two different
inputs produce the same hash value, and ensures efficient utilization of the data structure.

2. **Deterministic Mapping**: For the same input key, a hash function must produce the
same hash value every time. This ensures consistency and predictability in hash-based
operations.

3. **Fast Computation**: Hash functions should be computationally efficient, meaning they


should generate hash values quickly. This is crucial for achieving high performance in hash
table operations and other applications that rely on hash functions.

4. **Avalanche Effect**: A small change in the input key should result in a significant
change in the resulting hash value. This property ensures that similar keys are distributed
across different hash values, promoting uniformity and reducing clustering.

5. **Resistance to Collisions**: While collisions are unavoidable in hash functions due to


the finite output space, a good hash function should minimize the likelihood of collisions.
This is achieved by distributing keys as evenly as possible across the output space.

6. **Security**: In cryptographic applications, hash functions are used for data integrity
verification, password hashing, digital signatures, etc. In such cases, the hash function should
be resistant to attacks, such as collision attacks and pre-image attacks, to ensure the security
of the system.
Overall, the objective of a hash function is to provide a fast and efficient way to map data to
hash values, ensuring uniformity, determinism, and security as per the requirements of the
application.

14. Explain in-detail about Hashing Efficiency?


Hashing efficiency refers to how well a hash function and associated data structure (like a
hash table) perform in terms of time and space complexity. It encompasses several aspects
including collision handling, distribution of keys, computational overhead, and memory
usage. Let's delve into each of these aspects in detail:

1. **Collision Handling**:

- **Open Addressing vs. Separate Chaining**: In open addressing, collisions are resolved
by finding alternative positions within the hash table, while in separate chaining, collided
elements are stored in linked lists within the same bucket. The efficiency of collision
handling depends on how well these techniques are implemented and how often collisions
occur.

- **Collision Resolution Techniques**: Techniques like linear probing, quadratic probing,


double hashing (in open addressing), or resizing linked lists (in separate chaining) impact the
efficiency of collision handling.

2. **Distribution of Keys**:

- **Uniform Distribution**: A good hash function aims to distribute keys uniformly across
the hash table buckets. If keys are unevenly distributed, it can lead to clustering and poor
performance.

- **Avalanche Effect**: A small change in the input key should result in a significant
change in the hash value, ensuring that similar keys are distributed across different buckets.
This helps in achieving a more uniform distribution.

3. **Computational Overhead**:

- **Time Complexity**: Hashing operations such as insertion, retrieval, and deletion


should have low time complexity. Ideally, these operations should have constant time
complexity on average (O(1)).
- **Deterministic Performance**: The performance of hash functions should be
deterministic and predictable, ensuring consistent behavior across different inputs.

4. **Memory Usage**:

- **Space Complexity**: The memory usage of a hash table depends on factors like the
number of buckets, the size of the hash table, and the average number of elements per
bucket. It's essential to balance memory usage with performance requirements.

- **Load Factor and Rehashing**: The load factor of a hash table (ratio of the number of
elements to the number of buckets) affects its performance. When the load factor exceeds
a certain threshold, rehashing may be required to maintain efficiency by increasing the
number of buckets.

5. **Collision Avoidance**:

- **Hash Function Quality**: The quality of the hash function plays a crucial role in
avoiding collisions. A good hash function minimizes the likelihood of collisions by
distributing keys uniformly across the hash table.

- **Preventing Clustering**: Techniques like prime number bucket sizes or using a


secondary hash function in double hashing help prevent clustering and improve efficiency.

Overall, hashing efficiency is determined by how well the hash function distributes keys,
how collisions are handled, the computational overhead of hashing operations, and the
memory usage of the hash table. Balancing these factors is essential to ensure optimal
performance in hash-based data structures.
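As a small sketch of the load-factor idea discussed above (the threshold and function names are illustrative), a hash table can decide when to grow and rehash like this:

```c
#include <stdio.h>
#include <stdbool.h>

// Load factor = number of stored elements / number of buckets.
// When it crosses a threshold, the table should grow and rehash.
bool needs_rehash(int num_elements, int num_buckets, double threshold) {
    double load_factor = (double)num_elements / (double)num_buckets;
    return load_factor > threshold;
}

int main(void) {
    int elements = 8, buckets = 10;
    if (needs_rehash(elements, buckets, 0.75)) {
        printf("Load factor %.2f exceeds 0.75: grow to %d buckets and rehash\n",
               (double)elements / buckets, 2 * buckets);
    }
    return 0;
}
```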

15. Write a Program to implement Depth-First Search?


Below is a C program that implements Depth-First Search (DFS) on a graph represented
using an adjacency list:

```c
#include <stdio.h>
#include <stdlib.h>

#define MAX_VERTICES 10

// Structure for a node in the adjacency list
struct Node {
    int vertex;
    struct Node* next;
};

// Structure for the adjacency list
struct Graph {
    int numVertices;
    struct Node** adjLists;
    int* visited;
};

// Function to create a new node
struct Node* createNode(int v) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->vertex = v;
    newNode->next = NULL;
    return newNode;
}

// Function to create a graph with a given number of vertices
struct Graph* createGraph(int vertices) {
    struct Graph* graph = (struct Graph*)malloc(sizeof(struct Graph));
    graph->numVertices = vertices;
    graph->adjLists = (struct Node**)malloc(vertices * sizeof(struct Node*));
    graph->visited = (int*)malloc(vertices * sizeof(int));

    for (int i = 0; i < vertices; i++) {
        graph->adjLists[i] = NULL;
        graph->visited[i] = 0;
    }

    return graph;
}

// Function to add an edge to an undirected graph
void addEdge(struct Graph* graph, int src, int dest) {
    // Add an edge from src to dest
    struct Node* newNode = createNode(dest);
    newNode->next = graph->adjLists[src];
    graph->adjLists[src] = newNode;

    // Add an edge from dest to src
    newNode = createNode(src);
    newNode->next = graph->adjLists[dest];
    graph->adjLists[dest] = newNode;
}

// Depth-First Search function
void DFS(struct Graph* graph, int vertex) {
    // Mark the current vertex as visited
    graph->visited[vertex] = 1;
    printf("%d ", vertex);

    // Traverse all adjacent vertices of the current vertex
    struct Node* adjList = graph->adjLists[vertex];
    while (adjList != NULL) {
        int adjVertex = adjList->vertex;
        if (graph->visited[adjVertex] == 0) {
            DFS(graph, adjVertex);
        }
        adjList = adjList->next;
    }
}

int main() {
    struct Graph* graph = createGraph(5); // Create a graph with 5 vertices

    // Add edges to the graph
    addEdge(graph, 0, 1);
    addEdge(graph, 0, 2);
    addEdge(graph, 1, 3);
    addEdge(graph, 2, 4);
    addEdge(graph, 3, 4);

    printf("Depth-First Traversal starting from vertex 0:\n");
    DFS(graph, 0); // Perform DFS starting from vertex 0

    return 0;
}
```

This C program defines a graph data structure using an adjacency list representation and
includes functions to create a graph, add edges, and perform Depth-First Search (DFS)
traversal starting from a specified vertex. Finally, it demonstrates the usage of these
functions in the main function.

16. Define Breadth-First Search and Write the Sample Code?


Below is a definition of Breadth-First Search (BFS) followed by a sample code
implementation in C:

Breadth-First Search (BFS) is a graph traversal algorithm that starts traversing the graph
from a chosen source vertex and explores all of its neighbor vertices at the present depth
before moving on to the vertices at the next depth level. It ensures that all vertices at a
given level are visited before moving to the vertices at the next level.

Below is a sample code implementation of BFS in C:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX_VERTICES 100

// Queue implementation for BFS
typedef struct {
    int items[MAX_VERTICES];
    int front;
    int rear;
} Queue;

Queue* createQueue() {
    Queue* queue = (Queue*)malloc(sizeof(Queue));
    queue->front = -1;
    queue->rear = -1;
    return queue;
}

bool isEmpty(Queue* queue) {
    return queue->rear == -1;
}

void enqueue(Queue* queue, int value) {
    if (queue->rear == MAX_VERTICES - 1)
        printf("Queue overflow\n");
    else {
        if (queue->front == -1)
            queue->front = 0;
        queue->rear++;
        queue->items[queue->rear] = value;
    }
}

int dequeue(Queue* queue) {
    int item;
    if (isEmpty(queue)) {
        printf("Queue underflow\n");
        exit(EXIT_FAILURE);
    } else {
        item = queue->items[queue->front];
        queue->front++;
        if (queue->front > queue->rear) {
            queue->front = queue->rear = -1;
        }
    }
    return item;
}

// Graph representation using adjacency list
typedef struct Node {
    int dest;
    struct Node* next;
} Node;

typedef struct {
    Node* head;
} AdjList;

typedef struct {
    int num_vertices;
    AdjList* array;
} Graph;

Node* createNode(int dest) {
    Node* newNode = (Node*)malloc(sizeof(Node));
    newNode->dest = dest;
    newNode->next = NULL;
    return newNode;
}

Graph* createGraph(int num_vertices) {
    Graph* graph = (Graph*)malloc(sizeof(Graph));
    graph->num_vertices = num_vertices;
    graph->array = (AdjList*)malloc(num_vertices * sizeof(AdjList));
    for (int i = 0; i < num_vertices; ++i) {
        graph->array[i].head = NULL;
    }
    return graph;
}

void addEdge(Graph* graph, int src, int dest) {
    Node* newNode = createNode(dest);
    newNode->next = graph->array[src].head;
    graph->array[src].head = newNode;
}

void bfs(Graph* graph, int start) {
    bool* visited = (bool*)malloc(graph->num_vertices * sizeof(bool));
    for (int i = 0; i < graph->num_vertices; ++i) {
        visited[i] = false;
    }

    Queue* queue = createQueue();
    visited[start] = true;
    enqueue(queue, start);

    while (!isEmpty(queue)) {
        int current_vertex = dequeue(queue);
        printf("%d ", current_vertex);

        Node* temp = graph->array[current_vertex].head;
        while (temp) {
            int adj_vertex = temp->dest;
            if (!visited[adj_vertex]) {
                visited[adj_vertex] = true;
                enqueue(queue, adj_vertex);
            }
            temp = temp->next;
        }
    }
}

int main() {
    int num_vertices = 6;
    Graph* graph = createGraph(num_vertices);

    addEdge(graph, 0, 1);
    addEdge(graph, 0, 2);
    addEdge(graph, 1, 3);
    addEdge(graph, 1, 4);
    addEdge(graph, 2, 4);
    addEdge(graph, 3, 4);
    addEdge(graph, 3, 5);
    addEdge(graph, 4, 5);

    printf("Breadth First Traversal starting from vertex 0: ");
    bfs(graph, 0);

    return 0;
}
```

This code defines a simple directed graph and performs a BFS traversal starting from a given
vertex. It utilizes an adjacency list representation for the graph and a queue for BFS
traversal.

17. Explain in-detail about Quick Computation?


"Quick computation" typically refers to the ability to perform calculations or operations
rapidly, often achieved through efficient algorithms, optimized code, or specialized
hardware.

Here are several aspects that contribute to quick computation:

1. **Algorithm Efficiency**: Using algorithms that have low time complexity, such as O(1),
O(log n), or O(n), can significantly speed up computations. For example, algorithms like
binary search, hashing, or dynamic programming can provide faster solutions compared to
brute-force methods.

2. **Optimized Code**: Writing code in a way that minimizes unnecessary operations,


reduces memory usage, and utilizes hardware efficiently can improve computation speed.
This includes techniques like loop unrolling, cache optimization, and using appropriate data
structures.

3. **Parallel Processing**: Utilizing multiple processing units or cores simultaneously can


speed up computations for tasks that can be divided into parallel subtasks. Techniques like
multithreading, multiprocessing, or GPU acceleration can be employed to leverage
parallelism effectively.

4. **Hardware Acceleration**: Using specialized hardware, such as Graphics Processing


Units (GPUs), Field-Programmable Gate Arrays (FPGAs), or Application-Specific Integrated
Circuits (ASICs), can significantly speed up computations for certain types of tasks, such as
graphics rendering, machine learning, or cryptography.

5. **Precomputation and Memoization**: Precomputing results for frequently used


computations or storing intermediate results to avoid redundant calculations can reduce
computation time. Memoization, which involves caching previously computed results, can
be particularly useful for recursive algorithms or dynamic programming.

6. **Vectorization**: Utilizing vector instructions and SIMD (Single Instruction, Multiple


Data) operations provided by modern processors can accelerate computations by
performing multiple operations simultaneously on data vectors.

7. **Approximation and Heuristics**: In some cases, trading off accuracy for speed by using
approximation algorithms or heuristic methods can lead to faster computation. These
techniques are often employed in optimization problems or real-time systems where quick
decisions are required.

8. **Compiled Languages and Just-In-Time (JIT) Compilation**: Writing code in compiled


languages like C, C++, or Rust, or using JIT compilation techniques in languages like Java or
Python, can improve computation speed by optimizing code execution and reducing
overhead.

Overall, achieving quick computation involves a combination of algorithmic design, code


optimization, hardware utilization, and sometimes trade-offs between speed and accuracy.
By carefully considering these factors, developers can create systems and applications that
perform calculations rapidly and efficiently.
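As a small illustration of the memoization point above (a sketch, not part of the original answer), caching previously computed results turns an exponential recursive computation into a linear one:

```c
#include <stdio.h>

#define MAX_N 90
long long memo[MAX_N + 1];  // Globals are zero-initialized; 0 means "not computed yet"

// Memoized Fibonacci: each value is computed once and then reused.
long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void) {
    printf("fib(50) = %lld\n", fib(50));  // Fast thanks to memoization
    return 0;
}
```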

18. Explain in-detail about Random and Non-Random Keys?


"Random" and "non-random" keys refer to the characteristics of keys used in various
cryptographic algorithms, particularly in encryption and decryption processes.
1. **Random Keys**:

- **Definition**: Random keys are generated using a true random or pseudorandom


process, ensuring that they have no discernible pattern or predictability. True random keys
are generated from physical processes, such as atmospheric noise, radioactive decay, or
thermal noise, while pseudorandom keys are generated using deterministic algorithms that
produce sequences of numbers that appear random.

- **Usage**: Random keys are typically used in symmetric encryption algorithms like AES
(Advanced Encryption Standard) or stream ciphers. These keys are required to be kept
secret and are shared between the sender and receiver of encrypted messages.

- **Strength**: Random keys are considered secure as long as they are sufficiently long
and truly random or generated by a cryptographically secure pseudorandom number
generator (CSPRNG). The strength of encryption directly depends on the randomness and
length of the key.

- **Example**: In AES encryption, a 128-bit, 192-bit, or 256-bit random key is generated


and used for encrypting and decrypting data. The security of AES heavily relies on the
randomness and secrecy of the key.

2. **Non-Random Keys**:

- **Definition**: Non-random keys are generated using deterministic algorithms or


methods, often based on user-provided input, passwords, or derived from other data.
Unlike random keys, non-random keys may exhibit patterns or structure that could
potentially be exploited by attackers.

- **Usage**: Non-random keys are commonly used in asymmetric encryption algorithms


like RSA (Rivest-Shamir-Adleman) or digital signature schemes. In these systems, a key pair
consisting of a public key and a private key is generated, with the private key being derived
from the public key or vice versa.

- **Security**: The security of non-random keys depends on the strength of the


underlying cryptographic algorithms used for key generation and management. For
example, in RSA encryption, the security relies on the difficulty of factoring large composite
numbers, which is a computationally intensive problem.

- **Example**: In RSA encryption, a user generates a key pair consisting of a public key
(which can be shared with anyone) and a private key (which must be kept secret). The
public key is derived from the product of two large prime numbers, while the private key is
derived from the prime factors of this product. Although the keys are not random, the
security of RSA relies on the computational complexity of factoring large numbers.

In summary, random keys are essential for symmetric encryption algorithms, while non-
random keys are commonly used in asymmetric encryption schemes. Both types of keys
play critical roles in ensuring the security and confidentiality of cryptographic
communications, with their strength depending on factors such as randomness, length, and
the underlying cryptographic algorithms used for key generation.
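As a minimal sketch of generating a random key on a Unix-like system, assuming /dev/urandom is available (this is illustrative only, not a complete cryptographic recommendation):

```c
#include <stdio.h>
#include <stdlib.h>

// Read 16 random bytes (a 128-bit key) from the OS entropy source.
int main(void) {
    unsigned char key[16];
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(key, 1, sizeof(key), f) != sizeof(key)) {
        fprintf(stderr, "Could not read random bytes\n");
        return EXIT_FAILURE;
    }
    fclose(f);

    printf("128-bit random key: ");
    for (size_t i = 0; i < sizeof(key); i++) {
        printf("%02x", key[i]);
    }
    printf("\n");
    return 0;
}
```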

19. Write a Program to implement Breadth-First Search?


Below is a C program implementing the Breadth-First Search (BFS) algorithm for traversing a
graph represented using an adjacency list:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX_VERTICES 100

// Queue implementation for BFS
typedef struct {
    int items[MAX_VERTICES];
    int front;
    int rear;
} Queue;

Queue* createQueue() {
    Queue* queue = (Queue*)malloc(sizeof(Queue));
    queue->front = -1;
    queue->rear = -1;
    return queue;
}

bool isEmpty(Queue* queue) {
    return queue->rear == -1;
}

void enqueue(Queue* queue, int value) {
    if (queue->rear == MAX_VERTICES - 1)
        printf("Queue overflow\n");
    else {
        if (queue->front == -1)
            queue->front = 0;
        queue->rear++;
        queue->items[queue->rear] = value;
    }
}

int dequeue(Queue* queue) {
    int item;
    if (isEmpty(queue)) {
        printf("Queue underflow\n");
        exit(EXIT_FAILURE);
    } else {
        item = queue->items[queue->front];
        queue->front++;
        if (queue->front > queue->rear) {
            queue->front = queue->rear = -1;
        }
    }
    return item;
}

// Graph representation using adjacency list
typedef struct Node {
    int dest;
    struct Node* next;
} Node;

typedef struct {
    Node* head;
} AdjList;

typedef struct {
    int num_vertices;
    AdjList* array;
} Graph;

Node* createNode(int dest) {
    Node* newNode = (Node*)malloc(sizeof(Node));
    newNode->dest = dest;
    newNode->next = NULL;
    return newNode;
}

Graph* createGraph(int num_vertices) {
    Graph* graph = (Graph*)malloc(sizeof(Graph));
    graph->num_vertices = num_vertices;
    graph->array = (AdjList*)malloc(num_vertices * sizeof(AdjList));
    for (int i = 0; i < num_vertices; ++i) {
        graph->array[i].head = NULL;
    }
    return graph;
}

void addEdge(Graph* graph, int src, int dest) {
    Node* newNode = createNode(dest);
    newNode->next = graph->array[src].head;
    graph->array[src].head = newNode;
}

void bfs(Graph* graph, int start) {
    bool* visited = (bool*)malloc(graph->num_vertices * sizeof(bool));
    for (int i = 0; i < graph->num_vertices; ++i) {
        visited[i] = false;
    }

    Queue* queue = createQueue();
    visited[start] = true;
    enqueue(queue, start);

    while (!isEmpty(queue)) {
        int current_vertex = dequeue(queue);
        printf("%d ", current_vertex);

        Node* temp = graph->array[current_vertex].head;
        while (temp) {
            int adj_vertex = temp->dest;
            if (!visited[adj_vertex]) {
                visited[adj_vertex] = true;
                enqueue(queue, adj_vertex);
            }
            temp = temp->next;
        }
    }
}

int main() {
    int num_vertices = 6;
    Graph* graph = createGraph(num_vertices);

    addEdge(graph, 0, 1);
    addEdge(graph, 0, 2);
    addEdge(graph, 1, 3);
    addEdge(graph, 1, 4);
    addEdge(graph, 2, 4);
    addEdge(graph, 3, 4);
    addEdge(graph, 3, 5);
    addEdge(graph, 4, 5);

    printf("Breadth First Traversal starting from vertex 0: ");
    bfs(graph, 0);

    return 0;
}
```

In this C program:

- We first define a `Queue` structure and its functions for enqueue, dequeue, and checking
if it's empty. This queue will be used in the BFS traversal.

- We then define a structure for the adjacency list representation of the graph, which
consists of nodes and linked lists.

- Functions like `createNode`, `createGraph`, and `addEdge` are used for creating the graph
and adding edges between vertices.

- The `bfs` function implements the Breadth-First Search algorithm. It starts from the given
start vertex, explores its adjacent vertices level by level, and prints them.
- In the `main` function, we create a graph, add some edges, and then perform BFS traversal
starting from vertex 0.

Compile and run this C program, and you'll get the Breadth-First Traversal of the given
graph.
20. Define Depth-First Search and Write the Sample Code?
Depth-First Search (DFS) is a graph traversal algorithm that explores as far as
possible along each branch before backtracking. It traverses the depth of any
particular branch before moving on to explore the siblings.

Below is a sample C code implementation of Depth-First Search:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX_VERTICES 100

// Graph representation using adjacency list


typedef struct Node {
int dest;
struct Node* next;
} Node;

typedef struct {
Node* head;
} AdjList;

typedef struct {
int num_vertices;
AdjList* array;
} Graph;

Node* createNode(int dest) {


Node* newNode = (Node*)malloc(sizeof(Node));
newNode->dest = dest;
newNode->next = NULL;
return newNode;
}

Graph* createGraph(int num_vertices) {


Graph* graph = (Graph*)malloc(sizeof(Graph));
graph->num_vertices = num_vertices;
graph->array = (AdjList*)malloc(num_vertices * sizeof(AdjList));
for (int i = 0; i < num_vertices; ++i) {
graph->array[i].head = NULL;
}
return graph;
}

void addEdge(Graph* graph, int src, int dest) {


Node* newNode = createNode(dest);
newNode->next = graph->array[src].head;
graph->array[src].head = newNode;
}

void dfsUtil(Graph* graph, int vertex, bool* visited) {


visited[vertex] = true;
printf("%d ", vertex);

Node* temp = graph->array[vertex].head;


while (temp != NULL) {
int adj_vertex = temp->dest;
if (!visited[adj_vertex]) {
dfsUtil(graph, adj_vertex, visited);
}
temp = temp->next;
}
}

void dfs(Graph* graph, int start) {


bool* visited = (bool*)malloc(graph->num_vertices * sizeof(bool));
for (int i = 0; i < graph->num_vertices; ++i) {
visited[i] = false;
}

dfsUtil(graph, start, visited);


free(visited);
}

int main() {
int num_vertices = 4;
Graph* graph = createGraph(num_vertices);

addEdge(graph, 0, 1);
addEdge(graph, 0, 2);
addEdge(graph, 1, 2);
addEdge(graph, 2, 0);
addEdge(graph, 2, 3);
addEdge(graph, 3, 3);

printf("DFS Traversal starting from vertex 2: ");


dfs(graph, 2);

return 0;
}
```

In this C implementation:
- We define structures for representing a graph using an adjacency list.
- Functions like `createNode`, `createGraph`, and `addEdge` are used for creating
the graph and adding edges between vertices.
- The `dfsUtil` function performs the actual DFS traversal recursively. It marks the
current vertex as visited, prints it, and then recursively calls itself for all adjacent
vertices that have not been visited yet.
- The `dfs` function initializes the visited array and calls `dfsUtil` with the starting
vertex.
- In the `main` function, we create a graph, add some edges, and then perform DFS
traversal starting from vertex 2.

Compile and run this C program, and you'll get the Depth-First Traversal of the given
graph.

THANK YOU
