Data Structures and Algorithms Questions

The document provides a comprehensive overview of Data Structures and Algorithms, including solved questions on topics like linear vs non-linear data structures, stacks, binary search, binary search trees, recursion, arrays vs linked lists, hashing, AVL trees, graphs, and traversal algorithms. Each section includes detailed explanations, examples, and comparisons to enhance understanding. The content is structured into sets of questions, with each answer containing at least 200 words to meet academic requirements.

Data Structures and Algorithms

Here is Set 1 of solved 10-mark questions for Data Structures and Algorithms, with each
answer detailed and at least 200 words:

✅ Set 1: Solved Questions (10 Marks Each)

Q1. Explain the difference between linear and non-linear data structures with examples.
Answer:

Data structures are categorized into two main types: linear and non-linear, based on the
way their elements are organized.

In linear data structures, data is organized sequentially or linearly, where each element is
connected to its previous and next element. They follow a straightforward memory allocation
and are easy to implement. Examples include arrays, stacks, queues, and linked lists.

Array is a static linear structure where data elements are stored in contiguous memory
locations.

Stack uses LIFO (Last In First Out) operations, while queue uses FIFO (First In First Out).

In non-linear data structures, elements are not placed in a sequential manner. Instead, they
are connected hierarchically or through complex relationships. Examples include trees and
graphs.

In a tree, elements are arranged in a parent-child hierarchy. A binary tree is a type where
each node has at most two children.

A graph consists of nodes (vertices) and edges and can be used to represent networks.

Key Differences:

Memory organization: Linear (sequential), Non-linear (hierarchical/graph-based)

Traversal: Linear is simpler; non-linear requires DFS/BFS.

Complexity: Non-linear structures are better for representing complex relationships.

Understanding these types helps in choosing the right data structure for a given problem,
enhancing efficiency and performance.

Q2. Describe stack and its applications. Write the operations performed on a stack.
Answer:

A stack is a linear data structure that follows the Last In First Out (LIFO) principle. It means
the last element inserted is the first to be removed. Conceptually, it works like a pile of books
where only the top book can be accessed or removed.

Basic Operations:
1. Push – Inserts an element on the top of the stack.

2. Pop – Removes the topmost element.

3. Peek/Top – Returns the top element without removing it.

4. isEmpty – Checks if the stack is empty.

5. isFull – Checks if the stack is full (in array-based stack).

Stacks can be implemented using arrays (static stack) or linked lists (dynamic stack).

Applications of Stack:
Expression Evaluation: Stack is used to evaluate and convert infix expressions to postfix
or prefix.

Function Call Management: The system stack stores function calls and local variables.

Undo Mechanisms: Applications like word processors use stacks to track user actions.

Parenthesis Matching: In compilers, stacks are used to validate expressions.

Depth First Search (DFS): DFS traversal of trees/graphs uses a stack.

Stacks offer a simple yet powerful way to manage data and control flow, especially when
tracking order of execution or nested operations.
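The five operations above can be sketched in C as an array-based stack (the names `Stack`, `push`, and `pop`, and the fixed capacity, are illustrative choices, not from the text):

```c
#define MAX 100  /* illustrative fixed capacity */

typedef struct {
    int items[MAX];
    int top;               /* index of the top element, -1 when empty */
} Stack;

void stack_init(Stack *s)       { s->top = -1; }
int  isEmpty(const Stack *s)    { return s->top == -1; }
int  isFull(const Stack *s)     { return s->top == MAX - 1; }

/* Push returns 1 on success, 0 on overflow. */
int push(Stack *s, int value) {
    if (isFull(s)) return 0;
    s->items[++s->top] = value;
    return 1;
}

/* Pop returns 1 on success and writes the removed element to *out. */
int pop(Stack *s, int *out) {
    if (isEmpty(s)) return 0;
    *out = s->items[s->top--];
    return 1;
}

/* Peek returns the top element without removing it (stack must be non-empty). */
int peek(const Stack *s) { return s->items[s->top]; }
```

An array-based version reports "full" via `isFull`; a linked-list version would never be full until memory runs out.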

Q3. Explain binary search with an example. Also, compare it with
linear search.
Answer:

Binary Search is an efficient algorithm for finding an element in a sorted array. It uses a
divide and conquer approach by repeatedly dividing the search space in half.

Steps:
1. Find the middle index of the array.

2. If the target equals the middle element, return its index.

3. If the target is less than the middle element, repeat the process on the left half.

4. If greater, repeat on the right half.

5. Repeat until the element is found or the range is empty.

Example:
Search for 25 in array: [10, 20, 25, 30, 40]

Middle = 25 → Match found at index 2.

Time Complexity:

Best: O(1)

Average/Worst: O(log n)

Linear Search, on the other hand, checks each element one by one.

Comparison:

| Feature          | Binary Search      | Linear Search       |
|------------------|--------------------|---------------------|
| Data Requirement | Sorted             | Any order           |
| Time Complexity  | O(log n)           | O(n)                |
| Efficiency       | High (for large n) | Low (for large n)   |
| Use Case         | Sorted datasets    | Small/unsorted data |

Binary search is preferred when dealing with sorted data due to its superior performance.
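The steps above can be sketched as an iterative C function (the name `binary_search` and the half-open `[lo, hi)` convention are our illustrative choices):

```c
#include <stddef.h>

/* Iterative binary search over a sorted int array.
   Returns the index of target, or -1 if it is absent. */
int binary_search(const int *arr, size_t n, int target) {
    size_t lo = 0, hi = n;                 /* search range is [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (arr[mid] == target) return (int)mid;
        if (arr[mid] < target)  lo = mid + 1;   /* discard left half  */
        else                    hi = mid;       /* discard right half */
    }
    return -1;
}
```

On the example array `[10, 20, 25, 30, 40]`, the first probe hits index 2 directly.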

Q4. What is a Binary Search Tree (BST)? Explain insertion and deletion
operations in BST.
Answer:

A Binary Search Tree (BST) is a type of binary tree where each node has a key, and:

Keys in the left subtree are less than the node's key.

Keys in the right subtree are greater than the node's key.

This property makes search operations efficient, usually taking O(log n) time in balanced
trees.

Insertion in BST:
1. Start at the root.

2. If the key is less than the current node, move to the left child.

3. If greater, move to the right child.

4. Repeat until a NULL spot is found, and insert the new node there.

Example:
Insert 30 into BST:

20
  \
   40

Since 30 > 20, move right.

30 < 40, so insert as left child of 40.

Deletion in BST:
Three cases:

1. Leaf node: Simply remove.

2. One child: Replace node with its child.

3. Two children: Find in-order successor (smallest in right subtree), copy its value, and
delete it recursively.

BSTs ensure ordered data storage, supporting efficient search, insert, and delete. However,
unbalanced BSTs can degrade to linked lists (O(n) operations), which is why self-balancing
BSTs like AVL or Red-Black Trees are preferred for critical systems.
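The insertion walk described above might be sketched in C as follows (a minimal version that ignores duplicate keys; `bst_insert` is an illustrative name):

```c
#include <stdlib.h>

struct BstNode {
    int key;
    struct BstNode *left, *right;
};

/* Recursive insertion: returns the (possibly new) root of the subtree. */
struct BstNode *bst_insert(struct BstNode *root, int key) {
    if (root == NULL) {                     /* found the NULL spot: insert here */
        struct BstNode *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)                    /* smaller keys go left */
        root->left = bst_insert(root->left, key);
    else if (key > root->key)               /* larger keys go right */
        root->right = bst_insert(root->right, key);
    return root;                            /* duplicates ignored in this sketch */
}
```

Inserting 20, 40, then 30 reproduces the example: 30 ends up as the left child of 40.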


Here is Set 2 of solved 10-mark questions for Data Structures and Algorithms. Each answer
is detailed and contains at least 200 words to match the academic requirements.

✅ Set 2: Solved Questions (10 Marks Each)

Q1. What is recursion? Explain tail recursion and how recursion can be
removed using a stack.
Answer:

Recursion is a programming technique where a function calls itself directly or indirectly to solve a problem. It is useful when a problem can be broken down into smaller subproblems of the same type. Each recursive call has its own execution context, stored in the call stack.

Key parts of recursion:


1. Base Case: Terminates recursion.

2. Recursive Case: Function calls itself with modified parameters.

Example:

int factorial(int n) {
    if (n == 0) return 1;
    return n * factorial(n - 1);
}

Tail Recursion:
A tail recursive function is one where the recursive call is the last operation before returning
a result.

Example:

int tailFact(int n, int result) {
    if (n == 0) return result;
    return tailFact(n - 1, n * result);
}

The initial call is tailFact(n, 1) — C has no default arguments, so the accumulator must be passed explicitly.

Tail recursion can be optimized by compilers into iteration, saving stack space.

Removing Recursion using Stack:
To convert recursion into iteration:

1. Use an explicit stack to simulate recursive calls.

2. Push parameters onto the stack as you go deeper.

3. Use loops to process each stack frame.

This approach is often used in tree traversals, DFS, and expression evaluation. By
managing your own stack, you avoid system-level call stack overflow and gain control over
execution.
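As a small illustration of these steps, the factorial above can be computed with an explicit stack instead of recursion (a sketch; the fixed stack size is an assumption that holds for any `n` whose factorial fits in an `int`):

```c
/* Iterative factorial using an explicit stack of pending multipliers,
   mirroring the recursive version above. */
int factorial_iter(int n) {
    int stack[64];              /* illustrative fixed-size explicit stack */
    int top = -1;
    while (n > 0)               /* "descend": push each parameter value */
        stack[++top] = n--;
    int result = 1;             /* base case: factorial(0) == 1 */
    while (top >= 0)            /* "return": unwind the simulated frames */
        result *= stack[top--];
    return result;
}
```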

In conclusion, recursion is a powerful but memory-intensive technique. Tail recursion and


stack-based transformations offer ways to optimize or replace it.

Q2. Differentiate between array and linked list. Explain the array and
pointer representation of linked list in C.
Answer:

Both arrays and linked lists are linear data structures but differ significantly in structure and
memory behavior.

Array:
Stores elements in contiguous memory locations.

Accessing elements via index is fast: O(1).

Size is fixed after declaration.

Insertion and deletion are costly due to shifting of elements: O(n).

Linked List:
Composed of nodes where each node contains data and a pointer to the next node.

Elements are stored in scattered memory locations.

No fixed size, dynamic memory allocation.

Efficient insertion/deletion at any point: O(1) at head/tail (with pointer).

Array Representation of Linked List:
Can be implemented by simulating links using array indices.

struct Node {
int data;
int next; // index of next element
};

This is often used in memory-constrained or embedded environments.

Pointer Representation in C:

struct Node {
int data;
struct Node* next;
};

Dynamic allocation using malloc() .

Easier to grow or shrink the list during runtime.

In conclusion, arrays are ideal for static and random-access-heavy tasks, while linked lists are
preferred for dynamic scenarios involving frequent insertions/deletions.

Q3. What is hashing? Explain collision and any two collision resolution
techniques.
Answer:

Hashing is a technique used to map data (keys) to a fixed-size table (hash table) using a
hash function. It enables fast insertion, deletion, and lookup in O(1) time on average.

Hash Function:
Converts key to an index within table bounds.

Example: index = key % table_size

Collision:
Occurs when two keys produce the same hash index.

Example:
For table size 10:

Key 12 → 12 % 10 = 2

Key 22 → 22 % 10 = 2 → collision!

Collisions are unavoidable due to the pigeonhole principle. Therefore, we use collision
resolution techniques.

1. Chaining:
Use linked lists at each index.

Multiple elements can exist at same index.

Advantages:

Handles multiple collisions gracefully.

Table can grow easily.

Drawback:

Extra memory needed for pointers.

2. Linear Probing (Open Addressing):
If collision occurs, move to the next index sequentially until an empty spot is found.

Example:
If index 5 is taken, try 6, 7, 8...

Advantages:

Simple and avoids pointers.

Drawbacks:

Clustering may occur.

Search time increases as table fills.

In conclusion, hashing is a powerful way to implement associative data structures, and
collision resolution is key to maintaining performance.
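Chaining can be sketched in C as follows (the table size matches the example above; `ht_insert` and `ht_contains` are illustrative names):

```c
#include <stdlib.h>

#define TABLE_SIZE 10   /* small table so collisions are easy to see */

struct Entry {
    int key;
    struct Entry *next;          /* chain of keys hashing to the same slot */
};

struct Entry *table[TABLE_SIZE]; /* globals are zero-initialized: all chains empty */

static unsigned hash(int key) { return (unsigned)key % TABLE_SIZE; }

/* Insert at the head of the slot's chain — O(1). */
void ht_insert(int key) {
    unsigned i = hash(key);
    struct Entry *e = malloc(sizeof *e);
    e->key = key;
    e->next = table[i];
    table[i] = e;
}

/* Returns 1 if key is present, 0 otherwise; O(chain length). */
int ht_contains(int key) {
    for (struct Entry *e = table[hash(key)]; e; e = e->next)
        if (e->key == key) return 1;
    return 0;
}
```

With this table, the keys 12 and 22 from the example both land in slot 2 and coexist on one chain.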

Q4. Explain AVL tree. How does it maintain balance? Mention different types of rotations with examples.
Answer:

An AVL Tree is a self-balancing Binary Search Tree (BST) where the height difference
(balance factor) between the left and right subtree of any node is at most 1.

Balance Factor:
BalanceFactor = height(left subtree) – height(right subtree)

Valid AVL range: –1, 0, 1

If the balance factor goes outside this range, rotations are performed to restore
balance.

Types of Rotations:

1. Right Rotation (LL Rotation):

Occurs when a node is inserted into the left subtree of the left child.

      z              y
     /              / \
    y       →      x   z
   /
  x

2. Left Rotation (RR Rotation):

Occurs when a node is inserted into the right subtree of the right child.

  z                  y
   \                / \
    y       →      z   x
     \
      x

3. Left-Right Rotation (LR Rotation):

Occurs when a node is inserted into the right subtree of the left child.

    z             z              x
   /             /              / \
  y       →     x        →     y   z
   \           /
    x         y

4. Right-Left Rotation (RL Rotation):

Occurs when a node is inserted into the left subtree of the right child.

AVL Trees maintain O(log n) time for search, insert, and delete by rebalancing the tree after
operations. This ensures that the performance never degrades like in unbalanced BSTs.
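The LL case above can be sketched as a right rotation in C (a fragment, not a full AVL implementation; the struct layout and names are assumptions):

```c
struct AvlNode {
    int key, height;
    struct AvlNode *left, *right;
};

static int height(struct AvlNode *n) { return n ? n->height : 0; }
static int imax(int a, int b)        { return a > b ? a : b; }

/* Right rotation for the LL case: z's left child y becomes the new
   subtree root; y's old right subtree becomes z's left subtree. */
struct AvlNode *rotate_right(struct AvlNode *z) {
    struct AvlNode *y = z->left;
    z->left  = y->right;
    y->right = z;
    /* recompute heights bottom-up: z first, then y */
    z->height = 1 + imax(height(z->left), height(z->right));
    y->height = 1 + imax(height(y->left), height(y->right));
    return y;    /* new root of this subtree */
}
```

The RR case is the mirror image (`rotate_left`), and LR/RL are a rotation on the child followed by one on the node itself.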


Here is Set 3 of solved 10-mark questions for Data Structures and Algorithms, with detailed
answers (minimum 200 words each) for thorough understanding.

✅ Set 3: Solved Questions (10 Marks Each)

Q1. What are graphs? Explain different ways to represent graphs in memory.
Answer:

A graph is a non-linear data structure made up of a set of vertices (nodes) and edges
(connections) between them. Graphs are used to represent networks like social networks,
transportation systems, and communication models.

There are two main types of graphs:

Directed Graph (Digraph): Edges have direction (e.g., one-way roads).

Undirected Graph: Edges have no direction (e.g., mutual friendship).


Graphs can also be weighted (edges carry weights) or unweighted.

Graph Representations:

1. Adjacency Matrix:

A 2D array of size V × V (V = number of vertices).

If there’s an edge from vertex i to j, matrix[i][j] = 1 (or the weight of the edge).

For undirected graphs, the matrix is symmetric.

Pros:

Simple and fast access to edge presence: O(1)

Good for dense graphs

Cons:

Consumes O(V²) space even if few edges

2. Adjacency List:

Uses an array of linked lists or vectors.

Each index stores a list of all adjacent vertices.

Pros:

Efficient space usage for sparse graphs: O(V + E)

Easy to iterate through neighbors

Cons:

Slower to check if a specific edge exists

3. Incidence Matrix:

Rows represent vertices; columns represent edges.

Entry is 1 if vertex is part of an edge.

Less commonly used, but useful in certain mathematical applications.

Graphs provide powerful tools for modeling complex relationships. The choice of
representation depends on the size and nature of the graph (dense vs. sparse).
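A minimal adjacency-list sketch in C for an undirected graph (the fixed vertex count and the names `add_edge`/`has_edge` are illustrative):

```c
#include <stdlib.h>

#define V 4   /* number of vertices in this small example */

struct AdjNode {
    int vertex;
    struct AdjNode *next;
};

struct AdjNode *adj[V];   /* adj[i] = head of vertex i's neighbor list */

/* Add an undirected edge u–v by inserting into both lists. */
void add_edge(int u, int v) {
    struct AdjNode *a = malloc(sizeof *a);
    a->vertex = v; a->next = adj[u]; adj[u] = a;
    struct AdjNode *b = malloc(sizeof *b);
    b->vertex = u; b->next = adj[v]; adj[v] = b;
}

/* O(degree) membership test — the trade-off vs. the O(1) matrix lookup. */
int has_edge(int u, int v) {
    for (struct AdjNode *n = adj[u]; n; n = n->next)
        if (n->vertex == v) return 1;
    return 0;
}
```

Total space here is O(V + E), whereas the matrix version would always allocate V² cells.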

Q2. Explain Breadth First Search (BFS) and Depth First Search (DFS)
traversal of a graph with examples.
Answer:

BFS and DFS are fundamental algorithms used for traversing or searching graph data
structures.

Breadth First Search (BFS):

Explores all vertices at the current depth before moving to the next level.

Implemented using a queue.

Steps:

1. Start from the source vertex.

2. Visit and enqueue all its adjacent unvisited vertices.

3. Dequeue and repeat until all reachable nodes are visited.

Example:
For graph:

A — B — C
|   |
D — E

BFS from A: A, B, D, C, E

Time Complexity: O(V + E)

Use Cases:

Finding shortest path in unweighted graphs

Level-order traversal in trees

Depth First Search (DFS):
Explores as far as possible along a branch before backtracking.

Implemented using stack (explicit or recursion).

Steps:

1. Visit the starting vertex.

2. Recursively visit all unvisited neighbors.

3. Backtrack when no unvisited neighbors remain.

Example (from A): A, B, C, E, D

Time Complexity: O(V + E)

Use Cases:

Cycle detection

Topological sort

Connected components

Both BFS and DFS are core techniques in graph theory with widespread applications in AI,
networking, and compilers.
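The BFS steps above can be sketched over an adjacency matrix (a simplified fixed-size version; mapping the example's vertices A–E to indices 0–4 is our assumption):

```c
#define N 5   /* vertices A..E mapped to indices 0..4 */

/* BFS over an adjacency matrix; records visit order into out[]
   and returns the number of vertices reached. */
int bfs(int g[N][N], int start, int out[N]) {
    int visited[N] = {0};
    int queue[N], front = 0, rear = 0, count = 0;
    queue[rear++] = start;             /* enqueue the source */
    visited[start] = 1;
    while (front < rear) {
        int u = queue[front++];        /* dequeue */
        out[count++] = u;
        for (int v = 0; v < N; v++)    /* enqueue unvisited neighbors */
            if (g[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
    }
    return count;
}
```

On the example graph (edges A–B, B–C, A–D, B–E, D–E), BFS from A visits A, B, D, C, E as stated above.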

Q3. Write an algorithm for insertion and deletion in a circular queue. How is it better than a linear queue?
Answer:

A circular queue is a linear data structure in which the last position is connected back to the
first to make a circle. It overcomes the limitation of the linear queue’s wasted space once rear
reaches the end.

Insertion (Enqueue):

if ((rear + 1) % size == front)
    print("Queue is Full");
else {
    rear = (rear + 1) % size;
    queue[rear] = value;
    if (front == -1)
        front = 0;
}

Deletion (Dequeue):

if (front == -1)
    print("Queue is Empty");
else {
    value = queue[front];
    if (front == rear)
        front = rear = -1;
    else
        front = (front + 1) % size;
}

Advantages of Circular Queue over Linear Queue:


Efficient use of space: In linear queues, once the rear reaches the end, the queue
appears full even if there is space at the beginning.

Wrap-around behavior: the rear index wraps to the start of the array, reusing slots freed at the front and reducing wastage.

Fixed-size buffer implementation: Used in CPU scheduling, memory management, and


network buffers.

Circular queues are highly useful in real-time systems where memory is constrained and
cyclic usage is optimal.

Q4. Describe the working of Kruskal’s algorithm for Minimum Spanning Tree with an example.
Answer:

Kruskal’s Algorithm is a greedy algorithm used to find a Minimum Spanning Tree (MST) in
a connected, weighted graph. It works by sorting all edges and selecting the smallest edge
that doesn’t form a cycle.

Steps of Kruskal’s Algorithm:

1. Sort all edges in ascending order of weights.

2. Initialize an empty MST.

3. Use Disjoint Set Union (DSU) or Union-Find to check for cycles.

4. Add edge to MST if it doesn’t form a cycle.

5. Repeat until MST has V – 1 edges.

Example:
Graph:


A — B (4)
A — C (2)
B — C (5)
B — D (10)
C — D (3)

Step 1: Sort edges: (A-C:2), (C-D:3), (A-B:4), (B-C:5), (B-D:10)

Step 2: Add A-C → OK


Step 3: Add C-D → OK
Step 4: Add A-B → OK
Step 5: Skip B-C (forms a cycle)
Step 6: Skip B-D (forms a cycle; the MST already has V – 1 = 3 edges)

Resulting MST edges: A-C, C-D, A-B


Total weight: 2 + 3 + 4 = 9

Time Complexity:
O(E log E) for sorting

Union-Find operations: O(E α(V)) ≈ linear

Kruskal’s algorithm is best suited for sparse graphs and is widely used in networking and
design problems.
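The cycle check in step 3 can be sketched with a minimal Union-Find (path compression only; union by rank is omitted for brevity, and the vertex-to-index mapping A=0…D=3 is our assumption):

```c
/* Minimal Disjoint Set Union (Union-Find) as used by Kruskal's
   algorithm to reject edges that would close a cycle. */
int parent[100];

void dsu_init(int n) { for (int i = 0; i < n; i++) parent[i] = i; }

int find(int x) {
    if (parent[x] != x)
        parent[x] = find(parent[x]);   /* path compression */
    return parent[x];
}

/* Returns 1 if edge (a, b) is accepted (joins two components),
   0 if a and b are already connected and the edge would form a cycle. */
int union_edge(int a, int b) {
    int ra = find(a), rb = find(b);
    if (ra == rb) return 0;
    parent[ra] = rb;
    return 1;
}
```

Replaying the sorted edges of the example accepts A-C, C-D, A-B and rejects B-C and B-D, exactly as in the steps above.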


Here is Set 4 of solved 10-mark questions for Data Structures and Algorithms, with each
answer being at least 200 words long for depth and clarity.

✅ Set 4: Solved Questions (10 Marks Each)

Q1. Explain the working of Merge Sort with an example. Also, write its
time and space complexity.
Answer:

Merge Sort is a classic Divide and Conquer algorithm that divides the input array into
smaller subarrays, sorts them individually, and merges them to produce the final sorted
array. It is particularly efficient for large datasets.

Steps of Merge Sort:


1. Divide the array into two halves.

2. Conquer by recursively sorting each half.

3. Combine the two sorted halves by merging.

Example:
Given array: [38, 27, 43, 3, 9, 82, 10]

Divide: [38, 27, 43] and [3, 9, 82, 10]

Recursively divide until single elements: [38] [27] [43] [3] [9] [82] [10]

Merge step-by-step:

[27, 38] , [27, 38, 43]

[3, 9] , [10, 82] , [3, 9, 10, 82]

Final merge: [3, 9, 10, 27, 38, 43, 82]

Time Complexity:
Best, Average, Worst: O(n log n)
(Each division halves the array ⇒ log n divisions; merging takes O(n) at each level)

Space Complexity:
O(n): Extra space needed for temporary arrays during merge.

Advantages:
Stable sort (preserves order of equal elements)

Guarantees O(n log n) even in worst case

Use Cases:

Sorting linked lists

External sorting (with files/data too large for RAM)

Merge Sort is preferred when consistent performance is needed regardless of input characteristics.
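The divide/conquer/combine steps might look like this in C (a sketch with a fixed scratch buffer; `merge_sort` and the half-open ranges are illustrative choices):

```c
#include <string.h>

/* Top-down merge sort on arr[lo..hi), using tmp as scratch space. */
static void merge_sort_rec(int *arr, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;                 /* 0 or 1 element: already sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort_rec(arr, tmp, lo, mid);       /* divide + conquer left half  */
    merge_sort_rec(arr, tmp, mid, hi);       /* divide + conquer right half */
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)                /* combine: stable merge (<=) */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i < mid) tmp[k++] = arr[i++];
    while (j < hi)  tmp[k++] = arr[j++];
    memcpy(arr + lo, tmp + lo, (size_t)(hi - lo) * sizeof *arr);
}

void merge_sort(int *arr, int n) {
    int tmp[64];                 /* illustrative fixed scratch buffer */
    merge_sort_rec(arr, tmp, 0, n);
}
```

The `<=` in the merge is what makes the sort stable: on ties, the left half's element is taken first.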

Q2. Differentiate between BFS and DFS. Write use cases for both.
Answer:

Both BFS (Breadth First Search) and DFS (Depth First Search) are graph traversal
algorithms. While they serve the same goal — visiting all nodes in a graph — their behavior,
use cases, and implementations differ significantly.

BFS (Breadth First Search):


Visits nodes level by level.

Uses a queue (FIFO).

Suitable for finding shortest paths in unweighted graphs.

Time Complexity: O(V + E)

Example:
In graph: A — B — C — D
BFS starting from A: A → B → C → D

DFS (Depth First Search):


Explores as deep as possible along each branch before backtracking.

Uses a stack (explicit or recursive).

Suitable for cycle detection, topological sort, and maze solving.

Time Complexity: O(V + E)

Example:
DFS from A might go: A → B → D → C

Comparison Table:

| Feature        | BFS           | DFS                           |
|----------------|---------------|-------------------------------|
| Data Structure | Queue         | Stack / Recursion             |
| Traversal      | Level-wise    | Depth-wise                    |
| Path Finding   | Shortest path | Not guaranteed                |
| Memory Usage   | Higher        | Lower (on sparse graphs)      |
| Use Cases      | GPS, routing  | Solving puzzles, backtracking |

BFS and DFS are fundamental for solving a wide range of problems in computer science,
from networking to game development.

Q3. Explain the concept of Sparse Matrix. How can it be efficiently represented in memory?
Answer:

A sparse matrix is a matrix in which most of the elements are zero. They frequently arise in
applications such as scientific computing, image processing, and machine learning (e.g.,
adjacency matrices of large graphs).

Example:


[0 0 3]
[0 0 0]
[4 0 0]

Only 2 non-zero elements out of 9 → sparse.

Why not store it as a normal 2D array?


Waste of memory for large matrices.

Time inefficiency when performing operations over many zero elements.

Efficient Representations:

1. Triplet Representation (Coordinate List):

Store only non-zero values along with their row and column indices.

Format:

[row, column, value]

Example:

[0, 2, 3]
[2, 0, 4]

2. Compressed Sparse Row (CSR):

Three arrays:

values[] : non-zero values

col_index[] : column indices of values

row_ptr[] : index in values[] where each row starts

Efficient for row-wise matrix operations.

3. Linked List Representation:

Each node holds value, row, column, and pointer to next node.

Useful for dynamic sparse matrices.

Benefits:
Saves space

Faster matrix operations

Allows efficient storage in memory-limited environments

Sparse matrix representation is crucial for optimizing performance in real-world systems dealing with large datasets and graphs.
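The triplet (coordinate-list) representation can be sketched as a dense-to-triplet conversion (the 3×3 size matches the example above; `to_triplets` is an illustrative name):

```c
#define ROWS 3
#define COLS 3

struct Triplet { int row, col, value; };

/* Convert a dense matrix to triplet form in row-major scan order.
   Returns the number of non-zero entries written to out[]. */
int to_triplets(int m[ROWS][COLS], struct Triplet out[]) {
    int k = 0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            if (m[i][j] != 0) {
                out[k].row   = i;
                out[k].col   = j;
                out[k].value = m[i][j];
                k++;
            }
    return k;                     /* only non-zeros are stored */
}
```

The example matrix collapses from 9 stored cells to the 2 triplets `[0,2,3]` and `[2,0,4]`.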

Q4. Explain the working of Dijkstra’s algorithm with an example.
Answer:

Dijkstra’s Algorithm finds the shortest path from a source vertex to all other vertices in a
weighted graph with non-negative edges. It’s widely used in networking (e.g., routing
protocols), navigation systems, and AI.

Steps:
1. Set all distances to ∞, source distance = 0.

2. Add source to a priority queue (min-heap).

3. While queue is not empty:

Extract node with smallest distance (u).

For all neighbors (v) of u:

If dist[v] > dist[u] + weight(u,v) , update dist[v] .

Example:
Graph (edges undirected, weights shown):

A --5-- B
A --2-- C
B --1-- C
C --3-- D

Start from A:

A: 0

C: 2 (via A)

B: 3 (A→C→B is shorter than A→B directly)

D: 5 (A→C→D)

Final shortest distances:

A: 0

B: 3

C: 2

D: 5

Time Complexity:
Using simple array: O(V²)

Using min-heap: O((V + E) log V)

Limitations:
Doesn’t work with negative weights (use Bellman-Ford instead)

Dijkstra’s algorithm is efficient, widely implemented, and essential for optimal routing and
decision-making tasks.
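The relaxation steps can be sketched with the simple O(V²) array version (no heap; mapping the example's vertices A–D to indices 0–3 is our assumption):

```c
#include <limits.h>

#define N 4   /* vertices A, B, C, D mapped to 0..3 */

/* O(V^2) Dijkstra over an adjacency matrix (0 = no edge).
   Writes shortest distances from src into dist[]. */
void dijkstra(int g[N][N], int src, int dist[N]) {
    int done[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INT_MAX;
    dist[src] = 0;
    for (int iter = 0; iter < N; iter++) {
        int u = -1;
        for (int i = 0; i < N; i++)        /* pick closest unfinished vertex */
            if (!done[i] && (u == -1 || dist[i] < dist[u])) u = i;
        if (dist[u] == INT_MAX) break;     /* remaining vertices unreachable */
        done[u] = 1;
        for (int v = 0; v < N; v++)        /* relax every edge out of u */
            if (g[u][v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
}
```

On the example graph this reproduces the distances above: A=0, B=3 (via C), C=2, D=5.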


Here is Set 5 of solved 10-mark questions for Data Structures and Algorithms, with each answer detailed and at least 200 words.

✅ Set 5: Solved Questions (10 Marks Each)

Q1. What are Abstract Data Types (ADT)? Explain with examples.
Answer:

An Abstract Data Type (ADT) is a theoretical concept in computer science used to define a
data structure logically rather than physically. It defines the behavior (operations) of the
data type and hides the implementation details from the user.

Characteristics of ADT:
Emphasis is on what operations are performed, not how.

Helps in modular programming and encapsulation.

Implementation can vary, but interface remains the same.

Common ADTs and their Operations:

1. Stack ADT:

Operations: push(x) , pop() , peek() , isEmpty()

Can be implemented using arrays or linked lists.

2. Queue ADT:

Operations: enqueue(x) , dequeue() , isFull() , isEmpty()

3. List ADT:

Operations: insert(i, x) , delete(i) , get(i) , length()

4. Graph ADT:

Operations: addVertex(v) , addEdge(v, w) , getNeighbors(v)

Advantages of ADTs:
Abstraction: Users need not worry about internal structure.

Flexibility: Implementation can change without affecting users.

Reusability: ADTs can be used across multiple programs.

In short, ADTs form the backbone of algorithm design and data structure usage. They allow
programmers to focus on problem-solving without worrying about implementation
complexities.

Q2. Explain the difference between selection sort and bubble sort.
Write their algorithm and time complexity.
Answer:

Selection Sort and Bubble Sort are simple comparison-based sorting algorithms, suitable
for small datasets. Though easy to implement, they are inefficient for large data due to their
O(n²) time complexity.

Selection Sort:
Finds the minimum element in the unsorted part and places it at the beginning.

Repeats for the remaining array.

Algorithm:

for i = 0 to n-1:
min = i
for j = i+1 to n:
if arr[j] < arr[min]:
min = j
swap(arr[i], arr[min])

Time Complexity: O(n²) for all cases
Space Complexity: O(1)

Bubble Sort:
Repeatedly swaps adjacent elements if they are in the wrong order.

Larger elements "bubble up" to the end.

Algorithm:

for i = 0 to n-1:
for j = 0 to n-i-1:
if arr[j] > arr[j+1]:
swap(arr[j], arr[j+1])

Time Complexity:

Best Case (already sorted, with an early-exit swapped flag): O(n)

Worst & Average: O(n²)

Space Complexity: O(1)

Differences:

| Feature   | Selection Sort      | Bubble Sort                     |
|-----------|---------------------|---------------------------------|
| Swaps     | Few (at most n – 1) | Many                            |
| Best Case | O(n²)               | O(n)                            |
| Stable    | No                  | Yes                             |
| Use Case  | Fewer writes needed | Better for nearly sorted arrays |

While both are educational, they are rarely used in production compared to efficient
algorithms like Merge Sort or Quick Sort.
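Both algorithms, as pseudocoded above, might be written in C like this (the early-exit `swapped` flag in bubble sort is what gives it the O(n) best case):

```c
/* Selection sort: O(n²) comparisons, at most n-1 swaps. */
void selection_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)    /* find minimum of unsorted part */
            if (a[j] < a[min]) min = j;
        int t = a[i]; a[i] = a[min]; a[min] = t;
    }
}

/* Bubble sort with an early-exit flag: a pass with no swaps means sorted. */
void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j + 1]) {         /* adjacent out of order: swap */
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = 1;
            }
        if (!swapped) break;               /* already sorted: stop early */
    }
}
```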

Q3. Describe how polynomial operations can be performed using linked lists.
Answer:

Polynomials can be represented and manipulated using linked lists, which provide dynamic
memory allocation and easy insertion/deletion of terms. Each node of the list represents a
term with a coefficient and exponent.

Node Structure:

struct Node {
int coeff;
int exp;
struct Node* next;
};

Polynomial Representation:
For polynomial: 5x^3 + 4x^2 + 2

Linked list:

[5,3] → [4,2] → [2,0]

Terms are usually stored in descending order of exponents.

Polynomial Addition:
Traverse both lists.

Compare exponents:

If equal: add coefficients and create a new term.

If one exponent is larger, copy that term.

Continue until both lists are traversed.

Polynomial Multiplication:
For each term in the first list:

Multiply with every term in the second list.

Insert result into result list using insertOrUpdate() to combine like terms.

Advantages:
Dynamic size — no need to declare array size.

Efficient addition/deletion of terms.

Cleaner representation compared to array (which wastes space for zero coefficients).

Linked lists are ideal for polynomial manipulation in symbolic computations and algebraic
expression handling.
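The addition procedure above can be sketched in C (assuming both lists are kept in descending exponent order; `poly_add` and the `add_term` helper are illustrative names):

```c
#include <stdlib.h>

struct Term { int coeff, exp; struct Term *next; };

/* Append a term to the result list; returns the new tail. */
static struct Term *add_term(struct Term **head, struct Term *tail,
                             int coeff, int exp) {
    struct Term *t = malloc(sizeof *t);
    t->coeff = coeff; t->exp = exp; t->next = NULL;
    if (tail) tail->next = t; else *head = t;
    return t;
}

/* Add two polynomials stored in descending order of exponent. */
struct Term *poly_add(struct Term *a, struct Term *b) {
    struct Term *head = NULL, *tail = NULL;
    while (a && b) {
        if (a->exp == b->exp) {            /* equal exponents: add coefficients */
            tail = add_term(&head, tail, a->coeff + b->coeff, a->exp);
            a = a->next; b = b->next;
        } else if (a->exp > b->exp) {      /* copy the larger-exponent term */
            tail = add_term(&head, tail, a->coeff, a->exp);
            a = a->next;
        } else {
            tail = add_term(&head, tail, b->coeff, b->exp);
            b = b->next;
        }
    }
    for (; a; a = a->next) tail = add_term(&head, tail, a->coeff, a->exp);
    for (; b; b = b->next) tail = add_term(&head, tail, b->coeff, b->exp);
    return head;
}
```

Adding `5x^3 + 4x^2 + 2` and `3x^2 + 1` yields `5x^3 + 7x^2 + 3`.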

Q4. Explain the concept of priority queue. How does it differ from a
normal queue? Give applications.
Answer:

A priority queue is an abstract data type in which each element is associated with a priority,
and the element with the highest priority is served before others.

Unlike a normal queue (FIFO), the priority queue does not necessarily process elements in
the order they arrive.

Types:
Ascending Priority Queue: Lower numbers have higher priority.

Descending Priority Queue: Higher numbers have higher priority.

Implementations:
1. Array/Linked List (unsorted or sorted)

2. Binary Heap (most efficient, O(log n))

3. Fibonacci Heap (for advanced usage)

Operations:
insert(element, priority)

extractMin() or extractMax()

peek() — returns the highest-priority element

Differences from Normal Queue:

| Feature        | Queue          | Priority Queue      |
|----------------|----------------|---------------------|
| Order          | FIFO           | Based on priority   |
| Insertion Time | O(1)           | O(log n) (in heaps) |
| Use Case       | CPU scheduling | Task scheduling     |

Applications:
CPU scheduling (highest priority job runs first)

Dijkstra’s algorithm (min-heap based PQ)

Bandwidth management (higher priority packets sent first)

Printer job queue management

Priority queues offer more control in situations where urgency or importance determines
processing order.
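The binary-heap implementation mentioned above might be sketched as a min-heap (smaller value = higher priority; the fixed capacity and names are illustrative simplifications):

```c
/* Minimal array-based min-heap priority queue: insert and
   extract-min are both O(log n). */
int heap[100];
int heap_size = 0;

void pq_insert(int x) {
    int i = heap_size++;
    heap[i] = x;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {   /* sift up */
        int p = (i - 1) / 2;
        int t = heap[p]; heap[p] = heap[i]; heap[i] = t;
        i = p;
    }
}

int pq_extract_min(void) {       /* caller must ensure heap_size > 0 */
    int min = heap[0];
    heap[0] = heap[--heap_size]; /* move last element to the root */
    int i = 0;
    for (;;) {                   /* sift down to restore heap order */
        int l = 2 * i + 1, r = 2 * i + 2, s = i;
        if (l < heap_size && heap[l] < heap[s]) s = l;
        if (r < heap_size && heap[r] < heap[s]) s = r;
        if (s == i) break;
        int t = heap[s]; heap[s] = heap[i]; heap[i] = t;
        i = s;
    }
    return min;
}
```

This is the same structure Dijkstra's algorithm uses when implemented with a min-heap PQ.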


Here is Set 6 of solved 10-mark questions for Data Structures and Algorithms, with each
answer being 200+ words and written for conceptual clarity.

✅ Set 6: Solved Questions (10 Marks Each)
Q1. What is a threaded binary tree? Explain its types and advantages.
Answer:

A Threaded Binary Tree is a variant of the binary tree where NULL pointers in left or right
child references are replaced with pointers (threads) to the in-order predecessor or
successor, improving the traversal efficiency without using stacks or recursion.

In a normal binary tree, approximately half of the child pointers are NULL . These unused
pointers can be used to link nodes in a specific traversal order.

Types of Threaded Binary Trees:


1. Single Threaded:

Left or right NULL pointer is replaced with a thread.

Can be:

Left-threaded: Left pointers to in-order predecessors.

Right-threaded: Right pointers to in-order successors.

2. Double Threaded:

Both left and right NULL pointers are replaced with threads.

Advantages:
Faster traversal (especially in-order) without stacks or recursion.

Saves memory as it avoids additional stack space.

Simplifies tree traversal logic.

In-order Traversal Example:
Using threads, traversal becomes a simple loop:
Node* current = leftmost(root);
while (current) {
print(current->data);
if (current->rthread)
current = current->right;
else
current = leftmost(current->right);
}

Threaded binary trees are useful in scenarios where memory efficiency and fast traversal are
required, such as expression trees and symbolic computations.

Q2. Explain Quick Sort algorithm with an example. Mention its time
and space complexity.
Answer:

Quick Sort is a highly efficient divide and conquer algorithm used for sorting. It is widely
used because of its superior performance in the average case and its in-place sorting
property (i.e., it requires no extra space like Merge Sort).

Algorithm:
1. Choose a pivot element.

2. Partition the array into two halves:

Left: elements less than pivot

Right: elements greater than pivot

3. Recursively apply quick sort to both halves.

Example:
Array: [29, 10, 14, 37, 13]

Pivot = 29

After partition: [10, 14, 13] [29] [37]

Sort left and right recursively

Final: [10, 13, 14, 29, 37]

Time Complexity:
Best & Average: O(n log n)

Worst: O(n²) (e.g., when the array is already sorted and the first or last element is always chosen as pivot)

Space Complexity: O(log n) due to recursion stack

Advantages:
Very fast for large datasets

In-place sorting (no extra memory)

Simple implementation with random pivot selection

Despite the worst-case complexity, quick sort is one of the fastest sorting techniques in
practice when implemented with optimizations like random or median-of-three pivot
selection.
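The partition step can be sketched with the Lomuto scheme (last element as pivot; this is one common choice among several, not necessarily the one intended by the text):

```c
/* Lomuto partition: last element as pivot; returns the pivot's final index. */
static int partition(int *a, int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) {                /* move smaller elements left */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    int t = a[i]; a[i] = a[hi]; a[hi] = t; /* place pivot between halves */
    return i;
}

void quick_sort(int *a, int lo, int hi) {
    if (lo >= hi) return;                  /* 0 or 1 element: done */
    int p = partition(a, lo, hi);
    quick_sort(a, lo, p - 1);              /* left of pivot  */
    quick_sort(a, p + 1, hi);              /* right of pivot */
}
```

Swapping a random element into position `hi` before partitioning gives the randomized-pivot variant mentioned above.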

Q3. Explain the concept of Strassen’s matrix multiplication. How is it better than the conventional method?
Answer:

Strassen’s Matrix Multiplication Algorithm is a divide and conquer approach to multiply two square matrices faster than the conventional method.
Conventional Matrix Multiplication:
Time Complexity: O(n³)

Each element of the result matrix is computed by taking dot product of rows and
columns.

Strassen’s Approach:
Reduces the number of multiplications using mathematical rearrangement.

Given two matrices A and B of size n × n:

1. Divide both A and B into 4 submatrices of size n/2 × n/2.

2. Compute 7 matrix products (instead of 8 in the traditional method):

P1 = (A11 + A22)(B11 + B22)

P2 = (A21 + A22)B11

...

P7 = (A11 - A21)(B11 + B12)

3. Use these to compute the resulting submatrices of C.

Time Complexity:
T(n) = 7T(n/2) + O(n²)

Solves to O(n^log2 7) ≈ O(n^2.81)
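The recurrence can be verified with the master theorem: there are a = 7 subproblems of size n/2 (b = 2), and the combine step costs O(n²). Since n^{log_b a} dominates the combine cost, case 1 applies:

```latex
T(n) = 7\,T\!\left(\frac{n}{2}\right) + O(n^2),
\qquad
n^{\log_b a} = n^{\log_2 7} \approx n^{2.807} \;\text{grows faster than}\; n^2
\;\Longrightarrow\;
T(n) = \Theta\!\left(n^{\log_2 7}\right) \approx O(n^{2.81})
```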

Advantages:
Faster than standard O(n³) for large matrices.

Especially useful in parallel computing.

Limitations:
Overhead for small matrices.

Not numerically stable for all applications.

Strassen’s algorithm is a significant improvement over brute-force multiplication, especially
in graphics and scientific computing.

Q4. What is the Longest Common Subsequence (LCS) problem? Solve it
using dynamic programming.
Answer:

The Longest Common Subsequence (LCS) problem is a classic dynamic programming
problem that aims to find the longest sequence that appears in the same order (but not
necessarily contiguously) in two given sequences.

Example:
Let X = “AGGTAB” and Y = “GXTXAYB”

LCS = “GTAB” (length = 4)

Dynamic Programming Approach:


We use a 2D table dp[m+1][n+1] where:

m = length of X

n = length of Y

Recurrence Relation:

if (X[i-1] == Y[j-1])
    dp[i][j] = 1 + dp[i-1][j-1];
else
    dp[i][j] = max(dp[i-1][j], dp[i][j-1]);

Algorithm:

for i = 0 to m:
    for j = 0 to n:
        if (i == 0 || j == 0)
            dp[i][j] = 0;
        else if (X[i-1] == Y[j-1])
            dp[i][j] = 1 + dp[i-1][j-1];
        else
            dp[i][j] = max(dp[i-1][j], dp[i][j-1]);

Time and Space Complexity:


Time: O(m × n)

Space: O(m × n)

Applications:
Diff tools (e.g., Git)

DNA sequence alignment

Spell checkers

LCS is a key problem for understanding dynamic programming, and its efficient solution has
real-world applications in bioinformatics and version control.


Here is Set 7 of solved 10-mark questions for Data Structures and Algorithms, each with a
detailed explanation of at least 200 words.

✅ Set 7: Solved Questions (10 Marks Each)

Q1. What is a B-Tree? Explain its properties and insertion process.


Answer:

A B-Tree is a self-balancing multi-way search tree used primarily in databases and file
systems. Unlike binary trees, where each node has at most two children, B-Trees allow nodes
to have multiple keys and children, making them ideal for systems that read/write large
blocks of data.

Properties of B-Tree (of order m):


1. All leaves are at the same level.

2. Every internal node (except the root) has at least ⌈m/2⌉ children.

3. Each internal node can contain at most m–1 keys and m children.

4. Keys are stored in sorted order within each node.

5. The tree remains balanced through node splitting.

Insertion Process:
1. Insert the key in the correct leaf node.

2. If the leaf has space, insert directly.

3. If it’s full, split the node:

Middle key moves up to parent.

Two child nodes are created from remaining keys.

4. If the parent is full, repeat the splitting process recursively up to the root.

5. If the root is split, the tree’s height increases.

Example:
Inserting keys 10, 20, 5, 6, 12, 30 into a B-Tree of order 3 results in balanced nodes where
each node can hold 2 keys max, and splits occur as needed.

Use Cases:
File systems (e.g., NTFS, HFS+)

Database indexing (MySQL, Oracle)

B-Trees minimize disk I/O operations and are optimized for systems with large data sets and
block storage.

Q2. Differentiate between Prim’s and Kruskal’s algorithm. Mention
their use and complexity.
Answer:

Both Prim’s and Kruskal’s algorithms are used to find the Minimum Spanning Tree (MST) of
a connected, weighted, undirected graph, but they differ in approach.

Prim’s Algorithm:
Start from a vertex.

Grow the MST by adding the cheapest edge from the tree to a non-tree vertex.

Uses priority queue (min-heap) for efficient edge selection.

Works well for dense graphs.

Time Complexity:

Using Min-Heap + Adjacency List: O(E log V)

Kruskal’s Algorithm:
Sort all edges by weight.

Pick the smallest edge and add it to MST if it doesn’t form a cycle.

Uses Disjoint Set (Union-Find) to detect cycles.

Works well for sparse graphs.

Time Complexity:

O(E log E), mainly due to edge sorting

Comparison Table:

Feature        | Prim’s Algorithm  | Kruskal’s Algorithm
Approach       | Vertex-based      | Edge-based
Data Structure | Priority Queue    | Union-Find (DSU)
Graph Type     | Dense graphs      | Sparse graphs
Cycles         | No need to check  | Must check using DSU

Use Cases:
Prim’s: Network design (like LAN cables)

Kruskal’s: Road system designs where connections vary in length

Both algorithms are foundational in graph theory and real-world network optimization
problems.

Q3. Write the algorithm for binary search. Compare it with linear
search in terms of time complexity.
Answer:

Binary Search is an efficient algorithm to find an element in a sorted array by repeatedly
dividing the search interval in half.

Binary Search Algorithm:


int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // avoids the integer overflow of (low + high) / 2
        if (arr[mid] == key)
            return mid;
        else if (key < arr[mid])
            high = mid - 1;
        else
            low = mid + 1;
    }
    return -1; // Key not found
}

Linear Search:
Scans each element sequentially until the key is found.

int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

Comparison Table:

Feature      | Binary Search     | Linear Search
Requirement  | Sorted data       | Any data
Best Case    | O(1)              | O(1)
Average Case | O(log n)          | O(n)
Worst Case   | O(log n)          | O(n)
Use Case     | Large sorted data | Small or unsorted data

Binary Search is significantly faster than Linear Search for large sorted datasets, but it
cannot be used on unsorted arrays.

Q4. What are asymptotic notations? Explain Big O, Omega, and Theta
with examples.
Answer:

Asymptotic Notations are mathematical tools used to describe the time or space
complexity of an algorithm in terms of input size n , as n grows large. They provide upper,
lower, and tight bounds for performance estimation.

1. Big O (O) – Upper Bound:

Describes the worst-case scenario.

Example:
For f(n) = 3n² + 2n + 1 ,
O(f(n)) = O(n²)

Used to guarantee that an algorithm will not take longer than a certain time.

2. Omega (Ω) – Lower Bound:


Describes the best-case scenario.

Example:
In Linear Search,
Ω(1) when the key is at the first position.

Used to state that the algorithm will take at least a certain amount of time.

3. Theta (Θ) – Tight Bound:


Describes both upper and lower bounds, meaning the algorithm always takes this time.

Example:
For f(n) = 5n + 3 ,
Θ(n) as both upper and lower bounds are linear.

Comparison Table:

Notation | Bound                      | Typical Use
O        | Upper bound (at most)      | Worst-case guarantee
Ω        | Lower bound (at least)     | Best-case guarantee
Θ        | Tight bound (both O and Ω) | Exact growth rate

Asymptotic notations are essential for analyzing algorithm efficiency, comparing solutions,
and understanding performance scalability.


Here is Set 8 – the final set of solved 10-mark questions for Data Structures and Algorithms,
with detailed answers (each 200+ words).

✅ Set 8: Solved Questions (10 Marks Each)

Q1. What is a hash table? Explain collision and collision resolution
techniques.
Answer:

A hash table is a data structure that provides fast O(1) average time complexity for
insertion, deletion, and search operations. It uses a hash function to map keys to specific
indices (buckets) in an array.

Hash Function:
It computes an index from the key.
Example:

index = key % table_size;

But multiple keys may produce the same index — this is known as a collision.

Collision:
Occurs when two or more keys map to the same index.

Common Collision Resolution Techniques:

1. Open Addressing:

All elements are stored in the table. If a collision occurs, find the next available slot using
probing.

Linear Probing:
index = (hash + i) % table_size

Quadratic Probing:
index = (hash + i²) % table_size

Double Hashing:
index = (hash1 + i * hash2) % table_size

2. Chaining (Separate Chaining):

Each table index holds a linked list of elements. If multiple keys hash to the same index, they
are stored in the list.

Advantages:
Fast average-case performance

Efficient for dictionary-based problems

Applications:
Symbol tables in compilers

Caching

Databases and sets/maps in programming languages

Hash tables are the backbone of many high-performance applications and are widely used in
both software and system-level design.

Q2. What is an AVL Tree? Explain rotations in AVL Tree.


Answer:

An AVL Tree is a self-balancing Binary Search Tree (BST) where the difference in heights of
left and right subtrees (balance factor) of any node is at most 1.

Balance Factor:

Balance = height(left subtree) - height(right subtree)

Must be –1, 0, or +1 for every node in AVL Tree.

Why AVL?
Normal BST can become skewed → time complexity degrades to O(n)

AVL maintains balance → ensures O(log n) operations

Rotations in AVL Tree (to restore balance):

1. Left-Left (LL) Rotation:

Occurs when a node is inserted into the left subtree of the left child.

    z
   /
  y
 /
x

Rotate right (z becomes right child of y)

2. Right-Right (RR) Rotation:

Occurs when inserted into the right subtree of right child.

z
 \
  y
   \
    x

Rotate left (z becomes left child of y)

3. Left-Right (LR) Rotation:

Node inserted into the right subtree of left child. First left rotate, then right rotate.

4. Right-Left (RL) Rotation:

Node inserted into left subtree of right child. First right rotate, then left rotate.

Applications:
Databases and search engines

Real-time systems needing balanced search performance

AVL Trees guarantee logarithmic height, making search, insert, and delete operations
efficient and predictable.

Q3. Explain the concept of recursion. How does it differ from
iteration? Give examples.
Answer:

Recursion is a programming technique where a function calls itself to solve smaller sub-
problems of a larger problem. It is based on the divide and conquer strategy.

Basic Structure:
void recursiveFunc() {
    if (base_condition)
        return;          // base case stops the recursion
    else
        recursiveFunc(); // recursive call on a smaller sub-problem
}

Example – Factorial:
int factorial(int n) {
    if (n <= 1)
        return 1;
    else
        return n * factorial(n - 1);
}

Iteration vs Recursion:

Feature      | Recursion               | Iteration
Approach     | Function calls itself   | Repeats using a loop
Memory usage | High (due to call stack)| Low
Termination  | Base case               | Condition check
Readability  | Often shorter           | Longer but clearer

Recursion is useful for:


Problems with repetitive structure, like:

Tree traversal

Tower of Hanoi

Fibonacci series

DFS traversal in graphs

Drawbacks:
More memory (stack frames)

Risk of stack overflow if not written with proper base case

Understanding recursion is critical for writing elegant and effective solutions in algorithm
design.

Q4. Explain the working of the Bellman-Ford algorithm with an
example. How is it different from Dijkstra’s?
Answer:

Bellman-Ford is a graph algorithm used to find shortest paths from a single source vertex
to all other vertices in a weighted graph, including graphs with negative edge weights.

Algorithm Steps:
1. Initialize distance of all vertices as ∞; source = 0.

2. Repeat (V – 1) times:

For every edge (u, v) with weight w:

If dist[u] + w < dist[v] , then update dist[v]

3. After V–1 iterations, check for negative weight cycles.

Example:
Graph edges:


A → B (4), A → C (5), B → C (-3), C → D (2)

Initial distances: A=0, B=∞, C=∞, D=∞

Iteratively update:

A→B=4

A→C=5

B → C = 1 (4 – 3)

C→D=3

Final shortest paths:

B = 4, C = 1, D = 3

Time Complexity: O(V × E)

Difference from Dijkstra’s:

Feature                  | Bellman-Ford          | Dijkstra
Handles negative weights | Yes                   | No
Complexity               | O(VE)                 | O(E log V)
Edge Relaxation          | All edges, V–1 times  | Once per vertex (min-heap)
Negative Cycle Detection | Yes                   | No

Bellman-Ford is suitable when the graph has negative weights, and Dijkstra’s is faster for
non-negative weighted graphs.

✅ This concludes all 8 sets of 10-mark questions (32 questions in total).
