DSA Mcqs

The document provides a detailed syllabus for Data Structures and Algorithms (DSA), covering fundamental concepts such as algorithm analysis, linear data structures, trees, hashing, sorting, searching algorithms, graphs, and algorithm design techniques. It includes definitions, operations, complexities, and applications for various data structures like arrays, linked lists, stacks, queues, trees, and graphs. Additionally, it features multiple-choice questions to test understanding of the material.

Uploaded by

Prinsa Joshi

Detailed Syllabus for Data Structures and Algorithms (DSA)


1. Introduction to Data Structures and Algorithms
●​ Basic Terminology: Data, Information, Data Structure, Algorithm, Complexity.
●​ Abstract Data Types (ADTs).
●​ Algorithm Analysis: Time and Space Complexity.
●​ Asymptotic Notations:
○​ Big-O Notation (O)
○​ Omega Notation (Ω)
○​ Theta Notation (Θ)
○​ Best, Worst, and Average Case Analysis.

2. Linear Data Structures


2.1 Arrays

●​ Definition and memory representation.


●​ Operations: Insertion, Deletion, Searching, Traversing.
●​ Applications.

2.2 Lists and Linked Lists

●​ Singly Linked List


○​ Definition and memory representation.
○​ Operations: Insertion, Deletion, Searching, Traversing.
●​ Doubly Linked List
○​ Operations and implementation.
●​ Circular Linked List
○​ Operations and real-life applications.

2.3 Stacks and Queues

●​ Stack
○​ Definition, operations, and applications.
○​ Implementation using arrays and linked lists.
○​ Applications: Function calls, Backtracking, Parenthesis matching,
Infix-to-Postfix conversion, Recursion.
●​ Queue
○​ Definition, operations, and applications.
○​ Implementation using arrays and linked lists.
○​ Circular Queue and Deque.
○​ Priority Queue.

3. Trees
●​ General Trees and Binary Trees
●​ Tree Representations and Traversals
○​ Preorder, Inorder, Postorder Traversal
●​ Binary Search Trees (BSTs)
○​ Insertion and Deletion in BST.
●​ Balanced Search Trees
○​ AVL Trees: Rotations, Insertion, Deletion.
○​ 2-3 Trees: Properties and Operations.
○​ Red-Black Trees: Properties, Rotations, Insertions, Deletions.
○​ Splay Trees and Self-adjusting Trees.
●​ Special Trees
○​ Huffman Trees and Huffman Algorithm.
○​ B-Trees and M-way Search Trees.

4. Hashing and Indexing


●​ Hashing Concepts
○​ Hash Functions.
○​ Collision Resolution Techniques: Chaining, Open Addressing.
○​ Applications of Hashing.
●​ Indexing Methods
○​ Suffix Trees and Tries.

5. Sorting Algorithms
●​ Internal Sorting
○​ Comparison-based Sorting:
■​ Bubble Sort, Selection Sort, Insertion Sort.
■​ Merge Sort, Quick Sort, Heap Sort.
○​ Non-comparison Sorting:
■​ Counting Sort, Radix Sort, Bucket Sort.
●​ External Sorting
○​ Two-way Merge Sort, Multi-way Merge Sort.

6. Searching Algorithms
●​ Linear Search
●​ Binary Search
●​ General Search Trees
7. Graphs and Graph Algorithms
7.1 Graph Basics

●​ Types of Graphs
○​ Directed and Undirected Graphs.
○​ Weighted and Unweighted Graphs.
○​ Representation of Graphs: Adjacency Matrix, Adjacency List.

7.2 Graph Traversals

●​ Breadth-First Search (BFS)


●​ Depth-First Search (DFS)
●​ Topological Sorting

7.3 Pathfinding Algorithms

●​ Shortest Path Algorithms


○​ Dijkstra’s Algorithm.
○​ Bellman-Ford Algorithm.
○​ Floyd-Warshall Algorithm.

7.4 Minimum Spanning Tree (MST)

●​ Prim’s Algorithm
●​ Kruskal’s Algorithm

8. Algorithm Design Techniques


●​ Greedy Algorithms
○​ Huffman Encoding, Kruskal’s MST, Prim’s MST.
●​ Divide and Conquer
○​ Merge Sort, Quick Sort, Binary Search.
●​ Dynamic Programming
○​ Fibonacci Sequence, Matrix Chain Multiplication, Longest Common
Subsequence (LCS), 0/1 Knapsack Problem.
●​ Backtracking
○​ N-Queens Problem, Graph Coloring, Hamiltonian Cycle.

1. Introduction to Data Structures and Algorithms

Data structures and algorithms are fundamental concepts in computer science that help
organize and manipulate data efficiently.
Basic Terminology

●​ Data: Raw facts and figures without meaning. Example: Numbers, Characters.
●​ Information: Processed data that has meaning. Example: A student’s marks
converted into a grade.
●​ Data Structure: A way to store and organize data efficiently. Example: Arrays,
Linked Lists.
●​ Algorithm: A step-by-step procedure to solve a problem.
●​ Complexity: The measure of the efficiency of an algorithm in terms of time
(execution time) and space (memory usage).

Abstract Data Types (ADTs)

●​ ADTs define how a data structure behaves rather than how it is implemented.
●​ Example ADTs:
○​ List: A collection of ordered elements.
○​ Stack: A last-in, first-out (LIFO) structure.
○​ Queue: A first-in, first-out (FIFO) structure.

Algorithm Analysis: Time and Space Complexity

●​ Time Complexity: The amount of time an algorithm takes to complete as a function


of input size.
●​ Space Complexity: The amount of memory an algorithm requires during execution.

Asymptotic Notations

Asymptotic notations describe the behavior of an algorithm as the input size grows.

1.​ Big-O Notation (O)


○​ Represents the upper bound (worst-case scenario).
○​ Example: If an algorithm takes at most O(n²) time, it means the execution
time grows at most proportional to the square of the input size.
2.​ Omega Notation (Ω)
○​ Represents the lower bound (best-case scenario).
○​ Example: If an algorithm takes Ω(n log n) time, it means that at minimum, the
algorithm will take this much time.
3.​ Theta Notation (Θ)
○​ Represents a tight bound (both an upper and a lower bound).
○​ Example: If an algorithm has Θ(n log n) complexity, its execution time always
grows proportionally to n log n — neither faster nor slower asymptotically.

Best, Worst, and Average Case Analysis

●​ Best Case: The minimum time required for an algorithm to complete.


●​ Worst Case: The maximum time an algorithm might take.
●​ Average Case: The expected time an algorithm will take, based on an average input.
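The three cases can be seen concretely in linear search. The following Python sketch is illustrative and not part of the original notes:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent.

    Best case O(1): target is the first element.
    Worst case O(n): target is the last element or absent.
    """
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = [7, 3, 9, 1, 4]
print(linear_search(data, 7))   # best case: found at index 0
print(linear_search(data, 4))   # worst case: found at the last index, 4
print(linear_search(data, 8))   # worst case: not found, -1
```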
20 Multiple-Choice Questions (MCQs) with Answers
Basic Terminology

1.​ Which of the following best defines an algorithm?​


a) A set of well-defined rules​
b) A process of solving a problem step by step​
c) A computer program​
d) A machine learning model
○​ Answer: b) A process of solving a problem step by step
2.​ What is the primary goal of a data structure?​
a) To store data in memory​
b) To organize and manage data efficiently​
c) To reduce power consumption​
d) To make programs run infinitely
○​ Answer: b) To organize and manage data efficiently

Abstract Data Types (ADTs)

3.​ Which of the following is NOT an example of an Abstract Data Type (ADT)?​
a) Stack​
b) Queue​
c) Array​
d) Linked List
○​ Answer: c) Array
4.​ Which ADT follows the First In, First Out (FIFO) principle?​
a) Stack​
b) Queue​
c) Priority Queue​
d) Heap
○​ Answer: b) Queue
5.​ Which ADT is used in function calls (call stack) in programming?​
a) Queue​
b) Stack​
c) Linked List​
d) Hash Table
○​ Answer: b) Stack

Algorithm Analysis

6.​ What does time complexity measure?​


a) The total memory used by an algorithm​
b) The number of operations performed as input size increases​
c) The number of function calls​
d) The efficiency of hardware
○​ Answer: b) The number of operations performed as input size increases
7.​ Which algorithm complexity is the best among the following?​
a) O(n²)​
b) O(n³)​
c) O(n log n)​
d) O(2ⁿ)
○​ Answer: c) O(n log n)
8.​ What does space complexity measure?​
a) Time taken to execute an algorithm​
b) Amount of memory required by an algorithm​
c) Speed of execution​
d) CPU performance
○​ Answer: b) Amount of memory required by an algorithm

Asymptotic Notations

9.​ Which notation represents the worst-case scenario of an algorithm?​


a) Big-O (O)​
b) Theta (Θ)​
c) Omega (Ω)​
d) Phi (Φ)
○​ Answer: a) Big-O (O)
10.​Which notation describes both the upper and lower bounds of an algorithm?​
a) Big-O (O)​
b) Omega (Ω)​
c) Theta (Θ)​
d) Sigma (Σ)
●​ Answer: c) Theta (Θ)
11.​Which asymptotic notation is used for the best case?​
a) Big-O (O)​
b) Omega (Ω)​
c) Theta (Θ)​
d) Pi (π)
●​ Answer: b) Omega (Ω)
12.​Which complexity is better for an efficient algorithm?​
a) O(n²)​
b) O(n log n)​
c) O(n³)​
d) O(2ⁿ)
●​ Answer: b) O(n log n)

Best, Worst, and Average Case Analysis

13.​What is the best-case time complexity of the Binary Search algorithm?​


a) O(n)​
b) O(log n)​
c) O(1)​
d) O(n²)
●​ Answer: c) O(1)
14.​What is the worst-case time complexity of QuickSort?​
a) O(n²)​
b) O(n log n)​
c) O(n)​
d) O(log n)
●​ Answer: a) O(n²)
15.​What is the average-case time complexity of Merge Sort?​
a) O(n log n)​
b) O(n²)​
c) O(log n)​
d) O(n)
●​ Answer: a) O(n log n)
16.​Which case does Big-O notation refer to?​
a) Best case​
b) Worst case​
c) Average case​
d) All of the above
●​ Answer: b) Worst case
17.​Which sorting algorithm has the best worst-case time complexity?​
a) Merge Sort​
b) Quick Sort​
c) Bubble Sort​
d) Selection Sort
●​ Answer: a) Merge Sort
18.​Which of the following does NOT affect time complexity?​
a) Number of elements in input​
b) Processor speed​
c) Algorithm design​
d) Data structure used
●​ Answer: b) Processor speed
19.​If an algorithm runs in constant time, what is its complexity?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: a) O(1)
20.​Which of the following is NOT an asymptotic notation?​
a) O(n)​
b) Ω(n)​
c) Θ(n)​
d) Σ(n)
●​ Answer: d) Σ(n)

Explanation of Topics: Linear Data Structures
2.1 Arrays
Definition and Memory Representation

An array is a collection of elements of the same data type stored at contiguous memory
locations. Each element can be accessed using an index.

Operations on Arrays

1.​ Insertion: Adding an element at a specific index.


○​ If inserting at the beginning or middle, elements need to be shifted.
○​ Best-case: O(1) (if inserting at the end), Worst-case: O(n).
2.​ Deletion: Removing an element from an index.
○​ Requires shifting elements.
○​ Best-case: O(1) (if deleting last element), Worst-case: O(n).
3.​ Searching: Finding an element.
○​ Linear Search: O(n).
○​ Binary Search (sorted array only): O(log n).
4.​ Traversing: Accessing each element one by one.
○​ Time Complexity: O(n).
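The four operations above can be sketched with a Python list, which stores its elements contiguously like a classic array (an illustrative sketch, not from the original notes):

```python
arr = [10, 20, 40, 50]

# Insertion at index 2: later elements shift right -> O(n)
arr.insert(2, 30)            # arr is now [10, 20, 30, 40, 50]

# Deletion at index 0: remaining elements shift left -> O(n)
arr.pop(0)                   # arr is now [20, 30, 40, 50]

# Linear search -> O(n)
index = arr.index(40)        # 2

# Traversal, visiting each element once -> O(n)
total = sum(arr)             # 140

print(arr, index, total)
```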

Applications of Arrays

●​ Used in matrices, database indexing, and image processing.


●​ Useful in implementing data structures like stacks and queues.
●​ Used in sorting and searching algorithms.

2.2 Lists and Linked Lists


Singly Linked List

Definition and Memory Representation

A Singly Linked List (SLL) consists of nodes where each node contains:

●​ Data: Stores the value.


●​ Pointer (next): Points to the next node in the list.

Operations on Singly Linked List

1.​ Insertion:
○​ At the beginning: O(1).
○​ At the end: O(n).
○​ At a specific position: O(n).
2.​ Deletion:
○​ First node: O(1).
○​ Last node: O(n).
○​ Specific node: O(n).
3.​ Searching:
○​ Traversing until the element is found. O(n).
4.​ Traversing:
○​ Visiting each node from head to tail. O(n).
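A minimal singly linked list sketch in Python showing O(1) insertion at the head plus O(n) search and traversal (illustrative code; class and method names are our own):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # pointer to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_at_head(self, data):      # O(1)
        node = Node(data)
        node.next = self.head
        self.head = node

    def search(self, target):            # O(n): traverse until found
        current = self.head
        while current:
            if current.data == target:
                return True
            current = current.next
        return False

    def traverse(self):                  # O(n): visit head to tail
        values, current = [], self.head
        while current:
            values.append(current.data)
            current = current.next
        return values

sll = SinglyLinkedList()
for x in [3, 2, 1]:
    sll.insert_at_head(x)
print(sll.traverse())     # [1, 2, 3]
print(sll.search(2))      # True
```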

Doubly Linked List (DLL)

Operations and Implementation

●​ Each node contains three parts:


○​ Data
○​ Pointer to the next node (next)
○​ Pointer to the previous node (prev)

Advantages over Singly Linked List:

●​ Can be traversed in both directions.


●​ Deletion is more efficient (O(1) for known nodes).

Operations:

●​ Insertion: O(1) at the beginning, O(n) at the end.


●​ Deletion: O(1) for known nodes, O(n) otherwise.
●​ Traversal: O(n).
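The O(1) deletion of a known node is the key advantage of the prev pointer; a hand-built three-node sketch (illustrative, not from the original notes):

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

# Build 10 <-> 20 <-> 30 by hand
a, b, c = DNode(10), DNode(20), DNode(30)
a.next, b.prev = b, a
b.next, c.prev = c, b

# Deleting a known interior node (b) is O(1): just relink its neighbours
b.prev.next = b.next
b.next.prev = b.prev

# Forward traversal from the head now skips the deleted node
node, values = a, []
while node:
    values.append(node.data)
    node = node.next
print(values)   # [10, 30]
```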
Circular Linked List (CLL)

Definition and Real-life Applications

A Circular Linked List is a variation where the last node points back to the first node,
forming a loop.

Types:

●​ Singly Circular Linked List: Last node’s next points to the first node.
●​ Doubly Circular Linked List: Both next and prev pointers create a circular structure.

Applications:

●​ Round-robin scheduling.
●​ Multiplayer games (turn-based).
●​ Buffer management in operating systems.
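The round-robin application can be sketched with a singly circular list: following next pointers simply cycles through the players forever (illustrative code; names are our own):

```python
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = None

# Build a singly circular linked list of three players
names = ["P1", "P2", "P3"]
head = CNode(names[0])
tail = head
for name in names[1:]:
    tail.next = CNode(name)
    tail = tail.next
tail.next = head          # last node points back to the first node

# Round-robin: take 7 turns, wrapping past the end of the list
turns, node = [], head
for _ in range(7):
    turns.append(node.data)
    node = node.next
print(turns)   # ['P1', 'P2', 'P3', 'P1', 'P2', 'P3', 'P1']
```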

20 MCQs with Answers


Arrays

1.​ What is the time complexity of accessing an element in an array by index?​


a) O(n)​
b) O(1)​
c) O(log n)​
d) O(n²)
○​ Answer: b) O(1)
2.​ Which of the following is a characteristic of an array?​
a) Contiguous memory allocation​
b) Non-contiguous memory allocation​
c) Cannot store similar data types​
d) Dynamic resizing
○​ Answer: a) Contiguous memory allocation (a single continuous block of
memory is assigned to the array).
3.​ Which operation in an array is the most expensive in terms of time complexity?​
a) Accessing an element​
b) Searching for an element​
c) Inserting an element at the beginning​
d) Appending an element
○​ Answer: c) Inserting an element at the beginning
4.​ What is the time complexity of Binary Search on a sorted array?​
a) O(n)​
b) O(log n)​
c) O(n log n)​
d) O(1)
○​ Answer: b) O(log n)
5.​ What is the advantage of arrays over linked lists?​
a) Dynamic memory allocation​
b) Constant time insertion​
c) Random access of elements​
d) No memory wastage
○​ Answer: c) Random access of elements

Singly Linked List

6.​ What is the time complexity of inserting a node at the head of a singly linked
list?​
a) O(1)​
b) O(n)​
c) O(n log n)​
d) O(log n)
○​ Answer: a) O(1)
7.​ Which of the following is NOT a disadvantage of a singly linked list?​
a) Requires extra memory for pointers​
b) Cannot be traversed backward​
c) Insertions and deletions are costly​
d) No random access
○​ Answer: c) Insertions and deletions are costly
8.​ Which of the following operations is more efficient in a singly linked list
compared to an array?​
a) Searching​
b) Traversing​
c) Insertion at the beginning​
d) Accessing an element at a specific index
○​ Answer: c) Insertion at the beginning

Doubly Linked List

9.​ What additional pointer does a node in a doubly linked list have?​
a) Previous pointer​
b) Middle pointer​
c) Random pointer​
d) Next-to-next pointer
○​ Answer: a) Previous pointer
10.​Which of the following is NOT an advantage of a doubly linked list?​
a) Can be traversed in both directions​
b) Requires extra memory for the previous pointer​
c) Easier deletion of nodes​
d) More efficient searching than singly linked list
●​ Answer: b) Requires extra memory for the previous pointer

Circular Linked List


11.​Which of the following applications uses a circular linked list?​
a) Stack operations​
b) CPU scheduling​
c) Binary search​
d) None of the above
●​ Answer: b) CPU scheduling
12.​In a circular linked list, the last node points to?​
a) NULL​
b) First node​
c) Itself​
d) Random node
●​ Answer: b) First node

General Linked List Questions

13.​What is the best-case time complexity for searching an element in a linked list?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: a) O(1)
14.​What is the time complexity for deleting a node in the middle of a linked list?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: b) O(n)

Arrays (MCQs 1-5)

1.​ What is the index of the first element in an array?​


a) 0​
b) 1​
c) -1​
d) Depends on programming language
○​ Answer: a) 0
2.​ What is the time complexity of inserting an element at the beginning of an
array?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
○​ Answer: b) O(n)
3.​ Which searching algorithm is more efficient for sorted arrays?​
a) Linear Search​
b) Binary Search​
c) Jump Search​
d) Exponential Search
○​ Answer: b) Binary Search
4.​ If an array is sorted, which sorting algorithm has the best time complexity?​
a) Bubble Sort​
b) Insertion Sort​
c) Quick Sort​
d) Merge Sort
○​ Answer: b) Insertion Sort (O(n) for nearly sorted data)
5.​ What is the memory address formula for accessing an element in a 1D array?​
a) Base Address + (Index * Size of Element)​
b) Base Address - (Index * Size of Element)​
c) Base Address * Index​
d) Base Address / Index
○​ Answer: a) Base Address + (Index * Size of Element)

Singly Linked List (MCQs 6-10)

6.​ Which of the following is an advantage of linked lists over arrays?​


a) Fixed size allocation​
b) Direct access to elements​
c) Efficient insertion and deletion​
d) Contiguous memory allocation
○​ Answer: c) Efficient insertion and deletion
7.​ How many pointers does a node in a singly linked list have?​
a) 1​
b) 2​
c) 3​
d) None
○​ Answer: a) 1
8.​ Which operation has O(1) time complexity in a singly linked list?​
a) Searching an element​
b) Inserting at the beginning​
c) Inserting at the end​
d) Deleting a node at a specific index
○​ Answer: b) Inserting at the beginning
9.​ What happens if we traverse a singly linked list beyond the last node?​
a) Returns the first node​
b) Throws an error​
c) Moves to NULL​
d) Deletes the list
○​ Answer: c) Moves to NULL
10.​What is the best-case time complexity for searching an element in a singly
linked list?​
a) O(n)​
b) O(1)​
c) O(log n)​
d) O(n log n)
●​ Answer: b) O(1) (if the element is at the head)

Doubly Linked List (MCQs 11-15)

11.​What additional pointer does a doubly linked list have compared to a singly
linked list?​
a) Next pointer​
b) Previous pointer​
c) Tail pointer​
d) Random pointer
●​ Answer: b) Previous pointer
12.​Which of the following is NOT an advantage of a doubly linked list?​
a) Can be traversed in both directions​
b) Requires extra memory for the previous pointer​
c) Easier deletion of nodes​
d) More efficient searching than singly linked list
●​ Answer: b) Requires extra memory for the previous pointer
13.​What is the time complexity of deleting a node from the middle of a doubly
linked list (when the node’s address is known)?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: a) O(1)
14.​In which case is a doubly linked list more beneficial than a singly linked list?​
a) When insertions and deletions are done at both ends​
b) When the list needs less memory​
c) When random access is needed​
d) When using recursion
●​ Answer: a) When insertions and deletions are done at both ends
15.​What is the disadvantage of a doubly linked list compared to a singly linked
list?​
a) Slower traversal​
b) More memory is required​
c) Cannot store multiple data types​
d) Difficult to implement
●​ Answer: b) More memory is required

Circular Linked List (MCQs 16-20)

16.​In a circular linked list, the last node points to?​


a) NULL​
b) First node​
c) Itself​
d) Random node
●​ Answer: b) First node
17.​Which of the following applications uses a circular linked list?​
a) CPU scheduling​
b) Stack operations​
c) Binary search​
d) None of the above
●​ Answer: a) CPU scheduling
18.​Which of the following is true about circular linked lists?​
a) They do not contain NULL pointers​
b) They always contain at least two nodes​
c) Searching is faster than in arrays​
d) Deletion is not possible
●​ Answer: a) They do not contain NULL pointers
19.​What is the time complexity of traversing a circular linked list containing n
nodes?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: b) O(n)
20.​Which of the following real-life applications does NOT use a circular linked
list?​
a) Multiplayer games (turn-based)​
b) Round-robin scheduling​
c) Undo/Redo functionality in text editors​
d) Implementing queues in a web server
●​ Answer: d) Implementing queues in a web server

Explanation of Topics: Stacks and Queues
2.3 Stacks and Queues
Stack

Definition

A stack is a linear data structure that follows the LIFO (Last In, First Out) principle. This
means that the last element inserted into the stack is the first one to be removed.

Operations on Stack
1.​ Push(x): Inserts an element x at the top of the stack. O(1).
2.​ Pop(): Removes the top element of the stack. O(1).
3.​ Peek()/Top(): Returns the top element without removing it. O(1).
4.​ isEmpty(): Checks if the stack is empty. O(1).
5.​ isFull(): Checks if the stack is full (only in arrays). O(1).

Implementation of Stack

●​ Using Arrays: Fixed-size stack (static memory allocation).


●​ Using Linked Lists: Dynamically allocated stack (efficient memory usage).

Applications of Stack

1.​ Function Calls: Function calls in programming languages use a stack to store return
addresses.
2.​ Backtracking: Used in problems like the N-Queens problem and maze solving.
3.​ Parenthesis Matching: Used in checking balanced parentheses in expressions.
4.​ Infix to Postfix Conversion: Helps in evaluating expressions efficiently.
5.​ Recursion: Recursive function calls use an implicit stack for execution.
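The parenthesis-matching application can be sketched with a Python list used as a stack (illustrative code, not from the original notes):

```python
def is_balanced(expression):
    """Check balanced (), [], {} using a stack (a Python list)."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expression:
        if ch in '([{':
            stack.append(ch)              # Push: O(1)
        elif ch in pairs:
            # Pop: O(1); an empty stack or mismatch means unbalanced
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                      # balanced iff nothing is left open

print(is_balanced("(a + [b * c])"))   # True
print(is_balanced("(a + b]"))         # False
```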

Queue

Definition

A queue is a linear data structure that follows the FIFO (First In, First Out) principle. This
means that the element inserted first will be removed first.

Operations on Queue

1.​ Enqueue(x): Inserts an element x at the rear (end) of the queue. O(1).
2.​ Dequeue(): Removes an element from the front of the queue. O(1).
3.​ Front(): Returns the front element without removing it. O(1).
4.​ Rear(): Returns the last element without removing it. O(1).
5.​ isEmpty(): Checks if the queue is empty. O(1).
6.​ isFull(): Checks if the queue is full (only in arrays). O(1).

Implementation of Queue

●​ Using Arrays: Fixed-size queue (static memory allocation).


●​ Using Linked Lists: Dynamic queue with efficient memory usage.

Types of Queues

1.​ Circular Queue: The last position connects back to the first position (helps in
reducing memory wastage).
2.​ Deque (Double-Ended Queue): Allows insertion and deletion from both ends.
3.​ Priority Queue: Elements are dequeued based on priority rather than the insertion
order.
Applications of Queue

1.​ CPU Scheduling: Process scheduling in operating systems.


2.​ Task Scheduling: Printer job scheduling, call center systems.
3.​ Graph Traversal: BFS (Breadth-First Search) uses a queue.
4.​ Data Buffering: Used in IO Buffers, packet scheduling in networking.
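The queue operations listed above can be sketched with Python's `collections.deque`, which gives O(1) enqueue and dequeue (illustrative code, not from the original notes):

```python
from collections import deque

queue = deque()
queue.append("job1")      # Enqueue at the rear: O(1)
queue.append("job2")
queue.append("job3")

print(queue[0])           # Front(): "job1"
print(queue[-1])          # Rear(): "job3"
print(queue.popleft())    # Dequeue from the front: "job1"
print(queue.popleft())    # "job2" — FIFO order
print(len(queue) == 0)    # isEmpty(): False, "job3" remains
```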

20 MCQs with Answers


Stack (MCQs 1-10)

1.​ Which data structure follows the Last In, First Out (LIFO) principle?​
a) Queue​
b) Stack​
c) Linked List​
d) Graph
○​ Answer: b) Stack
2.​ What is the time complexity of the Push operation in a stack?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
○​ Answer: a) O(1)
3.​ Which of the following is NOT an application of stacks?​
a) Backtracking​
b) Recursion​
c) CPU Scheduling​
d) Parenthesis Matching
○​ Answer: c) CPU Scheduling
4.​ What happens when a stack overflows?​
a) The program crashes​
b) The stack deletes old elements​
c) The stack increases its size automatically​
d) New elements are ignored
○​ Answer: a) The program crashes
5.​ Which data structure is used to convert infix expressions to postfix
expressions?​
a) Queue​
b) Stack​
c) Linked List​
d) Tree
○​ Answer: b) Stack
6.​ What will be the result of the following stack operations?​
Push(5)
Push(10)
Push(20)
Pop()
Peek()
a) 10​
b) 20​
c) 5​
d) 15
○​ Answer: a) 10
7.​ Which of the following is used in function calls?​
a) Stack​
b) Queue​
c) Linked List​
d) Hash Table
○​ Answer: a) Stack
8.​ What is the space complexity of a stack storing 'n' elements?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n log n)
○​ Answer: b) O(n)
9.​ Which operation is performed last in recursion?​
a) Function Call​
b) Push​
c) Pop​
d) Return
○​ Answer: d) Return
10.​Which of the following data structures can be used to implement recursion?​
a) Queue​
b) Stack​
c) Graph​
d) Tree
●​ Answer: b) Stack

Queue (MCQs 11-20)

11.​Which data structure follows the First In, First Out (FIFO) principle?​
a) Stack​
b) Queue​
c) Heap​
d) Tree
●​ Answer: b) Queue
12.​What is the time complexity of the Dequeue operation in a queue?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n²)
●​ Answer: a) O(1)
13.​Which of the following is NOT a type of queue?​
a) Circular Queue​
b) Priority Queue​
c) Stack Queue​
d) Deque
●​ Answer: c) Stack Queue
14.​What is a major problem with a simple queue implementation using arrays?​
a) Memory wastage due to fixed size​
b) Cannot perform enqueue operation​
c) Dequeue operation is slow​
d) Elements cannot be accessed randomly
●​ Answer: a) Memory wastage due to fixed size
15.​Which queue allows insertion and deletion from both ends?​
a) Simple Queue​
b) Circular Queue​
c) Priority Queue​
d) Deque
●​ Answer: d) Deque
16.​Which scheduling algorithm uses a queue?​
a) Round Robin Scheduling​
b) Priority Scheduling​
c) First-Come, First-Served Scheduling​
d) All of the above
●​ Answer: d) All of the above
17.​What happens when a circular queue is full?​
a) Enqueue is not possible​
b) Memory increases automatically​
c) Elements are shifted​
d) Rear pointer resets to 0
●​ Answer: a) Enqueue is not possible
18.​What is the front element of a queue containing [10, 20, 30] after performing
one dequeue operation?​
a) 10​
b) 20​
c) 30​
d) Queue is empty
●​ Answer: b) 20
19.​Which queue allows elements to be dequeued based on priority rather than
insertion order?​
a) Deque​
b) Priority Queue​
c) Circular Queue​
d) Stack
●​ Answer: b) Priority Queue
20.​Which of the following applications use a queue?​
a) Graph BFS traversal​
b) Printer job scheduling​
c) Call center system​
d) All of the above
●​ Answer: d) All of the above

Explanation of Topics: Trees


3. Trees
General Trees and Binary Trees

A tree is a hierarchical data structure that consists of nodes. The top node is called the
root, and each node may have child nodes.

General Trees

A general tree is a tree where each node can have any number of children.

●​ Used in file systems, organization charts, etc.

Binary Trees

A binary tree is a special type of tree where each node has at most two children:

●​ Left Child
●​ Right Child

Types of Binary Trees:

1.​ Full Binary Tree – Every node has either 0 or 2 children.


2.​ Complete Binary Tree – All levels are completely filled, except possibly the last
level, which is filled from left to right.
3.​ Perfect Binary Tree – All internal nodes have two children, and all leaves are at the
same level.
4.​ Balanced Binary Tree – The difference between the heights of left and right
subtrees is at most 1.
5.​ Degenerate Tree – Every parent node has only one child, making it behave like a
linked list.
Tree Representations and Traversals

Tree Representation

A tree can be represented in multiple ways:

1.​ Linked Representation: Each node has a pointer to its children.


2.​ Array Representation: The parent-child relationship is stored in an array.

Tree Traversals

Tree traversal refers to visiting each node in a tree exactly once in a systematic way.

1.​ Preorder Traversal (Root → Left → Right)


○​ Visit root first, then left, then right.
○​ Used in expression trees, prefix notation.
2.​ Inorder Traversal (Left → Root → Right)
○​ Visit left, then root, then right.
○​ Used in Binary Search Trees (BSTs) to get sorted order.
3.​ Postorder Traversal (Left → Right → Root)
○​ Visit left, then right, then root.
○​ Used in deleting trees, postfix notation.
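The three depth-first traversals can be sketched recursively in Python (illustrative code; the small example tree is our own):

```python
class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):    # Root -> Left -> Right
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):     # Left -> Root -> Right
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):   # Left -> Right -> Root
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

#        2
#       / \
#      1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
print(preorder(root))   # [2, 1, 3]
print(inorder(root))    # [1, 2, 3]  (sorted, since this is a BST)
print(postorder(root))  # [1, 3, 2]
```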

Binary Search Trees (BSTs)

A Binary Search Tree (BST) is a binary tree where:

●​ Left subtree contains values less than the root.


●​ Right subtree contains values greater than the root.

Properties:

●​ Inorder traversal of BST gives sorted elements.


●​ Searching, insertion, and deletion take O(log n) time on average, but O(n) in the worst case (a skewed tree).

Insertion and Deletion in BST

Insertion in BST

1.​ Start from the root.


2.​ If the new value is less than the current node, go left.
3.​ If the new value is greater than the current node, go right.
4.​ If an empty spot is found, insert the new value there.

Deletion in BST
Three cases to consider:

1.​ Node is a leaf (no children) – Simply delete it.


2.​ Node has one child – Replace the node with its child.
3.​ Node has two children –
○​ Find inorder successor (smallest value in right subtree).
○​ Replace the node with its inorder successor.
○​ Delete the inorder successor from the right subtree.
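The insertion steps and the three deletion cases can be sketched in Python as follows (illustrative code, not from the original notes):

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:                       # empty spot found: insert here
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                            # duplicate keys are ignored

def delete(root, key):
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:              # leaf or single right child
            return root.right
        if root.right is None:             # single left child
            return root.left
        succ = root.right                  # two children: inorder successor
        while succ.left:                   # = smallest key in right subtree
            succ = succ.left
        root.key = succ.key                # copy successor's key here
        root.right = delete(root.right, succ.key)   # remove the successor
    return root

def inorder(root):
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [50, 30, 70, 20, 40]:
    root = insert(root, k)
print(inorder(root))           # [20, 30, 40, 50, 70] — sorted order
root = delete(root, 30)        # node 30 has two children
print(inorder(root))           # [20, 40, 50, 70]
```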

20 MCQs with Answers


General Trees and Binary Trees (MCQs 1-5)

1.​ Which of the following is NOT a type of binary tree?​


a) Complete Binary Tree​
b) Circular Binary Tree​
c) Full Binary Tree​
d) Perfect Binary Tree
○​ Answer: b) Circular Binary Tree
2.​ In a binary tree, a node with two children has a degree of:​
a) 0​
b) 1​
c) 2​
d) 3
○​ Answer: c) 2
3.​ What is the height of a single-node binary tree?​
a) 0​
b) 1​
c) -1​
d) 2
○​ Answer: a) 0
4.​ In a perfect binary tree of height ‘h’, how many leaf nodes are there?​
a) 2^h​
b) 2^(h-1)​
c) 2^(h+1)​
d) h^2
○​ Answer: a) 2^h
5.​ Which of the following is NOT an application of binary trees?​
a) File system hierarchy​
b) Sorting algorithms​
c) Network routing​
d) Image processing
○​ Answer: d) Image processing
Tree Traversals (MCQs 6-10)

6.​ Which tree traversal method visits the root node first?​
a) Preorder​
b) Inorder​
c) Postorder​
d) Level Order
○​ Answer: a) Preorder
7.​ Which traversal method is used to process an expression tree in prefix
notation?​
a) Preorder​
b) Inorder​
c) Postorder​
d) Level Order
○​ Answer: a) Preorder
8.​ Which tree traversal method gives elements in sorted order for a BST?​
a) Preorder​
b) Inorder​
c) Postorder​
d) Level Order
○​ Answer: b) Inorder
9.​ Which of the following tree traversal methods processes nodes in
left-right-root order?​
a) Preorder​
b) Inorder​
c) Postorder​
d) Level Order
○​ Answer: c) Postorder
10.​Which of the following is NOT a depth-first traversal technique?​
a) Inorder​
b) Preorder​
c) Postorder​
d) Level Order
●​ Answer: d) Level Order

Binary Search Trees (BSTs) (MCQs 11-15)

11.​What is the worst-case time complexity of searching in a BST?​


a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
●​ Answer: c) O(n) (for skewed BST)
12.​In a BST, where is the smallest element located?​
a) Root​
b) Leftmost node​
c) Rightmost node​
d) Middle node
●​ Answer: b) Leftmost node
13.​What happens when we insert an already existing key in a BST?​
a) It is ignored​
b) It is inserted again​
c) It replaces the existing value​
d) It deletes the old value
●​ Answer: a) It is ignored
14.​What is the best-case time complexity of searching in a BST?​
a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
●​ Answer: b) O(log n)
15.​Which of the following is a self-balancing BST?​
a) AVL Tree​
b) Red-Black Tree​
c) B-Tree​
d) All of the above
●​ Answer: d) All of the above

Insertion and Deletion in BST (MCQs 16-20)

16.​Which method is used to delete a node with two children in a BST?​


a) Replace with its left child​
b) Replace with its right child​
c) Replace with its inorder successor​
d) Remove it directly
●​ Answer: c) Replace with its inorder successor
17.​In a BST, what is the inorder successor of the root node?​
a) Smallest value in left subtree​
b) Largest value in left subtree​
c) Smallest value in right subtree​
d) Largest value in right subtree
●​ Answer: c) Smallest value in right subtree
18.​What is the time complexity of inserting a node in a balanced BST?​
a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
●​ Answer: b) O(log n)
19.​Which operation can cause a BST to become unbalanced?​
a) Insertion​
b) Deletion​
c) Searching​
d) Both (a) and (b)
●​ Answer: d) Both (a) and (b)
20.​What is the height of an empty BST?​
a) 0​
b) 1​
c) -1​
d) Undefined
●​ Answer: c) -1

Explanation of Topics: Balanced Search Trees and Special Trees
Balanced Search Trees
Balanced search trees maintain their balance by ensuring that the height difference between
subtrees does not grow too large. This ensures that operations like search, insert, and
delete can be performed in logarithmic time, preventing the tree from degenerating into a
linear structure (like a linked list). Let's explore the specific types of balanced trees.

AVL Trees

An AVL tree is a self-balancing binary search tree in which the difference in heights
between the left and right subtrees of any node is at most 1. This height difference is called
the balance factor and must be between -1 and 1.

Rotations in AVL Trees

When an insertion or deletion operation causes the balance factor of a node to become less
than -1 or greater than 1, rotations are performed to restore balance.

●​ Right Rotation (LL Case): Used when a node is unbalanced due to a left-heavy
subtree (the left child has a left child); a single right rotation restores balance.
●​ Left Rotation (RR Case): Used when a node is unbalanced due to a
right-heavy subtree (the right child has a right child); a single left rotation restores balance.
●​ Left-Right Rotation (LR Rotation): A left rotation on the left child followed
by a right rotation on the node, used when the left child has a right child.
●​ Right-Left Rotation (RL Rotation): A right rotation on the right child followed
by a left rotation on the node, used when the right child has a left child.

Insertion in AVL Trees


●​ Perform a standard BST insertion.
●​ After insertion, check the balance factor of each node and perform necessary
rotations.
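As a rough illustration of the steps above, here is a minimal sketch of a balance-factor check and a single right rotation (used for a left-heavy subtree). `Node`, `height`, `balance_factor`, and `rotate_right` are illustrative names, not from the text:

```python
# Minimal AVL-style sketch: balance factor and a single right rotation.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def height(node):
    # Height of an empty subtree is -1, matching the convention in this text.
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    # height(left) - height(right); must stay within [-1, 1] in an AVL tree.
    return height(node.left) - height(node.right)

def rotate_right(y):
    # Right rotation: y's left child x becomes the new subtree root,
    # and x's old right subtree is reattached as y's left subtree.
    x = y.left
    y.left = x.right
    x.right = y
    return x

# Build the left-skewed chain 3 -> 2 -> 1 (balance factor of the root is +2)...
root = Node(3)
root.left = Node(2)
root.left.left = Node(1)
# ...and restore balance with a single right rotation.
root = rotate_right(root)
print(root.key, root.left.key, root.right.key)  # 2 1 3
```

After the rotation, 2 is the root with 1 and 3 as children, and every balance factor is 0.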

Deletion in AVL Trees

●​ Perform a standard BST deletion.


●​ After deletion, check the balance factor of each node and perform necessary
rotations to restore balance.

2-3 Trees

A 2-3 tree is a self-balancing tree in which each internal node has either two or three
children and stores either one or two keys. The tree is kept balanced, ensuring that all leaf
nodes are at the same level.

Properties of 2-3 Trees

●​ Every internal node has either two or three children.


●​ If a node has two children, it stores one key.
●​ If a node has three children, it stores two keys.
●​ The tree is balanced, meaning all leaf nodes are at the same depth.

Operations in 2-3 Trees

●​ Insertion: When a new key is inserted, the tree may split nodes to maintain balance.
●​ Deletion: Deletion involves merging nodes if necessary to keep the tree balanced.

Red-Black Trees

A red-black tree is a binary search tree with an extra bit of storage per node called the
color bit, which can be either red or black. This tree ensures that the tree remains
approximately balanced, guaranteeing O(log n) time for search, insertion, and deletion
operations.

Properties of Red-Black Trees

1.​ Every node is either red or black.


2.​ The root is always black.
3.​ Red nodes cannot have red children (no two consecutive red nodes).
4.​ Every path from a node to its descendant leaves must have the same number of
black nodes.
5.​ Every leaf node is black.

Rotations in Red-Black Trees


Similar to AVL trees, red-black trees require rotations to maintain balance after insertion or
deletion:

●​ Left Rotation: A rotation to the left when a node is right-heavy.


●​ Right Rotation: A rotation to the right when a node is left-heavy.

Insertion in Red-Black Trees

●​ Insert the node as you would in a regular binary search tree.


●​ After insertion, the tree may require re-coloring and rotations to maintain the
properties of the red-black tree.

Deletion in Red-Black Trees

●​ Deletion is more complex than insertion because it can violate multiple properties.
●​ After deletion, a series of recoloring and rotations may be needed to restore the
tree’s balance.

Splay Trees and Self-adjusting Trees

A splay tree is a self-adjusting binary search tree where recently accessed elements are
moved to the root to speed up future accesses. This makes frequently accessed elements
faster to find.

Properties of Splay Trees

●​ Splaying: Every operation (search, insert, delete) on the tree performs a splay
operation to move the accessed node to the root.
●​ No extra space required: Unlike AVL or red-black trees, splay trees do not require
extra storage for balancing factors or color bits.

Splaying Operations

●​ Zig: Single rotation when the accessed node is the child of the root.
●​ Zig-Zig: Double rotation when the accessed node and its parent are both left or both
right children.
●​ Zig-Zag: Double rotation when the accessed node is a left child and its parent is a
right child (or vice versa).

Special Trees
Huffman Trees and Huffman Algorithm

A Huffman tree is a type of binary tree used in the Huffman encoding algorithm to assign
variable-length codes to input characters, with shorter codes for more frequent characters.
Huffman Algorithm

1.​ Create a frequency table of the characters in the input.


2.​ Build a priority queue (min-heap) of all the characters, with their frequencies.
3.​ Build the tree by repeatedly merging the two nodes with the lowest frequencies until
only one node remains.
4.​ Assign codes to the characters based on their positions in the tree.

B-Trees and M-way Search Trees

A B-tree is a self-balancing search tree where each node can have multiple children and
stores multiple keys. It is designed to minimize disk access in systems with large amounts of
data.

Properties of B-Trees

1.​ Balanced: All leaf nodes are at the same depth.


2.​ Multiple Keys per Node: Each node can store multiple keys and have multiple
children.
3.​ Node Capacity: Each node has a maximum and minimum number of children and
keys.
4.​ Efficient Search: B-trees allow efficient insertion, deletion, and search
operations in logarithmic time, especially for large datasets.

M-way Search Trees

An M-way search tree is a generalization of the binary search tree, where each node can
have M children, and the tree is balanced. These trees are widely used in databases and file
systems.

20 MCQs with Answers


AVL Trees (MCQs 1-5)

1.​ In an AVL tree, the balance factor of a node is defined as:​


a) Height of left subtree - height of right subtree​
b) Height of right subtree - height of left subtree​
c) Left subtree + right subtree​
d) Left child - right child
○​ Answer: a) Height of left subtree - height of right subtree
2.​ Which of the following is used to restore balance in an AVL tree after insertion?​
a) Rotation​
b) Recoloring​
c) Merging​
d) Splaying
○​ Answer: a) Rotation
3.​ What is the time complexity for inserting a node into an AVL tree?​
a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
○​ Answer: b) O(log n)
4.​ Which rotation is used when a node has a left-heavy subtree, and its left child
has a right child?​
a) Left Rotation​
b) Right Rotation​
c) Left-Right Rotation​
d) Right-Left Rotation
○​ Answer: c) Left-Right Rotation
5.​ What is the maximum allowed balance factor for a node in an AVL tree?​
a) 1​
b) 2​
c) 3​
d) -1
○​ Answer: a) 1

2-3 Trees and Red-Black Trees (MCQs 6-10)

6.​ In a 2-3 tree, what is the maximum number of children a node can have?​


a) One​
b) Two​
c) Three​
d) Four
○​ Answer: c) Three
7.​ Which of the following is a property of a red-black tree?​
a) The root is always red.​
b) The path from any node to its descendant leaves contains the same number of black nodes.​
c) Red nodes can have red children.​
d) All leaf nodes are red.
○​ Answer: b) The path from any node to its descendant leaves contains the same
number of black nodes.
8.​ In a red-black tree, what color must the root be?​
a) Red​
b) Black​
c) Any color​
d) White
○​ Answer: b) Black
9.​ In a red-black tree, which operation is performed to fix violations of the
red-black properties after insertion?​
a) Splaying​
b) Rotation​
c) Recoloring​
d) Both b and c
○​ Answer: d) Both b and c
10.​Which operation requires the use of rotations in a red-black tree?​
a) Insertion​
b) Deletion​
c) Both insertion and deletion​
d) Searching
●​ Answer: c) Both insertion and deletion

Splay Trees and Special Trees (MCQs 11-15)

11.​Which of the following is true for a splay tree?​


a) It is balanced after every insertion.​
b) Frequently accessed elements are moved to the root.​
c) It uses rotations only in the case of deletions.​
d) It stores extra information per node for balancing.
●​ Answer: b) Frequently accessed elements are moved to the root.
12.​What is the main advantage of splay trees?​
a) Logarithmic search time​
b) No need for extra space for balancing factors​
c) Faster insertions than other balanced trees​
d) It is always balanced
●​ Answer: b) No need for extra space for balancing factors
13.​In a Huffman tree, which nodes are merged first during the encoding process?​
a) Nodes with the highest frequency​
b) Nodes with the lowest frequency​
c) Nodes with the highest value​
d) Nodes with the lowest value
●​ Answer: b) Nodes with the lowest frequency
14.​What is the time complexity for building a Huffman tree?​
a) O(n)​
b) O(n log n)​
c) O(log n)​
d) O(n^2)
●​ Answer: b) O(n log n)
15.​Which of the following is a feature of B-Trees?​
a) Nodes can only have two children.​
b) All leaf nodes are at the same depth.​
c) The root node can have at most three children.​
d) It is not balanced.
●​ Answer: b) All leaf nodes are at the same depth.
Explanation of Topics: Hashing and Indexing
4.1 Hashing Concepts
Hashing is a technique used to uniquely identify a data element based on its key. The idea
behind hashing is to use a hash function to map data to a fixed-size table (called a hash
table) for efficient storage and retrieval. It is commonly used to implement associative
arrays, database indexing, and caches.

Key Components in Hashing

●​ Hash Function: A function that takes a key and maps it to an index (hash value) in
the hash table. The hash function is responsible for distributing the keys uniformly
across the table.
●​ Hash Table: An array where data is stored at indices determined by the hash
function.

4.2 Hash Functions


A hash function is a mathematical function used to map data of arbitrary size (like a string
or an integer) to a fixed-size value, called the hash value or hash code.

Properties of a Good Hash Function

●​ Deterministic: The same input will always produce the same hash value.
●​ Uniform Distribution: The hash values should be evenly distributed to minimize
collisions.
●​ Efficient: The hash function should be computationally simple and fast.
●​ Minimizes Collisions: Collisions occur when two different keys hash to the same
index. A good hash function reduces the likelihood of this happening.

Common Hash Functions

●​ Division Method: h(k) = k mod m, where k is the key and m is the size of the
hash table.
●​ Multiplicative Method: h(k) = ⌊m · (k·A mod 1)⌋, where A is a constant in
(0, 1) and m is the table size.
●​ Cryptographic Hash Functions: Used for security, such as SHA-256 or MD5.

4.3 Collision Resolution Techniques


A collision occurs when two keys hash to the same index. Collision resolution techniques
are methods to handle such conflicts.
1. Chaining

In chaining, each index of the hash table points to a linked list of keys that hash to the same
index. This allows multiple keys to exist at the same index.

Steps for Chaining

●​ Each table entry contains a linked list.


●​ When a collision occurs, the new key is added to the corresponding linked list.

Advantages:

●​ Simple to implement.
●​ Supports dynamic resizing.
●​ Average search time is reduced if the table is sparsely populated.

Disadvantages:

●​ Extra memory overhead due to linked lists.


●​ The efficiency depends on how evenly keys are distributed.
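The chaining scheme above can be sketched as follows: each slot of the table holds a list of (key, value) pairs, so colliding keys simply share a slot. `ChainedHashTable` and its method names are illustrative, not from the text:

```python
# Minimal chaining hash table: each slot is a list of (key, value) pairs.

class ChainedHashTable:
    def __init__(self, size=8):
        self.size = size
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        # Division-method hash: h(k) = k mod m, via Python's built-in hash().
        return hash(key) % self.size

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                    # update an existing key in place
                chain[i] = (key, value)
                return
        chain.append((key, value))          # collision or new key: append

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable(size=4)
for k in range(10):                         # 10 keys in 4 slots force collisions
    table.put(k, k * k)
print(table.get(7))  # 49
```

With 10 keys in 4 slots, every chain holds multiple entries, yet lookups still succeed by scanning the (short) chain at the hashed index.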

2. Open Addressing

In open addressing, all elements are stored in the hash table itself, and when a collision
occurs, alternative locations are sought using a process called probing.

Probing Methods:

●​ Linear Probing: If a collision occurs at index i, the next slots checked are
i+1, i+2, and so on (wrapping around the table) until an empty slot is found.
●​ Quadratic Probing: In case of a collision, check slots at i+1², i+2², i+3²,
and so on, where i is the original hash index.
●​ Double Hashing: If a collision occurs, use a second hash function to find the
next available slot.

Advantages:

●​ More memory efficient as it does not require additional structures like linked lists.
●​ Can be more efficient in practice for small tables.

Disadvantages:

●​ Slower for large tables due to clustering (when many elements hash to the same or
nearby locations).
●​ The table can become full faster, requiring resizing and rehashing.
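Linear probing, the simplest of the probing methods above, can be sketched like this. `LinearProbingTable` and its parallel key/value arrays are illustrative; a production table would also handle deletion tombstones and resizing:

```python
# Open addressing with linear probing: on a collision at slot i, try
# i+1, i+2, ... (mod table size) until an empty slot or the key is found.

class LinearProbingTable:
    def __init__(self, size=8):
        self.size = size
        self.keys = [None] * size
        self.values = [None] * size

    def put(self, key, value):
        i = hash(key) % self.size
        for step in range(self.size):           # probe at most `size` slots
            j = (i + step) % self.size
            if self.keys[j] is None or self.keys[j] == key:
                self.keys[j] = key
                self.values[j] = value
                return
        raise RuntimeError("table full; resize and rehash needed")

    def get(self, key):
        i = hash(key) % self.size
        for step in range(self.size):
            j = (i + step) % self.size
            if self.keys[j] is None:
                raise KeyError(key)             # empty slot: key is absent
            if self.keys[j] == key:
                return self.values[j]
        raise KeyError(key)

t = LinearProbingTable(size=5)
t.put(0, "a")
t.put(5, "b")   # 0 and 5 both hash to slot 0, so 5 probes forward to slot 1
print(t.get(5))  # b
```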

Applications of Hashing:

●​ Database Indexing: Hashing is used to quickly locate records in a database.


●​ Caches: In web applications or databases, hashing can speed up access to
frequently requested data.
●​ Cryptography: Cryptographic hash functions are used in data integrity checks and
digital signatures.
●​ Symbol Tables: Hashing is used in compilers to store information about variables
and functions.

4.4 Indexing Methods


Indexing is a technique used to quickly locate and access data without having to search
through every element sequentially. It is widely used in databases for faster retrieval.

Common Indexing Methods

1.​ Single-Level Indexing: A single index file that points to data records.
2.​ Multi-Level Indexing: Hierarchical indexing where a higher-level index points to
lower-level indices that, in turn, point to data records.
3.​ B-Trees and B+ Trees: These are used in databases to maintain a balanced tree
structure for efficient indexing and search operations.
4.​ Bitmap Indexing: Uses bitmaps to represent the presence or absence of data in
specific columns.

Advantages of Indexing:

●​ Faster access to data.


●​ Allows range queries and partial matching.
●​ Helps optimize the performance of SQL queries.

4.5 Suffix Trees and Tries


Suffix Trees

A suffix tree is a compressed trie of all the suffixes of a string. It allows for fast string
matching and is particularly useful in applications like bioinformatics and text processing.

Properties of Suffix Trees:

●​ A suffix tree contains all suffixes of a string as its paths.


●​ It can be constructed in linear time O(n) for a string of length n.
●​ Allows fast substring search, pattern matching, and text processing.

Applications:
●​ Pattern matching and searching in large texts.
●​ Data compression algorithms.

Tries

A trie (or prefix tree) is a tree-like data structure that stores strings in a way that allows for
efficient searching. Each node represents a single character, and paths represent words or
prefixes.

Properties of Tries:

●​ Efficient for storing and searching a large dictionary of words.


●​ Supports fast prefix-based search.
●​ Space-efficient when storing strings with common prefixes.

Applications:

●​ Autocomplete in search engines or text editors.


●​ Spell-checking and word suggestion systems.
●​ Efficient dictionary lookups in databases.
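The trie described above can be sketched in a few lines: each node maps one character to a child node, and an end-of-word flag marks complete words. `TrieNode`, `Trie`, and `has_prefix` are illustrative names:

```python
# Minimal trie (prefix tree): insertion and fast prefix-based search.

class TrieNode:
    def __init__(self):
        self.children = {}      # one edge per character
        self.is_word = False    # marks the end of a complete word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        # Walk one node per character: O(len(prefix)) regardless of how
        # many words are stored -- the basis of autocomplete.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

trie = Trie()
for w in ["car", "card", "care", "dog"]:
    trie.insert(w)
print(trie.has_prefix("car"), trie.has_prefix("cat"))  # True False
```

Note how "car", "card", and "care" share the path c→a→r, which is what makes tries space-efficient for dictionaries with common prefixes.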

20 MCQs with Answers


Hashing Concepts and Collision Resolution (MCQs 1-5)

1.​ In hashing, what is the primary purpose of a hash function?​


a) To store data efficiently​
b) To perform a binary search​
c) To map data to a fixed-size table​
d) To compute data length
○​ Answer: c) To map data to a fixed-size table
2.​ Which of the following is an example of open addressing in collision
resolution?​
a) Linear Probing​
b) Chaining​
c) Binning​
d) Merging
○​ Answer: a) Linear Probing
3.​ What happens if two keys hash to the same index in a hash table?​
a) A new hash table is created​
b) A collision occurs​
c) The data is deleted​
d) The table is rehashed
○​ Answer: b) A collision occurs
4.​ Which of the following is an example of chaining in collision resolution?​
a) Linear Probing​
b) Linked Lists​
c) Quadratic Probing​
d) Hashing
○​ Answer: b) Linked Lists
5.​ Which of the following is a disadvantage of open addressing?​
a) Requires extra memory​
b) Can cause clustering​
c) Slower for small datasets​
d) Cannot handle large datasets
○​ Answer: b) Can cause clustering

Indexing and Suffix Trees (MCQs 6-10)

6.​ Which indexing method uses bitmaps to represent the presence or absence of
data?​
a) Single-Level Indexing​
b) B-Tree Indexing​
c) Bitmap Indexing​
d) Multi-Level Indexing
○​ Answer: c) Bitmap Indexing
7.​ Which of the following is true for a suffix tree?​
a) It stores all substrings of a string.​
b) It is slower than a trie.​
c) It is used for prefix-based search only.​
d) It only supports exact matching.
○​ Answer: a) It stores all substrings of a string.
8.​ In a trie, each node represents:​
a) A complete word​
b) A character​
c) A key-value pair​
d) A string
○​ Answer: b) A character
9.​ Which of the following is a primary application of tries?​
a) Binary search in arrays​
b) Spell-checking​
c) Sorting​
d) Data compression
○​ Answer: b) Spell-checking
10.​Which structure allows fast substring search and pattern matching?​
a) Trie​
b) Hash Table​
c) Suffix Tree​
d) Binary Tree
●​ Answer: c) Suffix Tree

Hashing Concepts Continued (MCQs 11-15)

11.​Which of the following is a feature of a good hash function?​
a) It must always produce the same output for the same input.​
b) It must always map every key to a unique index.​
c) It must generate different outputs for every key.​
d) It must store keys in sorted order.
●​ Answer: a) It must always produce the same output for the same input.
12.​Which of the following techniques is used to resolve collisions in hashing by
storing elements in linked lists at each index?​
a) Open Addressing​
b) Quadratic Probing​
c) Chaining​
d) Double Hashing
●​ Answer: c) Chaining
13.​What is the worst-case time complexity for searching in a hash table using
chaining?​
a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
●​ Answer: c) O(n)
14.​In open addressing, which probing technique uses the second hash function to
find the next available slot?​
a) Linear Probing​
b) Quadratic Probing​
c) Double Hashing​
d) Chaining
●​ Answer: c) Double Hashing
15.​What is the primary advantage of open addressing over chaining?​
a) No extra memory is required for linked lists.​
b) It is faster in practice.​
c) It supports more data.​
d) It uses less computation power.
●​ Answer: a) No extra memory is required for linked lists.

Indexing and Suffix Trees (MCQs 16-20)

16.​In a B-Tree, which of the following is true?​


a) It is an unbalanced binary tree.​
b) All leaf nodes are at different levels.​
c) All leaf nodes are at the same depth.​
d) It uses a linear search for searching.
●​ Answer: c) All leaf nodes are at the same depth.
17.​What is the primary purpose of using a trie for storing strings?​
a) To minimize space complexity​
b) To allow for fast prefix-based searches​
c) To allow for efficient sorting of strings​
d) To optimize for searching integers
●​ Answer: b) To allow for fast prefix-based searches
18.​In the construction of a suffix tree, what happens to the suffixes of the string?​
a) They are all concatenated together.​
b) They are stored in reverse order.​
c) They are used as paths in a tree structure.​
d) They are stored as individual arrays.
●​ Answer: c) They are used as paths in a tree structure.
19.​Which of the following indexing methods is primarily used for faster retrieval of
records in databases with large amounts of data?​
a) Linear Search​
b) B-Trees​
c) Array Indexing​
d) Linked List Indexing
●​ Answer: b) B-Trees
20.​Which is a major advantage of using a suffix tree over a regular trie for string
matching?​
a) It stores only prefixes of the string.​
b) It allows faster substring matching.​
c) It is more memory-efficient.​
d) It supports insertion and deletion operations efficiently.
●​ Answer: b) It allows faster substring matching.

Explanation of Topics: Sorting Algorithms
5.1 Internal Sorting
Internal sorting refers to the process of sorting data that fits entirely into memory. In this
case, sorting is done directly in the memory (RAM) rather than using external storage like
disks. The main types of internal sorting algorithms are comparison-based and
non-comparison sorting algorithms.

Comparison-based Sorting

Comparison-based sorting algorithms determine the order of elements by comparing them.


The basic idea is to compare pairs of elements and rearrange them based on the results.

Common Comparison-based Sorting Algorithms:

1.​ Bubble Sort:


○​ Description: Repeatedly steps through the list, compares adjacent elements,
and swaps them if they are in the wrong order. This process is repeated until
the list is sorted.
○​ Time Complexity: O(n^2)
○​ Space Complexity: O(1) (in-place)
2.​ Selection Sort:
○​ Description: Divides the list into a sorted and unsorted region. It repeatedly
selects the smallest (or largest) element from the unsorted region and swaps
it with the first unsorted element.
○​ Time Complexity: O(n^2)
○​ Space Complexity: O(1) (in-place)
3.​ Insertion Sort:
○​ Description: Builds the sorted array one element at a time by repeatedly
inserting the next element into its correct position in the sorted region.
○​ Time Complexity: O(n^2) (in the worst case), O(n) (best case)
○​ Space Complexity: O(1) (in-place)
4.​ Merge Sort:
○​ Description: A divide-and-conquer algorithm that divides the list into halves,
recursively sorts each half, and then merges the two halves in sorted order.
○​ Time Complexity: O(n log n)
○​ Space Complexity: O(n) (requires extra space)
5.​ Quick Sort:
○​ Description: A divide-and-conquer algorithm that picks a "pivot" element,
partitions the array into two sub-arrays (those smaller and those greater than
the pivot), and then recursively sorts the sub-arrays.
○​ Time Complexity: O(n log n) on average, O(n^2) in the worst case
○​ Space Complexity: O(log n) (in-place)
6.​ Heap Sort:
○​ Description: Converts the array into a binary heap, then repeatedly extracts
the largest (or smallest) element and places it in the correct position.
○​ Time Complexity: O(n log n)
○​ Space Complexity: O(1) (in-place)
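The quick sort scheme described above (pick a pivot, partition into smaller and greater sub-arrays, recurse) can be sketched as follows. This version builds new lists for clarity rather than partitioning in place, so it trades the O(log n) space of the in-place variant for readability:

```python
# Quick sort sketch: pivot selection, partitioning, and recursion.

def quick_sort(items):
    if len(items) <= 1:                 # base case: already sorted
        return items
    pivot = items[len(items) // 2]      # pick the middle element as pivot
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(greater)

print(quick_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Grouping elements equal to the pivot into their own list avoids the O(n²) degeneration that naive two-way partitioning shows on inputs with many duplicates.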

Non-comparison Sorting

Non-comparison sorting algorithms are based on the values of the elements themselves,
rather than comparing them. These algorithms typically perform better than
comparison-based algorithms under certain conditions.

1.​ Counting Sort:


○​ Description: Counts the occurrences of each distinct element in the list, then
computes the positions of each element in the sorted array.
○​ Time Complexity: O(n + k), where k is the range of the input
○​ Space Complexity: O(k)
2.​ Radix Sort:
○​ Description: Sorts numbers digit by digit, starting from the least significant
digit (LSD) or most significant digit (MSD). Each digit is sorted using a stable
sorting algorithm, often counting sort.
○​ Time Complexity: O(nk), where k is the number of digits in the largest
number
○​ Space Complexity: O(n + k)
3.​ Bucket Sort:
○​ Description: Divides the input into several "buckets" and sorts each bucket
individually, usually using another sorting algorithm (e.g., insertion sort).
○​ Time Complexity: O(n + k) if the data is uniformly distributed
○​ Space Complexity: O(n + k)
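Counting sort, the simplest of the non-comparison algorithms above, can be sketched as follows for non-negative integers (the basic form does not handle negative keys without an offset):

```python
# Counting sort sketch: count occurrences, then emit each value in order.
# Runs in O(n + k) time and O(k) counting space, where k is the value range.

def counting_sort(items):
    if not items:
        return []
    k = max(items)                       # range of the input
    counts = [0] * (k + 1)
    for x in items:
        counts[x] += 1                   # count each distinct value
    result = []
    for value, c in enumerate(counts):
        result.extend([value] * c)       # place values in sorted order
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

The loop over `counts` touches every value in the range 0..k, which is why the algorithm only pays off when k is not much larger than n.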

5.2 External Sorting


External sorting refers to sorting large datasets that do not fit into memory. These
algorithms make use of external storage (like disk drives) to handle the data efficiently.

Common External Sorting Algorithms:

1.​ Two-way Merge Sort:


○​ Description: Works similarly to merge sort but is adapted for situations where
the data is too large to fit in memory. It sorts two blocks of data at a time and
then merges them.
○​ Time Complexity: O(n log n)
○​ Space Complexity: O(n) (for the merge buffers)
2.​ Multi-way Merge Sort:
○​ Description: A generalization of two-way merge sort, where multiple sorted
sub-lists are merged at the same time. This is useful when more than two
blocks of data need to be sorted and merged.
○​ Time Complexity: O(n log k), where k is the number of sorted sub-lists
○​ Space Complexity: O(n)
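The k-way merge at the heart of multi-way merge sort can be sketched with a min-heap, so selecting the next smallest element across k runs costs O(log k). In real external sorting the runs would be sorted files on disk read block by block; here they are in-memory lists for illustration:

```python
# k-way merge sketch: a min-heap holds one candidate element per sorted run.
import heapq

def k_way_merge(sorted_runs):
    heap = []
    for run_id, run in enumerate(sorted_runs):
        if run:                                        # skip empty runs
            heapq.heappush(heap, (run[0], run_id, 0))  # (value, run, index)
    merged = []
    while heap:
        value, run_id, i = heapq.heappop(heap)
        merged.append(value)
        if i + 1 < len(sorted_runs[run_id]):           # refill from same run
            heapq.heappush(heap, (sorted_runs[run_id][i + 1], run_id, i + 1))
    return merged

runs = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print(k_way_merge(runs))  # [1, 2, 3, 4, 5, 6, 9] -> full merge of all runs
```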

20 MCQs with Answers


Comparison-based Sorting Algorithms (MCQs 1-10)

1.​ Which of the following sorting algorithms has an average-case time
complexity of O(n^2)?​
a) Merge Sort​
b) Quick Sort​
c) Insertion Sort​
d) Heap Sort
○​ Answer: c) Insertion Sort
2.​ Which of the following sorting algorithms guarantees O(n log n) time
complexity even in the worst case?​
a) Bubble Sort​
b) Merge Sort​
c) Quick Sort​
d) Selection Sort
○​ Answer: b) Merge Sort
3.​ What is the best-case time complexity for Quick Sort?​
a) O(n log n)​
b) O(n^2)​
c) O(1)​
d) O(n)
○​ Answer: a) O(n log n)
4.​ Which of the following sorting algorithms uses a "divide and conquer"
approach?​
a) Bubble Sort​
b) Merge Sort​
c) Insertion Sort​
d) Selection Sort
○​ Answer: b) Merge Sort
5.​ Which sorting algorithm is not comparison-based?​
a) Merge Sort​
b) Quick Sort​
c) Radix Sort​
d) Insertion Sort
○​ Answer: c) Radix Sort
6.​ What is the time complexity of Heap Sort?​
a) O(n log n)​
b) O(n^2)​
c) O(n)​
d) O(log n)
○​ Answer: a) O(n log n)
7.​ Which sorting algorithm is known for its "stability"?​
a) Selection Sort​
b) Heap Sort​
c) Merge Sort​
d) Quick Sort
○​ Answer: c) Merge Sort
8.​ What is the space complexity of Merge Sort?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n log n)
○​ Answer: b) O(n)
9.​ Which algorithm performs better when the data is already nearly sorted?​
a) Selection Sort​
b) Quick Sort​
c) Insertion Sort​
d) Merge Sort
○​ Answer: c) Insertion Sort
10.​Which sorting algorithm is most efficient when the range of input values is
small?​
a) Radix Sort​
b) Merge Sort​
c) Quick Sort​
d) Counting Sort
●​ Answer: d) Counting Sort

Non-comparison Sorting and External Sorting (MCQs 11-20)

11.​Which sorting algorithm is based on the digit-by-digit sorting of elements?​


a) Quick Sort​
b) Counting Sort​
c) Radix Sort​
d) Merge Sort
●​ Answer: c) Radix Sort
12.​In which sorting algorithm is the input data divided into buckets before
sorting?​
a) Bucket Sort​
b) Merge Sort​
c) Quick Sort​
d) Heap Sort
●​ Answer: a) Bucket Sort
13.​What is the worst-case time complexity of Counting Sort?​
a) O(n)​
b) O(n^2)​
c) O(n + k)​
d) O(n log n)
●​ Answer: c) O(n + k), where k is the range of the input
14.​Which of the following statements about Radix Sort is false?​
a) It is non-comparison-based.​
b) It works well with large amounts of data.​
c) It sorts each digit using a stable sorting algorithm.​
d) It has a time complexity of O(n log n).
●​ Answer: d) It has a time complexity of O(n log n). (Radix Sort runs in O(nk).)
15.​Which of the following sorting algorithms is most suitable for external sorting
with large datasets?​
a) Quick Sort​
b) Merge Sort​
c) Radix Sort​
d) Heap Sort
●​ Answer: b) Merge Sort
16.​What is the main advantage of multi-way merge sort over two-way merge sort?​
a) It reduces the number of comparisons.​
b) It requires less memory.​
c) It can handle more blocks of data at once.​
d) It is faster for smaller datasets.
●​ Answer: c) It can handle more blocks of data at once.
17.​In two-way merge sort, the process of merging two sorted lists involves which
of the following?​
a) Sorting elements from left to right​
b) Combining the sorted lists into one​
c) Dividing the data into smaller sublists​
d) Discarding duplicate elements
●​ Answer: b) Combining the sorted lists into one
18.​Which sorting algorithm is suitable for data that does not fit into memory and
must be stored in external storage?​
a) Heap Sort​
b) Merge Sort​
c) Radix Sort​
d) Quick Sort
●​ Answer: b) Merge Sort
19.​What is the primary drawback of using counting sort?​
a) It is not stable.​
b) It cannot handle negative numbers.​
c) It has a high space complexity.​
d) It requires a fixed-size array.
●​ Answer: c) It has a high space complexity.
20.​Which algorithm is generally considered the most efficient for sorting large
datasets when all data is in memory?​
a) Selection Sort​
b) Quick Sort​
c) Insertion Sort​
d) Merge Sort
●​ Answer: b) Quick Sort

Explanation of Topics: Searching Algorithms
6.1 Linear Search
Linear search is the simplest searching algorithm. It works by checking every element in the
list one by one until the desired element is found or the list is exhausted.

●​ Description: The algorithm iterates through the list sequentially and compares each
element with the target value.
●​ Time Complexity:
○​ Worst Case: O(n)
○​ Best Case: O(1) (if the element is found at the first position)
●​ Space Complexity: O(1) (in-place)
●​ Applications: Suitable for small lists or unsorted data.

6.2 Binary Search


Binary search is a more efficient algorithm than linear search, but it requires that the list is
sorted beforehand. It works by repeatedly dividing the search interval in half.

●​ Description: The algorithm compares the target value to the middle element of the
list. If the target is smaller, it narrows the search to the left half; if larger, to the right
half. This process is repeated until the element is found or the search interval is
empty.
●​ Time Complexity:
○​ Worst Case: O(log n)
○​ Best Case: O(1) (if the target is found at the middle)
●​ Space Complexity: O(1) (for iterative implementation)
●​ Applications: Suitable for large datasets that are sorted, especially when searching
for individual values.
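The halving process above can be sketched as an iterative binary search; this version returns the index of the target, or -1 if it is absent:

```python
# Iterative binary search over a sorted list: O(log n) time, O(1) space.

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1                # target must be in the right half
        else:
            high = mid - 1               # target must be in the left half
    return -1                            # search interval is empty: absent

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23), binary_search(data, 7))  # 5 -1
```

Each iteration discards half of the remaining interval, which is exactly why at most ⌈log₂ n⌉ + 1 comparisons are ever needed.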

6.3 General Search Trees


General Search Trees refer to tree data structures where each node can have any number
of children. They are more flexible than binary trees and can be used for a wide variety of
searching applications, including search operations that are more complex than binary
search.

●​ Description: In general search trees, each node can have any number of children.
Searching within such trees typically involves traversing the tree in a specific order
(e.g., Preorder, Inorder, or Postorder traversal).
●​ Time Complexity:
○​ Worst Case: O(n) (linear search through the tree)
●​ Space Complexity: O(n) (due to tree structure)
●​ Applications: Useful for hierarchical data structures, such as file systems, multi-level
indexing in databases, and more complex search queries.
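Searching such a tree typically means a depth-first traversal, as sketched below. `NaryNode` and `dfs_search` are illustrative names; the directory-like example is an assumption chosen to mirror the file-system use case mentioned above:

```python
# Depth-first search in a general (N-ary) tree, where each node may have
# any number of children. Worst case O(n): every node may be visited.

class NaryNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def dfs_search(node, target):
    # Preorder traversal: check the node first, then recurse into children.
    if node is None:
        return None
    if node.value == target:
        return node
    for child in node.children:
        found = dfs_search(child, target)
        if found is not None:
            return found
    return None

# A small hierarchy resembling a directory tree.
root = NaryNode("/", [
    NaryNode("home", [NaryNode("docs"), NaryNode("pics")]),
    NaryNode("etc"),
])
print(dfs_search(root, "pics").value)  # pics
```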

20 MCQs with Answers for Searching Algorithms
Linear Search (MCQs 1-5)

1.​ What is the time complexity of linear search in the worst case?​
a) O(1)​
b) O(log n)​
c) O(n)​
d) O(n log n)
○​ Answer: c) O(n)
2.​ Which of the following is true about linear search?​
a) It requires the list to be sorted.​
b) It is more efficient than binary search for large datasets.​
c) It is easy to implement but inefficient for large datasets.​
d) It divides the list into two halves.
○​ Answer: c) It is easy to implement but inefficient for large datasets.
3.​ In which case does the linear search algorithm terminate after checking the
first element?​
a) When the target element is at the first position.​
b) When the list is empty.​
c) When the list is sorted.​
d) When the target element is in the last position.
○​ Answer: a) When the target element is at the first position.
4.​ What is the main advantage of linear search over binary search?​
a) It works on sorted lists only.​
b) It is more efficient in terms of time complexity.​
c) It does not require the list to be sorted.​
d) It can handle large data better.
○​ Answer: c) It does not require the list to be sorted.
5.​ What is the space complexity of linear search?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n log n)
○​ Answer: a) O(1)

Binary Search (MCQs 6-10)

6.​ Binary search requires the list to be:​


a) Sorted​
b) Unsorted​
c) Sorted in descending order​
d) None of the above
○​ Answer: a) Sorted
7.​ What is the time complexity of binary search in the worst case?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n log n)
○​ Answer: c) O(log n)
8.​ Which of the following is a key characteristic of binary search?​
a) It checks every element sequentially.​
b) It reduces the search space by half after each comparison.​
c) It is used only for unsorted arrays.​
d) It works by iterating through the entire array.
○​ Answer: b) It reduces the search space by half after each comparison.
9.​ What is the best-case time complexity for binary search?​
a) O(n)​
b) O(log n)​
c) O(1)​
d) O(n log n)
○​ Answer: c) O(1)
10.​What is the space complexity of binary search?​
a) O(1)​
b) O(n)​
c) O(log n)​
d) O(n log n)
●​ Answer: a) O(1) (for iterative implementation)

General Search Trees (MCQs 11-15)

11.​What is the time complexity of searching in a general search tree in the worst
case?​
a) O(log n)​
b) O(n)​
c) O(n log n)​
d) O(1)
●​ Answer: b) O(n)
12.​Which of the following is a characteristic of general search trees?​
a) Each node can only have two children.​
b) Each node can have any number of children.​
c) The tree must be balanced.​
d) It is primarily used for binary search.
●​ Answer: b) Each node can have any number of children.
13.​Which traversal method can be used to search through a general search tree?​
a) Preorder traversal​
b) Inorder traversal​
c) Postorder traversal​
d) All of the above
●​ Answer: d) All of the above
14.​In which of the following scenarios would you prefer using a general search
tree?​
a) Searching for an element in a sorted array.​
b) Searching hierarchical data, like file systems.​
c) Searching a list of numbers.​
d) Searching through a database of flat records.
●​ Answer: b) Searching hierarchical data, like file systems.
15.​What is the time complexity of searching for an element in a binary search tree
(BST)?​
a) O(n)​
b) O(log n)​
c) O(n log n)​
d) O(1)
●​ Answer: b) O(log n) (for a balanced tree)

General Search Trees Continued (MCQs 16-20)

16.​Which of the following is an example of a general search tree?​
a) Binary Search Tree​
b) AVL Tree​
c) N-ary Tree​
d) Red-Black Tree
●​ Answer: c) N-ary Tree
17.​Which traversal technique is most commonly used for searching in general
search trees?​
a) Breadth-First Search (BFS)​
b) Depth-First Search (DFS)​
c) Quick Sort​
d) Linear Search
●​ Answer: b) Depth-First Search (DFS)
18.​In a general search tree, each node contains which of the following?​
a) Only the data.​
b) Data and the references to its children.​
c) Data and a reference to its parent.​
d) Data, children, and parent references.
●​ Answer: b) Data and the references to its children.
19.​Which of the following is a drawback of searching in general search trees?​
a) It can be slow due to lack of balance.​
b) It requires sorting the data.​
c) It requires a binary search approach.​
d) It uses more memory than binary trees.
●​ Answer: a) It can be slow due to lack of balance.
20.​Which of the following search algorithms is used when searching a general
search tree?​
a) Binary Search​
b) Linear Search​
c) Depth-First Search (DFS)​
d) Linear Search through a linked list
●​ Answer: c) Depth-First Search (DFS)
7. Graphs and Graph Algorithms
7.1 Graph Basics

Types of Graphs

●​ Directed Graph (Digraph): In this graph, edges have a direction. The edge (u, v)
means that there is a directed edge from node u to node v. The direction matters.
●​ Undirected Graph: In this graph, edges do not have a direction. The edge (u, v) is
the same as (v, u).
●​ Weighted Graph: A graph in which each edge has a weight or cost associated with
it. The weight typically represents the distance, cost, or any other measure of the
edge.
●​ Unweighted Graph: A graph in which edges do not have any weights associated
with them.

Representation of Graphs

●​ Adjacency Matrix: A 2D array where each cell (i, j) represents the presence or
absence of an edge between vertex i and vertex j. It is commonly used for dense
graphs.
○​ Time complexity for BFS/DFS: O(V^2) where V is the number of vertices.
●​ Adjacency List: A collection of lists or linked lists, where each vertex stores a list of
adjacent vertices. This representation is more efficient for sparse graphs.
○​ Time complexity for BFS/DFS: O(V + E) where V is the number of vertices
and E is the number of edges.
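The two representations above can be sketched side by side. This is a minimal illustration for a small undirected graph; the edge list and vertex count are made up for the example:

```python
# Two common graph representations for a small undirected graph
# with vertices 0..3 (the edges here are purely illustrative).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: O(V^2) space, O(1) edge lookup; suits dense graphs.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = 1
    matrix[v][u] = 1  # undirected: mirror each edge

# Adjacency list: O(V + E) space; suits sparse graphs.
adj = {i: [] for i in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
```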

7.2 Graph Traversals

Breadth-First Search (BFS)

●​ Description: BFS explores all the vertices at the current depth level before moving
on to vertices at the next depth level. It uses a queue to explore the graph.
●​ Time Complexity: O(V + E) where V is the number of vertices and E is the number
of edges.
●​ Space Complexity: O(V) (due to the queue and visited list).
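A queue-driven BFS over an adjacency list can be sketched as follows (a minimal version; the adjacency-list shape is an assumption carried over from the representation section):

```python
from collections import deque

def bfs(adj, start):
    """Level-order traversal; adj maps each vertex to its neighbours."""
    visited = {start}
    order = []
    queue = deque([start])          # FIFO queue drives BFS
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in visited:    # mark on enqueue so nodes enter once
                visited.add(v)
                queue.append(v)
    return order
```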

Depth-First Search (DFS)

●​ Description: DFS explores as far as possible along each branch before backtracking. It uses a stack (or recursion) for traversal.
●​ Time Complexity: O(V + E) where V is the number of vertices and E is the number
of edges.
●​ Space Complexity: O(V) (due to the recursion stack or explicit stack).
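The recursive form of DFS, where the call stack plays the role of the explicit stack, might look like this (a sketch over the same adjacency-list shape as above):

```python
def dfs(adj, start, visited=None, order=None):
    """Recursive depth-first traversal; returns vertices in visit order."""
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)
    for v in adj[start]:
        if v not in visited:        # go as deep as possible, then backtrack
            dfs(adj, v, visited, order)
    return order
```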

Topological Sorting
●​ Description: Topological sorting is only possible in Directed Acyclic Graphs
(DAGs). It arranges the vertices in a linear order such that for every directed edge (u,
v), vertex u comes before vertex v.
●​ Time Complexity: O(V + E) where V is the number of vertices and E is the number
of edges.
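One common way to implement this is DFS-based: emit each vertex after all of its descendants are finished, then reverse. A minimal sketch (assuming the graph is a DAG given as vertex → successors):

```python
def topological_sort(adj):
    """DFS-based topological sort of a DAG (adj: vertex -> successors)."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)
        order.append(u)             # emit only after all descendants

    for u in adj:
        if u not in visited:
            visit(u)
    return order[::-1]              # reverse finish times
```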

7.3 Pathfinding Algorithms

Shortest Path Algorithms

1.​ Dijkstra’s Algorithm
○​ Description: Used to find the shortest path between a source node and all
other nodes in a weighted graph (with non-negative weights).
○​ Time Complexity: O(V^2) using an adjacency matrix or O((V + E) log V)
using a priority queue.
○​ Space Complexity: O(V).
2.​ Bellman-Ford Algorithm
○​ Description: Used to find the shortest path from a single source to all other
vertices, even with negative edge weights. It can also detect negative weight
cycles.
○​ Time Complexity: O(V * E) where V is the number of vertices and E is the
number of edges.
○​ Space Complexity: O(V).
3.​ Floyd-Warshall Algorithm
○​ Description: Used to find the shortest paths between all pairs of vertices in a
weighted graph.
○​ Time Complexity: O(V^3) where V is the number of vertices.
○​ Space Complexity: O(V^2).
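As a sketch of the first of these, Dijkstra's algorithm with a priority queue might look like the following (the graph shape and vertex names are illustrative; edge weights must be non-negative):

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source; adj: u -> list of (v, weight)."""
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    heap = [(0, source)]                    # priority queue of (dist, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # stale queue entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

Using a binary heap this way gives the O((V + E) log V) bound quoted above.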

7.4 Minimum Spanning Tree (MST)

Prim’s Algorithm

●​ Description: A greedy algorithm that grows a minimum spanning tree by adding edges with the smallest weight that connect a new vertex to the tree.
●​ Time Complexity: O(V^2) (using an adjacency matrix) or O((V + E) log V) (using a
priority queue).
●​ Space Complexity: O(V).
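A priority-queue version of Prim's algorithm, sketched here as computing just the total MST weight for a connected undirected graph (function name and graph shape are illustrative):

```python
import heapq

def prim_mst_weight(adj, start):
    """Total MST weight via Prim's algorithm; adj: u -> [(v, w), ...]."""
    in_tree, total = {start}, 0
    heap = [(w, v) for v, w in adj[start]]  # candidate edges out of the tree
    heapq.heapify(heap)
    while len(in_tree) < len(adj):
        w, v = heapq.heappop(heap)
        if v in in_tree:
            continue                        # edge became internal, skip it
        in_tree.add(v)                      # smallest edge crossing the cut
        total += w
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, x))
    return total
```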

Kruskal’s Algorithm

●​ Description: A greedy algorithm that adds edges in increasing order of their weight,
as long as they don’t form a cycle.
●​ Time Complexity: O(E log E), where E is the number of edges (sorting edges).
●​ Space Complexity: O(V).
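Kruskal's algorithm is usually paired with a union-find (disjoint-set) structure to detect cycles. A minimal sketch, with vertices numbered 0..n-1 and edges given as (weight, u, v) tuples:

```python
def kruskal_mst(n, edges):
    """MST edges via Kruskal's algorithm with union-find.
    edges: list of (weight, u, v); vertices are 0..n-1."""
    parent = list(range(n))

    def find(x):                            # path-compressing find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # the O(E log E) sort dominates
        ru, rv = find(u), find(v)
        if ru != rv:                        # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```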
MCQs for Graphs and Graph Algorithms
Graph Basics (MCQs 1-5)

1.​ Which of the following is a characteristic of a directed graph?​
a) The edges have no direction.​
b) Each edge is bidirectional.​
c) The edges have a direction.​
d) Each vertex has only one edge.
○​ Answer: c) The edges have a direction.
2.​ What is the main disadvantage of using an adjacency matrix for graph
representation?​
a) It uses a lot of memory.​
b) It is inefficient for sparse graphs.​
c) It requires sorting the graph’s edges.​
d) It is difficult to implement.
○​ Answer: b) It is inefficient for sparse graphs.
3.​ Which graph representation is most efficient for sparse graphs?​
a) Adjacency Matrix​
b) Adjacency List​
c) Incidence Matrix​
d) Graph Node List
○​ Answer: b) Adjacency List
4.​ Which of the following is true about an unweighted graph?​
a) Each edge has a cost or weight.​
b) There are no cycles in an unweighted graph.​
c) All edges have equal weight or cost.​
d) It can only be directed.
○​ Answer: c) All edges have equal weight or cost.
5.​ Which graph type requires each edge to have a direction associated with it?​
a) Directed Graph​
b) Weighted Graph​
c) Undirected Graph​
d) Unweighted Graph
○​ Answer: a) Directed Graph

Graph Traversals (MCQs 6-10)

6.​ What is the time complexity of both BFS and DFS?​
a) O(1)​
b) O(V^2)​
c) O(V + E)​
d) O(V log V)
○​ Answer: c) O(V + E)
7.​ In BFS, what is used to store the nodes to be explored next?​
a) Stack​
b) Queue​
c) Priority Queue​
d) Linked List
○​ Answer: b) Queue
8.​ Which of the following is true about DFS traversal?​
a) It uses a queue to explore nodes.​
b) It explores as far as possible along a branch before backtracking.​
c) It requires the graph to be sorted.​
d) It only works for undirected graphs.
○​ Answer: b) It explores as far as possible along a branch before backtracking.
9.​ What is the primary application of topological sorting?​
a) To find the shortest path.​
b) To arrange nodes in a linear order for a DAG.​
c) To detect cycles in a graph.​
d) To find the minimum spanning tree.
○​ Answer: b) To arrange nodes in a linear order for a DAG.
10.​Which of the following algorithms is used for Topological Sorting?​
a) Dijkstra’s Algorithm​
b) Bellman-Ford Algorithm​
c) DFS​
d) BFS
●​ Answer: c) DFS

Pathfinding Algorithms (MCQs 11-15)

11.​Which of the following algorithms is used to find the shortest path in a graph
with negative edge weights?​
a) Dijkstra’s Algorithm​
b) Floyd-Warshall Algorithm​
c) Bellman-Ford Algorithm​
d) Prim’s Algorithm
●​ Answer: c) Bellman-Ford Algorithm
12.​Which algorithm can be used to find the shortest path between all pairs of
vertices?​
a) Dijkstra’s Algorithm​
b) Bellman-Ford Algorithm​
c) Floyd-Warshall Algorithm​
d) BFS
●​ Answer: c) Floyd-Warshall Algorithm
13.​What is the primary difference between Dijkstra’s Algorithm and Bellman-Ford
Algorithm?​
a) Dijkstra’s Algorithm works only for directed graphs.​
b) Bellman-Ford can handle graphs with negative weights, but Dijkstra cannot.​
c) Dijkstra’s Algorithm is slower than Bellman-Ford.​
d) Bellman-Ford is faster than Dijkstra’s Algorithm.
●​ Answer: b) Bellman-Ford can handle graphs with negative weights, but Dijkstra
cannot.
14.​Which algorithm is used for finding the shortest path from a source node to all
other nodes in a graph with non-negative weights?​
a) Bellman-Ford Algorithm​
b) Dijkstra’s Algorithm​
c) Floyd-Warshall Algorithm​
d) A* Algorithm
●​ Answer: b) Dijkstra’s Algorithm
15.​What is the time complexity of Floyd-Warshall’s Algorithm?​
a) O(V^2)​
b) O(V^3)​
c) O(V + E)​
d) O(E log V)
●​ Answer: b) O(V^3)

Minimum Spanning Tree (MCQs 16-20)

16.​Which of the following is a greedy algorithm used to find the Minimum Spanning Tree (MST)?​
a) Bellman-Ford Algorithm​
b) Prim’s Algorithm​
c) Dijkstra’s Algorithm​
d) Floyd-Warshall Algorithm
●​ Answer: b) Prim’s Algorithm
17.​Which algorithm is used to find the MST by sorting the edges in increasing
order of their weight?​
a) Kruskal’s Algorithm​
b) Dijkstra’s Algorithm​
c) Bellman-Ford Algorithm​
d) Prim’s Algorithm
●​ Answer: a) Kruskal’s Algorithm
18.​What is the time complexity of Kruskal’s Algorithm?​
a) O(V^2)​
b) O(E log E)​
c) O(V log V)​
d) O(V^3)
●​ Answer: b) O(E log E)
19.​Which of the following is true about Prim’s Algorithm?​
a) It starts with a vertex and grows the tree by adding the smallest edges.​
b) It does not need a starting node.​
c) It sorts edges by weight.​
d) It uses a depth-first search for traversal.
●​ Answer: a) It starts with a vertex and grows the tree by adding the smallest edges.
20.​What is the time complexity of Prim’s Algorithm using an adjacency matrix?​
a) O(E log V)​
b) O(V^2)​
c) O(V + E)​
d) O(E)
●​ Answer: b) O(V^2)

8. Algorithm Design Techniques


Greedy Algorithms

Greedy algorithms follow a problem-solving heuristic of making the locally optimal choice at each stage, in the hope that these local choices lead to a globally optimal solution. They are appropriate for problems where making the locally optimal choice at each step is guaranteed to yield an optimal solution for the whole problem.

Huffman Encoding

●​ Description: A lossless data compression algorithm. It assigns variable-length codes to input characters, with shorter codes assigned to more frequent characters.
●​ Time Complexity: O(n log n) where n is the number of distinct characters in the
input.
●​ Applications: Used in file compression formats like ZIP, JPEG, and MP3.
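The greedy step, repeatedly merging the two least-frequent subtrees, can be sketched with a heap. This is a minimal illustration (the tie-breaking counter and tuple-based tree are implementation choices, not part of the algorithm's definition):

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codes from a {symbol: frequency} map."""
    # Heap entries carry a counter so ties never compare tree nodes.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:                    # greedy merge of two rarest trees
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1

    codes = {}

    def walk(node, code):
        if isinstance(node, tuple):         # internal node: recurse both ways
            walk(node[0], code + '0')
            walk(node[1], code + '1')
        else:
            codes[node] = code or '0'       # single-symbol edge case
    walk(heap[0][2], '')
    return codes
```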

Kruskal’s Minimum Spanning Tree (MST) Algorithm

●​ Description: A greedy algorithm to find a minimum spanning tree for a connected, undirected graph. It adds edges in increasing order of weight, making sure no cycles are formed.
●​ Time Complexity: O(E log E) where E is the number of edges.
●​ Applications: Network design, image segmentation.

Prim’s Minimum Spanning Tree (MST) Algorithm

●​ Description: Another greedy algorithm for finding the MST of a connected graph. It
grows the MST one edge at a time, selecting the smallest edge that connects a
vertex in the tree to a vertex outside the tree.
●​ Time Complexity: O(E log V) where E is the number of edges and V is the number
of vertices.
●​ Applications: Network design, computer networking.

Divide and Conquer

Divide and Conquer is a problem-solving paradigm that divides the problem into
subproblems, solves each subproblem independently, and combines their solutions to solve
the original problem.
Merge Sort

●​ Description: A sorting algorithm that divides the array into two halves, recursively
sorts them, and then merges the sorted halves.
●​ Time Complexity: O(n log n) where n is the number of elements in the array.
●​ Space Complexity: O(n)
●​ Applications: Sorting large datasets.
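The divide, recurse, and merge steps described above can be sketched as follows (a non-in-place version, returning a new sorted list for clarity):

```python
def merge_sort(a):
    """Stable O(n log n) merge sort returning a new sorted list."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])              # divide and recurse
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right): # merge the sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]    # append whichever half remains
```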

Quick Sort

●​ Description: A sorting algorithm that selects a "pivot" element and partitions the
array around the pivot so that elements less than the pivot go to the left, and
elements greater go to the right. It then recursively sorts the left and right partitions.
●​ Time Complexity: O(n log n) on average, but O(n²) in the worst case.
●​ Space Complexity: O(log n) for the recursive stack.
●​ Applications: Sorting, database query optimization.
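The pivot-and-partition scheme can be sketched like this. Note this version builds new lists rather than partitioning in place, trading the O(log n) stack-only space of the classic version for readability:

```python
def quick_sort(a):
    """Quick sort: partition around a pivot, then sort each side."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]      # left partition
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]   # right partition
    return quick_sort(less) + equal + quick_sort(greater)
```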

Binary Search

●​ Description: A searching algorithm used on a sorted array. It repeatedly halves the search interval, continuing in the half that could still contain the target element.
●​ Time Complexity: O(log n) where n is the number of elements in the array.
●​ Space Complexity: O(1) for iterative implementation, O(log n) for recursive
implementation.
●​ Applications: Efficient searching in sorted arrays or databases.
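The iterative form, which gives the O(1) space bound above, can be sketched as:

```python
def binary_search(a, target):
    """Iterative binary search on a sorted list; returns an index or -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1                       # target not present
```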

Dynamic Programming

Dynamic programming (DP) is a technique for solving problems by breaking them down into
simpler subproblems and storing the results of these subproblems to avoid redundant
calculations.

Fibonacci Sequence

●​ Description: The sequence where each number is the sum of the two preceding
ones, usually starting with 0 and 1. It can be solved using dynamic programming by
storing the results of subproblems to avoid redundant computation.
●​ Time Complexity: O(n) where n is the number of terms.
●​ Space Complexity: O(n) for the DP table.
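A bottom-up sketch using the O(n) table described above (keeping only the last two values would reduce the space to O(1)):

```python
def fib(n):
    """Bottom-up DP Fibonacci: O(n) time, O(n) table."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored subresults
    return table[n]
```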

Matrix Chain Multiplication

●​ Description: The problem of finding the most efficient way to multiply a chain of
matrices. The goal is to minimize the number of scalar multiplications.
●​ Time Complexity: O(n³) where n is the number of matrices.
●​ Space Complexity: O(n²) for storing the DP table.
●​ Applications: Optimization in matrix computations.
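A standard DP sketch for the minimum cost, with the chain given as a dimension list where matrix i has shape dims[i] x dims[i+1] (function name is illustrative):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to multiply a chain of matrices."""
    n = len(dims) - 1                       # number of matrices in the chain
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # consider longer chains last
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # try every split point
            )
    return cost[0][n - 1]
```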

Longest Common Subsequence (LCS)

●​ Description: The problem of finding the longest subsequence common to two sequences. This is solved using a DP table where the solution is built from smaller subproblems.
●​ Time Complexity: O(m * n) where m and n are the lengths of the two sequences.
●​ Space Complexity: O(m * n).
●​ Applications: Bioinformatics (sequence alignment), text comparison.
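The (m+1) x (n+1) table construction can be sketched as follows, here returning just the LCS length:

```python
def lcs_length(s, t):
    """Length of the longest common subsequence of s and t."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the match
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```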

0/1 Knapsack Problem

●​ Description: A problem where a set of items, each with a weight and value, must be
selected such that the total weight does not exceed a given limit while maximizing the
total value. It is solved using dynamic programming.
●​ Time Complexity: O(n * W) where n is the number of items and W is the maximum
weight capacity.
●​ Space Complexity: O(n * W).
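The O(n * W) table has one row per item and one column per weight; each cell chooses between skipping and taking the current item. A minimal sketch:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack: maximum value within the weight limit."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                  # option 1: skip item i-1
            if weights[i - 1] <= w:                  # option 2: take it
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]
```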

Backtracking

Backtracking is a general algorithmic technique for solving problems recursively by trying to build a solution incrementally, removing those solutions that fail to meet the criteria at any point.

N-Queens Problem

●​ Description: The problem of placing n queens on an n x n chessboard so that no two queens threaten each other. It is solved by backtracking.
●​ Time Complexity: O(n!) for the worst case.
●​ Space Complexity: O(n).
●​ Applications: Solving combinatorial problems.
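A backtracking sketch that counts solutions, placing one queen per row and pruning any column or diagonal that is already attacked:

```python
def n_queens(n):
    """Count solutions to the n-queens problem by backtracking on rows."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()   # attacked lines

    def place(row):
        nonlocal count
        if row == n:
            count += 1                         # all n queens placed
            return
        for col in range(n):
            if col in cols or row + col in diag1 or row - col in diag2:
                continue                       # attacked: prune this branch
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)
            cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)

    place(0)
    return count
```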

Graph Coloring

●​ Description: The problem of coloring the vertices of a graph such that no two
adjacent vertices have the same color. Backtracking is used to explore possible color
assignments.
●​ Time Complexity: O(c^n) where n is the number of vertices and c is the number of
colors.
●​ Space Complexity: O(n) for the coloring array.
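A backtracking sketch for the decision version: can the graph be coloured with a given number of colours? (Function name and the 0..n-1 vertex numbering are illustrative choices.)

```python
def can_color(adj, n, colors):
    """Backtracking m-coloring over vertices 0..n-1 (adjacency list adj)."""
    assignment = [None] * n

    def try_vertex(v):
        if v == n:
            return True                        # every vertex coloured
        for c in range(colors):
            if all(assignment[u] != c for u in adj[v]):  # no adjacent clash
                assignment[v] = c
                if try_vertex(v + 1):
                    return True
                assignment[v] = None           # backtrack
        return False

    return try_vertex(0)
```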

Hamiltonian Cycle

●​ Description: A cycle that visits each vertex exactly once and returns to the starting
vertex. The problem is solved by backtracking.
●​ Time Complexity: O(n!) in the worst case.
●​ Space Complexity: O(n).
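A backtracking sketch for the decision version on an undirected graph: extend a path from vertex 0, undoing each step that cannot reach a full cycle (vertex numbering and function name are illustrative):

```python
def has_hamiltonian_cycle(adj, n):
    """Backtracking search for a cycle visiting every vertex exactly once."""
    path = [0]
    visited = {0}

    def extend():
        if len(path) == n:
            return 0 in adj[path[-1]]          # can we close the cycle?
        for v in adj[path[-1]]:
            if v not in visited:
                visited.add(v); path.append(v)
                if extend():
                    return True
                visited.remove(v); path.pop()  # backtrack
        return False

    return extend()
```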

MCQs for Algorithm Design Techniques

Greedy Algorithms (MCQs 1-5)

1.​ Which of the following problems is solved using a greedy algorithm?​
a) Merge Sort​
b) Fibonacci Sequence​
c) Kruskal’s MST​
d) Longest Common Subsequence
○​ Answer: c) Kruskal’s MST
2.​ In Huffman Encoding, which of the following is true?​
a) All characters have the same length code.​
b) The most frequent characters have the longest code.​
c) The least frequent characters have the shortest code.​
d) More frequent characters have shorter codes.
○​ Answer: d) More frequent characters have shorter codes.
3.​ Which of the following is an example of a greedy algorithm?​
a) Binary Search​
b) Merge Sort​
c) Prim’s MST​
d) Fibonacci Sequence
○​ Answer: c) Prim’s MST
4.​ In the greedy approach, what does it mean to make a "locally optimal choice"?​
a) Choose the best solution overall.​
b) Choose the best solution for each individual subproblem.​
c) Choose the solution that looks most promising at the start.​
d) Make a choice that guarantees the global optimal solution.
○​ Answer: b) Choose the best solution for each individual subproblem.
5.​ Which algorithm is primarily used to find the minimum spanning tree of a
graph?​
a) Merge Sort​
b) Kruskal’s Algorithm​
c) Bellman-Ford Algorithm​
d) Dijkstra’s Algorithm
○​ Answer: b) Kruskal’s Algorithm

Divide and Conquer (MCQs 6-10)

6.​ What is the time complexity of Merge Sort?​
a) O(n)​
b) O(n log n)​
c) O(n^2)​
d) O(log n)
○​ Answer: b) O(n log n)
7.​ Which of the following is a characteristic of the Divide and Conquer approach?​
a) The problem is solved by breaking it into subproblems.​
b) The subproblems are solved without any recursion.​
c) The subproblems are solved in a greedy manner.​
d) The problem is solved by combining all subproblems at once.
○​ Answer: a) The problem is solved by breaking it into subproblems.
8.​ In Quick Sort, what is the main operation performed after choosing a pivot?​
a) Sorting the array in ascending order.​
b) Splitting the array into subarrays based on the pivot.​
c) Merging the subarrays.​
d) Sorting the subarrays.
○​ Answer: b) Splitting the array into subarrays based on the pivot.
9.​ What is the time complexity of Binary Search?​
a) O(n)​
b) O(log n)​
c) O(n log n)​
d) O(n^2)
○​ Answer: b) O(log n)
10.​Which algorithm is typically used for efficient sorting of large datasets?​
a) Bubble Sort​
b) Quick Sort​
c) Linear Search​
d) Insertion Sort
○​ Answer: b) Quick Sort

Divide and Conquer (MCQs 11-15)

11.​What is the main advantage of the Divide and Conquer technique?​
a) Simplicity of implementation.​
b) Efficient for problems that can be split into smaller subproblems.​
c) It guarantees the most optimal solution.​
d) It always provides a solution faster than brute force.
○​ Answer: b) Efficient for problems that can be split into smaller subproblems.
12.​Which of the following is an example of a Divide and Conquer algorithm?​
a) Merge Sort​
b) Depth-First Search (DFS)​
c) Linear Search​
d) Bubble Sort
○​ Answer: a) Merge Sort
13.​Which of the following algorithms does NOT follow the Divide and Conquer
paradigm?​
a) Merge Sort​
b) Quick Sort​
c) Binary Search​
d) Insertion Sort
○​ Answer: d) Insertion Sort
14.​In Quick Sort, after partitioning, the pivot is placed at which position in the
array?​
a) The beginning of the array.​
b) The middle of the array.​
c) The end of the array.​
d) The position where all elements to the left are smaller, and all elements to the right
are larger.
○​ Answer: d) The position where all elements to the left are smaller, and all
elements to the right are larger.
15.​Which of the following is true for Merge Sort?​
a) It is an in-place sorting algorithm.​
b) It has a time complexity of O(n log n).​
c) It is less efficient than Quick Sort in practice.​
d) It has a time complexity of O(n²).
○​ Answer: b) It has a time complexity of O(n log n).

Dynamic Programming (MCQs 16-18)

16.​Which of the following problems can be solved using Dynamic Programming?​
a) Matrix Chain Multiplication​
b) Quick Sort​
c) Merge Sort​
d) Depth-First Search (DFS)
○​ Answer: a) Matrix Chain Multiplication
17.​The time complexity of solving the Fibonacci sequence using Dynamic
Programming is?​
a) O(2^n)​
b) O(n)​
c) O(log n)​
d) O(n²)
○​ Answer: b) O(n)
18.​What is the space complexity of the Dynamic Programming approach for the
0/1 Knapsack Problem?​
a) O(n)​
b) O(n * W), where W is the weight capacity​
c) O(W)​
d) O(1)
○​ Answer: b) O(n * W), where W is the weight capacity

Backtracking (MCQs 19-20)

19.​Which of the following is an example of a problem that can be solved using Backtracking?​
a) Matrix Chain Multiplication​
b) Fibonacci Sequence​
c) N-Queens Problem​
d) Merge Sort
○​ Answer: c) N-Queens Problem
20.​What does backtracking involve in algorithmic problem solving?​
a) Finding the globally optimal solution directly.​
b) Trying every possible solution and returning the best one.​
c) Breaking the problem into smaller subproblems.​
d) Incrementally building a solution and abandoning it when it cannot be extended.
○​ Answer: d) Incrementally building a solution and abandoning it when it
cannot be extended.
