DSA Mcqs
● Stack
○ Definition, operations, and applications.
○ Implementation using arrays and linked lists.
○ Applications: Function calls, Backtracking, Parenthesis matching,
Infix-to-Postfix conversion, Recursion.
● Queue
○ Definition, operations, and applications.
○ Implementation using arrays and linked lists.
○ Circular Queue and Deque.
○ Priority Queue.
3. Trees
● General Trees and Binary Trees
● Tree Representations and Traversals
○ Preorder, Inorder, Postorder Traversal
● Binary Search Trees (BSTs)
○ Insertion and Deletion in BST.
● Balanced Search Trees
○ AVL Trees: Rotations, Insertion, Deletion.
○ 2-3 Trees: Properties and Operations.
○ Red-Black Trees: Properties, Rotations, Insertions, Deletions.
○ Splay Trees and Self-adjusting Trees.
● Special Trees
○ Huffman Trees and Huffman Algorithm.
○ B-Trees and M-way Search Trees.
5. Sorting Algorithms
● Internal Sorting
○ Comparison-based Sorting:
■ Bubble Sort, Selection Sort, Insertion Sort.
■ Merge Sort, Quick Sort, Heap Sort.
○ Non-comparison Sorting:
■ Counting Sort, Radix Sort, Bucket Sort.
● External Sorting
○ Two-way Merge Sort, Multi-way Merge Sort.
6. Searching Algorithms
● Linear Search
● Binary Search
● General Search Trees
7. Graphs and Graph Algorithms
7.1 Graph Basics
● Types of Graphs
○ Directed and Undirected Graphs.
○ Weighted and Unweighted Graphs.
○ Representation of Graphs: Adjacency Matrix, Adjacency List.
● Prim’s Algorithm
● Kruskal’s Algorithm
Data structures and algorithms are fundamental concepts in computer science that help
organize and manipulate data efficiently.
Basic Terminology
● Data: Raw facts and figures without meaning. Example: Numbers, Characters.
● Information: Processed data that has meaning. Example: A student’s marks
converted into a grade.
● Data Structure: A way to store and organize data efficiently. Example: Arrays,
Linked Lists.
● Algorithm: A step-by-step procedure to solve a problem.
● Complexity: The measure of the efficiency of an algorithm in terms of time
(execution time) and space (memory usage).
● ADTs define how a data structure behaves rather than how it is implemented.
● Example ADTs:
○ List: A collection of ordered elements.
○ Stack: A last-in, first-out (LIFO) structure.
○ Queue: A first-in, first-out (FIFO) structure.
Asymptotic Notations
Asymptotic notations describe the behavior of an algorithm as the input size grows.
3. Which of the following is NOT an example of an Abstract Data Type (ADT)?
a) Stack
b) Queue
c) Array
d) Linked List
○ Answer: c) Array
4. Which ADT follows the First In, First Out (FIFO) principle?
a) Stack
b) Queue
c) Priority Queue
d) Heap
○ Answer: b) Queue
5. Which ADT is used in function calls (call stack) in programming?
a) Queue
b) Stack
c) Linked List
d) Hash Table
○ Answer: b) Stack
Algorithm Analysis
Asymptotic Notations
An array is a collection of elements of the same data type stored at contiguous memory
locations. Each element can be accessed using an index.
Operations on Arrays
Applications of Arrays
A Singly Linked List (SLL) consists of nodes where each node contains a data field and a pointer (next) to the following node.
1. Insertion:
○ At the beginning: O(1).
○ At the end: O(n).
○ At a specific position: O(n).
2. Deletion:
○ First node: O(1).
○ Last node: O(n).
○ Specific node: O(n).
3. Searching:
○ Traversing until the element is found. O(n).
4. Traversing:
○ Visiting each node from head to tail. O(n).
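The O(1) head insertion and O(n) search above can be sketched in Python (the Node and SinglyLinkedList classes are illustrative, not from the source):

```python
class Node:
    """A singly-linked-list node: data plus a pointer to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_at_head(self, data):   # O(1): no traversal needed
        node = Node(data)
        node.next = self.head
        self.head = node

    def search(self, target):         # O(n): walk until found or end
        current = self.head
        while current is not None:
            if current.data == target:
                return True
            current = current.next
        return False

sll = SinglyLinkedList()
for value in (3, 2, 1):
    sll.insert_at_head(value)         # list is now 1 -> 2 -> 3
```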
A Circular Linked List is a variation where the last node points back to the first node,
forming a loop.
Types:
● Singly Circular Linked List: Last node’s next points to the first node.
● Doubly Circular Linked List: Both next and prev pointers create a circular structure.
Applications:
● Round-robin scheduling.
● Multiplayer games (turn-based).
● Buffer management in operating systems.
6. What is the time complexity of inserting a node at the head of a singly linked
list?
a) O(1)
b) O(n)
c) O(n log n)
d) O(log n)
○ Answer: a) O(1)
7. Which of the following is NOT a disadvantage of a singly linked list?
a) Requires extra memory for pointers
b) Cannot be traversed backward
c) Insertions and deletions are costly
d) No random access
○ Answer: c) Insertions and deletions are costly
8. Which of the following operations is more efficient in a singly linked list
compared to an array?
a) Searching
b) Traversing
c) Insertion at the beginning
d) Accessing an element at a specific index
○ Answer: c) Insertion at the beginning
9. What additional pointer does a node in a doubly linked list have?
a) Previous pointer
b) Middle pointer
c) Random pointer
d) Next-to-next pointer
○ Answer: a) Previous pointer
10.Which of the following is NOT an advantage of a doubly linked list?
a) Can be traversed in both directions
b) Requires extra memory for the previous pointer
c) Easier deletion of nodes
d) More efficient searching than singly linked list
● Answer: b) Requires extra memory for the previous pointer
13.What is the best-case time complexity for searching an element in a linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n²)
● Answer: a) O(1)
14.What is the time complexity for deleting a node in the middle of a linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n²)
● Answer: b) O(n)
11.What additional pointer does a doubly linked list have compared to a singly
linked list?
a) Next pointer
b) Previous pointer
c) Tail pointer
d) Random pointer
● Answer: b) Previous pointer
13.What is the time complexity of deleting a node from the middle of a doubly
linked list (when the node’s address is known)?
a) O(1)
b) O(n)
c) O(log n)
d) O(n²)
● Answer: a) O(1)
14.In which case is a doubly linked list more beneficial than a singly linked list?
a) When insertions and deletions are done at both ends
b) When the list needs less memory
c) When random access is needed
d) When using recursion
● Answer: a) When insertions and deletions are done at both ends
15.What is the disadvantage of a doubly linked list compared to a singly linked
list?
a) Slower traversal
b) More memory is required
c) Cannot store multiple data types
d) Difficult to implement
● Answer: b) More memory is required
Stack
Definition
A stack is a linear data structure that follows the LIFO (Last In, First Out) principle. This
means that the last element inserted into the stack is the first one to be removed.
Operations on Stack
1. Push(x): Inserts an element x at the top of the stack. O(1).
2. Pop(): Removes the top element of the stack. O(1).
3. Peek()/Top(): Returns the top element without removing it. O(1).
4. isEmpty(): Checks if the stack is empty. O(1).
5. isFull(): Checks if the stack is full (only in arrays). O(1).
Implementation of Stack
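A minimal list-backed implementation sketch in Python (the Stack class and its method names are illustrative):

```python
class Stack:
    """Array-backed stack; all operations are O(1) amortized."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)        # insert at the top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()     # remove and return the top

    def peek(self):
        if self.is_empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]       # top element, not removed

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(5); s.push(10); s.push(20)
top = s.pop()                        # 20: last in, first out
```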
Applications of Stack
1. Function Calls: Function calls in programming languages use a stack to store return
addresses.
2. Backtracking: Used in problems like the N-Queens problem and maze solving.
3. Parenthesis Matching: Used in checking balanced parentheses in expressions.
4. Infix to Postfix Conversion: Helps in evaluating expressions efficiently.
5. Recursion: Recursive function calls use an implicit stack for execution.
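Application 3 above (parenthesis matching) can be sketched with a stack in Python; is_balanced is an illustrative helper name, not from the source:

```python
def is_balanced(expr):
    """Return True if every bracket in expr is properly matched."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)                   # remember the opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                   # wrong or missing opener
    return not stack                           # leftovers mean unbalanced
```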
Queue
Definition
A queue is a linear data structure that follows the FIFO (First In, First Out) principle. This
means that the element inserted first will be removed first.
Operations on Queue
1. Enqueue(x): Inserts an element x at the rear (end) of the queue. O(1).
2. Dequeue(): Removes an element from the front of the queue. O(1).
3. Front(): Returns the front element without removing it. O(1).
4. Rear(): Returns the last element without removing it. O(1).
5. isEmpty(): Checks if the queue is empty. O(1).
6. isFull(): Checks if the queue is full (only in arrays). O(1).
Implementation of Queue
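A sketch using Python's collections.deque, which makes both enqueue and dequeue O(1) (a plain list would make dequeue O(n)); the Queue class is illustrative:

```python
from collections import deque

class Queue:
    def __init__(self):
        self._items = deque()

    def enqueue(self, x):
        self._items.append(x)         # insert at the rear: O(1)

    def dequeue(self):
        if self.is_empty():
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()  # remove from the front: O(1)

    def front(self):
        return self._items[0]

    def is_empty(self):
        return len(self._items) == 0

q = Queue()
for x in (10, 20, 30):
    q.enqueue(x)
first = q.dequeue()                   # 10: first in, first out
```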
Types of Queues
1. Circular Queue: The last position connects back to the first position (helps in
reducing memory wastage).
2. Deque (Double-Ended Queue): Allows insertion and deletion from both ends.
3. Priority Queue: Elements are dequeued based on priority rather than the insertion
order.
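The priority queue variant can be sketched with Python's heapq module (a binary min-heap, so the smallest priority value is dequeued first):

```python
import heapq

pq = []                              # heapq treats a plain list as a min-heap
heapq.heappush(pq, (2, "medium"))    # (priority, item) pairs
heapq.heappush(pq, (1, "urgent"))
heapq.heappush(pq, (3, "low"))

# Elements come out by priority, not insertion order.
first = heapq.heappop(pq)            # (1, "urgent")
```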
Applications of Queue
1. Which data structure follows the Last In, First Out (LIFO) principle?
a) Queue
b) Stack
c) Linked List
d) Graph
○ Answer: b) Stack
2. What is the time complexity of the Push operation in a stack?
a) O(1)
b) O(n)
c) O(log n)
d) O(n²)
○ Answer: a) O(1)
3. Which of the following is NOT an application of stacks?
a) Backtracking
b) Recursion
c) CPU Scheduling
d) Parenthesis Matching
○ Answer: c) CPU Scheduling
4. What happens when a stack overflows?
a) The program crashes
b) The stack deletes old elements
c) The stack increases its size automatically
d) New elements are ignored
○ Answer: a) The program crashes
5. Which data structure is used to convert infix expressions to postfix
expressions?
a) Queue
b) Stack
c) Linked List
d) Tree
○ Answer: b) Stack
6. What will be the result of the following stack operations?
Push(5)
Push(10)
Push(20)
Pop()
Peek()
a) 10
b) 20
c) 5
d) 15
○ Answer: a) 10
7. Which of the following is used in function calls?
a) Stack
b) Queue
c) Linked List
d) Hash Table
○ Answer: a) Stack
8. What is the space complexity of a stack storing 'n' elements?
a) O(1)
b) O(n)
c) O(log n)
d) O(n log n)
○ Answer: b) O(n)
9. Which operation is performed last in recursion?
a) Function Call
b) Push
c) Pop
d) Return
○ Answer: d) Return
10.Which of the following data structures can be used to implement recursion?
a) Queue
b) Stack
c) Graph
d) Tree
● Answer: b) Stack
11.Which data structure follows the First In, First Out (FIFO) principle?
a) Stack
b) Queue
c) Heap
d) Tree
● Answer: b) Queue
12.What is the time complexity of the Dequeue operation in a queue?
a) O(1)
b) O(n)
c) O(log n)
d) O(n²)
● Answer: a) O(1)
13.Which of the following is NOT a type of queue?
a) Circular Queue
b) Priority Queue
c) Stack Queue
d) Deque
● Answer: c) Stack Queue
14.What is a major problem with a simple queue implementation using arrays?
a) Memory wastage due to fixed size
b) Cannot perform enqueue operation
c) Dequeue operation is slow
d) Elements cannot be accessed randomly
● Answer: a) Memory wastage due to fixed size
15.Which queue allows insertion and deletion from both ends?
a) Simple Queue
b) Circular Queue
c) Priority Queue
d) Deque
● Answer: d) Deque
16.Which scheduling algorithm uses a queue?
a) Round Robin Scheduling
b) Priority Scheduling
c) First-Come, First-Served Scheduling
d) All of the above
● Answer: d) All of the above
17.What happens when a circular queue is full?
a) Enqueue is not possible
b) Memory increases automatically
c) Elements are shifted
d) Rear pointer resets to 0
● Answer: a) Enqueue is not possible
18.What is the front element of a queue containing [10, 20, 30] after performing
one dequeue operation?
a) 10
b) 20
c) 30
d) Queue is empty
● Answer: b) 20
19.Which queue allows elements to be dequeued based on priority rather than
insertion order?
a) Deque
b) Priority Queue
c) Circular Queue
d) Stack
● Answer: b) Priority Queue
20.Which of the following applications use a queue?
a) Graph BFS traversal
b) Printer job scheduling
c) Call center system
d) All of the above
● Answer: d) All of the above
A tree is a hierarchical data structure that consists of nodes. The top node is called the
root, and each node may have child nodes.
General Trees
A general tree is a tree where each node can have any number of children.
Binary Trees
A binary tree is a special type of tree where each node has at most two children:
● Left Child
● Right Child
Tree Representation
Tree Traversals
Tree traversal refers to visiting each node in a tree exactly once in a systematic way.
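The three depth-first orders can be sketched recursively in Python (the Node class is illustrative):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(node, out):
    if node:
        out.append(node.data)        # root first
        preorder(node.left, out)
        preorder(node.right, out)

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.data)        # root between subtrees
        inorder(node.right, out)

def postorder(node, out):
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.data)        # root last

#       2
#      / \
#     1   3
root = Node(2, Node(1), Node(3))
```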
Insertion in BST
Deletion in BST
Three cases to consider:
1. The node is a leaf: simply remove it.
2. The node has one child: replace the node with its child.
3. The node has two children: replace its key with its inorder successor (the smallest key in the right subtree), then delete that successor node.
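Insertion can be sketched as follows (illustrative Python; the deletion cases are omitted for brevity):

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Insert key and return the (possibly new) subtree root. O(h) time."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root                       # duplicates are ignored

def bst_search(root, key):
    """Descend left or right by comparison until found or absent."""
    while root and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in (50, 30, 70, 20, 40):
    root = bst_insert(root, k)
```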
6. Which tree traversal method visits the root node first?
a) Preorder
b) Inorder
c) Postorder
d) Level Order
○ Answer: a) Preorder
7. Which traversal method is used to process an expression tree in prefix
notation?
a) Preorder
b) Inorder
c) Postorder
d) Level Order
○ Answer: a) Preorder
8. Which tree traversal method gives elements in sorted order for a BST?
a) Preorder
b) Inorder
c) Postorder
d) Level Order
○ Answer: b) Inorder
9. Which of the following tree traversal methods processes nodes in
left-right-root order?
a) Preorder
b) Inorder
c) Postorder
d) Level Order
○ Answer: c) Postorder
10.Which of the following is NOT a depth-first traversal technique?
a) Inorder
b) Preorder
c) Postorder
d) Level Order
● Answer: d) Level Order
AVL Trees
An AVL tree is a self-balancing binary search tree in which the difference in heights
between the left and right subtrees of any node is at most 1. This height difference is called
the balance factor and must be between -1 and 1.
When an insertion or deletion operation causes the balance factor of a node to become less
than -1 or greater than 1, rotations are performed to restore balance.
● Right Rotation (LL case): used when a node is left-heavy because its left child has a left child; a single right rotation restores balance.
● Left Rotation (RR case): used when a node is right-heavy because its right child has a right child; a single left rotation restores balance.
● Left-Right Rotation (LR case): a left rotation on the left child followed by a right rotation on the node, used when the left child has a right child.
● Right-Left Rotation (RL case): a right rotation on the right child followed by a left rotation on the node, used when the right child has a left child.
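As an illustration, a single left rotation, which fixes a right-heavy (right-right) imbalance, can be sketched as follows; AVL height and balance-factor bookkeeping is omitted, and the node class is hypothetical:

```python
class AVLNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(z):
    """z is right-heavy: lift z.right (y) to be the new subtree root."""
    y = z.right
    z.right = y.left      # y's left subtree becomes z's right subtree
    y.left = z            # z becomes y's left child
    return y              # y is the new root of this subtree

# Right-heavy chain 1 -> 2 -> 3 (the right-right case):
z = AVLNode(1, right=AVLNode(2, right=AVLNode(3)))
root = rotate_left(z)     # balanced: 2 with children 1 and 3
```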
2-3 Trees
A 2-3 tree is a self-balancing tree in which each internal node has either two or three
children and stores either one or two keys. The tree is kept balanced, ensuring that all leaf
nodes are at the same level.
● Insertion: When a new key is inserted, the tree may split nodes to maintain balance.
● Deletion: Deletion involves merging nodes if necessary to keep the tree balanced.
Red-Black Trees
A red-black tree is a binary search tree with an extra bit of storage per node called the
color bit, which can be either red or black. This tree ensures that the tree remains
approximately balanced, guaranteeing O(log n) time for search, insertion, and deletion
operations.
● Deletion is more complex than insertion because it can violate multiple properties.
● After deletion, a series of recoloring and rotations may be needed to restore the
tree’s balance.
Splay Trees
A splay tree is a self-adjusting binary search tree where recently accessed elements are moved to the root to speed up future accesses. This makes frequently accessed elements faster to find.
● Splaying: Every operation (search, insert, delete) on the tree performs a splay
operation to move the accessed node to the root.
● No extra space required: Unlike AVL or red-black trees, splay trees do not require
extra storage for balancing factors or color bits.
Splaying Operations
● Zig: Single rotation when the accessed node is the child of the root.
● Zig-Zig: Double rotation when the accessed node and its parent are both left or both
right children.
● Zig-Zag: Double rotation when the accessed node is a left child and its parent is a
right child (or vice versa).
Special Trees
Huffman Trees and Huffman Algorithm
A Huffman tree is a type of binary tree used in the Huffman encoding algorithm to assign
variable-length codes to input characters, with shorter codes for more frequent characters.
Huffman Algorithm
A B-tree is a self-balancing search tree where each node can have multiple children and
stores multiple keys. It is designed to minimize disk access in systems with large amounts of
data.
Properties of B-Trees
An M-way search tree is a generalization of the binary search tree, where each node can
have M children, and the tree is balanced. These trees are widely used in databases and file
systems.
Hashing
● Hash Function: A function that takes a key and maps it to an index (hash value) in the hash table. The hash function is responsible for distributing the keys uniformly across the table.
● Hash Table: An array where data is stored at indices determined by the hash
function.
Properties of a Good Hash Function
● Deterministic: The same input will always produce the same hash value.
● Uniform Distribution: The hash values should be evenly distributed to minimize
collisions.
● Efficient: The hash function should be computationally simple and fast.
● Minimizes Collisions: Collisions occur when two different keys hash to the same
index. A good hash function reduces the likelihood of this happening.
● Division Method: h(k) = k mod m, where k is the key and m is the size of the hash table.
● Multiplicative Method: h(k) = ⌊m · (kA mod 1)⌋, where A is a constant (0 < A < 1) and m is the table size.
● Cryptographic Hash Functions: Used for security, such as SHA-256 (MD5 is also common historically but is no longer considered secure).
1. Chaining
In chaining, each index of the hash table points to a linked list of keys that hash to the same index. This allows multiple keys to exist at the same index.
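A chaining sketch in Python, using lists as the per-slot chains instead of linked lists for brevity (the ChainedHashTable class is illustrative):

```python
class ChainedHashTable:
    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]   # one chain per slot

    def _index(self, key):
        return hash(key) % self.m               # division-method index

    def put(self, key, value):
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)         # update existing key
                return
        chain.append((key, value))              # collision -> same chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

t = ChainedHashTable(m=4)
t.put("a", 1); t.put("b", 2)
```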
Advantages:
● Simple to implement.
● Supports dynamic resizing.
● Average search time is reduced if the table is sparsely populated.
Disadvantages:
● Extra memory is needed for the chain pointers.
● Poor cache locality compared with storing elements in the table itself.
● Worst-case search degrades to O(n) when many keys hash to the same index.
2. Open Addressing
In open addressing, all elements are stored in the hash table itself, and when a collision
occurs, alternative locations are sought using a process called probing.
Probing Methods:
● Linear Probing: If a collision occurs at index i, the slots i+1, i+2, and so on are checked until an empty slot is found.
● Quadratic Probing: In case of a collision, check slots at i+1², i+2², and so on, where i is the original hash index.
● Double Hashing: If a collision occurs, use a second hash function to find the next
available slot.
Advantages:
● More memory efficient as it does not require additional structures like linked lists.
● Can be more efficient in practice for small tables.
Disadvantages:
● Slower for large tables due to clustering (when many elements hash to the same or
nearby locations).
● The table can become full faster, requiring resizing and rehashing.
Indexing Methods:
1. Single-Level Indexing: A single index file that points to data records.
2. Multi-Level Indexing: Hierarchical indexing where a higher-level index points to
lower-level indices that, in turn, point to data records.
3. B-Trees and B+ Trees: These are used in databases to maintain a balanced tree
structure for efficient indexing and search operations.
4. Bitmap Indexing: Uses bitmaps to represent the presence or absence of data in
specific columns.
Advantages of Indexing:
Suffix Trees
A suffix tree is a compressed trie of all the suffixes of a string. It allows for fast string matching and is particularly useful in applications like bioinformatics and text processing.
Applications:
● Pattern matching and searching in large texts.
● Data compression algorithms.
Tries
A trie (or prefix tree) is a tree-like data structure that stores strings in a way that allows for
efficient searching. Each node represents a single character, and paths represent words or
prefixes.
Properties of Tries:
Applications:
6. Which indexing method uses bitmaps to represent the presence or absence of
data?
a) Single-Level Indexing
b) B-Tree Indexing
c) Bitmap Indexing
d) Multi-Level Indexing
○ Answer: c) Bitmap Indexing
7. Which of the following is true for a suffix tree?
a) It stores all substrings of a string.
b) It is slower than a trie.
c) It is used for prefix-based search only.
d) It only supports exact matching.
○ Answer: a) It stores all substrings of a string.
8. In a trie, each node represents:
a) A complete word
b) A character
c) A key-value pair
d) A string
○ Answer: b) A character
9. Which of the following is a primary application of tries?
a) Binary search in arrays
b) Spell-checking
c) Sorting
d) Data compression
○ Answer: b) Spell-checking
10.Which structure allows fast substring search and pattern matching?
a) Trie
b) Hash Table
c) Suffix Tree
d) Binary Tree
● Answer: c) Suffix Tree
11.Which of the following is a feature of a good hash function?
a) It must always produce the same output for the same input.
b) It must always map every key to a unique index.
c) It must generate different outputs for every key.
d) It must store keys in sorted order.
● Answer: a) It must always produce the same output for the same input.
12.Which of the following techniques is used to resolve collisions in hashing by
storing elements in linked lists at each index?
a) Open Addressing
b) Quadratic Probing
c) Chaining
d) Double Hashing
● Answer: c) Chaining
13.What is the worst-case time complexity for searching in a hash table using
chaining?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
● Answer: c) O(n)
14.In open addressing, which probing technique uses the second hash function to
find the next available slot?
a) Linear Probing
b) Quadratic Probing
c) Double Hashing
d) Chaining
● Answer: c) Double Hashing
15.What is the primary advantage of open addressing over chaining?
a) No extra memory is required for linked lists.
b) It is faster in practice.
c) It supports more data.
d) It uses less computation power.
● Answer: a) No extra memory is required for linked lists.
Comparison-based Sorting
Non-comparison Sorting
Non-comparison sorting algorithms exploit properties of the element values themselves (such as digits or a bounded key range) rather than comparing elements pairwise. Under suitable conditions they can outperform comparison-based algorithms, which are bound by the O(n log n) comparison lower bound.
1. Which of the following sorting algorithms has the worst-case time complexity
of O(n^2)?
a) Merge Sort
b) Quick Sort
c) Insertion Sort
d) Heap Sort
○ Answer: c) Insertion Sort
2. Which sorting algorithm is considered the most efficient for large datasets with
average time complexity of O(n log n)?
a) Bubble Sort
b) Merge Sort
c) Quick Sort
d) Selection Sort
○ Answer: b) Merge Sort
3. What is the best-case time complexity for Quick Sort?
a) O(n log n)
b) O(n^2)
c) O(1)
d) O(n)
○ Answer: a) O(n log n)
4. Which of the following sorting algorithms uses a "divide and conquer"
approach?
a) Bubble Sort
b) Merge Sort
c) Insertion Sort
d) Selection Sort
○ Answer: b) Merge Sort
5. Which sorting algorithm is not comparison-based?
a) Merge Sort
b) Quick Sort
c) Radix Sort
d) Insertion Sort
○ Answer: c) Radix Sort
6. What is the time complexity of Heap Sort?
a) O(n log n)
b) O(n^2)
c) O(n)
d) O(log n)
○ Answer: a) O(n log n)
7. Which sorting algorithm is known for its "stability"?
a) Selection Sort
b) Heap Sort
c) Merge Sort
d) Quick Sort
○ Answer: c) Merge Sort
8. What is the space complexity of Merge Sort?
a) O(1)
b) O(n)
c) O(log n)
d) O(n log n)
○ Answer: b) O(n)
9. Which algorithm performs better when the data is already nearly sorted?
a) Selection Sort
b) Quick Sort
c) Insertion Sort
d) Merge Sort
○ Answer: c) Insertion Sort
10.Which sorting algorithm is most efficient when the range of input values is
small?
a) Radix Sort
b) Merge Sort
c) Quick Sort
d) Counting Sort
● Answer: d) Counting Sort
Linear Search
● Description: The algorithm iterates through the list sequentially and compares each element with the target value.
● Time Complexity:
○ Worst Case: O(n)
○ Best Case: O(1) (if the element is found at the first position)
● Space Complexity: O(1) (in-place)
● Applications: Suitable for small lists or unsorted data.
Binary Search
● Description: The algorithm compares the target value to the middle element of a sorted list. If the target is smaller, it narrows the search to the left half; if larger, to the right half. This process is repeated until the element is found or the search interval is empty.
● Time Complexity:
○ Worst Case: O(log n)
○ Best Case: O(1) (if the target is found at the middle)
● Space Complexity: O(1) (for iterative implementation)
● Applications: Suitable for large datasets that are sorted, especially when searching
for individual values.
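The iterative binary search described above can be sketched as:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1. O(log n) time."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1          # search the right half
        else:
            hi = mid - 1          # search the left half
    return -1
```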
General Search Trees
● Description: In general search trees, each node can have any number of children. Searching within such trees typically involves traversing the tree in a specific order (e.g., Preorder, Inorder, or Postorder traversal).
● Time Complexity:
○ Worst Case: O(n) (linear search through the tree)
● Space Complexity: O(n) (due to tree structure)
● Applications: Useful for hierarchical data structures, such as file systems, multi-level
indexing in databases, and more complex search queries.
1. What is the time complexity of linear search in the worst case?
a) O(1)
b) O(log n)
c) O(n)
d) O(n log n)
○ Answer: c) O(n)
2. Which of the following is true about linear search?
a) It requires the list to be sorted.
b) It is more efficient than binary search for large datasets.
c) It is easy to implement but inefficient for large datasets.
d) It divides the list into two halves.
○ Answer: c) It is easy to implement but inefficient for large datasets.
3. In which case does the linear search algorithm terminate after checking the
first element?
a) When the target element is at the first position.
b) When the list is empty.
c) When the list is sorted.
d) When the target element is in the last position.
○ Answer: a) When the target element is at the first position.
4. What is the main advantage of linear search over binary search?
a) It works on sorted lists only.
b) It is more efficient in terms of time complexity.
c) It does not require the list to be sorted.
d) It can handle large data better.
○ Answer: c) It does not require the list to be sorted.
5. What is the space complexity of linear search?
a) O(1)
b) O(n)
c) O(log n)
d) O(n log n)
○ Answer: a) O(1)
11.What is the time complexity of searching in a general search tree in the worst
case?
a) O(log n)
b) O(n)
c) O(n log n)
d) O(1)
● Answer: b) O(n)
12.Which of the following is a characteristic of general search trees?
a) Each node can only have two children.
b) Each node can have any number of children.
c) The tree must be balanced.
d) It is primarily used for binary search.
● Answer: b) Each node can have any number of children.
13.Which traversal method can be used to search through a general search tree?
a) Preorder traversal
b) Inorder traversal
c) Postorder traversal
d) All of the above
● Answer: d) All of the above
14.In which of the following scenarios would you prefer using a general search
tree?
a) Searching for an element in a sorted array.
b) Searching hierarchical data, like file systems.
c) Searching a list of numbers.
d) Searching through a database of flat records.
● Answer: b) Searching hierarchical data, like file systems.
15.What is the time complexity of searching for an element in a binary search tree
(BST)?
a) O(n)
b) O(log n)
c) O(n log n)
d) O(1)
● Answer: b) O(log n) (for a balanced tree)
Types of Graphs
● Directed Graph (Digraph): In this graph, edges have a direction. The edge (u, v)
means that there is a directed edge from node u to node v. The direction matters.
● Undirected Graph: In this graph, edges do not have a direction. The edge (u, v) is
the same as (v, u).
● Weighted Graph: A graph in which each edge has a weight or cost associated with
it. The weight typically represents the distance, cost, or any other measure of the
edge.
● Unweighted Graph: A graph in which edges do not have any weights associated
with them.
Representation of Graphs
● Adjacency Matrix: A 2D array where each cell (i, j) represents the presence or
absence of an edge between vertex i and vertex j. It is commonly used for dense
graphs.
○ Time complexity for BFS/DFS: O(V^2) where V is the number of vertices.
● Adjacency List: A collection of lists or linked lists, where each vertex stores a list of
adjacent vertices. This representation is more efficient for sparse graphs.
○ Time complexity for BFS/DFS: O(V + E) where V is the number of vertices
and E is the number of edges.
● Description: BFS explores all the vertices at the current depth level before moving
on to vertices at the next depth level. It uses a queue to explore the graph.
● Time Complexity: O(V + E) where V is the number of vertices and E is the number
of edges.
● Space Complexity: O(V) (due to the queue and visited list).
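The BFS description above can be sketched with an adjacency list (a plain dict) and a queue:

```python
from collections import deque

def bfs(graph, start):
    """Return vertices in BFS order from start. O(V + E) time."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        v = queue.popleft()              # FIFO: finish this level first
        order.append(v)
        for neighbor in graph[v]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```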
Topological Sorting
● Description: Topological sorting is only possible in Directed Acyclic Graphs
(DAGs). It arranges the vertices in a linear order such that for every directed edge (u,
v), vertex u comes before vertex v.
● Time Complexity: O(V + E) where V is the number of vertices and E is the number
of edges.
Prim’s Algorithm
Kruskal’s Algorithm
● Description: A greedy algorithm that adds edges in increasing order of their weight,
as long as they don’t form a cycle.
● Time Complexity: O(E log E), where E is the number of edges (sorting edges).
● Space Complexity: O(V).
MCQs for Graphs and Graph Algorithms
Graph Basics (MCQs 1-5)
11.Which of the following algorithms is used to find the shortest path in a graph
with negative edge weights?
a) Dijkstra’s Algorithm
b) Floyd-Warshall Algorithm
c) Bellman-Ford Algorithm
d) Prim’s Algorithm
● Answer: c) Bellman-Ford Algorithm
12.Which algorithm can be used to find the shortest path between all pairs of
vertices?
a) Dijkstra’s Algorithm
b) Bellman-Ford Algorithm
c) Floyd-Warshall Algorithm
d) BFS
● Answer: c) Floyd-Warshall Algorithm
13.What is the primary difference between Dijkstra’s Algorithm and Bellman-Ford
Algorithm?
a) Dijkstra’s Algorithm works only for directed graphs.
b) Bellman-Ford can handle graphs with negative weights, but Dijkstra cannot.
c) Dijkstra’s Algorithm is slower than Bellman-Ford.
d) Bellman-Ford is faster than Dijkstra’s Algorithm.
● Answer: b) Bellman-Ford can handle graphs with negative weights, but Dijkstra
cannot.
14.Which algorithm is used for finding the shortest path from a source node to all
other nodes in a graph with non-negative weights?
a) Bellman-Ford Algorithm
b) Dijkstra’s Algorithm
c) Floyd-Warshall Algorithm
d) A* Algorithm
● Answer: b) Dijkstra’s Algorithm
15.What is the time complexity of Floyd-Warshall’s Algorithm?
a) O(V^2)
b) O(V^3)
c) O(V + E)
d) O(E log V)
● Answer: b) O(V^3)
Greedy algorithms build a solution by making the locally optimal choice at each stage, in the hope that these local choices lead to a globally optimal solution. They apply to problems with the greedy-choice property, where a locally optimal step is always part of some optimal overall solution.
Prim’s Algorithm
● Description: Another greedy algorithm for finding the MST of a connected graph. It
grows the MST one edge at a time, selecting the smallest edge that connects a
vertex in the tree to a vertex outside the tree.
● Time Complexity: O(E log V) where E is the number of edges and V is the number
of vertices.
● Applications: Network design, computer networking.
Divide and Conquer is a problem-solving paradigm that divides the problem into
subproblems, solves each subproblem independently, and combines their solutions to solve
the original problem.
Merge Sort
● Description: A sorting algorithm that divides the array into two halves, recursively
sorts them, and then merges the sorted halves.
● Time Complexity: O(n log n) where n is the number of elements in the array.
● Space Complexity: O(n)
● Applications: Sorting large datasets.
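The divide-and-conquer steps above can be sketched as follows (illustrative; returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Sort arr into a new list. O(n log n) time, O(n) extra space."""
    if len(arr) <= 1:
        return arr                       # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # divide ...
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0              # ... then merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```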
Quick Sort
● Description: A sorting algorithm that selects a "pivot" element and partitions the
array around the pivot so that elements less than the pivot go to the left, and
elements greater go to the right. It then recursively sorts the left and right partitions.
● Time Complexity: O(n log n) on average, but O(n²) in the worst case.
● Space Complexity: O(log n) for the recursive stack.
● Applications: Sorting, database query optimization.
Binary Search
Dynamic Programming
Dynamic programming (DP) is a technique for solving problems by breaking them down into
simpler subproblems and storing the results of these subproblems to avoid redundant
calculations.
Fibonacci Sequence
● Description: The sequence where each number is the sum of the two preceding
ones, usually starting with 0 and 1. It can be solved using dynamic programming by
storing the results of subproblems to avoid redundant computation.
● Time Complexity: O(n) where n is the number of terms.
● Space Complexity: O(n) for the DP table.
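The bottom-up DP version described above can be sketched as:

```python
def fib(n):
    """n-th Fibonacci number via a bottom-up DP table. O(n) time."""
    if n < 2:
        return n
    table = [0] * (n + 1)     # table[i] holds fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```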
Matrix Chain Multiplication
● Description: The problem of finding the most efficient way to multiply a chain of matrices. The goal is to minimize the number of scalar multiplications.
● Time Complexity: O(n³) where n is the number of matrices.
● Space Complexity: O(n²) for storing the DP table.
● Applications: Optimization in matrix computations.
0/1 Knapsack Problem
● Description: A problem where a set of items, each with a weight and value, must be selected such that the total weight does not exceed a given limit while maximizing the total value. It is solved using dynamic programming.
● Time Complexity: O(n * W) where n is the number of items and W is the maximum
weight capacity.
● Space Complexity: O(n * W).
Backtracking
N-Queens Problem
Graph Coloring
● Description: The problem of coloring the vertices of a graph such that no two
adjacent vertices have the same color. Backtracking is used to explore possible color
assignments.
● Time Complexity: O(c^n) where n is the number of vertices and c is the number of
colors.
● Space Complexity: O(n) for the coloring array.
Hamiltonian Cycle
● Description: A cycle that visits each vertex exactly once and returns to the starting
vertex. The problem is solved by backtracking.
● Time Complexity: O(n!) in the worst case.
● Space Complexity: O(n).