Ads & Aa Question Bank-Key
Asymptotic notations are used to describe the behavior of functions as the input size
approaches infinity. These notations help in analyzing the efficiency of algorithms by providing
a way to compare their growth rates.
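Example: if an algorithm performs f(n) = 3n^2 + 5n + 2 basic operations on an input of size n, then f(n) = O(n^2), f(n) = Omega(n^2), and therefore f(n) = Theta(n^2), because the n^2 term dominates as n grows.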
Database Indexing: B-Trees are used to create indexes in databases for fast retrieval of data.
They allow efficient searching, insertion, and deletion operations.
File Systems: In file systems like NTFS, B-Trees are used to store directory structures and
file metadata, enabling efficient access to files.
Multi-Level Indexing: B-Trees are used in multi-level indexing for quick access to large
datasets, such as in search engines and large-scale databases.
Data Compression: B-Trees are applied in data compression algorithms, helping in efficient
storage and retrieval of compressed data.
An AVL tree is a self-balancing binary search tree (BST), where the difference in height
between the left and right subtrees of any node (called the balance factor) is at most 1. This
ensures that the tree remains balanced, providing O(log n) time complexity for search, insert,
and delete operations.
1. Searching and Sorting: AVL trees are used in dynamic searching and sorting
operations where data needs to be quickly inserted, deleted, or searched, such as in
database indexing.
2. Priority Queues: Used in implementing priority queues to ensure efficient insertion and
deletion of elements based on priority.
3. Memory Management: AVL trees are used in memory management algorithms, such as
in free list management to keep track of available memory blocks.
4. Maintaining Sorted Data: Applied in applications where the data is constantly
changing, like in file systems that need efficient data retrieval and updates.
Balance Factor is the difference in height between the left and right subtrees of a node in an
AVL tree.
For an AVL tree to remain balanced, the balance factor of every node must be -1, 0, or +1.
        30
       /  \
     20    40
    /
  10
All balance factors are within [-1, 0, +1], so it's a valid AVL tree.
1) Explain the concept of algorithm analysis and its importance. Discuss different types of
complexities involved.
2) Describe the time and space complexity analysis of algorithms. How are Big O, Big Theta,
and Big Omega used to express complexity?
3) What are AVL trees? Explain the process of insertion and deletion in AVL trees with
examples.
An AVL tree is defined as a self-balancing Binary Search Tree (BST) where the
difference between the heights of the left and right subtrees for any node cannot be more than one.
The difference between the heights of the left subtree and the right subtree for any node is
known as the balance factor of the node.
The AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii Landis,
who published it in their 1962 paper “An algorithm for the organization of information”.
Time Complexities: Search, insertion, and deletion in an AVL tree each take O(log n) time, where n is the number of nodes.
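The following is a minimal illustrative sketch in Python of how AVL insertion keeps every balance factor in {-1, 0, +1} by rotating on the way back up from the insertion point. The names (Node, rotate_left, balance_factor, etc.) are my own and distinct keys are assumed; it is a sketch of the general technique, not the only way to code it.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.height = 1                     # height of a single node is 1

def height(n):
    return n.height if n else 0

def balance_factor(n):
    return height(n.left) - height(n.right) if n else 0

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(y):
    x = y.left
    y.left = x.right
    x.right = y
    update(y); update(x)
    return x

def rotate_left(x):
    y = x.right
    x.right = y.left
    y.left = x
    update(x); update(y)
    return y

def insert(root, key):
    # 1. Standard BST insertion
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    # 2. Update height and rebalance this ancestor if needed
    update(root)
    bf = balance_factor(root)
    if bf > 1 and key < root.left.key:       # Left-Left case
        return rotate_right(root)
    if bf < -1 and key > root.right.key:     # Right-Right case
        return rotate_left(root)
    if bf > 1 and key > root.left.key:       # Left-Right case
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if bf < -1 and key < root.right.key:     # Right-Left case
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

# Usage: inserting 30, 20, 40, 10 reproduces the small balanced tree shown earlier.
root = None
for k in [30, 20, 40, 10]:
    root = insert(root, k)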
Construct an AVL Tree with the following elements
Structure:
Nodes: B-trees consist of nodes, each of which can hold multiple keys and pointers to
child nodes.
Children: The number of child nodes a node has is always one more than the number
of keys it contains.
Leaf Nodes: Nodes at the bottom level of the tree, holding the actual data entries.
Properties:
Balanced:
All leaf nodes are at the same level, ensuring consistent performance for all operations.
Minimum Degree:
A B-tree is defined by its minimum degree t, which determines the minimum number of keys (t - 1) that each node except the root must contain.
Maximum Keys:
Each node can hold at most 2t - 1 keys (equivalently, m - 1 keys for a B-tree of order m), so the tree stays shallow and B-trees minimize the number of disk accesses required for search, insertion, and deletion operations.
High Performance:
Their balanced structure guarantees logarithmic time complexity for these operations,
making them suitable for large datasets.
Dynamic:
B-trees adapt gracefully to data modifications, making them useful for dynamic
environments.
Applications:
Databases:
B-trees are widely used as the underlying data structure for indexing in relational databases.
File Systems:
Many file systems use B-trees to manage the directory structure and file location
information.
Properties of B Tree
1. Every node has at most m children, where m is the order of the B Tree.
2. A node having K children consists of K-1 keys.
3. Every non-leaf node, excluding the root node, must have at least ⌈m/2⌉ child nodes.
4. The root node must have at least two children if it is not the leaf node.
5. Unlike the other trees, the height of a B Tree increases upwards toward the
root node, and the insertion happens at the leaf node.
6. The time complexity of all the operations of a B Tree is O(log n), where 'n' is
the number of data elements present in the B Tree.
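As an illustration of this O(log n) behaviour, here is a minimal Python sketch of searching a B Tree; the field names keys, children, and leaf are assumptions of this sketch, not a fixed API.

def btree_search(node, key):
    # Find the first key in this node that is >= key.
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    # Found the key in this node.
    if i < len(node.keys) and node.keys[i] == key:
        return node, i
    # Reached a leaf without finding the key.
    if node.leaf:
        return None
    # Otherwise descend into the appropriate child (one node read per level).
    return btree_search(node.children[i], key)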
Insertion of an element in a B Tree involves two main operations:
1. Searching for the appropriate node where the element will be inserted.
2. Splitting of the node, if required.
Step 1: If the Tree is empty, a root node is allocated, and we will insert the key.
Step 2: We will then update the allowed number of keys in the node.
Step 3: We will then search for the appropriate node for the insertion of the element.
Step 4: If the node is full, we will follow the steps shown below.
Step 4.1: Insert the element into the node in increasing (sorted) order.
Step 4.2: Once the data elements exceed their limit, we will split the node at the median.
Step 4.3: We will then push the median key upwards, making the left keys the left
child node and the right keys the right child node.
Step 5: If the node is not full, the element is simply inserted into the node in increasing (sorted) order.
Let us understand the steps mentioned above with the illustrations shown below.
Suppose that the following are some data elements that need to be inserted in a B Tree:
7, 8, 9, 10, 11, 16, 21, and 18.
1. Since the maximum degree of a node in the tree is 3, the maximum number
of keys per node will be 3 - 1 = 2.
2. We will insert the first data element, i.e., 7, into the tree; it becomes the only key of the root node.
3. We will insert the next data element, i.e., 8, into the tree. Since 8 is greater than 7,
it will be inserted to the right of 7 in the same node.
4. Similarly, we will insert another data element, 9, into the tree, to the right of 8
in the same node. However, since the maximum number of keys per node can only be 2, the node
will split, pushing the median key 8 upward, making 7 the key of the left child node and
9 the key of the right child node.
5. We will insert the next data element, i.e., 10, into the tree. Since 10 is greater than
9, it will be inserted as a key on the right of the node containing 9 as a key.
6. We will now insert another data element, 11, into the tree. Since 11 is greater than
10, it should be inserted to the right of 10. However, as we know, the maximum number
of keys per node cannot be more than 2; therefore, 10, being the median, will be pushed up
into the root node to the right of 8, splitting 9 and 11 into two separate nodes.
7. We will now insert data element 16 into the tree. Since 16 is greater than 11, it will
be inserted as a key on the right of the node consisting of 11 as a key.
8. The next data element that we will insert into the tree is 21. Element 21 should be
inserted to the right of 16; however, this would exceed the maximum number of keys per node.
Therefore, a split occurs, pushing the median key 16 upward and splitting the
left and right keys into separate nodes. This in turn violates the maximum number
of keys per node limit in the root, so a second split pushes the median key 10 upward as the
new root, making 8 and 16 its children.
9. At last, we will insert data element 18 into the tree. Since 18 is greater than 16 but
less than 21, it will be inserted as the left key in the node consisting of 21.
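A compact Python sketch of B Tree insertion with node splitting is shown below. It follows the common CLRS-style scheme, where a minimum degree t means every node holds at most 2t - 1 keys; note that the worked example above uses an order-3 tree (at most 2 keys per node), which this general sketch does not reproduce exactly. All class and method names are my own.

class BTreeNode:
    def __init__(self, leaf=True):
        self.keys = []
        self.children = []
        self.leaf = leaf

class BTree:
    def __init__(self, t=2):
        self.t = t                            # minimum degree
        self.root = BTreeNode(leaf=True)

    def split_child(self, parent, i):
        t = self.t
        full = parent.children[i]
        new = BTreeNode(leaf=full.leaf)
        # The median key moves up into the parent.
        parent.keys.insert(i, full.keys[t - 1])
        parent.children.insert(i + 1, new)
        # The right half of the keys (and children) moves to the new node.
        new.keys = full.keys[t:]
        full.keys = full.keys[:t - 1]
        if not full.leaf:
            new.children = full.children[t:]
            full.children = full.children[:t]

    def insert(self, key):
        root = self.root
        if len(root.keys) == 2 * self.t - 1:  # root is full: tree grows in height
            new_root = BTreeNode(leaf=False)
            new_root.children.append(root)
            self.root = new_root
            self.split_child(new_root, 0)
            self._insert_nonfull(new_root, key)
        else:
            self._insert_nonfull(root, key)

    def _insert_nonfull(self, node, key):
        i = len(node.keys) - 1
        if node.leaf:
            # Insert the key into this leaf in sorted order.
            node.keys.append(None)
            while i >= 0 and key < node.keys[i]:
                node.keys[i + 1] = node.keys[i]
                i -= 1
            node.keys[i + 1] = key
        else:
            # Find the child to descend into, splitting it first if it is full.
            while i >= 0 and key < node.keys[i]:
                i -= 1
            i += 1
            if len(node.children[i].keys) == 2 * self.t - 1:
                self.split_child(node, i)
                if key > node.keys[i]:
                    i += 1
            self._insert_nonfull(node.children[i], key)

# Usage with the same key sequence as the walkthrough above:
tree = BTree(t=2)
for k in [7, 8, 9, 10, 11, 16, 21, 18]:
    tree.insert(k)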
Deleting an element on a B-tree consists of three main events: searching the node where
the key to be deleted exists, deleting the key and balancing the tree if required.
While deleting a key from a B-tree, a condition called underflow may occur. Underflow occurs when a
node contains less than the minimum number of keys it should hold.
Inorder Predecessor
The largest key in the left subtree of a key is called its inorder predecessor.
Inorder Successor
The smallest key in the right subtree of a key is called its inorder successor.
Deletion Operation
Before going through the steps below, one must know these facts about a B tree of degree m:
A node (except the root node) should contain a minimum of ⌈m/2⌉ - 1 keys (i.e. 1 key when m = 3).
There are three main cases for the deletion operation in a B tree.
Case I
The key to be deleted lies in the leaf. There are two cases for it.
The deletion of the key does not violate the property of the minimum number of
keys a node should hold.
In the tree below, deleting 32 does not violate the above properties.
The deletion of the key violates the property of the minimum number of keys a node
should hold. In this case, we borrow a key from its immediate neighboring sibling node
in the order of left to right.
First, visit the immediate left sibling. If the left sibling node has more than a minimum
number of keys, then borrow a key from this node.
In the tree below, deleting 31 results in the above condition. Let us borrow a key
from the left sibling node.
If both the immediate sibling nodes already have a minimum number of keys, then merge
the node with either the left sibling node or the right sibling node. This merging is done
through the parent node.
Case II
If the key to be deleted lies in an internal node, the following cases occur.
The internal node, which is deleted, is replaced by an inorder predecessor if the left child
has more than the minimum number of keys.
The internal node, which is deleted, is replaced by an inorder successor if the
right child has more than the minimum number of keys.
If both children have only the minimum number of keys, then merge the left and the
right children.
After merging if the parent node has less than the minimum number of keys
then, look for the siblings as in Case I.
Case III
In this case, the height of the tree shrinks. If the target key lies in an internal node, and
the deletion of the key leads to a fewer number of keys in the node (i.e. less than the
minimum required), then look for the inorder predecessor and the inorder successor. If
both the children contain a minimum number of keys then, borrowing cannot take
place. This leads to Case II(3) i.e. merging the children.
Again, look for the sibling to borrow a key. But, if the sibling also has only a minimum
number of keys then, merge the node with the sibling along with the parent. Arrange
the children accordingly (increasing order).
6) Explain the advantages of AVL Trees and B-Trees over other tree data structures in
terms of searching, insertion, and deletion operations.
UNIT-2
Min Heap: In a Min Heap, the parent node is always smaller than its child nodes.
Example:
        10
       /  \
     20    30
    /  \
  40    50
Max Heap: In a Max Heap, the parent node is always larger than its child nodes.
Example:
        50
       /  \
     30    40
    /  \
  10    20
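A quick way to see both heap orders in code is Python's heapq module, which maintains a binary min heap on a plain list; a max heap can be simulated by pushing negated keys. This is only a small illustrative sketch.

import heapq

# Min heap: the smallest key is always at index 0.
min_heap = []
for key in [40, 10, 50, 20, 30]:
    heapq.heappush(min_heap, key)
print(min_heap[0])            # 10, the minimum

# Max heap: store negated keys so the largest original key surfaces first.
max_heap = []
for key in [10, 50, 30, 20, 40]:
    heapq.heappush(max_heap, -key)
print(-max_heap[0])           # 50, the maximum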
Priority Queues:
A Priority Queue is a special type of queue in which each element is associated with a
priority, and elements are served based on their priority — higher priority elements are
served before lower priority ones, regardless of their insertion order.
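A minimal sketch of a priority queue built on heapq is shown below; the (priority, item) tuples and the example tasks are purely illustrative, with lower numbers meaning higher priority.

import heapq

pq = []
heapq.heappush(pq, (2, "write report"))
heapq.heappush(pq, (1, "fix production bug"))   # highest priority (lowest number)
heapq.heappush(pq, (3, "read email"))

while pq:
    priority, task = heapq.heappop(pq)           # elements are served in priority order
    print(priority, task)
# Output order: fix production bug, write report, read email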
Connected Component:
A connected component is a part of an undirected graph in which every pair of vertices is
connected by a path, and which is connected to no additional vertices in the graph.
Example:
In the graph below:
Component 1: A — B — C
Component 2: D — E
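A small Python sketch of finding connected components with a depth-first search over an adjacency list; the example graph is an assumed one that mirrors the two components above.

def connected_components(graph):
    # graph: dict mapping each vertex to a list of its neighbours
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        # Depth-first search collects every vertex reachable from 'start'.
        stack, comp = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in graph[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        components.append(comp)
    return components

g = {"A": ["B"], "B": ["A", "C"], "C": ["B"], "D": ["E"], "E": ["D"]}
print(connected_components(g))   # [['A', 'B', 'C'], ['D', 'E']]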
Biconnected Component:
A biconnected component is a maximal subgraph where removing any single vertex
does not disconnect the subgraph.
It has no articulation points (vertices whose removal increases the number of connected
components).
Example:
    A
   / \
  B---C
This is a biconnected component because removing any one vertex still keeps the graph
connected.
Example:
Merge Sort Algorithm
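A short, illustrative Python sketch of merge sort: divide the list in half, sort each half recursively, then merge the two sorted halves.

def merge_sort(a):
    if len(a) <= 1:                 # a list of 0 or 1 elements is already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])      # sort the two halves recursively
    right = merge_sort(a[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]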
The Greedy Method is used to solve the Fractional Knapsack Problem by selecting
items based on the highest profit-to-weight ratio.
Example Table:
Item    Profit (P)    Weight (W)    P/W (Profit/Weight)
1       60            10            6.0
2       100           20            5.0
3       120           30            4.0
Knapsack Capacity = 50 kg
Greedy Selection Steps:
1. Take all of item 1 (10 kg): profit = 60, remaining capacity = 40 kg.
2. Take all of item 2 (20 kg): profit = 100, remaining capacity = 20 kg.
3. Take 20 kg of item 3 (fraction 20/30): profit = 120 x 20/30 = 80, remaining capacity = 0.
Total profit = 60 + 100 + 80 = 240.
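A minimal greedy sketch of the fractional knapsack in Python, sorting by profit/weight ratio; the item list matches the table above and the function name is my own.

def fractional_knapsack(items, capacity):
    # items: list of (profit, weight) pairs
    # Greedy rule: consider items in decreasing profit/weight ratio.
    items = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)
    total = 0.0
    for profit, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)      # take the whole item, or the fraction that fits
        total += profit * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0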
A Minimum Cost Spanning Tree (MST) is a way to connect all the vertices (nodes) in a
graph using the least total cost.
Example:
If cities are connected by roads with different costs, MST gives the cheapest way to
connect all cities.
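A compact sketch of Kruskal's algorithm for the minimum cost spanning tree in Python, using union-find with path compression; the edge list at the end is a made-up example, and Prim's algorithm would be an equally valid choice.

def kruskal(num_vertices, edges):
    # edges: list of (cost, u, v); vertices are numbered 0 .. num_vertices-1
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for cost, u, v in sorted(edges):  # consider edges in increasing order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v, cost))
            total += cost
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)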
Purpose:
Dijkstra’s algorithm is used to find the shortest path from a starting node to all other
nodes in a weighted graph (with non-negative weights).
1. Start with the source vertex. Set its distance as 0 and all others as infinity (∞).
2. Visit the nearest vertex that is not yet visited.
3. Update the distances of its neighboring vertices.
4. Repeat the process for the next nearest vertex.
5. Continue until all vertices are visited.
It helps in navigation apps, network routing, and many real-life shortest path problems.
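Those steps can be written as a short Python sketch using a priority queue (heapq); the adjacency list below is an assumed example graph, not one from the text.

import heapq

def dijkstra(graph, source):
    # graph: dict mapping a vertex to a list of (neighbour, weight) pairs
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    pq = [(0, source)]                      # (distance found so far, vertex)
    while pq:
        d, u = heapq.heappop(pq)            # nearest vertex not yet finalized
        if d > dist[u]:
            continue                        # stale entry, a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)], "D": []}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}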
Dynamic Programming solves a problem by breaking it into overlapping subproblems and storing the result of each subproblem. It avoids repeating the same calculations, so it saves time and improves efficiency.
Example:
Fibonacci Series using DP
F(0) = 0, F(1) = 1
F(n) = F(n−1) + F(n−2)
Instead of calculating F(n−1) and F(n−2) again and again, we store the results in an array and
reuse them.
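A minimal Python sketch of the bottom-up (tabulated) version described above:

def fib(n):
    # Bottom-up dynamic programming: each F(i) is computed once and stored.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(10))   # 55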
1) What is the Greedy method? Discuss the Job Sequencing with deadlines problem
and its solution using the Greedy approach.
Greedy Method:
The greedy method is a straightforward design technique for optimization problems: at each step it makes the choice that looks best at that moment. The
terminologies used in the greedy method are:
1. Objective Function:
The problems that can be solved using the Greedy method have 'n' inputs, a set of
constraints, and a function to be maximized or minimized; this function is called the objective function.
2. Feasible Solutions:
Any subset of the inputs that satisfies the constraints of the objective function is called a feasible
solution.
We need to find a feasible solution that either maximizes or minimizes a given objective
function.
3. Optimal Solution:
A feasible solution that either maximizes or minimizes the objective function is called an
optimal solution.
Some problems consider the maximum value as the optimal solution.
Ex: Problems that compute a profit value expect the maximum value as output.
Some problems consider the minimum value as the optimal solution.
Ex: Problems that compute the distance between two points expect the minimum value as
output.
Examples of Greedy Method:
Sol: Assume input set as (x, y) = {(0, -1), (0, 0), (0, 1), (1, -1), (1, 0), (1, 1), (2, 2), (3, 3),
(4, 4)}
In the Greedy method, we apply the constraints to the input set; some of the inputs may be
eliminated from the input set.
The remaining inputs are known as feasible solutions.
Next, from the feasible solutions, we take one input at a time and check whether it leads
to an optimal solution or not.
2) Explain the 0/1 Knapsack problem. How can the Greedy method be used to solve
this problem?
3) Describe the process of finding the Minimum Cost Spanning Tree using Prim’s and
Kruskal’s algorithm.
4) What is Dynamic Programming? Explain the All-Pairs Shortest Paths problem and
the solution using Dynamic Programming.
5) Explain the Bellman-Ford algorithm and its application to find the shortest paths
from a single source in a graph with negative weights.
UNIT-4
Backtracking builds a solution step by step and abandons (backtracks from) any partial
solution that cannot lead to a valid answer. It is useful for solving problems with multiple
possible solutions, like puzzles or combinations.
Example:
N-Queens Problem
Place N queens on an N×N chessboard such that no two queens attack each other.
Backtracking tries placing a queen in one row, and if it causes a conflict later, it removes the
queen and tries another position.
The 8-Queens problem is a puzzle where you need to place 8 queens on a chessboard
of size 8x8 such that no two queens can attack each other. This means that no two queens
can be on the same row, same column, or on the same diagonal.
Simple Example:
....Q...
......Q.
..Q.....
.....Q..
...Q....
.Q......
.......Q
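A compact backtracking sketch for N-Queens in Python, placing queens row by row; the safety test checks columns and both diagonals, and the names are my own.

def solve_n_queens(n):
    solutions, queens = [], []             # queens[r] = column of the queen in row r

    def safe(row, col):
        for r, c in enumerate(queens):
            if c == col or abs(c - col) == abs(r - row):
                return False               # same column or same diagonal
        return True

    def place(row):
        if row == n:
            solutions.append(list(queens))
            return
        for col in range(n):
            if safe(row, col):
                queens.append(col)         # try this position
                place(row + 1)
                queens.pop()               # backtrack and try the next column

    place(0)
    return solutions

print(len(solve_n_queens(8)))   # 92 solutions for the 8-Queens problem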
Graph Coloring Example: in the example graph, no two adjacent vertices have the same color,
solving the problem with 2 colors.
Branch and Bound explores the solution space as a tree, computing a bound for each branch
and discarding branches that cannot improve on the best solution found so far.
Example: If a salesperson needs to visit cities A, B, and C, the Branch and Bound method
would evaluate all possible routes, calculate lower bounds for each path, and eliminate routes
that exceed the shortest known path.
UNIT-5
NP-Hard: Problems that are at least as hard as every problem in NP.
NP-Complete: Problems that are in NP and are also NP-Hard.
The Clique Decision Problem (CDP) asks whether a given graph contains a clique of a
specific size k. A clique is a subset of vertices in which every two distinct vertices are
connected by an edge.
Formally: Given a graph G and an integer k, the problem is to determine whether there exists
a clique of size k in G.
Example:
For a graph with vertices A, B, C, D, E and edges A-B, A-C, B-C, C-D, and D-E, a clique of
size 3 would be {A, B, C}, since all three vertices are connected to each other.
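The "verifiable in polynomial time" side of this can be illustrated with a short Python sketch: checking whether a proposed set of vertices really is a clique is easy, even though finding a clique of size k may require trying many subsets. The edge set below matches the example above.

from itertools import combinations

def is_clique(vertices, edges):
    # Verification: every pair of the proposed vertices must be joined by an edge.
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vertices, 2))

edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")}
print(is_clique({"A", "B", "C"}, edges))   # True  -> a clique of size 3
print(is_clique({"B", "C", "D"}, edges))   # False -> B and D are not adjacent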
P (Polynomial Time):
Problems that can be solved in polynomial time (efficiently).
Example: Sorting, searching.
NP (Nondeterministic Polynomial Time):
Problems for which a solution can be verified in polynomial time, but finding the
solution may take longer.
Example: Sudoku, graph coloring.
NP-Hard:
Problems that are at least as hard as the hardest NP problems. They are not necessarily in NP,
and their solutions may not be verifiable in polynomial time.
Example: Halting problem.
NP-Complete:
Problems that are both in NP and as hard as any NP problem. If one NP-Complete
problem is solved in polynomial time, all NP problems can be solved in polynomial time.
Example: Traveling Salesman Problem (TSP).
6) What are NP-Hard and NP-Complete problems? Explain the difference between
these two classes with examples.
7) Explain Cook’s theorem and its significance in the theory of NP-completeness.
8) Discuss the Clique Decision Problem (CDP) and its relation to NP-Hard problems.
9) How is the Chromatic Number Decision Problem (CNDP) related to graph coloring
and NP-Hard problems?
10) Explain the Job Shop Scheduling problem in the context of NP-Hard problems.
How is it classified, and what are its practical applications?